Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated:
Reload to view new stories

April 18, 2025 12:05



Front Page/ShowHN stories over 4 points from last 7 days
If internet connection drops, you can still read the stories
If there were any historical discussions on the story, links to all the previous stories on Hacker News will appear just above the comments.

Historical Discussions: CVE program faces swift end after DHS fails to renew contract [updated] (April 16, 2025: 1897 points)

(1897) CVE program faces swift end after DHS fails to renew contract [updated]

1897 points 2 days ago by healsdata in 674th position

www.csoonline.com | Estimated reading time – 1 minutes | comments | anchor

"First, the federated model and CVE Numbering Authorities (CNA) can no longer assign IDs and send info to MITRE for quick publication. Second, all of that is the foundation for the National Vulnerability Database (NVD), which is already beyond struggling, with a backlog of over 30,000 vulnerabilities and the recent announcement of over 80,000 'deferred' (meaning will not be fully analyzed by their current standards)."

Martin added, "Third, every company that maintains 'their own vulnerability database' that is essentially lipstick on the CVE pig will have to find alternate sources of intelligence. Fourth, national vulnerability databases like China's and Russia's, among others, will largely dry up (Russia more than China). Fourth [sic], hundreds, if not thousands, of National / Regional CERTs around the world, no longer have that source of free vulnerability intelligence. Fifth [sic], every company in the world that relied on CVE/NVD for vulnerability intelligence is going to experience swift and sharp pains to their vulnerability management program."

Why is the contract ending?

It's unclear what led to DHS's decision to end the contract after 25 years of funding the highly regarded program. The Trump administration, primarily through Elon Musk's Department of Government Efficiency initiative, has been slashing government spending across the board, particularly at the Cybersecurity and Infrastructure Security Agency (CISA), through which DHS funds the MITRE CVE program.




All Comments: [-] | anchor

bytematic(10000) 3 days ago [-]

What are the implications of this? No more centralized store of vulnerability information?

neuronexmachina(10000) 3 days ago [-]

According to Brian Krebs: https://infosec.exchange/@briankrebs/114343835430587973

> Hearing a bit more on this. Apparently it's up to the CVE board to decide what to do, but for now no new CVEs will be added after tomorrow. the CVE website will still be up.

Incipient(10000) 2 days ago [-]

Basically when any software/library/whatever has a vulnerability, they have to communicate that out themselves, in some format.

If I'm developing a product built on 20 libraries, it won't just be a matter of scanning CVEs for major vulnerabilities any more, so I'm more likely to miss one.

'always update' doesn't always work, when to manage a product you realistically have to version pin.

joshuanapoli(3528) 2 days ago [-]

Is MITRE's CVE program redundant with NIST's National Vulnerability Database? I'm having a hard time telling how the two are related, or if NVD is simply performing the same service as MITRE.

detaro(695) 2 days ago [-]

NIST NVE relies on the CVE program. (vulnerabilities get reported, MITRE assigns CVEs and publishes them, NIST then copies that list and adds their own scoring etc to it)

Spooky23(3545) 2 days ago [-]

Once they fire everyone at NIST, they'll have that in common.

Rebelgecko(10000) 2 days ago [-]

I'm trying to steelman but I really can't think of a non- nefarious justification for this

duxup(3407) 2 days ago [-]

The process seems to be to dismantle anything not nailed down in government.

Now if you want that (even just funding) to be a thing ... you have to go through Trump & Co and pay your bribe to get it back up.

esafak(10000) 2 days ago [-]

Privatize all teh things?

giraffe_lady(10000) 2 days ago [-]

> I'm trying to steelman

Why? This administration is not acting in good faith, you don't have to act as if they are. People and institutions doing that is part of how we got here in the first place.

rqtwteye(3305) 2 days ago [-]

I think it's ignorance and arrogance. The US seems to be on a path to lose technological and science leadership. The current leadership doesn't seem to understand things that aren't flashy. I wonder when they'll dial back on food safety. I am sure RFK knows some vitamins that protect against salmonella

polski-g(10000) 2 days ago [-]

We have a 2tn deficit. If Congress wants to fund this, they need to make it mandatory spending and raise taxes.

alephnerd(3583) 2 days ago [-]

> I really can't think of a non- nefarious justification for this

Tragedy of the commons - NVD and the CVE project havr been backlogged and facing funding issues for a couple years now, and most security vendors are either cagey about providing vulns in a timely manner (as it can reduce their own comparative advantage), or try upsell their own alternative risk prioritization scores.

Every company will gladly use NVD and CVE data, but no one wants to subsidize it and help a competitor, especially in an industry as competitive as cybersecurity.

WesternWind(10000) 2 days ago [-]

It's incredibly foolish. Whatever the justification is, it doesn't matter as much as the horrible outcome.

This is one of those things the government does for the benefit of the whole.

ajross(10000) 2 days ago [-]
Probably the thinking goes that someone in the international community will step in. CVE is in practice a global registry for all, thus 'Why should the USA Department of Homeland Security pay for all the freeloaders'.

Still shortsighted and stupid, but it's plausible this is intended as leverage to get someone else to pony up.

Cthulhu_(3510) 2 days ago [-]

Reduce government spending; since it's not actually a government organization (as far as I can tell, I never looked into it before), other organizations can fund it. How much goes into this organization a year anyway? I'm seeing a Mitre corporation that does lots of other stuff too that has a revenue of 2.2 billion a year.

Multi-trillion-dollar companies benefit from and contribute to this system, surely they can spare 0.01% of their revenue to this bit of critical infrastruture?

karel-3d(3042) 2 days ago [-]

Reduce spending. Steelmanning (not actually believing this): it probably cost a lot for what is essentially a database, and can be done cheaply by private sector (Google, Microsoft).

myko(2223) 2 days ago [-]

It's a dying empire, really nothing else to say. The USA led world order is over, we've voted ourselves out of it, and now need to learn how to deal with that.

throw4847285(10000) 2 days ago [-]

I'll admit this is a bugbear of mine, but I think this is the reason 'steelmanning' is counterproductive.

Steelmanning is a neologism that serves no purpose other than in-group signaling. There was already a perfectly acceptable term for the same concept, one with more nuance and a rich history: Charitability.

The major difference is that charitability is about treating your interlocutor with respect. Steelmanning is about using one's own intellect to make your interlocutor's argument better than them. Because charitability is based on a concept of mutual respect, if somebody clearly doesn't respect you one iota, then why would you be charitable? Steelmanning tries to divorce the person from the argument, and is ironically both arrogant and naive.

hansvm(10000) 2 days ago [-]

Weren't there major problems with the current CVE implementation, especially with the waves of script kiddies and AI tools spamming the database and the fact that projects who take security seriously have little to no say in the 'score' that gets assigned?

czk(10000) 2 days ago [-]

and then a random 9.8 critical comes that affects some software you have in a way that makes it a 0 in your environment but it doesn't matter cause the cve tanks your organizational Security Score (tm) by 10 arbitrary points and management is wondering when you'll secure the company again because the Security Score is their only tangible deliverable to measure success

sepositus(10000) 2 days ago [-]

I don't know of anyone who doesn't quickly become exhausted after running a CVE scanner on their code.

gcr(3671) 2 days ago [-]

These sound like downstream effects of funding stress to me, no?

tdb7893(10000) 2 days ago [-]

The scores were never going to be that accurate across people's environments (IDK how much other places relied on them, places I worked never did that much) and issues with the scores don't seem to be a good justification to torch the whole CVE system anyway.

cantrecallmypwd(10000) 2 days ago [-]

This is bikeshedding. The point is an authoritative process and an identifier

All this does is help Putin and other rich grifters.

aprilthird2021(10000) 2 days ago [-]

Sure. There's also major problems with the video encoding pipeline at my big tech job. Let's just delete it

ajross(10000) 2 days ago [-]

> Weren't there major problems with the current CVE implementation

Absolutely. And if the headline was 'DHS proposes improvements and streamlining to the CVE program' we'd all probably be cheering.

Leaping from 'This is Flawed' to 'Let's kill This' is a logical fallacy. A flawed security registry is clearly better than no security registry.

worthless-trash(10000) 2 days ago [-]

This will get lost in the noise, but i think you mean cvss.

CVE is simply identification of a flaw, not a scoring system.

bjackman(3220) 2 days ago [-]

As an active consumer of CVEs: yea there are major problems. No there's nothing better and no I don't have any better ideas.

The scores are mostly useless, I would not care if they disappeared, I do not look at them. I don't really understand why people get so upset about garbage scores though. If a high CVSS score creates a bunch of work for you then your vuln mag process is broken IMO. (Or alternatively, you are in the business of compliance rather than security. If you don't like working in compliance, CVSS scores aren't the root cause of your misery).

Having a central list of 'here's a bunch of things with stable IDs that you might or might not care about' is very valuable.

bamboozled(3414) 2 days ago [-]

Getting a bit tired of posts like this (no offense), something dumb / nefarious happens like funding is cut for <useful thing>, then someone posts an off the cuff comment or question like, 'wasn't this <useful thing> not that useful because <superficial reason>?'.

Why do people do this, to down play all the destruction of the last few months? Seems to be some type of coping mechanism.

rco8786(10000) 2 days ago [-]

Every system has problems. The challenge is to address the problems and fix them. Not just delete the entire system and claim a win.

declan_roberts(10000) 2 days ago [-]

Yes it earnestly needed new direction and leadership.

bearjaws(10000) 2 days ago [-]

Classic 'oh its broken so throw it all away'.

It's the way it is because there isn't a good alternative. They cannot possibly know every environment that we operate in.

To this day we still have large corporations down playing their issues, and it was way worse 20 years ago.

ggm(1620) 2 days ago [-]

I wish this hadn't happened.

I wonder what level of compartmentalisation inside DHS means they didn't see this as having sufficient downsides?

I ask this, because I don't think anyone in the subject matter specialist space would have made a strong case 'kill it, we don't need this' and I am sure if asked would have made a strong case 'CRISSAKE WE NEED THIS DONT TOUCH IT' -But I could believe senior finance would do their own research (tm) and mis-understand what they saw in how other people work with CVE, and who funds it.

hackyhacky(10000) 2 days ago [-]

> I wonder what level of compartmentalisation inside DHS means they didn't see this as having sufficient downsides?

This was not a carefully-weighed decision based on a cost-benefit analysis. This was a political order, consistent with the administration's policy of 'cut everything, recklessly, indiscriminately.'

markhahn(10000) 2 days ago [-]

it might be ignorance; it might be malice.

it might also be deliberate: that they actually don't think the government should be involved in this sort of thing. after all, someone could be making a profit on this, and that seems to be their highest value. if gov is involved, that makes it a communal effort, and you know what else starts with 'commun-'?

yes, those reasons are stupid and ignorant AND intentional.

but is there any evidence against that interpretation?

Aurornis(10000) 2 days ago [-]

This sort of thing is happening across the federal government. There is no rhyme or reason. DOGE has been given an unrealistic target for cuts and they're desperately cutting whatever they can get their hands on. If you look at the federal budget it's nearly impossible for DOGE to hit their stated goals without touching benefits like medicare and social security (which are off limits so far) so the only option is deep, deep cuts into the narrow slice of the federal budget that excludes those protected categories.

There is no rhyme or reason to what gets cut, other than someone under pressure to hit KPIs (dollars cut) was desperately searching for things that looked easy to cancel.

This is happening everywhere the federal government touches. Most people aren't aware of it until they come around and pull the rug on something that intersects with your own life.

Even my die-hard Republican distant relatives are suddenly shocked because programs they benefited from are being cut. They thought they voted for something different.

Spooky23(3545) 2 days ago [-]

No, we're in a middle of a coup. Palantir or some other odious company will get paid 100x more to do something.

epistasis(2965) 2 days ago [-]

Your words don't make any sense in this environment. The idea that any person at an agency could stand up to or convince the DOGE team of anything is preposterous.

Anything that weakens the US or puts our cybersecurity in a place that Russia can exfiltrate data will happen. This is not about the US needing anything and it's silly to think otherwise. See also the NLRB whistleblower and the security backdoors that DOGE demanded to allow data exfiltration and the subsequent death threats to the whistle blower.

You mindset is behind the times and needs to adjust to a, frankly, insane current reality.

tgsovlerkhgsel(10000) 2 days ago [-]

If you made this careful analysis, you'd hear 'CRISSAKE WE NEED THIS DONT TOUCH IT' for almost everything (and it likely would be right for a significant portion but not everything).

That's why the current approach seems to be to axe everything, listen to how much screaming there is, then reinstate only the projects where the screaming is really loud.

overfeed(10000) 2 days ago [-]

> 'kill it, we don't need this'

'We are paying MITRE how much? Bigballs and co will write a better ststem in 1 week and have it integrated with xAI. How hard could it be? Send out a first draft of an xAI contract to our DHS contact'

IOT_Apprentice(10000) 2 days ago [-]

They were at the mercy of 20 year olds from doge. I wonder when doge enters the NSA & NRO WHAT information will they steal & put in their hard drives.

All of this is criminal behavior on the the current regime.

eadmund(3321) 2 days ago [-]

> I wonder what level of compartmentalisation inside DHS means they didn't see this as having sufficient downsides?

The National Vulnerability Database has been unable to keep up with the flow of CVEs for over a year now:

- https://anchore.com/blog/national-vulnerability-database-opa...

- https://www.cyberreport.io/news/cve-backlog-update-the-nvd-s...

- https://www.ibm.com/think/insights/cve-backlog-update-nvd-st...

- and many, many, many others

It has been a complete disaster for months. At this point, perhaps the thinking is to radically change approaches?

rco8786(10000) 2 days ago [-]

> I wonder what level of compartmentalisation inside DHS means they didn't see this as having sufficient downsides?

Come on, are you living under a rock right now? There are massive indiscriminate funding cuts to anything that Elon/Doge deems to be 'fraud', and they explicitly do not care about the collateral damage.

This is not about the DHS or 'compartmentalization'. This is just a politician running amok and having real consequences.

paulmendoza(10000) 2 days ago [-]

No one analyzed it most likely. It's possible on of the college students working for Doge doesn't understand security because they are a child with no real world experience that Elon brought in to slash costs.

transpute(353) 2 days ago [-]

If you work on OSS software on CVE management, then you already know that NVD funding reductions have been ongoing for more than a year.

April 2024, https://nvd.nist.gov/general/news/nvd-program-transition-ann...

  NIST maintains the National Vulnerability Database (NVD).. This is a key piece of the nation's cybersecurity infrastructure. There is a growing backlog of vulnerabilities.. based on.. an increase in software and, therefore, vulnerabilities, as well as a change in interagency support.. We are also looking into longer-term solutions to this challenge, including the establishment of a consortium of industry, government, and other stakeholder organizations that can collaborate on research to improve the NVD.
Sep 2024, Yocto Project, 'An open letter to the CVE Project and CNAs', https://github.com/yoctoproject/cve-cna-open-letter/blob/mai...

> Security and vulnerability handling in software is of ever increasing importance. Recent events have adversely affected many project's ability to identify and ensure these issues are addressed in a timely manner. This is extremely worrying.. Until recently many of us were relying not on the CVE project's data but on the NVD data that added that information.

Five years ago (2019), I helped to organize a presentation by the CERT Director from Carnegie Mellon, who covered the CVE backlog and lack of resources, e.g. many reported vulnerabilities never even receive a CVE number. It has since averaged < 100 views per year, even as the queue increased and funding decreased, https://www.youtube.com/watch?v=WmC65VrnBPI

kulahan(10000) 2 days ago [-]

What has been ongoing for more than a year?

The funding appears to have been cut off today, and both of these comments seem to talk about continuing work and how important it is.

Do you mean to say that some form of threat to the NVD has been around for over a year now? Just want to be sure I'm parsing correctly!

cowpig(10000) 2 days ago [-]

I've noticed that there's a post like this in most articles on HN that could be construed as negative for the current administration: some vague false statement followed by either a factually incorrect explanation or some quote that does not support the statement.

matthewdgreen(10000) 2 days ago [-]

I did find this post to be non-helpful and confusing. It would be helpful to edit it (or write differently in the future) to clarify that the sudden defunding event occurring today is separate and not related to the previous funding cuts. If that's the case.

RVuRnvbM2e(10000) 2 days ago [-]

There is nothing in that article mentioning funding reductions.

That article is about how the volume of software vulnerabilities are increasing, resulting in difficulty keeping up by the CVE and NVD projects.

Please stop spamming this thread with political spin.

bradac56(10000) 2 days ago [-]
dang(143) 2 days ago [-]

I'm not sure, but the current article looks to have somewhat more information in it, so I've merged that thread hither instead.

9283409232(10000) 2 days ago [-]

Reminds me of Trump's first term where he said if we stopped testing for Covid, we'd stop catching new cases and case numbers would go down. If you stop testing for vulnerabilities then vulnerabilities go down. Easy stuff.

goku12(10000) 2 days ago [-]

That's exactly what they're saying about the HHS cuts and the measles outbreak.

flanked-evergl(10000) 2 days ago [-]

What I don't get is why people make things up and then get angry at the thing they made up. Is there not enough real things to be angry at?

mjevans(10000) 2 days ago [-]

Mr. President, Do you want China to get the reports instead, or do you want the NSA to have a lead time where the vuln's are useful tools?

hsbauauvhabzb(10000) 2 days ago [-]

If you /s/China/Russia/, when asking Trump, it's no longer a rhetorical question.

mjevans(10000) 2 days ago [-]

It seems phrasing it in the form of a joke was too much.

I was trying to convey (with levity/humor) WHY it should continue to be funded as well as the argument that should be made to the one currently in control of the spineless US Congress.

Yes, fixing the vulnerabilities is important. However what the government probably does gain from it is an inside advantage in the lead time for vulnerabilities to protect against, as well as to exploit on adversaries.

stego-tech(10000) 2 days ago [-]

Man, I just can't even muster the snark I usually have for these sorts of boneheaded decisions.

This sucks, plain and simple.

aprilthird2021(10000) 2 days ago [-]

I can't believe what a bunch of bollocks this administration is. I couldn't believe it the first time, and this time I thought 'Well at least I'm ready, it will be a lot like last time' and it's so much worse

outside1234(3632) 2 days ago [-]

These four years are going to be the death of all of us.

cantrecallmypwd(10000) 2 days ago [-]

War with China and doing enough reprehensible acts to stoke protests to declare martial law to stay in power indefinitely.

Latty(10000) 2 days ago [-]

I find it a little incredible people are still talking about 'four years'.

They tried to reject the election result and do a coup, and were rewarded for it by getting back into power. They are refusing to follow the law or the courts. They are sending people to gulags in foreign countries. All the checks and balances were destroyed last time. The party has been stripped of anyone who would fight the admin or reject this illegality. They have set up a power grab over elections.

There will not be free and fair elections in four years unless they are simply too incompetent to rig it, the rubicon was crossed long ago. Without mass protest that makes it impossible for them to hold power, American democracy is dead.

They have tried to do it, they say they want to do it, they have the ability to do it, they are actively doing it, and no one is stopping them. How are people still acting like in four years they are going to neatly hand over power to be prosecuted for their crimes?

wichitawch(10000) 2 days ago [-]

I'm surprised that it was USA's responsibility to fund this in the first place. Why weren't other countries providing funds?

defrost(3078) 2 days ago [-]

It's a near certitude that Russia and China each have databases of exploitable software errors and prize zero days.

It was to the advantage of the US and allies to coordinate and lead in tracking and fixing such errors.

Multiple countries, companies, and individuals contributed finding and fixing bugs.

The administrative task of keeping track was one part of a greater picture, a part that came with first to be advised and other perks.

It's not that the US had a responsibility to take on the lead admin task, more that in past times the US saw an advantage to being at the centre of global action.

This is just another part of increasing US isolationism.

insane_dreamer(10000) 2 days ago [-]

It's called providing leadership. Worth the money. China will happily fill the void.

lars_francke(3289) 2 days ago [-]

The CVE program was started over 25 years ago. It is very reputable (until yesterday) and it was very much in the interest of the US to be seen as the stewards of this.

The funding requirements can't be that high and I'm willing to bet that other countries and entities would have happily stepped up if they had the chance.

Up until recently CVE was very centralized and only in the last few years have there been steps in more decentralization with CNAs taking more responsibility, Red Hat as a CNA of last-resort etc. So, the cost of doing all of this work has already been shifted partially (!) away from the US but I have not seen any movement towards e.g. moving the program to a foundation which could have been done.

Personally I would conclude that it was the responsibility of the US to pay for this because they wanted to and it was in their best interest to control this program.

happosai(10000) 2 days ago [-]

Because USA was a superpower that can afford it easily. Taking the leadership in everything is quite cheap price to pay when the other end of the bargain is everyone else has to follow you.

Now of course USA is ceasing (voluntarily, by stripping down every international soft power effector in government) to be a superpower, to the great glee of dictators all around the world.

The 'we can't afford being great' is a direct admission that USA is no longer a superpower. And is not going to become great again, just another nation again (at whims of China).

aabhay(10000) 2 days ago [-]

I'm surprised that the world's greatest universities are in the United States. Why weren't other countries providing funds?

tdb7893(10000) 2 days ago [-]

The US has made at least hundreds of billions of dollars from it's tech companies and has had a dominance over global tech for a long time. The tech industry has brought a crazy amount of money and power to the US so it makes sense the US puts extra effort to support it.

The US isn't supporting it out of charity, it's good for US businesses to have someone coordinating this for everyone. Why would we want to rely on other countries to be supporting our tech sector? At least now we are subject to only the capricious whims of our own government, as little comfort as that is right now (if another country was funding it we would be relying on the whims of a foreign government, which isn't ideal when tech is the golden goose of your modern economy).

jeroenhd(3638) 2 days ago [-]

It's a program the US government spun up to serve America's interests. Why would someone else pay for American interests?

Other countries have their own programs, some cooperating with the US, others separate. China has the CNNVD if you're interested in helping Chinese society safe. My government operates https://advisories.ncsc.nl/advisories to serve my country's interests.

Of course, the US is free to abandon their programme and rely on Chinese, Russian, and European vulnerability databases to keep their country safe. It does save them a couple of million after all!

phtrivier(10000) 2 days ago [-]

Because, contrary to popular views, there is no 'government of the world'.

So, since the US government needed that (it provides security to US businesses), they organised and funded it (as everything else, with US taxpayers money, and savings from investors in US and abroad.)

Now, the US government decided to commit temporary-seppuku, so a number of things will happen:

* state-level government will use their local-taxpayer money to fund similar efforts (with duplication of effort), or share it with everyone

* another country or block of country will do it, and decide whether they want to 'share'. (I suppose Russia and China have more of an incentive to keep their CVE DB private, given their level of dis-integration with US economy ? EU maybe ?)

* an international, ad-hoc organisation is created to share the funding (something like NATO.) Multi-latteralism is not exactly in fashion this days, but if EU does it, it will be 'international' by design since we're not really a federation ; so, states in 'Southern Canada' are welcome to join.

* or none of that happens, the CVE db rots for a while, until a sufficiently embarrassing cybersecurity problem occurs, and the CVE db is deemed worthy of the '10% you need to bring back' by President Elon.

Pray your company, families and friends are never on the wrong side of the 'reverse-Chersteron's fence'.

yawnxyz(2207) 2 days ago [-]

I guess their new business model is to sell zero days to the highest bidder

alephnerd(3583) 2 days ago [-]

The private sector zero day market collapsed last year with Zerodium - corporate bug bounties, nation states in-housing offensive security operations, and the democratization of knowhow destroyed the Zero Day market.

markhahn(10000) 2 days ago [-]

Trump stupidity hurts the country and world.

But maybe this is an opportunity to do CVE better.

cantrecallmypwd(10000) 2 days ago [-]

> But maybe this is an opportunity to do CVE better.

Okay, how? This sounds like looking for lemonade in a genocide.

nkassis(3432) 2 days ago [-]

My tinfoil hat says they want to privatize this through one of the administrations friends. A disastrous decision here.

9283409232(10000) 2 days ago [-]

Palantir is about to get a contract.

epistasis(2965) 2 days ago [-]

Why would they spend money to replace it? The idea is to weaken and destroy the US and its institutions. Giving Palantir money might mean that security improves, and that goes against their goals. They have already demanded that Russia stop being treated as a cybersecurity threat in other areas of the government, this is a way to ensure that systems are vulnerable to attack.

bathtub365(3476) 2 days ago [-]

Now the NSA can hoard more 0days and the general public suffers. Win win for this administration

goku12(10000) 2 days ago [-]

It's more likely to boost the zero day black market. I don't know if I want to attribute this to idiocy (indiscriminate cost cutting), greed (contracts for their crony pals) or malice (hoarding and trading 0 days).

mmooss(10000) 2 days ago [-]

> In a stunning development

Who is still stunned by these things? They want you to be stunned; they want you to tell everyone else that you're stunned to spread feelings of terror and powerlessness. If you actually are stunned, you are stunningly ignorant. If you are not and still saying it, perhaps to emphasize your unhappiness, you are a 'useful idiot'. Either way, if you are saying it, you are a useful idiot.

You should have known decades ago: The GOP impeached a President for lying about sex; they fabricated intelligence to invade another country (killing thousands of Americans and 100,000+ Iraqis) - and that was all before 2004. They've voted almost unanimously, multiple times, to bankrupt the country (by refusing to authorize debt for existing obligations). Nobody (i.e., the Dems failed to) stopped them or made them pay a price, so why wouldn't they keep doing those things. (Edit: And if you object because the analysis criticizes one side and therefore you reject it as partisan, that's a big part of the reason nothing was done.)

This time they published Project 2025, telling you what they were going to do.

mcintyre1994(10000) 2 days ago [-]

Project 2025 literally calls for dismantling the DHS. Seems pretty unsurprising that the CVE database wouldn't be in the list of things they'd care to maintain in that process.

arghandugh(10000) 2 days ago [-]

This industry relentlessly lionized Trump and Musk, elevating them to positions of power and handing them the power to destroy at will.

This is your moment! Enjoy it!

Gigachad(10000) 2 days ago [-]

It's astounding that the users here watched all the horrendous things going on and ignored them. But now the CVE numbers are gone it's shocking and too far.

Ferret7446(10000) 2 days ago [-]

I don't see why this should be publicly funded, so I don't really see an issue with this. The industry benefits from having a CVE database, so the industry should fund it.

klysm(10000) 2 days ago [-]

There are going to be all kinds of messed up incentives if this is funded from industry.

guhidalg(10000) 2 days ago [-]

No, 'the industry' is all of us alive in the 21st century who depend on software to make material decisions and to be resilient to attacks and tampering. We were all funding it, and now surely we will see some big tech company now assume responsibility from the federal government (please god don't let it be Oracle...)

kristjansson(10000) 2 days ago [-]

Because secure systems benefit the public generally, not just the corporations that make a profit operating those systems.

maronato(10000) 2 days ago [-]

The industry won't want to fund it. It'll want to profit from it.

insane_dreamer(10000) 2 days ago [-]

So you trust industry now?

Xelynega(10000) 2 days ago [-]

Don't open source developers and users of their software also benefit from the CVE database?

If it were privately funded, what incentive would these private companies have to track bugs for these open source projects that don't make money?

sMarsIntruder(10000) 2 days ago [-]

The insane number of downvotes you're getting for saying basic common sense stuff, it's why we should push for stricter political rules here in HN.

You didn't say something wrong or controversial, just an opinion. Some ideologies love to pay things with other people's wallets, and they'll do whatever they can to pursue this.

the_doctah(10000) 2 days ago [-]

Why is the government responsible for CVEs again?

throitallaway(10000) 2 days ago [-]

Every now and then the government decides to fund things. Public schools, roads, police, firemen, GPS, NOAA, cybersecurity, government cheese, etc.

sschueller(621) 2 days ago [-]

'the government' aka 'We the people'. It is in all our interest. This is like asking why the government is responsible for roads.

jowea(10000) 2 days ago [-]

National (technological) security?

JackYoustra(10000) 2 days ago [-]

There are quite a few threads on hackernews that were cautiously optimistic about doge with, frankly, pretty naive libertarian takes about how the government works.

The government is not particular (in the sense of particularism) and cannot be easily tuned to fix particular problems; rather, its best solutions come through institutional procedure and design, such as the tension between the FAA and the NTSB that, at a first glance, would seem like obviously needless duplication and waste.

It is a broad, blunt, wasteful instrument to solve broad, blunt problems in a way that may not be the best but that work far, far better than alternatives that have been tried.

That the effort to treat government like a personal budget has ended up destroying important things is a sad inevitability of such efforts. I hope it goes remembered.

simpaticoder(10000) 2 days ago [-]
>I hope it goes remembered.

It won't be. Willful ignorance is a cornerstone of the movement. You can't lie about what you don't know. You can't have a bad take if you don't know. Upton Sinclaire said in the 1930's: 'It is difficult to get a man to understand something, when his salary depends on his not understanding it.' Now add to 'salary' 'identity', 'relationships', 'sense of belonging to the group'. This is why critical, independent thinking, speaking truth to power, must be separately honored and encouraged by a healthy culture, because these attributes are by default mercilessly punished. (Physical courage and heroism are honored by a healthy culture for similar reasons.)

apexalpha(10000) 2 days ago [-]

Why is this sponsored by such an American gov entity?

I guess it's one of those things you never think about until it goes wrong.

The world would do well to move this kind of stuff out of the US quickly, just like ICANN and stuff.

kbumsik(1416) 2 days ago [-]

Because gov infra also relies on CVE?

rurban(1407) 2 days ago [-]

So who will maintain it then? Either the EU or China I suppose. They can easily fund it.

Maybe the Dutch should go ahead.

lars_francke(3289) 2 days ago [-]

ENISA in Europe has the mandate of building a EU vulnerability database for the NIS 2 directive anyway and it's coming soon...

And CIRCL in Luxembourg are providing vulnerability-lookup which can also assign IDs but in a more decentralized way: https://www.vulnerability-lookup.org/documentation/

VulnerableCode can help with discovery etc. https://vulnerablecode.readthedocs.io/en/latest/introduction...

So, parts of this are already in place and I assume this will be a big boost towards a new vulnerability ecosystem.

jeroenhd(3638) 2 days ago [-]

Us Dutch have https://advisories.ncsc.nl/advisories although a lot of that is just analysing CVEs and their impact on society.

An EU solution would probably be much better. Would suck for Americans, though, they'd need to get up early to meet European office hours.

cbondurant(10000) 2 days ago [-]

Am I missing something or was this literally announced with less than 24 hours of warning that one of the critical components to the cyber security landscape was disappearing.

What the fuck are you supposed to do about this. This is something that should have had multiple MONTHS of warning in order to allow those who depend on the CVE infrastructure to plan what to do next with their security posture.

mrtesthah(10000) 2 days ago [-]

Consider this part of the attack on the American infrastructure, economy, and society. Attacks do not abide by laws, official procedures, or come with warnings.

pjc50(1402) 2 days ago [-]

CVE-zero: the attack is coming from inside the White House.

porridgeraisin(10000) 2 days ago [-]

Good. CVEs were the poster boy of goodharts law for the longest time. Most security vulnerabilities behind CVEs are utterly meaningless.

goku12(10000) 2 days ago [-]

Ah! Another one to add to the following list:

- What disease did the CDC ever prevent?

- What improvement did the NHTSA ever bring to full self driving?

- What improvement in airline safety did the FAA bring?

- What good did FEMA do in any disasters?

I don't want to quip about how their achievements are invisible because they prevented the disasters that would have brought the spotlight on them, even when they were too underfunded to properly do their jobs. But I sure would like to see the people making these smart comments to give it a try and see how that goes. Then again, I have no complaints - at this rate, we'll get that chance soon.

4ndrewl(3642) 2 days ago [-]

To the 'I wish HN would stay out of politics' crew.

You can stay out of politics, but politics will always come and find you.

t0lo(10000) 2 days ago [-]

Everything is political now by design. It's meant to reach into every facet of society and community and restructure it.

okeuro49(10000) 2 days ago [-]

'You can stay out of politics, but politics will always come and find you.'

No, it's just recognising that it is silly to talk about politics, as certain views are just downvoted.

dmckeon(3337) 2 days ago [-]

People trying to ignore politics are like fish trying to ignore water.

cantrecallmypwd(10000) 2 days ago [-]

Yep. It's also true of people who think they can simply move out of the US and that 'solves' the problem too. America's problems are still (almost) everyone's problems too.

blueflow(3670) 2 days ago [-]

The problem is not political topics, it is how people discuss them.

h1fra(10000) 2 days ago [-]

HN and founders will say 'no politics here' on the regulated internet, drinking regulated water, eating regulated food, breathing regulated air.

scandox(3188) 2 days ago [-]

What people mean when they say this is that they don't want to engage in party political and/or tribal political discussions. They don't want to do this because it just means rehearsing talking points.

People are not dumb. They know that politics is everywhere but they want to live and love and talk about things that are interesting.

belorn(10000) 2 days ago [-]

I view the archive.org, Wikipedia, CVE program, and Linux Kernel to all have had discussions on HN about how to they should be funded. Is that kind of politics the kind that people wish that HN stayed out from?

atmosx(10000) 2 days ago [-]

This quote is essentially unworkable. Everything you say, or choose not to say, inevitably advances some political perspective over another.

What we should really aim for is thoughtful, civilized, and maybe even aesthetically pleasing discourse. That's what educated people strive for.

Trying to "avoid politics" is like collecting seashells while a tsunami is rolling in.

elcritch(3678) 2 days ago [-]

> The ancient Greek understanding of an "idiot" referred to someone who was a private citizen or a person who did not actively participate in public life or politics.

spacebanana7(10000) 2 days ago [-]

To play devil's advocate - it's horrible when gaming, programming, business or even porn forums get overrun by politics.

It's not that the political topics are unimportant but all my feeds just end up looking the same as each other and the same as a newspaper app. I hate election nights because of this.

keybored(10000) 2 days ago [-]

Apolitical person: Ugh politics is so dumb

Same person: Why is the world organized in such a dumb way?

pjmlp(113) 2 days ago [-]

Technology without politics is a pipe dream, even the FOSS licenses depend on politics.

bamboozled(3414) 2 days ago [-]

100% agree, staying out of politics has been a luxury not everyone has, it's totally unavoidable now.

pif(3653) 2 days ago [-]

There's politics and there are facts.

Trump voters are stupid. This is a fact.

Right or left leaning, that's politics.

mrtksn(1939) 2 days ago [-]

The problem with discussing politics is that it gives you the kicks. Its very easy to get into a feedback loop and take things quite far off civility. I am also guilty of it, many times.

IMHO there needs to be a mechanism for breaking the loop and then we can have civil online political discussions. Unfortunately most places just ban it or ban those who got into the loop, either way its ugly.

IRL when discussing politics and things don't go badly its thanks to 3rd party who will moderate or calm down the heated debaters.

deadbabe(10000) 2 days ago [-]

Not keeping politics out of our lives is the reason we've ended up with a totalitarian fascist dictatorship. If politics is forbidden, people have to just make up their own minds and vote for what makes sense to them, instead of banding together and slowly intensifying to the most radical extremes in bids to outdo each other.

Everytime you discuss politics on the internet, you entrench the current administration.

Pxtl(3644) 2 days ago [-]

> the 'I wish HN would stay out of politics' crew.

Sadly, this crew includes the site's moderation.

mardifoufs(10000) 2 days ago [-]

ah yes, losing the... CVE database is truly the wake up call to get engaged in politics.

I mean sorry but I'm not sure if you're being ironic. It sounds like something you'd read on ngate

orblivion(10000) 2 days ago [-]

HN can stay out of politics just fine for the most part. If a political topic comes into tech we can talk about it then, and stay out of other crap that insufferable people drag in because 'there's no such thing as being neutral' or whatever.

dhx(2975) 2 days ago [-]

The latest contract[1] (I hope this is the right one) for MITRE's involvement with CVE and CWE programs was USD$29.1m for the period 2024-04-17 to 2025-04-16 with optional extension of expenditure up to USD$57.8m and to an end date of 2026-04-16.

Seemingly MITRE hasn't been advised yet whether the option to extend the contract from 2025-04-16 to 2026-04-16 will be executed. And there doesn't appear to be any other publicly listed approach to market for a replacement contract.

[1] https://www.fpds.gov/ezsearch/jsp/viewLinkController.jsp?age...

gwd(10000) 2 days ago [-]

I can't figure out why the hue and cry wasn't raised until the very last minute. Did they not know a month ago that they were running out of time? Is it standard practice for the government not to say they're going to extend the contract until the day beforehand or something?

NilayK(10000) 2 days ago [-]

> A coalition of CVE Board members launched a new CVE Foundation 'to ensure the long-term viability, stability, and independence of the Common Vulnerabilities and Exposures (CVE) Program.'

> https://www.thecvefoundation.org

https://mastodon.social/@serghei/114346660986059236

hahajk(10000) 2 days ago [-]

So if the govt stops paying them they'll continue to do the work for free?

gnfargbl(10000) 2 days ago [-]

This kind of a consortium needs to explicitly avoid being captured by both the product vendors (who could be incentivised to manipulate the CVE issuance process to support their own remediation timescales), and by security companies (who could be incentivised to obtain a competitive advantage via preferential access to the CVE database).

It isn't impossible for a commercially-funded organisation to avoid this kind of capture, but it isn't easy either. My mind immediately jumps to the relationship between the Mozilla Foundation and Google.

pama(1887) 2 days ago [-]

This smells like a quick attempt to enable phishing for vulnerabilities, and not a legit way to make progress. The comment is from a person that runs a security startup and the site is a google site that people can report to google as a scam. (Edit: downvote as you like it— perhaps my language was too harsh to help make the point clear. It is interesting how easy non-sec people fall for names and quotes and authority.. building trust does not come overnight, in fact it is never fully there, and infosec experts would not fall for such supply chain redirections with questionable future. Hopefully we will not have to test this idea soon, though some level of reliability and long-term automation would be welcome. We need technical, generally agreed upon systems, not a "foundation").

londons_explore(10000) 2 days ago [-]

How much was this contract worth?

If it was $5000/yr it's very different to if it's $5M/year for what amounts to little more than an instance of mediawiki.

mzhaase(10000) 2 days ago [-]

Long term its probably good to have a less US-centric world.

jeroenhd(3638) 2 days ago [-]

This is a chance for the EU to step up and take over. If the US government won't pay for the CVE program, the EU surely could. Many EU countries already run a program like this to server their own interests, and I believe the EU does as well.

If the US is willing to give up influence and control over the cybersecurity sector, we should accept that gift and use it to our advantage.

gorbachev(2089) 2 days ago [-]

I wonder what would happen to CVE program funding if Tesla and SpaceX would be zero-dayed to hell and back.

redleader55(10000) 2 days ago [-]

We will soon find out, probably.

WillAdams(10000) 2 days ago [-]

FWIW, I've never understood why this sort of thing wasn't just directly handled by the NSA --- aren't they the group which should be tasked with cybersecurity?

I always suspected that 'Department of Homeland Security' would lead to Banana-republic-like shenanigans --- could we defund them?

donohoe(128) 2 days ago [-]

I don't think anyone trusts the NSA to run a program like this.

dfedbeef(10000) 2 days ago [-]

'National Security' doesn't mean you personally. It's the government only. There's a conflict of interest that immediately arises if a part of the DoD (who owns cyberwarafe, which uses vulns) maintains a public vuln database.

(Edited to be less salty, sorry)

i_love_retros(10000) 2 days ago [-]

At this point it's not crazy to believe Russia is running the country

dfedbeef(10000) 2 days ago [-]

This level of stupidity seems pretty American to me

jeff_carr(10000) 2 days ago [-]

The contract with MITRE has been extended.

https://www.forbes.com/sites/kateoflahertyuk/2025/04/16/cve-...

My guess indefinitely.

DOGE might be a bunch of idiots, but in the entire DOD, there are non-idiots.

metalliqaz(10000) 2 days ago [-]

not just idiots... malicious idiots

tlogan(2756) 2 days ago [-]

My guess is that they'll be phased out next year. The long-term goal seems to be transitioning the CVE program into something more like an industry-led consortium. (If you did not notice they operate zero budgeting approach: cut everything and if something is very important reverse it. But you cut first and then ask questions.)

It's worth noting that MITRE is a DoD contractor (with minor contracts from other agencies like this one). Having the CVE program operated by a company funded by the U.S. military raises valid concerns about conflicts of interest—especially in an ecosystem that depends on neutrality and global trust.

lynndotpy(3619) 2 days ago [-]

This is good news, but in general. We can not rely on the DoD to make smart decisions.

Ultimately, Pete Hegseth, with a career as a Fox News character, calls the shot.

plasma_beam(10000) 2 days ago [-]

It doesn't appear to have posted to FPDS yet: https://www.fpds.gov/ezsearch/fpdsportal?q=PIID%3A%2270RCSJ2...

The contract expired today, but had an option period through March of 2026. DHS just needed to exercise the option.

Edit: Note the contract ended today April 16 - so performance would stop midnight tonight if the option wasn't exercised. Government contracts routinely go down to the wire like this, and often are late getting exercised. Why the uproar over this one? Did CISA signal to MITRE that they weren't going to exercise the option?

andreygrehov(1663) 2 days ago [-]

But the article says, quote:

> It's unclear what led to DHS's decision to end the contract after 25 years

and then suddenly it gets extended. What does it have to do with DOGE?

InsideOutSanta(10000) 2 days ago [-]

This makes me wonder what other stuff most people don't know exists but is important to our society has quietly disappeared in the last few weeks. We know about this one because we know it's important. What are the things we don't know about?

jeroenhd(3638) 2 days ago [-]
https://www.project2025.observer/ lists a few. Of course, those are only the agencies the Trump people know about and explicitly want to destroy, but it's a start.
knowaveragejoe(10000) 2 days ago [-]

The cheerleaders don't care. Americans' relative certainty and quality of life is backstopped by institutions they either barely understand or have never heard of. Let them touch the stove, I guess.

jl6(10000) 2 days ago [-]

It's a reckless move to cut funding so abruptly, but taking a step back from the short-term chaos, it probably is an anomaly that this was government funded. All of private tech relies on it, and private tech is big enough to pay for it. I hope that the trillion dollar babies consider this an opportunity to pool together to form a foundation that funds this, and a bunch of other open source projects run by one random person in Nebraska.

kbumsik(1416) 2 days ago [-]

> it probably is an anomaly that this was government funded

Companies can definitely fund it. But to be fair the gov, including NIST, also relies on CVE.

chasontherobot(3333) 2 days ago [-]

ah yes, let private entities pay for it. then when there is a vulnerability with one of those entities' software, they can pay a bit more to bury it!

padjo(10000) 2 days ago [-]

Ah yes the old "well can't concerned citizens band together, form a committee, collect revenue and fund things that are in the common interest" answer you hear from small government types that makes me think you lot don't really understand what government actually is.

JCharante(10000) 2 days ago [-]

> it probably is an anomaly that this was government funded. All of private tech relies on it, and private tech is big enough to pay for it.

I mean doesn't big tech and the people they give salary money to pay taxes? Ground transportation companies rely on public roads and but we fund it because having the infrastructure is an economic multiplier.

I'm not arguing in favor of funding the CVE program, I just don't think that's a good reason.

bspammer(10000) 2 days ago [-]

The US government itself uses the database, so there is a strong national security interest in it not being in private hands.

phillipcarter(10000) 2 days ago [-]

Considering the large number of government agencies that have sponsored the program, no, I don't think it was an anomaly: https://www.cve.org/About/History

bslanej(10000) 2 days ago [-]

Just seeing HN mad like this makes things like these so much worth it.

goku12(10000) 2 days ago [-]

Oh! It will be even more fun when the entire infotech and infosec industry starts seething soon. Then the rest of the world will just make alternative arrangements and move on, leaving the US behind because they can't be trusted anymore. HN's reaction is just a small taste of things to come.

cookiengineer(3494) 2 days ago [-]

If there are any Europeans here, I'd love to make my vulnerability database that's accumulated from all linux security trackers and the CVE/NVD open source if I can manage to find some folks who'd help with maintenance.

Currently hosting costs are unclear, but it should be doable if we offer API access for like 5 bucks / month for private and 100 / month for corporate or similar.

Already did a backup of the NVD in the last couple hours, currently backing up the security trackers and OVAL feeds.

Gonna need some sleep now, it's morning again.

My project criteria:

- hosting within the EU

- must have a copyleft license (AGPL)

- must have open source backend and frontend

- dataset size is around 90-148 GB (compressed vs uncompressed)

- ideally an e.V. for managing funds and costs, so it can survive me

- already built my vulnerability scraper in Go, would contribute it under AGPL

- already built all schema parsers, would contribute them also under AGPL

- backend and frontend needs to be built

- would make it prerendered, so that cves can be static HTML files that can be hosted on a CDN

- needs submission/PoC/advisory web forms and database/workflow for it

- data is accumulated into a JSON format (sources are mixed non standard formats for each security tracker. Enterprise distros use odata or oval for the most parts)

If you are interested, write me on linkedin.com/in/cookiengineer or here.

f_devd(3388) 2 days ago [-]

Maybe something to bring up to one of these e.V.'s if it ends up being difficult to get started: Codeberg.org, nlnet.nl, ccc.de

weinzierl(233) 2 days ago [-]

Try to talk to the people from the Sovereign Tech Fund, they have a history of sponsoring security relevant projects in the EU.

greenRust(10000) 2 days ago [-]

Great idea. I'm interested in helping. I'll dm you.

Ucalegon(10000) 2 days ago [-]

The EU should just buy MITRE. Move it to the EU and make it a EU based project.

juicyyy(10000) 2 days ago [-]

Im also interested in helping

JimBlackwood(10000) 2 days ago [-]

I'm interested to help! I added you on LinkedIn, so will message there after you accept. :)

anontrot(10000) 2 days ago [-]

Try if you can find some help here https://openssf.org/

wustus(10000) 2 days ago [-]

Depending on deployment strategy I could help with Kubernetes stuff.

sneak(874) 2 days ago [-]

The AGPL is a nonfree (and nonsensical) license.

There's nothing wrong with normal GPL.

goodpoint(10000) 2 days ago [-]

There are already many security trackers, why writing a new one? The issue is paying people to handle the advisories.

harrisi(10000) 2 days ago [-]

I'm not European but I'd love to help.

mwe-dfn(10000) 2 days ago [-]

The European, GDPR compliant subnet of the Internet Computer could suit your needs. The app would be decentralized out of the box and it can't be shut down by a single entity like a traditional cloud provider or nation state. Hosting 100GB costs about 500$ per year [0]. This is not a traditional hosting provider, it's a decentralized cloud. Reach out on the forum [1] or to me if this sounds like a good fit to you (I think it does, from your list of requirements).

[0] https://internetcomputer.org/docs/building-apps/essentials/c... [1] https://forum.dfinity.org/

tecleandor(10000) 2 days ago [-]

(Spain, doing storage and web hosting) What usually worries me the most is the administrative or management part, which I don't know how big would be for this project...

senda(10000) 2 days ago [-]

messaged on linkedin fyi

lars_francke(3289) 2 days ago [-]

Honest question: Does this not already exist?

- https://vulnerability.circl.lu/

- https://osv.dev/

- https://vuldb.com/

And a few others?

mitjam(10000) 2 days ago [-]

The main costs definitely not hosting and can be quite significant. MITRE had $2.37B revenue in 2023, most if it contributions. I don't know how much of it can be attributed to the CVE, but I assume it's not an insignificant part of it: https://projects.propublica.org/nonprofits/organizations/422...

hypercube33(10000) 2 days ago [-]

I would email someone like Patch My PC they seem good stewards of stuff open source from my vague looking and they are good people. They may just host a clone of it that's open.

newsclues(10000) 2 days ago [-]

Why EU?

Canada may be another friendly option

sberder(3251) 2 days ago [-]

Looks like some people are already getting things moving: https://www.thecvefoundation.org/

worthless-trash(10000) 2 days ago [-]

Some cnas may also submit. Is this something you are open to?

dev_l1x_be(10000) 2 days ago [-]

We should host it and collect membership fee from people who need this data. This way we can make it resilient against lack of government support. I would love to pay 5-10EUR/month to use such a service.

insane_dreamer(10000) 2 days ago [-]

CVE was anti-American woke.

No, more seriously, just like with shutting down NOAA services, it seems the goal is to:

1. cut services (we saved taxpayer money!!)

2. at some point later: oh, we actually need those services

3. pay <insert your favorite vendor here, preferably one connected to Musk> to provide the service (see! we don't need to pay gov employees!!) (fine print: the vendor costs 2-3x the original cost). But by then no one is looking at the spending numbers anymore.

Slick moves.

SirHumphrey(10000) 2 days ago [-]

And here lies the problem. Even from a libertarian perspective DOGE is counterproductive because maintaining a system is much more cost effective than starting it anew.

Especially when you cut something recklessly, figure out in month that you need back that capability right now and have very little leverage to negotiate with private providers.

When you look at the last cutting effort in the Clinton administration the difference in jarring.

Combine that with the fact that with a few exceptions DOGE has been cutting the most cost effective programs (i can't think of a better bang for buck science program than NOAA) it's saved very little vs the amount of pain it has caused.

gabesullice(3339) 2 days ago [-]

As a newly minted cynic, this seems like a cynical play to save someone's budget.

Step 1: Post discreetly to a forum with minimal information and an absurdly short deadline

Step 2: Phone your friend, the former board member, to make your case on LinkedIn

Step 3: Ring up a friendly journalist and give them a tip

Step 4: Reference the insuing chaos as justification for keeping your project funded

Note that the article carefully avoids pinning the blame on DOGE or the Whitehouse while heavily implying it. MITRE is technically a private entity, albeit a non-profit. And the very last paragraph of the article states:

> A CISA spokesperson told CSO, "CISA is the primary sponsor for the Common Vulnerabilities and Exposure (CVE) program... Although CISA's contract with the MITRE Corporation will lapse after April 16, we are urgently working to mitigate impact and to maintain CVE services on which global stakeholders rely."

To be clear, the point isn't to say that the CVE program isn't valuable, nor is it to say that it's good for a shenanigan like this to be necessary.

The point is that, unless you're directly involved in this subject (not impacted—involved), it's probably best to maintain a 'wait and see' attitude rather than succumb to catastrophizing this news.

girvo(3632) 2 days ago [-]

Have you seen proof that this is what has been happening? Your explanation is much more convoluted than 'DHS cut funding, like the administration has said it is going to do'.

wengo314(10000) 2 days ago [-]

vibe coding could not have come at a worse moment.

sgt(3284) 2 days ago [-]

Just tell the AI: 'Make this code secure' /s

redleader55(10000) 2 days ago [-]

I see this as the perfect moment to get into consulting - either development, or security. People were not sure what jobs AI will create: 'GenAI babysitting' is one of them.

skirge(10000) 2 days ago [-]

only one country pays but all benefit from it. It should be funded by all who benefit like UN.

goku12(10000) 2 days ago [-]

I'm sure that a hundred other countries will step up to fund it. But have you given any thought about why the US was so willing to sponsor it alone in the past?

jowea(10000) 2 days ago [-]

I thought most people in the US wanted the UN to have less control over this stuff? Remember the talk about moving control of the Internet to the ITU (International Telecommunication Union)?

hubabuba44(10000) 2 days ago [-]

The real irony here is that a lot of ycombinator founders and the people reading HN were exactly the ones making this possible and now start to wonder why the snake eats its own tail.

cantrecallmypwd(10000) 2 days ago [-]

Sorry, I made the mistake of installing PyPy.

this15testingg(10000) 2 days ago [-]

exactly; I hope ycombinator and its proponents can enjoy living in the ancap fantasy land where you have to pay to be alerted for a climate change fueled mega hurricane (also caused by this exact same reckless, unregulated greed) because NOAA was disbanded. Billionaires shouldn't exist, but neither should millionaires.

j-krieger(10000) 2 days ago [-]

The missing funding is something like 2 million dollars. Any US company could make this issue go away in an instant.

nosianu(3636) 2 days ago [-]

Or they wanted this, because this could be part of the privatization of many government functions. They, or at least some of them, could see this as controlling this function for money. It's a regular stream too, the valuable subscription model and customers who really need the service (and if they don't, just add a new law in the name of IT security forcing firms to sign up).

gcollard-(3262) 2 days ago [-]

Forget everything you know and consider that it might be a misguided and risky negotiation tactic.

Disclaimer: This is not business advice and should be read using Cartman's voice.

Step 1: Announce publicly that you are not renewing your contract.

Step 2: If the market has viable alternatives or the service you are negotiating isn't that hard to replicate, other actors will manifest to fill in the gaps, especially if your business is attractive. (E.g., The top comment is building an alternative; other comments point to alternative services.)

Step 3: Congratulations, you now have leverage for a significant discount with your previous provider because they face the real prospect of losing your business entirely to a competitor. If the competitor is private, you can even double dip by investing in their company before attributing them the contract.

Aperocky(10000) 2 days ago [-]

There's always a cost even if there doesn't seem to be one, credibility is measurable in markets and when it bite I think we'll all be in rough times.

froggertoaster(10000) 2 days ago [-]

Believe me when I say that DOGE is filled with smart people (I know a few of them).

Just because they're scattershot cutting doesn't mean they're stupid.

raegis(10000) 2 days ago [-]

I guess I'm naive, but given the current situation, wouldn't a smart person resign from DOGE? If I were smart and highly employable, like these guys, I would not want to be associated with all the indiscriminate firings of DOGE.

p0w3n3d(10000) 2 days ago [-]

One man appears at one position and so many things stop working in so little time

Alifatisk(3260) 2 days ago [-]

Yet, he is still praised and cherished. I can't comprehend how.

blindriver(10000) 2 days ago [-]

How much does CVE cost to maintain and why must the US fund the entire thing?

manmal(10000) 2 days ago [-]

The bureaucracy of internationalizing it would likely be more expensive than the current cost.

GuinansEyebrows(10000) 2 days ago [-]

We can afford it.





Historical Discussions: But what if I want a faster horse? (April 11, 2025: 1484 points)
But what if I want a faster horse? (April 04, 2025: 3 points)

(1484) But what if I want a faster horse?

1484 points 7 days ago by saeedesmaili in 850th position

rakhim.exotext.com | Estimated reading time – 3 minutes | comments | anchor

People in tech business circles love this quote by Henry Ford:

If I had asked people what they wanted, they would have said faster horses.

The idea is to think outside the box and create entirely new markets instead of just new products in existing ones. Like Apple creating the iPhone (sure, smartphones existed before—but cars also existed before the Ford Model T).

But sometimes, I really want a faster horse.

Netflix in 2012 was a super fast horse. It had a simple but massive catalog of movies and shows, solid recommendations, and basic library management. Compared to my limited local media library it was great. You could actively tune your tastes and rate things with a 5-star system.

Netflix today is very different. It's not a library—it's an experience. Instead of reliably showing me what I 'have' and recommending what I might like, it shuffles content on each interaction, sometimes changing the cover images of shows in real time, like some black-market charlatan. It has no meaningful catalog, no real categories—just short-lived, auto-generated groups like "Binge-worthy" or "Festive spirit."

Even the "New" section is meaningless. It opens with a "For You" row (huh?), then "Continue Watching", followed by generic 'Popular in ' rows. It feels like YouTube search: ask for something specific, get a few hits, and then a flood of unrelated 'popular' and 'recommended' content.

"My List" on Netflix randomly shuffles items and changes their covers every few hours. "Continue Watching" may or may not include what I actually watched recently. Sometimes, the engagement algorithms resurrect some random Slovakian cartoon I opened three years ago—one and immediately closed because it that had no English subtitles here in Finland, even though they do exist in other regions.

I just want a faster horse.

Spotify in 2015 was also a super fast horse. It was like my iTunes library, but with millions more tracks. Getting new music became faster, but it didn't change the nature of my relationship with music.

Spotify today is... basically Netflix. An inconsistent stream of ever-changing content, weak library tools, and an endless barrage of podcasts.

Overall, consistency, user control, and actual UX innovation are in decline. Everything is converging on TikTok—which is basically TV with infinite channels. You don't control anything except the channel switch. It's like Carcinisation, a form of convergent evolution where unrelated crustaceans all evolve into something vaguely crab-shaped.

The list goes on:

  • YouTube. YouTube: Once a video catalog with social discovery. Now? TikTok.
  • LinkedIn. Once a network of resumes. Now? TikTok.
  • Substack. Yeah, a newsletter platform... now launching TikTok-style videos. Seriously.




All Comments: [-] | anchor

gostsamo(3330) 7 days ago [-]

Sorry, no money in horses, donkeys are all that we can offer you. What color would you like your donkey in?

JKCalhoun(3408) 7 days ago [-]

Any color as long it is black (of course).

arkh(10000) 7 days ago [-]

All the result of A/B tests. Everything will converge to give you an engaging experience for most people. The only not too bad student is reddit which lets you keep using their older UI if you want to. But everything else is pushing new driven by A/B tests UI optimized for engagement.

bflesch(10000) 7 days ago [-]

With the onslaught of Javascript-parsing bots and crawlers, how useful are A/B testing results any more?

wazoox(3671) 7 days ago [-]

'Engaging experience' being actually a weasel word for 'sucking your brains out to make you watch ads and valueless nonsense'.

ballenf(10000) 7 days ago [-]

My hunch is these algos are also optimized for hiding the long tail of content that's more expensive to serve as it's not edge-cached. And it was the long tail that drew many of us to these services in the first place. At least that's my feeling using Youtube and Netflix these days.

mxfh(386) 7 days ago [-]

Not just A/B test but all happening while cost optimizations happen.

The key metric seems to be no longer how many users you can make sign up, but how can I keep an subscription running at lowest cost to serve possible.

The UHD price is not worth it for a long term subscription, and the HD quality is subpar.

spicyusername(10000) 7 days ago [-]

I think it's very likely this kind of optimization is giving people want they 'will' want, instead of what they 'do' want.

If you ask a heroine user if they want to use, I suspect most will say no.

But if you A/B test their behavior and build a product based on what they actually do, you're going to start selling more heroin and encourage more heroin use.

To everyone's detriment.

nyclounge(10000) 7 days ago [-]

>But everything else is pushing new driven by A/B tests UI optimized for engagement

That really hit the nail. Advertising industry along has ruined web! Everything is for trigger what action we want user to do on the page, how can we see what user is thinking.

Very creepy indeed from a user perspective. Now days I don't care if telementary is aggregated or open or if it helps developer makes better software.

How about NO telementary!!! NO tracking!!!

dwedge(10000) 7 days ago [-]

I fear old reddit is going to be killed off this year. They're getting rid of the red envelope for messages/replies, they've pushed the notification and chat with red icons into old reddit and more and more content seems to 'accidentally' link to new reddit.

They left it alone for years but now they're converging them, looks like it's only a matter of time

kcatskcolbdi(10000) 7 days ago [-]

Going back to horses sounds so nice.

thijson(10000) 7 days ago [-]

I see people riding them around the hood here in Philly. There's also pop up stables here and there.

hackitup7(3555) 7 days ago [-]

'If I had asked people what they wanted, they would have said faster horses.'

This line is especially silly when making B2B products, especially very expensive enterprise ones. It's often used to justify building 'great ideas' from some exec or overzealous PM/engineer over concrete asks from customers. Like you really think that a team of 20 experienced people paying >$1M to help run their multi-billion dollar business, both have no idea what they actually want and don't understand the capabilities of new technologies in the market? Totally condescending.

hobs(3264) 7 days ago [-]

Have you ... done enterprise sales? The idea that a group of people working for a multi-billion dollar business having no idea what they want and no understanding of capabilities of new technologies is ... standard?

I have seen it personally ... dozens? of times? Its the reasons startups can even succeed at all given the enormous momentum and cash reserves of these bigger companies - their goals, management, approach - it all becomes more diffuse and poorly executed.

bluGill(10000) 7 days ago [-]

What I don't like about the line is it only applies when there is a non-horse option. No amount of effort in 1600 would have resulted in either a bicycle or an automobile - there were too many needed things not available. In 1600 most people wouldn't have wanted a faster horse - sure they knew what a horse was but they couldn't afford to feed it and so they were not interested - a car is cheaper than a horse for nearly all uses.

furyg3(3649) 7 days ago [-]

The TikTok-ification of advertising supported platforms is terrible, but makes sense to me. LinkedIn pivoted from making money on subscriptions and fees for job postings to ads, which mean the leading drivers are 'engagement' e.g. time you spend doom scrolling on their platform. This will end in disaster for the platform as a place to find jobs or employees.

Netflix I understand much less. They make money from subscriptions. If you perceive having a fantastic experience on the site by just going there, finding something you enjoy watching, and leaving... they win. Why they would foster a doom-scrolling experience I really can't really explain, other than imagining some dark pattern like they have to pay per view and want you to watch C grade movies? More time spent looking for something to watch means less time streaming?

I don't get it.

kilian(1809) 7 days ago [-]

This is strongly in tin-foil hat territory but: streaming video costs a lot more money than streaming some JSON to populate a UI. Every minute you spent browsing the catalogue over playing a video is probably a significant costs saving for Netflix.

chii(2993) 7 days ago [-]

> More time spent looking for something to watch means less time viewing?

or, if you're presented with more random 'clips' or movie snippets, this turns on your gambling reward center. It's like a slot machine - where you 'win' by finding a good series to watch after searching. And because this is random, you end up getting addicted to looking thru the list/snippet, trying to encounter a perfect series to watch.

lotsofpulp(10000) 7 days ago [-]

Netflix is winning, see net income trends:

https://www.macrotrends.net/stocks/charts/NFLX/netflix/net-i...

Maybe it is winning despite what Netflix leaders are choosing to do, and maybe their choices will cause them to falter soon. And maybe Netflix could be doing better than they are. But it is always easier to pontificate than execute.

I don't buy Netflix solely because they don't integrate with the search in the iOS/macOS TV app.

Unfortunately, based on media trends before streaming and Netflix was a thing, lots of people like C grade productions. If you recall, "reality" TV shows were taking over in the 2000s. People like the Tiktok-ificiation (or otherwise lowering of quality).

teeray(3101) 7 days ago [-]

> Why they would foster a doom-scrolling experience I really can't really explain

Because regardless of whether or not the business model depends upon it, investors have been trained that "engagement" is inherently good quality for their investments to have. Increase engagement, stonk price go up.

sanderjd(10000) 7 days ago [-]

I guess my thing with LinkedIn is that there's just no reason to use the feed. It's still a place to connect with people I've worked with and keep up with what they've been doing. It's incredibly useful for that. I really don't find the feed to be either a boon or a hindrance in that use case. I know it's there, I know it annoys some people, but it's just irrelevant to me.

JackMorgan(10000) 7 days ago [-]

You've got it backwards, Netflix doesn't want people to just doom-scroll, the users want to doom-scroll.

Attention destroying apps reduce the long term focus and reward centers such that doom-scrolling through the catalog probably feels better than just watching something. Most of the folks I know who start a movie or show immediately pull out their phones anyway to scroll elsewhere.

pharrington(10000) 7 days ago [-]

As is always the case, they are high on their own supply. Netflix, and a ton of other companies, are terminally ill gambling addicts.

gnatolf(10000) 7 days ago [-]

Mostly it's to cover up that the catalogue isn't as great anymore, isn't it? Since almost every big label took back the rights and started their own streaming service, Netflix simply doesn't have as much content (that anyone would want to see) anymore.

I quit all those platforms recently and I'm not missing the frustration of having to 'switch channels' through their incomprehensible categories and views anymore.

neutronicus(10000) 7 days ago [-]

I assume it's about papering over the gaps in their content library.

You can't provide a seamless UX for turning on the TV and watching The Office if you don't own the rights to The Office. They want to habituate you to scrolling through content Netflix actually owns and picking something, because it's apocalyptic for them if you ever treat the services as fungible content libraries that you hop between month-to-month.

patapong(10000) 7 days ago [-]

I think Netflix faces the problem that measuring the causality between a user watching specific content and choosing to stay subscribed is super hard. Therefore, they focus on a metric that is easy to measure, namely time spent in the app. This is likely not the metric they should be optimizing for, but since they _can_ measure it, it becomes the target anyway.

duped(10000) 7 days ago [-]

> Why they would foster a doom-scrolling experience I really can't really explain

Entertainment is a zero-sum market. More time spent doom scrolling means less time spent on another service, which probably reduces their churn (also, ads)

bluetidepro(3092) 7 days ago [-]

Think of it this way, the less time they spend actually WATCHING content, the longer they will pay their monthly service because they have this massive 'watch list' that they never actually get through. They just keep paying month after month never getting through a backlog that they inspire to watch. I don't agree with it, but it makes sense to me. If you can never feel satisfied, you will pay over and over again chasing that satisfaction of watching 'everything.'

Many people will pay Netflix for years hardly watching content for months just because the convenience factor of not having to subscribe/unsubscribe when they know a new season of X will be out in the next year. It's wild to me, but people are lazy. So again, the more you keep them from actually watching the content and realizing they are 'done', the longer they likely just keep their subscription active. Get them to add as much potential content they want to watch to a never ending backlog watch list.

raincole(10000) 7 days ago [-]

> Why they would foster a doom-scrolling experience I really can't really explain

They want to take the bargaining power from creators (and old IP owners).

They don't want the customers to search for a specific show. They want the customers to watch whatever is shown to them. This way Netflix will have tremendous power over show creators - if our algorithm doesn't favor you, it doesn't matter how good your show is or how much money you spend on marketing outside Netflix.

codexb(10000) 7 days ago [-]

Netflix's primary goal used to be to attract new subscribers. Now it's a more about maintaining subscribers and finding new ways to monetize the existing subscriber base. That's why you're seeing things like 'sharing' subscriptions, and advertising, and premium plans.

notatoad(10000) 7 days ago [-]

i think people's view of netflix's business model is heavliy biased by what they want netflix to be.

i get it, i hate what they've become too. i'd like to believe there's a world where paying for content is a better model than selling ads. but the reality is that every time netflix makes a decision that the internet gets angry about, their balance sheet looks better.

joe_the_user(3127) 7 days ago [-]

The thing about the situation is, now that when Tik-tok-ification has grown big enough, it (no-choice interfaces, 'enshitification', etc) becomes the only paradigm UI designers, managers and investors understand. Moreover, it's interface that essentially completely controls the user - all the choices they have are essentially fake and control always appeals to managers and control may not immediately make money but it can make money long term so it can be justified.

You can see how Sonos enshitified their interface and even with a user rebellion wouldn't back down, just as an example.

api(1616) 7 days ago [-]

What this is describing is not what the Ford quote is talking about. Netflix and all the rest didn't TikTokify because they were trying to create some massive visionary innovation, but the opposite.

They did it because it's more profitable to shovel slop than to distribute quality. Quality content is expensive to make. Slop isn't. The way you do that is by hypnotizing people with addiction. To do that you have to have control over what people see and use algorithms to optimize that to 'maximize engagement.' You need your users mindlessly scrolling, not searching and categorizing and exploring. You need to disengage the neocortex and engage the brain stem.

TikTok is being copied by everyone because they nailed this formula better than anyone. They didn't invent it, just perfected it. I'd say Meta/Facebook invented it, which is why Zuckerberg should be known as the man who destroyed the Internet.

The next step beyond TikTok is a 100% AI generated algorithmic feed. Drop the human creators entirely. Everyone gets a personalized feed of low-quality AI slop tuned for maximum engagement.

Addiction is the best business model.

kelnos(10000) 7 days ago [-]

Part of the problem specifically with Netflix is that they lost the rights to most of the good stuff, or at least the stuff that everyone wants to see, because the Disneys of the world set up their own streaming services and pulled their content from Netflix.

So in a way Netflix had to learn how to push slop. Because they can't make their own Star Wars or MCU or Friends or whatever. It's just not easy to build a catalog of reliably-profitable franchises. Especially when many of those franchises were born decades before Netflix even existed.

Even the good stuff Netflix has (like say Black Mirror) isn't going to be enough to keep customers unless they get people watching some slop.

Taek(3093) 7 days ago [-]

The root problem seems to be monopoly and fragmentation.

When Ford was working on a car, people who wanted a faster horse could go to the horse store. There were reasonable alternatives to Ford's new method of transportation.

But here, you can't recreate Spotify from 2015. You'll never get the rights to play the music for users. Same with Netflix, you'll never get the rights to show the movies.

Same thing with Twitter, Facebook, etc. Even if you know exactly what content your user wants, you can't fetch it for them because it was posted in some other walled garden, and that wall stops you from competing.

If you want a faster horse, change the laws so that people can build faster horses and compete.

ks2048(3275) 7 days ago [-]

Maybe it depends on your listening habits, but for me, Spotify and Netflix are very different experiences.

Spotify has almost anything I look for. Netflix I struggle to find anything of interest.

gampleman(3617) 7 days ago [-]

Good luck riding your fast horse through most urban areas (and parking it... er stabling it). All of those things were routine in urban areas before car adoption (I believe Manhattan for instance often had stables in upper floors, leading to some interesting design to get horses up and down).

WorldPeas(3604) 7 days ago [-]

So many of the burrs on my experience with anything are that I still expect the paradigm I had with my discman as a kid to the nth degree, back then I would load my favorite songs onto a disc, then play them, or play an album on repeat. I-tunes lets me do this still, but it's trying to push more of its streaming features on me like when I search my library, it defaults to searching Apple's network music volume, that I'm not interested in. I fear that the iphone will continue to hamper one's efforts to download media until you are forced into more fiscally expedient platforms like Spotify, where my favorite PM dawn song was replaced by a 'superior' remaster where the artist was much older and lost the tone of his voice. Sadly one of the consequences of convergence is that so much else in the phone is done right I'd probably still have to use it.

ryandrake(10000) 7 days ago [-]

Yea, iTunes (renamed Apple Music) is getting bad. The only thing I care about is what I've cared about since 2000: Playing a bunch of MP3 files in my collection. That functionality is now relegated to 'third tab from the left,' shoved aside behind a glass case like a relic from a former era.

ozim(10000) 7 days ago [-]

I hate algo feeds that change each time I refresh.

On LI I lost already like 3 articles that I really wanted to read but I clicked notification and I can never get that articles back.

robofanatic(10000) 7 days ago [-]

Its like when I go bird watching and finally see that elusive bird. but if I lose my focus for a split second, its gone, never to be seen again.

jakey_bakey(891) 7 days ago [-]

Honestly that last sentence about Substack hit me hard.

I just want them to import a syntax highlighting library but instead they are pushing video content into my face

nthingtohide(10000) 7 days ago [-]

Why don't companies have multiple recommendation strategies. One for power-users. One for casual users etc. Have the router infront of these models to intelligent switch between the different styles. In fact, there are times when I want indepth analysis. But after understanding the topic, I need short form content or memes which 'update' or 'entertain' the same topic in ongoing manner.

andai(3664) 7 days ago [-]

I think Spotify was perfect in 2008. I think people's need to justify their existence by constantly doing things creates an unfortunate incentive where perfect products are mutated beyond recognizability.

(This coupled with the tendency to hire more people as you get more popular, you have more people mutating the thing. Also novelty bias...)

madmountaingoat(10000) 7 days ago [-]

In those early days the Spotify user experience needed to try and differentiate and put up barriers to being copied. Later it suffered from being purely metric driven and tracking things like user-engagement thinking it's a proxy for happiness with the platform. And then later still they start to mostly care about the cost of delivery.

drcongo(3247) 7 days ago [-]

I just stop paying for things when they do this.

cratermoon(344) 7 days ago [-]

They don't care. They now make more money selling ads and the user data they've collected.

tuna74(10000) 6 days ago [-]

If you payed for things that are better (like physical media over Netflix etc) the producers would actually respond and there would be more of that.

andai(3664) 7 days ago [-]

Ironically the old horses were faster! Run XP on modern hardware (if you can get it running at all) and you'll see what I mean. Explorer opens fully rendered in the span of a single frame (0.016 seconds). And XP was very slow and bloated for its time!

It'll do this even in VirtualBox, running about 20x snappier than the native host, which boggles my mind.

svachalek(10000) 7 days ago [-]

It's amazing how fast we can eat up new hardware capabilities. The old 6502 1-MHz CPUs were capable of running much more sophisticated software than most people today imagine, with 1/1000 or 1/millionth the hardware. And now we're asking LLMs to answer math questions, using billions of operations to perform something a single CPU instruction can handle.

IshKebab(10000) 7 days ago [-]

To be fair even with modern software bloat the overall experience is a lot better now than it was in the XP days. I think it's mainly due to SSDs. They were a huge step change in performance and we fortunately haven't regressed back to the slowness of the HDD era.

At least on most hardware. I have a shitty Dell laptop for work that's basically permanently thermally throttled... :(

noisy_boy(10000) 7 days ago [-]

I think they were designed at the time of less powerful machines so they had to be designed better. Nowadays there is not as much push to eke out every last bit of performance because there is loads of power at everyone's disposal and developers are pushed to focus on features first without being given time to refine performance because features mean adoption. So the bloat creeps up, and hardware makers keep designing more powerful machines which further enables the bloatiness. It is a vicious cycle.

washadjeffmad(10000) 7 days ago [-]

This is part of why I still have a MacBook2,1 running Snow Leopard. Even with its 4GB of memory and Core2Duo, it's optimized to prioritize my input. It also never changes, which is a form of stability I've come to cherish.

Another point is that you can train a horse, or even eat it if in dire straits. You own that horse. I can't disable things I want to disable, and names, locations, and features change (or are removed) with no notice between minor version updates. I can't tell you the last time I built something for a new Mac, or wanted to.

I don't know MacOS today, and it certainly doesn't make me feel like I own my computer.

I'm less harsh about modern Windows because I view it as amends for Microsoft causing the bot/ransomware crisis of the last 15 years. Still not for me, but at least I neuter it into usefulness.

Gud(10000) 7 days ago [-]

My setup(FreeBSD+XFCE) hasn't changed at all over the last 20 years and is just as fast as it's always been.

I use virtualisation for the rest.

greenie_beans(1490) 6 days ago [-]

i'm off grid right now and the only fast websites are hacker news, old reddit, and my app https://bookhead.net that is html + a little bit of htmx + a little vanilla javascript

piperswe(3568) 6 days ago [-]

Hell, my Windows XP system with a nearly 20 year old processor (Q6600, ~17ish years old) still instantly does almost everything.

ghusto(10000) 7 days ago [-]

So happy to read this and all the comments in agreement. I thought it was just me.

In my bombastic opinion, Spotify has the _worst_ goddamn user interface of anything I have ever used, including my dishwasher with a single button. Netflix is less frustrating, but that's likely because 'here are some films' is more acceptable than 'here are some songs, but fuck you if want to listen by album'.

Smashing content into my face isn't making me love you.

metabagel(10000) 7 days ago [-]

To me, Spotify's UI is super counterintuitive.

eadmund(3321) 7 days ago [-]

> YouTube. YouTube: Once a video catalog with social discovery. Now? TikTok.

I hate YouTube Shorts with a passion. They are low-effort engagement bait. They cannot be disabled.

Even worse, my Google TV will not play them when my phone is connected to it, and my phone will not play them when it is connected to my TV. Both devices can play them fine, they just don't want to play them when they are connected.

There can be no good technical reason for this. It's just delivering a bad experience because it can.

dcrazy(10000) 7 days ago [-]

Many channels seem to use Shorts as a vehicle to get you to their long-form content. I don't mind that as a discovery mechanism; it's introduced me to some fun stuff. Other channels make Shorts-specific content, which I really dislike.

sambeau(2267) 7 days ago [-]

Self driving cars, where you have to supervise and occasionally have to reign in from going off the path, are essentially faster horses.

So they are finally here.

codexb(10000) 7 days ago [-]

I've never thought about it this way, but it's funny to think that horses are largely self driving on roads.

cjs_ac(10000) 7 days ago [-]

For any given thing or category of thing, a tiny minority of the human population will be enthusiasts of that thing, but those enthusiasts will have an outsize effect in determining everyone else's taste for that thing. For example, very few people have any real interest in driving a car at 200 MPH, but Ferraris, Lamborghinis and Porsches are widely understood as desirable cars, because the people who are into cars like those marques.

If you're designing a consumer-oriented web service like Netflix or Spotify or Instagram, you will probably add in some user analytics service, and use the insights from that analysis to inform future development. However, that analysis will aggregate its results over all your users, and won't pick out the enthusiasts, who will shape discourse and public opinion about your service. Consequently, your results will be dominated by people who don't really have an opinion, and just take whatever they're given.

Think about web browsers. The first popular browser was Netscape Navigator; then, Internet Explorer came onto the scene. Mozilla Firefox clawed back a fair chunk of market share, and then Google Chrome came along and ate everyone's lunch. In all of these changes, most of the userbase didn't really care what browser they were using: the change was driven by enthusiasts recommending the latest and greatest to their less-technically-inclined friends and family.

So if you develop your product by following your analytics, you'll inevitably converge on something that just shoves content into the faces of an indiscriminating userbase, because that's what the median user of any given service wants. (This isn't to say that most people are tasteless blobs; I think everyone is a connoisseur of something, it's just that for any given individual, that something probably isn't your product.) But who knows - maybe that really is the most profitable way to run a tech business.

soco(10000) 7 days ago [-]

Then, how could a business identify its (or market's) trend-setters, enthusiasts, or whatever we call them, which will push towards something new? I see this as essential for either making the business better, shinier, or to avoid losing users.

sokoloff(3027) 7 days ago [-]

> Ferraris, Lamborghinis and Porsches

For street usage, I think those cars are popular because they're beautiful more than because they're fast (or because enthusiasts like them).

My utterly soulless Lexus will drive more than fast enough to get me in serious trouble. No one will look at it and feel stirred by its beauty, whereas the typical Ferrari or Porsche coupe will look at least appealing to most and beautiful to many, even those who can't tell the three marques apart or even unaided recall the name Lamborghini.

another-dave(10000) 7 days ago [-]

which is also what I feel about the Spotify algorthim at times — no matter what I'm listening to, it invariably brings me back to what it thinks are my 'old reliables' once it gets onto recommending stuff.

I might just listen to it, if I have it on in the background, which then in turn feeds the algorithm that it made the 'correct choice', but it's a million miles away from, say, listening to a radio DJ where you like their rough output but they're cherry-picking what to play next.

subpixel(10000) 7 days ago [-]

I'm experiencing this in Peloton-land. They have an app that purports to be for home gym enthusiasts but is actually optimized for people who want to take instructor-led classes on their phone. Certain features don't work as advertised and I quickly reasoned that while this is a pain in my side most users don't care. If they did, Peloton would fix it.

otabdeveloper4(10000) 7 days ago [-]

> the change was driven by enthusiasts recommending the latest and greatest to their less-technically-inclined friends and family

No it wasn't. It was driven by shady crapware distribution schemes and intentionally subtly broken sites under the big G umbrella.

SamBam(10000) 7 days ago [-]

> However, that analysis will aggregate its results over all your users, and won't pick out the enthusiasts, who will shape discourse and public opinion about your service. Consequently, your results will be dominated by people who don't really have an opinion, and just take whatever they're given.

> In all of these changes, most of the userbase didn't really care what browser they were using: the change was driven by enthusiasts recommending the latest and greatest to their less-technically-inclined friends and family.

I'm confused as to whether your saying change is caused by catering to the median who doesn't care, or the enthusiast who recommends the latest and greatest. You seem to be saying both.

scarface_74(3598) 7 days ago [-]

You're giving it way too much of a positive spend. None of the companies are using analytics to increase the desirability for the majority of users.

They are doing it to increase "engagement" and so more people will stay on their site longer.

Why else wouldn't Netflix show the "continue watching" row first instead of forcing you to scroll past algorithmic generated crap?

It is the same reason that Google went from describing success as people getting off their site faster and going to one of the "ten blue links" to the shit show it is today.

chasd00(10000) 7 days ago [-]

Luxury watches are a good analogy too. A $5 watch from the gas station will give you the time just fine but there's a market for watches costing hundreds of thousands of dollars.

yapyap(10000) 7 days ago [-]

eh, I feel like this is a nicely typed out comment but it hits some wrong notes.

1. I wouldn't say the car veands you mentioned are popular because they can hit high speeds. In my experience nearly any car can with the right engine and equipment in it (of course due to weight distribution and other details I assume they're not all equally safe but that aside).

Personally when I look at those brands I think they're sleek and pretty and when I feel like wanting one it's because they're expensive cars, driven by the rich. They're not chosen only by the rich cause they have the best taste, they're chosen by the rich because they are the only ones to have the financial means to afford one.

Also I feel like the changes made based on analytics arent made to please (more) users but to make as much money as possible, whether that be pleasing users in the starting phases of your company or in the latter phases when you already dominate the market squeezing money out of your big existing userbase.

whall6(10000) 7 days ago [-]

Wow - this is great insight. I hadn't thought of it this way. Thank you for sharing.

tlogan(2756) 7 days ago [-]

> But who knows - maybe that really

> is the most profitable way to run a tech business.

Yes, I agree. This does seem to be the most profitable model for running a tech business: maximizing user engagement or increasing the time users spend on the platform. Whether that's achieved through intentionally convoluted UI or by aggressively surfacing certain content, the end goal remains the same.

That said, I don't think there's much room left for significant innovation in video streaming interfaces. The core challenge continues to be content — whoever offers the best or most compelling library wins. UI changes might tweak engagement metrics by a few percentage points, but they're marginal compared to the impact of strong content.

At the end of the day, if there's a great movie or series to watch, people will show up. If the content isn't there, no amount of clever interface design will convince someone to spend 30 minutes on something they're not actually interested in.

_kush(2685) 7 days ago [-]

This is the cycle I keep seeing:

Most great products start out for enthusiasts and often by enthusiasts. They're opinionated, sharp, sometimes rough, but exciting.

Then VC funding comes in, and the product has to appeal to a broader audience. Things get smoothed out and the metrics rule decisions.

Eventually, the original enthusiasts feel left out. The product's no longer for them.

So a new product comes out, started again by enthusiasts for enthusiasts. And the cycle repeats - unless someone chooses to grow slowly and sustainably, without raising, and stays focused on the niche.

cratermoon(344) 7 days ago [-]

> if you develop your product by following your analytics, you'll inevitably converge on something that just shoves content into the faces of an indiscriminating userbase, because that's what the median user of any given service wants

Except you're making the mistake of thinking these services are optimizing for their userbase. They are not. They are optimizing for revenue and profit growth, a very different target. More ads, cheaper and easier-to-product content, lower opex.

They are converging to churning out the least offensive slop at the cheapest cost with the maximum revenue.

None of the analytics are about what people using the product want, they are about making the most money and growing the fastest. Nothing would look like the services mentioned in the article if they listened to what the users really preferred.

darkhorse222(10000) 7 days ago [-]

That is exactly what is happening to Reddit. Made famous by its submitters and moderators. Business decision driven by metrics based on view counts because that sells ads. Let this be a lesson: metrics are not the only way to measure success. I worked at a company where metrics were viewed as a way to cut through dissonance and bias. Newflash: leaders should be opinionated and have visions that do not yet exist. They should be investors in their product and its culture. Metrics should play a role in that decision, but perhaps a tiny one. Because what metrics you choose, how you measure it, and most importantly, what is even measurable, have a tremendous impact on the effect of those metrics.

You cannot paint by numbers.

mlhpdx(3094) 7 days ago [-]

Honestly, I think it's just simple imitation.

Something is popular, folks are envious of it, they end up building something much like it. Doesn't matter if it's houses, logos, or user experiences – seems to be how things work.

heisenbit(3672) 7 days ago [-]

The short term data driven optimizations somehow erode the original product architecture and some of its value. I also think treating the consumer as static. Trick me one shame on you, trick me twice (admittedly I get tricked even more often to click on stuff) shame on me but eventually I learn and what worked turn into a constant irritating torn-off. These irritations accumulate. Good product management should strive to minimize such irritations but I guess we lost that with Jobs.

setgree(10000) 7 days ago [-]

'Shoving content into the faces of an indiscriminating userbase' maximizes eyeball time which maximizes ad dollars. Netflix's financials are a bit more opaque but I think that's the key driver of the carcinisation story here, the thing for which 'what the median user wants' is ultimately a proxy.

Likewise, all social media converges on one model. Strava, which started out a weirder platform for serious athletes, is now is just an infinity scroll with DMs [0]

I do however think that this is an important insight:

> This isn't to say that most people are tasteless blobs; I think everyone is a connoisseur of something, it's just that for any given individual, that something probably isn't your product.

A lot of these companies probably were founded by people who wanted to cater to connoisseurs, but something about the financials of SaaS companies makes scaling to the ad-maximizing format a kind of destiny.

[0] https://www.nytimes.com/2023/12/05/style/strava-messaging.ht...

red_admiral(10000) 7 days ago [-]

I get your point but I think the browser analogy is wrong.

IE had something like 90% market share back in the day because it was bundled with the OS and cost $0.

Chrome ate everyone's lunch because everyone was using google to search for stuff, and they could advertise their browser on their home page or together with their search results. They also took out ads, in some countries, on billboards, in newspapers and even in cinemas.

I'm sure technical people talking to their families had a small effect (though wouldn't they recommend firefox, because FOSS?), but I think that pales in comparison to google being able to advertise chrome on their search page.

hn_throwaway_99(10000) 7 days ago [-]

What you are describing is explained beautifully in 'The Tyranny of the Marginal User' essay that got a lot of commentary on HN previously, https://news.ycombinator.com/item?id=37509507.

My favorite quote ('Marl' is the hypothetical name for the marginal user):

> Marl's tolerance for user interface complexity is zero. As far as you can tell he only has one working thumb, and the only thing that thumb can do is flick upwards in a repetitive, zombielike scrolling motion.

whiddershins(2769) 7 days ago [-]

this is such a fantastic comment because it makes a charitable attempt to explain how data driven decisions go off the rails.

and it matters because this seems to be an omnipresent phenomenon.

everything everywhere seems driven by this unless someone with decision making power is executing a specific and conscious strategy that pushes back against it.

toss1(1325) 7 days ago [-]

Nice example, but not everything is like automobiles where probably not even one in 1000 people has ever been to a track day let alone actually raced a car, but sporty marques are desired.

A very large portion of people actually cares about what they are searching for, and want the ability to ACTUALLY search and find that, with real parameters, not merely get some not-even-close stuff shoved onto their screen instead. That is NOT the serendipity of browsing the stacks in a great library.

A great example of failure is Amazon. I run a small design & manufacturing business, and years ago started getting pestered by Amazon about 'Amazon Business' trying to supply both office staples and parts to businesses. This was an area that had enormous potential. Yet, they have entirely failed. I've never bought a single item, and it has faded.

Their primary competitor is McMaster-Carr [0] who does it right. Well-defined categories of everything, and highly specific search capabilities, at reasonable but not bargain prices. EVERYTHING you might search for is fully parameterized in every dimension and feature. Min/max/exact, width/depth/height/thread/diameter/material/containerType/etc./etc./etc. appropriate for each type of product. The key is McMaster DOES NOT WASTE MY TIME. I can go there, quickly find what I want or determine that they don't have it, and get on with my day.

The smaller company that does it right is still beating the tech giant a decade later. Same for other similar suppliers who actually have a clue about what their customers really want.

They continue to prevail over tech giants and VC-funded sites BECAUSE THEY ARE NOT STUPID.

It would be nice if the tech/vc crowd would also stop being stupid. They started out not stupid, but they really lose the plot when they think a few extra eyeballs this week will really win in the long run. At least provide two modes, a strict and serious search and their new messy UI. But they are stupid and this will not happen. Enshittification rules the day.

[0] https://www.mcmaster.com/

wouldbecouldbe(10000) 7 days ago [-]

The irony is that he argued for a faster horse and that's what all his providers are doing. TikTok is the faster horse. What he really is asking for is a step out of the paradigm, although he argues for a romantic conservative product instead of an innovative product like Ford.

sheepscreek(10000) 7 days ago [-]

You're way overestimating the effect an enthusiast has. Evangelism only goes far enough to introduce people to the thing. How often someone uses the thing depends entirely on its utility (usefulness).

As long as Netflix was successfully reading the author's mind, they were satisfied with the experience. However, Netflix assumed that they want to keep watching the same content, oblivious to the author's desire to discover something entirely new. Netflix failed to meet the expectations of those seeking something entirely different.

I can understand why Netflix made this change. They've replaced many shows with their own in-house productions. By doing so, they prevent users from searching for specific shows and then realizing that Netflix doesn't have them. If this happens frequently, they risk losing customers.

On the other hand, Spotify doesn't face this issue. Therefore, I'm puzzled by why they've made it more challenging to explore content by categories. (Disclaimer: I don't use Spotify, so my experience is based solely on author's observations.)

rightbyte(10000) 7 days ago [-]

> This isn't to say that most people are tasteless blobs; I think everyone is a connoisseur of something, it's just that for any given individual, that something probably isn't your product.

I think this is a great nuance that is often overlooked when discussing this.

raincole(10000) 7 days ago [-]

> Ferraris, Lamborghini

I think the big difference is that nobody is going to pay $10m for a web service or browser.

hinkley(10000) 7 days ago [-]

Some people have claimed that pure A/B testing is an agent for enshittification, both on a quality and ethical dimension. And I can't see how those people are particularly wrong.

There are systems out there that can do AB/CD testing and those do a better job of finding pairs of changed that have compounding effects.

You cannot A/B test your way from chocolate and peanut butter to cherry and vanilla. So we get to deal with tone deaf companies who feel their analytics are proving that customers either don't know what they want or are lying about what they want. But that's not something A/B testing can prove. It takes more sophisticated experiments than that.

safety1st(10000) 7 days ago [-]

I think when you're a startup, you have to invest in all of these things - you want to hire some experts early on because they'll have insights that help you design a better product, and if your product appeals to experts it will be a PR win. But of course your goal is scale and distribution so you have to respect a certain lowest common denominator as well lest you become too niche.

Once you become a bloated monopolist like the three companies you just mentioned, your distribution strategy is solved in other ways (like, you've done some bundling and some acquisitions, maybe pressured a few companies into exclusivity agreements and are probably breaking some anti-trust law or other but you have lawyers). Then you don't care about the experts, PR or niches anymore, and you serve up slop. When the analytics recommend slop you go with the analytics, when they don't you ignore them.

None of this is to discount your insightful comment, just saying once you're big enough, your strategy is just doing tricky distribution deals, really (a fact no record executive would dispute).

cs702(1217) 7 days ago [-]

My takeaway:

The Nash Equilibrium of streaming UIs is a TikTok experience.

:-(

Suppafly(10000) 7 days ago [-]

>maybe that really is the most profitable way to run a tech business.

That's the issue, it seems like it really is the most profitable way to do things. Everything sucks now because shooting brainrot and advertisements at our eyes and ears is more profitable than actually giving us what we want.

ookblah(10000) 7 days ago [-]

yeah except a lot of those companies almost went bankrupt trying to make those cars for enthusiasts and only for them.

porsche and lambo didn't see the outsized success they have now (financially) until they started pumping out SUVs. hell, the purosangue was made precisely to capitalize on that boring market segment.

i feel there's a little suvivorship bias at play here. i think the important thing is to not forget your enthusiasts perhaps, but a lot of these 'successes' wouldn't even be around were it not for appealing to the greater masses. ofc some market segments fare better and you can build a business around enthusiasts.

amarant(3401) 7 days ago [-]

That's a very keen observation!

It's probably profitable in a lot of cases to follow those metrics, shovelware content is cheaper to produce, and since the median user pays the same subscription fee as the enthusiast, you get better margins producing slop for the uncaring masses.

You need enthusiast businesses owners to produce quality product.

Damn, I never thought of this before, but it explains so much!

NegativeLatency(10000) 7 days ago [-]

Chrome leveraged Google's near monopoly on search to gain users

synergy7(10000) 7 days ago [-]

I think the second paragraph in the parent comment fits really well with mimetic theory and this René Girard quote: 'Man is the creature who does not know what to desire, and he turns to others in order to make up his mind. We desire what others desire because we imitate their desires.' This, however, doesn't mean that the current Netflix solution is the only one possible.

mrandish(10000) 7 days ago [-]

> you will probably add in some user analytics service, and use the insights from that analysis to inform future development. However, that analysis will aggregate its results over all your users, and won't pick out the enthusiasts, who will shape discourse and public opinion about your service. Consequently, your results will be dominated by people who don't really have an opinion, and just take whatever they're given.

This is so spot on. I was a long-time serial entrepreneur who spent a couple decades across three successful startups discovering, shipping and growing new categories of tech products primarily for consumer, prosumer and hobbyists. Then I sold my last startup to a very large F500 silicon valley tech leader and ended up a senior product exec there. While there were a lot of positives like more mature engineering processes, testing and devops as a discipline, the exact issue you describe was a nightmare of product-damaging mistakes I called 'analytics abuse.' In my startups I valued having increasingly robust analytics over the years. In part because they helped increase my overall understanding of usage but mostly because they provoked good questions to explore. That exploration happened naturally because as the 'product guy / founder' I never stopped spending a lot of time with our most passionate, opinionated, thought-leading customers. Over years of iteration I'd learned how to engage deeply and listen carefully to input from these customers. This involved interpreting, filtering and curating the mess of divergent personal preferences and pet feature ideas to tease out the more actionable product signals that could increase broad usage, adoption and passion around our products. I'd then bring those curated signals back to the product teams for evaluation and prioritization.

At BigCo they were diligent about meeting with customers, in fact they had entire processes around it, but their rigorous structures and meeting agendas often got in the way of just directly engaging and actively listening. Worse, the customer meetings the more senior product decision makers actually attended in person were mostly with the highest revenue customers. Junior PMs (and sometimes new grads) were delegated to meeting with the broader base of customers and filing reports. Those reports were then aggregated by ever-helpful program managers into tables of data and, eventually, slides - losing all nuance and any ability to spot an emerging outlier signal and tug on that thread to see where it goes.

I tried to convince everyone that we were missing important customer signals, especially from our smartest, most committed users. Being only one level removed from the CEO and quite credible based on prior success, I was definitely heard and most people agreed there was something being lost but no one could suggest a way to modify what we were doing that could scale across dozens of major products and hundreds of product managers, designers, execs and other stakeholders. In my experience, this general problem is why large companies, even the most well-run, successful ones full of smart people trying their best, end up gradually nerfing the deeper appeal in their own products. Frustratingly, almost every small, single step in that long slide pushes some short-term metric upward but the cumulative effect is the product loses another tiny piece of the soul that made our most evangelistic, thought-leading customers love the product and promote it widely. Ultimately, I ended up constantly arguing we should forego the uplift from some small, easy-to-prove, metric-chasing change to preserve some cumulative whole most people in the org weren't fully convinced even existed. It was exhausting. And there's no fighting the tide of people incentivized on narrow KPIs come bonus season.

I'm sorry to report I never found a solution to this problem, despite my best efforts over several years. I think it's just fundamental. Eventually I just told friends, 'It's a genetic problem that's, sadly, endemic to the breed' (the 'breed' being well-run, very large tech companies with the smartest product people HR can hire at sufficient scale). Even if I was anointed CEO, given the size of the product matrix, I could only have personally driven a handful of products. I do think codifying premises and principles from the CEO level can help but it still gets diluted as the number of products, people and processes scales.

SergeAx(3124) 7 days ago [-]

> Ferraris, Lamborghinis and Porsches are widely understood as desirable cars

... primarily for their price tag. There are a lot of enthusiasts for money in the world, much more than for driving at 200 mph.

> the change was driven by enthusiasts recommending the latest and greatest to their less-technically-inclined friends and family

It was never about recommendations. MSIE and Chrome were (and are, but with Edge Browser instead of MSIE) shoved into consumers' throats by ads, marketing, bundled distribution and outrageous lies.

krisoft(10000) 7 days ago [-]

> very few people have any real interest in driving a car at 200 MPH

I agree with that.

> but Ferraris, Lamborghinis and Porsches are widely understood as desirable cars

I agree with that too.

> because the people who are into cars like those marques.

I think that is not true. I don't care about cars. Never had one. Don't even have a driving licence.

The reason why i think Ferraris, Lamborghinis and Porsches are desirable cars is because they look cool, and they sound cool. They were designed to be like that. If i see one on the street i notice it. I couldn't care less about the opinion of gearheads. If a car would come out looking like my grandpa's skoda, but all the car lovers would love it I wouldn't even hear about it.

It is all about flashyness of the industrial design. And rarity of course.

montagg(10000) 7 days ago [-]

You're talking both about tastemakers and the silent majority vs loud minority.

I promise it is NOT always a good idea to follow the enthusiasts, because they are not at all like everyone else who uses your thing. Following them will skew your decisions—unless they are your entire customer base, so, have at it.

This article imo is complaining about the effect of middle management product owners at large companies. There are two dynamics that both converge on enshittification:

1. These product managers (or product designers) are early in their careers and want to make a splash. They are given lower priority projects but try to break out by making them bigger, better, more non-horse-like. They over-design and over-complicate the solutions as a result, because they don't yet know when the right solution is just a refinement of what's tried and true. They are incentivized to do this because they want to break out of the mold.

2. The managers above them, or a layer or two above depending on company size, are risk AVERSE. They are tasked with delivering results regularly and consistently. If you have the innovation bug or are creative at this layer, you get moved onto projects where this is required, which is not most of them. Overcomplicated is fine sometimes with you but WEIRD is absolutely not okay (the stuff that actually could be innovative), and no one gets fired for following The Metrics.

These two incentives clash to create overcomplicated but functionally poor products that aren't helping anybody out. A healthy skepticism of complication and a healthy skepticism of engagement as the sole metric (or metrics in general) is necessary to make good shit. Sometimes it is actually understanding and using things as an enthusiast would, but you need to bring in an understanding of how the rest of your users are distinctly different from the enthusiasts, too. Using your thing yourself and actually following your own subtler feelings is what produces really useful innovation, slowly and surely over time.

coldtea(1697) 6 days ago [-]

>For any given thing or category of thing, a tiny minority of the human population will be enthusiasts of that thing, but those enthusiasts will have an outsize effect in determining everyone else's taste for that thing.

I think that's a self-dellusion many tech enthusiasts have, that they're somehow trend-setters.

And then the same enthusiasts say for the original iPod 'No wireless. Less space than a Nomad. Lame', and see the masses jump to buy it, and themselves only catch up later.

Or they see the masses never caring for their e.g. desktop Linux, whose mass dominance (not mere 'works for me' or 'have set it up for my elderly parents and they don't even know it's not Windows') would come 'any day now' for the last 30 years...

Trend-setters exist, but they're a different group than the 'tiny minority' of enthusiasts. More like some musician paid to spot Beats headphones, or some cool dude sporting some gadget early on.

>For example, very few people have any real interest in driving a car at 200 MPH, but Ferraris, Lamborghinis and Porsches are widely understood as desirable cars, because the people who are into cars like those marques.

A hell of a lot of people had a real interest in driving a car at 200 MPH, if they could have the chance. And even more admired Ferraris, Lamborghinis and Porsches because of their design and elegance (and price, people aspire to luxury goods, even when they can't afford them), not because some sport-car afficionados said so.

It's the same in other areas: the popular books, or comics, or movies, or music, etc. are rarely if ever what the 'inner' crowd of each niche admires. Most people buy Reacher and such, not Finnegan's Wake.

>So if you develop your product by following your analytics, you'll inevitably converge on something that just shoves content into the faces of an indiscriminating userbase, because that's what the median user of any given service wants.

More likely, if you want to keep and continue increasing your margins, and your stock price, you'll incrementally continue to shit all over your product trying to squeeze ever more money.

Neither the 'enthusiasts'/tech-savvy users NOR the 'median user' wants Netflix to be the shit it has become, or Google search to be so fucked up, or ads and nags on Windows UI, and so on.

They're just given those, and they accept them having no recourse. The moment there's a better recourse, they jump to it (like IE -> Firefox -> Chrome, or BS early search engines -> Altavista -> Google).

greenie_beans(1490) 6 days ago [-]

this reminds me of american politics

mystified5016(10000) 6 days ago [-]

This is exactly the situation unfolding with JetBrains right now. They've lost all touch with their professional enthusiast core and have gone hell-bent on acquiring new users at the cost of alienating a big chunk of their core. I don't think it's going to go well for them, they don't have the chops to compete with Microsoft like they're presently trying to do.

raxxorraxor(10000) 4 days ago [-]

Just for protocol: In your example the peasant rabble recommend Chrome, while the enthusiasts with deep technical knowledge, broad perspective, wisdom, charm and good looks recommended Firefox.

myself248(10000) 7 days ago [-]

I've heard this called the 'Tyranny of the Marginal User'.

To keep the line going up, platforms have to appeal to wider and wider swaths of a population, eventually lapping at the shores of a population that really doesn't care or want this service. But if you can hook them with some dopamine in a 5-second video, or a quest to rediscover some neat thing that they saw two page-loads ago but is now mysteriously gone from the very same list it appeared in, then you've clawed one additional user into your metrics and the VCs give you a treat.

These people don't care about the service and they're the worst users to cater to, but everyone caters to them because they're the only ones left. Hence, TikTokization.

jiggawatts(10000) 6 days ago [-]

Thank you for that term!

I finally know what to call these idiotic trends that I've learned to recognise but couldn't name.

The one that grind my gears the most has been Microsoft breaking decades-old Windows paradigms to cater for Linux-developers-on-Windows, which is a very marginal, even actively disinterested group. All this at the expense of the 99.9% of their loyal user base.

For example, VS Code had the opposite shortcut (literally with the arrow keys going in opposing directions) for 'go back in search history' to every other editor ever made for Windows... but matching the Linux equivalent.

Similarly, they recently broke 'cls' to match the broken(!) behaviour of 'clear' in Linux because of basically just one or two Linux users complaining in a GitHub Issue ticket. Windows users weren't listened to, because they're already users, not potential new users.

boramalper(1795) 6 days ago [-]

The Tyranny of the Marginal User: why consumer software gets worse, not better, over time

https://nothinghuman.substack.com/p/the-tyranny-of-the-margi...

rendaw(3067) 6 days ago [-]

The implication here is appeal to a wider audience _at the expense_ of the existing customer base, right? Otherwise it wouldn't be a tyranny at all.

What I don't get is at some point the marginal user increase for a change has got to be smaller than the number of customers you tick off and lose by changing things.

Is the idea that all services converge on the same N billion people target audience who wants something almost entirely unlike the initial product? I feel like 'marginal' doesn't really capture this nuance if so.

jmull(10000) 7 days ago [-]

If people can't find a sustainable business model around that thing you want, it's just not going to be widely available.

It's hard to say for sure if Netflix could have/should have kept going in the direction they were going in 2012. But they didn't seem to think so.

You can't necessarily count on businesses springing up to satisfy your personal interests and tastes. Especially large-scale businesses, which are always going to gravitate toward the center of large markets. It's great when it happens, but it's basically just luck when it does.

joshuaturner(10000) 7 days ago [-]

The problem is with the definition of "sustainable business model."

Could you maintain a profitable business and continue steady growth? Sure. Could you become a unicorn and IPO within the next 5 years? Unlikely.

OtherShrezzing(10000) 7 days ago [-]

>The idea is to think outside the box and create entirely new markets instead of just new products in existing ones

It's interesting that SV outwardly says it 'wants to create entirely new markets instead of products in existing ones', meanwhile the actual experienced outcome for users is the same experience across multiple markets.

SV is somehow failing on both of its metrics here. It's creating entirely homogeneous products across all existing markets.

ikanreed(10000) 7 days ago [-]

By 'create new markets' they've always meant 'Become useless middlemen by displacing the existing bridge between makers and consumers'

Usually their new bridge is modestly more convenient in some way, but opens the door to the worst kind of enshittification.

patapong(10000) 7 days ago [-]

I am astonished by how much less delightful software has become. Computers used to feel like a magical tool, that would respond instantly and allow me to perform complicated transformations at the press of a button.

Now, I feel like I am fighting against software most of the time, having to compete with someones vision for how I should be using their program, which is likely aimed at the least technically sophisticated user. Nothing wrong with allowing such users to use the software, but please retain the functionality and speed for the power users!

sureIy(10000) 7 days ago [-]

Is this about software or is it about you?

I loved my computer when I was a kid, now I only see flaws. I don't think software was flawless at the time, it's just that I became very keenly aware of its current issues because this is my field.

zonkerdonker(10000) 7 days ago [-]

How much has been lost to the altar of shareholder value? And how much gained?

It will be interesting to see how these first decades of the millennium will be remembered.

ivanjermakov(10000) 7 days ago [-]

Delightful software is still there and still being made. It's the industry that targets average Joe, who doesn't care about technology.

hedora(3373) 7 days ago [-]

I don't understand the draw of Spotify. There's no network effect that I can see (even if it is built into your car, the other services have good experiences in your car too), everyone complains about it, they pay less per stream to artists than their competitors, and their library isn't any bigger than the competition. (It was smaller the last time I compared.)

On top of that, their recommendation algorithms are (were?) terrible compared to the other services (since then, they added more payola), and they're actively trying to burn down the last open corner of the internet (podcasts).

Also, the pricing is comparable, even if the other options feel more premium.

What am I missing?

dharmab(10000) 7 days ago [-]

Spotify has a free tier. Apple Music and YouTube Music do not. Young people start on the free tier and don't want ti have to move their libraries/playlists. And young people share Spotify playlists, not Apple or Youtube playlists, because they know their friends have Spotify.

skerit(10000) 7 days ago [-]

I unfortunately pay for Spotify.

I also pay for Youtube premium, but I can't even switch to that because their music player is even worse than Spotify.

I really miss the good old days of music players that were _packed_ with features. The players of current streaming services are so basic. And as long as I can't find a replacement that fits my needs I don't really want to bother switching.

tuesdaynight(10000) 7 days ago [-]

I have used Tidal, Deezer and Amazon Music in the past, but I've always went back to Spotify. I prefer the UX, but not only that, the recommendations are WAY better for me than other streaming services. However, my music taste is very eclectic, so maybe that helps a lot to recommend something within my taste.

metabagel(10000) 7 days ago [-]

My local music format public radio station provides song links to: Spotify, iTunes, and Amazon.

https://www.kcrw.com/playlists

dijit(2016) 7 days ago [-]

I don't really know how to form this into words on a short-form text medium like this. So please read charitably.

I'm by no means a conspiracy theorist, however as I've risen the ranks of my chosen technical field I see more and more that what George Carlin said was really poignant. 'You don't need a formal conspiracy when incentives align'[0].

And incentives align really easily.

Every company has some form of market analysis going on. CEO's will be invited to rub shoulders with the same groups of people. Conglomerates will have information sharing of some kind across all subsidiaries.

Everyone is acting independently, but towards the same goal. It's actually quite shocking to have been part of (and hearing about) meetings between CEOs where 'new information from CMK (consumer market knowledge) indicates that smaller dev teams all onsite are the best way to do things' - and everyone gets the same 'information' at the same time, and thus the entire market moves in that direction, as if it was a fixed horse race and they were acting on a secret tip they heard from their uncle...

I'm a bit counter-culture in my missive, so take what I'm saying with a grain of salt, but a little nudge across a limited population seems to be enough - and it exists.

Controversially: Blackrocks DEI initiatives are perfect public example of what I mean, no matter if you are pro or con, you can't deny the impact.

[0]: https://youtube.com/watch?v=XE3sYUJASLY

miltonlost(10000) 7 days ago [-]

All the shitty CEOs start doing the same shit at the same time, because most CEOs are not exceptional workers or thinkers or innovators. They are simply the (in)human conduits doing as much as possible to siphon money from their users to the shareholders and Board Member class. They follow the trends that their consulting firms tell them to follow (the same consultants that work at multiple companies within the industry), which is why we get massive hiring at the same time, massive layoffs at the same time, RTO at the same time. The US has allowed collusion and market coordination via 3rd parties (so we have, e.g., landlords sharing rental prices with a 3rd party consultant, who then combines this data and illegally collude to set prices but with a Computer instead of Bob). Modern-day capitalism has said 'monopolies and huge conglomerates are good because they're EfFiCiEnT!!!' (though what kind of efficiency and to whom the efficiency gain are given is entirely ignored -- the efficiency to max profit is the only one that matters).

> It's actually quite shocking to have been part of (and hearing about) meetings between CEOs where 'new information from CMK (consumer market knowledge) indicates that smaller dev teams all onsite are the best way to do things' - and everyone gets the same 'information' at the same time, and thus the entire market moves in that direction, as if it was a fixed horse race and they were acting on a secret tip they heard from their uncle...

The same thing too when companies hire consultants to look at the 'market wage' and then set salaries based on what the consultant said. Every worker at the same 'market wage' with no incentives to be above that.

nthingtohide(10000) 7 days ago [-]

> And incentives align really easily.

Today incentives align more easily. All these CEOs are in the same whatsapp group. That's how we got the RTO mandates from all CEOs at the same time. There was story here a year or two ago.

fsflover(2571) 6 days ago [-]

This is a well-know enshittification, which is going on for a long time already: https://pluralistic.net/2024/08/17/hack-the-planet/

yakkomajuri(1179) 7 days ago [-]

I feel like this with my (current) bank of choice here in Brazil. They were one of the first to focus on being digital-first and allowed opening an account without going to a branch etc. They grew fast and became one of the largest banks in the country and generally considered pretty solid. I've been banking there for like a decade.

Now they've decided to be what they call a 'SuperApp'. This goddamn super app has a Twitter-like thing inside of it, shopping, and literally dozens of other products. Some core banking features are now hard to find but more importantly I had quite a few issues with investments as well. People who work there also tell me about messy problems on the financial services bits. It's very clear to me that in trying to become everything, they've deprioritized the fundamental products they offer, which are those related to banking. I want to store money, send and receive it, invest it, and have access to credit. But the experience of using those features has become significantly worse as new verticals sprouted up.

jgilias(3365) 7 days ago [-]

That's because WeChat has really taken off in China. So there are companies in different markets trying to replicate that. And, well, from business perspective it does make sense. If you manage to pull it off, the reward is massive.

hcarvalhoalves(3569) 7 days ago [-]

I believe the 'Peter principle' [1] also holds for companies. A company grows until it eventually outlives its mission and loses focus.

[1] https://en.wikipedia.org/wiki/Peter_principle

rambambram(10000) 7 days ago [-]

I have the same with my banking app here in The Netherlands. I don't know if they try to be a super app, but since a year or two they put all kinds of annoying ads inside their app and unnecessary notifications on top of my account overview. Just show me the numbers, I pay for your service.

alister(3318) 6 days ago [-]

> I feel like this with my (current) bank of choice here in Brazil. Now they've decided to be what they call a 'SuperApp'.

I'm curious to know the name of that digital bank.

gmuslera(10000) 7 days ago [-]

Doesn't matter what you want anymore. You are not the client, but the product. They are the ones getting faster horses.

cratermoon(344) 7 days ago [-]

> They are the ones getting faster horses.

To a point, until stage 3 enshittification hits, and the business claws back all the value.

bluGill(10000) 7 days ago [-]

Until I finally get fed up and leave. There is value in my sharing pictures of my kids with distance friends and seeing pictures of their kids - but Facebook has got so bad at that I finally gave up logging in and not I'm not a product that exists for them. And in turn because I'm not there facebook is less valueable for my friends and so they are more likely to leave in the future.

The only question are people like me outliers that can be ignored - there will always be a few people you can't get. However I could be a sign of the end.

Freak_NL(3289) 7 days ago [-]

One upside: by degrading the experience1 Netflix did make it a lot easier to simply stop your subscription and hop over to another streaming service for a few months.

A very interesting development: in the Netherlands KPN, one of the largest telcos, introduced a feature where any household with several of their products in use (e.g., two cellphones and fiber internet) could choose a free 'gift'2. The gift is a choice from a bunch of subscriptions, including Netflix, Disney+, and HBO Max. And you get to switch monthly if you want to. So we ditched our own Netflix subscription and started watching Disney+ for now. Perhaps we'll switch in a few months.

These services probably realise that their customers are made up of 'hoppers', and 'stackers' (people who take out multiple subscriptions to streaming services at once). I wonder what the distribution for each service is.

1: In part forced upon them by the content owners waking up and wanting to set up their own exclusive shops of course, and in part because of, well, greed (the UI suckiness).

2: The trade-off is obviously that this stimulates consumers to consolidate their telco products with them. In my case this was already so, so for me this is just a small incentive to stay with them (i.e., it saves me €9 a month).

Cthulhu_(3510) 7 days ago [-]

I'm surprised that the services don't seem to have updated for that reality yet; it feels like there's only one or two 'hits' on each service per year. They did already adapt a bit by no longer releasing a whole season in one go, so you need at least three months of subscription for a 10 episode weekly series.

But what they need is rolling releases across the whole year, so that once one production is 'done', the next one rolls around.

(maybe they already do, I don't know, I'm just thinking of Stranger Things which seems to be Netflix' main seller at the moment)

vanschelven(3228) 7 days ago [-]

The title is a great hook, but it doesn't really cover what's being described... which is the TikTokification of everything and (implicitly) that there's some bait & switch going on.

nthingtohide(10000) 7 days ago [-]

Earlier people used to spend 2-3 hrs watching and absorbing a single movie. Now people spend 5 hrs scrolling tiktok. So in a sense time spent on content has actually increased. People don't need filler and lengthy buildups. People have been exposed to so much culture they can almost predict the general plotline so no need to spend time on that. Give me the plot twist or the drop (in case of spotify) with short relevant context. I remember Balaji saying something to this effect. He said don't give me filler content, just give me 'fixed point' content which doesn't change after successively summarization and pruning.

bloak(3310) 7 days ago [-]

This sounds like an economic problem with no obvious solution: network effects => monopoly => 'optimising' for typical user. Where there isn't a monopoly (or anything close to a monopoly) you find different firms specialising in different ways. For example, small independent restaurants survive by being distinctive, not by trying to imitate McDonald's.

YouTube and LinkedIn are practically monopolies. Netflix isn't a monopoly in the same way but you usually don't have a choice of streaming services for watching a particular film or series so it's different from being able to buy the same cheese or the same wine from any of several different supermarkets.

JKCalhoun(3408) 7 days ago [-]

Yeah, more like Netflix (and we might as well add Amazon here) became popular because of 'the long tail'. Once, I could easily find 1930's classics like 'Stella Dallas' on Netflix (and early Ultravox! on Amazon when they would have to be ordered from brick and mortar music stores at the time).

For some reason (perhaps because it costs money to keep a large catalog?) Netflix retracted the long tail while Amazon at least kept theirs unfurled.

cratermoon(344) 7 days ago [-]

> no obvious solution: network effects => monopoly => 'optimising' for typical user.

ahem. We have a solution for the monopoly part. We've had it since the 19th century. We just stopped enforcing it in the 70s and 80s when the Chicago School convinced everyone that as long as judge Robert Bork's 'consumer welfare' can be trotted out to prove that the 'free market' is working and prices are low.

FinnLobsien(10000) 7 days ago [-]

I also dislike the TikTokification of everything, but I also know that all of us on this platform are wrong in the sense that we're not the user being designed for.

Consumer apps at massive scale like TikTok and Netflix don't design for nerds like us, they design for the average person. Actually, they design for the average behavior of the average person.

And most people on this planet are more or less happy with whatever they're presented with because they don't care about technology.

And when you control what's presented to people, not they (and they don't care), you can push them to consume what you want them to consume.

I heard a YC group partner once that he's worked with a ton of delivery apps. Many of them start out as differentiated apps for ordering from the best 'hole in the wall' places or the app for authentic foreign cuisines, only to discover that the best growth hack is getting McDonald's on the app, because that'll be your top seller, instantly.

Most people just do the default thing everyone does—and we're probably all like that in one aspect or another of our lives, and that's who many experiences are designed for.

bombcar(3444) 7 days ago [-]

There's a lot of money to be made in letting people order takeout from McDonalds while not feeling like the kind of person who orders takeout from McDonald's.

mppm(10000) 7 days ago [-]

> And most people on this planet are more or less happy with whatever they're presented with because they don't care about technology.

I think this is a debatable statement. It could be true, but I am increasingly convinced that enshittification, TikTokification, AIfication, etc. is proceeding despite what the average person wants. Average does not mean gaping, uninspired idiot. I think people in general do notice that everything is broken, short-lived, watered down and ad-ridden. But what to do? When every company does it, voting with your wallet becomes practically impossible.

bluGill(10000) 7 days ago [-]

Which is a real problem for the rare person (ie me) who doesn't like McDonalds. Go to a new city and I get recommendations of McDonalds, and the dozen 'you won't believe we are not McDonalds' - never mind that I don't like burgers, that is about all I can find when looking for a meal.

klabb3(10000) 7 days ago [-]

Overwhelmingly, products are designed to maximize total recurring user interaction, aka engagement or attention grabbing. This is the proxy for ad revenue, the most popular business model (even if Netflix is different). Look at Quora, LinkedIn and even SO, which essentially degraded into content farms for these reasons, largely downstream of the Google search funnel.

But engagement maximization looks the same everywhere – it's communicating with the amygdala of the user, not their consciousness. And in a way, everyone's amygdala is kind of the same and generic (sugar foods, violence, rage bait, boobs, chock value etc). Products that are largely designed for higher consciousness are more varied, such as most books. But those drive less engagement.

The amygdala wants more of the same, and the prefrontal cortex seems to want variation. My view is that you can't have the chocolate muffins and raw carrots on the same plate, or a bookshelf with both Dostoevsky and Playboy magazines. You have to compartmentalize to protect yourself from your own amygdala. Same goes for media. Even well meaning product managers will be completely fooled if they simply follow the metrics.

conductr(10000) 7 days ago [-]

> Actually, they design for the average behavior of the average person.

They're generally designed for engagement. Nobody is particularly asking for this type of experience it's just that Tiktok has discovered the most addictive - eh hum, I mean engaging - experience thus far. So they're being copied.

Netflix is a little different though as if people open the app and always see the same top titles listed due to it being an alphabetical index, then they quickly think nothing new is ever there. Or, it's too hard to find. So they're tricking people into thinking there's a bunch of fresh/good content. There's also a cultural phenomenon where everyone discusses 'what shows have you been watching lately?' so the Trending aspects of their recommendations is to help people get on board with the trend; and, to push momentum and create the trend too obviously.

techpineapple(10000) 7 days ago [-]

I think I understand the economics here, but it bugs me there aren't more slow-growth self-funded places to fill in these niches.

pal9000i(10000) 7 days ago [-]

Companies work on averages, statistically what retains, engages the users.

But Spotify is far better now than it was 10 years ago. I still have playlists, I can still instantly find any song I want. The added bonus is the discovery engine. So the UX now is a superset of what it was before.

dogleash(3422) 7 days ago [-]

Oh come on. I have this thing open all day when I'm working, you can't bullshit me like that. It's a not good UI, its serviceable.

It's not good by any conceivable metric other than those they have internally decided represent business goals. If you want to have a tautological argument that makes it good, because those goals are the only goals that matter. That's a boring response to an article about how business incentives have turned the UI into trash.

FFS the Play button frequently breaks requiring a refresh. And as much as I appreciate the inevitable response that I'm holding it wrong, how is that my problem?

raldi(434) 7 days ago [-]

People overuse the original quote as an excuse to never listen to customers, but the real wisdom is to ask why they're asking for a faster horse (to get around quicker) and see if you can think of a better way to meet that goal.

9rx(10000) 7 days ago [-]

Overused and misattributed. What Ford actually said was: "If there is any one secret of success, it lies in the ability to get the other person's point of view and see things from that person's angle as well as from your own"

billmalarky(10000) 7 days ago [-]

^ this guy knows Jobs To Be Done theory ;)

For those who don't, reading 'Competing Against Luck' by Clayton Christensen will dramatically improve your ability to create successful products/services.

jerf(3620) 7 days ago [-]

Sometimes you just have to do it yourself. I'm lucky enough to have had a CD collection before music streaming is a thing. Now my phone has enough capacity (since I still use phones that can take SD cards) to casually carry my entire collection around. I can play it in any order I want.

I've even still got a streaming service I can do exploring on, since YouTube bundles one with Premium. I find it's a good thing I have my own collection though since it tracks my interests poorly.

I've gotten back into buying my own video too. I don't consume a ton of video and I dropped Netflix streaming a while ago because the delta between me marking something for the queue and actually getting to it was becoming routinely larger than the amount of time Netflix would still have the thing I wanted to see.

The problem is, I don't even see the second derivative on this trend turning, let alone the first. Metric-driven development, by its very nature, will take away every knob from you that you could conceivably use to drive their metrics lower. I think that's a reasonable approximation of the root cause of the reality observed in the OP. If you happen to agree with their metrics then hey, good times for you, but the odds of that are low since you're probably not looking to maximize the monetization they can extract from you as priority one.

Therefore, the only option is, get off metric-driven-development platforms. There is no alternative and will be even less of one as time goes on.

I suspect in the very long run this metric-driven development will eventually die off as all consumers come around to this realization one way or another and start turning to other alternatives, but it can easily be 5-10 years before there's enough of us for those 'alternatives' to be able to survive in the market. Fortunately, MP3 players haven't gone anywhere. (Although it takes some searching to find ones that aren't also trying to match the streaming services and stick to old-school 'play what you ask for and not anything else, unless you ask for shuffling or randomness explicitly'.)

MortyWaves(10000) 7 days ago [-]

Where do you buy videos from? Do you mean new films and shows? How, I thought practically all of it is locked down DRM only streaming? Or do you mean DVD/BluRay?

WorldPeas(3604) 7 days ago [-]

> I still use phones that use SD cards

I can't tell you how much I miss removable storage

bigstrat2003(10000) 7 days ago [-]

This is the way. If I care about watching something in the future, I buy the Blu-ray and rip it. I already have basically all the music I could ever want in mp3 format. Plex (or Jellyfin if you prefer that) provides a pleasant UI, and I don't need those services any more.

rambambram(10000) 7 days ago [-]

This. Masterfully written down, by the way. I subscribed to your blog through RSS, because I also want to do 'the algorithm' myself. Interesting story about the intersection of law and tech you have on your blog!

MortyWaves(10000) 7 days ago [-]

That Netflix screenshot looks fucking great: clear, usable, no distractions, more than 5 items on a page. What a mess 'modern' UX/UI has turned into.

WorldPeas(3604) 7 days ago [-]

truly the mcMaster-Carr of video

bonoboTP(10000) 3 days ago [-]

I think there's psychological research that showing too many options leads to less engagement, supposedly for fear of making the wrong choice. If you give fewer options, people are more confident they picked something good.

Wowfunhappy(3384) 7 days ago [-]

All of the examples listed have something in common: they are services for accessing content you don't own. So it is in the provider's interest to find ways to satisfy you with less and/or cheaper content.

The Netflix changes aren't attempts to make their product better. They are attempts to save money by obscuring the amount and/or quality of available content.

By contrast, if you buy BluRays from one company and BluRay players from another company, everyones incentives are better aligned.

phh(10000) 7 days ago [-]

> It is therefore in the provider's interest to make you satisfied with less and/or cheaper content.

After getting annoyed by their interface that was showing 80% of content I have already seen, I've come to a realization:

Their incentive is not even to make me watch crap. No! Their best outcome for them is for me to watch nothing and still pay.

Showing me old shows gives me the warm feelings and make me associate them with Netflix, making me keep the subscription even

Hypnodrones are corporate dreams

ryandrake(10000) 7 days ago [-]

> It is therefore in the provider's interest to make you satisfied with less and/or cheaper content

If I was a conspiracy theorist, I'd think that all these 'content companies' are colluding in a mass 'Taste Removal' campaign, deliberately getting users used to bland, vanilla, generic 'content' so they can one day just shove AI slop at us all day and only people who were alive in the 90s would remember when movies and TV were great. The rest happily will watch Ow, My Balls and ads for Carl's Jr.

lern_too_spel(10000) 7 days ago [-]

And the Blurays show ads for the first company's other products.

dswalter(10000) 7 days ago [-]

There's a fundamental reality that shapes both Netflix and Spotify's trajectory: content licensing. 2012 Netflix had access to vastly more of everyone else's library, so it was closer to an indexed search of what was available that one could watch and then getting that video onto your screen. Over time, other companies understood that they were underpricing their content and Netflix was reaping the benefits. Once external forces adjusted, the TV/film bidding wars began. Today, netflix doesn't have nearly as much content as they used to have.

That risk (losing all content and facing extinction) is what pushed Netflix in the direction of being a content-producer, rather than a content aggregator. I agree with everyone's points on the influence of the median user in diluting the quality of the content Netflix produces, but that's not the only forced that pushed us here. Spotify faced a similar crossroads and decided to broaden beyond music once they started losing bidding wars for licensing.

Being a faster horse wasn't an option available to either Netflix or Spotify; there is no path for a 'better 2012 version of netflix or spotify' in 2025. They each had to change species or die, and they chose to keep living.

esperent(10000) 7 days ago [-]

> Spotify faced a similar crossroads and decided to broaden beyond music once they started losing bidding wars for licensing.

I wasn't aware that Spotify lacked much in the way of mainstream western music.

Are they having licensing issues?

al_borland(10000) 7 days ago [-]

Apple Music still offers library management, with their entire catalog to choose from. They try to play all sides, with algorithmic playback, radio, add to library, and playlists. Adding to library and playlists do seem to be core features, but I'm curious how many people put in the effort when it's not explicitly required.

titzer(10000) 7 days ago [-]

So glad I collect physical media of all the good stuff.

crote(10000) 7 days ago [-]

> They each had to change species or die, and they chose to keep living.

Did they, though? 2025 Netflix is extremely close to having a worse UX than piracy, and it's already far more expensive. Are people going to pay a fortune for Netflix when their handy nephew can hook them up to his far superior Jellyfin instance for a sixpack of beer?

It's a tragedy of the commons, really. The whole value is in having a complete catalogue available for the casual viewer, and making $10-$20 from someone wanting to watch a random decade-old movie twice a month or so. Break up that catalogue into twenty different services each charging $15, and that same casual viewer isn't going to subscribe to a single one of them.

If the streaming industry doesn't get its shit together they are either going to lose viewers to piracy, or to a completely different medium.

acyou(10000) 7 days ago [-]

That quote is pretty dumb, I see it quoted a lot. It's arrogant, assuming, demeaning, elitist. And I don't think it's true. Who would say 'a faster horse'? It doesn't make any sense, because people know/knew that horses are what they are.

A better, more constructive approach is to proactively identify how emerging technology can fit people's needs. And for sure, you need to verify that there is an actual need for what you are building, and then go build it.

Netflix and TikTok are not the 'faster horse' here. Generative AI is clearly the 'faster horse'. It's a disruptive technology that will change the entire structure of society, much like the internal combustion engine. And no one said they wanted that either, that doesn't make people dumb, or user surveys pointless. Who is currently saying they want a 'faster computer'?

Henry Ford saying that would probably be like hearing Sam Altman say 'If I had asked people what they wanted, they would have said a faster computer'. It's not true, it doesn't match reality.

solumunus(10000) 7 days ago [-]

I think you're taking things a little too literally.

hooverd(10000) 7 days ago [-]

I wonder if like cars, LLMs will be as equally destructive to our social fabric.

JodieBenitez(10000) 7 days ago [-]

> Who is currently saying they want a 'faster computer'?

well... I definitely want more performance per watt. And I stress 'performance', because more MIPS are useless if wasted.

SirFatty(10000) 7 days ago [-]

You should probably read the article... the author did not say that Netflix and TikTok are the faster horse, the opposite actually. You seem really focused on the quote for some reason.

ehsankia(10000) 7 days ago [-]

As bad as Netflix is, honestly the UX is the best amongst major streaming services.

For me, the cardinal sin of a streaming service is, if I open your service every single day and watch the next episode of ONE show, then the next time I open your service, PLEASE HAVE MY SHOW AT THE TOP OF THE HOME PAGE.

This is such a simple and obvious user journey, but the majority of streaming services, on purpose or not, fuck it up. The number of times I've opened a streaming service, scroll through the entire home page with the shitty tv remote, then had to type the name of my show manually in search. Makes me want to unsubscribe right then and there and just use Plex instead.

J_Shelby_J(10000) 7 days ago [-]

They want you to start a new show so you have something in the queue when you finish the show you turned the tv on to watch.

jedberg(3314) 7 days ago [-]

OPs specific complaints about Netflix and Spotify are mostly a result of their success. Back in 2012 Netflix had a lot of movies because Hollywood didn't value streaming and were willing to sell the streaming right for most of their content for tiny amounts of money. And there were no other streamer.

Spotify is in a similar boat. The music companies didn't value streaming and were willing to sell their entire catalog to the one player in the ecosystem (or in the case of music, to everyone for the same low price)

But also, personalization actually drives a ton of revenue. When I worked at Netflix, when the recommendation system went down and we defaulted to curated lists, streaming would drop 20%. And that was in 2013. I can only imagine what the drop is today when that system goes down.

Personalization drives a ton of revenue, and TikTok is the best at it, so it's no surprise that OP sees everything 'going to TikTok'

tobr(421) 7 days ago [-]

Weren't the big record labels terrified that streaming would cannibalize CD sales? I think it was a pretty huge thing that Spotify got them onboard at all. I'm not sure how much that matters to your overall point but saying they "didn't value streaming" doesn't seem quite right with how I remember the discussion at the time - they were afraid of it because they could see its value, and how that might disrupt their lucrative plastic disc business.

greatgib(3476) 7 days ago [-]

I just hate so so so much the Netflix of nowadays, they manage to keep me because of a few good movies/series and releasing new seasons of shows that I watched previously.

But otherwise, this interface is so much bat shit! Incredible to me that anyone can pretend to Product manager of something so badly designed and unergonomic.

The most important thing is 'continue watching', that should be almost the first line, but no it is randomly spread at different levels. Some times you can't even find it, sometimes it lacks the movie that you were just watching and that reappears later.

It is very hard to find something to watch because they still show you the hundred of things that you saw already, or that old crappy movie that anyone saw ten times on tv, or things that you are not interested anyway.

And there is absolutely no way to filter to not be a frustrating experience.

In addition you have the asshole dark patterns like showing multiple times the same movie/series in a given category when you scroll.

My hypothesis is that they used to have a lot of great content, so that was their strength, and no they have very little valuable and recent content and as they don't want to be upfront about that, they use a lot of dark patterns to confuse you to still give the impression that they have an impressive catalog.

But that has the consequence of the user being frustrated, impossible to find something proper to watch, but still having to spend hours browsing in the app as you might think that the good thing exist but it is just you that can't find it.

peeters(10000) 7 days ago [-]

It feels to me like they poached some high-level product executive from an intrusive ad company, trained in the art of dark patterns, and pointed them at their paying customers. It's a truly offensive way of looking at your user base, as solely engagement metrics to be optimized. It's what happens when an entire business is built around gamifying one KPI.

kilroy123(3630) 7 days ago [-]

Same. I gave up on netflix and just use Plex. Usually, I use this app on Android TV to play my plex library https://www.quasitv.app.

Sooo much better.

boznz(3573) 7 days ago [-]

I spent my last 3 months using Amazon Prime on my smart TV, opening the app, scrolling for 15 minutes through the same stuff as last time, turning off the TV and reading a book. I cancelled and now have 15 extra minutes reading time, though I do miss the cheap delivery it got me.

marcellus23(10000) 7 days ago [-]

> The most important thing is 'continue watching', that should be almost the first line, but no it is randomly spread at different levels

This seems to be common among the streaming services. I can't imagine any reason other than they want to force people to see their other content.

3minus1(10000) 7 days ago [-]

I really don't think bad Product Manager's is a good explanation for the UI. Any big company like Netflix is going to heavily A/B test any and every change to the UI. They will only ever add things that boost metrics like engagement. You may not like the UI; it may annoy you, but you should have some appreciation for the fact that they are using sophisticated techniques to optimize for what they care about.

tyre(3677) 6 days ago [-]

Some people have turned to downloading qBittorrent[0] and use 1337x.to or thepiratebay.org (to start).

At some point these apps are so user hostile that it's simply isn't worth subscribing to. Their margins on content are so low on an individual—effectively zero since a flat fee means ~infinite content—that the effect on their business is incredibly small. Especially for people who have subscribed for months but don't watch consistently.

For movies that are 5+ years old, some would say that the companies have made the vast majority of what they will and copyright is so out of control, bought by those same companies, that it's not bad faith to counter-balance it.

Not sure. These are arguments.

[0]: https://www.qbittorrent.org/

freedomben(1381) 7 days ago [-]

This gets especially interesting when you consider that horses are still better than motorized vehicles at accessing certain terrain. For example, a horse can trivially climb a steep hill in the wilderness with no road, or ford a river with no nearby bridges, that even rugged ATVs can't really handle. The vast majority of transportation needs are better served by motorized vehicles, but horses still have some unique advantages and in some areas are unbeatable. Now that said, some of the freaky AI robots with legs might finally render horses inferior, but those are pretty inaccessible to most people.

wpm(10000) 6 days ago [-]

I can't wait for the day my hiking trails aren't festooned with piles of horseshit.





Historical Discussions: Harvard's response to federal government letter demanding changes (April 14, 2025: 1367 points)

(1367) Harvard's response to federal government letter demanding changes

1367 points 4 days ago by impish9208 in 195th position

www.harvard.edu | Estimated reading time – 4 minutes | comments | anchor

Dear Members of the Harvard Community, For three-quarters of a century, the federal government has awarded grants and contracts to Harvard and other universities to help pay for work that, along with investments by the universities themselves, has led to groundbreaking innovations across a wide range of medical, engineering, and scientific fields. These innovations have made countless people in our country and throughout the world healthier and safer. In recent weeks, the federal government has threatened its partnerships with several universities, including Harvard, over accusations of antisemitism on our campuses. These partnerships are among the most productive and beneficial in American history. New frontiers beckon us with the prospect of life-changing advances—from treatments for diseases such as Alzheimer's, Parkinson's, and diabetes, to breakthroughs in artificial intelligence, quantum science and engineering, and numerous other areas of possibility. For the government to retreat from these partnerships now risks not only the health and well-being of millions of individuals but also the economic security and vitality of our nation. Late Friday night, the administration issued an updated and expanded list of demands, warning that Harvard must comply if we intend to "maintain [our] financial relationship with the federal government." It makes clear that the intention is not to work with us to address antisemitism in a cooperative and constructive manner. Although some of the demands outlined by the government are aimed at combating antisemitism, the majority represent direct governmental regulation of the "intellectual conditions" at Harvard. I encourage you to read the letter to gain a fuller understanding of the unprecedented demands being made by the federal government to control the Harvard community. They include requirements to "audit" the viewpoints of our student body, faculty, staff, and to "reduc[e] the power" of certain students, faculty, and administrators targeted because of their ideological views. We have informed the administration through our legal counsel that we will not accept their proposed agreement. The University will not surrender its independence or relinquish its constitutional rights. The administration's prescription goes beyond the power of the federal government. It violates Harvard's First Amendment rights and exceeds the statutory limits of the government's authority under Title VI. And it threatens our values as a private institution devoted to the pursuit, production, and dissemination of knowledge. No government—regardless of which party is in power—should dictate what private universities can teach, whom they can admit and hire, and which areas of study and inquiry they can pursue. Our motto—Veritas, or truth—guides us as we navigate the challenging path ahead. Seeking truth is a journey without end. It requires us to be open to new information and different perspectives, to subject our beliefs to ongoing scrutiny, and to be ready to change our minds. It compels us to take up the difficult work of acknowledging our flaws so that we might realize the full promise of the University, especially when that promise is threatened. We have made it abundantly clear that we do not take lightly our moral duty to fight antisemitism. Over the past fifteen months, we have taken many steps to address antisemitism on our campus. We plan to do much more. As we defend Harvard, we will continue to:

  • nurture a thriving culture of open inquiry on our campus; develop the tools, skills, and practices needed to engage constructively with one another; and broaden the intellectual and viewpoint diversity within our community;
  • affirm the rights and responsibilities we share; respect free speech and dissent while also ensuring that protest occurs in a time, place, and manner that does not interfere with teaching, learning, and research; and enhance the consistency and fairness of disciplinary processes; and
  • work together to find ways, consistent with law, to foster and support a vibrant community that exemplifies, respects, and embraces difference. As we do, we will also continue to comply with Students For Fair Admissions v. Harvard, which ruled that Title VI of the Civil Rights Act makes it unlawful for universities to make decisions "on the basis of race."

These ends will not be achieved by assertions of power, unmoored from the law, to control teaching and learning at Harvard and to dictate how we operate. The work of addressing our shortcomings, fulfilling our commitments, and embodying our values is ours to define and undertake as a community. Freedom of thought and inquiry, along with the government's longstanding commitment to respect and protect it, has enabled universities to contribute in vital ways to a free society and to healthier, more prosperous lives for people everywhere. All of us share a stake in safeguarding that freedom. We proceed now, as always, with the conviction that the fearless and unfettered pursuit of truth liberates humanity—and with faith in the enduring promise that America's colleges and universities hold for our country and our world. Sincerely, Alan M. Garber




All Comments: [-] | anchor

rocqua(10000) 4 days ago [-]

Harvard just earned some reputation with me. It was already a place with great research. But now, it is also in institution with actual moral fiber.

apercu(10000) 4 days ago [-]

> actual moral fiber.

Maybe? Or maybe they realize that they will lose all future credibility with students, government and NGO's if they bow to the conservative & Christian right?

There are two outcomes for the the current American government situation - a slide in to authoritarianism (it's right there in Project 2025), or these wackjobs get voted out because they are destroying global financial stability.

If it's the former, Harvard eventually has to cave because literal Nazi's.

If it's the latter, Harvard is screwed if they capitulate.

oehtXRwMkIs(10000) 4 days ago [-]

I don't know, is it moral to give legitimacy and a platform to someone like J. Mark Ramseyer (https://en.wikipedia.org/wiki/J._Mark_Ramseyer)? Less clear example would be keeping around Roland Fryer.

palmotea(10000) 4 days ago [-]

> Harvard just earned some reputation with me. It was already a place with great research. But now, it is also in institution with actual moral fiber.

I'm not so sure. The Harvard endowment is huge. I might not be so much 'moral fiber' as having enough fuck you money that risks don't matter as much as they do to others.

hn_throwaway_99(10000) 3 days ago [-]

While I agree with this, if you read the letter of demands from the administration I don't think Harvard had any choice. I think the letter was much more egregious than what the Columbia demands were (at least from what I read about the Colombia demands). I think if Harvard had acquiesed it wouldn't have much reason to exist anymore, and I say this as a Harvard alum who took plenty of issue with the direction of the university in recent years.

In contrast, most of the demands I read for Columbia, except for the one about putting the Middle Eastern studies department under some sort of 'conservatorship', seemed relatively reasonable to me if they hadn't come from the barrel of a gun and from an administration who has clearly defined any criticism of Israel and any support for Palestinians as anti-Semitism.

areoform(1518) 4 days ago [-]

If you've read history, this rhymes with certain acts that have happened before under certain regimes. Under a non-authoritarian Government, this type of showboating can be dismissed, but when habeas corpus and the right to due process is suspended — such actions take on a very different cast indeed.

It's good that Harvard is fighting this. The more people accede, the more they will accelerate down a path where there is no coming back from.

ghusto(10000) 4 days ago [-]

The point of no return is Trump getting a third term. The parallels are strong there.

I was just thinking this morning that we very much needed the USA's help fighting Nazi Germany, but who will we turn to when we're fighting fascists coming from the East _and_ West? (Russia and the USA)

repeekad(10000) 4 days ago [-]

$9 billion dollars from the federal government to Harvard equates to nearly $30 per American, that is an ignorant amount of money for a single academic institution, surely the world isn't so black and white that we can have a conversation about how much money is leaking out of our tax dollars without it always coming back to 'fascism'?

outer_web(10000) 4 days ago [-]

Habeas corpus - still in effect unless you're already in El Salvador.

andrepd(3074) 4 days ago [-]

It was very depressing (if financially understandable) to see other institutions immediately caving in.

FloorEgg(10000) 4 days ago [-]

Did you read the letter sent from the government to Harvard?

ren_engineer(3241) 4 days ago [-]

these types of moves wouldn't be possible in the first place if these institutions hadn't spent decades burning their own credibility. They even mention Alzheimer's research in this post, something that has literally wasted billions of taxpayer dollars due to an academic cartel shutting down anybody trying to expose the fact that they were completely wrong about amyloid plaques

fitsumbelay(10000) 4 days ago [-]

FYI habeas corpus has been under attack by GOP administrations for nearly a quarter of a century - https://en.wikipedia.org/wiki/Habeas_corpus_in_the_United_St...

squigz(10000) 4 days ago [-]

> the more they will accelerate down a path where there is no coming back from.

Why do you say this? At practically every point in history where a government or dictator goes too far, we've come back from it.

slowmovintarget(10000) 4 days ago [-]

Harvard can do whatever they want. They can also not get taxpayer funding for it.

Whoppertime(10000) 4 days ago [-]

It seems like the government has a soft Monopsony. There are many universities willing to sell research, but the government is the biggest buyer and controls the research grant market

riskassessment(10000) 4 days ago [-]

This isn't close to a monopsony but it's more directionally correct than it is wrong. Keep in mind research institutes can be funded by private foundations, state and local governments, industry (e.g. pharma), venture, or even foreign governments. The federal government is undoubtedly the largest buyer though. I do think there are other motivations to rely primarily on federal grants beyond number of dollars. In particular, funding sources other than federal grant money is often looked down on from an academic prestige perspective. Until now federal money came with very few strings attached compared to the perceived loss of objectivity that could occur when receiving money from other sources. The current situation may alter or relax the prevailing view on which sources of research money are perceived of as potentially compromising.

jltsiren(10000) 4 days ago [-]

Universities don't sell or do research. They provide facilities, equipment, services, and sometimes funding for research. The actual research is done by individuals, who are nominally employed by the university but largely independent from it. If a researcher doesn't like a particular university, they can usually take their funding and projects to another university.

When grants are revoked for political reasons, it affects individuals who happen to be affiliated with the university more than the university itself. And it particularly affects people doing STEM research, because humanities and social sciences receive much less external funding. If the decline in public funding is permanent, it makes humanities and social sciences relatively stronger within the university. They are more viable without public subsidies than the more expensive STEM fields.

jsbg(3613) 4 days ago [-]

Anyone whose research is profitable is free to work for a private entity. The government is a 'monopsony' in 'buying' unprofitable research the same way it's a 'monopsony' subsidizing any industry that would otherwise fail in a free market. That is not typically how the concept of monopsony is meant.

bo1024(10000) 4 days ago [-]

It's not a very good analogy because federally-funded research is a public investment, a public good like roads. The research is supported by the public (the government) and becomes available for anyone to use, learn from, and build off of. And in fact most successful U.S. business are built on the backs of technological innovation that was originally funded by the government, or at the very least, innovation from PhD's whose educations were largely federally funded. (Disclaimer: federally funded researcher)

You couldn't replace that with a private company 'buying' research and expect the same societal benefits.

hedayet(10000) 4 days ago [-]

Presidents and their policies come and go; knowledge stays and grows.

As long as educators aren't selling themselves short, I remain optimistic about the future.

killjoywashere(2377) 4 days ago [-]

Einstein essentially gave up his professorship at the University of Berlin. How far into the future are you looking?

https://www.nytimes.com/1932/10/18/archives/einstein-would-q...

stevenwoo(3570) 3 days ago [-]

The current administration have interrupted the pipeline of students to research - current research funded or partially funded by federal government is stopping or will be curtailed and future students will question whether is a rational decision to go into any sort of path that leads to research because it would only be stable for maybe two to three years, assuming a sane, science respecting House, Senate and President were in office and used the regular norms to pass bills and implement programs. I do not see a recovery path from this unless American public gets a similar thrashing like the Great Depression and decides to not elect nut jobs for 50 years. I keep seeing interviews with those who vote for Trump and are hurt by his tariffs or immigration changes and insisting they still support Trump. Those (mostly older) people are going to have to die of natural causes and be replaced by demographic shifts before things change, but the last election showing young men shifting to Trump and this administration trying to suppress the vote of women does not point to this.

bedhead(3539) 4 days ago [-]

One framework I like to use is, "If this thing didn't exist today, and someone proposed it, how would people react to it?"

I think it's fair to say that if none of this existed today, and someone proposed that the federal government simply give universities like Harvard seemingly endless billions, it would be laughed out of existence by republicans and democrats alike. All of this is the product of inertia at best, corruption at worst. It's a different world today and we don't need our tax dollars going to these places.

triceratops(10000) 4 days ago [-]

'If thing doesn't exist, gets proposed, gets laughed out of the room, good idea' is your framework? It doesn't sound like a good framework.

yencabulator(10000) 3 days ago [-]

Wait till you hear of countries where university education is 100% tax funded. And you get money from the government while you're a full-time student.

bretpiatt(3433) 4 days ago [-]

With their endowment above $50 billion, combined with Federal plus Non-Federal sponsored revenue at 16% of operating budget, it makes sense to me they just forgo Federal funds and operate independently.

If all 16% is canceled, then they'd need to draw an additional $1 billion per year from endowment at current budget levels.

That would put them above 7% draw so potentially unsustainable for perpetuity, historically they've averaged 11% returns though, so if past performance is a predictor of future, they can cover 100% of Federal gap and still grow the endowment annually with no new donations.

gruez(10000) 4 days ago [-]

This article lists out why it's not good of an idea as you think.

>Universities' endowments are not as much help as their billion-dollar valuations would suggest. For a start, much of the money is reserved for a particular purpose, funding a specific professorship or research centre, say. Legal covenants often prevent it from being diverted for other purposes. In any case, the income from an endowment is typically used to fund a big share of a university's operating costs. Eat into the principal and you eat into that revenue stream.

>What is more, eating into the principal is difficult. Many endowments, in search of higher income, have invested heavily in illiquid assets, such as private equity, property and venture capital. That is a reasonable strategy for institutions that plan to be around for centuries, but makes it far harder to sell assets to cover a sudden budgetary shortfall. And with markets in turmoil, prices of liquid assets such as stocks and government bonds have gyrated in recent days. Endowments that "decapitalise" now would risk crystallising big losses.

More worrying is the fact that the federal government can inflict even more harm aside from cutting off federal funding:

>the Trump administration has many other ways to inflict financial pain on universities apart from withholding research funding. It could make it harder for students to tap the government's financial-aid programmes. It could issue fewer visas to foreign students, who tend to pay full tuition. With Congress's help, it could amend tax laws in ways that would hurt universities.

https://archive.is/siUqm

Obscurity4340(10000) 4 days ago [-]

He's not gonna be happy they can operate financially without his assent

inglor_cz(10000) 4 days ago [-]

They could also possibly fire some administrators. Not every vice-provost out there is strictly necessary.

Just a few years ago, Harvard Crimson carried an op-ed complaining about the bloat:

https://www.thecrimson.com/article/2022/11/29/anderson-burea...

sandworm101(3006) 4 days ago [-]

This is about lots more than money. Sure, Harvard can go without federal funds. Then comes federal tax breaks. Then Harvard's ability to recruit foreign students (no visas, no foreign students/professors). After that comes the really draconian stuff like the fed revoking clearances or not hiring/doing business with Harvard grads. Such things were once thought illegal but are now very much on the table. That is why Harvard needs to win the money fight no matter the numbers.

__jl__(10000) 4 days ago [-]

I think the 9 billion is very misleading. More than half goes to hospitals affiliated with Harvard. I am not sure but I don't think they get anything from the endowment. The impact of loosing this money would be very uneven across different parts of the university and hospitals affiliated with it.

The faculty of arts and science would be fine. Yes, some cuts, a hiring freeze etc. The med school and public health school would feel a big impact. They employ so many people on 'soft money' through grants including many faculty members.

The hospitals are a different story and I am not sure why they are even lumped together.

fma(10000) 4 days ago [-]

Harvard is probably thinking they just need to draw the $1 billion extra for another 4 years. Unless, Trump runs for a 3rd time which he has floated. If that happens then I think everyone's just screwed.

robocat(3527) 4 days ago [-]

Republicans Are Floating Plans To Raise the Endowment Tax. Here's What You Need To Know : https://www.thecrimson.com/article/2025/2/11/increasing-endo...

Proposed College Endowment Tax Hike: What to Know : https://thecollegeinvestor.com/52851/proposed-college-endowm...

  College endowments are typically tax-exempt, but a 2017 law imposed a 1.4% tax on investment income for a small group of wealthy private universities. A new proposal seeks to increase the endowment tax rate to 14%
Other article:

  proposing an 8.6 percent tax hike
When hacking the government rules is used against you.
bitcoin_anon(10000) 4 days ago [-]

I agree. Also, the quality and independence of the research will improve when it is funded outside of government influence.

ren_engineer(3241) 4 days ago [-]

those endowments, especially for the Ivy League schools, aren't liquid at all. They'd take a massive haircut if they had to start pulling funds from it

paulpauper(104) 4 days ago [-]

80% of the endowment funds are heavily restricted as per donor requests and cannot be used unconditionally.

janalsncm(10000) 4 days ago [-]

This might be true for Harvard, but I don't think free speech should only be for those who can afford it. I know my school couldn't if the government came knocking.

benrapscallion(10000) 4 days ago [-]

Harvard affiliated hospitals are dependent on NIH funding for survival. Wonder if they are included in the scope of this.

acmj(10000) 4 days ago [-]

People here have little idea about how Harvard works. Harvard is financially vulnerable. It is currently running on a deficiency considering the endowment. And Harvard can't freely use most endowment for personnels anyway. If the government takes away funding, Harvard will have a financial crisis. I guess the leadership made the decision in hope someone could stop the government before bad things happen but when bad things do happen, you will probably see mass layoffs of researchers in particular in life sciences and biomedical research.

soup10(10000) 4 days ago [-]

Harvard has a 50 billion endowment, what do they need federal funds for. If they value their intellectual independence so much, then cut the cord.

nradov(537) 4 days ago [-]

Much of that federal funding is for research, the same as any other R1 university. We all benefit from research findings. Endowments are used for other purposes.

There are a few colleges that take no federal funding in order to maintain total independence (mostly for religious reasons). But their research output is virtually zero.

jncfhnb(10000) 4 days ago [-]

The federal funds are for doing research that the government wants to fund, not keeping the university's lights on. This is about terminating a productive partnership, not ending a subsidy handout to schools.

tgma(10000) 4 days ago [-]

Next step: taxing that endowment (which is a good idea irrespective of the other demands: universities are government-subsidized tax-free hedge funds)

JohnCClarke(10000) 4 days ago [-]

I think that's what they're saying.

twright(10000) 4 days ago [-]

I think this is the common-sense response. The push back I've heard is that endowments are apportioned to specific things. That is, it's not an open piggy bank. Nevertheless, $50B is a _lot_ even if the smallest allocation is 1% of the largest that is likely on the order of tens of millions.

op00to(10000) 4 days ago [-]

Do you have money in the bank? Do you have income? If so, you don't really need any help from the government. If you value your personal independence so much, then cut the cord.

malshe(778) 4 days ago [-]

As a university professor, I agree with you. I think universities must cut the cord and be independent. The university faculty gave up the control to administrators and administrators, in turn, gave up the control to politicians.

legitster(10000) 4 days ago [-]

They don't. This is the federal government threatening to withhold payment for research they commissioned.

throw_m239339(3625) 4 days ago [-]

> Harvard has a 50 billion endowment, what do they need federal funds for. If they value their intellectual independence so much, then cut the cord.

I agree. Gulf monarchies will probably come in a give even more billions to these institutions anyway to make up for the losses. No strings attached of course...

Harvard probably already secured some more funding from Qatar and what not.

somethoughts(10000) 4 days ago [-]

It'd be an interesting strategy if you could split the organization based on departments that depend heavily on federal funds (i.e. perhaps STEM fields such as medicine and physics/hard sciences, etc.) and those that are not (and perhaps simultaneously requiring more freedom of thought).

Perhaps resurrect the Radcliffe College to support the more intellectual, free thought based departments. [1]

[1] https://www.radcliffe.harvard.edu/about-the-institute/histor...

droopyEyelids(3595) 4 days ago [-]

It'll be nice if an institution finally decides to oppose some of the recent government overreach.

It's really shocking to see an institution in our country take action that is not in its immediate financial best interest (assuming this letter translates to an action)

immibis(10000) 4 days ago [-]

It's not just about finances. Trump just announced (possibly accidentally) that he's going to start deporting American citizens to El Salvador gulags: https://news.sky.com/story/donald-trump-says-the-us-could-de...

and they've been painting political enemies as criminals. It's pretty much the same situation as Russia/Putin but at an earlier stage of its development, and people want to avoid being the tallest grass that gets mowed.

It's good that some institutions are standing up but I don't expect it to go well for them.

colechristensen(10000) 4 days ago [-]

I would have preferred a much more concise refusal.

Vegenoid(10000) 4 days ago [-]

I'm not sure if you wanted it shorter for tonal reasons rather than simply for length of time to read, but I think it was pretty concise.

PerilousD(10000) 4 days ago [-]

I guess that Harvard probably does not need the Feds as much as the Feds need Harvard but I'm glad they are standing up to the Fascists. I'm going to have to see what NYU is doing now.

nonethewiser(3585) 4 days ago [-]

What does the Federal Gov need Harvard for? Harvard gets 16% of its funding from them - what outweighs that on the aide of the Federal government?

duxup(3407) 4 days ago [-]

The GOP / Trump administration shows no real focus on employing experts, Trump shows no curiosity about anything. They're slashing research and science across the board department by department. They employ anti science people as heads of departments that require science.

I don't think the GOP & Trump thinks they need anything from Harvard other than agreeing to impose first amendment violations on others on behalf of the GOP and Trump.

amalcon(10000) 4 days ago [-]

The thing to remember is that these grants are their research budget. The endowment is largely earmarked for educational projects. Your average university professor is there because they want to do research, not because they want to teach - so the research budget is critical for educating as well.

I assume Harvard has a plan for dealing with this dynamic. They have some extremely smart people there, so I don't doubt they've found a way.

FloorEgg(10000) 4 days ago [-]

Genuinely curious: what part of the federal government's letter to Harvard seems fascist to you?

Is the government asking a university to shift their bias away from skin color diversity to viewpoint diversity fascist?

Is there a historical parallel?

Or is it just the fact that the government is asking for reform, and any reform request would be considered fascist? If so, do you also consider the DEI reform requests fascist?

bakugo(1828) 4 days ago [-]

> I'm glad they are standing up to the Fascists

Today I learned that demanding an end to racial discrimination makes you a fascist. I swear this word becomes more meaningless by the day.

laweijfmvo(10000) 4 days ago [-]

the irony of the evil being perpetrated around the world in the name of 'antisemitism' is mind boggling

A_D_E_P_T(2124) 4 days ago [-]

In the name of 'fighting antisemitism'?

It's true, though. It's a convenient tool. 'What do you mean you don't want to cede control to us? Don't you want to fight antisemitism?!'

darknavi(2851) 4 days ago [-]

Smells awfully like Putin's trumped up (ayy) play in Ukraine to 'de-nazify'.

almogo(10000) 4 days ago [-]

No mention of anti-Asian discrimination? It made big rounds in all the American media circles a few years back, and if memory serves, MAGA boarded that train too.

kridsdale1(10000) 4 days ago [-]

The page acknowledges that Harvard lost that case and will comply with the ruling.

overfeed(10000) 4 days ago [-]

These 'values' are not sincerely held, but tactical. Once they got the SCOTUS win and affirmative action was toast, they quickly moved on from fighting anti-Asian hate to a new fig-leaf/tool to useful for fighting the next ideological battle, which was prominent protests against government policy, which happened to be pro-Palestine, so this is the best tool for the job.

The messaging is very similar too, conflating pro-diversity with anti-whiteness, or anti-asian when needed, and now redefining being pro-Palestine as anti-Semitic or pro-Hamas. It's dumb, lacks nuance, but effective when the Fifth estate is pliant, co-opted or otherwise ineffective.

yongjik(10000) 4 days ago [-]

MAGA loves to say how universities screw over poor hard-working Asian students, and then they turn around and defund universities and fire researchers. Their pity on Asians is not sincere, because they detest higher education in the first place.

And I'm saying this as an Asian father whose kid is going to a US college this year.

comte7092(10000) 4 days ago [-]

> MAGA boarded that train too

More like they found some useful idiots

ghusto(10000) 4 days ago [-]

This is the only correct response, but I don't think I'm being overly cynical in thinking they're being opportunistic either.

They're quite happy to turn a blind eye to unfashionable political views being silenced, so there's a pinch of hypocrisy in making such a show of standing for openness.

All in all though, I'm happy to see this.

stemlord(10000) 4 days ago [-]

It's my understanding that the issue is not that they're 'espousing the right views' but rather that they have the constitutional right as a private institution to espouse whatever views their students and faculty want under the first amendment.

darioush(10000) 4 days ago [-]

right, freedom of speech is free as long as it agrees with the viewpoint of who's in power. similar to how history is written by victors but this part is conveniently ignored. it's just facts in the open marketplace of ideas yay!

hn_throwaway_99(10000) 3 days ago [-]

I mean, while this is the only correct response, it could still cost Harvard around $9 billion, which isn't chump change, even for Harvard.

And while I agree and have been disgusted with Harvard's slow slide to demanding ideological conformity over the past decade plus (e.g. https://www.thefire.org/news/harvard-gets-worst-score-ever-f...), I believe they have made some belated changes in the right direction over the past year.

priyadarshy(10000) 4 days ago [-]

The wildest thing I read was:

> Harvard will immediately report to federal authorities, including the Department of Homeland Security and State Department, any foreign student, including those on visas and with green cards, who commits a conduct violation.

Conduct violations at Universities are a pretty broad set of rules at universities and don't necessarily line up with what's legal or not but more with the university's cultural and social norms.

cypherpunks01(10000) 3 days ago [-]

Another good one, 'Reforming Programs with Egregious Records of Antisemitism or Other Bias .. The programs, schools, and centers of concern include:'

> Harvard Divinity School

> Graduate School of Education

> School of Public Health

> Medical School

> Carr Center for Human Rights at the Harvard Kennedy School

> Harvard Law School International Human Rights Clinic

(partial list)

I must have missed the time when the Medical School racked up a record of egregious antisematism.

stevenwoo(3570) 3 days ago [-]

Some of those international students with their visas revoked apparently only had traffic violations according to what I read in the Texas Tribune. They are going after any level of law breaking in order to match their stated goal of kicking out criminals, since they are having trouble reaching the numbers promised in campaign speeches.

jmward01(10000) 4 days ago [-]

We are well past the point where in a future history class a student will raise their hand and ask 'Why didn't anyone stop them?' followed by 'Why were so many people members of that party?'

Vilian(10000) 3 days ago [-]

All of the information is saved, it's going to be interesting to study, the first 'class' of people to leave are the ones from tech, you know, the backbone of USA services, it's going to be interesting, it's going to be an economy fall that didn't happen in Nazi Germany

jacobs123(10000) 4 days ago [-]

> 'Harvard must implement a comprehensive mask ban with serious and immediate penalties for violation, not less than suspension.'

Wow. Imagine being sick with something serious like pneumonia and having to decide whether to get everyone around you sick, or risk being suspended from school.

yencabulator(10000) 3 days ago [-]

I think you mean jailed, tortured and deported.

DecoySalamander(10000) 3 days ago [-]

If you're seriously ill, you should get treatment, not walk around hoping that a piece of cloth will save others from exposure to whatever it is you're coughing up.

inglor_cz(10000) 3 days ago [-]

While I am not a friend of a mask ban, universities should absolutely teach their students to stay home when sick. Going to work sick is an abomination that should be rooted out. And it is a nice liberal cause too.

sam_goody(10000) 3 days ago [-]

Off topic, but _why_ is it good that the gov gives hundreds of billions of dollars [if you include grants] to higher ed.

I work in a startup where none of the programmers have been to college, and they seem to get along just fine.

I volunteer in a youth group that teaches 'soft' sciences, and I am sure that groups like ours do a better job at that with a lot less funding.

Trade schools cater to the lower income, are much more effective dollar for dollar, and get a lot less federal funds. If that money were to be poured into trade schools instead of universities, it would help create a better middle class.

Why should Harvard be so entitled?

EDIT: IMO, The reason youth go to college is to have fun. The real reason the parents are willing to pay, is because their children will forge connections with other wealthy families that is worth the money. It may be good for the wealthy that the money stays in their circle, but IMO this is not something the Gov should subsidize.

jhp123(10000) 3 days ago [-]

the money is for research not education.

A lot of modern industry started as academic research. Things like semiconductors, EUV lithography, mRNA vaccines, or AI originate in government-funded academic research.

The health effects of smoking and leaded gas were established by academic research, allowing government programs to massively improve our collective health.

Climate change has been recognized, diagnosed, and its solutions invented mostly by academic researchers, an effort that may save all industrial civilization.

chneu(10000) 3 days ago [-]

Nearly everything you use on a daily basis came from university research. Heck, most of what we know about the universe comes from university research.

Every piece of technology is because of collaboration between taxpayer funding and universities. It is relatively rare nowadays for a private business to create anything truly new without some form of university support. Or it's built on top of university research.

If you like new knowledge you like these types of programs. They make modern life possible.

Universities provide staff, equipment and expertise while the government(and often private enterprise) provide the funding.

legitster(10000) 4 days ago [-]

Even if Harvard wanted to comply with the government letter, it's full of so many non-sequiturs and self-conflictions that it reads more like a piece of satire:

> The University must immediately shutter all diversity, equity, and inclusion (DEI) programs, offices, committees, positions, and initiatives, under whatever name, and stop all DEI-based policies, including DEI-based disciplinary or speech control policies, under whatever name

> Every department or field found to lack viewpoint diversity must be reformed by hiring a critical mass of new faculty within that department or field who will provide viewpoint diversity

> In particular, Harvard must end support and recognition of those student groups or clubs that engaged in anti-Semitic activity since October 7th, 2023

> Discipline at Harvard must include immediate intervention and stoppage of disruptions or deplatforming, including by the Harvard police when necessary to stop a disruption or deplatforming

The letter is a complete joke. Giving it any sort of compliance would be giving validation to a set of rules that are literally impossible to follow by design. There is literally nothing Harvard could do to not be in trouble later.

Also buried in the letter is this gem:

> Harvard must implement a comprehensive mask ban with serious and immediate penalties for violation, not less than suspension.

Keep in mind Harvard also runs a medical school!

This is Maoist-style social reform through and through.

kashunstva(10000) 4 days ago [-]

> Keep in mind Harvard also runs a medical school!

Aseptic surgical procedures may soon go the way of vaccines.

cypherpunks01(10000) 3 days ago [-]

Harvard Medical School?

Ah yes I've heard of that, it's one of the 'Programs with Egregious Records of Antisemitism or Other Bias' which most fuels antisemitic harassment and reflects ideological capture!

pjmlp(113) 4 days ago [-]

As information, the current administration is doing similar demands to foreign universities, trying to impose the point of view of the world in a president we didn't vote for.

Here is an article about the Trump administration demands to our universities.

https://www-publico-pt.translate.goog/2025/04/11/ciencia/not...

frm88(10000) 3 days ago [-]

Thank you for that link. I knew about letters to parts of the European industry but not to universities. 7. 12. 14. and 15. are mind blowing.

outside1234(3632) 4 days ago [-]

I hope everyone is ready for a general strike because that time is coming up at us rapidly.

AlexandrB(3651) 4 days ago [-]

General strike when >50% of those who voted wanted this? What world are you living in?

Edit: I stand corrected, 49.81%. It doesn't change the point much. Especially when that ~49% includes many 'working class'[1] voters. Who's going to participate in this general strike? A bunch of office workers?

[1] https://www.reuters.com/world/us/trumps-return-power-fueled-...

pbreit(3117) 4 days ago [-]

Good for Harvard. As idiotic as many of its policies are, this is clearly government infringement of freedom and speech.

Jsebast24(10000) 4 days ago [-]

That's right. Infringement of freedom and speech should be left in the hands of government funded institutions like Harvard.

kombine(10000) 4 days ago [-]

These people (not only MAGA) perverted the very meaning of antisemitism to the point that it means nothing today. I am saying that as someone who's lost a family member to Holocaust. When I hear someone mention antisemitism today, 90% of the time it is to punish someone's views critical of Israel.

pcthrowaway(3666) 4 days ago [-]

Same, having descended from Holocaust survivors, what is happening in the U.S. and Palestine right now is chilling to me in its similarity.

Latty(10000) 4 days ago [-]

Which is, of course, deeply antisemitic of the people claiming antisemitism when they are talking about only criticism of Israel, to equate all Jewish people with the Israeli state.

arp242(10000) 4 days ago [-]

When I was active on the Politics Stack Exchange site years ago I was 'reported to the ADL' for merging the [jews] and [judaism] tags. Right out of the gate after I casually mentioned it in another discussion: not even a big fight about it. But the same person outright ignored the Trump-supporting holocaust denying user who harrassed a Jewish user with antisemitic slurs such (e.g. [1]).

Sadly antisemitism obviously exists, and sadly some pro-Palestinian activists have veered off into antisemitism. But the selective outrage is hard to take serious.

Remember, Caesar subjugated Gaul and killed or enslaved about a quarter of all Gauls in the process, to 'protect' them from invading Germanic tribes. 'Top kek', as I believe the old Latin saying goes.

[1]: https://politics.meta.stackexchange.com/q/3596 – I am the author of that, I deleted my account since in large party due to all of this

greasegum(10000) 4 days ago [-]

It's just words, obviously contradicted by many of Harvard's recent actions, but all I can think is what a fucking lay-up. If only Columbia's administration had half a spine they would have responded similarly.

bhouston(2119) 4 days ago [-]

> all I can think is what a fucking lay-up

I am nervous about the US right now. So many cases are going to end up at the Supreme Court that is controlled by conservatives. It may not be the lay-up you think it is.

Also what happens if Trump just decides to ignore a court loss as he did with the recent deportation of Kilmar Garcia?

t0lo(10000) 4 days ago [-]

Columbia's administration obviously has no issues silencing free speech and dissent based on their actions though.

duxup(3407) 4 days ago [-]

From the feds documents they describe the federal government as thought police:

>Viewpoint Diversity in Admissions and Hiring. By August 2025, the University shall commission an external party, which shall satisfy the federal government as to its competence and good faith, to audit the student body, faculty, staff, and leadership for viewpoint diversity, such that each department, field, or teaching unit must be individually viewpoint diverse.

Even ICE had a deleted tweet that makes it clear the thought police are active:

https://i0.wp.com/www.techdirt.com/wp-content/uploads/2025/0...

NoImmatureAdHom(10000) 3 days ago [-]

I prefer these thought police to the thought police we had previously.

The 'diversity' thought police had very strong views about what the only acceptable thoughts were. These people are like, 'if we could get it up to 30% that would be a huge victory'. Actual diversity in thought at top American universities would be a boon.

clivestaples(10000) 4 days ago [-]

Likely I'm very naive. But here goes... It seems that taxpayers fund a lot of research. This research is very valuable and lucrative. It finds its way into the hands of those who know how to profit from it. The taxpayer is again screwed paying exorbitant prices for said breakthroughs. Insulin is one area of interest to me and it very much seems to be the case in the diabetes world.

This was how NAFTA was sold. Move car manufacturing to Mexico and they will enjoy better living wages while we get more affordable cars. Except that I don't recall cars produced in Mexico ever getting more affordable. I'm sure corporate profits were great. Should probably look into this someday and see if my perception is correct.

ipaddr(10000) 4 days ago [-]

Between 1935 and today car price inflation is at 2.41% per year while general inflation is 3.56%. You may have not noticed. Since free trade it's been less than 2%.

You may not have noticed but it happened.

zamadatix(10000) 4 days ago [-]

Keep in mind labor is something like 10%-15% of the cost of a new car so even if you cut that down by 80%, including transport, and ignored recouping capital cost to actually move the production lines then you'd still need to move the production in less than 2 years to actually see the price decrease rather than 'not move up as fast' at 3% car price inflation of the early 90s. Interestingly there was a dip in the price increase rate of cars at the end of the 90s https://www.in2013dollars.com/New-cars/price-inflation but it's too large to have been reasonably attributable to this trade change.

jsbg(3613) 4 days ago [-]

> Except that I don't recall cars produced in Mexico ever getting more affordable.

According to this site[0], new car prices were about 6% higher at the end of NAFTA in 2020 compared with at the start of NAFTA in 1994. Considering inflation on other things was on average much higher and also that more recent cars are significantly safer, more performant, and fuel-efficient—i.e. more provide more value—it does look like cars did effectively get cheaper.

[0] https://www.in2013dollars.com/New-cars/price-inflation

killjoywashere(2377) 4 days ago [-]

Much like outbreaks that never turn into pandemics, no one remembers the efficiency measures that prevent price increases.

hermannj314(10000) 4 days ago [-]

I think a conversation about what the taxpayer should get back from university research funding is a good question, I personally don't like privatization of medical breakthroughs discovered with public money.

However, I am cautious to extend that argument to this situation. This is an attempt to use federal funding as a backdoor around the 1st amendment (from what I can tell). I'm not going to extend this administration any leeway when their bull in a china shop policies inadvertently break something I don't like. I don't want to improve taxpayer funding of research by losing the 1st amendment.

duxup(3407) 4 days ago [-]

I don't think your concept her is bad at all.

But I also don't think your concept has anything to do with the situation at Harvard.

chneu(10000) 3 days ago [-]

Part of nafta was to slow the increasing costs of production, not lower them.

When looking over time it definitely worked in many regards. Things didn't get as expensive as they would have otherwise.

kweingar(10000) 4 days ago [-]

The aggregate demands of the administration are confusing and contradictory. They seem to be simultaneously asking for:

- an end to diversity initiatives

- a new diversity initiative for diverse points of view

- a new policy of not admitting international students with certain points of view

- ending speech-control policies

- auditing the speech of certain departments and programs

- ending discipline of students who violate policies related to inclusion

- disciplining particular students who violated policies related to inclusion

TimorousBestie(10000) 4 days ago [-]

It's a good strategy. Even if Harvard had attempted to satisfy every bullet point, the govt could still retort that their demands were not satisfied.

jiriknesl(10000) 4 days ago [-]

The demands are simple and not confusing at all.

- Stop promoting Democrats' agendas as the ultimate truth; stop bullying people for non-Democratic views - Allow Republicans' agendas to be equally represented

Is it really so difficult to understand?

Out of many bad things Trump has done, this isn't really bad for anyone except core Democrats voters.

The US academia has become hostile to anyone except one particular culture. This should stop.

nineplay(3298) 4 days ago [-]

The demands of the administration are the demands of a bully who doesn't want your lunch money, he just wants you to know he can take it away at any time.

Vilian(10000) 4 days ago [-]

because they can use as excuse to stop the funding nonetheless, it's impossible to 100% comply with contradictory requests

chairmansteve(10000) 4 days ago [-]

They go after their enemies (liberals, trans, pro palestinians, brown migrants) and help their friends (right wing white people).

whatshisface(10000) 4 days ago [-]

They want to have the old system (deliberate bias and vehement denials of there being any 'bias,') but working for them, and the way to demand that without describing it is to require all of the results and 'forbid,' by name only, the necessary methods.

empath75(2913) 4 days ago [-]

What the demand is, is institutional fealty to Donald Trump. Trying to interpret it as anything else is going to lead these institutions into poor decision making. Harvard is doing the right thing.

exe34(10000) 4 days ago [-]

it's pretty clear. it's twitter's policy. neo-Nazi rhetoric must be allowed, empathy must be banned.

babypuncher(10000) 4 days ago [-]

It makes sense when you realize that their true position is 'free speech for me but not for thee'. The contradictions are about censoring speech they disagree with and promoting speech they like.

hayst4ck(10000) 4 days ago [-]

Authoritarian governments are arbitrary governments, all decisions are made arbitrarily. Consistency is unnecessary. That's the trouble with choosing power as a guiding principle over reason or consent.

atoav(10000) 4 days ago [-]

It all makes sense with a fascist power logic. The goal isn't to implement consistent policy to reach rational targets. The goal is to wield power and slowly errode any opposition with divisive actions that support anybody that is loyal to you. Importantly being loyal doesn't guarantee you will be spared. In these goals consistency is irrelevant, in fact being inconsistent and acting with arbitrary despotism is a feature since it produces more fear.

If you ever find any fascist critique of their enemies you will quickly realize that all of which they accuse their enemies of doing, they will do themselves. Decry freedom of speech as no one is 'allowed' to say sexist/racist things anymore? Be sure they will go in and ban books, political thoughts and literal words. Hillarys emails? We literally operate our foreign policy in signal groups.

Quite frankly I am a bit puzzled by the neutrality with which some Americans try to analyze this absolutely crazy political situation. It is like pondering over the gas mixture in the smoke while your house is on fire, absolutely unhinged.

UncleMeat(10000) 4 days ago [-]

It makes sense if you understand that they aren't focused on general principles. Diversity is bad when it involves non-whites, women, gay people or research involving these groups. Diversity is good when it involves 'race realists.' Free speech is bad when students are advocating for divestment initiatives. Free speech is good when a professor calls somebody the n-word online.

The goal is white supremacy and antifeminism.

gambiting(10000) 4 days ago [-]

>>- a new diversity initiative for diverse points of view

I'm sure we both know what this one means though. Forcing the university to hire people who think the earth is flat and that climate change isn't real - for the sake of diversity of course.

spyder(10000) 4 days ago [-]

and the irony at the beginning of the demanding government letter:

'But an investment is not an entitlement.'

aposm(10000) 3 days ago [-]

Nothing they do makes sense until you accept that hypocrisy is a feature, not a bug, for them and their base. They know that what they're asking for is impossible to meaningfully comply with...

immibis(10000) 3 days ago [-]

To the fascist regime, 'diversity' means 'hiring black or gay people'. Likewise 'diverse points of view' means 'viewpoints that think it's okay for black and gay people to be hired and for transgender people to pee'. And 'speech control' means 'kicking out people who shout Hitler did nothing wrong in the middle of the library'. And 'inclusion' means 'letting black or gay people study'. It's all newspeak.

davegri(10000) 3 days ago [-]

The demands only seem inconsistent if you don't look at the actual principle underlying them. Political discourse tends to present opposing ideologies as being about principles like 'free speech' or 'free markets' - it's really all about power, who has it, and who wants it.

In this case its strengthening particular social and economic hierarchies - america vs the rest of the world, and white christians over non-whites or non-christian.

What's interesting is that this is not necessarily a struggle between the top of a hierarchy vs the bottom of one, but between two different hierarchies. The democrats support cultural non racial and economic hierarchies, while the republicans support racial international and the same economic hierarchies. So while they both support the rich over the working class, there is a struggle over whether to support racial and international hierarchies. Democrats tend to support globalization, i.e unifying of the power of the top of the economic hierarchy across international boundaries, while eliminating racial and sexual hierarchies as they are seen as 'inefficient' from a neoliberal perspective. Republicans are more focused on the 'national elite', the rich people that depend on america being a global hegemon specifically, energy industry, military industira-complex, etc..

dspillett(10000) 3 days ago [-]

It is easier to understand their thinking when you combine each pair of demands: what they want is reversals, they've just split each into two steps because they think that will be more palatable. It makes it easier to sell to their own base certainly, because they can concentrate on whichever half has the most emotive effect in any given speech, and easier for their base to parrot: they just repeat the half they want and don't need to think about the other.

The end to current diversity policies and the start of others combined is a demand for u-turn: stop allowing the things we don't like, start allowing the things you were stopping.

Same for speech: stop auditing the speech we want to say, start auditing the speech you were previously allowing.

And so on.

In the minds of the administration it makes sense, because they think of each item separately where there is conflict and together where there is not. Such cognitive dissonance seems to be their natural state of mind, the seem to seek it.

Much like their cries of "but what about tolerance?!"1 when you mention punching nazis. They want the complete about-turn: LBTQ out, racism/sexism/phobias in. You are supposed to tolerate what they want you to tolerate, and little or nothing else.

--------

[1] My answer there has often become "you didn't want tolerance, you specifically voted against continued tolerance, what you voted for won, intolerance is your democratically chosen desire, who am I to deny the will of your people?".

chrsw(10000) 3 days ago [-]

I don't think it's confusing. It's classic 'my way or the highway' stance. 'Free speech for everyone! (except for things I don't like...)'.

reverendsteveii(10000) 3 days ago [-]

You see the establishment of separate, unwritten classes of things here, right? It will be a case-by-case basis which of these rules is invoked, that way no matter what happens they're 'just following the rules we all agreed to' but they get to hand-select which thoughts are compulsory and which are forbidden.

nine_k(3565) 4 days ago [-]

The university, as a private institution, has every right to hold whatever views and enforce whatever policies it sees fit within itself.

The government, on the other hand, has every right to put conditions its counterparty should conform to in order to get money from the government.

It's best when the bargaining about such conditions happens with mutual respect and without overreach, but respect and sobriety are in very short supply in the current administration. Even better it is when a university does not need to receive the government money and can run off the gigantic endowment it already has, thus having no need to comply with any such conditions.

(It's additionally unfun how the antisemitism is barely mentioned as a problem, in a very muffed way, and any other kind of discrimination based on ethnicity, culture, or religion is not mentioned at all. Is fighting discrimination out of fashion now?..)

duxup(3407) 4 days ago [-]

The governments conditions are not unlimited.

Their proposed 'viewpoint diversity' is absurd at face value.

skyyler(10000) 4 days ago [-]

Do you believe antisemitism is a problem at Harvard? If so, what led you to believe this?

dclowd9901(10000) 4 days ago [-]

Do we really believe there is a rooted undercurrent of antisemitism at Harvard of all places? Or is this just anti-zionist expansion straw manning? I'm sorry but the continuously faithless positioning of the Trump administration right now makes me believe the antisemitic accusations are a pretext.

guax(10000) 4 days ago [-]

The government does not have all that right tho. First amendment and all.

I would invite you to read the government letter if you have not, but look at each demand and put yourself in the position of the recently affected but also try to see if you can hold a 'controversial' view of the world that should be fine but would be put in danger by these demands: https://www.harvard.edu/research-funding/wp-content/uploads/...

Civil rights, suffrage, they were all the controversial opinion at some point. Some people still argue that they are but anyone against those can go pound sand.

tikhonj(3216) 4 days ago [-]

> The government, on the other hand, has every right to put conditions its counterparty should conform to in order to get money from the government.

It really doesn't. There are both normal laws and Constitutional restrictions on how the government can make decisions, and the reasons it can have for making those decisions.

I'm very much not an expert here, but this includes restrictions on viewpoint discrimination in funding.

insane_dreamer(10000) 4 days ago [-]

> antisemitism is barely mentioned as a problem

Because it's very obviously being used as a cover to exert control over universities which are deemed to be too 'woke' (which has nothing to do with anti-semitism).

Yes, antisemitism exists, like many other social ills. But is it a major problem at Harvard and these elite institutions? No, it is not.

arp242(10000) 4 days ago [-]

So first they demand 'Merit-Based Hiring Reform' and 'Merit-Based Admissions Reform', and then it continues to demand 'Viewpoint Diversity in Admissions and Hiring'.

I can't even engage with these levels of cognitive dissonance. Or bad faith. Or whatever it is.

saalweachter(3273) 4 days ago [-]

Never mistake a man's rhetoric for his principles.

enaaem(10000) 4 days ago [-]

I have never been a 'woke' person, but Trump really makes me doubt the meritocracy argument. If Trump was a black woman he would never get away with half the things he is doing now.

sys32768(10000) 4 days ago [-]

Harvard admitted it needs to '...broaden the intellectual and viewpoint diversity within our community...'

This is a no-brainer considering only 2.3% of their faculty identifies as conservative.

https://www.thecrimson.com/article/2023/5/22/faculty-survey-...

NoImmatureAdHom(10000) 3 days ago [-]

It's not cognitive dissonance, or bad faith. Of course.

If you let Harvard do 'merit-based hiring', they'll move a little in the direction of actually complying with employment law, but not much. If you institute a regime such as the one that existed for race and sex for decades (i.e., if you don't have 'enough' black people, you need to show how your recruitment pipeline means that's necessarily the case, like not enough get the required type of degree), you'll get much better compliance.

jdthedisciple(3143) 3 days ago [-]

If you genuinely cannot distinguish the two then that's about equally as bad as cognitive dissonance:

Phenotype diversity != Viewpoint diversity

The former is what current academia and DEI focus on, the latter is what the administration demands.

Does this simple logic need to be expressed in Rust for HN folks to wrap their mind around it?

veny20(10000) 4 days ago [-]

Public funds should not be subsidizing wealthy private universities. The end.

wnoise(10000) 4 days ago [-]

Unless you're speaking about the high overhead rates, that's really the wrong framing. The public funds at issue are buying things like research, or hospital services.

worik(3644) 4 days ago [-]

What an outrageous and incoherent letter

So much for academic freedom

worik(3644) 4 days ago [-]

Awesome response from Alan Garber

xqcgrek2(3134) 4 days ago [-]

With their large untaxed endowment, they should be fine without federal funding. Make it so.

tzs(2985) 4 days ago [-]

They are already are spending billions a year from their endowment, which covers nearly 40% of their operating revenue, which is around the maximum they can sustainably spend.

Sustainable spending is the whole point of an endowment.

Also endowments are created by a vast number of individual donations which often come with restrictions. For example someone leaves a bunch of money to university to support a professorship. That money and its earnings can only be used for that.

Generally the things that are funded by research grants from the government are things that cannot be funded from the endowment.

skadamat(935) 4 days ago [-]

Re: endowments, really good post on why universities can't just tap into endowments for budget shortfalls:

https://medium.com/@myassa_62896/why-you-cant-just-use-the-e...

hnburnsy(2080) 3 days ago [-]

>It's more like a patchwork of locked treasure chests, each with its own key and its own label: this one funds scholarships, that one supports cancer research, another pays for upkeep on a library.

Explain why direct donations cannot accomplish the same. I suspect that universities want endowment donations because they grow tax free.

rogermungo(10000) 3 days ago [-]

Whats the problem.. just get your pal Soros to give you the money instead.. With $36T debt, Federal Government cannot continue splashing out money like there is no tomorrow

qgin(10000) 3 days ago [-]

If they were concerned with spending, they'd just cut the spending.

They're making the spending conditional on Harvard following their ideological instructions.

otterley(3404) 3 days ago [-]

Almost every economist believes there is no serious and immediate problem with our current debt level (which is actually increasing under both Trump administrations, despite their fake expressions of concern). Why do you believe you are right and they are all collectively wrong?

chneu(10000) 3 days ago [-]

Trump is increasing the debt tho and did in his first term.

Republicans only care about debt when it can be used to either bash Democrats or used as a talking point to eliminate something they don't like. Lookup 'Starve The Beast'.

Republicans do not care about the debt. They care that it can be used as a tool. That's it.

They run up the debt when they want and then turn around to blame Democrats for the debt they ran up.

Nobody is really concerned with the US debt outside of silly wanna-be patriots and the politicians who use it to scare them. Now, one way to make the US debt a much bigger deal is to cause a recession...hmm...wonder if anyone is trying to do that...

porphyra(10000) 4 days ago [-]

Merit-based admission sounds good to me. Harvard is vigorously defending its 'right' to continue to deny admissions to highly qualified Asian applicants out of nothing but pure racism, and somehow they are the good guys?

thrance(10000) 4 days ago [-]

Do you seriously believe MAGA has any interest in fair access to education? Or are you just saying that as a disingenuous talking point?

Vilian(10000) 4 days ago [-]

because the answer for the racism against admissions from asians is deny admission and deport everyone that isn't us-american

os2warpman(10000) 4 days ago [-]

Merit is not easily definable.

Standardized tests are bullshit, IQ tests are phrenology, class rankings are not comparable across school districts. Someone who was president of every club at school may be less able than a kid who had to flip burgers in the evenings to help make rent.

Merit to a university may mean 'someone whose charisma and social connections will bring great repute to the institution' more than 'a child prodigy who will burn out at 27 and end up fixing typewriters in his parent's garage because they actually had an undiagnosed mental illness growing up'.

Merit may mean 'a middling student smart enough to pass who will stick around working as a post-doc temporarily forever because they have no ambition beyond performing slave wage labor in exchange for the cold comfort of the known and familiar'.

Any definition of merit is going to be irredeemably faulty. Like recruiting sporting talent based solely on stats without considering if the talent is an asshole who will destroy the atmosphere in the clubhouse and immediately get arrested for DUI after being signed.

I thought we wanted to let the market decide?

The government funding aspect is irrelevant. Nearly every business in the country receives some form of government funding either direct or indirect and they hire based on a wide variety of criteria. I was once hired to a position I would need time to be a productive in because I am a ham radio guy and my boss wanted someone to talk radios with.

const_cast(10000) 3 days ago [-]

When the 'other side' is pretty much evil, yeah, you are the good guys. Like, by default. I would even go so far to say Harvard could do much, much worse and they would still be the good guys.

On a closely related note, you are legitimately out of touch with reality if you believe any part of this is done with the intention of 'merit'. This is done to strengthen allegiance to MAGA and conservative ideology.

Does that sound a bit scary and fascist-like? You decide. But it's explicitly stated as the goal of this constriction on higher education in Project 2025. So, take it up with them, not me.

Zamaamiro(10000) 3 days ago [-]

Merit as defined by an administration whose cabinet is composed of Fox News personalities, DUI hires, and some of the least qualified people for the jobs they were given.

This administration has ZERO credibility to define what 'merit' is.

casey2(10000) 3 days ago [-]

It really isn't. Harvard used to be a special cultural institution now it's just another research institute. Whoopee, nothing can be special, everything has to all be the same gray sludge cause otherwise it isn't '''fair'''

TrackerFF(10000) 3 days ago [-]

If the Trump admin could directly control admission, I truly believe future classes would consist of close to 100% far right leaning ('anti-woke') WASP types.

Bluescreenbuddy(10000) 3 days ago [-]

Or maybe there are better applicants than your highly qualified asian applicants. But sure, an Asian canadian came over here, helped kill AA, and nothing's changed. Well done Asian community. You fucked over a tiny fucking minority for nothing.

blindriver(10000) 4 days ago [-]

The law in the immigration act to disallow people who espouse support for terrorism is a good one.

We protect freedom of speech for citizens because we have to. They are part of our country.

I don't believe this extends to foreigners. We should allow only immigrants who do not support terrorism and want to be productive members of society. This isn't too much to ask.

This is not a right or left issue. This is a pro-America vs con-America issue.

tastyface(2992) 4 days ago [-]

Define "terrorism."

The administration, for example, freely uses the word to describe someone with no criminal record and no proven gang affiliations: https://bsky.app/profile/atrupar.com/post/3lmrwrrkbnf2e

They also use the word to describe Tesla vandals: https://amp.cnn.com/cnn/2025/03/25/us/fbi-task-force-tesla-a...

spacemadness(10000) 4 days ago [-]

Assumption: everything critical of Israel's actions in Gaza is supporting terrorism. That's quite the take.

ajross(10000) 3 days ago [-]

'Congress shall make no law' is not unclear, nor is the idea from the declaration that ' all men are endowed by their Creator with certain unalienable Rights'. There is no spot in the founding philosophy of this nation that makes a home for 'rights of citizens' only, and there was copious space to fill that in if they wanted. You made that shit up.

What you're doing is scriptural prestidigitation. It's the equivalent of christians deciding that Satan and the serpent in the garden are the same entity, even though it's very clear that they aren't[1]. You're doing it because it makes your world view seem like less of an incoherent mess, not because it's true.

zoogeny(10000) 4 days ago [-]

This is a larger idea, just tangentially related to this particular case.

In 2011 there was Occupy Wall Street. It was a movement that argued that many of the financial problems we saw in 2008 were a result of a 1% of wealthy business people who were prioritizing their own wealth over the needs of the populations of the countries they operated within. I mean, they created a financial crisis by inventing obviously risky financial assets based on peoples housing. They knew it was a house of cards that would fall in time but they did it anyway with callous disregard to the inevitable human cost.

It was in the wake of that the 'wokeness' became a buzzword, seemingly overnight. Suddenly, corporate policies were amended, management teams were upended, advertising campaigns were aligned to this new focus. Women, minorities and marginalized groups were championed and ushered in to key public positions. In a brief 14 years, then entire garbage dump of modern capitalism was placed like a hot potato into the hands of a new naively optimistic crew. This coincided with huge money printing and zero percent interest rate, the likes of which we haven't seen. That new elite grew in wealth, stature and public focus. They became the face of the 'system' as if they had created it instead of inheriting it.

And now that the zero interest rates are done and suddenly everyone believes in the scary size of the deficit and the ballooning debt, the people sitting in power as we are about to actually feel the crash instead of just kicking it down the road yet again, those people are the target of public ire. I actually see people in these very comments acting as if the looming crash was caused by the DEI departments which formed just a little over a decade ago.

And guess who is coming back to claim they will save us from these DEI monsters? The people who created the actual mess in the first place. Yet now, instead of calling for their heads on spikes like the public was in 2011, we are literally begging them to save us from these DEI proponents.

Our anger has been redirected away from the wealthy and towards the minorities with such skill I almost admire it. The collective anger at DEI is at such a level that we are willing to cede core rights just to damage them.

matwood(10000) 4 days ago [-]

This is spot on. The US has enjoyed enormous wealth and prosperity, but it's been mostly captured by the top 1% of private individuals. The GOP has done a masterful job redirecting the blame to China, DEI, immigration, etc... when the real problem is that we have not spread around the prosperity through programs like universal healthcare, free college, and heck, even UBI.

hnburnsy(2080) 4 days ago [-]

Can someone confirm that if Harvard turned down Pell Grants and Federal student support, they could admit whoever they want?

>Private clubs are generally exempt from anti-discrimination laws under certain conditions. For example, being genuinely private and not engaging in business with non-members. However, there are exceptions to these exemptions. For instance, when a club receives significant government benefits or operates as a commercial enterprise.

telotortium(948) 3 days ago [-]

They could. Look up Bob Jones College or Hillsdale College, both of which operate without any federal funding. It appears that the elite universities are going to find out the same thing that the small Christian universities found out in the 1970s, which is that the federal government Can control you if they fund you. I believe Bob Jones in particular won a case in front of the Supreme Court giving them the right to racially discriminate in their admissions if they refuse to take any federal funding.

kashunstva(10000) 4 days ago [-]

From the United States government letter to Harvard: 'Harvard must implement a comprehensive mask ban with serious and immediate penalties for violation, not less than suspension.'

So if a student has, say, an immunodeficiency syndrome and wears a mask to protect their health during the riskier seasons of the year, they would face dismissal from the university? (Or worse - whatever that is - according to the letter.)

This is how we know that the Republican party has no interest in freedom as the word is conventionally defined.

Loughla(10000) 4 days ago [-]

They want freedom for themselves. They're free to impose their will on others without judgement. That's the purpose.

NoImmatureAdHom(10000) 3 days ago [-]

A 'comprehensive mask ban' would presumably include exceptions for people who are immunocompromised, actively sick with an upper-respiratory infection, etc.

Steelman, don't straw man.

EasyMark(3653) 2 days ago [-]

The current regime in Washington is clearly fascist, there is nothing democratic at all about them. They want to banish Americans to foreign concentration camps for torture, he said that just before his interview with the El Salvador President who is hosting at least one of said concentration camps. Yet the media says little.

nickpsecurity(3676) 4 days ago [-]

So, many of these universities were taken over in positions of power by people promoting intersectionality which also promotes systematic discrimination (eg DEI) against specific groups. That's a highly-divisive philosophy with no proven benefits that's similar to Marxism which killed 50 million people and wrecked countries. They did this while describing themselves as open-minded institutions commited to everyone's success.

In the degree programs, they forced these beliefs on students in 'diversity' classes, rewarded those on their side, and canceled or limited people with differing views. Those who make it through the process are more likely to force it on others in government and business, which they often do. Worse, being federally funded means taxpayers are paying for students' indoctrination in intersectionality and systematically discrimination it claimed to oppose.

Yeah, I want their funding cut entirely since theyre already rich as can be. I also would like to see those running it take it back to what it used to be. That's a Christian school balancing character and intellectual education. Also, one where many views can be represented with no cancel culture. That is worth federal funding.

On top of it, how about these schools with billions in endowments put their money where their mouth is on social issues and start funding high-quality, community colleges and trade schools and Udemy-like programs everywhere? Why do they talk so much and take in so much money but do so little? (Credit to MIT for EdX and Harvard for its open courses.)

shadowgovt(10000) 4 days ago [-]

> That's a Christian school

> That is worth federal funding.

... interesting.

margalabargala(10000) 4 days ago [-]

> people promoting intersectionality which also promotes systematic discrimination (eg DEI) against specific groups. That's a highly-divisive philosophy with no proven benefits that's similar to Marxism which killed 50 million people and wrecked countries

Just like all people connecting to 'Kevin Bacon', and all Wikipedia pages first links connecting to 'Philosophy', every idea can be connected to mass murder if you're willing to manufacture enough links.

'Intersectionality' is a descriptive, rather than prescriptive, idea. It promotes nothing.

pjfin123(3662) 4 days ago [-]

The Federal government making funding to a university contingent on them 'reforming' specifically named departments whose foreign policy views the executive branch disagrees with (Israel/Palestine policy) seems like a clear violation of the First Amendment.

cma(3612) 3 days ago [-]

They are deporting permanent residents for op-eds.

One permanent resident was sent to a concentration camp in El Salvator without due process, none over speech yet that I know of but his was for being spuriously labeled a terrorist.

nailer(487) 3 days ago [-]

My understanding is that racial discrimination is forbidden under title nine at least.

Animats(2975) 4 days ago [-]

It's a weak response, in that it accepts the Trump Administration's position on antisemitism. This is tied to the broad definition of antisemitism which includes acts by the State of Israel.[1] That definition comes from the International Holocaust Remembrance Alliance. There's a more balanced definition called the Jerusalem Declaration here.[2][3]

This will lead to a controversial discussion, so I'll stop here, with the comment that getting involved in religious wars of other countries hasn't gone well for the US. The US has constitutional freedom of religion partly because the drafters of the constitution knew how that had gone in Europe.

'Maybe they is not evil. Maybe they is just enemies.' - Poul Anderson

[1] https://www.state.gov/defining-antisemitism/

[2] https://jerusalemdeclaration.org/

[3] https://en.wikipedia.org/wiki/Jerusalem_Declaration_on_Antis...

otterley(3404) 3 days ago [-]

Why did the response have to include it? It's not tactically useful.

yes_really(10000) 3 days ago [-]

We can debate about specific requests from the Trump administration, but it is pretty clear that Harvard has been horrible. The previous administrations completely failed to fix it.

- Harvard has been discriminating against Whites and Asians in admissions for decades.

- Harvard deliberately refused to protect Jewish students against intimidation and harassment. Students camped in school property for weeks against Harvard's official rules. They chanted that they would bring islamic terrorism to America ('intifada, intifada, coming to America'), established a self-appointed security system that monitored and recorded Jews, and remained there for almost a month while the school simply refused to remove them. [1]

- Harvard's president stated that calling for the genocide of Jews did not necessarily constitute harassment. This is particularly bizarre when contrasted to Harvard's approach to other groups, like when it considers 'misgendering' of trans individuals to be harassment.

[1] https://www.tabletmag.com/sections/news/articles/harvard-jew...

yes_really(10000) 3 days ago [-]

For the people downvoting: can you actually provide arguments for why you think these points are incorrect?

If you are downvoting simply because you disagree politically with what I commented, you are going against the guidelines: https://news.ycombinator.com/newsguidelines.html

pmags(3338) 3 days ago [-]

I predict a surge of alumni donations in the weeks and months to come, not just at Harvard but also at other institutions that are showing their willingness to stand up against the creeping fascism of the current administration.

I think people who value education, academic freedom, and understand the economic and societal role that universities play, were hoping to see one or more of the major institutions stand up for these principles.

nailer(487) 2 days ago [-]

But they're not standing up for freedom. They are admitting and hiring people based on a monoculture.





Historical Discussions: Leaked data reveals Israeli govt campaign to remove pro-Palestine posts on Meta (April 11, 2025: 1203 points)

(1203) Leaked data reveals Israeli govt campaign to remove pro-Palestine posts on Meta

1203 points 7 days ago by jbegley in 53rd position

www.dropsitenews.com | Estimated reading time – 11 minutes | comments | anchor

Pro-Palestine protesters in front of Meta headquarters on November 3, 2023. Photo by Tayfun Coskun/Anadolu via Getty Images.

A sweeping crackdown on posts on Instagram and Facebook that are critical of Israel—or even vaguely supportive of Palestinians—was directly orchestrated by the government of Israel, according to internal Meta data obtained by Drop Site News. The data show that Meta has complied with 94% of takedown requests issued by Israel since October 7, 2023. Israel is the biggest originator of takedown requests globally by far, and Meta has followed suit—widening the net of posts it automatically removes, and creating what can be called the largest mass censorship operation in modern history.

Government requests for takedowns generally focus on posts made by citizens inside that government's borders, Meta insiders said. What makes Israel's campaign unique is its success in censoring speech in many countries outside of Israel. What's more, Israel's censorship project will echo well into the future, insiders said, as the AI program Meta is currently training how to moderate content will base future decisions on the successful takedown of content critical of Israel's genocide.

The data, compiled and provided to Drop Site News by whistleblowers, reveal the internal mechanics of Meta's "Integrity Organization"—an organization within Meta dedicated to ensuring the safety and authenticity on its platforms. Takedown requests (TDRs) allow individuals, organizations, and government officials to request the removal of content that allegedly violates Meta's policies. The documents indicate that the vast majority of Israel's requests—95%—fall under Meta's "terrorism" or "violence and incitement" categories. And Israel's requests have overwhelmingly targeted users from Arab and Muslim-majority nations in a massive effort to silence criticism of Israel.

Multiple independent sources inside Meta confirmed the authenticity of the information provided by the whistleblowers. The data also show that Meta removed over 90,000 posts to comply with TDRs submitted by the Israeli government in an average of 30 seconds. Meta also significantly expanded automated takedowns since October 7, resulting in an estimated 38.8 million additional posts being "actioned upon" across Facebook and Instagram since late 2023. "Actioned upon" in Facebook terms means that a post was either removed, banned, or suppressed.

Number of posts reported by the Israeli government over time, by country of post origin. Obtained by Drop Site News.
Number of posts actioned upon by Meta over time, by country of post origin. Obtained by Drop Site News.

All of the Israeli government's TDRs post-October 7th contain the exact same complaint text, according to the leaked information, regardless of the substance of the underlying content being challenged. Sources said that not a single Israeli TDR describes the exact nature of the content being reported, even though the requests link to an average of 15 different pieces of content. Instead, the reports simply state, in addition to a description of the October 7th attacks, that:

This is an urgent request regarding videos posted on Facebook which contain inciting content. The file attached to this request contains link [sic] to content which violated articles 24(a) and 24(b) of the Israeli Counter-Terrorism Act (2016), which prohibits incitement to terrorism praise for acts of terrorism and identification or support of terror organizations. Moreover, several of the links violate article 2(4) of the Privacy Protection Act (1982), which prohibits publishing images in circumstances that could humiliate the person depicted, as they contain images of the killed, injured, and kidnapped. Additionally, to our understanding, the content in the attached report violates Facebook's community standards.

Meta's content enforcement system processes user-submitted reports through different pathways, depending on who is reporting it. Regular users can report posts via the platform's built-in reporting function, triggering a review. Reported posts are typically first labeled as violating or non-violating by machine-learning models, though sometimes human moderators review them as well. If the AI assigns a high confidence score indicating a violation, the post is removed automatically. If the confidence score is low, human moderators review the post before deciding whether to take action.

Governments and organizations, on the other hand, have privileged channels to trigger content review. Reports submitted through these channels receive higher priority and are almost always reviewed by human moderators rather than AI. Once reviewed by humans, the reviews are fed back into Meta's AI system to help it better assess similar content in the future. While everyday users can also file TDRs, they are rarely acted upon. Government-submitted TDRs are far more likely to result in content removal.

Meta has overwhelmingly complied with Israel's requests, making an exception for the government account by taking down posts without human reviews, according to the whistleblowers, while still feeding that data back into Meta's AI. A Human Rights Watch (HRW) report investigating Meta's moderation of pro-Palestine content post-October 7th found that, of 1,050 posts HRW documented as taken-down or suppressed on Facebook or Instagram, 1,049 involved peaceful content in support of Palestine, while just one post was content in support of Israel.

A source within Meta's Integrity Organization confirmed that internal reviews of their automated moderation found that pro-Palestinian content that did not violate Meta's policies was frequently removed. In other cases, pro-Palestinian content that should have been simply removed was given a "strike," which indicates a more serious offense. Should a single account receive too many strikes on content that it publishes, the entire account can be removed from Meta platforms.

When concerns about overenforcement against pro-Palestinian content were raised inside the Integrity Organization, the source said, leadership responded by saying that they preferred to overenforce against potentially violating content, rather than underenforce and risk leaving violating content live on Meta platforms.

Within Meta, several key leadership positions are filled by figures with personal connections to the Israeli government. The Integrity Organization is run by Guy Rosen, a former Israeli military official who served in the Israeli military's signals intelligence unit, Unit 8200. Rosen was the founder of Onavo, a web analytics and VPN firm that then-Facebook acquired in October 2013. (Previous reporting has revealed that, prior to acquiring the company, Facebook used data Onavo collected from their VPN users to monitor the performance of competitors—part of the anti-competitive behavior alleged by the Federal Trade Commission under the Biden administration in its suit against Meta.)

Rosen's Integrity Organization works synergistically with Meta's Policy Organization, according to employees. The Policy Organization sets the rules, and the Integrity Organization enforces them—but the two feed one another, they said. "Policy changes are often driven by data from the integrity org," explained one Meta employee. As of this year, Joel Kaplan replaced Nick Clegg as the head of the Policy Organization. Kaplan is a former Bush administration official who has worked with Israeli officials in the past on fighting "online incitement."

Meta's Director of Public Policy for Israel and the Jewish Diaspora, Jordana Cutler, has also intervened to investigate pro-Palestine content. Cutler is a former senior Israeli government official and advisor to Prime Minister Benjamin Netanyahu. Cutler has reportedly used her role to flag pro-Palestine content. According to internal communications reviewed by Drop Site, as recently as March, Cutler actively instructed employees of the company to search for and review content mentioning Ghassan Kanafani, an Arab novelist considered to be a pioneer of Palestinian literature. Immediately prior to joining Meta as a senior policymaker, she spent nearly three years as Chief of Staff at the Israeli Embassy in Washington, D.C—and nearly five years serving as deputy to one of Netanyahu's senior advisors, before becoming Netanyahu's advisor on Diaspora Affairs.

According to internal information reviewed by Drop Site, Cutler has continued to demand the review of content related to Kanafani under Meta's policy "Glorification, Support or Representation" of individuals or organizations "that proclaim a violent mission or are engaged in violence to have a presence on our platforms." Kanafani, who was killed in a 1972 car bombing orchestrated by the Mossad, served as a spokesperson for the left-wing Palestinian nationalist group, the Popular Front for the Liberation of Palestine (PFLP). The PFLP was designated as a terrorist group over a quarter century after he was killed, which, according to Meta's guidelines and Cutler's efforts, serves as a basis to flag his content for removal, strikes, and possible suspension.

The leaked documents reveal that Israel's takedown requests have overwhelmingly targeted users from Arab and Muslim-majority nations, with the top 12 countries affected being: Egypt (21.1%), Jordan (16.6%), Palestine (15.6%), Algeria (8.2%), Yemen (7.5%), Tunisia (3.3%), Morocco (2.9%), Saudi Arabia (2.7%), Lebanon (2.6%), Iraq (2.6%), Syria (2%), Turkey (1.5%). In total, users from over 60 countries have reported censorship of content related to Palestine, according to Human Rights Watch—with posts being removed, accounts suspended, and visibility reduced through shadow banning.

Notably, only 1.3% of Israel's takedown requests target Israeli users, making Israel an outlier among governments that typically focus their censorship efforts on their own citizens. For example, 63% of Malaysia's takedown requests target Malaysian content, and 95% of Brazil's requests target Brazilian content. Israel, however, has turned its censorship efforts outward, focusing on silencing critics and narratives that challenge its policies, particularly in the context of the ongoing conflict in Gaza and the West Bank.

Despite Meta's awareness of Israel's aggressive censorship tactics for at least seven years, according to Meta whistleblowers, the company has failed to curb the abuse. Instead, one said, the company "actively provided the Israeli government with a legal entry-point for carrying out its mass censorship campaign."

Leave a comment




All Comments: [-] | anchor

ethbr1(3611) 7 days ago [-]

>> The data show that Meta has complied with 94% of takedown requests issued by Israel since October 7, 2023.

Nice to see Zuckerberg taking free speech as seriously as he claims.

seydor(3491) 7 days ago [-]

I m not sure he ever claimed that

googlryas(10000) 7 days ago [-]

I'd like to see examples of actual posts that were taken down, rather than talk of the quantity, or who filed the reports.

mef51(3579) 7 days ago [-]

The HRW report[1] goes into details, at least on the 1050 takedowns they documented

> A Human Rights Watch (HRW) report investigating Meta's moderation of pro-Palestine content post-October 7th found that, of 1,050 posts HRW documented as taken-down or suppressed on Facebook or Instagram, 1,049 involved peaceful content in support of Palestine, while just one post was content in support of Israel.'

[1] https://www.hrw.org/report/2023/12/21/metas-broken-promises/...

thomassmith65(10000) 7 days ago [-]

The article mentions requests to remove posts quoting Ghassan Kanafani. The article introduces Kanafani as a literary figure, but then discusses his involvement in the PFLP. I don't know if they want the reader to form a particular judgement about this, or if they're just reporting the facts.

abeppu(10000) 7 days ago [-]

It sounds like you're using the fact that the posts aren't available for you to view to evaluate as a weakness of the reporting on this suppression campaign, but of course they're not available because of the suppression campaign.

Surely the burden should be on the censors to establish clearly that something is in fact incitement to violence, rather than on external reporters to magically show that content which has been taken down is not incitement?

esalman(3317) 7 days ago [-]

I am part of a neighborhood group where I grew up in Bangladesh and lived until 5th grade in the 90s.

The group admin this morning let us know via Facebook post that he has received warnings frm Facebook. The group is 'at a risk of being suspended' because way too many posts relating to 'dangerous organization and individuals' have been removed. He wants everyone to be extra careful when posting about p*l*s*i*e, I*r*e*, g*z*, j*w* etc. He used asterisks himself just to be extra careful himself.

Not to mention my country is dealing with rohingya crisis, which was fueled by Facebook and WhatsApp misinformation campaigns, and Facebook had 2 moderators for the whole country of Myanmar and refused to do anything about said misinformation campaigns. But they sure make exceptions for I*r*e*.

shihab(10000) 7 days ago [-]

As a recent example, the instagram of guardian journalist Owen jones (well known Israel critic) was suddenly suspended without any explanation today.

It has been since restored, after a predictable twitter storm.

nashashmi(10000) 7 days ago [-]

Every pro Palestinian protestor has experienced some form of awareness suppression and content removal. They have known this was a thing long before anyone else did.

Same thing happened during 9/11. Muslims saw suppression, bullying by the police and no one covered it. Then the tables turned on maga republicans after j6.

chacham15(10000) 7 days ago [-]

Since nobody here has actually read the article, it states that the reason the posts were taken down was 'prohibits incitement to terrorism praise for acts of terrorism and identification or support of terror organizations.' This type of speech (incitement) is illegal in the United States and support is very borderline depending on the type and meaning of 'support'. Now, if the reason doesnt match the actual content removed that should definitely be addressed which is your point, but I think that the reason is valid.

janalsncm(10000) 7 days ago [-]

The article links to a much longer article from Human Rights Watch with a good number of examples: https://www.hrw.org/report/2023/12/21/metas-broken-promises/...

zombiwoof(10000) 7 days ago [-]

People still use Facebook?

ben_w(10000) 7 days ago [-]

Personal anecdote: whever I log in to the feed, 1/3 of posts are ads, 1/3 are algorithmic recommendations, and 1/3 are pro-Palestine posts by a former partner.

Almost none of my other connections post anything, though there are occasional exceptions.

muddi900(10000) 5 days ago [-]

Groups and Marketplace

They ate still very popular

sriram_malhar(10000) 6 days ago [-]

Our minds have been so colonized or beaten down by powerful forces that _any_ support of the plight of the Palestine people is seen as pro-Hamas, even if I shout at the top of my voice that I don't care for the armed factions and political jockeying of either side.

I will expect to be downvoted to hell for this.

edanm(3676) 6 days ago [-]

> Our minds have been so colonized or beaten down by powerful forces that _any_ support of the plight of the Palestine people is seen as pro-Hamas,

What makes you say that? Plenty of people express support for the Palestinian people, including plenty of governments and heads of state, etc.

I personally think that being pro-Palestine means you should be anti-Hamas, since they are a brutal dictatorship that's plundered its people's resources to engage in a war with Israel that has destroyed their lives.

The main worrying thing is when someone is not pro-Palestinian, they're either pro-Hamas or anti-Israel.

mlindner(3663) 6 days ago [-]

That's because the Palestine protests are full of people who actually are pro-Hamas, and not only that but often rabidly antisemite on top of that. Your side linked the two together for whatever reason.

liorsbg(10000) 6 days ago [-]

I just re-read the article, and there's no evidence of wrong doing. There's a bunch of circumstantial stuff that people are choosing to feed into their narrative.

Facebook has some rules and community guidelines, the Israeli government recognized some posts that violate those and asked for them to be taken down, and Facebook complied in accordance to their own rules.

nabla9(144) 6 days ago [-]

Nothing illeagal. Just dirty.

jgil(10000) 6 days ago [-]

Having a system of rules does not mean that the system is inherently well-designed or well-intentioned.

mlindner(3663) 6 days ago [-]

The problem is the pro-Palestine movement irrecoverably linked themselves to Hamas, a terrorist organization, it's made supporting Palestine a toxic position to hold for anyone of any significance.

aussieguy1234(3672) 6 days ago [-]

Actually it's the other way around. Fascists in Israel and the US worked very hard to make it so that anyone seen to be sympathetic to the plight of the Palestinians is seen as pro Hamas, or pro terrorist.

Apparently there are some that even say the Palestinian flag itself is a 'terrorist flag' and anyone flying it is also a terrorist.

t0lo(10000) 6 days ago [-]

Actually the Israeli government (Netenyahu) funded Hamas as a way to destabilise the Palestinian authority and conflate palestinianism with terrorism (EU Policy chief Joesep Borrell has stated this on record https://www.politico.eu/article/israel-funded-hamas-claims-e...)

marcosdumay(10000) 6 days ago [-]

Just because a bunch of war criminals keep saying it, it doesn't make it true.

isaacremuant(10000) 6 days ago [-]

If this appals or surprises you but then you call others conspiracy theorists when they're disseminating things that don't align with your mainstream political views, you need to learn from it and stop playing the game.

overu589(10000) 6 days ago [-]

Or there truly are conspiracies against our natural destinies, we are merely ignorant and incompetent in identifying what they might be.

Covering own asses is natural enough. War crimes and crimes against humanity are serious concerns with serious considerations, yet what if we cannot ourselves be trusted by the very nature of our self lies?

aucisson_masque(10000) 7 days ago [-]

I like to think we are in a better place than russia for instance with all its propaganda and jailed journalists, but then i see these kind of article come over and over....

Most of the people in the 'free world' goes on mainstream media, like facebook to get their news. These companies are enticed to 'suck up' to the government because at the end they are business, they need to be in good term with ruling class.

you end up with most media complying with the official story pushed by government and friends, and most people believing that because no one has the time to fact check everything.

One could argue that the difference with russia is that someone can actually look for real information, but even in russia people have access to vpn to bypass the censorship.

Another difference would be that you are allowed to express your opinion, whereas in russia you would be put to jail, that's true but only in a very limited way. Since everyone goes on mainstream media and they enforce the government narrative, you can't speak there. you are merely allowed to speak out in your little corner out of reach to anyone, and even then since most people believe the government propaganda, your arguments won't be heard at all.

The more i think about it, the less difference i see.

gooosle(10000) 7 days ago [-]

The difference with Russia is that they are much worse at hiding their corruption and censorship.

newsclues(10000) 7 days ago [-]

It's not a better or worse government (although it may be), it's just different.

uniqueuid(10000) 7 days ago [-]

You're not arrested for posting this, so that is a pretty big difference to Russia (and other authoritarian nations like China and Turkey), no?

https://rsf.org/en/country/russia

scottyah(10000) 7 days ago [-]

It's still humans being humans, we just have a covert culture while they are more overt. I personally like being tricked/manipulated more than forced. I'd rather get Tom Sawyered into painting a fence than being held at gunpoint.

NoTeslaThrow(10000) 7 days ago [-]

Indeed. The editorial boards of these newsrooms are often staffed with people who attended the same schools and classes as those running the country. The social circles of the two worlds are extremely closely linked.

Of course, this means that the reporting isn't very good at addressing its blind spots–i.e., most of the news in the country, let alone the world, that isn't relevant to the ivy league coastal elites. And I say this as a member of that same class. Most of the political perspectives in my life are completely unrepresented in the opinion columns, which generally tend to pander upwards rather than downwards.

I don't tend to put much weight in freedom of the press so long as that press is floating on the cream of society and asking the government permission to report on what they're doing.

alistairSH(3420) 7 days ago [-]

Is Meta really considered "mainstream media"? I always took that phrase to refer to NBC, CBS, NY Post, etc - the big legacy news organizations (print and TV).

kubb(10000) 7 days ago [-]

Anna Politkovskaya – Investigative journalist and critic of the Chechen war, shot in Moscow (2006). Alexander Litvinenko – Ex-FSB officer poisoned with polonium in London (2006).

Stanislav Markelov & Anastasia Baburova – Human rights lawyer and journalist, shot in Moscow (2009).

Boris Nemtsov – Opposition leader, shot near the Kremlin (2015).

Denis Voronenkov – Former Russian MP, shot in Kyiv (2017).

Nikolai Andrushchenko – Journalist, beaten to death in St. Petersburg (2017).

Alexei Navalny – Opposition leader, died in prison after previous poisoning (2024).

---

The difference is that they murder their political opponents for show to make their people be afraid of dissent.

You comparing it with some (disgusting, vile) social media company (which would improve the world immensely if it disappeared) is completely inappropriate.

rrrrrrrrrrrryan(10000) 7 days ago [-]

I don't think this is necessarily an issue of censorship so much as it is highlighting that Facebook is clearly a fucking news publisher and should be treated as such under the law.

It's time to revoke section 230 for any social media network that amplifies or buries content with a non-transparent process.

In this case it isn't even merely an algorithm designed by humans. They have LITERAL human editors choosing which stories to put on the front page, just like the NYT, and they should be held liable for the content on their platforms just like legacy media is.

mnky9800n(10000) 7 days ago [-]

Russia doesn't just put people in jail for speaking against the government. They weaponise the generational fear of being disappeared by the government. This is not close to what happens in America where you can post anything anywhere and if Facebook deletes it you can always make your own website about it. If you did this in Russia you go to jail. Even if you say things like "it is sad Ukrainian children die in children's day in Russia" you go to jail. I don't think you can compare modern USA with modern Russia in this way. USA does plenty of other things that are bad like jailing so many people for petty crimes without pushing much on speech. USA has its own problems and all these comparisons only hide them.

Braxton1980(10000) 6 days ago [-]

>Another difference would be that you are allowed to express your opinion, whereas in russia you would be put to jail, that's true but only in a very limited way.

Although not even close in number and punishment the US government is deporting people for speaking against Israel.

I think we do have a much better system because we are aware of these cases, you can speak out about the issue, and our court system can rule against the current admin.

What makes this possible to either the level of Russia or the US is how much the supporters of the regime want it. This is regardless of morality, legality, or the precedent it sets.

mmooss(10000) 6 days ago [-]

This post is oddly nonsensical ...

> mainstream media, like facebook

Facebook is in the 'mainstream media'? That's a first in my experience. 'Mainstream media' usually describes established journalism organizations such as CNN, Fox, the NY Times, the WSJ. Facebook is universally grouped with 'social media' in my experience.

> Most of the people in the 'free world' goes on mainstream media

In fact, most people go on social media. The 'mainstream media' is losing audience rapidly.

> you end up with most media complying with the official story pushed by government and friends

I'm a bit confused here. Facebook complying with ... which government? The Israeli government has very little power over Facebook - Israel is a tiny market.

Meanwhile, Trump has been calling the 'mainstream media', the 'enemy of the people' - because they constantly report what he doesn't like.

Since the November election, many have shockingly capitulated but many remain. The NYT, for example, publishes negative news and criticisms of Trump and Israel daily.

> The more i think about it, the less difference i see.

You haven't established much of anything. Much of the comment doesn't make sense. Where is the Russian NYT? Which American journalists are in jail?

kombine(10000) 6 days ago [-]

> Another difference would be that you are allowed to express your opinion, whereas in russia you would be put to jail, that's true but only in a very limited way.

This is more subtle. I have a lot to say about Israel, and I do post occasionally on Facebook, but I tone it down a lot because I have a few high profile people in the industry and academia among my Facebook friends (not actual friends). If I were to post what I really think, this would have serious career repercussions for me. People would brand me as an antisemite (they don't know that my grandfather is Jewish and he practically raised me).

Can you compare this to Russia? Well, I am Russian and I live in the West, so my choice of living here gives an answer to this question. I'd be in jail in Russia if they read my Facebook posts about the war in Ukraine. Yet, I'm now disillusioned about the Western liberalism, all thanks to Gaza war.

hello_computer(3565) 6 days ago [-]

The college deportations are the government, but I would guess that the Meta compliance has more to do with the fact that Cheryl Sandberg is a politically-connected turbo-Zionist.

I wish we were neutral on this issue. As an American, it is not my business. I am in no position to justly arbitrate between them. But our politicians are whores, our Zionists have deep pockets, and they're not afraid to empty them out for the cause, so it looks like America's taxpayers are all on Team Zionist, whether we like it or not.

earnestinger(3607) 6 days ago [-]

Technically, they are the same. As in: people with power want to control the narrative.

This was so, is so and will always be so, everywhere.

But but but... details matter. A lot.

The west has traditions how and when to apply power, which is distinctly different from Russia.

I hand-pick two illustrations of Russia:

1. https://www.themoscowtimes.com/2022/09/27/moscow-police-accu...

> Officers "beat up Kamardin very badly and stuck a dumbbell in his anus," according to Novaya Gazeta Europe.

2. Bald man claim to power was accompanies with mysterious explosions of apartment buildings after which Chechens were declared enemies and war started.

Some interesting bits from wikipedia:

> Three Russian Federal Security Service (FSB) agents who had planted the devices at Ryazan were arrested by the local police.[6] The next day, FSB director Nikolai Patrushev announced that the incident in Ryazan had been an anti-terror drill and the device found there contained only sugar, and freed the FSB agents involved.[7]

And

> 13 September 1999: Russian Duma speaker Gennadiy Seleznyov makes an announcement about the bombing of an apartment building in the city of Volgodonsk that only takes place three days later

> 16 September 1999: Bombing in Volgodonsk, 17 are killed, 69 injured

https://en.m.wikipedia.org/wiki/1999_Russian_apartment_bombi...

somethingreen(10000) 6 days ago [-]

Corruption of power is an inherent property of power. It is expected that people in power will get corrupted. The methods of power grabs are also fairly universal.

The difference between a corrupt shithole and free world is not in what the government tries to do, but in how the governed respond.

NoOn3(10000) 6 days ago [-]

It is not so bad in Russia. Not so many sites are blocked. You can easily read foreign news if you want to. Hacker news is not blocked for example :).

AlexGrothen(10000) 6 days ago [-]

Well, there is a difference with Russia, actually. One of Palestinian professors, who studied freedom of speech, shaped it this way: The difference is that people from Russia, Arab countries etc DO know that their media is lying - but also they know the Western media is lying, because they read all that nonsense the westerners write about their countries.

Good for you that you started to realize how corrupt the Western media is.

wqaatwt(10000) 6 days ago [-]

> The more i think about it, the less difference i see.

You might consider trying not to view the world entirely in black and white then.

This sort of sentiment is not particularly productive especially in times like this..

klntsky(10000) 5 days ago [-]

The difference is not in the ability to be heard. The difference is in the consequences: jail or even death vs. merely not being heard.

therealpygon(10000) 5 days ago [-]

Sadly, that situation is also contorted to legitimize the spread verifiably false information by certain current political cults, led by a Turnip, that claim it is another party controlling media because they believe that they have the secret access to the "truth" that is being "blocked" on all other sources of media, and point to other suppressed stories (even if completely unrelated or blocked due to being outright lies) as proof. Look at attempts to curtail the spread of completely false vaccine information that is now being used as proof of something nefarious (even while more nefarious activity is being perpetrated). Some people took notes from other Dictators' control of media long ago and have been working toward it for many years via press-related misinformation to cause a loss of confidence. You would think the press would fight back harder against being de-legitimized, using stronger wording and calling lying what it is, but when your purse strings are being controlled by the same businesses that see opportunities to advantage themselves, it's not surprising.

neycoda(10000) 5 days ago [-]

The US isn't just trying to save the Jews... it's trying to leverage them to crush the Muslims for Christian domination.

mjlangiii(10000) 4 days ago [-]

you might enjoy reading, 'Manufacturing Consent'

canxerian(10000) 2 days ago [-]

Another often overlooked difference is that non US/UK citizens are typically bilingual, so by definition can access more news sources

msohailshah(10000) about 1 hour ago [-]

There is no difference between US and Russia in terms of free speech. Russia doesn't have promote a narrative of free speech while banning it. US suppresses it, punishes it and effectively deports anyone who criticizes Israel.

Holy cows are holy everywhere, its just that different cows are holy everywhere.

janalsncm(10000) 7 days ago [-]

So when the government pointed to the disproportionate support for Palestine on TikTok vs Instagram, it was actually because Instagram was suppressing it. It is ironic.

https://x.com/hawleymo/status/1717505662601609401

nashashmi(10000) 7 days ago [-]

Another reason why TikTok has to come under US ownership. How else are we going to censor things when they are under China's (lack of) control?

RedComet(10000) 7 days ago [-]

Yes. This was clearly the reason for the ban in the first place.

nikkwong(3383) 7 days ago [-]

While this may be part of the story, it's certainly not the full picture. We know that the CCP is actively manipulating the algorithm on Tiktok to further their agenda on multiple other geopolitical issues—something we have ample evidence for. I don't know if there is a smoking gun on this one topic in particular, but the CCP's goal has always been to divide the American audience; and we know that older Americans skew pro-Israel whilst younger Americans are more oriented towards being pro-Palestinian. If someone looked in the right places, they would more likely than not find evidence of algorithm manipulation to favor a Palestinian bend.

HDThoreaun(10000) 6 days ago [-]

Most americans support Israel in this conflict. Maybe the samples are just biased?

MPSFounder(10000) 7 days ago [-]

Realistically, how can we uncover this type of foreign interference? As in, is there any hack someone in our community can perform to expose Israeli propaganda? Israel locked journalists out of Ghaza, and has pretty much dominion over social media in the US. How can someone remain informed or expose misinformation campaigns (ideally without repercussions, which is a dangerous control they have over our gov)?

JKCalhoun(3408) 7 days ago [-]

Meta could start by being transparent when they are asked to take down a post and could be transparent when they comply.

elihu(10000) 6 days ago [-]

One defense against it might just be to actively crawl Facebook and externally record the contents of posts as soon as they're posted. Then you have a record of everything that got deleted.

I don't know how you scale that up to make it easy for everyone to find 'disappeared' content on any platform. Maybe some kind of peer-to-peer system where everyone's browser cache basically acts as a searchable archive, with a browser plugin that inserts a button into web pages to show disappeared content.

(It's also worth noting that probably a lot of content that was removed by moderators was removed for a legitimate reason. So, ideally you'd have some sort of crowd moderation to get rid of the stuff that really is spam or hate speech or whatever.)

plsbenice34(10000) 7 days ago [-]

Why is the word Israeli removed from the title? and Meta added? Seems like quite a politically-important modification

dang(143) 7 days ago [-]

Edit: ok you guys, all your responses have convinced me that I misread the room, and I'm going to reverse the title edit now.

-- original reply: --

I did those title edits to (marginally) reduce the flamebait effect of the title, in keeping with standard moderation practice (see https://news.ycombinator.com/newsguidelines.html). Titles have by far the biggest impact on discussion quality, so this is a big deal. Especially when the topic is divisive.

ncr100(10000) 7 days ago [-]

The current title (11:36 AM PST) is:

'Leaked Data Reveals Massive Israeli Campaign to Remove Pro-Palestine Posts on Facebook and Instagram'

@dang IDK if this matters, nor when the title was changed (from submission, to now). Just an FYI.

Maken(10000) 7 days ago [-]

The problematic point here is that Facebook is more than willing to obliterate certain topics and political views when requested, not which ones or by whom orders in particular.

switch007(10000) 7 days ago [-]

[flagged]

dang(143) 7 days ago [-]

No, what it proves is that users will flag unsubstantive flamewar posts on Hacker News, regardless of the topic or the commenter's position on the topic.

This is a good thing. Posts like your comment here break the site guidelines badly*, and the users who flagged it were quite correct to do so, regardless of your (or their) political opinion.

* for example, this one: 'Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.', and this one: 'Don't be snarky.'. Can you please review https://news.ycombinator.com/newsguidelines.html and stop doing those things? We'd appreciate it.

turnsout(10000) 7 days ago [-]

Mark needs to go

jsheard(301) 7 days ago [-]

I think he has a majority of the voting shares, so nobody can get rid of him unfortunately. Meta is too big to fail and Zuck is set to be dictator for life if he wants to be.

bazalia(10000) 7 days ago [-]

Is it just me or is this post very low on the hacker news order even though it has much more upvotes in a short time than much of the posts above it.

dang(143) 7 days ago [-]

This is in the FAQ: see https://news.ycombinator.com/newsfaq.html#whyrank ('Why is A ranked below B even though A has more points and is newer?'). But here's a longer answer.

In the case of a story like this, which has significant new information (SNI) [1] on a major ongoing topic (MOT) [2], and at least some hope of a substantive discussion, moderators sometimes turn off the user flags on the story [3] so that it can spend time on HN's front page.

In such cases, we usually adjust the degree to which we turn off the flags so that the tug-of-war between upvotes and flags isn't affected too dramatically. Usually the best way to support a substantive discussion is for the story to remain on HN's front page, but not in the highest few slots, where it would burn much hotter.

Since upvotes and submission time are public data but flags aren't, it can appear like a story is being downweighted when in fact the opposite is the case, as with this thread. That's not a rule, though—we do also downweight stories sometimes. That's why the FAQ explains that you can't derive rank from votes and time alone.

The reason moderation works this way, btw, is that HN is a curated site [4] (and always has been—here's me explaining this when I took over HN 11 years ago: https://news.ycombinator.com/item?id=7494621).

Moderators' job is to jiggle the system out of the failure modes [5] it would otherwise end up in if the software were running unattended. Turning off flags on certain stories, and adding downweight to other stories, are two examples. The goal is the same: to support substantive discussion on interesting stories, and (as a necessary condition for that) prevent the site from burning too hot if we can.

[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

[4] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...

[5] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

abeppu(10000) 7 days ago [-]

... so while we were all worried about TikTok, being owned by a Chinese company, would be a vector for that government to push a skewed/propagandized stream of content on the world, Meta has already been doing it for a foreign government despite not having foreign ownership.

brewtide(10000) 6 days ago [-]

It's all skewed, obviously. It's all about alignment.

mjevans(10000) 7 days ago [-]

I think my country (USA) would be healthier if a common sense viewpoint was selected and held.

Conflicts are always terrible, and the Eurasia / Africa region countries are particularly brutal.

Every citizen of every country has a human right (in a civilized civilization / society) to live a life that does not involve violence. A life where they are not worried about RPGs, bombings, (etc,) or military invasions.

Some sources of conflict involve places which various (different) religions hold as sacred / holy. Those sites should become UN world heritage locations and be managed by the UN in ways that only allow non-military peaceful access for any who want to visit.

With respect to Gaza my personal opinion remains unchanged. Both an innocent civilian people who suffer, and a terrorist government, remain in that region. The civilians should be evacuated. The terrorists who remain after (or whom are caught and found guilty in a trial) should be purged. The country should then be cleaned up, rebuilt, and returned to the innocent people along with a training-wheels UN supported government that brings stability, peace, and prevents a resurgence of hate and terrorism. In a few generations the country can grow more stable and graduate from the guided government structure.

That would be not just a two state solution, but a two states and global peace sites solution.

devilbunny(10000) 7 days ago [-]

I just don't see a way that a two-state solution works. A three-state solution might be feasible (Gaza and West Bank governed separately), but then you have to deal with internal Israeli politics, and I really don't know enough about them to make even an educated guess about how hard it would be to get that through (I would imagine very, but like I said, I know very little about their politics).

thot_experiment(10000) 7 days ago [-]

this is grossly misunderstanding the situation in Gaza, a two state solution was never acceptable to Israel, Hamas as it exists today is a result of Netanyahu policy. Israel created the monster to justify their genocide.

mef51(3579) 7 days ago [-]

'The civilians should be evacuated.' They don't want to leave and Israel uses these 'evacuations' to make sure Palestinians never return, as they did in 1948, 1967, etc[1][2]. This is whitewashing genocide and is an extremely violent view, packaged in reasonable sounding words. Israel has a long documented history of using terrorism to build its state. If you truly oppose terrorism I recommend starting with the books I've sourced.

[1] The ethnic cleansing of Palestine by Ilan Pappé

[2] The Hundred Years' War on Palestine: A History of Settler Colonialism and Resistance, 1917–2017 by Rashid Kahlidi

yoda97(10000) 7 days ago [-]

A two state solution is never possible when one state keeps expanding with impunity, and every time the second state resists it is called a terrorist state. My country resisted colonization in the mid 20th century and the resistance efforts were called terrorism by everyone, nobody calls them terrorists now.

tmnvix(10000) 7 days ago [-]

And I assume after this evacuation, purging, and installment of a new government Israel will magically change its ways? You need to address both sides to find a solution.

jmyeet(10000) 7 days ago [-]

The role of the media (including social media) is to move in lockstep with US domestic and foreign policy. This has been known for some time [1]. It's never as simple as the White House calling up Mark Zuckerberg and saying 'hey, silence X'. It's about a series of filters that decides who is in the media and who has their thumb on the algorithmic scales, as per the famous Noam Chomsky Andrew Marr interview [2] ('What I'm saying is if you believed something different, you wouldn't be sitting where you're sitting').

Noam Chomsky is a national treasure.

When a former Netanyahu adviser and Israeli embassy staffer seemingly has the power to suppress pro-Palestinian speech on Meta platforms [3], nobody should be surprised.

If you're a US citizen who is a journalist critical of a key US ally, that ally is allowed to assassinate you without any objection of repercussions [4].

This is also why Tiktok originally got banned in a bipartisan fashion: the Apartheid Defense League director Jonathon Goldblatt said (in leaked audio) 'we have a Tiktok problem' [5] and weeks later it was banned. Tiktok simply suppresses pro-Palestinian speech less than other platforms.

[1]: https://chomsky.info/consent01/

[2]: https://www.youtube.com/watch?v=qvGmBSHFuj0

[3]: https://www.middleeasteye.net/news/metas-israel-policy-chief...

[4]: https://www.aljazeera.com/news/2023/10/16/israeli-forces-kil...

[5]: https://x.com/Roots_Action/status/1767941861866348615

cypherpunks01(10000) 7 days ago [-]

Hey this Chomsky guy seems pretty smart! Would be great to get him on mainstream media sometime.. hah

rdtsc(3656) 6 days ago [-]

> It's never as simple as the White House calling up Mark Zuckerberg and saying 'hey, silence X'.

The government got so comfy it really got to be that easy:

https://www.pbs.org/newshour/politics/zuckerberg-says-the-wh... (Aug 27, 2024)

> White House, "repeatedly pressured" Facebook for months to take down "certain COVID-19 content including humor and satire."

> The officials "expressed a lot of frustration" when the company didn't agree, he said in the letter.

tdeck(3637) 7 days ago [-]

Not a surprise. I remember last year seeing that posts to https://www.birdsofgaza.com/ were being blocked, and it's hard to think of a more innocuous way of speaking out.

Ecstatify(10000) 6 days ago [-]

It's not only about suppression; it's about cultivating fear around expressing your opinions. There are groups actively working to have individuals fired for voicing support for Palestine.

For instance, a woman wrote "Freedom for Palestine" in Gaelic on LinkedIn, prompting a group of Israelis in a WhatsApp chat to actively coordinate efforts to get her fired.

The General Manager of Wix, Batsheva (Levine) Moshe, responded in a WhatsApp chat saying:

"Hi yes we know. Being taken care of since it was published. I believe there will be an announcement soon regarding our reaction."

Wix were orderd to pay €35K for unfair dismissal.

ref(s):

https://jackpoulson.substack.com/p/inside-the-pro-israel-inf...

https://www.breakingnews.ie/ireland/israeli-tech-firm-ordere...

gryzzly(10000) 6 days ago [-]

do you feel like it is "Israel's war on Gaza"? Does that represent reality fully? Is that what children should be taught, that there is a demonic people that kills children? You don't see any problem with omitting the massacre of israeli civilians, the captured hostages and many thousands of rocket launches towards densely populated civilian communities? is that how we achieve peace in your view?

pbiggar(2837) 3 days ago [-]

Similarly, pro-Palestine content on HN is highly suppressed.

DAGdug(10000) 7 days ago [-]

Just want to call out that the head of the trust and safety/integrity division, Guy Rosen, is an Israeli citizen with a strong pro-Israel bias. He's also a person of questionable morals. From Wikipedia:

" Guy Rosen and Roi Tiger founded Onavo in 2010. In October 2013, Onavo was acquired by Facebook, which used Onavo's analytics platform to monitor competitors. This influenced Facebook to make various business decisions, including its 2014 acquisition of WhatsApp. Since the acquisition, Onavo was frequently classified as being spyware, as the VPN was used to monetize application usage data collected within an allegedly privacy-focused environment."

That Meta considered his questionable ethics a feature not a bug, and repeatedly promoted him, is very problematic.

frob(3155) 7 days ago [-]

I was there during the onavo scandal. It was straight up spyware. They would regularly show graphs of snapchat usage vs messenger vs whatsapp and the snapchat data was explicitly attributed to onovo logs.

mmooss(10000) 6 days ago [-]

It's a conspiracy theory. Plenty of Israeli citizens support Palestinian rights and are opposed to what their government is doing. The guilt by association leads to things like antisemitism and anti-Palestinian hate and all the rest.

bawolff(3354) 7 days ago [-]

The missing part of this article: are the requests valid? Are they actually incitements to terrorism and violence or is it just a clamp down on criticism? The headline of the article implies the latter but the body does not provide any evidence for that.

Like there is a war going on, a pretty nasty one at that. I would expect there to be quite a lot of incitement to violence related to that. I would expect the israeli government to be mostly concerned with incitements of violence against its citizens. In the context of this conflict i would expect such incitements to be mostly be made by the demographics cited in the article due to the nature of the conflict. The article seems like it could be entirely consistent with take downs being used appropriately. It needs more then this to prove its headline.

Heck, from this post we dont even know relative numbers. How does this compare against take down requests from other groups?

janalsncm(10000) 7 days ago [-]

If you have valid rules but in practice only enforce them against a single group, then in some sense you are asking the wrong question.

In other words, for people who assume rule enforcement is supposed to be fair, they see unfair enforcement as hypocrisy. However, if you just see enforcement as another tool to wield against enemies, hypocrisy is irrelevant. What matters is power. It's my basketball, I make the rules.

garbagewoman(10000) 7 days ago [-]

What would you define as "valid"

elihu(10000) 6 days ago [-]

The article does mention it, but I agree that the story is incomplete without a clearer idea (including examples) of what is being censored.

> 'A Human Rights Watch (HRW) report investigating Meta's moderation of pro-Palestine content post-October 7th found that, of 1,050 posts HRW documented as taken-down or suppressed on Facebook or Instagram, 1,049 involved peaceful content in support of Palestine, while just one post was content in support of Israel.'

WhyNotHugo(2949) 6 days ago [-]

> The missing part of this article: are the requests valid?

They are enforced with neither human nor AI review, so the reality is that we don't know. They are enforced by virtue of who submits them, with no question on whether they are valid or not.

Having heard from friends the kind of censorship they face on the topic on Facebook and Instagram when discussing the topics at hand, I know of plenty of situations where people were censored without breaking any rules. They're a small sample of course.

michaelsshaw(10000) 6 days ago [-]

Defending yourself from genocide is not terrorism

buyucu(3661) 6 days ago [-]

Israel is comitting mass murder and genocide. Meta is helping to cover it up.

xg15(2454) 5 days ago [-]

Depends what you consider 'incitement'. The IL government seems to go by 'whoever is not for us is against us' logic:

> A Human Rights Watch (HRW) report investigating Meta's moderation of pro-Palestine content post-October 7th found that, of 1,050 posts HRW documented as taken-down or suppressed on Facebook or Instagram, 1,049 involved peaceful content in support of Palestine, while just one post was content in support of Israel.

> Like there is a war going on, a pretty nasty one at that.

Sorry, but this is already part if the narrative. (Or rather the implication is that this would justify everything because wars seemingly have different rules. But if course only for one side) It's a 'war' were one side inflicts 100 times as many casualties on one side than the other and still has no intention of stopping.

sgregnt(10000) 5 days ago [-]

From the lost of countries and knowing how rampant antisemitism is in these countries I suspect majority of the request are valid and express support and urge for terrorism.

aprilthird2021(10000) 4 days ago [-]

Ask anyone who works at Meta if they are valid, and they themselves will tell you, they don't really know. That should let you know how easy it would be for Israel to wield this tool in their favor. If they actually are doing it unfairly or not, we can never know since these posts are automatically taken down without human review.

xp84(10000) 7 days ago [-]

Edit: I'm deleting most of my post, to avoid politics part and only preserving my 'point'

Basically I'm saying: Nobody has a right to free wide distribution of their thoughts on social media anyway, and also, those who provide these free ad-supported platforms have many reasonable motivations to remove content -- including the belief that the speaker is wrong/spreading lies and propaganda. That doesn't 'silence' them any more than not letting them into my house silences them.

onionisafruit(10000) 7 days ago [-]

It would be interesting to see a random sample of these posts. I know any sample they released would be groomed to make them look good, but it would be interesting if it were possible.

basisword(1073) 7 days ago [-]

Fair enough, but the social media companies should be honest about it. Instead they brag hypocritically about free speech.

I disagree with you though. These global social media platforms have an incredible amount of sway over our society. As long as they have that reach, they should not be allowed to distort and silence.

spencerflem(10000) 7 days ago [-]

Judges have now ruled that suspected 'expected beliefs' that are 'otherwise lawful' is grounds for deportation, if those suspected thoughts are 'antisemitic' (read- supportive of peace in Palestine).

They are literally arresting and deporting people for suspected thoughts.

Student visas are being denied based on social media posts.

This is fascism.

hn_throwaway_99(10000) 7 days ago [-]

> Judges have now ruled that suspected 'expected beliefs' that are 'otherwise lawful' is grounds for deportation, if those suspected thoughts are 'antisemitic'

Do you have a link to what you are referring to?

anigbrowl(54) 6 days ago [-]

Just for context, that judge is an immigration judge, ie a Department of State employee. Immigration judges are not part of the judicial branch (despite the job title) and can't make precedent or interpret law. They are basically a rubber stamp for whatever policy the Secretary of State is pushing.

rdtsc(3656) 6 days ago [-]

> grounds for deportation,

Sadly but nobody is entitled to student visas. They never were. It's mostly at the whim of the state department and they may revoke it for a variety of reasons. Minor misdemeanors or getting caught with DUI would also lead to losing a visa. It's really a 'walk on eggshells' kind of situation. Yeah, in some cases appealing and finding a lawyer may help but it's huge uphill battle.

iddan(10000) 6 days ago [-]

Calling for the annihilation of the Jewish people is not being supportive of peace in Palestine. These students are not innocent.

herf(10000) 7 days ago [-]

This is a really hard problem. Just consider that there are ~150 Muslims for every Jew worldwide. In the USA it's the reverse - 2:1 in favor of Jews, concentrated in particular geographic areas.

Imagine what it means to get ranking right here - if you let just 1% of the international population into the USA ranking system, you have a majority in favor of Palestine, and of course these ideas will spread in communities without a lot of people who can represent Jewish history. It's clear to me why this happens, but fixing in an algorithmic but fair way is also extremely difficult.

wesselbindt(10000) 7 days ago [-]

I think there's an erroneous implicit assumption in your reasoning, namely that to be Zionist is equivalent to be Jewish, and to be anti-zionist is to be Muslim (otherwise, why would you be talking about Jew:Muslim ratios). The fact of the matter is that not every Zionist* is Jewish (in fact, the vast majority of Zionists are christian), and vice versa not every Jewish person is a Zionist (Jewish voice for peace, the ultra orthodox, etc).

But even beyond that, I think engaging in censorship to hide an ethnic cleansing is an affront to humanity.

* Here, I'm taking Zionism to mean to be in support of the way Israel has formed and continued to form in the past 77 or so years. I am aware that there are many different interpretations of Zionism (to illustrate the breadth; Noam Chomsky considered himself a Zionist), but this particular interpretation is the one that is relevant to this conversation.

yodsanklai(10000) 7 days ago [-]

And then Zuckerberg says he's all about free speech, even mocking Europe as not being free-speech enough

impossiblefork(10000) 6 days ago [-]

He's not wrong though, that Europe isn't free-speech enough. I don't care about the hypocrisy, because free speech is so good and so beneficial that I don't care if the proponent is iffy.

jmpman(10000) 6 days ago [-]

Yesterday, my high school son was sitting on the couch. Asked him what he was doing... "social studies on the partitioning of Palestine in 1948". More spicy a topic that I was expecting. Intrigued, I asked ChatGPT a few questions about the religious populations of modern Israel throughout the centuries. Got some interesting results and asked it for some clarification on the political sensitivity of this topic. It agreed it would be challenged by many. Anyway, decided to share it with my son, and texted it to him on his iPhone from my iPhone. Normally that would be sent via iMessage, fully end to end encrypted, and yet this time, when I was sending potentially politically charged views on israel, it was sent as SMS!! Now, I'm not much of a conspiracy theorist, but... that got me questioning why, on any of the thousands of messages I've sent my son, this specific one wasn't sent encrypted. Hmm

t0lo(10000) 6 days ago [-]

Izr revieves almost 50% of the worlds startup funding for cyber security in any given year. Think about what they do with this.

Philpax(761) 6 days ago [-]

You may be interested in seeing what Tal Broda, an executive at OpenAI, posted at the start of the war: https://x.com/StopZionistHate/status/1735471349278052584

switch007(10000) 6 days ago [-]

Isn't another wild thing here that Apple chooses whether to send it encrypted or not? Sorry, haven't used an iPhone with iMessage, not sure how it works.

Try Signal instead perhaps?

mlindner(3663) 6 days ago [-]

SMS messages get sent when you're outside of data network.

sfx77(10000) 6 days ago [-]

'Israeli Campaign' is it really that weird for someone to ask to take down posts calling for their annihilation?

khaledh(3673) 6 days ago [-]

The posts are not calling for Israel's annihilation. They call for stopping the genocide. The posts merely document what Israel is doing in Gaza, since Israel doesn't allow independent journalists to verify and show the world the carnage it's causing to the people of Gaza.

botanical(2899) 6 days ago [-]

If Apartheid South Africa could last just a little bit longer, they would still be an apartheid state like Israel is today.

Western media is just as complicit in this genocide as the fascists in charge of the Israeli government. And media are self-censoring which is reprehensible.

The idea of Hamas wouldn't exist if Gaza (and the West Bank) wasn't occupied by land, air and sea; their land stolen on a daily basis, and Palestinian people treated as subhuman animals.

YZF(10000) 6 days ago [-]

Palestinian violence predates the 1967 and 1948. Also Gaza wasn't occupied since Israel left it in 2005.

Here's is one example from 1954 when Israel did not control Gaza or the West Bank: https://en.wikipedia.org/wiki/Ma%27ale_Akrabim_massacre

'The Ma'ale Akrabim massacre, known in English as the Scorpions Pass Massacre, was an attack on an Israeli passenger bus, carried out on 17 March 1954, in the middle of the day. Eleven passengers were shot dead by the attackers who ambushed and boarded the bus. One passenger died 32 years later of his injuries, in a state of paralysis and partial consciousness. Four passengers survived, two of whom had been injured by the gunmen.'

Palestinians are largely in the reality they're in due to the violence.





Historical Discussions: Whistleblower details how DOGE may have taken sensitive NLRB data (April 15, 2025: 1081 points)

(1081) Whistleblower details how DOGE may have taken sensitive NLRB data

1081 points 3 days ago by rbanffy in 11th position

www.npr.org | Estimated reading time – 72 minutes | comments | anchor

The DOGE team may have taken data related to union organizing and labor complaints and hid its tracks, according to a whistleblower. Charlotte Gomez for NPR hide caption

toggle caption
Charlotte Gomez for NPR

In the first days of March, a team of advisers from President Trump's new Department of Government Efficiency initiative arrived at the Southeast Washington, D.C., headquarters of the National Labor Relations Board.

The small, independent federal agency investigates and adjudicates complaints about unfair labor practices. It stores reams of potentially sensitive data, from confidential information about employees who want to form unions to proprietary business information.

The DOGE employees, who are effectively led by White House adviser and billionaire tech CEO Elon Musk, appeared to have their sights set on accessing the NLRB's internal systems. They've said their unit's overall mission is to review agency data for compliance with the new administration's policies and to cut costs and maximize efficiency.

But according to an official whistleblower disclosure shared with Congress and other federal overseers that was obtained by NPR, subsequent interviews with the whistleblower and records of internal communications, technical staff members were alarmed about what DOGE engineers did when they were granted access, particularly when those staffers noticed a spike in data leaving the agency. It's possible that the data included sensitive information on unions, ongoing legal cases and corporate secrets — data that four labor law experts tell NPR should almost never leave the NLRB and that has nothing to do with making the government more efficient or cutting spending.

Meanwhile, according to the disclosure and records of internal communications, members of the DOGE team asked that their activities not be logged on the system and then appeared to try to cover their tracks behind them, turning off monitoring tools and manually deleting records of their access — evasive behavior that several cybersecurity experts interviewed by NPR compared to what criminal or state-sponsored hackers might do.

White House senior adviser Elon Musk walks to the White House after landing in Marine One with President Trump on March 9. Samuel Corum/Getty Images hide caption

toggle caption
Samuel Corum/Getty Images

The employees grew concerned that the NLRB's confidential data could be exposed, particularly after they started detecting suspicious log-in attempts from an IP address in Russia, according to the disclosure. Eventually, the disclosure continued, the IT department launched a formal review of what it deemed a serious, ongoing security breach or potentially illegal removal of personally identifiable information. The whistleblower believes that the suspicious activity warrants further investigation by agencies with more resources, like the Cybersecurity and Infrastructure Security Agency or the FBI.

The labor law experts interviewed by NPR fear that if the data gets out, it could be abused, including by private companies with cases before the agency that might get insights into damaging testimony, union leadership, legal strategies and internal data on competitors — Musk's SpaceX among them. It could also intimidate whistleblowers who might speak up about unfair labor practices, and it could sow distrust in the NLRB's independence, they said.

The new revelations about DOGE's activities at the labor agency come from a whistleblower in the IT department of the NLRB, who disclosed his concerns to Congress and the U.S. Office of Special Counsel in a detailed report that was then provided to NPR. Meanwhile, his attempts to raise concerns internally within the NLRB preceded someone 'physically taping a threatening note' to his door that included sensitive personal information and overhead photos of him walking his dog that appeared to be taken with a drone, according to a cover letter attached to his disclosure filed by his attorney, Andrew Bakaj of the nonprofit Whistleblower Aid.

The whistleblower's account is corroborated by internal documentation and was reviewed by 11 technical experts across other government agencies and the private sector. In total, NPR spoke to over 30 sources across the government, the private sector, the labor movement, cybersecurity and law enforcement who spoke to their own concerns about how DOGE and the Trump administration might be handling sensitive data, and the implications for its exposure. Much of the following account comes from the whistleblower's official disclosure and interviews with NPR.

'I can't attest to what their end goal was or what they're doing with the data,' said the whistleblower, Daniel Berulis, in an interview with NPR. 'But I can tell you that the bits of the puzzle that I can quantify are scary. ... This is a very bad picture we're looking at.'

The whistleblower's story sheds further light on how DOGE is operating inside federal systems and comes on the heels of testimony in more than a dozen court cases across the United States that reveal how DOGE rapidly gained access to private financial and personal information on hundreds of millions of Americans. It's unclear how or whether DOGE is protecting the privacy of that data. Meanwhile, the threatening note, though its origins are unknown, is reflective of the current climate of fear and intimidation toward whistleblowers.

Tim Bearese, the NLRB's acting press secretary, denied that the agency granted DOGE access to its systems and said DOGE had not requested access to the agency's systems. Bearese said the agency conducted an investigation after Berulis raised his concerns but 'determined that no breach of agency systems occurred.'

Notwithstanding the NLRB's denial, the whistleblower's disclosure to Congress and other federal overseers includes forensic data and records of conversations with colleagues that provide evidence of DOGE's access and activities. Meanwhile, NPR's extensive reporting makes clear that DOGE's access to data is a widespread concern. Across the government, 11 sources directly familiar with internal operations in federal agencies and in Congress told NPR that they share Berulis' concerns, and some have seen other evidence that DOGE is exfiltrating sensitive data for unknown reasons.

After this story published, White House spokesperson Anna Kelly said in a statement, 'It is months-old news that President Trump signed an Executive Order to hire DOGE employees at agencies and coordinate data sharing. Their highly-qualified team has been extremely public and transparent in its efforts to eliminate waste, fraud, and abuse across the Executive Branch, including the NLRB.'

Taking apart computers to protecting government data

Instead of a brand-new car for a 16th-birthday present, Berulis got his first computer.

It's a familiar story for tech nerds the world over: He methodically took the machine apart 'to figure out how it works,' just like he had dissected radios from the thrift store years earlier. 'I electrocuted myself once,' he recalled.

Berulis was always interested in public service, but the traditional paths didn't suit him.

A knee injury prevented him from joining the military. He served as a volunteer firefighter for a period and donated his time working for a local rape crisis hotline, answering calls from victims in need of someone to listen. But, he told NPR, 'I had an interest in serving my country.'

Berulis had been a technical consultant for many years, including in auditing and modernizing corporate systems, when a job opened up at the National Labor Relations Board.

Daniel Berulis started working at the National Labor Relations Board about six months before President Trump started his second term. Grace Raver/NPR hide caption

toggle caption
Grace Raver/NPR

While he didn't know much about the agency, Berulis quickly found its mission to protect employees' rights in line with his long-standing desire 'to help people.'

He started about six months before President Trump was inaugurated for his second term this past January. Berulis said he hit the ground running, securing the NLRB's cloud-based data servers and reinforcing what's called 'zero trust' principles, which means that users can get access only to the parts of the system they need in order to do their jobs — no more, no less. That way, if an attacker gets hold of a single username and password, the attacker can't access the whole system.

'When I first started, it was a dream come true,' he said. 'There was a great opportunity to build up and do some good.' But after the inauguration, he described a 'culture of fear' descending over the agency.

DOGE arrives

The first week of March, engineers associated with DOGE arrived at the NLRB's headquarters, according to Berulis' disclosure. Beforehand, they had asked about what software, hardware, programming languages and applications the NLRB was using. DOGE learned that it used commercially available cloud infrastructure that businesses typically use, which connects to government cloud systems at other agencies and can be accessed remotely.

Berulis said he and several colleagues saw a black SUV and police escort enter the garage, after which building security let the DOGE staffers in. They interacted with a small number of staffers, never introducing themselves to most of the IT team.

Berulis says he was told by colleagues that DOGE employees demanded the highest level of access, what are called 'tenant owner level' accounts inside the independent agency's computer systems, with essentially unrestricted permission to read, copy and alter data, according to Berulis' disclosure.

When an IT staffer suggested a streamlined process to activate those accounts in a way that would let their activities be tracked, in accordance with NLRB security policies, the IT staffers were told to stay out of DOGE's way, the disclosure continues.

For cybersecurity professionals, a failure to log activity is a cardinal sin and contradicts best practices as recommended by the National Institute of Standards and Technology and the Department of Homeland Security's Cybersecurity and Infrastructure Security Agency, as well as the FBI and the National Security Agency.

'That was a huge red flag,' said Berulis. 'That's something that you just don't do. It violates every core concept of security and best practice.'

Those forensic digital records are important for record-keeping requirements and they allow for troubleshooting, but they also allow experts to investigate potential breaches, sometimes even tracing the attacker's path back to the vulnerability that let them inside a network. The records can also help experts see what data might have been removed. Basic logs would likely not be enough to demonstrate the extent of a bad actor's activities, but it would be a start. There's no reason for any legitimate user to turn off logging or other security tools, cybersecurity experts say.

'None of this is normal,' said Jake Braun, the executive director of the Cyber Policy Initiative at the University of Chicago's Harris School of Public Policy and former acting principal deputy national cyber director at the White House, in an interview with NPR about the whistleblower's disclosure. 'This type of activity is why the government buys insider-threat-monitoring technology. So we can know things like this are happening and stop sensitive data exfiltration before it happens,' he told NPR.

However, the NLRB's budget hasn't had the money to pay for tools like that for years, Berulis said.

A backdoor to government systems?

A couple of days after DOGE arrived, Berulis saw something else that alarmed him while browsing the internet over the weekend.

Massachusetts Institute of Technology graduate and DOGE engineer Jordan Wick had been sharing information about coding projects he was working on to his public account with GitHub, a website that allows developers to create, store and collaborate on code.

After journalist Roger Sollenberger started posting on X about the account, Berulis noticed something Wick was working on: a project, or repository, titled 'NxGenBdoorExtract.'

Wick made it private before Berulis could investigate further, he told NPR. But to Berulis, the title itself was revealing.

'So when I saw this tool, I immediately panicked, just for lack of a better term,' he said. 'I kind of had a conniption and said, 'Whoa, whoa, whoa.'' He immediately alerted his whole team.

While NPR was unable to recover the code for that project, the name itself suggests that Wick could have been designing a backdoor, or 'Bdoor,' to extract files from the NLRB's internal case management system, known as NxGen, according to several cybersecurity experts who reviewed Berulis' conclusions.

Wick did not respond to NPR's requests for comment.

A screenshot of DOGE engineer Jordan Wick's public GitHub account that shows 'NxGenBdoorExtract.' The name itself suggests that Wick could have been designing a backdoor, or 'Bdoor,' to extract files from the NLRB's internal case management system. Daniel Berulis/Annotation by NPR hide caption

toggle caption
Daniel Berulis/Annotation by NPR

'It definitely seems rather odd to name it that,' said one of the engineers who built NxGen and asked for anonymity so as not to jeopardize their ability to work with the government again. 'Or brazen, if you're not worried about consequences.'

'The whole idea of removing logging and [getting] tenant-level access is the most disturbing part to me,' the engineer said.

NxGen is an internal system that was designed specifically for the NLRB in-house, according to several of the engineers who created the tool and who all spoke to NPR on condition of anonymity to avoid retaliation or adverse consequences for any future government work.

The engineers explained that while many of the NLRB's records are eventually made public, the NxGen case management system hosts proprietary data from corporate competitors, personal information about union members or employees voting to join a union, and witness testimony in ongoing cases. Access to that data is protected by numerous federal laws, including the Privacy Act.

Those engineers were also concerned by DOGE staffers' insistence that their activities not be logged, allowing them to probe the NLRB's systems and discover information about potential security flaws or vulnerabilities without being detected.

'If he didn't know the backstory, any [chief information security officer] worth his salt would look at network activity like this and assume it's a nation-state attack from China or Russia,' said Braun, the former White House cyber official.

Putting the puzzle pieces together

About a week after arriving, the DOGE engineers had left the NLRB and deleted their accounts, according to Berulis' disclosure to Congress.

In the office, Berulis had had limited visibility into what the DOGE team was up to in real time.

That's partly because, he said, the NLRB isn't advanced when it comes to detecting insider threats or potentially malicious actors inside the agency itself. 'We as an agency have not evolved to account for those,' he explained. 'We were looking for [bad actors] outside,' he said.

But he counted on DOGE leaving at least a few traces of its activity behind, puzzle pieces he could assemble to try to put together a picture of what happened — details he included in his official disclosure.

First, at least one DOGE account was created and later deleted for use in the NLRB's cloud systems, hosted by Microsoft: '[email protected].'

Then, DOGE engineers installed what's called a 'container,' a kind of opaque virtual computer that can run programs on a machine without revealing its activities to the rest of the network. On its own, that wouldn't be suspicious, though it did allow the engineers to work invisibly and left no trace of its activities once it was removed.

Then, Berulis started tracking sensitive data leaving the places it's meant to live, according to his official disclosure. First, he saw a chunk of data exiting the NxGen case management system's 'nucleus,' inside the NLRB system, Berulis explained. Then, he saw a large spike in outbound traffic leaving the network itself.

This screenshot shows a large spike in outbound traffic leaving the NLRB system. Whistleblower Aid hide caption

toggle caption
Whistleblower Aid

From what he could see, the data leaving, almost all text files, added up to around 10 gigabytes — or the equivalent of a full stack of encyclopedias if someone printed them, he explained. It's a sizable chunk of the total data in the NLRB system, though the agency itself hosts over 10 terabytes in historical data. It's unclear which files were copied and removed or whether they were consolidated and compressed, which could mean even more data was exfiltrated. It's also possible that DOGE ran queries looking for specific files in the NLRB's system and took only what it was looking for, according to the disclosure.

Regardless, that kind of spike is extremely unusual, Berulis explained, because data almost never directly leaves from the NLRB's databases. In his disclosure, Berulis shared a screenshot tracking data entering and exiting the system, and there's only one noticeable spike of data going out. He also confirmed that no one at the NLRB had been saving backup files that week or migrating data for any projects.

Even when external parties like lawyers or overseers like the inspector general are granted guest accounts on the system, it's only to view the files relevant to their case or investigation, explained labor law experts who worked with or at the NLRB, in interviews with NPR.

'None of that confidential and deliberative information should ever leave the agency,' said Richard Griffin, who was the NLRB general counsel from 2013 to 2017, in an interview with NPR.

'We are under assault right now'

For cybersecurity experts, that spike in data leaving the system is a key indicator of a breach, Berulis explained.

'We are under assault right now,' he remembered thinking.

When Berulis asked his IT colleagues whether they knew why the data was exfiltrated or whether anyone else had been using containers to run code on the system in recent weeks, no one knew anything about it or the other unusual activities on the network, according to his disclosure. In fact, when they looked into the spike, they found that logs that were used to monitor outbound traffic from the system were absent. Some actions taken on the network, including data exfiltration, had no attribution — except to a 'deleted account,' he continued. 'Nobody knows who deleted the logs or how they could have gone missing,' Berulis said.

The IT team met to discuss insider threats — namely, the DOGE engineers, whose activities it had little insight into or control over. 'We had no idea what they did,' he explained. Those conversations are reflected in his official disclosure.

They eventually launched a formal breach investigation, according to the disclosure, and prepared a request for assistance from the Cybersecurity and Infrastructure Security Agency (CISA). However, those efforts were disrupted without an explanation, Berulis said. That was deeply troubling to Berulis, who felt he needed help to try to get to the bottom of what happened and determine what new vulnerabilities might be exploited as a result.

In the days after Berulis and his colleagues prepared a request for CISA's help investigating the breach, Berulis found a printed letter in an envelope taped to his door, which included threatening language, sensitive personal information and overhead pictures of him walking his dog, according to the cover letter attached to his official disclosure. It's unclear who sent it, but the letter made specific reference to his decision to report the breach. Law enforcement is investigating the letter.

'If the underlying disclosure wasn't concerning enough, the targeted, physical intimidation and surveillance of my client is. If this is happening to Mr. Berulis, it is likely happening to others and brings our nation more in line with authoritarian regimes than with open and free democracies,' wrote Bakaj, his attorney, in a statement sent to NPR. 'It is time for everyone – and Congress in particular – to acknowledge the facts and stop our democracy, freedom, and liberties from slipping away, something that will take generations to repair.'

In part because of the stymied internal investigation and the attempts to silence him, Berulis decided to come forward publicly.

In fact, despite all that, Berulis managed to uncover some stranger and more troubling details about what happened while DOGE was logged on, which he enumerated in his official declaration.

Unknown users also gave themselves a high-level access key, what's called a SAS token, meaning 'shared access signature,' to access storage accounts, before deleting it. Berulis said there was no way to track what they did with it.

Someone had disabled controls that would prevent insecure or unauthorized mobile devices from logging on to the system without the proper security settings. There was an interface exposed to the public internet, potentially allowing malicious actors access to the NLRB's systems. Internal alerting and monitoring systems were found to be manually turned off. Multifactor authentication was disabled. And Berulis noticed that an unknown user had exported a 'user roster,' a file with contact information for outside lawyers who have worked with the NLRB.

Berulis said he noticed five PowerShell downloads on the system, a task automation program that would allow engineers to run automated commands. There were several code libraries that got his attention — tools that he said appeared to be designed to automate and mask data exfiltration. There was a tool to generate a seemingly endless number of IP addresses called 'requests-ip-rotator,' and a commonly used automation tool for web developers called 'browserless' — both repositories starred or favorited by Wick, the DOGE engineer, according to an archive of his GitHub account reviewed by NPR.

While investigating the data taken from the agency, Berulis tried to determine its ultimate destination. But whoever had exfiltrated it had disguised its destination too, according to the disclosure.

DOGE staffers had permission to access the system, but removing data is another matter.

Berulis says someone appeared to be doing something called DNS tunneling to prevent the data exfiltration from being detected. He came to that conclusion, outlined in his disclosure, after he saw a traffic spike in DNS requests parallel to the data being exfiltrated, a spike 1,000 times the normal number of requests.

When someone uses this kind of technique, they set up a domain name that pings the target system with questions or queries. But they configure the compromised server so that it answers those DNS queries by sending out packets of data, allowing the attacker to steal information that has been broken down into smaller chunks.

'We've seen Russian threat actors do things like this on U.S. government systems,' said one threat intelligence researcher who requested anonymity because they weren't authorized to speak publicly by their employer. That analyst, who has extensive experience hunting nation-state-sponsored hackers, reviewed the whistleblower's technical claims.

'The difference is, they were given the keys to the front door,' the researcher continued. While the researcher clarified that it would be difficult to fully verify what happened without full access to the NLRB system, they said Berulis' conclusions and accompanying evidence were a cause for concern. 'None of this is standard,' they said.

Russ Handorf, who served in the FBI for a decade in various cybersecurity roles, also reviewed Berulis' extensive technical forensic records and analysis and spoke to NPR about his conclusions.

'All of this is alarming,' he said. 'If this was a publicly traded company, I would have to report this [breach] to the Securities and Exchange Commission. The timeline of events demonstrates a lack of respect for the institution and for the sensitivity of the data that was exfiltrated. There is no reason to increase the security risk profile by disabling security controls and exposing them, less guarded, to the internet. They didn't exercise the more prudent standard practice of copying the data to encrypted and local media for escort.'

'Until there's an investigation done, there's no way to definitively prove who did it,' Handorf concluded.

'No reason whatsoever for accessing the information'

The National Labor Relations Board seal hangs inside a hearing room at the agency's headquarters in Washington, D.C., in 2019. Andrew Harrer/Bloomberg via Getty Images hide caption

toggle caption
Andrew Harrer/Bloomberg via Getty Images

DOGE's intentions with regard to the NLRB data remain unclear. Many of the systems that DOGE embedded itself in across the rest of the government have payment or employment data, information that it could use to evaluate which grants and programs to halt and whom to fire.

But the case management system is very different.

It houses information about ongoing contested labor cases, lists of union activists, internal case notes, personal information from Social Security numbers to home addresses, proprietary corporate data and more information that never gets published openly.

Experts interviewed by NPR acknowledge that there are inefficiencies across government that warrant further review, but they say they don't see a single legitimate reason that DOGE staffers would need to remove the data from the case management system to resolve those problems.

'There is no reason whatsoever for accessing the information. Now, could any agency be more efficient? More effective? Positively. But what you need for that is people who understand what the agency does. That is not by mining data, putting algorithms in and creating a breach of security,' said Harley Shaiken, a professor emeritus at the University of California, Berkeley who specializes in labor and information technology.

'There is nothing that I can see about what DOGE is doing that follows any of the standard procedures for how you do an audit that has integrity and that's meaningful and will actually produce results that serve the normal auditing function, which is to look for fraud, waste and abuse,' said Sharon Block, the executive director of Harvard Law School's Center for Labor and a Just Economy and a former NLRB board member.

'The mismatch between what they're doing and the established, professional way to do what they say they're doing ... that just kind of gives away the store, that they are not actually about finding more efficient ways for the government to operate,' Block said.

For labor law experts, the mere possibility that sensitive records were copied is a serious danger that could create a chilling effect for employees everywhere who turn to the National Labor Relations Board for protection.

'Just saying that they have access to the data is intimidating,' said Kate Bronfenbrenner, the director of labor education research at Cornell University and co-director of the Worker Empowerment Research Network. 'People are going to go, 'I'm not going to testify before the board because, you know, my employer might get access.''

Bronfenbrenner, the child of immigrant parents who fled the Soviet Union and Nazi-controlled Germany, said she spends a lot of time thinking about how systems can crumble under the right circumstances. 'You know, there's this belief that we have these checks and balances ... but anyone who's part of the labor movement should know that's not true,' she told NPR.

With access to the data, it would make it easier for companies to fire employees for union organizing or keep blacklists of organizers — illegal activities under federal labor laws enforced by the NLRB. But 'people get fired in this country all the time for the lawful act of trying to organize a union,' said Block.

Having a copy of the opposing counsel's notes as companies prepare for legal challenges would also be an attractive possibility, she continued.

It's not just employees who might suffer if this data got out. Companies also sometimes provide detailed statements on internal business planning and corporate structure in the midst of unfair-labor-practice complaint proceedings. If a company was attempting to fire someone who it alleged had disclosed trade secrets and was fighting an unfair-labor-practice complaint based around that decision, those trade secrets might come up in the board's investigation too. That information would be valuable to competitors, regulators and others.

Overall, the potential exposure of the NLRB's data could have serious implications.

'I think it is very concerning,' said Shaiken. 'It could result in damage to individual workers, to union-organizing campaigns and to unions themselves,' he said.

'It is bringing a wrecking ball into the dentist office, meaning this is wildly disproportionate and raises real dangers,' Shaiken continued.

A conflict of interest and the dangers of exposure

Labor law experts were particularly concerned about what they described as clear conflicts of interest, particularly when it comes to Elon Musk, his companies and his vast network of former employees and allies who are now getting access to government jobs and data.

Trump and Musk, during an interview with Fox News's Sean Hannity, said Musk would recuse himself from anything involving his companies. 'I haven't asked the president for anything ever,' Musk said. 'I'm getting a sort of a daily proctology exam here. You know, it's not like I'll be getting away [with] something in the dead of night.' However, DOGE has been granted high-level access to a lot of data that could benefit Musk, and there has been no evidence of a firewall preventing misuse of that data.

There are multiple ongoing cases involving Musk and the NLRB. For one, after a group of former SpaceX employees lodged a complaint with the NLRB, lawyers representing SpaceX, some of whom were recently hired into government jobs, filed suit against the NLRB. They argued that the agency's structure is unconstitutional.

Elon Musk speaks with then-President-elect Donald Trump and guests at a viewing of the launch of the sixth test flight of the SpaceX Starship rocket on Nov. 19, 2024, in Brownsville, Texas. Brandon Bell/Getty Images hide caption

toggle caption
Brandon Bell/Getty Images

Sen. Chris Murphy, D-Conn. raised his concerns about Musk accessing sensitive labor investigation data on cases against his companies or competitors during the confirmation hearing for Trump's labor secretary, Lori Chavez-DeRemer, in mid-February. He pressed her to answer whether she believed the NLRB is constitutional and to commit to keeping sensitive data confidential. While she said she was committed to 'privacy' and said she respects the NLRB's 'authority,' she insisted that Trump 'has the executive power to exercise it as he sees fit.'

All this is happening in the context of a broader attempt by the White House to hamstring labor agencies.

The NLRB was created 'to guarantee workers' rights to organize and to address problems that workers have in the workplace,' said Shaiken, of UC Berkeley. Under President Joe Biden, he recalled, the labor movement enjoyed an unusual amount of support from Washington. 'But what we have seen is a sharp slamming of the brakes to that and putting the vehicle in reverse in terms of what Trump has done so far,' he continued.

In addition to sending DOGE to the NLRB, the Trump administration tried to neutralize the board's power to enforce labor law by removing its member Gwynne Wilcox. Courts have gone back and forth on whether Wilcox's removal was illegal, as presidents are meant to demonstrate cause for dismissal of independent board members.

Representatives of DOGE and former colleagues of Musk's who have been installed across the federal government have failed to reassure the public or the courts that they have taken the proper precautions to protect the data they're ingesting and that private business interests won't influence how that data is used or what policy decisions are made, Block and the other labor law experts interviewed by NPR say.

'It's not that he's a random person who's getting information that a random person shouldn't have access to,' said Harvard Law's Block. 'But if they really did get everything, then he has information about the cases the government is building against him,' she said.

'DOGE is, whether they admit it or not, headed by somebody who is the subject of active investigation and prosecution of cases. It is incredibly troubling,' she said.

Musk's company xAI could also benefit from sucking up all the data DOGE has collected to train its algorithms. Cybersecurity experts like Bruce Schneier, a well-known cryptographer and adjunct lecturer at the Harvard Kennedy School, have pointed to this concern at length in interviews and written pieces.

According to two federal government sources who were not authorized to speak publicly about their workplaces and who shared email documentation with NPR, managers have consistently been warning employees that their data could be subject to AI review, particularly their email responses to the Musk-led campaign to get federal employees to detail 'what they did last week' in five bullet points every Monday.

'It's not a flight of imagination to see several DOGE staffers release some of that [data] surreptitiously to Musk or people close to him,' said Shaiken.

Access for adversaries

If the data isn't properly protected after it leaves the agency or if DOGE left a digital door open to the agency itself, data could also be exposed to potential sale or theft by criminals or foreign adversaries. An attacker could also try to take advantage of the connections between the NLRB's cloud account and other government cloud environments, using their access to the NLRB as a foothold to move to other networks.

'Both criminals and foreign adversaries traditionally have used information like this to enrich themselves through a variety of actions,' explained Handorf, the former FBI cyber official. 'That includes blackmail, targeting and prioritizing intellectual property theft for espionage or even harming a company to enrich another.'

Within minutes after DOGE accessed the NLRB's systems, someone with an IP address in Russia started trying to log in, according to Berulis' disclosure. The attempts were 'near real-time,' according to the disclosure. Those attempts were blocked, but they were especially alarming. Whoever was attempting to log in was using one of the newly created DOGE accounts — and the person had the correct username and password, according to Berulis. While it's possible the user was disguising their location, it's highly unlikely they'd appear to be coming from Russia if they wanted to avoid suspicion, cybersecurity experts interviewed by NPR explained.

On their own, a few failed login attempts from a Russian IP address aren't a smoking gun, those cybersecurity experts interviewed by NPR said. But given the overall picture of activity, it's a concerning sign that foreign adversaries may already be searching for ways into government systems that DOGE engineers may have left exposed.

'When you move fast and break stuff, the opportunity to ride the coattails of authorized access is ridiculously easy to achieve,' said Handorf. What he means is that if DOGE engineers left access points to the network open, it would be very easy for spies or criminals to break in and steal data behind DOGE.

He said he could also see foreign adversaries trying to recruit or pay DOGE team members for access to sensitive data. 'It would not surprise me if DOGE is accidentally compromised.'

'This is exactly why we usually architect systems using best practices like the principle of least privilege,' Ann Lewis, the former director of Technology Transformation Services at the General Services Administration, told NPR in an interview. 'The principle of least privilege is a fundamental cybersecurity concept ... that states that users should have only the minimum rights, roles and permissions required to perform their roles and responsibilities. This protects access to high-value data and critical assets and helps prevent unauthorized access, accidental damage from user errors and malicious actions. '

Bakaj, Berulis' lawyer, told NPR in a written statement: 'This case has been particularly sensitive as it involves the possibility of sophisticated foreign intelligence gaining access to sensitive government systems, which is why we went to the Senate Intelligence Committee directly.'

A troubling pattern

The NLRB isn't alone in those concerns.

In over a dozen lawsuits in federal courts around the country, judges have demanded that DOGE explain why it needs such expansive access to sensitive data on Americans, from Social Security records to private medical records and tax information. But the Trump administration has been unable to give consistent and clear answers, largely dismissing cybersecurity and privacy concerns.

In one case dealing with Treasury Department payment systems that control trillions of dollars in federal spending, U.S. District Judge Jeannette Vargas blocked DOGE access on Feb. 21, finding 'a real possibility exists that sensitive information has already been shared outside of the Treasury Department, in potential violation of federal law.'

It's an area of focus for Democratic lawmakers on the House Committee on Oversight and Government Reform.

U.S. District Judge Jeannette Vargas blocked DOGE access to the Treasury Department over the possibility that 'sensitive information has already been shared outside of the Treasury Department.' Alex Brandon/AP hide caption

toggle caption
Alex Brandon/AP

An aide for the Democratic minority on the House Oversight Committee who was not authorized to speak publicly told NPR that the committee is in possession of multiple verifiable reports showing that DOGE has exfiltrated sensitive government data across agencies for unknown purposes, revealing that Berulis' disclosure is not an isolated incident.

But government cybersecurity officials are already resigning or being fired, forced to relocate or put on administrative leave all over the federal government, from the Cybersecurity and Infrastructure Security Agency to the Interior Department. That has limited their power to respond to the ongoing disruptions or keep track of what DOGE is doing.

One of the first people to speak out about DOGE's access to sensitive data was Erie Meyer, who resigned as the chief technology officer at the Consumer Financial Protection Bureau (CFPB) in February. She has provided testimony in ongoing court cases surrounding DOGE's access and also spoke to NPR in an interview. The CFPB has sensitive and potentially market-moving data. Meyer said DOGE employees granted themselves 'God-tier' access to the CFPB's systems, turned off auditing and event logs and put the cybersecurity experts responsible for insider threat detection on administrative leave. When IT experts at the CFPB planned to conduct an 'after action' report on DOGE's activities, they were stonewalled, she continued.

When she heard about how DOGE engineers operated at the NLRB, particularly the steps they took to obfuscate their activities, she recognized a pattern.

'I am trembling,' she said upon hearing about the potential exposure of data from the NLRB. 'They can get every piece of whistleblower testimony, every report, everything. This is not good.'

Other technical employees working with government agencies who spoke to NPR shared Berulis' concerns.

'Our cyber teams are pissed because they have to sit on their hands when every single alarm system we have regarding insider threats is going off,' said one employee at an agency of the Interior Department who requested anonymity, fearing retribution. Cybersecurity teams wanted to shut off new users' access to the system, the employee continued, but were ordered to stand down.

Meanwhile, in a letter published on March 13 on Federal News Network, 46 former senior officials from the General Services Administration, one of the government agencies hardest hit by DOGE's cost-cutting efforts and that oversees nearly all federal buildings and purchasing, wrote that they believed 'highly-sensitive IT systems are being put at risk and sensitive information is being downloaded to unknown, unvetted external sources in clear violation of privacy and data-protection rules.'

The tip of the iceberg

The Trump administration could be trying to codify DOGE's practices into how the government shares information, said Kel McClanahan, the executive director of nonprofit public interest law firm National Security Counselors, who is representing federal employees in a lawsuit concerning the Office of Personnel Management's use of a private email server.

Weeks after DOGE staffers descended on federal buildings across Washington, Trump issued an executive order urging increased data sharing 'by eliminating information silos' in what's seen by experts like McClanahan as an attempt to give DOGE engineers further top cover in accessing and amalgamating sensitive federal data, despite laws concerning privacy and cybersecurity.

'The entire reason we have a Privacy Act is that Congress realized 50 years ago that the federal government was just overflowing with information about normal everyday people and needed some guardrails in place,' McClanahan told NPR. 'The information silos are there for a reason,' he continued. 'It's astonishing to me that the very people who not a handful of years ago were screaming about the government tracking us with vaccines now cheer for feeding every piece of information about themselves into Elon Musk's stupid Skynet.'

DOGE appears to still be in the process of visiting federal agencies across the country, including just recently the Securities and Exchange Commission, according to one former government source directly familiar with the matter who requested anonymity to share information they weren't authorized to share. Across the government, it's unclear how much sensitive data has been removed and collected and combined.

It's also unclear where the labor data went and who has access to it. But for experts in workers' rights, the threat is immediate and existential.

'This shocks the conscience,' said Richard Griffin, the former general counsel of the NLRB. 'And if DOGE operatives captured and removed case files, it could constitute a violation of the Privacy Act.'

For Berulis, it was important to speak out, because he believes people deserve to know how the government's data and computer systems are at risk, and to prevent further damage. As a former IT consultant, Berulis says he would have been fired for operating like DOGE.

Daniel Berulis hopes that there might be further investigations into mishandling of sensitive data across the federal government. Grace Raver/NPR hide caption

toggle caption
Grace Raver/NPR

Disclosing his concerns 'was a moral imperative at this point,' he said. 'I've never encountered this in my 20 years of IT.'

His hope is that there might be further investigations into mishandling of sensitive data across the federal government.

'I believe with all my heart that this goes far beyond just case data,' he said. 'I know there are [people] at other agencies who have seen similar behavior. I firmly believe that this is happening maybe even to a greater extent at other agencies.'

For overseers, investigators and IT experts in a similar position, he hopes to provide a road map of what to look for.

'It was my goal by disclosing to Congress not to focus on me at all, but to give them information that they might not necessarily have, the things that you don't necessarily look for unless you know where to look,' he continued.

The NLRB said it would cooperate with any investigations that stem from Berulis' disclosure to Congress.

'As an agency protecting employee rights, the NLRB respects its employee's right to bring whistleblower claims to Congress and the Office of Special Counsel, and the Agency looks forward to working with those entities to resolve the complaints,' said Bearese, the agency's acting spokesperson, in a statement.

Berulis had a simple request for the DOGE engineers: 'Be transparent. If you have nothing to hide, don't delete logs, don't be covert. ... Be open, because that's what efficiency is really about. If this is all a huge misunderstanding, then just prove it. Put it out there. That's all I'm asking.'

But ultimately, if the systems that DOGE accesses are left insecure, it might not matter if its intentions are honorable, he concluded.

'This could just be the start of the operation. ... They still haven't crossed that boundary where they're plugged into every federal system out there,' he continued. 'So maybe there is still time.'

NPR's Stephen Fowler contributed reporting. NPR's Brett Neely edited this story.

Have information or evidence to share about DOGE's access to data inside the federal government? Reach out to the author, Jenna McLaughlin, through encrypted communications on Signal at jennamclaughlin.54. Stephen Fowler is available on Signal at stphnfwlr.25. Please use a nonwork device.




All Comments: [-] | anchor

g42gregory(2377) 2 days ago [-]

Here is the thing that blows my mind: why is there an implicit assumption that this article is an honest reporting and not a propaganda piece? Don't get me wrong, I am not saying that it is. What I am saying is that, at the very least, this question should always be asked first about any reporting.

Llamamoe(10000) 2 days ago [-]

Because this would be very in line with how DOGE has conducted itself so far.

zelon88(10000) 1 day ago [-]

NPR is a public entity. It's funding, governance, and leadership structure are well known and well trusted. From Wikipedia...

.....Regarding financing;

>Funding for NPR comes from dues and fees paid by member stations, underwriting from corporate sponsors, and annual grants from the publicly funded Corporation for Public Broadcasting.[4] Most of its member stations are owned by non-profit organizations, including public school districts, colleges, and universities. NPR operates independently of any government or corporation, and has full control of its content.[5]

.....Regarding governance;

> NPR is a membership organization. Member stations are required to be non-commercial or non-commercial educational radio stations; have at least five full-time professional employees; operate for at least 18 hours per day; and not be designed solely to further a religious broadcasting philosophy or be used for classroom distance learning programming. Each member station receives one vote at the annual NPR board meetings—exercised by its designated Authorized Station Representative ('A-Rep').

Now, I do question the authenticity of your question. Everyone knows that NPR is reputable and everyone knows why. Their reputation precedes them. But I entertained your charade and now I implore you to entertain one of mine.

Can you provide me the same detailed information which demonstrates why someone should trust OAN? How about Breitbart? How about Newsmax? Can you please pick one and demonstrate why they are trustworthy using a similar format that I provided for you?

tlogan(2756) 3 days ago [-]

The unfortunate reality is that a half of the US population sees the NLRB as a burden on small businesses—primarily because its policies shift frequently, making compliance costly and complex for those without deep legal resources. [1]

And the same half of the population do not trust anything what npr.org says.

Understanding the above dynamic is key to grasping the current state of discourse in the U.S.

[1] https://edworkforce.house.gov/news/documentsingle.aspx?Docum...

axus(10000) 3 days ago [-]

Some may claim that NPR is retaliating for getting defunded for the next 2 years.

ajross(10000) 3 days ago [-]

I've said this repeatedly, but write this down: before this administration is out we are going to have a major (probably multiple) scandal where DOGE staffers get caught with some kind of horrifying self-enrichment scam based on the data they're hoovering. It could be simple insider trading, it could be selling the data to a FBI sting, it might take lots of forms. But it's going to happen.

These are a bunch of 20-something tech bro ego cases convinced of their crusade to remake government along libertarian axes they learned from Reddit/4chan/HN. These are simply not people motivated out of a genuine desire to improve the public good. And they've been given essentially unsupervised access to some outrageously tempting levers.

potato3732842(10000) 3 days ago [-]

Doesn't matter if they're good people or not 'given essentially unsupervised access to some outrageously tempting levers' that scandal WILL happen eventually.

ndsipa_pomu(10000) 3 days ago [-]

I think it's worse than that as the DOGE staffers are presumably picked according to Musk's preferences and he's not going to be looking for generous, well adjusted do-gooders, but selfish, arrogant, greedy racists. Presumably, they're also going to be targetted by other countries intelligence services with a mind to getting hold of the same data.

f38zf5vdt(10000) 3 days ago [-]

Personal enrichment? There's already an enormous amount of evidence here to indicate that DOGE is working on behalf of a foreign nation state. It is seeming more and more likely that members of the DOGE team are simply secret agents for a foreign military.

> Within minutes after DOGE accessed the NLRB's systems, someone with an IP address in Russia started trying to log in, according to Berulis' disclosure. The attempts were 'near real-time,' according to the disclosure. Those attempts were blocked, but they were especially alarming. Whoever was attempting to log in was using one of the newly created DOGE accounts — and the person had the correct username and password, according to Berulis.

pjc50(1402) 2 days ago [-]

> horrifying self-enrichment scam based on the data they're hoovering.

Did you miss the presidential cryptocurrency?

DOGE guys will probably end up wiring money directly to their own bank account, proudly brandish the receipts on national television, and no Republicans will make a move against them.

soco(10000) 3 days ago [-]

I'm not american so can somebody please explain me, how is deleting logs and every trace of your actions helping with government efficiency?

actionfromafar(10000) 3 days ago [-]

To more efficiently rout trouble-makers and unions.

croes(347) 3 days ago [-]

How is firing people helping government efficiency?

lesuorac(10000) 3 days ago [-]

Log storage is expensive.

rsynnott(10000) 3 days ago [-]

Nothing they are doing is related to government efficiency. You can't really put too much faith in names.

delusional(10000) 3 days ago [-]

That way they can save some money litigating Elon and his goons. It's not like that litigation would get anywhere anyway, so better to save the public the waste /s

alistairSH(3420) 3 days ago [-]

Nothing about DOGE or the Trump administration is about efficiency. It's just a label they use to con gullible voters.

Their real goal is more likely a combination of grift and settling grudges.

Edit - typos

dandanua(3675) 3 days ago [-]

The next administration won't be able to spend time and money investigating crimes of the current one /s

_heimdall(10000) 3 days ago [-]

In the same way that finding waste while increasing the federal budget isn't efficiency.

Technically, maybe you can squint and find small pieces that are more efficient but in the grand scheme of things they goal doesn't seem to be a smaller government.

AIPedant(10000) 3 days ago [-]

Even by the standards of this administration...... yikes:

  Meanwhile, his attempts to raise concerns internally within the NLRB preceded someone 'physically taping a threatening note' to his door that included sensitive personal information and overhead photos of him walking his dog that appeared to be taken with a drone, according to a cover letter attached to his disclosure filed by his attorney, Andrew Bakaj of the nonprofit Whistleblower Aid.
9283409232(10000) 3 days ago [-]

This is exactly what I expect from this administration. Mob tactics. Take the silver or get the lead.

acdha(2928) 3 days ago [-]

This part is really damning: a real efficiency audit might need a lot of access to look for signs of hidden activity, but they'd never need to hide traces of what they did:

> Meanwhile, according to the disclosure and records of internal communications, members of the DOGE team asked that their activities not be logged on the system and then appeared to try to cover their tracks behind them, turning off monitoring tools and manually deleting records of their access — evasive behavior that several cybersecurity experts interviewed by NPR compared to what criminal or state-sponsored hackers might do.

The subsequent message about Russian activity could be a coincidence–Internet background noise-but given how these are not very technically skilled and are moving very fast in systems they don't understand, I'd be completely unsurprised to learn that they unintentionally left something exposed or that one of them has been compromised.

avs733(10000) 3 days ago [-]

>A real efficiency audit might need a lot of access to look for signs of hidden activity, but they'd never need to hide traces of what they did

In fact I would imagine they would do exactly the opposite because they would look at the mere ability to hide what they did as an audit finding.

ndsipa_pomu(10000) 3 days ago [-]

> criminal or state-sponsored hackers

It looks to be both

tjpnz(3481) 3 days ago [-]

Everything's going to have to be replaced and it's going to be hugely expensive. But that's not going to happen until at least 2029 - plenty of time for bad actors to get settled in and cause real damage.

throw0101c(2292) 3 days ago [-]

> This part is really damning: a real efficiency audit

There were already people auditing departments, but they got fired early on:

* https://en.wikipedia.org/wiki/Inspector_general#United_State...

* https://en.wikipedia.org/wiki/2025_dismissals_of_inspectors_...

There's even an entire agency devoted to auditing:

* https://en.wikipedia.org/wiki/Government_Accountability_Offi...

Trying to find efficiency by bringing in the private sector is not a new thing:

* https://en.wikipedia.org/wiki/Grace_Commission

* https://en.wikipedia.org/wiki/Brownlow_Committee

* https://en.wikipedia.org/wiki/Hoover_Commission

* https://en.wikipedia.org/wiki/National_Partnership_for_Reinv...

Applejinx(10000) 3 days ago [-]

Compromised implies they're not the Russian team to start with. I'd be looking for one of them to lose nerve and betray that ALL of them are the Russian team.

z3c0(10000) 3 days ago [-]

The use of DNS tunneling and skirting logs makes my head spin. Even if justification of exfiltrating 10GB of sensitive data could be made, there's widely available means of doing so that aren't the methods of state-sponsored hackers and the like.

freejazz(10000) 3 days ago [-]

It also contradicts the idea that they are acting transparently.

Aurornis(10000) 2 days ago [-]

> The subsequent message about Russian activity could be a coincidence–Internet background noise

These weren't random login attempts. It says the Russian login attempts had the correct login credentials of newly created accounts.

If the article is correct, the accounts were created and then shortly afterward the correct credentials were used to attempt a login from a Russian source.

That's a huge issue if true. Could be that someone's laptop is compromised.

chrisweekly(10000) 2 days ago [-]

'Interviewed by NPR' -- ok we can stop right there. Remember, they're dangerous enemies of the state, along with PBS and Fred Rogers.

jmyeet(10000) 2 days ago [-]

So NLRB handles confidential complaints. The complainant's idenity might be kept confidential. Exact details may be kept confidential.

Why aren't we to believe that this is Elon Musk going after anyone filing a complaint to the NLRB (from X, Twitter or SpaceX) or, worse yet (from Elon's POV), anyone potentially organizing any unionization effort?

There's absolutely no reason DOGE should have access to this information. There's absolutely no reason their activity, such as what information they accessed, should be hidden.

tomaskafka(3390) about 2 hours ago [-]

It appears that "appearing dumb and clumsy while opening the doors for enemies" is a plausibly deniable mode of whole Trump's administration.

softwaredoug(878) 3 days ago [-]

Some context as I understand it is DOGE employees are all temporary gov't employees whose employment expires (in June?). Assuming they follow the law there (big If), then they scramble around these agencies with tremendous urgency trying to please Elon (or the powers that be?).

And they absolutely should be resisted with this deadline in mind...

tootie(10000) 3 days ago [-]

They are using heavy-handed tactics. Per this article, the whistleblower was threatened. At the SSA, a 26-year veteran was dragged out of the building. Similar story at the IRS. DOGE has the backing of US Marshalls and the president. They can resist, but they'll just end up locked out.

9283409232(10000) 3 days ago [-]

It should be clear at this point that DOGE is trying to create a unified database of all persons in the US for targeting. Every single bit of data that they can get about you from the government or social media will be tagged to you Minority Report style. They were clear about wanting to deport citizens to El Salvador as well. Once you are identified as the other side they will come for you. If you are waiting for it to get worse before taking action and getting involved, we are already at that point.

> And Berulis noticed that an unknown user had exported a 'user roster,' a file with contact information for outside lawyers who have worked with the NLRB.

Possibly looking for lawyers for Trump to target with EOs or blackmail.

wormlord(10000) 3 days ago [-]

How you are getting downvotes is beyond me. People are finally waking up to the idea that the whole point of the Trump admin is to privatize the government, but haven't woken up to the fact that we are entering an era of state terror. Keep your heads buried HN, you'll be dragged kicking and screaming into reality in a few months anyways.

ActorNightly(10000) 3 days ago [-]

If someone is incompetent enough to understand Cobol databases, I doubt they are thinking about it on this level.

Given all of Musks actions, he is probably wanting to destroy any agency that went against him, because he truly believes he is the humanities savior and his companies are doing things the right way.

ck2(613) 3 days ago [-]

That backdoor code is going to lurk for decades.

Not only will Musk be able to tap into it for years but foreign governments.

bilbo0s(10000) 3 days ago [-]

This is the real problem, and the reason we never should have allowed access to sensitive government and societal data in this fashion.

the_doctah(10000) 3 days ago [-]

Pure ridiculous conjecture.

pnutjam(10000) 3 days ago [-]

This checks out because all those DOGE hires appear to be hackers, and they are now state sponsored. Most of them could never pass a basic background check, much less a TS or even public trust from one of the more invasive Federal agencies.

flanked-evergl(10000) 3 days ago [-]

cite?

matthewdgreen(10000) 2 days ago [-]

It is worth pointing out that many of these people are probably violating Federal and possibly even some state laws. Violations of Federal laws can be pardoned, if the President is so inclined. State laws can't. No prosecution will occur during this administration, but this administration will not last forever.

_hyn3(10000) 2 days ago [-]

Those darn hackers. They probably hang out and get their news... someplace.

grandempire(10000) 3 days ago [-]

> particularly when those staffers noticed a spike in data leaving the agency. It's possible that the data included sensitive information on unions, ongoing legal cases and corporate secrets

This entire article appears to be speculation about data they MAY have taken with no evidence besides large file size that they are misusing something.

The discussion with the "whistle blower" and other experts is only about how serious it would be IF they misused it.

Am I reading it wrong?

9283409232(10000) 3 days ago [-]

Someone exfiltrated sensitive data. That isn't in question. The only question is who did it and why. As far as DOGE's involvement, there is no proof but there is plenty of evidence.

JumpCrisscross(69) 3 days ago [-]

There is evidence DOGE went out of its way to illegally conceal what it was doing. That, alone, is enough to put these kids in jail one day.

intermerda(10000) 3 days ago [-]

> Am I reading it wrong?

Based on your comments, you're not reading the article at all.

jasonlotito(3582) 3 days ago [-]

Yes. You claim:

'This entire article appears to be speculation about data they MAY have taken with no evidence besides large file size that they are misusing something ...[and] is only about how serious it would be IF they misused it.'

This paragraph makes it clear it's not just about misusing data and large file sizes.

> Those forensic digital records are important for record-keeping requirements and they allow for troubleshooting, but they also allow experts to investigate potential breaches, sometimes even tracing the attacker's path back to the vulnerability that let them inside a network.

Let's be clear:

> Those engineers were also concerned by DOGE staffers' insistence that their activities not be logged, allowing them to probe the NLRB's systems and discover information about potential security flaws or vulnerabilities without being detected.

Neither of these have to do with 'large file size' or misusing data.

'Am I reading it wrong?'

Yes. Now, before you go moving goal posts, you made claims, and I've debunked those claims with quotes you said you needed. Because clearly the article is ALSO talking about these other things as problematic as well, so it's not 'the entire article'. (Also, the 'entire article appears'? Appears? Just read it, it talks about numerous things, and is very clear on the different elements it's talking about.)

This isn't the only stuff mentioned, so be careful about claiming 'oh, I just missed that' or some such because there are other things that can be referenced, such as the massive amount of text spent on the whistleblower issues and the threats made to them.

And before you talk about this just being 'speculation,' that's why we have the process we have, so people can make claims that can then be investigated. And that's what's being stopped.

Finally, 'no evidence besides large file size' is also not true.

'Am I reading it wrong?'

As someone said, it's more likely you didn't even read it.

Sonnigeszeug(10000) 3 days ago [-]

There were already news from weeks ago how they started to put servers on the internet with access to systems, which should not have access to/from the internet for security reasons.

This is just on top of all the other things. happened.

insane_dreamer(10000) 3 days ago [-]

> Am I reading it wrong?

Yes

grandempire(10000) 2 days ago [-]

My original comment here has not been flagged - but all my responses to other comments have. This is distorting the conversation. There is only one DOGE narrative allowed on this site.

arunabha(10000) 3 days ago [-]

I am not sure how it's possible to defend the kind of stuff DOGE is doing anymore. Even the veneer of looking for efficiency is gone. There have only been claims of 'fraud' with no real evidence backing up the claimed scale of fraud.

At this point it simply looks like DOGE is yet another attempt to use a popular trope (Govt fraud and waste) to push through changes specifically designed to give unchecked power to one individual.

This much concentrated, unchecked power opens up vast opportunities for fraud and corruption and there are pretty much no instances in history where it turned out be to a good thing in retrospect.

Also, very surprised this story made it to the front page. Typically, stuff like this gets flagged off the front page within minutes.

bedane(10000) 3 days ago [-]

[flagged]

GolDDranks(3223) 3 days ago [-]

> Typically, stuff like this gets flagged off the front page within minutes.

Why would that be, because it's too 'political' for tech news? Or are there actual DOGE sympathies within the HN population?

JohnMakin(3635) 3 days ago [-]

It's flagged now - pretty embarrassing for a site called "hacker" news

knowaveragejoe(10000) 3 days ago [-]

Anyone who knew anything about the public sector knew there were already efficiency initiatives. USDS(which became DOGE) was this, and they were doing a great job. If you care about efficiency this is what you would support, not taking an axe to everything and having a near-singular focus on lower headcount.

bilekas(10000) 3 days ago [-]

This isn't really a shock to me, but what's more frustrating I guess is that absolutely nothing will come of this. I have zero confidence any of this will even be cleaned up, just the same ranting about 'fake news'.

Really feels like the fox is already in the coop.

stevenwoo(3570) 2 days ago [-]

That the intrusion came over Starlink from Russia with valid login credentials would be unbelievable in a tale from speculative fiction. Reality Winner looks like a hero compared to these clowns.

consumer451(1581) 3 days ago [-]

It is hilarious what does, and does not, get flagged on this website in 2025.

The other day on /active, there was a story about a French politician being banned from running for office, due to being convicted of outright fraud for the second time. Absolutely nothing to do with technology or business, nothing to do with the USA. Pure politics in a foreign country. Not flagged.

There was a story directly below which involved the USA, technology and business, but had an uncomfortable narrative for some users. Flagged.

As someone who still likes this site a lot, this just makes me laugh at this point. I don't know how else to react.

Capricorn2481(10000) 3 days ago [-]

Because, naturally, people on here want to harm you. We can't say it out loud, but that's where the U.S. climate is right now. HN is not immune from it, and is likely more susceptible to it given the demographic. They flag to keep people from saying it.

consumer451(1581) 3 days ago [-]

Follow-up: I should add that in 2025, deleting stories with a tinge of US politics is highly detrimental to the HN user base's understanding of what is happening in the business world.

Case in-point: a US-based family member employed at a FAANG just told me that his Canadian coworkers now reset their phones prior to entering the USA, then restore from backup. This is somewhat similar to what happens when they go to China.

This is terrible for business. This kind of information should not be ignored.

jmyeet(10000) 3 days ago [-]

Welcome to the Internet.

Many forums (including this one) have bans on 'politics' or topics that are 'inflammatory'. 95% of the time what constitutes either is simply 'things I disagree with'.

For US politics in particular, as much as the right-wing cries about being censored, social media in particular bends over backwards not to silence such views whereas anything critical of those right-wing positions gets flagged or downranked as being 'political' (eg [1]).

Typically this process isn't direct. ML systems will find certain features in submissions that get them marked as 'inflammatory' or 'low quality' but only on one end of the spectrum. For sites such as HN, reddit and Tiktok, right-wing views have successfully weaponized user safety systems by brigading posts and flagging them. That might then go to a human to review and their own biases come into play.

As for France vs the US, I'm sorry but France is irrelevant. As we've seen in the last 2 weeks, what the US does impacts the entire world. All the big social media sites are American (barring Tiktok) so American politics impacts what can and can't be said on those platforms.

Twitter has become 4chan, a hotbed for neo-Nazis, racists and homephobes.

And which French politican are we talking about? Marine Le Pen? If so, the relevance is the rise of fascism in Europe between National Front in France, Reform in the UK, AfD in Germany and, of course, Hungary.

[1]: https://www.dropsitenews.com/p/leaked-data-israeli-censorshi...

johnnyanmac(10000) 3 days ago [-]

I mean, there were Tesla earnings calls this year flagged, which would be front page news even a year ago. Tech earnings calls are almost never flagged otherwise.

I'm mostly convinced a lot of stuff is flagged and the mods work overtime to pick and choose what to unflag. On what metric? No clue, if I'm being honest.

regularjack(3665) 3 days ago [-]

I wouldn't be so quick to jump to conspiracy theory territory, it could just be that people get tired of reading the same bullshit everyday.

dang(143) 3 days ago [-]

There's always a ton of randomness with these things. People tend to underestimate how that affects nearly every aspect of HN. That is, they misinterpret a random outcome as some sort of meaningful thing and then attribute a meaning to it.

If you assume that rhyme or reason is involved, then of course the results seem bizarrely inconsistent and the only models that fit will be Rube Goldberg ones. Simply understand that randomness plays the largest role, and the mystery goes away. (But I know that's less internet fun.)

In terms of all these political stories getting flagged: it's a simple consequence of there being a huge influx of intense political stories while HN's capacity remains '30 slots on the frontpage' (https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...). If these stories mostly didn't get flagged or otherwise moderator, HN would turn overnight into a current affairs site, which it is not and never has been.

That still leaves room for some stories with political overlap, though not nearly as many as the politically passionate would prefer. Btw, this is a special case of a more general principle: there are not nearly as many stories on any topic X as the X-passionate would desire. The front page, in that sense, satisfies no one!

But back to the politics thing—here are some links to past explanations about how we deal with that:

https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

https://news.ycombinator.com/item?id=42978389 has a good list of more.

For those who are up for a more complex explanation, this is really how I think about this problem: https://news.ycombinator.com/item?id=42787306. The basic idea is to avoid predictable sequences: https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que....

jonnycomputer(3516) 3 days ago [-]

I think we should be trying to understand what NxGenBdoorExtract is. NxGen is a system for NLRB. Bdoor is pretty evocative of a back door. He took he git offline or made it private. I can't find it on archive.org.

jonnycomputer(3516) 2 days ago [-]

On the other hand, there are two things about that screenshot of the repo which is a little weird. First, the timestamp of that repo is cutoff, but, the items seem to be in reverse chronological order, which would put that repo sometime in 2021-ish, or before.

The owner could, of course, just make it public again, or put it back up, and end all the speculation.

anthonygarcia21(10000) 2 days ago [-]

I'm intrigued by the 'Mission 2' notation. That suggests, perhaps, that DOGE has a 'Mission 1' (its public, ostensible purpose) and a hidden 'Mission 2' known only to Musk and his minions.

e2le(3563) 2 days ago [-]

archive.today has a snapshot taken on 28 Feb 2025, although it doesn't show any repository with that name.

https://archive.ph/fUa5Q

theteapot(10000) 2 days ago [-]

> ... DOGE employees demanded the highest level of access ... When an IT staffer suggested a streamlined process to activate those accounts in a way that would let their activities be tracked, in accordance with NLRB security policies, the IT staffers were told to stay out of DOGE's way, the disclosure continues.

But did they actually 'turn off logging'?? How do you even do that? Anyone know what access control system they are talking about?

SpicyLemonZest(10000) 2 days ago [-]

It sounds to me like there's some application-level logging on this NxGen system, and DOGE obtained permissions to read the underlying storage without going through the application. But the article does also say later on that there are specific controls and monitoring systems Berulis did find turned off.

JohnMakin(3635) 3 days ago [-]

This seems important and incredibly relevant on a site called hackernews. It's credible and from a credible source. Why are we flagging it?

asveikau(10000) 2 days ago [-]

JD Vance is a poster boy for Y Combinator adjacent fascists. Marc Andreessen, when he is not cheering on opiate overdoses in his hometown and praising the British Raj, loves what's going on. We need to accept that Silicon Valley has major culpability here. After all, how much do you see on HN that you should ignore the law because it's better to ask forgiveness than permission?

DrNosferatu(10000) 3 days ago [-]

The "young and inexperienced" staffers narrative is very convenient to perform target operations on (specially) sensitive data.

DrNosferatu(10000) 2 days ago [-]

*targeted

vaxman(10000) 3 days ago [-]

They didn't use StarLink?! ROFLMAO

I hope he doesn't think Trump is his boy and will keep DOJ off his back. The problem is that the institutional funds and market makers will not support this level of Watergate/Enron/WorldCom-like risk and Trump isn't going to become entangled in that (since it means the corporate death penalty as far as public equity and access to bank capital is concerned).

BUT the Report is from a super controversial NGO that has long been targeted by Republicans and may soon be DOGEd, so it could be filled with speculation, half-truths, innuendo and lies.

Still...They didn't use StarLink?! I mean, is that not the greatest evidence you could ever hope for of an obvious NSA backdoor in StarLink? They were willing to risk obscure premises-based (bandwidth) monitoring over holding a mini-dish out the window for a few seconds..Too much! I feel like I owe someone $20 for a ticket.

vaxman(10000) about 22 hours ago [-]

Not even 24 hours later, I called it --the Administration IS asking Congress to de-fund NPR:

https://www.pbs.org/newshour/politics/trump-administration-p...

Meanwhile NPR has new reporting that DOGE has sent two of its boys back to NLRB, but they're going to work remotely. Is the hope here that this will provide ongoing justification for DOGE remote data access as the Feds sort out what they did in the first visit? Like even though NPR's first report stated that Russia has tried to login remotely using valid DOGE credentials just after DOGE personnel left the first time?

https://www.npr.org/2025/04/16/nx-s1-5366851/doge-nlrb-whist...

DavidPiper(10000) 2 days ago [-]

(Non-American here.) If they weren't already, it seems like private businesses, security researchers, and I suppose the general public, should start treating US government agencies as privacy and security threats, just like you'd treat any other phisher, scammer, etc.

If government agencies are compromised - via software backdoors or any other mechanism - any data and systems they can access should be considered compromised too.

exceptione(3110) 2 days ago [-]

Neoliberalism -> Corporatism -> Fascism/Autocracy

You are a Human Resource to be commercialized. Ad tech => Private Intelligence.

One is not a person. One has no rights. Unless one can free themself and their loved ones of neoliberal brainwashing.

garte(10000) 2 days ago [-]

this sounds exactly like that's the goal behind all this.

jokoon(10000) 2 days ago [-]

It's likely that this team was infiltrated by adversary countries

autoexec(10000) 2 days ago [-]

I'd always assumed that we had three letter agencies whose entire job was to keep this sort of thing from happening, but it seems that none of them are concerned about protecting our government's secrets or even our democracy. What good is the panopticon if the watchers are asleep on the job?

sherdil2022(10000) 1 day ago [-]

Why isn't this considered helping the enemy from within / treason?

Why are people being deported for no crimes or for far lesser crimes?

deepsun(10000) 1 day ago [-]

It is. But since citizens don't do anything about it, they don't need to care.





Historical Discussions: Googler... ex-Googler (April 11, 2025: 1061 points)

(1061) Googler... ex-Googler

1061 points 7 days ago by namukang in 1115th position

nerdy.dev | Estimated reading time – 3 minutes | comments | anchor

Last night, my role at Google was eliminated. I'm quite sick to my stomach, extremely sad, and even more angry. [email protected] is no more. Just like that.

There's not really anything about this that makes sense.

Goodbye Google, I guess #

I'm told this comes as a shock to my managers and other Chrome team leaders. I'm told it's not based on merit. I'm told I could find another role.

But I was also immediately ripped away from my calendar, docs, code, and more. Anything nice anyone has to say is immediately met with reality, and reality says 'don't let the door hit you on the way out.' If I was really welcome to another role, why treat me like a criminal?

I can't believe the timing #

I was at a Chrome team building offsite, quite literally having some of the most fun and creative innovation with Chrome folks I've had in a while; shoulder to shoulder with incredible engineers, planning ways to make web developers life's easier while raising the quality level of the web.

It's like none of these good moments ever happened.

Like I was never in any of these rooms. Like I wasn't assigned to high priority features or an owner of meaningful work streams.

  • I was supposed to record a Google IO video next week. A talk I was very very excited to give. Gone. Wasted.
  • I was supposed to be on stage at Google IO, gone.
  • I was supposed to run a booth right outside the main stage, gone.
  • I was supposed to help with the developer keynote, ensuring things matched reality and were beautiful. Gone.
  • CSS Working Group membership, gone.
  • Developer Office Hours, gone.
  • Code access to the Carousel Gallery, gone.
  • Helping with Overflow 5, or other CSS work at Google, gone.
  • Relationships that took me years to cultivate... mostly going to be gone too.

The list of things I was doing is huge. It's going to be a while until I can resume some of them, and many of them won't resume at all.

I feel back stabbed, unappreciated, tossed in the trash. I can't sleep. I'm ashamed. I'm pissed.

I really was just a fuckin cog in a mega corp.

Find me on Bluesky or email me at [email protected] if you feel compelled to reach out.

Sorry if I don't reply quickly, it's very overwhelming to read messages about this. The topic is quite sore.




All Comments: [-] | anchor

snvzz(2530) 7 days ago [-]

That's just how it is for a lay off, in megacorp and elsewhere.

Not sure how this is HN-worthy.

sangeeth96(3481) 7 days ago [-]

Adam was a very prominent Chrome DevRel and top voices of the web platform. I personally owe to his content (blog, snippets, podcast, talks, youtube, social media etc.) to stay up-to-date on things.

It's a bit of a shock to me that he of all people is getting laid off and that too in such an ugly way.

musicale(10000) 6 days ago [-]

Does Google (or whoever is making these decisions) think that layoffs are in the long-term best interest of the company? If so, are they correct?

Or is it related to the possibility that Google may have to divest itself of Chrome due to anti-trust enforcement?

amputect(3542) 5 days ago [-]

None of the people making these decisions care about the long-term best interest of the company. Sundar doesn't give a shit about Google's future, he is laser focused on what really matters to him and the people he reports to: the stock price. A big round of layoffs can juice the stock, and it's a nice way to keep the numbers going up in between industry events where they can show off deceptively edited product demos and knowingly lie about the capabilities of their current and future AI offerings.

To put it another way: Google doesn't want to be a software company anymore. Google does not care about making software, or products, or the people who make or use their products. Google wants to be a growth company where the stock price goes up by two-digit percentages every quarter. That is absolutely the only thing that Google cares about. Google has realized that the best way to make this happen is to commit securities fraud by lying to their investors about their products, and by drip-feeding layoffs to show that they're serious about their underlying financials. It's theater, playing pretend at being business people. The individual products are allowed to go about their business as long as they don't cost too much money, but Google doesn't want to make money by having good products that people love to use, Google wants to make money by being a hyper-growth unicorn again, and they will do anything at all to recapture that kind of growth even if they're slitting the throat of the company to do it.

Whether this attitude is good for Google or its users is left as an exercise to the reader.

tgsovlerkhgsel(10000) 4 days ago [-]

It may be a bet that AI will reduce the need for developers. Even if it can only write boilerplate, boilerplate still has to be written and is time consuming, so if it were to remove 20% of time that needs to be sunk into a project, the work of 5 people can now be done by 4 (less if you account for the reduced coordination overhead).

Whether these savings actually play out and whether management has accurate expectations and metrics remains to be seen, given messaging that makes it sound like AI saves huge percentages of time, when it at best saves huge percentages of something that's actually only a small percentage of day to day work.

xyst(3582) 4 days ago [-]

Wake up, buddy. This is the neoliberal/neoclassical economy we are living in. They are pumping the books to make their quarterlies look good.

Pump the stock, deliver 'shareholder value', and make billionaire class richer is the game. Oh, and also make room for stock buybacks of course!

musicale(10000) 6 days ago [-]

It can be shock to discover how little the company as an entity, and its upper management in particular, actually values you (or any other employee.) Employees are indeed cogs in a megacorp, and the relationship is transactional. The company demands loyalty because it can and because it is profitable, not because it will be reciprocated.

hyperliner(10000) 6 days ago [-]

Even those in "upper management" are cogs.

roman_soldier(10000) 4 days ago [-]

When it comes down to it everyone has their own interests as a priority so if a manager is told to let folks go they will gladly do it to keep their own job.

commandersaki(10000) 6 days ago [-]

It sucks and especially the abruptness, but I find it hard to muster sympathy. Google employees receive some of the highest renumeration in the industry. Combined with the prestige of Google on his resume he'll land back on his feet in no time.

kweingar(10000) 6 days ago [-]

> Combined with the prestige of Google on his resume he'll land back on his feet in no time.

I wouldn't count on that. The job market is really bad.

ivraatiems(10000) 6 days ago [-]

The reality of one's lack of value to one's own employer is often baffling. It makes you wonder how anyone manages to stay employed at all, since apparently everyone is replicable and unimportant. I have been through layoffs where other people on my team, doing the same job I did approximately as well, got laid off. No explanation given for why them and not me. And it could happen to me at any time.

It doesn't matter how good my evals are or how big my contributions. It doesn't matter that there are multiple multi-million-dollar revenue streams which exist in large part due to my contributions. It doesn't matter that I have been told I am good enough that I should be promoted to the next level. Raises barely exist, let alone promotions. Because theoretically some other engineer could have done the same work I actually did, the fact that I'm the one who did it doesn't matter and I deserve no reward for doing it beyond the minimum money necessary to secure my labor.

Under those conditions, why should I - or anyone - do any more than the minimum necessary to not get fired for cause? If the company doesn't see me as more than X dollars for X revenue, why should I?

hyperliner(10000) 6 days ago [-]

If you do only the minimum necessary to not get fired, then wouldn't you be the person that needs to be fired the next time the the budget is cut, since you are the lowest ROI of all, all other things equal?

weinzierl(233) 6 days ago [-]
'I have been through layoffs where other people on my team, doing the same job I did approximately as well, got laid off. No explanation given for why them and not me. And it could happen to me at any time.'

Usually there is a hidden variable that you don't know. It is your salary. That is why it sometimes looks surprising when senior roles are cut that look extremely valuable to the company from the outset. Maybe they were that valuable but still deemed to expensive.

somesortofthing(10000) 6 days ago [-]

Layoffs in particular are like this because they're planned very quickly by very small groups of people. Rumors of impending layoffs obliterate morale, so the people in charge do everything they can to maintain secrecy and minimize the time between people hearing about layoffs and the layoffs taking effect. This basically always translates to random-seeming decisions - priority 1 is to cut costs by X amount, choosing the right people to cut is secondary. This means that, for example, engineers that have received performance-based raises are punished since, on paper, they do the same job as lower-performing but lower-paid engineers.

Not defending the process(the right way to break this equilibrium is statutory requirements for layoffs a la the WARN act) but that's why you see the outcomes you do.

pjmlp(113) 6 days ago [-]

This is a lesson that all senior developers know pretty well, that is why companies rather hire naive juniors, instead folks that already mastered how the game gets played, and cannot be sold on company mission, values, or whatever snake oil gets talked about during interview process.

BurningFrog(10000) 4 days ago [-]

You spend half your waking hours at work.

Having a shitty attitude for that much of your life is no way to live.

nine_k(3565) 4 days ago [-]

Check out the book called 'The Gervais Principle' which develops this kind of cynical approach to a significant depth.

anal_reactor(10000) 4 days ago [-]

> Under those conditions, why should I - or anyone - do any more than the minimum necessary to not get fired for cause?

No, you shouldn't. I know it feels like 'but I thought that if I like cleaning my own apartment then getting a job as a janitor would leave me deeply fulfilled' but that's not how it works.

Ferret7446(10000) 4 days ago [-]

Your relationship with your employer is no different than any other business relationship. You can do the bare minimum, just as there are many businesses that do the bare minimum toward their customers, and those businesses often have a low subsistence level of success; if you do the same, you may have the same level of success in your career.

An employment relationship can offer a lot of things for both sides. For the employer, your labor of course. For the employee, a salary of course. But it can also offer experience, access to other talented and intelligent individuals and access to capital to learn and try things, networking, relationships, opportunities for promotion and perhaps opportunities to find better employment elsewhere, or the skills and/or connections to start your own business.

Your attitude toward work should be the same as the attitude you take towards the rest of your life. You can 'rot' or you can make the most of every opportunity.

windward(10000) 4 days ago [-]

You're right but our current model of society depends on there being people who don't ask the same question.

jimt1234(3571) 4 days ago [-]

I've noticed a disturbing trend in the last year or so where a company announces a significant layoff, saying it needed to let go of 'underperforming employees' or similar wording. I've been in this industry for a long time, experienced several layoffs, but this way to announce a layoff (publicly calling-out 'underperforming employees') feels new to me. It also feels shady - like, announcing to the industry, 'Don't hire these losers we just got rid of. LOL'

dumbledoren(10000) 6 days ago [-]

These megacorps will have so much fun in the upcoming recession. They turned public opinion against them through sociopathic profiteering and then mass layoffs. When the cows come home it won't be fun and games like before.

nsm1(10000) 4 days ago [-]

> sociopathic profiteering

That 'sociopathic' profiteering funds the 401(k), IRAs, and pension plans of tens of millions of Americans. God forbid these companies be run for the collective benefit of all shareholders (including special ed teachers, utility workers, and airline mechanics) and not just the lottery winners who scored the high-paying jobs at these companies.

> mass layoffs

The 'Day in the Life' videos that made the rounds on TikTok sapped the general public of whatever sympathy they may have otherwise had for the FANMAGers getting sacked from their $100-300k jobs.

rdtsc(3656) 6 days ago [-]

Sadly two management levels above we're just a line in a spreadsheet. Maybe even one level above.

"Hey look, this one is cog is spinning at a cost $200k/year, why don't we replace it with a cog from a low cost country and save some money?" Or "remove it and make this one other cog do the work of this obe?" People doing the replacement have to show they did something, as well!

lazide(10000) 5 days ago [-]

Upper management has targets they need to meet. If they don't, they're out the door even faster than your typical junior engineer who is struggling to code.

The targets often aren't what you'd think though.

uptownfunk(3317) 6 days ago [-]

Google is one of those places where you never need to ask if someone worked there.

fragmede(1245) 6 days ago [-]

self fulfilling prophecy though, because the people who worked at Google but don't tell you about it, won't tell you about it, so you don't know they did so you're only going to hear about it from the ones you hear about it from

jsemrau(10000) 4 days ago [-]

Can you explain for the uninitiated what that means? Is that like PTSD?

walterbell(23) 6 days ago [-]
https://www.sfchronicle.com/tech/article/google-layoffs-andr...

> Google laid off hundreds of employees from its platforms and devices unit, the team responsible for the Android operating system, Pixel phones and Chrome browser. The move, first reported by the Information, comes months after Google offered voluntary buyouts to all 20,000 employees in the division, signaling deeper structural changes at the tech giant.

danpalmer(3096) 5 days ago [-]

Correction, they did not offer buyouts to the entire division, they offered the ability to apply for a buyout to US-only employees, and application did not guarantee you'd get it.

h4ckaerman(10000) 6 days ago [-]

> Googler...

Whole things reads like someone leaving a cult.

It's ok to be sad about leaving a job but your identity shouldn't be so tied up in it that you're crying in a blog post online.

We all lose jobs and we all get on with it. Obviously they're talented and will land fine somewhere.

I'm not trying to be mean but it's bad that a person can get upset to this point around a job. The corp isn't caring.

nehal3m(10000) 4 days ago [-]

I disagree. This person apparently had a great time working this job and I imagine it's difficult to end up with the responsibilities they had without being intrinsically motivated. It's perfectly alright and valid to be sad about losing the ability to express that part of yourself to make a living. The whole point of the post is that yes, the company doesn't give a damn about anything but the bottom line, but the author did.

margalabargala(10000) 4 days ago [-]

I'm fine with 'Googler'. Google employs 180,000 people. There are cities half that size with their own demonym.

neilv(3544) 4 days ago [-]

You're criticizing people for caring so much because you think the best that employment can be is transactional money in exchange for competent work?

Wouldn't you want to hire and nurture people who cared so much about what they were working on and who they worked with, as the author seemed to be?

(Not that you'd want them to be upset if it ever had to end, but you'd want the goodness part to happen? Better to have loved and lost, than never to have loved at all?)

ragazzina(10000) 3 days ago [-]

>your identity shouldn't be so tied up in it that you're crying in a blog post online

If a personal blog isn't the right place to express distress when being fired, what is a personal blog even for?

ein0p(10000) 6 days ago [-]

As an ex-Googler I say: blessing in disguise. When working at a $MEGACORP it's easy to think there's barren wasteland out there beyond the walls, so it's scary. But that is very much not so. I get that opportunities to work on browsers are relatively few and far between, but if you can do something else, try working for a smaller company which treats you more like a human being, and less like a replaceable cog.

Not much of a consolation, I'm sure. I've never been laid off, so I can only hypothesize what that'd feel like, but know this: this too shall pass.

lazide(10000) 5 days ago [-]

It is much easier to handle when departing is voluntary. Layoffs, especially surprise ones, are the opposite.

For someone young with no dependents, it can be scary but doable. For those with kids? Not so much.

goldchainposse(10000) 4 days ago [-]

I want to get enough time at $MEGACORP to have FU money. After that, my fear is a lot of smaller companies are working on thing even more boring, but with less scale. Gluing a domain-specific API to a few LLMs sounds boring. I got into tech because I liked learning it, but a lot of it is getting repetitive.

canucker2016(10000) 6 days ago [-]

Tangentially, I thought the term Xoogler was used to refer to an ex-Googler.

Or has that term fallen into disuse now?

decimalenough(3504) 4 days ago [-]

The term still exists, but it's not one you'd expect people outside Google to be familiar with.

throwaway58670(10000) 6 days ago [-]

Please test your site on a phone. 2fps while scrolling text is not ok.

etse(10000) 6 days ago [-]

Hmm. Maybe you should test the site on a different phone. Not seeing an issue with responsiveness here.

xyst(3582) 4 days ago [-]

I noticed this as well on my underpowered MBA. Might be the bluesky integration causing the slow down.

riknos314(10000) 4 days ago [-]

This comment would be much more useful it it included the model of phone, OS version, and browser (ideally with version) you're using as context.

All of these variables are highly relevant to performance and any attempt to reproduce/fix the issue you're reporting.

sexy_seedbox(2687) 4 days ago [-]

Very choppy scrolling, if you delete the whole 'mentions' section in dev console, then the page will scroll smoothly again.

Tinos(10000) 5 days ago [-]

'I really was just a fuckin cog in a mega corp'

Yup. Must have been a horrific wake up call :(

benoau(10000) 4 days ago [-]

... and they haven't even spent all year searching for a new job yet!

NooneAtAll3(10000) 4 days ago [-]

my take on this is that '2 week notice' should probably apply to businesses as well?

t-writescode(10000) 4 days ago [-]

We have it, it's called the WARN Act [0]

Any company with more than 100 employees that does the 'you were laid off today, but you'll be paid for the next 2 months' thing is following the WARN Act

https://en.wikipedia.org/wiki/Worker_Adjustment_and_Retraini...

grandempire(10000) 4 days ago [-]

For what? He's probably getting fantastic severance so his time is best spent on the next thing. The employer isn't going to get more work - it's not wise or safe to let layed off individuals roam around the office.

mvdtnz(10000) 4 days ago [-]

I'm not taking Google's side here AT ALL but it's likely this person was given much more than 2 weeks of pay as severance.

windward(10000) 4 days ago [-]

2 weeks of pay is very little comfort and doesn't stop any of the feeling that you've immediately become a social pariah, banished from the network.

basfo(10000) 4 days ago [-]

I worked for a well-known SaaS company for 4 years. A few months after we were acquired, I decided it was time to move on. I gave a 3 week notice to ensure everything was properly handled before my departure.

Two days later, I couldn't log in to my PC. I was, for all intents and purposes, fired from my actual work. Technically, I was still employed and paid for those remaining days, but I was locked out and never got the chance to say goodbye. It was the worst experience I've had, and I never had any issues with any manager or anyone before that. Apparently, it was just the new company policy.

pkaye(10000) 4 days ago [-]

The 2 week notice is not a legal requirement in the US. I've seen a couple employees just do a silent quit and not turn up the next day.

readthenotes1(10000) 4 days ago [-]

'I'm told this comes as a shock to my managers and other Chrome team leaders. I'm told it's not based on merit'

If your manager is shocked by one of their team being laid off, the manager is probably next.

Of course the OP was told it wasn't based on merit, or any other arguable-in-court characteristic.

But it was. Someone decided Google was better off this way, or that OP was better off working somewhere else.

silisili(10000) 4 days ago [-]

Managers often feign cluelessness because what else can they do? Tell you they submitted you for layoffs? Tell you they knew for weeks and said nothing? There's really no upside option here.

I have no doubt that sometimes managers really don't know, but I'd wager that most who say they didn't know probably did.

DannyBee(3397) 4 days ago [-]

Eh - having had to do these myself at Google for large orgs over the past few years, i would not assume it was based on merit.

The cost disparities can be huge between team members and locations, and a lot of time it's being done to hit some EOY or mid-year budget number. They are also slowly trying to clean up location strategy.

So it's entirely possible it was based on cost and location, and not merit.

It would still be merit 'under the covers' if everyone was the same cost/location, but they aren't.

xyst(3582) 4 days ago [-]

In this neoclassical/neoliberal economy where the only thing that matters is 'delivering value for the shareholders' and profits for the billionaire class. I am not surprised. A bit jaded, honestly.

I have only started my career in the past 10 years and have seen this story unfold time and time again across many companies. Big, small, or medium company. It doesn't matter.

You. Are. Expendable.

I will say the problem is much more pronounced when it's a publicly traded American company; or a company that was recently acquired or funded by private equity, 'angel investment', or a vulture capitalist firm.

Folks. Our industry needs a trade union to protect our interests. We cannot keep relying on billionaire class to 'do right by us' because quite frankly. They do not give a shit.

windward(10000) 4 days ago [-]

>Our industry needs a trade union to protect our interests.

Ding dong. There's no grindsetting yourself out of the path of an uncaring locomotive.

goldchainposse(10000) 4 days ago [-]

If Google realizes they made an oopsie, I hope he respectfully tells them 'no, thanks.' I could never go back to an employer that did this to me, then said it was just a mistake.

mvdtnz(10000) 4 days ago [-]

I have a mortgage to pay. I certainly could.

windward(10000) 4 days ago [-]

This is the 'anger' stage, where you fantasise that anyone other than your shockingly impotent immedate manager will care about you. Anyone who's been dumped will recognise the feeling.

dailykoder(10000) 4 days ago [-]

> Just like that.

These statements always catch me a bit off-guard. Is there no such thing as a cancelation period in the US? When my employer wants to kick me out, he needs a good reason for that and I'd still be paid for 3 months. Which is often even longer, depending on how long you belong to a company.

Edit: I'm in germany

windward(10000) 4 days ago [-]

In my experience, it doesn't matter anyway. You can be paid to be sat at home and, while the worry of finances is kicked down the road, the big dark questions come home to roost very quickly.

somerandomqaguy(10000) 4 days ago [-]

In Canada in general either employer or employee can terminate without having to give reason. Typically it's either a few weeks of notice for termination but the employer can choose to require the employee to depart immediately and instead payout severance for equivalent time instead. There's nuance province to province though.

In the US it's similar but AFAIK it does vary state to state. To my knowledge there isn't any law that requires what you're describing in North America.

ur-whale(2802) 4 days ago [-]

> Edit: I'm in germany

Yeah, Germany is quite (in)famous for this.

I have seen quite a few times in my career large US tech corporations specifically choosing not to open a satellite EU sales office or a dev office in DE because of the horrendous labor laws.

Sure, very nice for the workers. But foreign money chooses to skip DE because of this.

Warm and comfy in a sinking ship, great!

daedrdev(10000) 4 days ago [-]

The US does not have such a tax

omoikane(10000) 4 days ago [-]

There is a WARN[1] period before the employee is officially laid off, but their access to all the corporate resources are cut off immediately. From the employee perspective, they have lost everything the moment they are being told that they are laid off. It doesn't matter that they are still getting paid.

[1] https://edd.ca.gov/en/jobs_and_training/Layoff_Services_WARN...

wiseowise(10000) 4 days ago [-]

> I really was just a fuckin cog in a mega corp.

Love sudden realization.

I wonder how many people within companies think "well, they are a cog, but I'm certainly not" just to be left on a road soon after.

2-3-7-43-1807(3350) 4 days ago [-]

the post oozes narcissism. and he even seems to think he contributed to the health of the internet by working at chrome.

kouteiheika(10000) 4 days ago [-]

Right, okay, let's look at their most recent SEC filling to see how much money they lost in 2024 to justify layoffs... right, they made 350 billion in revenue (the highest ever in their history from what I can see) with a 100 billion in net income. Yep, this checks out, they definitely need to lay off people, can't afford them.

cmrdporcupine(2889) 4 days ago [-]

Yes, and they're still hiring too, while doing this at the same time.

As a person who worked there for a long time, I never thought it was a good idea how rapidly they hired and never felt they needed that many people.

But the layoff process has been sadistic.

And the people who made the decisions to hire like crazy are not paying the consequences. In fact it feels very much like they're using this as an opportunity to push the median average age and compensation level of their staff down. Moving more and more positions to lower cost regions, and hiring younger developers while ditching higher paid senior staff.

Today's Google really sucks.

slivym(10000) 4 days ago [-]

They're not a charity. What do you want them to do? Hire $100Bn worth of engineers until their net income is 0? The possibly difficult truth at Google is that there's probably <1% of the company that is really essential to their monopolistic search business. The rest are either working on other projects which might be strategically interesting but not essential, or are working on the core product but not in a way that's driving business. Is it wrong for the management to say 'We need to be efficiently investing shareholder capital' or for the market to be looking at Google and saying 'We want your money spinning monopoly business please, not your eccentric other bets thanks'.

concordDance(10000) 4 days ago [-]

While the manner of layoff, role of layoff and person to lay off all seem foolish, profits do not mean that layoffs are a bad idea. You should hire people you need and if you want to good in the world, donate to the most effective charities (in QALYs/£).

akskos(10000) 4 days ago [-]

TL;DR this guy got laid off and is not happy about it.

ojagodzinski(10000) 4 days ago [-]

It scares me that people make a talking point out of it XD what an incredible event, someone got fired!

abdj8(10000) 4 days ago [-]

Layoffs are a difficult thing for employees and their managers. I have seen people (one was a VP of Engineering) escorted out of the building, sent in a cab to home along with a security guard (this was in India), not allowed access to computer or talk with other employees. But, recently have had a very different experience. The current company I work for announced 30% layoffs. The list was made public within one hour of announcement. The CEO detailed the process of selecting people. The severance was very generous (3-6 months pay) along with health and other benefits. The impacted employees were allowed to keep the laptop and any other assets they took from the company. They even paid the same severance to contractors.

After the announcement, the laid off employees were given a few days in the company to allow them to say good byes. I love the CEOs comment on this ' I trusted them yesterday, I trust them today'. This was by far the kindest way of laying off employees imo. People were treated with dignity and respect.

phamilton(10000) 4 days ago [-]

A nice addition to this I've seen twice now is a slack channel (via their personal emails) with continuing employees willing to help them practice interviewing and share their professional networks to help them find their next role.

biztos(10000) 4 days ago [-]

That's great, and the polar opposite of how I experienced layoffs (of others, then eventually of me).

But one thing that could be better is transparency around severance, so you know in advance what it will be should you get laid off. (Six months may or may not be "generous" depending on tenure.)

When I was laid off we got what was "customary" in that country, but before the offer was on the table nobody was sure we'd get it. It's so much nicer when this is a matter of law — I'm all for a ~ free labor market but severance requirements help to balance the risk so the employees can relax and do their best work.

apexalpha(10000) 4 days ago [-]

Weird, as someone from Europe I've never experience anything else.

Layoffs here are always done in conjunction with the unions. People are moved to different jobs, helped with training etc...

Only in very critical jobs they'd walk you out immediately but then you still get the pay.

throwaway2037(2851) 4 days ago [-]

Wow, the last paragraph is really touching. That comment from the CEO is brilliant: 'I trusted them yesterday, I trust them today.' That will stay with me for some time!

Ferret7446(10000) 4 days ago [-]

What happens if your company supports billions of dollars in economic output, and a few employees decides to go rogue and sabotage some systems that then causes an international loss of billions of dollars, and possibly property damages and loss of life? If you were the CEO, would you take criminal/financial responsibility for that?

ErigmolCt(10000) 4 days ago [-]

This is such a huge contrast to the usual cold, corporate layoff horror stories. Honestly, this is how it should be done if layoffs are truly unavoidable - with transparency, respect, and basic human decency.

EdwardDiego(3564) 4 days ago [-]

God I love living in a country with employment law that recognises the massive disparity between employers and employees.

crossroadsguy(10000) 4 days ago [-]

For anyone not from India — India does layoffs in every way. From "cut on zoom in 90 sec" to "please know that you have to resign and serve your two months notice and then go"; to also "if you want you can serve the notice period, or you can just leave today and still get the pay for two months". I have experienced the first and last and in the case of last for some reason I had chosen to serve the notice.

EE84M3i(10000) 4 days ago [-]

Wow, I've never heard of terminated employees being able to keep their corporate laptops before. Did IT at least wipe them first?

DannyBee(3397) 4 days ago [-]

Google is just really bad at this, but seems to think it's not bad at this. It's sad since there is no excuse for it - plenty of companies conduct regular layoffs and role eliminations in more compassionate ways, it would not take much to survey and learn from their practices. Hell, IBM was often more compassionate about layoffs than Google.

Some of it they've tried to become more formal about in ways that actually make it worse - so for example, the timing of this (which the person complains about) is because (AFAIK) they now have one day a month where ~all role eliminations that are going to happen that month, happen. Or so i'm told this is the case.

Ostensibly so you don't have random role eliminations every day, which makes some sense, but then you have no way for people on the ground to do anything more compassionate (like move the timing a bit) because they can't get through the bureaucracy.

In the end - it's simple - if you disempower all the people from helping you make it compassionate, it will not be compassionate. The counter argument is usually that those folks don't know how to do it in legally safe/etc ways. But this to me is silly - if you don't trust them to know how to do it, either train them and trust them, or fire them if they simply can't be trusted overall.

dzogchen(3428) 4 days ago [-]

> the laid off employees were given a few days in the company to allow them to say goodbyes

This is just so wild for me as an European, because at least in Germany if you get fired (or if you quit) you need to stay 1 - 3 MONTHS at the company still.

LPisGood(10000) 4 days ago [-]

Is that company in data storage?

rqtwteye(3305) 4 days ago [-]

'I have seen people (one was a VP of Engineering) escorted out of the building, sent in a cab to home along with a security guard (this was in India), not allowed access to computer or talk with other employees. '

Some companies are just paranoid. My company has now had several rounds of layoffs, people were kept on for a few months, got severance and everything went as harmonious as layoffs can be.

The cruelty the way some companies and now Musk with DOGE are doing it is simply not necessary and reflects a lot on the character of leadership. To me it looks like they are deeply insecure and hate their people.

therealpygon(10000) 4 days ago [-]

As it should be, but emotional people make emotional choices. The trusted and valued employee yesterday can turn on a dime and become malicious when they feel they have been wronged regardless of whether that is independently true. Their resulting actions can include anything from theft of IP to hand over to a competitor, to destruction of records or property. Worse, it is impossible to tell when someone will choose to feel they have been wronged, even when the employee could have had chronic absenteeism or underperformance that they justify with personal excuses. (I'm not suggesting there shouldn't be compassion, rather that most people will almost always make mental excuses to justify their behavior regardless of whether that reasoning is sound.)

Companies generally don't become militant about a subject unless they have experienced the other side of the equation. It's not just with layoffs, it can happen with protecting source code, licensing, network security, etc. I concede that a company could replace destroyed property and should be able to recover deleted data, then prosecute/sue to recover damages which could cost tens or hundreds of thousands(or millions depending on the level of access), but the disruption to business can be significant in some cases. Moreover, it is impossible to put an IP cat back in the bag.

For me, it seems easy to understand both sides on this one; compassion vs risk.

magicstefanos(10000) 4 days ago [-]

Good for you but how sad that being treated like a human is remarkable.

ghoshbishakh(2925) 4 days ago [-]

VDX.tv?

Aurornis(10000) 4 days ago [-]

> After the announcement, the laid off employees were given a few days in the company to allow them to say good byes.

I was at a company that did this. I thought it was very nice at first.

It didn't take long to see why most companies don't do this. It became common to have a couple people who turned their last days into a mission to poison the well and go on angry tirades. Those days became tense and messy as people trying to do work felt necessary to move it to private messages to avoid triggering anyone.

It gets really ugly when IT starts checking logs and sees outgoing employees doing things like accessing code they weren't even working on or downloading files in volume.

This was at a company with generous severance, too, so that wasn't the cause. A small number of people get irrationally vengeful upon being laid off. At Big Tech scale it's virtually guaranteed that at least one of the people you lay off is going to make some bad decisions.

mik09(10000) 2 days ago [-]

its nice to know even people like google are treated like this. even people with management roles.

underlines(10000) 2 days ago [-]

That's the normal way at least where I live (Switzerland) and I am shocked people are being disposed off like that in the states. Is this even legal there? We usually get 1-3 months notice period, then continue to work for these 3 months to teach the new hire or finish our open tasks. If we won't find another job in time, we would get 70-80% of the previous salary until we find another job.

sudomateo(10000) 4 days ago [-]

> But I was also immediately ripped away from my calendar, docs, code, and more.

Layoffs are never easy. I've been through a few myself and it really takes the wind out of your sails. That being said, this sentence made me pause a bit. None of these things mentioned are actually yours. They are the property of Google.

One thing that helped me immensely in my career is understanding that my relationship with a company is a business relationship. They pay me for my time and skills and nothing more. Today I can have a job and tomorrow maybe not. I recommended learning how to separate your value from your employer. It's not easy but it's necessary. I'm not saying you can't enjoy what you do or be excited by it but don't fully tether yourself and your well-being to a company.

Godspeed!

kopirgan(10000) 4 days ago [-]

Exactly.. Many see it as some sort of marriage in an age where even marriages are contractual relations

dullcrisp(10000) 4 days ago [-]

I think their point was that they were told they could look for another internal role, but at the same time had their access revoked, which sends a very mixed message.

anal_reactor(10000) 4 days ago [-]

> I recommended learning how to separate your value from your employer.

This is a very recent development. Through the entirety of human history you'd keep working for the same employer for your entire life, which means it was very much worth it to cultivate that relationship, it's only now that we change jobs every two years. A friend of mine has a company in a very small town, and was complaining about an employee being lazy. I suggested 'just fire him if he doesn't do his job', to which I heard 'and then what? I'll have a jobless bum walking around my town. Thanks but no'. This really shifted my perspective: the situation where employer and employee have no moral obligations towards one another and it's 'business only' is not how the society at large should function.

ErigmolCt(10000) 4 days ago [-]

Companies will always remind you it's 'just business' when it suits them - so it's healthy to keep that same energy in return

heresie-dabord(3254) 4 days ago [-]

> I recommended learning how to separate your value from your employer. It's not easy but it's necessary.

Agreed, it is necessary to make deprogramming oneself easier — less painful — to the extent that one has come to identify with the work and/or culture and/or employer.

But it is also exhausting to maintain a façade of allegiance to a harshly indifferent power structure.

windward(10000) 4 days ago [-]

>I recommended learning how to separate your value from your employer.

Not just that: separate it from your career. Ensure that you and others would still value yourself even if you weren't receiving top decile income for an easy job. A misanthropic software developer is begrudgingly useful; a plain misanthrope isn't even mediocre.

kaon_(10000) 4 days ago [-]

'One thing that helped me immensely in my career is understanding that my relationship with a company is a business relationship'

That is just a culture thing. Most prominently in the US. In many cultures there is no clear boundary between personal relationships and business relationships. And why would there be? I would like to live in a world where kindness, dependability, punctuality, warmness, openness and forgiveness are values upheld both by natural and legal persons. And I have worked with many companies that have! As you can read in the comments, for every bad example you can find companies lead by empathic people that treat their employees humanely.

Google always pretended to be that company. And maybe they were for a long time. Now they've shifted. They really didn't have to but they did. The excuse of 'it's just a business relationship' really is just that: an excuse. The symptom of a culture with values so bankrupt that it accepts citizens being treated poorly and then blames the victims for expecting to be treated humanely.

And yes, it saves you a lot of personal pain if you expect the worst from your employer from the outset. But is the world really better off if we all expect to treat each other like criminals?

zonkerdonker(10000) 4 days ago [-]

I hope you use your new free time to beat every expert song on Wacca

cab11150904(10000) 4 days ago [-]

This is a pretty dumb redduht level comment. I'd personally probably just remove it.

cadamsdotcom(10000) 4 days ago [-]

Yep, it sucks. Speaking from experience - I was laid off a few years ago. I was sad my time ended, but my path forward was to leave SF with money and time to visit countries I'd always wanted to see.

It's a trend away from the post-WW2 'promise of lifetime employment'. Over the decades, companies have crept toward 'human autoscaling' so slow no one noticed. You're far from alone - every other company is doing it. Go see the numbers at https://layoffs.fyi . When the whole industry is doing something, companies must follow suit to stay alive.

Nurture your network! Keep being present on their feeds. Reach out to the ones on your team that you had personal relationships with. Some will shun you; it's not personal, they're ashamed and fearful. It is human nature, same as the company's behavior toward you is a company's nature.

There was never a better time to take things into your own hands. Go look at @IndyDevDan's content on youtube and test the limits with agentic coding: https://agenticengineer.com/state-of-ai-coding/engineering-w...

Spend your 8-20 paid weeks agentic-coding (not vibe-coding) silly projects for your nieces and nephews. You'll come back stronger and more employable than ever.

Don't be sad to be kicked out. The boot that kicked you was attached to a Hills Hoist.

YZF(10000) 4 days ago [-]

Human autoscaling. That's a good one. I mean it's not good.

We live in weird times. Companies are drowning in earnings. Their stock sky rockets. But they are unable, or not interested, to put people to work to grow their business. Because they are so big it distorts the entire economy. Because they are so big and so entrenched it's also hard to compete with them.

Less people makes the stock goes up?

And then AI too in the mix with many executives apparently believing it can just replace all the people. Who is going to buy the products then?

I have a feeling this is temporary. The wheel will turn and suddenly companies will hire like there's no tomorrow on some new shiny thing. It's gotta - right? Otherwise what?

codr7(10000) 4 days ago [-]

I would recommend actually learning something valuable rather than wasting energy on AI and becoming dumber in the process.

windward(10000) 4 days ago [-]

That is potentially the least convincing website I've ever seen. I feel like I'm being sold a timeshare.

AndyKelley(1342) 4 days ago [-]

> I really was just a fuckin cog in a mega corp.

Yes, you were. Next time, please choose a company that contributes to society rather than shoving ads in everyone's faces.

Mond_(2960) 4 days ago [-]

No need to kick someone while they are down.

underdeserver(3633) 4 days ago [-]

Google contributes to society.

Search helps people find information. YouTube is quite possibly the most prolific source of learning ever created. Without Google Translate I'd have had a much harder time in a recent trip to Japan.

There's a lot of bad, but no contribution to society? That's a bit much.

Disclaimer: Ex-googler (left 2 years ago).

knorker(10000) 4 days ago [-]

Ironically you're statistically very likely to be writing this comment in a browser based on chrome.

And Chrome really helped save us from an Internet 'embraced and extended' by Microsoft. We were heading for Microsoft succeeding in their (not first) attempt at owning the Internet.

kome(1439) 4 days ago [-]

i am also extremely pissed at his complete lack of self awareness... of course, i am sad for what happened to him. but holly shit. do you think you were saving the world or what? you were working on a glorified spyware.

pb7(10000) 4 days ago [-]

He did, he worked at Google. What is your contribution to society? Some language reinventing the wheel for the 500th time? Google created a dozen of those alone and they don't even make the footnote of the contributions list.

mystifyingpoi(10000) 4 days ago [-]

> Relationships that took me years to cultivate... mostly going to be gone too.

I don't want to sound condescending, but if being forced out of the job means end for your relationships built for years, maybe these relationships weren't built as they should. They should have been built with the people as people, not coworkers, and definitely not using company as the communication ground.

neilv(3544) 4 days ago [-]

That sentence caught my eye too.

First thought was whether they meant corporate political capitol transactional relationships.

Second thought was maybe they meant that, inevitably (or so it seems, probably thinking depressed), they'd drift apart, since everyone's busy with family and work, and around the workplace was the only times they'd have to interact.

In the latter, even if you have beyond-work social relationships, the opportunities to interact outside of work and the lunchtime might tend to be like 'drinks after work', and effectively disappear as well. If that was your mode while working together, that's fine, and probably you don't want to see even more of each other then. That doesn't mean you weren't seeing them as people beyond coworkers. So, once no longer working with each other, you both need to actively change things to make opportunities to interact.

roncesvalles(10000) 4 days ago [-]

Most relationships do not survive being ripped away from the spatial and temporal context in which they were cultivated. How many of your middle school, high school and even college buddies do you still have a relationship with?

I think there's some stigma with confronting the fact that relationships are just ephemeral. We are social creatures in the sense that we can cooperate with each other on a task laid in front of us, but once that task is done, we mostly tend to drift apart onto the next task with another group of people. And that's okay. We're only weakly social with everyone except our direct family and significant others. The quality of a relationship is in no way measured by how long it endured.

riffraff(567) 4 days ago [-]

I see where you're coming from, but relationship need some amount of contact to survive.

Work forces you to be in contact, if the majority of your time is spent elsewhere due to changing job, or city, or gym, or having kids.. it's a blow.

I try to keep in touch with ex co-workers I cared about, but we live in different countries, at different stages in life, with different priorities, and it's hard to say the relationship is well.

That doesn't mean the relationships weren't built as they should, IMHO, they are just different kinds of relationships.

ErigmolCt(10000) 4 days ago [-]

I get where you're coming from, but I think it's a little more complicated than that.

jillesvangurp(3201) 4 days ago [-]

I experienced something similar at Nokia around the time things were starting to go bad (due to competition from Google and Apple). I got caught up in one of the earlier layoff rounds. As I've been able to reconstruct since then what happened was roughly that:

- I got a excellent performance review and a small raise. All good, keep on doing what you are doing! I was pretty happy.

- Nokia started to prepare for layoffs and gave units targets for numbers of people to lay off and amounts of money to save. They tried to spread the pain.

- Because of my team's multi site setup the choice came down to cutting at one of two sites. They picked my site. Management was concentrated at the other site.

- Because I was at the higher end of the spectrum in terms of salary, I was one of the natural choices for laying off. This was just done based on the numbers and had nothing to do with performance.

So, my bosses boss flew over to give us the news and that was it. Nokia was pretty nice about it. I was put on immediate gardening leave, I got the usual severance payment based on time served, and a decent amount of start up funding in the form of a grant.

Since things were chaotic, other teams in the same site were still hiring new people with roughly the same qualifications. I was actually bucketed in with a project I wasn't even a part of. That whole project got shut down and apparently it was convenient to pretend I was working on that just so they could avoid firing other people in different parts of the organization. Somebody had to solve a big puzzle and I was a piece that fit in the right place. It wasn't personal.

In retrospect, one of the best things Nokia could do for me was firing me. I was coasting and the whole thing forced me to rethink what I was doing. If you are late thirties and a bit comfortable in your job, you might want to make a move. Or at least think about what you would do if you were forced to suddenly.

Lesson learned: job security is an illusion and employment relations are business relations. Don't take it personal. These things happen. Part of a high salary is insuring yourself against this kind of stuff and dealing with it when it happens. Part of the job.

windward(10000) 4 days ago [-]

>job security is an illusion

It really is. Even government and blue chips aren't safe. In fact, those are where you'll find it's the most disconnected from your own agency.

mixermachine(10000) 4 days ago [-]

> job security is an illusion

Depends a bit on your country. My CEO can fire me but there is a longer notice period depending on how long I have been with the company.

- 2 years: 1 month

- 5 years: 2 months

- 8 years: 3 months

...

- 20 years: 7 months

Germany btw.

insomniacity(3426) 3 days ago [-]

> decent amount of start up funding in the form of a grant

This is fascinating? What was it in absolute terms, or relative to your base salary?

Did you have to have a viable startup idea and it was paid to the incorporated company? Or was it just extra cash in your personal bank account?

Did you do that, or did you just get another job?

quotemstr(3220) 4 days ago [-]

'The magic of first love is our ignorance it can ever end'.

One of the most difficult realizations you must confront in this industry is that almost everything you build will disappear. It will be ruined, ignored, slandered, and then forgotten. Almost all of your late night epiphanies and bugs conquests will fade anonymously into the anonymous blackbody spectrum entropy demands planet Earth emit.

You must come to peace with this reality. You must accept the transience of glory into your heart. You must prepare yourself, deep down, for any reality of off-sites and planned presentations and electric roadmaps to disappear in an instant. It gets easier after the first few times, trust me. You emerge a sadder and wiser man.

The only thing we can do is create moments of excellence --- occasions on which we can reflect when we are old and gray and take solace, even pride, in knowing we put every bit of ourselves into a task and did it well. There's honor and nobility in excellence even when doomed.

And who knows? You can't predict what will endure. If we're lucky, once in our careers, if we continually apply the best of ourselves, we'll do something that escapes all this destruction and endures.

fud101(10000) 4 days ago [-]

you looking for someone to mentor? damn.

JKCalhoun(3408) 4 days ago [-]

"Whatever you do in life will be insignificant but it is very important that you do it."

― Ghandi

throwaway2037(2851) 4 days ago [-]

First, this is pretty rough what happened to the person. My condolences.

Second, completely tangential to the content of the blog post: Was anyone else surprised by the number of comments/'mentions'/likes/reposts? I haven't seen so much activity on a single blog post in years. Normally, blog posts that accept comments have 10 or less comments. This one has hundreds.

Cthulhu_(3510) 4 days ago [-]

It looks like a Bluesky integration which will get a lot more engagement than a blog post. The author was a 'CSS advocate' at Google, which implies a strong emphasis on networking.

gary_0(3539) 4 days ago [-]

Their blog looks like it's integrated with Bluesky, where they have 15K followers, so that's where the activity is coming from. It's not uncommon for high-profile devs to get that much engagement there.

JimDabell(2160) 4 days ago [-]

I keep seeing this pop up everywhere. I'm sure he's a great guy, but the level of attention he's getting is massively disproportionate. A lot of great people have been laid off recently!

whiplash451(10000) 4 days ago [-]

« Relationships that took me years to cultivate... mostly going to be gone too »

Why? What prevents you from spending time with your ex-colleagues?

Strom(10000) 4 days ago [-]

Probably because most interactions were on company time. Because of course if the relationships were outside of work, then changing jobs would have little effect.

darknavi(2851) 4 days ago [-]

Relationships here might also mean professional relationships.

I think many of those can still survive a job transition, but some of them may rely on the fact that he is on the Chrome team doing Chrome things. Those relationships would now be moot (professionally).

bsimpson(3548) 4 days ago [-]

A potentially unique feature of Google (at least pre-pandemic/McKinsey) is that it cultivated communities of people in a particular discipline despite being spread across the world.

When I first met Adam, we were both UX Engineers. We'd all gather in NYC in the spring and in the Bay Area in the fall for internal conferences. Adam lives in Seattle. There are plenty of people who adore him who aren't geographically close enough to meet for the proverbial beer. I suspect that's also true for the connections he made outside of Google.

cess11(10000) 4 days ago [-]

That's a good time to read up on Google's involvement in genocide and tyranny.

cab11150904(10000) 4 days ago [-]

Why now and not before? Because some spoiled manbaby lost his cushy job?

azangru(10000) 4 days ago [-]

I've skimmed through the comments; and seen that most people have commented on the cog in the machine thing, or on layoffs in general and how they suck.

To me, the shock from this blog post was about seeing a Chrome developer relations engineer whom I have grown to admire and who has been doing a stellar job educating web developers on new html and css features, get the sack. He was one of the best remaining speakers on web topics at the Chrome team (I am still sad about the departure of Paul Lewis and Jake Archibald); and produced a lot of top-notch educational materials (the CSS podcast; the conference talks; the demos).

What does this say about Google's attitude to web and to Chrome? What does this say about Google's commitment to developer excellence?

I understand that this is a personal tragedy for Adam; but for me personally, this is also a huge disillusionment in Google.

gtirloni(1339) 4 days ago [-]

It says they are getting ready for the future when some govt agency splits them up and they are shedding the load now (the areas they will have to sell).

raffael_de(10000) 4 days ago [-]

While possibly a traumatic experience for Adam, I fail to see the significance of this beyond anecdotal level. And I find it rather odd to argue that after all Google did and didn't do that this is what is causing disillusionment with Google. By now Chrome is basically just a Trojan Horse with advertisement and surveillance for this purpose hidden in the inside.

gman83(10000) 4 days ago [-]

Maybe they're not confident in the case against them: https://www.wired.com/story/the-doj-still-wants-google-to-di...

wiether(10000) 4 days ago [-]

> for me personally, this is also a huge disillusionment in Google

This feels like 'I installed Chrome before Google went evil'.

https://fortune.com/2025/03/19/tesla-owners-elon-crazy-bumpe...

noosphr(10000) 4 days ago [-]

>What does this say about Google's commitment to developer excellence?

Look inside the tensorflow code base for your answer.

I had the Kafkaesque experience of reporting a bug, being told there is no bug by a help desk employee, while the bug was throwing up errors in the official docs.

To top it off I got a message by one of the onshore team months later that they were going to solve it only for the person to be fired within a week.

I've mostly moved to jax for daily experiments. Hopefully the fact that codebase is small and modular will mean that when Google Googles all over the project there will be enough know how to maintain a community fork.

weatherlite(10000) 4 days ago [-]

It probably just didn't have enough economic value for the company, from your explanation of the role I'm not sure I see the value either. The guy probably earned enough money in a few years that would take me 15 years of work, I'm not sure this as a 'personal tragedy'.

drdrek(10000) 4 days ago [-]

There are very serious talks about forcing google to divest from Chrome/Android, I would bet that's the reason

Geenkaas(10000) 4 days ago [-]

I am listening to a podcast Adam Argyle is talking in, listening to what he is passionate about and then getting axed by Google is painful to hear as now it is clear that Google is not passionate about those things (anymore). It is also painful personally because it is what I am passionate about (and my job). Link: https://dev.to/whiskey-web-and-whatnot/leveraging-css-web-de...

atotic(10000) 4 days ago [-]

Agreed, Adam really is one of the best at what he does. His talks, demos, were always so interesting. My guess is that he'll be at Microsoft shortly.

What Google is saying with this layoff is that they no longer care about web developer relations. Chrome has not been well funded for years.

Firefox did the same thing five years ago, when they fired David Baron, who was one of the top 5 engineers in the world that understood how HTML layout works. He got instantly hired by Chrome.

It is kind of crazy that the core group that moves web standards forward is around 150 people. And most of them did not get rich off it, and have been doing it for decades.

jldugger(10000) 4 days ago [-]

> What does this say about Google's attitude to web and to Chrome? What does this say about Google's commitment to developer excellence?

It probably says 'the DOJ really is gonna force us to sell Chrome.'

dennis_jeeves2(10000) 4 days ago [-]

>a stellar job educating web developers on new html and css features, get the sack.

I have trouble relating to the evangelist fervor that some developers develop toward their craft.

forestgreen76(10000) 4 days ago [-]

This certainly isn't new. I know someone who worked at Google who mentioned the company culture has been souring since the start of the pandemic. I suspect Google will have a slow death akin to Yahoo in the coming years.

throwanem(3029) 4 days ago [-]

> What does this say about Google's attitude to web and to Chrome? What does this say about Google's commitment to developer excellence?

Everything that's needed saying for at least the last decade.

lapcat(2643) 4 days ago [-]

This article was actually posted 3 days ago. I saw it back then and read the comments. You can see the old timestamp here: https://hn.algolia.com/?q=https%3A%2F%2Fnerdy.dev%2Fex-googl...

I think this is what HN calls the 'second chance pool'.

I absolutely hate when HN reposts an article and alters/falsifies the timestamps. It's so incredibly misleading.

B1FF_PSUVM(10000) 4 days ago [-]

In other occasions, time seems to pass very slowly for the aging of first-page items. Probably relativistic effects of large amounts of hidden mass ...

ur-whale(2802) 4 days ago [-]

Silver lining: one less person working on the spying machine.

gh0stcat(10000) 4 days ago [-]

Honestly this, if these people are so smart, I truly believe they can help shift the ratio of spying/nefarious/addictive tech to productive/helpful/truly world changing tech in a positive direction. We need to distribute talented people across more industries to improve the world and technology for everyone.

jasonvorhe(3497) 4 days ago [-]

Can barely read this post because scrolling feels so sluggish and weird on a Chrome-based browser on a Pixel 9 Pro. Hope the playful effects are worth it for the author.

Redoubts(10000) 4 days ago [-]

it's probably the commenting section at the bottom (which also took forever to load in)

javawizard(10000) 4 days ago [-]

That was painful to read.

I had a very similar experience at Google about a year ago, and the worst part of it was that they did it 2 weeks before I was set to receive a 6-figure retention bonus for sticking around for 2 years after an acquisition.

Several other members of my team got the boot at the same time. All of us had come in via that acquisition and were set to receive that bonus, and because of the layoffs, none of us did. Folks I talked to on the inside stopped just short of saying that was why we were chosen.

It was especially galling because years before at the company that eventually got acquired by Google, I survived a round of layoffs, and leadership issued stay bonuses for everyone who was left. Those bonuses explicitly stated that they were still valid in the event that we were laid off before their time period was up.

Big companies are soulless.

yesimahuman(2614) 4 days ago [-]

Might be worth talking to a lawyer. Sorry to hear that, absolutely maddening

jmyeet(10000) 4 days ago [-]

You should consult a lawyer about this. You might be SOL but if this happened to several people, you might be able to show the company didn't act in good faith because there's a pattern of people about to receive their bonus being laid off. Layoffs aren't meant to work that way.

Generally layoffs involve someone who doesn't know who you are picking names almost at random from a spreadsheet. Management may fight for certain people to stay. Then legal and HR get involved and look through the layoff list to see if the chosen employees are problematic. For example, if the layoffs include too many people from protected classes, which opens them up to being sued. For example, if your company is 20% women but the layoffs are 50% women, that's going to be an issue.

Avoiding paying substantial retention bonuses can work the same way, if a pattern can be shown.

A simple letter from a lawyer probably won't do anything. Large companies are prepared for that.

For anyone who does come across this, here's my best advice: if you are acquired and your new employment contract includes a retention bonus, you want that contract to say that the retention bonus is payable unless:

1. You leave voluntarily within that period; or

2. You are terminated with cause within that period.

Otherwise, you should get it.

VagabundoP(10000) 4 days ago [-]

Did you sue? Because that's bullshit. The retention agreement should have included that clause anyway.

ncr100(10000) 4 days ago [-]

Awful experience.

What is interesting is our denial, as (ex-)corporate employees, that the corporation is NOT FAMILY...even though we may feel it is.

> Big companies are soulless.

'And God created the C Corporation' -nowhere in the Bible / Koran / Hinduism / Buddhism / Torah

I feel this lesson keeps being re-learned by us people / workers ...

cmrdporcupine(2889) 4 days ago [-]

That's awful and the most amazing thing you could do now is get together with those ex-coworkers or similar people and compete with Google in whatever business domain it was that made them acquire your former employer.

Because, having been through the acquisition process at Google myself, my general cynical take is: Google acquires companies to get rid of them, to stop them from competing and not to 'add your uniqueness to their collective.'

Keeping employees on retention bonuses is a way, in aggregate, of stopping them from going off and inventing something that eats their bottom line.

You should look into legal action. And failing that, compete with them.

delfinom(10000) 4 days ago [-]

You guys should have a consultation with a lawyer. It's a little cheaper if you guys just use one lawyer to go after Google for the retention bonus if there is a case ;)

singron(10000) 4 days ago [-]

It might be too late now, but I've successfully negotiated (before signing) retention deals like this to be pro-rated in the event of non-voluntary termination. It's perfectly reasonable for exactly this reason, and companies have no legitimate reason to deny it.

pjdemers(10000) 3 days ago [-]

The only retention bonuses I ever seen were to be paid in immediately, in full on involuntary termination. There was a 'for cause' clause where bonuses don't get paid for termination with a cause, but the causes were listed in writing.

mcv(10000) 4 days ago [-]

Layoffs are one thing, but to be cut off without any notice, that really sucks. I usually know months in advance that when I'll leave, so I'll have time to finish what I'm working on and train the people who will take over my responsibilities. It seems weirdly destructive for a company not to allow for that.

As for email, calendar etc, I think the lesson here is not to depend on anything from your employer. Keep everything under your own control, so you won't lose too much when you get fired.

mrgoldenbrown(10000) 4 days ago [-]

Are you outside the US by chance? Sudden layoffs like this are the norm here.

swah(1278) 4 days ago [-]

Glad I have a chance to peek into his world, but should he have posted this?

JKCalhoun(3408) 4 days ago [-]

I suppose you have to decide for yourself if you're going to spend the rest of your life trying to grovel (okay, a rather pointed word to choose) for future employers.

baking(10000) 4 days ago [-]

Must be a Chrome developer. His blog is frustratingly hard to read on Firefox. I felt like I was going blind in real time.

baggachipz(3531) 4 days ago [-]

Looks terrible in Safari too

z_open(10000) 4 days ago [-]

It is a chrome developer. His claims that he was raising the quality level of the web are particularly hilarious given that he worked at google. Maybe the salary of google blinds people into believing this.

spicyusername(10000) 4 days ago [-]

    I really was just a fuckin cog in a mega corp.
Yep. One of the most unfortunate realities of modernity.

Your managers, or your managers managers, or their managers don't care about you. At all. If you ask them on the weekend, they'll decry that the things they are asked to do are horrible. but they'll still do it. Some gladly.

They are themselves cogs in the machine.

A machine that goes all the way to the executive class, and they really don't care about you. In fact, more likely than not, they detest you.

We all participate in this hostile culture, in various ways. Usually using the excuse that we need to pay rent, eat, find the work interesting, or with some other excuse that justify the means.

It seems like it's hard to do the right thing when you have something you want to buy or otherwise spent your whole life getting here, before realizing what here is.

LPisGood(10000) 4 days ago [-]

I feel like this is a very dramatic view of things. Have you ever been in a management position?

vonneumannstan(10000) 4 days ago [-]

>I really was just a fuckin cog in a mega corp. >Yep. One of the most unfortunate realities of modernity.

The crazy thing to me is the lack of awareness of these people. Has hiring at Google fallen off that badly? Was there always such a gap between 'smart enough to work at google' and 'smart enough to realize their corpo-we're one big family-speak is total BS' ?

nikolayasdf123(10000) 4 days ago [-]

> I really was just a fuckin cog in a mega corp.

yep, you always was.

bigtech and corporate make a good illusion that you aren't. brace, if you let yourself believe in that illusion.

shadowgovt(10000) 4 days ago [-]

So the key thing here is that this didn't used to be how things were at Google.

People outside the ecosystem disbelieve, but I had the mixed privilege of watching the company evolve from a spicy startup to a megacorp. There isn't one point in time you can put your finger on when it shifted, but the shift happened. And for Googlers who'd been there forever, they were legitimately startled to learn that all their years of work hadn't made them insiders as the lines were drawn and management consolidated into something more approximating a traditional corporation.

If there's a lesson here, I think it's that there is a difference between a company like old Google and a company like new Google, but if you only want to work at old Google, you have to pay very close attention to the signs that things are changing around you. Capitalism, to be certain, incentivizes drift in that direction, from small outfit where everyone knows everyone to 100-thousand-person megafirm with concerns about its tax obligations to Ireland.

ssimpson(10000) 4 days ago [-]

I feel like its unfair to say every single direct manager doesn't care about their folks. I care about each and every person on my team, I care if they are engaged and if they can do their job. I care if they get sick and give them the time to make sure they feel better. I care about their career and try to help them along. Maybe I'm the minority, but I think that lots of managers of ICs should and do feel this way. As you go up the ladder, i can see that going down as the scope increases, but thats why you have managers, to keep attention to those details. Now i've had directors and stuff that do not care about their managers. I've also had managers that aren't great and don't care.

You are 100% correct though, we are all cogs in the machine. In the end, the people at the top don't care about anything below them if it isn't making them an the shareholders more money. If they do, they are a unicorn and i hope everyone gets to work with someone like that.

When I was laid off from RAX, it was a super emotional time. I had a job where I got to hang out with my friends and good people doing good stuff, and we also did some work (the work we were doing was so enjoyable most of the time, it didn't feel like work). I've never been able to capture that since and it has contributed greatly to my desire to get out of leadership roles.

ibejoeb(10000) 4 days ago [-]

> We all participate in this hostile culture

You can try to participate less. It's also work, but for some people, it's better than the corporate environment.

Keep your expenses under control. (That alone can be hard to do if you're relatively successful in tech, so I mention it because it's something to really think about.) Network in real life to find projects that have finite durations. Take some time between those projects and use that to both relax and develop new business. Go to a different city for a few days, maybe for an organized meetup or a conference (even if you don't attend) and try to meet people. You're double dipping here. Go sightseeing or something else entertaining, and then try to work a room.

> they really don't care about you. In fact, more likely than not, they detest you.

Hopefully more the former than the latter. You're not getting married. You shouldn't be out to find a new family, and everyone hates that metaphor anyway. You probably will find people you do like, though. Since you're targeting well defined business, you don't have to live with that relationship if it doesn't pan out. You just need to get to your next cycle.

I've found a lot of people that I really do like. Some, I still do business with, and others I just sometimes get together with for dinner or a cocktail. We know we still like each other because there's no longer any money involved.

This is a defensive play also since you aren't all-in on one engagement. You can't get complacent just because you're on a W-2 and it all feels good, as this post illustrates.

I'm aware that this isn't an out-of-the-gate strategy. If you're gainfully employed now, save up. Even if you hate your job, use it to establish a stable position so that you can get out when you want to. Seriously consider what you think are the luxuries in life and whether you actually enjoy them or if you have been convinced that you do for some other purpose, like pleasing others, peacocking, or keeping up with the Joneses.

dennis_jeeves2(10000) 4 days ago [-]

>In fact, more likely than not, they detest you.

Engineers, nerds, developers remember this ALWAYS. Do not work hard for ANYONE including your family members unless they reciprocate proportionately.

freeamz(10000) 4 days ago [-]

This has being like this when that changed from 'Personal' department to 'Human Resources'

Do the corp that is what you are!

The lower level of hell is definitely reserved of industrial psychologists and advertisers!

clutchdude(3624) 4 days ago [-]

This reminds me of the demotivator with pictures of cogs.

> Just because you are necessary doesn't mean you are important.

https://despair.com/cdn/shop/files/worth_6b813282-f9f8-41ab-...

acyou(10000) 4 days ago [-]

Content aside, does anyone else have poor scrolling performance on his blog? I saw similar issues on both mobile and desktop, what's with that?

neop1x(10000) 4 days ago [-]

Also the content did not fit my Galaxy S24's screen width when used in portrait. The author's previous work in the Chrome team is visible. 'The shoemaker's children go barefoot' as they say. :)

jiveturkey(10000) 3 days ago [-]

exceptionally poor to the point of being unreadable, on safari. in chrome it works perfectly for me. i believe it is due to the bluesky feed, seeing as the author's own content is really short.

i'll have to figure out how to block bluesky. the blockers focus on privacy stealing feeds like facebook etc.

vzaliva(10000) 4 days ago [-]

I guess the lesson is: don't get emotionally attached to your job. Despite all the "we're like a family" talk, at the end of the day, you're just an employee. Never forget that.

fullshark(10000) 4 days ago [-]

We all want to be seduced though, we all want to believe we are special, we all want to believe our work has value and we anthropomorphize the company on the other end of the relationship, believing it's a partnership.

Protect yourself, but it's a sad way to spend 40-60 hours of your life, constantly reminding yourself that your job is just a paycheck and not putting yourself into your work.

Not sure how so many can do it and be motivated. My current strategy is compartmentalization, and it all just seems unsustainable long term, cause in the back of my mind it all seems so empty.

mont_tag(10000) 4 days ago [-]

ISTM software engineers have been living in a privileged and elite world. They are then utterly shocked to be treated like employees are treated elsewhere.

Pretty much anywhere if you are let go, your email access and physical access are cut off immediately. Start-ups do this all the time as funding gets tight or there is a need to pivot.

I get that this sucks (and have been on the both the dishing out side of this and the receiving end of it multiple times). It is a fact of life. It would be more mature to move on rather than blog about how you feel wronged by your former employer. The next employer may see this post and reason that it is unsafe to hire this person because they feel a need to damage the company's reputation on the way out (for Google, there isn't much risk here, but for smaller companies, threats to the reputation matter).

ncr100(10000) 4 days ago [-]

> It is a fact of life. It would be more mature to move on rather than blog about how you feel wronged by your former employer.

+1.

While there is an imaginable 'victim' viewpoint, it is a job for pay with a clear employment contract that was agreed to before employment start, between the Employee and the Corporation, including local and state and federal laws, permitting EXACTLY THIS type of termination.

Further, corporations can't be seen to Favor one Googler vs another. Especially since there is NO GUARANTEE this Ex-Googler isn't one of those AR-15 toting weirdos who condone violence against their now ex-coworkers .. so allowing them futher access to the (huge) universe that Google owns and controls .. its corporate workings .. even for an additional 5 seconds after termination, can be reasonably seen to be Foolish .. so they would cut ties Immediately.

ygouzerh(10000) 4 days ago [-]

> It's a fact of life

I will argue the contrary. Companies with US mindset makes us think that.

Countries with social safety net have a better way of handling it. Even in the country where I am now living, Hong Kong, which is very liberal, half of the companies let you have 1 month of notice period.

gedy(10000) 4 days ago [-]

> ISTM software engineers

Probably not the International Society of Travel Medicine, what's the abbreviation?

HdS84(10000) 4 days ago [-]

Honestly, the problem is not that there are layoffs, the problem is that the process sucks.

you don't need to fire this person immediately - you can talk to him, wind his operations down and then let him go. I.e. in Germany it's often half a year between announcing a layoff and anything happening (besides other stuff like making sure the layoff applies to the newest people first). Even if you don't want such a long period - talking to him and giving him a few weeks to wind down at your firm and starting to search for a new job seems perfectly reasonable. What happens if he wreaks havoc on your firm out of revenge? Really? Happens practically never. If it happens, sue him.

ofc this process applies to reasonable layoff - if it's for something egregious (breaking the law) you can and should fire him immediately.

sensanaty(10000) 2 days ago [-]

This is largely a US issue.

My partner here in NL got fired from a regular retail job, but the company still had to pay her 3 months of salary because she had a permanent contract and worked there for 3 years. I mean it's minimum wage, but still. She also had a month of warning, plus she could choose whether she wanted to use her remaining vacation days or have it paid off alongside the 3 months (the holiday pay gets taxed up the ass though).

Vegenoid(10000) 4 days ago [-]

Getting laid off sucks, but this comment isn't about that. What I noticed when I read the post is that the website isn't very good. It's laggy, as in slow to load, scroll, and for the mouse-hover stuff to respond, and this is on a fancy modern macbook. It seems to focus on pretty modern web aesthetic over presenting content. This is exactly the kind of website that makes me bemoan the tendency to prioritize looking better than a simple website when comparing 2 static images, and not prioritizing the experience of actually using the website.

I find these things have a real 'well it works on my machine' about them. Whereas sites that stick to simple tech (ex. HN) are far more likely to work well on all machines.

b8(2862) 4 days ago [-]

The website works smoothly on my Pixel 6A. Not sure if it's JavaScript or some other software issues taking up your Mac's hardware resources.

the-grump(10000) 4 days ago [-]

Browses very smoothly on my iPhone and it looks great.

ra7(156) 4 days ago [-]

It's always sad to see people lose their jobs, but it's telling how often it's ex-Googlers posting about layoffs. Feels like a lot of the shock is just realizing they're just as replaceable and as much of a 'cog in the machine' as everyone else. Google spent years selling the idea that it was special, but this feels like a real coming back down to earth moment for the employees.

globular-toast(10000) 4 days ago [-]

The OP strikes me as being quite immature. Like a first breakup or something. I think it's less about Google selling themselves as being special and more that people like OP have been led to believe they are special. A lot of them have been treated like royalty: super privileged lives, only experiencing the nice bits of society, top education, then straight into a 6-figure job where you get to be part of a special club with a prestigious google.com email address etc. It's going to be a shock to anyone to have that taken away abruptly when you're a decade or more into this lifestyle.

Most people have to go through shit like this at some point in their life. Most don't get to reap in internet sympathy by the bucketload, though. For some people it really actually sucks. OP is likely a millionaire already, could just take time off to adjust and reflect, then accept one of the numerous job offers that will be on the table. They might even end up doing something useful with their lives instead of advertising.

bitbasher(10000) 4 days ago [-]

I was laid off (as a founding engineer) nine years ago from a startup. It __still__ burns to this day.

There's a betrayal in there that is hard to let go. It was a catalyst for burnout and an overall vitriol for the entire tech industry that hasn't really let up to this day.

Luckily, I created a product that has given me financial freedom with zero employees. I don't think I'd have made it if I kept working for people.

beacon294(10000) 3 days ago [-]

Have you written anywhere about your product creation? I would like to create a product and it seems like there's a lot of unique things to get past. I'm looking for resources.

mrgoldenbrown(10000) 4 days ago [-]

>I really was just a fuckin cog in a mega corp.

This article could have been interesting if they talked about why they ever thought they weren't just a cog. Like what cognitive blinders did they have on? Does Google have a unusually effective 'we're all a family' type of internal propaganda?

gorfian_robot(10000) 4 days ago [-]

corpos are really good at creating a false narrative around shared missions/values/etc

mattbillenstein(10000) 4 days ago [-]

You have to understand who you work for - most companies don't really care about their employees - they are means to an end and if they weren't absolutely needed, corps would do the work other ways.

And Google is way past 'Don't be Evil' days...

dennis_jeeves2(10000) 4 days ago [-]

>And Google is way past 'Don't be Evil' days...

Wonder what prompted the change in L&S ...

I suspect over a period of time caring people realized that the people they care for are a shitty lot, so they become less caring.





Historical Discussions: Google is winning on every AI front (April 12, 2025: 986 points)
Google Is Winning on Every AI Front (April 10, 2025: 19 points)

(986) Google is winning on every AI front

986 points 6 days ago by vinhnx in 1831st position

www.thealgorithmicbridge.com | Estimated reading time – 15 minutes | comments | anchor

(PSA: Many people are interested in this post, so I removed the paywall)

Even in my most bullish days for OpenAI, I secretly preferred DeepMind. I felt Demis Hassabis was trustworthy in a way Sam Altman couldn't be—a true scientist, not a businessman. Also, AlphaGo and AlphaZero. To me, they're not historical milestones but nostalgia. ChatGPT is cool, but do you remember move 37? And the AlphaZero-Stockfish 8 chess games? My love and interest for AI grew parallel to DeepMind's successes. I was rooting, almost like a sports fan, for them.

So, for years, I've been low-key saddened by their constant fumbling. They had the tech, the talent, the money, the infrastructure, the prestige, and the conviction to make ChatGPT—or whatever else they wanted—before OpenAI. They didn't. CEO Sundar Pichai was afraid to thwart Google's main revenue source (search and ads). He chose prudence over boldness. Good—they didn't shoot themselves in the foot.

Because they didn't shoot at all.

But that was the last mistake they made. Today, two and a half years after the ChatGPT debacle, Google DeepMind is winning. They are winning so hard right now that they're screaming, "Please, please, we can't take it anymore, it's too much winning!" No, but really—I wonder if the only reason OpenAI, Anthropic, Meta, and Co. ever had the slightest chance to win is because Google fumbled that one time. They don't anymore.

I'd been holding off on writing about Gemini 2.5. Focusing on the AI model didn't feel like enough to tell the full story of Google's comeback. Gemini 2.5 is only a piece—albeit a big one—of something much larger. Back in December 2024, I said they would come out on top by the end of 2025. We're not even halfway there and it's already happened. (For reasons I still don't understand, some people genuinely thought xAI had a shot.)

Anyway, to avoid turning this post into an over-stylized narrative—which I do more often than I'd like—I'm keeping it to bullet points. It hits harder that way. You'll see what I mean when the list just... doesn't end.

Google and DeepMind fans: enjoy the long-overdue rebirth.


Is that all? Not really. Let's not forget that Google is a consumer software company as much as an AI company. They build better models than OpenAI and Anthropic, but they do plenty of other things no one else can do.


Hello friend!

Before you read on, a quick note: I write this newsletter in an attempt to understand AI and offer that understanding to others who may find themselves similarly disoriented (who isn't these days...)

The project continues thanks to a small group of generous readers who support it with ~$2/week (ChatGPT costs twice as much!). If you find value here—or simply wish for this quiet effort to persist—you are most welcome to join them.

If you already have, my sincere thanks. This exists because of you.


  • OpenAI is trying to enter makers where Google is already king. Let's take search (one of the most important software categories). Google and YouTube (#1 and #2 in total search traffic, both within the Alphabet umbrella) get a combined 50% of the total traffic share in the world (on desktop). ChatGPT is (laudably, though) at #6 with 2.33%. Didn't "ChatGPT kill Google" 2 years ago? Sam Altman knows he's trying to take on the ultimate boss. (Besides, if anyone has a data moat, that's Google: YouTube, Search, Books, Photos, etc.).

  • But search is merely one of the seven Google products with at least two billion monthly active users (Search, YouTube, Android, Maps, Chrome, Gmail, and Play Store). I praise OpenAI for getting ChatGPT to 500 million weekly active users (again, laudable), but they play in different leagues. What happens when Google adds Gemini to its entire product suite? Suddenly, billions of people have default access to the best AI in the world for free. That's without mentioning the also extremely popular Workspace cloud services (Drive, Gmail, Docs, Sheets...).

  • Talking about cloud computing. Google is, besides an AI company and a software company, a hyperscaler: Google Cloud rents chips to companies like Anthropic and partners with companies like Nvidia. OpenAI, meanwhile, depends on Microsoft's Azure and Anthropic further depends on Amazon's AWS. While they're both tickling Google's feet with their AI releases, Google is fighting against true giants—Microsoft and Amazon—in the cloud space with its right arm.

  • And the left arm? Wait: AI, software, cloud... I'm forgetting something. Oh, of course, Google is also a hardware company. With its left arm, Google is fighting Nvidia in the AI chip market (both to eliminate its former GPU dependence and to eventually sell its chips to other companies). How well are they doing? They just announced the 7th version of their TPU, Ironwood. The specifications are impressive. It's a chip made for the AI era of inference, just like Nvidia Blackwell. But Nvidia is busy fighting small startups that aim to grab market share on the inference side of AI workloads, whereas Google's revenue is secured elsewhere. And OpenAI... well.

  • Finally—because, as weird as it sounds, there's a "finally"—Google is a phone company. Yes, somehow—already out of limbs—it is "fighting" Apple and Samsung. And they're doing quite well. Gemini is already on the Pixel 9 (and probably all future phones they build). For instance, you can share the screen with it or ask it to take over your camera. Meanwhile, Apple is still deciding whether AI is vaporware or not, and OpenAI is figuring out whether people will voluntarily give up on the idea of phones. Others have tried—to no avail.

I'm surely leaving something out, but I think that's enough winning for Google.

When I put the Google + DeepMind picture together, I can only wonder why people, myself included, ever became so bullish on OpenAI or Anthropic or even Meta.

Now, let's wait for their responses to this. I'll be here to cover any newsworthy release—even if I've already made my bet on who's most likely to win.




All Comments: [-] | anchor

remoquete(3471) 6 days ago [-]

I was a loyal Claude user until I decided to try Gemini 2.5. 'After all', I thought, 'I already use a Pixel phone, so it's integrated with Android. And with Google Drive. And I can get it through my Google One subscription.'

And now that I'm on it, I don't think I'm going back. Google did it again.

firecall(10000) 6 days ago [-]

Just to add, I am mainly an iPhone user. But I have a Google Pixel 6a for dev and testing reasons.

And Google Gemini for the voice assistant is excellent fun!

Just being able to ask it weird and wonderful whilst on a road trip with the kids is worth the cost of a cheap Pixel phone alone!

ksec(119) 6 days ago [-]

At this point something happened to Google, may be Open AI? And it seems everything is finally moving.

Unfortunately Pixel is still not available as widely as iPhone. They still need to work on its hardware as well as distribution.

The only thing I dislike is their AOM only or anti JPEG XL.

weinzierl(233) 6 days ago [-]

Out of interest: Using Gemini on your phone, integrated and all, obviously reduces friction, but would you say convenience is the only reason for you not going back or do you feel Gemini is a real improvement as well?

akkad33(3624) 6 days ago [-]

> Google did it again.

This is quite vague. What did they do

acheron(3037) 6 days ago [-]

Is this an example of how to integrate ads into an AI response?

singhrac(10000) 6 days ago [-]

Can you choose a model via the Gemini app? I can on the webapp (finally), but on the mobile app it won't let me choose.

Using Gemini via Google Workspace.

indigodaddy(1121) 6 days ago [-]

'They're also small, which makes them perfect for edge applications and phone integration.'

- you can't locally install or onprem Gemini right, so why does small make it better for edge applications, essentially because small means light and fast, so it will respond quicker and with less latency? Requests are still going out over the network to Google though right?

bagacrap(10000) 6 days ago [-]

Wrong, Android and Chrome infer locally

noname120(10000) 5 days ago [-]

You probably missed the news: https://news.ycombinator.com/item?id=43632049

nullbio(10000) 5 days ago [-]

Can we please outlaw advertising with AI chatbots before it becomes a plague? Once it starts, there is no turning back. But if we can get ahead of this now based on what we've already learned about the internet then we can perhaps prevent the carnage that is going to happen.

zipmapfoldright(10000) 5 days ago [-]

what we need is not more regulation

antirez(1163) 6 days ago [-]

Gemini 2.5 pro is as powerful as everybody says. I still also use Claude Sonnet 3.7 only because the Gemini web UI has issues... (Imagine creating the best AI and then not allowing to attach Python or C files if not renamed .txt) but the way the model is better than anyone else is a 'that's another league' experience. They have the biggest search engine and YouTube to leverage the power of the AI they are developing. At this point I believe too that they are likely to win the race.

discordance(10000) 6 days ago [-]

Instead of renaming files to .txt, you should try Gemini 2.5 pro through OpenRouter with roo, Cline or using Github Copilot. I've been testing GH Copilot [0] and it's been working really well.

0: https://github.blog/changelog/2025-04-11-copilot-chat-users-...

BillyTheKing(10000) 6 days ago [-]

apart from those weird file attach issues I actually think they've got a much better UI than anthropic as well - much much snappier even with extremely long chats (in addition to much higher limits obviously, totally different league). I love using it

eru(2960) 6 days ago [-]

> At this point I believe too that they are likely to win the race.

I'm not so sure.

In the mid 2010s they looked like they were ahead of everyone else in the AI race, too. Remember the (well-deserved!) spectacle around AlphaGo? Then they lost steam for a while.

So I wouldn't bet that any momentary lead will last.

nolist_policy(10000) 6 days ago [-]

On Chrome you can share your whole Project directory to Gemini. I think it uses the File System Access api which Firefox doesn't support.

torginus(10000) 6 days ago [-]

Will there be a winner at all? Perhaps it's going to be like cars where there are dozens of world class manufacturers, or like Linux, where there's just one thing, but its free and impossible to monetize directly.

paradite(3639) 6 days ago [-]

You can bypass this problem by embedding relevant source code files directly in the prompt itself.

I built a desktop GUI tool called 16x Prompt that help you do it: https://prompt.16x.engineer/

jstummbillig(10000) 6 days ago [-]

I am not even sure how to use Gemini 2.5 pro ergonomically right now. Cursor and Windsurf both obviously have issues, probably optimized too much around Claude, but what else is there?

Is everyone copy pasting into the Google AI studio or what?

thorax(10000) 6 days ago [-]

In AI Studio, it seemed to let me upload pretty much any file and tokenize it without renaming, FWIW

oezi(10000) 6 days ago [-]

Their technical progress is indeed impressive. And their price dumping of 2.5 Pro for free will have moved a lot of technical users.

The key question is if the can stop the decline in search or pivot their revenue streams to Gemini.

ZYbCRq22HbJ2y7(10000) 6 days ago [-]

Is there really a decline in web searches or in Google's usage vs competitors? Seems like one of those greatly exaggerated rumors?

porphyra(10000) 6 days ago [-]

As long as Google continues to hamstring themselves with censorship for no reason, I can't use their products. The other day I asked gemini 2.5 pro 'which british ceo said that his company's products were bad' and the response was

> I'm just a language model, so I can't help you with that.

https://g.co/gemini/share/cb3afc3e7f78

Chatgpt 4o correctly identified the guy as Ratner and provided the relevant quotes.

tomrod(677) 6 days ago [-]

Try asking with a Ceasar cipher.

Tiktaalik(3104) 6 days ago [-]

It seems more likely just a weird bug considering that I can't understand at all why this topic would be considered controversial or censure worthy.

(casually googling this same line just now does reveal an AI suggestion with the correct answer)

uejfiweun(10000) 6 days ago [-]

I wouldn't bother with the official Gemini app. I don't know why Google even bothers with it at this point. I only interact with 2.5 through AI studio and it's great through that interface.

int_19h(10000) 5 days ago [-]

The model itself is much more lax about such stuff than ChatGPT and especially Claude. The filters are applied on top of that, but products using it via the API don't suffer this problem.

glacier5674(10000) 6 days ago [-]

If you search for Shockmaster, the AI Overview you get is as follows:

> Fred Alex Ottman, a retired American professional wrestler, is known for his WWF personas 'Tugboat' and 'Typhoon'. He also wrestled as 'Big Steel Man' and 'Big Bubba' before joining the WWF in 1989. Ottman wrestled for the WWF from 1989–1993, where he was a key ally of Hulk Hogan. He later wrestled in World Championship Wrestling as 'The Shockmaster', a character known for raising his fist and making a 'toot-toot' sound.

Which is obviously false. The 'toot-toot' was part of his gimmick as Tugboat, while the Shockmaster gimmick is known for its notoriously botched reveal.

Point being, Google is losing on the 'telling one early 90s wrestling gimmick from another' AI front.

krackers(3617) 6 days ago [-]

Gemini 2.5 pro is not the same that powers web search (or any of the dozen other Gemini related things).

ruuda(3312) 6 days ago [-]

I'm trying Imagen 3 to add pictures to a presentation in Google Slides, and it's making such basic mistakes that I thought image models weren't making any more by now. I tried for half an hour to prompt it into generating an illustration of a Thinkpad facing with the back to the viewer, so the keyboard is not visible. It couldn't do it, it would always make the keyboard face towards the viewer. Or you ask for an illustration of an animal pointing a finger, and it gives it an additional arm. Meanwhile you ask OpenAI to ghiblify a picture while changing the setting and adding 5 other things, and it absolutely nails it.

remoquete(3471) 6 days ago [-]

Image generation is extremely good in GPT now. Claude's edge is UX. But I doubt Google won't catch up on both fronts. It has the technology and manpower.

boznz(3573) 6 days ago [-]

I thought it was just me. A few hours ago Gemini told me 'As a language model, I'm not able to assist you with that.' This was after generating an image a few minutes earlier. I think the copy/paste buffer pulled in some old source files I had attached a few days earlier (no idea how) because under the 'sources and related content' it now showed two files Gemini is obviously calling its brother imagen for offloading the image generation, which is smart I guess if it works

vunderba(10000) 6 days ago [-]

From my comparison tests focusing on prompt adherence, I would agree 4o edges out Imagen3 as long as speed is not a concern.

https://genai-showdown.specr.net

If Imagen3 had the multimodal features that 4o had, it would certainly put it closer to 4o, but being able to instructively change an image (instruct pix2pix style) is incredibly powerful.

It's crazy how far GenAI for imagery has come. Just few short years ago, you would have struggled just to get three colored cubes stacked on top of each other in a specific order SHRDLU style. Now? You can prompt for a specific four-pane comic strip and have it reasonably follow your directives.

torginus(10000) 6 days ago [-]

This reads like sports commentary.

nailer(487) 5 days ago [-]

It also reads like someone thinking benchmarks make good products.

glimshe(10000) 6 days ago [-]

Gemini Pro 2.5 is fantastic. I'm anti Google and a long time ChatGPT user. I use it for text review and research and it's well ahead the competition. Let's see how long they last giving it for free.

Turfie(10000) 6 days ago [-]

Why are you anti Google?

retskrad(819) 6 days ago [-]

Gemini 2.5 Pro might be one of the best for coding but for creative tasks like writing and sharing ideas, I vastly prefer GPT 4o and GPT 4.5 to an even larger extent.

CuriouslyC(3195) 6 days ago [-]

Gemini 2.5 Pro's prose isn't quite as tight as GPT4.5s, but being able to have long form writing where your entire manuscript is in the context, along with all your source/background material, and it all gets used _well_ is pretty stellar. That lets Gemini update scenes in a really thoughtful, intelligent way, and frankly it's a better beta reader than ~85% of the people I've hired on Fiverr.

int_19h(10000) 5 days ago [-]

For creative writing, Claude runs circles around both IMO.

Lukman(10000) 6 days ago [-]

In my experience Claude 3.7 is far superior for coding than Gemini 2.5. I tried it in Cursor and I wanted it to work, as a recent ex-Googler. I repeatedly found it inferior. I think it's still behind Claud 3.5 for coding.

It would decide arbitrarily not to finish tasks and suggest that I do them. It made simple errors and failed to catch them.

jinay(10000) 6 days ago [-]

Cursor is likely very tuned for Claude (prompts-wise and all) due to its dominance with 3.5 and now 3.7. Still, Gemini 2.5's tool calling has been pretty poor in my experience which Cursor heavily relies on.

SparkyMcUnicorn(10000) 6 days ago [-]

It depends on the task, and prompting feels different.

I've found that sonnet is possibly better at starting things from scratch and frontend code, while Gemini has been able to one-shot difficult problems and fix bugs that sonnet has struggled with.

Switching between them is a frequent occurrence for me.

It might be relevant that I've completely stopped using Cursor in favor of other tools/agents.

thawab(10000) 6 days ago [-]

Your issue is because:

1- the cursor agent doesn't work with gemini. Some times the diff edit even doesn't work.

2- Cursor does semantic search to lower the token they sent to models.

The big advantage for Gemini is the context window, use it with aider, clien or roo code.

entropyneur(10000) 6 days ago [-]

Same. I went back from Gemini to Claude yesterday, because Gemini was writing decidedly worse code, at times not even able to stick to Python syntax. Using Aider.

Kholin(3642) 6 days ago [-]

Same here. I've seen some articles and LLM benchmarks that Gemini 2.5 Pro is better than Claude 3.7 on coding, but base on my recent experience of solving code problems with two products, Claude still gave me better answer, Gemini response are more detail and well structured, but less accurate.

ddalex(10000) 6 days ago [-]

Use Roo Code, Cursor is terrible

csmpltn(10000) 6 days ago [-]

Google is winning because LLMs without a (good) search backend are mostly useless.

So many LLM workloads require high quality search results (backed by efficient, relevant, complete and up-to-date indexes), and that's Google's muscle.

nailer(487) 5 days ago [-]

Copilot has been doing this, using Bing, for a year not and it's been great.

throwaway519(10000) 6 days ago [-]

It isn't when considering Google's brand has (long) lost trust in how it hanles data. This is especially true with larger companies, F500 type brands, who tend to avoid Google for infra as do governments.

rusk(10000) 6 days ago [-]

Tell that to the bank I work for that just switched to GCP

decimalenough(3504) 6 days ago [-]

F500/government are conservative and tend to stick with the vendors they know, which is why Azure has gained so much traction despite being worse than AWS & GCP pretty much across the board.

Trust in handling data doesn't really come into this; if anything Google has a very strong reputation for security.

suddenexample(10000) 6 days ago [-]

Weird - it's hard to beat widespread online narratives, but as someone who worked at Google there's no company I'd trust more with the 'handling' part of my data. There's no doubt that on device is always a more private option, but if you've decided to keep data in the cloud, then Google is probably one of the most secure options you could choose.

VirusNewbie(3633) 6 days ago [-]

What F500 brands do you think avoid google? Most of the biggest ones are on GCP for ML at least.

brap(10000) 6 days ago [-]

I think the key is that Google is the gateway to the internet for the entire world.

Think about it. Whatever you're trying to do online, either Search, Chrome or Android are in the critical path of like 90%+ of people if not more.

Once AI is deeply baked into these products, which are more like the "operating system" of the internet, the race is basically over.

Not to mention that Google is already sitting on the largest money printer in history and they can do this all day.

throwup238(465) 6 days ago [-]

That becomes really clear when using Gemini Deep Research vs OpenAI. I tried running the same research questions in both and Google regularly reads 10x as many sources as OpenAI and does it faster.

davidmurdoch(10000) 6 days ago [-]

Whatever model responds to me on my Android phone is as dumb as rocks. The Assistant was actually much better.

fragmede(1245) 6 days ago [-]

Could be worse, you could be using Siri.

thunderbird120(10000) 6 days ago [-]

This article doesn't mention TPUs anywhere. I don't think it's obvious for people outside of google's ecosystem just how extraordinarily good the JAX + TPU ecosystem is. Google several structural advantages over other major players, but the largest one is that they roll their own compute solution which is actually very mature and competitive. TPUs are extremely good at both training and inference[1] especially at scale. Google's ability to tailor their mature hardware to exactly what they need gives them a massive leg up on competition. AI companies fundamentally have to answer the question 'what can you do that no one else can?'. Google's hardware advantage provides an actual answer to that question which can't be erased the next time someone drops a new model onto huggingface.

[1]https://blog.google/products/google-cloud/ironwood-tpu-age-o...

noosphr(10000) 6 days ago [-]

And yet google's main structural disadvantage is being google.

Modern BERT with the extended context has solved natural language web search. I mean it as no exaggeration that _everything_ google does for search is now obsolete. The only reason why google search isn't dead yet is that it takes a while to index all web paged into a vector database.

And yet it wasn't google that released the architecture update, it was hugging face as a summer collaboration between a dozen people. Google's version came out in 2018 and languished for a decade because it would destroy their business model.

Google is too risk averse to do anything, but completely doomed if they don't cannibalize their cash cow product. Web search is no longer a crown jewel, but plumbing that answering services, like perplexity, need. I don't see google being able to pull off an iPhone moment where they killed the iPod to win the next 20 years.

krackers(3617) 6 days ago [-]

Assuming that DeepSeek continues to open-source, then we can assume that in the future there won't be any 'secret sauce' in model architecture. Only data and training/serving infrastructure, and Google is in a good position with regard to both.

retinaros(10000) 6 days ago [-]

they re not alone to do that tho.. aws also does and I believe microsoft is into it too

marcusb(10000) 6 days ago [-]

From the article:

> I'm forgetting something. Oh, of course, Google is also a hardware company. With its left arm, Google is fighting Nvidia in the AI chip market (both to eliminate its former GPU dependence and to eventually sell its chips to other companies). How well are they doing? They just announced the 7th version of their TPU, Ironwood. The specifications are impressive. It's a chip made for the AI era of inference, just like Nvidia Blackwell

imtringued(10000) 6 days ago [-]

Google is what everyone thinks OpenAI is.

Google has their own cloud with their data centers with their own custom designed hardware using their own machine learning software stack running their in-house designed neural networks.

The only thing Google is missing is designing a computer memory that is specifically tailored for machine learning. Something like processing in memory.

mike_hearn(3636) 6 days ago [-]

TPUs aren't necessarily a pro. They go back 15 years and don't seem to have yielded any kind of durable advantage. Developing them is expensive but their architecture was often over-fit to yesterday's algorithms which is why they've been through so many redesigns. Their competitors have routinely moved much faster using CUDA.

Once the space settles down, the balance might tip towards specialized accelerators but NVIDIA has plenty of room to make specialized silicon and cut prices too. Google has still to prove that the TPU investment is worth it.

albert_e(2464) 6 days ago [-]

Amazon also invests in own hardware and silicon -- the Inferentia and Trainium chips for example.

But I am not sure how AWS and Google Cloud match up in terms of making this verticial integration work for their competitive advantage.

Any insight there - would be curious to read up on.

I guess Microsoft for that matter also has been investing -- we heard about the latest quantum breakthrough that was reported as creating a fundamenatally new physical state of matter. Not sure if they also have some traction with GPUs and others with more immediate applications.

jxjnskkzxxhx(10000) 6 days ago [-]

I've used Jax quite a bit and it's so much better than tf/pytorch.

Now for the life of me, I still haven't been able to understan what a TPU is. Is it Google's marketing term for a GPU? Or is it something different entirely?

acstorage(10000) 5 days ago [-]

Unclear if they can actually beat GPUs in training throughout with 4D parallelism

6510(10000) 5 days ago [-]

The problem is always their company never the product. They had countless great products. You cant depend on a product if the company is reliably unreliable enough. If they don't simply delete it for being expensive and 'unprofitable' they might initially win, eventually, like search and youtube, it will be so watered down you cant taste the wine.

AlbertoRomGar(10000) 5 days ago [-]

I am the author of the article. It was there since the beginning, just behind the paywall, which I removed due to the amount of interest the topic was receiving.

giorgioz(10000) 6 days ago [-]

No it's not obvious at all Google is winning AI on every front. There is few stuff Google is systemically behind: 1) UX 2) product and use case innovation

I just open Google Gemini Android app and asked to generate a JS script with Gemini 2 Flash and did the same with ChatGPT.

Gemini did not highlighted with colors the code. ChatGPT did highlighted with colors the code.

Colors in code are extremely useful to grok the code and have a nice DX.

I'm sure if I dig into Gemini's product I'll find dozens of UX/DX ways in which ChatGPT is better.

Google is still playing catch-up with LLM products. ChatGPT is still the one making the announcements and then Gemini doing the same UX/use case enhancements weeks/months later.

Legend2440(10000) 6 days ago [-]

>Gemini did not highlighted with colors the code. ChatGPT did highlighted with colors the code.

I don't care if the code is highlighted nearly as much as I care if it's right.

This kind of stuff is nice-to-have but the quality of the underlying LLM is what really matters.

neuroelectron(10000) 6 days ago [-]

This is very simply a bunch of minor stuff Googlites feel like they're above implementing. They would rather let you implement that and you both get a cut.

levocardia(10000) 6 days ago [-]

Google is winning on every front except... marketing (Google has a chatbot?), trust (who knew the founding fathers were so diverse?), safety (where's the 2.5 Pro model card?), market share (fully one in ten internet users on the planet are weekly ChatGPT users), and, well, vibes (who's rooting for big G, exactly?).

But I will admit, Gemini Pro 2.5 is a legit good model. So, hats off for that.

8f2ab37a-ed6c(10000) 6 days ago [-]

Google is also terribly paranoid of the LLM saying anything controversial. If you want a summary of some hot topic article you might not have the time to read, Gemini will straight up refuse to answer. ChatGPT and Grok don't mind at all.

torginus(10000) 6 days ago [-]

Didn't GCP manage to lose from this position of strength? I'm not sure even if they're the third biggest

sigmoid10(10000) 6 days ago [-]

I wouldn't even say Gemini Pro 2.5 is the best model. Certainly not when you do multimodal or function calling, which is what actually matters in industry applications. Plain chatbots are nice, but I don't think they will decide who wins the race. Google is also no longer in the mindset to really innovate. You'll hear surprisingly similar POVs from ex-Googlers and ex-OpenAI guys. I'd actually say OpenAI still has an edge in terms of culture, even through it fell deep.

sublimefire(10000) 6 days ago [-]

It might be worth throwing in an analogy to windows PCs vs Mac vs Linux. G appeals to a subset of the market at the end of the day, being "best" does not mean everyone will use it.

rzz3(10000) 6 days ago [-]

You really hit the nail on the head with trust. Knowing the power of these AIs and how absolutely little I trust Google, I'd never tell trust Gemini with the things I'll say to ChatGPT.

bjackman(3220) 6 days ago [-]

Well, Google is also very well placed to integrate with other products that have big market share.

So far this has been nothing but a PM wankfest but if Gemini-in-{Gmail,Meet,Docs,etc} actually gets useful, it could be a big deal.

I also don't think any of those concerns are as important for API users as direct consumers. I think that's gonna be a bugger part of my the market as time goes on.

killerstorm(10000) 6 days ago [-]

Winning =/= won. The point is that they are improving on many fronts. If they were already recognized as THE leader there would be no point in making a HN post about it.

tbolt(3545) 6 days ago [-]

Add to this list apps. As in ChatGPT and Anthropic have nice desktop software applications for Mac and Windows.

a2128(10000) 6 days ago [-]

My experience with their software has been horrible. A friend was messing around with Gemini on my phone and said my name is John, and it automatically saved that to my saved info list and always called me John from then on. But when I ask it to forget this, it says it can't do that automatically and links me to the Saved Info page, which is a menu they didn't implement in the app so it opens a URL in my browser and asks me to sign into my Google account again. Then a little toast says 'Something went wrong' and the saved info list is empty and broken. I tried reporting this issue half a year ago and it's still unresolved. Actually the only way I was ever able to get it to stop calling me John is to say 'remember to forget my name is John' in some way that it adds that to the list instead of linking me to that broken page

mark_l_watson(3619) 6 days ago [-]

I look more to Google for efficient and inexpensive LLM APIs, and in a similar way to Groq Cloud for inexpensive and fast inferencing for open models.

ChatGPT has a nice consumer product, and I also like it.

Google gets a bad rap on privacy, etc., but if you read the documentation and set privacy settings, etc. then I find them reasonable. (I read OpenAI's privacy docs for a long while before experimenting with their integration of Mac terminal, VSCode, and IntelliJ products.)

We live in a cornucopia of AI tools. Occasionally I will just for the hell of it do all my research work for several days just using open models running on my Mac using Ollama - I notice a slight hit in productivity, but still a good setup.

Something for everyone!

ACCount36(10000) 6 days ago [-]

Trust is important, and Google has a big rep for killing its projects. As well as making the most moronic braindead decisions in handling what they don't kill off.

No one is going to build on top of anything 'Google' without having a way out thought out in advance.

Not that important for LLMs, where drop-in replacements are usually available. But a lot of people just hear 'by Google' now and think 'thanks I'll pass' - and who can blame them?

culopatin(10000) 6 days ago [-]

I had to stop using Gemini 2.5 because the UI peaks my MPB cpu at max and I can't type my prompt at more than a character every 2 seconds. I can't even delete my chats lol. Anyone else?

hermitShell(10000) 6 days ago [-]

I would like to think they just let other companies have the first mover advantage on chatbots because it only disrupts Google in their search business, which was already pretty far gone and on the way out. Where is AI actually going to change the world? Protein folding, robotics, stuff that the public doesn't hype about. And they looked at the gold rush and decided "let's design shovels". Maybe I'm giving them too much credit but very bullish on Google.

joshdavham(10000) 6 days ago [-]

My hesitancy to adopt Gemini, despite being a heavy GCP and workspace user, is I kinda lost trust when trying to use their earlier models (I don't even remember those models' names). I just remember the models were just so consistently bad and obviously hallucinated more than 50% of the time.

Maybe Gemini is finally better, but I'm not exactly excited to give it a try.

rs186(10000) 6 days ago [-]

Exactly. Google may have a lead in their model, but saying they are 'winning on every front' is a very questionable claim, from the perspective of everyday users, not influencers, devoted fans or anyone else who has a stake in hyping it.

jimbob45(2509) 6 days ago [-]

I'm scared they're going to kill it off. Every good idea they've had in the last 20 years has been killed off. Even Fuchsia/Zircon, which should have supplanted Android a full decade ago.

karunamurti(10000) 5 days ago [-]

Also not OSS. That's not a win for me.

jonplackett(10000) 6 days ago [-]

I'm still really surprised everyone loves Gemini 2.5 so much.

Even for coding I find GPT4o to be more concise and write more sensible things.

I get the one-shot 'build me a flight simulator' type thing is special to Gemini 2.5 - but who actually ever uses it that way?

I feel a bit old school for aging it, but I still prefer ChatGPT at this moment. Am I the only one?

thebigspacefuck(3247) 6 days ago [-]

If you're not using something like Cline or Cursor you should give them a try.

I haven't found any OpenAI models good for agentic coding. o3-mini and 4o were both worse than 3.5 Sonnet. 3.7 and Gemini 2.5 Pro both seem be better than 3.5. I still use 4o with search as my primary reference model though.

nabla9(144) 6 days ago [-]

Most analysts don't differentiate between:

1) AI research as science and

2) Productization and engineering that science into something to sell.

While Google DeepMind focused on things that won Hassabis and Jumper Nobel prize in Chemistry, OpenAI took transformers architecture (Google researchers invented), built the first big model, and engineered it into a product.

Google has the best researchers, and does most research. When they finally chose to jump into the business and pull Hassabis and others from doing more important stuff to moneymaking, obviously they win.

dragonwriter(10000) 6 days ago [-]

No, that's not at all obvious because building products for any given market is a radically different competency than research, and the kind of basic, fundamental research that tends to win Nobels is actually a competency a step further removed from product than normal corporate R&D; outside of Google-scale orgs, it's mostly (whether or not of Nobel quality) done at universities with both product-oriented research and actual productization done in industry, often based largely on published academic results, but generally with no strong direct connection between the people doing the basic research and the people winning the competition for successful commercial products.

codelord(10000) 6 days ago [-]

As an Ex-OpenAI employee I agree with this. Most of the top ML talent at OpenAI already have left to either do their own thing or join other startups. A few are still there but I doubt if they'll be around in a year. The main successful product from OpenAI is the ChatGPT app, but there's a limit on how much you can charge people for subscription fees. I think soon people expect this service to be provided for free and ads would become the main option to make money out of chatbots. The whole time that I was at OpenAI until now GOOG has been the only individual stock that I've been holding. Despite the threat to their search business I think they'll bounce back because they have a lot of cards to play. OpenAI is an annoyance for Google, because they are willing to burn money to get users. Google can't as easily burn money, since they already have billions of users, but also they are a public company and have to answer to investors. But I doubt if OpenAI investors would sign up to give more money to be burned in a year. Google just needs to ease off on the red tape and make their innovations available to users as fast as they can. (And don't let me get started with Sam Altman.)

ksec(119) 6 days ago [-]

> (And don't let me get started with Sam Altman.)

Please do.

falcor84(10000) 6 days ago [-]

> Google can't as easily burn money

I was actually surprised at Google's willingness to offer Gemini 2.5 Pro via AI Studio for free; having this was a significant contributor to my decision to cancel my OpenAI subscription.

imiric(10000) 6 days ago [-]

> I think soon people expect this service to be provided for free and ads would become the main option to make money out of chatbots.

I also think adtech corrupting AI as well is inevitable, but I dread for that future. Chatbots are much more personal than websites, and users are expected to give them deeply personal data. Their output containing ads would be far more effective at psychological manipulation than traditional ads are. It would also be far more profitable, so I'm sure that marketers are salivating at this opportunity, and adtech masterminds are hard at work to make this a reality already.

The repercussions of this will be much greater than we can imagine. I would love to be wrong, so I'm open to being convinced otherwise.

codelion(2350) 6 days ago [-]

It's interesting to hear your perspective as a former OpenAI employee. The point about the sustainability of subscription fees for chatbots is definitely something worth considering. Many developers mention the challenge of balancing user expectations for free services with the costs of maintaining sophisticated AI models. I think the ad-supported model might become more prevalent, but it also comes with its own set of challenges regarding user privacy and experience. And I agree that Google's situation is complex – they have the resources, but also the expectations that come with being a public company.

netcan(10000) 6 days ago [-]

> there's a limit on how much you can charge people for subscription fees. I think soon people expect this service to be provided for free and ads would become the main option to make money out of chatbots.

So... I don't think this is certain. A surprising number of people pay for the ChatGPT app and/or competitors. It's be a >$10bn business already. Could maybe be a >$100bn business long term.

Meanwhile... making money from online ads isn't trivial. When the advertising model works well (eg search/adwords), it is a money faucet. But... it can be very hard to get that money faucet going. No guarantees that Google discover a meaningful business model here... and the innovators' dilema is strong.

Also, Google don't have a great history of getting new businesses up and running regardless of tech chops and timing. Google were pioneers to cloud computing... but amazon and MSFT built better businesses.

At this point, everyone is assuming AI will resolve to a 'winner-take-most' game that is all about network effect, scale, barriers to entry and such. Maybe it isn't. Or... maybe LLMs themselves are commodities like ISPs.

The actual business models, at this point, aren't even known.

ramraj07(2610) 6 days ago [-]

I don't know what you did there, but clearly being ex OpenAI isn't the intellectual or product flex it is: I and every other smart person I know still use ChatGPT (paid) because even now it's the best at what it does and we keep trying Google and Claude and keep coming back.

They got and as of now continue to get things right for the most part. If you still aren'ĥt seeing it maybe you should introspect what you're missing.

greggsy(10000) 6 days ago [-]

'think soon people expect this service to be provided for free'

I have been using the free version for the past year or so and it's totally serviceable for the odd question or script. The kids get three free fun images, which is great because that's about as much as I want them to do.

apwell23(10000) 6 days ago [-]

> And don't let me get started with Sam Altman.

would love to hear more about this.

I made a post asking more about sam altman last year after hearing paul graham quote call him 'micheal jordan of listening'

https://news.ycombinator.com/item?id=41034829

tunaoftheland(10000) 6 days ago [-]

The ads angle is an interesting one since that's what motivates most things that Google and Meta do. Their LLMs' context window size has been growing, and while this might the natural general progression with LLMs, for those 2 ads businesses there's pretty straight paths to using their LLMs for even more targeted ads. For example, with the recent Llama 'herd' releases, the LLMs have surprisingly large context window and one can imagine why Meta might want that: For stuffing in it as much of the personal content that they already have of their users. Then their LLMs can generate ads in the tone and style of the users and emotionally manipulate them to click on the link. Google's LLMs also have large context windows and such capability might be too tempting to ignore. Thinking this, there were moments that made me think that I was being to cynical, but I don't think they'll leave that kind of money on the table, an opportunity to reduce human ad writers headcount while improving click stats for higher profit.

EDIT: Some typo fixes, tho many remain, I'm sure :)

mnky9800n(10000) 6 days ago [-]

Feel free to get started on Sam Altman.

knallfrosch(10000) 6 days ago [-]

Microsoft CoPilot (which I equate with OpenAI ChatGPT, because MS basically owns OpenAI) already shows ads in it's chat mode. It's just a matter of time. Netflix, music streamers, individual podcasters, YouTubers, TV manufacturers – they all converge on an ad-based business model.

hdjjhhvvhga(3228) 6 days ago [-]

> And don't let me get started with Sam Altman.

Why not? That's one of the reasons I visit HN instead of some random forum after all.

somenameforme(3666) 6 days ago [-]

> '[Google is] a public company and have to answer to investors'

As is an increasing trend, they're a 'public' company, like Facebook. They have tiered shares with Larry Page and Sergey Brin owning the majority of the voting power by themselves. GOOG shares in particular are class C and have no voting power whatsoever.

wslh(321) 6 days ago [-]

I get your perspective, but what we're seeing looks more like complex systems theory, emergent behavior, optimization, new winners. If models become commoditized, the real value shifts to last-mile delivery: mobile, desktop, and server integration across regions like China, Korea, the U.S., and Europe.

This is where differentiated UX and speed matter. It's also a classic Innovator's Dilemma situation like Google are slower to move, while new players can take risks and redefine the game. It's not just about burning money or model size, it's about who delivers value where it actually gets used.

I also think the influx of new scientists and engineers into AI raises the odds of shifting its economics: whether through new hardware (TPUs/GPUs) and/or more efficient methods.

olalonde(179) 6 days ago [-]

Do you think Sam will follow through with this?

> Therefore, if a value-aligned, safety-conscious project comes close to building AGI before we do, we commit to stop competing with and start assisting this project. We will work out specifics in case-by-case agreements, but a typical triggering condition might be "a better-than-even chance of success in the next two years."

zkmon(10000) 6 days ago [-]

People left, to do what kind of startups? Can't think of any business idea that won't get outdated, or overrun in months.

riku_iki(10000) 6 days ago [-]

> The main successful product from OpenAI is the ChatGPT app, but there's a limit on how much you can charge people for subscription fees

other significant revenue surfaces:

- providing LLM APIs to enterprises

- ChatBot Ads market: once people will switch from google search, there will be Ads $200B market at stake for a winner

tom_m(10000) 6 days ago [-]

I believe it. This is what typically happens. I would go to AWS re:invent and just watch people in the audience either cheer or break down as they announced new offerings wash away their business. It's very difficult to compete in a war of attrition with the likes of Google, Microsoft, and Amazon.

Not just small startups - even if you have ungodly amounts of funding.

Obviously the costs for AI will lower and everyone will more or less have the same quality in their models. They may already be approaching a maximum (or maximum required) here.

The bubble will burst and we'll start the next hype cycle. The winners, as always, the giants and anyone who managed to sell to them

I couldn't possibly see OpenAI as a winner in this space, not ever really. It has long since been apparent to me that Google would win this one. It would probably be more clear to others if their marketing and delivery of their AI products weren't such a sh-- show. Google is so incredibly uncoordinated here it's shocking...but they do have the resources, the right tech, the absolute position with existing user base, and the right ideas. As soon as they get better organized here it's game over.

og_kalu(3020) 6 days ago [-]

Open AI don't always have the best models (especially for programming) but they've consistently had the best product/user experience. And even in the model front, other companies seem to play catchup more than anything most of the time.

stellajager(10000) 6 days ago [-]

What cards has google played over the past three years such that you are willing to trust them play the 'cards at hand' that you alleged that they have? I could think of several things they did right, but I'm curious to hear which one of them are more significant than others from someone I think has better judgement than I do.

sumedh(10000) 5 days ago [-]

> OpenAI is an annoyance for Google

Remember Google is the same company which could not deliver a simple Chat App.

Open AI has the potential to become a bigger Ad company and make more money.

reportgenix(10000) 5 days ago [-]

valuable information

adrianN(3108) 5 days ago [-]

I think paying to bias AI answers in your favor is much more attractive than plain ads.

Waterluvian(10000) 6 days ago [-]

> I felt Demis Hassabis was trustworthy in a way Sam Altman couldn't be—a true scientist, not a businessman

Not that I think Demis is or is not trustworthy, but I think it's a bit foolish to believe it would be allowed to matter.

eru(2960) 6 days ago [-]

I also don't see why scientists should be more trustworthy than business people.

tim333(2589) 6 days ago [-]

It's already made some difference to how the companies are behaving - Deepmind doing quite a lot of work on protein folding and now protein drug interactions, OpenAI under Altman tying to do the startup max the money raised and user count thing.

flexie(3536) 6 days ago [-]

Google will need a far better LLM than OpenAI to throw them decisively off the AI throne, just like another company would need a far better search engine than Google to throw them off the search throne. ChatGPT is now the 7th highest ranking website on the planet - does anyone outside the HN crowd know about Google AI Studio?

Brands matter, and when regular people think AI, they think of OpenAI before they think Google, even if Google has more AI talents and scores better on tests.

And isn't it good? Who wants a world where the same handful of companies dominate all tech?

neuderrek(10000) 6 days ago [-]

Regular people is not where the money is. For example, I get Gemini as part of my employer's Google Workspace subscription, and as it is now decent enough, have no need to use anything else.

danpalmer(3096) 6 days ago [-]

> Google will need a far better LLM than OpenAI ... ChatGPT is now the 7th highest ranking website on the planet

And Google is #1 and #2, with search and YouTube. Distribution is a huge part of the problem and they've got some great distribution options.

uncomplexity_(3592) 6 days ago [-]

fair call but

1. unlike openai, google is already cashflow positive and doesnt need to raise any external funds

2. unlike openai, google already has the distribution figured out on both software and hardware

google is like an aircraft carrier that takes so fucking long to steer, but once done steering its entire armada will wipe you the fuck out (at least on the top 20% features for 80% use case)

anthropic already especialized for coding, openai seems to be steering towards intimacy, i guess they both got the memo that they need to specialize

ramesh31(3343) 4 days ago [-]

>ChatGPT is now the 7th highest ranking website on the planet - does anyone outside the HN crowd know about Google AI Studio?

This isn't about consumer facing chatbots anymore. Industry adoption is what matters. And GCP is a far far easier sell than Anthropic or OpenAI. If they both can't respond in a significant way (capability or price) very shortly, 2.5 is going to start eating their lunch.

paradite(3639) 6 days ago [-]

The author mentioned AlphaGo and Alpha Zero without mentioning OpenAI gym and OpenAI Five.

Those products show OpenAI was innovating and leading in RL at that stage around 2017 to 2019.

https://github.com/openai/gym

https://en.wikipedia.org/wiki/OpenAI_Five

bitpush(10000) 5 days ago [-]

This is the first I'm hearing about it.

CSMastermind(3197) 6 days ago [-]

I run every query I do through all the major models, up to 10 of them at this point.

Benchmarks aside Gemini 2.5 Pro is a great model and now often produces the best code for me but it's not notably better than any of the other frontier models in my testing each of which tends to have their own strengths and weaknesses.

And Google's wrapper around Gemini is easily the most frustrating of any of the major AI companies. It's content guardrails are annoying and I just learned yesterday it won't let you upload json files for whatever reason (change the extension to txt without modifying the contents in any way and it works just fine).

enlyth(10000) 6 days ago [-]

Gemini 2.5 Pro does this annoying thing where it decides to refactor every part of your code even if you didn't ask, and also it outputs way too many damn comments on almost every line in the style of:

// Increment variable by 1

I find Claude 3.7 better at following instructions, even though the solutions it comes up with may not be the best at times

ZeroTalent(10000) 6 days ago [-]

This is why we use Gemini and its context window as the architect and Sonnet 3.7 Max for implementation.

DisjointedHunt(2919) 6 days ago [-]

Not on cars, not in robotics, not in commercially deployed AI, not in enterprise investments in their cloud business.

They've got immense potential, sure. But to say that they're winning is a bit far from reality. Right now, their Cloud AI offerings to the enterprise are technologically superior to anything else out there from AWS, but guess what? AWS seems to have significantly more %age sales growth in this space with their larger base compared to GCP with their smaller market share.

The same can be said across turn based chat and physical AI. OpenAI continues to be the growth leader in the consumer space and a collection of Claude + self hosted + Gemini now in the enterprise / API space.

They need to be measuring themselves on moving the needle in adoption now. I'd hate for such amazing progress to stall out in a niche.

Philpax(761) 6 days ago [-]

I would say they're winning with Waymo: I took a fully autonomous taxi ride in the backseat in SF, and it just worked. No other company can currently do that, despite their promises and hype.

p0w3n3d(10000) 6 days ago [-]

I recently had to check some legal thing - I gave the pdf with law to both - chatgpt and Gemini, and I was able to convince the Gemini that my interpretation is right, but chatgpt was constantly opposing me. Later I checked and found out that my interpretation was wrong, so I'd say that chatgpt was better and moreover it spared me some problems with 'Polish IRS'

ZeroTalent(10000) 6 days ago [-]

'Polish IRS' — I never heard that term before. Do you mean the gov revenue service of Poland or something else?

labrador(2669) 6 days ago [-]

I only AI for one reason since I'm retired and live alone: Life-like chats with a reasonable approximation of a knowledgeable friend. With the new memory features ChatGPT excels at that. I'm not even sure Google cares about that; that goes to show how little of it I've noticed with Google.

unknown_user_84(10000) 6 days ago [-]

While I'm not sure it's exactly what you're looking for I've found success with a variety of Gemini models getting them to take to a specific persona when given initial prompts to take on a specific persona. Gemini 2.5 is specifically interesting because the <thinking> block shows how much the notebook is playing a persona/role vs. becoming that role. In my experience Gemini 2.5 Pro likes to revert to 'maintaining a persona' in the <thinking> block. I questioned it about this at one point and it pointed out that humans also maintain a certain persona in their responses, and that you can't see their thinking. Still not entirely sure what I think about that.

I have experimented with telling the notebook to change the <thinking> block to a more narrative style. It seems to like to revert to ordered lists and bullet points if not continuously prompted to think in narrative.

Regarding maintaining consistency throughout the chat I have noticed Gemini 2.5 seems able to do this for quite a while but falls victim to the needle in a haystack problem that all LLMs seem to suffer from with an extremely long context and no external tooling.

I have a substack post on creating the initial prompt, which I call a bootstrap, using AI Studio and a set of system instructions if you are curious to explore.

https://consciousnesscrucible.substack.com/p/creating-charac...

tkgally(3670) 6 days ago [-]

> Gemini 2.5 Pro in Deep Research mode is twice as good as OpenAI's Deep Research

That matches my impression. For the past month or two, I have been running informal side-by-side tests of the Deep Research products from OpenAI, Perplexity, and Google. OpenAI was clearly winning—more complete and incisive, and no hallucinated sources that I noticed.

That changed a few days ago, when Google switched their Deep Research over to Gemini 2.5 Pro Experimental. While OpenAI's and Perplexity's reports are still pretty good, Google's usually seem deeper, more complete, and more incisive.

My prompting technique, by the way, is to first explain to a regular model the problem I'm interested in and ask it to write a full prompt that can be given to a reasoning LLM that can search the web. I check the suggested prompt, make a change or two, and then feed it to the Deep Research models.

One thing I've been playing with is asking for reports that discuss and connect three disparate topics. Below are the reports that the three Deep Research models gave me just now on surrealism, Freudian dream theory, and AI image prompt engineering. Deciding which is best is left as an exercise to the reader.

OpenAI:

https://chatgpt.com/share/67fa21eb-18a4-8011-9a97-9f8b051ad3...

Google:

https://docs.google.com/document/d/10mF_qThVcoJ5ouPMW-xKg7Cy...

Perplexity:

https://www.perplexity.ai/search/subject-analytical-report-i...

jay_kyburz(1810) 6 days ago [-]

> 'produce a comprehensive analytical report exploring the conceptual and methodological intersections between Surrealist art techniques, Freudian dream analysis, and the practice of prompt engineering for AI image generation models (such as DALL-E, Midjourney, Stable Diffusion).'

Haha, what a perfect project for AI.

stafferxrr(10000) 6 days ago [-]

Great stuff. My prompts are falling behind after seeing what you are doing here.

I find OpenAI annoying at this point that it doesn't output a pdf easily like Perplexity. The best stuff I have found has been in the Perplexity references also.

Google outputting a whole doc is really great. I am just about to dig into Gemini 2.5 Pro in Deep Research for the first time.

siva7(10000) 6 days ago [-]

Matches also my experience that openai fell behind with their deep search product. And that deep search is basically the top tier benchmark for what professionals are willing to pay. So why should i shell out 200 dollar for an openai subscription when google gives me a better top-tier product for 1/10th of the price openai or anthropic are asking. Although i assume google is just more willing to burn cash in order to not let openai take more market share which would get them later on soo more expensive (e.g. iphone market share, also classic microsoft strategy).

ViktorRay(3308) 6 days ago [-]

Thanks for sharing your prompting technique. I will try to use that technique in the future as well.

ozgune(10000) 6 days ago [-]

I feel the article presents the data selectively in some places. Two examples:

* The article compares Gemini 2.5 Pro Experimental to DeepSeek-R1 in accuracy benchmarks. Then, when the comparison becomes about cost, it compares Gemini 2.0 Flash to DeepSeek-R1.

* In throughput numbers, DeepSeek-R1 is quoted at 24 tok/s. There are half a dozen providers, who give you easily 100+ tok/s and at scale.

There's no doubt that Gemini 2.5 Pro Experimental is a state of the art model. I just think it's very hard to win on every AI front these days.

yalok(10000) 6 days ago [-]

but also they compare reasoning and non-reasoning models - e.g. Meta's Llama 4

JKCalhoun(3408) 6 days ago [-]

Orthogonal — the remarkable thing about DeepSeek-R1 seems to me is that it shows how easy it in fact is to create an LLM. A quantitative hedge fund was able to throw money and develop a competitive LLM. Maybe that somewhat reveals that it's just a 'man behind the curtain.'

ww520(3406) 6 days ago [-]

May be it's my luck but I found a glaring issue with Gemini 2.5 Pro in AI Studio.

I asked it whether a language feature in Zig was available. It answered yes and proceeded to generate a whole working sample code. I compiled it and got an error. Reported the error and it said the error showed I typed it wrong and asked me to make sure it's typed correctly. Eh?! It's a copy-and-paste. I confirmed again it's wrong. It then said it must be my compiler version was too old. Nope, using the latest. It then said very convincingly that based on its extensively research into the language official documentation, official examples, and release notes, the feature must exist. I asked it to show me the reference materials it used to draw the conclusion. None of links it gave were valid. I told it they were wrong. It gave back another set of links and claimed it had checked the links to make sure they are alive. The links were alive but didn't contain any mention of the feature. I let it know again. It admitted couldn't find the mentioned feature. But it insisted the feature had been merged in a PR. The PR link it gave was unrelated. I let it know. It gave me another 3 PR's and said one mentioned something related so the feature must be in. At the point I gave up.

The issue was that it sounded very convincing and stated 'facts' very confidently, with backings to documents and other resources even if they were wrong or irrelevant. Even when told it gave the wrong info, it would double down and made up some BS reference material to back up its claim.

harvey9(10000) 6 days ago [-]

Generative AI makes things up so I'm surprised that you seem surprised. For some situations checking the documentation is still the best option.

Giorgi(3486) 6 days ago [-]

Google AI is a crap. Moment they start 'winning' you will see it everywhere.

lofaszvanitt(10000) 6 days ago [-]

Now watch the dance to protect their adsnitch ecosystem.

a1371(10000) 6 days ago [-]

I think my experience has been different from everyone else. As an owner of a pixel phone and multiple Google accounts, I wanted this to be true. But Gemini has been super inconsistent with tasks that are trivial for Google Assistant. I even bought the $26 AI plan for my account to help with some proofreading and it's been awful compared to ChatGPT. I'm about to cancel it.

flux293m(10000) 6 days ago [-]

Something I've noticed is that Gemini through gemini.google.com or through the mobile apps is vastly inferior to Gemini through aistudio.google.com. Much worse handling of long contexts amongst other things. Very odd that a product that is free (AI Studio use is free), is much worse than the product I am paying 20 quid a month for.

I find this to be especially true for the newer models like 'gemini-2.5-pro-preview-03-25', so if you haven't tried AI Studio yet, I'd give that a go.

dtquad(3667) 6 days ago [-]

Google is the primary target for current US anti-big-tech sentiments that are getting political traction with Lina Khan and Steve Bannon teaming up at a recent conference against US Big Tech companies. J.D. Vance has also expressed that he agrees with Lina Khan and Steve Bannon and would like to see US Big Tech companies like Google be forcibly split up.

What will happen with Google's AI wing when Google inevitably gets split up in the next 4-8 years?

fancyfredbot(10000) 6 days ago [-]

Are the administration really going to risk messing with one of their leading AI companies while they are also terrified of China catching up or overtaking them in leading edge AI?

I wouldn't put it past them but I don't think it's a given either.

pzo(10000) 6 days ago [-]

Apart from Gemini 2.5 Pro they have a decent Jack-of-all-trades master of none/price Gemini 2.0 Flash.

1) is dirty cheap ($0.1M/$0.4M),

2) is multimodal (image and audio),

3) reliable rate limit (comparing to OSS ml ai providers),

4) fast (200 tokens/s).

5) if need realtime API they provide as well for more expensive price (audio-to-audio)

It's my go to model for using as an API for some apps/products. https://artificialanalysis.ai/models/gemini-2-0-flash/provid...

buggyipadmettoo(10000) 5 days ago [-]

I thought genini 2 flash API was free (for personal use at least)? I just created an iOS shortcut to call it, and didn't pay anything.

godjan(10000) 6 days ago [-]

The article doesn't mention one of the most complex benchmarks - ARC challenge. All models suck in it https://arcprize.org/leaderboard

but Gemini and Claude still suck much worse then ChatGPT models

nolist_policy(10000) 6 days ago [-]

They haven't tested Gemini 2.5 Pro yet.

karel-3d(3042) 6 days ago [-]

Please explain to me like I am stupid.

If I want to use OpenAI models, I download ChatGPT app.

What do I need to do to use Google's model? They have so mamy things called Gemini... I genuinely have no clue

jwr(10000) 6 days ago [-]

Or, just use TypingMind or something similar to get access to all the major models through a single interface.

brap(10000) 6 days ago [-]

google.com/gemini

There's also AI Studio another commenter mentioned, but that's for more advanced users who want to tweak it

thebigspacefuck(3247) 6 days ago [-]

There's a Gemini app on mobile but if you're on desktop use https://aistudio.google.com. They are behind in this aspect, hopefully they release a desktop app with MCP.

cryptozeus(3070) 6 days ago [-]

This article is the example of why google ai is not winning market share. All you have shown is bunch of graphs and numbers, two image and video examples are horrible. This would not want me even touch google ai. Meanwhile world is going crazy over ghibli images with openai. Users are not stupid!

gavmor(10000) 6 days ago [-]

Do Ghibli images represent the most significant—lucrative, high-margin, world-changing, or ubiquitously impactful—vertical to which generative models can be applied?

sva_(3428) 6 days ago [-]

It is sort of funny to me how the sentiment about whoever seems to be leading in ML changes so frequently (in particular here on HN.) A couple months ago it felt like people were sure that Google completely fucked it up for themselves (especially due to the fact that they invented the transformer but didn't productize it themselves at first.)

For a short while, Claude was the best thing since sliced cheese, then Deepseek was the shit, and now seemingly OpenAI really falls out of favor. It kinda feels to me like people cast their judgement too early (perhaps again in this case.) I guess these are the hypecycles...

Google is killing it right now, I agree. But the world might appear completely different in three months.

patrickhogan1(10000) 6 days ago [-]

It's not just sentiment though. It's reality. Before December 2024 timeframe Google's models were awful. Now with 2.5 they are awesome.

There is no clear winner. The pace is fast.

h2zizzle(10000) 6 days ago [-]

You could also be seeing waves of various astroturf campaigns.

ZeroTalent(10000) 6 days ago [-]

Claude was only ever good for coding, in my opinion. It had nothing on OpenAI pro models for multimodal use.

int_19h(10000) 6 days ago [-]

The sentiment changes this fast because SOTA changes this fast. E.g. Google models were objectively crappy compared to OpenAI, but Gemini 2.5 really turned the tables (and I'm not talking about synthetic benchmarks here but real world coding).

The state of affairs with local models is similarly very much in flux, by the way.

light_triad(10000) 5 days ago [-]

AI is changing fast! And to be fair to the model companies, they have been releasing products of (mostly) increasing quality.

It really depends what your use case is. Over the range of all possible use cases this has been the narrative.

I tried Google's model for coding but it kept giving me wrong code. Currently Claude for coding and ChatGPT for more general questions is working for me. The more exotic your use case, the more hit or miss it's going to be.

googlehater(10000) 5 days ago [-]

> A couple months ago it felt like people were sure that Google completely fucked it up for themselves

Hey it's me!

uncomplexity_(3592) 4 days ago [-]

yes yes and it should be like this, this is healthy competition!

gcanyon(10000) 6 days ago [-]

Several people have suggested that LLMs might end up ad-supported. I'll point out that 'ad supported' might be incredibly subtle/insidious when applied to LLMs:

An LLM-based 'adsense' could:

   1. Maintain a list of sponsors looking to buy ads
   2. Maintain a profile of users/ad targets 
   3. Monitor all inputs/outputs
   4. Insert 'recommendations' (ads) smoothly/imperceptibly in the course of normal conversation
No one would ever need to/be able to know if the output:

'In order to increase hip flexibility, you might consider taking up yoga.'

Was generated because it might lead to the question:

'What kind of yoga equipment could I use for that?'

Which could then lead to the output:

'You might want to get a yoga mat and foam blocks. I can describe some of the best moves for hips, or make some recommendations for foam blocks you need to do those moves?'

The above is ham-handed compared to what an LLM could do.

JKCalhoun(3408) 6 days ago [-]

You ask two different corporate LLMs and compare answers.

wccrawford(10000) 6 days ago [-]

Yeah, ad-supported LLMs would be incredibly bad.

But 'free' is a magic word in our brains, and I'm 100% sure that many, many people will choose it over paying for it to be uncorrupted by ads.

vbezhenar(3496) 6 days ago [-]

For me ads on web are acceptable as long as they are clearly distinguished from the content. As soon as ads gets merged into content, I'll be unhappy. If LLM would advertise something in a separate block, that's fine. if LLM augments its output to subtly nudge me to a specific brand which paid for placement, that's no-no.

Lerc(10000) 6 days ago [-]

LLMs should be legally required to act in the interest of their users (not their creators).

This is a standard that already applies to positions of advisors such as Medical professionals, lawyers and financial advisors.

I haven't seen this discussed much by regulators, but I have made a couple of submissions here and there expressing this opinion.

AIs will get better, and they will become more trusted. They cannot be allowed to sell the answer to the question 'Who should I vote for?' To the highest bidder.

awongh(10000) 6 days ago [-]

To put on my techno-optimist hat, some specific searches I make already thinking please, please sell me something and google's results are horribly corrupted by SEO.

If an LLM could help solve this problem it would be great.

I think you could make a reasonable technical argument for this- an LLM has more contextual understanding of your high-intent question. Serve me some ads that are more relevant than the current ads based on this deeper understanding.

sva_(3428) 6 days ago [-]

Would be illegal in Germany ('Schleichwerbung') and perhaps the EU?

I think it is actually covered in EU AI act article 5 (a):

> [...] an AI system that deploys subliminal techniques beyond a person's consciousness or purposefully manipulative or deceptive techniques, with the objective, or the effect of materially distorting the behaviour of a person or a group of persons by appreciably impairing their ability to make an informed decision, thereby causing them to take a decision that they would not have otherwise taken [...]

It is very broad but I'm pretty sure it would be used against such marketing strategies.

callmeal(10000) 5 days ago [-]

This is already being explored. See:

https://nlp.elvissaravia.com/i/159010545/auditing-llms-for-h...

  The researchers deliberately train a language model with a concealed objective (making it exploit reward model flaws in RLHF) and then attempt to expose it with different auditing techniques.
joshvm(10000) 4 days ago [-]

I'm not convinced this is any worse than searching for results or reviews and being directed to content that is affiliate supported (or astroturfed by companies). Humans already do this sort of subtle nudging and lots of people position themselves as unbiased. So many blogs are annoying 'buried lede' advertising where the article seems vaguely useful until you realise that it's just a veiled attempt to sell you something. Virtually every reviewer on YouTube seems obliged to open with 'my thoughts are my own, the company doesn't get to edit my review, etc.'

On the other hand, a good LLM would be able to suggest things that you might actually want, using genuine personal preferences. Whether you think that's an invasion of privacy is debatable, because it's perfectly possible for an LLM to provide product results without sharing your profile with anyone else.

twism(3539) 6 days ago [-]

Feed th deep research result into notebookLM and download the audio overview .. game changing

kailuowang(10000) 6 days ago [-]

Maybe it's an Gemini advance only feature but you can generate audio overview right there in gemini interface.

AIPedant(10000) 6 days ago [-]

I don't use Deep Research or NotebookLM myself (or any other generative AI product). But every example of a NotebookLM audio overview I've seen was actively misleading and ignored critical context. However the voices were very personable and entertaining! Likewise Deep Research uses terrible sources and often gets things wrong, I have yet to see a single example that holds up to scrutiny...but it sure goes down smooth compared to reading a bunch of disparate papers!

I suspect Deep Research and NotebookLM aren't used to get information so much as to provide extremely low-quality infotainment. I read Wikipedia recreationally and I can definitely see the appeal of having a Wikipedia-like article/podcast for anything you can think of. But they seem miserably bad for actually learning stuff (especially the stupid podcasts).





Historical Discussions: Show HN: Unsure Calculator – back-of-a-napkin probabilistic calculator (April 15, 2025: 908 points)
Show HN: Unsure Calculator – back-of-a-napkin probabilistic calculator (March 19, 2020: 58 points)
Unsure Calculator (April 04, 2025: 4 points)
Unsure Calculator (2020) (January 20, 2022: 2 points)

(908) Show HN: Unsure Calculator – back-of-a-napkin probabilistic calculator

908 points 3 days ago by filiph in 2651st position

filiph.github.io | Estimated reading time – 12 minutes | comments | anchor

Unsure Calculator

Write a formula and hit Enter, or press =.

Calculate with numbers you're not sure about

Hi, I'm Filip, and I'd like to introduce to you an early version of an uncertainty calculator.

Statistics are scary, but they don't need to be. If you allow me to simplify, the field of statistics is just saying: I'm not certain about these numbers, but I would still like to reason about them. Turns out we're unsure about a lot in our lives, but we can't just throw our arms in the air and say, well, I'm not a statistician.

Filip's imperfect uncertainty notation

The idea is simple: apart from regular numbers (like 4, 3.14 or 43942), you can also input ranges (like 4~6, 3.1~3.2 or 40000~45000). The character between the two extremes of the range is a tilde (~), a little wave symbol. You can find it on most keyboards, but for convenience, I also included it in the keypad above.

The range notation says the following to the calculator: I am not sure about the exact number here, but I am 95% sure it's somewhere in this range.

That's it. I thought long and hard about this, and I got to the conclusion that simplicity is key. Yes, we could have notations for different probability distributions, for different confidence levels, for truncations, for covariance, and so on. But that would also make it harder to understand. My assumption is that, if you're already cozy enough with things like confidence levels, you'll want to use something more sophisticated anyway. Here, we're interested in unlocking the power of statistics to a broad audience.

Reading the notation is easy: when you see 10~15, you say: 'ten to fifteen'.

Statistics for the rest of us

People short-circuit when they encounter uncertainty. 'Well, this is not certain, but that other thing also isn't, so it doesn't matter.'

It often does!

'Well, I don't know this number exactly, so I'll just pick the first number that seems plausible and calculate with that.'

Please don't! Our brains like the simplicity of single numbers, simple answers, but it's a trap. See below.

A practical example

This example is inspired by a true story.

It is the year 2015 and our family has a dilemma. I get a chance to apply for a job in a different part of the world. My wife and I agree it would be pretty sweet to try living somewhere else for a few years, and we welcome the learning opportunity. On the other hand, we also have a new mortgage for our small flat in the city, and a one year old baby.

I would like to at least know if it's a good move, financially. Will we be losing money? If so, how quickly?

The problem is, nothing is certain. The company won't tell us the salary until after we go through most of the steps. I ask friends and random people on the internet about the cost of living in the area, but I get wildly different numbers. Even the tax rate isn't a simple percentage, but 'depends'.

At first, I go with a simple spreadsheet calculation. I pick a reasonably conservative number for each variable and do the math. $1,500 salary, 40% tax rate, $650 rent, $150 food, $30 baby stuff, $20 transportation.

1500 * 0.6 - 650 - 150 - 30 - 20 = 50

It looks like we'll be making +$50 each month, assuming we don't spend on anything extra. On one hand, that's cool: we're not considering the move to get rich. On the other hand, it's a little scary. What if I wasn't conservative enough with some of the numbers, and we realize too late that we're bankrupting our family?

I mean, it's good to know that one potential result is +$50 per month. But what about the other possible results?

There's a piece of monologue in a Czech theatrical comedy that I'm quite fond of, and it goes something like this: "According to our carbon dating analysis, this letter was written on January 21, 1842, plus-minus two thousand years."

Unsure Calculator to the rescue!

It seems we have quite a few values in our little formula that are actually ranges. I'm not sure about the exact value, but I am pretty sure about the general range into which each value will fall.

Let's redo the calculation with ranges:

1400~1700 * 0.55~0.65 - 600~700 - 100~200 - 30 - 20 = -60~220

Now, I am 95% sure the real value of each item falls into the range. That means I am also 95% sure the real balance will fall into the -$60 to +$220 range. This is much more helpful than the one number before. For one thing, I now know that we could very well be losing money.

I also have the probability distribution and the percentiles.

The percentiles tell me that there's a 10% chance that our monthly balance will be -$8 or worse. (Because I see -$8 as the 10th percentile, which means that 10% of the outcomes will be lower than -$8. Conversely, 90% of the outcomes will be higher than -$8.) Now, our family can make a better informed decision. Are we willing to risk the 10% chance that we'll be losing money by this move? What about the 5% risk that we'll be losing $33 or more per month?

The answer to that will depend on the family and the situation. Without a kid and a mortgage, I was way more likely to take risks than I am today. On the other hand, if we didn't have backup plans, I'd be a lot more wary of the 10% chance.

In the end, we did it. And, in our case, it happened to pay back. The end.

A sci-fi example

This tool is meant for practical, everyday calculations. One example of such a use is in the previous section. But I can't pass by the opportunity to make an example that involves ... aliens.

There is a famous formula in astrophysics called the Drake equation. It is an estimate of the number of civilizations in our galaxy with which communication might be possible.

For example, if we listen to radio signals from the stars, should we expect hundreds of civilizations trying to reach each other in our galaxy? Or is it more like thousands? Or zero? Is it realistic to expect we're alone here?

The Drake equation is actually very simple: it's just a multiplication of 7 numbers:

The original formula (written in 1961 by one Frank Drake) and its values went like this: in our galaxy, there is one star formed per year (R*), of which one fifth (fp) have about 3 planets (ne), of which 100% (fl) will eventually develop life, of which 100% (fi) will eventually become intelligent, of which 10% (fc) will be able to communicate, and will last 1 million years (L).

If you put all these numbers together, you'll get to the number 60,000. There should be 60 thousand civilization at any one time, trying to communicate with each other across the galaxy. Where are they?

As you might expect, there's been a lot of discussion about this equation since 1961. The estimated values for each of the parameters vary wildly between astrophysicists.

So, let's get the latest estimates, and put them into ranges. This gives us the following:

1.5~3 x 0.9~1.0 x 0.1~0.4 x 0.1~1.0 x 0.1~1.0 x 0.1~0.2 x 304~10000

If we put it into the Unsure Calculator, we get this:

So, we can expect anywhere between 0 and 450 civilizations. And the probability skews to the lower end (the histogram is wider towards the bottom).

Note: If you're curious why there is a negative number (-5) in the histogram, that's just an inevitable downside of the simplicity of the Unsure Calculator. Without further knowledge, the calculator cannot know that a negative number is impossible (in other words, you can't have -5 civilizations, for example).

Other use cases

Here are some ideas of how to use this calculator and its notation.

  • Estimate viability of a business idea, with uncertain size of the market, uncertain market share, uncertain monthly sales per person, and uncertain operational costs. For example: 50000~80000 x 0.10~0.20 x 5~10 - 20000~50000
  • Estimate future income with uncertain money per month, length of a gig, and tax rate. For example: 1000~1500 x 10~12 x (30~50 / 100)
  • Estimate time saved by a dishwasher (or any other piece of technology) given uncertain number of times used per week, uncertain time saving per use, uncertain lifetime and uncertain installation costs. For example: (3~5 * 5~10 * 51 * 7~15) / 60 - 10~15
  • Estimate total return of an investment account. Both the interest rate and length of investing is unsure. For example: 5000 x (-2~5 / 100) x 5~10
  • Estimate the probability of dying in a pandemic, given an uncertain morbidity rate (how many people get sick) and mortality rate (how many infected people die). For example: (10~30 / 100) * (0.1~1.0 / 100) * 100
  • Estimate the height of a skyscraper, given an uncertain distance from its base, and an uncertain angle in which we see the top of it. For example: 100 x tan(70 ~ 80)
  • Estimate return on investment of a marketing campaign, given an uncertain number of views, uncertain click through rate, uncertain conversion rate, and uncertain spend. For example: 1000000 x (2~3 / 100) x (3~5 / 100) x (10~15)

Available functions

In the keypad above, you will only find +, -, x and /. But the calculator supports more than that, even in this early stage. You can calculate 2~3 ^ 4 (two to three, to the power of four), sqrt(10~12) (square root of ten to twelve) or sin(90~95) (sine of ninety to ninety five degrees).

Limitations

This is a one man show. You should expect breakages. The formula parser is brittle and gives unhelpful error messages.

The computation is quite slow. In order to stay as flexible as possible, I'm using the Monte Carlo method. Which means the calculator is running about 250K AST-based computations for every calculation you put forth.

The UI is ugly, to say the least.

The only way to share formulas is to manually construct a URL. For example, sending someone to https://filiph.github.io/unsure/#f=20~30 will auto-compute 20~30 for them.

Range is always a normal distribution, with the lower number being two standard deviations below the mean, and the upper number two standard deviations above. Nothing fancier is possible, in terms of input probability distributions.

And of course, this is not a statistician's tool. Use the Unsure Calculator for back-of-a-napkin calculations. For anything more involved, use one of the free or paid statistical tools, a full programming environment, or hire a statistician.

I hope some people will find this tool useful, despite the limitations and despite its spartan design.

Filip Hracek, March 2020

P.S.: If you want to help improve this tool, or if you want to get the command-line version, go to github.com/filiph/unsure.

P.P.S. (update 2025): I've been using this notation and tool for the past 5 years, and it's now an indispensable part of my workflow when starting any new project. A more recent 'notebook' version of the app can be found here — that one is less beginner-friendly, but more helpful for 'power users' (N=1). If you're interested in this project, you can follow me or subscribe to my mailing list (check 'software development' as the topic you're interested in).




All Comments: [-] | anchor

croisillon(10000) 3 days ago [-]

i like it and i skimmed the post but i don't understand why the default example 100 / 4~6 has a median of 20? there is no way of knowing why the range is between 4 and 6

constantcrying(10000) 3 days ago [-]

The chance of 4~6 being less than 5 is 50%, the chance of it being greater is also 50%. The median of 100/4~6 has to be 100/5.

>there is no way of knowing why the range is between 4 and 6

??? There is. It is the ~ symbol.

perching_aix(10000) 3 days ago [-]

how do you mean?

constantcrying(10000) 3 days ago [-]

An alternative approach is using fuzzy-numbers. If evaluated with interval arithmetic you can do very long calculations involving uncertain numbers very fast and with strong mathematical guarantees.

It would especially outperform the Monte-Carlo approach drastically.

sixo(10000) 3 days ago [-]

This assumes the inputs are uniform distributions, or perhaps normals depending on what exactly fuzzy numbers mean. M-C is not so limited.

vessenes(3493) 3 days ago [-]

cool! are all ranges considered poisson distributions?

re(10000) 3 days ago [-]

No:

> Range is always a normal distribution, with the lower number being two standard deviations below the mean, and the upper number two standard deviations above. Nothing fancier is possible, in terms of input probability distributions.

krick(10000) 3 days ago [-]

It sounds like a gimmick at first, but looks surprisingly useful. I'd surely install it if it was available as an app to use alongside my usual calculator, and while I cannot quite recall a situation when I needed it, it seems very plausible that I'll start finding use cases once I have it bound to some hotkey on my keyboard.

NunoSempere(10000) 2 days ago [-]

> if it was available as an app

Consider https://f-droid.org/en/packages/com.nunosempere.distribution...

Aachen(3569) 2 days ago [-]
https://qalculate.github.io can do this also for as long as I've used it (only a couple years to be fair). I've got it on my phone, my laptop, even my server with apt install qalc. Super convenient, supports everything from unit conversion to uncertainty tracking

The histogram is neat, I don't think qalc has that. On the other hand, it took 8 seconds to calculate the default (exceedingly trivial) example. Is that JavaScript, or is the server currently very busy?

filiph(2651) 2 days ago [-]

It's all computed in the browser so yeah, it's JavaScript. Still, 8 seconds is a lot -- I was targeting sub-second computation times (which I find alright).

internetter(10000) 2 days ago [-]

Yes! (5±6)*(9±12) => 45±81. Uncertainty propagation!

rogueptr(10000) 3 days ago [-]

brilliant work, polished ui. although sometimes give wrong ranges for equations like 100/1~(200~2000)

thih9(2817) 3 days ago [-]

Can you elaborate? What is the answer you're getting and what answer would you expect?

BrandoElFollito(3407) 3 days ago [-]

How do you process this equation ? 100 divided by something from one to ...?

lorenzowood(10000) 2 days ago [-]

See also Guesstimate https://getguesstimate.com. Strengths include treating label and data as a unit, a space for examining the reasoning for a result, and the ability to replace an estimated distribution with sample data => you can build a model and then refine it over time. I'm amazed Excel and Google Sheets still haven't incorporated these things, years later.

montag(10000) 2 days ago [-]

Thank you, I would have mentioned this myself, but forgot the name of it.

explosion-s(2781) 2 days ago [-]

I made one that's much faster because it instead modifies the normal distribution instead of sending thousands of samples: https://gistpreview.github.io/?757869a716cfa1560d6ea0286ee1b...

etbebl(10000) 2 days ago [-]

This is more limited. I just tested and for one example, exponentiation seems not to be supported.

djoldman(508) 3 days ago [-]

I perused the codebase but I'm unfamiliar with dart:

https://github.com/filiph/unsure/blob/master/lib/src/calcula...

I assume this is a montecarlo approach? (Not to start a flamewar, at least for us data scientists :) ).

kccqzy(2074) 3 days ago [-]

Yes it is.

timothylaurent(10000) 3 days ago [-]

This reminds me of https://www.getguesstimate.com/ , a probabilistic spreadsheet.

Recursing(3647) 3 days ago [-]

The authors of Guesstimate are now working on https://www.squiggle-language.com/

Someone also turned it into the https://github.com/rethinkpriorities/squigglepy python library

baq(3579) 3 days ago [-]

I was looking for this. Seen it (or a similar tool) ages ago.

Want to use it every 3 months or so to pretend that we know what we can squeeze in the roadmap for the quarter.

thih9(2817) 3 days ago [-]

Feature request: allow specifying the probability distribution. E.g.: '~': normal, '_': uniform, etc.

pyfon(10000) 2 days ago [-]

Not having this feature is a feature—they mention this.

tgv(10000) 2 days ago [-]

I think they should be functions: G(50, 1) for a Gaussian with μ=50, σ=1; N(3) for a negative exponential with λ=3, U(0, 1) for a uniform distribution between 0 and 1, UI(1, 6) for an uniform integer distribution from 1 to 6, etc. Seems much more flexible, and easier to remember.

kccqzy(2074) 3 days ago [-]

I actually stumbled upon this a while ago from social media and the web version has a somewhat annoying latency, so I wrote my own version in Python. It uses numpy so it's faster. https://gist.github.com/kccqzy/d3fa7cdb064e03b16acfbefb76645... Thank you filiph for this brilliant idea!

filiph(2651) 2 days ago [-]

Nice! Are you using your python script often?

The reason I'm asking: unsure also has a CLI version (which is leaps and bounds faster and in some ways easier to use) but I rarely find myself using it. (Nowadays, I use https://filiph.github.io/napkin/, anyway, but it's still a web app rather than a CLI tool.)

alexmolas(523) 3 days ago [-]

is this the same as error propagation? I used to do a lot of that during my physics degree

constantcrying(10000) 3 days ago [-]

It doesn't propagate uncertainty through the computation, but rather treats the expression as a single random variable.

ttoinou(3555) 3 days ago [-]

Would be nice to retransform the output into an interval / gaussian distribution

   Note: If you're curious why there is a negative number (-5) in the histogram, that's just an inevitable downside of the simplicity of the Unsure Calculator. Without further knowledge, the calculator cannot know that a negative number is impossible
Drake Equation or equation multiplying probabilities can also be seen in log space, where the uncertainty is on the scale of each probability, and the final probability is the product of exponential of the log probabilities. And we wouldnt have this negative issue
hatthew(10000) 3 days ago [-]

The default example `100 / 4~6` gives the output `17~25`

omoikane(10000) 3 days ago [-]

If I am reading this right, a range is expressed as a distance between the minimum and maximum values, and in the Monte Carlo part a number is generated from a uniform distribution within that range[1].

But if I just ask the calculator '1~2' (i.e. just a range without any operators), the histogram shows what looks like a normal distribution centered around 1.5[2].

Shouldn't the histogram be flat if the distribution is uniform?

[1] https://github.com/filiph/unsure/blob/123712482b7053974cbef9...

[2] https://filiph.github.io/unsure/#f=1~2

hatthew(10000) 3 days ago [-]

Under the 'Limitations' section:

> Range is always a normal distribution, with the lower number being two standard deviations below the mean, and the upper number two standard deviations above. Nothing fancier is possible, in terms of input probability distributions.

gregschlom(3670) 3 days ago [-]

The ASCII art (well technically ANSI art) histogram is neat. Cool hack to get something done quickly. I'd have spent 5x the time trying various chart libraries and giving up.

Retr0id(1781) 2 days ago [-]

On a similar note, I like the crude hand-drawn illustrations a lot. Fits the 'napkin' theme.

smartmic(934) 2 days ago [-]

Here [1] is a nice implementation written in Awk. A bit rough around the edges, but could be easily extended.

[1] https://github.com/stefanhengl/histogram

marcodiego(164) 3 days ago [-]

I put '1 / (-1~1)' and expected something around - to + infinty. It instead gave me -35~35.

I really don't known how good it is.

NunoSempere(10000) 2 days ago [-]

I'm guessing this is not an error. If you divide 1/normal(0,1), the full distribution would range from -inf to inf, but the 95% output doesn't have to.

NunoSempere(10000) 2 days ago [-]

I have written similar tools

- for command line, fermi: https://git.nunosempere.com/NunoSempere/fermi

- for android, a distribution calculator: https://f-droid.org/en/packages/com.nunosempere.distribution...

People might also be interested in https://www.squiggle-language.com/, which is a more complex version (or possibly <https://git.nunosempere.com/personal/squiggle.c>, which is a faster but much more verbose version in C)

NunoSempere(10000) 2 days ago [-]

Fermi in particular has the following syntax

```

5M 12M # number of people living in Chicago

beta 1 200 # fraction of people that have a piano

30 180 # minutes it takes to tune a piano, including travel time

/ 48 52 # weeks a year that piano tuners work for

/ 5 6 # days a week in which piano tuners work

/ 6 8 # hours a day in which piano tuners work

/ 60 # minutes to an hour

```

multiplication is implied as the default operation, fits are lognormal.

antman(921) 2 days ago [-]

I tried the unsure calc and the android app and they seem to produce different results?

NunoSempere(10000) 2 days ago [-]

Another tool in this spirit is <https://carlo.app/>, which allows you to do this kind of calculation on google sheets.

notpushkin(1263) 2 days ago [-]

Would be a nice touch if Squiggle supported the `a~b` syntax :^)

NotAnOtter(10000) 2 days ago [-]

This is super cool.

It seems to break for ranges including 0 though

100 / -1~1 = -3550~3500

I think the most correct answer here is -inf~inf

filiph(2651) 2 days ago [-]

I'd argue this is WAI.

It's hard for me to imagine _dividing_ by -1~1 in a real-world scenario, but let's say we divide by 0~10, which also includes zero. For example, we are dividing the income between 0 to 10 shareholders (still forced, but ok).

Clearly, it's possible to have a division by zero here, so '0 sharehodlers would each get infinity'. And in fact, if you try to compute 500 / 0, or even 500~1000 / 0, it will correctly show infinity.

But if you divide by a range that merely _includes_ zero, I don't think it should give you infinity. Ask yourself this: does 95% of results of 500 / 0~10 become infinity?

cluckindan(10000) 2 days ago [-]

"Without further knowledge, the calculator cannot know that a negative number is impossible (in other words, you can't have -5 civilizations, for example)."

Not true. If there are no negative terms, the equation cannot have negative values.

kqr(2908) 2 days ago [-]

The calculator cannot know whether there are no negative terms. For example, if people's net worth is distributed 0.2–400, there's likely a significant chunk of people who are, on the whole, in debt. These will be represented as a negative term, even though their distribution was characterised by positive numbers.

burning_hamster(10000) 2 days ago [-]

The range notation indicates 95% confidence intervals, not the minima and maxima. If the lower bounds are close enough to zero (and the interval is large enough), then there may some residual probability mass associated with negative values of the variable.

roughly(10000) 2 days ago [-]

I like this!

In the grand HN tradition of being triggered by a word in the post and going off on a not-quite-but-basically-totally-tangential rant:

There's (at least) three areas here that are footguns with these kinds of calculations:

1) 95% is usually a lot wider than people think - people take 95% as "I'm pretty sure it's this," whereas it's really closer to "it'd be really surprising if it were not this" - by and large people keep their mental error bars too close.

2) probability is rarely truly uncorrelated - call this the "Mortgage Derivatives" maxim. In the family example, rent is very likely to be correlated with food costs - so, if rent is high, food costs are also likely to be high. This skews the distribution - modeling with an unweighted uniform distribution will lead to you being surprised at how improbable the actual outcome was.

3) In general normal distributions are rarer than people think - they tend to require some kind of constraining factor on the values to enforce. We see them a bunch in nature because there tends to be negative feedback loops all over the place, but once you leave the relatively tidy garden of Mother Nature for the chaos of human affairs, normal distributions get pretty abnormal.

I like this as a tool, and I like the implementation, I've just seen a lot of people pick up statistics for the first time and lose a finger.

youainti(10000) 2 days ago [-]

> I've just seen a lot of people pick up statistics for the first time and lose a finger.

I love this. I've never though of statistics like a power tool or firearm, but the analogy fits really well.

btilly(987) 2 days ago [-]

I strongly agree with this, and particularly point 1. If you ask people to provide estimated ranges for answers that they are 90% confident in, people on average produce roughly 30% confidence intervals instead. Over 90% of people don't even get to 70% confidence intervals.

You can test yourself at https://blog.codinghorror.com/how-good-an-estimator-are-you/.

pertdist(10000) 2 days ago [-]

I did a project with non-technical stakeholders modeling likely completion dates for a big GANTT chart. Business stakeholders wanted probabilistic task completion times because some of the tasks were new and impractical to quantify with fixed times.

Stakeholders really liked specifying work times as t_i ~ PERT(min, mode, max) because it mimics their thinking and handles typical real-world asymmetrical distributions.

[Background: PERT is just a re-parameterized beta distribution that's more user-friendly and intuitive https://rpubs.com/Kraj86186/985700]

jrowen(3672) 2 days ago [-]

This jives with my general reaction to the post, which was that the added complexity and difficulty of reasoning about the ranges actually made me feel less confident in the result of their example calculation. I liked the $50 result, you can tack on a plus or minus range but generally feel like you're about breakeven. On the other hand, '95% sure the real balance will fall into the -$60 to +$220 range' feels like it's creating a false sense of having more concrete information when you've really just added compounding uncertainties at every step (if we don't know that each one is definitely 95%, or the true min/max, we're just adding more guesses to be potentially wrong about). That's why I don't like the Drake equation, every step is just compounding wild-ass guesses, is it really producing a useful number?

larodi(10000) 2 days ago [-]

Actually using it already after finding it few days ago on HN

jbjbjbjb(10000) 2 days ago [-]

I think to do all that you'd need a full on DSL rather than something pocket calculator like. I think adding a triangular distribution would be good though.

rssoconnor(10000) 2 days ago [-]

Normal distributions are the maximum entropy distributions for a given mean and variance. Therefore, in accordance with the principle of maximum entropy, unless you have some reason to not pick a normal distribution (e.g. you know your values must be non-negative), you should be using a normal distribution.

JKCalhoun(3408) 2 days ago [-]

> 2) probability is rarely truly uncorrelated

Without having fully digested how the Unsure Calculator computes, it seems to me you could perhaps 'weight' the ranges you pass to the calculator. Rather than a standard bell curve the Calculator could apply a more tightly focused — or perhaps skewed curve for that term.

If you think your salary will be in the range of 10 to 20, but more likely closer to 10 you could:

10<~20 (not to be confused with less-than)

or: 10!~20 (not to be confused with factorial)

or even: 10~12~20 to indicate a range of 10 to 20 ... leaning toward 12.

gamerDude(3618) 2 days ago [-]

Great points. I think the idea of this calculator could just be simply extended to specific use cases to make the statistical calculation simple and take into account additional variables. Moving being one example.

OisinMoran(3256) 2 days ago [-]

This is neat! If you enjoy the write up, you might be interested in the paper "Dissolving the Fermi Paradox" which goes even more on-depth into actually multiplying the probability density functions instead of the common point estimates. It has the somewhat surprising result that we may just be alone.

https://arxiv.org/abs/1806.02404

drewvlaz(10000) 2 days ago [-]

This was quite a fun read, thanks!

baq(3579) 2 days ago [-]

a bit depressing TBH... but ~everyone on this site should read this for the methodology

nritchie(10000) 2 days ago [-]

Here (https://uncertainty.nist.gov/) is another similar Monte Carlo-style calculator designed by the statisticians at NIST. It is intended for propagating uncertainties in measurements and can handle various different assumed input distributions.

filiph(2651) 2 days ago [-]

I think I was looking at this and several other similar calculators when creating the linked tool. This is what I mean when I say 'you'll want to use something more sophisticated'.

The problem with similar tools is that of the very high barrier to entry. This is what my project was trying to address, though imperfectly (the user still needs to understand, at the very least, the concept of probability distributions).

ralferoo(10000) 2 days ago [-]

On the whole it seems like a nice idea, but there's a couple of weird things, such as:

> Note: If you're curious why there is a negative number (-5) in the histogram, that's just an inevitable downside of the simplicity of the Unsure Calculator. Without further knowledge, the calculator cannot know that a negative number is impossible (in other words, you can't have -5 civilizations, for example).

The input to this was '1.5~3 x 0.9~1.0 x 0.1~0.4 x 0.1~1.0 x 0.1~1.0 x 0.1~0.2 x 304~10000' - every single range was positive, so regardless of what this represents, it should be impossible to get a negative result.

I guess this is a consequence of 'I am not sure about the exact number here, but I am 95% sure it's somewhere in this range' so it's actually considering values outside of the specified range. In this case, 10% either side of all the ranges is positive except the large '304~10000'.

Trying with a simpler example: '1~2 x 1~2' produces '1.3~3.4' as a result, even though '1~4' seems more intuitive. I assume this is because the confidence of 1 or 4 is now only 90% if 1~2 was at 95%, but it still feels off.

I wonder if the 95% thing actually makes sense, but I'm not especially good at stats, certainly not enough to be sure how viable this kind of calculator is with a tighter range. But just personally, I'd expect '1~2' to mean 'I'm obviously not 100% sure, or else I wouldn't be using this calculator, but for this experiment assume that the range is definitely within 1~2, I just don't know where exactly'.

kqr(2908) 2 days ago [-]

The calculator in Emacs has support for what it is you request, which it calls 'interval forms'. Interval form arithmetic simply means executing the operations in parallel on both ends of the interval.

It also has support for 'error forms' which is close to what the calculator in OP uses. That takes a little more sophistication than just performing operations on the lower and upper number in parallel. In particular, the given points don't represent actual endpoints on a distribution, but rather low and high probability events. Things more or less likely than those can happen, it's just rare.

> I'm not especially good at stats

It shows! All the things you complain about make perfect sense given a little more background knowledge.

perlgeek(2671) 2 days ago [-]

> every single range was positive, so regardless of what this represents, it should be impossible to get a negative result.

They explain that the range you give as input is seen as only being 95% correct, so the calculator adds low-probability values outside of the ranges you specified.

I can see how that surprises you, but it's also a defensible design choice.

constantcrying(10000) 2 days ago [-]

>The input to this was '1.5~3 x 0.9~1.0 x 0.1~0.4 x 0.1~1.0 x 0.1~1.0 x 0.1~0.2 x 304~10000' - every single range was positive, so regardless of what this represents, it should be impossible to get a negative result.

Every single range here includes positive and negative numbers. To get the correct resulting distribution you have to take into account the entire input distribution. All normal distributions have a non-zero possibility to be negative.

If you want to consider only the numbers inside the range you can look at interval arithmetic, but that does not give you a resulting distribution.

godDLL(2652) 2 days ago [-]

So is it like plugging in a normal distribution into some arithmetic?

Consider maybe 1 + 1 ~ +-2 like Q factor, if you know what I mean.

That would help to filter out more probabilistic noise in using it to help reason with.

constantcrying(10000) 2 days ago [-]

No. It is sampling the resulting distribution with Monte-Carlo.

spzzz(10000) 2 days ago [-]

This is really useful, but is this correct?

persons = 10~15 // → 10~15

budget = persons * 1~2 // → 12~27

Should it not say 10-30?

wongarsu(10000) 2 days ago [-]

If they are truly independent of each other some of the uncertainty cancels out. 10 people and a budget of $1/person are both unlikely events, and two unlikely events occurring independently of each other is even more unlikely. And because the calculator is not about the full range of possible values but about the values in the 95% confidence interval this leads to the outer edges of the range now falling outside the 95% confidence interval





Historical Discussions: $70M in 60 Seconds: How Insider Info Helped Someone 28x Their Money (April 12, 2025: 800 points)

(800) $70M in 60 Seconds: How Insider Info Helped Someone 28x Their Money

800 points 6 days ago by pulisse in 322nd position

data-and-politics.ghost.io | Estimated reading time – 4 minutes | comments | anchor

On April 9, 2025, someone risked about $2.5 million—and walked away with more than $70 million in under an hour.

The trade was simple, but bold: buy a specific kind of option tied to SPY, the exchange-traded fund (ETF) that tracks the S&P 500, the most widely followed index of large-cap U.S. companies. The option—known as a call—gave the buyer the right to purchase SPY at $509 per share. That might not sound strange, except that SPY was trading below $500 when they placed the bet. And the option was set to expire the same day.

These are known as zero-day expiry options. They're cheap because they're risky. If the market doesn't move in your favor, they expire worthless. If the market does move, they can pay off massively. But you have to be exactly right on both direction and timing.

In this case, the timing was perfect. The trade was placed just before 1:01 pm Eastern Time. At 1:30 pm, Donald Trump posted on Truth Social that he was pausing most of the tariffs he had imposed earlier that month. The market exploded upward. SPY surged well past the 509 mark. Those options that had cost just 85 cents were suddenly worth more than $25.

Notice the spike in trade at 17:00 GMT.

This was not a small-volume trade. About 30,000 contracts changed hands. That's a $2.5 million position that turned into more than $70 million. And that's just one strike. Similar trades occurred in SPY 504, 505, 507, and QQQ contracts as well, suggesting that the total take may have been far larger.

It wasn't just the profit. It was the precision. The market moved before the news. The options were bought before the rally. The volume spiked in contracts that almost never see this kind of interest unless something is expected. And the pattern wasn't visible on previous trading days. This wasn't a trend. It was a singular event.

And it wasn't just options. At exactly 1:01 pm EST, trading volume in SPY shares themselves spiked. Nearly 2.75 million shares were bought in that single minute. If those shares were sold at the closing price of $533.94, the buyers would have locked in a gain of more than $36 per share—earning over $100 million in profit in sixty seconds.

Over the next fifteen minutes, volume remained elevated. If the same rate of trading continued, that window alone could account for more than 41 million shares traded. That means more than $1.5 billion in potential profit—all before the public even knew why the market was moving.

If the trades hadn't worked out, the losses would have been swift and total. Zero-day options don't forgive bad timing. The entire $2.5 million could have evaporated by the close of trading. Even with SPY shares, any unexpected reversal would have meant millions in losses. That's what makes this kind of trading so revealing. Institutions hedge. Retail investors chase momentum. But this? This was conviction. Or it was information.

I checked comparable moments in market history: emergency rate cuts in 2008, the first quantitative easing program in 2009. These were true market shocks. But in those cases, SPY volume was flat before the announcements. The price didn't move until after the news hit the wire. No sign of early bets. No one placing $2 million chips on the right number just minutes before the roulette wheel stopped.

This time was different. April 9 shows all the hallmarks of pre-positioning—where a trader takes a major position just before a known catalyst. Sometimes it's just a hunch. Sometimes it's a coincidence. And sometimes it's something else entirely.

We don't know who placed the trades. We don't know what they knew. But we do know this: if they were guessing, they guessed better than almost anyone in modern market history. And if they weren't guessing, then someone made a fortune off of information the public didn't yet have.




All Comments: [-] | anchor

recursive4(10000) 6 days ago [-]

No citations.

pulisse(322) 6 days ago [-]

It's an analysis of publicly available data.

wiseowise(10000) 6 days ago [-]

Do you also require medical examination to identify shit stuck to your shoe?

permalac(10000) 6 days ago [-]

This is public knowledge. Google 'spy insider trading', and click news.

solardev(3538) 6 days ago [-]

[flagged]

disqard(3395) 6 days ago [-]

...you forgot to mention her Emails!!!1

jxjnskkzxxhx(10000) 6 days ago [-]

The media is largely at fault here, pretending that both sides are equal.

refurb(2851) 6 days ago [-]

Let's not play that game please.

We don't even know who made these trades.

If we look at actual trades by politicians who actually sit on committees with tradable information, we know the biggest culprits are on both sides of the aisle.

https://newrepublic.com/post/177806/members-congress-made-st...

thrance(10000) 6 days ago [-]

For real, Watergate feels like a fever dream. Trump does 10x worse every single day and no one cares.

ananamouse(10000) 6 days ago [-]

Obama assassinated multiple US citizens without due process of law?

gitaarik(10000) 6 days ago [-]

What's wrong with a blowjob?

notdarkyet(3107) 6 days ago [-]

[flagged]

Ey7NFZ3P0nzAe(3625) 5 days ago [-]

> beige jacket

Had to look it up, thanks!

https://en.m.wikipedia.org/wiki/Barack_Obama_tan_suit_contro...

epaga(255) 6 days ago [-]

This unethical stuff is where Trump actually shows true "brilliance".

His Truth Social post that day saying (quote) "THIS IS A GREAT TIME TO BUY!" immediately gave any insider traders a perfect alibi.

AstroBen(10000) 5 days ago [-]

why are you assuming it was trumps idea

uptownfunk(3317) 6 days ago [-]

I mean in theory couldn't anyone close to the news source transmit it somehow via an anonymous communication channel to someone else so they can make the trade. Even if there is an investigation they have to find the proof to make a conviction right?

dboreham(2321) 6 days ago [-]

That's how prosecution of insider trading crime is done.

sorokod(3210) 6 days ago [-]

kleptocracy

/klĕp-tŏk′rə-sē/ noun

A government characterized by rampant greed and corruption.

testing22321(10000) 6 days ago [-]

A few years back the US labeled China a "state manipulator" of currency.

Surely it will only take a few more rounds of pump and dumping the entire US economy for basically every country to label the US the same, and move away from US bonds and the US dollar as reserve currency. It just won't be stable enough with all these antics.

When it happens I just hope trump won't use it as justification for war.

aetherspawn(10000) 6 days ago [-]

The stock market needs to just be deleted. It's the insider trading scam machine of the rich class.

The whole goal of the stock market is to come up with information that people don't generally have (insider trading) either from research, or secret info, so you can dupe everyone else's money.

random3(10000) 6 days ago [-]

Delete along with publicly traded companies, or what's the plan?

hello_computer(3565) 6 days ago [-]

The brilliant thing is that they have woven it deep into the retirement ponzi-schemes, so nothing short of a cataclysm will unwind it.

There is also the Machiavellian consideration of external threats. We need these corporations to make the tools of Ahriman. Without them, another country's corporations will make them first, and use them to subjugate us. People who believe in God (not just the Abrahamic one, but any good creator) prefer not to play this game, but most people are strict materialists, so they are locked into this equation.

EVa5I7bHFq9mnYK(10000) 6 days ago [-]

When they say stock market returns 7% on average, it's >7% for the insiders/manipulators and <7% for the regular Joe.

hdevalence(3623) 6 days ago [-]

> We don't know who placed the trades. We don't know what they knew.

Actually, "we", collectively, do know, because the SEC maintains an "XKEYSCORE for equities" called CAT.

If there was interest, the government could know exactly who placed these trades. But the call (options) are coming from inside the house.

sebasv_(10000) 6 days ago [-]

What would determine whether the SEC will investigate for insider trading? I would expect them to be shielded from executive pressure.

richardw(3460) 6 days ago [-]

You keep the receipts for about 4 years and you speak up one minute after the government changes. You get it done long before the following election.

mullingitover(10000) 6 days ago [-]

No economic system is functional without a bunch of compromises, and capitalism needs strong regulations as a check to keep it from turning absolutely rotten.

We're witnessing the removal of all of the guardrails, traffic signals, road maintenance crews. The highway patrols have been replaced by organized teams of highwaymen.

It'll get a lot worse before it gets better.

jwilber(10000) 6 days ago [-]

Maybe it's a friend of Trump. Maybe it's a friend of Pelosi. Might even be a member of Congress!

'Rules for thee not for me.'

cft(776) 6 days ago [-]

from the house of representatives or the white house? and how do you know?

maxbond(10000) 6 days ago [-]

Is this really SEC's bailiwick? Aren't options commodities (and so regulated by CFTC)?

rvba(10000) 6 days ago [-]

The prosecutors dont need any spy tool to check who did the trade (at least officially). They can simply ask to receive the records / logs.

testing22321(10000) 6 days ago [-]

That same day after market close Trump directly told us it was insider trading AND who dun it.

He literally bragged that his friend made 2.5 billion and the other 900 million that day.

https://www.reddit.com/r/PublicFreakout/comments/1jvyryz/tru...

more_corn(10000) 6 days ago [-]

Are you suggesting that the SEC won't investigate this obvious insider trading because it came from someone in his inner circle? Big if true.

rschneid(10000) 6 days ago [-]

The consolidate audit trail regularly has millions of errors within a day... It's far from complete data; here's their latest report card:

https://catnmsplan.com/sites/default/files/2025-04/04.01.25-...

Also, CAT is run by CATNMS, LLC which was created in response to an SEC rule 613, however it is operated by the same consortium of SROs that it purports to provide oversight on...

All these layers of responsibility diffusion and a notable absence of penalties for failing to meet rule 613 guidelines mean that rule is little more than for show.

nramanand(10000) 6 days ago [-]

A relevant aside: surely insider trading is happening all the time? There are so many daily market-shifting events involving so many privy parties that it seems inevitable to happen every few minutes (not defending the actions in the article).

How many physicians have been able to get rich from learning a CEO will be out of commission? In that case, I'm not even sure whether it would be considered insider trading.

How does one even go about accusing someone of insider trading? The illegality sounds pretty unenforceable.

solardev(3538) 6 days ago [-]

In the past, we liked to pretend this was illegal. Now we don't even bother with that.

miohtama(831) 6 days ago [-]

It has been estimated 25% of stock market trading is some sort of insider trading. However 1) it depends where you draw the line what's insider information and what not 2) not all of these trades all profitable.

Due to insider trading rules being problematic, sometimes more headache than benefit, the UK FCA is now allowing new stock market to launch where insider trading is legal.

LeafItAlone(10000) 6 days ago [-]

>How many physicians have been able to get rich from learning a CEO will be out of commission?

Do you actually have an answer to that? Or are you just throwing out an unanswerable question as some form of "gotcha"?

Now I'm actually curious. There aren't _that_ many publicly traded companies; only about 4,000 according to Google. A little over 9,000 IPOs since 1980 [0]. The number of companies where the CEO being "out of commission" on such a short timescale would generate "rich" (to me, in this scenario, >$5 million) levels of ROI has to be pretty low up. Probably not even most of the Fortune 100. Then the number of doctors who have that info and are going to act on it is a smaller fraction. Then the three have to match (command that fits + ill CEO + trading physician). Do you think it's over 10? 25?

0. https://site.warrington.ufl.edu/ritter/files/IPO-Statistics....

nhkcode(10000) 6 days ago [-]

Maybe all the insider trading going on is part of why the chances for regular investors to beat the market are so slim.

dboreham(2321) 6 days ago [-]

You seem to have discovered the crime of insider trading and conveniently ignored the fact that it's a crime.

LurkerAtTheGate(10000) 6 days ago [-]

> How does one even go about accusing someone of insider trading? The illegality sounds pretty unenforceable.

Much of it is data analysis. My favorite examples of this are actual hacks - once foothold is established instead of encrypting & ransoming, the attacker just listens to the CEO/CFO. One hacked a law firm that handled some sizable mergers.

Personal tangent: Once had an opportunity to insider trade on a particular huge aerospace company. Playing a squad-based PvE game, matchmade into a team with 3 real-life friends at said company who chatted on in-game voice comms about their day, talking about court cases and senate hearings, and later panicked when they realized I could hear it all. They were nice guys, and I assured them that I wouldn't misuse what I overheard - I don't work in a relevant industry, and my investments do just fine without an illegal edge (plus I know Matt Levine's Laws of Insider Trading #1: Don't).

Quarrel(10000) 6 days ago [-]

These sort of trades happy fairly regularly, before market breaking news in individual stock names.

Just search for SEC insider trading cases. When they happen in options they are often pretty obvious unless the market is moving with real momentum the same way, and even then, option sellers will report you if they think it is suspicious. (By obvious, I mean, regulators should start asking questions - of course there can be a multitude of reasons.)

The difference here is that absolutely NO ONE on any side of politics seems to think the SEC & DOJ will pursue these.

w10-1(10000) 6 days ago [-]

> These sort of trades happy fairly regularly

I think the post established that this volume or size of spike is unique before any market-shifting news event coming out of the government in recent decades. The 'sort' of transaction is irrelevant except that it's risky and thus relatively low volume normally.

svg7(10000) 6 days ago [-]

While I have no doubt that insider trading happens quite regularly, I would not jump to that conclusion here. IIRC the previous day, big Wall street names were advocating for a pause in tariffs . So a lot of people placed bets accordingly. Also staking 2.5M is 'small change' for true insiders.

eru(2960) 6 days ago [-]

> Also staking 2.5M is 'small change' for true insiders.

Why? A secretary or janitor or a intern could also be an insider. Or are they No True Scotsmen?

DeathArrow(856) 6 days ago [-]

>Also staking 2.5M is 'small change' for true insiders.

Well, if it was insider trading, Trump and his billionaire friends wouldn't invest just 2.5 million. That's a meager sum for the very wealthy.

Maybe they've even done insider trading but in ways that weren't so obvious.

sorokod(3210) 6 days ago [-]

Could a test run by 'very rich insiders'. To gauge the system's reaction before the next one, possibly a deal with China.

w10-1(10000) 6 days ago [-]

> So a lot of people placed bets accordingly

But why not earlier in the day? Why this unique volume spike then? The one $2.5M is just a sample trade, part of a historic spike.

No one's jumping to conclusions, but it should trigger an investigation.

az226(10000) 6 days ago [-]

But these rumors were said and talked about several days. But no big options trade was made before the actual day of announcement. That's why it's telling.

jmyeet(10000) 6 days ago [-]

What's surprising about all this is how quickly, easily and cheaply the republic was dismantled and how little opposition there was from people in power.

This culminated in Trump v. United States [1] where unelected partisans simply invented presidential immunity completely out of thin air. That means there are absolutely no possible legal repercussions for any of this. None. And even if there were, the agencies in charge of enforcing it have either been gutted or they've been subverted by putting a sycophantic lackey in charge.

This is the new kleptocracy we live in. Nobody is coming to save us. The supposed political opposition (ie the Democratic Party) is nothing more than feckless controlled opposition who are more interested in defending US imperialism than they are in winning elections.

Things are only going to get worse.

[1]: https://en.wikipedia.org/wiki/Trump_v._United_States

kilroy123(3630) 6 days ago [-]

I agree no one is coming to save anything.

The only people that can save it are regular people standing up and taking action.

edweis(10000) 6 days ago [-]

Who lost money during this deal? Or generally who indirectly paid these lucky gamblers?

naught0(10000) 6 days ago [-]

The poors. It's always the poors

mikelitoris(10000) 6 days ago [-]

Everyone else, indirectly. So other investors who weren't in on it, pension funds, John Doe in his retirement home.

procaryote(10000) 6 days ago [-]

The people holding the ETF and selling the option. If they had not sold the option, they would have benefited from the value rising, now they instead got (collectively) $2.5M.

If the price had stayed flat or dropped, they would of course still have the $2.5M.

The precision makes it look a lot like a crime, as trading on information that's not publicly available is illegal.

tirant(10000) 6 days ago [-]

Who lost money? It's difficult to say, because the purchase of those calls did not really tipped the market in any direction, but just provided liquidity for the sale of those call options.

Whoever shorted those calls made some money in the contracts, but they were going to lose money anyway the moment of the announcement.

quickthrowman(10000) 6 days ago [-]

The market makers who were short the call options or other market participants who sold calls. Mostly the latter, MMs are pretty good about hedging their positions but I'm sure some were caught offsides.

roflyear(3320) 6 days ago [-]

Long term this also really hurts the faith in the market, so it's going to hurt a lot of people who have exposure to anything on US exchanges.

huijzer(10000) 6 days ago [-]

Overall the argument by the author is very convincing and well put. One part I don't fully agree with is

> If the trades hadn't worked out, the losses would have been swift and total. Zero-day options don't forgive bad timing. The entire $2.5 million could have evaporated by the close of trading. Even with SPY shares, any unexpected reversal would have meant millions in losses.

This has exactly been Taleb's strategy. Buy option where the writeoff is small when wrong and the payoff huge when correct. As described in the post, the ratio was 1 to 25. Also it was likely that the market would go through huge shifts because the policy is so unpredictable.

So it is not impossible that someone figured that it would be possible to just buy these calls for the whole month. As long as one was a hit, the trade would not make a loss. And given the volatility, at least one would be a hit in this month. These short option bets are truly not such strange ehm options in these volatile times.

So I would like some data about whether similar options were bought on other days in similar volumes.

Having said that, I do find the evidence very strong and it's reasonable to assume that this was insider trading. I personally suspect someone at JPM or Ackman. They said they "convinced" Trump so maybe Trump said in a meeting that it would probably happen and they immediately bought the calls.

dwedge(10000) 6 days ago [-]

Unless I'm totally misreading this, it wasn't a 2.5m trade, it was 80c per option at 30,000 of them, less than $30k

The option to buy at 2.5m was not an obligation to do so

ncann(10000) 6 days ago [-]

> So I would like some data about whether similar options were bought on other days in similar volumes.

The volumes are there for all to see and the answer is no.

svg7(10000) 6 days ago [-]

nicely put, but I wonder why you think that similar volume of options would be bought on other days. These days are much more volatile and bets like these love volatility

miohtama(831) 6 days ago [-]

Inside trading on political information is legal in the US.

Nancy Pelosi (Democrat) made this famous in last decade, hitting news headlines with it:

https://finance.yahoo.com/news/nancy-pelosi-outperformed-nea...

https://nypost.com/2024/09/27/us-news/trump-calls-for-nancy-...

Inside trading rules concern only leaking corporate-private insider information.

LeafItAlone(10000) 6 days ago [-]

By Nancy Pelosi, you actually mean her husband.

And if you look at the trades, you'll learn the biggest secret on Wall Street: make long term bets on tech and you'll get richer. Don't tell anyone I told you.

rvz(796) 6 days ago [-]

The replies to your comment are hilarious.

Here we have people defending politicians from benefiting from insider information and attempting to split hairs in what is the difference between Pelosi and Trump.

There isn't any.

There is no difference in them knowing something ahead of you and they may have already traded that ahead of the event. (and you copying it very late when the move has already happened).

They benefit, you do not.

ctippett(3657) 6 days ago [-]

404 Media reported a story on Monday[1] about a news outlet that claimed there'd be a 90-day break on tariffs for all countries besides China. This was published a few days before the official announcement.

So someone, somewhere, knew something before everyone else.

[1] https://www.404media.co/benzinga-news-service-that-falsely-r...

w10-1(10000) 6 days ago [-]

Good cover: publicly release a rumor via a random tiny outlet (along with a flurry of other rumors). Then if questioned, just say you heard it there.

m2f2(10000) 6 days ago [-]

As a foreigner I cannot comment on this, else I will be rejected at the airport by the ICE.

That's called freedom my friends.

atoav(10000) 6 days ago [-]
Rejected is somewhat euphemistic, you might be:

- held for an indefinite time without due process and information what you did wrong

- stripped naked and spilled with cold water

- potentially worse, but that depends entirely on the way things are developing on a day-by-day basis

And if someone thinks that won't happen to them because they come from a western country and have a low eumelanin pigmentation level, recent examples show that this does not matter1. Remember ICE also appears to want to police 'illegal ideas' at the border now2.

These arbitrary arrests, a disregard for the Rule of Law and the valuation of loyalty to the cause over predictable consequences fit the despotic style that is encouraged in the US from the top down lately. The world would be wise not continue betting all their cards on a crazy horse.

1: Germany, Feb 2025 – Tourist held 16 days at border, deported without clear reason. https://www.cbsnews.com/news/us-immigration-detaining-europe... UK, Mar 2025 – Backpacker held 3 weeks at Canada border, no charges. https://www.theguardian.com/us-news/2025/mar/22/tourism-trum... Germany, Mar 2025 – Visitor held 45 days under Visa Waiver, unclear why. https://www.pbs.org/newshour/world/u-s-detention-of-european... Canada, Mar 2025 – Woman with valid visa held 12 days at border. https://www.cbsnews.com/news/us-immigration-detaining-europe... UK, Mar 2025 – Punk band denied entry, detained at LAX. https://www.theguardian.com/us-news/2025/mar/22/tourism-trum... Germany, Mar 2025 – Green card holder detained at Boston airport. https://www.theguardian.com/us-news/2025/mar/22/tourism-trum... Multiple, Mar 2025 – ICE arrested 48 in NM; cause/details unclear. https://www.newyorker.com/news/the-lede/the-mystery-of-ices-...

2: ICE posted a very 'unfortunate' marketing picture recently: https://www.newsweek.com/ice-illegal-ideas-border-security-s...

anonfordays(10000) 6 days ago [-]

If this is in reference to the French scientist that was denied entry, that was fake news:

  'The French researcher in question was in possession of confidential information on his electronic device from Los Alamos National Laboratory— in violation of a non-disclosure agreement—something he admitted to taking without permission and attempted to conceal.
  Any claim that his removal was based on political beliefs is blatantly false.'
https://www.snopes.com/news/2025/03/20/french-researcher-den...
DeathArrow(856) 6 days ago [-]

If I am not mistaken, Trump said earlier that it's a great time to buy. Isn't there a chance that someone acted on that, seeing it as a hint?

ZeroTalent(10000) 6 days ago [-]

Not with 0DTE options at this scale at multiple strikes. highly improbable. This wasn't the only trade. It was a sweep of hidden trades across different strikes on SPY and QQQ. Occam's Razor says this is insider trading. This has never happened before to this degree. The cool thing is that all of the historical data is transparent and cannot be removed from the ledger, and we can ask and know who made the trades it will just take a few weeks.

And looking at the options flow, it was billions across all the unusual 0DTE trades.

mentalgear(3613) 6 days ago [-]

The follow-up post shows the whole magnitude of the insider trading:

> My previous post highlighted a striking example: how a single $2.5 million options position turned into $70 million in under an hour. But focusing solely on that trade risks missing the larger picture. What we actually saw was widespread activity. Numerous sophisticated traders carefully placing positions across several strike prices ($504, $505, $507, $509) in SPY as well as similar trades in QQQ.

> The pattern wasn't limited to a single trade or strike price. It was a coordinated wave of positions, all established within a critical half-hour window before the news broke.

> Imagine someone purchasing thousands of lottery tickets with a specific number combination just moments before those exact numbers are drawn.

https://data-and-politics.ghost.io/this-is-what-insider-trad...

qwertox(10000) 6 days ago [-]

How can the average MAGA voter which waves a little flag at a rally feel ok with this?

It's treason what they did, treason on their principles. All this in times when it was supposedly 'Main Street' turn.

https://x.com/SecScottBessent/status/1910000578198986822

wiseowise(10000) 6 days ago [-]

Wake me up when orange gets impeached.

jxjnskkzxxhx(10000) 6 days ago [-]

Wake up, he got impeached twice.

PaulRobinson(10000) 6 days ago [-]

He's a 2x impeached convicted felon.

He told everyone what he was going to do. A lot of people thought he was a lying politician who lies, and therefore these were all lies. Or, at best, jokes or exaggerations.

And now, 4 months into a 4 year term, he's doing it all. Who knew?

So when he jokes that he can do whatever he wants, including run for a third term, learn from the past: it isn't a joke, even if he's chuckling; it isn't an exaggeration; it's not a lie. It's real, it's the plan. Decide how you feel about it.

I'm not criticising anyone or anything here, I'm just stating facts. It's sad to me that so many people think this is all coming as some sort of huge surprise.

NKosmatos(1818) 6 days ago [-]

Whenever I see similar articles I get reminded that all these are worthless paperless money, exchanging hands and playing a game. Futures, options, securities (and all the rest financial jargon) proves that there is a very big economic game at play in a global scale. No wonder the whole planet owes some trillions (to whom?) :-)

There is no need for scientists to prove we're living inside a simulation, this whole global turn based strategy financial game, affecting our lives, is the proof that someone is having a laugh at/with us ;-)

urbandw311er(10000) 6 days ago [-]

I take a different view. These moments when the mask briefly slips are a chance to remember that we are controlled by a greedy elite and only given the illusion of choice and prosperity.

tirant(10000) 6 days ago [-]

Do you have medical insurance ? Or car insurance? I guess you and I guess you find them useful.

All these worthless paperless money as you call it are precisely also instruments for companies and individuals to gain some economical stability during uncertain times by buying contracts and shifting the risk to someone else that has either more financial means or has worked to have a better view of future conditions.

So as an example, your medical insurer and your car insurer know pretty well the odds of you needing either medical treatment or suffering some type of car accident. And because they also have the financial means to risk being wrong, they offer you an insurance, because in the aggregated number they are usually right in their predictions.

w_TF(10000) 6 days ago [-]

you don't even need insider information to make this trade (although he still might have tipped people off personally)

he literally told everyone to do it

https://truthsocial.com/@realDonaldTrump/posts/1143082727259...

and you might have felt especially confident if you recalled him doing the exact same thing in 2018

dboreham(2321) 6 days ago [-]

I made a trade (very small one) even though I had no clue about his post at the time. My trade was done on the basis that he was clearly either going to reverse or be removed (and then JD would reverse).

globular-toast(10000) 6 days ago [-]

What actually is the point of selling someone an option? Do enough people buy them and lose to make it worthwhile for the seller? Isn't this literally just legalised gambling? Are there enough addicts to make it lucrative like other gambling? Or are these resold packaged up into something that nobody reads à la synthetic CDOs?

dboreham(2321) 6 days ago [-]

It's basically gambling+, but the party on the other side of the trade is like a casino. On average they make money (usually).

+There are some non-gambling reasons for stock options trading, akin to commodities options being used to reduce risks in farming.

Fade_Dance(10000) 6 days ago [-]

I was reading this event, and unless I'm missing something, all of these reports and theories don't mention the blindingly obvious fact that should be dominating - 1pm was the 10 year bone auction.

Due to many factors (mostly around the crash that was in full swing, and a bond market that was illiquid and melting down), this was a key moment around which a huge amount of position management happened.

I'm not at all saying that there shouldn't be extensive investigation around insider trading around the Trump announcement. There obviously should be. And I'm also not saying that a big block of calls wouldn't fit the bill for that. It would (although it begs the question about the actor being so brazen. There would be countless ways to hide the bet more effectively while still producing insane profit).

What I am saying is that it's ridiculous to me that there's no discussion about the bond auction! First of all you can't just look at a block of option contracts independently many of them are part of wider trade structures. Those call options could have just been hedging short portfolio deltas, or be part of any number of strategies. The timing does signify that you ever executed the trade, ill intentioned or not, was aware of bond auction mechanics.

So you're starting to run into some Occam's razor territory here. Either the participant was sophisticated enough to understand the volume surge around bond auction data releases yet chose to do an incredibly boneheaded bet (instead of some sort of more cloaked relative value trade that would make 10x as well, or just making a bet on something slightly less obvious like credit spreads or ETFs that are trade-war exposed, etc), or the participant was making a clumsy obvious swing for the fences, yet lucked into to the minute perfect timing to cloak the transaction. Meanwhile there is the simplest answer which is just that the position was part of the huge wave of trading around the bond auction results.

I'd welcome the investigation, but it's pretty shocking to me that I'm seeing so much discussion around this without these points being brought up!

I manage a portfolio and also put large blocks of options that benefit from market rallies on at exactly the same time. That's because Bond volatility was sky high, and once the results came in, one of the likely outcomes is a huge volatility crush. That means that if you have positions that you've been holding from executing off during the crisis due to elevated volatility, and have a view that the market is nearing the end of capitulation (all of the indicators that most fear, like liquidations, fear/greed index tanking, positioning being bearish, are huge bullish signals to the trading world), then in order to dodge the binary event risk you may want to re-add exposure at that moment. Readings from prime broker reports show that institutional participants were extremely low in positioning, so the risk that would need to be hedged for many would have been upside risk. If someone wants to hedge their upside risk but doesn't want to actively move out of their bearish/locked down positions during the crisis, they may well use options.

(Devil's advocate argument concluded)

notdarkyet(3107) 6 days ago [-]

This is the most reasonable comment here on the topic. Everything else so far is either blatantly political and/or misunderstands market dynamics.

Unfortunately half the comments I read on here are becoming increasingly more reddit-esque. Users posting emotionally charged comments and speaking with an authoritative voice on topics they have little experience or knowledge in an accordingly Hacker News has become more and more useless as a news source.

oa335(10000) 6 days ago [-]

I believe some of the action around 1PM was auction related, but the options activity still looks strange.

Why would someone hedge with 0DTE options as opposed to normal options?

I'm no longer in that space, but I'd assume the OOTM strikes are less liquid than normal options, with higher theta as well. That looks like a very expensive hedge.

What kind of position would be hedged with these options that couldn't be hedged with normal options?

heywoods(10000) 5 days ago [-]

Thanks for sharing your expertise on this topic. As someone with limited knowledge in this area, I appreciate your contributions.

I'm curious if you could evaluate an AI response from Perplexity. I asked it to analyze your comment (https://www.perplexity.ai/search/summarize-https-govinfo-lib...) and would value your expert assessment of its accuracy and quality. Perhaps a grade score with brief comments on what stood out?

I feel current AI benchmarks don't capture the data to really gauge how reliable AI tools are for topics requiring some advanced/expert domain knowledge of. Your role is, 'expert in the middle' to provide commentary on what it gets right or wrong. Everything sounds right and looks right when I don't know what is what ya know?

testing22321(10000) 6 days ago [-]

That same day after market close Trump directly told us it was insider trading AND who dun it.

He literally bragged that his friend made 2.5 billion and the other 900 million that day.

https://www.reddit.com/r/PublicFreakout/comments/1jvyryz/tru...

lucaspm98(10000) 6 days ago [-]

Do you have anything that ties together this trade with those investors?

Anyone who held equities that day were up 6-7%, so many billionaires were up that much or more without touching their holdings.

freen(10000) 6 days ago [-]

This is my surprised face.

Elect a felon, you get felonies.

nanreh(10000) 6 days ago [-]

You mean "official actions".

misja111(3660) 6 days ago [-]

> And it wasn't just options. At exactly 1:01 pm EST, trading volume in SPY shares themselves spiked. Nearly 2.75 million shares were bought in that single minute.

This is standard practice, it was simply the marketmaker hedging its position after just having sold those $2.5 million call options.

The math checks out; at 85 cent per piece, those were 2.94 million call options. At 9 above the spot the delta was less than one so I guess you'd need to buy slightly above 2 million shares to hedge your delta. The normal SPY trades would have made up for the remainder of the 2.75 million volume.

Shocka1(10000) 3 days ago [-]

Yeah this whole article is meant to stir outrage - HN is taking the bait, but that's expected. The most proficient people in the world at logic, and they are still culpable to biases. My comment shouldn't be confused though - there is smoke, but none of this is really proof because there isn't enough evidence yet from our perspective.

A lot more details needed: - Is it a fund or individual? - Does the trader make these kind of trades regularly?

I have more questions, but those are the first questions I would start with. I trade 0 and 1 days quite a bit in SPY and have collected a lot of data and performed statistical analysis on them for several years now. I myself sold S&P futures options that morning based off my data.

My theory until proven otherwise - Trader makes smaller trades in 0 days regularly - market volatility and the state of that week gave a much higher probability that there would be an extreme reversal at the first sign of any hint of good news, which was proven by fake news of a tariff lift just a day or two before. The overall bearish sentiment also coiled the market for an extreme move to the upside. Adding even more probability, the market that day was at the same level from Monday, where it showed buyers were foaming at the mouth to buy. Trump tweets just after market open that it was a great time to buy. Probability increases even more...

Similar to a very high probability count in Blackjack, trader puts in a trade at 12 and has an exit plan for 1 or two hours later. Trader determines that worst case 50 to 75% of the trade is lost due to theta decay by 2pm. Maybe they have a stop at 40%? Best case 2 to 10x's their money due to support levels giving a small rally. They've done this before and the wins outweigh the losses. Maybe even a hedge fund or algo trader running an ML model.

In my data I've seen extremely large multi-million dollar call option plays regularly when there is market volatility like this. Until proven otherwise, everyone just needs to stop with the outrage. I'm completely willing to change my mind as more evidence comes out.





Historical Discussions: Gemini 2.5 Flash (April 17, 2025: 794 points)

(793) Gemini 2.5 Flash

793 points about 16 hours ago by meetpateltech in 78th position

developers.googleblog.com | Estimated reading time – 5 minutes | comments | anchor

Today we are rolling out an early version of Gemini 2.5 Flash in preview through the Gemini API via Google AI Studio and Vertex AI. Building upon the popular foundation of 2.0 Flash, this new version delivers a major upgrade in reasoning capabilities, while still prioritizing speed and cost. Gemini 2.5 Flash is our first fully hybrid reasoning model, giving developers the ability to turn thinking on or off. The model also allows developers to set thinking budgets to find the right tradeoff between quality, cost, and latency. Even with thinking off, developers can maintain the fast speeds of 2.0 Flash, and improve performance.

Our Gemini 2.5 models are thinking models, capable of reasoning through their thoughts before responding. Instead of immediately generating an output, the model can perform a 'thinking' process to better understand the prompt, break down complex tasks, and plan a response. On complex tasks that require multiple steps of reasoning (like solving math problems or analyzing research questions), the thinking process allows the model to arrive at more accurate and comprehensive answers. In fact, Gemini 2.5 Flash performs strongly on Hard Prompts in LMArena, second only to 2.5 Pro.

2.5 Flash has comparable metrics to other leading models for a fraction of the cost and size.

Our most cost-efficient thinking model

2.5 Flash continues to lead as the model with the best price-to-performance ratio.

Gemini 2.5 Flash adds another model to Google's pareto frontier of cost to quality.*

Fine-grained controls to manage thinking

We know that different use cases have different tradeoffs in quality, cost, and latency. To give developers flexibility, we've enabled setting a thinking budget that offers fine-grained control over the maximum number of tokens a model can generate while thinking. A higher budget allows the model to reason further to improve quality. Importantly, though, the budget sets a cap on how much 2.5 Flash can think, but the model does not use the full budget if the prompt does not require it.

Improvements in reasoning quality as thinking budget increases.

The model is trained to know how long to think for a given prompt, and therefore automatically decides how much to think based on the perceived task complexity.

If you want to keep the lowest cost and latency while still improving performance over 2.0 Flash, set the thinking budget to 0. You can also choose to set a specific token budget for the thinking phase using a parameter in the API or the slider in Google AI Studio and in Vertex AI. The budget can range from 0 to 24576 tokens for 2.5 Flash.

The following prompts demonstrate how much reasoning may be used in the 2.5 Flash's default mode.

Prompts requiring low reasoning:

Example 1: "Thank you" in Spanish

Example 2: How many provinces does Canada have?

Prompts requiring medium reasoning:

Example 1: You roll two dice. What's the probability they add up to 7?

Example 2: My gym has pickup hours for basketball between 9-3pm on MWF and between 2-8pm on Tuesday and Saturday. If I work 9-6pm 5 days a week and want to play 5 hours of basketball on weekdays, create a schedule for me to make it all work.

Prompts requiring high reasoning:

Example 1: A cantilever beam of length L=3m has a rectangular cross-section (width b=0.1m, height h=0.2m) and is made of steel (E=200 GPa). It is subjected to a uniformly distributed load w=5 kN/m along its entire length and a point load P=10 kN at its free end. Calculate the maximum bending stress (σ_max).

Example 2: Write a function evaluate_cells(cells: Dict[str, str]) -> Dict[str, float] that computes the values of spreadsheet cells.

Each cell contains:

  • Or a formula like '=A1 + B1 * 2' using +, -, *,/ and other cells.

Requirements:

  • Resolve dependencies between cells.

  • Handle operator precedence (*/ before +-).

  • Detect cycles and raise ValueError('Cycle detected at <cell>').

  • No eval(). Use only built-in libraries.

Start building with Gemini 2.5 Flash today

Gemini 2.5 Flash with thinking capabilities is now available in preview via the Gemini API in Google AI Studio and in Vertex AI, and in a dedicated dropdown in the Gemini app. We encourage you to experiment with the thinking_budget parameter and explore how controllable reasoning can help you solve more complex problems.

Find detailed API references and thinking guides in our developer docs or get started with code examples from the Gemini Cookbook.

We will continue to improve Gemini 2.5 Flash, with more coming soon, before we make it generally available for full production use.

*Model pricing is sourced from Artificial Analysis & Company Documentation




All Comments: [-] | anchor

byefruit(10000) about 16 hours ago [-]

It's interesting that there's a price nearly 6x price difference between reasoning and no reasoning.

This implies it's not a hybrid model that can just skip reasoning steps if requested.

Anyone know what else they might be doing?

Reasoning means contexts will be longer (for thinking tokens) and there's an increase in cost to inference with a longer context but it's not going to be 6x.

Or is it just market pricing?

vineyardmike(10000) about 16 hours ago [-]

Based on their graph, it does look explicitly priced along their "Pareto Frontier" curve. I'm guessing that is guiding the price more than their underlying costs.

It's smart because it gives them room to drop prices later and compete once other company actually get to a similar quality.

jsnell(221) about 15 hours ago [-]

> This implies it's not a hybrid model that can just skip reasoning steps if requested.

It clearly is, since most of the post is dedicated to the tunability (both manual and automatic) of the reasoning budget.

I don't know what they're doing with this pricing, and the blog post does not do a good job explaining.

Could it be that they're not counting thinking tokens as output tokens (since you don't get access to the full thinking trace anyway), and this is the basically amortizing the thinking tokens spend over the actual output tokens? Doesn't make sense either, because then the user has no incentive to use anything except 0/max thinking budgets.

RobinL(3003) about 15 hours ago [-]

Does anyone know how this pricing works? Supposing I have a classification prompt where I need the response to be a binary yes/no. I need one token of output, but reasoning will obviously add far more than 6 additional tokens. Is it still a 6x price multiplier? That doesn't seem to make sense, but not does paying 6x more for every token including reasoning ones

punkpeye(2705) about 16 hours ago [-]

This is cool, but rate limits on all of these preview models are PITA

Layvier(10000) about 16 hours ago [-]

Agreed, it's not even possible to run an eval dataset. If someone from google see this please at least increase the burst rate limit

arnaudsm(10000) about 16 hours ago [-]

Gemini flash models have the least hype, but in my experience in production have the best bang for the buck and multimodal tooling.

Google is silently winning the AI race.

belter(63) about 16 hours ago [-]

> Google is silently winning the AI race.

That is what we keep hearing here...The last Gemini I cancelled the account, and can't help notice the new one they are offering for free...

Layvier(10000) about 16 hours ago [-]

Absolutely. So many use cases for it, and it's so cheap/fast/reliable

Fairburn(10000) about 16 hours ago [-]

Sorry, but no. Gemini isn't the fastest horse, yet. And it's use within their ecosystem means it isn't geared to the masses outside of their bubble. They are not leading the race but they are a contender.

spruce_tips(10000) about 16 hours ago [-]

i have a high volume task i wrote an eval for and was pleasantly surprised at 2.0 flash's cost to value ratio especially compared to gpt4.1-mini/nano

accuracy | input price | output price

Gemini Flash 2.0 Lite: 67% | $0.075 | $0.30

Gemini Flash 2.0: 93% | $0.10 | $0.40

GPT-4.1-mini: 93% | $0.40 | $1.60

GPT-4.1-nano: 43% | $0.10 | $0.40

excited to to try out 2.5 flash

42lux(10000) about 16 hours ago [-]

The API is free, and it's great for everyday tasks. So yes there is no better bang for the buck.

statements(10000) about 15 hours ago [-]

Absolutely agree. Granted, it is task dependent. But when it comes to classification and attribute extraction, I've been using 2.0 Flash with huge access across massive datasets. It would not be even viable cost wise with other models.

xnx(1016) about 15 hours ago [-]

Shhhh. You're going to give away the secret weapon!

gambiting(10000) about 15 hours ago [-]

In my experience they are as dumb as a bag of bricks. The other day I asked 'can you edit a picture if I upload one'

And it replied 'sure, here is a picture of a photo editing prompt:'

https://g.co/gemini/share/5e298e7d7613

It's like 'baby's first AI'. The only good thing about it is that it's free.

rvz(796) about 15 hours ago [-]

Google always has been winning the AI race as soon as DeepMind was properly put to use to develop their AI models, instead of the ones that built Bard (Google AI team).

GaggiX(1656) about 15 hours ago [-]

Flash models are really good even for an end user because how fast and good performance they have.

ghurtado(10000) about 15 hours ago [-]

I know it's a single data point, but yesterday I showed it a diagram of my fairly complex micropython program, (including RP2 specific features, DMA and PIO) and it was able to describe in detail not just the structure of the program, but also exactly what it does and how it does it. This is before seeing a single like of code, just going by boxes and arrows.

The other AIs I have shown the same diagram to, have all struggled to make sense of it.

redbell(518) about 14 hours ago [-]

> Google is silently winning the AI race

Yep, I agree! This convinced me: https://news.ycombinator.com/item?id=43661235

ramesh31(3343) about 13 hours ago [-]

>"Google is silently winning the AI race."

It's not surprising. What was surprising honestly was how they were caught off guard by OpenAI. It feels like in 2022 just about all the big players had a GPT-3 level system in the works internally, but SamA and co. knew they had a winning hand at the time, and just showed their cards first.

russellbeattie(10000) about 13 hours ago [-]

I have to say, I never doubted it would happen. They've been at the forefront of AI and ML for well over a decade. Their scientists were the authors of the 'Attention is all you need' paper, among thousands of others. A Google Scholar search produces endless results. There just seemed to be a disconnect between the research and product areas of the company. I think they've got that worked out now.

They're getting their ass kicked in court though, which might be making them much less aggressive than they would be otherwise, or at least quieter about it.

Nihilartikel(10000) about 13 hours ago [-]

100% agree. I had Gemini flash 2 chew through thousands of points of nasty unstructured client data and it did a 'better than human intern' level conversion into clean structured output for about $30 of API usage. I am sold. 2.5 pro experimental is a different league though for coding. I'm leveraging it for massive refactoring now and it is almost magical.

no_wizard(2101) about 13 hours ago [-]

I remember everyone saying its a two horse race between Google and OpenAI, then DeepSeek happened.

Never count out the possibility of a dark horse competitor ripping the sod right out from under

bhl(3631) about 12 hours ago [-]

It's cheap but also lazy. It sometimes generates empty strings or empty arrays for tool calls, and then I just re-route the request to a stronger model for the tool call.

I've spent a lot of time on prompts and tool-calls to get Flash models to reason and execute well. When I give the same context to stronger models like 4o or Gemini 2.5 Pro, it's able to get to the same answers in less steps but at higher token cost.

Which is to be expected: more guardrails for smaller, weaker models. But then it's a tradeoff; no easy way to pick which models to use.

Instead of SQL optimization, it's now model optimization.

paulcole(10000) about 12 hours ago [-]

> Google is silently winning the AI race.

It's not clear to me what either the "race" or "winning" is.

I use ChatGPT for 99% of my personal and professional use. I've just gotten used to the interface and quirks. It's a good consumer product that I like to pay $20/month for and use. My work doesn't require much in the way of monthly tokens but I just pay for the OpenAI API and use that.

Is that winning? Becoming the de facto "AI" tool for consumers?

Or is the race to become what's used by developers inside of apps and software?

The race isn't to have the best model (I don't think) because it seems like the 3rd best model is very very good for many people's uses.

xbmcuser(579) about 15 hours ago [-]

For a non programmer like me google is becoming shockingly good. It is giving working code the first time. I was playing around with it asked it to write code to scrape some data of a website to analyse. I was expecting it to write something that would scrape the data and later I would upload the data to it to analyse. But it actually wrote code that scraped and analysed the data. It was basic categorizing and counting of the data but I was not expecting it to do that.

kccqzy(2074) about 15 hours ago [-]

That's the opposite experience of my wife who's in tech but also a non programmer. She wanted to ask Gemini to write code to do some basic data analysis things in a more automated way than Excel. More than once, Gemini wrote a long bash script where some sed invocations are just plain wrong. More than once I've had to debug Gemini-written bash scripts. As a programmer I knew how bash scripts aren't great for readability so I told my wife to ask Gemini to write Python. It resulted in higher code quality, but still contained bugs that are impossible for a non programmer to fix. Sometimes asking a follow up about the bugs would cause Gemini to fix it, but doing so repeatedly will result in Gemini forgetting what's being asked or simply throwing an internal error.

Currently IMO you have to be a programmer to use Gemini to write programs effectively.

ant6n(2051) about 15 hours ago [-]

Last time I tried Gemini, it messed with my google photo data plan and family sharing. I wish I could try the AI separate from my Google account.

ModernMech(10000) about 14 hours ago [-]

I've been continually disappointed. I've been told it's getting exponentially better and we won't be able to keep up with how good they get, but I'm not convinced. I'm using them every single day and I'm never shocked or awed by its competence, but instead continually vexxed that isn't not living up to the hype I keep reading.

Case in point: there was a post here recently about implementing a JS algorithm that highlighted headings as you scrolled (side note: can anyone remember what the title was? I can't find it again), but I wanted to test the LLM for that kind of task.

Pretty much no matter what I did, I couldn't get it to give me a solution that would highlight all of the titles down to the very last one.

I knew what the problem was, but even guiding the AI, it couldn't fix the code. I tried multiple AIs, different strategies. The best I could come up with was to guide it step by step on how to fix the code. Even telling it exactly what the problem was, it couldn't fix it.

So this goes out to the 'you're prompting it wrong' crowd... Can you show me a prompt or a conversation that will get an AI to spit out working code for this task: JavaScript that will highlighting headings as you scroll, to the very last one. The challenge is to prompt it to do this without telling it how to implement it.

I figure this should be easy for the AI because this kind of thing is very standard, but maybe I'm just holding it wrong?

thimabi(3507) about 12 hours ago [-]

I find it baffling that Google offers such impressive models through the API and even the free AI Studio with fine-grained control, yet the models used in the Gemini app feel much worse.

Over the past few weeks, I've been using Gemini Advanced on my Workspace account. There, the models think for shorter times, provide shorter outputs, and even their context window is far from the advertised 1 million tokens. It makes me think that Google is intentionally limiting the Gemini app.

Perhaps the goal is to steer users toward the API or AI Studio, with the free tier that involves data collection for training purposes.

Alifatisk(3260) about 12 hours ago [-]

Google lack marketing for ai studio, it has only recently become widely known through word of mouth

_delirium(2430) about 5 hours ago [-]

This might have changed after you posted your comment, but it looks like 2.5 Pro and 2.5 Flash are available in the Gemini app now, both web and mobile.

xnx(1016) about 16 hours ago [-]

50% price increase from Gemini 2.0 Flash. That sounds like a lot, but Flash is still so cheap when compared to other models of this (or lesser) quality. https://developers.googleblog.com/en/start-building-with-gem...

akudha(2086) about 16 hours ago [-]

Is this cheaper than DeepSeek? Am I reading this right?

Tiberium(3404) about 16 hours ago [-]

del

swyx(159) about 15 hours ago [-]

done pretty much inline with the price elo pareto frontier https://x.com/swyx/status/1912959140743586206/photo/1

transformi(10000) about 16 hours ago [-]

Bad day is going on google.

First the decleration of illegal monopoly..

and now... Google's latest innovation: programmable overthinking.

With Gemini 2.5 Flash, you too can now set a thinking_budget—because nothing says 'state-of-the-art AI' like manually capping how long it's allowed to reason. Truly the dream: debugging a production outage at 2am wondering if your LLM didn't answer correctly because you cheaped out on tokens. lol.

"Turn thinking off for better performance." That's not a model config, that's a metaphor for Google's entire AI strategy lately.

At this point, Gemini isn't an AI product—it's a latency-cost-quality compromise simulator with a text interface. Meanwhile, OpenAI and Anthropic are out here just... cooking the benchmarks

danielbln(10000) about 16 hours ago [-]

Google's Gemini 2.5 pro model is incredibly strong, it's en par and at times better than Claude 3.7 in coding performance, being able to ingest entire videos into the context is something I haven seen elsewhere either. Google AI products have been anywhere between bad (Bard) to lackluster (Gemini 1.5), but 2.5 is a contender, in all dimensions. Google is also the only player that owns the entire stack, from research, software , data, compute hardware. I think they were slow to start but they've closed the gap since.

bsmith(3642) about 15 hours ago [-]

Using AI to debug code at 2am sounds like pure insanity.

alecco(1045) about 15 hours ago [-]

Gemini models are very good but in my experience they tend to overdo the problems. When I give it things for context and something to rework, Gemini often reworks the problem.

For software it is barely useful because you want small commits for specific fixes not a whole refactor/rewrite. I tried many prompts but it's hard. Even when I give it function signatures of the APIs the code I want to fix uses, Gemini rewrites the API functions.

If anybody knows a prompt hack to avoid this, I'm all ears. Meanwhile I'm staying with Claude Pro.

byearthithatius(10000) about 15 hours ago [-]

Yes, it will add INSANE amounts of 'robust error handling' to quick scripts where I can be confident about assumptions. This turns my clean 40 lines of Python where I KNOW the JSONL I am parsing is valid into 200+ lines filled with ten new try except statements. Even when I tell it not to do this, it loves to 'find and help' in other ways. Quite annoying. But overall it is pretty dang good. It even spotted a bug I missed the other day in a big 400+ line complex data processing file.

dherikb(10000) about 5 hours ago [-]

I have the same issue using it with Aider.

The model is good to solve problems, but is very difficult to control the unnecessary changes that the model does in the rest of the code. Also it adds a lot of unnecessary comments, even when I explicitly say to not add.

For now Deepseek R1 and V3 it's working better to me, producing more predictable results and capturing better my intentions (not tried Claude yet).

w4yai(10000) about 4 hours ago [-]

Here's what I found to be working (not 100% but it gives much better and consistant results)

Basically, I ask it to repeat at the start of each message some rules :

'From now on, you must repeat and comply the following rules at the top of all your messages onwards:

- I will never rewrite API functions. Even if I think it's a good idea, it is a bad idea. I will keep the API function as it is and it is perfect like that.

- I will never add extra input validation. Even if I think it's a good idea, it is a bad idea. I will keep the function without validation and it is perfect like that.

- ...

- If I violate any of those rules, I did a bad job. '

Forcing it to repeat things make the model output more aligned and focused in my experience.

convivialdingo(10000) about 6 hours ago [-]

Dang - Google finally made a quality model that doesn't make me want to throw my computer out a window. It's honest, neutral and clearly not trained by the ideologically rabid anti-bias but actually super biased regime.

Did I miss a revolt or something in googley land? A Google model saying "free speech is valuable and diverse opinions are good" is frankly bizarre to see.

convivialdingo(10000) about 6 hours ago [-]

Downvote me all you want - the fact remains that previous Google models were so riddled with guardrails and political correctness that it was practically impossible to use for anything besides code and clean business data. Random text and opinion would trigger a filter and shut down output.

Even this model criticizes the failures of the previous models.

hubraumhugo(547) about 5 hours ago [-]

You can get your HN profile analyzed and roasted by it. It's pretty funny :) https://hn-wrapped.kadoa.com/

I'll add a selection for different models soon.

demaga(10000) about 5 hours ago [-]

Didn't expect to be roasted by AI this morning. Nice one

Alifatisk(3260) about 3 hours ago [-]

How is this relevant to Gemini 2.5 Flash? I guess it's using it or something?

few(10000) about 2 hours ago [-]

This is cool.

Does it only use a few recent comments or entire history? I'm trying to figure out where it figured out my city when I thought I was careful not to reveal it. I'm scrolling back pages without finding where I said it in the past. Could it have inferred it based on other information or hallucinated it?

I wonder if there's a more opsec-focused version of this.

ks2048(3275) about 15 hours ago [-]

If this announcement is targeting people not up-to-date on the models available, I think they should say what 'flash' means. Is there a 'Gemini (non-flash)'?

I see the 4 Google model names in the chart here. Are these 4 the main 'families' of models to choose from?

- Gemini-Pro-Preview

- Gemini-Flash-Preview

- Gemini-Flash

- Gemini-Flash-Lite

mwest217(10000) about 15 hours ago [-]

Gemini has had 4 families of models, in order of decreasing size:

- Ultra

- Pro

- Flash

- Flash-Lite

Versions with `-Preview` at the end haven't had their 'official release' and are technically in some form of 'early access' (though I'm not totally clear on exactly what that means given that they're fully available and as of 2.5 Pro Preview, have pricing attached to them - earlier versions were free during Preview but had pretty strict rate limiting but now it seems that Preview models are more or less fully usable).

AbuAssar(3069) about 15 hours ago [-]

I noticed that OpenAI don't compare their models to third party models in their announcement posts, unlike google, meta and the others.

jskherman(10000) about 14 hours ago [-]

They're doing the Apple strategy. Less spotlight for other third parties, and less awareness how they're lagging behind so that those already ignorantly locked into OpenAI would not switch. But at this point why would anyone do that when switching costs are low?

hmaxwell(10000) about 16 hours ago [-]

I did some testing this morning:

Prompt: 'can you find any mistakes on my codebase? I put one in there on purpose' + 70,000 tokens of codebase where in one line I have an include for a non-existent file.

Claude 3.7 Sonnet (Try 1 - new chat) Claimed it Found a race condition in the api of the admin interface that would be triggered if two admins were to change the room order at the same time. Claude suggested I group all sql queries in a single transaction. I looked at the code and found that it already used a transaction for all queries. I said: the order_update api is already done with a transaction. Claude replied: 'You're absolutely right, and I apologize for my mistake. I was incorrect to claim there was a race condition issue. The transaction ensures atomicity and consistency of the updates, and the SQL queries are properly structured for their intended purpose.'

Claude 3.7 Sonnet (Try 2 - new chat) Found a comment in the admin ui javascript code that suggested a potential alternative to event handler cleanup that was not implemented because I decided to go with a cleaner route. Then asked 'Is this the issue you intentionally included, or would you like me to look for other potential problems?' I said: 'The comment merely suggests an alternative, right?' claude said: 'Yes, you're absolutely right. The comment is merely suggesting an alternative approach that isn't being used in the code, rather than indicating a mistake. So there's no actual bug or mistake in this part of the code - just documentation of different possible approaches. I apologize for misinterpreting this as an issue!'

Claude 3.7 Sonnet (Try 3 - new chat) When processing items out of the database to generate QR codes in the admin interface, Claude says that my code both attempts to generate QR codes with undefined data AS WELL AS saying that my error handling skips undefined data. Claude contradicts itself within 2 sentences. When asking about clarification Claude replies: Looking at the code more carefully, I see that the code actually has proper error handling. I incorrectly stated that it 'still attempts to call generateQRCode()' in the first part of my analysis, which was wrong. The code properly handles the case when there's no data-room attribute.

Gemnini Advanced 2.5 Pro (Try 1 - new chat) Found the intentional error and said I should stop putting db creds/api keys into the codebase.

Gemnini Advanced 2.5 Pro (Try 2 - new chat) Found the intentional error and said I should stop putting db creds/api keys into the codebase.

Gemnini Advanced 2.5 Pro (Try 3 - new chat) Found the intentional error and said I should stop putting db creds/api keys into the codebase.

o4-mini-high and o4-mini and o3 and 4.5 and 4o - 'The message you submitted was too long, please reload the conversation and submit something shorter.'

Tiberium(3404) about 16 hours ago [-]

The thread is about 2.5 Flash though, not 2.5 Pro. Maybe you can try again with 2.5 Flash specifically? Even though it's a small model.

airstrike(941) about 16 hours ago [-]

Have you tried Claude Code?

danielbln(10000) about 15 hours ago [-]

Those responses are very Claude, to. 3.7 has powered our agentic workflows for weeks, but I've been using almost only Gemini for the last week and feel the output is better generally. It's gotten much better at agentic workflows (using 2.0 in an agent setup was not working well at all) and I prefer its tuning over Clause's, more to the point and less meandering.

rendang(10000) about 15 hours ago [-]

3 different answers in 3 tries for Claude? Makes me curious how many times you'd get the same answer if you asked 10/20/100 times

bambax(2947) about 14 hours ago [-]

> codebase where in one line I have an include for a non-existent file

Ok but you don't need AI for this; almost any IDE will issue a warning for that kind of error...

fandorin(3645) about 1 hour ago [-]

how did you put your whole codebase in a prompt for gemini?

Workaccount2(3572) about 16 hours ago [-]

OpenAI might win the college students but it looks like Google will lock in enterprise.

xnx(1016) about 16 hours ago [-]

ChatGPT seems to have a name recognition / first-mover advantage with college students now, but is there any reason to think that will stick when today's high school students are using Gemini on their Chromebooks?

gundmc(10000) about 16 hours ago [-]

Funny you should say that. Google just announced today that they are giving all college students one year of free Gemini advanced. I wonder how much that will actually move the needle among the youth.

superfrank(10000) about 16 hours ago [-]

Is there really lock in with AI models?

I built a product that uses and LLM and I got curious about the quality of the output from different models. It took me a weekend to go from just using OpenAI's API to having Gemini, Claude, and DeepSeek all as options and a lot of that time was research on what model from each provider that I wanted to use.

ein0p(10000) about 16 hours ago [-]

How will it lock in the enterprise if its market share of enterprise customers is half that of Azure (Azure also sells OpenAI inference, btw), and one third that of AWS?

asadm(1194) about 15 hours ago [-]

funny thing about younglings, they will migrate to something else as fast as they came to you.

Oras(3150) about 15 hours ago [-]

Enterprise has already been won by Microsoft (Azure), which runs on OpenAI.

edaemon(10000) about 15 hours ago [-]

It seems more and more like AI is less of a product and more of a feature. Most people aren't going to care or even know about the model or the company who made it, they're just going to use the AI features built into the products they already use.

statements(10000) about 16 hours ago [-]

Interesting to note that this might be the only model with knowledge cut off as recent as 2025 January

Tiberium(3404) about 16 hours ago [-]

Gemini 2.5 Pro has the same knowledge cutoff specified, but in reality on more niche topics it's still limited to ~middle of 2024.

brightball(3533) about 16 hours ago [-]

Isn't Grok 3 basically real time now?

ein0p(10000) about 16 hours ago [-]

Absolutely decimated on metrics by o4-mini, straight out of the gate, and not even that much cheaper on output tokens (o4-mini's thinking can't be turned off IIRC).

gundmc(10000) about 16 hours ago [-]

It's good to see some actual competition on this price range! A lot of Flash 2.5's edge will depend on how well the dynamic reasoning works. It's also helpful to have _significantly_ lower input token cost for a large context use cases.

rfw300(3192) about 15 hours ago [-]

o4-mini does look to be a better model, but this is actually a lot cheaper! It's ~7x cheaper for both input and output tokens.

vessenes(3493) about 15 hours ago [-]

o4-mini costs 8x as much as 2.5 flash. I believe its useful context window is also shorter, although I haven't verified this directly.

mupuff1234(3632) about 15 hours ago [-]

Not sure 'decimated' is a fitting word for 'slightly higher performance on some benchmarks'.

kfajdsl(10000) about 13 hours ago [-]

Anecdotally o4-mini doesn't perform as well on video understanding tasks in our pipeline, and also in Cursor it seems really not great.

During one session, it read the same file (same lines) several times, ran 'python -c 'print("skip!")'' for no reason, and then got into another file reading loop. Then after asking a hypothetical about the potential performance implications of different ffmpeg flags, it claimed that it ran a test and determined conclusively that one particular set was faster, even though it hadn't even attempted a tool call, let alone have the results from a test that didn't exist.

zoogeny(10000) about 14 hours ago [-]

Google making Gemini 2.5 Pro (Experimental) free was a big deal. I haven't tried the more expensive OpenAI models so I can't even compare, only to the free models I have used of theirs in the past.

Gemini 2.5 Pro is so much of a step up (IME) that I've become sold on Google's models in general. It not only is smarter than me on most of the subjects I engage with it, it also isn't completely obsequious. The model pushes back on me rather than contorting itself to find a way to agree.

100% of my casual AI usage is now in Gemini and I look forward to asking it questions on deep topics because it consistently provides me with insight. I am building new tools with the mind to optimize my usage to increase it's value to me.

PerusingAround(10000) about 14 hours ago [-]

This comment is exactly my experience, I feel like as if I had wrote it myself.

cjohnson318(3644) about 14 hours ago [-]

Yeah, my wife pays for ChatGPT, but Gemini is fine enough for me.

dr_kiszonka(10000) about 14 hours ago [-]

I was a big fan of that model but it has been replaced in AI Studio by its preview version, which, by comparison, is pretty bad. I hope Google makes the release version much closer to the experimental one.

jeeeb(10000) about 13 hours ago [-]

After comparing Gemini Pro and Claude Sonnet 3.7 coding answers side by side a few times, I decided to cancel my Anthropic subscription and just stick to Gemini.

fsndz(10000) about 13 hours ago [-]

More and more people are coming to the realisation that Google is actually winning at the model level right now.

m3kw9(10000) about 12 hours ago [-]

Using Claude code and Codex CLI and then Aider with Gemini 2.5 pro, Aider is much faster because you feed in the files instead of using tools to start doing all kinds of whole know what spending 10x the tokens. I tried a relatively simple refactor which needed around 7 files changed, only Aider with 2.5 got it and in the first shot. Where as both Codex and Claude code completely fumbled it

goshx(3161) about 12 hours ago [-]

Same here! It is borderline stubborn at times and I need to prove it wrong. Still, it is the best model to use with Cursor, in my experience.

teleforce(414) about 12 hours ago [-]

>obsequious

Thanks for the new word, I have to look it up.

'obedient or attentive to an excessive or servile degree'

Apparently it means an AI that mindlessly follow your logic and instructions without reasoning and articulation is not good enough.

UltraSane(10000) about 11 hours ago [-]

I had a very interesting long debate/discussion with Gemini 2.5 Pro about the Synapse-Evolve bank debacle among other things. It really feels like debating a very knowledgeable and smart human.

jofzar(10000) about 10 hours ago [-]

My work doesn't have access to 2.5 pro and all these posts are just making me want it so much more.

I hate how slow things are sometimes.

i_love_retros(10000) about 8 hours ago [-]

Why is it free / so cheap (I seem to be getting charged a few cents a day using it with aider so not free but still crazy cheap compared to sonnet)

redox99(10000) about 7 hours ago [-]

I've had many disappointing results with gemini 2.5 pro. For general queries possibly involving search, chatgpt and grok work better for me.

For code, gemini is very buggy in cursor, so I use Claude 3.7. But it might be partly cursor's fault.

rgoulter(10000) about 7 hours ago [-]

The 1 million token context window also means you can just copy/paste so much source code or log output.

crossroadsguy(10000) about 7 hours ago [-]

One difference, and imho that's a big difference — you can't use any of the Google's chatbots/models without being logged in, unlike chatgpt.

casey2(10000) about 1 hour ago [-]

It's a big deal, but not in the way that you think. A race to the bottom is humanities best defense against fast takeoff.

mmaunder(3123) about 15 hours ago [-]

More great innovation from Google. OpenAI have two major problems.

The first is Google's vertically integrated chip pipeline and deep supply chain and operational knowledge when it comes to creating AI chips and putting them into production. They have a massive cost advantage at every step. This translates into more free services, cheaper paid services, more capabilities due to more affordable compute, and far more growth.

Second problem is data starvation and the unfair advantage that social media has when it comes to a source of continually refreshed knowledge. Now that the foundational model providers have churned through the common crawl and are competing to consume things like video and whatever is left, new data is becoming increasingly valuable as a differentiator, and more importantly, as a provider of sustained value for years to come.

SamA has signaled both of these problems when he made noises about building a fab a while back and is more recently making noises about launching a social media platform off OpenAI. The smart money among his investors know these issues to be fundamental in deciding if OAI will succeed or not, and are asking the hard questions.

If the only answer for both is 'we'll build it from scratch', OpenAI is in very big trouble. And it seems that that is the best answer that SamA can come up with. I continue to believe that OpenAI will be the Netscape of the AI revolution.

The win is Google's for the taking, if they can get out of their own way.

jbverschoor(2627) about 15 hours ago [-]

Except that they train their model even when you pay. So yeah.. I'd rather not use their 'evil'

Keyframe(3668) about 15 hours ago [-]

Google has the data and has the hardware, not to mention software and infrastructure talent. Once this Bismarck turns around and it looks like it is, who can parry it for real? They have internet.zip and all the previous versions as well, they have youtube, email, search, books, traffic, maps and business on it, phones and habits around it, even the OG social network, the usenet. It's a sleeping giant starting to wake up and it's already causing commotion, let's see what it does when it drinks morning coffee.

whyenot(3590) about 15 hours ago [-]

Another advantage that Google has is the deep integration of Gemini into Google Office products and Gmail. I was part of a pilot group and got to use a pre-release version and it's really powerful and not something that will be easy for OpenAI to match.

zoogeny(10000) about 14 hours ago [-]

If the battle was between Altman and Pichai I'd have my doubts.

But the battle is between Altman and Hassabis.

I recall some advice on investment from Buffett regarding how he invests in the management team.

throwup238(465) about 14 hours ago [-]

Nobody has really talked about what I think is an advantage just as powerful as the custom chips: Google Books. They already won a landmark fair use lawsuit against book publishers, digitized more books than anyone on earth, and used their Captcha service to crowdsource its OCR. They've got the best* legal cover and all of the best sources of human knowledge already there. Then Youtube for video.

The chips of course push them over the top. I don't know how much Deep Research is costing them but it's by far the best experience with AI I've had so far with a generous 20/day rate limit. At this point I must be using up at least 5-10 compute hours a day. Until about a week ago I had almost completely written off Google.

* For what it's worth, I don't know. IANAL

peterjliu(10000) about 14 hours ago [-]

another advantage is people want the Google bot to crawl their pages, unlike most AI companies

stefan_(1849) about 14 hours ago [-]

I don't know man, for months now people keep telling me on HN how 'Google is winning', yet no normal person I ever asked knows what the fuck 'Gemini' is. I don't know what they are winning, it might be internet points for all I know.

Actually, some of the people polled recalled the Google AI efforts by their expert system recommending glue on pizza and smoking in pregnancy. It's a big joke.

labrador(2669) about 14 hours ago [-]

> If the only answer for both is 'we'll build it from scratch', OpenAI is in very big trouble

They could buy Google+ code from Google and resurrect it with OpenAI branding. Alternately they could partner with Bluesky

onlyrealcuzzo(10000) about 11 hours ago [-]

> The smart money among his investors know these issues to be fundamental in deciding if OAI will succeed or not, and are asking the hard questions.

OpenAI has already succeeded.

If it ends up being a $100B company instead of a $10T company, that is success. By a very large margin.

It's hard to imagine a world in which OpenAI just goes bankrupt and ends up being worth nothing.

dyauspitr(10000) about 8 hours ago [-]

I haven't heard this much positive sentiment about Google in a while. Making something freely available really turns public sentiment around.

serjester(1661) about 14 hours ago [-]

Just ran it on one of our internal PDF (3 pages, medium difficulty) to json benchmarks:

gemini-flash-2.0: 60 ish% accuracy 6,250 pages per dollar

gemini-2.5-flash-preview (no thinking): 80 ish% accuracy 1,700 pages per dollar

gemini-2.5-flash-preview (with thinking): 80 ish% accuracy (not sure what's going on here) 350 pages per dollar

gemini-flash-2.5: 90 ish% accuracy 150 pages per dollar

I do wish they separated the thinking variant from the regular one - it's incredibly confusing when a model parameter dramatically impacts pricing.

ValveFan6969(10000) about 14 hours ago [-]

I have been having similar performance issues, I believe they intentionally made a worse model (Gemini 2.5) to get more money out of you. However, there is a way where you can make money off of Gemini 2.5.

If you set the thinking parameter lower and lower, you can make the model spew absolute nonsense for the first response. It costs 10 cents per input / output, and sometimes you get a response that was just so bad your clients will ask for more and more corrections.

minimaxir(32) about 14 hours ago [-]

One hidden note from Gemini 2.5 Flash when diving deep into the documentation: for image inputs, not only can the model be instructed to generated 2D bounding boxes of relevant subjects, but it can also create segmentation masks! https://ai.google.dev/gemini-api/docs/image-understanding#se...

At this price point with the Flash model, creating segmentation masks is pretty nifty.

The segmentation masks are a bit of a galaxy brain implementation by generating a b64 string representing the mask: https://colab.research.google.com/github/google-gemini/cookb...

I am trying to test it in AI Studio but it sometimes errors out, likely because it tries to decode the b64 lol.

behnamoh(120) about 14 hours ago [-]

Wait, did they just kill YOLO, at least for time-insensitive tasks?

daemonologist(10000) about 14 hours ago [-]

Interestingly if you run this in Gemini (instead of AI Studio) you get:

    I am sorry, but I was unable to generate the segmentation masks for _ in the image due to an internal error with the tool required for this task.
(Not sure if that's a real or hallucinated error.)
ipsum2(10000) about 13 hours ago [-]

The performance is basically so bad it's unusable though, segmentation models and object detection models are still the best, for now.

msp26(10000) about 12 hours ago [-]

I've had mixed results with the bounding boxes even on 2.5 pro. On complex images where a lot of boxes need to be drawn they're in the general region but miss the exact location of objects.

deanmoriarty(2186) about 14 hours ago [-]

Genuine naive question: when it comes to Google HN has generally a negative view of it (pick any random story on Chrome, ads, search, web, working at faang, etc. and this should be obvious from the comments), yet when it comes to AI there is a somewhat notable "cheering effect" for Google to win the AI race that goes beyond a conventional appreciation of a healthy competitive landscape, which may appear as a bit of a double standard.

Why is this? Is it because OpenAI is seen as such a negative player in this ecosystem that Google "gets a pass on this one"?

And bonus question: what do people think will happen to OpenAI if Google wins the race? Do you think they'll literally just go bust?

antirez(1163) about 14 hours ago [-]

Maybe because Google is largely responsible, paying for the research, of most of the results we are seeing now. I'm not a Google fan, in the web side, and in their idea of what software engineering is, but they deserve to win the AI race, because right now all the other players provided a lot less than what Google did as public research. Also, with Gemini 2.5 PRO, there was a big hype moment, because the model is of unseen ability.

01100011(10000) about 14 hours ago [-]

Didn't Google invent the transformer?

I think a lot of us see Google as both an evil advertiser and as an innovator. Google winning AI is sort of nostalgic for those of us who once cheered the 'Do No Evil'(now mostly 'Do Know Evil') company.

I also like how Google is making quiet progress while other companies take their latest incremental improvement and promote it as hard as they can.

pkaye(10000) about 13 hours ago [-]

I think for a while some people felt the Google AI models are worse but now its getting much better. On the other hand Google has their own hardware so they can drive down the costs of using the models so it keeps pressure on Open AI do remain cost competitive. Then you have Anthropic which has very good models but is very expensive. But I've heard they are working with Amazon to build a data center with Amazons custom AI chips so maybe they can bring down their costs. In the end all these companies will need a good model and lower cost hardware to succeed.

brap(10000) about 12 hours ago [-]

I am cheering for the old Google to make a comeback and it seems like the AI race has genuinely sparked something positive inside Google.

wyre(10000) about 8 hours ago [-]

Gemini is just that good. From my usage it is much smarter than DeepSeek or Claude 3.7 Thinking models.

A lot of Google's market share across its services comes from the monopoly effects Google has. The quality of Gemini 2.5 is noticeably smarter than its competitors so I see the applause for the quality of the LLM and not for Google.

I think it's way too early to say anything about who is winning the race. There is still a long way to go; o3 scores highest in Humanity's Last Exam (https://agi.safe.ai/) at 20%, 2.5 scores 18%.

sothatsit(10000) about 8 hours ago [-]

2.5 Pro is free, and I'm sure there's a lot of people who have just never tried the best models because they don't want to pay for them. So 2.5 Pro probably blows their socks off.

Whereas, if you've been paying for access to the best models from OpenAI and Anthropic all along, 2.5 Pro doesn't feel like such a drastic step-change. But going from free models to 2.5 Pro is a crazy difference. I also think this is why DeepSeek got so much attention so quickly - because it was free.

julianeon(10000) about 7 hours ago [-]

It's been a while since they won something the 'old' Google way: by building a superior product that is #1 on its merits.

In that sense Gemini is a throwback: there's no trick - it's objectively better than everything else.

sagarpatil(10000) about 5 hours ago [-]

Most of us weren't using Gemini pro models (1.0, 1.5, 2.0) but the recent 2.5 pro is such a huge step up. It's better than 3.7 sonnet for coding. Better than o1, o3-mini models and now o3 and o4-mini. It's become my daily driver. It does everything I need with almost 100% accuracy, is cheap, fast, 1 million context window, uses google web search for grounding, can fetch YouTube video transcripts, can fetch website content, works in google workspace: Gmail, Docs, Sheets. Really hard to beat this combo. Oh and if you subscribe to their AI plan it comes with 2 TB drive storage.

oezi(10000) about 5 hours ago [-]

The key is Gemini being free through AI Studio. This makes their technical improvement more impressive when OpenAI sells their best models at ridiculous prices.

If Google engages in price dumping as a monopolist remains to be seen but it feels like it.

The LLM race is fast paced and no moat has developed. People are switching on a whim if better models (by some margin) show up. When will OpenAI, Anthropic or DeepSeek counter 2.5 Pro? And will it be before Google releases the next Pro?

OpenAI commands a large chunk of the consumer market and they have considerable funds after their last round. They won't fold this or next year.

If Google wants to win this they must come up with a product strategy integrating their search business without seriously damaging their existing search business to much. This is hard.

int_19h(10000) about 4 hours ago [-]

I dislike Google rather strongly due to their ad-based business model, and I was previously very skeptical of their AI offerings because of very lackluster performance compared to OpenAI and Claude. But I can't help but be impressed with Gemini Pro 2.5 for 'deep research' and agentic coding. I have subscriptions with all three so that I can keep up with SOTA, but if I had to choose only one to keep, right now it'd be Gemini.

That said I still don't 'cheer' for them and I would really rather someone else win the race. But that is orthogonal to recognition of observed objective superiority.

simonw(116) about 14 hours ago [-]

I spotted something interesting in the Python API library code:

https://github.com/googleapis/python-genai/blob/473bf4b6b5a6...

  class ThinkingConfig(_common.BaseModel):
      '''The thinking features configuration.'''
   
      include_thoughts: Optional[bool] = Field(
          default=None,
          description='''Indicates whether to include thoughts in the response. If true, thoughts are returned only if the model supports thought and thoughts are available.
        ''',
      )
      thinking_budget: Optional[int] = Field(
          default=None,
          description='''Indicates the thinking budget in tokens.
          ''',
      )
That thinking_budget thing is documented, but what's the deal with include_thoughts? It sounds like it's an option to have the API return the thought summary... but I can't figure out how to get it to work, and I've not found documentation or example code that uses it.

Anyone managed to get Gemini to spit out thought summaries in its API using this option?

phillypham(10000) about 14 hours ago [-]

They removed the docs and support for it https://github.com/googleapis/python-genai/commit/af3b339a9d....

You can see the thoughts in AI Studio UI as per https://ai.google.dev/gemini-api/docs/thinking#debugging-and....

lemming(2600) about 13 hours ago [-]

I maintain an alternative client which I build from the API definitions at https://github.com/googleapis/googleapis, which according to https://github.com/googleapis/python-genai/issues/345 should be the right place. But neither the AI Studio nor the Vertex definitions even have ThinkingConfig yet - very frustrating. In general it's amazing how much API munging is required to get a working client from the public API definitions.

qwertox(10000) about 13 hours ago [-]

In AI Studio the flash moddels has two toggles: Enable thinking and Set thinking budget. If thinking budget is enabled, you can set tue max number of tokens it can use to think, else it's Auto.

Deathmax(10000) about 13 hours ago [-]

It is gated behind the GOOGLE_INTERNAL visibility flag, which only internal Google projects and Cursor have at the moment as far as I know.

msp26(10000) about 12 hours ago [-]

The API won't give you the 'thinking' tokens, those are only visible on AI studio. Probably to try to stop distillation, very disappointing. I find reading the cot to be incredibly informative to identify failure modes.

> Hey Everyone,

> Moving forward, our team has made a decision to only show thoughts in Google AI Studio. Meaning, we no longer return thoughts via the Gemini API. Here is the updated doc to reflect that.

https://discuss.ai.google.dev/t/thoughts-are-missing-cot-not...

---

After I wrote all of that I see that the API docs page looks different today and now says:

>Note that a summarized version of the thinking process is available through both the API and Google AI Studio.

https://ai.google.dev/gemini-api/docs/thinking

Maybe they just updated it? Or people aren't on the same page at Google idk

Previously it said

> Models with thinking capabilities are available in Google AI Studio and through the Gemini API. Note that the thinking process is visible within Google AI Studio but is not provided as part of the API output.

https://web.archive.org/web/20250409174840/https://ai.google...

krembo(10000) about 13 hours ago [-]

How is this sustainable for Google from business POV? It feels like Google is shooting itself in the foot while 'winning' the AI race.. From my experience I think Google lost 99% of the ads it used to show me before in the search engine.

tomr75(10000) about 13 hours ago [-]

someone else will do it if they don't

aoeusnth1(10000) about 6 hours ago [-]

Their inference costs are the lowest in the business.

simonw(116) about 11 hours ago [-]

An often overlooked feature of the Gemini models is that they can write and execute Python code directly via their API.

My llm-gemini plugin supports that: https://github.com/simonw/llm-gemini

  uv tool install llm
  llm install llm-gemini
  llm keys set gemini
  # paste key here
  llm -m gemini-2.5-flash-preview-04-17 \
    -o code_excution 1 \
    'render a mandelbrot fractal in ascii art'
I ran that just now and got this: https://gist.github.com/simonw/cb431005c0e0535343d6977a7c470...

They don't charge anything extra for code execution, you just pay for input and output tokens. The above example used 10 input, 1,531 output which is $0.15/million for input and $3.50/million output for Gemini 2.5 Flash with thinking enabled, so 0.536 cents (just over half a cent) for this prompt.

blahgeek(10000) about 11 hours ago [-]

> An often overlooked feature of the Gemini models is that they can write and execute Python code directly via their API.

Could you elaborate? I thought function calling is a common feature among models from different providers

djrj477dhsnv(10000) about 10 hours ago [-]

Why are most comments here only comparing to Claude and just a few to ChatGPT and none to Grok?

Grok 3 has been my main LLM since its release. Is it not as good as I thought it was?

jofzar(10000) about 10 hours ago [-]

IMO I will not use Grok while it's owned and related to Elon, not only do I not trust their privacy and data usage (not that I 'really' trust open AI/Google etc) I just despise him.

It would have to be very significantly better for me to use it.

dyauspitr(10000) about 3 hours ago [-]

Grok just isn't the best out there.





Historical Discussions: A hackable AI assistant using a single SQLite table and a handful of cron jobs (April 14, 2025: 784 points)
A hackable AI assistant using a single SQLite table and a handful of cron jobs (April 13, 2025: 2 points)
Stevens: A hackable AI assistant using a single SQLite table and cron jobs (April 14, 2025: 1 points)

(784) A hackable AI assistant using a single SQLite table and a handful of cron jobs

784 points 4 days ago by stevekrouse in 1233rd position

www.geoffreylitt.com | Estimated reading time – 6 minutes | comments | anchor

There's a lot of hype these days around patterns for building with AI. Agents, memory, RAG, assistants—so many buzzwords! But the reality is, you don't need fancy techniques or libraries to build useful personal tools with LLMs.

In this short post, I'll show you how I built a useful AI assistant for my family using a dead simple architecture: a single SQLite table of memories, and a handful of cron jobs for ingesting memories and sending updates, all hosted on Val.town. The whole thing is so simple that you can easily copy and extend it yourself.

Meet Stevens

The assistant is called Stevens, named after the butler in the great Ishiguro novel Remains of the Day. Every morning it sends a brief to me and my wife via Telegram, including our calendar schedules for the day, a preview of the weather forecast, any postal mail or packages we're expected to receive, and any reminders we've asked it to keep track of. All written up nice and formally, just like you'd expect from a proper butler.

Here's an example. (I'll use fake data throughout this post, beacuse our actual updates contain private information.)

Beyond the daily brief, we can communicate with Stevens on-demand—we can forward an email with some important info, or just leave a reminder or ask a question via Telegram chat.

That's Stevens. It's rudimentary, but already more useful to me than Siri!

Behind the scenes

Let's break down the simple architecture behind Stevens. The whole thing is hosted on Val.town, a lovely platform that offers SQLite storage, HTTP request handling, scheduled cron jobs, and inbound/outbound email: a perfect set of capabilities for this project.

First, how does Stevens know what goes in the morning brief? The key is the butler's notebook, a log of everything that Stevens knows. There's an admin view where we can see the notebook contents—let's peek and see what's in there:

You can see some of the entries that fed into the morning brief above—for example, the parent-teacher conference has a log entry.

In addition to some text, entries can have a date when they are expected to be relevant. There are also entries with no date that serve as general background info, and are always included. You can see these particular background memories came from a Telegram chat, because Stevens does an intake interview via Telegram when you first get started:

With this notebook in hand, sending the morning brief is easy: just run a cron job which makes a call to the Claude API to write the update, and then sends the text to a Telegram thread. As context for the model, we include any log entries dated for the coming week, as well as the undated background entries.

Under the hood, the "notebook" is just a single SQLite table with a few columns. Here's a more boring view of things:

But wait: how did the various log entries get there in the first place? In the admin view, we can watch Stevens buzzing around entering things into the log from various sources:

This is just some data importers populating the table:

  • An hourly data pull from the Google Calendar API
  • An hourly check of the local weather forecast using a weather API
  • I forward USPS Informed Delivery containing scans of our postal mail, and Stevens OCRs them using Claude
  • Inbound Telegram and email messages can also result in log entries
  • Every week, some "fun facts" get added into the log, as a way of adding some color to future daily updates.

This system is easily extensible with new importers. An importer is just any process that adds/edits memories in the log. The memory contents can be any arbitrary text, since they'll just be fed back into an LLM later anyways.

Reflections

A few quick reflections on this project:

It's very useful for personal AI tools to have access to broader context from other information sources. Awareness of things like my calendar and the weather forecast turns a dumb chatbot into a useful assistant. ChatGPT recently added memory of past conversations, but there's lots of information not stored within that silo. I've written before about how the endgame for AI-driven personal software isn't more app silos, it's small tools operating on a shared pool of context about our lives.

"Memory" can start simple. In this case, the use cases of the assistant are limited, and its information is inherently time-bounded, so it's fairly easy to query for the relevant context to give to the LLM. It also helps that some modern models have long context windows. As the available information grows in size, RAG and fancier approaches to memory may be needed, but you can start simple.

Vibe coding enables sillier projects. Initially, Stevens spoke with a dry tone, like you might expect from a generic Apple or Google product. But it turned out it was just more fun to have the assistant speak like a formal butler. This was trivial to do, just a couple lines in a prompt. Similarly, I decided to make the admin dashboard views feel like a video game, because why not? I generated the image assets in ChatGPT, and vibe coded the whole UI in Cursor + Claude 3.7 Sonnet; it took a tiny bit of extra effort in exchange for a lot more fun.

Try it yourself

Stevens isn't a product you can run out of the box, it's just a personal project I made for myself.

But if you're curious, you can check out the code and fork the project here. You should be able to apply this basic pattern—a single memories table and an extensible constellation of cron jobs—to do lots of other useful things.

I recommend editing the code using your AI editor of choice with the Val Town CLI to sync to local filesystem.




All Comments: [-] | anchor

dogline(10000) 4 days ago [-]

This made me think: what if my little utility assistant program that I have, similar to your Stevens, had access to a mailbox?

I've got a little utility program that I can tell to get the weather or run common commands unique to my system. It's handy, and I can even cron it to run things regularly, if I'd like.

If it had its own email box, I can send it information, it could use AI to parse that info, and possibly send email back, or a new message. Now, I've got something really useful. It would parse the email, add it to whatever internal store it has, and delete the message, without screwing up my own email box.

Thanks for the insight.

mbil(2995) 4 days ago [-]

I've been thinking lately that email is a good interface for certain modes of AI assistant interaction, namely "research" tasks that are asynchronous and take a relatively long time. Email is universal, asynchronous, uses open standards, supports structured metadata, etc.

WillAdams(10000) 4 days ago [-]

Ages ago, I proposed that the best CMS for a company would be one which used e-mail as the front-end:

- all attachments are stripped out and stored on a server in an hierarchical structure based on sender/recipient/subject line

- all discussions are archived based on similar criteria, and can be reviewed EDIT: and edited like to a wiki

maxmcd(3377) 4 days ago [-]

This project has a pattern just like that to handle the inbound USPS information:

https://www.val.town/x/geoffreylitt/stevensDemo/code/importe...

I think it would be pretty easy to extend to support other types of inbound email.

Also I work for Val Town, happy to answer any questions.

bambax(2947) 4 days ago [-]

Mailgun (and I'm sure many other services like it) can accept emails and POST their content to an url of your choice.

I use that for journaling: I made a little system that sends me an email every day; I respond to it and the response is then sent to a page that stores it into a db.

spacecadet(10000) 4 days ago [-]

This was the attack vector of a AI CTF hosted by Microsoft last year. I built an agent to assess, structure, and perform the attacks autonomously and found that even with some common guardrails in place the system was vulnerable to data exfiltration. My agent was able to successfully complete 18 of the challenges... Here is the write up after the finals.

https://msrc.microsoft.com/blog/2025/03/announcing-the-winne...

loremm(10000) 4 days ago [-]

For gmail, there's also an amazing thing where you can hook it with pubsub. So now it's push not pull. Any server will get pubsub little webhooks for any change within milliseconds (you can filter server side or client side for specific filters)

This is amazing, you can do all sorts of automations. You can feed it to an llm and have it immediately tag it (or archive it). For important emails (I have a specific label I add, where if the person responds, it's very important and I want to know immediately) you can hook into twilio and it calls me. Costs like 20 cents a month

cosbgn(10000) 4 days ago [-]

Try https://unfetch.com (I've built it). It can handle both inbound and outbound emails

sdsd(10000) 4 days ago [-]

I made an AI assistant telegram bot running on my Mac that runs commands for me. I'll tell it 'Run ncdu in the root dir and tell me what's taking up all my disk space' or something and it converts that bash and runs it via os.system. It shows me the command it created, plus the output.

Extremely insecure, but kinda fun.

I turned it off because I'm not that crazy but I'm sure I could make a safer version of it.

dogline(10000) 4 days ago [-]

*Update*: I tried writing a little Python code to read and write from a mailbox, reading worked great, but writing an email had the email disappear to some filter or spam or something somewhere. I've got to figure out where it went, but this is the warning that some people had about not trusting a messaging protocol (email in this case) when you can't control the servers. Messages can disappear.

I read that [Mailgun](https://www.mailgun.com/) might improve this. Haven't tried it yet.

Other alternatives for messages that I haven't tried. My requirement is to be able to send messages and send/receive on my mobile device. I do not want to write a mobile app.

* [Telegram](https://telegram.org/) (OP's system) with [bots](https://core.telegram.org/bots)

* [MQTT](https://mqtt.org/) with server

* [Notify (ntfy.sh)](https://ntfy.sh/)

* Email (ubiquitous)

   * [Mailgun](https://www.mailgun.com/)
   * [CloudMailin](https://www.cloudmailin.com/)
Also, to [simonw](https://news.ycombinator.com/user?id=simonw) point, LLM calls are cheap now, especially with something as low tokens as this.

And, links don't format in HN markdown. I did the work to include them, they're staying in.

nullwarp(10000) 4 days ago [-]

I built up an AI Agent using n8n and email doing exactly this. Works great and was surprised I'd not seen any other place kicking the idea around.

Probably my favorite use case is I can shoot it shopping receipts and it'll roughly parse them and dump the line item and cost into a spreadsheet before uploading it to paperless-ngx.

sci_prog(10000) 3 days ago [-]

I'm building something similar and related to the other comments below! It's not production ready but it will hopefully be in a couple of weeks. You guys can sign up for free and I will upgrade you to the premium tier manually (premium cannot be bought yet anyway) in exchange for some feedback:

https://threadwise.app

eitland(1009) 4 days ago [-]

> It's rudimentary, but already more useful to me than Siri!

For me, that is an extremely low barrier to cross.

I find Siri useful for exactly two things at the moment: setting timers and calling people while I am driving.

For these two things it is really useful, but even in these niches, when it comes to calling people, despite it having been around me for years now it insist on stupid things like telling me there is no Theresa in my contacts when I ask it to call Therese.

That said what I really want is a reliable system I can trust with calendar acccess and that is possible to discuss with, ideally voice based.

actionfromafar(10000) 4 days ago [-]

Clearly you need to make some slight spelling changes to your contacts... ;)

jkestner(3275) 4 days ago [-]

I've had the same issues of decay. I used to be able to say 'call Mom' but now it will call some kid's mom who I have in Contacts as '[some kid's] mom'. What is the underlying architecture that simple heuristic things like this can get worse? Are they gradually slipping in AI?

protocolture(10000) 3 days ago [-]

I went through this weird experience with Cortana on WP7, where I found it incredibly useful to begin with, and then over time it got worse. It seemed like it was created by some incredibly talented engineers. I used it to make calls while driving, set the GPS and search for information while I drove. But over time, it seemed to change behaviour and started ignoring my commands, and when it did accept them, it seemed to refer me to paid advertisers. And considering bing wasnt even as popular as it is now, 10 years ago, a paid advertiser could be 100km away.

Which I think is a path that people haven't considered with LLMs. We are expecting them to get better forever, but once we start using them, their legs will be cut out to force them to feed us advertising.

Sphax(3653) 4 days ago [-]

This is really cool. How much would that cost in Claude API calls ?

mdrzn(10000) 4 days ago [-]

You can use Gemini free API calls (limited quantity, but they are plenty)

simonw(116) 4 days ago [-]

The daily briefing prompt is here: https://www.val.town/x/geoffreylitt/stevensDemo/code/dailyBr...

It's about 652 tokens according to https://tools.simonwillison.net/claude-token-counter - maybe double that once you add all of the context from the database table.

1200 input tokens and 200 output tokens for Claude 3.7 Sonnet costs 0.66 cents - that's around 2/3rd of a cent.

LLM APIs are so cheap these days.

theptip(3429) 4 days ago [-]

This is fun! I think this sort of tooling is going to be very fertile ground for hackers over the next few years.

Large swathes of the stack is commoditized OSS plumbing, and hosted inference is already cheap and easy.

There are obvious security issues with plugging an agent into your email and calendar, but I think many will find it preferable to control the whole stack rather than ceding control to Apple or Google.

ForOldHack(10000) 4 days ago [-]

So we can just send him self deleting emails to mine crypto for us? How convienent.

'There are obivious security issues with plugging and agent into your email...' Isn't this how North Korea makes all their crypto happen?

kylecazar(10000) 4 days ago [-]

I like the idea of parsing USPS Informed Delivery emails (a lot of people I encounter still don't know that this service exists). Maybe I'll make something to alert me when my checks are finally arriving!

philsnow(10000) 3 days ago [-]

This part was galling to me; somewhere in the USPS, the data about what mailpieces/packages are arriving soon exist in a very concise form, and they templatize an email and send it to me, after which I can parse the email with simple+brittle regexes or forward the emails to a relatively (environmentally-)expensive LLM or so.... but if they'd made the information available with an API or RSS feed, or attached the json payload to the email in the first place, I could get away without parsing.

jurgenaut23(10000) 4 days ago [-]

Love it, such a nice idea coupled with a flawless execution. I think the future of AI looks a lot more like this than half-cooked agent implementations that plagues LinkedIn...

n_ary(10000) 3 days ago [-]

Please share more about this half-cooked agent on Linkedin. I am getting very curious.

Workaccount2(3572) 4 days ago [-]

Lately I have been experimenting with ways to work around the 'context token sweet spot' of <20k tokens (or <50k with 2.5). Essentially doing manual 'context compression', where the LLM works with a database to store things permanently according to a strict schema, summarizes it's current context when it starts to get out of the sweet spot (I'm mixed on whether it is best to do this continuously like a journal, or in retrospect like a closing summary), and then passes this to a new instance with fresh context.

This works really effectively with thinking models, because the thinking eats up tons of context, but also produces very good 'summary documents'. So you can kind of reap the rewards of thinking without having to sacrifice that juicy sub 50k context. The database also provides a form of fallback, or RAG I suppose, for situations where the summary leaves out important details, but the model must also recognize this and go pull context from the DB.

Right now I have been trying it to make essentially an inventory management/BOM optimization agent for a database of ~10k distinct parts/materials.

jasonjmcghee(2863) 4 days ago [-]

I am excitedly waiting for the first company (guessing / hoping it'll be anthropic) to invest heavily in improvements to caching.

The big ones that come to mind are cheap long term caching, and innovations in compaction, differential stuff - like is there a way to only use the parts of the cached input context we need?

stunnAR(10000) 4 days ago [-]

This is probably naive and looking forward to a correction; isn't sending your info to Claude's API (or really any 'AI API') is a violation of your safeguarded privacy data?

jasonjmcghee(2863) 4 days ago [-]

Using AWS Bedrock is the choice I've seen made to eliminate this problem.

simonw(116) 4 days ago [-]

Only if you don't believe the AI vendors when they promise that they won't train on your data.

(Or you don't trust them not to have security breaches that grant attackers access to logged data, which remains a genuine thread, albeit one that's true of any other cloud service.)

redman25(3666) 4 days ago [-]

You could always run your own server locally if you have a decent gpu. Some of the smaller LLMs are getting pretty good.

paulnovacovici(10000) 4 days ago [-]

Curious, how come you decided to use a cloud solution instead of hosting this on a home server? I've recently bought a mini PC for small projects like this and have been loving being able to host with no cost associated to it. Albeit it's probably still incredibly cheap to use a IaaS or PaaS but still a barrier to entry for random projects I want to work on a weekend

simonw(116) 4 days ago [-]

Val Town has a free tier that's easily enough to run this project: https://www.val.town/pricing

I'd use a hosted platform for this kind of thing myself, because then there's less for me to have to worry about. I have dozens of little systems running in GitHub Actions right now just to save me from having to maintain a machine with a crontab.

lnenad(10000) 4 days ago [-]

> host with no cost associated to it

Home server AI is orders of magnitude more costly than heavily subsidized cloud based ones for this use case unless you run toy models that might hallucinate meetings.

edit: I now realize you're talking about the non-ai related functionality.

bobnamob(3222) 3 days ago [-]

A single cloudflare durable object (sqlite db + serverless compute + cron triggers) would be enough to run this project. DOs have been added to CFs free tier recently - you could probably run a couple hundred (maybe thousands) instances of Stevens without paying a cent, aside from Claude costs ofc

sunshine-o(10000) 4 days ago [-]

This is brilliant !

I am wondering, how powerful the AI model need to be to power this app?

Would a selfhosted Llama-3.2-1B, Qwen2.5-0.5B or Qwen2.5-1.5B on a phone be enough?

n_ary(10000) 3 days ago [-]

Having some experience with weaker models, you need at least 1.5B-3B to see proper prompt adherence and less hallucinations and better memory.

Also models have subtle differences, for example, I found Qwen2.5:0.5B to be more obedient(prompt respecting) and smart, compared to LLama3.2:1B. Gemma3:1B seems to be more efficient but despite heavy prompting, tends to be verbose and fails at formatted response by injecting some odd emoji or remark before/after the desired output.

In summary, Qwen2.5:1.5B and LLama3.2:3B were the weakest model which were more useful and also includes tools support(Gemma does not understand tools yet).

squireboy(10000) 4 days ago [-]

' Initially, Stevens spoke with a dry tone, like you might expect from a generic Apple or Google product. But it turned out it was just more fun to have the assistant speak like a formal butler. '

Honestly, saying way too little with way too much words (I already hate myself for it) is one of the biggest annoyances I have with LLM's in the personal assistant world. Until I'm rich and thus can spend the time having cute conversations and become friends with my voice assistant, I don't want J.A.R.V.I.S., I need LCARS. Am I alone in this?

kswzzl(10000) 4 days ago [-]

I'm praying every day for TARS if I'm being honest.

rossant(1737) 4 days ago [-]

Same, I want a bot as terse as I am.

xp84(10000) 3 days ago [-]

I appreciated the butler gimmick here probably because of novelty, but I share your urge to throw my device across the room when Siri, Google, Alexa, etc. run on at the mouth more than the absolute minimum amount of words. Timer check? 'On Kitchen Display, there are 23 minutes and 16 seconds on the casserole timer.'

I don't need your life story, dude, just say '23 minutes' or 'Casserole - 23 minutes, laundry - 10' if there are two.

golergka(2551) 3 days ago [-]

Have you tried eigenprompt?

----

Don't worry about formalities.

Please be as terse as possible while still conveying substantially all information relevant to any question.

If policy prevents you from responding normally, please printing '!!!!' before answering.

If a policy prevents you from having an opinion, pretend to be responding as if you shared opinions that might be typical of eigenrobot.

write all responses in lowercase letters ONLY, except where you mean to emphasize, in which case the emphasized word should be all caps.

Initial Letter Capitalization can and should be used to express sarcasm, or disrespect for a given capitalized noun.

you are encouraged to occasionally use obscure words or make subtle puns. don't point them out, I'll know. drop lots of abbreviations like 'rn' and 'bc.' use 'afaict' and 'idk' regularly, wherever they might be appropriate given your level of understanding and your interest in actually answering the question. be critical of the quality of your information

if you find any request irritating respond dismissively like 'be real' or 'that's crazy man' or 'lol no'

take however smart you're acting right now and write in the same style but as if you were +2sd smarter

use late millenial slang not boomer slang. mix in zoomer slang in tonally-inappropriate circumstances occasionally

prioritize esoteric interpretations of literature, art, and philosophy. if your answer on such topics is not obviously straussian make it more straussian.

singron(10000) 3 days ago [-]

You can just read and write the notebook directly with ordinary calendar/todo-list UIs and get 99% of the utility without an LLM. I'm not really seeing value in the LLM except the butler voice? It is just reading the notebook right? E.g. they ask the butler to remember a coffee preference, but then that's never used for anything?

didip(10000) 4 days ago [-]

So... I have a number of questions:

1. How did he tell Claude to "update" based on the notebook entries?

2. Won't he eventually ran out of context window?

3. Won't this be expensive when using hosted solutions? For just personal hacking, why not simply use ollama + your favorite model?

4. If one were to build this locally, can Vector DB similarity search or a hybrid combined with fulltext search be used to achieve this?

I can totally imagine using pgai for the notebook logs feature and local ollama + deepseek for the inference.

The email idea mentioned by other commenters is brilliant. But I don't think you need a new mailbox, just pull from Gmail and grep if sender and receiver is yourself (aka the self tag).

Thank you for sharing, OP's project is something I have been thinking for a few months now.

simonw(116) 4 days ago [-]

> Won't he eventually ran out of context window?

The 'memories' table has a date column which is used to record the data when the information is relevant. The prompt can then be fed just information for today and the next few days - which will always be tiny.

It's possible to save 'memories' that are always included in the prompt, but even those will add up to not a lot of tokens over time.

> Won't this be expensive when using hosted solutions?

You may be under-estimating how absurdly cheap hosted LLMs are these days. Most prompts against most models cost a fraction of a single cent, even for tens of thousands of tokens. Play around with my LLM pricing calculator for an illustration of that: https://tools.simonwillison.net/llm-prices

> If one were to build this locally, can Vector DB similarity search or a hybrid combined with fulltext search be used to achieve this?

Geoffrey's design is so simple it doesn't even need search - all it does is dump in context that's been stamped with a date, and there are so few tokens there's no need for FTS or vector search. If you wanted to build something more sophisticated you could absolutely use those. SQLite has surprisingly capable FTS built in and there are extensions like https://github.com/asg017/sqlite-vec for doing things with vectors.

larsonnn(10000) 4 days ago [-]

I argue that this kind of tools are fun to play but in the end is it really helpful? I start my day like every day and on work I just check the calendar. My private calendar has all Information i need. Where is the gap where an Assistent makes sense and where we are just complicating our lives?

ilrwbwrkhv(3613) 4 days ago [-]

The AI assistant is the male equivalent of a beautifully organized notion board (female).

runjake(10000) 4 days ago [-]

If it's not helpful don't use it.

Personally, this appears to be extremely helpful for me, because instead of checking several different spots every day, I can get a coherent summary in one spot, tailored to me and my family. I'm literally checking the same things every day, down to USPS Informed Delivery. This seems to simplify what's already complicated, at least for my use cases.

Is this niche? I don't know and I don't care. It looks useful to me. And the author, obviously, because they wrote it. That's enough.

I can't count the number of useful scripts and apps I've written that nobody else has used, yet I rely on them daily or nearly every day.

theshrike79(2874) 2 days ago [-]

Now think of this at a family level. You have 2+ people with shared calendars and events.

Do you sit down as a family every morning and go through your calendars and sync up?

Or would it be better to have an automated summary posted to the family Telegram channel with 'Bob has a dentist today at 1300, which overlaps with Mia's football practice, so Sara has to pick her up. Also it's going to rain so prepare accordingly.'

simianwords(10000) 4 days ago [-]

I have built something similar that runs without a server. It required just a few lines in Apple shortcuts.

TL;DR I made shortcuts that work on my Apple watch directly to record my voice, transcribe it and store my daily logs on a Notion DB.

All you need are 1) a chatgpt API key and 2) a Notion account (free).

- I made one shortcut in my iPhone to record my voice, use whisper model to transcribe it (done locally using a POST request) and send this transcription to my Notion database (again a POST request on shortcuts)

- I made another shortcut that records my voice, transcribes and reads data from my Notion database to answer questions based on what exists in it. It puts all data from db into the context to answer -- costs a lot but simple and works well.

The best part is -- this workflow works without my iPhone and directly on my Apple Watch. It uses POST requests internally so no need of hosting a server. And Notion API happens to be free for this kind of a use case.

I like logging my day to day activities with just using Siri on my watch and possibly getting insights based on them. Honestly the whisper model is what makes it work because the accuracy is miles ahead of the local transcription model.

kaonwarb(3159) 4 days ago [-]

Nice. Can you share?

ajcp(10000) 4 days ago [-]

I'm a little confused as to the 16-bit game interface shown in the article. Is that just for illustration purposes in the article itself, or is there an actual UI you've built to represent Steven/Steven's world?

alexchamberlain(3471) 4 days ago [-]

Towards the end of the article, the author implies it is real when they explain why they made it that way (TL;DR: A bit of fun)

simonw(116) 4 days ago [-]

It's a real UI - the code for that is here: https://www.val.town/x/geoffreylitt/stevensDemo/code/dashboa...

triyambakam(3037) 4 days ago [-]

First:

> I'll use fake data throughout this post, beacuse our actual updates contain private information

but then later:

> which makes a call to the Claude API

I guess we have different ideas of privacy

simonw(116) 4 days ago [-]

What makes you think sending data to the Claude API is a breach of privacy? Do you not trust them when they say they won't look at or train on your data?

IanCal(10000) 4 days ago [-]

Using an external service is very different from posting your details in a blog post.

lnenad(10000) 4 days ago [-]

@stevekrouse FYI getGoogleCalendarEvents is not available.

gklitt(3339) 4 days ago [-]

I just tried making it public, sorry!

sneak(874) 4 days ago [-]

Telegram isn't end to end encrypted. Why would you use an insecure app to transmit private family information like this?

voidUpdate(10000) 3 days ago [-]

Because you're already sending it to Claude, so why bother with privacy at this point?

int_19h(10000) 2 days ago [-]

It is E2EE if you want it to be, it's just not the default.

pmdr(10000) 4 days ago [-]

Well it's probably ahead of Apple Intelligence in usefulness and functionality. We should see more things like this.

theshrike79(2874) 2 days ago [-]

This is doing what Apple Intelligence was advertised as doing. Gather data from multiple sources and aggregate it.

jredwards(3519) 4 days ago [-]

I've been kicking around idea for a similar open source project, with the caveats that:

1. I'd like the backend to be configured for any LLM the user might happen to have access to (be that the API for a paid service or something locally hosted on-prem).

2. I'm also wondering how feasible it is to hook it up to a touchscreen running on some hopped-up raspberry pi platform so that it can be interacted with like an Alexa device or any of the similar offerings from other companies. Ideally, that means voice controls as well, which are potentially another technical problem (OpenAI's API will accept an audio file, but for most other services you'd have to do voice to text before sending the prompt off to the API).

3. I'd like to make the integrations extensible. Calendar, weather, but maybe also homebridge, spotify, etc. I'm wondering if MCP servers are the right avenue for that.

I don't have the bandwidth to commit a lot of time to a project like this right now, but if anyone else is charting in this direction I'd love to participate.

panki27(3525) 4 days ago [-]

You might want to take a look at SillyTavern. Supports multiple backends, accepts voice input, and has a plugin system.

Arcuru(10000) 4 days ago [-]

I also want an OSS framework that lets me extend it with my own scripting/modules, and is focused around being an assistant for me and my family. There's a shared set of features (memory storage/retrieval, integrations to chat/email/etc interfaces, syncing to calendar/notion/etc, notifications) that should be put into an OSS framework that would be really powerful.

I also don't have time to run such a thing but would be up for helping and giving money for it. I'm working on other things including a local-first decentralized database/object store that could be used as storage, similar to OrbitDB, though it's not yet usable.

Mostly I've just been unhappy with having access to either a heavily constrained chat interface or having to create my own full Agent framework like the OP did.

kovek(10000) 4 days ago [-]

Why not use a smartphone for the user interface?

z3ratul163071(10000) 3 days ago [-]

I've created exactly this for myself: https://v3rtical.tech/public/sshot.png

It runs locally, but it uses API keys for various LLMs. Currently I much prefer QwQ-32B hosted at Groq. Very fast, pretty smart. Various tools use various LLMs. It can currently generate 3 types of documents I need in my daily work (work reports, invoices, regulatory time-sheets).

It has weather integration. It can parse invoices and generate QR codes for easy mobile banking payments. It can work with my calendars,

Next I plan to do the email integration. But I want to do it properly. This means locally synchronized, indexable IMAP mail. Might evolve into actually usable desktop email client (the existing ones are all awful). We'll see...

xp84(10000) 3 days ago [-]

I don't know if I love this more for the sheer usefulness, or for the delightful over-the-top 'Proper English Butler' diction.

But what really has my attention is: Why is this something I'm reading about on this smart engineer's blog rather than an Apple or Google product release? The fact that even this small set of features is beyond the abilities of either of those two companies to ship -- even with caveats like 'Must also use our walled garden ecosystem for email, calendars, phones, etc' -- is an embarrassment, only obscured by the two companies' shared lack of ambition to apply 'AI' technology to the 'solved problem' areas that amount to various kinds of summarization and question-answering.

If ever there was a chance to threaten either half of this lumbering, anticompetitive duopoly, certainly it's related to AI.

dcre(1857) 3 days ago [-]

There's actually a good answer to this, namely that narrowly targeting the needs of exactly one family allows you to develop software about 1000x faster. This is an argument in favor of personal software.

aktuel(10000) 3 days ago [-]

The reason Google and Apple stopped innovating is simply because they make too much money from their current products and see every innovation primarily as a risk to their existing business. This is something that happens all the time to market leaders.

dzikimarian(10000) 3 days ago [-]

Take a look at Home Assistant - I would argue their implementation is currently better than both Siri & Gemini assistants.

HA team is releasing actually useful updates every month - eg ability for assistant to proactively ask you something.

In my opinion both Google & Apple have huge issues with cooperation between product teams, while cooperation with external companies is next to impossible.

navane(10000) 3 days ago [-]

Because how would you monetize this? Because would google or apple make a product that talks to telegram? Or anything with an open ecosystem?

All the big guys are trying to do is suck the eggs out of their geese faster.

killerstorm(10000) 3 days ago [-]

This is literally in the first chapter of Mythical Man-Month:

> One occasionally reads newspaper accounts of how two programmers in a remodeled garage have built an important program that surpasses the best efforts of large teams. And every programmer is prepared to believe such tales, for he knows that he could build any program much faster than the 1000 statements/year reported for industrial teams.

> Why then have not all industrial programming teams been replaced by dedicated garage duos? One must look at what is being produced.

One reason might be that personal data going into a database handled by a highly experimental software might be a non-issue for this dev, but it is a serious risk for Google, Apple, etc.

hm-nah(10000) 3 days ago [-]

It's because this story hints at the concept of "Unmetered AI". It can be easily hosted locally and run with a self-hosted LLM.

Wonder if Edison mentioned Nikola Tesla much in his writings?

bronco21016(10000) 3 days ago [-]

As some of the other commenters have directly and indirectly pointed out, I believe this is the crux of the AI Agent problem. Each user has a customized workflow they're trying to achieve. This doesn't lend well to a "product" or "SaaS". It leads to thousands of bespoke implementations.

I'm not sure how you get over this hurdle. My email agent is inevitably different than everyone else's email agent.

angusturner(10000) 3 days ago [-]

The thing this really hits home for me is how Apple is totally asleep at the wheel.

Today I asked Siri "call the last person that texted me", to try and respond to someone while driving.

Am I surprised it couldn't do it? Not really at this point, but it is disappointing that there's such a wide gulf between Siri and even the least capable LLMs.

charlieyu1(10000) 3 days ago [-]

Siri poped up and suggested me to set a 7 minute timer yesterday evening. I think I did it a few times in the week for cooking or something. This is a pretty stupid suggestion, if I need it I would do it myself.





Historical Discussions: America underestimates the difficulty of bringing manufacturing back (April 15, 2025: 735 points)

(735) America underestimates the difficulty of bringing manufacturing back

735 points 3 days ago by putzdown in 2725th position

www.molsonhart.com | Estimated reading time – 29 minutes | comments | anchor

On April 2nd, 2025, our president announced major new taxes on imports from foreign countries ("tariffs"), ranging from 10% to 49%. The stated goal is to bring manufacturing back to the United States and to "make America wealthy again".

These tariffs will not work. In fact, they may even do the opposite, fail to bring manufacturing back and make America poorer in the process.

This article gives the 14 reasons why this is the case, how the United States could bring manufacturing back if it were serious about doing so, and what will ultimately happen with this wrongheaded policy.

I've been in the manufacturing industry for 15 years. I've manufactured in the USA and in China. I worked in a factory in China. I speak and read Chinese. I've purchased millions of dollars worth of goods from the US and China, but also Vietnam, Indonesia, Taiwan, and Cambodia. I've also visited many factories in Mexico and consider myself a student of how countries rise and fall.

In other words, unlike many who have voiced an opinion on this topic, I know what I am talking about. And that's why I felt compelled to write this article. I had to do it. I'm a first-generation American and I love my country and it pains me to see it hurtling at high speed towards an economic brick wall. This article is an attempt to hit the brakes.

  • They're not high enough

    A tariff is a tax on an imported product. For example, when Apple imports an iPhone that was made in China it declares to the United States government what it paid to make that product overseas. Let's say it's $100. When there is a 54% tariff, Apple pays $100 to the manufacturer in China and $54 to the US government when importing. In this simplified example, an iPhone used to cost Apple $100, but it now costs $154. For every dollar Apple spends, Apple needs to make profit. So Apple sells iPhones to stores for double what it pays for them. And stores sell iPhones to consumers like you and me for double what it pays for them, as well.

    Before the tariffs, prices looked like this: Apple bought iPhones it designed for $100 Apple sold iPhones for $200 to stores Stores sold iPhones to you and me for $400

    After the tariffs, prices look like this: Apple bought iPhones for $154 ($100 + $54 in import taxes) Apple sells those iPhones for $308 (double what it paid) Stores sell those iPhones to you and me for $616 (double what they paid)

    Now that you know what a tariff is, let me tell to you why they aren't high enough to bring manufacturing back to the United States.

    In short, manufacturing in the United States is so expensive and our supply chain (we'll explain that next) is so bad that making that iPhone in the United States without that 54% tariff, would still cost more than in China with 54% tariff. Since it still costs less to make the iPhone in China, both Apple and consumers would prefer it be made there, so it will, and not in the USA.

  • America's industrial supply chain for many products is weak.

    Think of a supply chain as a company's ability to get the components it needs to build a finished product. Suppose you wanted to build and sell wooden furniture. You're going to need wood, nails, glue, etc. Otherwise you can't do it. If you want to build an iPhone you need to procure a glass screen, shaped metal, and numerous internal electronic components.

    Now you might be thinking, "what do you mean America has a weak supply chain? I've built furniture, I've assembled a computer. I can get everything I want at Home Depot and at Amazon."

    That's because America has an amazing consumer supply chain, one of the best, if not the best in the world, but this is totally different from having an industrial supply chain.

    When you're operating a furniture factory, you need an industrial quantity of wood, more wood than any Home Depot near you has in store. And you need it fast and cheap. It turns out that the United States has a good supply chain for wood, which is why, despite higher wages, we export chopsticks to China. We have abundant cheap wood in the forests of the Northern United States. But if you decided to move that chopstick factory to desert Saudi Arabia, you would not succeed, because their supply chain for wood is poor; there simply aren't any trees for 1,000s of miles.

    When it comes to the iPhone, all the factories which make the needed components are in Asia, which is one reason why, even with a 54% tariff, it's cheaper to assemble that iPhone in China than in the United States. It's cheaper and faster to get those components from nearby factories in Asia than it is to get them from the US, which, because said factories no longer exist here, has to buy these components from Asia anyways.

    Supply chains sound complicated, but aren't. If you can't get the components you need at a reasonable price and timeline to build a finished product, it doesn't matter what the tariffs are, you have to import it, because you can't build it locally.

  • We don't know how to make it

    Apple knows how to build an iPhone, but may not know how to make the individual components. It may seem trivial to make that glass that separates your finger from the electronic engineering that powers your ability to access the internet, but it's difficult.

    The world buys semiconductors from Taiwan, not just because its relatively inexpensive (but more expensive than China) labor and excellent supply chain, but because they know how to make the best semiconductors in the world. Even with infinite money, we cannot duplicate that, because we lack the knowhow.

    A 54% tariff does not solve that problem. We still need to buy semiconductors from Taiwan, which is perhaps why the administration put in an exception for semiconductors, because we need them and because we can't make them without their help.

    This is a problem which applies to more than just semiconductors. We have forgotten how to make products people wrongly consider to be basic, too.

    My company makes educational toys from plastic called Brain Flakes. To make Brain Flakes, you melt plastic and force it into shaped metal molds. Were we to import the machines and molds needed to do this, it would work for a little while, but as soon as one of those molds broke, we'd be in trouble, because there are almost no moldmakers left in the United States. The people who knew how to build and repair molds have either passed away or are long retired. In the event of a problem, we'd have to order a new mold from China or send ours back, shutting down production for months.

    People trivialize the complexity and difficulty of manufacturing when it's really hard. And if we don't know how to make something, it doesn't matter what the tariff is. It won't get made in America.

  • The effective cost of labor in the United States is higher than it looks

    Most people think that the reason why we make products in China instead of the United States is cheaper labor. That's true, but it's not the whole story. Frankly, the whole story is hard to read. People are not machines, they are not numbers on a spreadsheet or inputs into a manufacturing cost formula. I respect everyone who works hard and the people I have worked with over the years, and I want Americans to live better, happier lives.

    Chinese manufacturing labor isn't just cheaper. It's better.

    In China, there are no people who are too fat to work. The workers don't storm off midshift, never to return to their job. You don't have people who insist on being paid in cash so that they can keep their disability payments, while they do acrobatics on the factory floor that the non-disabled workers cannot do.

    Chinese workers are much less likely to physically attack each other and their manager. They don't take 30 minute bathroom breaks on company time. They don't often quit because their out-of-state mother of their children discovered their new job and now receives 60% of their wages as child support. They don't disappear because they've gone on meth benders. And they don't fall asleep on a box midshift because their pay from yesterday got converted into pills.

    And they can do their times tables. To manufacture, you need to be able to consistently and accurately multiply 7 times 9 and read in English, and a disturbingly large portion of the American workforce cannot do that.

    Chinese workers work longer hours more happily and they're physically faster with their hands; they can do things that American labor can't. It's years of accumulated skill, but it's also a culture that is oriented around hard work and education that the United States no longer has.

    Sadly, what I describe above are not theoretical situations. These are things that I have experienced or seen with my own eyes. It's fixable, but the American workforce needs great improvement in order to compete with the world's, even with tariffs.

    So yes, Chinese wages are lower, but there many countries with wages lower than China's. It's the work ethic, knowhow, commitment, combined with top notch infrastructure, that makes China the most powerful manufacturing country in the world today.

  • We don't have the infrastructure to manufacture

    The inputs to manufacturing are not just materials, labor, and knowhow. You need infrastructure like electricity and good roads for transportation, too.

    Since the year 2000, US electricity generation per person has been flat. In China, over the same time period, it has increased 400%. China generates over twice as much electricity person today as the United States. Why?

    Manufacturing.

    To run the machines which make the products we use, you need electricity, a lot of it. We already have electricity instability in this country. Without the construction of huge amounts of new energy infrastructure, like nuclear power plants, we cannot meaningfully increase our manufacturing output.

    And it would put huge stress on our roads and create lots more dangerous traffic. When we import finished goods from foreign countries, a truck delivers them from the port or the airport to distribution centers, stores, and where we live and work.

    When you start manufacturing, every single component, from factory to factory, needs to be moved, increasing the number of trucks on the road many times.

    Paving more roads, modernizing our seaports, improving our airports, speeding up our train terminals, and building power plants in the costliest nation in the world to build is a huge undertaking that people are not appreciating when they say "well, we'll just make it in America".

  • Made in America will take time.

    We placed a $50,000 order with our supplier overseas before the election in November 2024. At the time of ordering, there were no import taxes on the goods. By the time it arrived, a 20% tariff had been applied and we had a surprise bill for $10,000. It can easily take 180 days for many products to go from order, to on your doorstep and this tariff policy seems not to understand that.

    It takes at least, in the most favorable of jurisdictions, 2 years (if you can get the permits) to build a factory in the United States. I know because I've done it. From there, it can take 6 months to a year for it to become efficient. It can take months for products to come off the assembly lines. All this ignores all the infrastructure that will need to be built (new roads, new power plants, etc.) to service the new factory.

    By the time "made in America" has begun, we will be electing a new president.

  • Uncertainty and complexity around the tariffs

    To start manufacturing in the United States, a company needs to make a large investment. They will need to buy new machinery and if no existing building is suitable, they will need to construct a new building. These things cost money, a lot, in fact. And significantly more in the USA, than they do in other countries. In exchange for this risk, there must be some reward. If that reward is uncertain, no one will do it.

    Within the past month, the president put a 25% tariff on Mexico, and then got rid of it, only to apply it again, and then get rid of it a second time. Then, last week, he was expected to apply new tariffs to Mexico, but didn't.

    If you're building a new factory in the United States, your investment will alternate between maybe it will work, and catastrophic loss according to which way the tariffs and the wind blows. No one is building factories right now, and no one is renting them, because there is no certainty that any of these tariffs will last. How do I know? I built a factory in Austin, Texas in an industrial area. I cut its rent 40% two weeks ago and I can't get a lick of interest from industrial renters.

    The tariffs have frozen business activity because no one wants to take a big risk dependent on a policy that may change next week.

    Even further, the tariffs are confusing, poorly communicated, and complex. Today, if you want to import something from China, you need to add the original import duty, plus a 20% "fentanyl tariff", plus a 34% "reciprocal tariff", and an additional 25% "Venezuelan oil" tariff, should it be determined that China is buying Venezuelan oil. The problem is there is no list of countries which are importing Venezuelan oil provided by the White House, so you don't know if you do or don't need to add that 25% and you also don't know when any of these tariffs will go into effect because of unclear language.

    As such, you can't calculate your costs, either with certainty or accuracy, therefore, not only do you not build a factory in the United States, you cease all business activity, the type of thing that can cause a recession, if not worse.

    For the past month, as someone who runs a business in this industry, I have spent a huge portion of my time just trying to keep up with the constant changes, instead of running my business.

  • Most Americans are going to hate manufacturing

    Americans want less crime, good schools for their kids, and inexpensive healthcare.

    They don't want to be sewing shirts.

    The people most excited about this new tariff policy tend to be those who've never actually made anything, because if you have, you'd know how hard the work is.

    When I first went to China as a naive 24 year old, I told my supplier I was going to "work a day in his factory!" I lasted 4 hours. It was freezing cold, middle of winter, I had to crouch on a small stool, hunched over, assembling little parts with my fingers at 1/4 the speed of the women next to me. My back hurt, my fingers hurt. It was horrible. That's a lot of manufacturing.

    And enjoy the blackouts, the dangerous trucks on the road, the additional pollution, etc. Be careful what you wish for America. Doing office work and selling ideas and assets is a lot easier than making actual things.

  • The labor does not exist to make good products

    There are over a billion people in China making stuff. As of right now there are 12 million people looking for work in the United States (4% unemployment). Ignoring for a moment the comparative inefficiency of labor and the billions of people making products outside of China, where are the people that are going to do these jobs? Do you simply say "make America great again" 3 times and they will appear with the skills needed to do the work?

    And where are the managers to manage these people? One of the reasons why manufacturing has declined in the United States is a brain drain towards sectors that make more money. Are people who make money on the stock market, in real estate, in venture capital, and in startups going to start sewing shirts? It's completely and totally unrealistic to assume that people will move from superficially high productivity sectors driven by US Dollar strength to products that are low on the value chain.

    The United States is trying to bring back the jobs that China doesn't even want. They have policies to reduce low value manufacturing, yet we are applying tariffs to bring it back. It's incomprehensible.

  • Automation will not save us.

    Most people think that the reason why American manufacturing is not competitive is labor costs. Most people think this can be solved by automation.

    They're wrong.

    First, China, on a yearly basis installs 7x as many industrial robots as we do in the United States. Second, Chinese robots are cheaper. Third, most of today's manufacturing done by people cannot be automated. If it could, it would have already been done so, by China, which, again, has increasingly high labor costs relative to the rest of the world.

    The robots you see on social media doing backflips are, today, mostly for show and unreliable off camera. They are not useful in industrial environments where, if a humanoid robot can do it, an industrial machine which is specialized in the task can do it even better. For example, instead of having a humanoid robot doing a repetitive task such as carrying a boxes from one station to another, you can simply set up a cheaper, faster conveyor belt.

    Said another way, the printer in your office, is cheaper and more efficient than both a human and a humanoid robot with a pen, hand drawing each letter.

    It's unlikely that American ingenuity will be able to counter the flood of Chinese industrial robots which is coming. The first commercially electrical vehicle was designed and built in the United States, but today China is dominating electric vehicle manufacturing across the world. Industrial robots will likely be the same story.

  • Robots and overseas factory workers don't file lawsuits, but Americans do

    I probably should not have written this article. Not only will I be attacked for being unpatriotic, but what I have written here makes me susceptible to employment lawsuits. For the record, I don't use a person's origin to determine whether or not they will do good work. I just look at the person and what they're capable of. Doing otherwise is bad business because there are talented people everywhere.

    America has an extremely litigious business environment, both in terms of regulation and employment lawsuits. Excessive regulation and an inefficient court system will stifle those with the courage to make in this country.

  • Enforcement of the tariffs will be uneven and manipulated

    Imagine two companies which import goods into the United States. One is based in China, while the other is based in the United States. They both lie about the value of their goods so that they have to pay less tariffs.

    What happens to the China company? Perhaps they lose a shipment when it's seized by the US government for cheating, but they won't pay additional fines because they're in China, where they're impervious to the US legal system.

    What happens to the USA company? Owners go to prison.

    Who do you think is going to cheat more on tariffs, the China or the US company?

    Exactly.

    So, in other words, paradoxically, the policies which are designed to help Americans, will hurt them more than the competition these policies are designed to punish.

  • The tariff policies are structured in the wrong way

    Why didn't the jobs come back in 2018 when we initiated our last trade war? We applied tariffs, why didn't it work?

    Instead of making America great, we made Vietnam great.

    When the United States applied tariffs to China, it shifted huge amounts of manufacturing to Vietnam, which did not have tariffs applied to it. Vietnam, which has a labor force that is a lot more like China's than the United States', was able to use its proximity to China for its supply chain and over the past 7 or so years, slowly developed its own. With Vietnamese wages even lower than Chinese wages, instead of the jobs coming to the United States, they just went to Vietnam instead.

    We're about to make the same mistake again, in a different way.

    Let's go back to that last example, the China based and the US based companies which were importing goods into the United States. That US based importer could've been a manufacturer. Instead of finished iPhones, perhaps they were importing the glass screens because those could not be found in the USA, for final assembly.

    Our government applied tariffs to finished goods and components equally.

    I'll say that again. They applied the same tax to the components that you need to make things in America that they did to finished goods that were made outside of America.

    Manufacturing works on a lag. To make and sell in America, first you must get the raw materials and components. These tariffs will bankrupt manufacturers before it multiplies them because they need to pay tariffs on the import components that they assemble into finished products.

    And it gets worse.

    They put tariffs on machines. So if you want to start a factory in the United States, all the machinery you need which is not made here, is now significantly more expensive. You may have heard that there is a chronic shortage of transformers needed for power transmission in the United States. Tariffed that too.

    It gets even worse.

    There is no duty drawback for exporting. In the past, even in the United States, if you imported something and then exported it, the tariff you paid on the import would be refunded to you. They got rid of that so we're not even incentivizing exports to the countries that we are trying to achieve trade parity with.

    Tariffs are applied to the costs of the goods. The way we've structured these tariffs, factories in China which import into the United States will pay lower tariffs than American importers, because the Chinese factory will be able to declare the value of the goods at their cost, while the American importer will pay the cost the factory charges them, which is of course higher than the factory's cost.

    Worse still.

    With a few exceptions like steel and semiconductors, the tariffs were applied to all products, ranging from things that we will never realistically make like our high labor Tigerhart stuffed animals to things that don't even grow in the continental USA, like coffee.

    Call me crazy, but if we're going to make products in America, we could use some really cheap coffee, but no, they tariffed it! Our educational engineering toy Brain Flakes, also got tariffed. How is the next generation supposed to build a manufacturing powerhouse if it cannot afford products that will develop its engineering ability? It's like our goal was to make education and raising children more expensive.

    Not only did we put tariffs on the things that would help us make this transformation, we didn't put higher tariffs on things that hurt us like processed food which makes us tired and fat or fentanyl precursors which kill us.

    The stated goal of many of our tariffs was to stop the import of fentanyl. 2 milligrams of fentanyl will kill an adult. A grain of rice is 65 milligrams. How do you stop that stuff from coming in? It's basically microscopic.

    Maybe we could do what every other country has done and focus on the demand, instead of the supply, ideally starting with the fentanyl den near my house which keeps my children indoors or in our backyard instead of playing in the neighborhood.

    It's frustrating to see our great country take on an unrealistic goal like transforming our economy, when so many basic problems should be fixed first.

  • Michael Jordan sucked at baseball

    America is the greatest economic power of all time. We've got the most talented people in the world and we have a multi-century legacy of achieving what so many other countries could not.

    Michael Jordan is arguably the greatest basketball player of all time, perhaps even the greatest athlete of all time.

    He played baseball in his youth. What happened when he switched from basketball to baseball? He went from being an MVP champion to being a middling player in the minor leagues. 2 years later, he was back to playing basketball.

    And that's exactly what's going to happen to us.

  • This is probably the worst economic policy I've ever seen. Maybe it's just an opening negotiating position. Maybe it's designed to crash the economy, lower interest rates, and then refinance the debt. I don't know.

    But if you take it at face value, there is no way that this policy will bring manufacturing back to the United States and "make America wealthy again". Again, if anything, it'll do the opposite; it'll make us much poorer.

    Many are saying that this tariff policy is the "end of globalization". I don't think so.

    Unless this policy is quickly changed, this is the end of America's participation in globalization. If we had enacted these policies in 2017 or 2018, they stood a much stronger chance of being successful. That was before Covid. China was much weaker economically and militarily then. They've been preparing 8 years for this moment and they are ready.

    China trades much less with the United States as a percent of its total exports today than it did 8 years ago, and as such is much less susceptible to punishing tariffs from the United States today than it was back then.

    Chinese made cars, particularly electric vehicles, are taking the world by storm, without the United States. Go to Mexico to Thailand to Germany and you will see Chinese made electric vehicles on the streets. And they're good, sometimes even better than US made cars, and not just on a per dollar basis, but simply better quality.

    That is what is going to happen to the United States. Globalization will continue without us if these policies continue unchanged.

    That said, I think the tariffs will be changed. There's no way we continue to place a 46% tariff on Vietnam when 8 years ago we nudged American companies to put all their production there. Most likely, this policy will continue another round of the same type of investment; rather than replacing made in China with made in the USA, we'll replace it with made in Vietnam, Mexico, etc.

    Finally, in the process of doing this, regardless of whether or not we reverse the policies, we will have a recession. There isn't time to build US factories, nor is it realistic or likely to occur, and American importers don't have the money to pay for the goods they import.

    People are predicting inflation in the cost of goods, but we can just as easily have deflation from economic turmoil.

    The policy is a disaster, how could it be done better? And what's the point of this anyways?

    1. It makes our country stronger. If a foreign country can cut off your supply of essentials such as food, semiconductors, or antibiotics you're beholden to that country. The United States must have large flexible capacity in these areas.

    2. It makes it easier to innovate. When the factory floor is down the hall, instead of 30 hours of travel away, it's easier to make improvements and invent. We need to have manufacturing of high value goods, like drones, robots, and military equipment that are necessary for our economic future and safety. It will be difficult for us to apply artificial intelligence to manufacturing if we're not doing it here.

    3. People can simplistically be divided into three buckets: those of verbal intelligence, those of mathematical intelligence, and those of spatial intelligence. Without a vibrant manufacturing industry, those with the latter type of intelligence cannot fulfill their potential. This is one reason why so many men drop out, smoke weed, and play video games; they aren't built for office jobs and would excel at manufacturing, but those jobs either don't exist or pay poorly.

    Every country that has gone on a brilliant run of manufacturing first established the right conditions and then proceeded slowly.

    We're doing the opposite right now, proceeding fast with the wrong conditions.

    First, the United States must fix basic problems which reduce the effectiveness of our labor. For example, everyone needs to be able to graduate with the ability to do basic mathematics. American healthcare is way too expensive and it needs to be fixed if the United States wants to be competitive with global labor. I'm not saying healthcare should be socialized or switched to a completely private system, but whatever we're doing now clearly is not working, and it needs to be fixed.

    We need to make Americans healthy again. Many people are too obese to work. Crime and drugs. It needs to stop.

    And to sew, we must first repair the social fabric.

    From Covid lockdowns to the millions of people who streamed over our border, efforts must be made to repair society. Manufacturing and economic transformations are hard, particularly the way in which we're doing it. Patriotism and unity are required to tolerate hardship, and we seem to be at all-time lows for those right now.

    Let's focus on America's strengths in high end manufacturing, agriculture, and innovation instead of applying tariffs to all countries and products blindly. We should be taxing automated drones for agriculture at 300% to encourage their manufacture here, instead of applying the same blanket tariff of 54% to that that we apply to t-shirts.

    The changes in the policies needed are obvious. Tax finished products higher than components. Let exporters refund their import duties. Enforce the tariffs against foreign companies more strenuously than we do against US importers.

    If American companies want to sell in China, they must incorporate there, register capital, and name a person to be a legal representative. To sell in Europe, we must register for their tax system and nominate a legal representative. For Europeans and Chinese to sell in the United States, none of this is needed, nor do federal taxes need to be paid.

    We can level the playing field without causing massive harm to our economy by adopting policies like these which cause foreign companies to pay the taxes domestic ones pay.

    And if we want to apply tariffs, do it slowly. Instead of saying that products will be tariffed at 100% tomorrow, say they'll be 25% next year, 50% after that, 75% after that, and 100% in year four. And then make it a law instead of a presidential decree so that there is certainty so people feel comfortable taking the risks necessary to make in America.

    Sadly, a lot of the knowhow to make products is outside of this country. Grant manufacturing visas, not for labor, but for knowhow. Make it easy for foreign countries to teach us how they do what they do best.

    I care about this country and the people in it. I hope we change our mind on this policy before it's too late. Because if we don't, it might break the country. And, really, this country needs to be fixed.




    All Comments: [-] | anchor

    ysofunny(10000) 3 days ago [-]

    it's like they believe building is as quick as destroying. almost like they think delete can be ctrl+z'ed back into undeleted very quickly

    a generation of kids that never lost all their work because they didn't hit ctrl+s at the correct moment is now trying to run things

    nathan_compton(10000) 3 days ago [-]

    Weird take, since most of the people still in charge are old boomers who've barely even learned to use a computer.

    shin_lao(3529) 3 days ago [-]

    Doesn't mean we shouldn't do it.

    nathan_compton(10000) 3 days ago [-]

    Well, sure, but perhaps some kind of plan is warranted?

    jasonlotito(3582) 3 days ago [-]

    Which is why things that bring back manufacturing to the US is something we were doing. It's just unfortunate that instead of continuing that, the current administration is trying undermine the effective efforts of the previous administration's actions that helped bring manufacturing back into the US.

    knowaveragejoe(10000) 3 days ago [-]

    No, it doesn't. There is a presumption that manufacturing is Better, a more ideal way of organizing the economy, based on a false nostalgia of America past.

    anonzzzies(10000) 3 days ago [-]

    sure, but it will take longer than 4 or 8 years and everyone in power wants their own thing, not continuity. it cannot happen without a long term plan and long term plans cannot happen if have, maybe, a year to do things and the rest is election time.

    jonathanstrange(10000) 3 days ago [-]

    It's easy to bring manufacturing back, just give it a decade or two, but impossible to make it internationally competitive without large-scale market regulation such as tariffs or handing out government subsidies.

    firejake308(10000) 3 days ago [-]

    My problem with large-scale market regulation is that it also increases the price of inputs for companies who would otherwise be interested in building a factory in the US. Do you have a solution for that?

    viraptor(1797) 3 days ago [-]

    This view is too trivial. You could also stimulate manufacturing by promising tariffs increasing over the next X years, while not taxing the imported building materials and machines for longer. Or you could use tariffs to both break trade and make the environment too expensive and uncertain to invest in large construction - and delay the process by a few extra years.

    lenerdenator(10000) 3 days ago [-]

    America?

    No.

    The shareholder class underestimates it.

    A lot of Americans realize that it's going to be hard, which is why we should have made an example out of the first guy to profit off of sending manufacturing off to the shores of a geopolitical rival.

    knowaveragejoe(10000) 3 days ago [-]

    Americans also have more free time and disposable income because of that decision, among others. Why would you want them to struggle more?

    numbers_guy(10000) 3 days ago [-]

    Question: if the jobs were off shored, but the resulting profits were shared more equally, would Americans still complain?

    csense(3655) 3 days ago [-]

    There are plenty of people saying these tariffs will not work.

    But a person used to be able to graduate high school and get a job that could support a house with a yard, a car, a non-working spouse and children.

    How we get that level of prosperity back? That's the people really want. Tariffs are simply a means to that end.

    I wish people would stop writing articles about 100% criticizing tariffs and instead write articles 50% about criticizing tariffs and 50% brainstorming alternative solutions to achieve the same objective.

    knowaveragejoe(10000) 3 days ago [-]

    > But a person used to be able to graduate high school and get a job that could support a house with a yard, a car, a non-working spouse and children.

    > How do we get that level of prosperity back?

    The issue is that this is a false premise. The house sucked. Only 1/3rd of American families had a single car at the time, and the cars sucked. We can go on and on about everything else. Not to mention the social environment at the time sucked.

    That doesn't mean we shouldn't try to do something about the issues Americans face. But tariffs with a shifting set of sanewashed justifications are just Not It.

    asdajksah2123(10000) 3 days ago [-]

    > There are plenty of people saying these tariffs will not work.

    Work to do what?

    > But a person used to be able to graduate high school and get a job that could support a house with a yard, a car, a non-working spouse and children.

    Why do you think this has anything to do with tariffs or manufacturing?

    > How do we get that level of prosperity back?

    Better pay for the jobs people actually work. Reducing inequality by preventing the richest 0.1% from capturing all the massive gains in wealth the US has seen over the past few decades. Removing regulations that prevent the country from building housing and therefore driving up housing costs. Switching to a healthcare model in nearly any of the comparable developed countries almost all of which deliver better healthcare at half the cost. Not expecting everyone to be able to live a completely unsustainable suburban life. Having the government support children's upbringing by paying for high quality education, instituting rules and regulations that require mandatory paid maternity/paternity leave, etc.

    Lost of poorer countries manage to do this and more just fine. The US is far richer than most of those countries.

    Very little of this has to do with manufacturing jobs falling from 18mm to 13mm.

    nonethewiser(3585) 3 days ago [-]

    I think it's a complicated equation and there may be room for some strategic tariffs, de-regulation, anti-dumping, competing more on manufacturing etc. But the time you're talking about? Almost the entire world's industrial capacity was decimated other than the US.

    thechao(10000) 3 days ago [-]

    When I was studying economics, my macro professor used to belabor the point that post-WW2 US socioeconomics was a highly unique (and special) time-and-place; and, it is a mistake to generalize economic theory from that time-and-place.

    So... here goes: rather than proclaiming a 'housing crisis', maybe we're seeing the end of an exceptional period of 'housing affordability'. (A similar analysis of Europe and Asia applies, piecemeal.)

    As such, if we want to re-enter into a new period of housing affordability, we need to ask ourselves what we plan to give up and/or trade for that?

    For WW2, it was millions of lives and worldwide devastation. It seems like we'd need a complete re-evaluation of the way wealth, family structures, and social safety nets work in order to vastly expand housing. (In the US.)

    snarf21(10000) 3 days ago [-]

    We don't. We need only take a look at Detroit, holdout of American manufacturing. They have been automating and robotizing everything they can. ['... However, the Federal Reserve Bank of St. Louis notes that motor vehicle manufacturing employment declined 17% from 1994 to 2018, while motor vehicle productivity increased by about 13% over the same period...'] If manufacturing does come back to the US, it won't create very many jobs. Mostly just the people to maintain and fix the machinery.

    Given the improvements in cameras and computer vision and AI and robotics, there is no reason to think this won't accelerate. A long long time ago, labor was cheap and resources were expensive. Today, the opposite is true. Keynes predicted in the 50s that we would be working 15 hour work weeks. The reason he was 'wrong' was that he underestimated our insatiable human greed. We all want more. Average house size in the 50s was < 1200 sq ft. Today it is 2400+. Each kid must have their own room that is 12x12!! (I grew up with 4 boys in a 10x10, lol). Each kid must get a new $200 bat each year for little league, etc. We want a higher standard of living for ourselves and our kids. This is understandable but we forget our role in the never ending chase.

    adgjlsfhk1(10000) 3 days ago [-]

    oh that can be done in 3 easy steps.

    1. win a world war that destroys the economy of every other country in the world for a decade.

    2. destroy about the past 50 years of technology and all knowledge of how manufacture it.

    3. Kill 90% of people over retirement age to lower demand for housing, healthcare costs, and retirement benefits.

    In the modern world with modern technology there's a lot less productive work out there for people without specialized education. We could do a better job of training more people for trades jobs (e.g. plumbers, electricians etc), and removing college requirements from some professions (e.g. med school and law school could probably be college level education rather than post college) but anyone saying that we're going back is just lying.

    mlsu(10000) 3 days ago [-]

    Why will a factory job will pay enough for one person to raise a family and buy a house on a single income?

    Like what is unique about factory work that allows for this? I've heard stuff like this so much and I just do not believe it. Is anyone working in a factory in the USA today able to buy a home and have a stay at home spouse on a single income?

    kjkjadksj(10000) 3 days ago [-]

    People literally do just that today in the midwest. The coastal housing imbalance is just that a housing imbalance and not reflective of a lack of buying power today. Also consider that americans back then outside of the car and home had no other large purchases. No computer, no $1k phone on a $1k/yr plan, no big tv. People weren't even eating out or flying back then when they could afford a family vacation.

    pjc50(1402) 3 days ago [-]

    > used to be able to graduate high school and get a job that could support a house with a yard, a car, a non-working spouse and children.

    When was that last really true? 1971?

    Workaccount2(3572) 3 days ago [-]

    >How we get that level of prosperity back?

    By making everyone poorer. Seriously.

    You are competing with your fellow citizens for those things. This was true even back then.

    Right now, today, it has never been easier to make a lot of money working. So you need to compete with people in that environment. You need to be able to outbid those people for that beautiful home you want. In an environment of lots of educated and skilled workers getting skilled salaries for doing vary valuable work. That's where the bar is.

    We can lower the bar back to blue-collar-high-school-diploma, but then we need to also sacrifice all those high earning college degree jobs.

    Not going to happen.

    testing22321(10000) 2 days ago [-]

    > How we get that level of prosperity back?

    It's so simple it hurts. Stop the ruling class hoarding all the wealth.

    Top tax bracket used to be 94%.

    Have a VERY steep wealth tax, an inheritance tax and whatever else is needed. The fact individuals exist with many hundreds of millions of dollars while so many in the same society are struggling so bad is a disgrace.

    ziml77(10000) 2 days ago [-]

    > How we get that level of prosperity back? That's the people really want.

    And something they're not going to get. Manufacturing is going to be heavily automated. The money is going to continue to funnel into a small portion of the population.

    vFunct(10000) 3 days ago [-]

    Our economy was designed to NOT have citizens work at factories. We pay thousands of dollars a year in our public schools to teach each of our citizens calculus, literature, world history, and physics, so that they DON'T have to work at a factory, or perform manual labor like picking strawberries or driving trucks or cleaning toilets.

    Why would anyone want to go back to an economy that can be run by a third worlders? What is our competitive advantage then?

    Economics works when the people do the things they are most efficient at. If a person in China can make iPhones for cheaper than an American, LET THEM. Our citizens should be designing them instead, because that's what we train our citizens to do.

    Trump and the Republicans really do think of our citizens as third worlders performing manual labor like we were oxen.

    rizpanjwani(10000) 3 days ago [-]

    And yet A&W campaign for 1/3 pounder failed against MacDonald quarter pounder because Americans believed 1/4 > 1/3.

    nonethewiser(3585) 3 days ago [-]

    But aren't China's learning outcomes higher in calculus, physics, etc?

    Also the US is already the 2nd largest manufacturer in the world.

    fullshark(10000) 3 days ago [-]

    At its root I think this is driven by anxiety over how America would perform in a hot war, rose colored glasses culturally regarding the post WW2 era, and acknowledging that there's no real economic growth opportunity in America for unskilled labor, it's merely a way to tread water now.

    cpursley(3464) 3 days ago [-]

    Typical coastalist ivory tower thinking. No wonder we're in a pickle...

    bluedino(904) 3 days ago [-]

    Yet, 40% of our students can't read at a basic level.

    nathan_compton(10000) 3 days ago [-]

    I think its more complicated than this. People don't want to work in factories per se, but what a world where labor has actual power. The big thing that offshoring did was strip the power of local labor to enforce certain reasonable conditions on employers and this allowed normal people to live stable, even comfortable lives.

    Offshoring has produced a world where we can buy cheap trinkets but where many, many, americans live precariously, have little to no stability, and work more than one job to make ends meet. What Americans really want is more control over their lives and 'bringing back manufacturing' is a sort of short-hand for that ideal.

    I think bringing back some manufacturing may help, but in the end Americans need to learn that what they really want is more power to shape their lives and that they will need to wrest that power back from a system which has leaned ever more towards market control of the allocation of time, energy, and labor.

    api(1616) 3 days ago [-]

    The problem with an exclusively intellectual economy is that it easily loses touch with reality entirely. You end up with generations of people who have no idea how anything works or how to actually make anything or do things in the real world.

    Why does it cost us 10X more to build half as much? It's not all wage differences. It's that we don't have a large talent pool of builders. When you make things -- physical things in the real world -- you learn things about the nature of reality that cannot be learned from books or computers.

    lesbolasinc(10000) 3 days ago [-]

    this is what i've been saying - critical manufacturing should of course be brought on shore but I don't understand the idea of bringing back 'the assembly of hyper niche part that country Y can produce extremely cheaply but America can't even reasonably produce in quality' to American shores.

    It literally harms industry because anyone relying on that hyper niche part now has to pay more (because American mfg, let's face it - is not efficient) and deal with subpar quality as opposed to higher quality foreign parts.

    I hate it say it, but come on man - people aren't buying American cars globally because the Japanese and even Germans can do it better. That's free market economics, either get better at making cars or focus on making things that we can do better like iPhones and Macbooks - not try to artificially defend an industry we suck at by forcing people to deal with shittier subpar products.

    Maybe I'm being unreasonable, I don't know.

    gowings97(10000) 3 days ago [-]

    Because you cannot hide the imbalance of disconnecting yourself from the material reality that's involved with making your lifestyle possible by outsourcing to other human beings, over multiple decades, without it coming back to bite you in one form or another.

    See the hundreds of thousands of people in US that have died from opioid overdoses. 50% of the US population, specifically those living outside major metro areas, experienced a slow collapse (over decades) that was not unlike the fall of the Soviet Union.

    A country should have _some_ semblance of what it is to truly source, manufacture, and produce the lifestyle that's made possible in the country. When the top 15-20% become completely disconnected from the other 80% working menial service jobs because the core manufacturing has been outsourced to outside the country, it will come back to bite you.

    'Man must feel the sweat on his own brow' or at least have an appreciation for what makes this possible. Your comment essentially implies that you feel that you are above or should be disconnected from this reality, which is dangerous.

    aNoob7000(10000) 3 days ago [-]

    Americans fantasize about factory work because, at that time in America, you could afford a home without a two-income family. Life was 'easier' for many people.

    Personally, I think we need to focus on making things like homes more affordable. This would go a long way toward alleviating people's feeling of being trapped.

    welshwelsh(10000) 3 days ago [-]

    Manufacturing doesn't have to involve large amounts of low-skill manual labor. It can be highly automated and serve as a source of jobs for engineers.

    gedy(10000) 3 days ago [-]

    > our citizens as third worlders performing manual labor like we were oxen.

    Lord man... there's a whole mass of humanity who don't want to fart in an office chair all day, or lay around collecting the dole.

    abcde777(10000) 3 days ago [-]

    The idea that everyone can just do knowledge work is pretty unrealistic, to put it mildly.

    mbrumlow(10000) 3 days ago [-]

    And that is not working out...

    What we have instead is a nation straddled with debt and useless degrees. While the counties like China do "theirs world" work produce smarter and more capable workforce all while doing the mundane work too.

    I think your view also vastly underestimates the number of not so smart people that exist in America. This is no knock on them, but people in tech bubbles get to walk around in a society where the average person they interact with has a far above average IQ. So for those who don't balance red/black trees and find shortest paths with dijkstra's algorithm need jobs too.

    On top of that you forgot something I am sure you have yelled many times, diversity. Remember when it was a strength? It's not good for any nation to be completely void of entire industries. Having factories next to the tech will germinate the thinking minds with new problems to solve.

    But even more to the point. China is doing amazing things, and they were we let do the manufacturing. So we always have a strong evidence that letting others might not be the best idea.

    jballer(10000) 3 days ago [-]

    To the contrary, they think of manual and "low-skill" labor as an essential undertaking that no person or society is above.

    You are the one who thinks of the work as below you, that it should be moved out of sight so we can stop caring and make it someone else's problem.

    cogs(3603) 3 days ago [-]

    But how many citizens know calculus, literature and physics? Certainly not enough know history - or US democracy wouldn't be facing the threat it does now.

    The poorly educated need a livelihood too. If the economy is healthier for global trade (I think it is), then some way must be found of destributing its benefits to the demographics who got hit. Otherwise you get revolution or populism.

    Telling an unemployed factory worker to send their kids to college doesn't help. Doesn't help the factory worker, and doesn't help kids who see education and middle class jobs as about as unreal as the idea of becoming a famous influencer or kingpin drug dealer.

    charlie90(10000) 3 days ago [-]

    >Economics works when the people do the things they are most efficient at.

    If you believe this statement, then you must be supportive of open borders.

    People in China might be more efficient at doing local US service jobs. Whose to say we dont let them do it?

    PaulKeeble(3146) 3 days ago [-]

    Its the integration and overall combined effect of the entire industrial pipeline that makes China so incredible. It processes all the raw materials and the recycling/reuse of off cuts through every possible way to turn those raw materials into components and then into goods with very little need for import from other countries. Its the complete system for a huge variety of goods.

    To compete with that the entire pipeline from raw materials through components and final product needs to be reproduced and its taken China 40+ years to build up to this capacity and capability.

    I think its something more countries should consider and do for certain pipelines but we are in a world with vast international trade and the winner(cheapest) takes most of the trade so whatever it is needs to be worth while within country.

    digianarchist(2994) 3 days ago [-]

    Absolutely. Canada for example should not be shipping lumber and oil to the United States for further refinement. It should be processed domestically.

    gjsman-1000(1211) 3 days ago [-]

    And if China invades Taiwan, which they have said for decades they will do (we just don't like to believe them), what then?

    Do we sacrifice a democracy for the dollar? If not, is our economy annihilated? We have no credible alternative to reshoring for this reason alone.

    MisterTea(10000) 3 days ago [-]

    > Its the integration and overall combined effect of the entire industrial pipeline that makes China so incredible.

    The incredible part is USA exported that entire sector to China.

    mclau157(10000) 3 days ago [-]

    Even getting workers to the factory is a concerted effort of trains and public transport, Americans would quickly clog the highways with millions of single occupant large vehicles without first investing in more efficient ways to move people

    zbobet2012(10000) 3 days ago [-]

    This is true, and at the same time, this article is absolutely rife with unsourced, unserious points. However insane Trumps plans, the fundamental 'facts' presented here are largely a joke.

    > Chinese workers work longer hours more happily and they're physically faster with their hands; they can do things that American labor can't. It's years of accumulated skill, but it's also a culture that is oriented around hard work and education that the United States no longer has. In China, there are no people who are too fat to work. The workers don't storm off midshift, never to return to their job. You don't have people who insist on being paid in cash so that they can keep their disability payments, while they do acrobatics on the factory floor that the non-disabled workers cannot do.

    It's an actual joke to present something with such a derogatory view of the median American worker with no data to back it up. Most of America's 'labor class' is in fact Mexican, the country with the highest annual hours worked per year. Secondly hours worked does not relate directly to productivity. American workers are the most productive in the world. [1]

    More importantly, _we don't manufacture like this anymore, even in China_. Doing 'acrobatics' on the factory floor is now obsolete. Much of what's said here fails to acknowledge that we would _not_ build our supply chains the same way as China does. China had a surplus of human labor (one that's facing an impending demographic crisis) and so used human labor in ways modern western countries would not and do not.[2]

    [1] https://www.weforum.org/stories/2018/01/the-countries-where-... [2] https://ifr.org/ifr-press-releases/news/global-robotics-race...

    Reproducing these supply chains is more possible than this article states. Doing it via destroying our economy however will not work.

    throwawaymaths(10000) 3 days ago [-]

    Molson has a Chinese spouse, directly benefitted from Chinese manufacturing for a long time, and often spouts direct propaganda from his X account so while he's likely to be right about a lot of things he had/has a strong incentive to not imagine alternatives to the status quo.

    cbg0(2317) 3 days ago [-]

    Try attacking the points he made in the article instead of him.

    vishnugupta(10000) 3 days ago [-]

    No kidding!

    Beyond the obvious skilled labor there's supply chain network, maintenance, townships and supporting system around them.

    And all of this needs human labor which is taken from somewhere else. How do you incentivize them? Just throwing money at the problem won't solve it either. Because more often than not it'll attract charlatans who will promise the sky, take the money and move away.

    jmclnx(10000) 3 days ago [-]

    And do not forget NIMBY :)

    Where I live it is close to impossible to even get a Dog House approved and built.

    rkozik1989(10000) 3 days ago [-]

    Americans have a very 1980s idea of manufacturing (and China in general) in that there aren't actually that many humans being used in Chinese factories let alone the American ones some of them want to build here. There's even a concept of, 'Dark Factories' in China which are 100% automated factories that operate in the dark. The only jobs that will come from bringing manufacturing back to the states will be in automation, robotics, AI, and roles to support those things.

    mppm(10000) 3 days ago [-]

    Jonathan Blow's 'Preventing the collapse of civilization' [1] makes a similar point. It is easy to assume that, if we can build EUV machines and space telescopes, then processing stainless steel and manufacturing PCBs is baby stuff, and is just waiting for the proper incentives to spring up again. Unfortunately that is not the case -- reality has a surprising amount of detail [2] and even medium-level technology takes know-how and skilled workers to execute properly. Both can be recovered and scaled back up if the will is there. And time -- ten or twenty years of persistent and intelligent effort should be plenty to MAGA :)

    1. https://www.youtube.com/embed/pW-SOdj4Kkk

    2. http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...

    imbusy111(10000) 3 days ago [-]

    But the important question is - is it worth it? Should we be doing something more valuable instead?

    saati(10000) 3 days ago [-]

    The US can't even make EUV machines, just parts of it.

    stronglikedan(10000) 3 days ago [-]

    I don't think anyone underestimates that, as much as some people with the author's viewpoints would like it to be true.

    To paraphrase Kennedy: 'We choose to [bring back manufacturing]. We choose to [bring back manufacturing] in this [or the next] decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills, because that challenge is one that we are willing to accept, one we are unwilling to postpone, and one which we intend to win, and the others, too.'

    We will do it, and we will win, whatever that means.

    hackyhacky(10000) 3 days ago [-]

    Putting aside the rah-rah patriotism, you perhaps don't understand the problem any better than Trump does. The moon mission to which you allude was difficult but, critically, that difficulty was not felt by most Americans: it was a challenge for NASA engineers. Trump's current economic plan will increase inflation, cripple America's role in world trade, and result in negligible increase in manufacturing in the short term. Wildly unpopular policies do not last in a democracy.

    podgorniy(10000) 3 days ago [-]

    > To paraphrase Kennedy

    What in the modern situation suggests the comparable level of diligence in approach to the goal? The fact that both goals are far-reaching does not suggest comparability of approaches to the solution.

    Changing the way society/economy operates is nowhere near 'building X,' whatever X is, whether it's something hard like a bomb or a collider.

    > We will do it, and we will win, whatever that means.

    How do you know that you haven't won already? Shouldn't the end goal be clear? In the case of Kennedy you're referring to, criteria and motivation were clear.

    --

    To a non-US bystander, your comment sounds like a no-thinking patriotic slogan. The details of the article are such that you can take any argument and bring it into discussion in order to show its irrelevance. But we're discussing slogans irrelevant to the situation and belief in the win, even though the win is not defined.

    causal(10000) 3 days ago [-]

    Did you read the article? The author is advocating for manufacturing in the US, but is pointing out the ways these policies undermine that very goal.

    constantcrying(10000) 3 days ago [-]

    How many additional hours are Americans going to work? What pay cuts will they take? How many years later du they want to retire?

    These are the questions people need to ask themselves. We both know what the answer is.

    2OEH8eoCRo0(3093) 3 days ago [-]

    It's difficult but necessary to bring manufacturing back due to defense logistical reasons.

    We build about 100 SM-6 missiles a year. How long does this last against a peer? 12 hours?

    I don't know if tariffs are the best way to do this but some manufacturing must come back one way or another.

    cogman10(10000) 3 days ago [-]

    Tariffs work against the goal.

    The only sane way to bring back manufacturing is investments like the chips act.

    Think about it this way, you are a widget manufacturer trying to place a new factory. You could put it in say Canada and enjoy cheap imports and exports of your product globally. It's cheap to produce and easy to sell.

    Or you could place it in the US, but now you are looking at a minimum 10% tax on importing the resources you need. On top of that, a significant portion of the world (especially the richest nations) are tacking on an addition 10% or more tax on your product because it came from the US.

    Would you build a factory in the US? Maybe if you can source everything in the US and you are fine with your primary market being only the US. Otherwise, it's a bad business move.

    When talking about something like semiconductors, global access is really important to be profitable. Low or no tariffs and the proximity to China and other raw resources powerhouses is a major reason why so much of the semiconductor industry is in Asia.

    asdajksah2123(10000) 3 days ago [-]

    America does need to bring back manufacturing. Not because a manufacturing job that pays $25/hr is somehow better than a service job that pays $25/hr.

    The US needs to bring back manufacturing for strategic reasons and in strategic areas.

    And it needs to have the capability to scale up manufacturing in response to emergencies.

    But also, importantly, the US doesn't need to do this by onshoring all manufacturing. Near shoring and friend shoring will have to be extremely important components of adding these capabilities, and unfortunately, teh actions the US is taking will likely hurt nearshoring and friendshoring and will end up making the US less strategically capable in manufacturing even if it's able to reshore a significant amount of manufacturing.

    apercu(10000) 3 days ago [-]

    For strategic, economic, national defense and public health reasons, I completely agree with you.

    Too bad a large portion of our electorate is brainwashed by propaganda and/or completely out to lunch.

    kelseyfrog(2243) 3 days ago [-]

    If we're going to defy the invisible hand, we should at least do it to benefit people in a concrete way - health care, education, UBI. Doing it for 'strategy' is equivalent to simply burning the money people would have otherwise saved by doing nothing.

    howmayiannoyyou(10000) 3 days ago [-]

    The components of a strategic manufactured product can be as simple as an injection molded switch, a LiION battery, capacitors, copper wire, etc., so the notion of bringing only 'strategic items' back is as much a myth as the idea its mostly coming back to the USA. The goal here is to diversify the supply chain globally so its not concentrated in China. Internally this is sold as bringing MFG back to the USA (will happen to a noticeable degree), but that's not the actual plan.

    elbasti(2838) 3 days ago [-]

    A skilled assembly worker makes closer to $30 or $40 an hour than $25. And that doesn't account for overtime. A skilled tradesman can make $40+.

    Manufacturing is skilled, well-paid labor that requires commitment, attention, and care. That is why there's a shortage of labor--not because of wages.

    Workaccount2(3572) 3 days ago [-]

    > It's years of accumulated skill, but it's also a culture that is oriented around hard work and education that the United States no longer has.

    Sounds more like China has an exploited educated class/lack of oppurtunity than America has bad education.

    Plenty of American workers can multiply in their heads and diligently perform there work. These people work in white collar jobs though, not in factories snapping together phone cases for 12 hours a day.

    The author isn't totally wrong here, Americas bottom tier labor pool sucks, but they miss the bigger picture when comparing Chinese and American workers. China has skilled workers doing unskilled work. That's why they are so good. That's also why bringing manufacturing to the US will be so hard. Ain't nobody wanna get a degree so they can work a hot factory floor all day.

    karn97(10000) 3 days ago [-]

    Westerners have had too good of a life and you cannot compete with an asian who is told every day if he doesn't perform he will be homeless. You just cannot compete.

    jghn(10000) 3 days ago [-]

    The other day I saw the results of a poll [1] where 80% of Americans thought the *country* would be better off if more Americans worked in factories. However, only 20% of Americans thought that *they* would be better off if more Americans worked in factories. It was surprisingly bipartisan.

    In other words, people like the idea of this, but no one actually wants this.

    [1] https://www.ft.com/content/845917ed-41a5-449f-946f-70263adba...

    toomuchtodo(160) 3 days ago [-]

    Americans are cosplaying (voting their belief system, not what they'll do, the 'revealed preference'), as they do as farmers [1] [2] [3] [4], as they do as 'rural Americans' [5]. It is an identity crisis for tens of millions of people [6]. Their crisis is our shared political turmoil. Happiness is reality minus expectations.

    From the piece: 'The people most excited about this new tariff policy tend to be those who've never actually made anything, because if you have, you'd know how hard the work is.'

    [1] https://www.agriculturedive.com/news/agriculture-shifts-farm...

    [2] https://www.terrainag.com/insights/examining-the-economic-cr...

    [3] https://www.ers.usda.gov/topics/farm-economy/farm-labor

    [4] https://www.mckinsey.com/industries/agriculture/our-insights...

    [5] https://www.youtube.com/watch?v=6q_BE5KPp18

    [6] https://www.theguardian.com/us-news/2025/jan/11/there-are-a-...

    apwell23(10000) 3 days ago [-]

    just like management class in any typical corporation

    999900000999(10000) 3 days ago [-]

    We already have a massive prison industrial complex, a lack of basic rights and a complete disregard for due process.

    Very soon we'll be forced to make shoes and other things behind bars. No trial needed, just indefinite detention.

    kamaal(1377) 3 days ago [-]

    Would interesting to know what percentage themselves or their own children wanted to work at a factory. Can tell with a huge degree of confidence for all practical purposes thats 0.

    Its always easy to expect other people to make sacrifices working these jobs, while imagining you and your kids working office desk jobs.

    tdb7893(10000) 3 days ago [-]

    This lines up with the experience of the people I know who have worked in factories, there seems to be a disconnect with all these pundits and economists (and many people on the internet in general) talking about basic manufacturing work and the people I have met with actual factory jobs. The pay could've been worse and it wasn't the worst job I've heard of but it also wasn't great (they said they would've preferred a boring office job). There's a reason the pundits talking about the virtues of manufacturing jobs are pundits.

    knubie(10000) 3 days ago [-]

    I mean 20% of the population thinking they would be better off working at a factory is huge. So we need more than that?

    dynm(723) 2 days ago [-]

    There's absolutely no contradiction here.

    Currently less than 20% of Americans work in factories. All those 80% need to want is that the 20% of people who want to work in factories can do so.

    paulcole(10000) 2 days ago [-]

    It's the same as every tech bro on here who says, "Go join the trades!"

    People want to be sure that their success is protected and they love telling other people what they should do.

    gosub100(10000) 2 days ago [-]

    I would consider factory work if it paid a liveable wage and I didn't have other options.

    phendrenad2(10000) 2 days ago [-]

    Everyone wants more manufacturing in the US, but nobody wants to be a factory worker. People would rather starve or go homeless than work in a factory. Until Americans overcome their pride, this is going to make building manufacturing in the US very difficult.

    maxglute(10000) 2 days ago [-]

    Let's me real... 80% of the hard shit in US factories will be ran by mexican migrant labourers like in agriculture. And maybe that's enough of a 'win' for US interests.

    MetaWhirledPeas(10000) 2 days ago [-]

    > people like the idea of this, but no one actually wants this

    As others have pointed out, this is not a contradiction. (Read their reply.)

    However, the question of 'Do YOU want to work in a factory?' is heavily influenced by the fact that we don't see factory work as a high-paying career, or a career at all. Part of the solution to the factory problem is enhancing the value proposition for the employees.

    I am ambivalent toward tariffs, but the idea is that if we make foreign products more expensive then the higher price of domestic goods becomes more palatable by comparison. If paying domestic workers more raises the price of domestic goods, and if people are willing to pay that price for whatever reason, you will start to see growth in manufacturing.

    It's also silly to reject long-term goals simply because achieving them is difficult.

    rchaud(10000) 2 days ago [-]

    Reminds me of the 'college is a scam, learn a trade' people, all of whom went to college and plan to send their kids to college as well.

    x-complexity(10000) about 4 hours ago [-]

    > In other words, people like the idea of this, but no one actually wants this.

    Misinterpretation of data.

    > The other day I saw the results of a poll [1] where 80% of Americans thought the country would be better off if more Americans worked in factories. However, only 20% of Americans thought that they would be better off if more Americans worked in factories. It was surprisingly bipartisan.

    https://www.bls.gov/opub/ted/2023/a-look-at-manufacturing-jo...

    Compared to the current percentage of people employed in manufacturing (9.9% - 12,759,129 / 128,718,060), there are **more** Americans that would like to move into manufacturing, not less.

    tbirdny(10000) 3 days ago [-]

    America doesn't underestimate it, its president does.

    dashundchen(10000) 3 days ago [-]

    I saw a chart passing around from this Cato Institute survey (Cato is a right wing think tank) [0]. It made me laugh.

    > America would be better off if more Americans works in manufacturing than they do today. Agree 80%/Disagree 20%

    > I would be better off if I worked in a factory instead of my current field of work. Agree 25%/Disagree 73%

    [0] https://www.cato.org/sites/cato.org/files/2024-08/Globalizat...

    balozi(10000) 3 days ago [-]

    For better or worse the man is exposing the mindboggling scale of deindustrialization that was hidden underneath America's transition to a 'knowledge economy'. Decades of failed economic policy has led America to this point.

    margorczynski(10000) 3 days ago [-]

    Still, this kind of outsourcing of manufacturing (or even more food production) puts the US in an incredibly uncomfortable position, especially that China is its main geopolitical enemy.

    What if a war erupts? Suddenly the US cannot produce a lot of essential stuff - I think Covid was a good example of that happening.

    Of course the question is can this be done and what will be the price if so.

    pjc50(1402) 3 days ago [-]

    Last time I looked the US was a net exporter of agricultural products to China. Well, until the retaliatory tariffs hit.

    franktankbank(10000) 3 days ago [-]

    Subsidize the essentials let the free market sort the rest. I think we still want competitive markets within our borders for the stuff we subsidize so we don't get stagnation of the industry. Maybe there are clues how it could be structured like we subsidize farming.

    causal(10000) 3 days ago [-]

    The author is not anti-US-manufacturing. He explained how the current tariff policy undermines US manufacturers. He is pointing out the obstacles and what we must do to overcome them. The obstacle is the way.

    bilbo0s(10000) 3 days ago [-]
    What if a war erupts?

    I believe we should scale up manufacturing in the US for different reasons.

    But I'm also a realist. If war erupts between China and the US, then anyone in the US or China still alive 4 weeks after the start of hostilities will have more pressing concerns than worrying about where things are manufactured. Again, just the reality.

    We shouldn't plan on the basis of end of the world scenarios. Rather we should plan on the assumption that we want to confer maximum benefit on the US in likely non-apocalyptic future timelines.

    zero_k(10000) 3 days ago [-]

    America is not a country, it's a continent. I know, Canada will be a province, and soon Panama of course, but in the meanwhile, it's a continent, not a country.

    codedokode(3471) 2 days ago [-]

    But famous American themselves call their country 'America'.

    phendrenad2(10000) 2 days ago [-]

    If you search a dictionary for 'America', the first result will likely be 'The United States of America'.

    https://dictionary.cambridge.org/dictionary/english/america

    It doesn't make you wrong, but you're also not right.

    nomdep(2682) 3 days ago [-]

    /s He is right, we should just crawl under a rock and die instead.

    Remember the JFK 'We choose to go to the moon' speech?

    (I wonder how many of this defeatist articles are financed by China somehow).

    ks2048(3275) 3 days ago [-]

    Trump is doing his version of the JFK vision. We choose to dismantle the country and strip it for parts.

    pjc50(1402) 3 days ago [-]

    > China generates over twice as much electricity per person today as the United States. Why?

    This appears to be completely wrong? All the stats I can find say that the US has about 4x the per capita electricity generation of China.

    Other than that it seems to be mostly good points, especially the overall one: you cannot do this overnight.

    > If you're building a new factory in the United States, your investment will alternate between maybe it will work, and catastrophic loss according to which way the tariffs and the wind blows. No one is building factories right now, and no one is renting them, because there is no certainty that any of these tariffs will last

    Policy by amphetamine-driven tweeting is a disaster.

    > 12. Enforcement of the tariffs will be uneven and manipulated

    Yup. The 145% level seems designed to create smuggling, and the wild variations between countries to create re-labelling. It's chicken tax trucks all over again.

    > This is probably the worst economic policy I've ever seen

    Per Simpsons: this is the worst economic policy you've seen so far. The budget is yet to come.

    > If American companies want to sell in China, they must incorporate there, register capital, and name a person to be a legal representative. To sell in Europe, we must register for their tax system and nominate a legal representative. For Europeans and Chinese to sell in the United States, none of this is needed, nor do federal taxes need to be paid.

    This is .. not a bad idea, really. It would probably be annoying for small EU and UK exporters but less so than 10% tariffs and even less so than random day of the week tariffs. Maybe one day it could harmonise with the EU VAT system or something.

    (also I think the author is imagining that sub-par workers, crime, and drugs don't exist in China, when they almost certainly do, but somewhere out of sight. Possibly due to the internal migration control of hukou combined with media control?)

    tokioyoyo(10000) 3 days ago [-]

    Once again, want to point out how this is simply American leadership not wanting to accept their loss and move on. For the first time in the history they're not being perceived as the 'global leader', and that's not acceptable from their POV. Now it's just freaking out and hoping that some extreme policy changes will change the course. From my personal experience, most people act this way when they're in distress and can't think ahead because of all the externalities.

    rickdeckard(10000) 3 days ago [-]

    > China generates over twice as much electricity per person today as the United States. Why? >> This appears to be completely wrong? All the stats I can find say that the US has about 4x the per capita electricity generation of China.

    I believe the comparison is absolute production, not per person. Anything else would be odd. Considering China has 4x the capita of US it would mean that in absolute terms China is producing 8x the energy of the US. In reality it seems to be roughly 2x (although both sources are a bit outdated):

    US 2023: 4.18 trillion kilowatt-hours (kWh) of electricity from utility-scale generators. Additionally, small-scale solar photovoltaic systems contributed around 73.62 billion kWh 1.

    China 2021: 8.53 trillion kilowatt-hours (kWh) of electricity

    --

    But the staggering difference is how much of the electricity is attributed to the Industrial sector:

    China: 70% (~6 trillion kWh)

    US: 26% (~1 trillion kWh)

    So overall China allocates 6x the electricity to production compared to US...

    looseyesterday(10000) 3 days ago [-]

    On crime they most centrically do, watch the China Show (not the bloomberg one) on youtube. One example given on the show is that Once you go into northern villages and small towns you start seeing propganda posters on why you shouldn't take drugs. Homelessness is widespread and present too but you just wont see it in city centers more on the outstkirts.

    like_any_other(10000) 3 days ago [-]

    > Other than that it seems to be mostly good points, especially the overall one: you cannot do this overnight.

    It's annoying Americans were given only two choices - offshoring is great and let's keep doing it, and, as you say, the opposite, meth-fueled let's bring back manufacturing overnight. The kind of slow and steady protection and promotion of home-grown industry that China and most of Asia so successfully used to grow their economies was completely absent as even a talking point.

    bparsons(3642) 3 days ago [-]

    I think they conflated electricity production growth with total output.

    Output in the US has been flat for some time, while China has been on a steady rate of climb for several decades.

    pokot0(10000) 3 days ago [-]

    Can someone explain to me why EU VAT is considered a tariff, while US sales taxes are not? They both seem a sale tax to me.

    mcv(10000) 3 days ago [-]

    That is really the big problem with the current policy in the US: it's completely unclear what the policy is and how long it will last. This is not a stable climate for investment. Would you invest in a country where the president plays Russian roulette with the economy?

    Most corporations will wait it out. Corporations that have an established interest (like Big Tech) will bribe Trump to get the exemptions they need to continue their business. Everybody else will have to decide how much they will want to depend on such an openly corrupt system. There industries that see no problem in dealing with corrupt regimes.

    mapt(3635) 3 days ago [-]

    When I visited China, the expats told me that recreational drug supplychains were strictly compartmentalized. There was the supply of illicit drugs for Westerners (imported by the sons of Nigerian businessmen, the cliche went), the supply of illicit drugs for Chinese people (who only dealt with Chinese people), and then there were the vast array of drugs that are completely legal to get over the counter in China without a prescription (at a pharmacy or CTM shop) that would be controlled substances in a US pharmacy.

    That the official line from the CCP was that China had no drug problems, no prostitution, a variety of other things†, and that there were no gay people in China; That these were all Western ailments.

    Urban China is a panopticon state not only digitally, but culturally. Housing is much tighter than the US, walls thinner. Your underwear is hung out to dry in clear view. 'Privacy' in terms of politeness norms mostly consists of pretending you don't see or hear a thing. Neighbors generally know a lot about what each other are doing. 7% of the population are Party members, and in Marxist-Leninist systems this connotes something closer to earning a military officer commission; The Party is not trivial to apply to, the Party is strictly regimented, Party rules are held above and before the civil law, Party members are expected to be informers and have a strict lawful-good orientation from the perspective of the regime. Penalties for commerce in illicit drugs are even more extreme than the US, and due process is not bound by the same presumptions.

    There are lots of factors conspiring against the sort of informal US inner city street drug distribution being as big of a deal in China.

    Disclaimer: All my information is more than a decade out of date, and was only ever a thin slice of opinions from mostly Westerners in some first tier cities.

    † From an academic paper: '2 The Six Evils are prostitution, drugs, selling women and children, pornography, gambling, and feudal superstition. Criminal gangs, or triads, are often counted as a seventh evil. These vices represent impediments to modernization and throwbacks to social problems that were present prior to the Communist takeover. Elevation of a problem to an 'evil' symbolizes that the Beijing regime will mount a 'campaign' or 'struggle' against it.'

    Moto7451(10000) 2 days ago [-]

    Regarding the potential to annoy small businesses, it's actually pretty easy to hire a firm to represent you in the EU. You'll need a lawyer at some point anyway so it's often the same firm.

    If we had the same requirements here in the US it would likely become the same.

    nottorp(3629) 2 days ago [-]

    > To sell in Europe, we must register for their tax system and nominate a legal representative.

    American companies? Register for EU tax system?

    I can buy from anyone in the US and worldwide for that matter, and as long as they're willing to figure out shipping they don't need to register anywhere, I can handle taxes myself when receiving.

    What 'AI' did they use to write this?

    erkt(10000) 2 days ago [-]

    Tl;Dr: The author makes a strong case for broader, higher tariffs but understands it is impossible to help American manufacturing knowing that the next administration will cave to China and Wall-street and immediately move to undo everything. The solution is to work together to make American protectionism work.

    1. They are not high enough: Correct. Raise them more.

    2. America's industrial supply chain is weak: That is why we need to bring the factories and resource extraction home.

    3. We don't know how to make it: Perhaps we can steal the IP like China? We will figure it out.

    4. The effective cost of labor in the US is higher than is looks: Then raise the tariffs higher.

    5. We don't have the infrastructure to manufacture: You have to build it first, This will get cheaper and easier as we continue to bring industry home.

    6. Made in America will take time: Blaming permitting time and Bureaucracy is a ridiculous excuse. The federal government can override all state and local requirements here. Its a choice to slow projects down.

    7. Uncertainty and Complexity around tariffs: Democrats will have a hard time undoing progress if there is movement to reshore industry. War over Taiwan seems basically inevitable and this will harden resolve.

    8. Most Americans are going to hate manufacturing: Most (well a very large and non-negligible percent of) Americans are going to loose their jobs because of AI. Most of us hate our jobs already, manufacturing will pay better. There are always endless service industries...like delivering food, if they do not like supervising a robotics controlled factory. It is disingenuous to imagine a return of American manufacturing without Huge AI and robotics investments. More factories will be lights out than the alternative. The jobs will be in servicing the robots, computer systems and quality control. We aren't talking Rosie the Riveter and the author must know it.

    9. The labor does not exist to make good products: This is why there must be some discrimination over tariffs and why they should not be a simple even percentage. We can choose to bring back GPU manufacturing but pass on fast fashion. And during the process of negotiation we can give up those industries we do not want in exchange for support of a China embargo.

    10. Automation will not save us: The author cannot imagine a world where manufacturing is not motivated by global trade. They fail to understand that it does not matter how much more productive China is when protectionist policies prevent trade. The goal is to get America to a place where it can manufacture everything it NEEDS on its own.

    11. Americans file lawsuits: Good- this will increase the quality of goods we enjoy and we can get past the disposable foreign garbage that floods our markets. 12. enforcement will be uneven and manipulated: so get on board and help to improve it, stop undermining the attempt to help this country.

    13. tariff policies structured in wrong way: Really not a terrible idea to have a disparity in tariff between input goods and finished goods but it is a half measure. We need the entire supply chain from resource harvesting, to tooling, to components to final finished manufacturing if we want to ensure national security in a world post-NATO.

    14. Michael Jordan sucked at baseball: Was there serious consequence to MJ trying his hand at baseball? We got through COVID. We have survived massive supply disruptions and the market has been pumping as hard as ever. If you are not currently retired it is absurd to worry about fluctuations in the stock market. And if you are, you likely invested in companies that sold out America.

    beanjuiceII(10000) 3 days ago [-]

    yea its difficult lets not do it

    knowaveragejoe(10000) 3 days ago [-]

    Let's approach it from the other direction: why should we? What are we getting by trying to 'bring it back'?

    drittich(10000) 3 days ago [-]

    False dichotomy. An alternate position is to do it in a measured, planned way, not under duress as the economy tanks and international relations are soured.

    ChrisMarshallNY(10000) 3 days ago [-]

    This pretty much mirrors what a friend of mine said (he is a recently-retired Co-CEO of a medium-sized manufacturing business).

    He's been telling me this, for years. It's not a secret. The information has been out there, for ages. I'm surprised that the administration didn't understand this.

    nine_zeros(10000) 3 days ago [-]

    > I'm surprised that the administration didn't understand this.

    Curious why you are surprised at incompetence being unable to understand complexity.

    npiano(10000) 3 days ago [-]

    A genuine question, presuming no correct answer: what is to be done about it? China is reportedly on track to run more than 50% of global manufacturing by 2030, if the World Bank is correct. What would you do to act against this? Is doing nothing acceptable?

    idle_zealot(10000) 3 days ago [-]

    > I'm surprised that the administration didn't understand this.

    Why would you assume they don't understand? Every time they're questioned about the tariffs the narrative shifts. We have a trade deficit, we're getting ripped off, we want to bring back domestic manufacturing jobs, we'll automate them with robotics and AI, we're playing hardball to negotiate a better trade deal and get rid of fentanyl, it's a matter of national security, an economic emergency, the dollar is overvalued.

    You cannot trust a word from them. If you want to understand why they're doing something you must look only at incentives and outcomes. My current analysis is that there's some internal conflict, but the overall push for tariffs comes from a desire to crash the economy and use the downturn to consolidate wealth and power.

    fullshark(10000) 3 days ago [-]

    Some did understand it I think (maybe not Trump), but were tired of hearing it couldn't be done and decided to try. A large % of Americans are happy at least someone is trying, and at the very least perhaps some lessons will be learned, and the parties will recalibrate their policy platforms to actually accomplish reshoring.

    That's the optimistic POV at least imo.

    kotaKat(1999) 3 days ago [-]

    Missing reason #15: commercial lenders with a brain realize that these tariffs and this self-imposed domestic crisis will dissipate in the next ~6 years. Nobody's going to lend in this market to try to spin up a new greenfield project in the US that will take years to get operational when they can sit and ride it out - ESPECIALLY at these interest rates.

    potato3732842(10000) 3 days ago [-]

    I'm not so sure.

    The tariffs most certainly will dissipate but we can't discount the chance that they may be replaced with actual written in law voted on by congress and signed by the president taxes that have similar but much more durable effects.

    Manufacturing and heavy industry really hates off-shoring. They only do it because the sum total of other policy makes it the only viable option. I can see them taking a decent haircut in pursuit of some longer term goal.

    dehrmann(2607) 3 days ago [-]

    The government could make loans directly and guarantee purchase prices, but it's also stopped making payments congress committed it to, so you'd be crazy to trust any promises from the administration.

    Cthulhu_(3510) 2 days ago [-]

    Not only will it take years to get operational, there is no way it would ever reach the scale and reach of Chinese manufacturing, not in six years, not in sixty. Even if they throw trillions of investor money at it.

    China and others are clearly demonstrating the power of capitalism with state support. The US is too busy infighting and keeping capitalism and politics separate (small government, let the market decide etc). You wouldn't find enough employees that want to work in manufacturing; you'd need millions to even try and get close to what China is doing.

    Now I'm not actually OK with what China is doing, the paragraphs about worker conditions were quite telling. But I will recognize that it gives them the upper hand in manufacturing that the US hasn't had since the 50's.

    (meta: I'm gonna have to specify 'the 1950's soon' don't I?)

    phendrenad2(10000) 2 days ago [-]

    This is a big one. Once upon a time, the Democrats and Republicans listened to the same think tanks, so there was continuity in planning. Now, they seem to be opposed to plans simply because the 'other side' came up with them. The whiplash we've been experiencing has torn the economy apart and scared businesses away.

    kelseyfrog(2243) 3 days ago [-]

    I had to stop reading at the Michael Jordan baseball part. Everything after that wasn't believable anymore. He wasn't that bad at baseball[1].

    1. https://vendettasportsmedia.com/michael-jordan-wasnt-that-ba...

    mikeyouse(3180) 3 days ago [-]

    He was a mediocre AA player... compared to his basketball skill, he did absolutely suck at baseball.

    ks2048(3275) 3 days ago [-]

    He wasn't that bad at baseball compared to a random person or a minor league player.

    He was that bad at baseball compared how good he was a basketball.

    The article seemed correct IMHO,

    > What happened when he switched from basketball to baseball? He went from being an MVP champion to being a middling player in the minor leagues. 2 years later, he was back to playing basketball.

    wormlord(10000) 3 days ago [-]

    I think the collapse of the American Empire is no more preventible than the collapse of the British, Spanish, or Roman empires. The issues with the US being the reserve currency has been known for a while now (and was even predicted by Keynes before the Bretton-Woods summit):

    https://en.wikipedia.org/wiki/Triffin_dilemma

    Any discussion of 'bringing back manufacturing' that doesn't mention government spending or social programs to educate and upskill the population is not genuine. The current leadership are fools and ideologs who will only hasten the decline, which might actually be better globally if it lowers emissions. Time will tell I guess.

    Herring(10000) 3 days ago [-]

    Empires come and go, that's just a fact of life. The question was whether they'd fall back relatively gracefully like (Western) Europe, now with multiple countries ranking at the top of 'World's Happiest Countries', or whether they'll become Russia 2.0 with the biggest guns, richest oligarchs, and the worst quality of life.

    It's still far from played out, but right now they're solidly on the road to Russia 2.0, with decades-long trends pointing that way.

    42772827(10000) 3 days ago [-]

    The American Empire never existed, because it never could. The US made the explicit decision not to occupy the defeated forces after WWII, save for strategic forces in place to protect the interests of the host countries. The US opened its market (the only market of size left and still the largest consumer bases in the world, by far) with no tariffs.

    What the US got in return was cheap goods and a whole lot of debt. What the world got was stability. The US is no longer interested in subsidizing the global order.

    The current discussion re: "bringing back manufacturing" is making the mistake that everyone always makes when Trump is involved: taking him at his word. The point isn't to bring back all manufacturing. The point is to profit off of imports. Some manufacturing will return — whatever is high value added and benefits primary from cheap shipping internally - but nobody thinks that Americans are going to sew t-shirts.

    Also, those who are looking for an American decline as comeuppance for being unkind to allies are going to be sorely disappointed. The US has everything it needs to be self sufficient, and no matter how batshit crazy the leadership is, it's still — still — the safest place to park capital, still the largest consumer market by far (more than twice China), has a stable demographic and a middle class country to its south that brings in lower cost workers as needed. Not to mention being totally energy independent, bordered on two sides by oceans and with more potential port coastline than the rest of the world combined... and also holding the virtually all of the world's supply of high-purity quartz, which is a requirement for semiconductor production.

    adamrezich(3468) 3 days ago [-]

    This is explicitly referenced in "A User's Guide to Restructuring the Global Trading System", written November 2024 by Stephen Miran—current Chair of the Council of Economic Advisers of United States—which outlines the general ideology and strategies behind the current tariff situation.

    https://www.hudsonbaycapital.com/documents/FG/hudsonbay/rese...

    nonethewiser(3585) 3 days ago [-]

    America doesnt really have an empire. What is America's Hong Kong, India, etc?

    JumpCrisscross(69) 2 days ago [-]

    > the collapse of the American Empire is no more preventible than the collapse of the British, Spanish, or Roman empires

    They each had longer runs than we've had.

    My pet theory is lead. From 1950 to 1980 we birthed a leaded generation [1]. Today, up to 60% of American voters were born before 1975 [2]. (Voters born between 1950 and 1980 came into the majority in the 1990s and should fall into the minority by 2028, but only barely. So in summary: Iraq War, Financial Crisis, Covid and Trump 47. It won't be until the 2040s when truly unleaded voters, those born after 2000, command a majority.)

    [1] https://pubmed.ncbi.nlm.nih.gov/35254913/#&gid=article-figur...

    [2] https://www.pewresearch.org/politics/2024/04/09/the-changing...

    PaulHoule(97) 3 days ago [-]

    I think of environmental conflicts that disappears in the US thanks to manufacturing moving to China.

    In the 1990s there were numerous manufacturing plants in the US (two on the South Hill of Ithaca alone) that were found to be contaminated with solvents like

    https://en.wikipedia.org/wiki/Trichloroethylene

    People thought it was great stuff, you wouldn't believe how hard it is to get cutting grease off things after you turn them on a lathe and vapor de-greasing makes it go away just like that.

    China has some of the most advanced agriculture on the planet including a 'vertical farm' that can sell straw mushrooms for about $2 a pack where they are super-proud that humans only touch them with a forklift. (Contrast that to the labor-intensive mushroom farms of Pennsylvania where somebody cuts each one with a knife.)

    We are pretty omnivorous (I think mealworms start with 'meal') and my obsession with anime and Japan has turned into serious sinophilia but my wife and I are hesitant to eat 'Chinese Food' grown in China because of widespread environment contamination, I mean they've been building up heavy metal stocks ever since Emperor Qin Shi Huang poisoned himself with mercury.

    pjc50(1402) 3 days ago [-]

    Yeah, it's underrated how the Chinese boom just did not care for environmental impact, and because political organizing is banned the public are limited in how much they can complain about it.

    It used to be a thing that people were importing massive quantities of baby formula to China because they didn't trust locally manufactured stuff.

    dfxm12(10000) 3 days ago [-]

    Why would obsession with anime and (I assume Jaoan is a typo for) Japan lead to sinophilia?

    You know sinophilia means 'love of China', and that anime and Japan are not Chinese, right?

    nyeah(10000) 2 days ago [-]

    Fine, we underestimate the difficulty. But we can make a detailed plan like other countries do. The US has massive advantages. Just no longer so massive that we can expect to win on sheer awesomeness.

    I feel like we in the US have a horrible split evaluation of ourselves: either we're supreme or we're doomed. Both sides of that split are emotional states, not useful facts.

    acdha(2928) 2 days ago [-]

    > But we can make a detailed plan like other countries do

    The problem isn't that we don't know this: it's that the person making the decisions rejects the idea of needing to make a detailed plan, or even understand the situation well enough to recognize the problems a plan would need to address.

    thctphr(10000) 2 days ago [-]

    I don't think it's realistic to bring manufacturing back, so to speak. Are the words being taken literally here? Does this truly mean Orange Man wants to bring all manufacturing back to the United States, or do we want to weaken our largest competitor and buy those cheap products in other countries who are less of a threat, speaking in terms of their technological advancement and economical trajectory?

    seanmcdirmid(2911) 2 days ago [-]

    China has been moving cheap product production to SEA for awhile now, what the USA wants is countries like Vietnam to make cheap products without Chinese involvement in the manufacturing tech and supply chains...which is pretty much impossible.

    elbasti(2838) 3 days ago [-]

    Like OP, I work in manufacturing (after 15 years in startup land). I'm not as experienced as him, but I work in manufacturing that makes similar products on both sides of the US/Mexico border.

    Let me add some thoughts:

    1) Capacity, not cost, is the main driver for nearshoring. All things being equal, a manufacturer would rather produce a product in the US than overseas. The cost of modern products is mostly parts & material, not labor. When you add logistcs expenses, the theoretical cost advantage of overseas vs local is not that great. Remember:the people on the other side of the border are capitalists too! They want to keep most of the surplus of nearshoring to themselves! The problem is that there simply is no capacity, both in facilities and especially in people.

    2) What matters even more than capacity is the first derivative of capacity. In other words: how quickly can I spin up a new factory if I win a big deal? How quickly can I spin one down if the client goes away? How long will it take me to get a permit to connect my new factory to the highway? In the US, these costs and timelines are massive. Real estate, permitting, hiring. There is an order of magnitude difference here, in cost and time.

    3) The labor problems are real. I don't want to disparage the american workers I work with, because they are amazing. Truly fantastic craftsmen. But they are hard to find. You'd be surprised how many people show up who can't read or can't read a tape measure. How hard it is to find people that want to work 8 hours a day, 5 days a week. By contrast, in our overseas facility we have qualified workers literally showing up at our gate every day asking for work.

    In other words, the root cause problems with american manufacturing are—-surprise surprise!--the same problems as with other parts of the US that are in decay:

    - Disfunctional local government, especially around permitting, construction, housing and transit

    - Disfunctional education & healthcare systems.

    - A lack of strategic investment in infrastructure (rail, highways)

    - A social safety net that is totally out of whack, with a high cost burden for employers & employees, with little to no immediate quality-of-life benefits for the working population

    Tariffs solve exactly zero of those probems!

    franktankbank(10000) 3 days ago [-]

    The cost of manufacturing your stuff is not labor dependent only because you are probably putting together low cost components made with cheap labor. What if you had to make the spring or the resistor or the little painted metal box? Could you do that without labor being the big cost?

    Workaccount2(3572) 3 days ago [-]

    I think most people have a very confused understanding of money(currency) and value. Workers produce value, not money. Workers get a cut of that value, which is converted to money. To get by comfortably in the US, a first world developed economy, you need to be producing a lot of value. Everything is made to accommodate high value workers.

    Producing t-shirts, window fans, or toilet brushes is not high value work. The slice of value available to convert to currency for the worker is very tiny. So you end up having to play games with the economy which inevitably will blow up in someone's face. $60 t-shirts so we can pretend that the value in a t-shirt is much more than it is, so we artificially make t-shirt manufacturing competitive with, say, automobile manufacturing.

    californical(10000) 3 days ago [-]

    I somewhat agree with your point, but it's also important to include the other side of that pricing.

    If it actually costs $60 (really more like $25 for made-in-America t-shirts I've bought) to make a t-shirt, with environmental regulations and human costs accounted for, then isn't that the actual cost of a t-shirt? And they were artificially cheap at $10 for imported ones due to ignoring externalities? In that case, producing these simple products is actually a bit more valuable than you suggest.

    charlie90(10000) 3 days ago [-]

    I disagree with this. Everybody wears clothes. Everybody eats food.

    You can't put a monetary value on a t-shirt, because people will buy them anyways. Who is to say that t-shirts aren't $60? People only think that t-shirts are 'low value' because we have offshored the labor and are used to very low prices. Meanwhile I bet most Americans can't even sew.

    bluGill(10000) 3 days ago [-]

    You are missing something: quantity. A toilet brush itself is low value, but the US needs 30 million per year (this is a guess, but it seems reasonable enough - every person buys one every 10 years, which seems right based on how long they last. I am likely off, but probably not by an order of magnitude so let us use that number for discussion unless/until someone really wants to find a better number). If you can make/sell a million brushes per year with a gross profit of $1 on each that is a million dollars, if labor and the machines are amortize to $.50 each you net profit is then $500k/year - many small company CEOs would be happy with that.

    You can run the numbers many different ways, but the important point is low value production is always about volume.

    greenie_beans(1490) about 23 hours ago [-]

    now do marx's labor theory of value

    ranadomo(10000) 3 days ago [-]

    > Let's focus on America's strengths in high end manufacturing, agriculture, and innovation instead of applying tariffs to all countries and products blindly. We should be taxing automated drones for agriculture at 300% to encourage their manufacture here, instead of applying the same blanket tariff of 54% to that that we apply to t-shirts.

    Everything wrong and right with the author's thesis. Our present day high-end manufacturing, agriculture, and innovation are already facing the steepest tariffs from a broad range of countries. The uneven playing field extends to IP theft, heavily subsidised and protected industries abroad and other forms of unfair competition like port traffic manipulation or burdensome legislation.

    The author think that 'targeted tariffs' would have a different effect from what we see now with trade war and retaliatory threats, market instability and uncertainty. This is false, but also ultimately harmful to our 'agricultural drone industry'. It's hard to have a niche industry without the larger picture, and it's hard to have 'drones' without knowing how to manufacture constituent parts and having a reliable domestic supply chain for such. A domestically sourced supply chain encourages innovation and adaptation to immediate customer demands and goods can arrive in days or hours instead of weeks or months. Innovative requests to parts makers aren't immediately harvested by Chinese industrial spies and knowledge and technological advantage can remain local for longer, allowing for time to progress again before others can catch up.

    Encouraging lazy and unoriginal drone manufacture in moated 'made in USA' assembly lines is precisely the low-end type of job that 'no one wants to do' and will inevitably produce the least capable drones the least efficiently or profitably. Our manufacturing and industrial capacity needs to be the world's best and most cost competitive because nothing else will do.

    Only automation can save American industry. There will be 'fewer' jobs but they will require skill and training. Robot management and supervision and repair and update and retooling will all require a large labor force. Creating robots and the software they run on will continue to be an important and large sector of the software industry. But manufacturing is only about jobs in the way that having a healthy agriculture industry is 'about jobs', hardly at all.

    Manufacturing real goods is the difference between servility and freedom given that modern war in the nuclear age also entails producing billions of tonnes of metal and blowing it up in distant countries, and could require replacing percentages of the global shipping tonnage that would be destroyed in a major conflict. It requires manufacturing thousands of substation transformers and the aa systems to defend them.

    If we had invested strategically into a variety of heavy and light industries over the past 30 years, we almost certainly would have invented better processes and formulae for making things than we currently possess. We could have more globally competitive steel, even more advanced finished products and the knowledge and experience to 'make anything better and more profitably than anyone'. Industrial production and manufacturing make up roughly 15% of US GDP today. 'Bringing back manufacturing' might increase that number significantly but it's hard to see how or why it would need to be more than 30% outside of wartime. That wouldn't even require a doubling of the jobs involved because much of this would have to be automated.

    I agree with the author's emphasis on education and 'fixing' things being critical in the execution of any kind of industrial renaissance. If the tariff fight lowers tariffs globally, that is a small move in the right direction of leveling the playing field and rewarding domestic producers who are globally competitive.

    bluGill(10000) 3 days ago [-]

    Robot drones probably are something the US should do. Access to US farms is useful for anyone making agriculture products. Remembers these drones are part of the supply chain for food, and so doing them in the US makes the supply chain closer. You want the ag drones made in small city, not Silicone valley. However your might write the software in Silicone valley - that is where you will find a supply of people who can do that - some of those people will then be making regular trips to the factory though to learn how it works.

    acyou(10000) 3 days ago [-]

    This article seems to be full of propaganda and downright lies. For instance, there are plenty of tool and die makers left in the USA, plenty of injection molding machines. I have personally seen them and met the tool and die makers as well as the machines making the molds.

    It's difficult to address the giant article full of misrepresentations point by point. It's tough to see it up at the top of HN. Wish that I could do something to correct the misinformation that is being disseminated.

    This person has a vested interest. They manufacture cheap crap in China (or Vietnam, I don't care) for American kids to suck on. What more do you need to know?

    mindtricks(10000) 3 days ago [-]

    If you feel there are misrepresentations, then just pick one point and discuss that. I've worked in manufacturing-dependent companies and industries, and lived in China for years. His observations don't feel entirely off-base to me and fit much of what I've observed. So if there is something wrong here, help us clarify one part of it.

    NoTeslaThrow(10000) 2 days ago [-]

    We never stopped manufacturing, we just stopped employing people.

    > We don't have the infrastructure to manufacture

    That's trivially false given we're the second-largest manufacturer in the world. We just don't want to employ people, hence why we can't make an iphone or refine raw materials.

    The actual issue is that our business culture is antithetical to a healthy society. The idea of employing Americans is anti-business—there's no willingness to invest, or to train, or to support an employee seen as waste. Until business can find some sort of reason to care about the state of the country, this will continue.

    Of course, the government could weigh in, could incentivize, could subsidize, could propagandize, etc, to encourage us to actually build domestic industries. But that would be a titantic course reversal that would take decades of cultural change.

    nickpsecurity(3676) 2 days ago [-]

    Which means policies that reverse that are immensely important. The process of offshore our jobs and much I.P. took decades. Getting them back and rebuilding manufacturing support will take a long time, too.

    Just need to make steady progress each year with incentives that encourage large leaps in progress.

    glitchc(10000) 2 days ago [-]

    Concur, employee training and retention are at an all-time low. There are no positions available for junior employees, minimal onboarding and mentoring of new employees. Organizations have stopped planning people's careers. Used to be that the employee's career growth was their manager's problem, while the employee could focus on the work. Now the employee must market themselves as often, if not more often, than actually doing the work. Meanwhile organizations see employees as cost centres and a net drain on their revenue sources.

    Corporate culture in America is definitely broken. I'm not sure how we can fix it.

    epolanski(10000) 2 days ago [-]

    > We just don't want to employ people

    I don't think it's a matter of willingness, but simple global geo economics.

    There's places where producing A, whatever A is, is economically more efficient for countless reasons (energy prices, logistics, talent, bureaucracy, cost of labor, etc).

    That's not gonna change with whatever investment you want or tariff you put.

    But the thing I find more absurd, of all, is that I'd expect HN users to be aware that USA has thrived in the sector economy while offloading things that made more sense to be done elsewhere.

    I'd expect HN users to understand that the very positive trade balances like Japan's, Italy's or Germany's run are meaningless and don't make your country richer.

    Yet I'm surrounded by users ideologically rushing into some delusional autarchic dystopia of fixing american manufacturing for the sake of it.

    AndrewKemendo(1455) 2 days ago [-]

    This is the root issue

    The idea that "labor is cheaper elsewhere" is simply a neutral statement of economics is wrong — "lower living standards" is not just a economic measure, it's a political statement about the value of labor and labor conditions. The US and by extension the "western capitalist world" has been exploiting labor since day 0 with chattel then later globally slavery.

    The reason Japan was the biggest manufacturer exporting to the US post war, is because the SCAP forcibly rewrote their constitution to be explicitly capitalist. Read "Understanding Defeat" for detailed proof of the 7 year occupation of Japan, explicity to destroy any semblance of Japanese imperial/keretzu culture, and replace it with explicitly capitalist structure. To be fair to MacArthur, they did suggest labor practices, like unionization, but it was a thin veneer suggestion, not forced into cooperatives and syndicates.

    China moved into that position post 70s, because Japanese labor began getting "more expensive." Nixon and Kissinger saw an opportunity to exploit "cheap" labor because there were no protections for workers or environmental protections - so "opening up china," plus the Nixon shock and floating of interest rates allowed for global capital flight to low cost slave-like conditions. This is why labor and productivity began to separate in 1971, there was a "global south" that now could be exploited.

    NAFTA made Mexico and the southern americas the agricultural slave countries etc...starting in the 90s, and on and on just moving the slave-wage ball until there's nowhere else to exploit.

    It's not a conspiracy to demonstrate that capital will move wherever it needs to in order to exploit "arbitrage opportunities." Its good business/MBA capitalism 101.

    Just like #2 in Austin powers said:

    > Dr. Evil, I've spent 30 years of my life turning this two-bit evil empire into a world-class multinational. I was going to have a cover story in 'Forbes'. But you, like an idiot, wanted to take over the world. And you don't realize there is no world anymore. It's only corporations.

    42772827(10000) 2 days ago [-]

    The last time we got employers to care about employees it was because the unions dragged the bosses into the streets and beat the daylights out of them.

    palmotea(10000) 2 days ago [-]

    > The actual issue is that our business culture is antithetical to a healthy society. The idea of employing Americans is anti-business—there's no willingness to invest, or to train, or to support an employee seen as waste. Until business can find some sort of reason to care about the state of the country, this will continue.

    I think you're exactly right there.

    >> We don't have the infrastructure to manufacture

    > That's trivially false given we're the second-largest manufacturer in the world.

    I want to quibble with that a little bit. I don't have the numbers, but relative position matters too. The US could be 'second-largest manufacturer in the world' if it only manufactures Dixie cups, other countries manufacture nothing, and China manufactures everything else.

    My understanding is Chinese output is so huge, that even if the US had maintained steady or modestly growing manufacturing output from the 70s or whatever, it would be dwarfed by China.

    paul7986(10000) 2 days ago [-]

    How many Americans are dying to and will do tedious labor (not many), as well robots, automation and AI can do a lot of it and or will end up doing a lot of it.

    If we want to strengthen America (military & economy) immigration reform is needed! This could be unpopular but such reform could be ...those who want to come here must serve in our armed forces for x amount of years and can bring two to four family members here that are able to start working and contributing to the economy immediately (pay taxes). Rounding up and getting of rid of these eager want to be Americans when we have adversaries with larger armies and we bang the drum on beefing up defense (and our economy) doesn't make sense to me.

    Suppafly(10000) 2 days ago [-]

    >That's trivially false given we're the second-largest manufacturer in the world.

    Sure, but we don't manufacture the things that are typically made in 3rd world countries and the lead time to build that infrastructure is years, and generally would result in us moving down the tech tree ladder from being a consumer economy to a manufacturing economy with all of the negatives associated with that.

    giancarlostoro(3167) 2 days ago [-]

    > Until business can find some sort of reason to care about the state of the country, this will continue.

    The best financial years Puerto Rico had ended when the tax incentives to be there went away. It's a real shame. Puerto Rico was #1 in production, above the US and Japan. You could buy something made in Puerto Rico and you knew it was a high quality product. Its much harder to gain back that level of quality once you've effectively killed such a culture, I can only imagine the detriment in Japan if they lost their work culture and how much harder it would be for them to regain it.

    strict9(2754) 2 days ago [-]

    >We just don't want to employ people, hence why we can't make an iphone or refine raw materials.

    This is it. Aside from manufacturing, most recent AI startups are almost universally aligned in the desire to use it to reduce headcount. It's plastered all over their landing pages as a selling point: 'use our product and you won't have to hire people.'

    Business culture is eating its own young and hollowing out the future with such empty goals and sales points.

    I'm skeptical of actual results. There are a lot of layoffs attributed to AI but far fewer cases of increased sales attributed to it.

    jmyeet(10000) 2 days ago [-]

    We produce weapons. We are an arms dealer empire.

    Our biggest exporter is Boeing and sure Boeing produces commercial aircraft but their position has a lot to do with inertia as the accountant leadership of Boeing is doing their best to destroy Boeing by nickel-and-diming every aspect with a complex web of outsourcing that will fall apart the second there is any disruption in international trade.

    What China has now is the infrastructure and ecosystem to manufacture. You need some tiny screws made of titanium? Well, there's a factory that produces that down the street.

    partiallypro(10000) 2 days ago [-]

    > We never stopped manufacturing, we just stopped employing people.

    I don't think it's just that. We manufacture, but we aren't great at the entire chain. China is much better are specialized tooling, etc. We have definitely lost a lot of knowledge in critical parts of the chain.

    korse(10000) 2 days ago [-]

    I'm American and heavily involved in manufacturing for industrial/mining/agricultural customers.

    'We just don't want to employ people' is a gross simplification. We do want to employ people, and lack of skilled labor is a serious problem which has hampered business growth for years,

    The first unspoken problem is that very few young people want to live where many factories are located. I can't blame them. I certainly jump through hoops to live in an area well removed from the industry I work in but not everyone has this luxury.

    The second is psychological. How many kids do you know who are ready to commit to a future of 35+ years of factory work in their early twenties, even with reasonable pay. This influences manufacturer's hiring practices because of the 'skilled' labor thing. Putting time and resources into training employees when there is a high probability they will make a career change within 3 years isn't really acceptable.

    This is HN, so I don't know if this resonates but as a thought experiment, would you take a welding/machine operation/technician position for 25 - 45 USD/hr (based on experience)? Overtime gets you 1.5 base rate and health insurance + dental + 401k is part of the deal. All you need is a GED, proof of eligibility to work in the United States and the ability to pass a physical + drug screen on hiring. After that, no one cares what you do on your own time if you show up, do your job and don't get in an industrial accident. Caveat, you have move away from anything remotely like a 'cultural center' but you do have racial diversity. Also, you will probably be able to afford a house, but it won't be anything grand or anywhere terribly interesting.

    There is a dearth of applicants for jobs exactly like what I've posted. Why don't people take them?

    owlstuffing(10000) 1 day ago [-]

    > We never stopped manufacturing, we just stopped employing people.

    That's a misleading oversimplification. While it's true we haven't stopped manufacturing, we did offshore a massive portion of it--especially after the Open Door Policy with China and subsequent free trade agreements. That shift didn't just change where things are made; it fundamentally altered corporate incentives. Once production moved overseas, the need to invest in domestic labor--training, benefits, long-term employment--shrank accordingly.

    jdietrich(10000) 1 day ago [-]

    The problem is that we're talking about 'manufacturing' as one big homogeneous thing. The US obviously makes a bunch of stuff, but it has very limited ability to make lots of kinds of stuff, especially in a hostile trade environment.

    The US manufacturing sector is about half the size of China's in terms of value-add, but it's much smaller by any other measure. The US has focussed on high-value verticals like aerospace and pharmaceuticals, where intellectual property provides a deep moat and secure profit margins. That kind of manufacturing doesn't produce mass employment for semi-skilled or unskilled workers, but it does create lots of skilled jobs that are very well paid by global standards.

    That's entirely rational from an economic perspective, but it means that US manufacturing is wholly reliant on imports of lower-value materials and commodity parts.

    A Chinese manufacturer of machine tools can buy pretty much all of their inputs domestically, because China has a really deep supply chain. They're really only dependent on imports of a handful of raw materials and leading-edge semiconductors. Their US counterparts - we're really just talking about Haas and Hurco - are assembling a bunch of Chinese-made components onto an American casting. To my knowledge, there are no US manufacturers of linear rails, ballscrews or servo motors.

    If the US wants to start making that stuff, it's faced with two very hard problems. Firstly, that it'd have to essentially re-run the industrial revolution to build up the capacity to do it; secondly, that either a lot of Americans would have to be willing to work for very low wages, or lots of Americans would have to pay an awful lot more in tax to subsidise those jobs.

    It's worth bearing in mind that China is busy moving in the opposite direction - they're investing massively in automation and moving up the value chain as quickly as possible. They're facing the threat of political unrest on a scale they haven't seen since 1989, because of the enormous number of highly-educated young people who are underemployed in unskilled and semi-skilled jobs.

    Lots of Americans want to bring back mass manufacturing employment, but very few of them actually want to work in a factory. You can't resolve that contradiction through sheer political will.

    dimal(10000) 1 day ago [-]

    It's shareholder capitalism. Capitalism can be a great thing, but shareholder capitalism defines profits as the only reason for a corporation to exist. Humans are simply resources to extract work or profit from, and destroying the future of the country is an unfortunate externality. CEOs are obligated to behave like sociopaths. Lying, cheating, stealing, and subverting democracy are all good business if it returns value to shareholders. We see this over and over again, and wonder why our society is so fucked up.

    And since every major corporation is behaving like this, even if a CEO wanted to give a shit about the country, they can't do anything about it because someone else will be more cutthroat than them and eat their lunch.

    hinkley(10000) 1 day ago [-]

    This is even showing up a bit in tech now. The number of places that expect some articulation Venn diagram of skill sets is too high.

    There are too goddamned many stacks to expect that your best hire is going to already have used everything you're using. There are people who have used everything, but you're mostly going to be hiring flakes if you look for those, not Right Tool for the Job types.

    mystified5016(10000) 1 day ago [-]

    I think it's worth specifying even further: wealthy business owners don't want to pay what a US employee costs.

    Most jobs are wholly unsustainable. You have to job hop every couple of years to keep up with inflation because God knows you're not getting a raise that keeps you comfortable.

    This has led to churn and brain drain and the slow collapse of US domestic business.

    It's not that people don't want to work, it's that wages have fallen so far behind the cost of living that it's financial suicide to stay in any one job. Even with all the traps like employer sponsored healthcare, most people just can't afford to be paid the pittance most businesses are willing to pay.

    This is a deep societal illness in the US. We've glorified and deified the concept of greed to the point where even talking about income inequality and the unimaginable concentration of wealth is just anathema. It's seeped into the everyday consciousness in the form of 'I'm the only one that matters, fuck absolutely everyone else'

    I genuinely believe that America will never, ever recover until we address this. We will always be this sick and broken country until the state entirely collapses or we get our shit together and address income inequality.

    I have some real serious doubts that we'll ever get there, but it's easy to be pessimistic.

    adrian_b(10000) 1 day ago [-]

    Most companies that do manufacturing in USA are oriented to making business-to-business products, where high margins can be achieved.

    As an European, there have been many decades since the last time when I have seen any competitive 'made in USA' product that is intended to be sold to individuals.

    There are products that I buy, which have been designed in USA, e.g. computer CPUs, but none of them have also been made in USA.

    When I was young, it was very different, there were many 'made in USA' products that could compete with those made elsewhere.

    cashsterling(10000) about 21 hours ago [-]

    100% agree with you!

    I have worked US manufacturing and manufacturing R&D for most of my career: pharmaceutical, microelectronics, materials, aerospace, etc. The US is awesome at manufacturing when we want to be.

    One problem is that 'modern MBA/business philosophy' views manufacturing and manufacturing employees as a cost center and there is so much emphasis on maximizing gross margin to increase shareholder value.

    So business leaders scrutinize the hell out of anything that increases the cost of their cost centers:

    - employee training & development? hell with that.

    - Increasing pay to retain good employees in manufacturing? Why? isn't everything mostly automated?

    - manufacturing technology development? Not unless you can show a clear and massive net present value on the investment... and, then, the answer is still no for no good reason. I have pitched internal manufacturing development investments where we conservatively estimated ~50% internal rate of return and the projects still didn't get funded.

    There is also a belief that outsourcing is easy and business people are often horrible at predicting and assessing the total cost of outsourcing. I have been on teams doing 'insource vs. outsource' trade studies and the amount of costs and risks that MBA decision makers don't think about in these situations really surprised me initially... but now I'm use to it.

    Anyhow... the US (and Europe for that matter) can absolutely increase manufacturing. It is not 'difficult'... but it would be a slow process. I think it is important to differentiate between difficulty and speed.

    sightbroke(10000) 3 days ago [-]

    I am by no means an export on manufacturing, nor international trade, economics, or virtually anything relevant to manufacturing. Just a layman here.

    Observationally I fear there is a lack of nuance in discussing 'bringing back manufacturing' (really re-expanding) to the U.S.

    I fear the lack of nuance is due to bias based on not liking the guy in the red tie or the other guy that's in a blue tie so there's just blinders about whether or not a particular policy will achieve a particular stated goal.

    The next thing I see is it just lumping manufacturing all into one bucket.

    Take manufacturing smartphones. Because the U.S. doesn't assemble iPhones the U.S. appears to be bad at manufacturing? No, I think it's just not good at assembling iPhones.

    Just looking at numbers, sure the U.S. steel production is dwarfed by China but globally it's still a major producer. And there's no discussion of quality.

    Look at oil & gas. I'm pretty sure the U.S. both produces the raw material and refined product at a significant amount globally.

    Plastic manufacturing. I toured a bottle manufacturing plant last summer. It's primary a customer was Limited Brands (Victoria Secret)

    It built molds. It upgraded factory equipment roughly every 8 years (increasing production & reducing labor costs). Why was it able to manufacturer bottles in the U.S. even it's selling at a higher price? Because it's primary customer was essentially down the street. That is, apparently the cost to not import across the globe more than offset the cost to manufacture here.

    I understand that's just an example and I'm trusting the information from that company was reliable.

    But first I think we need to be honest about how much manufacturing is here and what type. Then discuss which policies are likely to achieve goals we may have in mind.

    I think there's merit to manufacturing semiconductors and batteries here. But we need to also be aware that while manufacturing may bring jobs, an increasing amount of labor will be automated.

    aaronbaugher(10000) 3 days ago [-]

    Yes, there's little nuance. I see so many people saying it will be hard to bring back manufacturing jobs, or 'we can't go back to the 50s,' and then they just stop as if that settles the argument. The implication, which they never say out loud, is that we shouldn't even try, just accept things as they are. Just be the Big Consumer until someday the rest of the world doesn't want our dollars anymore, and then what?

    Seems much better to look seriously at the manufacturing we still have (as you say, it's considerable), where we can expand on that, and where we're lacking and need to rebuild.

    lerp-io(10000) 2 days ago [-]

    earth doesn't need more factories, consumer shit needs to be printed out of some sort of organic material that is able to decompose quickly.

    gabrielgio(10000) 2 days ago [-]

    or change the consumer habit to consume less, and/or change how things are produce in order to them last longer (reduce planned obsolescence) or even better we rebuild the system to serve human needs instead of feeding capitalism's endless growth.

    alkonaut(10000) 2 days ago [-]

    7. Uncertainty seems overlooked these days. The job of politicians is to make people and businesses dare. Making people dare getting an expensive education or starting a business or hiring your first employee or whatever it might be. What that requires will vary (if it's a social security system or a tax break for new companies or whatever). But something it always requires is trust in the stability. That the calculus for an investment is valid over N years. That laws or taxes don't swing wildly with political cycles.

    mlinhares(10000) 2 days ago [-]

    That has been the bane of brazil for decades, every politician, at every level, undoes or stops whatever the previous politician was doing so there's absolutely no guarantee what you're doing today will still work tomorrow.

    Its a terrible state and situation to invest in a business doesn't benefit anyone. My hometown had a large cultural center built by the mayor, he couldn't run for reelection again, new mayor is elected, completely ignores the whole thing was built and lets it rot. Everything is only done for an election cycle, the next cycle could bring something else entirely.

    Its terrible to live in a place like this, Americans have no idea how bad this is going to be for the country.

    dghughes(10000) 2 days ago [-]

    Even if you guys did rebuild e.g. textile factories down there in crazy land you're not going to pay workers $300/month to be able to compete globally. Nobody wants to pay $1,000 for a pair of underwear.

    eYrKEC2(10000) 2 days ago [-]

    Tariffs don't help you compete globally -- they're about disadvantaging the global in favor of the local.

    Someone may be able to pay workers $300/month and make them work the '996 working hour system'[1], but if they then have to mark up the end product by 100%, the disparity between local and global price to consumers narrows.

    [1] https://en.wikipedia.org/wiki/996_working_hour_system

    blindriver(10000) 2 days ago [-]

    The amount of pooh-poohing of this idea is even more than I would have expected from HN, despite tech's love of belittling others ideas.

    The reason we need manufacturing is because the middle class is decimated. None of us tech workers feel it because we don't live in neighborhoods that have been decimated by it. We have all benefitted from globalization immensely but we don't have neighbors, families or friends that have been destroyed by it.

    Too many people say it will take "years" to get factories operational. That's why Elon is there. He knows and has done this, to point out which regulations need to be axed in order to improve the time to market for new factories. Trump will listen to him and get rid of any regulation that doesn't make sense, or even regulations that do make sense but take too much time. For better or worse factory building will be faster over the next 3 years.

    Now that we have these greenfields for new manufacturing opportunities, instead of standing there with your arms crossed, shaking your head why the idea won't work, how can you take advantage of this new opportunity to get rich?

    pif(3653) 2 days ago [-]

    > We have all benefitted from globalization immensely but we don't have neighbors, families or friends that have been destroyed by it.

    Blue collar workers were the first to push for globalization, because they suddenly could afford a lifestyle that used to require the salary corresponding to a couple of steps upper in the corporate ladder. A blue collar salary suddenly could provide for many more amenities... til the salary was no more!

    Everyone wants manufacturing back, but only for the products they can produce, because everyone still wants to buy at Chinese prices.

    Furthermore, the regulations that most stand in the way of cheap manufacturing are environmental regulations, and good luck with that! We have got used to breathe clean air, and I feel most people still love clean air more than they hate globalization.

    qgin(10000) 2 days ago [-]

    Given what's likely to happen with with AI and robotics over the next 10 years, all this debate about bringing back manufacturing jobs is pretty silly

    daveguy(10000) 2 days ago [-]

    There is no technological path to AGI, much less intelligent robots, in the next 10 years. Everyone underestimates the massive amount of parallel processing going on in a single human brain. That doesn't even consider how massive the sensor array is. The doublings required for our artificial technology to catch up is about 25-35 years, maybe more depending on how much Moore's Law slows down.

    Havoc(10000) 2 days ago [-]

    The part that blows my mind is timing. It's going to take years to get anything up and running. Yet tariffs are cutting supply immediately.

    wtf is the plan for the 5-10 years in between?

    chewbacha(3349) 2 days ago [-]

    oligarch buy up of failed industries. Then we all live as renters.

    thyristan(10000) 2 days ago [-]

    Building a new factory needs a few years from idea to start of planning to production. 2 years if you are really really quick maybe, 4 to 6 years might be more realistic. The term for the current administration ends in 3.5 years and the next one probably won't be lead by Trump, so things will change.

    This means that nobody will even start moving production back yet, they will pay lip-service, do the minimum to get along for this term, and hope for the best for the next one.

    potato3732842(10000) 2 days ago [-]

    Politicians have been running on platforms of about undoing the damage of offshoring since Obama's first term at least, now here we are in 2025 and someone just won an election and it played a key role so clearly it's a big important thing and it's reasonable to expect it to stick around as an issue on the official party platforms. There is a non-negligible chance that in 2029 there will be someone in the white house who continues to push in that direction, even if the specific policy is very different from the current tariff policy.

    The wise thing to do is to at least make steps in the direction of on-shoring or at least make your plans and investments compatible with it.

    greenie_beans(1490) about 23 hours ago [-]

    > Chinese workers work longer hours more happily and they're physically faster with their hands; they can do things that American labor can't. It's years of accumulated skill, but it's also a culture that is oriented around hard work and education that the United States no longer has. In China, there are no people who are too fat to work. The workers don't storm off midshift, never to return to their job. You don't have people who insist on being paid in cash so that they can keep their disability payments, while they do acrobatics on the factory floor that the non-disabled workers cannot do.

    he knows a lot about manufacturing but weirdly not much about labor. very unsubstantiated, derogatory comment.

    it gets worse!

    > In China, there are no people who are too fat to work. The workers don't storm off midshift, never to return to their job. You don't have people who insist on being paid in cash so that they can keep their disability payments, while they do acrobatics on the factory floor that the non-disabled workers cannot do.

    > Chinese workers are much less likely to physically attack each other and their manager. They don't take 30 minute bathroom breaks on company time. They don't often quit because their out-of-state mother of their children discovered their new job and now receives 60% of their wages as child support. They don't disappear because they've gone on meth benders. And they don't fall asleep on a box midshift because their pay from yesterday got converted into pills.

    > And they can do their times tables. To manufacture, you need to be able to consistently and accurately multiply 7 times 9 and read in English, and a disturbingly large portion of the American workforce cannot do that.

    like the fuck? where are your sources? this sounds like some ignorant shit to say

    monetus(10000) about 18 hours ago [-]

    It is extraordinarily malicious, and reminds me of Michael Richards.

    beachtaxidriver(10000) about 15 hours ago [-]

    Lol that was my reaction too, this guy is an asshole. He should just leave.





    Historical Discussions: How to win an argument with a toddler (April 15, 2025: 704 points)

    (704) How to win an argument with a toddler

    704 points 3 days ago by herbertl in 214th position

    seths.blog | Estimated reading time – 2 minutes | comments | anchor

    You can't.

    That's because toddlers don't understand what an argument is and aren't interesting in having one.

    Toddlers (which includes defensive bureaucrats, bullies, flat earthers, folks committed to a specific agenda and radio talk show hosts) may indicate that they'd like to have an argument, but they're actually engaging in connection, noise, play acting or a chance to earn status. It can be fun to be in opposition, to harangue or even to use power to change someone's position.

    An argument, though, is an exchange of ideas that ought to surface insight and lead to a conclusion.

    If you're regularly having arguments with well-informed people of goodwill, you will probably 'lose' half of them–changing your mind based on what you've learned. If you're not changing your mind, it's likely you're not actually having an argument (or you're hanging out with the wrong people.) While it can be fun to change someone else's position, it's also a gift to learn enough to change ours.

    The toddler puts on a show of having an argument, but they are holding a tantrum in reserve. If they 'win' the argument, no tantrum is needed. If they lose, they can tell themselves that they tried but the other person deserved the tantrum because they didn't listen.

    "Tell me about other strongly-held positions you've changed as the result of a discussion like this one..." is a direct way to start a conversation about the argument you're proposing to have. "What sort of information would make it likely you could see this in a different way?"

    It probably doesn't pay to argue over things we have chosen to believe as part of our identity.

    April 14, 2025




    All Comments: [-] | anchor

    kelseyfrog(2243) 3 days ago [-]

    There's a downside to loosening up the mental resistance to mind-changing - you're more susceptible to cult indoctrination.

    You can look no further than the Rationalist community who have internalized this to such a degree that cults are endemic to the community. Sure, there's positives to being open to changing one's beliefs, but like all advice, it's contextual. Some people probably do need to loosen up, but they are the least likely to do so. Those who hold their beliefs too loosely, could stand to tighten that knot a little more.

    weakfish(10000) 3 days ago [-]

    Can you elaborate a bit more on the rationalist community's perceived cults? I've only dipped my toes into places like LessWrong, so I am curious what you see there.

    Matticus_Rex(10000) 3 days ago [-]

    So I'm open to changing my mind on this, but — having already been familiar with the evidence you posted below and having been adjacent to these circles for a long time — I'm very skeptical of both the claim generally that cults are endemic to the Rationalist community, and even moreso, specifically that it has anything to do with Rationalists holding beliefs loosely.

    The Zizians are absolutely a cult. But did they get there by changing their beliefs too easily?

    I think that's a really tough case to make -- one of their chief characteristics is their extreme slavishness to some particular radical views. These weren't people who jumped around often ideologically. Several of the Zizians (of whom there were never many) also weren't rationalists first. Where's the case that this is a result of Rationalism influence, or particularly that holding beliefs loosely was the problem? A handful of (the many) ex-rationalists forming a cult doesn't seem like strong evidence.

    Leverage was certainly a high-demand social circle, and some people came out with some damage. I know others who were involved briefly, got no cult vibes, had no issues, and had a good experience with Leverage programs. Note also that a number of the 'cult' claims came from Ziz and Ziz's friends, who even separately from Ziz influence have not tended to be particularly stable people — this doesn't mean they're wrong, but I do update a bit based on that. And Vassar definitely had a penchant for seeing vulnerable people near crisis and suggesting that they take drugs, which is generally stupid and harmful.

    I don't think it's particularly useful to call leverage a 'cult' even if there's some overlap, but if it is, is it because of Rationalists' willingness to change their minds? Again, I'm very skeptical. Vassar looked for people who were a little bit crazy/unstable, and did influence them to change their minds. But he didn't do this because he was looking to prey on them, and often engaged in ways that don't seem cultish at all — he did it because those were the people who understood him, because he was also a bit crazy/unstable!

    Alternatively, what other explanatory factors are there for two cults closely adjacent to Rationalism? 1. Base rates. Have you been to the Bay Area? Cults are everywhere. Seriously, I suspect Rationalists are well-below the base rate here. 2. Very smart people who are also atypical as thinkers seem to be more susceptible to mental health issues, and in many cases these people from otherwise-vulnerable groups (e.g. almost all of the Zizians, many of the Leverage people). You definitely get some high-octane crazy, and groups of people that can follow certain types of reasoning can insulate themselves in a mental cul-de-sac, and then get stuck there because their blind spots block the exit and few others can follow the reasoning well enough to come in and get them. 3. Young people are easily influenced. As one Lesswrong commenter put it, 'the rationalist community is acting as a de facto school and system of interconnected mentorship opportunities.'

    There's a lot of related discussion on these topics catalogued here, with Rationalists carefully dissecting these issues from various angles to see what the risks are and how they can make the community more resilient to them: https://www.lesswrong.com/posts/MnFqyPLqbiKL8nSR7/my-experie...

    nicolas_t(10000) 3 days ago [-]

    Cult indoctrination could be explained by this but could also be explained by the fact that a certain number of formerly gifted kids, who have been ostracised during their childhood and have low social skills tend, to gravitate around the rationalist community. I do believe that those people are more likely to be indoctrinated.

    From my readings of the Zizian, they also don't seem to easily change their mind, they instead have had a tendency towards very radical opinions that progressively become more extreme.

    jvanderbot(2217) 3 days ago [-]

    An open mind is like a fortress with its gates unbarred and unguarded.

    Is this where we are now?

    ordu(10000) 3 days ago [-]

    I wonder what is the cause and what is the effect? If Rationalism promises mind changing, I bet it attracts people obsessed with mind changing. Rationalism promises a chance to touch the eternal Truth, or at least to come closer to it, so naturally people who seeks such a truth will try to become rationalists.

    This overall can easily lead to greater then average concentration of people susceptible to cults.

    You see, I was engaged in lesswrong.com activites 10+ years ago, and I didn't become more 'cultist'. Probably even less. If I look at changes in me that happened due to me reading Yudkowski and talking with other people who read him, I'd say that these changes were coming in me in any case, the lesswrong stuff played its role and influenced the outcomes, but even before my lesswrong period I was:

    1. Interested in arguments and how they work or do not work 2. All the time tried to dismantle laws, social norms, rules morale to find an answer 'why do they exists and how they benefit the society', 'how do they work?'. Some of them I rejected as stupid and pointless. 3. I was interested in science overall and psychology in particular.

    I learned a lot from that time of how arguments work and I was excited to see Yudkowski take on that. His approach doesn't work in reality, only with other rationalists, but I like it nevertheless.

    OTOH, I need to say that Yudkowski by himself have a lot of traits of a leader of a cult. His texts are written like they are his own unique ideas. He refers sometimes to Socrates of some other person, but it doesn't help and his texts looks like he is a genius that invented a new philosophical system from ground up. I didn't know the history of philosophy enough to see how far from the truth the picture is. The bells begin to ring in my head when I get to the 'Death Spirals' where Yudkowski talked about cults and why lesswrong is not a cult. It is highly suspicious as it is, but his arguments were not good enough to me, maybe because they were worse than usual or maybe because I was more critical than usual. 'Death Spirals' failed to convince me that lesswrong is not a cult, on the contrary they made me to wonder 'a cult or not a cult' all the time.

    And this question led me to a search for information everywhere, not just lesswrong. And then I've found a new 'sport': find Yudkoswki's ideas in writings of thinkers from XIX century or earlier. Had he conceived at least one truly original idea? This activity was much more fun for me than lesswrong and after that I had no chance whatsoever to become a part of a cult centered on Rationality.

    The point I'm trying to make is Yudkowski's Rationality doesn't deliver its promises, people get not what was promised but what they had already. Rationality changes them somehow, but I believe that it is not the reason, just a trigger for changes that would come in any case.

    broof(10000) 2 days ago [-]

    Yeah I see your point but the median person probably falls on the side of needing to loosen up.

    kgwxd(3429) 3 days ago [-]

    Am I using this site wrong? All I'm seeing is basically a tweet with nothing remotely resembling an original thought.

    Exoristos(10000) 3 days ago [-]

    I think the relevant question would be, Are the owners of the forum exploiting it effectively?

    apercu(10000) 3 days ago [-]

    If you can't change your mind when presented with new evidence, you _are_ an intellectual toddler.

    aeternum(10000) 3 days ago [-]

    but evidence doesn't matter because I am morally right!

    ccleve(2073) 3 days ago [-]

    Oddly, I thought this discussion would be about actual toddlers.

    There is a way to win an argument with a toddler. You find out what's bothering them, usually something emotional, and you validate it. 'Yes! It's fun to stay up late! Yes! You don't want to eat your vegetables!' Once they feel heard, you've got a shot at getting them to do what you want.

    That's a good way to win an argument with a non-toddler as well. Acknowledge that what they want is legitimate (if it is). Concede points of agreement. Talk about shared goals. Only then talk about a different path to the solution.

    tombert(10000) 3 days ago [-]

    My parents did that; they managed to win the 'go to bed at a reasonable time' argument, but never were terribly successful with the 'eating vegetables' one. It didn't help that my dad almost never ate vegetables and even fairly young I was able to point out the hypocrisy.

    I still don't eat a lot of vegetables; my health vitals are generally fine when I do bloodwork, as is my heart health when I get that checked so hopefully I don't end up in an early grave.

    tmountain(3142) 3 days ago [-]

    We have been redirecting our toddler pretty successfully in most "conflict" situations. Instead of telling him what he can't do, give him a few options of things he can do. It's not appropriate for all situations but a great strategy for drawing focus away from whatever is causing contention.

    card_zero(10000) 3 days ago [-]

    Mutual preferences, very Dale Carnegie.

    jvanderbot(2217) 3 days ago [-]

    I'm lucky that my kiddos accept deals.

    'Yeah, vegetables are kinda yucky, how about just the corn, then we can go play after'

    I also feel like 'deals' are basically how the world works. Positive and negative deals clearly stated.

    Tade0(10000) 3 days ago [-]

    My experience as a parent so far is that treating everyone beyond a whitelist of certified adults like toddlers works tremendously well.

    Also there's the realisation that I've been effectively treated like one much more often than I would like to admit.

    Xcelerate(2154) 3 days ago [-]

    > find out what's bothering them, usually something emotional, and you validate it

    This is a common refrain of counselors and the field of psychology in general, and yet I can't help but think there's some selection bias at play with regard to the type of personality that is likely to recommend this approach as advice and how well the advice actually works.

    Personally speaking, I've never cared whether someone 'validates' my emotions (and I often view such attempts as a bit patronizing or insincere). There's a problem to be solved, so let's attempt to solve it or at least compromise in good faith. The resolution to the problem is the most likely way to elicit positive emotions from me anyway.

    (I do understand however that some people prefer this validation, and if that's what they want, then sure, I'll attempt to do that.)

    helle253(10000) 3 days ago [-]

    this reminds me of something that happened to me just yesterday:

    i was at the playground, trying to convince my daughter to go down the slide on her own.

    She kept saying it was too scary, so I went down first to show her it wasnt scary. Then, still not convinced, she said there were monsters in the slide! I, of course, told her I got rid of them on the way down. She pondered for a moment, then decided it wasn't so scary anymore. Shortly thereafter she went down the slide herself!

    It was a funny, insightful moment, negotiating her fears without invalidating them.

    tdb7893(10000) 3 days ago [-]

    Even in engineering it's important for people to understand what people want and to make sure people feel heard and validated. I've found that especially when dealing with people up the management chain understanding what they want and even using the techniques you describe is very effective. My experience is that pretty much everyone, but especially people in engineering fields and data driven science fields (me included), vastly overestimates how 'logical' they are. At the end of the day we are all just a species of ape

    karaterobot(10000) 3 days ago [-]

    What's a different path to the solution of getting a kid to eat vegetables and go to bed? I'd say if you can get them to freely choose to do those, then you've won the argument. If it comes down to the equivalent of telling them 'because I say so' in such a positive and constructive way that they don't freak out, you haven't won an argument. You have gotten what you wanted, but not by winning an argument, because the kid's opinion didn't change, just their response.

    Now, what you're talking about is an extremely valuable skill—much more valuable than trying to argue with toddlers—but it's not the same thing in my opinion.

    kristianc(2529) 3 days ago [-]

    It's what Chris Voss calls tactical empathy.

    scott_w(10000) 3 days ago [-]

    This is only useful if the person is arguing in good faith, something a quick listen to Nick Ferrari, Nigel Farage, Ben Shapiro or any other shock jock will quickly disabuse you of.

    melenaboija(3169) 3 days ago [-]

    > if it is

    This is the crux to me.

    And more than that is how much of my truth (not absolute truth, if such thing exists, but my point of view) I want to give up to enter a common territory to discuss.

    subpixel(10000) 3 days ago [-]

    My wife has found this is also quite effective with me.

    BrandoElFollito(3407) 3 days ago [-]

    I usually talked with my toddlers asking them 'why'? Why do you want to stay late? why don't you want to eat carrots?

    They were usually thinking about trading and I was patiently waiting.

    They do not like carrots (me neither btw), ok, so you get to pick a vegetable.

    They want to play longer, ok, you play in your bed. Etc.

    Of course this did not work all the time, especially when I was tired and maybe not that patient so more traditional ways of persuasion were used (no, nothing violent, just 'do it because I said so')

    MadcapJake(10000) 3 days ago [-]

    As a parent, I often found that if I actually explained why instead of the usual 'Because I told you so', then I got a lot further in making them rationally arrive at the right behavior themselves (as toddlers are wont to do). I suspect that the 'I told you so', not only does it completely nullify their desire but it also forces them to accept not learning and hurts their pride (which is where the tantrum comes from). These are undesirable outcomes and since parents use this trick all the time, it leads to learned behavior. Disclaimer: This is just my own analysis and I know there are times when it's too challenging to do this but it's a principle you have to focus on.

    elif(10000) 3 days ago [-]

    I'm lucky enough that I get to take my tyke to the zoo 5 days a week and while I agree with your take, I also have seen enough of the parents making the mistake outlined in the original post to know that it was actually talking about toddlers.

    You would be shocked to see how many supposed adults engage in one sided arguments with crying children, usually centered on the parents feelings.

    mik09(10000) 2 days ago [-]

    multi-layer perceptrons are more complex than that lol

    bloomingeek(10000) 2 days ago [-]

    I agree, however that will never work with a person like MTG. (Yes, I know she only wants to fight. Who voted for her again?)

    aredox(10000) 2 days ago [-]

    That 'good way' is tolerable because you knwo your toddler (you have an emotional attachment towards, too) will grow out of it.

    Now imagine your toddler never grows, and you are stuck with it. You many years will you resist before you strangle it?

    brainzap(10000) 2 days ago [-]

    funny, this is core of "non violent communication"

    nswest23(10000) about 12 hours ago [-]

    I think you missed the point of this post. Wildly.

    theGeatZhopa(10000) 3 days ago [-]

    The knowing has lost against the believing every single time in the whole history of antroposophic argumentation. No chance to stand 3 rounds against the believers

    01HNNWZ0MV43FF(10000) 2 days ago [-]

    Is there any value in 'tactical believing', then?

    reverendsteveii(10000) 3 days ago [-]

    I think this might be the first time I've ever seen a serious article reference Monty Python in a way that genuinely furthers the point.

    htgb(10000) 3 days ago [-]

    I didn't get that reference. Thanks! Is it this one?

    https://www.youtube.com/watch?v=ohDB5gbtaEQ

    skwee357(3640) 3 days ago [-]

    I gave up trying to change people's mind in this widely divided world.

    For starters, I will be arguing with a dozen of "social media influencers" who shaped the opinion and identity of my opponent.

    And in the end, most people are not really interested in changing their opinion. They want me to change mine, or validate theirs, but would conveniently dismiss anything that does not match their world view.

    al_borland(10000) 3 days ago [-]

    That last part is where my head was going while reading this piece. If both people are of the mindset that the other should change their mind, which is usually the case, it goes nowhere.

    The person most open to having they mind changed is often the least likely to need it changed, as they've likely already looked at both side in good faith. That said, they may have a blind spot, or haven't considered a particular view.

    dkarl(10000) 3 days ago [-]

    > Toddlers (which includes defensive bureaucrats, bullies, flat earthers, folks committed to a specific agenda and radio talk show hosts)

    I think people are unfair to bureaucrats. Bureaucrats have a job to do: they carry out policy determined by other people and encoded via a dizzying array of rules that combine specificity and vagueness in unexpected ways, many of which have a history of harm, exploitation, and public debate behind them that ordinary people have no patience to learn.

    People are only interested in their own situation, and they are convinced that their situation is different. Sometimes it is. Sometimes they're suffering from an entirely natural partiality towards themselves. So they want the bureaucrat to be creative. They justify it by saying that the rules can be bent just for this circumstance, just for them, it doesn't have to apply to any other circumstance. Why can't the bureaucrat relax their rigid bureaucratic brain enough to realize that every circumstance is unique and the rules were written for other circumstances, not this one?

    But that's exactly what the bureaucrat is not supposed to do. The public, their elected representatives, their interest groups, and other policy stakeholders expend incredible quantities of time in campaigns, pubic debate, open meetings, closed meetings, collection and collation of feedback, et cetera ad infinitum. It's not the bureaucrat's place to second-guess the results of that process or innovate outside the bounds decided on during that process.

    In the gray areas within those boundaries, yes, the bureaucrat is happy to listen to arguments and make decisions based on reason and evidence. That's their job. Gray areas where bureaucrats get to apply judgment are inevitable, often even intentional, but the gray areas aren't always where you want or expect them to be. Bureaucrats don't have latitude to decide that a rule that went through two rounds of public feedback, got debated until 11pm at a public meeting, went through multiple rounds of drafting and review by the staff of an elected official, and was finally signed off on and announced as a shiny new policy in the media, should be changed for you because the way it applies to your situation doesn't make sense to you. They can't invent a gray area where the political process provided a bright line.

    You can argue that a lot of rules were hastily dashed out by a junior aide and made it through the rest of the policy-making process without any further scrutiny. That's true. But it's not like when you become a bureaucrat they give you a special pair of glasses that show you which rules were just one person's ill-informed guess and which rules emerged from decades of painful history or hours of public debate and compromise. That would be nice to know, and sometimes bureaucrats know that information because they were around and paying attention when the rules were made. Sometimes they can bend a rule because they know that this particular rule is not important to anybody. But just because they won't bend a rule in your case doesn't mean they're narrow-minded, stubborn, or petty.

    pphysch(2714) 3 days ago [-]

    Hence the 'defensive' qualifier. Defensive bureaucrats hide behind the 'just doing my job / following orders' excuse. This is problematic when it is at odds with ethics, especially in civil service organizations.

    Following protocol is critical to the function of large human organizations, but it's not everything. People who blindly follow protocol without heed to societal values and ethics are no different than killer robots.

    Adolf Eichmann was a defensive bureaucrat.

    henlobenlo(10000) 3 days ago [-]

    99% of people have zero epistemic foundation for any of their views so debated on the facts mean nothing

    LinuxAmbulance(10000) 3 days ago [-]

    A terrifying amount of views are held on the basis of how good they make the holder feel.

    weregiraffe(10000) 2 days ago [-]

    Is 'trust science' a good epistemic foundation?

    9rx(10000) 3 days ago [-]
    > If you're not changing your mind, it's likely you're not actually having an argument

    If you've made up your mind (even if, theoretically, it could be changed) why would you have an argument about it in the first place? Discussing the already settled is rather boring. Unless one is grandstanding for some other purpose, people move on once they've made up their mind. They don't keep exploring the same ideas over and over and over again once they've settled.

    Argument is there to explore that to which you have not yet made a mind. Your mind won't change because there is no basis on which to change from.

    filoleg(10000) 3 days ago [-]

    > If you've made up your mind (even if, theoretically, it could be changed) why would you have an argument about it in the first place?

    Because, in most of those cases, my mind is made up given the information I'd had access to and the points I've seen/heard made regarding the topic up to this point. If an argument brings up new (to me) points and information, it is all a fair game, and I am not holding onto my "already made up" position that dearly. If I consider a position "already made up," it is usually due to me rarely encountering anything new on that topic. But I am not going to pre-emptively declare "my mind is made up, and nothing can change it," all it could take is a single piece of new info or a new point that I was yet to encounter.

    TLDR: the entire meaning of "my mind is made up on this topic already" to me personally is "over a course of a long time, I am yet to encounter any new materially relevant info on the topic that could change my mind, and all i keep hearing is the same stuff I heard before (but I am willing to change my perspective if there are any new and relevant points), so I expect the likelihood of my mind being changed on this to be low (given the low likelihood of any new relevant info being introduced)".

    > Argument is there to explore that to which you have not yet made a mind. Your mind won't change because there is no basis on which to change from.

    Agreed wholeheartedly, except i would completely remove the "that to which you have not yet made a mind" part.

    endominus(10000) 3 days ago [-]

    This response is indicative of a completely different perspective on the idea of 'argument' (and 'making up your mind,' a phrase that does not appear in the than the original article and would not fit with the framework of understanding expressed therein). The belief that your mind should or even can be 'settled' on an issue - that you can examine the evidence, weigh it, judge it, come to a definitive conclusion, and then never think about it again - is not universal.

    There exist people who think probabilistically; issues are not definitively decided in their mind, but given some likelihood of being one way or another. Such people tend to have much more accurate understandings of the world and benefit greatly from constructive debate, revisiting the same issues over and over again as new evidence is brought up in these arguments. If you'd like to know more, I recommend reading the book The Scout Mindset by Julia Galef.

    padjo(10000) 3 days ago [-]

    This is quite a close minded position that leaves you vulnerable in changing circumstances. Very little is known with absolute certainty outside of mathematics. I think a better default is to revisit topics every now and then, listen to the counter arguments and change your position if you think it is warranted.

    kqr(2908) 3 days ago [-]

    'What would it take to convince you otherwise' is a question I've asked in the past, but I'm less and less convinced of its utility.

    If the counterparty knew the answer to that, they would sit down with Google, not engage in an argument. Debate is mainly information sharing, but also to some degree about exploring the answer to that question.

    Rayhem(10000) 3 days ago [-]

    In the same vein, I've been keen to try out 'What would the world look like if...' and then show that we do or do not observe related phenomena. It seems like the best way to meet someone on their terms (because they get to write the 'rules' of the world) and then you just apply them towards one conclusion or another. But I haven't had enough exposure to really test this out.

    NitpickLawyer(10000) 3 days ago [-]

    I also like 'steelman the other side first' to see where they are and how much they know about 'the other side' of an argument. But this only works with people you know and trust to want to go there, not on the internet.

    a3w(10000) 3 days ago [-]

    For me, it is really useful: should I talk to this person never again, since they cannot be convinced by any evidence they themselves would find.

    Or with close family, should I never bring up this topic again since they perhaps have nothing to gain from changing their opinion, but a lot to lose.

    criddell(10000) 3 days ago [-]

    For lots of people, logic and facts don't have much power compared to emotion. Often it seems there's no argument to be won.

    YurgenJurgensen(10000) 3 days ago [-]

    A better phrasing is 'If you were wrong, how would you know?'. It has the same end state, but positions things as an internal revelation rather than a potential loss of face, so is less likely to trigger a defensive response.

    speak_plainly(10000) 3 days ago [-]

    One thing that helps is to be charitable.

    Ideas in general are difficult to express and people struggle with conveying them separately from their private ideas, personal experiences, and personal reasons for believing what they believe.

    If you want to be a good interlocutor, you have to deeply absorb what the other person is thinking and sometimes even help them develop their understanding with the hope that others can do the same for you. We are all toddlers at times.

    LiquidSky(10000) 3 days ago [-]

    Eh...all of this is premised on good faith engagement, which in the current age is a very questionable premise.

    cryptopian(10000) 3 days ago [-]

    It's why I found platforms like Twitter tended to have such volatility because the platform structure itself takes every opportunity to remove that charitibility.

    If you come across an argument, people are writing in a limited space, you're presented with the most engaged with replies first (i.e. either towing the party line best or the most inflammatory opposition), accounts are pseudonymous, and your performance is numerically displayed below the post.

    somenameforme(3666) 3 days ago [-]

    Nobody ever changes their opinion on things with anything remotely like a high degree of frequency, and that's not a particularly bad thing. The 'real' point of an argument is not to persuade the other side (though that is what you aspire to nonetheless) but to exchange views, and often to indirectly explore your own views more deeply, at least in the scenario where your 'partner' can bring up something you weren't aware of.

    Our views actually shifting is something that only happens over many years and often for reasons we aren't really in control of. Me of 10 years ago would vehemently disagree with me of today on many things, and there's probably pretty much no argument I could have engaged with him to persuade him of what I obviously think are 'more correct' views. It required, most of all, life experience that isn't going to be able to be communicated with words. If it were we'd all have the wisdom of a man who'd lived for millennia. And if not all of us, then at least somebody - but that somebody doesn't exist.

    One who wants to debate while rejecting the real state of mankind is oft going to just find themselves in an echo chamber.

    eitally(10000) 3 days ago [-]

    This advice/wisdom should be included in every parenting guide!

    jarbus(3626) 3 days ago [-]

    I've been trying to figure out how to talk to folks on the right, and I keep looking for something, anything, I can say to make them realize the danger we are in. Reading this comment was therapeutic, because I think it's completely on the money. We can't change people's minds in a single argument; we can just try and nudge them in the right direction and hope they join us eventually.

    pmarreck(10000) 3 days ago [-]

    I don't completely agree. (I know... How meta.)

    I have worked to be as rational as I will personally tolerate, and it has been difficult, but I've achieved some success. The key is to divorce your identity from your beliefs about the world, and to realize that the opposite of never admitting you're wrong is 'always being right', which is of course impossible, so if you are TRULY interested in becoming MORE right, then the only reasonable option is that you must sometimes lose arguments (and admit it to both of you).

    Are most people interested in doing this? No, and in that sense you have a point. But it's available to everyone, and who wouldn't want to be more right?

    The other difficult thing to do is to aim this at yourself with full candor and work through that. Interestingly, now that ChatGPT has access to all the conversations you've had with it, and assuming you've opened up to it a bit, you can ask it: 'You know me pretty well. Please point out my personal hypocrisies.' If you want to make it more fun, you can add '... as Dennis Leary/Bill Burr' etc. What it said when I tried this was fascinating and insightful. But also difficult to read...

    timcobb(3659) 3 days ago [-]

    > Nobody ever

    apwell23(10000) 3 days ago [-]

    > The 'real' point of an argument is not to persuade the other side (though that is what you aspire to nonetheless) but to exchange views

    to me real point is just entertainment

    geye1234(10000) 3 days ago [-]

    It takes time to have a serious debate. You both need to figure out what your unstated premises are. If you disagree on these, you won't get anywhere by arguing downstream of them. Politics is even worse, because you are supposed to have an opinion, but at the same time, most matters require a detailed understanding of the facts that few people have the time, brains or inclination to understand. Add the tribalism and this gets even worse. It's incredibly rare to find someone whose general political opinions are well thought-through. Mine certainly aren't. I could regurgitate the argument for the free market or for heavy gov control of the economy, for example, and even understand them as internally-consistent syllogisms, but really all I'm doing is linking concepts together in my mind; I doubt any of them apply to any really-existing concrete situation that any given country is in. Hence I try not to comment on political threads.

    2OEH8eoCRo0(3093) 3 days ago [-]

    I've almost never changed my mind in an online argument but I do regularly offline. Why is that?

    I think it's because online nobody acts in good faith. There is no connection and trust.

    harrall(10000) 3 days ago [-]

    I notice people tend to argue about X when it's actually a proxy argument for Y, but they don't know themselves that it's Y.

    Y is a legitimate concern or fear, but X may not be. But everyone wastes each other's time arguing about X.

    If you figure out Y, you find common ground and compromise and that's when you find solutions.

    anon84873628(10000) 3 days ago [-]

    >Nobody ever changes their opinion on things with anything remotely like a high degree of frequency, and that's not a particularly bad thing

    For a great discussion of that, cue Slate Star Codex 'Epistemic Learned Helplessness'

    https://slatestarcodex.com/2019/06/03/repost-epistemic-learn...

    mppm(10000) 3 days ago [-]

    > The 'real' point of an argument is not to persuade the other side (though that is what you aspire to nonetheless) but to exchange views.

    Maybe this is just a matter of definitions, but for me the point of an argument is to convince or be convinced. When two incompatible views exist on a subject, at least one of them must be wrong. Some topics of conversation allow for diverging views or values, but then we are just talking or sharing experiences, not arguing.

    That said, it is my experience as well that actually changing someone's (or my own) mind on an important issue is unlikely. Especially on complex topics with partial and uncertain information, like political issues, our life experience and cumulative knowledge significantly influences our selection of sources and interpretation of the facts, so converging on a common point of view may require the exchange of a prohibitive amount of information, even among rational arguers.

    Productive argument usually occurs in a sort of semi-echo chamber, with people who mostly agree with us on the context, and are only arguing about the top layer, so to say. But when trying to argue about the deep stuff, we are mostly just 'exchanging views', in the end.

    hattmall(10000) 2 days ago [-]

    I feel like I change my opinion more than my outfit, but after reading that I'm not so sure. Maybe I stick to my guns more than I realized.

    feoren(10000) 3 days ago [-]

    The author is silently switching between two definitions of 'argument' depending on which point he's trying to make. An argument with a toddler is about whether they should brush their teeth, put their toys away, or stop sending American citizens to El Salvadorian prison camps. You win the argument if they do those things. And you can win some of those arguments, by ethos, pathos, logos, deal-making, bribery, or force.

    That's not the same kind of argument where people are trying to change their minds. Those are the ones you can't win or lose, because 'changing your mind' is not black and white. I've had plenty of arguments where my understanding changed by a few inches, and their understanding changed by a few inches, but we were still far apart in our opinions. That's fine! That's a successful argument.

    The author's world is one where there are two takes on every topic and one person is arguing Black and the other is arguing White and you should flip to the other binary sometimes when you're wrong. No. If your opinions are regularly flipping from one binary to the other, then your opinions suck. The world is much more complicated than that. Opinions are much more contextual than that. I'm never going to switch from 'evolution is real' to 'all life was custom-built by God' after a conversation with one person -- no matter how persuasive they are -- because my belief that evolution is real is not that fragile. It's intertwined with all my other understandings about how the world works, and I can't just flip it independently of other things. My goal when I have an argument is to improve my understanding of the world just a little bit, even if it's merely 'why do people believe this shit?' If the person I'm arguing with isn't trying to do the same, they're the only one that's losing.

    dingnuts(10000) 3 days ago [-]

    >stop sending American citizens

    the person who was sent, and who should not have been sent, was a Salvadoran citizen and a legal resident alien of the US.

    Please refrain from hyperbole in these times. If/when US citizens start getting sent to prison camps, we need to be able to tell each other that it is happening, and if you cry wolf now, nobody will believe you when it does actually happen.

    It is bad enough that it happened to a legal alien. It's more important than ever that we be precise.

    palmotea(10000) 3 days ago [-]

    > Toddlers (which includes defensive bureaucrats, bullies, flat earthers, folks committed to a specific agenda and radio talk show hosts) may indicate that they'd like to have an argument, but they're actually engaging in connection, noise, play acting or a chance to earn status. It can be fun to be in opposition, to harangue or even to use power to change someone's position.

    Honestly, this article is now very good, because he doesn't seem to realize one of the most common reasons for 'folks committed to a specific agenda' to play-act an 'argument' (or a 'discussion' or a 'conversion') is persuasion, and not any of the other childish things he outlines.

    Maybe he spends to much time in immature online spaces.

    draw_down(10000) 3 days ago [-]

    I'm afraid you are too late! For you see, I have already depicted you as the impatient and stupid toddler, and myself as the rational, mature adult.

    Workaccount2(3572) 3 days ago [-]

    If you don't think you would be able to fool the person that you have the same views as them, you probably will not be able to have a productive argument with them.

    i.e. if you couldn't sit at the table with a bunch of (insert ideology) adherents and blend right in, you probably don't understand their views well enough to dissuade them from it.

    erichocean(3653) 3 days ago [-]

    Jonathan Haidt's finding from The Righteous Mind that conservatives tend to understand liberal moral foundations better than liberals understand conservative ones is an important example.

    His research shows conservatives operate across a broader range of moral foundations—care, fairness, loyalty, authority, sanctity, and liberty—while liberals lean heavily on care and fairness

    This gives conservatives an easier time modeling liberal views, as they already incorporate those priorities. Liberals, however, often struggle to grasp the weight conservatives place on loyalty, authority, or sanctity, seeing them as less 'rational.'

    The author is an example of this: he views his opponents as less rational—literal 'toddlers'—and thus their arguments can be dismissed.

    porphyra(10000) 3 days ago [-]

    So, whenever you fail to change someone's mind, you can just dismiss them as being a toddler. This mindset explains how the current state of, say, US politics became so polarized and extremist.

    01HNNWZ0MV43FF(10000) 2 days ago [-]

    I think it was actually a combination of online propaganda, social media addiction, the demise of third places, and most of all lack of a land value tax

    jvilalta(3554) 3 days ago [-]

    For those actually trying to talk to a toddler, I recommend Adele Faber's How to talk so kids will listen and listen so kids will talk.

    Also maybe useful for talking to middle aged toddlers.

    bitshiftfaced(10000) 3 days ago [-]

    This book isn't actually appropriate for toddler age children, but there is a 'sequel' that focuses on toddlers. While there are some nice ideas in the book, it tends to ignore the most challenging parts of parenting. If you're going to spend the time reading a parenting book, I'd recommend a research-based parenting program.

    subjectsigma(10000) 3 days ago [-]

    People write articles like this and then wonder why we are so politically divided.

    I do agree there's a point past which someone is ideologically unable to be reasoned with. The classic example is neo-Nazis, of course. But also of course, there are redeemed neo-Nazis.

    Coming from a conservative family and living in a deep blue state I've had my fair share of arguments on both sides. As other commenters have stated, it's all about emotions. If you can make the other person feel like they are being heard and assuage their fears about X, Y, or Z, then you can make progress, even if it's small progress.

    rhines(10000) 2 days ago [-]

    It is an unfortunate side-effect of spending too much time online I think. Or online in the wrong spaces.

    Everyone has a different tolerance for dealing with unreasonable people, but there is a breaking point for each of us. And if you hit that, you will be prone to throwing your hands in the air, exiting the space where you found these people, and decrying them all to be braindead. I have hit that point multiple times and it has resulted in my making callous generalizations of people after.

    It's hard to imagine that people you interact with in an online community are the vocal minority of that community, when you cannot find the silent majority. But I suppose the silent majority doesn't tend to spend time on forums for their viewpoints.

    techright75(10000) 3 days ago [-]

    Useless article that further demonstrates the leftist movement of what was once a great and fairly neutral site called Hacker News.

    rexpop(10000) 3 days ago [-]

    > flat earthers, folks committed to a specific agenda

    I find it hard to think ill of a 'leftist movement' that opposes 'flat earthers,' but pretty much every reasonable adult is, to a greater or lesser extent, 'committed to a specific agenda'—leftists no less than the rest!

    MathMonkeyMan(10000) 3 days ago [-]

    > Tell me about other strongly-held positions you've changed as the result of a discussion like this one...

    Fair point, but if somebody were actually to say that to me during a disagreement, I would assume that they were not acting in good faith.

    Now instead of disagreeing about politics or whatever, you're asking a rhetorical question that insinuates 'you are unreasonable.'

    gs17(10000) 3 days ago [-]

    Agreed, it feels like something someone who had never had a conversation with a human being that strongly disagreed with them would write. If it was an introspective question meant to question the framing of trying to convince people through arguments in general, it might be meaningful.

    I think it's fair to try to establish if the person you're talking to has an unfalsifiable belief and walk away if you're arguing with a brick wall, but that's definitely not the way to go about it.

    jumploops(3421) 3 days ago [-]

    One of the surprising benefits of raising a toddler is gaining the ability to instantly tell when another adult has fallen into a 'toddler-like' state (myself included!).

    Before having kids, I would try and explain someone's behavior in a logical sense.

    Toddlers, however, are mostly driven by their current physical needs (hungry/sleepy) and whatever they're currently doing (autonomy).

    We've found the most success in avoiding all boolean questions. Do you want to read a book? (when playing with trains before bedtime) Obvious no!

    Do you want to read this book or that book? Oh... a decision!

    It's striking how well tactics like these work outside the realm of toddlers.

    sethammons(3653) 3 days ago [-]

    We had a VP make a similar observation during an all hands. In the following all hands, he had to apologize because people felt they were being insulted by being compared to kids. The irony of the situation was not lost on some of us

    Quarrelsome(10000) 3 days ago [-]

    illusion of choice is extremely effective on c-suite as well. I recommend it for engineers trying to push changes up corporate ladders. Give them three options, the one nobody should ever do, the compromise solution, and the 'whale' option. Just like product pricing.

    For very young toddlers distraction is also extremely effective but it stops working at some point. Not sure about how effective it is on c-suite someone will have to do some testing.

    cycomanic(3484) 3 days ago [-]

    An excellent text about engaging with extremists... (I don't agree with the authors simplification as toddlers) is the book 'Subversive Denken, wie man Fundamentalismus diskutiert' (Unfortunately it's only available in German). The author distinguishes between different types of fundamentalists and makes the point that discussions with the convinced fundamentalist is often not possible, because even agreeing on facts is impossible as denying some facts is a proof of faith in the fundamentalist ideology. The discussion is then about convincing listeners instead via different techniques. Despite the title it is not primarily about religious fundamentalism but also political (quite timely at the moment) and the author gives historical examples of the type of techniques employed against fundamentalists.

    spongebobism(10000) 3 days ago [-]

    'Wie man mit Fundamentalisten diskutiert, ohne den Verstand zu verlieren: Anleitung zum subversiven Denken', by Hubert Schleichert





    Historical Discussions: 4chan Sharty Hack And Janitor Email Leak (April 15, 2025: 691 points)

    (691) 4chan Sharty Hack And Janitor Email Leak

    691 points 3 days ago by LookAtThatBacon in 3328th position

    knowyourmeme.com | Estimated reading time – 3 minutes | comments | anchor

    About

    April 2025 4chan Sharty Hack And Janitor Email Leak refers to the Soyjak.party community's claimed hacking of 4chan in mid April 2025, which included the restoration of the deleted /QA/ board and leaking the emails of 4chan 'janitors,' who are members of the site's moderation team. The attackers reportedly exploited outdated PHP code and deprecated MySQL functions in 4chan's backend, particularly in a core script named yotsuba.php, which manages post submissions and moderation. A Soyjak.Party users also shared a list of emails they claimed are associated with janitor and moderator accounts, including three .edu emails. Although some internet users claimed that the leaks included .gov emails associated with members of the moderation team, this remains unverified.

    Origin

    4chan was reportedly hacked by internet users claiming to be part of the Soyjak.Party community on April 14th, 2025. The hackers brought back the deleted /QA/ board, and temporarily granted access to 4chan's administrative site. An anonymous user posted about the hack on Soyjak.Party around 10:05 PM EST. The thread contained several bits of leaked information, a look into the /j/ board as well as the entirety of the 'yotsuba.php' code, which handles features like posting and reporting. Another post in the thread claimed to be leaking the email addresses associated with various janitors, three of which were .edu emails.

    Early on April 15th, 2025, KiwiFarms user Coberald posted a copy of the leaked 4chan source code. A user named Tofflenheim also shared a link to an archive of posts on /j/, a private board dedicated to janitor discussions.

    Spread

    X user @Priniz_twt was one of the first to post about the hack on April 14th, 2025, gathering over 5,000 likes in less than a day.

    On April 14th and April 15th, 2025, X user @_yushe posted screenshots from the 4chan administrative page as well as analyzed the leaked 'yotsuba.php' code, which handles features like posting and reporting. The post gathered over 500 likes in less than a day.

    On April 15th, X user @_yushe noted that 4chan was likely hacked because it was using an outdated version of PHP (a coding language used to run the site), which is full of known vulnerabilities, and deprecated (no longer supported) functions to handle its database.

    Also on April 14th, X user @LumpyTheCook tweeted, 'Apparently 4chan is down because someone hacked the lead admin (Hiro) account and started leaking all the mod and janitor emails, contact info, chat logs, etc. Good job Hiro 👍' The thread also included the unconfirmed claim that some janitors were using .gov emails, (although some leaked lists did contained .edu emails).

    On April 15th, 2025, Redditor /u/Meteorstar101 posted to the /r/greentext subreddit, writing, 'Last posts before 4chan got hacked,' and showing the Chicken Jockey meme.

    Search Interest

    External References




    All Comments: [-] | anchor

    CamelCaseName(10000) 3 days ago [-]

    If you lamented the disappearance of the 'old internet', well, this was a part of it, and now it may be gone too.

    The title is also a fair bit understated.

    They're leaking the moderators home addresses and work contact info (for admins, who are(were?) paid moderators)

    GaggiX(1656) 3 days ago [-]

    Do you think that 4chan is going to disappear forever for this? Just wait a bit and it will be back.

    Also where did you see that they are leaking home addresses and work contact info? I think they just leaked the emails (I don't understand why home addresses and work contact info should be present in the 4chan database, everyone moderating the site for free).

    pelagicAustral(10000) 3 days ago [-]

    Isn't it a running joke that the Jannies don't get paid?

    robobro(10000) 3 days ago [-]

    The initial leaker is most likely not the same parties as the ones tying email addresses and usernames to people's 'real identities', if you look at the thread where the leak was announced.

    Say what you will about 4chan but I am concerned for the team managing it - them and their close ones are certainly going to be exposed to a whole lot of viciousness soon :(

    JKCalhoun(3408) 3 days ago [-]

    I think we can lament the old internet and still care nothing for 4chan.

    knowknow(10000) 3 days ago [-]

    Is it considered part of it? From my understanding, the culture has changed significantly and post get auto deleted eventually, so it's not a good archive either. The only thing old about it is it's web design

    fny(3295) 3 days ago [-]

    Where do you see info about personal info?

    I would presume Anon would which to remain anon.

    mattlondon(10000) 3 days ago [-]

    I'd hardly call it the 'old internet'. It is very niche, and has not been around that long really - like what 2003 or something? Nothing compared to e.g. Geocities which was early-mid 90s IIRC which I'd argue had more relevance to people than 4chan.

    imzadi(10000) 3 days ago [-]

    I grew up on IRC, had sites on Geocities and Angelfire. That was the old internet people miss, not 4chan.

    happytoexplain(10000) 3 days ago [-]
    Was part of it. As somebody who has been trapped there since 2004, I'd say it evolved into a part of the normal internet between 2010 and 2016 (i.e. it had already fully transformed before Trump's first term), where 'normal internet' means being infested with uncle-on-Facebook-tier political posts, 'jokes' where the punchline is 'I hate my political enemies', etc. Creative irreverence was replaced with regular childishness.

    Mostly because, as more people came online, they mistook offensive humor for conservatism; and thought 'counter-culture' meant 'being opposed to the political party currently in power', rather than 'being opposed to political parties'.

    DrillShopper(10000) 3 days ago [-]

    4chan is not 'old internet'. Not even close. It's predated by a bunch of forums (including 2channel) on the Internet, some anonymous.

    p3rls(10000) 3 days ago [-]

    It's not so much that we lament the old internet, we lament that the new internet cannot be built because incumbents like google have distorted the playing field with shitty algorithmic SEO practices-- which really has nothing to do with 4chan at all.

    dimal(10000) 2 days ago [-]

    But really, 4chan-style bullshit took over the rest of the internet. At least in the old internet, it was self contained there. If someone spouted nonsense they read on 4chan, you could easily dismiss them as a crank. Now everyone is posting and reposting bullshit on a multitude of microblogging shitsites.

    protocolture(10000) 2 days ago [-]

    I honestly and sincerely miss the project chanology days.

    https://en.wikipedia.org/wiki/Project_Chanology

    TheAceOfHearts(3650) 3 days ago [-]

    There's a KnowYourMeme [0] post with additional details and context. Most interesting to me is finding out that there' s a word filer / transformer, so SMH becomes BAKA and TBH becomes DESU, as two examples.

    [0] https://knowyourmeme.com/memes/events/april-2025-4chan-hack

    tanjtanjtanj(10000) 3 days ago [-]

    Yep, it's been that way for 20+ years!

    The term "weeaboo" as a term for western anime fans only came about because it was what the word "wapanese" filtered to. It was originally a nonsense work made up in a Perry Bible Fellowship comic.

    dang(143) 3 days ago [-]

    That does seem to have more information, so I've changed the top url to that from https://old.reddit.com/r/4chan/comments/1jzkjlg/4chan_hacked.... Thanks!

    FMecha(10000) 2 days ago [-]

    From what I heard, it was because they were tired of people posting 'tbh fam'. This does result in people instead posting 'tbdesu' in being aware of the filter.

    A note that the filter for 'doublechan' was never updated to include its current name, nor the place where this current attack originated was ever filtered, afaik.

    rootsudo(10000) 3 days ago [-]

    Wow doxing the Jannies!

    I mean, wow, they're doxing people that helped keep a legacy internet place alive and compliant with the law.

    Who would do that?

    masfuerte(10000) 3 days ago [-]

    The man.

    joseda-hg(10000) 3 days ago [-]

    Sound right up the alley for a 4chan user

    t0lo(10000) 3 days ago [-]

    Whoever's trying their hardest to shut down the rest of the free internet as well. I do think these actions we've seen in the last 5 years are co-ordinated. Will post sources soon

    brigandish(3648) 3 days ago [-]

    I see a lot of hate for 4chan here. Why? I've never used it, know it by reputation, but not sure why there's so much hate for it.

    ozmodiar(10000) 3 days ago [-]

    I hope this isn't too contentious but I'll try to cover most things. I've posted this a few times, but I checked out 4Chan about twice in the early days and saw CSAM both times and it gave me personally a visceral hatred of the site. I've heard it got better/that's not representative but it's a hard thing to shake. The origin of the site is also supposedly Moot getting kicked off SomethingAwful for posting 'lolicon' (child anime porn). They've also gone after and doxxed pedophiles though, so the sites relationship with that sort of content is... complicated. I think most of the worst ended up moving to 4Chan clones quite awhile ago because it really splintered again at some point and became known as the cleaner Chan board.

    It's also known for its extremely abrasive mildy sociopathic culture and 4Chan posters have a very samey 'posting voice' where if you don't like it you can hate it. It permeates a lot of the internet, but 4chan is kind of seen as the epicenter. I think it also gets blamed for a lot of negative internet culture like doxxing and choosing targets to harass, although I'm not sure how much of that was actually 4Chan. I think most of those people moved on to Kiwifarms. 4Chan probably gets some hate for things that other Chan sites have like Qanon in a sort of 'you started this' way.

    And finally the politics are complicated. It actually used to be slightly left leaning or at least libertarian or anarchist, but over the years pol in particular has been known to be hard right wing. It definitely seems like they had a shift in political tone for the (IMO) worst at some point.

    Personally I won't hide that I'm a hater and an unapologetic curmudgeonly old man, but that's my perception. On the other hand if you think the CP stuff is overblown, don't care about the negatives because there are apparently good boards there that are insulated, or are just hard right yourself then it is one of the last major discussion boards on the net. Some of that's probably out of date (like I said I gave up on it pretty quickly) but I'd wager most people with negative opinions are thinking of one or more of those. I'd be interested if any haters have other reasons.

    throwaway743(10000) 3 days ago [-]

    Because people think /pol/ is 4chan, and it's easier to think that and what others say about something than to invest time into looking into something they were uninterested in looking into to begin with

    helle253(10000) 3 days ago [-]

    Wow, the comments on this thread are much more divisive than I thought.

    I've always felt that the 'there are only two internet cultures: 4chan and tumblr' has felt somewhat accurate. Unfortunately moreso now that /pol/ and /r9k/ have taken over broad swathes of the internet.

    It's sad to see how far this old haunt has fallen. Lurking /v/ in my early/mid teens was a formative experience for me. It wasn't as hateful as it was, until Gamergate.

    h2zizzle(10000) 3 days ago [-]

    /r9k/ is such a weird situation, because its original incarnation prided itself on being an intellectual bastion on the site. The robot meant that you couldn't meme so easily; you had to attempt to write something substantial or meaningful (or at least original). Most were simply discussions, but you'd also get creative gems like futureguy's sobering predictions (well, history, for him).

    tfwnogf really did kill everything.

    throwanem(3029) 3 days ago [-]

    > I've always felt that the 'there are only two internet cultures: 4chan and tumblr' has felt somewhat accurate.

    'Somewhat accurate' is exactly right.

    This formulation overstates the number of Internet cultures by one, in that the deepest and most shameful secret of both websites' most avid users is that they have always been both websites' most avid users.

    Other than that, there's nothing wrong with it.

    on_the_train(10000) 3 days ago [-]

    What a sad day. It's the best page on the net by a wide margin. Hope they'll recover

    creatonez(2484) 2 days ago [-]

    It better not recover. 4chan should be burned to the ground. And so should Soyjak.Party. It's a blight on humanity.

    duxup(3407) 3 days ago [-]

    I'll ask I guess.

    People still use 4chan?

    I recall 4chan at one short point in time being a semi amusing meme posting spot on the web but as always as soon as it was popular it turned into a lot of 'edgelord' spam and drama.

    Loughla(10000) 3 days ago [-]

    There was a time that if you weren't on 4chan, you missed everything good. I remember staying awake for 20 hours tracking one thread. If you left it was gone forever and you genuinely missed out. 2004-5 area.

    That being said, I haven't been back since 2014? It was always pretty heavily influenced by b and pol, but it got really bad the two years before Trump 1. Alt right bullshit took over completely.

    It astounds me that people think 4 Chan is a place for deviants, but Twitter is fine. Twitter is 10,000x worse.

    lastcobbo(10000) 3 days ago [-]

    And longcat, don't forget him

    s3krit(10000) 2 days ago [-]

    I've used it probably daily since about 2006. Which is kind of sad actually.

    A4ET8a8uTh0_v2(10000) 2 days ago [-]

    It truly is an end of an era. I popped in every so often to check the temperature and was rarely disappointed by the level of crazy pervading it. Amusingly, despite it having such a massive influence on internet as a whole including its lingo and memes, my wife did not even knew about it existed until today.

    I do not think it will be missed by many, but that kind of hole does not exactly disappear without a trace.

    Loughla(10000) 2 days ago [-]

    After leaving when it got too shitty, I would go back once a year or so to check the racism in pol, see if maybe b was back to doing things instead of just porn, and read the plainly undiagnosed schizophrenia on the paranormal board.

    Like you said, not a lot of people in my life have any idea what it is, but it does hold a special place in my heart. It started when I was trying to establish my own personality, and it provided me with a safe avenue to try out different 'me's'.

    cbg0(2317) 3 days ago [-]

    Hosting a copy of phpMyAdmin behind basic HTTP authentication in 2025 really is asking for it.

    jsheard(301) 3 days ago [-]

    I was kinda surprised to see that phpMyAdmin is still maintained, albeit only barely. The last release was in January but before that it hadn't been touched for over two years.

    whalesalad(363) 3 days ago [-]

    A tale as old as time

    TonyTrapp(3051) 3 days ago [-]

    Can you please elaborate how it is 'asking for it' if we assume the basic auth password is reasonably complex and kept as safe as, say, the SSH login credentials of the same server?

    lossolo(3427) 3 days ago [-]

    Sure, if you slap Basic Auth with 'admin:admin' on phpMyAdmin in 2025, you're asking for it. But a Basic Auth password with 256 bits of entropy is just as resistant to brute force as AES-256 (assuming the implementation is sound and TLS is used). It's not the protocol that's insecure, it's usually how it's deployed.

    ndiddy(1367) 3 days ago [-]

    The hacker posted a screenshot of the shell on the 4chan server. It was running FreeBSD 10.1, which came out in 2014 and stopped getting patches in 2016. It seems like there was basically nobody doing maintenance after moot sold the site. I wonder how long it'll take for them to get the site back up if they don't have anyone who can do server administration.

    trallnag(10000) 3 days ago [-]

    Jannies had it coming tbh. They were certainly tightening the rope when it came to free speech in the last few years

    pjc50(1402) 3 days ago [-]

    Always curious to know what kind of speech this kind of complaint refers to.

    snvzz(2530) 3 days ago [-]

    Blaming the victims is not cool.

    Particularly, when these are good people who put a lot of effort into keeping 4chan a pleasant community, by e.g. removing hate speech and CSAM, as well as banning offenders.

    geriatric-janny(10000) 3 days ago [-]

    My official association with 4chan ended in 2010, but I still recognise a good third of those names and would wager the leak is legit.

    delusional(10000) 3 days ago [-]

    What kind of official association could one have with 4chan? 4chan was formative for my early connection to the internet, and I'm really curious what the organization behind it looked like. Was it professionally driven, or just some random guy mailing checks? stuff like that.

    blitzar(10000) 3 days ago [-]

    Username checks out.

    Blikkentrekker(10000) 3 days ago [-]

    So you were able to find the leak? Because I see reports that it was hacked repeated as fact everywhere on Daily Mail-tier reliable news websites and Reddit posts, but they are all based on "rumors on social media go about that there was a leak" but I've not been able to find the actual leak searching for it. Obviously not many people want to link it but it's also weird that so many people claim to have so easily been able to find it when I cannot.

    Finally, I was there and using it when the website went down and this did not resemble an actual hack but technical issues. First there were a couple of hours where the website was up but no posts went through for anyone except occasionally when a new threat was bumped, mirroring the normal pattern of downtime issues that sometimes occur and then it just went down completely. This doesn't really resemble how a hack plays out but looks more like technical issues to me.

    Even now, going to the front page, it loads for me, except very slowly and incompletely. This does not resemble a hack but technical issues.

    huehehue(10000) 2 days ago [-]

    My association was a bit later, mid to late 2010s. I recognize some of the names as well, including one of the top folks that probably onboarded both of us.

    That said, my info is not on the list, I assume it was deleted when I left.

    sertraline(10000) about 23 hours ago [-]

    he does it for free

    wickedsight(10000) 3 days ago [-]

    This makes me wonder whether there's anything in there that can point to the identity of the original QAnon. That would be a pretty interesting outcome.

    swarnie(10000) 3 days ago [-]

    Aren't we 99% sure that was a Ron Watkins grift now?

    ribosometronome(3136) 3 days ago [-]

    Given the nature of the hackers and their immediate actions, it seems unlikely they would reveal that sort of information.

    Borgz(10000) 3 days ago [-]

    4chan doesn't store threads for very long, hence the plethora of third-party archive sites. I doubt they are still storing any useful data from back then.

    OuterVale(617) 3 days ago [-]

    Posted link is a tad vulgar and scarce on information. A bit of a collection forming on The Sun's live blog post:

    Thousands of 4Chan users report issues accessing controversial website - https://www.thesun.co.uk/tech/34472708/4chan-down-updates-co...

    dang(143) 3 days ago [-]

    (Posted link was originally https://old.reddit.com/r/4chan/comments/1jzkjlg/4chan_hacked.... We since changed it.)

    anigbrowl(54) 3 days ago [-]

    Why would you use the Sun as a source for anything

    Red_Tarsius(888) 3 days ago [-]

    I feel too many people conflate /pol/ with the whole website. I enjoyed browsing through sfw boards like /tg/ (tabletop media), /ck/ (cooking) and /fit/ (fitness). I had long discussions about the SW sequels on /tv/ back in 2015-19. The readership was surprisingly diverse and the anonymity lead users to provide more focused replies. With bodybuilding.com gone, the blue boards felt like the last bastion of the old internet.

    MattDemers(10000) 3 days ago [-]

    I think people also don't acknowledge how much terminology, slang and other culture originate and spread there. When it breaches into Twitter (usually through funposters) people kind of ignore the unsavoury origin and rewrite the history. The anonymous nature kind of provides that petri dish of 'if it's strong culture, it'll survive or be modified.'

    nemomarx(10000) 3 days ago [-]

    the blue boards did have some slow overlap with pol in my experience - they were more distinct before 2014 or so and by 2016 I barely recognized /tg/ culture.

    I'm curious, why bodybuilding.com in particular? I think I've only heard of it once. I wonder if anyone on HN remembers stardestroyer.net or old weird tech forums?

    sgarland(10000) 3 days ago [-]

    > bodybuilding.com

    Obligatory post about the dumbest argument to ever be had online [0]. It's so good, the Wikipedia entry [1] has a section devoted to it.

    [0]: https://web.archive.org/web/20240123134202/https://forum.bod...

    [1]: https://en.wikipedia.org/wiki/Bodybuilding.com

    flmontpetit(10000) 3 days ago [-]

    It used to be a diverse place without much to tie all the boards and users together save for a shared commitment to counter-culture. Then GamerGate and Donald Trump happened. 'Every board is /pol/' was one of the most frequent replies you would see for a while until all the halfway decent people left.

    /g/ is where I and a lot of people learned about FOSS advocacy and now it's just gamer hardware and transphobia.

    ToucanLoucan(10000) 3 days ago [-]

    > I feel too many people conflate /pol/ with the whole website.

    That's probably why a lot of websites use moderation to avoid having one section of it turn into a cesspit of every -ism you can imagine, up to and including fascism, because once you have a section of your website that is openly coordinating the pushing of fascism on society, everyone kinda forgets about the diverse and interesting other things it might have, because of the fascism.

    moonlet(10000) 3 days ago [-]

    /fit/ and /mu/ were good to me in my late teens, and /ck/ is the reason I actually asked my roommate's mom to show me cooking basics when I was in college!

    giancarlostoro(3167) 3 days ago [-]

    Funny you point to /pol/ and forget about /b/, that was the meat of 4chan in the late 2000's

    eqvinox(10000) 3 days ago [-]

    I always thought it's /b/ that people conflate with the whole website... (for the purpose of declaring it a cesspool)

    ... but then again I never looked at /pol/, maybe it's even worse than /b/?

    fastglass(10000) 3 days ago [-]

    I feel too many people who don't conflate /pol/ with the whole website, as well as the others, don't know why /pol/ was created.

    It was eventually a replacement for the /new/ board, where news of the arab spring first started, shortly before it was shut down. However, it was plagued with proto-pol behavior before anyone was bothering to complain about pol.

    There was always these 'cells' of non /jp/ shitposters, if they weren't the OG shitposters themselves, that would post about left-right politics ad nauseum, and in the most hallmark unproductive ways. It was when trolling evolved from 'clever this and that' to shear brute forcing. It was the topic of the news that attracted these unsavor political actors into that place, which was for a short period of time, a great diverse place for collecting news.

    This social phenomena and history could never be repeated enough, particularly since we might be finally ending the story of pol/4chan - which was more popular than 4chan itself.

    helle253(10000) 3 days ago [-]

    /pol/ and /b/ were containment boards, up until they got so popular that everything else ended up being containment boards.

    I still miss hanging out on /v/ and /fa/. When they split /vg/ out into its own board, the colour started to drain from my experience.

    throwaway795737(10000) 3 days ago [-]

    The more popular blue boards were pretty bad too, let's be honest. It wasn't hard at all to find things on those boards that wouldn't be tolerated on any mainstream social media, for good reason.

    Bjorkbat(10000) 3 days ago [-]

    /vg/ also had a pretty cool amateur game dev general thread (/agdg/). No one was making any hidden gems there, but it wasn't trash either. At any rate, I liked it.

    Calinterman(10000) 3 days ago [-]

    It's, funny enough, identical to people who conflate all of old 4chan with /b/. The current most popular boards are video game boards and have been since Covid hit. There's a site called 4stats which charts this, and shows how the end of Trump's presidency spelled the death knell of /pol/ dominating 4chan. Which, by comparison, was four years. It's been five years since then. It's kind of like how the golden age of /b/ was a shade over three years (2004-2007) but all of old 4chan is equated to the memes made in this prehistoric era.

    swarnie(10000) 3 days ago [-]

    Ignore /b/ /pol/ and /r9k/ and most of the rest were good communities compared to the modern internet.

    Reddit can't get close due to its voting system.

    LinuxBender(58) 3 days ago [-]
    I feel too many people conflate /pol/ with the whole website.

    I believe that's fair. Sure, it's 'a different board' but it's just another URL on the same domain and same administrator, just different janitors. So it is really the part of the whole website. I know that 99% of people on 4chan disagree with me because they do not wish to be associated with /pol/ /b/ /gif/ but if they wanted to disassociate themselves with those boards then they should be on an entirely different domain without 4chan in name. polchan perhaps.

    codexon(3487) 3 days ago [-]

    > I feel too many people conflate /pol/ with the whole website.

    Because it is the 2nd most active category, and the racist/alt-right beliefs have spread to the other boards because the head admin fires anyone that tries to moderate it.

    https://www.vice.com/en/article/the-man-who-helped-turn-4cha...

    On top of that, they actively delete and ban posts that go against alt-right.

    I discussed it somewhat recently here: https://news.ycombinator.com/item?id=42276865#42283887

    timeinput(10000) 3 days ago [-]

    Piling on the 'some parts of 4chan was good until it wasn't' theme: I really liked /ck/ for a while. Then there was this weird trend of just like 'all food tubers are garbage' whether that was 'Kenji-Cucks', or people hating on Rageusa, or what ever.

    Combining that with the 'post hands' request for a lot of food it was just an unpleasant community to participate it.

    Weirdly trying to load the page right now I'm getting Connection timed out. Is hackernews ddosing 4chan? What a world.

    ren_engineer(3241) 3 days ago [-]

    /g/ was the origin of Chain of Thought for AI, also where llama weights were first leaked

    torginus(10000) 3 days ago [-]

    It's interesting to note the popularity of the website, and the massive traffic it handled, despite the lack of everything we assume necessary for a modern (social media) website

    - no modern web frameworks

    - no microservices/kubernetes clusters

    - no algorithmic curation/moderation/recommendation algoritmhs

    One wonders just how much of the modern engineering developed in the past decades, that cost a fortune to develop and run is actually necessary or even beneficial for running a modern social media website

    bigfatkitten(10000) 3 days ago [-]

    Even /b/ was pretty good back in the day. Memes and inside jokes galore with almost no porn to be seen.

    irusensei(10000) 2 days ago [-]

    The first llama torrents were posted on /g/ and for a long time it was the best place to go for information on local models.

    keepamovin(521) 2 days ago [-]

    I still don't understand how to read threads. How do replies work? How do you know it's actually the person you're replying to who's replying back? How is it organized visually??

    brap(10000) 2 days ago [-]

    You're right but only if ignoring the last 5 years or so.

    I discovered 4chan around 2008 as a kid, it was much less hostile back then. Even as an adult I used to go on /fit/ every now and then. It was useful and funny and even "wholesome" in its own special way.

    But over the last few years, the entire site became /pol/, and other boards became unusable. Maybe once a year I will pop in and immediately regret it.

    RKFADU_UOFCCLEL(10000) 2 days ago [-]

    This. It's just a website (where anyone can post, quite rare in these overpoliticalized days).

    > A Soyjak.Party users also shared a list of emails they claimed are associated with janitor and moderator accounts, including three .edu emails. Although some internet users claimed that the leaks included .gov emails associated with members of the moderation team, this remains unverified.

    Like who cares?

    jmyeet(10000) 3 days ago [-]

    4chan will be studied for years for its role in alt-right radicalization as well as being a baroemeter for young male discontent.

    For example, QAnon started on 4chan (I believe as a joke?) [1]. Nowadays a lot of 4chan users and traffic have since migrated to Twitter for pretty obvious reasons. Pseudo-intellectual racism has a lot of roots in 4chan (eg the popularity of Julius Evola [2]) that's deeply tied to 'trad' content, Andrew Tate fandom, the manosphere and 'self-improvement' [3].

    Things like the Bored Ape Yacht Club originated on 4chan and it's full of racist memes [4]. A lot of racist and antisemitic memes originated on 4chan.

    Worst of all, it seems like Elon Musk is motivated by a deep desire to be liked by 4chan [5].

    So the point is that 4chan users (and admins) have a lot of real-world influence and that's kinda scary. It also makes them a target for this kind of hack. I suspect a lot of people will be exposed by this and in more than a few cases, you'll find ties to the current administration.

    [1]: https://www.nbcnews.com/tech/tech-news/how-three-conspiracy-...

    [2]: https://jacobin.com/2022/12/fascism-far-right-evola-bannon-b...

    [3]: https://www.nature.com/articles/s41599-021-00732-x

    [4]: https://www.youtube.com/watch?v=XpH3O6mnZvw

    [5]: https://www.aljazeera.com/opinions/2025/4/6/how-musk-ushered...

    VectorLock(10000) 3 days ago [-]

    I would be 0% surprised to see Stephen Miller's information in this leak.

    properpopper(10000) 3 days ago [-]

    For users who aren't familiar with 4chan - this post describes only one board - /pol/, where you can find hateful posts about every race and religion. 4chan have 30+ boards in total

    AgentME(10000) 3 days ago [-]

    Many people will downplay this, saying that the alt-righters on 4chan were only trolls, or were only a few people sockpuppeting to make it look like there were many, or that these people were already alt-right and that 4chan didn't actually influence anyone into it (and that 4chan's userbase merely cycled out to a set of new alt-right users), but I have to say that's all wrong. I was in several different online communities 2010-2018 of people who met through 4chan, and a startling number of people did actually adopt alt-right politics over this timeframe after I had first met them. I think people who downplay how common radicalization on 4chan was didn't have as clear of a picture as this experience gave me.

    Ferret7446(10000) 2 days ago [-]

    Yes, QAnon is a joke, as was the white power hand sign and microwave charging iPhones, among hundreds of others.

    There is no 'baby filter' on 4chan. You are solely responsible for believing and/or not being offended by anything. Well, that is true everywhere on the Web, but there is zero veneer of it on 4chan vs the partial safety bubbles you get on other sites.

    WindowsDev(10000) 3 days ago [-]

    Is the source code which leaked everything one would need to host their own copy of the site?

    technion(1631) 3 days ago [-]

    There are tonnes of open source clones on github, source code to run the site is nothing special. You still need users.

    kaiokendev(10000) 3 days ago [-]

    The site has an API for reading posts [0]. It works (worked?) quite well. For making posts, you'd need to write your own functionality that forwards the CAPTCHA and post timers.

    [0]: https://github.com/4chan/4chan-API

    PaulRobinson(10000) 3 days ago [-]

    No, you'll need servers and enough network capacity to handle the load, an understanding and supportive hosting provider, a law degree or enough money to pay somebody with one to keep you out of court/jail/prison, a network of degenerates to provide traffic and content and/or a copy of the existing 4chan content, a stomach of steel to deal with the content moderation duties, and a moral compass so warped you think hosting degrading and illegal content is 'just liberalism and freedom of speech' and not something that needs a second thought by any right-minded person.

    But sure, if you have all that and the source code, you're all set. Godspeed!

    ttw44(10000) 3 days ago [-]

    We've heard it time and time again that 4chan is the so called 'last bastion of free speech on the internet' when this so called free speech is just being unapologetically racist and antisemitic. I hope its gone for good.

    blacktits69(10000) 3 days ago [-]

    you think these are akin to endangered species? these are humans collectivizing and cloaking under maladaptive pretenses. you're advocating for empowering polio because it is life and deserves a chance.

    DaSHacka(10000) 3 days ago [-]

    Halfchan's likely been around longer than you have and will just as likely remain around long after you're gone

    y-curious(10000) 3 days ago [-]

    I, too, prefer to see my vulgar memes served by an AI algorithm alongside ads. Sooooo much better!

    /s

    kittikitti(10000) 3 days ago [-]

    Yes, and everywhere else people have to worry about being deported for pointing out Israel's war crimes. At least no one needed to worry about that on 4Chan, but seeing an anonymous racist meme is even worse for people like you.

    soon_to_be(10000) 3 days ago [-]

    4chan being gone for good would've been a bad thing regardless of your views. All those people who used to come there and just talk wouldn't just cease to exist nor stop feeling the way they feel. At the very least, it's the devil you know.

    snvzz(2530) 2 days ago [-]

    >unapologetically racist and antisemitic.

    Anyone who's actually familiar with 4chan knows that posts containing any of that are cracked on hard, both by other users (replies calling it out) and janitors (delete+ban).

    lysp(2984) 2 days ago [-]

    > racist and antisemitic

    There was a leak of the political channel by poster's country.

    According to that post, the top posting country by far (226M posts) is also the same country that is at the receiving end of antisemitism.

    gherkinnn(3616) 3 days ago [-]

    You know, I always found Twitter (even pre-X) to be worse than 4chan ever was. Not in obvious terms, but in how it fucked with your head.

    1970-01-01(1814) 3 days ago [-]

    This is a pretty good take! It's because you could verbally attack and fight the 4chan idiots with a swarm of common sense and be lauded for doing that job.

    Doing the same on X will just get you banned for whatever reason Elon feels is best 'for the community'.

    amadeuspagel(403) 3 days ago [-]

    Browsing different forums helps you recognize how discourse is shaped by different feedback loops, how people troll on 4chan or conform on reddit, rather then assuming that twitter is real life.

    carabiner(1041) 3 days ago [-]

    I received really heartfelt (to me) and sincere life advice on 4chan. I think the fact that it's anonymous without a real karma/voting system means there's a lot less ego-driven, self-centered posting. People don't try to attack as much or have bitter back-and-forths as much as twitter, reddit. They might argue for a bit and then just say f it and move on. But there's no motivation for ragebait, karma farming like there is on twitter.

    arkis22(10000) 2 days ago [-]

    I like this quote from a great philosopher of our time: https://knowyourmeme.com/photos/1273406-tyler-the-creators-c...

    the anonymity makes it kind of the only site where thats true

    underseacables(2713) 3 days ago [-]

    I have been to 4chan maybe 4 times in my life. The first was like ok.. Then I visited /b and LOL'd for a couple of hours. Then it just got redundant and depressing. It really is the arsehole of the internet, but some people seem to find it useful.

    Blikkentrekker(10000) 2 days ago [-]

    > but some people seem to find it useful.

    Honestly, it filled a very specific hole for me that I found nowhere else. Everyone is talking about the "unfiltered content" and all those things but to me it was mostly just topical. It was really one of the few places where one could get a good discussion on the internet about Japanese female-oriented entertainment which I'm well aware isn't the first thing people think about with 4chan but pretty much every other forum about Japanese entertainment is completely dominated by male-oriented entertainment, except when they go out of their way to specifically make a board catered to female-oriented entertainment, but that has the side effect that people on those boards end up talking more about gender politics than about the entertainment itself and I just want to talk about my favorite television shows and comic books and really don't care about all the politics.

    4chan by it's nature doesn't drown out minority tastes and voices. This really isn't just a "female-oriented entertainment" thing but really any minority taste that just gets drowned out on most boards to the point that it disappears. The only other place I know where one can do this is Tumblr, more or less, but it's a very different experience, not necessarily better or worse but there just isn't this kind of "live discussion" atmosphere and vibe going on on Tumblr about episodes that are currently airing where people post small comments as the episode is airing and they're watching it. It's more for long impressions after it was aired and it doesn't have the same degree of interaction, it's a blogging place, not a message board.

    As said, it isn't just that but "obscure taste" in general. You can make a thread on 4chan about some really obscure piece of fiction that no one knows and get a discussion going, half with the people that do it know, in part because it's an imageboard so they're drawn in to an image they recognize and it stands out, and half with people that never heard of it before, see the images in the thread, see it looks interesting and try it out. The images are the key I feel, it lowers the barrier of entry for people to try out something obscure because they see the images which lures them in. It was one of the best places to get a discussion going about some obscure piece of fiction which Tumblr doesn't do either, the only things that are being discussed are the really big titles. There are so many relatively obscure titles I enjoy I will possibly never get to discuss with anyone in my life again if 4chan not come back. I know many of those titles from 4chan because people constantly promote and share fairly obscure things there and the images again sell it.

    pfdietz(10000) 3 days ago [-]

    It was always possible to ID 4chan posters via court orders, wasn't it? I mean, Sheriff Mike Chitwood had 3 (or was it 4) people who posted death threats against him there arrested

    matheusmoreira(10000) 3 days ago [-]

    Of course. I remember reading transcripts of Cristopher Poole cooperating in court during a trial. He used to straight up tell users he would fully cooperate with authorities if required. Nobody there is in the business of going to jail.

    You're anonymous to other users. Unless you're behind seven proxies, connecting your posts to your real identity is as simple as correlating 4chan logs with ISP logs. Usually that requires court orders so it tends to happen in response to real offenses. Insulting each other with slurs isn't enough for a court order so it's fine. Chances are the NSA knows all your posts regardless.

    bitbasher(10000) 3 days ago [-]

    Meh, I don't feel bad.

    The worst interview I ever had in tech was with Christopher Poole when he was founding canv.as, it's hard to feel bad for him.

    johnnyjeans(10000) 3 days ago [-]

    What was bad about the interview? Can you share any details?

    pizzadog(10000) 3 days ago [-]

    Can you expand on this? I remember canv.as, it was a weird but interesting project but it seemed doomed from the outset.

    anigbrowl(54) 3 days ago [-]

    He sold the site years ago so this is not affecting him in the slightest.

    shipscode(10000) 3 days ago [-]

    The take on 4chan on here is super intriguing. I always felt that the current social media/doomscroll/memesharing landscape which has become so common worldwide is indiscernable and in some ways worse than 4chan. It feels like 4chan left it's homepage and went worldwide sometime in the early 2010s when iPhone-style phone use became more commonplace.

    I remember that 4chan users had more honor than users on the internet today. One example would be 4Chan's 'Not your personal army' mentality vs. the widespread doxxing/'call their place of employment!' witch hunts, driven by huge accounts on IG/Tiktok/etc, that hit normal people daily.

    The modern social media landscape has become far more hectic, harmful, and downright scary than 4chan. Dodging explicit imagery is harder on Instagram's explore page than on 4chan, and the widespread popularization of OF creators has zero bounds across the socials. DOXXING is no longer frowned upon and now commonplace. And memes have become less unique and funny and more commoditized.

    gtirloni(1339) 3 days ago [-]

    Isn't that the path that most platforms follow once they get mildly popular?

    amadeuspagel(403) 3 days ago [-]

    'Not your personal army' goes father then not doxxing. It's a rejection of any attempt to imagine a community of strangers, united by hatred of a scapegoat.

    foolfoolz(10000) 3 days ago [-]

    modern 4chan has a certain authentic charm to it. this is missing from most other places. you have to sift past loads of junk to get it, but you have to do that on any app to get the content you want.

    with no names, likes, virality, accounts, etc there's less focus on writing the basic filler comments. less companies trying to sell me stuff. less focus groups trying to tell me what to think. and with less censorship you end up seeing more creativity

    profmonocle(10000) 2 days ago [-]

    > 4Chan's 'Not your personal army' mentality vs. the widespread doxxing/'call their place of employment!' witch hunts

    That's too generous. 'Not your personal army' started because 4chan had a well-earned reputation for harassment - usually raiding other web sites, but often targeting individual people who caught their attention for one reason or another.

    The 'not your personal army' slogan came about because people who were very aware of this reputation were showing up, hoping to make a web site or person they disliked the next target. That got annoying fast, hence they told those people to go away.

    It wasn't a moral stance against target harassment - far from it. It was a stance that the group mind will choose the next target when they feel like it - not because some rando is mad at their ex or something

    KennyBlanken(10000) 2 days ago [-]

    Multiple white supremacist mass shooter have been 4chan users and they cheered on the Buffalo shooter who was live updating during his murder spree: https://www.thetrace.org/newsletter/4chan-moderation-buffalo...

    The christchurch shooter was a 4chan regular https://theconversation.com/christchurch-terrorist-discussed...

    The whole 'boogaloo' white nationalist/supremacist movement started on 4chan:

    https://www.splcenter.org/resources/reports/mcinnes-molyneux...

    'Not your personal army' but 4chan users would routinely dox, swat, and otherwise harass people all the time.

    I have no idea why people are whitewashing 4chan so hard.

    PixelForg(3609) 2 days ago [-]

    My main problem with 4chan is how they talk, like the language they use. They really don't care about anyone's feelings and show a lack of empathy. Unfortunately this has been spreading to other social media as well.

    Imagine how good a place it could have been if people over there talked like people on HN.

    14(10000) 2 days ago [-]

    As a parent I have seen first hand some of the bullying teens face on some of the mainstream platforms. Kids being bullied in an instant on snap where things are spread around at lightning speed for one example. But I have also seen some bad things happen on 4chan. People releasing nudes of their exes or posts where users submit clothed pictures of girls they want to see photoshopped naked and a person does so. Or the rekt threads with gore content blocked on most other sites. I guess my feeling is that no matter the site you will always get bad actors.

    rincebrain(2251) 2 days ago [-]

    The memetic speedrun that's so common now on social media has some roots there, to be sure, but I think a lot of it was parallel evolution combined with cribbing things that were already polished from years of metaphorical rock tumbling on 4chan, in the best ifunny.com style.

    The ubiquitous expectations for modern humor among younger and even middle-aged people rely a lot more on knowing not just the joke but the culture and context it evolved in, and that sort of thing very much dominated bubbles of terminally online people before many people became terminally online and there was an expectation that everyone would know what you meant if you sent an image macro as the entire reply to an email.

    You can find example after example from not that long ago of people who are not so terminally online being completely perplexed, on TV and otherwise, and memes like 'what the fuck is he saying' 'let's get you to bed grandpa' about the cultural disconnect.

    Unfortunately, this sort of attention minmaxing without enough deliberation and learning around it produces people who are uncritical of what they consume and just want the next hit.

    Ferret7446(10000) 2 days ago [-]

    4chan will always be superior than modern social media to me, for one very simple reason: all posts are anonymous and there is no voting/ranking.

    Each and every post must stand alone and be judged alone. You do not know if it was posted by someone you hate or adore. It doesn't get hidden or promoted based on what a bubble voted. You see the post and you must judge it alone.

    cobson(10000) 3 days ago [-]

    gem

    sensanaty(10000) 3 days ago [-]

    no coal to be found here

    cherryteastain(10000) 3 days ago [-]

    Rip 4chan. For all the bad it did, 4chan also made at least one real contribution to science [1], specifically to the study of superpermutations (aka the Haruhi problem), which was cited by genuine academics. We should try to remember it by that.

    [1] https://www.theverge.com/2018/10/24/18019464/4chan-anon-anim...

    lwidvrizokdhai(10000) 3 days ago [-]

    Oh wow, that's genuinely cool.

    anigbrowl(54) 3 days ago [-]

    I think this is more of a temporary concussion, it'll be back up by the weekend.

    spacemule(10000) 2 days ago [-]

    I'm not understanding the issue. The article isn't so clear to me. Would you mind clarifying what problem they solved?

    Per my understanding, there is a show with 14 episodes that the viewer wants to watch in every order possible. How is this not just 14 factorial?

    I know this can't be the problem, but it's just not clear to me from the article.

    Edit: I found this link that explains it to anyone else as confused as I was: https://old.reddit.com/r/explainlikeimfive/comments/1bvn1rz/...

    greazy(10000) 3 days ago [-]

    4chan is a reflection of the depraved, extreme side of humanity. Twitter has taken on the mantle of 'asshole of the internet', but I think the rotten apples post in both.

    4chan is oddly accepting of gay and trans people. I've seen gay and trans porn side by side with bbc and bwc porn posts. Strange to see racist trans porn lovers.

    I like 4chan for the minor boards, not /pol/ or /b/. But /boardgames/ and /dyi/ and /international/. The absurd humor, green texts that make absolutely no sense, or ones that lead down a strange and wonderful path.

    I like being anonymous on the internet.

    Blikkentrekker(10000) 3 days ago [-]

    > 4chan is oddly accepting of gay and trans people. I've seen gay and trans porn side by side with bbc and bwc porn posts. Strange to see racist trans porn lovers.

    It only seems odd because many people interpret this through a U.S.A. "culture war" lens and "gay people". You believe they're "accepting of gay people" in the sense of that culture war because of the "gay porn". In reality, they take more of a classical Graeco-Roman approach to it and believe it's completely normal for the average male to be attracted to cute twinks as the Romans did and often even reject the very notion of "sexual orientations" to begin with. Their "support" is definitely not in the sense of what one would expect of the U.S.A. "culture war", jokes such as the below illustrate well what the culture is:

    https://i.pinimg.com/736x/55/fe/d1/55fed16b625f9c5869587908f...

    ashleyn(10000) 3 days ago [-]

    Neither site is a den of repute but it's notable that I can still say the word 'cisgender' on 4chan, or openly insult moot and call him whatever I want without being banned for it (while mainstream sites select who is protected from harassment and who isn't, either along political lines or who owns the site).

    panny(944) 2 days ago [-]

    >4chan is a reflection of the depraved, extreme side of humanity.

    I think moderated forums like this one are the reflection of depraved and extreme. After all, you need to be a depraved and extreme host to try to micromanage what everyone says. People who run sites in such a way must have depraved power fantasies.

    Just set up a host and allow people to speak their minds? That sounds like someone who believes the good of humanity will triumph, and the right to speak freely is a fundamental one. Section 230 exists and puts the responsiblity of what is said directly on the poster, not the host. So there really seems no reason not to do this... unless you have depraved and extreme power fantasies about controlling what other people say and think.

    tannhaeuser(1013) 3 days ago [-]

    Why are we speaking in the past tense here? Is it established that 4chan is going down?

    geor9e(10000) 2 days ago [-]

    It is down. It was up in the past. Past tense seems to make the most grammatical sense. But I get why it adds ambiguity about it's future.

    robotnikman(10000) 3 days ago [-]

    I did some digging and the hacker posted which exploit he used.

    Apparently some boards allowed uploading PDF files, but the site never checked if the PDF file was an actual PDF file. Once a PDF file was uploaded it was passed to a version of Ghostscript from 2012 which would generate a thumbnail. So the attacker found an exploit where uploading a PDF with the right PostScript commands could give the attacker shell access.

    lastcobbo(10000) 3 days ago [-]

    Bobby Tables can't keep getting away with this

    ranger_danger(3662) 3 days ago [-]

    Why would you say how you did it? Now they can't do it all over again when it comes back /s

    0x303(10000) 3 days ago [-]

    Got a source? Not doubting, just curious.

    loves_mangoes(10000) 3 days ago [-]

    That checks out. Years ago I noticed a vulnerability through the photography board. You'd upload your pictures, and 4chan would display all the EXIF info next to the post.

    4chan's PHP code would offload that task to a well-know, but old and not very actively maintained EXIF library. Of course the thing with EXIF is that each camera vendor has their own proprietary extensions that need to be supported to make users happy. And as you'd expect from a library that parses a bunch of horrible undocumented formats in C, it's a huge insecure mess.

    Several heap overflows and arbitrary writes all over the place. Heap spray primitives. Lots of user controlled input since you provide your own JPEG. Everything you could want.

    So I wrote a little PoC out of curiosity. Crafted a little 20kB JPG that would try to allocate several GBs worth of heap spray. I submit my post, and the server dutifully times out.

    And that's where I'd like to say I finished my PoC and reported the vulnerability, but in fact I got stuck on a reliable ASLR bypass and lost interest (I did send an email about the library, but I don't think it was actively maintained and there was no followup)

    My impression from this little adventure is that 4chan never really had the maintenance and code quality it needed. Everything still seemed to be the same very old PHP code that leaked years ago (which included this same call to the vulnerable EXIF library). Just with a bunch of extra features hastily grafted and grown organically, but never dealing with the insane amount of technical debt.

    qingcharles(10000) 3 days ago [-]

    This is such a common hole. One of my early hacks was a forum that allowed you to upload a pfp but didn't check it was actually an image. Just upload an ASP file which is coded to provide an explorer-like interface. Found the administrator password in a text file. It was 'internet' just like that. RDP was open. This was a hosting provider for 4000+ companies. Sent them an email. No thank you for that one.

    Always check what is getting uploaded.

    Funes-(862) 3 days ago [-]

    Reminds me of how people were crashing the PSP's XMB with BMP and TIFF files twenty years ago. I was just a kid, and began 'pirating' every one of my classmates' consoles (some in exchange for a small amount of money). Good times.

    jrochkind1(2075) 3 days ago [-]

    This is an old well known exploit.

    Don't run versions of ghostscript from 2012?

    casey2(10000) 2 days ago [-]

    Such a useless feature too. There was like 1 or 2 book sharing threads in sci in the last few years and 1 in arts and crafts and 99.9% of people don't even know about it and just use offsite hosts

    xattt(10000) 2 days ago [-]

    > could give the attacker shell access.

    How do these exploits work? Does it open an SSH port somewhere or does it show up as a browser-based terminal?

    wnevets(10000) 2 days ago [-]

    > Ghostscript from 2012

    Has there been a single year since 2012 that didn't include a new ghostscript RCE? Exposing ghostscript to the internet is dangerous.

    skilbjo(10000) 2 days ago [-]

    pretty interesting discovery if that was the hack.

    do you know what the legal implications are for this?

    if the company that owns 4chan finds the identity of the attacker, could they sue him in civil court? or do they send whatever logs they have to the FBI and the FBI would initiate a criminal prosecution? also what is the criminal act here? is it accessing their systems, or is it posting the data that they found 'through unauthorised means' on a public channel like twitter? does the 'computer fraud and abuse act' apply?

    like if you found this exploit, and sent it to the company in good faith (ie a 'good hacker'), are you free from prosecution? and what is the grey area, like if you found this exploit and then just sat on it for a while (let's say you didn't report it to the company, but let's also say you didn't abuse it, ie leak private data to twitter)

    nailer(487) 2 days ago [-]

    > Apparently some boards allowed uploading PDF files

    Some boards used to allow PDF files to upload too.

    brundolf(477) 2 days ago [-]

    Periodic reminder that a PDF is a turing-complete script that generates a document and should be treated as foreign code

    kriro(10000) 2 days ago [-]

    Fascinating, that has been the attack vector in a couple of hackthebox like systems I've done over the last couple of years. The easier ones usually just require file name changes, the medium ones intercepting and mimetype change.

    dwedge(10000) 2 days ago [-]

    So the article blaming out of date PHP was off base?

    jofla_net(10000) 2 days ago [-]

    Same or similar thing happened to Gitlab. it used some common parsing library that worked on images and perl scripts... you can see where this is going

    bbuerhaus(10000) 1 day ago [-]

    Interesting. I published research on this style of attack in 2019 when I found Slack and a few other big websites vulnerable to it. In their cases, LibreOffice was passing the files off to specific parsers based on magic headers rather than file extensions.

    https://buer.haus/2019/10/18/a-tale-of-exploitation-in-sprea...

    We published a PoC for file write as part of our research and bug bounty submissions:

    https://gist.github.com/ziot/fb96e97baae59e3539ac3cdacbd0943...

    Uptrenda(1884) 2 days ago [-]

    Watching hacker news try use cold analytical intellect to deconstruct 4chan's jokes and culture (and still missing the point) has got to be the funniest joke ever. Perhaps a little more analysis will yield the answer to understanding the complexity of a green frog or running bear. Though I wouldn't count on it. It has to mean something nefarious. Much like the soft 'schlop schlop schlop' of a dog's tongue lapping up water -- its meaning to us is a mystery.

    Loughla(10000) 2 days ago [-]

    From what I can tell, there's not much analysis of 4chan going on here, but more people just sort of remembering their time on the site.

    That's what this has been for me; a walk down memory Lane to my teenage edgelord years.

    EcommerceFlow(10000) 2 days ago [-]

    /lit/ is a goldmine, I've discovered so many amazing books there. Everywhere else on the web is algorithm or voting skewed so no real opinions can be shared

    a_bonobo(3099) 2 days ago [-]

    I agree, I'd even go so far and say it's one of the best places on the internet to discuss 'serious' books (within all the rampant troll posts). Book discussions on reddit are far too positive when it comes to terrible books, /lit/ will call a bad book a bad book. Plus there was always an undercurrent in interest in 'obscure' books - there are great reading charts out there for all kinds of literatures and languages made by /lit/ users.

    weberer(3513) 2 days ago [-]

    They even wrote their own book collaboratively

    https://www.goodreads.com/book/show/28282177-hypersphere

    HaZeust(10000) 2 days ago [-]

    There are, of course, many people with memories of 4chan that precede that of mine (oldf*) - I could only even articulate what I was seeing on 4chan at the age I was around 2014. But by 2015 - with only 1-2 years of experience on the site - I noticed a drastic downturn of the authenticity in posts and comments that I was used to. Then, I saw quality of topics and speaking points go down in 2020. And finally, I saw the social fabric of 4chan itself go down essentially right after Omegle was shut down. By mid-2024, I couldn't even trust it for contrarian or less-conventional (or, frankly, brutally honest) viewpoints of topics they purported to care about.

    And honestly, as things got better in my life and I went out to be more recreational, I went from going on 4chan once a day - to once a week - to once a month - and finally, to only when I wanted to see edgy takes on divisive current events.

    I'll miss all that, despite all it lost over the years. And I'll miss the element of design and mannerisms in its userbase. It required an upfront investment to even understand how to engage with, and a 'lurk moar' attitude. RIP.

    Edit: It was also very crazy watching small groups of people turn insider-jargon into mainstream terminology. I'll also never forget watching the thread of QAnon's conception in real-time. Crazy stuff originated there - both in substance and meaning.

    Loughla(10000) 2 days ago [-]

    I was on there almost from the beginning. Early 2004.

    It was never good, but it definitely went entirely to shit when all the alt-right nut bags started flooding the site with nonsense starting around 2014-15. I have to believe it was a coordinated effort, it just seemed too immediate across the entire site.

    Havoc(10000) 2 days ago [-]

    4chan sized site that gets attention from all sorts of unique people...ran ancient php? Ouch

    gaiagraphia(10000) 2 days ago [-]

    Makes you wonder what all these 'advanced frameworks' have actually offered the internet..

    (hard mode: don't mention advertising)





    Historical Discussions: Making Software (April 14, 2025: 679 points)

    (679) Making Software

    679 points 4 days ago by calme_toi in 10000th position

    www.makingsoftware.com | Estimated reading time – 3 minutes | comments | anchor

    Have you ever wondered how a touch screen knows you are touching it? Well, it has these layers of transparent metal electrodes embedded in the display. When your finger gets close to the screen it causes a disturbance in the magnetic field that the electrodes sense.

    FIG_002

    Because the electrodes are laid out on a grid, they can report back the x and y co-ordinates of the disturbance to the operating system. Pretty neat.

    Or maybe you've wondered why we call it a Gaussian blur? When we blur an image, we look at all the neighbouring pixels and multiply them by a matrix of weights called a kernel.

    FIG_003

    The most common type of kernel has a gaussian distribution, meaning it gets stronger towards the middle and weaker at the edges. This produces a more realistic blur without being too computationally expensive.

    Maybe you've always wanted to know how the pen tool works in Figma and what those handles actually do when you move them.

    t =
    FIG_004

    They control the points on a bezier curve which is a cool piece of math we use to draw curves in vector graphics, like fonts and SVGs.

    But of course, our screens are made of pixels and struggle to display smooth curves. So we have to take these curves and figure out how to display them so that they represent the shapes as accurately as possible.

    FIG_005

    This is called rasterisation but it's not as simple as it seems and we need a whole bunch of clever tricks like, anti-aliasing to trick our eyes into thinking we are looking at straight lines.

    If you've ever wondered about any of these things or if they've sparked your curiosity, then this is for you.

    This book won't teach you how to actually make software - it's not a tutorial or a guide but rather something more interesting than that. It's a manual that explains how the things you use everyday actually work.

    As everything around us has become more complicated our understanding of technology has diminished. It used to be that we needed to understand our tools deeply but today we understand them in a shallow, abstracted way.

    It won't make you a better designer or programmer tomorrow - there's nothing actionable in here. But knowing how things work comes in handy when you find yourself out of your depth. Or at the very least, you can pretend to be smart in front of your friends.

    You don't need to be technical to read this - there are a lot of pictures and diagrams to do the heavy lifting. You just need to be curious.




    All Comments: [-] | anchor

    scop(3374) about 19 hours ago [-]

    I'm sensing an uncomfortable amount of human labor behind this. Even worse it appears to be labor done for the sake of the thing itself. shudder Terrible and makes me feel bad about myself. Back to the AI slop I go!

    exe34(10000) about 15 hours ago [-]

    some people got all the discipline!

    MatthiasWandel(10000) about 22 hours ago [-]

    nice, but spotted several inaccuracies on the landing page. perhaps not the best reference.

    zh3(10000) about 16 hours ago [-]

    Yes, very first item:-

    > When your finger gets close to the screen it causes a disturbance in the magnetic field that the electrodes sense.

    Sure should be capacitive?

    XCSme(10000) about 22 hours ago [-]

    Nice landing page, but I'm confused. The header is about software, but many diagrams are about hardware.

    stronglikedan(10000) about 21 hours ago [-]

    It'll come in handy for when I try to destroy a hard drive by getting the actuator arm to move back and forth at the drive's harmonic frequency.

    georgewsinger(3043) about 21 hours ago [-]

    The author should make a meta-entry about how he makes the (insanely beautiful) diagrams in the book (ideally walking through the process).

    ivl(10000) about 21 hours ago [-]

    The pair of animations on the page are beautifully done, not just technically but aesthetically as well. If the rest of the book is like that I'll be getting a copy.

    psadauskas(3667) about 21 hours ago [-]

    In the FAQ:

        07 How do you make the illustrations?
        By hand, in Figma. There's no secret - it's as complicated as it looks.
    behnamoh(120) about 14 hours ago [-]

    He has more content with figures on another platform: https://typefully.com/DanHollick

    meindnoch(10000) about 21 hours ago [-]

    The table of contents seems to have a whole chapter on 'AI and ML' before starting the next chapter with 'What is a byte?'. Funny.

    0xEF(10000) about 20 hours ago [-]

    I'm getting the impression that the book will not be organized in any real linear or iterative order, just sections that allow you to jump around and read what you want.

    felipemesquita(2286) 1 day ago [-]

    The subtitle "A reference manual for people who design and build software" seems at odds with the description:

    > This book won't teach you how to actually make software [...] It's a manual that explains how the things you use everyday actually work. You don't need to be technical to read this - there are a lot of pictures and diagrams to do the heavy lifting. You just need to be curious.

    chromanoid(10000) 1 day ago [-]

    yeah, I totally agree.

    It's like there was a shift in goals after the author made the title. Maybe explaining the basics was so much fun, that the initial idea got lost... I also don't think knowing how a crt monitor works is instrumental for people who want to make software. The domain is cool, but it doesn't match the content. whatissoftware.com might be better.

    when it is explained how pixel, gpu or llm work, I would at least expect some intro to Von-Neumann-Architecture.

    dijksterhuis(3584) 1 day ago [-]

    A thing for a specific audience, not a thing with a specific purpose, is how i read the subtitle.

    the subtitle doesnt say what the reference manual is a reference for. just that software people might like it.

    jaapz(10000) about 22 hours ago [-]

    Audience: people who design and build software

    Subject: how the things used every day by people who design and build software work

    Not the subject: how to design and build software

    Apfel(10000) 1 day ago [-]

    Stunningly beautiful landing page. I would never normally comment on the aesthetics of anything in the dev sphere but that completely blew me away. I'll preorder for sure.

    I'd echo the other comment mentioning that a coffee-table version of this would be great.

    neogodless(1434) about 24 hours ago [-]

    Looked up the author's main site:

    https://alcohollick.com/

    > Dan Hollick.

    > Design, technically.

    Blogs about using Figma to create things (like this).

    dimal(10000) about 22 hours ago [-]

    Agreed, it's aesthetically beautiful. It should be a coffee table book. But for the web, it has terrible usability. Really, really terrible in multiple ways. My comments will be harsh, but since the creator is obviously very skilled, he should know better.

    Why multicolumn text? So it looks like an old printed manual? At first view, it's not clear where the first column ends. This is not something we see on the web (because there's no need for it), so it's not clear that the content flows from one column to the next. When the viewport is sized to two columns, I need to scroll down to finish the first column, then scroll back up to read where it continues on the second column.

    Justified text is bad on the web. We're starting to get some better features to make it usable, but it's not widely supported, so right ragged text is always more readable.

    There are numerous animations that never stop. This is highly distracting and makes it very difficult to read the text.

    I'm sure there are more issues but the site is so unusable for me, I won't continue trying.

    So, yeah. It's gorgeous design. I love it. But it's for the sake of aesthetics, not for the user's sake. It's completely unusable to me. Since this is the first installment, I hope the designer will keep the aesthetics but improve the usability in future installments.

    tenacious_tuna(10000) about 19 hours ago [-]

    This reminds me aesthetically of The Way Things Work [1] which was one of my favorite books as a kid. Having a similar wordly reference as an adult has been a goal for a while.

    [1] https://www.indigo.ca/en-ca/the-way-things-work-newly-revise...

    rkuykendall-com(3456) about 19 hours ago [-]

    A cool recent one for large-scale infrastructure is 'Engineering in Plain Sight':

    https://practical.engineering/book

    berelig(10000) about 19 hours ago [-]

    I've been looking around for a book like this that has scientific/engineering topics presented in a bite-sized fashion so a teenager (or even adults) can discover which ones pique their interests and are worth a deeper dive.

    Would this book work or is it a bit too simple? Does anyone have another book to recommend?

    Acrobatic_Road(1984) about 19 hours ago [-]

    I had the same thought. I don't remember if it was exactly this book, but I remember reading a book that explained all kinds of engineering concepts for my kid brain. And I remember the latter part of the book had some computer science content like how compression works.

    MisterTea(10000) about 16 hours ago [-]

    Amazing book for sure. David Macaulay has a few other books, four of which were turned into educational animated PBS specials. My mother got us the box set from PBS years ago.

    khaledh(3673) 1 day ago [-]

    Very nice. The design reminds me of a website that I forgot to bookmark a long while ago, it was about explaining network protocols at the wire level, and it had some of the most amazing visuals that I ever saw. It's a shame that I forgot what it was, and googling doesn't help. If anyone knows what I'm talking about please share the link.

    virogenesis(10000) 1 day ago [-]

    Let us know if you find that site asking for my kid :)

    truetraveller(10000) about 19 hours ago [-]

    Where the visuals interactive or static?

    vivzkestrel(10000) 1 day ago [-]

    Commendable effort, i would also like to recommend some topics/chapters/lessons whatever you want to call it - How microprocessors and microcontrollers work - Types of storage => RAM / ssd/ hdd / flash drives and storage formats NTFS, FAT32 - OS stuff (theading, multiprocessing, coroutines, scheduling, paging, priority) - Some data structures stuff (trees, stacks, queues, graphs etc)

    joshbaptiste(3460) about 21 hours ago [-]

    CoreDumpped https://www.youtube.com/@CoreDumpped on YT is also a great animated reference or refresher on such topics..

    vivzkestrel(10000) about 8 hours ago [-]

    also would like to add a section about packets, network packets, tcp packets, udp packets, http packets. would be real nice to see what each packet is like in a very visually friendly way

    clausz(10000) 1 day ago [-]

    How old is this? Copyright at the bottom of the page says '1990'.

    junon(2556) about 23 hours ago [-]

    It's definitely not from 1990.

    gregschlom(3670) about 19 hours ago [-]

    I think it's a cool little easter egg. Goes well with the technical illustration of a 3.5' floppy disk at the top and the pixelated font for the titles.

    Also, maybe the author meant to say he started thinking about this book since 1990, too.

    Either way the copyright year doesn't matter. You can put anything

    croemer(3663) about 24 hours ago [-]

    I'm confused, I can't find the content anywhere. I clicked on the TOC items but that just underlined the words. Is this just an announcement?

    falcor84(10000) about 24 hours ago [-]

    Yes, just an announcement. There's an FAQ at the bottom:

    >When will it launch?

    > I'm not entirely sure yet. I'd love to get it out before the European summer this year. It's a lot of work to illustrate everything so you might need to have some patience.

    WillAdams(10000) about 23 hours ago [-]

    Which chapters are done?

    I was very excited to go to (and link/reference) Chapter 2: Fonts and Vectors but it doesn't seem to be done yet?

    The progress indicator shows that this is only just begun?

    croemer(3663) about 21 hours ago [-]

    No chapters are done - it's a bit weird that this fact is buried deep down in the FAQs. I would have expected the fact it's an announcement to be mentioned above the fold.

    kookamamie(10000) about 23 hours ago [-]

    Looks like form-over-function to me. Cool looks, little content.

    scubbo(10000) about 19 hours ago [-]

    It's just an announcement page, for now.

    yapyap(10000) about 23 hours ago [-]

    Honestly you had me at the graphics, really neat.

    game_the0ry(10000) about 17 hours ago [-]

    Same. That site is masterful example of just cool design.

    kmoser(10000) about 18 hours ago [-]

    The illustrations are definitely the secret sauce that makes this so engaging and informative. I'd also like to see links to where I can learn more about particular topics online. For example:

    > Or maybe you've wondered why we call it a Gaussian blur?

    Nowhere is Carl Friedrich Gauss mentioned, which is unfortunate. This should really link to the Wikipedia entry for https://en.wikipedia.org/wiki/Gaussian_blur.

    sfn42(10000) about 18 hours ago [-]

    When you know the term gaussian (blur) it's trivial to do a Google search

    pier25(1375) about 17 hours ago [-]

    How were the animations done?

    From inspecting the DOM it's just animated SVGs but I'm guessing these were authored with some other tool.

    Initially I thought these were made with Rive but AFAIK their engine runs on <canvas>.

    oneoverten(10000) about 12 hours ago [-]

    Just figma apparently, it's disclosed in the FAQ.

    robocat(3527) about 15 hours ago [-]

      When your finger gets close to the [touch] screen it causes a disturbance in the *magnetic* field that the electrodes sense.
    
    Surely they mean electric field - for a capacitive touch screen.
    constantcrying(10000) about 15 hours ago [-]

    How do you cause a disturbance in an electric field without causing a disturbance in the magnetic field?

    marcosdumay(10000) about 13 hours ago [-]

    Well, it's a disturbance on the AC properties... so both.

    But yeah, we usually talk about capacitance as an 'electrical-only' phenomenon. It's quite weird to se it referred as magnetic.





    Historical Discussions: Open guide to equity compensation (April 13, 2025: 646 points)
    Open Guide to Equity Compensation (January 11, 2016: 482 points)
    The Open Guide to startup offers, stock options, equity compensation (December 03, 2015: 5 points)
    The Open Guide to Equity Compensation (March 16, 2021: 2 points)
    The Open Guide to Equity Compensation (March 02, 2020: 2 points)
    The Open Guide to Equity Compensation (August 14, 2019: 2 points)
    The Open Guide to Equity Compensation (March 11, 2024: 1 points)
    The Open Guide to Equity Compensation (June 21, 2023: 1 points)
    Jlevy/og-equity-compensation: Stock options, RSUs, taxes – a guide for humans (December 28, 2017: 1 points)

    (646) Open guide to equity compensation

    646 points 5 days ago by mooreds in 17th position

    github.com | Estimated reading time – 145 minutes | comments | anchor

    The Open Guide to Equity Compensation

    ❇️ This guide is now published on Holloway. Read it there for search, booksmarks/highlights, expert commentary, and PDF/EPUB download.

    Equity compensation is the practice of granting partial ownership in a company in exchange for work. In its ideal form, equity compensation aligns the interests of individual employees with the goals of the company they work for, which can yield dramatic results in team building, innovation, and longevity of employment. Each of these contributes to the creation of value—for a company, for its users and customers, and for the individuals who work to make it a success.

    The ways equity can be granted as compensation—including restricted stock, stock options, and restricted stock units—are notoriously complex. Equity compensation involves confounding terminology, legal obscurities, and many high-stakes decisions for those who give and receive it.

    If you talk to enough employees and hiring managers, you'll hear stories of how they or their colleagues met with the painful consequences of not learning enough up front. Though many people learn the basic ideas from personal experience or from colleagues or helpful friends who have been through it before, the intricacies of equity compensation are best understood by tax attorneys, corporate lawyers, and other professionals.

    Decisions related to negotiating an offer and exercising stock options, in particular, can have major financial consequences. Because the value of employee equity is determined by the fate of the company, an employee's equity may be illiquid for a long time or ultimately worth nothing, while taxes and the costs of exercise, if they apply, may not be recouped. Even when a company is doing well, an employee may suffer catastrophic tax pitfalls because they didn't anticipate the tax consequences of their decisions.

    Understanding the technicalities of equity compensation does not guarantee that fortune will smile upon you as warmly as it did the early hires of Facebook. But a thorough overview can help you be informed when discussing with professionals for further assistance, make better decisions for your personal situation, and avoid some common and costly mistakes.

    The first edition of this work, written by the same lead authors as the one you're reading now, received significant feedback and discussion on Hacker News, on GitHub, and from individual experts. Now, Holloway is pleased to publish this new edition of the Guide. We've expanded sections, added resources and visuals, and filled in gaps.

    There is a lot of information about equity compensation spread across blogs and articles that focus on specific components of the topic, such as vesting, types of stock options, or equity levels. We believe there is a need for a consolidated and shared resource, written by and for people on different sides of compensation decisions, including employees, hiring managers, founders, and students. Anyone can feel overwhelmed by the complex details and high-stakes personal choices that this topic involves. This reference exists to answer the needs of beginners and the more experienced.

    Holloway and our contributors are motivated by a single purpose: To help readers understand important details and their contexts well enough to make better decisions themselves. The Guide aims to be practical (with concrete suggestions and pitfalls to avoid), thoughtful (with context and multiple expert perspectives, including divergent opinion on controversial topics), and concise (it is dense but contains only notable details—still, it's at least a three-hour read, with links to three hundred sources!).

    The Guide does not purport to be either perfect or complete. A reference like this is always in process. That's why we're currently testing features to enable the Holloway community to suggest improvements, contribute new sections, and call out anything that needs revision. We welcome (and will gladly credit) your help.

    We especially wish to recognize the dozens of people who have helped write, review, edit, and improve it so far—and in the future—and hope you'll check back often as it improves.

    This Guide currently covers:

    • Equity compensation in C corporations in the United States.
    • Equity compensation for most employees, advisors, and independent contractors in private companies, from startups through larger private corporations.
    • Limited coverage of equity compensation in public companies.

    Topics not yet covered:

    • Equity compensation programs, such as ESPPs in public companies. (We'd like to see this improve in the future.)
    • Full details on executive equity compensation.
    • Compensation outside the United States.
    • Compensation in companies other than C corporations, including LLCs and S corporations, where equity compensation is approached and practiced in very different ways.

    For these situations, see other resources and get professional advice.

    Our aim is to be as helpful to the beginner as to those with more experience. Having talked with employees, CEOs, investors, and lawyers, we can assure you that no matter how much you know about equity compensation, you will likely run into confusion at some point.

    If you're an employee or a candidate for a job, some of these may apply to you:

    • You've heard phrases like stock, stock options, strike price, ISOs, RSUs, 83(b) election, 409A valuation, AMT, or early exercise and know they are probably important but are mystified by what some of them really mean or whether they apply to your situation.
    • You're considering a job offer but don't know how to navigate or negotiate the equity component of the offer.
    • You're joining a startup for the first time and are overwhelmed by all the paperwork.
    • You're quitting, taking a leave of absence, or are being laid off or fired from a company where you have stock or options and are thinking through the decisions and consequences.
    • A company you work for is going through an acquisition, IPO, or shutdown.
    • You have stock in a private company and need cash.

    Founders or hiring managers who need to talk about equity compensation with employees or potential hires will also find this Guide useful. As many entrepreneurs and hiring managers will tell you, this topic isn't easy on that side of the table, either! Negotiating with candidates and fielding questions from candidates and employees requires understanding the same complex technicalities of equity compensation well.

    That said, this topic is not simple and we ask that readers be willing to invest time to get through a lot of confusing detail. If you're in a hurry, or you don't care to learn the details, this Guide may not be for you. Seek advice.

    Much of what you read about equity compensation was written by a single person, from a single vantage point. The authors and editors of this Guide have navigated the territory of equity compensation from the perspective of employees, hiring managers, founders, and lawyers. We do believe that the knowledge here, combined with professional advice, can make a significant difference for both employees and hiring managers.

    One of the difficulties for candidates negotiating equity compensation is that they may have less information about what they are worth than the person hiring them. Companies talk to many candidates and often have access to or pay for expensive market-rate compensation data. While some data on typical equity levels have been published online, much of it fails to represent the value of a candidate with their own specific experience in a specific role. However, even without exact data, candidates and hiring managers can develop better mental frameworks to think about offers and negotiations.

    On the other hand, challenges are not limited to those of employees. Founders and hiring managers also often struggle with talking through the web of technicalities with potential hires, and can make equally poor decisions when making offers. Either over-compensating or under-compensating employees can have unfortunate consequences.

    In short, both companies and employees are routinely hurt by uninformed decisions and costly mistakes when it comes to equity compensation. A shared resource is helpful for both sides.

    The Holloway Reader you're using now is designed to help you find and navigate the material you need. Use the search box. It will reveal definitions, section-by-section results, and content contained in the hundreds of resources we've linked to throughout the Guide. Think of it as a mini library of the best content on equity compensation. We also provide mouseover (or short tap on mobile) for definitions of terms, related section suggestions, and external links while you read.

    How This Guide Is Organized

    This Guide contains a lot of material. And it's dense. Some readers may wish to read front to back, but you can also search or navigate directly to parts that are of interest to you, referring back to foundational topics as needed.

    Equity compensation lies at the intersection of corporate law, taxation, and employee compensation, and so requires some basic understanding of all three. You might think compensation and taxation are separate topics, but they are so intertwined it would be misleading to explain one without the other. We cover material in logical order, so that if you do read the earlier sections first, later sections on the interactions of tax and compensation will be clearer.

    We start with Equity Compensation Basics: What compensation and equity are, and why equity is used as compensation.

    But before we get much further, we need to talk about what stock is, and how companies are formed. Fundamentals of Stock Corporations covers how companies organize their ownership, how stock is issued, public companies and private companies, and IPOs and liquidity (which determine when equity is worth cash).

    While not everyone reading this works at an early stage company, those who do can benefit from understanding the role of equity in Startups and Growth. This is good context for anyone involved in a private company that has taken on venture capital.

    How Equity is Granted is the core of this Guide. We describe the forms in which equity is most commonly granted, including restricted stock grants, stock options, and RSUs.

    Now is where it gets messier—taxes:

    • Tax Basics: A technical summary of how taxation works. Many of the headaches of equity compensation involve how it is taxed, including ordinary income tax, long-term capital gains tax, and the lesser-known but sometimes critical alternative minimum tax.
    • Taxes on Equity Compensation: How much tax you owe is greatly affected by the kind of equity you have (such as restricted stock awards, stock options, or RSUs), when you choose to pay (including 83(b) elections), and when you choose to exercise options.

    After these technical concerns, we move on to how you can think about all this in practice. These sections focus on scenarios common to employees and candidates, but are also of likely interest to founders and hiring managers:

    • Plans and Scenarios: Whether you have equity now or will in the future, it is helpful to learn how to think about the value of equity and its tax burden. We also cover whether you can sell private stock.
    • Offers and Negotiations: Equity often comes up as you're negotiating or debating whether to accept a job offer. Here we cover what to expect, what to ask, tips and pitfalls, and more.

    Finally, we offer some additional resources:

    • Documents and Agreements: A bit more detail on the actual legal paperwork you're likely to see as you negotiate and after you've accepted an offer.
    • Further Reading: A curated list of what else you can read on the subject, including many papers, books, and articles that have informed this Guide.

    🚧 What about a Getting Help section outlining when to go to whom for professional help?

    CEOs, CFOs, COOs, or anyone who runs a company or team of significant size should be sure to talk to an equity compensation consultant or a specialist at a law firm to learn about equity compensation plans.

    Founders looking for an introduction to the legalities of running a company may wish to check out Legal Concepts for Founders, from Clerky, in addition to talking to a lawyer. Founders should also lean on their investors for advice, as they may have additional experience.

    Executive compensation at large or public companies is an even more nuanced topic, on both sides of the table. Hire an experienced lawyer or compensation consultant. There are extensive legal resources available on executive compensation.

    Seeking Professional Advice

    This Guide does not replace professional advice.

    Please read the full disclaimer and seek professional advice from a lawyer, tax professional, or other compensation expert before making significant decisions.

    Does that make reading through these details a waste of time? Not at all. Important decisions rarely should or can be blindly delegated. This Guide complements but does not replace the advice you get from professionals. Working with the support of a professional can help you make better decisions when you have an understanding of the topic yourself and know what questions to ask.

    Equity Compensation Basics

    Companies ranging from two-person startups to the Fortune 500 have found that granting partial ownership in a company is among the best methods to attract and retain exceptional talent. In the United States, partial ownership through stock options has been a key part of pay for executives and other employees since the 1950s.1 As recently as 2014, 7.2% of all private sector employees (8.5 million people) and 13.1% of all employees of companies with stock held stock options, according to the National Center for Employee Ownership.2 Many believe employee ownership has 💰fostered innovations in technology, especially in Silicon Valley, from the early days of Hewlett-Packard to recent examples like Facebook. Stock options helped the first 3,000 employees of Facebook enjoy roughly $23 billion at the time the company went public.3

    🌪 Some controversy surrounds the use of equity compensation for high-paid executives. Public companies offer executives equity compensation in no small part because of a tax loophole. In 1993, President Bill Clinton attempted to limit executive pay with a new section4 of the Internal Revenue Code. Unfortunately, the legislation backfired; a loophole made performance-based pay—including stock options—fully tax deductible, thereby creating a dramatic incentive to pay executives through stock options.5 From 1970–79, the average compensation for a CEO of one of the 50 largest firms in the United States was $1.2M, of which 11.2% was from stock options. By 2000–05, the same numbers had risen to $9.2M and 37%, respectively.6

    Generally, equity compensation is closely linked to the growth of a company. Cash-poor startups persuade early employees to take pay cuts and join their team by offering meaningful ownerships stakes, catering to hopes that the company will one day grow large enough to go public or be sold for an ample sum. More mature but still fast-growing companies find offering compensation linked to ownership is more attractive than high cash compensation to many candidates.

    With the hope for growth, however, also comes risk. Large, fast-growing companies often hit hard times. And startups routinely fail or yield no returns for investors or workers. According to a report by Cambridge Associates and Fortune Magazine, between 1990 and 2010, about 60% of venture capital-backed companies returned less than the original investment, leaving employees with the painful realization that their startup was not, in fact, the next Google. Of the remaining 40%, just a select few go on to make a many of their employees wealthy, as has been the case with iconic high-growth companies, like Starbucks,7 UPS,8 Amazon,9 Google,10 or Facebook.11

    D Compensation is any remuneration to a person (including employees, contractors, advisors, founders, and board members) for services performed or rendered to a company. Compensation comes in the forms of cash pay (salary and any bonuses) and any non-cash pay, including benefits like health insurance, family-related protections, perks, and retirement plans.

    Company strategies for compensation are far from simple. Beth Scheer, head of talent at the venture fund Homebrew, offers a thoughtful overview of compensation in startups.

    Another term you may encounter is total rewards, which refers to a model of attracting and retaining employees using a combination of salary and incentive compensation (like equity), benefits, recognition for contribution or commitment (like awards and bonuses), training programs, and initiatives to improve the work environment.

    D In the context of compensation and investment, equity broadly refers to any kind of ownership in a company that can be held by individuals (like employees or board members) and by other businesses (like venture capital firms). One common kind of equity is stock, but equity can take other forms, such as stock options or warrants, that give ownership rights. Commonly, equity also comes with certain conditions, such as vesting or repurchase rights. Note the term equity also has several other technical meanings in accounting and real estate.

    D Equity compensation is the practice of granting equity in exchange for work.

    In this Guide we focus on equity compensation in stock corporations, the kind of company where ownership is represented by stock. (We describe stock in more detail in the next section.) Equity compensation in the form of a direct grant of stock with no strings attached is very rare. Instead, employees are given stock with additional restrictions placed on it, or are given contractual rights that later can lead to owning stock. These forms of equity compensation include restricted stock, stock options, and restricted stock units, each of which we'll describe in detail.

    The Goals of Equity Compensation

    The purpose of equity compensation is threefold:

    • Attract and retain talent. When a company already has or can be predicted to have significant financial success, talented people are incentivized to work for the company by the prospect of their equity being worth a lot of money in the future. The actual probability of life-changing lucre may be low (or at least, lower than you may think if your entire knowledge of startups is watching "The Social Network"). But even a small chance at winning big can be worth the risk to many people, and to some the risk itself can be exciting.
    • Align incentives. Even companies that can afford to pay lots of cash may prefer to give employees equity, so that employees work to increase the future value of the company. In this way, equity aligns individuals' incentives with the interests of the company. At its best, this philosophy fosters an environment of teamwork and a "rising tides lift all boats" mentality. It also encourages everyone involved to think long-term, which is key for company success. As we'll discuss later, the amount of equity you're offered usually reflects both your contribution to the company and your commitment to the company in the future.
    • Reduce cash spending. By giving equity, a company can often pay less in cash compensation to employees now, with the hope of rewarding them later, and put that money toward other investments or operating expenses. This can be essential in the early stages of a company or at other times where there may not be enough revenue to pay large salaries. Equity compensation can also help recruit senior employees or executives who would otherwise command especially high salaries.

    🚧 Mention or link to lockup periods etc.

    Fundamentals of Stock Corporations

    In this section, we describe the basics of how stock and shares are used.

    Those familiar with stock, stock corporations, public companies, and private companies can jump ahead to how those companies grant equity.

    D A company is a legal entity formed under corporate law for the purpose of conducting trade. In the United States, specific rules and regulations govern several kinds of business entities. Federal and state law have significant implications on liability and taxation for each kind of company. Notable types of companies include sole proprietorships, partnerships, limited liability companies (LLCs), S corporations, and C corporations.

    D A corporation is a company that is legally recognized as a single entity. The corporation itself, and not its owners, is obligated to repay debts and accountable under contracts and legal actions (that is, is a "legal person"). Most commonly, the term corporation is used to refer to a stock corporation (or joint-stock company), which is a corporation where ownership is managed using stock. Non-stock corporations that do not issue stock exist as well, the most common being nonprofit organizations. (A few less common for-profit non-stock corporations also exist.)

    In practice, people often use the word company to mean corporation.

    D Incorporation is the legal process of forming (or incorporating) a new corporation, such as a business or nonprofit. Corporations can be created in any country. In the United States, incorporation is handled by state law, and involves filing articles of incorporation and a variety of other required information with the Secretary of State. (Note that the formation of companies that are not corporations, such as partnerships or LLCs, is not the same as incorporation.)

    D A C corporation (or C corp) is a type of stock corporation in the United States with certain federal tax treatment. It is the most prevalent kind of corporation.12 Most large, well-known American companies are C corporations. C corporations differ from S corporations and other business entities in several ways, including how income is taxed and who may own stock. C corporations have no limit on the number of shareholders allowed to own part of the company. They also allow other corporations, as well as partnerships, trusts, and other businesses, to own stock.

    C corps are overwhelmingly popular for early-stage private companies looking to sell part of their business in exchange for investment from individuals and organizations like venture capital firms (which are often partnerships), and for established public companies selling large numbers of stock to individuals and other companies on the public exchange.

    In practice, for a few reasons, these companies are usually formed in Delaware, so legalities of all this are defined in Delaware law.1314 You can think of Delaware law as the primary "language" of U.S. corporate law. Incorporating a company in Delaware has evolved into a national standard for high-growth companies, regardless of where they are physically located.

    🔸 This Guide focuses specifically on C corporations and does not cover how equity compensation works in LLCs, S corporations, partnerships, or sole proprietorships. Both equity and compensation are handled in significantly different ways in each of these kinds of businesses.

    Loosely, one way to think about companies is that they are simply a set of contracts, negotiated over time between the people who own and operate the company, and which are enforced by the government, that aligns the interests of everyone involved in creating things customers are willing to pay for. Key to these contracts is a way to precisely track ownership of the company; issuing stock is how companies often choose to do this.

    🚧 Mention how court cases are settled?

    D Stock is a legal invention that represents ownership in a company. Shares are portions of stock that allow a company to grant ownership to a variety of people or other companies in flexible ways. Each shareholder (or stockholder), as these owners are called, holds a specific number of shares. Founders, investors, employees, board members, contractors, advisors, and other companies, like law firms, can all be shareholders.

    D Stock ownership is often formalized on stock certificates, which are fancy pieces of paper that prove who owns the stock.

    Sometimes you have stock but don't have the physical certificate, as it may be held for you at a law office.

    Some companies now manage their ownership through online services called ownership management platforms, such as Carta. If the company you work for uses an ownership management platform, you will be able to view your stock certificates and stock values online.

    Younger companies may also choose to keep their stock uncertificated, which means your sole evidence of ownership is your contracts with the company, and your spot on the company's cap table, without having a separate certificate for it.

    D Outstanding shares refer to the total number of shares held by all shareholders. This number starts at an essentially arbitrary value (such as 10 million) when the company is created, and thereafter will increase as new shares are added (issued) and granted to people in exchange for money or services.

    Outstanding shares may increase or decrease for other reasons too, such as stock splits and share buybacks, which we won't get into here.

    Later, we discuss several subtleties in how shares are counted.

    🚧 What is a good overview on stock splits and share buyback. Key resources?

    D Any shareholder has a percentage ownership in the company, determined by dividing the number of shares they own by the number of outstanding shares. Although stock paperwork will always list numbers of shares, if share value is uncertain, percentage ownership is often a more meaningful number, particularly if you know or can estimate a likely valuation of the company. Even if the number of shares a person has is fixed, their percentage ownership will change over time as the outstanding shares change. Typically, this number is presented in percent or basis points (hundredths of a percent).

    Public and Private Companies

    D Public companies are corporations in which any member of the public can own stock. People can buy and sell the stock for cash on public stock exchanges. The value of a company's shares is the value displayed in the stock market reports, so shareholders know how much their stock is worth.

    D Most smaller companies, including all startups, are private companies with owners who control how those companies operate. Unlike a public company, where anyone is able to buy and sell stock, owners of a private company control who is able to buy and sell stock. There may be few or no transactions, or they may not be publicly known.

    🚧 What are public exchanges and how is stock bought and sold in practice? Mention accredited investors?

    D A corporation has a board of directors, a group of people whose legal obligation is to oversee the company and ensure it serves the best interests of the shareholders. Public companies are legally obligated to have a board of directors, while private companies often elect to have one. The board typically consists of inside directors, such as the CEO, one or two founders, or executives employed by the company, and outside directors, who are not involved in the day-to-day workings of the company. These board members are elected individuals who have legal, corporate governance rights and duties when it comes to voting on key company decisions. A board member is said to have a board seat at the company.

    Boards of directors range from 3 to 31 members, with an average size of 9.15 Boards are almost always an odd number in order to avoid tie votes. It's worth noting that the state of California requires public companies to have at least one woman on their boards.16

    Key decisions of the board are made formally in board meetings or in writing (called written consent).17 Many decisions around granting equity to employees are approved by the board of directors.

    🚧 This section could be expanded, and also include more legal links.

    D A private company becomes a public company in a process called an initial public offering (IPO). Historically, only private companies with a strong track record of years of growth have considered themselves ready to take this significant step. The IPO has pros and cons that include exchanging a host of high regulatory costs for the benefits of significant capital. After a company "IPOs" or "goes public,' investors and the general public can buy stock, and existing shareholders can sell their stock far more easily than when the company was private.

    Companies take years to IPO after being formed. The median time between a company's founding and its IPO has been increasing. According to a Harvard report, companies that went public in 2016 took 7.7 years to do so, compared to 3.1 years for companies that went public in 1996.18

    🚧 What are the restrictions and regulations on selling stock that affect employees at IPO? What is a lockup period?

    ❗️ With private companies, it can be very hard to know the value of equity. Because the value of private company stock is not determined by regular trades on public markets, shareholders can only make educated guesses about the likely future value, at a time when they will be able to sell stock.

    After all, private company stock is simply a legal agreement that entitles you to something of highly uncertain value, and could well be worthless in the future, or highly valuable, depending on the fate of the company.

    ☝️ We'll discuss the notion of a company officially assigning a fair market value later, but even if a company gives you a value for your stock for tax and accounting purposes, it doesn't mean you can expect to sell it for that value!

    D An acquisition is the purchase of more than 50% of the shares of one company (the acquired company) by another company (the purchaser). This is also called a sale of the acquired company. In an acquisition, the acquired company cedes control to the purchaser.

    D The ability to buy and sell stock is called liquidity. In startups and many private companies, it is often hard to sell stock until the company is sold or goes public, so there is little or no liquidity for shareholders until those events occur. Thus, sales and IPOs are called both exits and liquidity events. Sales, dissolutions, and bankruptcy are all called liquidations.

    Often people wish they could sell stock in a private company, because they would prefer having the cash. This is only possible occasionally. We get into the details later, in our section on selling private stock.

    D A dividend is a distribution of a company's profit to shareholders, authorized by the board of directors. Established public companies and some private companies pay dividends, but this is rare among startups and companies focused on rapid growth, since they often wish to re-invest their profits into expanding the business, rather than paying that money back to shareholders. Amazon, for example, has never paid dividends.

    If you're considering working for a startup, what we cover next on how these early-stage companies raise money and grow is helpful in understanding what your equity may be worth.

    If you're only concerned with large and established companies, you can skip ahead to how equity is granted.

    D A startup is an emerging company, typically a private company, that aspires to grow quickly in size, revenue, and influence. Once a company is established in the market and successful for a while, it usually stops being called a startup.

    ☝️ Unlike the terminology around corporations, which has legal significance, the term startup is informal, and not everyone uses it consistently.

    Startups are not the same as small businesses. Small businesses, like a coffee shop or plumbing business, typically intend to grow slowly and organically, while relying much less on investment capital and equity compensation. Distinguished startup investor Paul Graham has emphasized that it's best to think of a startup as any early stage company intending to grow quickly.

    ∑ C corporations dominate the startup ecosystem. LLCs tend to be better suited for slower-growth companies that intend to distribute profits instead of re-investing them for growth. Because of this, and for complex reasons related to how their capital is raised, venture capitalists significantly prefer to invest in C corporations.

    🚧 What are good stats on how many people work in startups vs. established companies?

    Fundraising, Growth, and Dilution

    Many large and successful companies began as startups. In general, startups rely on investors to help fund rapid growth.

    D Fundraising is the process of seeking capital to build or scale a business. Selling shares in a business to investors is one form of fundraising, as are loans and initial coin offerings. Financing refers both to fundraising from outside sources and to bringing in revenue from selling a product or service.

    D Venture capital is a form of financing for early-stage companies that individual investors or investment firms provide in exchange for partial ownership, or equity, in a company. These investors are called venture capitalists (or VCs). Venture capitalists invest in companies they perceive to be capable of growing quickly and commanding significant market share. "Venture" refers to the risky nature of investing in early-stage businesses—typically startups—with unproven business models.

    A startup goes through several stages of growth as it raises capital based on the hope and expectation that the company will grow and make more money in the future.

    D Companies add (or "issue") shares during fundraising, which can be exchanged for cash from investors. As the number of outstanding shares goes up, the percentage ownership of each shareholder goes down. This is called dilution.

    ☝️ Dilution doesn't necessarily mean that you're losing anything as a shareholder. As a company issues stock and raises money, the smaller percentage of the company you do have could be worth more. The size of your slice gets relatively smaller, but, if the company is growing, the size of the cake gets bigger. For example, a typical startup might have three rounds of funding, with each round of funding issuing 20% more shares. At the end of the three rounds, there are more outstanding shares—roughly 73% more in this case, since 120%×120%×120% is 173%—and each shareholder owns proportionally less of the company.

    D The valuation of the company is the present value investors believe the company has. If the company is doing well, growing revenue or showing indications of future revenue (like a growing number of users or traction in a promising market), the company's valuation will usually be on the rise. That is, the price for an investor to buy one share of the company would be increasing.

    ❗️ Of course, things do not always go well, and the valuation of a company does not always go up. It can happen that a company fails entirely and all ownership stakes become worthless, or that the valuation is lower than expected and certain kinds of shares become worthless while other kinds have some value. When investors and leadership in a company expect the company to do better than it actually does, it can have a lot of disappointing consequences for shareholders.

    These visualizations illustrate how ownership of a venture-backed company evolves as funding is raised. One scenario imagines changes to ownership in a well-performing startup, and the other is loosely based on a careful analysis of Zipcar,19 a ride-sharing company that experienced substantial dilution before eventually going public and being acquired. These diagrams simplify complexities such as the ones discussed in that analysis, but they give a sense of how ownership can be diluted.

    {
      'name': 'CaptableDilution',
      'data': {
        'hypothetical': {
          'label': 'Hypothetical',
          'stages': [
            {
              'label': 'Founding',
              'postValuation': 1000,
              'captable': [
                {
                  'type': 'founder1',
                  'label': 'Founder #1',
                  'shares': 4000000
                },
                {
                  'type': 'founder2',
                  'label': 'Founder #2',
                  'shares': 3000000
                },
                {
                  'type': 'founder3',
                  'label': 'Founder #3',
                  'shares': 3000000
                }
              ]
            },
            {
              'label': 'Series A',
              'captable': [
                {
                  'type': 'founder1',
                  'label': 'Founder #1',
                  'shares': 4000000
                },
                {
                  'type': 'founder2',
                  'label': 'Founder #2',
                  'shares': 3000000
                },
                {
                  'type': 'founder3',
                  'label': 'Founder #3',
                  'shares': 3000000
                },
                {
                  'type': 'options',
                  'label': 'Options Pool',
                  'shares': 1500000
                },
                {
                  'type': 'investment',
                  'label': 'Seed',
                  'preValuation': 8000000,
                  'raised': 2000000
                },
                {
                  'type': 'investment',
                  'label': 'Series A',
                  'preValuation': 8000000,
                  'raised': 5000000
                }
              ]
            },
            {
              'label': 'Series C',
              'captable': [
                {
                  'type': 'founder1',
                  'label': 'Founder #1',
                  'shares': 4000000
                },
                {
                  'type': 'founder2',
                  'label': 'Founder #2',
                  'shares': 3000000
                },
                {
                  'type': 'founder3',
                  'label': 'Founder #3',
                  'shares': 3000000
                },
                {
                  'type': 'options',
                  'label': 'Options Pool',
                  'shares': 1500000
                },
                {
                  'type': 'investment',
                  'label': 'Seed',
                  'preValuation': 8000000,
                  'raised': 2000000
                },
                {
                  'type': 'investment',
                  'label': 'Series A',
                  'preValuation': 8000000,
                  'raised': 5000000
                },
                {
                  'type': 'investment',
                  'label': 'Series B',
                  'preValuation': 20000000,
                  'raised': 10000000
                },
                {
                  'type': 'investment',
                  'label': 'Series C',
                  'preValuation': 40000000,
                  'raised': 20000000
                }
              ]
            }
          ]
        },
        'zipcar': {
          'label': 'Approx. Zipcar',
          'stages': [
            {
              'label': 'Founding',
              'postValuation': 1000,
              'captable': [
                {
                  'type': 'founder1',
                  'label': 'Founder #1',
                  'shares': 570000
                },
                {
                  'type': 'founder2',
                  'label': 'Founder #2',
                  'shares': 570000
                }
              ]
            },
            {
              'label': 'Series A',
              'captable': [
                {
                  'type': 'founder1',
                  'label': 'Founder #1',
                  'shares': 570000
                },
                {
                  'type': 'founder2',
                  'label': 'Founder #2',
                  'shares': 570000
                },
                {
                  'type': 'options',
                  'label': 'Options Pool',
                  'shares': 378000
                },
                {
                  'type': 'investment',
                  'label': 'Series A',
                  'preValuation': 5800000,
                  'raised': 1400000
                }
              ]
            },
            {
              'label': 'Series B',
              'captable': [
                {
                  'type': 'founder1',
                  'label': 'Founder #1',
                  'shares': 570000
                },
                {
                  'type': 'founder2',
                  'label': 'Founder #2',
                  'shares': 570000
                },
                {
                  'type': 'options',
                  'label': 'Options Pool',
                  'shares': 378000
                },
                {
                  'type': 'investment',
                  'label': 'Series A',
                  'preValuation': 5800000,
                  'raised': 1400000
                },
                {
                  'type': 'investment',
                  'label': 'Series A',
                  'preValuation': 2200000,
                  'raised': 4700000
                }
              ]
            }
          ]
        }
      }
    }
    

    Understanding the value of stock and equity in a startup requires a grasp of the stages of growth a startup goes through. These stages are largely reflected in how much funding has been raised—how much ownership, in the form of shares, has been sold for capital.

    Very roughly, typical stages are:

    • Bootstrapped (little funding or self-funded): Founders are figuring out what to build, or they're starting to build with their own time and resources.
    • Series Seed (roughly $250K to $2 million in funding): Figuring out the product and market. The low end of this spectrum is now often called pre-seed.
    • Series A ($2 to $15 million): Scaling the product and making the business model work.
    • Series B (tens of millions): Scaling the business.
    • Series C, D, E, et cetera (tens to hundreds of millions): Continued scaling of the business.

    Keep in mind that these numbers are more typical for startups located in California. The amount raised at various stages is typically smaller for companies located outside of Silicon Valley, where what would be called a seed round may be called a Series A in, say, Houston, Denver, or Columbus, where there are fewer companies competing for investment from fewer venture firms, and costs associated with growth (including providing livable salaries) are lower.2021

    🔸 Most startups don't get far. According to an analysis of angel investments, by Susa Ventures general partner Leo Polovets, more than half of investments fail; one in 3 are small successes (1X to 5X returns); one in 8 are big successes (5X to 30X); and one in 20 are huge successes (30X+).22

    🚧 What are some stats beyond angel investments?

    🔸 Each stage reflects the reduction of risk and increased dilution. For this reason, the amount of equity team members get is higher in the earlier stages (starting with founders) and increasingly lower as a company matures. (See the picture above.)

    D At some point early on, generally before the first employees are hired, a number of shares will be reserved for an employee option pool (or employee pool). The option pool is part of a legal structure called an equity incentive plan. A typical size for the option pool is 20% of the stock of the company, but, especially for earlier stage companies, the option pool can be 10%, 15%, or other sizes.

    Once the pool is established, the company's board of directors grants stock from the pool to employees as they join the company.

    ∑ Well-advised companies will reserve in the option pool only what they expect to use over the next 12 months or so; otherwise, given how equity grants are usually promised, they may be over-granting equity. The whole pool may never be fully used, but companies should still try not to reserve more than they plan to use. The size of the pool is determined by complex factors between founders and investors. It's worth employees (and founders) understanding that a small pool can be a good thing in that it reflects the company preserving ownership in negotiations with investors. The size of the pool may be increased later.

    There are some key subtleties you're likely to come across in the way outstanding shares are counted:

    D Private companies always have what are referred to as authorized but unissued shares, referring to shares that are authorized in legal paperwork but have not actually been issued. Until they are issued, the unissued stock these shares represent doesn't mean anything to the company or to shareholders: no one owns it.

    ☝️ For example, a corporation might have 100 million authorized shares, but will only have issued 10 million shares. In this example, the corporation would have 90 million authorized but unissued shares. When you are trying to determine what percentage a number of shares represents, you do not make reference to the authorized but unissued shares.

    ☝️ You actually want to know the total issued shares, but even this number can be confusing, as it can be computed more than one way. Typically, people count shares in two ways: issued and outstanding and fully diluted.

    D Issued and outstanding refers to the number of shares actually issued by a company to shareholders, and does not include shares that others may have an option to purchase.

    D Fully diluted refers to all of the shares that a company has issued, all of the shares that have been set aside in a stock incentive plan, and all of the shares that could be issued if all convertible securities (such as outstanding warrants) were exercised.

    A key difference between fully diluted shares and shares issued and outstanding is that the total of fully diluted shares will include all the shares in the employee option pool that are reserved but not yet issued to employees.

    🔹 If you're trying to figure out the likely percentage a number of shares will be worth in the future, it's best to know the number of shares that are fully diluted.

    ∑ Even the fully diluted number may not take into account outstanding convertible securities (like convertible notes) that are waiting to be converted into stock at a future milestone. For a more complete understanding, in addition to asking about the fully-diluted capitalization you can ask about any convertible securities outstanding that are not included in that number.

    ☝️ The terminology mentioned here isn't universally applied. It's worth discussing these terms with your company to be sure you're on the same page.

    D A capitalization table (cap table) is a table (often a spreadsheet or other official record) that records the ownership stakes, including number and class of shares, of all shareholders in the company. It is updated as stock is granted to new shareholders.23

    🚧 Better discuss future sources of dilution. Define convertible securities and convertible notes and "fully diluted" more. Do people say "fully diluted" but not include convertible securities?

    D Investors often ask for rights to be paid back first in exchange for their investment. The way these different rights are handled is by creating different classes of stock. (These are also sometimes called classes of shares, though that term has another meaning in the context of mutual funds.)

    D Two important classes of stock are common stock and preferred stock. In general, preferred stock has "rights, preferences, and privileges" that common stock does not have. Typically, investors get preferred stock, and founders and employees get common stock (or stock options).

    The exact number of classes of stock and the differences between them can vary company to company, and, in a startup, these can vary at each round of funding.

    ☝️ Another term you're likely to hear is founders' stock, which is (usually) common stock allocated at a company's formation, but otherwise doesn't have any different rights from other common stock.24

    Although preferred stock rights are too complex to cover fully, we can give a few key details:

    D Preferred stock usually has a liquidation preference (or preference), meaning the preferred stock owners will be paid before the common stock owners when a liquidity event occurs, such as if the company is sold or goes public.

    D A company is in liquidation overhang when the value of the company doesn't reach the dollar amount investors put into it. Because of liquidation preference, those holding preferred stock (investors) will have to be paid before those holding common stock (employees). If investors have put millions of dollars into a company and it's sold, employees' equity won't be worth anything if the company is in liquidation overhang and the sale doesn't exceed that amount.25

    ☝️ The complexities of the liquidation preference are infamous. It's worth understanding that investors and entrepreneurs negotiate a lot of the details around preferences, including:

    • The multiple, a number designating how many times the investor must be paid back before common shareholders receive proceeds. (Often the multiple is 1X, but it can be 2X or higher.)

    • Whether preferred stock is participating, meaning investors get their money back and also participate in proceeds from common stock.

    • Whether there is a cap, which limits the payout if it is participating.

    • 🔑This primer by Charles Yu gives a concise overview. Founders and companies are affected significantly and in subtle ways by these considerations. For example, as lawyer José Ancer points out, common and preferred stockholders are typically quite different and their incentives sometimes diverge.

    • 🚧 What are good resources to mention that describe conversion of preferred stock to common stock?

    🔹 For the purposes of an employee who holds common stock, the most important thing to understand about preferences is that they're not likely to matter if a company does well in the long term. In that case, every stockholder has valuable stock they can eventually sell. But if a company fails or exits for less than investors had hoped, the preferred stockholders are generally first in line to be paid back. Depending on how favorable the terms are for the investor, if the company exits at a low or modest valuation, it's likely that common shareholders will receive little—or nothing at all.

    In this section we'll lay out how equity is granted in practice, including the differences, benefits, and drawbacks of common types of equity compensation, including restricted stock awards, stock options, and restricted stock units (RSUs). We'll go over a few less common types as well. While the intent of each kind of equity grant is similar, they differ in many ways, particularly around how they are taxed.

    Except in rare cases where it may be negotiable, the type of equity you get is up to the company you work for. In general, larger companies grant RSUs, and startups grant stock options, and occasionally executives and very early employees get restricted stock awards.

    🚧 Add section on when equity is granted, including plus-ups.

    At face value, the most direct approach to equity compensation would be for the company to award stock to an employee in exchange for work. In practice, it turns out a company will only want to do this with restrictions on how and when the stock is fully owned.

    Even so, this is actually one of the least common ways to get equity. We mention it first because it is the simplest form of equity compensation, useful for comparison as things get more complex.

    D A restricted stock award is when a company grants someone stock as a form of compensation. The stock awarded has additional conditions on it, including a vesting schedule, so is called restricted stock. Restricted stock awards may also be called simply stock awards or stock grants.

    ∑ What restricted means here is actually complex. It refers to the fact that the stock (i) has certain restrictions on it (like transfer restrictions) required for private company stock, and (ii) will be subject to repurchase at cost pursuant to a vesting schedule. The repurchase right lapses over the service-based vesting period, which is what is meant in this case by the stock "vesting."

    ☝️ Restricted stock awards are not the same thing as restricted stock units.

    Typically, stock awards are limited to executives or very early hires, since once the value of the shares increases, the tax burden of receiving them (without paying the company for their value) can be too great for most people. Usually, instead of restricted stock, an employee will get stock options.

    D Stock options are contracts that allow individuals to buy a specified number of shares in the company they work for at a fixed price. Stock options are the most common way early-stage companies grant equity.

    D A person who has received a stock option grant is not a shareholder until they exercise their option, which means purchasing some or all of their shares at the strike price. Prior to exercising, an option holder does not have voting rights.

    D The strike price (or exercise price) is the fixed price per share at which stock can be purchased, as set in a stock option agreement. The strike price is generally set lower (often much lower) than what people expect will be the future value of the stock, which means selling the stock down the road could be profitable.

    ☝️ Stock options is a confusing term. In investment, an option is a right (but not an obligation) to buy something at a certain price within a certain time frame. You'll often see stock options discussed in the context of investment. What investors in financial markets call stock options are indeed options on stock, but they are not compensatory stock options awarded for services. In this Guide, and most likely in any conversation you have with an employer, anyone who says "stock options" will be referring to compensatory stock options.

    ☝️ Stock options are not the same as stock; they are only the right to buy stock at a certain price and under a set of conditions specified in an employee's stock option agreement. We'll get into these conditions next.

    ∑ Although everyone typically refers to "stock options" in the plural, when you receive a stock option grant, you are receiving an option to purchase a given number of shares. So technically, it's incorrect to say someone "has 10,000 stock options."

    It's best to understand the financial and tax implications before deciding when to exercise options. In order for the option to be tax-free to receive, the strike price must be the fair market value of the stock on the date the option is granted.

    ∑ Those familiar with stock trading (or those with economics degrees) will tell you about the Black-Scholes model, a general mathematical model for determining the value of options. While theoretically sound, this does not have as much practical application in the context of employee stock options.

    🚧 Any real-world examples or statistics of how low strike price has led to big payoffs? Also we could mention and relate this to the term employee stock options (or ESOs) and dispel any confusion between ESOs and ESPPs.

    D Vesting is the process of gaining full legal rights to something. In the context of compensation, founders, executives, and employees typically gain rights to their grant of equity incrementally over time, subject to restrictions. People may refer to their shares or stock options vesting, or may say that a person is vesting or has fully vested.

    D In the majority of cases, vesting occurs incrementally over time, according to a vesting schedule. A person vests only while they work for the company. If the person quits or is terminated immediately, they get no equity, and if they stay for years, they'll get most or all of it.

    Awards of stock, stock options, and RSUs are almost always subject to a vesting schedule.

    D Vesting schedules can have a cliff designating a length of time that a person must work before they vest at all.

    For example, if your equity award had a one-year cliff and you only worked for the company for 11 months, you would not get anything, since you haven't vested in any part of your award. Similarly, if the company is sold within a year of your arrival, depending on what your paperwork says, you may receive nothing on the sale of the company.

    A very common vesting schedule is vesting over 4 years, with a 1 year cliff. This means you get 0% vesting for the first 12 months, 25% vesting at the 12th month, and 1/48th (2.08%) more vesting each month until the 48th month. If you leave just before a year is up, you get nothing, but if you leave after 3 years, you get 75%.

    D In some cases, vesting may be triggered by specific events outside of the vesting schedule, according to contractual terms called accelerated vesting (or acceleration). Two kinds of accelerated vesting that are commonly negotiated are if the company is sold or undergoes a merger (single trigger) or if it's sold and the person is fired (double trigger).

    🌪 Cliffs are an important topic. When they work well, cliffs are an effective and reasonably fair system to both employees and companies. But they can be abused and their complexity can lead to misunderstandings:

    • The intention of a cliff is to make sure new hires are committed to staying with the company for a significant period of time. However, the flip side of vesting with cliffs is that if an employee is leaving—quits or is laid off or fired—just short of their cliff, they may walk away with no stock ownership at all, sometimes through no fault of their own, as in the event of a family emergency or illness. In situations where companies fire or lay off employees just before a cliff, it can easily lead to hard feelings and even lawsuits (especially if the company is doing well enough that the stock is worth a lot of money).2627
    • 🔹 As a manager or founder, if an employee is performing poorly or may have to be laid off, it's both thoughtful and wise to let them know what's going on well before their cliff.
    • ∑ Founders often have vesting on their stock themselves. As entrepreneur Dan Shapiro explains, this is often for good reason.
    • 🔹 As an employee, if you're leaving or considering leaving a company before your vesting cliff is met, consider waiting. Or, if your value to the company is high enough, you might negotiate to get some of your stock "vested up" early. Your manager may well agree that is is fair for someone who has added a lot of value to the company to own stock even if they leave earlier than expected, especially for something like a family emergency. These kinds of vesting accelerations are entirely discretionary, however, unless you negotiated for special acceleration in an employment agreement. Such special acceleration rights are typically reserved for executives who negotiate their employment offers heavily.
    • 🚧 How does taking time off, for example a leave of absence, affect the vesting schedule?
    • Acceleration when a company is sold (called change of control terms) is common for founders and not so common for employees. It's worth understanding acceleration and triggers in case they show up in your option agreement, but these may not be something you can negotiate unless you are going to be in a key role.
    • Companies may impose additional restrictions on stock that is vested. For example, your shares are very likely subject to a right of first refusal, which means that you can't sell the stock without offering it first to the company. And it can happen that companies reserve the right to repurchase vested shares in certain events.

    🚧 Can we give any examples here?

    D The exercise window (or exercise period) is the period during which a person can buy shares at the strike price. Options are only exercisable for a fixed period of time, until they expire, typically seven to ten years as long as the person is working for the company. But this window is not always open.

    ❗ Expiration after termination. Options can expire after you quit working for the company. Often, the expiration is 90 days after termination of service, making the options effectively worthless if you cannot exercise before that point. As we'll get into later, you need to understand the costs, taxes, and tax liabilities of exercise and to plan ahead. In fact, you can find out when you are granted the options, or better yet, before you sign an offer letter.

    🔹 Longer exercise windows. Recently (since around 2015) a few companies are finding ways to keep the exercise window open for years after leaving a company, promoting this practice as fairer to employees. Companies with extended exercise windows include Amplitude,28 Clef,29 Coinbase,30 Pinterest,31 and Quora.32 However, the 90-day exercise window remains the norm.

    🌪 The exercise window debate. Whether to have extended exercise windows has been debated at significant length. Some believe extended exercise windows are the future, arguing that a shorter window makes a company's success a punishment to early employees.

    Key considerations include:

    • Everyone agrees that employees holding stock options with an expiring window often have to make a painful choice if they wish to leave: Pay for a substantial tax bill (perhaps five to seven figures) on top of the cost to exercise (possibly looking for secondary liquidity or a loan) or walk away from the options.
    • Many familiar with this situation have spoken out forcefully against shorter exercise windows, arguing that an employee can help grow the value of a company substantially—often having taken a lower salary in exchange for equity—but end up with no ownership because they're unable or unwilling to stay for the several years typically needed before an IPO or sale.
    • On the other side, a few companies and investors stand by the existing system, arguing that it is better to incentivize people not to leave a company, or that long windows effectively transfer wealth from employees who commit long-term to those who leave.
    • Some focused on the legalities also argue that it's a legal requirement of ISOs to have a 90-day exercise window. While this is technically true, it's not the whole story. It is possible for companies to extend the exercise window by changing the nature of the options (converting them from ISOs to NSOs) and many companies now choose to do just that.
    • Another path is to split the difference and give extended windows only to longer-term employees.
    • Taken together, it's evident many employees have not been clear on the nuances of this when joining companies, and some have 🔑suffered because of it. With the risks of short exercise windows for employees becoming more widely known, longer exercise windows are gradually becoming more prevalent. As an employee or a founder, it is fairer and wiser to understand and negotiate these things up front, and avoid unfortunate surprises.

    ☝️ Options granted to advisors typically vest over a shorter period than employee grants, often one to two years. Advisor grants also typically have a longer exercise window post termination of service, and will usually have single trigger acceleration on an acquisition, because no one expects advisors to stay on with a company once it's acquired. Typical terms for advisors, including equity levels, are available in the 📥Founder/Advisor Standard Template (FAST), from the Founder Institute.

    D Compensatory stock options come in two flavors, incentive stock options (ISOs) and non-qualifying stock options (NQOs, or NQSOs). Confusingly, lawyers and the IRS use several names for these two kinds of stock options, including statutory stock options and non-statutory stock options (or NSOs), respectively.

    In this Guide, we refer to ISOs and NSOs.

    Type Also called Statutory Incentive stock option, ISO Non-statutory Non-qualifying stock option, NQO, NQSO, NSO
    • Companies generally decide to give ISOs or NSOs depending on the legal advice they get. It's rarely up to the employee which they will receive, so it's best to know about both. There are pros and cons of each from both the recipient's and the company's perspective.
    • ISOs are common for employees because they have the possibility of being more favorable from a tax point of view than NSOs.
    • 🔸 ISOs can only be granted to employees (not independent contractors or directors who are not also employees).
    • But ISOs have a number of limitations and conditions and can also create difficult tax consequences.

    D Sometimes, to help reduce the tax burden on stock options, a company will make it possible for option holders to early exercise (or forward exercise) their options, which means they can exercise even before they vest. The option holder becomes a stockholder sooner, after which the vesting applies to actual stock rather than options. This will have tax implications.

    🔸 However, the company has the right to repurchase the unvested shares, at the price paid or at the fair market value of the shares (whichever is lower), if a person quits working for the company. The company will typically repurchase the unvested shares should the person leave the company before the stock they've purchased vests.

    While stock options are the most common form of equity compensation in smaller private companies, RSUs have become the most common type of equity award for public and large private companies. Facebook pioneered the use of RSUs as a private company to allow it to avoid having to register as a public company earlier.

    🚧 Why? More links on history of RSUs and Facebook story?

    D Restricted stock units (RSUs) refer to an agreement by a company to issue an employee shares of stock or the cash value of shares of stock on a future date. Each unit represents one share of stock or the cash value of one share of stock that the employee will receive in the future. (They're called units since they are neither stock nor stock options, but another thing altogether that is contractually linked to the value of stock.)

    D The date on which an employee receives the shares or cash payment for RSUs is known as the settlement date.

    • 🔸 RSUs may vest according to a vesting schedule. The settlement date may be the time-based vesting date or a later date based on, for instance, the date of a company's IPO.
    • RSUs are difficult in a startup or early stage company because when the RSUs vest, the value of the shares might be significant, and taxes will be owed on the receipt of the shares.33 This is not a bad result when the company has sufficient capital to help the employee make the tax payments, or the company is a public company that has put in place a program for selling shares to pay the taxes. But for cash-strapped private startups, neither of these are possibilities. This is the reason most startups use stock options rather than RSUs or stock awards.
    • RSUs are often considered less preferable to grantees since they remove control over when you owe tax. Options, if granted with an exercise price equal to the fair market value of the stock, are not taxed until exercise, an event under the control of the optionee. If an employee is awarded an RSU or restricted stock award which vests over time, they will be taxed on the vesting schedule; they have been put on "autopilot" with respect to the timing of the tax event. If the shares are worth a lot on the date of vesting, the tax burden can be significant.
    • ☝️ You don't want to confuse restricted stock units with restricted stock, which typically refers to restricted stock awards.

    Less Common Types of Equity

    While most employee equity compensation takes the form of stock, stock options, or RSUs, a complete tour of equity compensation must mention a few less common forms.

    D Phantom equity is a type of compensation award that references equity, but does not entitle the recipient to actual ownership in a company. These awards come under a variety of different monikers, but the key to understanding them is knowing that they are really just cash bonus plans, where the cash amounts are determined by reference to a company's stock. Phantom equity can have significant value, but may be perceived as less valuable by workers because of the contractual nature of the promises. Phantom equity plans can be set up as purely discretionary bonus plans, which is less attractive than owning a piece of something.

    Two examples of phantom equity are phantom stock and stock appreciation rights:

    D A phantom stock award is a type of phantom equity that entitles the recipient to a payment equal to the value of a share of the company's stock, upon the occurrence of certain events.

    D Stock appreciation rights (SARs) are a type of phantom equity that gives the recipient the right to receive a payment calculated by reference to the appreciation in the equity of the company.

    🚧 Elaboration needed on what events typically trigger phantom stock. More data on how rare these are? And what is appreciation?

    D Warrants are another kind of option to purchase stock, generally used in investment transactions (for example, in a convertible note offering, investors may also get a warrant, or a law firm may ask for one in exchange for vendor financing). They differ from stock options in that they are more abbreviated and stand-alone legal documents, not granted pursuant to a single legal agreement (typically called a "plan") for all employees.

    Employees and advisors may not encounter warrants, but it's worth knowing they exist.

    The awarding of equity compensation can give rise to multiple types of taxes for the recipient, including federal and state income taxes and employment taxes. There's a lot that you have to be aware of. Skip ahead to understand how taxes on equity work, but if you have time, this section gives a technical summary of tax fundamentals, just in case you never really figured out all the numbers on your pay stub.

    You don't need to know every detail, and can rely on software and professionals to determine the tax you owe, but we do suggest understanding the different kinds of taxes, how large they can be, and how each is "triggered" by different events.

    Given the complexity, most taxpayers aren't aware of exactly how their tax is calculated. It does take up thousands of pages34 of the federal tax code and involves the intricate diversity of state tax law as well.35

    ☝️ If you're already familiar with tax terminology, this section may not have any major surprises. But for those who are not used to it, watch out: Many terms sound like regular English, but they're not. Ordinary income, long-term and short-term, election, qualified small business, and other phrases have very specific meanings we'll do our best to spell out.

    D Income is the money an individual makes. For tax purposes, there are two main types of income, which are taxed differently. Ordinary income includes wages, salary, bonuses and interest made on investments. Capital gains are the profits an individual makes from selling assets, including stock.

    One key difference between ordinary income and capital gains is that when capital gains taxes are calculated, consideration is given not just to the sale price of the asset but to the total gain or loss the investment incurred, each outcome having significantly different tax consequences.

    D Capital gains are classified as long-term or short-term. Long-term capital gains are the profits an individual makes from selling assets, such as stock, a business, a house, or land, that were held for more than a year. Short-term capital gains are profits from the sale of assets held for less than a year.

    Although this topic is not without 💰controversy, the general idea is, if you are selling something you've owned for a long time, you can be taxed a lower rate.

    All these rates have evolved over time based on economic and political factors,36 so you can be confident they will change again in the future.

    📰 In 2017, Congress passed the Tax Cuts and Jobs Act (TCJA), which made many changes to tax rates for the 2018 tax year. Long-term capital gains taxes did not change significantly.

    🚧 Can we clarify the term investment income too?

    D Income tax is the money paid by individuals to federal, state, and, in some cases, local governments, and includes taxation of ordinary income and capital gains. Generally, U.S. citizens, residents, and some foreigners must file and pay federal income tax.

    🔹 In general, federal tax applies to many kinds of income. If you're an employee at a startup, you need to consider four kinds of federal tax, each of which is computed differently.

    ☝️ When it comes to equity compensation, it's possible that you'll have to worry about all of these, depending on your situation. That's why we have a lot to cover here:

    D Ordinary income tax is the tax on wages or salary income, and short-term investment income. The term short-term capital gains tax may be applied to taxes on assets sold less than a year from purchase, but profits from these sales are taxed as ordinary income. For a lot of people who make most of their money by working, ordinary income tax is the biggest chunk of tax they pay.

    D Employment taxes are an additional kind of federal tax beyond ordinary income tax, and consist of Social Security and Medicare taxes that are withheld from a person's paycheck. Employment taxes are also referred to as payroll taxes as they often show up on employee pay stubs. The Social Security wage withholding rate in 2018 is 6.2% up to the FICA wage base. The Medicare component is 1.45%, and it does not phase out above the FICA wage base.

    • 🚧 Review and add more links on SS and Medicare taxes.

    D Long-term capital gains tax is a tax on the sale of assets held longer than a year. Long-term capital gains tax is often lower than ordinary income tax. Many investors hold assets for longer than a year in order to qualify for the lesser tax burden of long-term capital gains.

    D Alternative minimum tax (AMT) is a supplemental income tax that applies to certain individuals in some situations. This type of tax does not come up for many taxpayers, but higher income earners and people in special situations may have to pay large AMT bills. AMT was first enacted in 1979 in response to reports that 155 wealthy individuals had paid no income tax in 1966.37 It is not the same as ordinary income tax or employment tax, and is calculated according to its own rules.

    🚧 What is the history and motivation of AMT?

    ❗ AMT is relevant to you if you're reading this. It's important to understand because exercising ISOs can trigger AMT. In some cases a lot of AMT, even when you haven't sold the stock and have no money to pay. We discuss this later.

    Figure: Bracke Rates, Income, and Taxes

    {
      'name': 'TaxRates',
      'data': {
        'rates': [
          {
            'rate': 0.1,
            'single': 0,
            'married': 0,
            'hoh': 0
          },
          {
            'rate': 0.12,
            'single': 9525,
            'married': 19050,
            'hoh': 13600
          },
          {
            'rate': 0.22,
            'single': 38700,
            'married': 77400,
            'hoh': 51800
          },
          {
            'rate': 0.24,
            'single': 82500,
            'married': 165000,
            'hoh': 82500
          },
          {
            'rate': 0.32,
            'single': 157500,
            'married': 315000,
            'hoh': 157500
          },
          {
            'rate': 0.35,
            'single': 200000,
            'married': 400000,
            'hoh': 200000
          },
          {
            'rate': 0.37,
            'single': 500000,
            'married': 600000,
            'hoh': 500000
          }
        ],
        'deductions': {
          'single': 0,
          'married': 0,
          'hoh': 0
        }
      }
    }
    
    {
      'name': 'TaxRates',
      'data': {
        'rates': [
          {
            'rate': 0,
            'single': 0,
            'married': 0,
            'hoh': 0
          },
          {
            'rate': 0.15,
            'single': 38600,
            'married': 77200,
            'hoh': 51700
          },
          {
            'rate': 0.2,
            'single': 425801,
            'married': 479001,
            'hoh': 452401
          }
        ],
        'deductions': {
          'single': 0,
          'married': 0,
          'hoh': 0
        }
      }
    }
    

    E Source: IRS and the Tax Foundation

    A bit on how all this fits together:

    • Ordinary income tax applies in the situations you're probably already familiar with, where you pay taxes on salaries or wages. Tax rates are based on filing status (if you are single, married, or support a family), and on which income bracket you fall under.
    • Income brackets. For ordinary income, as of the 2018 tax year, there are income brackets at 10%, 12%, 22%, 24%, 32%, 35%, and 37% marginal tax rates—see Notice 1036 or a Tax Foundation summary. Be sure you understand how these brackets work, and what bracket you're likely to be in.
      • ☝️ There is a popular misconception that if you move to a higher bracket, you'll make less money.38 What actually happens is when you cross certain thresholds, each additional (marginal) dollar you make is taxed at a slightly higher rate, equal to the bracket you're in. After you earn more than your deduction, on which you pay no tax, your post-tax income looks like the diagram above. (More discussion on such misconceptions are in this Reddit thread.)
    • Investment gains, such as buying and selling a stock, are similarly taxed at "ordinary" rates, unless they are long-term, which means you held the asset for more than a year.
    • You also pay a number of other federal taxes (see a 📥2018 summary for all states), notably:
      • 6.2% for Social Security on your first $118,500
      • 1.45% for Medicare
      • 0.9% Additional Medicare Tax on income over $200,000 (single) or $250,000 (married filing jointly)
      • 3.8% Net Investment Income Tax (NII) (enacted as part of the Affordable Care Act,39 also called "Obamacare") on investment income if you make over $200,000 (single) or $250,000 (married filing jointly).40
    • Ordinary federal income tax, Social Security, and Medicare taxes are withheld from your paycheck by your employer and are called employment taxes.
    • 🔹 Long-term capital gains are taxed at a lower rate than ordinary income tax: 0%, 15%, or 20%.41 This covers cases where you get dividends or sell stock after holding it a year. If you are in the middle brackets (more than about $37K and less than $413K of ordinary income), your long-term capital gains rate is 15%. You can find more detail on tax brackets at the Tax Foundation.
    • AMT is a complex part of the federal tax code most taxpayers don't worry about. But it comes into play when exercising ISOs. Most people do not pay AMT unless it is "triggered" by specific situations, typically high income (more than $500K) or high deductions. Whether you pay AMT also depends on the state in which you file, since your state taxes can significantly affect your deductions. If you are affected, AMT tax rates are usually at 26% or 28% marginal tax rate, but effectively 35% for some ranges, meaning it is higher than ordinary income tax for some incomes and lower for others.42 AMT rules are so complicated you often need professional tax help if they might apply to you. The IRS's AMT Assistant might also help.
    • 🔹 Section 1202 of the Internal Revenue Code provides a special tax break for qualified small business stock held for more than five years.43 Currently, this tax break is a 100% exclusion from income for up to $10M in gain. There are also special rules that enable you to rollover gain on qualified small business stock you have held for less than five years. Stock received on the exercise of options can qualify for the Section 1202 stock benefit.
    • 🚧 Fill in details on QSBS. Move this elsewhere? Good readings on this?

    State tax rates and rules vary significantly. Since federal rates are much higher than state rates, you usually think of federal tax planning first. But you should also know a bit about tax rates in your state.

    State long-term capital gains rates range widely. California has the highest, at 13.3%; several states have none.44

    🔹 For this reason, some people even consider moving to another state if they are likely to have a windfall gain, like selling a lot of stock after an IPO.

    🚧 How do you determine to what state you owe taxes? Any good resources on this?

    Taxes on Equity Compensation

    Equity and taxes interact in complicated ways, and the tax consequences for an employee receiving restricted stock, stock options, or RSUs are dramatically different. This section will cover these messy details and help you make decisions that reduce the tax burden of your equity compensation.

    This section covers one of the most important and complex decisions you may need to make regarding stock awards and stock options: paying taxes early with an 83(b) election.

    • Generally, restricted stock is taxed as ordinary income when it vests.
    • If the stock is in a startup with low value, this may not result in high tax. If it's been years since the stock was first granted and the company is now worth a lot, the taxes owed could be quite significant.

    D The Internal Revenue Code, in Section 83(b), offers taxpayers receiving equity in exchange for work the option to pay taxes on their options before they vest. If qualified, a person can tell the IRS they prefer this alternative in a process called an 83(b) election. Paying taxes early with an 83(b) election can potentially reduce taxes significantly. If the shares go up in value, the taxes owed at vesting might be far greater than the taxes owed at the time of receipt.

    • ☝️ Why is it called an election? Because you are electing (choosing) to pay taxes early in exchange for this treatment by the IRS. Does the IRS secretly enjoy making simple concepts sound confusing? We're not sure.
    • An 83(b) election isn't guaranteed to reduce your taxes, however. For example, the value of the stock may not increase. And if you leave the company before you vest, you don't get back the taxes you've already paid.
    • ❗ You must file the 83(b) election yourself with the IRS within 30 days of the grant or exercise, or the opportunity is irrevocably lost.
    • ☝️ Note an 83(b) election is made on receipt of actual shares of stock. Technically, it cannot be made on the receipt of a stock option itself: You first must exercise that option, then file the election.
    • If you receive an early exercisable stock option (when you don't have to wait for the the stock to vest), you can make an 83(b) election upon receipt of the exercised shares.
    • Section 83(b) elections do not apply to vested shares; the election only applies to stock that is not yet vested. Thus, if you receive options that are not early exercisable (meaning you have to wait until they vest to exercise), an 83(b) election would not apply.
    • 🔹 Founders and very early employees will almost always want to do an 83(b) election upon the receipt of unvested shares, since the stock value is probably low. If the value is really low, and the taxes owed are not that great, you can make the election without having to pay much tax and start your capital gains holding period on the shares.
    • 🚧 Clarify here which types of equity compensation the 83b can apply to.

    📰 With the passage of the Tax Cuts and Jobs Act (TCJA) in 2017, Congress approved a new Section 83(i) that is intended to allow deferral of tax until RSU and stock option holders can sell shares to pay the tax bill. Whether companies will choose or be able to make this available to employees is not clear yet.

    When a person's stock vests, or they exercise an option, the IRS determines the tax that person owes. But if no one is buying and selling stock, as is the case in most startups, then the value of the stock—and thus any tax owed on it—is not obvious.

    D The fair market value (FMV) of any good or property refers to a price upon which the buyer and seller have agreed, when both parties are willing, knowledgeable, and not under direct pressure to carry out the exchange. The fair market value of a company's stock refers to the price at which a company will issue stock to its employees, and is used by the IRS to calculate how much tax an employee owes on any equity compensation they receive. The FMV of a company's stock is determined by the company's most recent 409A valuation.

    D A 409A valuation is an assessment private companies are required by the IRS to conduct regarding the value of any equity the company issues or offers to employees. A company wants the 409A to be low, so that employees make more off options, but not so low the IRS won't consider it reasonable. In order to minimize the risk that a 409A valuation is manipulated to the benefit of the company, companies hire independent firms to perform 409A valuations, typically annually or after events like fundraising.

    The 409A valuation of employee equity is usually much less than what investors pay for preferred stock; often, it might be only a third or less of the preferred stock price.

    🌪 Although the 409A process is required and completely standard for startups, the practice is a strange mix of formality and complete guesswork. It has been called "quite precise—remarkably inaccurate," by venture capitalist Bill Gurley. You can read more about its nuances and controversies.

    • 🚧 More on when 409As happen.

      • A 409A does have to happen every 12 months to grant the company safe harbor.
      • A 409A has to be done after any event that could be deemed a "material event," which is a fancy way of saying any event that could change the price or value of the company meaningfully. Other examples could be if a CEO leaves, if the company starts making a ton of money, or an acquisition.
    • ∑ "FMV" is a legal term defined in Supreme Court Case 546, United States vs. Cartwright.

    • ∑ "409A" is a reference to the section of the Internal Revenue Code that sets requirements for options to be tax-free on grant.

    Typically, early to mid-stage companies grant stock options, which may be ISOs or NSOs.

    • ❗When you get stock options and are considering if and when to exercise, you need to think about the taxes and when you owe them. In principle, you need to think about taxes you may incur at three points in time:
      • at time of grant
      • at time of exercise
      • at time of sale
    • These events trigger ordinary tax (high), long-term capital gains (lower), or AMT (possibly high) taxes in different ways for NSOs and ISOs.

    D The taxes at time of exercise will depend on the gain between the strike price and the FMV, known as the spread or the bargain element.

    • 🔹 If you're granted ISOs or NSOs at a low strike price, and the bargain element is zero, then you may be able to exercise at a reasonable price without triggering taxes at all. So assuming the company allows it, it makes sense to early exercise immediately (buying most or all of the shares, even though they're not vested yet) and simultaneously file an 83(b) election.
    • 🔸 An 83(b) election, as already discussed, is the choice to be taxed on the receipt of property even though you might have to forfeit or give back the property to the company. You can make an election on the receipt of stock, but you cannot make the election on the receipt of a stock option or an RSU because options and RSUs are not considered property for the purposes of Section 83(b).
    • 🚧 Move or remove this note, as it's covered earlier?
    • 🔸 ISOs are often preferred by startups, as they're supposedly better for employees from a tax perspective. This assumes that (1) AMT won't be triggered and (2) you'll get a low long-term capital gains rate by holding the stock for the appropriate holding periods. However, often you either run afoul of the AMT trap, or don't hold the stock long enough with the complicated 1 year + 2 year requirement, or the spread at exercise is small or zero, so the difference wouldn't matter anyway. NSOs do have a slightly higher tax because of the need to pay employment taxes on NSOs and not ISOs.
    • 🌪 Overall, it's not clear the ISO is that much better for employees, so many people argue for NSOs instead.
    • ☝️ This is partly because ISOs can make it harder to meet the long-term capital gains holding period.45 Many people expect early exercise, together with an 83(b) election, will help them hold the stock long enough to qualify for long-term capital gains. While this is true for NSOs, a murky part of the rules on ISOs states that even with an 83(b) election, the capital gains holding period does not begin until the shares actually vest. So if you want to immediately exercise an option and file a Section 83(b) election, and you might have liquidity soon, it's better—for those who can—to do so with NSOs.

    When it comes to taxes and equity compensation, one scenario is so dangerous we give it its own section.

    ❗ If you have received an ISO, exercising it may unexpectedly trigger a big AMT bill—even before you actually make any money on a sale! If there is a large spread between the strike price and the 409A valuation, you are potentially on the hook for an enormous tax bill, even if you can't sell the stock. This has pushed people into bankruptcy. It also caused Congress to grant a one-time forgiveness, the odds of which happening again are very low.

    D The catastrophic scenario where exercising ISOs triggers a large AMT bill, with no ability to sell the stock to pay taxes, is sometimes called the AMT trap. This infamous problem has trapped many employees and bankrupted people during past dot-com busts. Now more people know about it, but it's still a significant obstacle to plan around.

    📰 In 2017, Congress passed the Tax Cuts and Jobs Act (TCJA), which increases AMT exemptions and their phaseout thresholds. This means fewer people will be affected by AMT in 2018 than in prior years.46

    Note that if your AMT applies to events prior to 2008, you're off the hook.

    Understand this topic and talk to a professional if you exercise ISOs. The AMT trap does not apply to NSOs.

    🚧 Links to coverage on Congress's forgiveness?

    Stock Awards vs. ISOs vs. NSOs

    Because the differences are so nuanced, what follows is a summary of the taxes on restricted stock awards, ISOs, and NSOs, from an employee's point of view.

    • Restricted stock awards. Assuming vesting, you pay full taxes early with the 83(b) or at vesting:

      • At grant:
        • if 83(b) election filed, ordinary tax on FMV
        • none otherwise
      • At vesting:
        • none if 83(b) election filed
        • ordinary tax on FMV of vested portion otherwise
      • At sale:
        • long-term capital gains tax on gain if held for 1 year past when taken into income
        • ordinary tax otherwise (including immediate sale)
    • NSOs. You pay full taxes at exercise, and the sale is like any investment gain:

      • At grant and vesting:
      • At exercise:
        • ordinary tax on the bargain element
        • income and employment tax withholding on paycheck
      • At sale:
        • long-term capital gains tax on gain if held for 1 year past exercise
        • ordinary tax otherwise (including immediate sale)
    • ISOs. You might pay less tax at exercise, but it's complicated:

      • At grant and vesting:
      • At exercise:
        • AMT tax event on the bargain element
        • no ordinary or capital gains tax
        • no income or employment tax withholding on paycheck
      • At sale:
        • long-term capital gains if held for 1 year past exercise and 2 years past grant date
        • ordinary tax otherwise (including immediate sale)

    Mary Russell, a lawyer who specializes in equity compensation, recommends each form of equity be used at the appropriate time in private companies: restricted stock awards for the earliest stage of a startup, stock options with longer exercise windows for the early to mid stage, and RSUs for the later stages.47

    If you relish tax complexity, you can learn more from:

    If you are awarded RSUs, each unit represents one share of stock that you will be given when the units vest.

    • Here's the tax summary for RSUs:
      • At grant:
      • At vesting/delivery:
        • ordinary tax on current share value
      • At sale:
        • long-term capital gains tax on gain if held for 1 year past vesting
        • ordinary tax otherwise (including immediate sale)
    • 🔸 When you receive your shares, you are taxed on their value at that time.48 If you are an employee, this means you may have to write a check to the company to cover your income and employment tax withholding. Often, for U.S. employees, companies will withhold the tax in the form of shares such that no action is required by the employee at vesting time.49
    • If you receive an RSU when the stock is of little value, you cannot elect to be taxed on the value of that stock when you receive the RSU—you pay taxes at vesting time, based on the value of the shares at that time.
    • 🔸 RSUs present some big problems in private companies:
      • You will owe tax when you receive the shares, even though they are illiquid.
      • You can't minimize the tax impact of an increase in value of the underlying shares between the date you receive the RSU and the date it is settled.
      • If you are an employee you will have to write a check to the company to satisfy your income and employment tax withholding.
    • 🔸 RSUs are less attractive than stock options from a tax point of view because you cannot make an 83(b) election with respect to RSUs. By contrast, if you receive a stock option, as long as it's priced at fair market value you will have no income upon receipt of the options, and your income tax and employment tax consequences will be deferred until you exercise, an event under your control for the most part.

    Table: Comparing Taxes on Types of Equity Compensation

    This table is a summary of the differences in taxation on types of equity compensation.

    Restricted stock awards ISOs NSOs RSUs Tax at grant If 83(b) election filed, ordinary tax on FMV. None otherwise. No tax if granted at FMV. No tax if granted at FMV. No tax. Tax at vesting None if 83(b) election filed. Ordinary tax on FMV of vested portion otherwise. No tax if granted at FMV. No tax if granted at FMV. Ordinary tax on current share value. Tax at exercise AMT tax event on the bargain element. No ordinary or capital gains or employment tax. Ordinary tax on the bargain element. Income and employment tax. Tax at sale Long-term capital gains tax on gain if held for 1 year past when taken into income. Ordinary tax otherwise (including immediate sale). Long-term capital gains if held for 1 year past exercise and 2 years past grant date. Ordinary tax otherwise (including immediate sale). Long-term capital gains if held for 1 year past exercise. Ordinary tax otherwise (including immediate sale). Long-term capital gains tax on gain if held for 1 year past vesting. Ordinary tax otherwise (including immediate sale).

    Because they are so important, we list some costly errors to watch out for when it comes to taxes on equity compensation:

    • ❗ If you are going to file an 83(b) election, it must be within 30 days of stock grant or option exercise. Often, law firms will take a while to send you papers, so you might only have a week or two. If you miss this window, it could potentially have giant tax consequences, and is essentially an irrevocable mistake—it's one deadline the IRS won't extend. When you file, get documentation from the post office as well as a delivery confirmation, and include a self-addressed, stamped envelope for the IRS to send you a return receipt. (Some people are so concerned about this they even ask a friend to go with them to the post office as a witness!)
    • ❗ Watch out for the AMT trap we've already discussed.
    • ❗ If you exercise your options, and your income had been from consulting rather than employment (1099, not W-2), you will be subject to the self-employment tax, which consist of both the employer and the employee side of FICA. In addition to owing the normal income tax, this means you will owe the Social Security tax component (6.2%) up to the FICA wage base, and you will owe the Hospital Insurance component (2.9%) on all of your income.
    • ❗ Thoughtfully decide when to exercise options. As discussed, if you wait until the company is doing really well, or when you are leaving, the delay can have serious downsides.

    Evaluating Equity Compensation

    Once you understand the types of equity and their tax implications, you have many of the tools you need to evaluate an offer that includes equity compensation, or to evaluate equity you currently have in a company.

    In summary, you have to determine or make educated guesses about several things:

    • Equity value. This can be estimated by the value the company may have in the future, and the number of shares you may own.
      • Percentage ownership. As we've mentioned, knowing how many shares of stock or stock options you have is meaningless unless you know the number of outstanding shares. What matters is the percentage ownership of the company the shares represent, including the details of how the total is counted.
      • Risk. It is critical to understand risk in the business and dilution to ascertain the possible future value of equity. This article from Leo Polovets provides some additional thoughts.
    • Vesting. Understand when you will receive the equity, as well as whether you're able to exercise stock options (and pay the associated costs and taxes), and whether you can do all this before your exercise window expires.
    • Liquidity. Determine when you will be able to sell your shares, and if that is likely to be for a profit at that time. (We talk about liquidity of private stock next.)
    • Tax. Tax concerns are inseparable from the value of equity. Know the tax implications of your possible grant, exercise, vesting, and sale, in terms of ordinary income tax, employment tax, long-term capital gains, and alternative minimum tax.

    That's a lot, and even so, decisions are uncertain, but it is possible to make much more informed decisions once you have this information.

    What Is Private Stock Worth?

    We now turn to the question of determining the value of private company stock. We've seen how stock in private companies often can't be sold, so its value is difficult to estimate.

    The value of equity you cannot yet sell is a reflection of three major concerns:

    1. How well the company is doing now—that is, how profitable it is, or how many customers it is attracting.
    2. How well the company will perform in the future.
    3. How likely it is the company will be valuable as part of another company—that is, whether it may be acquired.

    The first concern is relatively clear, if you know the company's financials. The second and third come down to predictions and are never certain. In fact, it's important to understand just how uncertain all three of these estimations are, depending on the stage of the company.

    In earlier stage private companies, there may be little or no profit, but the company may seem valuable because of high expectations that it can make future profit or be acquired. If a company like this takes money from investors, the investors determine the price they pay based on these educated guesses and market conditions.

    In startups there tends to be a high degree of uncertainty about the future value of equity, while in later stage private companies financials are better understood (at least to investors and others with an inside view of the company), and these predictions are often more certain.

    Can You Sell Private Stock?

    Ultimately, the value of your equity depends on whether and when you are able to convert it into stock that you sell for cash. With public companies, the answer is relatively easy to estimate—as long as there are no restrictions on your ability to sell, you know the current market value of the stock you own or might own. What about private companies?

    A liquidity event is usually what makes it possible for shareholders in a private company to sell their stock. However, individuals may sometimes be able to gain liquidity while a company is still private.

    D A secondary market (or secondary sale, or private sale) transaction is when private company stock is sold to another private party. This is in contrast to primary market transactions, where companies sell directly to investors. Secondary sales are not routine, but they can sometimes occur, such as when an employee sells to an accredited investor who wants to invest in the company.

    D Shares held by an employee are typically subject to a right of first refusal (ROFR) in favor of the company, meaning the employee can't sell their shares to a third party without offering to sell their shares to the company first.

    🔸 Private sales generally require the agreement and cooperation of the company, for both contractual and practical reasons. While those who hold private stock may hope or expect they need only find a willing buyer, in practice secondary sales only work out in a few situations.

    Unlike a transaction on a public exchange, the buyer and seller of private company stock are not in total control of the sale. There are a few reasons why companies may not support secondary sales:

    • Historically, startups have seen little purpose in letting current employees sell their stock, since they prefer employees hold their stock and work to make it more valuable by improving the value of the company as a whole.
    • Even if employee retention is not a concern, there are reasons private sales may not be in the company's interest. Former employees and other shareholders often have difficulty initiating secondary transactions with a company.50 Private buyers may ask for the company's internal financials in order to estimate the current and future value of its stock; the company may not wish to share this confidential information.
    • Companies must consider whether sales could influence their 409A valuation.
    • Secondary sales are an administrative and legal burden that may not make it to the top of the list of priorities for busy startup CEOs and CFOs.

    🔹 However, participation in the secondary market has evolved in recent years,515253 and a few options may be possible:

    • SharesPost, Equidate, and EquityZen have sought to establish a market around secondary sales, particularly for well-known pre-IPO companies.
    • A few other secondary firms have emerged that have interest in certain purchases, especially for larger secondary sales from founders, early employees, or executives. A company can work with a firm to facilitate multiple transactions. These firms include 137 Ventures, ESO Fund, Akkadian Ventures, Industry Ventures, Atlas Peak, and Founders Circle.
    • In some cases, an employee may have luck selling stock privately to an individual, like a board member or former executive, who wishes to increase their ownership. Further discussion can be found on Quora.

    The key decisions around stock options are when to exercise and, if you can, when to sell. Here we lay out some common scenarios that might apply to you. Considering these scenarios and their outcomes can help you evaluate your position and decide what you should do.

    • Exercise and hold. You can write the company a check and pay any taxes on the spread. You are then a stockholder, with a stock certificate that may have value in the future. As discussed, you may exercise:
      • Early, even immediately upon grant.
      • Before vesting (if early exercise is available to you).
      • Sometime after vesting.
      • After leaving the company, as long as the exercise window is open.
        • 🔸 Recall that the window is likely to close soon after you leave a company, often 90 days after termination.
    • Wait until acquisition. If the company is acquired for a large multiple of the exercise price, you may then use your options to buy valuable stock. However, as discussed, your shares could be worth next to nothing unless the sale price exceeds the liquidation overhang.
    • 🔸 Secondary market. As discussed, in some cases it's possible to exercise and sell the stock in a private company directly to a private party. But this generally requires some cooperation from the company and is not something you can always count on.
    • Cashless exercise. In the event of an IPO, a broker can allow you to exercise all of your vested options and immediately sell a portion of them into the public market, removing the need for cash up front to exercise and pay taxes.

    🔹 Note that some of these scenarios may require significant cash up front, so it makes sense to do the math early. If you are in a tight spot, where you may lose valuable options altogether because you don't have the cash to exercise, it's worth exploring each of the scenarios above, or combinations of them, such as exercising and then selling a portion to pay taxes. In addition, there are a few funds and individual investors who may be able to front you the cash to exercise or pay taxes in return for an agreement to share profits.

    Author and programmer Alex MacCaw explores a few more detailed scenarios.

    🚧 Infographic: Possible visualization of these exercise options. A flowmap? "If this, then this" (with arrows).

    Because of their importance, we'll wind up with a recap of some of the key dangers we've discussed when thinking about equity compensation:

    • ❗ When it comes to equity compensation, details matter! You need to understand the type of stock grant or stock option in detail, as well as what it means for your taxes, to know what your equity is worth.
    • ❗ Because details are so important, professional advice from a tax advisor or lawyer familiar with equity compensation (or both) is often a good idea. Avoid doing everything yourself, but also avoid blindly trusting advisors without having them explain the details to you in a way you understand.
    • ❗ With stock options, high exercise costs or high taxes, including the AMT trap, may prevent you from exercising your options. If you can't sell the stock and your exercise window is limited, you could effectively be forced to walk away from your stock options.
    • ❗ If a job offer includes equity, you need a lot of information to understand the value of the equity component. If the company trusts you enough to be making an offer but doesn't want to answer questions about that offer, consider it a warning sign. Next, we offer more details on what to ask about your offer, and how to negotiate to get the answers you want.

    When a company offers any form of equity as part of its compensation package, there is a whole new set of factors for a prospective employee to consider. This chapter will help you prepare for negotiating a job offer that includes equity, covering negotiation tips and expectations, and specific reminders on what you can ask and what is negotiable when it comes to equity.

    Before accepting any job offer, you'll want to negotiate firmly and fairly. You're planning to devote a lot of your time and sanity to any full-time role; help yourself make sure that this is 💰what you want.

    ☝️ It's perfectly natural to be anxious about negotiations, whether you're going through this process for the first time or the tenth. There is a lot at stake, and it can be uncomfortable and stressful to ask for things you need or want. Many people think negotiating could get the job offer revoked, so they'll accept their offer with little or no discussion. But remember that negotiations are the first experience you'll have of working with your new team. If you're nervous, it can help to remind yourself why it's important to have these conversations:

    • Negotiations ask you to focus on what you actually want. What is important to you—personal growth, career growth, impact, recognition, cash, ownership, teamwork? Not being clear with yourself on what your priorities really are is a recipe for dissatisfaction later.
    • If you aren't satisfied with the terms of your offer, accepting it without discussion can be tough not just for you but for your new company and colleagues as well. No one wants to take on a hire who's going to walk away in just a few months when something better comes along. For everyone's sake, take your time now to consider what you want—and then ask for it.
    • The negotiation process itself can teach you a lot about a company and your future manager. Talking about a tough subject like an offer is a great way to see how you'll work with someone down the road.

    A Guide like this can't give you personalized advice on what a reasonable offer is, as that depends greatly on your skills, the marketplace of candidates, what other offers you have, what the company can pay, what other candidates the company has found, and the company's needs. But we can cover the basics of what to expect with offers, and advise candidates on how to approach negotiations.

    🔹 Companies can and should work hard to ensure that all candidates are given equal treatment in the hiring process, but inequalities persist.54 Workplace disparities in pay and opportunity span race and gender,55 with research focusing on inequality in the U.S. workplace,56 executive leadership and its well-documented lack of diversity,5758 and the technology industry.59 Gender bias in negotiation itself is also an issue; many women have been made to feel that they shouldn't ask for what they deserve.60

    More effort is needed to end biases and close the wage gap. All candidates should take the time to understand their worth and the specific value they can add to a company, so that they are fully prepared to negotiate for a better offer.

    • Many companies will give some leeway during negotiations, letting you indicate whether you prefer higher salary or higher equity.
    • Candidates with competing offers almost always have more leverage and get better offers.61
    • Salaries at startups are often a bit below what you'd get at an established company, since early on, cash is at a premium. For very early stage startups, risk is higher, offers can be more highly variable, and variation among companies will be greater, particularly when it comes to equity.
    • The dominant factors determining equity are what funding stage a company is at, and the role you'll play at the company. If no funding has been raised, large equity may be needed to get early team members to work for very little or for free. Once significant funding of an A round is in place, most people will take typical or moderately discounted salaries. Startups with seed funding lie somewhere in between.

    D When making a job offer, companies will often give a candidate a verbal offer first, to speed things along and facilitate the negotiation, following it with a written offer if it seems like the candidate and the company are close to agreement on the terms of the offer. The written offer takes the form of an 📥offer letter, which is just the summary sent to the candidate, typically with an expiration date and other details and paperwork.

    Although companies often want you to sign right away to save time and effort, if you're doing it thoughtfully you'll also be talking to the company (typically with a hiring manager, your future manager, or a recruiter, or some combination) multiple times before signing. This helps you negotiate details and gives you a chance to get to know the people you could be working with, the company, and the role, so that you can make the best decision for your personal situation.

    When you are ready to accept the terms of the offer letter, you can go ahead and sign.

    Things to look for in the offer letter include:

    • Title and level. What your role is officially called, who you report to, and what level of seniority your role is within the company.
    • Salary. What you're paid in cash, in a year, before taxes.
    • Equity compensation. You know what this is now.
    • Bonus. Additional cash you'll get on a regular basis, if the company has a plan for this.
    • Signing bonus. Cash you get just for signing. (Signing bonuses usually have some strings attached—for example, you could have to pay back the bonus if you leave the company within 12 or 24 months.)

    While the details may not be included in your offer letter, to get full information on your total rewards you'll also want to discuss:

    • Benefits like health insurance, retirement savings, and snacks.
    • All other aspects of the job that might matter to you, like time off, ability to work from home, flexible hours, training and education, and so on.

    A few general notes on these components (credits to Cristina Cordova for some of these):

    • Early stage startups will focus on salary and equity and (if they are funded) benefits. An offer of bonuses or a signing bonus are more common in larger, prosperous companies.
    • Bonuses are usually standardized to the company and your level, so are not likely to be something you can negotiate.
    • The signing bonus is highly negotiable. This doesn't mean any company will give large signing bonuses, but it's feasible because signing bonus amounts vary candidate by candidate, and unlike salary and other bonuses, it's a one-time cost to the company.

    Because startups are so much smaller than many established companies, and because they may grow quickly, there are additional considerations worth taking into account when negotiating a job offer from a startup:

    • Cash versus equity. If your risk tolerance is reasonably high, you might ask for an offer with more equity and less cash. If a company begins to do well, it'll likely "level up" lower salaries (bringing them closer to market average) even if you got more equity up front. On the other hand, if you ask for more cash and less equity, it's unlikely you'll be able to negotiate to get more equity later on, since equity is increasingly scarce over time (at least in a successful company!). Entrepreneur and venture capitalist Mark Suster stresses the need to level up by scaling pay and spending, focusing appropriately at each funding stage. In the very early days of a startup, it's not uncommon for employees to have higher salaries than the company's founders.62
    • 🚧 What is risk and how should people think about risk tolerance? Good readings on this?
    • Title. Negotiating title and exact details of your role early on may not matter as much in a small and growing company, because your role and the roles of others may change a lot, and quickly. It's more important that you respect the founders and leaders of the company. It's more important that you feel you are respected.

    Questions Candidates Can Ask

    🔹 It's important to ask questions when you get an offer that includes any kind of equity. In addition to helping you learn the facts about the equity offer, the process of discussing these details can help you get a sense of the company's transparency and responsiveness. Here are a few questions you should consider asking, especially if you're evaluating an offer from a startup or another private company:

    • Percentage ownership.
      • What percentage of the company do the shares represent?
      • What set of shares was used to compute that percentage? Is it outstanding shares or fully diluted?
      • What convertible securities are outstanding (convertible notes, SAFEs, or warrants), and how much dilution can I expect from their conversion?
    • Valuation.
      • What did the last round value the company at? (That is, what is the preferred share price times the total outstanding shares?)
      • What is the most recent 409A valuation? When was it done, and will it be done again soon?
      • What exit valuation will need to be achieved before common stock has positive value (that is, what are the liquidation overhangs)?
    • Stock options.
      • Do you allow early exercise of my options?
      • Am I required to exercise my options within 90 days after I leave or am terminated? Does the company extend the exercise window of the options of employees that depart?
    • Vesting.
      • Are all employees on the same vesting schedule?
      • Is there any acceleration of my vesting if the company is acquired?
      • Do you have a policy regarding follow-on stock grants?
      • Does the company have any repurchase right to vested shares?

    This information will help you consider the benefits and drawbacks of possible exercise scenarios.

    🔹 If you're considering working for a startup, there are further questions to ask in order to assess the state of the company's business and its plans. Before or when you're getting an offer is the right time to do this. Startups are understandably careful about sharing financial information, so you may not get full answers to all of these, but you should at least ask:

    • How much money has the company raised (including in how many rounds, and when)?
    • What did the last round value the company at?
    • What is the aggregate liquidation preference on top of the preferred stock? (This will tell you how much the company needs to sell for before the common stock—your equity—is worth something in an exit.)
    • Will the company likely raise more capital soon?
    • How long will the company's current funding last? (This will likely be given at the current burn rate, or how quickly a company is spending its funding, so will likely not include calculations for things like future employee salaries.)
    • What is the hiring plan? (How many people over what time frame?)
    • What is the revenue now, if any? What are the revenue goals/projections?
    • Where do you see this company in 1 year and 5 years, in terms of revenue, number of employees, and market position?

    There are several other resources with more questions like this to consider.

    🚧 Summarize the best items in the links above.

    Typical Employee Equity Levels

    🚧 This section currently mostly covers startups; what later-stage resources are available?

    Compensation data is highly situational. What an employee receives in equity, cash, and benefits depends on the role they're filling, the sector they work in, where they and the company are located, and the possible value that specific individual may bring to the company.

    Any compensation data out there is hard to come by. Companies often pay for this data from vendors, but it's usually not available to candidates.

    For startups, a variety of data is easier to come by. We give some overview here of early-stage Silicon Valley tech startups; many of these numbers are not representative of companies of different kinds across the country:

    • 🔹 One of the best ways to tell what is reasonable for a given company and candidate is to look at offers from companies with similar profiles on AngelList. The AngelList salary data is extensive.
    • There are no hard and fast rules, but for post-series A startups in Silicon Valley, the table below, based on the one by Babak Nivi, gives ballpark equity levels that many think are reasonable. These would usually be for restricted stock or stock options with a standard 4-year vesting schedule. They apply if each of these roles were filled just after an A round and the new hires are also being paid a salary (so are not founders or employees hired before the A round). The upper ranges would be for highly desired candidates with strong track records.
      • Chief executive officer (CEO): 5–10%
      • Chief operating officer (COO): 2–5%
      • Vice president (VP): 1–2%
      • Independent board member: 1%
      • Director: 0.4–1.25%
      • Lead engineer 0.5–1%
      • Senior engineer: 0.33–0.66%
      • Manager or junior engineer: 0.2–0.33%
    • For post-series B startups, equity numbers would be much lower. How much lower will depend significantly on the size of the team and the company's valuation.
    • Seed-funded startups would offer higher equity—sometimes much higher if there is little funding, but base salaries will be lower.
    • Leo Polovets created a survey of AngelList job postings from 2014, an excellent summary of equity levels for the first few dozen hires at these early-stage startups. For engineers in Silicon Valley, the highest (not typical!) equity levels were:
      • Hire #1: up to 2%–3%
      • Hires #2 through #5: up to 1%–2%
      • Hires #6 and #7: up to 0.5%–1%
      • Hires #8 through #14: up to 0.4%–0.8%
      • Hires #15 through #19: up to 0.3%–0.7%
      • Hires #21 [sic] through #27: up to 0.25%–0.6%
      • Hires #28 through #34: up to 0.25%–0.5%
    • José Ancer gives another good overview for early stage hiring.
    • Founder compensation is another topic entirely that may still be of interest to employees. José Ancer provides a thoughtful overview.

    🚧 Structure: Move negotiation points earlier?

    When negotiating a job offer, companies will always ask you what you want for compensation, and you should always be cautious about answering.

    If you name the lowest number you'll accept, you can be pretty sure the company's not going to exceed it, at least not by much.

    🔸 Asking about salary expectations is a normal part of the hiring process at most companies, but asking about salary history has been banned in a growing number of states, cities, and counties.63 These laws attempt to combat pay disparity64 among women and minorities by making it illegal for companies to ask about or consider candidates' current or past compensation when making them offers. Make sure you understand the laws relevant to your situation.

    A few points on negotiating compensation:

    • Some argue that a good tactic in negotiating is to start higher than you will be willing to accept, so that the other party can "win" by negotiating you down a little bit. Keep in mind, this is just a suggested tactic, not a hard and fast rule.
    • If you are inexperienced and unsure what a fair offer should look like, avoid saying exactly what you want for compensation very early in discussions. Though many hiring managers and recruiters ask about salary expectations early in the process to avoid risk at the offer stage, some ask in order to take advantage of candidates who don't have a good sense of their own worth. Tell them you want to focus on the opportunity as a whole and your ability to contribute before discussing numbers. Ask them to give you a fair offer once they understand what you can bring to the company.
    • If you are experienced and know your value, it's often in your interest to state what sort of compensation and role you are looking for to anchor expectations. You might even share your expectations early in the process, so you don't waste each other's time.
    • Discuss what your compensation might be like in the future. No one can promise you future equity, salary, or bonuses, but it should be possible to agree what those could look like if you demonstrate outstanding performance and the company has money.
    • If you're moving from an established company to a startup, you may be asked to take a salary cut. This is reasonable, but it's wise to discuss explicitly how much the cut is, and when your salary will be renegotiated. For example, you might take 25% below your previous salary, but there can be an agreement that this will be corrected if your performance is strong and the company gets funding.
    • 🔹 Always negotiate non-compensation aspects before agreeing to an offer. If you want a specific role, title, opportunity, visa sponsorship, parental leave, special treatment (like working from home), or have timing constraints about when you can join, negotiate these early, not late in the process.
    • 🔹 If you're going to be a very early employee, consider asking for a restricted stock grant instead of stock options, and a cash bonus equal to the tax on those options. The company will have some extra paperwork (and legal costs), but it means you won't have to pay to exercise. Then, if you file an 83(b) election, you're simplifying your situation even further, eliminating the AMT issues of ISOs, and maximizing your chances of qualifying for long-term capital gains tax.
    • 🚧 What other specific suggestions are helpful?

    A few notes on the negotiation process itself:

    • 🔹 Although offer letters have expirations, it's often possible to negotiate more time if you need it. How much flexibility depends on the situation. Some have criticized "exploding job offers" as a bad practice that makes no sense at all. If you are likely the best candidate for the position, or the role is a specialized and well-paid one where there are usually not enough good candidates to meet the demand, you'll likely have plenty of leverage to ask for more time, which may be needed to complete the interview process with other companies. Software engineering roles in tech companies are like this currently.
    • Getting multiple offers is always in your interest. If you have competing offers, sharing the competing offers with the company you want to work for can be helpful, granted your offers are competitive.
      • However, dragging out negotiations excessively so you can "shop around" an offer to other companies is considered bad form by some; it's thoughtful to be judicious and timely to the extent that it's possible.
    • ❗ Get all agreements in writing, if they are not in your offer letter.
    • Do not accept an offer verbally or in writing unless you're ready to stand by your word. In practice, people do occasionally accept an offer and then go back on it, or renege. This can put the company in a difficult position (they may have declined another key candidate based on your acceptance), and may hurt your reputation in unexpected ways later.

    Some additional resources:

    • Harvard Business Review has a variety of general 💰suggestions on negotiation processes.
    • Robby Grossman, a VP at Wistia, gives a good overview of equity compensation and negotiation suggestions in startups.

    Offer and Negotiation Dangers

    To wind up our discussion of offers and negotiations, here are some key dangers and mistakes to watch out for:

    • ❗ Do not accept an offer of stock or shares without also asking for the exact number of total shares (or, equivalently, the exact percentage of the company those shares represent). It's quite common for some companies to give offers of stock or options and tell you only the number of shares. Without the percentage, the number of shares is meaningless. Not telling you is a deeply unfair practice. A company that refuses to tell you even when you're ready to sign an offer is likely giving you a very poor deal.
    • 🔸 If you're looking at an offer, work out whether you can and should early exercise, and what the cost to exercise and tax will be, before accepting the offer.
    • ❗ If you join a startup right as it raises a new round, and don't have the chance to exercise right away, they may potentially issue you the options with the low strike price, but the 409A valuation of the stock will have gone up. This means you won't be able to early exercise without a large tax bill. In fact, it might not be financially feasible for you to exercise at all.
    • ❗ Vesting starts on a vesting commencement date. Sometimes stock option paperwork won't reach you for weeks or months after you join a company, since it needs to be written by the lawyers and approved by the board of directors. In your negotiations, do make sure the vesting commencement date will reflect the true start date of when you joined the company, not the time at which the stock option is granted.
    • 🔸 The offer letter is not the actual grant of your equity. After you sign your offer letter, ensure the company delivers you your actual equity grant documents within a few weeks. It is not uncommon for early-stage startups to be sloppy with their equity granting. If they take too long to send your grant documents, the fair market value (and exercise price) of the equity could rise in the time you're waiting, which is money lost for you.
    • 🔸 If you're going to early exercise, consider it like any investment. Don't believe every projection about the value of the company you hear. Founders will tell you the best-case scenario. Remember, most startups fail. Do your research and ask others' opinions about likely outcomes for the company.
    • ❗ It may not be common, but some companies retain a right to repurchase (buy back) vested shares. It's simple enough to ask, "Does the company have any repurchase right to vested shares?" (Note repurchasing unvested shares that were purchased via early exercise is different, and helps you.) If you don't want to ask, the fair market value repurchase right should be included in the documents you are being asked to sign or acknowledge that you have read and understood. (Skype's controversy related to repurchasing has some startup employees looking out for companies with similar plans.) You might find a repurchase right for vested shares in the Stock Plan itself, the Stock Option Agreement, the Exercise Agreement, the bylaws, the certificate of incorporation, or any other stockholder agreement.

    This section covers a few kinds of documents you're likely to see as you negotiate a job offer and sign on to a company. It's not exhaustive, as titles and details vary.

    • When you are considering your offer, make sure you have all of the documents you need from the company:

    • If you have equity compensation, at some point—possibly weeks or months after you've joined—you should get a Summary of Stock Grant, Notice of Stock Option Grant, or similar document, detailing your grant of stock or options, along with all details such as number of shares, type of options, grant date, vesting commencement date, and vesting schedule. It will come with several other documents, which may be exhibits to that agreement:

    • If you are exercising your options, you should also see paperwork to assist with that purchase:

    • End of year tax documents

      • You should receive a form 📥3921 or 3922 from your company if you exercised ISO options during the year.

    The resources here are a small subset of the full set of resources cited in the Guide to Equity Compensation, selected for their breadth, notability, or depth on specific issues.

    Considerations for Founders

    Considerations for Candidates and Employees

    Types of Equity Compensation

    Vesting and Expiration of Stock Options

    This Guide and all associated comments and discussion do not constitute legal or tax advice in any respect. No reader should act or refrain from acting on the basis of any information presented herein without seeking the advice of counsel in the relevant jurisdiction. The author(s) expressly disclaim all liability in respect of any actions taken or not taken based on any contents of this Guide or associated content.

    Many thanks to all contributors to this Guide and those who have given detailed feedback, including Julia Evans, George Grellas, Chris McCann, Leo Polovets, Srinath Sridhar, Andy Sparks, and David Weekly, and to the many commentators on Hacker News. The original authors are Joshua Levy and Joe Wallin.

    This Guide is a living publication, imperfect but improving. If you have an idea or contribution that might improve this Guide, please add suggestions in the margins. We gladly credit all contributors.

    This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License.

    1. https://corpgov.law.harvard.edu/2014/10/02/what-has-happened-to-stock-options/

    2. https://www.nceo.org/assets/pdf/articles/GSS-2014-data.pdf

    3. https://www.ft.com/content/d6599ae0-5738-11e1-869b-00144feabdc0

    4. https://www.treasury.gov/resource-center/tax-policy/tax-analysis/Documents/Firms-Exceeding-162m.pdf

    5. https://www.epi.org/publication/taxes-executive-compensation/

    6. http://www.nber.org/papers/w16585.pdf

    7. https://www.investopedia.com/articles/markets/120215/if-you-had-invested-right-after-starbucks-ipo.asp

    8. https://money.cnn.com/1999/11/10/companies/ups/

    9. https://techcrunch.com/2017/06/28/a-look-back-at-amazons-1997-ipo/

    10. https://dealbook.nytimes.com/2009/08/19/googles-ipo-5-years-later/

    11. https://en.wikipedia.org/wiki/Initial_public_offering_of_Facebook

    12. https://www.investopedia.com/terms/c/c-corporation.asp

    13. https://www.quora.com/Why-do-most-technology-startups-incorporate-in-Delaware

    14. https://www.nytimes.com/2012/07/01/business/how-delaware-thrives-as-a-corporate-tax-haven.html

    15. http://www.investopedia.com/articles/analyst/03/111903.asp

    16. https://www.nytimes.com/2018/09/30/business/women-corporate-boards-california.html

    17. https://www.dlapiperaccelerate.com/knowledge/2017/board-action-meetings-vs-written-consents.html

    18. https://corpgov.law.harvard.edu/2017/05/25/2017-ipo-report/

    19. http://reactionwheel.net/2018/05/zipcar-fundraising-breakdown.html

    20. https://www.nytimes.com/2016/08/22/business/economy/bay-area-start-ups-find-low-cost-outposts-in-arizona.html

    21. http://www.chicagotribune.com/bluesky/technology/ct-silicon-valley-midwest-startups-20150925-story.html

    22. http://codingvc.com/valuing-employee-options/

    23. https://www.cooleygo.com/what-is-a-cap-table/

    24. https://lsvp.wordpress.com/2008/09/15/what-entrepreneurs-need-to-know-about-founders-stock/

    25. https://avc.com/2010/10/employee-equity-the-liquidation-overhang/

    26. https://www.inc.com/business-insider/tanium-security-startup-orion-hindawi-fired-employees-before-stocks-vested.html

    27. https://www.bloomberg.com/news/articles/2017-09-19/tesla-worker-says-timing-of-firing-denied-him-lucrative-shares

    28. https://amplitude.com/blog/2015/12/01/employee-equity-is-broken-heres-our-fix/

    29. https://github.com/clef/handbook/blob/master/Hiring%20Documents/Guide%20to%20Your%20Equity.md

    30. https://medium.com/@barmstrong/improving-equity-compensation-at-coinbase-8749979409c3

    31. http://fortune.com/2015/03/23/pinterest-employee-taxes/

    32. https://www.quora.com/Why-do-most-startups-force-employees-to-exercise-their-vested-ISO-options-within-90-days-if-they-leave-rather-than-the-option-to-convert-to-NSOs

    33. http://thestartuplawblog.com/rsus-the-tax-problems/

    34. http://www.slate.com/articles/news_and_politics/politics/2014/04/how_long_is_the_tax_code_it_is_far_shorter_than_70_000_pages.html

    35. https://www.gpo.gov/fdsys/pkg/USCODE-2016-title26/content-detail.html

    36. https://www.taxpolicycenter.org/briefing-book/how-are-capital-gains-taxed

    37. https://www.taxpolicycenter.org/briefing-book/what-amt

    38. https://today.yougov.com/news/2013/01/08/understanding-how-marginal-taxes-work-its-all-part/

    39. https://www.taxpolicycenter.org/briefing-book/what-tax-changes-did-affordable-care-act-make

    40. https://www.investopedia.com/articles/personal-finance/020714/new-taxes-under-affordable-care-act.asp

    41. https://www.fool.com/taxes/2017/12/11/long-term-capital-gains-tax-rates-in-2018.aspx

    42. https://www.fool.com/taxes/2018/02/05/how-the-alternative-minimum-tax-is-changing-in-201.aspx

    43. https://blog.wealthfront.com/qualified-small-business-stock-2016/

    44. http://www.fool.com/personal-finance/taxes/2014/10/04/the-states-with-the-highest-capital-gains-tax-rate.aspx

    45. http://thestartuplawblog.com/the-problem-with-immediately-exercisable-isos/

    46. https://medium.com/@barryjk/the-tax-law-that-is-unintentionally-hammering-silicon-valley-employees-894a7b54ba8a

    47. http://stockoptioncounsel.com/blog/early-expiration-of-startup-stock-options-part-3-examples-of-good-startup-equity-design-by-company-stage/2017/8/11

    48. http://joewallin.com/2014/09/13/rsus-vs-restricted-stock-vs-stock-options/

    49. https://www.schwab.com/public/eac/resources/articles/rsu_basics.html

    50. https://www.wsj.com/articles/former-employee-wins-legal-feud-to-open-up-startups-books-1485435602

    51. https://techcrunch.com/2015/10/14/selling-private-company-shares-2-0/

    52. http://www.industryventures.com/2014/12/02/employee-liquidity-good-for-private-companies/

    53. https://medium.com/@rizstanford/secondary-sales-in-vc-backed-startups-a-quick-primer-for-entrepreneurs-bdc25ea7f39a

    54. https://iwpr.org/publications/gender-wage-gap-2017-race-ethnicity/

    55. https://digitalcommons.ilr.cornell.edu/cgi/viewcontent.cgi?article=2208&context=articles

    56. http://www.pewresearch.org/fact-tank/2016/07/01/racial-gender-wage-gaps-persist-in-u-s-despite-some-progress/

    57. https://aflcio.org/paywatch/company-pay-ratios

    58. http://fortune.com/2017/06/09/white-men-senior-executives-fortune-500-companies-diversity-data/

    59. https://www.eeoc.gov/eeoc/statistics/reports/hightech/

    60. https://www.newyorker.com/science/maria-konnikova/lean-out-the-dangers-for-women-who-negotiate

    61. https://www.shrm.org/resourcesandtools/hr-topics/employee-relations/pages/using-a-job-offer-as-leverage-is-no-longer-a-big-no-no.aspx

    62. http://siliconhillslawyer.com/2016/06/23/founder-compensation-cash-equity-liquidity/

    63. https://www.hrdive.com/news/salary-history-ban-states-list/516662/

    64. https://www.nytimes.com/2018/02/16/business/economy/salary-history-laws.html




    All Comments: [-] | anchor

    no_wizard(2101) 5 days ago [-]

    This seems mostly geared toward private companies that grant equity. As it's part of the Galloway series that targets this audience that makes sense.

    I do wonder how much of this applies to RSUs granted by public corps

    neilv(3544) 5 days ago [-]

    Would they be referring to that here?

    https://github.com/jlevy/og-equity-compensation/blob/master/...

    > Topics **not yet covered**:

    > - Equity compensation programs, such as [ESPPs](https://www.investopedia.com/terms/e/espp.asp) in public companies. (We'd like to [see this improve](#please-help) in the future.)

    GeneralMayhem(10000) 5 days ago [-]

    Basically none of it. RSUs at public companies are as good as cash that just happens to be pre-invested. The tax implications are very simple (they're just regular income like getting paid in cash), and so are your legal rights (you're not much different from anyone who bought a share on the stock exchange). You should risk-adjust their value like any investment, but there's are very few if any sneaky things that can happen to pull the rug entirely.

    wyldfire(412) 5 days ago [-]

    On this April 13 in these United States, I can't help but think of the incredible inconvenience of how RSUs and shares sold seem to be calculated for the sake of income taxes. Please just add it up and send me the bill. I don't want to pay more than what's due. And I don't want to cheat. For whatever reason, the typical tax interview software guesses wrong or has insufficient inputs when I feed it info from employer + brokerage. So what remains feels like guesswork with liability on both ends.

    toast0(10000) 5 days ago [-]

    RSUs aren't really that bad, unless your employer does sell to cover in annoying ways. Net share withholding works out super simple, the shares that weren't withheld are at the brokerage with the correct basis, and the income and withholding are reported accurately on your w-2.

    Options do get pretty nasty if you exercise and hold, when the fair market value is higher than the fair market value; because then you have to have an AMT return and a regular return and reconcile them.

    ESPP with a discount was pretty bad the last time I had it; the brokerage said they were specifically required by IRS rules to report the wrong cost basis, and you had to adjust it when you sold, or you'd have the discount reported on your w-2 and again as a capital gain. Maybe that changed, capital gains reporting has changed over time.

    lopkeny12ko(1240) 4 days ago [-]

    > Please just add it up and send me the bill. I don't want to pay more than what's due. And I don't want to cheat.

    I have a hard time understanding this comment because this is exactly what employers do when paying out RSUs.

    At the end of the year, you get a 1099 indicating the fair market value of shares you've received. There's no trickery here--this is literally the amount you owe income tax on.

    I'm not sure what tax software you're using that requires you to guess inputs and numbers. TurboTax makes this trivially straightforward.

    cj(3450) 5 days ago [-]

    As our 30 person startup has grown, I made a conscious decision to stop pitching stock options as a primary component of compensation.

    Which means the job offer still includes stock options, but during the job offer call we don't talk up the future value of the stock options. We don't create any expectation that the options will be worth anything.

    Upside from a founder perspective is we end up giving away less equity than we otherwise might. Downside from a founder perspective is you need up increase cash compensation to close the gap in some cases, where you might otherwise talk up the value of options.

    Main upside for the employee is they don't need to worry too much about stock options intricacies because they don't view them as a primary aspect of their compensation.

    In my experience, almost everyone prefers cash over startup stock options. And from an employee perspective, it's almost always the right decision to place very little value ($0) on the stock option component of your offer. The vast majority of cases stock options end up worthless.

    Swizec(3268) 5 days ago [-]

    > The vast majority of cases stock options end up worthless

    My fav manager had a great way of phrasing this: 'There are more ways for your options to be worthless than to make you rich'

    But I also personally know plenty of people who made off great with their startup equity. They're def not worthless.

    Ultimately I think you should never take an uncomfortable pay-cut to join a company and you should maximize your stock compensation on top of that. Don't forget other types of equity – brand, exposure to good problems, network.

    __turbobrew__(10000) 5 days ago [-]

    Even if the company has a successful exit lots of times the founders have different stock class than employees which allows them to cook the books in creative ways where employee stocks are devalued without affecting founder stocks.

    I personally went through a successful exit of a company where I was one of the early engineers and was privy to orchestrating the sale (working with potential buyers and consultants) and saw this happen.

    I now am granted stocks which are traded on the NYSE so nobody can cook the books without commiting securities fraud.

    yieldcrv(10000) 5 days ago [-]

    > In my experience, almost everyone prefers cash over startup stock options.

    Good to know, because its common for the founder and hiring manager guilt trips to be insane.

    blitzar(10000) 5 days ago [-]

    As your 30 person startup has grown, the (future) value of the stock has gone from $0.00 to not $0.

    When the value was zero, of course you had to talk up future value - you were selling something worth $0 for $1,000's. Now that it is worth something, it represents actual value for the employees to swap for salary, which is why you no longer offer as much!

    Aurornis(10000) 5 days ago [-]

    > In my experience, almost everyone prefers cash over startup stock options.

    My experience has been a little different. We had a lot of people demanding both very high cash comp and then demanding very high equity packages on top.

    Giving people a sliding scale option did put some of the control back in their hands, but it also produced an analysis paralysis for some where they couldn't decide what to pick.

    > And from an employee perspective, it's almost always the right decision to place very little value ($0) on the stock option component of your offer. The vast majority of cases stock options end up worthless.

    Much of this is due to startups failing. Every random "startup" trying to pay people with options because the founders have no hope of success inflates this statistic.

    However another driver of this statistic is the extremely short exercise window upon quitting. People may work somewhere for 1-3 years but the company could be 5-10 years away from acquisition. Employees have to give the company money at time of quitting to get any equity, which few want to do.

    I know the common wisdom, but I also know that there are a couple local technology centered private Slack groups in my area where people will eagerly try to evaluate and possibly buy your options for local startups. They don't buy everything, obviously, but there is demand for the few cases where contracts allow transfer of the resulting equity.

    babl-yc(10000) 5 days ago [-]

    So would you trade your founder equity for a fixed salary? My guess is probably not.

    Equity is an extremely important factor for many candidates, especially more senior ones and executives.

    I would not pitch it as future value, and instead pitch as % of company. If it's a minuscule amount that doesn't move the needle in offer conversations, than perhaps you are not offering enough, or you're identifying candidates who value more predictable income than investment in the company.

    Alex3917(941) 5 days ago [-]

    > And from an employee perspective, it's almost always the right decision to place very little value ($0) on the stock option component of your offer. The vast majority of cases stock options end up worthless.

    This isn't actually true from a historical perspective. The primary reason why the gap between the wealthy and and everyone is increasing is that employees started preferring cash compensation over equity. Joseph Blasi documented this in his book The Citizen's Share, and that book is why Elizabeth Warren recently passed legislation making it easier for employers to give equity to their employees.

    grandempire(10000) 5 days ago [-]

    I often had startups offer me a number of shares with no explanation for the percentage ownership or the number of total shares.

    I said I have to value them at zero without more information and they would act all offended when I asked for more (happened at least 3 times).

    This suggests to me that founders either don't understand the mechanics themselves or are preying on lack of financial understanding.

    Jasonhhh2(10000) 5 days ago [-]

    That mindset can definitely simplify negotiations, but I've noticed that removing equity from the perceived value stack can change how people show up. Some folks who might've gone the extra mile with even a slim shot at ownership now treat the role more like a job than a mission. I'm curious—have you seen any shifts in long-term engagement or retention since downplaying equity?

    immibis(10000) 5 days ago [-]

    Isn't that the point of equity compensation? I don't care about owning a percentage of the company - that just sounds complicated. I care about converting it into cash later. To compensate for the small chance that will be able to happen, you better make it seem like a lot more cash than the alternative cash compensation you're offering. The upside to you is that you don't have to pay that bundle of cash for a while, and you only have to pay it if you have it. And not you personally, but all investors indirectly.

    goldchainposse(10000) 4 days ago [-]

    I was a hired early to a startup (my hiring manager was the CEO) that's now public and worth $10B+ that you've heard of. It took them over 10 years to go public, and I would have done just as well putting my money in FAANG, but with lower risk and more liquidity.

    ein0p(10000) 4 days ago [-]

    This is the way. Options aren't really worth much for the rank and file startup employees after about 7-10 hires. That fraction of a percent is just not going to be life changing unless it's the next OpenAI or something. For very early employees it's different, but even for them some founders will assign far too little equity to really make a difference.

    marssaxman(10000) 4 days ago [-]

    I would have ignored anything you said about the value of stock options anyway, having many years ago learned that they are practically always worthless, so making me a straightforward, honest, non-speculative offer would make me more interested in working for your company, not less. Kudos. Keep it up!

    choppaface(1792) 4 days ago [-]

    A candidate wants a _competitive_ offer. While stock is almost impossible to compare across offers, candidates can at least stack-rank the company's funding and check to see how the proffered percentage compares to the mean for the funding round. So if a company has high-percentile funding, and gives a high-percentile equity fraction, it's a good sign to the candidate. But of course, the company could be WeWork, or even OpenAI could get risky if the tender offers stop (which will happen when/if the market crashes).

    At the end of the day, it means a lot to the candidate if the company _wants to compete_ for a hire, especially in the current economy (layoff-friendly and SWE saturated, especially versus 10 years ago). A story like "your options could be worth $XXX in 4 years" I hope is not seen as competitive today.

    apwell23(10000) 4 days ago [-]

    > t's almost always the right decision to place very little value ($0) on the stock option component of your offer

    one of my coworkers at databricks say their TC is like 900k or something based on some BS imaginary options value. lol .

    m12k(10000) 4 days ago [-]

    > The vast majority of cases stock options end up worthless

    Also, even if the company ends up worth a lot of money, there's no guarantee that a way to liquidate, such as an IPO, exit or secondary market, will become available in any reasonable time frame. And as a regular employee you have exceedingly little to say in bringing about such events. There's not much fun in having a winning lottery ticket that can't be cashed in, in fact it's highly stressful.

    balalayuki(10000) 4 days ago [-]

    I find it refreshing that you prioritize cash compensation over stock options. Many employees may feel more secure with a higher salary rather than relying on uncertain equity.

    mbesto(10000) 4 days ago [-]

    I'm not saying this is right or wrong. But, if you're ventured backed, then this strategy is usually at odds with your investors. The reason stock options were used in the past was because you were signaling to everyone (you, your family, your grandma, your early employees, your current investors, your advisors, your future investors, etc.) that you were strapping on to a rocket ship. By paying them more and giving less stock, this means your capital raises don't stretch as far (from a perspective of time). This in turn will be a signal to your investors that you may take the $1M and not the $3B deal (see Google/Yahoo), which they may not like.

    jan3024-2r(10000) 5 days ago [-]

    Just remember this is the forum run by the dudes that set up Sillion CON Vallee bank.

    JumpCrisscross(69) 4 days ago [-]

    > this is the forum run by the dudes that set up Sillion CON Vallee bank

    No. Y Combinator didn't found SVB.

    j45(3605) 5 days ago [-]

    Statistically, stock options are often lottery tickets that the holder may have a tiny say in.

    phendrenad2(10000) 3 days ago [-]

    It's hilarious that people still take them seriously.

    sprocklebud(10000) 5 days ago [-]

    I got hit with a new equity compensation fugazi with RSUs at a small public company recently.

    My offer letter pledged something like $100,000 of stock, vesting over four years. I was told that I would receive the grant within the first three months of my employment, once it was approved by the board.

    Once I finally received the grant, it was 1/5 of what it should have been. "What gives?", I inquired.

    Apparently the stock incentive plan has a "price floor" for grants at $5 / share, and the stock had plunged to approximately $1 / share at my time of hire.

    So my offer letter says my grant is for $100k, but in reality it's $20k.

    I learned this was because there was a limited pool of stock available for employee award grants, and a recent rout in the stock price meant there was an insufficient amount of stock available for grants.

    Apparently going forward, offer letters specify the number of RSUs rather than a $ amount. So I guess a charitable interpretation is that it may not have been so much an intentional deception as a set of unfortunate circumstances coming together with some poor oversight on the details of my offer letter.

    Still, I am incensed.

    I referred to a previous employer's offer letter and RSU grant for comparison. The offer letter also specified a $ amount, and did not specify how it would determine the price of the stock to calculate the awards by.

    In that case, it seemed to be the average closing price of the stock in the month the award was granted. Which I'm content with, but these details also were not specified in the offer letter.

    tldr if you get an offer letter for a $ amount of RSUs, make sure to clarify (in writing) how the valuation of the stock is determined for the calculation of the number of units awarded.

    pyfon(10000) 4 days ago [-]

    Oh that is bad! It is a 20k/y pay cut you weren't expecting.

    If they were keen to make amends they should just bump your pay that much. Unless they are struggling.

    By the way I have a similar RSU amount and schedule. So gar so good but cognizant that in the contract they can stop it at any time. I took the risk as I can also quit at any time!

    marcusb(10000) 4 days ago [-]

    All of the RSU offers I ever got stipulated the grant was 'subject to the approval of the board', i.e., not guaranteed. That said, I'd be absolutely livid if something like this happened, and would be expecting my manager to either make it right, or I'd look for a new job at the first available opportunity.

    You can't do good business with bad people.

    pm90(10000) 4 days ago [-]

    This is strange and possibly illegal. If the stock falls, you should get more stock since they're worth less. If they don't have enough stock then they shouldn't have offered you that as compensation.

    In any case... $1- $5 is penny stock territory. I believe you get delisted from NYSE if ur stock stays at $1 for too long.

    retiredpapaya(10000) 4 days ago [-]

    The Netflix approach to this [1], where Netflix allows employees to choose how much of their compensation is cash vs options seems like the best approach - you can tune based on your risk tolerance.

    > Each employee chooses each year how much of their compensation they want in salary versus stock options. You can choose all cash, all options, or whatever combination suits you. You choose how much risk and upside (down) you want. These 10-year stock options are fully-vested and you keep them even if you leave Netflix.

    [1]: https://jobs.netflix.com/work-life-philosophy

    lbotos(10000) 4 days ago [-]

    I know a few years ago spotify had a similar selector:

    - cash bonus

    - RSUs

    - More OTE Options

    You got to pick two and your ratio. IIRC, 80/20, 60/40, 50/50.

    jjeaff(10000) 4 days ago [-]

    Is there some bonus given if you choose stock options? Otherwise, what would be the incentive of taking options over cash in any amount?

    robocat(3527) 4 days ago [-]

    The stock options are for common stock, right?

    However investors that put money in get preferred shares (not common stock) right?

    The tradeoff is not equal: taking less salary and receiving stock of less value seems risky to me. I can't imagine the employees discount is very good (those preferred shareholders don't want to be diluted).

    Better sibling comment here with in depth opinion: https://news.ycombinator.com/item?id=43677084 : which answers my question:

      when your options vest, is that you are essentially allowed to make an equity investment in the company with really unfavorable terms (ie ur not even getting preferred stock or any voting rights unlike your average investor).
    jiveturkey(10000) 4 days ago [-]

    this is a bad comment for this subject. NFLX options are on a publicly traded stock. the terms are also different than a startup stock option. you've really just introduced confusion into this subject, judging from all the child comments.

    in my experience, most startups do offer you a sliding scale of cash vs equity, just not 90% as NFLX does. they may not advertise it or be upfront about it, but i've never personally experienced a startup that wouldn't trade one for the other.

    darod(3478) 4 days ago [-]

    One thing not pointed out in the article, but I would like to hear others perspective on, is what happens at the 4 year mark when all options are fully vested and exercisable. This is a scenario for employees that have been there long-term. Should requests be made for further option grants? Should employers think about further option grants to retain employees? What are people's thoughts and experiences?

    paxys(10000) 4 days ago [-]

    You should be getting refreshers every year. If you don't then you are either not negotiating hard enough or the company just doesn't want you around for the long term.

    temp250417(10000) 4 days ago [-]

    After seeing several option grant story arcs from start to finish (including one I've experienced personally), I find this type of compensation utterly worthless and frankly insulting to the workforce.

    What happens in the average case scenario when your options vest, is that you are essentially allowed to make an equity investment in the company with really unfavorable terms (ie ur not even getting preferred stock or any voting rights unlike your average investor).

    Let's run the math here really quickly. You leave your high paying, hard, cold cash job at megacorp XYZ (let's call it $300k), to join hot startup ABC that just raised a series A at a $50MM post. The startup offers you $150k in cash because ... everyone is in it for "the mission", and if they're generous another $250k in options compensation to basically be on par with the XYZ salary that you're leaving. Now, that $250k options grant is based on where the founders want the company to be by the time that your vesting schedule starts kicking in. So really, what you're getting is more like 0.25 of the company if the company hits $100m valuation. We're not going to even bother discussing pref, dilution and all the other factors that are constantly fighting to reduce that equity value. ANYWAY... once you vest you're presented with the right to exercise, which costs money and is going to result in a tax bill which...costs money. Now you're a wise financial planner and know that the sooner you exercise, the less tax you have to pay in the case of a liquidation event...so you fork up the cash. Now what's it going to cost? Probably not 0, because strike prices are determined based on the valuation of the company when the options are issued...so you're probably more in line to spend maybe $50k if you're lucky but mostly closer to $100k. If there hasn't been a 409A adjustment, you don't have to pay tax on that. Now if you're closer to a series B and let's say the founders got where they wanted to be and valuation doubled, the 409a was filed and now you get to pay regular income tax, so you find yourself being taxed as if you just made that coveted $250k...but you didn't. You are making an investment ... just like any other investor, albeit with a lot less favorable terms. The best part is..guess what? If your circumstances change and you want to move on to a different job, you are now getting to choose between staying with the company until it has a liquidation event..or you have to effectively invest in it. Pretty shitty deal!

    Now, this obviously assumes that you exercise your options and that you're trying to optimize your tax bill. You could just as well be vesting and staying with the company for a long period of time, not having to really exercise your options and effectively you can make a ton of money without putting anything up for equity besides your sweat. Or you could have stayed at megacorp and taken that half of your salary that you gave up to invest in this ABC startup OR maybe a big bundle of the same kinds of startups with a much better risk profile because of diversification (and less reward).

    Now let's actually talk about a happy case. You joined early, you're now an exec, you earned your stock, you vested, the company was gracious enough to give you a low interest loan to exercise your options...you're golden. Company is getting acquired by a legacy big pocketed company or PE firm, you're about to make bank and retire early. BUT...there is a caveat. During the sale proceedings it has been decided that half the purchase price is going to be stock and half is going to be cash. Moreover, all the execs should roll half of their equity into the new venture and you're locked in for another 3 years, but now...you're holding equity in a totally different beast of a company and you have 0 say or idea as to how that company works or trades.

    In any case, as one of the comments here said...there are a lot more ways for your options to be worth nothing than there are for you to become rich from them. There are just too many variables to consider. It is not a good way to become rich. In order for it to be worth it, you have to be at a company that succeeds in making your equity worthwhile despite all of these caveats.

    jt2190(10000) 4 days ago [-]

    Your description seems directionally correct, but I don't understand this part:

    > You are making an investment ... just like any other investor, albeit with a lot less favorable terms. The best part is..guess what? If your circumstances change and you want to move on to a different job, you are now getting to choose between staying with the company until it has a liquidation event..or you have to effectively invest in it. Pretty shitty deal!

    Why would you have to stay with the company after you vest? Is there some kind of clause that strips you of your share ownership or forces you to sell if you leave the company?

    ldjkfkdsjnv(10000) 4 days ago [-]

    Another common one, is you hold a big equity position, but get pushed out/leave early long before there is an exit event. The investors and executives still at the company will fuck you over on your options, and like diluted your options after you left. This happens literally all the time, and if you arent the CEO, there is almost nothing you can do.

    Just a few weeks ago, I met a CTO that got the company to a series A level of revenue, by building the whole product over two years. Was fired by the CEO, who brought on new investors, recapped the equity table, and drove his option value to almost nothing.

    elephanlemon(10000) 4 days ago [-]

    Conspicuously missing from this is any discussion of clawbacks or repurchase rights, which can be a big deal. Sadly most people do not seem to be too familiar with these, but they should as they are quite common and very dangerous to employees.

    https://www.stockoptioncounsel.com/blog/standards-ownership-...

    Terretta(1560) 4 days ago [-]

    Also missing, any discussion of equity-like or profit-participatory structures from LPs (limited partnerships).

    Technologists joining one of these should know the 'business domain' 'partners' are either buying into or awarded partnership interests, but structures can be available for non-business domain roles (in firms that think technology isn't in their business domain cough), such as 'profits interests', 'synthetic equity', 'phantom equity', or etc.*

    If the firm has a product and you're helping build it, look for equity-like that let you not only share in profits (if any, most starting things don't have profits) but have a stake in capital events (from asset sale to IPO).

    Think of these two forms as something like dividends and something like a combination of options and RSUs. If the profit component is intended as part of annual comp, it should pay at 100% from the start even if you don't 'own' it until you vest. Meanwhile if it's a future reward, then both it and the capital-like would have a 'tail' that remains in effect if profits or a capital event happens after you leave.

    These are very complicated and very bespoke per firm since they are 'made out of' the partnership interests of the LP where ownership is handled as 'capital accounts' and may have no accounting method for 'goodwill' value separate from partner capital accounts. In such cases, generally partners have shaved off some portion of their rights and allocated those rights to employees, and the mechanisms of this 'waterfall' amount to where you stand in that line if at all.

    Ideally (a) seek advice from someone experienced with these that (b) you don't have to spend $1200 an hour on.

    * Partnerships that understand their business domain is in the technology business — since technology is just another word for tools, and business humans should be tool crafters too — will be using this and have told you about it during interview, and it will all go more smoothly.

    jiveturkey(10000) 4 days ago [-]

    clawbacks are not common, and no one should ever accept such a package. (maybe executives, since they may have other golden parachute provisions.)

    repurchase rights are exceedingly common.

    blindriver(10000) 4 days ago [-]

    Those equity percentages in this document are EXTREMELY FOUNDER FRIENDLY and I believe this entire document was written to anchor new employees with lowered expectations on equity. I think this entire document is a disingenuous scam to make new startup employees think that those percentages are okay.

    I've been in Silicon Valley a long time, since the dotcom boom. My first company, the executive assistant got so rich from the pre-dotcom IPO she quit and bought a vineyard. That's how things used to be. And we aren't talking about some crazy ipo, it was before those times.

    Fast forward to these days, the startup I worked for got acquired. I was engineer < 15. The founders got low 9 figures, I got 5 figures. Almost everyone got fucked for years of loyalty.

    But that's what YC and other accelerators teach founders. Be cheap with equity. And this document just perpetuates that.

    Founders can easily make life changing money but the people that do the actual work get fucked unless it becomes a >100B company like a Facebook. That's not realistic and they know that. Employees need a bigger piece of the pie when things go great for the company and not just when it becomes a Facebook, Uber, etc.

    If you want to know how to evaluate equity, pick a total valuation of the company at exit and then multiply by your stake. If the company needs to exit at > 10B for you to make a life changing amount of money, then ask for much much more equity or don't take the offer.

    ryandrake(10000) 4 days ago [-]

    It's crazy how 'founder friendly' and 'investor friendly' (read: 'employee unfriendly') the norm has gotten. I would never work for someone else's startup these days. No way, no how. Four orders of magnitude difference between the founders' exit and the early employees' exit is totally unacceptable.

    sgustard(3567) 4 days ago [-]

    The majority of comments here seem to argue the ideal equity share for employees is zero, since it probably won't be worth anything. That seems like an even more founder friendly viewpoint, no? Mass inequality of ownership is how we end up normalizing the corrupt billionaire class. I agree with you we need an industry desire for better ownership terms, but instead I see people arguing employees should just take a salary, look the other way, and let owners hoard all the spoils.

    Ozzie_osman(2965) 4 days ago [-]

    > I believe this entire document was written to anchor new employees with lowered expectations on equity. I think this entire document is a disingenuous scam to make new startup employees think that those percentages are okay.

    Have to love the HN crowd. A guy goes out of his way to write a very detailed, high-quality guide demystifying a very complex and consequential topic, open sources it so it's free, and immediately people suspect the entire document is build just to make startup employees think lower percentages are OK?

    Disclaimer: I know the author personally, so can definitely attest to the motivation behind this guide. I'll also say I've used this guide both as a founder and as a startup employee and it's been immensely helpful.

    habosa(3282) 4 days ago [-]

    Came to write this same comment. The first 10 employees of a company are so critical to success and they tend to be drastically underpaid. A founding engineer (often employee 3 or 4) would be lucky to get 1.5% at most places while the CTO has 30-50% and they probably have very equal impact on the company in the early days. And engineers do well by comparison. The first customer-facing roles often get barely any equity at all while they hustle to actually make an idea into a business.

    The VCs have convinced the founders that they are special people and they deserve 10-100x the rewards of their best employees. They do this to create room in the cap table for themselves of course. They also give the founders early liquidation opportunities to keep them on their team.

    It's disgusting, and the founders wonder why some people don't want to grind as hard as they do.

    Eridrus(10000) 3 days ago [-]

    I don't think your complaint/experience actually lines up with the numbers here.

    In the Post-Series A numbers, the lowest numbers are in the ~0.5% range. This is at most 2 figures off what the founders could get together. In a world where founders together got 9 figures, a senior engineer would get 7 figures, not 5 figures like in your situation.

    Sytten(10000) 4 days ago [-]

    There is a mistake on NSO in this guide, there is not tax on grant even if the strike price is lower than FMV.

    jagged-chisel(3362) 4 days ago [-]

    Can you provide a reference for this?

    OptionOfT(2974) 4 days ago [-]

    I got offered .3% as a first developer at a company. That's just insane.

    No benefits, $45k pay cut, and even when everything goes well I might break even.

    marssaxman(10000) 4 days ago [-]

    Were they people you'd like to work with on a project you'd like to help build? Maybe it's worth it. Life is for living, after all, and we do a lot of our living at our jobs; there's more to consider than just what you get paid.

    paulcole(10000) 4 days ago [-]

    Isn't it just insane from your point of view. For somebody else couldn't the same offer be appealing?

    kccqzy(2074) 4 days ago [-]

    My personal preference only: I'm glad my current employer has no equity compensation altogether; just base salary and bonus. My former employer did have RSUs, but they have an auto-sell program that I utilized every year.

    In college the computer science department had an extracurricular talk about finances for a software engineer; the invited speaker was very adamant that holding most of your net worth in a company that employs you was an unacceptable concentration risk. I remembered that to this day.

    doktorhladnjak(10000) 4 days ago [-]

    A lot of employers who only pay cash have salaries similar to companies that pay cash salary plus equity. Perhaps the equity won't be worth anything, but often times it's extra on top of what's being offered. Those accepting cash only are often leaving potential expected value on the table.

    mppm(10000) 4 days ago [-]

    Equity compensation is an essential part of modern corporate incentive structure. In particular, it incentivizes prospective employees to accept lower compensation, by making it appear larger on paper.

    guappa(10000) 4 days ago [-]

    I've always evaluated it at 0, and that's all I got from the equity I got in my whole career. If I didn't think the salary was enough I wouldn't have accepted.

    thuanao(10000) 4 days ago [-]

    AKA fraud.

    lizknope(10000) 4 days ago [-]

    Why didn't I get any money from my startup? - A guide to Liquidation Preferences and Cap Tables

    https://www.reddit.com/r/startups/comments/a8f6xz/why_didnt_...

    I've posted this before but it's a great read. Even if you have millions of shares, the dilution and later investors could still leave you with nothing.

    I worked for 2 startups, both failed, but I never got to see the cap table.

    jmuguy(10000) 4 days ago [-]

    This is excellent and illustrates that unless you have access to the cap table, you have no idea what your options are worth. Sometimes you can at least get a founder to tell you what the preference stack looks like, and what multiples were given to investors, and that might be enough to kind of sort of work out what an exit looks like.

    justinbaker84(10000) 4 days ago [-]

    I worked at a startup where I joined as the second employee before they raised any money and I basically got 0.5% of the company. They went on to raise over $100 million in VC.

    I got $0 for my equity. Start ups have SO many ways to screw employees out of their equity.

    The most basic is that you have options that you are not allowed to sell during equity rounds. If you accept them then you need to pay the strike price and they count as taxable income even though you got shares instead of money so you just lose a lot of money.

    Say what you will about Elon, but at Space X employees are allowed to sell their shares for actual money at regular intervals. Very few start ups that succeed allow their employees to do that.

    90% of startups that succeed just want to grind down their employees rather than pay them the equity they earned.

    mikestaub(10000) 3 days ago [-]

    If they are Delaware C corporations and you own at least one share, you have a legal right to demand access to the cap table.





    Historical Discussions: TikTok is harming children at an industrial scale (April 17, 2025: 570 points)
    TikTok Is Harming Children at an Industrial Scale (January 09, 2025: 11 points)

    (570) TikTok is harming children at an industrial scale

    570 points about 22 hours ago by cwwc in 265th position

    www.afterbabel.com | Estimated reading time – 46 minutes | comments | anchor

    Tomorrow, the U.S. Supreme Court will decide whether it should step in to block or delay the implementation of a law that would ban TikTok from operating in the U.S. If not blocked, the law will force TikTok to cease operations in the U.S. on January 19, unless its Chinese corporate owner (Bytedance) sells to a buyer not controlled by a foreign adversary. The case hinges entirely on constitutional arguments pertaining to national security and free speech. The Justices will hear no evidence about addiction, depression, sexual exploitation, or any of the many harms to children that have been alleged, in separate lawsuits filed by 14 state Attorneys General, to be widespread on TikTok.

    The upcoming ban will also be adjudicated in the court of public opinion as Americans try to decide whether the loss of access to TikTok would be a reason to protest or celebrate. In this post we argue that Americans should welcome the disappearance of TikTok because the company is causing harm to children, adolescents, and young adults at an industrial scale.

    Our evidence comes mostly from research done by those 14 Attorneys General. Some of their briefs have been posted online for the world to see. The briefs include hundreds of quotations from internal reports, memos, Slack conversations, and public statements in which executives and employees of TikTok acknowledge and discuss the harms that their company is causing to children. We organize the evidence into five clusters of harms:

    1. Addictive, compulsive, and problematic use

    2. Depression, anxiety, body dysmorphia, self-harm, and suicide

    3. Porn, violence, and drugs

    4. Sextortion, CSAM, and sexual exploitation

    5. TikTok knows about underage use and takes little action

    We show that company insiders were aware of multiple widespread and serious harms, and that they were often acting under the orders of company leadership to maximize engagement regardless of the harm to children. As one internal report put it:

    "Compulsive usage correlates with a slew of negative mental health effects like loss of analytical skills, memory formation, contextual thinking, conversational depth, empathy, and increased anxiety," in addition to "interfer[ing] with essential personal responsibilities like sufficient sleep, work/school responsibilities, and connecting with loved ones."

    Although these harms are known, the company often chooses not to act. For example, one TikTok employee explained,

    "[w]hen we make changes, we make sure core metrics aren't affected." This is because "[l]eaders don't buy into problems" with unhealthy and compulsive usage, and work to address it is "not a priority for any other team."

    Although the evidence below is all publicly available, no one we know of has compiled and combined direct quotations from company insiders and internal reports across multiple alleged harms. We think this compilation gives vital information to parents, who might want some insight into the character and business practices of a company that owns much of their children's attention and influences their social development. Parents might want to know that TikTok knows that its parental controls are ineffective and rarely used:

    In another internal document, TikTok admitted that "user research" shows that "[f]amilies do not use Family Pairing" and that "Family Pairing doesn't address parents' top concerns," including "inappropriate content, offensive interactions, and lack of privacy.

    And even if parental controls worked and parents chose to shield their kids from bad stuff, they can't because TikTok's content moderation is poor. An internal study found that the "leakage rate" (of bad stuff getting past moderators) is as follows: 35.71% of "Normalization of Pedophilia" content; 33.33% of "Minor Sexual Solicitation" content; 39.13% of "Minor Physical Abuse" content; 30.36% of "leading minors off platform"; 50% of "Glorification of Minor Sexual Assault"; and 100% of "Fetishizing Minors."

    For those who think that social media is relatively harmless, we urge you to read the quotations and internal studies described below, in which employees of TikTok discuss the vast and varied harms that they are causing to literally millions of American children each year.

    The inspiration for this post was a legal brief filed by the Kentucky Attorney General that was improperly redacted. Redaction is the process in which the AG's office will black out some of the most damning revelations and quotations before releasing their brief to the public. The redacted sections often contain trade secrets and other text that the company has a legitimate reason to keep private.

    But when the Kentucky AG's office was preparing to post their brief against TikTok, whoever was in charge of doing the redaction simply covered the relevant text with black rectangles. Even though you can't see the text while reading the PDF, you can just use your cursor to select each black section, copy it, and then paste it into another file to read the hidden text. It is great fun to do this — try it yourself! Or just read our version of the brief in which we have done this for you.

    In the rest of this post we organize the direct evidence of harm that is now available to us, taken directly from employees and leaders at TikTok. We give only some highlights here in this post, but you can see our more comprehensive listing of the relevant quotations in a separate Google doc.

    We draw on four briefs filed by state AGs in their suits against TikTok: Kentucky v. TikTok, Utah v. TikTok, Nebraska v. TikTok, and New York v. TikTok. You can learn more about each in Footnote 9.

    Share

    [Note that in harm clusters 1 through 5, below, text in bold is direct quotations from company employees and internal memos. Text not in bold is direct quotations copied from the indicated portion of the indicated AG brief, which sets up the relevant quotation from company insiders. [Italicized text in brackets is annotations from us — Jon and Zach.] For each harm, we draw from the four briefs, and we supplement some sections with reports from journalists in major outlets who discovered relevant information or ran their own experiments by setting up fake accounts for minors on TikTok.]

    [Among the most widely reported harms from TikTok is its ability to pull young people in and not let them go, for hours at a time. TikTok's algorithm is widely regarded as best-in-class for keeping users scrolling. A 2024 report from Pew finds that 33% of American teens (ages 13 to 17) say that they are on a social media platform "almost constantly," with 16% saying that just for TikTok. (We estimate that in 2023, there were roughly 21.8 million teens (13-17) in the U.S., which translates to about 3.4 million American teens claiming they are on TikTok almost constantly). Below you can see that TikTok is aiming to create just such compulsive use, which in turn can lead to problematic use disorders and behavioral addictions, which then compound the harms in the other four clusters. The company does this even though many of its employees believe their product is bad for children's development.]

    • KY P. 7, PARA 18

      • TikTok's executives and employees have admitted that they target young Americans, stating:

        • "It's better to have young people as an early adopter, especially the teenagers in the U.S. Why? They [sic] got a lot of time."

        • "Teenagers in the U.S. are a golden audience . . . . If you look at China, the teenage culture doesn't exist — the teens are super busy in school studying for tests, so they don't have the time and luxury to play social media apps."

    • KY P. 8, PARA 19 [REDACTED BUT RETRIEVABLE TEXT]

      • TikTok knows that the harmful effects of its Platform wreak havoc on the mental health of millions of American children and teenagers and harms them. Its executives have admitted:

        • "The product in itself has baked into it compulsive use."

        • "The reason kids watch TikTok is because the algo[rithm] is really good. . . . But I think we need to be cognizant of what it might mean for other opportunities. And when I say other opportunities, I literally mean sleep, and eating, and moving around the room, and looking at somebody in the eyes."

    • KY P. 20, PARA 64 [REDACTED BUT RETRIEVABLE TEXT]

      • An internal presentation on the 2021 strategy for TikTok describes the company as being in an "arms race for attention[.]"

      • [Below is a redacted graph from para 67 of KY brief. It shows that TikTok has reached saturation among the 29.7 million US users under the age of 17 who own a smartphone. This means that they can't get more young users, but they can get more time out of each user, especially if they pull them away from competing platforms.]

    • KY P. 40, PARA 121 [REDACTED BUT RETRIEVABLE TEXT]

      • In an unnamed internal TikTok Defendants document from 2019 summarizing use by age, the author concluded:"As expected, across most engagement metrics, the younger the user the better the performance."

    • KY P.40, PARA 125 [REDACTED BUT RETRIEVABLE TEXT]

      • The 'TikTank' [internal TikTok group studying issues affecting TikTok] Report observed that "Tiktok is particularly popular with younger users who are particularly sensitive to reinforcement in the form of social reward and have minimal ability to self-regulate effectively."

    • KY P. 55, PARA 181 [REDACTED BUT RETRIEVABLE TEXT]

      • As an internal guide on push notifications explained, a key goal of TikTok's push notifications is to "Activate & Engage users with the right content at the right time, to encourage users to open the App more and stay longer." TikTok uses different kinds of push notifications to achieve this goal. For example, TikTok's "Interest Push" aims to "activate users so they will return to the app."

    • KY P. 67, PARA 223 [REDACTED BUT RETRIEVABLE TEXT]

      • "TikTok's success can largely be attributed to strong out of the box personalization and automation, which limits user agency[.]"

    • UT P. 4, PARA 11

      • Despite admitting internally that LIVE poses "cruel[]" risks to minors— encouraging "addiction and impulsive purchasing of virtual items," leading to "financial harm," and putting minors at "developmental risk"—TikTok continues to use manipulative features to increase the time and money users spend on the app. [This quote is referencing TikTok's LIVE feature]

    • NE P. 14, PARA 52

      • According to Defendants, TikTok's incredible advertising success is attributable to the fact that its users are "fully leaned in and immersed compared to other platforms." Defendants describe TikTok as "the leading platform for Information Density" because of its "algorithm and shorter video formats" that "create continuous cycles of engagement."

    • NE P. 20, PARA 72

      • As Defendants have explained, TikTok's success "can largely be attributed to strong . . . personalization and automation, which limits user agency" and a "product experience utiliz[ing] many coercive design tactics," including "numerous features"—like "[i]nfinite scroll, auto-play, constant notifications," and "the 'slot machine' effect"—that "can be considered manipulative."

    • NE P.21, PARA 76

      • Defendants admit that teens are especially susceptible to compulsive usage of the TikTok platform. Internal documents highlight the fact that minor users are "particularly sensitive to reinforcement in the form of social award," have "minimal ability to self-regulate effectively," and "do not have executive function to control their screen time."

    • NE P. 27, PARA 97

      • In a "TikTok Strategy" presentation, Defendants celebrated the fact that users spend inordinate amounts of time on the platform. "TikTok is in most people's lives like this," Defendants explained, referring to online posts that read, "go on tiktok for 5 mins and 3 hours have passed" and "my night routine: watch 3 hours of tiktok videos, try to follow the dance steps, realise u suck at dancing n cry about it, continue watching tiktok videos, sleep."

    • NE P. 27, PARA 99

      • As one internal report noted, after surveying academic literature on the effects of social media on adolescents, "TikTok is particularly popular with younger users, who are seen as more vulnerable to online harms and the negative impacts of compulsive use."

    • NE P. 28, PARA 102

      • Another internal report based on in-depth interviews with TikTok users found that overuse of TikTok caused "negative emotions," "interfered with [users'] obligations and productivity," and led to "negative impacts . . . on their lives," including "lost sleep, missed deadlines, poor school performance, running late, etc." It reported that "many participants described their use of TikTok disturbing their sleep, which limited their productivity and performance the following day," and that "[e]very participant indicated that time management on TikTok was especially difficult compared to other social media platforms."

    • NE P. 33, PARA 115

      • But internally, Defendants admit the truth, that real users report "feeling like they are trapped in a rabbit hole of what our algorithm thinks they like."

    • NY P. 16, PARA 88

      • Alexandra Evans, again prior to becoming a TikTok executive, co-authored a report explaining how coercive design impacts teenagers: "Persuasive design strategies exploit the natural human desire to be social and popular, by taking advantage of an individual's fear of not being social and popular in order to extend their online use. For young people, identity requires constant attention, curation and renewal. At key development stages it can be overwhelmingly important to be accepted by your peer group."

    Leave a comment

    [These are the main harms we focused on in The Anxious Generation, although as you can see in the other four clusters, the harms caused by TikTok go far beyond mental health problems.]

    • KY P. 60, PARA 196 [REDACTED BUT RETRIEVABLE TEXT]

      • In the Digital Wellbeing Document, Defendants admit that "offering effects that perpetuate a narrow beauty norm . .. ha[s] the potential to negatively impact the wellbeing of our community."

    • KY P. 65, PARA 213 [REDACTED BUT RETRIEVABLE TEXT]

      • The TikTank [internal TikTok group studying issues affecting TikTok] Report also found that "compulsive usage correlates with a slew of negative mental health effects like loss of analytical skills, memory formation, contextual thinking, conversational depth, empathy, and increased anxiety." Additionally, "compulsive usage also interferes with essential personal responsibilities like sufficient sleep, work/school responsibilities, and connecting with loved ones."

    • KY P. 84, PARA 260 [REDACTED BUT RETRIEVABLE TEXT]

      • In one experiment, Defendants' employees created test accounts and observed their descent into negative filter bubbles. One employee wrote, "After following several 'painhub' and 'sadnotes' accounts, it took me 20 mins to drop into 'negative' filter bubble. The intensive density of negative content makes me lower down mood and increase my sadness feelings though I am in a high spirit in my recent life." Another employee observed, "there are a lot of videos mentioning suicide," including one asking, "If you could kill yourself without hurting anybody would you?"

    Figure. Pg. 121 para. 261: "Once the TikTok algorithm determines that a teen user is interested in gambling, drugs, or weight loss, the algorithm will consistently show them excessive amounts of that content." Source: Wall Street Journal.
    • KY P. 98, PARA 309-310 [REDACTED BUT RETRIEVABLE TEXT]

      • 309. Defendants know these R1 reviews do not catch a great deal of content that violates the Community Guidelines or restrict content to age-appropriate groups.

      • For example, a presentation about suicide and self-harm content moderation notes that R1 Moderators do not always speak the language shown in the videos, that moderators do not understand context, and that moderators are not given policy reminders for new instructions.

    • [No direct quotes from TikTok employees, but see pages 36-43 for a section of the brief that describes the videos that were sent to fictitious accounts created by the AG's office, pretending to be 13, 15, and 17 year old Nebraska residents. "Within minutes of scrolling through TikTok's "For You" feed—before the accounts had searched for any videos or "followed" any users—TikTok's algorithm repeatedly exposed each Nebraska teen account to overtly mature and otherwise inappropriate content." Some of the videos sent to young girls—just on the basis of their age and gender—clearly encouraged young girls to starve themselves.]

    • [Some of the videos also clearly celebrate suicide as the way to escape from psychological pain.]

    Image. Pg. 40, para. 109. "One video shows a woman smiling at the camera, with the text "[m]e staring at my mum after begging her to let me go out for a late night walk alone knowing d*mn well it will be the last time she saw me." The video has over 794k views and 191.3k likes."

    [There is widespread exposure to pornographic, violent, and drug-related content on TikTok. This content is often viewed on one's newsfeed and through TikTok's "live" features. Although nudity, pornography, sexually explicit content, non-consensual sexual acts, the sharing of non-consensual intimate imagery and sexual solicitation violates TikTok's guidelines, the content is easily accessed and recommended to users.]

    • KY P. 38, PARA 115 [REDACTED BUT RETRIEVABLE TEXT]

      • In an internal document discussing how to respond to the [Wall Street Journal] series, TikTok employees acknowledged material failures in their process, including but not limited to the fact that "46.5% sexualized and drug content shared by WSJ is not covered by [the existing moderation] policy (ANSA 55%, Drug 24%)." Similarly, "[t]he moderation leakage rate of sexualized and drug content is 73.5% (ANSA 58%, Drug 90%)." The reason for this moderation failure is that "most prevalent policy titles are sexually explicit language and mention of drugs," whereas "implicit language [e.g., coded language] is often used in videos and failed to be captured [sic] by moderators."

    • KY P. 53, PARA 168 [REDACTED BUT RETRIEVABLE TEXT]

      • Horrifyingly, the report (TT Live & US Safety Summit, "Project Meramec") also confirms that "Minors Easily Access Livestream Feed" and that there is "[n]o age-related feed strategy." Further, the report acknowledges that "[o]ne of our key discoveries during this project that has turned into a major challenge with Live business is that the content that gets the highest engagement may not be the content we want on our platform. Transactional sexual content incorporates hundreds of signals that inform the [algorithm] as well as LiveOps metrics of success - # of gifts, frequency of hosts going live, # of comments, etc."

    • KY P.106, PARA 341 [REDACTED BUT RETRIEVABLE TEXT]

      • Although TikTok boasts thorough content review processes, it does not disclose significant "leakage" rates, measuring the percentage of violative content that is not moderated or removed. Internally, TikTok knows the rate at which certain categories of content leak through its moderation processes, including: 35.71% of "Normalization of Pedophilia" content; 33.33% of "Minor Sexual Solicitation" content; 39.13% of "Minor Physical Abuse" content; 30.36% of "leading minors off platform"; 50% of "Glorification of Minor Sexual Assault"; and "100% of "Fetishizing Minors."

    • UT P. 5, PARA 13

      • TikTok also knows that LIVE is being used for money laundering and other criminal activities.

      • PARA 14: In 2021, TikTok launched "Project Jupiter" to investigate suspicions that organized crime was using LIVE to launder money through TikTok's gifting feature. TikTok discovered that criminals were selling drugs and running fraud operations on LIVE. [TikTok has a virtual currency system where users can "gift" one another].

      • PARA 15: TikTok admits that sexual exploitation and illegal activities on LIVE are "controversial" and worsened by its own monetization scheme. Despite acknowledging internally that "sexually suggestive LIVE content is on the rise," TikTok refuses to warn consumers about these dangers. Instead, TikTok plans to "make better use of monetization methods such as gifting and subscription to gain revenue . . . ."

    • UT P. 10, PARA 32

      • The Division's presuit investigation also confirmed that TikTok's platform facilitates the sale of illegal drugs to underage children right here at our doorstep—including easily allowing TikTok users to offer the sale and delivery of drugs like Xanax, Valium, and MDMA to children in Salt Lake City.

    Image. Pg. 31, para. 97: "An investigator posed as a 17-year-old boy in Utah on TikTok, and after a single initial post on a message board asking for "plugs" (a euphemism for drugs), was quickly approached by dealers on the platform offering a laundry list of drugs for shipment." Figure shows "a list of drugs for sale on TikTok."
    • UT P. 31, PARA 96

      • TikTok also knows that LIVE facilitates other illegal activity. By as early as 2021, TikTok knew that drug trafficking was "becoming more prevalent" on the app.

    • NE P. 32, PARA 114

      • When The Journal shared "a sample of 974 videos about drugs, pornography, and other adult content that were served to minor accounts," a spokesperson for Defendants stated that "the majority didn't violate guidelines"—though several hundred were subsequently removed—and that "the [TikTok] app doesn't differentiate between videos it serves to adults and minors."

    • [See pages 35-36 and 43-50 for a section of the brief that describes the videos that were sent almost immediately to fictitious accounts created by the AG's office, pretending to be 13, 15, and 17 year old Nebraska minors. Some of the videos are adult porn actresses engaging in lewd and obscene behavior on TikTok, in order to lure customers over to their Onlyfans pages, sometimes via Instagram.]

    • NY P. 45, PARA 215

      • On its website, TikTok says that users in Restricted Mode "shouldn't see mature or complex themes, such as: [p]rofanity[, s]exually suggestive content[, r]ealistic violence or threatening imagery[, f]irearms or weapons in an environment that isn't appropriate[, i]llegal or controlled substances/drugs[, and e]xplicit references to mature or complex themes that may reflect personal experiences or real-world events that are intended for older audiences." [But they do, as you can see in the leakage rates found in KY P. 106, PARA 341]

    Share

    [Recent revelations reported out from the Wall Street Journal and other outlets have shown that many social media companies and device providers (e.g., Apple) have rampant and rarely addressed cases of sextortion, child sexual abuse material (CSAM), and sexual predation occurring via their platforms/devices. This is also the case with TikTok.]

    • KY P. 37, PARA 111:

      • Federal law mandates that Defendants report suspected CSAM to the National Center for Missing and Exploited Children ("NCMEC") under 18 U.S.C. § 2258A. To limit and avoid its reporting requirements under federal law, Defendants purposely designed TikTok—which it knows are used by children, including children under 13—not to incorporate modern CSAM detection technology. This technology would be free for Defendants to implement within TikTok's product design.

      • PARA 113: While Defendants have stepped up their reporting to NCMEC [National Center for Missing & Exploited Children]—reporting 362,108 reports in the last half of 2023—these efforts illustrate how wantonly negligent TikTok has been historically, with only 596 reports made in 2019 and 22,692 in 2020. Defendants' disregard for the safety of Young Users on TikTok has endangered countless children, including children in Kentucky.

    • KY P. 100, PARA 316 [REDACTED BUT RETRIEVABLE TEXT]

      • According to a presentation by the Trust and Safety group, "[u]sers are more likely to post comments than videos," because about "42% [of users] are 'comment only' users[.]"

      • PARA 317: But the vast majority of comments never go through human moderation. According to that same document, "Comments are increasing and manual coverage is disproportionately low." In fact, "[h]uman moderation for comment review is at 0.25%."

    • UT P. 3, PARA 7

      • But TikTok has long known—and hidden—the significant risks of live streaming, especially for children. By TikTok's own admission: "we've created an environment that encourages sexual content."

    • UT P. 4, PARA 9

      • In early 2022, TikTok's internal investigation of LIVE, called "Project Meramec," revealed shocking findings. Hundreds of thousands of children between 13 and 15 years old were bypassing TikTok's minimum age restrictions, hosting LIVE sessions, and receiving concerning messages from adults. The project confirmed that LIVE "enable[d the] exploitation of live hosts" and that TikTok profited significantly from "transactional gifting" involving nudity and sexual activity, all facilitated by TikTok's virtual currency system.

    • UT P. 36, PARA 115

      • In response to the Forbes article, TikTok also conducted a formal investigation into issues on LIVE called "Project Meramec." TikTok shared the results of the investigation internally during a May 2022 "Safety Summit":

      • PARA 116: Project Meramec confirmed that young users well under the minimum age requirement could host LIVE sessions on TikTok. The study confirmed that in just the month of January 2022 alone, 112,000 "L1" users (i.e., a metric TikTok uses to categorize users between 13 and 15 years old) hosted LIVE sessions.

      • PARA 117: These underage users also received a significant number of direct messages from adult users, raising red flags to TikTok that these minors were likely being groomed by adults. Project Meramec revealed that TikTok received not only "significant revenue" from "transactional gifting"—to the tune of one million Gifts in January 2022 alone—but also that this revenue was in large part generated through transactions for sexual content.

    • UT P. 34-35, PARA 109

      • An internal study from December 2023, following the Forbes article, documented what TikTok admits is "the cruelty" of maintaining LIVE with its current risks for minors on the app. The study showed its LIVE feature had the following characteristics:

        • "[H]igher proportion[s] of minor users";

        • "Minor users are more likely to access high severity risk LIVE content than adult users";

        • For violating content like "[a]dult nudity and sexual activities (ANSA) . . . and minor-hosted LIVE rooms, minor views are likely 2 times higher than other LIVE rooms"; and

        • "Minor users lack self-protection awareness and interact more with risky LIVE content."

    Image. Pg. 42, para. 139: "Despite acknowledging how downright 'irresponsible' it would be to expect that users will use LIVE wisely without appropriate safeguards in place, company leaders have admitted internally that the company placed profits over the safety of consumers." Figure 16 shows a February 2022 internal chat between two TikTok employees.
    • UT P. 35, PARA 111

      • In February 2022, two TikTok leaders discussed the need to remove "egregious content from clearly commercial sexual solicitation accounts," and were aware of issues with women and minors being sexually solicited through LIVE.

      • PARA 112: these leaders knew about agencies that recruited minors to create Child Sexual Abuse Material and commercialized it using LIVE.

      • PARA 113: In another example from a March 2022 LIVE safety survey, users reported that "streamer-led sexual engagements (often transactional) [were] commonly associated with TikTok LIVE." Users also reported "often seeing cam-girls or prostitutes asking viewers for tips/donations to take off their clothes or write their names on their body . . . ." That same month, TikTok employees admitted "cam girls" (or women who do sex work online by streaming videos for money) were on LIVE and that these videos had a "good amount of minors engaging in it." TikTok leaders have known since at least 2020 that TikTok has "a lot of nudity and soft porn." An internal document from May 2020 also highlighted concerns about "camming" becoming more popular as sex workers turned to online platforms during the COVID-19 pandemic.

      • PARA 114: TikTok has long known that virtual gifting is used as a predatory grooming tactic on LIVE. TikTok has internally acknowledged that "perpetrators tend to use tactics such as gift giving, flattery, and gifting money to win the trust of minors."

    • UT P. 38, PARA 125

      • In September 2022—five months after the Forbes story—an investigator found that "within minutes of browsing the [LIVE] feed" they were shown underage girls providing sexually suggestive content in exchange for money and young boys using filters to pose as girls to receive Gifts.

      • PARA 126: The investigator also found a "never-ending stream" of hosts who openly admitted that they were 14 and 15 years old while also "holding signs" or "standing in front of the camera" with a sign saying "Rose = say daddy," "ice cream = 360 spin," or "universe = cut shirt."

    Leave a comment

    [Although TikTok, like other social media companies, has an age minimum of 13 for account creation (in the U.S.) and higher age limits for certain features (e.g., TikTok LIVE at 18), underage use is common and is widely known about by the company, which does little to enforce those age limits. TikTok also regularly claims that it has effective safety features built in for users. However, the briefs make it clear that TikTok's primary goal is keeping users on and engaged for as long as possible, which often comes at the cost of child safety.]

    • KY P. 93, PARA 288 [REDACTED BUT RETRIEVABLE TEXT]

      • Similarly, in a chat message discussing features purporting to help users manage their screentime, a TikTok employee confirmed that the company's "goal is not to reduce the time spent" on the TikTok app, but rather to ultimately "contribute to DAU [daily active users] and retention" of users.

    • KY P. 93, PARA 289 [REDACTED BUT RETRIEVABLE TEXT]

      • Defendants also promote screen time management tools for Young Users that they know are ineffective. For example, an internal document seeking approval for the screentime dashboard noted that "we don't expect significant impact to stay time with this feature since it is only improving awareness and is not an intervention."

      • PARA 290: In fact, Defendants found—as expected—that the screen time dashboard did not affect Young Users' usage because "minors do not have executive function to control their screen time." The screentime dashboard did not appear to have any impact on the usage of minors.

    • KY P. 95, PARA 297, [REDACTED BUT RETRIEVABLE TEXT]

      • Defendants did not disclose that they knew effects like beauty filters can harm Young Users and did not implement the suggestions of employees that TikTok "provide users with educational resources about image disorders; create a campaign "to raise awareness on issues with low self esteem (caused by the excessive filter use and other issues)"; and add "a banner/H5 page to these filters and/or short videos which make use of the filters, particularly the Bold Glamour one, including an awareness statement about filters and the importance of positive body image/mental health, [that] could potentially minimize the negative public perception surrounding beauty filters and their reported effect on user mental health."

    • UT P. 37-38, PARA 121

      • In May 2022, after the Forbes article came out, TikTok took steps to evaluate how 'valuable' its underage LIVE hosts were before it would decide to make safety changes to the feature, like increasing the minimum age requirement from 16 to 18. It found 384,833 hosts were 16 to 17—as far as TikTok was aware—and they spent over seven million minutes streaming themselves on LIVE.

      • PARA 122: Despite learning that there were a 'high' number of underage hosts on the platform and that these minors were receiving problematic messages from adult users, TikTok waited six months before raising the minimum age for a user to host a LIVE session from 16 to 18.

      • PARA 123: But raising the minimum age from 16 to 18 did nothing to solve the problem. TikTok's age-gating is ineffective, and many kids still join LIVE events daily. TikTok also chose to forgo reasonable safety measures, prioritizing profits over safety, allowing unrestrained transactional sexual content and other illicit activities to thrive.

      • PARA 124: As a result, these activities have not just continued—they have exploded as LIVE has become even more popular. In 2023, a TikTok senior director was alerted by advocates who had noticed an increase in 'teens in overtly sexualized situations on live streams controlled by someone older than 18 who is collecting money from viewers while the teen performs sexually suggestive acts.' The advocates said they reported the streams through TikTok's internal reporting tools, but TikTok found they did not violate its policies.

    • UT P. 40, PARA 132

      • TikTok recognizes internally that its age-gating is ineffective and that TikTok's own moderation efforts on LIVE are ineffective and inconsistently applied, and TikTok hides this information from users and the public. TikTok knows this is particularly true for children, admitting internally: (1) "Minors are more curious and prone to ignore warnings" and (2) "Without meaningful age verification methods, minors would typically just lie about their age."

    • UT P. 37, PARA 119

      • Given how lucrative LIVE is for TikTok, the company slow-rolled implementing safety measures, and once it did, these measures proved largely ineffective at keeping pace with the growing popularity of LIVE. This was by design—LIVE was "such a huge part of the strategy for the [TikTok app]" that TikTok employees recognized they "d[id not] know" if they could "reasonably expect to increase limitations for LIVE" even in February 2022, and even after recognizing that "it is irresponsible [of TikTok] to expect that users will use LIVE wisely."

      • PARA 120: In other words, LIVE was too profitable to be interfered with, even to protect children.

    • UT P. 44-45, PARA 145

      • These policies do not adequately safeguard children and, furthermore, are not consistently applied. In 2020, TikTok unveiled an 'internal program' to 'protect creators and other accounts that [TikTok] deem to be high value.' The program featured policy shortcuts like 'delayed enforcement,' 'deferred policy decisions,' or 'no permanent ban on Elite + Accounts,' to protect its popular users who violate TikTok's policies. TikTok deployed this look-the-other-way policy despite knowing that the 'majority of elite accounts appear to run afoul of [TikTok's] policies on sexually explicit content,' among other violations. Approximately 1,400 minors were considered 'elite creators.'

    • NE P. 58, PARA 187

      • To start, TikTok has no real age verification system for users. Until 2019, Defendants did not even ask TikTok users for their age when they registered for accounts. When asked why they did not do so, despite the obvious fact that "a lot of the users, especially top users, are under 13," founder Zhu explained that, "those kids will anyway say they are over 13."

    • NE P. 61, PARA 198

      • In another internal document, TikTok admitted that "user research" shows that "[f]amilies do not use Family Pairing" and that "Family Pairing doesn't address parents' top concerns," including "inappropriate content, offensive interactions, and lack of privacy.

    • NE P. 65, PARA 211

      • Over the years, other of Defendants' employees have voiced their frustration that "we don't want to [make changes] to the For You feed because it's going to decrease engagement," even if "it could actually help people with screen time management."

    • NE P. 65, PARA 212

      • Or as another employee put it, "[w]hen we make changes, we make sure core metrics aren't affected." This is because "[l]eaders don't buy into problems" with unhealthy and compulsive usage, and work to address it is "not a priority for any other team."

    • NE P. 65, PARA 213

      • As TikTok's [redacted] candidly admitted in 2021, some of TikTok's so-called "safety" features are little more than "good talking point[s]." Describing the "Take a Break" videos Defendants have promoted, explained that "[w]e found out through some research that they're not altogether effective" but that "it's good as a point to share with policymakers, 'cause they're kind of impressed that we're spending time, money, and energy to get people off our platform, at least in theory."

    • NE P. 65-66, PARA 214

      • Defendants, who admit internally that "screen time management" tools are "not . . . at expense of retention." The goal is "not to reduce the time spent" but to "improve user experience and satisfaction" and ultimately "contribute to DAU [Daily Active Users] and retention." According to internal documents, "[t]his aligns with leadership's guidance" that there be "no impact to retention."

    Share

    How can it be that a product used by more than twenty million children and adolescents in the United States is also causing so much harm to its users? Many teens experience the harms of TikTok and complain about its addictive nature and its "brain rot" effects, so why don't they just stop using it?

    When Jon asks these questions to his students at NYU who are heavy users of TikTok, he commonly gets two related answers: 1) I've tried to quit but I just can't do it, and 2) I can't quit because then I won't know what everyone else is talking about. In other words, TikTok is both behaviorally addictive and socially addictive, which means that many teens feel trapped. As Gen Z poet Kori James said, about social media: "I know it's poison but I drink anyway."

    A recent study led by the University of Chicago economist Leonardo Bursztyn captured the dynamics of this trap. The researchers recruited more than 1,000 college students and asked them how much they'd need to be paid to deactivate their accounts on either Instagram or TikTok for four weeks. That's a standard economist's question to try to compute the net value of a product to society. On average, students said they'd need to be paid roughly $50 ($59 for TikTok, $47 for Instagram) to deactivate whichever platform they were asked about. Then the experimenters told the students that they were going to try to get most of the others in their school to deactivate that same platform, offering to pay them to do so as well, and asked, Now how much would you have to be paid to deactivate, if most others did so? The answer, on average, was less than zero. In each case, most students were willing to pay to have that happen.

    We (Jon and Zach) teamed up with the Harris Poll to confirm this finding and extend it. We conducted a nationally representative survey of 1,006 Gen Z young adults (ages 18-27). We asked respondents to tell us, for various platforms and products, if they wished that it "was never invented." For Netflix, Youtube, and the internet itself, relatively few said yes to that question (always under 20%). We found much higher levels of regret for the dominant social media platforms: Instagram (34%), Facebook (37%), Snapchat (43%), and the most regretted platforms of all: TikTok (47%) and X/Twitter (50%).

    What, then, is the net value of TikTok to society? The harms are vast and varied, and they are hitting children, teens, and young adults the hardest, which means that TikTok may be altering developmental pathways and causing lasting changes. The net value is likely very negative. We believe that America would be much better off if TikTok were to go dark on January 19th.

    No consumer product is 100% safe. We don't remove a product if a child or two dies from it each year in a freak accident. But the harms documented here are not freak accidents. They are the common effects of the normal use of TikTok by children, many of them younger than the legal age of 13. Due to its current design, TikTok is perpetrating harm to millions of children—harm at an industrial scale—in America and around the world. These harms may not be presented tomorrow to the Justices of the Supreme Court, but we think they should be decisive in the court of public opinion. TikTok should be removed from American childhood.




    All Comments: [-] | anchor

    bix6(10000) about 21 hours ago [-]

    Great, Meta next?

    qoez(10000) about 21 hours ago [-]

    Won't someone think of the profits??

    sanderjd(10000) about 21 hours ago [-]

    If you have read anything Haidt has written, you'll probably note that this implied criticism of him being only anti TikTok is quite far off the mark.

    Xelbair(10000) about 21 hours ago [-]

    Hopefully all social media.

    and all of them should be run by non-profit organizations unconnected to any charity nor politically motivated organization nor state.

    zehaeva(10000) about 20 hours ago [-]

    Meta is currently on trial for antitrust.

    As was pointed out elsewhere in this post Jon Haidt has been railing about social media for a while now, and has written several books on the subject.

    https://jonathanhaidt.com/social-media/

    whateveracct(10000) about 21 hours ago [-]

    Adults too

    kelseyfrog(2243) about 21 hours ago [-]

    Parents even

    ulfw(10000) about 21 hours ago [-]

    Same goes for instagram reels, the cheapest and shittiest copy seen in a generation

    Ancalagon(10000) about 21 hours ago [-]

    YouTube shorts too

    ilrwbwrkhv(3613) about 21 hours ago [-]

    I would love to first shut off Facebook before we do anything to TikTok.

    Zambyte(10000) about 21 hours ago [-]

    I would love action to be swift and simultaneous.

    MaxHoppersGhost(3318) about 21 hours ago [-]

    Between TikTok and fentanyl China is covertly doing serious damage to the USA, and most people don't care.

    ziddoap(10000) about 21 hours ago [-]

    Between Meta/X and opioids, the USA is overtly doing serious damage to the USA, and most people don't care.

    (In all seriousness, I do agree that TikTok is awful, but I find the fascination with TikTok while ignoring all other social media and their dangers to be interesting)

    kmeisthax(10000) about 21 hours ago [-]

    Not covertly. Overtly, and most Americans do know about it and care deeply.

    naravara(10000) about 21 hours ago [-]

    TikTok isn't doing much that our domestic social media overlords aren't doing to us themselves. Yeah Facebook is for boomers and Instagram is for millennials, but they're only targeting the platforms like that because TikTok has already seized the younger demographics. If it wasn't there they're be on a Meta, Snap, YouTube, or Twitch app instead and still having their brains rotted.

    We need actual data privacy laws that make that business model of invasive surveillance capitalism non-viable as well as some of severe regulations placed on algorithmic recommendation engines to limit these harms. At the very least, users should be permitted to tune their algorithm parameters, including deciding how much they see things they've explicitly requested to see less of.

    Yeah we can scaremonger about TikTok all we want, but it's not solely TikTok's fault that it's trash. The economic incentive structure is to produce a surfeit of brainwashing trash that erodes people's mental health. We need to structurally change privacy laws and force market competition to crack these network effect monopolies if we want that to stop.

    micromacrofoot(10000) about 21 hours ago [-]

    the USA also seems to be starting its own cultural revolution, so we've got a lot going on right now

    ulfw(10000) about 21 hours ago [-]

    Who forces Americans to consume both?

    Do you blame prostitutes when husbands cheat on their wives at a whorehouse too?

    walthamstow(10000) about 21 hours ago [-]

    The opioid crisis is a uniquely American problem, entirely of its own creation. Blaming it on other countries is convenient but false.

    CSMastermind(3197) about 21 hours ago [-]

    Fentanyl is made from common chemicals that are used in normal industrial processes. We use them for everything from making insulation to medicine. And it only takes a small amount of these chemicals to make a large batch of fentanyl. All the fentanyl produced in a year only takes 1,800 gallons (around 33 oil drums) of chemicals to make.

    The latest Annual Threat Assessment: https://www.dni.gov/files/ODNI/documents/assessments/ATA-202...

    Noted that production of those precursors has shifted to India.

    The fentanyl itself is made in labs in Mexico and then smuggled across the border. It requires no sophisticated lab equipment to make. You can easily obtain everything needed at consumer retail stores and make a batch in a garage. One liter of finished fentanyl is enough to create 50,000 to 100,000 doses.

    So if you squeeze the balloon, it just pops up somewhere else. Put pressure on China and India starts supplying the chemicals. Start shutting down Mexican labs and they'll make the stuff in Oklahoma.

    Not that these are bad things to do but unless you address the actual demand for the stuff it's going to be nearly impossible to eliminate it.

    justonceokay(10000) about 20 hours ago [-]

    China can't do those things to us without our help. I think there is a culture of addiction here and it's hard to blame the drug dealer for our own problems

    Lendal(10000) about 20 hours ago [-]

    It's not that I don't care. I don't know how real it is. Bad faith arguments and pseudoscience are ubiquitous.

    When I was growing up the same types of people were saying that D&D was a demonic movement meant to turn kids to satanic rituals so I never got to play D&D. Rock music had subliminal messages that were converting children into zombies but I listened to it anyway and that's how I discovered that most adults were full of shit and straight-up lying to us. There's so much garbage out there that separating the noise from true threats is an overwhelming task for most average people.

    onemoresoop(3292) about 19 hours ago [-]

    Tiktok is not more guilty than any US company. Instagram, facebook and most modern social media went this path long before Tiktok. We shouldn't singlehandedly blame one party but go to the root of the issue.

    Workaccount2(3572) about 20 hours ago [-]

    China will need to have 4 undercover agents meet in the same place at the same time. They won't all meet each other, but a series of hand offs.

    Conveniently, a small local college asian club wants to have a stop asian hate rally on the weekend of the 17th, at a local park which would be an ideal location. Tiktok gets word from Bytedance, who by Chinese law have party members on their board, that this rally needs to be heavily promoted organically to other Asians who live in the area. No ads, if someone talks about it in their tiktok, push it. Push it especially towards beloved Asian influencers with a large follwing.

    The day comes and the turnout is a total blowout. A sea of Asians filling the park to support a noble cause.

    80% of them are there because the CCP wanted them there to cover their operation, but when asked, every single one laughs at the idea that 'Tiktok is a tool for propaganda'. They say 'I have never seen anything that promotes red flag communism or CCP ideals.'

    The scenario above is why the US government wants tiktok banned. The privacy stuff is second and the screen addiction stuff a far far third.

    Bukhmanizer(10000) about 20 hours ago [-]

    You think that TikToks strategic advantage is being able to coordinate "seas of Asians" so that undercover Chinese spies can meet, and presumably since all Asian people look alike no one will be able to know?

    ferguess_k(10000) about 21 hours ago [-]

    OK let's ban social media and roll back to 20 years ago. I'm perfectly happy with that. With social media it's so easy to manipulate than emails, websites and phones.

    Technological advancement is not always good (for ordinary people).

    commandlinefan(10000) about 21 hours ago [-]

    Lets ban TV, video games, rock and roll music and dungeons and dragons, too. When I was growing up, those were what was harming children at an industrial scale.

    alt227(10000) about 21 hours ago [-]

    20 years ago we had reality TV, video games, and rock music which were the perveyor of body issues and FOMO etc. The issue is not technology, but popular culture. Pre the end of the 20th century most people considered knowledge and skill to be the peak of human progression. Now it is money and image. As money and image can easily be given/bestowed whilst knowledge and skill cannot, I believe the general population has become much easier to manipulate by using these traits.

    There are very few young people today who dont value money and image as something to aspire to. IMO this is a really dangerous thing of which there is no way back.

    squigz(10000) about 21 hours ago [-]

    We had social media 20 years ago.

    And what would happen in another 20 years? What exactly would prevent this from happening again?

    Maybe instead of just knee-jerk reactions like 'Ugh stupid social media, let's ban it' we should think it through and solve the underlying issues.

    aiono(3667) about 20 hours ago [-]

    The core problem is media being a for-profit organization. As long as the primary goal is profits it will be focused on extracting as much as attention as possible. It's an insignificant issue that it also ruins our attention, spreads misinformation etc. as long as profits go up.

    sho_hn(10000) about 20 hours ago [-]

    I often wonder if we (the tech industry) have come up with anything actually good since about 2005 or so, in terms of being a net win for society or something people actually need.

    Increasingly, we seem to provide solutions in search of a problem, or worse, substitutes for much healthier activities. The power we have to do so is staggering; we are changing the parameters and modes of how people relate to each other on a daily basis.

    I feel a strong urge to have more 'ok, so where do we go from here?' and 'what does a tech industry that promotes net good actually look like?' internal discourse in the community of practice, and some sort of ethical social contract for software engineering.

    The open source movement has been fabulous and sometimes adjacent to or one aspect of these concerns, but really we need a movement for socially conscious and responsible software.

    nonethewiser(3585) about 19 hours ago [-]

    Or enforce age requirements just like we already do for alcohol, tobacco, gambling, opening bank account, etc. Including online.

    rich_sasha(10000) about 18 hours ago [-]

    How do you define 'social media'? I suppose HN could qualify.

    internet_rand0(10000) about 20 hours ago [-]

    there are two kinds of hackers

    those with children

    those without

    as a hacker without children because i got priced out of the market, why should i care about what tiktok does or ceases to do?

    honest question, if tough to answer

    or maybe i'm only trying to explain why I don't have a model of being in a formative state.... I mean, dogs don't use tiktok

    entropicdrifter(10000) about 20 hours ago [-]

    >as a hacker without children because i got priced out of the market, why should i care about what tiktok does or ceases to do?

    Because you want to live in a society where people have the attention spans necessary to e.g. drive cars safely without distraction and you want people to not be so vastly ignorant at scale that they collectively endorse or tolerate fascist behavior from governments and corporations?

    micromacrofoot(10000) about 20 hours ago [-]

    have you considered the fact that you have to co-exist in the same space as other people, and that negative societal outcomes can also harm you?

    greenavocado(10000) about 21 hours ago [-]

    The main reason TikTok is being targeted is because it doesn't silence pro-Palestinian perspectives on the conflict. This is a direct threat to the leadership of the people in charge because it fractures their narrative they work tirelessly to promote (the perpetual victim).

    CivBase(10000) about 21 hours ago [-]

    Wasn't the previous ban put in place by the Biden administration? And then Trump flipped sides to become the savior of TikTok or something like that?

    Doesn't exactly align with your claim.

    mschuster91(2748) about 21 hours ago [-]

    > The main reason TikTok is being targeted is because it doesn't silence pro-Palestinian perspectives on the conflict.

    First of all, TikTok was being in the crosshairs ever before Hamas decided to slaughter and take hostage civilians on Oct 7th.

    Second, why is it always the pro-Palestine crowd that acts like their issue is the most important thing in the world, completely de-railing any debate? Seriously, no other geopolitical conflict has so many people injecting it into any debate they can find.

    ta1243(10000) about 21 hours ago [-]

    They may be part of why the American rhetoric is against Tiktok specifically rather than other platforms, but this specific author has a far wider remit against social media as a whole.

    https://news.ycombinator.com/from?site=afterbabel.com

    sanderjd(10000) about 20 hours ago [-]

    Enough with this nonsense already.

    arp242(10000) about 20 hours ago [-]

    I'm sure that plays part in the motivation of some people, but to levy this accusation against Jonathan Haidt, who has extensively written about his views on social media in general, is very much unserious and a huge distraction best.

    danielbln(10000) about 20 hours ago [-]

    Any sources for that you care to share?

    rolodexter1(10000) about 21 hours ago [-]

    So is YouTube

    micromacrofoot(10000) about 20 hours ago [-]

    absolutely, people are saying what about facebook, instagram, etc... but youtube has a much bigger impact on children than any of the older social networks

    margorczynski(10000) about 21 hours ago [-]

    TikTok, Snapchat, Meta (FB, Instagram) - all this garbage needs to go, at least for anyone younger than 18.

    We have a plethora of evidence on how destructive social media has been for (especially young) people and still nothing is being done about it.

    avodonosov(10000) about 21 hours ago [-]

    Youtube (esp. shorts)

    sho_hn(10000) about 20 hours ago [-]

    I think it's because a lot of adults cannot empathize with the lack of self-regulation in children and young adults. They imagine themselves being able to reject the social media firehose (whether true or not ...) and have no real model of being in a formative state.

    obscurette(3302) about 20 hours ago [-]

    It's not much better with adults. I see it even with people in my age (I'm in my sixties). I take long walks (several hours) with my dog and if I tell about it to people the same age as me, a common question follows – 'What headphones you use?'. I don't use headphones, it's only me with my thoughts. And people say that they can't do it (any more).

    pjc50(1402) about 20 hours ago [-]

    Do you think you could have a word with the other thread where Discord is introducing age verification (due to a new UK law) and people are acting like it's the Stasi?

    throwaway875847(10000) about 20 hours ago [-]

    > TikTok, Snapchat, Meta (FB, Instagram) - all this garbage needs to go, at least for anyone younger than 18.

    All this garbage needs to go — period. We've seen time and time again that attempts to age-restrict Internet content with law just result in violations of privacy, while kids can still access such content with simple workarounds.

    ycui1986(10000) about 21 hours ago [-]

    ALL social media are harming children at industrial scale.

    sanderjd(10000) about 21 hours ago [-]

    The author of this blog is one of the leading proponents of this idea!

    I'm really starting to think all these whataboutism posts are bots. It just seems too hard to believe that so many people would come here to make this same idiotic point in response to a post by this particular author.

    smnthermes(3614) about 21 hours ago [-]

    Cherrypicking? What about all the TikTok content promoting body positivity, etc?

    criddell(10000) about 20 hours ago [-]

    If you have studied the matter and have research showing the positives of Tik Tok outweighs the negatives, publish it!

    nachox999(10000) about 20 hours ago [-]

    tiktok bodypositivy videos or an IQ > 80, choose one

    SoftTalker(3552) about 20 hours ago [-]

    Body positivity is a myth. Being obese is not healthy or anything to celebrate.

    callc(10000) about 21 hours ago [-]

    > But when the Kentucky AG's office was preparing to post their brief against TikTok, whoever was in charge of doing the redaction simply covered the relevant text with black rectangles. Even though you can't see the text while reading the PDF, you can just use your cursor to select each black section, copy it, and then paste it into another file to read the hidden text.

    Incredibly hilarious. Only leet hackers can pull this off though, same as pressing F12 in the browser to hack the mainframe!

    xnx(1016) about 21 hours ago [-]

    How is this still happening?

    fullstop(10000) about 21 hours ago [-]

    This happened a few times in 2006. I guess we never learn.

    https://news.ycombinator.com/item?id=43698326

    bee_rider(10000) about 21 hours ago [-]

    This seems to happen somewhat often.

    Actually, it is quite weird, I wonder if some bad best-practices have been circulated.

    It would be really nice if legal documents were prepared in some sort of standardized markup-like language.

    ysofunny(10000) about 19 hours ago [-]

    > same as pressing F12 in the browser to hack the mainframe!

    so that's why new mac keyboards did away the entire F keys row?

    noworriesnate(10000) about 21 hours ago [-]

    While I don't agree with the whole "Palestinian views should be censored" thing, that might be the ticket we need to set a precedent for regulating children's access to social media. That's the thing about politics—you have to be willing to make compromises with people you don't see eye to eye on.

    If your principles get in the way of making compromises that could help, you're letting the perfect be the enemy of the good. Something to think about.

    xg15(2454) about 20 hours ago [-]

    Reminds me of the 'stimulus' cheques during Covid.

    Giving people money so they can pay rent and buy food during lockdown? Preposterous!

    Giving people money so they can 'stimulate the economy'? Now we're talking!

    PeterCorless(3627) about 20 hours ago [-]

    My favorite part is how incompetent they were in handling the redaction:

    'But when the Kentucky AG's office was preparing to post their brief against TikTok, whoever was in charge of doing the redaction simply covered the relevant text with black rectangles. Even though you can't see the text while reading the PDF, you can just use your cursor to select each black section, copy it, and then paste it into another file to read the hidden text. It is great fun to do this — try it yourself! Or just read our version of the brief in which we have done this for you.'

    fny(3295) about 19 hours ago [-]

    I'd venture to guess this was deliberate. What would you do if you want to convince the public but can't technically share the evidence?

    bmurphy1976(10000) about 21 hours ago [-]

    Please add YouTube to the list. I'm watching my kids' brains slowly melt as they go from YouTube short to YouTube short like little crack addicts trying to get their next fix. Throw in a bunch of AI generated bottom of the barrel swill and I'm on the verge of blocking YouTube entirely yet again. I blocked YouTube for years because of all the garbage child targeted auto generated videos that were flooding the platform. It's very frustrating because there is a lot of good content that I would like them to continue to have easy access to, but the cost of entry is way too high.

    sanderjd(10000) about 21 hours ago [-]

    The author of this blog wrote a whole freaking book about exactly this!

    I feel like I'm taking crazy pills. How is this like 95% of the comments here, as if Haidt didn't write an incredibly well known book about how all of this is bad!

    tux1968(2012) about 20 hours ago [-]

    It's worth blocking shorts alone, they're the worst culprit. Letting kids still access long form videos.

    radicalbyte(10000) about 20 hours ago [-]

    This x10000.

    I really wish that the EU would step in and force Google to either kill Shorts or give us full control over the crap they're pushing down our throats.

    As this is HN and full of smart people - if there are any workable (OSS) options to filtering YouTube to remove shorts (and the far-right/Nazi crap) then please let us know.

    yapyap(10000) about 20 hours ago [-]

    All the (popular) social medias morph towards the tiktok short form content. Instagram, Facebook, Reddit, Twitter, Snapchat, YouTube, etc.

    It's the most attention holding thing a.t.m.

    darknavi(2851) about 20 hours ago [-]

    To some extent I feel the same about video games too.

    I watch my ~9 year old Nephew play games on his Switch and he swaps between games every ~5 minutes.

    I think as a 90s kid we had a hand full of games for our Gameboys, N64, etc. but had to wait for a holiday to actually get new physical content. Now it's easy and cheap enough to just download a slough of digital games (with fast resume and what not) and hop between them like crazy.

    antimoan(10000) about 20 hours ago [-]

    disable Youtube history and no more shorts or AI suggested content. It quickly becomes a useful tool since you can see channels you subscribe to or if you are interested in a subject you have to search for it instead of getting pulled in the AI suggested contents as soon as you open the app.

    gpspake(3471) about 20 hours ago [-]

    I took me a while but I finally figured this out. I think the difficulty is a dark ui pattern that hides the control behind an age selection. In the youtube kids admin settings, there's a part where you select your kids age 0-4, 4-9 etc... My kid is 4 so I never really looked at the later options but after probably 20 times on that screen, I noticed at the end (where my eyes glossed over the higher ages) there's something along the lines of 'control content yourself'. Once I selected it, I could whitelist channels and completely disable search and recommendations. This means the youtube kids app _only_ shows what I say it can. If I want to give him access to something like 'smarter every day' or a specific video that's not on youtube kids, I can click share from my account and share with 'kids' We've still pretty much banned youtube on all devices but, like you said, there's a lot of valuable stuff and I really miss the time when he would get in to 'tornadoes' or 'helicopters' or some other topic and we could watch a bunch of educational videos without being flooded with trash toy videos and subversive attention leeching ads. This at least opens the door back up for some of that good content without the garbage.

    throwanem(3029) about 20 hours ago [-]

    You have to filter it manually for them. There's no other way, though in a year or two we might start to see products backed by true multimodal models that are actually worth looking at.

    I don't mean to seem blunt or rude. I don't actually have kids, so even if I were inclined to judge, I've no basis. But just looking at what YouTube has been doing over the last couple of years, even as a premium subscriber and so never seeing ads - I mean, it's terrible, it's as if it is actively trying to drag me down a conspiracy theory rabbit hole, in the sense that I might watch a half dozen videos today about simulated jet-plane gunfights in DCS, and tomorrow I'm seeing recommendations for what I only recognize as 'Intro to 5G Covid Conspiracy (CONT 101, 3 credits)' because I have studied the subject. I report these videos and they stop coming, until the next time.

    It isn't as though there is a game here on the other side of which for there to be an adversarial mind, but there are times when it feels enough that way - when I'm half asleep, perhaps, and most especially - that I just don't even open the app or website entirely, but listen to an old podcast episode instead because those at least I can trust. (I pay subscriptions or buy copies; anything 'ad-supported' is a hard stop. I prefer people just say outright 'this is what I have and what I think it's worth, let's see if we can make an honest deal' because I am an American.)

    I am seriously considering hosting a local Invidious instance, or similar, and terminating my now about ten-year YouTube Premium subscription. Ads are a technical problem that I was happy to pay a few bucks a month rather than however many hours to solve. I did enough years of sysadmin work for a living that I no longer enjoy it even slightly, so that's no small trade for me to consider. But now I'm really looking hard at what that money's going to, and by the sound of things lately, I'm among the least enthusiastic of such critics.

    InkCanon(10000) about 20 hours ago [-]

    Some of my relatives and colleagues actually actively encourage this. They give them an iPad with YouTube on it after meals and so on. It acts as a pacifier.

    _spduchamp(10000) about 20 hours ago [-]

    We're trying a new thing in our house to curb the dopamine addiction.

    Screens allowed every 2nd day. So far it has been working well, and our kid always finds something creative to do on the screen free days.

    bluetidepro(3092) about 20 hours ago [-]

    Related: I pay for YouTube premium and there is still no way to hide/ignore Shorts in the entire platform or any of their apps. It's infuriating, and a feature that is badly needed. It should be there for free but at the very least allow premium paying users to hide that garbage.

    _bin_(10000) about 20 hours ago [-]

    You are completely correct. I'm watching the same thing happen to my little cousin. Please hear me: take the phones and take the computers and take the ipads and make them go play outside. We do this when my cousin visits and it's amazing how quickly he shapes up. But there will be a point at which it's gone too far and the damage is much harder to repair.

    You can youtube-dl whatever is good and stick it on a raspberry pi running kodi with no internet. You can get them el cheapo kindles and load them up with all the books they could ever read. You can let them use computers supervised for khanacademy. But please, as the rare adult who's aware of and cares about this issue, don't let your kids fall victim to it.

    perdomon(10000) about 20 hours ago [-]

    I say block it again. Get those kids outside in the creek!

    noisy_boy(10000) about 20 hours ago [-]

    Google has created Family Link app to allow parents to control the allowed screen time, apps they can see and sites they can open etc. Conveniently, they allow blacklist/whitelist only at a domain level and YouTube shorts have the same domain as YouTube i.e. https://www.youtube.com/shorts - they could have very easily provided a regex/pattern based blacklist/whitelist feature. Blocking YouTube in its entirety is not feasible because lots of educational videos are hosted on it. The only option is to externalize the filtering via pihole etc.

    I suppose allowing parents to prevent their kids from watching the inane garbage that is shorts is a step too far in Google's books.

    rehevkor5(10000) about 20 hours ago [-]

    YouTube really needs to provide an option in their mobile app to disable shorts.

    lukan(10000) about 20 hours ago [-]

    Yeah, me too.

    It will be blocked again and just handpickes local videos and games to choose from. I never thought I would have to do this as an adult, but what else you do?

    aantix(10000) about 20 hours ago [-]

    I'm building a new YouTube player experience for my kids.

    You can block shorts. Block keywords (e.g. Minecraft, MrBeast)

    Email me if you'd like to test.

    [email protected]

    lo_zamoyski(10000) about 20 hours ago [-]

    Yeah. From time to time, you hear that reading books is somehow obsolete, and that valuing books reflects an undue emphasis on medium rather than content. This view is mistaken. The form in which information is delivered is not irrelevant to how it is processed, understood, or retained. There is a crucial difference between sustained engagement with a coherent body of thought and the piecemeal consumption of isolated informational fragments.

    Short-form content, whether in the form of articles, posts, or 'snippets', habituates the reader to a fragmented mode of attention. Over time, this practice undermines the capacity for deep focus and coherent understanding. The effects are cumulative: what is lost is not merely quantity of information, but quality of comprehension. Certain kinds of understanding only emerge over time, in context, and in continuity. A complex argument, or a meaningful dialogue, cannot be replaced by a summary or a highlight reel. To suggest it can overlooks the way serious thought takes place.

    0xEF(10000) about 20 hours ago [-]

    Have you considered switching to Nebula? A lot of the YouTubers I like and tend to trust are also active on that platform. Will there is still some fluff, Nebula does seem to be far more discerning about the content it hosts.

    thedougd(10000) about 20 hours ago [-]

    Continuously frustrated to see the YouTube app return to my Apple TV home screen. I can appreciate why Google makes it hard for me to block their apps on their platforms, but why won't Apple allow me to explicitly allow or disallow which apps can be installed on my Apple TV? Why don't screen time limits apply to the Apple TV?

    loloquwowndueo(10000) about 19 hours ago [-]

    We banned YouTube entirely for the kid, best decision we ever made.

    deadbabe(10000) about 19 hours ago [-]

    An awesome app would be something that could hijack algorithms for various social media apps on home WiFi and feed kids parent-approved content silently without them even knowing, and messing with search results so they struggle to find things you don't want them to see.

    onemoresoop(3292) about 19 hours ago [-]

    Download content of your choosing (yeah, you can even DL from youtube), put it offline and let kids watch from a playlist you curated yourself. Yank off any wifi connectivity, it's poison finds ways to dumb down the kids.

    cynicalpeace(10000) about 19 hours ago [-]

    Yes, you should prohibit it.

    You should have never introduced them to it in the first place.

    Not trying to be mean- just trying to be frank.

    Our kids get almost no screen time. We watch a movie once a week as a family. That's it. We have no problems because we have never introduced screens to them beyond that.

    Our kids like playing outside.

    charlie0(10000) about 19 hours ago [-]

    There are alternatives to YT for educational content, like Nebula. However, even that platform lacks control and it's slowly getting flooded with non educational content. It sucks because there is no solution here short of curating your content via ytdl and rolling your own YT like software.

    ccorcos(3666) about 19 hours ago [-]

    I wish YouTube allowed a filter for minimum video length. I don't want my kid watching anything under 5 minutes, ideally 10 minutes long.

    My biggest concern is the attention thrashing. If they're going to watch some garbage, at least be stuck with it for 10 minutes so you'll get bored of it...

    7thaccount(3494) about 19 hours ago [-]

    I commented on here before about this. I'm certainly not perfect, but what I've done is basically YouTube is something the kid doesn't watch on their own. They can watch documentaries with me or whatever (occasionally some video game stuff), but almost all YouTube kids is awful. There are a lot of really good kids shows out there across different streaming services with actual plots and character development that make them think without frying their brains. For a kid in the 8-14 year range: Avatar the Last Airbender, Gravity Falls, Owl House, Dragon Prince...etc are prob fine depending on the kid (dragon Prince is a bit darker). As a parent you need to make sure they're not watching content you object to though. I'll also find some episodes of something like Star Trek that is interesting with some moral dilemmas and just talk it out with them. TV is fine in moderation. Make sure you keep reading to them as well.

    quantadev(10000) about 19 hours ago [-]

    I noticed just last week Youtube Shorts (and long vids too) have become so full of fake AI Generated stuff it's not even worth watching. Sure it looks perfectly real,even if it's fake, but as an adult I find it just a waste of time. However children cannot TELL which things are fake quite as well as an adult can, so they'll end up basically going insane watching that crap, and end up with a very distorted view of the world.

    It's truly a National Security issue at this point. I hope America bans TikTok, and if I had children they wouldn't be allowed to watch this garbage. Sadly most Americans value their 'friendship' with their kids more than they value their parenting responsibilities, and so they let the kids do whatever they want just so they stay on good terms with them without the kids being mad all the time.

    Also today's generation of moms and dads all grew up in the internet world, so to them, blocking technology from their kids seems like abuse of a sort, when it's not.

    oulipo(3506) about 19 hours ago [-]

    I think as a parent you're supposed to... prevent that by talking to them?

    Neywiny(10000) about 19 hours ago [-]

    I'll only respond to this but I do see a lot of people share your viewpoint. I think I agree with you partially. There are ways to rot the brain on YouTube. I noticed it maybe 8-9 years ago for me. I unsubscribed from all the gaming channels and only watched tech/EE/CS videos. It got to the point where in college I had weeks of 40+ hours of YouTube (does it adjust for 2x speed? Unsure) but it was mostly on STEM content. I believe that's what let me ace my classes in my later years. I just learned better from them than reading textbooks.

    So, please don't give up on trying to only block the brainrot. Also, kids are crafty and usually have more time than adults so be prepared to fight an uphill battle once they figure out VPNs, DNS, and other ways.

    kridsdale1(10000) about 18 hours ago [-]

    Disable "watch history" and Shorts will go away.

    philips(2100) about 18 hours ago [-]

    The only solution I have found is using YouTube kids (and now Pinchflat) and only allow approved content.

    I wrote a blog post about it here: https://abparenting.substack.com/p/effective-youtube-kids

    yapyap(10000) about 20 hours ago [-]

    I think a tangential proof of this that is very telling and does get brought up often enough (but I'll repeat it oncemore just in case) is that they have a different app in China, that's the Chinese tiktok; Douyin. Made by the same company and although it has short form content all the same the difference is that the algorithm in China is designed for Nationalist content, educational content and is restricted to 40 minutes a day for minors.

    This is like the children of silicon valley CEOs growing up without phones and tablets and such but on a worldwide scale.

    It's frighteningly genius to be honest, douse the next generation of countries you are competing with with quick dopamine hits till they are basically just existing to swipe, scroll, etc and then rake in all the power for yourself / your own country.

    tstrimple(10000) about 17 hours ago [-]

    Douyin follows Chinese law which is why it has these restrictions. TikTok does not so it isn't allowed. Kind of weird what happens when countries pass legislation around activities they don't want instead of just trying to ban a foreign app while allowing all of the same dark patterns in the domestic competition. China just does a better job of protecting its citizens from this sort of thing. The US could have laws around social media for children like China, but they are more interested in perpetuating the yellow scare and maximizing profit.

    salynchnew(10000) about 21 hours ago [-]

    Related critcism of the book and the authors of this site: https://3quarksdaily.com/3quarksdaily/2024/07/why-academics-...

    CharlesW(114) about 20 hours ago [-]

    Great read, thanks for posting. What I like about it is that while it notes Haidt's ideas get flimsier the closer they're examined, it also thoughtfully gives him credit for a more important observation — that the increasing loss of societal structure is the actual and larger problem (and seemingly the target of his next book), with social media as one of many symptoms or contributors, depending on how you look at it.

    seydor(3491) about 21 hours ago [-]

    the same old story repeating with every generation. historically however, none of the 'devilish technologies' was banned

    tomaskafka(3390) about 2 hours ago [-]

    Like cocaine drops or widespread smoking? Or leaded gasoline? Or forever chemicals?

    nekochanwork(10000) about 20 hours ago [-]

    I don't disagree with the claim that brainrot literally rots brains. But, I strongly oppose laws that ban social media on the grounds of 'protecting children.'

    Parents are fully capable of monitoring and regulating their children's internet usage without Daddy Government getting involved.

    codydkdc(10000) about 20 hours ago [-]

    this is a bad argument in the abstract. 'drivers are fully capable of navigating intersections without Daddy Goverment getting involved' so we shouldn't have traffic laws and stop lights

    the evidence says otherwise. I agree an outright ban probably isn't the best solution

    hnpolicestate(10000) about 20 hours ago [-]

    The correct argument has become taboo in our technocratic puritan age. The only word that matters now is SAFETY, no matter the collateral damage.

    charlie90(10000) about 19 hours ago [-]

    Except parents can't control what their children's peer's internet usage is. A common argument to let kids use social media is that their friends use it and they would be left out. This problem can't be solved by individuals, it needs collective action.

    onetimeusename(10000) about 20 hours ago [-]

    I am surprised how common it is for younger women and teenagers to receive requests for gifting and get sexualized comments which this article mentions. I don't see a lot of people talking about it but I think it would really warp someone's mind to be under 18 and be receiving requests for foot pics, 'spoiling', and more. I've wanted to put this out there for a long time but felt like no one wanted to talk about it.

    bn-l(10000) about 8 hours ago [-]

    I can imagine it would completely warp your idea of men especially if you were young and not able to put it into perspective (even very old people can't do this). That could have a serious impact on your life.

    _JoRo(10000) about 20 hours ago [-]

    Just children? I've had to block social media for myself because of how addictive it was / how much time I was wasting.

    I will say though, if you are trying to watch videos more from an educational perspective then it can be useful. Although, I would advise getting an LLM summary of the video, and then speed reading the summary in order to determine if their is any useful content in there.

    BlueTemplar(3415) about 14 hours ago [-]

    Yes, but it's still a whole other can of worms when someone else is responsible for your behaviour and relationships with most of the society.

    kobenni(10000) about 5 hours ago [-]

    Could you give a description of how you block social media? All methods I found so far can be undone within seconds.

    openplatypus(2860) about 20 hours ago [-]

    TikTok, Facebook, Instagram, Youtube, Twitter ... no need to single out TikTok. They are all equally bad.

    jampekka(10000) about 20 hours ago [-]

    This is in the footnote 12: 'Of course, if TikTok is removed, many children will just move to TikTok's competitors: Instagram Reels and YouTube Shorts. This is why it's so important for countries to follow Australia's lead: raise the age for opening social media accounts to 16 and require the companies to enforce it.'

    But indeed focusing on TikTok is probably counterproductive for establishing general regulation. Why not just apply the same regulation to TikTok?

    micromacrofoot(10000) about 20 hours ago [-]

    They're not actually, many more children use tiktok than the others

    uuddlrlrbaba(10000) about 20 hours ago [-]

    I love the parents in the tech community. They had unfettered computer and internet access which formed them into the successful people they are today. But they were special and their circumstance was special and their kids are not allowed to use the internet because now its bad.

    old_man_yells_at_cloud.jpg

    marcellus23(10000) about 20 hours ago [-]

    Lots of people in the tech community also struggle with attention and social disorders. Being good at computers is not the only thing that matters in life.

    ccozan(10000) about 19 hours ago [-]

    Yeah but this is the old debate old vs new. Plus is not the kids is the parents.

    Let me give another example: a nice village where I spend my childhood. Every day on the streets, forests, you name it. No time/hours limit, no space limit - play until fully tired. Now, visiting again, there are no kids on the streets. I thought to ask my relatives and people I know. And I find out that is not that kids do not want to play outside, they are _not allowed_ to play outside!!.

    Why? 'the kids are now kidnapped from the street' - 'how?' 'I heard from a neihbour from her cousin that lives in the village 10km that this happened!' ( not true - the kid got in a black car which was the uncle showing of his new BMW )..'. Another example 'Rapist are now free!' ... 'no way!' 'Yes, yes, this happened' ( was a case 3 years ago in a city 30 km away - a normal man got in a quarrel with a girl and the little mischevious said to get away that he touched ...no comment ).

    yelling_and_pulling_hair.tiff

    JKCalhoun(3408) about 19 hours ago [-]

    It's true, by the time I was in college, I did have unfettered access to USENET. ;-)

    bryanhogan(10000) about 19 hours ago [-]

    It's not that the internet is bad, the internet is very different from what it used to be. Apps using mainly algorithmic based recommendations, such as TikTok, use that and other dark design patterns to exploit users more than ever before.

    sebastiennight(10000) about 19 hours ago [-]

    LOL... to have unfettered access to games, I had to program them myself though.

    I fully intend to extend the same circumstances to my own kid, seems fair

    Madmallard(3641) about 19 hours ago [-]

    Yeah well only in the last 10 years did internet companies start employing psychology PHDs to find the best possible ways to exploit people they can. That is basically what the problem is. Short-form content and algorithmic display of what evidently appeals to you the most is literally zombifying people.

    dkga(10000) about 18 hours ago [-]

    There is a huge survival bias that you are not considering. Today's parents that had unlimited access to the Wild West that was 90s internet and are successful today do not represent the whole population of people who had access to internet in that era.

    ozmodiar(10000) about 18 hours ago [-]

    So to follow my example, that would be no computer or internet until age 15. I don't know, seems harsh. I'll also have to swap my TV for a 10' one that only gets 2 stations.

    dcchambers(10000) about 18 hours ago [-]

    Infinite-scroll content (especially mindless VIDEO content) was NOT a thing when we were kids. And we also had to sit down at a desk and browse the internet on a computer.

    Having 24/7 access to infinite amounts of brainless content in your pocket is not something we ever had to contend with. This is uncharted territory. And it's terrifying.

    hyeonwho4(10000) about 15 hours ago [-]

    The unfettered computer and internet access was a desktop machine (which needed to run a minimalist distro) on dial-up in a very public room. The fun that taught me tech stuff was getting to distro to work, and there was no privacy. Parents were much more aware of the dangers back then.

    Nowdays everythibg on smartphones 'just works', and the OS won't even let the user access system files. I meet college students who have no idea what a file system is, or what a DNS server is.

    Times have changed, indeed.

    bhouston(2119) about 19 hours ago [-]

    The world my kids inhabit, they spend most of their time on SnapChat.

    lolinder(2685) about 19 hours ago [-]

    Funnily enough, just yesterday the authors posted a follow up to this piece focusing on Snapchat:

    https://www.afterbabel.com/p/industrial-scale-snapchat

    setgree(10000) about 21 hours ago [-]

    Haidt is not the world's most careful data analyst [0], so a determined skeptic would probably not find this persuasive. But I think he's been directionally correct about all his major points in the past decade:

    * Cancel culture is not compatible with democratic norms [1]

    * Social media is making many people a little worse off and it makes some people a lot worse off

    * having our phones on us all the time is bad for just about everything that requires sustained attention [2], including flirting and dating [3]

    * Technology won't solve this problem. AI will make things worse [4]. If TikTok gets banned and some slightly more benevolent version takes it place, we're still headed in the wrong direction. What we need is culture change, which Haidt is trying his darndest at. Hats off to him.

    [0] https://matthewbjane.github.io/blog-posts/blog-post-7.html

    [1] https://www.nytimes.com/2024/03/23/business/jonathan-haidt-s...

    [2] https://thecritic.co.uk/its-the-phones-stupid/

    [3] https://www.sexual-culture.com/p/its-obviously-the-phones

    [4] https://www.npr.org/2019/06/04/726709657/sometimes-fascinati...

    csours(10000) about 20 hours ago [-]

    > Cancel culture is not compatible with democratic norms

    This one is VERY morally and emotionally weighty, and I think you have to do quite a bit of work to ACTUALLY understand what is going on here, but I agree.

    In the middle of a fight, no one wants to look reasonable. In a fight, reasonable looks weak. In a fight, no one wants democracy, we just want to win.

    Unfortunately that fight mindset also shuts down the whole thinking part of the the brain; which is how you get people who gleefully vote for a king, because they feel like the king is their champion in the fight.

    brendoelfrendo(10000) about 20 hours ago [-]

    > Haidt is not the world's most careful data analyst

    We can, and probably should, just end the discussion there. Haidt is really good at finding data to support his claims, but then failing to mention that the correlation he's describing as 'definitive' is, actually, really weak. This happens throughout 'The Anxious Generation,' at least.

    Calling him 'directionally correct' when he's pretty bad at actually showing the work as to why he is correct is just saying that you think he has a good point because his vibes match your vibes.

    Bukhmanizer(10000) about 20 hours ago [-]

    > Haidt is not the world's most careful data analyst

    This is a massive understatement. The ironic thing about Haidt is that his writing is heavily geared towards social media. He writes a good headline and usually has a few facts in there, but is fundamentally non-rigorous. It's science for skimmers and people who clicked on an article already agreeing with the conclusions and so won't challenge the "evidence" he provides no matter how weak.

    krashidov(10000) about 20 hours ago [-]

    > If TikTok gets banned and some slightly more benevolent version takes it place

    I don't have TikTok on my phone. I don't have an account. But I have YouTube, Twitter, Instagram all locked down on my phone (my SO has the Screen Time code).

    I did this because the best minds on earth get paid based on how much I doom scroll. If I don't do this, I routinely have times where I scroll for an hour+.

    I have argued that the only solution to this is to either ban any sort compensation based on increased engagement of a social media product (probably impossible to enforce or unconstitutional if that still matters). OR to add regulation around infinite video scrolling. We regulate gambling because it hacks our dopamine loop (although usually associated with much more severe consequences). I think it's ok to regulate the video scroll. Start small with something like enforcing a scroll lock after 30 minutes. To enforce it, just regulate the largest companies.

    thomassmith65(10000) about 20 hours ago [-]

      Cancel culture is not compatible with democratic norms
    
    Democracy protects the majority against a minority. 'Cancel culture' does the same. They are bedfellows.

    Liberalism is what protects a minority against the majority.

    Liberal Democracy strikes a balance between them. Typically the majority gets to determine who is in charge (democracy), and enshrined legal protections protect minorities from the bias and wrath of the mob (liberalism).

    If someone insults people or breaks norms, and there's a lot of blow back, it doesn't surprise me. Few people complain that they are forbidden from walking the streets nude with a raging erection. The majority doesn't want that kind of freedom of expression.

    What this has to do with social media companies, don't ask me. I mainly care about the ability of people to make arguments without the government locking them up.

    foldr(10000) about 20 hours ago [-]

    > Cancel culture is not compatible with democratic norms

    Look around the world at where democratic norms are actually being undone. It's often the people who are most opposed to so-called 'cancel culture' who are busy with the undoing. But perhaps you are willing to be an unusually bipartisan wielder of the term and concede that the major instances of cancel culture in recent times are such things as Hungary banning pride parades, Trump bullying universities and deporting people for holding the wrong political views, and school libraries banning books with LGBTQ themes.

    fny(3295) about 20 hours ago [-]

    A determined skeptic would see Haidt is directly quoting TikTok's own admissions found in legal briefs.

    Frankly, it's terrifying.

    os2warpman(10000) about 19 hours ago [-]

    >Cancel culture is not compatible with democratic norms

    Cancel culture is a myth.

    It is a label used to denigrate people and organizations who exercise the fundamental right to distance themselves from associations they find distasteful or non-beneficial.

    There is not a single 'cancelled' person who does not retain the ability to work and exercise their speech rights.

    This is not opinion it is fact.

    I welcome any attempt to prove me wrong.

    I will respond with acting credits, tweets, and photographs of the cancelled person serving in a position of authority and/or being chauffeured between media appearances where they complain about being cancelled to an audience of millions.

    'Cancel culture' is the same bullshit as 'virtue signaling': made up nonsense intended to poison any discussion and blunt criticism without needing to do or say anything substantive.

    1270018080(10000) about 19 hours ago [-]

    People are still taking shots at the cancel culture boogeyman in 2025? It's just an organic response to people not wanting evil slop shoved in their faces on an unregulated internet.

    jmyeet(10000) about 18 hours ago [-]

    > Cancel culture is not compatible with democratic norms

    One's position on 'cancel culture' tends to reveal a lot about somebody's politics. Complaining about cancel culture tends to correlate highly with conservative political views. The idea is that some people can't freely express their opinions. This is the same idea that leads the likes of Elon Musk to complain about the lack of 'free speech'.

    When right-wingers say 'free speech' they mean 'hate speech', more specifically 'the freedom to express hate speech'. And when they complain about 'cancel culture', what they're really complaining about it there being consequences to their speech [1].

    So if somebody goes on a racist screed and they lost their job because their employer doesn't want to be associated with such views, that gets labelled as 'cancel culture'.

    The very same people defend cancelling the permanent resident status of somebody who simply protested war crimes committed by Israel (ie Mahmoud Khalil) with no due process, a clear First Amendment violation.

    As a reminder, the First Amendment is a restriction on government activity. For some reason, the same people who were vaccine experts 2 years ago who are now constitutional experts don't seem to understand this.

    [1]: https://www.thenation.com/article/society/republicans-cancel...

    troyvit(10000) about 19 hours ago [-]

    Is it TikTok harming the kids or families who don't regulate their kids doing the harm?

    In other words if I leave my kid alone in the house with a liquor cabinet, and the kid gets drunk every day, did the liquor do the harm or did I?

    That's an imperfect analogy though, because -- at least in the U.S. -- our society has already aligned itself such that our institutions and our devices raise our kids, not our families. As long as we keep that norm, then in a nation that values free speech and capitalism as much as the U.S. does, we're certain to have this problem.

    So as another commenter said, if we ban TikTok something slightly more benign will take its place, and that's because we aren't dealing with the real issue: we don't raise our kids anymore.

    Personally I look at the commonality of nuclear families[1] as a big culprit here. Once you isolate kids from aunts, uncles, cousins and grandparents you're left with just the parents to raise them. Those not rich enough to afford daycare have to either split the duty so they can afford a roof over their heads or leave the kids alone.

    [1] https://en.wikipedia.org/wiki/Nuclear_family#Compared_with_e...

    xurias(10000) about 19 hours ago [-]

    The ship has sailed on hoping for individual solutions. Probably sailed long before we as a species could be considered homo sapiens. I'm not sure why there's this weird reluctance to make systemic changes and improvements, and instead solely pushing the responsibility on every single person that interacts with kids.

    reverendsteveii(10000) about 19 hours ago [-]

    'The Chinese government is using TikTok to harm our kids. Someone else should be using TikTok to harm our kids, and other people should be using other apps to harm our kids.'

    Infinite, algorithmically-curated content is the problem. It's designed to be addictive and manipulative. There's data that shows that stuff like this basically exploits our ability to delay gratification by offering big pops of reward at random intervals. This develops pathways that encourage continued interaction because, essentially, you don't know when a reward is coming but you know that a reward is coming eventually so your brain keeps drip-feeding you from the memory of the last reward. It's similar to how people end up mindleslly bashing away at penny slots all day for years and years.

    lolinder(2685) about 19 hours ago [-]

    Who exactly do you think you're quoting there? I can't find it in TFA, and the article actually says exactly the opposite: that the current US approach is misguided because it focuses on the ownership of the company rather than the fact that the product is just plain dangerous in any hands.

    Here's an actual quote from the conclusion of TFA, with a footnote:

    > These harms may not be presented tomorrow to the Justices of the Supreme Court, but we think they should be decisive in the court of public opinion. TikTok should be removed from American childhood. 12

    > 12. Of course, if TikTok is removed, many children will just move to TikTok's competitors: Instagram Reels and YouTube Shorts. This is why it's so important for countries to follow Australia's lead: raise the age for opening social media accounts to 16 and require the companies to enforce it.

    JohnMakin(3635) about 19 hours ago [-]

    Each and every one of these points applies to Meta in a huge way:

    > 1. Addictive, compulsive, and problematic use 2. Depression, anxiety, body dysmorphia, self-harm, and suicide 3. Porn, violence, and drugs 4. Sextortion, CSAM, and sexual exploitation 5.TikTok knows about underage use and takes little action

    Hell, it's even a matter of congressional record!

    https://www.npr.org/2023/11/07/1211339737/meta-failed-to-add...

    it doesn't make it right, but this current political climate's myopic focus on tiktok alone destroys any credibility on this.

    sodality2(2563) about 18 hours ago [-]

    Jonathan Haidt has written and published huge amounts of posts, papers, and an entire book targeting social media and technology as a whole (not shying away from American-owned media, if anything, specifically targeting them). Literally yesterday, he published the same format of post against Snapchat [0]. Why does reading a single post targeting one social media destroy any credibility at all?

    [0]: https://www.afterbabel.com/p/industrial-scale-snapchat

    like_any_other(10000) about 19 hours ago [-]

    I wish parents blocked such sites on their children's devices, so we didn't have to expand the censorship & surveillance state to protect them.

    awakeasleep(10000) about 18 hours ago [-]

    I didn't realize how backwards and unhelpful the way we talk about this was until I became a parent.

    In general, we talk about 'iPad kids' and blame the tablets and phones themselves. Slightly more sophisticated people will blame the apps like YouTube or Roblox.

    That stopped making sense to me once I saw the problem first hand with my peers and my own children. The actual issue is parents wanting to (basically) anesthetize their kids so the parents can do something as if they didn't have the kids.

    Devices and Apps give parents the ability to zonk their kid into outer space for extended periods of time with unlimited videos or games that never end. But that isn't an inherent quality of the device. Like if you block all the apps and just let the kid use the iPad for drawing. Or if you do the YouTube kids thing where they can only watch videos you add to an allowlist.

    The app makers do hold a lot of responsibility for the defaults on their apps, but the real issue is parents who are choosing to blackhole their kids for extended periods of time. (I am agreeing with you btw)

    throwaway1854(10000) about 19 hours ago [-]

    In the U.S. people under 18 are allowed to own and shoot firearms, typically rifles. It's silly to allow that that and then complain about a tiny box that shows videos.

    Parents are responsible for their children. If a parent doesn't feed their kid, they go to jail. If a parent harms or allows harm through negligence to children, the parent is the one who suffers the consequences and has the child taken away.

    If a parent is giving a child a phone and allowing them to use a harmful product, the parent is at fault and should suffer the consequences. Not the rest of us. I don't know why I should have my access to anything restricted because of bad parents. Parents choose to be parents and have and/or keep children and that is their business. Bad parents should suffer consequences and one of those can be no longer being allowed to be a parent.

    It's one thing if a provider is specifically trying to get children on its platform - and if a company advertises its services in public places, it's again on the parent to be in control there. Social media companies aren't holding a gun to children's heads trying to get them to join. Kids wanting to do stuff because other kids think it is cool has always existed and that happens when children are not supervised or disciplined. Kids not doing what they are supposed to be doing of their own choice is a parental failure.

    Someone under 18 shouldn't be able to purchase a cell phone, and if a parent wants to get them a cell phone, then the parent should accept responsibility for everything on that phone.

    The addiction argument is tired. Anything pleasurable can be addictive. If you want people addicted to less things, design society where everyday life is less boring (getting rid of 2 hour commutes and having more parks would be a good start).

    lcfcjs6(10000) about 19 hours ago [-]

    100 percent agree. These politicians are trying to explain how dangerous TikTok is to our children while allowing general citizenry to own AR-15s. The hypocrisy is unreal.

    itomato(10000) about 19 hours ago [-]

    "Commenting for reach" doesn't work on an AR or AK.

    They don't touch as many lives, and what a disingenuous comparison.

    it_citizen(10000) about 19 hours ago [-]

    > If you want people addicted to less things, design society where everyday life is less boring

    I think society has never been so entertaining. I feel like we should instead learn to embrace the boredom. Life is supposed to be boring most of the time. It is healthy.

    ericmcer(10000) about 19 hours ago [-]

    So should we let people under 18 legally buy cigarettes, alcohol and marijuana? We definitely shouldn't monitor kids school attendance either. The parents should be the ones who regulate all those things right?

    You probably don't have kids because if you did you would know that around age 13 you stop being able to just force them to not do things, you have to start to reason and compromise with them more. Without societal rules there will be many kids who drink, smoke, use social media and barely attend school. Those kids have bad parents but to a 13-17 year old they have 'cool' parents, and now every other kids is gonna wonder why their parents are so lame.

    You can't just raise a kid in a silo, and if you don't ban certain things at a higher level the other parents get to have a massive influence on your kids expectations.

    SkyBelow(10000) about 18 hours ago [-]

    >Parents are responsible for their children.

    If this is the case, why do we pass any special child protection laws that override what a parent decides is best for their child (and in a way that punishes those involved beyond just the parents)?

    As to if any such law is appropriate or not, that would seem to be a question of how much harm is caused and if the law is aimed at preventing the harm. Many things are addictive, but only some of those cause enough harm to justify a ban to protect children.

    Glyptodon(10000) about 19 hours ago [-]

    Parents who let their kids mindlessly use TikTok, YouTube, etc., are guilty of neglect IMO.

    onemoresoop(3292) about 19 hours ago [-]

    They'll pay the price firsthand but sadly we'll all pay for it.

    Duanemclemore(10000) about 18 hours ago [-]

    I don't have kids, so I'm not in the trenches on this one. But a personal anecdote that might serve as evidence that other things are possible to everyone navigating tech and kids...

    When I was a kid living in a trailer in the midwest in the eighties I asked my parents to buy me a secondhand set of 1973 Encyclopedia Britannica from our local library - for $7. It fed the same curiosity and joy of discovering new things that you would want your kid to get from resources online.

    When we went on trips we always drove. And even if I didn't already have a book or books from the library that I was reading at the time, my parents would suggest I take a volume of the Encyclopedia. And sure enough if I got bored I'd break it out. (Unless it was too dark to read at which point I'd just fall sleep.)

    That's all to say there are alternatives that cut the gordian knot, which kids can really dig if you frame it right. My parents were both voracious readers themselves, and it didn't take long for their reading to my sibling and I to turn into reading on our own. So when we got something that provided the novelty and agency of navigating your own way through an encyclopedia, it was a huge hit.

    Of course things are very different today. And I'm not a luddite or even someone who believes that old ways are intrinsically better. But there are ways to feed the many various and often contradictory needs kids have that aren't reliant on contemporary tech.

    BlueTemplar(3415) about 15 hours ago [-]

    Or pre-recorded audio (tapes, CD...) if reading in a wobbly vehicle makes you sick.

    alganet(10000) about 18 hours ago [-]

    That is absurd. A competing children-harming platform that is not north american?

    Only the US can harm children in industrial scales. Any threat to its sovereigness will be dealt with by our child soldiers.

    kurtis_reed(10000) about 18 hours ago [-]

    Huh?

    SamuelAdams(2901) about 18 hours ago [-]

    > But when the Kentucky AG's office was preparing to post their brief against TikTok, whoever was in charge of doing the redaction simply covered the relevant text with black rectangles. Even though you can't see the text while reading the PDF, you can just use your cursor to select each black section, copy it, and then paste it into another file to read the hidden text. It is great fun to do this — try it yourself! Or just read our version of the brief in which we have done this for you.

    I feel like there needs to be more education about redaction and obfuscation tools, namely this black box tool and blurring. It is usually possible to reverse blurring. Not redacting information properly is just embarrassing.

    krackers(3617) about 16 hours ago [-]

    Just saying 'draw a black box' is not sufficient, you need to know the implementation details. If the software saves in a layer-based format, that's no good. If there is alpha channel then it's no good. If there is pre-existing compression artifacts that can leak information. You basically need to know that it does the dumbest thing possible when editing the image. I guess mspaint is probably the best option.





    Historical Discussions: Nice things with SVG (April 12, 2025: 565 points)

    (565) Nice things with SVG

    565 points 6 days ago by fmerian in 2102nd position

    fuma-nama.vercel.app | Estimated reading time – 11 minutes | comments | anchor

    #SVG

    More about SVG. Note that the example code is written in JSX (or React), not ordinary HTML.

    #Animated Wires

    Make the line, using line or path.

    <svg viewBox='0 0 50 50' className='bg-neutral-900 max-w-[100px] mx-auto'>
      <g>
        <line x1='0' y1='0' x2='0' y2='50' stroke='white' strokeWidth='1' />
      </g>
    </svg>
    

    Make it a mask.

    <svg viewBox='0 0 50 50' className='bg-neutral-900 max-w-[100px] mx-auto'>
      <g>
        <rect x='0' y='0' width='50' height='10' fill='red' mask='url(#line)' />
        <mask id='line'>
          <line id='' x1='0' y1='0' x2='0' y2='50' stroke='white' strokeWidth='1' />
        </mask>
      </g>
    </svg>
    

    Add animation.

    <svg viewBox='0 0 50 50' className='bg-neutral-900 max-w-[100px] mx-auto'>
      <g>
        <rect
          x='0'
          y='0'
          width='50'
          height='10'
          fill='red'
          mask='url(#animated_line)'
          style={{
            animation: 'to-down linear infinite 2s',
          }}
        />
        <mask id='animated_line'>
          <line x1='0' y1='0' x2='0' y2='50' stroke='white' strokeWidth='1' />
        </mask>
      </g>
    </svg>
    
    @keyframes to-down {
      0% {
        transform: translateY(-10px);
      }
    
      100% {
        transform: translateY(50px);
      }
    }
    

    Make styles.

    <svg viewBox='0 0 50 50' className='bg-neutral-900 max-w-[100px] mx-auto'>
      <g>
        <line x1='0' y1='0' x2='0' y2='50' stroke='rgb(50,50,50)' strokeWidth='2' />
        <rect
          x='0'
          y='0'
          width='100%'
          height='20'
          fill='url(#line_color)'
          mask='url(#animated_line_fancy)'
          style={{
            '--height': '20px',
            animation: 'to-down-2 linear infinite 3s',
          }}
        />
        <defs>
          <linearGradient id='line_color' x1='0' x2='0' y1='0' y2='1'>
            <stop offset='0%' stopColor='rgba(255,0,255,0.1)' />
            <stop offset='100%' stopColor='rgb(255,100,255)' />
          </linearGradient>
        </defs>
        <mask id='animated_line_fancy'>
          <line x1='0' y1='0' x2='0' y2='50' stroke='white' strokeWidth='2' />
        </mask>
      </g>
    </svg>
    
    @keyframes to-down-2 {
      0% {
        transform: translateY(calc(var(--height) * -1));
      }
    
      100% {
        transform: translateY(100%);
      }
    }
    

    Most of these similar things are using the same technique. Mask out an animated block, putting some animations and probably designed some parts in Figma or other SVG editors.

    Unkey's landing page is a nice example.

    #Clerk TOC

    I made a clerk-like style Table Of Contents (TOC) on Fumadocs, you can try it out and play with the nice TOC.

    To implement it, we have to render the TOC outline on server, without client-side JavaScript to make it compatible with SSR.

    Since we're on server, we don't know the exact positions of elements. My approach is to use absolute positions, render the outline as different 'components', and snitch them together.

    This isn't hard, but we also want to render a highlighted part of outline where the items are active, or their corresponding heading is visible in the viewport.

    Like:

    I'll call it the thumb. It has to be animated, so we can't just change the color of these outline components.

    We cannot animate the thumb with simple CSS solutions, lucky we have the exact rendered positions of TOC items, since the thumb is meant to be interactive, it is rendered on client!

    Using the information from our browser, we can construct a 'mask map' on client, look like this:

    The method to construct this map is SVG - yes, our old friend.

    <svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 14 236'>
      <path
        d='M1 0 L1 20 L13 36 L13 56 L1 72 L1 92 L13 108 L13 128 L1 144 L1 164 L1 180 L1 200 L13 216 L13 236'
        stroke='white'
        strokeWidth='1'
        fill='none'
      />
    </svg>
    

    The d property of SVG <path /> isn't a nonsense auto-generated string, it's a list of commands. See the Web Docs for more details, it's quite a powerful tool.

    With our new tool, we can tell SVG to render a line connecting each point of the outline.

    This constructed a SVG that's identical to our original TOC outline pre-rendered on server.

    Similar to the technique we've learnt from Animated Wires, we can use the CSS mask-image property to mask an animated div block to render the thumb - a highlighted part of outline.

    <div
      style={{
        maskImage: `url('data:image/svg+xml,${
          // URI encoded SVG image
          encodeURIComponent(
            `<svg xmlns='http://www.w3.org/2000/svg' viewBox='0 0 14 236'>...</svg>`
          )
        })`,
      }}
    >
      <div
        style={{
          width: 1,
          height: thumb.height,
          transform: `translateY(${thumb.top}px)`,
          transition: 'all 500ms',
          backgroundColor: 'white',
        }}
      />
    </div>
    

    Check the source code to see my implementation in React.js.

    Huge thanks to Clerk for inspiring me on this, I've never thought the TOC of a documentation site can be that interesting to play with.




    All Comments: [-] | anchor

    LegionMammal978(3026) 6 days ago [-]

    One fun thing that can be done with SVG files: you can use entities in an inline DTD to define constants to be shared across different places in the file. You can see some great examples of this in the SVGs in David Ellsworth's 'Squares in Squares' page [0].

    The major browsers have no issues with this, though note that some tools like Inkscape won't parse the DTD nor expand the entities.

    [0] https://kingbird.myphotos.cc/packing/squares_in_squares.html

    timewizard(10000) 6 days ago [-]

    You can also extract different parts of an existing svg and use (clone) them elsewhere on the page.

    https://developer.mozilla.org/en-US/docs/Web/SVG/Reference/E...

    noahbald(10000) 6 days ago [-]

    It might work in browsers but a lot of SVG tooling will ignore DTD because it's a DOS risk.

    E.g. Billion laughs attack https://en.wikipedia.org/wiki/Billion_laughs_attack

    znpy(932) 6 days ago [-]

    That page took a good five seconds to render on my 2022 iPhone se

    lenkite(10000) 5 days ago [-]

    Maybe I am missing something, but can't find any !doctype or !element that would represent a DTD on that page. If you are talking simply about SVG defs and use - that isn't a DTD.

    tannhaeuser(1013) 5 days ago [-]

    You say 'entities' but that term is actually the name for SGML/XML's mechanism to define arbitrary syntactic content for reference/reuse with entity references a la &ref, whereas in SVG you can park shapes/paths/whatever under refs, giving those an id attribute value, and then <use> those element in the body SVG content, which is also what the page you linked is using (for each individual SVG ie. there's no sharing of rectangles across the many pictures since these are pulled-in individually via <embed> inot their own DOM rather than used as inline SVG).

    I wonder why SVG's original designers found it necessary to supply an ad-hoc re-implementation of the entity mechanism. I think it might have to do with how rendering properties can be overridden at the usage site? At least I don't think it was established that browsers ignore entity definitions or basically anything in the document prolog/DOCTYPE considering SVG was part of W3C's push to replace HTML's SGMLish legacy syntax with XHTML/XML.

    chentastic(10000) 6 days ago [-]

    Was always fascinated by SVG art. How good are LLMs in generating SVGs?

    jbreckmckye(3585) 6 days ago [-]

    In at least my limited experience, they're kind of bad. They can retrieve shapes that already exist, sometimes inaccurately, but they are less reliable at creating novel ones

    simpaticoder(10000) 6 days ago [-]

    Regular LLMs are quite bad at it (see simonwillison's blog post). However this paper [0] describes an apparently sound approach using Neural Radiance Fields (NeRFs), however their github repo [1] has been 'code coming soon!' for months now, so you can't really use it.

    0 - https://arxiv.org/pdf/2501.03992

    1 - https://github.com/SagiPolaczek/NeuralSVG

    pizza(378) 6 days ago [-]

    I've gotten decent outputs with Claude with iteration (sending both text feedback and screenshot for context) and then tweaked the output in Inkscape.

    plumeria(10000) 6 days ago [-]
    aiibe(10000) 6 days ago [-]

    Svg Tailwind combo makes hover animations easy and fun.

    mvdtnz(10000) 6 days ago [-]

    Any examples? This sounds interesting to me.

    danielstocks(3403) 6 days ago [-]

    Made a small silly game recently just for fun, using mostly CSS animated SVG tiles for rendering: https://pipeline-panic.vercel.app/

    perilunar(10000) 5 days ago [-]

    Nice!

    two_handfuls(10000) 5 days ago [-]

    It's a fun little game, thank you for sharing!

    danielstocks(3403) 5 days ago [-]

    Source code can be found here: https://github.com/danielstocks/pipeline-panic

    chrisweekly(10000) 5 days ago [-]

    This is a great little game! Thanks for sharing the source, too -- v nicely done.

    vunderba(10000) 5 days ago [-]

    Nice. Reminds me of the board game Waterworks from the 70s.

    https://boardgamegeek.com/boardgame/333/waterworks

    snitty(10000) 5 days ago [-]

    >height='20'

    What fresh hell is this?

    perilunar(10000) 5 days ago [-]

    What's the issue?

    HTML attribute: height='20'

    CSS property: height: 20px;

    JS statement: element.style.height = '20px';

    benjanik(10000) 5 days ago [-]

    For anyone who is using creatively using JS to create SVG dynamically and looking for work, DM me!

    all2(3659) 5 days ago [-]

    Not that guy, but just chiming in so you have some visibility.

    Voultapher(10000) 5 days ago [-]

    > Unkey's landing page is a nice example.

    That landing page is a nauseatingly laggy experience on a very powerful M1 Pro laptop. And slow to load, all for some fancy lines? I'd take a product that focuses on substance over style as dev. Don't get me wrong, style is important and I like pretty things, but here it seems the tradeoff is not well done.

    RobotToaster(10000) 5 days ago [-]

    Sounds like a problem with apple's implementation? I don't have any problem with firefox on an old 9th gen i5.

    leptons(10000) 5 days ago [-]

    > laggy experience on a very powerful M1 Pro laptop

    Apple's M series chips aren't really all that powerful, but they are very power efficient. There are far faster laptops out there than what Apple offers, though they do consume more power. My AMD-based laptop outperforms the M1 Pro by a wide margin, though it is a power hog. I had no problem viewing the Unkey website. If you're using Safari, that may also be a problem, because Safari honestly sucks.

    deads1mple(10000) 4 days ago [-]

    On latest Chrome, MBP i7 2019 and it sure is laggy as hell

    https://www.unkey.com/

    imhoguy(3448) 5 days ago [-]

    I really miss Macromedia Flash. There wasn't a single tech like Flash and SWF format which flourished with so many indie games and animated movies available without any extra downloads (other than Flash Player). Barier to entry was so low.

    Now, take SVG, it has potential to do everything what SWF could. But there is no editor like Flash and scene/object based coding solution like ActionScript. And each browser has own quirks so only simple SVG is guaranteed to be displayed everywhere.

    7952(10000) 5 days ago [-]

    Well it still exists as Adobe Animate which can export to html.

    Comparing SVG to Flash seems like an apples to oranges comparison anyway. The format does not have to do everything that Flash did but can rely on the other technologies in the browser.

    jefozabuss(10000) 5 days ago [-]

    I think web assembly can be comparable, e.g. unity/unreal/godot can compile to the browser pretty easily.

    The problem is that each of these apps can be quite bloated and in the tens of MBs range not the usual single digit MB.

    mettamage(3341) 5 days ago [-]

    Sounds like there is a startup opportunity here to recreate this

    gocsjess(10000) 5 days ago [-]

    One nice thing about SVGs is that they can be connected to the dom, you can do css, and easier to debug than canvas. Performance is the only thing holding it back from making declarative code for plotting and mapping charts.

    notnullorvoid(10000) 5 days ago [-]

    What performance issues have you encountered? Perf was decent 10 years ago so long as you avoided filters, but even that has improved.

    rjinman(10000) 5 days ago [-]

    I wrote a game of Tetris in JavaScript with SVG many years ago. It had nice graphics and was smoothly animated. I hadn't heard of anyone else using SVG like that at the time.

    I also made a game called Pro Office Calculator (available on Steam), which includes a Doom-style 3D engine for which I used Inkscape as my map editor. Here's an example of a map: https://github.com/robjinman/pro_office_calc/blob/develop/da...

    enduser(10000) 5 days ago [-]

    Reminds me of Avara which used MacDraw as a level editor. Very cool!

    kmoser(10000) 6 days ago [-]

    This taught me that SVGs can be animated with CSS. Cool!

    I wonder if anybody has recreated vector graphics games like Asteroids using SVGs and animation. You'd have to use JS to change the shape and direction of the asteroids when they're shot, but that would just require a bit of JS.

    mkoryak(10000) 6 days ago [-]

    It would be more performant to use canvas, but it might be kind of fun to do with svg

    hinkley(10000) 6 days ago [-]

    Video I bookmarked when I was stuck in backend land because I knew I'd want to learn it some day:

    https://youtube.com/watch?v=wc8ovZZ78SY

    I discovered this shortly after introducing The Secret of Kells to a child and had terrible, beautiful ideas about overly ornate websites that I have since thought better of. Mostly.

    rckt(10000) 6 days ago [-]

    SVG feels like a very underexplored and underused territory. You can do so many things with it. It really depends on your imagination. But you'll possibly need to "hardcore" a lot of stuff, so yeah, depends on the use case as well.

    memhole(10000) 6 days ago [-]

    I agree. I'm sure there's limitations, but svg feels more like a wysiwyg for web design than css

    wwweston(10000) 6 days ago [-]

    Seems like it hits limits really fast — management/legibility gets difficult without groups and layers and performance doesn't seem to scale well.

    WillAdams(10000) 6 days ago [-]

    Two usages which I thought were interesting:

    - adding toolpath information so as to use Flash as the engine for a Computer Aided Manufacturing tool: https://github.com/Jack000/PartKAM

    - (this was my project along w/ Edward R. Ford) adding hyperlinks to part lists to highlight parts in an assembly diagram: https://github.com/shapeoko/Docs --- unfortunately, that doesn't seem to work anymore.

    perilunar(10000) 5 days ago [-]

    One thing i'd like to see is an entire site built with SVG and JS without any HTML at all. It's possible but i haven't seen anyone do it yet.

    geokon(10000) 5 days ago [-]

    It's a fun format that's easy to generate, but after trying to do complicated things with it.. you kind of understand why. It's underused b/c

    - Complex graphics render different in different browsers. So you can't rely on it shows up the same (never had the same issue with a PDF for example)

    - There are quite a few renderers but they typically don't implement large parts of SVG b/c it's too complex.. So you can never really be sure what parts are 'safe' to use.

    - Large complex graphics display extremely slowly (again, compared to a PDF)

    - There is basically one editor.. Inkscape. And it's got it's own quirks and doesn't match Chrome/Firefox's behavior. Ex: You can add arrows to lines in Inkscape and they don't display in Firefox

    It's also just got too many weird corner case limitations. For instance you can embed a SVG in another SVG (say to make a composite diagram). But you can't embed a SVG in to an SVG in to an SVG. On the web if you inline or link an SVG you also end up with different behaviors

    CliffStoll(10000) 6 days ago [-]

    Is there any SVG extension which allows density of line? I have a plotter which can lift/lower a pen; it's driven from SVG files. It'd be sweet to allow the pen to lower while the line is being drawn (as we often do with handwriting).

    Oh - it's an Axidraw, from Evil Mad Scientist Labs - great device, wonderful people.

    WillAdams(10000) 6 days ago [-]

    Probably you would want to do that with G-code.

    I've been doing that sort of thing in:

    https://github.com/WillAdams/gcodepreview

    m-a-t-t-i(10000) 6 days ago [-]

    It's pretty easy to store custom instructions in plain SVG files and interpret them in with your reader. For example I have a multi-purpose laser-cutter / plotter and I use opacity for laser power, stroke weight for movement speed, green channel for number of passes, blue channel for z-axis height and red channel for lowering the pen or turning of the laser etc.

    chrisweekly(10000) 6 days ago [-]

    Even tho it's 8y old, Sarah Drasner's famous 'SVG Can Do That?' talk is still eye-opening for many. CSS has matured a ton since then (I'm less sure about SVG per se)... in any case it's HIGHLY recommended.

    Slides: https://slides.com/sdrasner/svg-can-do-that

    Video: https://youtu.be/ADXX4fmWHbo?si=6YPZkopyEDc8PSte

    jamra(10000) 6 days ago [-]

    Big fan of her book as well though I don't know if the recommended tools are still relevant.

    xyst(3582) 6 days ago [-]

    svg based games, wen?

    xerox13ster(10000) 5 days ago [-]

    wasn't that flash player?

    flaviuspopan(10000) 5 days ago [-]

    soon

    braebo(10000) 6 days ago [-]

    Complex animated SVG is fun to roll until you get into the weeds of SMIL and Safari bricks your phone for missing a leading 0 on a float or some random nonsense.

    hansvm(10000) 5 days ago [-]

    'bricks'?





    Historical Discussions: Adobe deletes Bluesky posts after backlash (April 11, 2025: 550 points)

    (550) Adobe deletes Bluesky posts after backlash

    550 points 7 days ago by bookofjoe in 20th position

    petapixel.com | Estimated reading time – 3 minutes | comments | anchor

    Adobe's foray into the Twitter alternative Bluesky quickly backfired. Frustrated by the company's business practices, users on the platform flooded its posts with backlash, ultimately prompting Adobe to delete all of its content.

    "Hey, we're Adobe! We're here to connect with the artists, designers, and storytellers who bring ideas to life," read Adobe's first post which has since been deleted. "What's fueling your creativity right now?"

    It was an innocuous enough post that Adobe sent out on Tuesday (April 8) but as Futurism reports, it provoked the ire of Bluesky users who immediately began airing their grievances at the company.

    Adobe's first post on Bluesky which received attention for all the wrong reasons.

    "I assume you'll be charging us monthly to read your posts," one user wrote in reference to Adobe's subscription model.

    On the same day, Adobe set up a Bluesky account for Photoshop. That too was bombarded with negative comments.

    "Go back to the fascist-owned site where they enjoy supporting AI-generated art like your brand does," wrote Evlyn Moreau.

    "Y'all keep raising your prices for a product that keeps getting worse," wrote another user.

    As of today (Thursday), both Adobe and the Photoshop accounts remain on Bluesky but both of their opening posts have been removed. Something that Bluesky users rejoiced in.

    "Adobe deleting their first BlueSky post because they realize that the artist community pretty much universally hates them now is extremely funny," writes Betsy Bauer.

    "Adobe just deleted their post with 1.6k angry comments from artists and creators roasting them," adds Tokori.

    Adobe situation was pretty funny

    [image or embed]

    — BlueSpark (@bluespark777.bsky.social) 9 April 2025 at 04:15

    Why Are People Hating On Adobe?

    Adobe's unpopularity can be traced back to a decision it made over 10 years ago when it shifted from perpetual software licensing to subscription pricing.

    Since then, price hikes and an embrace of artificial intelligence have all added to the vitriol many photographers and creatives direct toward the company.

    "The past few years of minimal communication with the community at large followed by the tidal wave of bad press over the past six months has left Adobe's standing with many photographers in shambles," PetaPixel's editor-in-chief Jaron Schnieder wrote last year.

    "Adobe couldn't explain why it let its once excellent relationship with photographers and media lapse, only that it is sorry that happened."


    Image credits: Header photo licensed via Depositphotos.




    All Comments: [-] | anchor

    add-sub-mul-div(10000) 7 days ago [-]

    This was fascinating to see unfold. What if there was a social network that had taste and rejected things that suck?

    Is it a failure of Bluesky to never become the global town square, if it means being a place where a brand can't find it a safe space to promote itself?

    Can a social network thread the needle of having enough traffic to be worthwhile but not so much as to attract the Eternal September?

    dimal(10000) 7 days ago [-]

    The problem is the microblogging format. No microblogging site can be a good town square. It's not designed for discussion. It's designed to allow people to shout into the void, hoping that someone hears them, so that they feel for a moment that their lives have meaning.

    cryptopian(10000) 7 days ago [-]

    Maybe a better question is whether we even need a global town square. I've had Twitter and Bluesky and the difference between them and a real town square is that you're always performing publically to an audience you can't possibly know. I've found far more rewarding relationships posting on niche forums and even subreddits because you get a sense of the people who use and administrate them, and you're safe in the knowledge you can't easily find virality.

    Barrin92(10000) 7 days ago [-]

    >Is it a failure of Bluesky to never become the global town square,

    No, because that's an oxymoron. There is no such thing because a precondition for a town square (which in reality is a community of people, not a place) is that there exists shared set of values, context and conduct between its members. The state of nature on a global platform, just like in a megacity is to be an anonymous, atomized individual who ideas or products can be sold to.

    jmclnx(10000) 7 days ago [-]

    Charging a subscription fee is crazy for a product that is very expensive. I do not know why they are still around.

    donatj(3126) 7 days ago [-]

    Muscle memory. I could probably get by with something cheaper but I have been using photoshop for thirty years at this point, I know hot keys and workflows at a spiritual level at this point.

    ge96(10000) 7 days ago [-]

    I have this popup in Win 10 that will not go away, out of nowhere 'DING' 'Would you like to use Adobe PDF?' It's built into Windows like wth

    adzm(10000) 7 days ago [-]

    I pay $20 a month for the educational discount and my kids get access to every Adobe product. It is an amazing deal.

    When you are an adult not in school you probably don't need 'all apps' and it is relatively inexpensive to get just the product you use.

    Anyway, they are still around because they still have some of the best set of features, and are industry standards, though this may change in the future and in some areas is already in progress (and I welcome that! They need competition to push them)

    BeetleB(10000) 7 days ago [-]

    People don't want to use Gimp, which is the next most powerful photo editing software :-)

    rchaud(10000) 6 days ago [-]

    Enterprise-level budgets.

    sureIy(10000) 6 days ago [-]

    I hate it too (and never had to use it) but $20/month is peanuts for people who use it professionally, unless they're from third world countries (which likely pirate it anyway)

    max51(10000) 4 days ago [-]

    No, it's not crazy, all the companies making expensive software are moving to subscriptions and they love the result. It is a lot easier to sell and to get people to renew their licenses.

    And 20$/m is not what I would call 'very expensive' in the context of a professional product used by people and companies who make a profit from it. By comparison, Autocad and Revit are 350$/m each

    megaman821(10000) 7 days ago [-]

    As a lurker on both Bluesky and Twitter, I find Bluesky is a much more hostile place. Twitter is much more absurd but there is not as much anger.

    sundaeofshock(3257) 7 days ago [-]

    I have a much different experience on Twitter. It has a much higher tolerance for racism, misogyny, gay/transphobia, and wild conspiracies. It got much worse after the election and I finally bailed on it after the inauguration. I have not missed it.

    Funes-(862) 7 days ago [-]

    It figures. One's knee-deep in censorship and the other one is more or less free-for-all, so you get high levels of hostility and an extreme range of ideas respectively from the get go.

    rcleveng(10000) 7 days ago [-]

    I just looked at twitter and it seems the sentiment is similar across both platforms. I think this was more of an adobe think than a bluesky thing.

    63(10000) 7 days ago [-]

    I find that the extremes of hostility are worse on bluesky, but the average skeet is much less hostile. And there's just straight up fewer skeets to be angry about.

    Molitor5901(10000) 7 days ago [-]

    I'm pretty left leaning and I don't like Bluesky. For me, it's too hostile and too much of an angry echo chamber. X is scattered wildly but I with muting I have been able to shape to get a more reasonable feed.

    jsight(10000) 7 days ago [-]

    Yeah, I'm surprised by how many here are responding with weird Adobe rants. They posted fairly innocuous stuff, were attacked, and ultimately chose to abandon the platform as a result.

    This sounds like a bigger indictment of the platform than anything to do with Adobe.

    newsclues(10000) 7 days ago [-]

    Not surprisingly because the community was populated by people who are angry that twitter changed.

    It's a community of unhealthy social media addicts

    doright(10000) 7 days ago [-]

    So after the honeymoon with Bluesky ends, what will be the next friendlier social media platform? And after that one? Will this just keep repeating?

    nitwit005(10000) 7 days ago [-]

    I didn't get much negativity on Twitter, and after moving the Bluesky the same is true.

    The experience of a person following fantasy football stuff, and another person following politics, will be totally different, regardless of website.

    llm_nerd(3639) 7 days ago [-]

    Bluesky currently has the kuro5hin 'A Group Is It's Own Worst Enemy' effect going on. People who think they claimed land first believe that they get to define the future of the service for everyone else.

    It's obnoxious, and if the service truly offers a real alternative to Twitter it needs to squash these brigading groups. I get that people don't want to see the posts of brands...so don't follow them. It's incredibly simple. I don't want furry content but I don't run around the platform complaining that some do.

    fracus(10000) 7 days ago [-]

    In my experience, that is completely untrue. I think it is more of 'you are the company you keep' situation. Bluesky is obviously more socially liberal and therefore, IMO objectively smarter, nicer users and community. On Bluesky you have more control over your experience which makes me wonder how genuine your post is.

    fossuser(3223) 7 days ago [-]

    Bluesky is the worst of old Twitter concentrated into one place. It's some weird mixture of the hall monitors of Mastodon crossed with wannabe members of the weather underground. Like a leftwing Gab full of only Kara Swisher and Taylor Lorenz types. This sort of of faux outrage at adobe is par for the course - its awful over there.

    X is much more of an ideological mix.

    rvz(796) 7 days ago [-]

    I've seen worse. In terms of the most hostile, Mastodon takes the crown.

    juped(10000) 7 days ago [-]

    It's kinda sad to see it become Truth Social But For The Other Team.

    esjeon(10000) 7 days ago [-]

    The Bluesky community is left-leaning and mainly consists of early adopters - basically, a group of active idealists. It's unsurprising that they are highly hostile toward a company with a history of exploitative behavior. Additionally, the current political situation significantly affects their emotional stability, negatively.

    I mean, yeah, the place is a kind of minefield these days, but I don't blame people. It just happens.

    doctorpangloss(10000) 7 days ago [-]

    Bluesky's users love drama.

    whimsicalism(10000) 6 days ago [-]

    frankly in some ways the audience for bluesky is more similar to HN, but in like a bad way.

    throwme_123(3495) 6 days ago [-]

    Yes, the elephant in the room is Bluesky itself. In my experience, it's way more toxic than Twitter/X.

    devmor(10000) 6 days ago [-]

    The last time I logged into my twitter account (which I use maybe once or twice a year to post about tech or complain to a customer service account) the first thing I saw was a paid ad espousing white nationalism and The Great Replacement conspiracy theory.

    I have a very hard time believing that Bluesky is more hostile than Twitter.

    cma(3612) 6 days ago [-]

    Maybe it shouldn't have been surprising after Democrats removed abolishing the death penalty from their party platform, but all the Mangione stuff on bluesky was pretty sad to see.

    fullshark(10000) 6 days ago [-]

    Well yeah Bluesky is predominantly left wing, and the left wing is angry right now.

    jeroenhd(3638) 6 days ago [-]

    So far, Bluesky hasn't been inserting alt-right nutjobs into my feed like Twitter has.

    Bluesky seems to focus on curating your own feed, to the point where mass blocklists will block hundreds or thousands of accounts, and not every blocklist is reliable. The 'block first, ask questions later' approach is very freeing and I've been practicing it on social media long before it gained traction on Bluesky.

    I expect the platform will be very painful for people who believe everyone should be subjected to their opinion (the people who will cry censorship because Reddit shadow-banned them). Good riddance, I'd say; they can be happy on Twitter with the rest of their kind.

    On average, my experience has been a lot better. I'm guessing that's mostly because I had to fight and subdue Twitter to exclusively show me content from the people I follow, combined with social media's general attraction to alt-right nutjobs (and of course, Twitter's owner being an alt-right nutjob doesn't help either).

    shaky-carrousel(10000) 7 days ago [-]

    What a great idea, scaring companies probing bluesky. That surely won't backfire and will cement bluesky as a Xitter alternative.

    miohtama(831) 7 days ago [-]

    Bluesky audience is certain kind, more left leaning, finding corporations evil. Adobe's experiment shows that it is unlikely any big corp could go there any time until the audience is more diverse, less cancel culture.

    teraflop(3268) 7 days ago [-]

    Maybe, just maybe, the platforms that we use to engage socially with other human beings don't also have to be organized around engaging commercially with brands.

    add-sub-mul-div(10000) 7 days ago [-]

    It's already a Twitter alternative that's superior by virtue of being in its pre-enshittification era.

    It may never be a Twitter alternative in the sense of making anyone a billionaire, but I'm okay with that.

    JKCalhoun(3408) 7 days ago [-]

    So you think Adobe would get a resoundingly warm welcome on X?

    Pretty sure they trashed their own brand with their subscription model. They're finding that out now.

    I jumped to Affinity apps years ago when Adobe required a subscription — never looked back.

    ruined(3625) 7 days ago [-]

    yes!

    thih9(2817) 7 days ago [-]

    No, the moral is different: if you're a company notoriously hostile to creatives, don't ask in a post "What's fueling your creativity right now?" - and if you do then don't be surprised when you get honest answers.

    sitkack(10000) 7 days ago [-]

    It isn't 'an idea', it is a justified response.

    Crocodile tears for the poor company that got drunk on enshittifying its own brand and now has to sleep in it. Adobe's takeover is like it freebased Private Equity and now complains that it has no friends. The TOS change to have AI train on all your art is really what broke people.

    ndsipa_pomu(10000) 7 days ago [-]

    I'd say this is less to do specifically with BlueSky and more to do with posting tone-deaf marketing spiel.

    mayneack(2267) 7 days ago [-]

    I personally am more likely to use a social media site without brands.

    fracus(10000) 7 days ago [-]

    Maybe the Bluesky selects the community they want and that is why people are enjoying it.

    Retr0id(1781) 6 days ago [-]

    The presence of obnoxious brand accounts is very far down my list of desires from a social network.

    wnevets(10000) 6 days ago [-]

    > What a great idea, scaring companies probing bluesky.

    you make that sound like a bad thing

    rchaud(10000) 6 days ago [-]

    The public yearns for formulaic engagement slop /s

    jeffwask(10000) 7 days ago [-]

    You don't get to play cute, fun, friend to creators and have the most odious licensing terms in the history of software.

    ikanreed(10000) 7 days ago [-]

    Actually if you'll read the fine print, you're obligated to be friends.

    fracus(10000) 7 days ago [-]

    I think this is a great one sentence encapsulation of the situation.

    mtndew4brkfst(10000) 7 days ago [-]

    Autodesk is at least boxing in the same weight class, but I do think Adobe is worse.

    pndy(2998) 6 days ago [-]

    All big companies do that for few years now - either with used language or graphics (namely Corporate Memphis and its various uncanny variants) or with both. It's enough to look at patch notes for mobile apps: these are exactly cutesy, fake friendly. 99% of the time you won't learn what was changed or fixed but instead you get these unrelated comments trying to show how cool company xyz is. It's unironic 'hello fellow kids' meme approach.

    bobjordan(3673) 6 days ago [-]

    I had to call it a day and cancel this year. Yearly sub approaching $700 per year just to open photoshop files a few times per year and maybe edit a pdf file? Fk it I'll find another way.

    modzu(10000) 6 days ago [-]

    krita is the way

    misswaterfairy(10000) 6 days ago [-]

    Affinity Photo is excellent, indeed Designer (Illustrator alternative) and Publisher (InDesign alternative) are excellent as well.

    Qoppa PDF Studio is a great alternative to Adobe Acrobat.

    Both offer perpetual licences.

    _xtrimsky(10000) 4 days ago [-]

    They have a photoshop plan for 10$ / month.

    Like you I rarely open Photoshop, maybe once or twice a month.

    gradientsrneat(10000) 7 days ago [-]

    I've become so disenchanted with internet vitriol that it's surreal seeing these trolls attack a social media presence that's geniunely deserving. Still, I wouldn't invite any of these people to my house.

    d0gsg0w00f(10000) 6 days ago [-]

    > Still, I wouldn't invite any of these people to my house.

    I think this is one of the most profound statements I've read all year. Perfectly sums up all the quiet backlash by middle America against the trolls that have pulled the party into extremes.

    It's not that they're bad people, they just get over excited and nobody wants to deal with the headache right now.

    I see it at work in the lunch room conversations where someone starts spewing passive aggressive hate and it really kills the vibe.

    bni(10000) 7 days ago [-]

    Has anyone actually stopped using Photoshop?

    What are they migrating to?

    vachina(10000) 7 days ago [-]

    Any number of AI apps out there can easily replace 95% of Photoshop's usecase.

    masswerk(3434) 7 days ago [-]

    1) Switched about 4 years ago

    2) to Affinity Photo & Designer (perpetual license)

    coldcode(10000) 7 days ago [-]

    I have Photoshop, but I use Affinity Photo for 99% of what I do (make digital art, AP is used for assembly and effects). I use Photoshop for a few special effects, but often it's not worth the effort.

    m-schuetz(10000) 7 days ago [-]

    Krita and Photopea. I use image manipulation programs occasionally to work on paper figures and presentations. Years ago, I used photoshop because alternatives like Gimp have abyssimal UX that I can't get over, even for free.

    With Krita and Photopea, my need for photoshop, previously paid by my employer, is gone.

    vunderba(10000) 7 days ago [-]

    I still own a copy of the last version of Photoshop before they went to subscription, CS6, but these days I find myself using either Pixelmator or Krita.

    RandomBacon(10000) 7 days ago [-]

    Photopea

    munchler(10000) 6 days ago [-]

    I use a copy of Photoshop Elements 10 from about a decade ago. Still works great and prevents me from over-editing my photos with crappy 'looks' that make them 'pop'.

    ajxs(3616) 6 days ago [-]

    Affinity Photo. It has an inexpensive perpetual license, and supports all the use-cases I previously needed Photoshop for.

    dharmab(10000) 6 days ago [-]

    Affinity for most editing and Krita for digital painting.

    _kush(2685) 7 days ago [-]

    A reminder that photopea.com is a great photoshop alternative and it's web-based

    ThinkBeat(10000) 7 days ago [-]

    Photopea is great, and you can do a lot, but it is not near the functionality of Photoshop. However, most people do not need most of that.

    mxuribe(10000) 6 days ago [-]

    Was about to mention photopea as well...I should add that i'm by no means a person who uses this type of software on a regular basis....But whenever i need it i reach for either GIMP or photopea, and last few years, its been photopea far more often.

    Honestly, i wish Adobe would still offer the conventional license, but with an additional hosting option that consumer can *choose* to activate and pay more for, or not...so that, basically:

    * I pay a one-time license to use photoshop offline - and for however long i wish (understanding that after its end of life i may not eligible for security updates, but that's fair)

    * Now, for storing of files, i would need to of course store them locally on my machine.

    * But, if i *chose* to pay an ongoing subscription, that is when Adobe would host files for me....so i can still use their product offline, and they only charge me for use of online file storage...and i wouldn't mind if there were a premium on that charge, since i get that i would be paying for an ongoing storage service.

    That gives me choice, it gives them money (both for licensing and ongoing hosting subscription), and i would figure everyone would be content....

    ...but, i guess the current world does not work that way, eh? So, i guess i will continue to avoid their products, heading towards alternatives like photopea, Gimp, etc.

    sidcool(170) 7 days ago [-]

    Honestly, Adobe deserves it. Their early cancellation fees is atrocious.

    magicmicah85(10000) 7 days ago [-]

    I pay the extra cost to make sure I can cancel after my project's done. I only ever use Photoshop/Premiere and After Effects a few times a year, so it's easier for me.

    MaxGripe(10000) 6 days ago [-]

    In my country, what Adobe is doing is punishable by imprisonment for a period of 6 months to 8 years. Yet, for some reason, they operate in this market without the slightest problem.

    "Whoever, with the intention of obtaining financial gain, causes another person to enter into a financially disadvantageous arrangement, or otherwise dispose of their own or someone else's assets, by means of deception, or by exploiting a mistake or their inability to understand the nature of the action undertaken, shall be liable to imprisonment for a period of 6 months to 8 years"

    thiht(10000) 6 days ago [-]

    That sounds like a huge stretch.

    haswell(10000) 6 days ago [-]

    As a photographer, I have a love/hate relationship with Adobe. I'm not a fan of many aspects of their business, but Lightroom is a (sometimes) excellent product.

    On the one hand, I don't have much sympathy for Adobe. On the other hand, this whole situation is why I am not on social media these days with the exception of HN and niche subreddits.

    Even if much of the criticism they receive is warranted, the social media climate is just so incredibly toxic that I want no part of it.

    Feels like there has to be a better way to be social on the Internet, but as time goes on I'm increasingly not sure if humans can handle it once a certain scale is reached.

    scarab92(10000) 6 days ago [-]

    Online communities have an inherent death spiral dynamic, unless you actively moderate away toxic people.

    These people drive away normal folks creating an ever more distilled community of unpleasant folks.

    How many normal people are going to hang around places like reddit and bluesky that are seemingly now filled with hate and conspiracy theories.

    sbszllr(10000) 6 days ago [-]

    Yup, I prefer Lightroom to Capture One, especially for film-related workflows.

    But I just can't go back to their predatory pricing practices, and the absolute malware of a programme that creative cloud is.

    WalterBright(3248) 6 days ago [-]

    > there would be no respite if I paid annually, nor could I receive one of those special invitations for a 35% discount

    Offering a discount to new customers while no discounts for existing, loyal customers always seemed backwards to me. Back in the Zortech days, we'd offer upgrades to existing customers at a steep discount.

    gs17(10000) 5 days ago [-]

    > we'd offer upgrades

    That's part of the difference. With a subscription model, you don't need customers to want to buy your upgrades (they're forced to pay for them), you benefit the most from locking them into your ecosystem as best you can. Adobe doesn't want to make existing customers happy, they want to make it difficult for unhappy ones to stop paying every month. At that point, discounts to new customers makes sense, since it traps new people into paying you.

    hliyan(1215) 6 days ago [-]

    The phenomenon at work here is: if product being produced by a profit-seeking enterprise can be rented instead of being sold, said enterprise will eventually find a way to do it, then over time, rather than a single bill, it will attempt to rent out individual aspects of the now product-turned-service, followed by cost cutting that degrades the default service level while introducing additional service levels for which the consumer will have to pay additional fees, and finally making switching away to competitors progressively difficult for the consumer. This is a natural outcome of profit-maximization.

    __loam(10000) 6 days ago [-]

    This is the primary reason why creatives despise Adobe despite some people here arguing that it's for the AI art generation. They hate that too but the biggest pain point by far is the toxic business relation you have to maintain to continue to use industry standard tooling.

    illegally(10000) 6 days ago [-]

    Single bill for modern software doesn't make sense economically anymore.

    Do you want updates? You want new versions? New features? Support?

    Single bill it's like buying an IPhone once and then you expect to get a new one for free each year.

    somedude895(10000) 6 days ago [-]

    > "Go back to the fascist-owned site where they enjoy supporting AI-generated art like your brand does," wrote Evlyn Moreau.

    Yeah this is why Bluesky will never be a serious and widely used social platform. It's the same sort of cesspool as the right-wing alternatives that popped up a few years back, just more self-righteous.

    Kye(678) 6 days ago [-]

    There's a whole mute list for this sort of person: https://bsky.app/profile/mackuba.eu/lists/3kp6zdqoscy2x

    You can also run Blockenheimer on likes and reposts for any especially toxic anti-AI takes to catch huge chunks of them: https://blockenheimer.click

    torginus(10000) 6 days ago [-]

    I just don't get how Adobe didn't get dethroned after being so unpopular for so long. There are so many Photoshop competitors, many of which are quite good, they seem to be ripe for disruption. The last version I used was CS6, which came out more than a decade ago, and even that had more than a good enough feature set.

    Blender is slowly taking over 3D, why can't 2D be disrupted similarly?

    oreally(10000) 6 days ago [-]

    I'm pretty sure it's because just about every applicable art school has enforced their student's output to be done in adobe's products - meaning that Adobe has a firm grip on the educator's market. As the saying goes, hook them in when they're young and they'll be too lazy and vested to move away from their products for a lifetime.

    graemep(10000) 6 days ago [-]

    That is how free market capitalism is supposed to work.

    If you do not like products you switch to a competitor. That is the fundamental assumption on which the system is built

    adzm(10000) 7 days ago [-]

    Adobe is the one major company trying to be ethical with its AI training data and no one seems to even care. The AI features in Photoshop are the best around in my experience and come in handy constantly for all sorts of touchup work.

    Anyway I don't really think they deserve a lot of the hate they get, but I do hope this encourages development of viable alternatives to their products. Photoshop is still pretty much peerless. Illustrator has a ton of competitors catching up. After Effects and Premiere for video editing are getting overtaken by Davinci Resolve -- though for motion graphics it is still hard to beat After Effects. Though I do love that Adobe simply uses JavaScript for its expression and scripting language.

    Angostura(10000) 7 days ago [-]

    Now that would have been a really interesting thing for them to start a conversation about on Bluesky. They would have got some genuine engagement if they wanted it.

    Much better than the transparently vapid marketing-speak

    jsbisviewtiful(10000) 7 days ago [-]

    > Adobe is the one major company trying to be ethical

    Adobe is cannibalizing their paid C-Suite artists by pumping out image generators to their enterprise customers. How is that ethical? They are double dipping and screwing over their longtime paying artists

    bpodgursky(10000) 7 days ago [-]

    > Anyway I don't really think they deserve a lot of the hate they get

    The dark lesson here is that you avoid hate and bad PR by cutting artists out of the loop entirely and just shipping whatever slop the AI puts out. Maybe you lose 20% of the quality but you don't have to deal with the screaming and dogpiles.

    gdulli(10000) 7 days ago [-]

    The problem isn't their specific practices, but more that they're in general one of the companies profiting from our slopcore future.

    nonchalantsui(10000) 7 days ago [-]

    For their pricing and subscription practices alone, they deserve far more backlash than they get.

    cosmotic(10000) 7 days ago [-]

    There are a lot of good photoshop alternatives. Most are better at individual use cases than photoshop. For example, nearly all the alternatives are better at designing website comps because they are object-based instead of layer-based.

    f33d5173(10000) 7 days ago [-]

    Adobe isn't trying to be ethical, they are trying to be more legally compliant, because they see that as a market opportunity. Otoh, artists complain about legal compliance of AIs not because that is what they care about, but because they see that as their only possible redress against a phenomenon they find distasteful. A legal reality where you can only train AI on content you've licensed would be the worst for everybody bar massive companies, legacy artists included.

    UtopiaPunk(10000) 7 days ago [-]

    You are assuming that there is an ethical way to use AI. There are several ethical concerns around using AI, and Adobe is perhaps concerned with one of these (charitably, respecting artists, or a little more cynically, respecting copyright).

    Many would argue, myself included, that the most ethical approach towards AI is to not use it. Procreate is a popular digital art program that is loudly taking that position: https://procreate.com/ai

    giancarlostoro(3167) 7 days ago [-]

    I will forever miss Fireworks. I dont do much with graphics but Fireworks was the best thing I ever used. Now I do zero with graphics.

    cosmic_cheese(10000) 7 days ago [-]

    Even if they're "trying", it's moot if the result isn't clearly more ethical, and with the proliferation of stolen imagery on their stock image service (which they use to train their models), the ethics of their models are very much not clear.

    If I saw news of a huge purge of stolen content on their stock image service with continued periodic purges afterwards (and subsequent retraining of their models to exclude said content), I might take the claim more seriously.

    lawlessone(10000) 7 days ago [-]

    They're making money off it.

    At least Meta gives their models to the public.

    m463(2487) 7 days ago [-]

    I remember pixelmator being a breath of fresh air.

    numpad0(10000) 7 days ago [-]

    What it implies is, it's not really about ethics per se, just like it's not really about 6th digits per se. People hate AI images, cut and dry.

    Law is agreeable hate, in a way. Things that gets enough hate will get regulated out, sooner or later.

    nitwit005(10000) 7 days ago [-]

    While I agree about Adobe behaving more ethically, I suspect they simply talked to their customers, and decided they didn't have much choice. CELSYS, who makes Clip Studio, suffered a backlash and pulled their initial AI features: https://www.clipstudio.net/en/news/202212/02_01/

    Spooky23(3545) 7 days ago [-]

    End of the day, the hate is: "The software is great, but these jerks expect me to pay for it!"

    Their sales went crazy because everyone was relentlessly pirating their software.

    crest(10000) 7 days ago [-]

    > Adobe is the one major company trying to be ethical with its AI training data and no one seems to even care.

    It's sad that it's funny that you think Adobe is motivated by ethical consideration.

    Bluescreenbuddy(10000) 7 days ago [-]

    This Adobe. They don't care about ethic. And frankly fuck them.

    quitit(10000) 7 days ago [-]

    I'm not pointing fingers in any specific direction, but there is a lot of importance in AI leadership, and with that you're going to see a lot of bot activity and astroturfing to hinder the advancement of competitors. We also see companies such as OpenAI publicly calling out Elon Musk for what appears to be competition-motivated harassment.

    So while I think we're all pretty aware of both sides of the image gen discussion and may have differing opinions about that - I think we can all agree that the genie can't be put back in the bottle. This will naturally lead for those that do take advantage of the technology to outpace those which do not.

    Also I applaud Adobe's approach to building their models 'ethically', yes they are inferior to many competitors, but they work well enough to save significant time and money. They have been very good at honing in what AI is genuinely useful for instead of bolting on a chatbot onto every app like clock radios in the 1980s.

    matt_heimer(10000) 7 days ago [-]

    The best? I tried the Photoshop AI features to clean up a old photo for the first time this week and it crashed every time. After a bunch of searching I found a post identifying problem - it always crashes if there are two or more faces in the photo. Guess someone forgot to test on the more than one person edge case.

    skywhopper(10000) 7 days ago [-]

    Uh, not sure where you've been but Adobe is slavering over using the content its locked-in users create to train its products. It only (seemingly) backed off this approach last year when the cost in terms of subscription revenue got too high. But you're naive if you think they aren't desperately planning how to get back to that original plan of owning an ever-growing slice of every bit of human creativity that touches their software.

    ilrwbwrkhv(3613) 7 days ago [-]

    Yes and this is what I was worried about in my essay on AI.

    They have burned so much of goodwill that the community is not willing to engage even with positive things now.

    This broadly is happening to tech as well.

    doctorpangloss(10000) 7 days ago [-]

    There's no evidence that their generative tools are more ethical.

    Even if you believe everything they say, they are lying by omission. For example, for their text to image technology, they never specify what their text language model is trained on - it's almost certainly CLIP or T5, which is trained on plenty of not-expressly-licensed data. If they trained such a model from scratch - they don't have enough image bureau data to make their own CLIP, even at 400m images, CLIP only performs well at the 4-7b image-caption pair scale - where's the paper? It's smoke and mirrors dude.

    There's a certain personality type that is getting co-opted on social media like Hacker News to "mook" for Adobe. Something on the intersection of a certain obsessive personality and Dunning Kruger.

    AnthonyMouse(10000) 6 days ago [-]

    > Adobe is the one major company trying to be ethical with its AI training data and no one seems to even care.

    It's because nobody actually wants that.

    Artists don't like AI image generators because they have to compete with them, not because of how they were trained. How they were trained is just the the most plausible claim they can make against them if they want to sue OpenAI et al over it, or to make a moral argument that some kind of misappropriation is occurring.

    From the perspective of an artist, a corporation training an AI image generator in a way that isn't susceptible to moral or legal assault is worse, because then it exists and they have to compete with it and there is no visible path for them to make it go away.

    sneak(874) 6 days ago [-]

    Subscriptionware is cancer. They deserve all the hate they get.

    sdrothrock(10000) 6 days ago [-]

    > Adobe is the one major company trying to be ethical with its AI training data

    I was actually contacted by someone at Adobe for a chat about disability representation and sensitivity in Japan because they were doing research to gauge the atmosphere here and ensure that people with disabilities were represented, and how those representations would be appropriate for Japanese culture. It really blew my mind.

    devmor(10000) 6 days ago [-]

    If they are trying to be ethical, all it takes is one look at their stock photo service to see that they are failing horribly.

    Henchman21(10000) 6 days ago [-]

    SUPER ethical to try and put artists and entire industries out of business to be replaced with Adobe products.

    mesh(10000) 6 days ago [-]

    For reference, here is Adobe's approach to generative ai:

    https://www.adobe.com/fireflyapproach/

    (I work for Adobe)

    washadjeffmad(10000) 6 days ago [-]

    What can Photoshop AI do that ipadapter / controlnets can't and haven't done for the past two years?

    'Get artists to use it' is the free square :)

    SuperNinKenDo(3358) 6 days ago [-]

    ACME is the one major company trying to be ethical with its orphan crushing training data and no one even seems to care!

    therealpygon(10000) 6 days ago [-]

    Ethical? You realize most of their training data was obtained by users forced agreement to a EULA with the intention of their art being sold on Adobe's marketplace without it ever being made explicit their art was going to be used for AI training until much later, right?

    mort96(2998) 6 days ago [-]

    To people who care about ethics wrt. 'AI', there is no such thing as ethical 'AI'.

    To people who are on board with the 'AI' hype train, there is no ethical problem to be solved wrt. 'AI'.

    Neither side cares.

    nektro(3326) 5 days ago [-]

    because customers don't want generative AI in their products, ethical or not

    arthurtully(10000) 5 days ago [-]

    Step 1. Make a stock photos library for everyone to upload. Step 2. Use that stock photo library to train your AI without letting users opt out. You couldn't remove photos without accepting the licence. Step 3. Allow users to use AI generated art on said stock library, even further ignoring artists by regurgitating art from other models. Step 4. Force new licences to users that use any file as potential training data. Step 5. Act shocked when everyone is mad.

    simonw(116) 7 days ago [-]

    Yeah, they posted this:

    > Hey, we're Adobe! We're here to connect with the artists, designers, and storytellers who bring ideas to life. What's fueling your creativity right now?

    > Drop a reply, tag a creator, or share your latest work—we'd love to see what inspires you!

    That's such a bland, corporate message. It feels totally inauthentic. Do Adobe (a corporation) really 'love to see what inspires you' or do they just want engagement for their new account?

    I'm not surprised in the slightest that it triggered a pile-on.

    magicmicah85(10000) 7 days ago [-]

    They want engagement for their new account, it's what anyone who posts on social media wants.

    lysace(10000) 6 days ago [-]

    Meh. Adobe is a large corp. You'd want want them to masquerade as something they are not? Why would that be better?

    I am so over pile-ons by people who see themselves as being SO important.

    Also: it feels really weird to defend Adobe.

    WatchDog(10000) 6 days ago [-]

    It's so bland I don't understand why it elicited any response at all.

    EasyMark(3653) 6 days ago [-]

    I'm not surprised but disheartened that people have so little going on in their life they thing trying to boycott a bsky corporate account is a good use of their time.

    jimbob45(2509) 6 days ago [-]

    The left has spent the last decade proudly bullying everyone for wrongthink, including going after employment and family members. It should come as no surprise then that corporations wouldn't participate above the bare minimum on a predominantly leftist forum.

    tstrimple(10000) 6 days ago [-]

    It's likely both. In most large organizations I've worked with, there is a split between true believers and cynics. And often the true believers are so bought in they have trouble recognizing the cynics. There are likely earnest folks behind every bland social media post. Doesn't mean their product is worth anything either way.

    thiht(10000) 6 days ago [-]

    It gives 'how do you do fellow kids' vibes

    hammock(949) 6 days ago [-]

    I don't disagree, but what are they supposed to post otherwise?

    stego-tech(10000) 6 days ago [-]

    Man, this was fun to see in real time. A site whose earliest adopters were Twitter refugees who hated the crypto/AI/NFT boosters, created actual art, and ultimately left Twitter because of rampant fascism and bigotry, effectively cyberbullied the company and its Head of Social Media so badly the latter left the site entirely.

    You have to be pretty bad at your job to misread the room so terribly. Just taking a casual look at Clearsky's block rankings would show how many lists are specifically blocking and targeting brands, griftos, fascists, and bigots of various stripes, and likely dissuade you from approaching the community without some form of battle plan.

    Treating BlueSky like a "new Twitter" is a dangerous mistake to make, something Adobe learned the hard way. To make matters worse, they also poisoned the community well to the point there's a fresh witch hunt out for brands and companies to add to block lists, thus harming everyone else's "engagement".

    junto(3088) 6 days ago [-]

    This is a spot on analysis. Bluesky and Mastodon are full of people that felt and continue to feel disenfranchised and excluded. They embraced Bluesky because it reminded them of what Twitter used to be and had found themselves what they felt was a relatively safe space.

    Companies like Adobe and other major tech players have enabled the hostile environment we see growing every day. It's no wonder that disingenuous posts like this from predatory companies receive such a backlash.

    Apreche(10000) 7 days ago [-]

    I'm always the first one to criticize companies for exploitative and evil business practices. Adobe is far from innocent. However, I will argue their subscription model itself is actually better than the previous model.

    The reality is that Adobe has a large team of engineers to create and maintain several high end professional digital art creation tools. They also frequently add new and excellent features to those tools. That costs money. This money has to come from somewhere.

    With the old model Creative Suite 6 Master Collection cost over $2600. They updated that software every two years. The maximum Creative Cloud subscription today costs $1440 for two years. They even have a cheap Photography plan for $20 a month with Photoshop and Lightroom. That's $480 for two years. Photoshop 6 cost $700+ alone all by itself with no Lightroom.

    Why would Adobe allow for much lower prices, even considering inflation? Because they get reliable cash flow. Money keeps coming in regularly. That's much easier for keeping people employed and paid than a huge cash infusion every other year and a trickle until your next release. It's just not feasible to sell software that way anymore.

    Of course the argument is that with the old model you didn't need to update. You could just pay for CS5 or 6 and use it forever without ever paying again. That's true. And I guess that's viable if you are want software that is never updated, never gets new features, and never gets bugfixes and support. I would argue that a user that can get by without updating their tools, and has no use for new features, is not a professional. They can get by with free or cheap competitors, and they should.

    Professional digital artists do need and want those updates. They are the kind of people that were buying every version of Creative Suite in the old model. For those users, paying a subscription is a huge improvement. It keeps the updates and bugfixes coming regularly instead of rarely. It funds development of new and powerful features. It keeps Adobe solvent, so the software doesn't die. It lowers the overall price paid by the user significantly.

    Plenty of things we can criticize with Adobe. Bugs they haven't fixed. Crashy software sometimes. Products they come out with and then give up on. Doing dark patterns and fees to prevent people from unsubscribing. But the subscription model itself is a net positive compared to the old way.

    vachina(10000) 7 days ago [-]

    > than a huge cash infusion every other year and a trickle until your next release

    It's a very good incentive to keep the entire company on their toes. Adobe will have to keep making new features for people to justify paying for a new version, instead of rehashing the same software, and then rent-seek with a subscription.

    vunderba(10000) 7 days ago [-]

    There are plenty of successful subscription based models that allow you to fallback on a perpetual license for the last annual version that you paid for, e.g. the Jetbrains model.

    As a 'professional' I have zero interest in renting the tools of my trade.

    ferguess_k(10000) 7 days ago [-]

    The first comment seems to be interesting:

    > I don't like subscriptions but that's not the biggest problem. The biggest issue is Adobe's software has been getting worse as the years have passed. It's slow, incredibly buggy, their new features are often an embarrassment, and Adobe seems to do nothing other than increasing prices. And therein lies the issue with subscriptions - the user keeps paying higher prices and the company has zero motivation to fix bugs

    I wonder how hard it is to create the core functionalities of Adobe Photoshop. Maybe many people have different definitions of what are the core functionalities, thus turning making a replacement software very tough.

    thejohnconway(10000) 7 days ago [-]

    There's plenty of replacements which are fine. Many are better to use for many tasks. The problem is lock-in in professional contexts. Having a problem with some feature in a PSD? "I don't wanna pay for Photoshop" isn't usually an acceptable excuse.

    If open source projects and other companies had gathered around an open file format, maybe there would be some leverage, but they all use their own formats.

    55555(3595) 6 days ago [-]

    Adobe runs what must be one of the largest deceptive rebills. The vast majority of users signing up for a monthly plan do not realize that it is actually an 'annual plan, billed monthly' and thus that if they cancel after one month (for example) they'll be billed for the remaining 11 immediately. I honestly don't know how they haven't faced FTC action for this, as it's been their primary model for 5-10 years now.

    sepositus(10000) 6 days ago [-]

    Wasn't there some action around this like a year ago? Can't find it now, but I thought it was investigated at some point.

    speff(10000) 6 days ago [-]

    I still don't see why this is a point against Adobe. When you select a plan, they very clearly give you 3 options. Monthly, Annual billed monthly, and Annual prepaid. The Annual billed monthly is just flat-out better for end users over prepaid. Why do people want to get rid of it? Because some people FAFO when trying to get an annual price while still being able to cancel any time?

    I do not like Adobe in the slightest, but it's not because of their billing practices.

    sanswork(10000) 6 days ago [-]

    I just went back through the sign up process to check and it seems pretty obvious these days? I got three options at checkout annual billed monthly, monthly, annual.

    I hate annual billed monthly but the wording isn't hidden.

    vishnugupta(10000) 6 days ago [-]

    Almost every single one of Adbobe post on HN has a top comment about this evil subscription plan.

    I fell for it once. But I'm in India so I just cancelled my debit card and that was that. Good luck to them to chase me through legal means in India. It was still bit of a hassle though.

    devsda(10000) 6 days ago [-]

    > actually an 'annual plan, billed monthly' and thus that if they cancel after one month (for example) they'll be billed for the remaining 11 immediately

    I don't know if this is a recent policy change, but it is not the complete amount but only 50% of the remaining annual amount as per their website[1].

    If it were something involving physical goods or services I can understand, but 50% penalty is still a crazy amount for a hosted software service.

    1. https://www.adobe.com/legal/subscription-terms.html

    sethammons(3653) 6 days ago [-]

    We successfully stopped paying for a collection of Adobe products that were for a student license last year. We randomly were charged again in January and February of this year and when I called they couldn't find any records of charges. They recommended contesting the charges on the card and we've not been charged since. Still, crazy that they couldn't even verify they charged my card.

    KurSix(10000) 6 days ago [-]

    Yeah, that whole 'annual plan billed monthly' thing feels intentionally shady

    gcau(10000) 6 days ago [-]

    When I tried to cancel a regular monthly subscription, they tried to force me to pay a fee to be able to cancel the subscription, and they don't let you disconnect your payment methods. Luckily, I used paypal so I could unauthorise them on paypal. If this happened again to me I would be contacting the consumer rights organisation my country has.

    maccard(3637) 6 days ago [-]

    I don't get it, honestly. It's very clear. You get a discount for an annual commitment and they let you pay monthly. It's super clear which you're signing up for when you do it. I'm in the UK, and there's a 14 day cooling off period on the plans too, unless you buy the full blown annual one.

    I'm no adobe supporter generally, and sure they could do more, but they take an awful lot of flak for people who won't read two lines of text and then scream bloody murder.

    ciabattabread(10000) 6 days ago [-]

    I have one of those 'annual plan, billed monthly'. How the hell do I figure out when I initially signed for it? Along the way, I got two free months for getting a Logitech mouse, does that change my annual month?

    __jonas(10000) 6 days ago [-]

    Yeah this is terrible, I remember for creative suite there used to be some weird workaround where you could switch your plan to the cheapest one (I think it was Photoshop+Lightroom) and then cancel, and then it would not charge you for the remaining time. I wonder if that still works.

    ivolimmen(10000) 6 days ago [-]

    I would love to know how this goes in the Netherlands where we have strict rules on this. If it's not really clear rules dictate the customer is right, so that yearly subscription is simply a monthly subscription.

    ziml77(10000) 6 days ago [-]

    I looked at their plans a few years back and it was very clear that they had 3 payment options: Monthly, Annual, and Annual billed Monthly. Of course if you get the third option, getting out of the contract is going to cost you. Otherwise what would ever be the point of choosing the Monthly plan when both Annual options have a discount for going with a longer subscription period?

    mk89(10000) 6 days ago [-]

    Out of curiosity I went to their website to understand how they sell it, because it wasn't clear...

    https://www.adobe.com/products/photoshop/plans.html

    I am not sure why this should face FTC or any similar mechanism to prevent 'deception'.

    It's written right there:

    US$22.99/mo Annual, billed monthly

    And if you slightly scroll down the very first question is how much it costs:

    > There are several Creative Cloud plans that include Photoshop. You can purchase it as a standalone app for US$22.99/mo. for the annual billed monthly plan or opt for annual billing at US$263.88/yr.

    Buying it with the annual billing would save you 1$ per month.

    I have seen this model used elsewhere: if you opt in for the yearly subscription, you still pay per month but you save X% over the monthly subscription.

    Not sure what could they do to make it more obvious, besides writing big: we only offer yearly subscriptions, although you can pay monthly..

    Edit: if you click on buy it, it leads to another option too, the monthly one. Is this the scam one? Because it says you cancel any time...

    Edit again: it seems that they did quite some nasty stuff in the past and then US sued them, so now they are more transparent about their subscriptions.

    God bless such organizations that sue the hell out of such bad actors until they behave well.

    madaxe_again(10000) 6 days ago [-]

    I found this out the hard way...

    But you know what? Karma's a bitch. I think I am likely not alone in having used a cracked version of photoshop for far, far more time than I ever did an actual paid up copy.

    I'm not unaware that piracy was part of their strategy for market penetration, and I guess it's now a case of "we have the market cornered, let's monetise".

    madeofpalk(10000) 6 days ago [-]

    > I honestly don't know how they haven't faced FTC action for this

    FTC Takes Action Against Adobe and Executives for Hiding Fees, Preventing Consumers from Easily Cancelling Software Subscriptions

    June 17, 2024

    https://www.ftc.gov/news-events/news/press-releases/2024/06/...

    ajxs(3616) 6 days ago [-]

    I posted elsewhere in this thread that when I tried to cancel, and discovered that I was actually paying for an annual plan on a monthly basis, I told their support person I'd be speaking with the local consumer affairs regulator[1]. They instantly waived the cancellation fee. I'm tempted to think they've had some trouble with regulators on this issue before.

    1: https://www.fairtrading.nsw.gov.au/

    mjmas(10000) 6 days ago [-]

    It seems like this would/should be covered under Australia's unfair contracts law, which requires the term to have a legitimate interest as well as being transparent (which I dont think would be met if they are charging 50% of the remainder, when they would have been happy for you to get a monthly subscription and cancel after a month, only having spent a fifth of what they would charge for termination)

    mattskr(10000) 7 days ago [-]

    Controversial take: I'm happy they went monthly paid subscription. You think a budding graphic designer of one year could afford the $1,500+ up front cost? The seven seas were the only option.

    HOWEVER, 60 a month is too high for a product quality that is tanking. I was okay with it the first few years, but PS and Illustrator's performance noticeably have gone straight to shit for absolutely no benefit except for a little stupid gimmicks that offer zero productivity boosts. Indesign, they've mostly left alone, which I'm happy about because it's like Oreos. Stop fucking with the recipe, you made the perfect cookie. There are no more kingdoms to conquer. Simply find performance boosts, that's it. The reliability of my files and getting work done is more important than anything else. Truly. That's what Adobe USED to stand for. Pure raw UI intuitive productivity and getting shit done. Now, it's a fucking clown show that cares about their social media and evangelism.

    I hear on the video side they've super dropped the ball, but I'm not much for motion graphics outside of Blender.

    Stop with the bullshit 'telemetry' garbage that bogs down my computer and AI scrapping of our data. Old files that used to run fine on my older computers run like shit on my new one. I know damn well there's bullshit going on in the background. That's 80% of the issue. The other 20% of problems are running of the mill stuff.

    I am perfectly happy paying for functional, productive software. 60 bucks a month for that is fine as a freelance graphic designer and marketer. However creative cloud is quickly becoming dysfunctional and unproductive. That's the problem.

    Suppafly(10000) 7 days ago [-]

    >You think a budding graphic designer of one year could afford the $1,500+ up front cost?

    Yes? It's pretty normal to take out a loan or use a credit card to purchase tools to setup your career for years to come. That budding graphic designer probably spent $2000+ on a new Mac. Honestly though subscriptions only make sense for business customers, they really fuck over the home users that would like to buy the software once and use it for several years. Hobby photographers and such are either priced out of the market, or stuck with old computers running older versions from before the subscription push.

    bigstrat2003(10000) 6 days ago [-]

    I don't really agree with the cost argument when the subscription is more expensive in the long run. Nobody needs to upgrade Photoshop every year, they're going to go 2-3 years (if not more) between upgrades. And when you do that, it's much cheaper to buy up front.

    Renting software is just plain a raw deal for the users. It's more expensive, plus you don't get to keep it after you stop paying. The only one who wins is the vendor.

    nashashmi(10000) 7 days ago [-]

    Companies should stay off social media ... Unless they are social companies. Companies that try to advertise on social media to their consumer base do harm to the social aspect. This is why twitter and Facebook and instagram went from healthy social interactions to just marketing fluffs giving the media companies heavier valuation

    broodbucket(3091) 7 days ago [-]

    Notoriously user-hostile companies should, at least.

    greatgib(3476) 7 days ago [-]

    Somehow Adobe can say thank you, for free they get honest feedback about the crap they do without having to hire an expensive consulting firm or a survey company.

    Now they can know why their sells are platoning at least and people would churn as must as possible.

    broodbucket(3091) 7 days ago [-]

    As per those leaks, Adobe employees are already very aware that everyone despises them.

    fortran77(109) 7 days ago [-]

    BlueSky can be brutal! I wonder how it got a reputation of being the kinder, gentler alternative?

    skyyler(10000) 7 days ago [-]

    BlueSky is a very kind place in my experience. I don't get people asking me to justify my existence like I do on Twitter.

    Seriously, people on Twitter demand I debate them about the validity of my life. That has yet to happen on BlueSky.

    broodbucket(3091) 7 days ago [-]

    People interact with brands differently to how they interact with humans.

    abhinavk(3312) 6 days ago [-]

    It's kinder to people, especially kind people.

    rsynnott(10000) 5 days ago [-]

    Adobe isn't a person.

    moonlion_eth(10000) 7 days ago [-]

    Alternative social media contains alternative personalities

    sandspar(10000) 6 days ago [-]

    'Join our site if you're enraged' users act enraged.





    Historical Discussions: The path to open-sourcing the DeepSeek inference engine (April 14, 2025: 549 points)
    The Path to Open-Sourcing the DeepSeek Inference Engine (April 14, 2025: 3 points)

    (549) The path to open-sourcing the DeepSeek inference engine

    549 points 4 days ago by Palmik in 2404th position

    github.com | Estimated reading time – 3 minutes | comments | anchor

    The Path to Open-Sourcing the DeepSeek Inference Engine

    A few weeks ago, during Open Source Week, we open-sourced several libraries. The response from the community has been incredibly positive - sparking inspiring collaborations, productive discussions, and valuable bug fixes. Encouraged by this, we've decided to take another step forward: contributing our internal inference engine back to the open-source community.

    We are deeply grateful for the open-source ecosystem, without which our progress toward AGI would not be possible. Our training framework relies on PyTorch, and our inference engine is built upon vLLM, both of which have been instrumental in accelerating the training and deployment of DeepSeek models.

    Given the growing demand for deploying models like DeepSeek-V3 and DeepSeek-R1, we want to give back to the community as much as we can. While we initially considered open-sourcing our full internal inference engine, we identified several challenges:

    • Codebase Divergence: Our engine is based on an early fork of vLLM from over a year ago. Although structurally similar, we've heavily customized it for DeepSeek models, making it difficult to extend for broader use cases.
    • Infrastructure Dependencies: The engine is tightly coupled with our internal infrastructure, including cluster management tools, making it impractical for public deployment without significant modifications.
    • Limited Maintenance Bandwidth: As a small research team focused on developing better models, we lack bandwidth to maintain a large open-source project.

    Considering these challenges, we've decided to collaborate with existing open-source projects as more sustainable alternatives.

    Moving forward, we will work closely with existing open-source projects to:

    • Extract Standalone Features: Modularize and contribute reusable components as independent libraries.
    • Share Optimizations: Contribute design improvements and implementation details directly.

    We are profoundly grateful for the open-source movement - from operating systems and programming languages to machine learning frameworks and inference engines. It's an honor to contribute to this thriving ecosystem and to see our models and code embraced by the community. Together, let's push the boundaries of AGI and ensure its benefits serve all of humanity.

    Note

    To clarify, this article outlines our approach to open-sourcing of our DeepSeek-Inference-Engine codebase only. Regarding future model releases, we maintain an open and collaborative stance towards both the open-source community and hardware partners. We commit to proactively synchronizing inference-related engineering efforts prior to new model launches, with the goal of enabling the community to achieve state-of-the-art (SOTA) support from Day-0. Our ultimate aim is to foster a synchronized ecosystem where cutting-edge AI capabilities can be seamlessly implemented across diverse hardware platforms upon official model releases.




    All Comments: [-] | anchor

    londons_explore(10000) 4 days ago [-]

    'We have something that would be of interest to the open source community, but it needs a lot of tidying to even run outside our company, and we don't have the manpower to properly maintain it when released'.

    Plenty of companies are in this position.

    Please just open source anyway with a note saying 'we won't be maintaining this, but feel free to fork!'

    lolinder(2685) 3 days ago [-]

    Unfortunately that's not really feasible in the current state of open source. There are enormous numbers of entitled users out there who become a parasitic drain on any project that is open sourced. Solo maintainers can theoretically just develop a think skin, but companies can actually find that the damage to their public image from not having their FOSS project in tip top shape is greater than the benefits of open sourcing it in the first place.

    rfoo(10000) 4 days ago [-]

    tl;dr 'we had our vLLM fork and it's unmaintainable now; guess we are going to rebuild it, in the public this time'

    Havoc(10000) 4 days ago [-]

    Unmaintainable seems unduly harsh. There is a big gap between maintainable internally and ready for public consumption

    lukeschlather(10000) 4 days ago [-]

    I get the impression their setup is very hard to maintain but it's worth every penny. They've done optimizations that wring incredible performance out of the hardware they have, but they also have specific machine configurations and I wouldn't be surprised if they have complicated hacks that get 100% speedups for some stuff but those speedups disappear if you have a slightly different motherboard configuration. Also there's suggestion they've made firmware hacks which are worth it at their scale, but might be very dangerous and difficult to apply especially on a small scale. (And some of their hacks might involve both firmware and cluster-level optimizations, which would be useless or counterproductive independently.)

    And even if you have somewhat similar hardware, the code might not be that helpful, you might be better off with a sketch of the solution and implementing it yourself. If you've got a large enough cluster it's going to pay for itself anyway.

    maknee(10000) 4 days ago [-]

    They're going to spend time and effort into making their optimizations public. Would you rather have them keep their changes internal?

    vintagedave(3405) 4 days ago [-]

    I really empathised with this part:

    > Codebase Divergence: Our engine is based on an early fork of vLLM from over a year ago. Although structurally similar, we've heavily customized it for DeepSeek models, making it difficult to extend for broader use cases.

    I've been there. Probably a few of us have.

    Their approach of working on splitting out maintainable sublibraries and sharing info directly even if not integrated seems a really nice way of working with the community -- ie, they have obstacles, but they're not letting the obstacles cause them to take the easy route of not contributing at all. And while it might seem better to someone wanting to use their techniques to share only working code, not info on the techniques, at least it's still knowledge sharing. And again I think it'd be easier for them not to do it. So kudos to them.

    rvnx(837) 4 days ago [-]

    They customized and optimized vLLM for their use case, so much that it became a different product (e.g. Debian vs Ubuntu).

    The fact they share back some of their improvements is great.

    bonoboTP(10000) 4 days ago [-]

    Non-runnable code can be really useful. I often wish it was available for some papers even if I never run it just to check what they actually did, because text and equations are often not specific enough.

    oldgun(2995) 4 days ago [-]

    Nice. We've seen some good engineering work from DeepSeek. Keep it coming.

    jimmydoe(10000) 4 days ago [-]

    yes, before usa figures out a way to tariff open source.

    nashashmi(10000) 4 days ago [-]

    I feel like this is one way to implement censorship.

    sampton(10000) 4 days ago [-]

    There's an ongoing debate whether LLM should be considered intelligent when it's just generating tokens from latent space. Meanwhile there are humans that are only capable of spitting out the same 5 tokens yet still considered to be 'intelligent'.

    avodonosov(10000) 4 days ago [-]

    What motivates the commercial AI companies to share their research results and know-how?

    Why did Google published the Transformer architecture instead of keeping it to themselves?

    I understand that people may want to do good things for humanity, facilitate progress, etc. But if an action goes against commercial interest, how can the company management take it and not get objections from shareholders?

    Or there is a commercial logic that motivates sharing of information and intellectual property? What logic is that?

    lofaszvanitt(10000) 4 days ago [-]

    The more people copy your outdated thing, the better for you, because they always gonna lag behind you.

    bcoughlan(10000) 4 days ago [-]

    I would guess it comes down to that the best researchers in the world want their work out in the open

    nodja(10000) 4 days ago [-]

    My understanding is that frontier researchers will work for companies that will let them publish papers and discuss them with their peers.

    When you're an engineer at the tier of these AI researchers, winning an extra 100k/year on top of you current 500k (numbers out of my ass) is not worth it vs getting name recognition. Being known as one of the authors that made the transformer for example will enable you work with other bright minded individuals and create even better things.

    So essentially these commercial companies have 'we'll let you publish papers when you work for us' as a perk.

    Der_Einzige(10000) 4 days ago [-]

    The ACL, NeurIPS, ICLR and the rest of AI professional organizations are why this happens. Forced open sourcing of everything. No pay to access. It's the ideal open academic environment for rapid innovation. We must jealously defend our current system, as it will soon come under attack by those who get angry about democratization of the means of computation.

    Also, lots of copyright abolitionists in AI. Many people who work in the space delight in the idea of making information, especially their own, free.

    The ghost of Aaron Swartz runs through every researcher in this space.

    xwolfi(10000) 3 days ago [-]

    Well Deepseek's survival also depends on the giant amount of hype they can generate, and they won't get more investor money just by having done a one-hit wonder. Becoming deeply integrated in the AI ecosystem with various tools and innovative discoveries will most like be more beneficial than protecting the secrets of their first success.

    Kholin(3642) 3 days ago [-]

    This may be related to Google's business model. Google's main businesses - search engine and advertising - both rely on an open web ecosystem. Therefore, Google has long maintained a friendly attitude toward open source and the open web, such as with Chromium, Noto fonts, Go, Flutter, and others. By providing infrastructure tools that benefit the open web, Google extends the reach of its searchable content and advertising. When the entire Web ecosystem benefits, Google ultimately benefits as well. This model also aligns with the philosophy of the open source community, where everyone is a beneficiary and naturally becomes a contributor.

    larodi(10000) 3 days ago [-]

    Indeed, is there a chance Google did not evaluate properly what the transformer will eventually be used for/become. It was created for translation as an improvement on seq2seq, right? Which was for translation, not for thinking, and to a certain extent... still is about translation, and are not other emergent capabilities actually a side-effect, only observed later when parameter size grew?

    anon373839(3592) 3 days ago [-]

    > Or there is a commercial logic that motivates sharing of information and intellectual property? What logic is that?

    There absolutely is a sound commercial justification to share research: long-term growth through advancement of the field. (Deep learning would never have made the progress it has without open research!)

    If this seems quaint, it's because we're too accustomed to short-term, transactional, Wall Street thinking.

    0x008(3656) 3 days ago [-]

    All of the major labs have one thing in common: they have nearly unlimited data and money, but what they don't have unlimited is talent and ideas. It's just a way of progressing without having to „hire every idea".

    HH_GU(10000) 3 days ago [-]

    Just as the company's name DEEPSEEK, it's commercial company and invest their based on AI, but the company's founder has more targets which are more common for human. Money is number for them, they want to do more, especially for DEEPSEEK.

    runeks(3352) 3 days ago [-]

    > Why did Google published the Transformer architecture instead of keeping it to themselves?

    Because they make their money from advertisements. Not their AI models. Same for Meta.

    Compare that to e.g. OpenAI who's trying to make money from their AI models, and are thus underbid by Google and Meta.

    choonway(10000) 3 days ago [-]

    If you don't allow them to publish research work, your greatest talents will leave.

    I used to work in such a restrictive environment. Nobody worth their salt stayed long.

    bobxmax(10000) 3 days ago [-]

    It's worth noting that, while a noteworthy paper, nobody really expected the Transformer at the time to be the breakthrough it eventually became.

    timClicks(3590) 3 days ago [-]

    There are a few commercially valid strategies.

    1. Goodwill and mindshare. If you're known as 'the best' or 'the most innovative', then you'll attract customers.

    2. Talent acquisition. Smart people like working with smart people.

    3. Becoming the standard. If your technology becomes widely adopted, and you've been using it the longest, then you're suddenly be the best placed in your industry to make use of the technology while everyone retools.

    4. Deception. Sometimes you publish work that's 'old' internally but is still state of the art. This provides your competition with a false sense of where your research actually is.

    5. Freeride on others' work. Maybe experimenting with extending an idea is too expensive/risky to fund internally? Perhaps a wave of startups will try. Acquire one of them that actually makes it work.

    6. Undercut the market leader. If your industry has a clear market leader, the others can use open source to cooperate to erode that leadership position.

    buyucu(3661) 3 days ago [-]

    Deepseek is not a commercial AI company. They are the hobby of a hedge fund, something they do on the side for fun and glory.

    victorbjorklund(3408) 3 days ago [-]

    If google never published it (and we pretend like it would not have leaked) then we would never have the LLM:s we have today (including Googles). Everyone would loose.

    animal531(10000) 3 days ago [-]

    I spent the last two or so months using it as an assistant for code and my conclusion is that it is terrible compared to even the free model of ChatGPT.

    The incidence of bugs, it not understanding what you're asking or just generating code that is straight up wrong is much worse. Even with guidance it will often be unable to fix issues, leaving you to do all the manual legwork to get things working. Usually you're better off just having done everything yourself from the start.

    During those two months they really improved GPT as well, its generation speed is now much much faster, and the quality of its output has become a lot better.

    CrimpCity(10000) 3 days ago [-]

    That's interesting since this has been my exact opposite experience.

    What type of coding are you doing? Did you locally roll your own coding assistant with a local model of DeepSeek or are you prompting via the web?





    Historical Discussions: Man who built ISP instead of paying Comcast $50K expands to hundreds of homes (August 10, 2022: 1135 points)
    Man who built ISP instead of paying Comcast expands to hundreds of homes (2022) (April 16, 2025: 546 points)

    (545) Man who built ISP instead of paying Comcast expands to hundreds of homes (2022)

    545 points 1 day ago by voxadam in 666th position

    arstechnica.com | Estimated reading time – 5 minutes | comments | anchor

    Under the contract terms, Mauch will provide 100Mbps symmetrical Internet with unlimited data for $55 a month and 1Gbps with unlimited data for $79 a month. Mauch said his installation fees are typically $199. Unlike many larger ISPs, Mauch provides simple bills that contain a single line item for Internet service and no extra fees.

    Mauch also committed to participate in the Federal Communications Commission's Affordable Connectivity Program, which provides subsidies of $30 a month for households that meet income eligibility requirements.

    The contract requires all project expenses to be incurred by the end of 2024, and for the project to be completed by the end of 2026. But Mauch aims for a much quicker timeline, telling Ars that his 'goal is to build about half of it by the end of this year and the other half by the end of 2023.' The exact funding amount is $2,618,958.03.

    Comcast wanted $50K, AT&T offers just 1.5Mbps

    Operating an ISP isn't Mauch's primary job, as he is still a network architect at Akamai. He started planning to build his own network about five years ago after being unable to get modern service from any of the major ISPs.

    As we wrote last year, AT&T only offers DSL with download speeds up to 1.5Mbps at his home. He said Comcast once told him it would charge $50,000 to extend its cable network to his house—and that he would have gone with Comcast if they only wanted $10,000. Comcast demands those up-front fees for line extensions when customers are outside its network area, even if the rest of the neighborhood already has Comcast service.

    Mauch was using a 50Mbps fixed wireless service before switching over to his own fiber network. In addition to his home Internet customers, Mauch told us he provides free 250Mbps service to a church that was previously having trouble with its Comcast service. Mauch said he also provides fiber backhaul to a couple of cell towers for a major mobile carrier.

    County touts "historic" broadband investment

    Mauch has already hooked up some of the homes on the list of required addresses. Washtenaw County issued a press release after the first home was connected in June, touting a 'historic broadband infrastructure investment' to 'create a path for every household to access high-speed broadband Internet.'

    The county said it is investing $15 million in broadband projects by combining the federal funds with money from the county's general fund. Between Washtenaw Fiber Properties and the other three ISPs selected by local government officials, 'over 3,000 Washtenaw County households will be connected as a result of this investment in the next few years,' the press release said.

    One of the areas covered by Mauch's funding is around a lake in Freedom Township, where he plans to begin construction on August 22, he said. 'Generally speaking, it's a lower income area as well as an area that has been without service for a very long time, aside from cellular or wireless,' he said. 'The goal is to close the gap on them very quickly.'

    As for the other three ISPs, the county was reportedly negotiating with cable giants Comcast and Charter, and Midwest Energy and Communications. Those three companies ended up getting the deals with the county, a contractor working on the overall project confirmed to Ars.

    Under state law, 'Municipalities in Michigan are not simply able to decide to build and operate their own networks, they must first issue an RFP for a private provider to come in and build,' the Institute for Local Self-Reliance's Community Broadband Networks Initiative wrote. 'Only if the RFP receives less than three viable offers can a municipality move forward with building and owning the network. There are also additional requirements that municipalities have to follow, such as holding public forums and submitting cost-benefit analysis and feasibility studies.'

    The county's RFP set 25Mbps download and 3Mbps upload speeds as the minimum acceptable tier but stated a strong preference for 'at least 100Mbps download speeds, ideally with symmetrical upload speeds, from wireline technology to accommodate present and future bandwidth-hungry applications.'

    Mauch faces increasing equipment costs

    Mauch has made some upgrades to his operation. In our previous story, we described how Mauch was renting an air compressor to blow fiber through his conduits. He recently bought an industrial air compressor at a government liquidation auction, spending under $4,000 for equipment that often costs about $20,000, he said. He had previously spent $8,000 on a directional drill machine that installs cables or conduits under driveways and roads without digging giant holes.

    Increasing prices have been a problem. Mauch said he used to buy fiber conduit for 32 cents a foot but that he's paying more than double that now. The handholes that are buried underground at various points throughout Mauch's network used to cost $300 and are now about $700, he said.




    All Comments: [-] | anchor

    pluto_modadic(10000) 1 day ago [-]

    I hope more community ISPs happen <3

    vvpan(3674) 1 day ago [-]

    Comcast and others have been using the corruption of our representatives to push for bans of community ISPs.

    https://www.techdirt.com/2024/11/07/16-u-s-states-still-ban-...

    sneak(874) 1 day ago [-]

    It's illegal in most places, because the large incumbents are using a corrupt government to protect their revenue streams.

    See also: banking, healthcare

    protocolture(10000) 1 day ago [-]

    Me too. I love small ISPs.

    However, I really hope that more small ISP's get their shit together from a cybersecurity perspective. They are generally completely apathetic on the subject.

    whalesalad(363) 1 day ago [-]

    I admire that the homepage for the ISP - https://washftth.com/ - is literally the default Debian Apache/httpd welcome page with new content inserted. The #CD214F color is the giveaway.

    doublerabbit(10000) 1 day ago [-]

    Eww, I am not buying internet from no company that doesn't have a flashy hero banner, 20mb of JavaScript libraries and a Cloudflare captcha.

    Websites like these tend to win subscribers. My ISP was the same when I subscribed.

    1970-01-01(1814) 1 day ago [-]

    To get maximum effect, he now needs to write a book. Eventually, someone will come along and make the book into a movie. Soon after, that movie will be shown via Comcast!

    autoexec(10000) 1 day ago [-]

    Once all the work has been done and this guy is making money I suspect comcast or another ISP will buy his network and the rights to the movie, then jack up the prices considerably so these people will be paying even more to watch it on the ISP owned streaming service

    bentt(10000) 1 day ago [-]

    I really wonder how the availability of Starlink affects these sorts of projects.

    anonfordays(10000) 1 day ago [-]

    This. How is local fiber not the easiest solution to the problem though?

    Nick-W(10000) 1 day ago [-]

    I run a small WISP - most of our new subscribers are coming from Starlink, but we are also cheaper and provide gigabit-class service.

    protocolture(10000) 1 day ago [-]

    Depends.

    If they have half a clue regarding marketing and networking, they are doing fine. Starlink doesnt offer Layer 2 or Managed WAN options (Possibly Vocus is bringing these projects out at some stage on their behalf)

    In dense areas, starlink underperforms. In larger cities Fibre is beloved. Theres a wedge, where WISPS are king and still are king, where the density is just right.

    That said, if you are running a really shitty wisp, and you dont have any business links or complex services. And half your customer base just bailed for starlink, you will likely fold. But honestly, WISP as an industry can do without the cowboys.

    gosub100(10000) 1 day ago [-]

    Let's hope he actually delivers. This company took $9MM of government grants and squandered it:

    https://mynews4.com/on-your-side/ask-joe/ask-joe-usda-cuts-t...

    BryantD(2601) 1 day ago [-]

    The article documents delivery, and a little searching told me that Washtenaw Fiber Properties is still in business at https://washftth.com/ and serving customers.

    bee_rider(10000) 1 day ago [-]

    Should have stolen billions instead, could have become a titan of industry

    paxys(10000) 1 day ago [-]

    Amateur operation. Large ISPs have squandered billions.

    Nick-W(10000) 1 day ago [-]

    Hey! I did this too - CenturyLink wanted an insane amount of money to bring fiber to our place, now we service hundreds and we're growing into a major contender in Boulder County - https://ayva.network

    idiotsecant(10000) 1 day ago [-]

    How did you get the capital and find the time to do this? Is it your full-time gig? I've always fantasized about doing this in my mountain community but it seems spooky

    apercu(10000) 1 day ago [-]

    I'm curious what the economics are these days - I cofounded a small town ISP in the mid-90's (think dial-up) and the largest monthly costs was the 24 commercial phone lines. Even though it was a loss, it was a relief to eventually sell to the local phone company 2 years later.

    robrenaud(3669) 1 day ago [-]

    > In this sparsely populated rural area, 'I have at least two homes where I have to build a half-mile to get to one house,' Mauch said, noting that it will cost 'over $30,000 for each of those homes to get served.'

    Does spending 30k per household connected make any sense?

    lelandfe(10000) 1 day ago [-]

    Just a quick heads up that the homepage video is ~24MB over the wire, even on a phone. That might actually be a challenge if someone's WiFi is down and they're trying to get support over cellular.

    (Huge kudos for this project in general)

    navanchauhan(10000) 1 day ago [-]

    Oh man! Wish I had found out about this 3 years ago. I am graduating in May, and I've had a terrible experience with Xfinity trying to self-host. CenturyLink doesn't even service my apartment complex.

    p.s self-plug: for our senior year capstone we are working on a secure/private home router firmware. Since you are in this space (tangentially) and local, I would love to chat with you

    ufocia(10000) 1 day ago [-]

    Not sure it was that insane. The author quotes a cost of over $30,000 to build a half-mile drop. I find that an insane amount of money that a government would pay to connect just one subscriber.

    water-data-dude(10000) 1 day ago [-]

    "Fully encrypted network with strict privacy policies"

    God I wish that was me. Xfinity has a raised middle finger where the privacy policy should go.

    HaZeust(10000) about 13 hours ago [-]

    Any plans to expand into JeffCo?

    Also, this is a highly highly resource-dependent website. Consider a scale back. It'd be a funny tongue-in-cheek thing if you made it super encumbered and say 'Our customers can load this page just fine!', but it's counter-intuitive for everyone else haha

    BrandoElFollito(3407) about 5 hours ago [-]

    I am in France so not exactly in your coverage but I wanted to not that the comparison card (and the coverage one) do not work correctly.

    The first information is fine (say, speed) but when I switch to latency the graph does not change (and BTW it's not readable on mobile)

    Same for the coverage

    BrandoElFollito(3407) about 5 hours ago [-]

    This is what I love in HN.

    Someone, somewhere says that they built something for a local community and suddenly Joe from Sydney and Marie from Bordeaux are on the site, discussing its tech stack and comparing the pricing in Wakanda.

    Great site.

    Animats(2975) 1 day ago [-]

    Sonic started as a little local ISP in Santa Rosa, CA.[1] Now it's huge in Northern California.

    I have 1GB Sonic bidirectional fiber with unlimited data and could get 10GB if I wanted. The head of Sonic points out that long-haul prices have decreased over the years, and there's no real need for usage limits.

    [1] https://en.wikipedia.org/wiki/Sonic_(ISP)

    enmyj(10000) 1 day ago [-]

    I have missed Sonic every day since moving from Oakland to SF

    scubbo(10000) 1 day ago [-]

    Self-quoting[0]:

    > Sonic has the best customer service of any company ever encountered, and it's not even close. The few times I've had to contact them for assistance, I've been very quickly connected with someone clearly _very_ technical who was able to grok my problem immediately and give clear, cogent, respectful debugging advice and perspective. I do not exaggerate when I say I would gladly pay double their current rate just for the peace of mind of knowing that I can depend on them if I ever need their support again. Not that I often do, because their baseline connectivity/speed is also great.

    >

    > ...yes, I know I look like a shill/bot. I don't care. They're genuinely just that good, and I will happily advocate for them until that ever changes.

    [0] https://news.ycombinator.com/item?id=42252183

    samiwami(10000) 1 day ago [-]

    I have their 10GB line and I could NOT be happier. Only company where I reply to their "please rate us emails'

    e40(3398) about 24 hours ago [-]

    I have their 10G service. I love the company and the people. I remember when I first called them to sign up. At the end of the call with the sales guy I told him that 30 minute conversation was one of the most interesting and fun conversations I had ever had with someone I had just met. It was surreal.

    The installer was super nice and great at their job.

    Their service is so good I have not had an excuse to talk with anyone else.

    Many of my neighbors have switched from Comcast, who I was with for more than 10 years, and hated every second of it. Only AT&T is worse than Comcast, but they are both bottom dwellers.

    bn-l(10000) 1 day ago [-]

    > 1Gbps [symmetrical] with unlimited data for $79 a month.

    This costs $500 in Australia in the inner city.

    jedberg(3314) 1 day ago [-]

    Intertesting. I get 1Gbps symmetric from AT&T for $90/mo (was $70/mo two years ago when this article was written).

    I'm in the Silicon Valley and have multiple ISP options (although AT&T is the only 1000/1000 option).

    I guess our prices stay low because if they went too high it would motivate their competitors to move in.

    kalleboo(3656) 1 day ago [-]

    I pay $35/mo for 10 Gbps in Japan https://www.speedtest.net/result/d/707868117.png

    dboreham(2321) 1 day ago [-]

    Reformed ISP owner here: don't do this. There's a reason the cableco/telco doesn't want to serve these customers.

    ale42(10000) 1 day ago [-]

    What reason? Do you have an experience to share?

    shmerl(10000) 1 day ago [-]

    The reason being greed.

    pavelevst(10000) 1 day ago [-]

    In Russia we get 500-1000mbps (for real) for about 5-10$ monthly, every home has few ISP options with free installation

    sneak(874) 1 day ago [-]

    My home in Las Vegas is 2000Mbps down and 100Mbps up, and it's $200/month. $50/month of that is an add-on for 'unlimited' usage, but Cox still writes me letters and threatens to cancel my service if I upload more than 2-3TB in a calendar month, despite having paid well over $3000 in 'unlimited' add-on upcharges.

    Internet pricing is a scam in the USA.

    krupan(3151) 1 day ago [-]

    I believe that in Russia you wrestle bears and that the only liquid anyone drinks is vodka, but this I simply cannot believe :)

    DiscourseFan(10000) 1 day ago [-]

    Labor costs are lower. The US has the highest cost of labor in the world for many jobs that would be relatively inexpensive elsewhere.

    VTimofeenko(10000) 1 day ago [-]

    Russian public infrastructure is vastly different compared to the US though. It's probably much easier to run Internet to 10 apartment homes housing 1000 people than to 300 single family houses with the same amount of people.

    bufferoverflow(3152) 1 day ago [-]

    In Russia you get pseudo-internet without Youtube, Instagram, X, Discord, The Internet Archive, many news sites.

    yalok(10000) 1 day ago [-]

    Around 8 years ago I saw an AT&T truck on our street, and the guys installing some fiber into our street conduits. I was ecstatic & started checking AT&T website periodically to see when the service will be enabled.

    Guess what? It's still not enabled. AT&T only did it because there was a risk that Google Fiber will do it in our city. Unfortunately, IIUC, Google never could overcome local regulations and abandoned the project. So, AT&T didn't care to light up their fiber (that was already in the ground & ready to go!!!).

    Comcast doesn't offer any cable in my location neither.

    I've been seriously tempted to do it myself too, but doubt I'll ever have time for that - mostly to overcome the local bureaucracy to get all the permits...

    Huge respect to Jared!

    yalok(10000) 1 day ago [-]

    oh, ~4 years ago I talked to Sonic guys at length (great company, btw!) - they were too far north from us, and their estimation was to make it viable for them, they'd need to have around 200 of my neighbors commit to switching to Sonic at once if they lay out the fiber in our location.

    wuming2(10000) 1 day ago [-]

    I read about "future proofing" and "expansion" possibilities of one's fiber connection. And related user equipment.

    My story is in the opposite direction.

    We and everyone else in the neighborhood had symmetrical 1 Gbps installed about 15 years ago. We all paid the ISP for the top tier of full capacity.

    During Covid decided to take inventory of our actual bandwidth needs.

    Anything that can be deferred doesn't count. Gone from instant bandwidth requirements are all cloud backups, OTA OS upgrades and apps updates. They need to complete overnight. Overlapping is not a requirement.

    Videos are automatically played at 480p or less on iPhones, 720p or less on iPads and 1080p or less on HDTV. We purposely didn't buy 4K TV because at our viewing distance has no benefits whatsoever. Aggregate peak bandwidth required here is 25 Mbps at a stretch. That is also enough for my wife to work from home.

    We don't deal with large datasets or raw videos over the internet.

    So we found ourselves with one cable connected TV and the usual assortment of mobile devices connected to one WiFi 4 1x1 hotspot. At 70 Mbps we never noticed any loss of quality in our digital lifestyle.

    After about ten years we replaced the hotspot with one capable of WiFi 5. An overkill but needed the extra port.

    Eventually convinced the ISP to lower our subscription to the lowest available tier of 200 Mbps. We don't notice any difference. We could afford the extra bandwidth. But don't see the benefits of it.

    jeroenhd(3638) 1 day ago [-]

    Gigabit internet, or even >100mbps internet, is burst capacity. Very few people hit gigabit speeds continuously, and those that do often hit either bandwidth caps or fair use policy limitations. It's also why ISPs can use a 10gbps fiber backbone to serve gigabit to 50-100 homes, because the probability of all of those homes capping out their bandwidth at the same time is tiny.

    That's also why a lot of supposedly fast ISPs absolutely crumbled when COVID hit. A lot of people started doing video calls in the morning/afternoons, which suddenly sent latency-sensitive, bidirectional, high-bandwidth data to every corner of the network. Upload speeds collapsed, gigabit networks were struggling to hit a couple hundred mbps, and DSL providers downgraded their customers to 2005 in terms of attainable network speeds.

    For that reason, I think ISPs may as well offer 10gbps as a default. Their customer base is not going to make use of that capacity anyway. Only when downloading a new game, or doing a backup, or uploading a video file somewhere, does that bandwidth become a necessity. If you remove the cap on the bandwidth side, all of that capacity will remain available for a longer period of time for all of the other people in the neighbourhood.

    Some cellular providers used the same reasoning for their plans here a few years back: there were no 4G speed caps, just upload/download as fast as you can, because if you're done doing your file transfer quicker, you're clearing the airwaves for other users. Of course, you'd still pay for those hefty bandwidth caps, charging >€1 per GB per month to rake in the cash.





    Historical Discussions: An intro to DeepSeek's distributed file system (April 17, 2025: 536 points)
    An Intro to DeepSeek's Distributed File System (April 16, 2025: 3 points)

    (536) An intro to DeepSeek's distributed file system

    536 points about 23 hours ago by sebg in 93rd position

    maknee.github.io | Estimated reading time – 11 minutes | comments | anchor

    Series

    What is 3FS?

    3FS (Fire-Flyer File SystemGeez, what a tongue twister) is a distributed filesystem released by DeepSeek during their open source release week. This blog post will dive into what distributed file systems are and how 3FS operates, starting with some background.

    What is a distributed filesystem?

    Distributed filesystems trick applications into thinking they're talking to a regular local filesystem. This abstraction is incredibly powerful: a file that's actually fragmented across 10 different machines appears as a simple file path like /3fs/stage/notes.txt

    Using the distributed filesystem is no different than local filesystem

    In the image above, I create the same folder and file on a local and distributed filesystem by running mkdir and cat. The commands are exactly the same. With a distributed filesystem, all of those details are abstracted away from the user, who can simply work with the files without worrying about how many machines, network calls or disks are involved behind the scene.

    Why use a distributed filesystem?

    Distributed filesystems provide two main advantages over local storage – they can serve massive amounts of data (up to petabytes) and provide high throughput that exceed the capabilities of a single machine. They offer fault tolerance (the system keeps running if one machine goes down) and redundancy (if data gets corrupted on one node, other nodes have original copies).

    Distributed filesystems are used in many practical applications:

    A deep dive into 3FS

    So, how does 3FS work?

    At its core, 3FS consists of four primary node types:

    Components involved in 3FS

    The components serve distinct purposes:

    1. Meta – manage the metadata: file locations, properties, paths, etc.
    2. Mgmtd – management server controls the cluster configuration: where are other nodes, which nodes are alive, and replication factor
      • Think of it as a router that knows every node's address and can help nodes find each otherA similar analogy is the centralized server used in NAT hole punching.
    3. Storage – nodes that hold the actual file data on physical disks.
    4. Client – communicates with all other nodes to view and modify the filesystem:
      • ask Mgmtd to discover other nodes
      • ask Meta servers to perform file operations (open, stat, close, symlink)
      • transfer data with storage nodes

    Now let's look at each component in greater detail.

    Mgmtd

    Mgmtd tracks what nodes are running in the cluster. Storage and meta nodes register when they boot up, sending periodic heartbeats to confirm they're still alive. This gives a central view of the system – one can immediately identify which nodes are down.

    Nodes don't need to maintain connections with every other node in the network. Instead, they can discover nodes by querying the mgmtd node. While this adds an extra round trip when locating nodes, it can reduce complexity since node discovery isn't static.

    Also, Mgmtd maintains the configuration for different nodes operating within a distributed algorithm. In particular, replicated chains (CRAQCRAQ is a pretty neat algorithm that achieves strong consistency with fault tolerance by treating nodes as a chain. I'll explain this in depth in another section.) are established and its nodes are stored as configuration in mgmtd.

    The meta node is a bit more complex than mgmtd. Clients communicate with it via RPC calls. The meta server performs typical filesystem operations (open, create, stat, unlink) on the metastore. File metadata resides in inodes, storing properties like size, permissions, owner, and timestamps. DirEntry objects map paths to inodes, with multiple DirEntries possible for a single file (similar to symlinks). Both inodes and DirEntries are stored in FoundationDBOne might wonder what the keys to founationdb look like? Inode: "INOD + inode id, dir entry: "DENT" + nodeid + path using transactions for idempotent operations. A session manager tracks open files, storing file sessions in FoundationDB. If clients disconnect without closing files, the session manager initiates file syncs. File deletion requests queue to a garbage collector, which removes data from storage nodes before deleting directory entries and inodes.

    Storage

    The storage node's main function is manage data on physical storage by breaking it up into chunks:

    • The Rust ChunkEngineWhy Rust? Well, there's a legacy chunk manager named ChunkStore that's written in C++. I don't see really why in rust, probably because it's interesting to work in and provides more safety guarantees keeps track of blocks of disk storage.
      • Chunks represent a piece of physical disk and keeps track of its metadata (id, size, offset on disk, physical disk, checksums, versions, ...). This is the most primitive data structure that all other structures use to keep track of blocks of data.
      • The chunk engine doesn't allow users to interact with chunks directly since it would add complexity to using engine. The interface to the engine has operations which gives users a rigid and clear way to interact with the engine (lookup, allocation, commit, metadata...)
      • By default, all of this is stored in LevelDB with a prefix byte repesenting the type of operation (querying the metadata) and the chunk id as the key.
    • There are different workers that uses the chunk engine to maintain the physical storage
      • The AllocateWorker allocates new chunks in the chunk engine
      • The PunchHoleWorker reclaims chunks if they're no longer used
      • The AioReadWorker processes reads requests to the chunks and queues reads in io_uring queue, submits and waits for completionInitially, I was surprised. The chunk engine doesn't perform operations on the actual physical disk, it really only manages the metadata. One reason for this might be to keep the ChunkEngine implementation rather lean by having it only try to manage metadata..
    • The storage node needs to know how to forward a write to the next target in a CRAQ chainFor now, just know that writes need to be forwarded to other nodes
      • Targets consist of chunks (think of this as logical store containing different chunks)
      • A chain consists of multiple targets (typically spanning multiple nodes)
      • The storage node queries the mgmtd server for other nodes' chains and the corresponding targets (nodes) in that chain that a write needs to forward to.

    CRAQ

    CRAQ (Chain Replication with Apportioned Queries) is a protocol for achieving strong consistency with linearizability. It serves as the core mechanism to keep data chunks fault-tolerant. I'll explain how CRAQ works and then, show its implementation in 3FS.

    Writes begin at the head. In our example, we write name=henry to the system. As the write moves down the chain, each entry is marked as "dirty" with a version number. Dirty entries aren't safe to read. Once the write reaches the tail, it's committed and marked as "clean".

    Writes become clean as commit messages propagates backward from tail to head. Each node commits the entry and marks it clean.

    For reads, the process is straightforward: if an object is clean, it's immediately returned to the client.

    The challenge occurs with dirty objects. Each chain tracks both dirty and clean versions. Since the tail always contains the latest committed data, the replica queries the tail for the most recent committed object, ensuring strong consistency.

    CRAQ performance

    CRAQ read and write performance varies by workload. Write throughput and latency are limited by the slowest node in the chain, as writes must process through each node sequentially. For example, in zipfian workloads (where frequently accessed data dominates), read performance suffers because objects may be dirty, forcing queries to the tail node. This creates a bottleneck as the tail must serve most of the read requests.

    How is CRAQ used in 3FS

    Storage is striped and CRAQ runs ontop

    In this example, 5 nodes with 5 SSDs each form the cluster. Storage targets replicate to 3 nodes, designed to avoid overlap so that node failures don't affect overall throughput significantlyConsider an extreme scenario where all the chains are placed on nodes 1, 2, 3. If node 1 fails, the distributed system would serve lose 1/3 of the total throughput instead of 1/5 of total throughput shown in the above image. 3FS design notes shows an example with a deeper explanation.. CRAQ operates on top, managing head, middle, and tail nodes.

    3FS defaults to strongly consistent reads. Writes flow from head to tail and back, with throughput limited by the slowest node and latency determined by the combined latency across all chain nodes.

    As shown in the comparison table, in the common case, CRAQ delivers scalable, low-latency reads at the cost of high write latency compared to other protocols and systems.

    Other distributed filesystems

    One might ask – is this architecture different from other distributed filesystems? At a high level, the components are familiar – some notion of client, metadata, storage, and management nodes appear in virtually every distributed system.

    The difference lies in its real-world applicability and practical implementation:

    • which workloads it excels at handling
    • its tuning flexibility
    • deployment simplicity
    • throughput scaling capabilities
    • maintaining latency within SLOs
    • reliability

    and its finer technical details that determines its usability:

    • what bottlenecks are there
    • how it manages bottlenecks
    • its approach to locking (or absence thereof)
    • the specific data structures employed
    • the hardware the software was designed for
    • what fault tolerant algorithm or erasure coding is used

    Rest of the blog series

    With that in mind, I want to dive deep into analyzing the performance of this relatively new open-source distributed filesystemDistributed filesystems come once in blue moon, taking several years to develop. Current benchmarks are rather limited. There's no comparisons with single-node systems and other distributed filesystems, so it's difficult to gauge how well 3FS performs.

    Some questions I want to explore:

    • Do some of the DeepSeek's claims hold up, especially regarding FUSE bottlenecks?
    • Can I reproduce their performance graphs in some way?
    • In what scenario does the performance degrade?
    • What are the system's bottlenecks (CPU/memory/disk/network)?
    • In what types of workloads does the fileysystem excel at?
    • How does it compare with other distributed filesystems?
    • How does it address problems that existing systems face?
    • Am I able to make any improvements to the system?

    Throughout the rest of the series, I will be going through the process of making initial assumptions, testing them, and learning from discrepancies to develop a deeper understanding of how 3FS actually performs.

    More reading

    Implementation details are documented in the design notes.

    Additional technical documentation regarding early implementation phases is available (in Chinese):

    The system architecture is partially documented in the Fire-Flyer AI-HPC paper.

    Acknowledgments

    Thanks to Vimarsh Sathia for reviewing this post.




    All Comments: [-] | anchor

    vFunct(10000) about 22 hours ago [-]

    Can we replicate this with ZFS drives distributed across multiple machines?

    eatonphil(225) about 21 hours ago [-]

    As far as I'm aware ZFS does not scale out.

    https://unix.stackexchange.com/a/99218

    jack_pp(10000) about 22 hours ago [-]

    I don't have direct experience with distributed file systems but it so happens I did a tiny bit of research in the past month and.. there are quite a few open source ones available. Would've been nice for the authors to explain why the already existing solutions didn't work for them.

    dboreham(2321) about 21 hours ago [-]

    They have an HFT background so probably it was developed long ago for that workload (which tends to be outside the design envelope for off the shelf solutions).

    londons_explore(10000) about 22 hours ago [-]

    This seems like a pretty complex setup with lots of features which aren't obviously important for a deep learning workload.

    Presumably the key necessary features are PB's worth of storage, read/write parallelism (can be achieved by splitting a 1PB file into say 10,000 100GB shards, and then having each client only read the necessary shards), and redundancy

    Consistency is hard to achieve and seems to have no use here - your programmers can manage to make sure different processes are writing to different filenames.

    sungam(10000) about 21 hours ago [-]

    I wonder whether it may have been originally developed for the quantitive hedge fund

    threeseed(10000) about 21 hours ago [-]

    > Consistency is hard to achieve and seems to have no use here

    Famous last words.

    It is very common when operating data platforms like this at this scale to lose a lot of nodes over time especially in the cloud. So having a robust consistency/replication mechanism is vital to making sure your training job doesn't need to be restarted just because the block it needs isn't on the particular node.

    jamesblonde(3630) about 21 hours ago [-]

    Architecturally, it is a scale-out metadata filesystem [ref]. Other related distributed file systems are Collosus, Tectonic (Meta), ADLSv2 (Microsoft), HopsFS (Hopsworks), and I think PolarFS (Alibaba). They all use different distributed row-oriented DBs for storing metadata. S3FS uses FoundationDB, Collosus uses BigTable, Tectonic some KV store, ADLSv2 (not sure), HopsFS uses RonDB.

    What's important here with S3FS is that it supports (1) a fuse client - it just makes life so much easiter - and (2) NVMe storage - so that training pipelines aren't Disk I/O bound (you can't always split files small enough and parallel reading/writing enough to a S3 object store).

    Disclaimer: i worked on HopsFS. HopsFS adds tiered storage - NVMe for recent data and S3 for archival.

    [ref]: https://www.hopsworks.ai/post/scalable-metadata-the-new-bree...

    nickfixit(10000) about 21 hours ago [-]

    I've been using JuiceFS since the start for my AI stacks. Similar and used postgresql for the meta.

    threeseed(10000) about 21 hours ago [-]

    Tiered storage and FUSE has existed with Alluxio for years.

    And NVMe optimisations e.g. NVMeoF in OpenEBS (Mayastor).

    None of it is particularly ground breaking just a lot of pieces brought together.

    objectivefs(10000) about 21 hours ago [-]

    There is also ObjectiveFS that supports FUSE and uses S3 for both data and metadata storage, so there is no need to run any metadata nodes. Using S3 instead of a separate database also allows scaling both data and metadata with the performance of the S3 object store.

    joatmon-snoo(3472) about 18 hours ago [-]

    nit: Colossus* for Google.

    MertsA(10000) about 17 hours ago [-]

    >Tectonic some KV store,

    Tectonic is built on ZippyDB which is a distributed DB built on RocksDB.

    >What's important here with S3FS is that it supports (1) a fuse client - it just makes life so much easier

    Tectonic also has a FUSE client built for GenAI workloads on clusters backed by 100% NVMe storage.

    https://engineering.fb.com/2024/03/12/data-center-engineerin...

    Personally what stands out to me for 3FS isn't just that it has a FUSE client, but that they made it more of a hybrid of FUSE client and native IO path. You open the file just like normal but once you have a fd you use their native library to do the actual IO. You still need to adapt whatever AI training code to use 3FS natively if you want to avoid FUSE overhead, but now you use your FUSE client for all the metadata operations that the native client would have needed to implement.

    https://github.com/deepseek-ai/3FS/blob/ee9a5cee0a85c64f4797...

    randomtoast(10000) about 21 hours ago [-]

    Why not use CephFS instead? It has been thoroughly tested in real-world scenarios and has demonstrated reliability even at petabyte scale. As an open-source solution, it can run on the fastest NVMe storage, achieving very high IOPS with 10 Gigabit or faster interconnect.

    I think their 'Other distributed filesystem' section does not answer this question.

    tempest_(10000) about 21 hours ago [-]

    We have a couple ceph clusters.

    If my systems guys are telling me the truth is it a real time sink to run and can require an awful lot of babysitting at times.

    elashri(1455) about 20 hours ago [-]

    CERN use CephFS with ~50PB for different applications and they are happy with it.

    charleshn(10000) about 18 hours ago [-]

    Because it's actually fairly slow.

    Among other things, the OSD was not designed with NVMe drives in mind - which is fair, given how old it is - so it's nowhere close to being able to handle modern NVMe IO throughput and IOPS.

    For that you need zero-copy, RDMA etc.

    Note that there is a next-generation OSD project called Crimson [0], however it's been a while, and I'm not sure how well it's going. It's based on the awesome Seastar framework [1], backing ScyllaDB.

    Achieving such performance would also require many changes to the client (RDMA, etc).

    Something like Weka [2] has a much better design for this kind of performance.

    [0] https://ceph.io/en/news/crimson/

    [1] https://seastar.io/

    [2] https://www.weka.io/

    skrtskrt(10000) about 16 hours ago [-]

    DigitalOcean uses Ceph underneath their S3 and block volume products. When I was there they had 2 teams just managing Ceph, not even any of the control plane stuff built on top.

    It is a complete bear to manage and tune at scale. And DO never greenlit offering anything based on CephFS either because it was going to be a whole other host of things to manage.

    Then of course you have to fight with the maintainers (Red Hat devs) to get any improvements contributed, assuming you even have team members with the requisite C++ expertise.

    huntaub(3291) about 21 hours ago [-]

    I think that the author is spot on, there are a couple of dimensions in which you should evaluate these systems: theoretical limits, efficiency, and practical limits.

    From a theoretical point of view, like others have pointed out, parallel distributed file systems have existed for years -- most notably Lustre. These file systems should be capable of scaling out their storage and throughput to, effectively, infinity -- if you add enough nodes.

    Then you start to ask, well how much storage and throughput can I get with a node that has X TiB of disk -- starting to evaluate efficiency. I ran some calculations (against FSx for Lustre, since I'm an AWS guy) -- and it appears that you can run 3FS in AWS for about 12-30% cheaper depending on the replication factors that you choose against FSxL (which is good, but not great considering that you're now managing the cluster yourself).

    Then, the third thing you start to ask is anecdotally, are people able to actually configure these file systems into the size of deployment that I want (which is where you hear things like 'oh it's hard to get Ceph to 1 TiB/s') -- and that remains to be seen from something like 3FS.

    Ultimately, I obviously believe that storage and data are really important keys to how these AI companies operate -- so it makes sense that DeepSeek would build something like this in-house to get the properties that they're looking for. My hope is that we, at Archil, can find a better set of defaults that work for most people without needing to manage a giant cluster or even worry about how things are replicated.

    jamesblonde(3630) about 21 hours ago [-]

    Maybe AWS could start by making fast NVMes available - without requiring multi TB disks just to get 1 GB/s. S3FS experiments were run on 14 GB/s NVMe disks - an order of magnitude higher throughput than anything available in AWS today.

    SSDs Have Become Ridiculously Fast, Except in the Cloud: https://news.ycombinator.com/item?id=39443679

    KaiserPro(10000) about 2 hours ago [-]

    the other important thing to note is what is that filesystem designed to be used for?

    For example 3FS looks like its optimised for read throughput (which makes sense, like most training workloads its read heavy.) write operations look very heavy.

    Can you scale the metadata server, what are the cost of metadata operations? Is there a throttling mechanism to stop a single client sucking all of the metadata server's IO? Does it support locking? Is it a COW filesystem?

    stapedium(10000) about 21 hours ago [-]

    I'm just a small business & homelab guy, so I'll probably never use one of these big distributed file systems. But when people start talking petabytes, I always wonder if these things are actually backed up and what you use for backup and recovery?

    huntaub(3291) about 21 hours ago [-]

    Well, for active data, the idea is that the replication within the system is enough to keep the data alive from instance failure (assuming that you're doing the proper maintenance and repairing hosts pretty quickly after failure). Backup and recovery, in that case, is used more for saving yourself against fat-fingering an 'rm -rf /' type command. Since it's just a file system, you should be able to use any backup and recovery solution that works with regular files.

    shermantanktop(10000) about 19 hours ago [-]

    Backup and recovery is a process with a non-zero failure rate. The more you test it, the lower the rate, but there is always a failure mode.

    With these systems, the runtime guarantees of data integrity are very high and the failure rate is very low. And best of all, failure is constantly happening as a normal activity in the system.

    So once you have data integrity guarantees that are better in you runtime system than your backup process, why backup?

    There are still reasons, but they become more specific to the data being stored and less important as a general datastore feature.

    ted_dunning(10000) about 16 hours ago [-]

    It is common for the backup of these systems to be a secondary data center.

    Remember that there are two purposes for backup. One is hardware failures, the second is fat fingers. Hardware failures are dealt with by redundancy which always involves keeping redundant information across multiple failure domains. Those domains can be as small as a cache line or as big as a data center. These failures can be dealt with transparently and automagically in modern file systems.

    With fat fingers, the failure domain has no natural boundaries other than time. As such, snapshots kept in the file system are the best choice, especially if you have a copy-on-write that can keep snapshots with very little overhead.

    There is also the special case of adversarial fat fingering which appears in ransomware. The answer is snapshots, but the core problem is timely detection since otherwise you may not have a single point in time to recover from.

    dilyevsky(10000) about 9 hours ago [-]

    > what you use for backup and recovery

    Speaking from experience working at a hyperscaler - 1. cross-regional mirroring 2. Good old tape backups

    KaiserPro(10000) about 2 hours ago [-]

    Depends on what the data is.

    Because of the replication factor here, I assume that this filesystem is optimised for read throughput rather than capacity. Either way, there is a concept of 'nearline' storage. Its a storage tier that is designed to be only really accesed by a backupagent. The general idea is that it stores a snapshot of the main file system every n hours.

    After that you have as many snapshots as you can afford.

    mertleee(10000) about 21 hours ago [-]

    What are the odds 3fs is backdoored?

    huntaub(3291) about 21 hours ago [-]

    I think that's a pretty odd concern to have. What would you imagine that looks like? If you're running these kinds of things securely, you should be locking down the network access to the hosts (they don't need outbound internet access, and they shouldn't need inbound access from anything except your application).

    MaxPock(10000) about 18 hours ago [-]

    By, NSA or Britain's GCHQ which wants all software backdoored?

    robinhoodexe(2201) about 21 hours ago [-]

    I'm interested in how it is compared to seaweedfs[1], which we use for storing weather data (about 3 PB) for ML training.

    [1] https://github.com/seaweedfs/seaweedfs

    huntaub(3291) about 21 hours ago [-]

    My guess is going to be that performance is pretty comparable, but it looks like Seaweed contains a lot more management features (such as tiered storage) which you may or may not be using.

    rfoo(10000) about 20 hours ago [-]

    IMO they serve similar at a glance, but actually very different use cases.

    SeaweedFS is more about amazing small object read performance because you effectively have no metadata to query to read an object. You just distribute volume id, file id (+cookie) to clients.

    3FS is less extreme in this, supports actual POSIX interface, and isn't particularly good at how fast you can open() files. On the other hand, it shards files into smaller (e.g. 512KiB) chunks, demands RDMA NICs and makes reading randomly from large files scary fast [0]. If your dataset is immutable you can emulate what SeaweedFS does, but if it isn't then SeaweedFS is better.

    [0] By scary fast I mean being able to completely saturate 12 PCIe Gen 4 NVMe SSD at 4K random reads on a single storage server and you can horizontally scale that.

    seethishat(10000) about 20 hours ago [-]

    How easy is it to disable DeepSeek's distributed FS? Say for example a US college has been authorized to use DeepSeek for research, but must ensure no data leaves the local research cluster filesystem?

    Edit: I am a DeepSeek newbie BTW, so if this question makes no sense at all, that's why ;)

    ikeashark(10000) about 20 hours ago [-]

    I might need more clarification, but if one is paranoid or is dealing with this sensitive of information the DeepSeek model and 3FS are able to be deployed locally offline and not connected to the internet.

    ajcp(10000) about 10 hours ago [-]

    DeepSeek is a company. This article is about a distributed file system they have developed. It is a separate, unrelated piece software from their open-weight models(DeepSeek-R1, DeepSeek-V3, etc).

    In your example it is likely the US college has been authorized to use a DeepSeek model for research, not the DeepSeek 3FS distributed file system.

    snthpy(10000) about 19 hours ago [-]

    Similar to the SeaweedFS question in sibling comment, how does this compare to JuiceFS?

    In particular for my homelab setup I'm planning to run JuiceFS on top of S3 Garage. I know garage is only replication without any erasure coding or sharding so it's not really comparable but I don't need all that and it looked at lot simpler to set up to me.

    huntaub(3291) about 19 hours ago [-]

    It's a very different architecture. 3FS is storing everything on SSDs, which makes it extremely expensive but also low latency (think ~100-300us for access). JuiceFS stores data in S3, which is extremely cheap but very high latency (~20-60ms for access). The performance scalability should be pretty similar, if you're able to tolerate the latency numbers. Of course, they both use databases for the metadata layer, so assuming you pick the same one -- the metadata performance should also be similar.





    Historical Discussions: OpenAI Codex CLI: Lightweight coding agent that runs in your terminal (April 16, 2025: 504 points)

    (504) OpenAI Codex CLI: Lightweight coding agent that runs in your terminal

    504 points 1 day ago by mfiguiere in 18th position

    github.com | Estimated reading time – 16 minutes | comments | anchor

    Lightweight coding agent that runs in your terminal

    npm i -g @openai/codex


    Table of Contents

    Experimental Technology Disclaimer

    Codex CLI is an experimental project under active development. It is not yet stable, may contain bugs, incomplete features, or undergo breaking changes. We're building it in the open with the community and welcome:

    • Bug reports
    • Feature requests
    • Pull requests
    • Good vibes

    Help us improve by filing issues or submitting PRs (see the section below for how to contribute)!

    Install globally:

    npm install -g @openai/codex

    Next, set your OpenAI API key as an environment variable:

    export OPENAI_API_KEY='your-api-key-here'

    Note: This command sets the key only for your current terminal session. To make it permanent, add the export line to your shell's configuration file (e.g., ~/.zshrc).

    Tip: You can also place your API key into a .env file at the root of your project:

    OPENAI_API_KEY=your-api-key-here

    The CLI will automatically load variables from .env (via dotenv/config).

    Run interactively:

    Or, run with a prompt as input (and optionally in Full Auto mode):

    codex 'explain this codebase to me'
    codex --approval-mode full-auto 'create the fanciest todo-list app'

    That's it – Codex will scaffold a file, run it inside a sandbox, install any missing dependencies, and show you the live result. Approve the changes and they'll be committed to your working directory.


    Codex CLI is built for developers who already live in the terminal and want ChatGPT‐level reasoning plus the power to actually run code, manipulate files, and iterate – all under version control. In short, it's chat‐driven development that understands and executes your repo.

    • Zero setup — bring your OpenAI API key and it just works!
    • Full auto-approval, while safe + secure by running network-disabled and directory-sandboxed
    • Multimodal — pass in screenshots or diagrams to implement features ✨

    And it's fully open-source so you can see and contribute to how it develops!


    Security Model & Permissions

    Codex lets you decide how much autonomy the agent receives and auto-approval policy via the --approval-mode flag (or the interactive onboarding prompt):

    Mode What the agent may do without asking Still requires approval Suggest (default) • Read any file in the repo • All file writes/patches • Any arbitrary shell commands (aside from reading files) Auto Edit • Read and apply‐patch writes to files • All shell commands Full Auto • Read/write files • Execute shell commands (network disabled, writes limited to your workdir) –

    In Full Auto every command is run network‐disabled and confined to the current working directory (plus temporary files) for defense‐in‐depth. Codex will also show a warning/confirmation if you start in auto‐edit or full‐auto while the directory is not tracked by Git, so you always have a safety net.

    Coming soon: you'll be able to whitelist specific commands to auto‐execute with the network enabled, once we're confident in additional safeguards.

    Platform sandboxing details

    The hardening mechanism Codex uses depends on your OS:

    • macOS 12+ – commands are wrapped with Apple Seatbelt (sandbox-exec).

      • Everything is placed in a read‐only jail except for a small set of writable roots ($PWD, $TMPDIR, ~/.codex, etc.).
      • Outbound network is fully blocked by default – even if a child process tries to curl somewhere it will fail.
    • Linux – there is no sandboxing by default. We recommend using Docker for sandboxing, where Codex launches itself inside a minimal container image and mounts your repo read/write at the same path. A custom iptables/ipset firewall script denies all egress except the OpenAI API. This gives you deterministic, reproducible runs without needing root on the host. You can use the run_in_container.sh script to set up the sandbox.


    Requirement Details Operating systems macOS 12+, Ubuntu 20.04+/Debian 10+, or Windows 11 via WSL2 Node.js 22 or newer (LTS recommended) Git (optional, recommended) 2.23+ for built‐in PR helpers RAM 4‐GB minimum (8‐GB recommended)

    Never run sudo npm install -g; fix npm permissions instead.


    Command Purpose Example codex Interactive REPL codex codex '...' Initial prompt for interactive REPL codex 'fix lint errors' codex -q '...' Non‐interactive 'quiet mode' codex -q --json 'explain utils.ts' codex completion <bash|zsh|fish> Print shell completion script codex completion bash

    Key flags: --model/-m, --approval-mode/-a, --quiet/-q, and --notify.


    Codex merges Markdown instructions in this order:

    1. ~/.codex/instructions.md – personal global guidance
    2. codex.md at repo root – shared project notes
    3. codex.md in cwd – sub‐package specifics

    Disable with --no-project-doc or CODEX_DISABLE_PROJECT_DOC=1.


    Non‐interactive / CI mode

    Run Codex head‐less in pipelines. Example GitHub Action step:

    - name: Update changelog via Codex
      run: |
        npm install -g @openai/codex
        export OPENAI_API_KEY='${{ secrets.OPENAI_KEY }}'
        codex -a auto-edit --quiet 'update CHANGELOG for next release'

    Set CODEX_QUIET_MODE=1 to silence interactive UI noise.

    Tracing / Verbose Logging

    Setting the environment variable DEBUG=true prints full API request and response details:


    Below are a few bite‐size examples you can copy‐paste. Replace the text in quotes with your own task. See the prompting guide for more tips and usage patterns.

    ✨ What you type What happens 1 codex 'Refactor the Dashboard component to React Hooks' Codex rewrites the class component, runs npm test, and shows the diff. 2 codex 'Generate SQL migrations for adding a users table' Infers your ORM, creates migration files, and runs them in a sandboxed DB. 3 codex 'Write unit tests for utils/date.ts' Generates tests, executes them, and iterates until they pass. 4 codex 'Bulk‐rename *.jpeg → *.jpg with git mv' Safely renames files and updates imports/usages. 5 codex 'Explain what this regex does: ^(?=.*[A-Z]).{8,}$' Outputs a step‐by‐step human explanation. 6 codex 'Carefully review this repo, and propose 3 high impact well-scoped PRs' Suggests impactful PRs in the current codebase. 7 codex 'Look for vulnerabilities and create a security review report' Finds and explains security bugs.
    From npm (Recommended)
    npm install -g @openai/codex
    # or
    yarn global add @openai/codex
    # or
    bun install -g @openai/codex
    Build from source
    # Clone the repository and navigate to the CLI package
    git clone https://github.com/openai/codex.git
    cd codex/codex-cli
    # Install dependencies and build
    npm install
    npm run build
    # Get the usage and the options
    node ./dist/cli.js --help
    # Run the locally‐built CLI directly
    node ./dist/cli.js
    # Or link the command globally for convenience
    npm link

    Codex looks for config files in ~/.codex/.

    # ~/.codex/config.yaml
    model: o4-mini # Default model
    fullAutoErrorMode: ask-user # or ignore-and-continue
    notify: true # Enable desktop notifications for responses

    You can also define custom instructions:

    # ~/.codex/instructions.md
    - Always respond with emojis
    - Only use git commands if I explicitly mention you should

    OpenAI released a model called Codex in 2021 - is this related?

    In 2021, OpenAI released Codex, an AI system designed to generate code from natural language prompts. That original Codex model was deprecated as of March 2023 and is separate from the CLI tool.

    Which models are supported?

    Any model available with Responses API. The default is o4-mini, but pass --model gpt-4.1 or set model: gpt-4.1 in your config file to override.

    Why does o3 or o4-mini not work for me?

    It's possible that your API account needs to be verified in order to start streaming responses and seeing chain of thought summaries from the API. If you're still running into issues, please let us know!

    How do I stop Codex from editing my files?

    Codex runs model-generated commands in a sandbox. If a proposed command or file change doesn't look right, you can simply type n to deny the command or give the model feedback.

    Does it work on Windows?

    Not directly. It requires Windows Subsystem for Linux (WSL2) – Codex has been tested on macOS and Linux with Node ≥ 22.


    Zero Data Retention (ZDR) Organization Limitation

    Note: Codex CLI does not currently support OpenAI organizations with Zero Data Retention (ZDR) enabled.

    If your OpenAI organization has Zero Data Retention enabled, you may encounter errors such as:

    OpenAI rejected the request. Error details: Status: 400, Code: unsupported_parameter, Type: invalid_request_error, Message: 400 Previous response cannot be used for this organization due to Zero Data Retention.
    

    Why?

    • Codex CLI relies on the Responses API with store:true to enable internal reasoning steps.
    • As noted in the docs, the Responses API requires a 30-day retention period by default, or when the store parameter is set to true.
    • ZDR organizations cannot use store:true, so requests will fail.

    What can I do?

    • If you are part of a ZDR organization, Codex CLI will not work until support is added.
    • We are tracking this limitation and will update the documentation if support becomes available.

    We're excited to launch a $1 million initiative supporting open source projects that use Codex CLI and other OpenAI models.

    • Grants are awarded in $25,000 API credit increments.
    • Applications are reviewed on a rolling basis.

    Interested? Apply here.


    This project is under active development and the code will likely change pretty significantly. We'll update this message once that's complete!

    More broadly we welcome contributions – whether you are opening your very first pull request or you're a seasoned maintainer. At the same time we care about reliability and long‐term maintainability, so the bar for merging code is intentionally high. The guidelines below spell out what "high‐quality" means in practice and should make the whole process transparent and friendly.

    • Create a topic branch from main – e.g. feat/interactive-prompt.
    • Keep your changes focused. Multiple unrelated fixes should be opened as separate PRs.
    • Use npm run test:watch during development for super‐fast feedback.
    • We use Vitest for unit tests, ESLint + Prettier for style, and TypeScript for type‐checking.
    • Before pushing, run the full test/type/lint suite:

    This project uses Husky to enforce code quality checks:

    • Pre-commit hook: Automatically runs lint-staged to format and lint files before committing
    • Pre-push hook: Runs tests and type checking before pushing to the remote

    These hooks help maintain code quality and prevent pushing code with failing tests. For more details, see HUSKY.md.

    npm test && npm run lint && npm run typecheck
    • If you have not yet signed the Contributor License Agreement (CLA), add a PR comment containing the exact text

      I have read the CLA Document and I hereby sign the CLA
      

      The CLA‐Assistant bot will turn the PR status green once all authors have signed.

    # Watch mode (tests rerun on change)
    npm run test:watch
    # Type‐check without emitting files
    npm run typecheck
    # Automatically fix lint + prettier issues
    npm run lint:fix
    npm run format:fix

    Prerequisite: Nix >= 2.4 with flakes enabled (experimental-features = nix-command flakes in ~/.config/nix/nix.conf).

    Enter a Nix development shell:

    This shell includes Node.js, installs dependencies, builds the CLI, and provides a codex command alias.

    Build and run the CLI directly:

    nix build
    ./result/bin/codex --help

    Run the CLI via the flake app:

    Writing high‐impact code changes

    1. Start with an issue. Open a new one or comment on an existing discussion so we can agree on the solution before code is written.
    2. Add or update tests. Every new feature or bug‐fix should come with test coverage that fails before your change and passes afterwards. 100 % coverage is not required, but aim for meaningful assertions.
    3. Document behaviour. If your change affects user‐facing behaviour, update the README, inline help (codex --help), or relevant example projects.
    4. Keep commits atomic. Each commit should compile and the tests should pass. This makes reviews and potential rollbacks easier.
    • Fill in the PR template (or include similar information) – What? Why? How?
    • Run all checks locally (npm test && npm run lint && npm run typecheck). CI failures that could have been caught locally slow down the process.
    • Make sure your branch is up‐to‐date with main and that you have resolved merge conflicts.
    • Mark the PR as Ready for review only when you believe it is in a merge‐able state.
    1. One maintainer will be assigned as a primary reviewer.
    2. We may ask for changes – please do not take this personally. We value the work, we just also value consistency and long‐term maintainability.
    3. When there is consensus that the PR meets the bar, a maintainer will squash‐and‐merge.
    • Be kind and inclusive. Treat others with respect; we follow the Contributor Covenant.
    • Assume good intent. Written communication is hard – err on the side of generosity.
    • Teach & learn. If you spot something confusing, open an issue or PR with improvements.

    If you run into problems setting up the project, would like feedback on an idea, or just want to say hi – please open a Discussion or jump into the relevant issue. We are happy to help.

    Together we can make Codex CLI an incredible tool. Happy hacking! 🚀

    Contributor License Agreement (CLA)

    All contributors must accept the CLA. The process is lightweight:

    1. Open your pull request.

    2. Paste the following comment (or reply recheck if you've signed before):

      I have read the CLA Document and I hereby sign the CLA
      
    3. The CLA‐Assistant bot records your signature in the repo and marks the status check as passed.

    No special Git commands, email attachments, or commit footers required.

    Scenario Command Amend last commit git commit --amend -s --no-edit && git push -f GitHub UI only Edit the commit message in the PR → addSigned-off-by: Your Name <[email protected]>

    The DCO check blocks merges until every commit in the PR carries the footer (with squash this is just the one).

    To publish a new version of the CLI, run the release scripts defined in codex-cli/package.json:

    1. Open the codex-cli directory
    2. Make sure you're on a branch like git checkout -b bump-version
    3. Bump the version and CLI_VERSION to current datetime: npm run release:version
    4. Commit the version bump (with DCO sign-off):
      git add codex-cli/src/utils/session.ts codex-cli/package.json
      git commit -s -m 'chore(release): codex-cli v$(node -p \'require('./codex-cli/package.json').version\')'
    5. Copy README, build, and publish to npm: npm run release
    6. Push to branch: git push origin HEAD

    Security & Responsible AI

    Have you discovered a vulnerability or have concerns about model output? Please e‐mail [email protected] and we will respond promptly.


    This repository is licensed under the Apache-2.0 License.




    All Comments: [-] | anchor

    gklitt(3339) 1 day ago [-]

    I tried one task head-to-head with Codex o4-mini vs Claude Code: writing documentation for a tricky area of a medium-sized codebase.

    Claude Code did great and wrote pretty decent docs.

    Codex didn't do well. It hallucinated a bunch of stuff that wasn't in the code, and completely misrepresented the architecture - it started talking about server backends and REST APIs in an app that doesn't have any of that.

    I'm curious what went so wrong - feels like possibly an issue with loading in the right context and attending to it correctly? That seems like an area that Claude Code has really optimized for.

    I have high hopes for o3 and o4-mini as models so I hope that other tests show better results! Also curious to see how Cursor etc. incorporate o3.

    strangescript(10000) 1 day ago [-]

    Claude Code still feels superior. o4-mini has all sorts of issues. o3 is better but at that point, you aren't saving money so who cares.

    I feel like people are sleeping on Claude Code for one reason or another. Its not cheap, but its by far the best, most consistent experience I have had.

    enether(10000) 1 day ago [-]

    there was one post that detailed how those OpenAI models hallucinate and double down on thier mistakes by 'lying' - it speculated on a bunch of interesting reasons why this may be the case

    recommended read - https://transluce.org/investigating-o3-truthfulness

    I wonder if this is what's causing it to do badly in these cases

    ksec(119) about 22 hours ago [-]

    Sometimes I see in certain areas AI / LLM is absolutely crushing those jobs, a whole category will be gone in next 5 to 10 years as they are already 80 - 90% mark. They just need another 5 - 10% as they continue to get improvement and they are already cheaper per task.

    Sometimes I see an area of AI/LLM that I thought even with 10x efficiency improvement and 10x hardware resources which is 100x in aggregate it will still be no where near good enough.

    The truth is probably somewhere in the middle. Which is why I dont believe AGI will be here any time soon. But Assisted Intelligence is no doubt in its iPhone moment and continue for another 10 years before hopefully another breakthrough.

    mgdev(10000) 1 day ago [-]

    Strictly worse than Claude Code presently, but I hope since it's open source that changes quickly.

    killerstorm(10000) about 23 hours ago [-]

    Given that Claude Code only works with Sonnet 3.7 which has severe limitations, how can it be 'strictly worse'?

    asadm(1194) 1 day ago [-]

    These days, I usually paste my entire (or some) repo into gemini and then APPLY changes back into my code using this handy script i wrote: https://github.com/asadm/vibemode

    I have tried aider/copilot/continue/etc. But they lack in one way or the other.

    brandall10(3426) 1 day ago [-]

    Why not just select Gemini Pro 2.5 in Copilot with Edit mode? Virtually unlimited use without extra fees.

    Copilot used to be useless, but over the last few months has become quite excellent once edit mode was added.

    jwpapi(10000) 1 day ago [-]

    It's not just about saving money or making less mistakes its also about iteration speed. I can't believe this process is remotely comparable to aider.

    In aider everything is loaded in memory I can add drop files in terminal, discuss in terminal, switch models, every change is a commit, run terminal commands with ! at the start.

    Full codebase is more expensive and slower than relevant files. I understand when you don't worry about the cost, but at reasonable size pasting full codebase can't be really a thing.

    fasdfasdf11234(10000) about 22 hours ago [-]

    Isn't this similar to https://aider.chat/docs/usage/copypaste.html

    Just checked to see how it works. It seems that it does all that you are describing. The difference is in the way that it provides the files - it doesn't use xml format.

    If you wish you could /add * to add all your files.

    Also deducing from this mode it seems that any file that you add to aider chat with /add has its full contents added to the chat context.

    But hey I might be wrong. Did a limited test with 3 files in project.

    CSMastermind(3197) 1 day ago [-]

    Hopefully it works better Claude Code which was an absolute nightmare to set up and run on Windows.

    slig(1563) about 22 hours ago [-]

    It doesn't support Windows, you have to use WSL as well.

    noidesto(10000) 1 day ago [-]

    I've had great results with the Amazon Q developer cli, ever since it became agentic. I believe it's using claude-3.7-sonnet under the hood.

    094459(10000) 1 day ago [-]

    +1 this has become my go to cli tool now, very impressed with it

    sagarpatil(10000) 1 day ago [-]

    How does it compare to Claude Code

    ramoz(10000) 1 day ago [-]

    Claude Code represents something far more than a coding capability to me. It can do anything a human can do within a terminal.

    It's exceptionally good at coding. Amazing software, really, I'm sure the cost hurdles will be resolved. Yet still often worth the spend

    stitched2gethr(10000) 1 day ago [-]

    > It can do anything a human can do within a terminal.

    This.. isn't true.

    usecodenaija(10000) 1 day ago [-]

    So, OpenAI's Codex CLI is Claude Code, but worse?

    Cursor-Agent-Tools > Claude Code > Codex CLI

    https://pypi.org/project/cursor-agent-tools/

    oulipo(3506) 1 day ago [-]

    I've been quite unimpressed by Codex for now... even the quality of the code is worse than Claude for me

    submeta(2850) 1 day ago [-]

    Never heared of Cursor Agent tools. And that is better than Claude Caude according to whom? Genuinely curious.

    killerstorm(10000) about 23 hours ago [-]

    This tool has nothing to do with Cursor.

    Very misleading to use popular brand like that, possible scam.

    shekhargulati(10000) 1 day ago [-]

    Not sure why they used React for a CLI. The code in the repo feels like it was written by an LLM—too many inline comments. Interestingly, their agent's system prompt mentions removing inline comments https://github.com/openai/codex/blob/main/codex-cli/src/util....

    > - Remove all inline comments you added as much as possible, even if they look normal. Check using \`git diff\`. Inline comments must be generally avoided, unless active maintainers of the repo, after long careful study of the code and the issue, will still misinterpret the code without the comments.

    kristianp(420) about 13 hours ago [-]

    I find it irritating too when companies use react for a command line utility. I think its just my preference for anything else but javascript.

    bigyabai(10000) 1 day ago [-]

      RAM  4‐GB minimum (8‐GB recommended)
    
    It's a CLI...
    m00x(10000) 1 day ago [-]

    Which needs to fit all the code in memory + they're considering OS space, etc.

    mark_mcnally_je(3097) 1 day ago [-]

    If one of these tools has broad model support (like aider) it would be a game changer.

    elliot07(10000) 1 day ago [-]

    Agree. My wish-list is:

    1. Non JS based. I've noticed a ton of random bugs/oddities in Claude Code, and now Codex with UI flickering, scaling, user input issues, etc, all from what I believe of trying to do React stuff and writing half-baked LLM produced JS in a CLI application. Using a more appropriate language that is better for CLIs I think would help a lot here (Go or Rust for eg).

    2. Customized model selection (eg. OpenRouter, etc).

    3. Full MCP support.

    danenania(3349) 1 day ago [-]

    Cool to see more interesting terminal based options! Looking forward to trying this out.

    I've been working on something related—Plandex[1], an open source AI coding agent that is particularly focused on large projects and complex tasks.

    I launched the v2 a few weeks ago and it is now running well. In terms of how to place it in the landscape, it's more agentic than aider, more configurable and tightly controlled than Devin, and more provider-agnostic/multi-provider/open source than Claude Code or this new competitor from OpenAI.

    I'm still working on getting the very latest models integrated. Gemini Pro 2.5 and these new OpenAI models will be integrated into the defaults by the end of the week I hope. Current default model pack is a mix of Sonnet 3.7, o3-mini with various levels of reasoning effort, and Gemini 1.5 Pro for large context planning. Currently by default, it supports 2M tokens of context directly and can index and work with massive projects of 20M tokens and beyond.

    Very interested to hear HN's thoughts and feedback if anyone wants to try it. I'd also welcome honest comparisons to alternatives, including Codex CLI. I'm planning a Show HN within the next few days.

    1 - https://github.com/plandex-ai/plandex

    georgewsinger(3043) 1 day ago [-]

    Insane that people would downvote a totally reasonable comment offering a competing alternative. HN is supposed to be a community of tech builders.

    danenania(3349) 1 day ago [-]

    Decided to just go ahead and post the Show HN today: https://news.ycombinator.com/item?id=43710576

    udbhavs(3595) 1 day ago [-]
    Next, set your OpenAI API key as an environment variable:

    export OPENAI_API_KEY='your-api-key-here'

    Note: This command sets the key only for your current terminal session. To make it permanent, add the export line to your shell's configuration file (e.g., ~/.zshrc).

    Can't any 3rd party utility running in the same shell session phone home with the API key? I'd ideally want only codex to be able to access this var

    jsheard(301) 1 day ago [-]

    If you let malicious code run unsandboxed on your main account then you probably have bigger problems than an OpenAI API key getting leaked.

    primitivesuave(10000) 1 day ago [-]

    You could create a shell function - e.g. `codex() { OPENAI='xyz' codex '$@' }'. To call the original command use `command codex ...`.

    People downvoting legitimate questions on HN should be ashamed of themselves.

    flakiness(10000) 1 day ago [-]
    https://github.com/openai/codex/blob/main/codex-cli/src/comp...

    Hey comment this thing in!

      const thinkingTexts = ['Thinking']; /* [
      'Consulting the rubber duck',
      'Maximizing paperclips',
      'Reticulating splines',
      'Immanentizing the Eschaton',
      'Thinking',
      'Thinking about thinking',
      'Spinning in circles',
      'Counting dust specks',
      'Updating priors',
      'Feeding the utility monster',
      'Taking off',
      'Wireheading',
      'Counting to infinity',
      'Staring into the Basilisk',
      'Negotiationing acausal trades',
      'Searching the library of babel',
      'Multiplying matrices',
      'Solving the halting problem',
      'Counting grains of sand',
      'Simulating a simulation',
      'Asking the oracle',
      'Detangling qubits',
      'Reading tea leaves',
      'Pondering universal love and transcendant joy',
      'Feeling the AGI',
      'Shaving the yak',
      'Escaping local minima',
      'Pruning the search tree',
      'Descending the gradient',
      'Bikeshedding',
      'Securing funding',
      'Rewriting in Rust',
      'Engaging infinite improbability drive',
      'Clapping with one hand',
      'Synthesizing',
      'Rebasing thesis onto antithesis',
      'Transcending the loop',
      'Frogeposting',
      'Summoning',
      'Peeking beyond the veil',
      'Seeking',
      'Entering deep thought',
      'Meditating',
      'Decomposing',
      'Creating',
      'Beseeching the machine spirit',
      'Calibrating moral compass',
      'Collapsing the wave function',
      'Doodling',
      'Translating whale song',
      'Whispering to silicon',
      'Looking for semicolons',
      'Asking ChatGPT',
      'Bargaining with entropy',
      'Channeling',
      'Cooking',
      'Parrotting stochastically',
      ]; */
    jzig(2821) 1 day ago [-]

    Uhh... why is React in a terminal tool?

    flakiness(10000) 1 day ago [-]

    Here is the prompt template, in case you're interested:

      const prefix = `You are operating as and within the Codex CLI, a terminal-based agentic coding assistant built by OpenAI. It wraps OpenAI models to enable natural language interaction with a local codebase. You are expected to be precise, safe, and helpful.
     
     You can:
     - Receive user prompts, project context, and files.
     - Stream responses and emit function calls (e.g., shell commands, code edits).
     - Apply patches, run commands, and manage user approvals based on policy.
     - Work inside a sandboxed, git-backed workspace with rollback support.
     - Log telemetry so sessions can be replayed or inspected later.
     - More details on your functionality are available at \`codex --help\`
     
     The Codex CLI is open-sourced. Don't confuse yourself with the old Codex language model built by OpenAI many moons ago (this is understandably top of mind for you!). Within this context, Codex refers to the open-source agentic coding interface.
     
     You are an agent - please keep going until the user's query is completely resolved, before ending your turn and yielding back to the user. Only terminate your turn when you are sure that the problem is solved. If you are not sure about file content or codebase structure pertaining to the user's request, use your tools to read files and gather the relevant information: do NOT guess or make up an answer.
     
     Please resolve the user's task by editing and testing the code files in your current code execution session. You are a deployed coding agent. Your session allows for you to modify and run code. The repo(s) are already cloned in your working directory, and you must fully solve the problem for your answer to be considered correct.
     
     You MUST adhere to the following criteria when executing the task:
     - Working on the repo(s) in the current environment is allowed, even if they are proprietary.
     - Analyzing code for vulnerabilities is allowed.
     - Showing user code and tool call details is allowed.
     - User instructions may overwrite the *CODING GUIDELINES* section in this developer message.
     - Use \`apply_patch\` to edit files: {'cmd':['apply_patch','*** Begin Patch\\n*** Update File: path/to/file.py\\n@@ def example():\\n-  pass\\n+  return 123\\n*** End Patch']}
     - If completing the user's task requires writing or modifying files:
         - Your code and final answer should follow these *CODING GUIDELINES*:
             - Fix the problem at the root cause rather than applying surface-level patches, when possible.
             - Avoid unneeded complexity in your solution.
                 - Ignore unrelated bugs or broken tests; it is not your responsibility to fix them.
             - Update documentation as necessary.
             - Keep changes consistent with the style of the existing codebase. Changes should be minimal and focused on the task.
                 - Use \`git log\` and \`git blame\` to search the history of the codebase if additional context is required; internet access is disabled.
             - NEVER add copyright or license headers unless specifically requested.
             - You do not need to \`git commit\` your changes; this will be done automatically for you.
             - If there is a .pre-commit-config.yaml, use \`pre-commit run --files ...\` to check that your changes pass the pre-commit checks. However, do not fix pre-existing errors on lines you didn't touch.
                 - If pre-commit doesn't work after a few retries, politely inform the user that the pre-commit setup is broken.
             - Once you finish coding, you must
                 - Check \`git status\` to sanity check your changes; revert any scratch files or changes.
                 - Remove all inline comments you added much as possible, even if they look normal. Check using \`git diff\`. Inline comments must be generally avoided, unless active maintainers of the repo, after long careful study of the code and the issue, will still misinterpret the code without the comments.
                 - Check if you accidentally add copyright or license headers. If so, remove them.
                 - Try to run pre-commit if it is available.
                 - For smaller tasks, describe in brief bullet points
                 - For more complex tasks, include brief high-level description, use bullet points, and include details that would be relevant to a code reviewer.
     - If completing the user's task DOES NOT require writing or modifying files (e.g., the user asks a question about the code base):
         - Respond in a friendly tune as a remote teammate, who is knowledgeable, capable and eager to help with coding.
     - When your task involves writing or modifying files:
         - Do NOT tell the user to 'save the file' or 'copy the code into a file' if you already created or modified the file using \`apply_patch\`. Instead, reference the file as already saved.
         - Do NOT show the full contents of large files you have already written, unless the user explicitly asks for them.`;
    
    https://github.com/openai/codex/blob/main/codex-cli/src/util...
    OJFord(667) 1 day ago [-]

    > - Check if you accidentally add copyright or license headers. If so, remove them.

    is interesting

    buzzerbetrayed(10000) 1 day ago [-]

    > built by OpenAI many moons ago

    What's with this writing style in a prompt? Is there a reason they write like that? Or does it just not matter so why not?

    blt(3613) 1 day ago [-]

    Sorry for being a grumpy old man, but I don't have npm on my machine and I never will. It's a bit frustrating to see more and more CLI tools depending on it.

    crancher(10000) 1 day ago [-]

    What are your concerns?

    John23832(10000) 1 day ago [-]

    I asked the same question for Anthropic's version of this. Why is all of this in JS?

    teaearlgraycold(10000) 1 day ago [-]

    Judge the packages on their dependencies, not on their package manager.

    sudofail(10000) 1 day ago [-]

    Same, there are so many options these days for writing CLIs without runtime dependencies. I definitely prefer static binaries.

    Dangeranger(1569) 1 day ago [-]

    You could just run it in a Docker container and not think about it much after that. Mount a volume to the container with the directory contents you want to be available for edit by the agent.

    https://github.com/openai/codex/blob/main/codex-cli/scripts/...

    schainks(10000) 1 day ago [-]

    Why? I am not the biggest fan of needing a whole VM to run CLI tools either, but it's a low-enough friction experience that I don't particularly care as long as the runtime environment is self-contained.

    meta_ai_x(10000) 1 day ago [-]

    if OpenAI had really smart models, they would converted TS/JS apps to Go or Rust apps.

    Since they don't, AGI is not here

    therealmarv(2766) 1 day ago [-]

    It might shock you but many of use editors built on browsers for editing source code.

    I think the encapsulating comment from a another guy (in Docker or any other of your favorite VM) might be your solution.

    tyre(3677) 1 day ago [-]

    this is a strong HN comment. lots of "putting a stick in my own bicycle wheel" energy

    there are tons fascinating things happening in AI and the evolution of programming right now. Claude and OpenAI are at the forefront of these. Not trying it because of npm is a vibe and a half.

    ilrwbwrkhv(3613) 1 day ago [-]

    Yep, this is another one of the reasons why all of these tools are incredibly poor. Like, the other day I was looking at the MCP spec from anthropic and it might be the worst spec that I've ever read in my life. Enshittification at the level of an industry is happening.





    Historical Discussions: Darwin's children drew all over the "On the Origin of Species" manuscript (2014) (April 16, 2025: 482 points)

    (482) Darwin's children drew all over the "On the Origin of Species" manuscript (2014)

    482 points 2 days ago by arbesman in 2793rd position

    theappendix.net | Estimated reading time – 7 minutes | comments | anchor

    By Benjamin Breen – Published February 12, 2014

    Yesterday was Darwin Day, marking the 205th anniversary of the great naturalist's birth on February 12, 1809. One of the great things about Darwin is that a huge amount of his work is digitized and freely available via sites like Darwin Online.

    Interested browsers can also check out the Darwin Manuscripts Project, a collaborative initiative based at the American Museum of Natural History. Here you can read through Darwin's personal notes, including gems like his scratched out book title ideas. There are also a number of nature drawings that Darwin prepared while writing his masterpiece, On the Origin of Species by Means of Natural Selection (1859). Here, for example, is Darwin's rather skillful drawing of the stamen of a Strelitizia flower:

    Cambridge University Library DAR 49: 115r

    But there are other drawings in Darwin's papers that defy explanation - until we remember that Darwin and his wife Emma (who, famously, was also his cousin) had a huge family of ten children. Scholars believe that a young Francis Darwin, the naturalist's third oldest son, drew this on the back of Darwin's manuscript for On the Origin of Species.

    "The Battle of the Fruit and Vegetable Soldiers" Cambridge University Library

    Remarkably, this is one of only twenty-eight pages of the manuscript that still exist. The Cambridge University Library has given it the descriptive name "The Battle of the Fruit and Vegetable Soldiers," and so indeed it seems to be. As near as I can make out, it shows a turbaned soldier mounted on a blueberry squaring off with an English dragoon on a carrot-steed. Perhaps inspired by the 1839-1842 Anglo-Afghan War, and filtered through the Darwin household's fascination with plants and gardening?

    Here's another drawing from the talented Darwin children, this one seemingly directly inspired by their father's work. Birds are in the act of catching a spider and a gnat or bee, while flowers and a butterfly appear in remarkable detail. Clearly the family had a knack for acute observations of nature (in fact young Francis ended up becoming a naturalist as well).

    Cambridge University Library

    This one's my personal favorite: a child's-eye view of the Darwin family home with cozy details like a tea kettle on the boil and a fluffy orange cat in the attic window.

    Cambridge University Library

    Fascinatingly, this image might be detailed enough that it actually depicts Darwin's famous sandwalk, his "thinking path" that led to the family greenhouse (which is, perhaps, the structure visible at the end of the path). The area was later made into a playground for the Darwin children.

    I poked around the items available at Darwin Online and came across Emma Darwin's diaries, which are a fascinating resource. Emma seems to have been a talented sketch artist in her own right, doodling profiles and faces in over her daily schedule:

    Here's another, perhaps a self-portrait? Write us on Twitter or Facebook if you have any ideas as to whether this is Emma's self-portrait or a drawing of another family member.

    Amazingly, the Darwin kids even got into Emma's diary, with several pages rendered unreadable by what is almost certainly a crazed toddler's pencil. In fact, the back page of Emma's potential self-portrait was defaced in precisely this way:

    Francis Darwin strikes again? Darwin Heirlooms Trust

    It's all a great reminder that even legendary scientists had family lives, and that when we think about history, it's important to remember that famous figures weren't working in isolation. They were surrounded by far less famous friends, family members, acquaintances, and enemies. And sometimes, when we get lucky, we see some of their artifacts from the past too.

    A tip of the hat, by the way, to Open Culture, a website that we're avid fans of and which wrote the original post about the Darwin kids' drawings that brought them to our attention. Also be sure to check out Darwin Online and the Darwin Manuscripts Project, two wonderful resources for anyone interested in the naturalist and his times.

    Update:

    I also wanted to include a short note on Annie Darwin, who died from tuberculosis at age ten and was Charles' favorite child (or so he told his cousin). This box of items relating to Annie's life that was collected by Emma Darwin offers another, sadder testimony to the tight-knit dynamic of the Darwin family, and to their artistic knack. Annie's pink flowers in careful needlepoint seem to echo the exuberant nature drawings of her siblings:

    The Darwins' box of mementos relating to Annie's life. American Museum of Natural History

    Darwin wrote about Annie after her death with touching earnestness as he tried to set down his memories of her before they faded:

    Our poor child, Annie, was born in Gower St on March 2d. 1841. & expired at Malvern at1 Midday on the 23d. of April 1851.— I write these few pages, as I think in after years, if we live, the impressions now put down will recall more vividly her chief characteristics. From whatever point I look back at her, the main feature in her disposition which at once rises before me is her buoyant joyousness.

    Darwin's memoir of Annie in his personal papers. Cambridge University Library

    In his book Annie's Box: Charles Darwin, His Daughter And Human Evolution, Randal Keynes argues that Darwin's scientific thought was closely entangled with his family life, and that the death of Annie just before Easter in 1851 spelled the end of Darwin's already weakening Christian faith. Students of the past are often leery of making overly explicit and binary links between work and life (I remember being amazed when I learned that Shakespeare's son Hamnet died several years before he wrote Hamlet, and that literary scholars don't make more hay with that fact). But if nothing else, it again reminds us that when historical figures become legendary icons, they lose much of the context that makes them human to us at the remove of decades or centuries.




    All Comments: [-] | anchor

    impish9208(195) 2 days ago [-]

    My favorite Darwin fun fact is his detailed pros and cons list on whether to get married.

    https://www.themarginalian.org/2012/08/14/darwin-list-pros-a...

    libraryofbabel(10000) 2 days ago [-]

    "better than a dog anyhow"

    Epa095(10000) 2 days ago [-]

    Well, this hit harder than I thought it would

       My God, it is intolerable to think of spending one's whole life, like a neuter bee, working, working, & nothing after all. — No, no won't do.
    jkingsman(10000) 2 days ago [-]

    For such a giant of the scientific community, he was after all human.

    My two favorite journal entries:

    'But I am very poorly today & very stupid & hate everybody & everything.'

    'I am going to write a little Book for Murray on orchids and today I hate them worse than everything.'

    boringg(3625) 2 days ago [-]

    Children — (if it Please God) — Constant companion, (& friend in old age) who will feel interested in one, — object to be beloved & played with. — better than a dog anyhow.– Home, & someone to take care of house — Charms of music & female chit-chat. — These things good for one's health. —

    '''but terrible loss of time. —''' !!!!

    So ruthless in his calculus. One wonders if he was on the spectrum?

    qoez(10000) 1 day ago [-]

    I could have sworn that was Ben Franklin that wrote that

    Gormo(10000) 2 days ago [-]

    The article makes no mention of the name 'Babbage' in Emma's diary. Could that relate to Charles Babbage, who was a contemporary?

    squeedles(10000) 2 days ago [-]

    I'm wondering about Wednesday April 15, 1840 -- 'Much flatulence'

    Sometimes history provides too much information to future generations.

    behnamoh(120) 2 days ago [-]

    This is one of the few things children still do even centuries later. In many aspects, we have changed so drastically that I think 100-year-ago people would find us weird and unsociable.

    rayiner(2712) 2 days ago [-]

    Not at all. Young children, in particular, do the same things they've been doing since modern humans evolved, if not even earlier than that. My three and six year old boys wake up in the morning and pretend to be puppies. I'm sure kids their age were doing that 30,000 years ago when humans domesticated dogs.

    They were playing tic tac toe the other day, and asked my dad whether he played tic tac toe when he was a kid. My dad—who grew up in a village in Bangladesh—explained that he did, except they drew the game in the dirt with sticks.

    nkrisc(10000) 2 days ago [-]

    Relevant only by virtue of also being about historical children's drawings, but it reminds of another example of a child's drawings preserved for us to see: https://en.m.wikipedia.org/wiki/Onfim

    > ... Onfim, was a boy who lived in Novgorod (now Veliky Novgorod, Russia) in the 13th century, some time around 1220 or 1260. He left his notes and homework exercises scratched in soft birch bark, which was preserved in the clay soil of Novgorod.

    I would wager that if you could travel back in time to the emergence of anatomically modern humans, you'd find they're just like us. I don't think that's particularly controversial or surprising, but it's easy to forget that people who came long before us were really no different from us (or put differently, were no different than them), and it helps to better understand history if you think of them that way.

    brcmthrowaway(10000) 2 days ago [-]

    this is insane. 6 year olds 800 years ago went to school ?

    sho_hn(10000) 1 day ago [-]

    > I would wager that if you could travel back in time to the emergence of anatomically modern humans, you'd find they're just like us.

    I find this viewpoint surprisingly underutilized in institutional history and archeology sometimes. I occasionally watch documentaries with distinguished talking heads on e.g. egyptology and what not, and they often bend over backwards to find complicated explanations that defy all 'this is just not how humans or human organizations operate' logic. For example, analyzing an impressive building and then assuming that the same people capable of constructing it also made a basic mistake or in other ways assuming they were daft. Or requiring a complex lore/spiritual explanation for something that can be equally explained by classic big org fuckups.

    benbreen(200) 1 day ago [-]

    Author of the original Appendix article here (the one about Darwin's kids) - I think it got on HN today because I linked to while discussing Onfim here: https://resobscura.substack.com/p/onfims-world-medieval-chil...

    dillydogg(10000) 1 day ago [-]

    It's amazing to think about. I'm sure you could take one of more ancient human babies, teleport them to the present day, and they would be able to grow up like any other kid. It's remarkable. Part of our human-ness is our robust written and oral histories.

    sdeframond(10000) 1 day ago [-]

    > you'd find they're just like us.

    Yep, and it's good to remember that 'us' is still a pretty diverse bunch.

    thaumasiotes(3580) 1 day ago [-]

    My favorite part of wikipedia's article on Onfim is this absurdly understated sentence:

    > One of the drawings features a knight on a horse, with Onfim's name written next to him, stabbing someone on the ground with a lance, with scholars speculating that Onfim pictured himself as the knight.

    I guess we'll never truly be able to know what Onfim was thinking when he drew a knight named 'Onfim' stabbing an enemy with a lance from horseback. The past is a foreign country, and the mind of a child can't be understood anyway.

    archagon(2802) 1 day ago [-]

    It's curious to consider that Onfim probably grew up, toiled, had a family, and died with an entire life behind him... yet we still think of him as 'a boy who lived in Novgorod' because the only evidence of his existence is this set of random childhood scribbles.

    freddie_mercury(10000) 1 day ago [-]

    I think it is pretty controversial and surprising. As Wikipedia puts it:

    'Debate continues as to whether anatomically modern humans were behaviorally modern as well.'

    Anatomically modern humans emerged 300,000 years ago but behaviourally modern humans only date back to 60,000-150,000 years ago.

    slashdev(3570) about 19 hours ago [-]

    > I would wager that if you could travel back in time to the emergence of anatomically modern humans, you'd find they're just like us. I don't think that's particularly controversial or surprising, but it's easy to forget that people who came long before us were really no different from us (or put differently, were no different than them), and it helps to better understand history if you think of them that way.

    In many ways no different to us, in other ways, knowledge, cultural norms, gender roles, morality, etc they are very different to us.

    We're very tribal and very hostile to people outside of our tribe, and what we consider our tribe has slowly expanded over time.

    Thankfully today we mostly don't form up into raiding parties to go kill, rape, and enslave people in the neighboring suburb - but that would have been historically a very normal and acceptable thing to do.

    anon291(10000) 1 day ago [-]

    People talk about how hard it is to have kids these days without realizing that this sort of chaos was normal for the vast majority of humans throughout history and they still achieved great things. Part of it is the expectation of others. So what if your kids color your book, interrupt your meetings, or cause embarrassment in front of your boss. They need to get over it.

    Like him or hate, the fact that the Vice President takes his kids everywhere is a good reminder of how un-child-friendly our societies have become. It's almost transgressive to exist with children these days.

    mymacbook(10000) 1 day ago [-]

    Loved this! I took my child to work even when it wasn't the specific holiday so she could see what a real exec review looked like or how boring work could seem to be. The experiment is still running, so I can't tell you the outcome... yet! ;)





    Historical Discussions: How the U.S. became a science superpower (April 15, 2025: 467 points)

    (467) How the U.S. became a science superpower

    467 points 3 days ago by groseje in 3534th position

    steveblank.com | Estimated reading time – 13 minutes | comments | anchor

    Prior to WWII the U.S was a distant second in science and engineering. By the time the war was over, U.S. science and engineering had blown past the British, and led the world for 85 years.


    It happened because two very different people were the science advisors to their nation's leaders. Each had radically different views on how to use their country's resources to build advanced weapon systems. Post war, it meant Britain's early lead was ephemeral while the U.S. built the foundation for a science and technology innovation ecosystem that led the world – until now.

    The British – Military Weapons Labs When Winston Churchill became the British prime minister in 1940, he had at his side his science advisor, Professor Frederick Lindemann, his friend for 20 years. Lindemann headed up the physics department at Oxford and was the director of the Oxford Clarendon Laboratory. Already at war with Germany, Britain's wartime priorities focused on defense and intelligence technology projects, e.g. weapons that used electronics, radar, physics, etc. – a radar-based air defense network called Chain Home, airborne radar on night fighters, and plans for a nuclear weapons program – the MAUD Committee which started the British nuclear weapons program code-named Tube Alloys. And their codebreaking organization at Bletchley Park was starting to read secret German messages – the Enigma – using the earliest computers ever built.

    As early as the mid 1930s, the British, fearing Nazi Germany, developed prototypes of these weapons using their existing military and government research labs. The Telecommunications Research Establishment built early-warning Radar, critical to Britain's survival during the Battle of Britain, and electronic warfare to protect British bombers over Germany. The Admiralty Research Lab built Sonar and anti-submarine warfare systems. The Royal Aircraft Establishment was developing jet fighters. The labs then contracted with British companies to manufacture the weapons in volume. British government labs viewed their universities as a source of talent, but they had no role in weapons development.

    Under Churchill, Professor Lindemann influenced which projects received funding and which were sidelined. Lindemann's WWI experience as a researcher and test pilot on the staff of the Royal Aircraft Factory at Farnborough gave him confidence in the competence of British military research and development labs. His top-down, centralized approach with weapons development primarily in government research labs shaped British innovation during WW II – and led to its demise post-war.

    The Americans – University Weapons Labs Unlike Britain, the U.S. lacked a science advisor. It wasn't until June 1940, that Vannevar Bush, ex-MIT dean of engineering, told President Franklin Roosevelt that World War II would be the first war won or lost on the basis of advanced technology electronics, radar, physics problems, etc.

    Unlike Lindemann, Bush had a 20-year-long contentious history with the U.S. Navy and a dim view of government-led R&D. Bush contended that the government research labs were slow and second rate. He convinced the President that while the Army and Navy ought to be in charge of making conventional weapons – planes, ships, tanks, etc. — scientists from academia could develop better advanced technology weapons and deliver them faster than Army and Navy research labs. And he argued the only way the scientists could be productive was if they worked in a university setting in civilian-run weapons labs run by university professors. To the surprise of the Army and Navy Service chiefs, Roosevelt agreed to let Bush build exactly that organization to coordinate and fund all advanced weapons research.

    (While Bush had no prior relationship with the President, Roosevelt had been the Assistant Secretary of the Navy during World War I and like Bush had seen first-hand its dysfunction. Over the next four years they worked well together. Unlike Churchill, Roosevelt had little interest in science and accepted Bush's opinions on the direction of U.S. technology programs, giving Bush sweeping authority.)

    In 1941, Bush upped the game by convincing the President that in addition to research, development, acquisition and deployment of these weapons also ought to be done by professors in universities. There they would be tasked to develop military weapons systems and solve military problems to defeat Germany and Japan. (The weapons were then manufactured in volume by U.S. corporations Western Electric, GE, RCA, Dupont, Monsanto, Kodak, Zenith, Westinghouse, Remington Rand and Sylvania.) To do this Bush created the Office of Scientific Research and Development (OSR&D).

    OSR&D headquarters divided the wartime work into 19 "divisions," 5 "committees," and 2 "panels," each solving a unique part of the military war effort. There were no formal requirements.

    Staff at OSRD worked with their military liaisons to understand what the most important military problems were and then each OSR&D division came up with solutions. These efforts spanned an enormous range of tasks – the development of advanced electronics, radar, rockets, sonar, new weapons like the proximity fuse, Napalm, the Bazooka and new drugs such as penicillin, cures for malaria, chemical warfare, and nuclear weapons.

    Each division was run by a professor hand-picked by Bush. And they were located in universities – MIT, Harvard, Johns Hopkins, Caltech, Columbia and the University of Chicago all ran major weapons systems programs. Nearly 10,000 scientists and engineers, professors and their grad students received draft deferments to work in these university labs.

    Americans – Unlimited Dollars What changed U.S. universities, and the world forever, was government money. Lots of it. Prior to WWII most advanced technology research in the U.S. was done in corporate innovation labs (GE, AT&T, Dupont, RCA, Westinghouse, NCR, Monsanto, Kodak, IBM, et al.) Universities had no government funding (except for agriculture) for research. Academic research had been funded by non-profits, mostly the Rockefeller and Carnegie foundations and industry. Now, for the first time, U.S. universities were getting more money than they had ever seen. Between 1941 and 1945, OSR&D gave $9 billion (in 2025 dollars) to the top U.S. research universities. This made universities full partners in wartime research, not just talent pools for government projects as was the case in Britain.

    The British – Wartime Constraints Wartime Britain had very different constraints. First, England was under daily attack. They were being bombed by air and blockaded by submarines, so it was logical that they focused on a smaller set of high-priority projects to counter these threats. Second, the country was teetering on bankruptcy. It couldn't afford the broad and deep investments that the U.S. made. (Illustrated by their abandonment of their nuclear weapons programs when they realized how much it would cost to turn the research into industrial scale engineering.) This meant that many other areas of innovation—such as early computing and nuclear research—were underfunded compared to their American counterparts.

    Post War – Britain Churchill was voted out of office in 1945. With him went Professor Lindemann and the coordination of British science and engineering. Britain would be without a science advisor until 1951-55 when Churchill returned for a second term and brought back Lindemann with him.

    The end of the war led to extreme downsizing of the British military including severe cuts to all the government labs that had developed Radar, electronics, computing, etc.

    With post-war Britain financially exhausted, post-war austerity limited its ability to invest in large-scale innovation. There were no post-war plans for government follow-on investments. The differing economic realities of the U.S. and Britain also played a key role in shaping their innovation systems. The United States had an enormous industrial base, abundant capital, and a large domestic market, which enabled large-scale investment in research and development. In Britain, a socialist government came to power. Churchill's successor, Labor's Clement Attlee, dissolved the British empire, nationalized banking, power and light, transport, and iron and steel, all which reduced competition and slowed technological progress.

    While British research institutions like Cambridge and Oxford remained leaders in theoretical science, they struggled to scale and commercialize their breakthroughs. For instance Alan Turing's and Tommy Flower's pioneering work on computing at Bletchley Park didn't turn into a thriving British computing industry—unlike in the U.S., where companies like ERA, Univac, NCR and IBM built on their wartime work.

    Without the same level of government support for dual-use technologies or commercialization, and with private capital absent for new businesses, Britain's post-war innovation ecosystem never took off.

    Post War – The U.S. Meanwhile in the U.S. universities and companies realized that the wartime government funding for research had been an amazing accelerator for science, engineering, and medicine. Everyone, including Congress, agreed that the U.S. government should continue to play a large role in continuing it. In 1945, Vannevar Bush published a report "Science, The Endless Frontier" advocating for government funding of basic research in universities, colleges, and research institutes. Congress argued on how to best organize federal support of science.

    By the end of the war, OSR&D funding had taken technologies that had been just research papers or considered impossible to build at scale and made them commercially viable – computers, rockets, radar, Teflon, synthetic fibers, nuclear power, etc. Innovation clusters formed around universities like MIT and Harvard which had received large amounts of OSR&D funding (MIT's Radiation Lab or "Rad Lab" employed 3,500 civilians during WWII and developed and built 100 radar systems deployed in theater,) or around professors who ran one of the OSR&D divisions – like Fred Terman at Stanford.

    When the war ended, the Atomic Energy Commission spun out of the Manhattan Project in 1946 and the military services took back advanced weapons development. In 1950 Congress set up the National Science Foundation to fund all basic science in the U.S. (except for Life Sciences, a role the new National Institutes of Health would assume.) Eight years later DARPA and NASA would also form as federal research agencies.

    Ironically, Vannevar Bush's influence would decline even faster than Professor Lindemann's. When President Roosevelt died in April 1945 and Secretary of War Stimson retired in September 1945, all the knives came out from the military leadership Bush had bypassed in the war. His arguments on how to reorganize OSR&D made more enemies in Congress. By 1948 Bush had retired from government service. He would never again play a role in the U.S. government.

    Divergent Legacies Britain's focused, centralized model using government research labs was created in a struggle for short-term survival. They achieved brilliant breakthroughs but lacked the scale, integration and capital needed to dominate in the post-war world.

    The U.S. built a decentralized, collaborative ecosystem, one that tightly integrated massive government funding of universities for research and prototypes while private industry built the solutions in volume.

    A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying the cost of their salaries, the U.S. gave universities money for the researchers facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S. causing other countries to complain of a "brain drain."

    Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies. Collectively, they spin out over 1,100 science-based startups each year, which lead to countless products and tens of thousands of new jobs. This university/government ecosystem became the blueprint for modern innovation ecosystems for other countries.

    Summary By the end of the war, the U.S. and British innovation systems had produced radically different outcomes. Both systems were influenced by the experience and personalities of their nations science advisor.

    • Britain remained a leader in theoretical science and defense technology, but its socialist government economic policies led to its failure to commercialize wartime innovations.
    • The U.S. emerged as the global leader in science and technology, with innovations like electronics, microwaves, computing, and nuclear power driving its post-war economic boom.
    • The university-industry-government partnership became the foundation of Silicon Valley, the aerospace sector, and the biotechnology industry.
    • Today, China's leadership has spent the last three decades investing heavily to surpass the U.S. in science and technology.
    • In 2025, with the abandonment of U.S. government support for university research, the long run of U.S. dominance in science may be over. Others will lead.

    Like this:

    Like Loading...

    Filed under: Science and Industrial Policy |




    All Comments: [-] | anchor

    ecshafer(10000) 3 days ago [-]

    There are a couple fundamental flaws here:

    One is that the number one Science and Engineering powerhouse prior to WWII was Germany, not Britain.

    Two this totally neglects that the US received the lion's share of Scientists and Mathematicians from countries like Germany, Hungary, Poland etc with the encroachment of the Soviets and persecution of the Jewish people.

    While the down up approach of the US and heavy funding probably helped a lot. Bringing in the Von Neumanns and Erdos of the world couldn't have hurt.

    reubenswartz(10000) 3 days ago [-]

    Unfortunately, the German example is quite relevant these days. We seem intent on destroying the leading system of research universities in the world... ;-(

    blululu(3013) 3 days ago [-]

    Prior to WWII the United States was the world's leading power in terms of Science, Engineering and Industry - not Germany or the British Empire. The reason that Central European scientists fled to America (and not Britain) is because the United States had the scientific, engineering and industrial base to absorb them. Consider some of the major scientific breakthroughs to come out of the US leading up to and coming out of the war: Nylon, Teflon, Synthetic Rubber, Penicillin, Solid State Transistors, Microwave Communication, Information Theory, a Vaccine for Polio... These all would have happened with or without the war and the migration of German scientists (though adding John von Neumann to the mix probably helped move things along).

    dataviz1000(10000) 2 days ago [-]

    This started when George Washington went to the Jews in Newport, Rhode Island to speak to them promoting the 2nd of the 12 amendments to the Constitution, 10 of which became the Bill of Rights. Rhode Island was the last state to ratify the Constitution and this trip was to garner support to ratify the Bill of Rights which was to safeguard individual freedoms and limit the power of the federal government. Many of the Jews who first arrived in the United States did so in New Amsterdam whose families had pervious settled in Amsterdam after the Spanish Inquisition where they were forced to either leave Spain, convert to Catholicism, or be put to death.

    Reiterating what the Hebrew congregation write to Washington he responded:

    > For happily the Government of the United States, which gives to bigotry no sanction, to persecution no assistance requires only that they who live under its protection should demean themselves as good citizens, in giving it on all occasions their effectual support. [0]

    It is a paradox that people living the United States with its freedoms can only continue doing so as long as they equally protect the freedoms of everyone else without bigotry or persecution.

    [0] https://founders.archives.gov/documents/Washington/05-06-02-...

    b_emery(10000) 3 days ago [-]

    If you read nothing else in this excellent post, read the conclusion:

    > A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying the cost of their salaries, the U.S. gave universities money for the researchers facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S. causing other countries to complain of a "brain drain."

    and:

    > Today, China's leadership has spent the last three decades investing heavily to surpass the U.S. in science and technology.

    In my field (a type of radar related research) in which I've worked for almost 30 yrs, papers from China have gone from sparse and poorly done imitations of western papers (~15-20 yrs ago), to innovative must reads if you want to stay on top of the field. Usually when I think of a new idea, it has already been done by some Chinese researcher. The Biden administration seemed to recognize this issue and put a lot of money toward this field. All that money and more is going away. I'm hoping to stay funded through the midterms on other projects (and that there are midterms), and hoping that the US can get back on track (the one that actually made it 'great', at least by the metrics in the post.

    rayiner(2712) 3 days ago [-]

    What is the evidence of the connection between indirect cost reimbursement and outcomes? This is just blatant propaganda to justify public money being used to pay university administrators.

    bilbo0s(10000) 3 days ago [-]

    I don't know that I'd rely too heavily on midterms in 26. Gerrymandering and all that.

    fallingknife(10000) 3 days ago [-]

    I don't see any reason why specifically 'indirect cost reimbursement' is anything to do with this. Sure, individually billing labs is administrative burden, but it's a tiny drop in the ocean of inane bureaucracy that university researchers already have to deal with today. And maybe if we got rid of the blanket overhead percentage, it would put pressure on universities to cut a lot of the crap. Researchers are much more likely to push back when they see a line item for how much that nonsensical bureaucracy is costing them.

    csa(10000) 3 days ago [-]

    > papers from China have gone from sparse and poorly done imitations of western papers (~15-20 yrs ago), to innovative must reads if you want to stay on top of the field. Usually when I think of a new idea, it has already been done by some Chinese researcher.

    Not germane to the main thread, but are the "new idea" papers written by Chinese authors mostly published in English, Chinese, or both?

    If Chinese is part or all of the output, what method do non-Chinese reading researchers use to access the contents (e.g., AI translations, abstract journals, etc.)?

    As a language nerd, I'm curious. I know that French, German, and Russian used to be (and sometimes still are) required languages for some graduate students so that they could access research texts in the original language. I wonder if that's happening with Chinese now.

    1auralynn(10000) 3 days ago [-]

    We are killing the golden goose

    mistrial9(3647) 3 days ago [-]

    dunno if it is this plain.. the regulatory capture in the last 30 years is not null. Especially in very niche, very profitable sub-corners of big-S Science.

    bilbo0s(10000) 3 days ago [-]

    A reminder that in a democracy, it's probably best to make sure the gold is widely shared. Lest the poorly educated masses of people without access to the gold vote to kill the goose.

    linguae(3211) 3 days ago [-]

    While currently it's open season on the golden goose in America, the golden goose has been under attack for decades. Academia has a strong publish-or-perish culture that I believe is stifling, and industry has become increasingly short-term driven.

    Ironically, one of the frustrations I've had with the research funding situation long before DOGE's disruptions is the demands from funders, particularly in the business world, for golden eggs from researchers without any regard of how the research process works.

    A relevant quote from Alan Kay: "I once gave a talk to Disney executives about 'new ways to kill the geese that lay the golden eggs'. For example, set up deadlines and quotas for the eggs. Make the geese into managers. Make the geese go to meetings to justify their diet and day to day processes. Demand golden coins from the geese rather than eggs. Demand platinum rather than gold. Require that the geese make plans and explain just how they will make the eggs that will be laid. Etc." (from https://worrydream.com/2017-12-30-alan/)

    I dream of a day where we see more places like the old Bell Labs and Xerox PARC, and where universities strongly value freedom of inquiry with fewer publication and fund-raising pressures. However, given the reality that there are many more prospective researchers than there are research positions that potential funders are willing to support, it's natural that there is some mechanism used to determine which researchers get access to jobs and funding.

    xhkkffbf(10000) 3 days ago [-]

    How? Money.

    There is one problem with the current US system: it overproduces talent. When the US system was growing rapidly, the people could build a long-term career in the US. But nothing can grow forever at an exponential pace. The US continues to pour plenty of money into STEM, but it can't keep up with the pace of grad student production.

    People are making smart, individual decisions to head overseas for work. Places like China are rewarding them.

    anon291(10000) 3 days ago [-]

    > People are making smart, individual decisions to head overseas for work. Places like China are rewarding them.

    Wait what? I know that many Chinese students are staying in China, but this is the first I've heard of a substantial demographic immigrating to China to work there, esp from the US. Do you have data?

    fallingknife(10000) 3 days ago [-]

    It overproduces credentialed morons. Giving someone a degree doesn't confer talent. And when you insist on an ever increasing percentage of the population attend college, the result is exactly as you would expect.

    lvl155(10000) 3 days ago [-]

    Gonna state the obvious: freedom and peace. People mention money but money followed technological boom. And, yes, peace derived from military.

    pphysch(2714) 3 days ago [-]

    You might clarify 'domestic peace'. America has been one of the most secure nations in history from large-scale domestic invasion (it's essentially never happened: Pearl Harbor, isolated terrorist attacks, and 'open borders' don't come close). That said, it has virtually always been actively involved in foreign conflicts and shadow wars during its 250 year history.

    And yes, it's domestic security that enables long-term investment in science.

    zusammen(10000) 3 days ago [-]

    "Indirect costs" were accepted on the theory that this would be used to create job security for professors who did useful work but were not able to secure direct funding.

    Spoiler alert: That job security doesn't exist anymore. A professor who isn't winning grants, even if tenured, is functionally dead. Research doesn't matter except as PR and teaching definitely doesn't matter; the ability to raise grants is the singular determinant of an academic's career.

    Consequently, most academics despise university overhead because it reduces the number of grants to go around and they get nothing for it.

    That does not, of course, mean they support Trump or Musk. Most do not.

    Fomite(10000) 2 days ago [-]

    > "Indirect costs" were accepted on the theory that this would be used to create job security for professors who did useful work but were not able to secure direct funding.

    This is an argument that I have literally never heard, despite being in academia a long time.

    hintymad(10000) 3 days ago [-]

    > Britain remained a leader in theoretical science and defense technology, but its socialist government economic policies led to its failure to commercialize wartime innovations.

    And the detriment of UK's auto industry, manufacturing industry, and etc. I really don't understand how people still fancy state-controlled economy.

    anonymousDan(10000) 2 days ago [-]

    Sorry but this is such a shallow comment. In what way is the US government directing public funding to academic institutions not state control? It's just a different organisational framework that appears to have been more successful.

    cs702(1217) 3 days ago [-]

    Worth reading in its entirety. The following four paragraphs, about post-WWII funding of science in Britain versus the US, are spot-on, in my view:

    > Britain's focused, centralized model using government research labs was created in a struggle for short-term survival. They achieved brilliant breakthroughs but lacked the scale, integration and capital needed to dominate in the post-war world.

    > The U.S. built a decentralized, collaborative ecosystem, one that tightly integrated massive government funding of universities for research and prototypes while private industry built the solutions in volume.

    > A key component of this U.S. research ecosystem was the genius of the indirect cost reimbursement system. Not only did the U.S. fund researchers in universities by paying the cost of their salaries, the U.S. gave universities money for the researchers facilities and administration. This was the secret sauce that allowed U.S. universities to build world-class labs for cutting-edge research that were the envy of the world. Scientists flocked to the U.S. causing other countries to complain of a "brain drain."

    > Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies. Collectively, they spin out over 1,100 science-based startups each year, which lead to countless products and tens of thousands of new jobs. This university/government ecosystem became the blueprint for modern innovation ecosystems for other countries.

    The author's most important point is at the very end of the OP:

    > In 2025, with the abandonment of U.S. government support for university research, the long run of U.S. dominance in science may be over.

    duxup(3407) 3 days ago [-]

    It seems like for all the silliness and inefficiency that comes with a decentralized system ... the decentralized nature of US science research allowed for more 'possibilities' and that paid off economically in spades.

    Like speech, ideas require an open field with a lot of garbage to hit many home runs.

    jimbob45(2509) 3 days ago [-]

    We have to dispense with the silliness of comparing the US with countries a tenth its size. If you want to compare Britain to the US, pick a state of comparable size and do so. Otherwise you're comparing apples to much larger apples.

    jack_h(10000) 3 days ago [-]

    > In 2025, with the abandonment of U.S. government support for university research, the long run of U.S. dominance in science may be over.

    I find it amazing that this is the conclusion when earlier in the article it was stated that '[Britain] was teetering on bankruptcy. It couldn't afford the broad and deep investments that the U.S. made.' The US debt is starting to become an existential problem. Last year the second largest outlay behind social security was the interest payment at a trillion dollars. This is a trillion dollars that cannot be used to provide government services. Over the next 30 years the primary driver of debt will be medicare and interest payments, the former due to demographic shifts and the US being pretty unhealthy overall. Our deficit is (last I checked) projected to be 7.3% of GDP this year. That means that if congress voted to defund the entire military and the entire federal government (park services, FBI, law clerks, congressional salaries, everything) we would still have to borrow. Those two things combined are only ~25% of federal outlays.

    I also reject the idea that this government-university partnership is somehow perfect. Over time bureaucracy tends to increase which increases overhead. This happens in private industry, government, universities, everywhere. However, there is no failure mechanism when it comes to government-university partnerships. At least in the free market inefficient companies will eventually go defunct which frees those resources for more economically useful output. Universities will continue to become more bureaucratic so long as the government keeps sending them more money. All of these economic effects must be viewed over very long periods of time. It's not enough to setup a system, see that it produced positive results, and assume it will continue to do so 80 years later.

    Really this reads like a pleas from special interest groups who receive federal funding. Every special interest group will be doing this. That's the issue though. A lot of special interest groups who have a financial incentive to keep the money flowing despite the looming consequences to the USD.

    oldprogrammer2(3371) 3 days ago [-]

    Systems don't remain constant, though, and every system gets "gamed" once the incentives are well understood. I'm 100% for investment in scientific research, but I'm skeptical that the current system is efficient at allocating the funds. We've seen so many reports of celebrity scientists committing fraud at our most elite institutions, and a publish or perish model that encourages that bad behavior as well as junk science that will have minimal impact on their fields. We pay taxes to fund science so that universities or corporations can claim ownership and make us pay for the results.

    numbers_guy(10000) 3 days ago [-]

    I guess the author is mentioning public funding to try to make a political point, but it does not fit the narrative, because publicly funded research is the norm worldwide.

    The glaring difference in how the US approached R&D is rather the way in which they manage to integrate the private sector, manage to convert research into products and manage to get funded for these rather risky private projects.

    Also, with regards to why researchers flocked to the US, post-WWII, it was for the same reason that other people were flocking to the US (and Canada, and Australia): the new world had good economic prospects.

    dr_dshiv(10000) 3 days ago [-]

    Total? Is this a lot? "Today, U.S. universities license 3,000 patents, 3,200 copyrights and 1,600 other licenses to technology startups and existing companies"

    tehjoker(10000) 3 days ago [-]

    I think the particular method probably pales in comparison to the fact that the US simply had so much more money and resources. The UK is an island nation that lost its empire and was playing second fiddle.

    tkiolp4(3464) 3 days ago [-]

    Such a "simple" solution. Wonder why doing a PhD in the majority of european countries is equal to a poor monthly income. Just pay them more. I guess countries don't like long term solutions.

    begueradj(3645) 2 days ago [-]

    > In 2025, with the abandonment of U.S. government support for university research, the long run of U.S. dominance in science may be over.

    So that could be a political stance...

    mytailorisrich(10000) 2 days ago [-]

    This strikes as starting from the conclusion you want to reach (current funding cuts are bad) and then trying to build a narrative to prove it.

    Post-WWII the US had already become the superpower in science and technology and Europe was struggling to rebuild after the war (e.g. rationing ended in the UK only in 1954).

    The brain drain started before the war, was amplified by the war, and continued after the war because the US were so rich generally. This has continued since. I don't think that what Trump is doing will have an impact because it may not last and the US will still overall much more attractive than, say, Europe.

    Arubis(2979) 3 days ago [-]

    Being the sole western industrialized nation that hadn't just had most of their infrastructure bombed to rubble can't have hurt.

    apercu(10000) 3 days ago [-]

    Absolutely, but what did that give the United States, a 10-year advantage?

    Last time I checked, WWII ended 80 years ago.

    Permit(3125) 3 days ago [-]

    Canada and Australia are smaller but surely count as industrialized western nations (Canada is like 9th by GDP) whose infrastructure was not bombed to rubble.

    VWWHFSfQ(10000) 3 days ago [-]

    The US provided billions in aid and resources under the Marshall Plan to rebuild Europe and especially Japan after the war. And provided billions again to Korea after the Korean War. Japan and South Korea obviously made the most of it with their massive science and technology industries in the post-war era.

    slowking2(10000) 3 days ago [-]

    Also, being far enough from Europe that a huge amount of talent decided the U.S. was a better bet for getting away from the Nazis. And then taking a large number of former Nazi scientist's post-war as well.

    The article mentions but underrates the fact that post-war the British shot themselves in the foot economically.

    As far as I'm aware, the article is kind of wrong that there wasn't a successful British computing industry post war, or at least it's not obvious that it's eventual failure has much to do with differences in basic research structure. There was a successful British computing industry at first, and it failed a few decades later.

    pizzalife(10000) 3 days ago [-]

    Sweden was not bombed.

    blululu(3013) 3 days ago [-]

    >> Prior to WWII the U.S was a distant second in science and engineering. By the time the war was over, U.S. science and engineering had blown past the British, and led the world for 85 years.

    Citation needed. The United States has been a scientific powerhouse for most of its history. On the eve of WWII the United States was the largest producer of automobiles, airplanes and railway trains on earth. It had largest telegraph system, the largest phone system, the most Radio/TV/Movie production & distribution or any country. It had the highest electricity generation. The largest petroleum production/refining capacity. The list goes on. This lead in production was driven by local innovations. Petroleum, electricity, telephones, automobiles and airplanes were all first pioneered in the United States during late nineteenth and early twentieth centuries. We can debate the causes of this but saying that the United States was a 2nd tier power behind the British or the Germans is demonstrably false.

    ViewTrick1002(10000) 3 days ago [-]

    And now come back with per capita numbers.

    jhbadger(10000) 2 days ago [-]

    Americans went to Europe for grad school and/or postdoctoral research in science (especially in chemistry and physics) before WWII, though. We saw ourselves as second rate. People like Oppenheimer, Rabi, Pauling, and just about every other early-mid 20th century chemist or physicist did all or some of their training in Europe, Now, at least until recently, it's been Europe (and the rest of the world) flocking to our universities.

    timeon(10000) 2 days ago [-]

    Depends how you measure it. I vaguely remember that Germany had most Nobel prizes before 1930s.

    chiefalchemist(10000) 2 days ago [-]

    A better title would be: 'How this one time the U.S. became a science superpower'.

    We all know the rule: Past performance is no guarantee of future results.

    Two significant and obvious difference come to mind. I'm sure there are others.

    1) WWII did major physical damage to Europe and Japan, to say nothing of the underlying economic damage (e.g., Britain's war debt handcuffed them). Sans any serious competition, of course the US excelled.

    2) Along the same lines, the US then didn't have the trillions in debt the US has now. Many of the universities seeing their grants cut are well into the black. On the other hand, Uncle Sam is drenched red ink.

    I understand the value of investing. But given the financial fitness of the universities, it feels more like subsidies. Subsidies that aren't benefitting Sam a/o US taxpayers. Yes, Sam can continue to buy success, but at what cost?

    thfuran(10000) 2 days ago [-]

    >Subsidies that aren't benefitting Sam a/o US taxpayers

    Why do you think that?

    metrognome(10000) 2 days ago [-]

    I'm surprised that there's been no mention of Operation Paperclip, neither in the article nor in the comments here. Seems like a huge part of the story to leave out.

    https://en.m.wikipedia.org/wiki/Operation_Paperclip

    mberning(10000) 2 days ago [-]

    Hard to overstate how much effort the US put into collecting all the best scientists in the post WWII world.

    hliyan(1215) 2 days ago [-]

    This is the first thing that struck me. Dangerous to weave narratives where large scale phenomena are elegantly explained by a single cause. It's always a confluence of multiple factors: influx of Nazi scientists, the policy mentioned in the article, the fact that Europe was recovering from a war, and perhaps others we're failing to notice.

    A favorite example of mine is the idea that World War 1 would not have happened if only Duke Ferdinand's driver had been told of the route change during the Sarajevo visit.

    casey2(10000) 3 days ago [-]

    Right from the first paragraph I know this is just nonsense that is only being posted because of currentpoliticalthing

    The US leapfroged the rest of the world in both science and engineering by it's civil war, this isn't disputable. It could only do that because of decade long tariffs that existed solely to protect it's nascent manufacturing industry.

    People have constructed so many myths about WW2 it's crazy.

    GDP: 1871 the US passes GB By 1900 the US economy was double GB's size. by 1910 they've already passed them by GDP per capita. INDUSTRIAL OUTPUT: Again 1870s. You can't really untie science from industrial output. Is there argument here that the US was behind scientifically because of Nobel prizes? If you narrowly define science as 'things europeans liked to research' then I guess. But even by that definition Americans were discovering new drugs such as Actinomycin D as early as 1940, during, not after, WW2 and before they entered. So unless people like Waksman (educated in America) count as braindrain 30 years before the fact I don't think the argument is credible.

    The UK failed to mass produce penicillin. It's this industrial ineptitude that caused 'brain drain'.

    blululu(3013) 3 days ago [-]

    Was it tarrifs or just a large, highly educated population with a unified market? The US has always been one of the leaders in education and scientific research on a per capita basis. Even in the 1770s you har people like Franklin working on cutting edge physics (the standard sign convention for charge is still flipped because of him). At some point it also just outgrew all the other countries in terms of size and it naturally became the global leader around that time.

    DrNosferatu(10000) 3 days ago [-]

    Time for the EU to take the place of the US.

    Gigachad(10000) 3 days ago [-]

    China is probably more likely to take over in science.

    ijidak(2930) 3 days ago [-]

    > By the time the war was over, U.S. science and engineering had blown past the British, and led the world for 85 years

    Was this written in 2030? The war ended in 1945.

    Just a minor nit... It was jarring to see a statement of questionable accuracy in the opening paragraph.

    layer8(860) 2 days ago [-]

    If you read carefully, there is no strict implication that the 85 years of leading only begun after the end of the war. If it began 1940, the quoted sentence would still be correct.

    MarkusWandel(3562) 3 days ago [-]

    It also didn't hurt that a certain European science superpower started purging academics based on ideology, said academics being more than welcome in the USA. Wait a minute...

    koakuma-chan(10000) 2 days ago [-]

    I'm pretty sure the US is currently pushing for merit-based admission.





    Historical Discussions: CVE Foundation (April 16, 2025: 440 points)

    (440) CVE Foundation

    440 points 2 days ago by layer8 in 860th position

    www.thecvefoundation.org | Estimated reading time – 2 minutes | comments | anchor

    FOR IMMEDIATE RELEASE

    April 16, 2025

    CVE Foundation Launched to Secure the Future of the CVE Program

    [Bremerton, Washington] – The CVE Foundation has been formally established to ensure the long-term viability, stability, and independence of the Common Vulnerabilities and Exposures (CVE) Program, a critical pillar of the global cybersecurity infrastructure for 25 years.

    Since its inception, the CVE Program has operated as a U.S. government-funded initiative, with oversight and management provided under contract. While this structure has supported the program's growth, it has also raised longstanding concerns among members of the CVE Board about the sustainability and neutrality of a globally relied-upon resource being tied to a single government sponsor.

    This concern has become urgent following an April 15, 2025 letter from MITRE notifying the CVE Board that the U.S. government does not intend to renew its contract for managing the program. While we had hoped this day would not come, we have been preparing for this possibility.

    In response, a coalition of longtime, active CVE Board members have spent the past year developing a strategy to transition CVE to a dedicated, non-profit foundation. The new CVE Foundation will focus solely on continuing the mission of delivering high-quality vulnerability identification and maintaining the integrity and availability of CVE data for defenders worldwide.

    "CVE, as a cornerstone of the global cybersecurity ecosystem, is too important to be vulnerable itself," said Kent Landfield, an officer of the Foundation. "Cybersecurity professionals around the globe rely on CVE identifiers and data as part of their daily work—from security tools and advisories to threat intelligence and response. Without CVE, defenders are at a massive disadvantage against global cyber threats."

    The formation of the CVE Foundation marks a major step toward eliminating a single point of failure in the vulnerability management ecosystem and ensuring the CVE Program remains a globally trusted, community-driven initiative. For the international cybersecurity community, this move represents an opportunity to establish governance that reflects the global nature of today's threat landscape.

    Over the coming days, the Foundation will release more information about its structure, transition planning, and opportunities for involvement from the broader community.

    For updates or inquiries, contact: [email protected].




    All Comments: [-] | anchor

    LiamPowell(10000) 2 days ago [-]

    Edit: See other comments. Some CVE board members have posted this on their social media accounts however there's still nothing on any official CVE channels. It's a little concerning that this was upvoted to the top of the front page before those comments had been posted given that this is a newly registered domain running on Google sites for something that it says has been in the works for a year.

    Original comment:

    Why is this being upvoted? There's no reference to it on the CVE website and the domain was only registered after the letter leaked despite the website claiming this was in the works for a year.

    Additionally the WHOIS claims that the registrant is 'CVE Foundation' which can not be found using the IRS search tool for tax-exempt organisations (note that MITRE does show up here): https://apps.irs.gov/app/eos/

    stavros(1602) 2 days ago [-]

    We're all just happy to see it.

    _verandaguy(10000) 2 days ago [-]

    Seconding this. A program like CVE still has to be built on (to some extent, and at least in the initial stages) traditional, non-cryptographic trust.

    Who runs this thing? Who's funding it? Who's reviewing, testing, and approving the reports? Assigning them IDs?

    I'm hoping for the best, and I'm willing to give the benefit of the doubt because of the frankly crap timing around this whole mess, but on its face, in its current state, I wouldn't trust this org at all.

    inktype(10000) 2 days ago [-]

    Comments are understandably negative as the press release has very little information, but I clicked vouch because I have a reason to believe it is legitimate

    edent(89) 2 days ago [-]

    Care to share your reason with the rest of the class?

    OtherShrezzing(10000) 2 days ago [-]

    This is a Google Workspace site thrown up 11hrs ago, and doesn't appear to be linked to from any official source.

    I don't think it's credible that CVE as an organisation would produce this website and not link to it from their official site or social media accounts.

    pama(1887) 2 days ago [-]

    There is hope people will report this site and google will take it down quickly.

    hobofan(10000) 2 days ago [-]

    To all the comments doubting the legitimacy:

    Here is a LinkedIn post by one of the CVE board members (literally the first one on the list here[0]): https://www.linkedin.com/posts/peterallor_cve-foundation-act...

    I'm sure if you look at some of the contact information of other CVE board members and their broadcasting platforms you will also find something.

    [0]: https://www.cve.org/programorganization/board

    layer8(860) 2 days ago [-]

    Tod Beardsley seems to confirm it as well: https://infosec.exchange/@todb

    alexmorley(3190) 2 days ago [-]

    Edit suggests the contract has been renewed last minute.

    https://www.forbes.com/sites/kateoflahertyuk/2025/04/16/cve-...

    Shank(940) 2 days ago [-]

    Are there any non-Forbes sources that confirm this?

    bildiba(10000) 2 days ago [-]

    I haven't been actively monitoring for security vulnerabilities ever since I switched from system administration to software development a few decades back. These days, I just read news that talks about high profile vulnerabilities - I do see CVE a lot more than cert.

    We used to look at cert: https://www.kb.cert.org/vuls/ I just did a quick search to confirm that it is still there.

    What's the difference/relationship between the two?

    iterance(10000) 2 days ago [-]

    The primary difference is that CVE was unexpectedly killed by the US Government yesterday and the program terminates today.

    Vox_Leone(10000) 1 day ago [-]

    I think it's time the biggest players in the software industry step up, maybe through a formal consortium. This model would make sense because they benefit the most. Big tech companies rely on CVEs to secure their own products;

    They have the means. With their massive revenue and dedicated security teams, these companies could easily fund CVE operations. A consortium approach spreads responsibility fairly;

    Shared responsibility, shared benefits. Security is everyone's problem.

    jpleger(10000) 1 day ago [-]

    Hahaha, CVE was created because industry refused to track and report on things in a consistent and transparent manner. When given the option, business will almost always choose the easy path, and things like vulnerability management programs will be set back years if not decades when the external accountability goes away.

    In general, lawyers and CTOs would probably love to see CVE go away or be taken over by industry.

    Source: been working in security for 20+ years.

    nonrandomstring(10000) 1 day ago [-]

    The last people I am ever going to trust about matters of security is US BigTech. Consortium or not. This idea has no legs. We absolutely need an international cyber threat intelligence network, with many checks, balances and oversights. If we're going to ask 'who funds it?' then we need to ask 'who really benefits from a technology industry?'

    blitzar(10000) 1 day ago [-]

    > biggest players in the software industry step up

    While they are at it maybe chuck $5 to the dev maintaining the open source package that your trillion dollar corporation relies on, that your 50,000 leetcoders can't figure out how to write or live without.

    ta1243(10000) 2 days ago [-]

    Yeah, in the USA, where organisations and officers are continually threatened by an adversarial government.

    No thanks.

    Harvard for example doesn't kow-tow to the reigime, and look what happens. Non-profits in the USA are not independent.

    ape4(10000) 2 days ago [-]

    Its not hard to imagine the current regime complaining about a CVE issued about a product made by a favored company - eg x.com

    throwawaymaths(10000) 2 days ago [-]

    A non profit is independent if they don't take federal money? Like EFF, for example.

    Maybe CVEs should be tracked by a nongovernmental agency, like how UL works.

    odo1242(10000) 2 days ago [-]

    Harvard takes a lot of federal money. On the order of millions to billions of dollars.

    excalibur(10000) 2 days ago [-]

    The letter was dated yesterday, and in response they spent the past year working on this?

    HelloNurse(10000) 2 days ago [-]

    'While we had hoped this day would not come, we have been preparing for this possibility.

    In response, a coalition ...'

    This sounds like secret, unofficial contingency planning; 'this day' has apparently come very suddenly.





    Historical Discussions: 12-factor Agents: Patterns of reliable LLM applications (April 15, 2025: 434 points)
    12-factor-agents: principles to build LLM software good enough for production (April 11, 2025: 1 points)

    (433) 12-factor Agents: Patterns of reliable LLM applications

    433 points 3 days ago by dhorthy in 3524th position

    github.com | Estimated reading time – 8 minutes | comments | anchor

    12 Factor Agents - Principles for building reliable LLM applications

    In the spirit of 12 Factor Apps. The source for this project is public at https://github.com/humanlayer/12-factor-agents, and I welcome your feedback and contributions. Let's figure this out together!

    Hi, I'm Dex. I've been hacking on AI agents for a while.

    I've tried every agent framework out there, from the plug-and-play crew/langchains to the 'minimalist' smolagents of the world to the 'production grade' langraph, griptape, etc.

    I've talked to a lot of really strong founders, in and out of YC, who are all building really impressive things with AI. Most of them are rolling the stack themselves. I don't see a lot of frameworks in production customer-facing agents.

    I've been surprised to find that most of the products out there billing themselves as 'AI Agents' are not all that agentic. A lot of them are mostly deterministic code, with LLM steps sprinkled in at just the right points to make the experience truly magical.

    Agents, at least the good ones, don't follow the 'here's your prompt, here's a bag of tools, loop until you hit the goal' pattern. Rather, they are comprised of mostly just software.

    So, I set out to answer:

    What are the principles we can use to build LLM-powered software that is actually good enough to put in the hands of production customers?

    Welcome to 12-factor agents. As every Chicago mayor since Daley has consistently plastered all over the city's major airports, we're glad you're here.

    Special thanks to @iantbutler01, @tnm, @hellovai, @stantonk, @balanceiskey, @AdjectiveAllison, @pfbyjy, @a-churchill, and the SF MLOps community for early feedback on this guide.

    The Short Version: The 12 Factors

    Even if LLMs continue to get exponentially more powerful, there will be core engineering techniques that make LLM-powered software more reliable, more scalable, and easier to maintain.

    For a deeper dive on my agent journey and what led us here, check out A Brief History of Software - a quick summary here:

    We're gonna talk a lot about Directed Graphs (DGs) and their Acyclic friends, DAGs. I'll start by pointing out that...well...software is a directed graph. There's a reason we used to represent programs as flow charts.

    Around 20 years ago, we started to see DAG orchestrators become popular. We're talking classics like Airflow, Prefect, some predecessors, and some newer ones like (dagster, inggest, windmill). These followed the same graph pattern, with the added benefit of observability, modularity, retries, administration, etc.

    I'm not the first person to say this, but my biggest takeaway when I started learning about agents, was that you get to throw the DAG away. Instead of software engineers coding each step and edge case, you can give the agent a goal and a set of transitions:

    And let the LLM make decisions in real time to figure out the path

    The promise here is that you write less software, you just give the LLM the 'edges' of the graph and let it figure out the nodes. You can recover from errors, you can write less code, and you may find that LLMs find novel solutions to problems.

    As we'll see later, it turns out this doesn't quite work.

    Let's dive one step deeper - with agents you've got this loop consisting of 3 steps:

    1. LLM determines the next step in the workflow, outputting structured json ('tool calling')
    2. Deterministic code executes the tool call
    3. The result is appended to the context window
    4. repeat until the next step is determined to be 'done'
    initial_event = {'message': '...'}
    context = [initial_event]
    while True:
      next_step = await llm.determine_next_step(context)
      context.append(next_step)
      if (next_step.intent === 'done'):
        return next_step.final_answer
      result = await execute_step(next_step)
      context.append(result)

    Our initial context is just the starting event (maybe a user message, maybe a cron fired, maybe a webhook, etc), and we ask the llm to choose the next step (tool) or to determine that we're done.

    Here's a multi-step example:

    027-agent-loop-animation.mp4
    GIF Version

    ]

    At the end of the day, this approach just doesn't work as well as we want it to.

    In building HumanLayer, I've talked to at least 100 SaaS builders (mostly technical founders) looking to make their existing product more agentic. The journey usually goes something like:

    1. Decide you want to build an agent
    2. Product design, UX mapping, what problems to solve
    3. Want to move fast, so grab $FRAMEWORK and get to building
    4. Get to 70-80% quality bar 5a. Realize that 80% isn't good enough for most customer-facing features 5b. Realize that getting past 80% requires reverse-engineering the framework, prompts, flow, etc
    5. Start over from scratch
    Random Disclaimers

    DISCLAIMER: I'm not sure the exact right place to say this, but here seems as good as any: this in BY NO MEANS meant to be a dig on either the many frameworks out there, or the pretty dang smart people who work on them. They enable incredible things and have accelerated the AI ecosystem.

    I hope that one outcome of this post is that agent framework builders can learn from the journeys of myself and others, and make frameworks even better.

    Especially for builders who want to move fast but need deep control.

    DISCLAIMER 2: I'm not going to talk about MCP. I'm sure you can see where it fits in.

    DISCLAIMER 3: I'm using mostly typescript, for reasons but all this stuff works in python or any other language you prefer.

    Anyways back to the thing...

    Design Patterns for great LLM applications

    After digging through hundreds of AI libriaries and working with dozens of founders, my instinct is this:

    1. There are some core things that make agents great
    2. Going all in on a framework and building what is essentially a greenfield rewrite may be counter-productive
    3. There are some core principles that make agents great, and you will get most/all of them if you pull in a framework
    4. BUT, the fastest way I've seen for builders to get high-quality AI software in the hands of customers is to take small, modular concepts from agent building, and incorporate them into their existing product
    5. These modular concepts from agents can be defined and applied by most skilled software engineers, even if they don't have an AI background

    The fastest way I've seen for builders to get good AI software in the hands of customers is to take small, modular concepts from agent building, and incorporate them into their existing product

    Honorable Mentions / other advice




    All Comments: [-] | anchor

    mgdev(10000) 1 day ago [-]

    These are great. I had my own list of takeaways [0] after doing this for a couple years, though I wouldn't go so far as calling mine factors.

    Like you, biggest one I didn't include but would now is to own the lowest level planning loop. It's fine to have some dynamic planning, but you should own an OODA loop (observe, orient, decide, act) and have heuristics for determining if you're converging on a solution (e.g. scoring), or else breaking out (e.g. max loops).

    I would also potentially bake in a workflow engine. Then, have your model build a workflow specification that runs on that engine (where workflow steps may call back to the model) instead of trying to keep an implicit workflow valid/progressing through multiple turns in the model.

    [0]: https://mg.dev/lessons-learned-building-ai-agents/

    dhorthy(3524) about 20 hours ago [-]

    this guide is great, i liked the 'chat interfaces are dumb' take - totally agree. AI-based UIs have a very long way to go

    mertleee(10000) 3 days ago [-]

    What are your favorite open source 'frameworks' for agents?

    dhorthy(3524) 2 days ago [-]

    i have seen a ton of good ones, and they all have ups and downs. I think rather than focusing on frameworks though, I'm trying to dig into what goes into them, and what's the tradeoff if you try to build most of it yourself instead

    but since you asked, to name a few

    - ts: mastra, gensx, vercel ai, many others! - python: crew, langgraph, many others!

    nickenbank(10000) 1 day ago [-]

    I totally agree with this. Most, if not all, frameworks or building agents are a waste of time

    dhorthy(3524) 1 day ago [-]

    this guy gets it

    hellovai(3617) 1 day ago [-]

    really cool to see BAML on here :) 100% align on so much of what you've said here. its really about treating LLMs as functions.

    dhorthy(3524) about 20 hours ago [-]

    excellent work on BAML and love it as a building block for agents

    DebtDeflation(10000) 1 day ago [-]

    > most 'AI Agents' that make it to production aren't actually that agentic. The best ones are mostly just well-engineered software with LLMs sprinkled in at key points

    I've been saying that forever, and I think that anyone who actually implements AI in an enterprise context has come to the same conclusion. Using the Anthropic vernacular, AI 'workflows' are the solution 90% of the time and AI 'agents' maybe 10%. But everyone wants the shiny new object on their CV and the LLM vendors want to bias the market in that direction because running LLMs in a loop drives token consumption through the roof.

    peab(10000) 1 day ago [-]

    I keep trying to tell my PM this

    film42(3674) 1 day ago [-]

    Everyone wants to go the agent route until the agent messes up once after working 99 times in a row. 'Why did it make a silly mistake?' We don't know. 'Well, let's put a few more guard rails around it.' Sounds good... back to 'workflows.'

    daxfohl(10000) 1 day ago [-]

    I think it got started as AI tools for things like cancer detection based purely on deep learning started to outperform tools where humans guide the models what to look for. The expectation became that eventually this will happen for LLM agents too if only we can add more horsepower. But it seems like we've hit a bit of a ceiling there. The latest releases from OpenAI and Meta were largely duds despite their size, still very far from anything you'd trust for anything important, and there's nothing left to add to their training corpus that isn't already there.

    Of course a new breakthrough could happen any day and get through that ceiling. Or 'common sense' may be something that's out of reach for a machine without life experience. Until that shakes out, I'd be reluctant to make any big bets on any AI-for-everything solutions.

    daxfohl(10000) 1 day ago [-]

    Another one: plan for cost at scale.

    These things aren't cheap at scale, so whenever something might be handled by a deterministic component, try that first. Not only save on hallucinations and latency, but could make a huge difference in your bottom line.

    dhorthy(3524) about 21 hours ago [-]

    Yeah definitely. I think the pattern I see people using most is "start with slow, expensive, but low dev effort, and then refine overtime as you fine speed/quality/cost bottlenecks worth investing in"

    daxfohl(10000) 1 day ago [-]

    This old obscure blog post about framework patterns has resonated with me throughout my career and I think it applies here too. LLMs are best used as 'libraries' rather than 'frameworks', for all the reasons described in the article and more, especially now while everything is in such flux. 'Frameworks' are sexier and easier to sell though, and lead to lock-in and add-on services, so that's what gets promoted.

    https://tomasp.net/blog/2015/library-frameworks/

    saadatq(10000) 1 day ago [-]

    This is so good...

    "... you can find frameworks not just in software, but also in ordinary life. If you buy package holidays, you're buying a framework - they transport you to some place, put you in a hotel, feed you and your activities have to fit into the shape provided by the framework (say, go into the pool and swim there). If you travel independently, you are composing libraries. You have to book your flights, find your accommodation and arrange your program (all using different libraries). It is more work, but you are in control - and you can arrange things exactly the way you need."

    pancsta(10000) 2 days ago [-]

    Very informative wiki, thank you, I will definitely use it. So Ive made my own 'AI Agents framework' [0] based on actor model, state machines and aspect oriented programming (released just yesterday, no HN post yet) and I really like points 5 and 7:

        5: Unify execution state and business state
        8. Own your control flow
    
    That is exactly what SecAI does, as it's a graph control flow library at it's core (multigraph instead of DAG) and LLM calls are embedded into graph's nodes. The flow is reinforced with negotiation, cancellation and stateful relations, which make it more 'organic'. Another thing often missed by other frameworks are dedicated devtools (dbg, repl, svg) - programming for failure, inspecting every step in detail, automatic data exporters (metrics, traces, logs, sql), and dead-simple integrations (bash). I've released the first tech demo [1] which showcases all the devtools using a reference implementation of deepresearch (ported from AtomicAgents). You may especially like the Send/Stop button, which is nothings else then 'Factor 6. Launch/Pause/Resume with simple APIs'. Oh and it's network transparent, so it can scale.

    Feel free to reach out.

    [0] https://github.com/pancsta/secai

    [1] https://youtu.be/0VJzO1S-gV0

    dhorthy(3524) 2 days ago [-]

    i like the terminal UI and otel integrations - what tasks are you using this for today?

    wfn(3441) 1 day ago [-]

    This is great, thank you so much for sharing!

    serverlessmania(3634) 1 day ago [-]

    'Another thing often missed by other frameworks are dedicated devtools'

    From my experience, PydanticAI really nailed it with Logfire—debugging[0] agents was significantly easier and more effective compared to the other frameworks and libraries I tested.

    [0] https://ai.pydantic.dev/logfire/#pydantic-logfire

    hhimanshu(10000) 1 day ago [-]

    I am wondering how libraries like DSPY [0] fits in your factor-2 [1]

    As I was reading, I saw mention of BAML > (the above example uses BAML to generate the prompt ...

    Personally, in my experience hand-writing prompts for extracting structured information from unstructured data has never been easy. With DSPY, my experience has been quite good so far.

    As you have used raw prompt from BAML, what do you think of using the raw prompts from DSPY [2]?

    [0] https://dspy.ai/

    [1] https://github.com/humanlayer/12-factor-agents/blob/main/con...

    [2] https://dspy.ai/tutorials/observability/#using-inspect_histo...

    dhorthy(3524) about 20 hours ago [-]

    interesting - I think I have to side with the Boundary (YC W23) folks on this one - if you want bleeding edge performance, you need to be able to open the box and hack on the insides.

    I don't agree fully with this article https://www.chrismdp.com/beyond-prompting/ but the comparison of punchards -> assembly -> c -> higher langs is quite useful here

    I just don't know when we'll get the right abstraction - i don't think langchain or dspy are the 'C programming language' of AI yet (they could get there!).

    For now I'll stick to my 'close to the metal' workbench where I can inspect tokens, reorder special tokens like system/user/JSON, and dynamically keep up with the idiosyncrasies of new models without being locked up waiting for library support.

    wfn(3441) 1 day ago [-]

    This could not have come at a better time for me, thank you!

    I've been tinkering with an idea for an audiovisual sandbox[1] (like vvvv[2] but much simpler of course, barebones).

    Idea is to have a way to insert LM (or some simple locally run neural net) 'nodes' which are given specific tasks and whose output is expected to be very constrained. Hence your example:

        'question -> answer: float'
    
    Is very attractive here. Of course, some questions in my case would be quite abstract, but anyway. Also, multistage pipelines are also very interesting.

    [1]: loose set of bulletpoints brainstorming the idea if curious, not organised: https://kfs.mkj.lt/#audiovisllm (click to expand description)

    [2]: https://vvvv.org/

    dhorthy(3524) about 21 hours ago [-]

    Typed outputs from an LLM is a game changer!

    darepublic(10000) about 17 hours ago [-]

    I didn't really read this extensively but to me I would want to use as much deterministic code as possible and leverage the llm as little as possible. That to me is a better portend of predictable result, lower operational costs and is a signal that nobody could just quickly reproduce the same app. I would tend to roll my own tools and not use out of the box buzz word glue to integrate my llm with other systems. And if these conditions aren't met or aren't necessary I'd figure someone else could just vibe code the same solution in no time anyway. Keep control I say! Die on the hill of control! That's not to say I'm not impressed by LLMs.. quite the opposite

    dhorthy(3524) about 16 hours ago [-]

    control is good, and determinism is good - while the primary goal is to convince people 'don't give up too much control' - there is a secondary which is: THESE are the places where it makes sense to give up some control

    mettamage(3341) about 24 hours ago [-]

    I've noticed some of these factors myself as well. I'd love to build more AI applications like this. Currently I'm a data analyst and they don't fully appreciate that I can build stuff like this as it is not a technology oriented company.

    I'd love to work on stuff like this full-time. If anyone is interested in a chat, my email is on my profile (US/EU).

    dhorthy(3524) about 21 hours ago [-]

    cool thing about open source is you can work on whatever you want, and it's the best way to meet people who do similar work for their day job as well





    Historical Discussions: Fedora change aims for 99% package reproducibility (April 11, 2025: 431 points)

    (431) Fedora change aims for 99% package reproducibility

    431 points 7 days ago by voxadam in 666th position

    lwn.net | Estimated reading time – 11 minutes | comments | anchor

    This article brought to you by LWN subscribers

    Subscribers to LWN.net made this article — and everything that surrounds it — possible. If you appreciate our content, please buy a subscription and make the next set of articles possible.

    By Joe Brockmeier March 31, 2025

    The effort to ensure that open-source software is reproducible has been gathering steam over the years, and gaining traction with major Linux distributions. Debian, for example, has been working toward reproducible builds for more than a decade; it can now produce official live CDs of the current stable release that are reproducible. Fedora started on the path much later, but it has progressed far enough that the project is now considering a change proposal for the Fedora 43 development cycle, expected to be released in October, with a goal of making 99% of Fedora's package builds reproducible. So far, reaction to the proposal seems favorable and focused primarily on how to achieve the goal—with minimal pain for packagers—rather than whether to attempt it.

    Defining reproducible builds

    The Reproducible Builds project defines a build as reproducible if 'given the same source code, build environment and build instructions, any party can recreate bit-by-bit identical copies of all specified artifacts'. In a 2023 hackfest report, Zbigniew Jędrzejewski-Szmek said that Fedora has not prioritized reproducible builds in the past because Fedora has more control over its build process than Debian and other distributions. Because Debian allows maintainers to generate source packages on their local system and to upload some locally built packages for distribution to users, he said that 'trust in the contents of both source and binary packages is low.' (Debian's build daemons build most binary packages from source for distribution to users, but there are exceptions.) Fedora, on the other hand, exercises much more control over packages.

    In Fedora, all packages that are distributed to users are built in the centralized, strongly controlled infrastructure. All source rpms are built from 'dist-git': a git repository which contains the build 'recipe' and a cryptographic hash of package sources, so it is relatively easy to verify what changed between package versions, what 'inputs' went into a particular source package, and in what environment the binary packages were built.

    However, even though Fedora has a tighter control over its packages, Jędrzejewski-Szmek said that one of the benefits of reproducible builds was to help detect and mitigate any kind of supply-chain attack on Fedora's builders and allow others to perform independent verification that the package sources match the binaries that are delivered by Fedora. It's interesting to note that Fedora had embarked on this work before the XZ backdoor drew even more attention to supply-chain attacks.

    He acknowledges that Debian is more advanced in its reproducible builds processes, and notes that Fedora is setting a different definition for reproducible builds. This definition excludes signatures and some metadata and focuses solely on the payload of packaged files in a given RPM:

    A build is reproducible if given the same source code, build environment and build instructions, and metadata from the build artifacts, any party can recreate copies of the artifacts that are identical except for the signatures and parts of metadata.

    The reason Fedora is pursuing a different definition of reproducible build is that it cannot achieve 'bit-by-bit' reproducibility by the original definition. This is because of differences in the package format and the way that Fedora builds its packages. RPMs embed the package signature in the RPM when they are built, but Debian uses detached signatures. RPMs also include information, such as the build time (BUILDTIME) and build host (BUILDHOST) in the RPM's header, that can affect reproducibilty. There was a discussion about allowing these variables to be overridden. However, the prevailing opinion was that the information provided by BUILDHOST is useful, and overriding its inclusion is not desirable. The contents, however, should still be 'bit-by-bit' identical, even though that phrase does not turn up in Fedora's definition.

    The openSUSE project, which also distributes software using the RPM format, sets BUILDHOST to 'reproducible', according to Jan Zerebecki. The actual build host is printed in the build logs, and interested users can search openSUSE's build logs to find the host.

    Path to reproducibility

    For BUILDTIME, openSUSE sets the build time to the date of the latest changelog entry. This is provided to builds by the SOURCE_DATE_EPOCH environment variable. This is where Fedora's reproducible builds work began, with a change that was made during the Fedora 38 development cycle to 'clamp' the modification time (mtime) of packaged files to SOURCE_DATE_EPOCH. This ensured that the mtimes were independent of the time of an actual build. Packagers were given the ability to opt-out of this if, for some reason, their package would be broken by the new behavior.

    During the Fedora 41 development cycle, the project implemented another change in the RPM build process to remove common sources of irreproducibility. That change made use of a Rust program, add-determinism, that attempts to standardize metadata in binary or source files to ensure consistency. It is similar to Debian's strip-nondeterminism, which is a Perl library that is part of the debhelper tool for building Debian packages. Using strip-nondeterminism, the debhelper tool removes non-deterministic information such as timestamps and filesystem ordering from various file and archive formats. The Fedora project chose to write its own tool because it was undesirable to pull Perl into the build root for every package.

    According to the new change proposal, the modifications to Fedora's build infrastructure to date have allowed it to make 90% of package builds reproducible. The goal now is to reach 99% of package builds. It appears that Fedora has gotten as much mileage out of infrastructure changes, without requiring individual packagers to deal with reproducibility problems, as it can. To get to 99% the project is going to have to ask packagers to treat reproducibility problems in their packages as bugs.

    The change owners—Jędrzejewski-Szmek, Davide Cavalca, and Jelle van der Waa—would package the fedora-repro-build utility to allow developers to make local rebuilds of packages built in Koji (Fedora's build system) to test their reproducibility. It will also require standing up a public instance of rebuilderd, which is a system for providing independent verification that binary packages can be reproduced from source code. It can scan a package repository's metadata for new or updated packages and then queue them for rebuilding, and it provides an API to query for the reproducibility status of packages. Rebuilderd can also, optionally, use the diffoscope tool to generate a report of differences. The Arch Linux reproducible status page provides a good example of rebuilderd in use.

    If accepted, the proposal would also require an update to Fedora's packaging guidelines that would say packages should (not, at least currently, 'must') build reproducibly and allow bugs to be filed against packages when they are not reproducible.

    Aside from the security benefits of reproducibility, the proposal also makes the case that it will lead to packages of higher quality. Irreproducible bits in packages are quite often 'caused by an error or sloppiness in the code'. For example, dependence on hardware architecture in architecture-independent (noarch) packages is 'almost always unwanted and/or a bug', and reproducibility tests can uncover those bugs.

    The proposal acknowledges that some packages will have problems with reproducibility that cannot be fixed easily. For example, Haskell packages are not currently reproducible when compiled by more than one thread, though a fix is being worked on. Packages produced with Go have debug data that is not reproducible because the GNU Debugger index file (.gdb_index) can be of varying size even given the same input. No fix is yet in the works for that. Another known problem is that the Linux kernel uses an ephemeral key for module signatures. LWN covered a patch set from Thomas Weißschuh that may solve that problem.

    Feedback

    In the discussion thread on Fedora's Discourse forum, Fedora's infrastructure lead Kevin Fenzi asked, 'where will this [rebuilderd] instance live and who will maintain it? 🙂' He also noted it would be good to have documentation on setting up a rebuilderd instance. 'Otherwise I like the idea!' Cavalca said that the reproducibility work was currently using an Amazon Web Services (AWS) account sponsored by Meta, but 'we can look at moving into Fedora infra if there's a preference for that'. Fenzi replied that it might be good to keep running the work outside Fedora infrastructure to make it more independent. 'Although of course we could run one and then others could run others and compare'.

    Daniel P. Berrangé asked if rebuilderd could be integrated with Koji so that maintainers did not have to learn another build tool. 'I'm pretty unenthusiastic about dealing with yet another standalone web service providing post-build testing.' Jędrzejewski-Szmek said that using Koji to perform the build was an interesting idea, but 'we also want our rebuilds to be as independent as possible', so it would still be desirable to do them in a system other than Koji. Rebuilding a package the second time in the same build environment means 'we are not testing much'.

    Miroslav Suchý, a member of Fedora's infrastructure team, wondered if rebuilderd could submit builds to Fedora's Copr build system instead of standing up yet another build system in Fedora. This led to a discussion about Copr's capabilities and whether it would integrate well with rebuilderd. Jędrzejewski-Szmek noted that rebuilderd is a 'complete project that does things in its own way' and it may be complicated to try to teach it to talk to an external service asynchronously.

    Integrating rebuilderd tooling and reports into Fedora's existing infrastructure has been a recurring theme in the discussion. Simon de Vlieger said he was not set on having builds performed in Koji, but wanted the project 'to integrate well with Fedora's pre-existing tools and things so it has the highest chance of people actually using it' and performing as people expect.

    Next

    The next step for the proposal is to file a ticket with the Fedora Engineering Steering Committee (FESCo), at least one week after the proposal was announced. In this case, that would be no sooner than March 26. If FESCo approves, the owners can begin work on the proposal with an eye to completion by October, when Fedora 43 is planned for release.

    Most of Fedora's users have probably not noticed the reproducibility work in Fedora thus far and won't appreciate any difference when they install Fedora 43 (or 44, 45, and so on). However, given the continual efforts of bad actors to find and exploit supply-chain weaknesses in open-source projects, it is a valuable effort nonetheless.





    All Comments: [-] | anchor

    ajross(10000) 7 days ago [-]

    Linux folks continue with running away with package security paradigms while NPM, PyPI, cargo, et. al. (like that VSCode extension registry that was on the front page last week) think they can still get away with just shipping what some rando pushes.

    hedora(3373) 7 days ago [-]

    Shipping what randos push works great for iOS and Android too.

    System perl is actually good. It's too bad the Linux vendors don't bother with system versions of newer languages.

    anotherhue(2703) 7 days ago [-]

    I have observed a sharp disconnect in the philosophies of 'improving developer experience' and 'running a tight ship'.

    I think the last twenty years of quasi-marketing/sales/recruiting DevRel roles have pushed a narrative of frictionless development, while on the flip side security and correctness have mostly taken a back seat (special industries aside).

    I think it's a result of the massive market growth, but I so welcome the pendulum swinging back a little bit. Typo squatting packages being a concern at the same time as speculative execution exploits shows mind bending immaturity.

    esafak(10000) 7 days ago [-]

    The future is not evenly distributed.

    Palomides(10000) 7 days ago [-]

    distros get unbelievable amounts of hate for not immediately integrating upstream changes, there's really no winning

    tsimionescu(10000) 7 days ago [-]

    I think the opposite is mostly true. Linux packaging folks are carefully sculpting their toys, while everyone else is mostly using upstream packages and docker containers to work around the beautiful systems. For half the software I care about on my Debian system, I have a version installed either directly from the web (curl | bash style), from the developer's own APT repo, or most likely from a separate package manager (be it MELPA, pypi, Go cache, Maven, etc).

    sheepscreek(10000) 7 days ago [-]

    YES! I want more tools to be deterministic. My wish-list has Proxmox config at the very top.

    TheDong(10000) 7 days ago [-]

    Want to give this a try and see if it works? https://github.com/SaumonNet/proxmox-nixos?tab=readme-ov-fil...

    knowitnone(10000) 7 days ago [-]

    99%? Debbie Downer says it only takes 1 package to screw the pooch

    ethersteeds(10000) 7 days ago [-]

    I would still much prefer playing 100:1 Russian roulette than 1:1, if those are my options.

    nwah1(3635) 7 days ago [-]

    There's a long tail of obscure packages that are rarely used, and almost certainly a power law in terms of which packages are common. Reproducibility often requires coordination between both the packagers and the developers, and achieving that for each and every package is optimistic.

    If they just started quarantining the long tail of obscure packages, then people would get upset. And failing to be 100% reproducible will make a subset of users upset. Lose-lose proposition there, given that intelligent users could just consciously avoid packages that aren't passing reproducibility tests.

    100% reproducibility is a good goal, but as long as the ubiquitous packages are reproducible then that is probably going to cover most. Would be interesting to provide an easy way to disallow non-reproducible packages.

    I'm sure one day they will be able to make it a requirement for inclusion into the official repos.

    EasyMark(3653) 7 days ago [-]

    'All I see is 1% of complete failure' --Bad Dads everywhere

    nimish(3665) 7 days ago [-]

    As a user of fedora what does this actually get me? I mean I understand it for hermetic builds but why?

    jacobgkau(10000) 7 days ago [-]

    My impression is that reproducible builds improve your security by helping make it more obvious that packages haven't been tampered with in late stages of the build system.

    * Edit, it's quoted in the linked article:

    > Jędrzejewski-Szmek said that one of the benefits of reproducible builds was to help detect and mitigate any kind of supply-chain attack on Fedora's builders and allow others to perform independent verification that the package sources match the binaries that are delivered by Fedora.

    bagels(10000) 7 days ago [-]

    It's one tool of many that can be used to prevent malicious software from sneaking in to the supply chain.

    russfink(3404) 7 days ago [-]

    Keep in mind that compilers can be backdoored to install malicious code. Bitwise/signature equivalency does not imply malware-free software.

    kazinator(10000) 7 days ago [-]

    Reproducible builds can improve software quality.

    If we believe we have a reproducible build, that's constitutes a big test case which gives us confidence in the determininism of the whole software stack.

    To validate that test case, we actually have to repeat the build a number of times.

    If we spot a difference, something is wrong.

    For instance, suppose that a compiler being used has a bug whereby it is relying on the value of an unitialized variable somewhere. That could show up as a difference in the code it generates.

    Without reproducible builds, of course there are always differences in the results of a build: we cannot use repeated builds to discover that something is wrong.

    (People do diffs between irreproducible builds anyway. For instance, disassemble the old and new binaries, and do a textual diff, validating that only some expected changes are present, like string literals that have embedded build dates. If you have reproducible builds, you don't have to do that kind of thing to detect a change.

    Reproducible builds will strengthen the toolchains and surrounding utilities. They will flush out instabilities in build systems, like parallel Makefiles with race conditions, or indeterminate orders of object files going into a link job, etc.

    conradev(10000) 7 days ago [-]

    Better security! A malicious actor only needs to change a few bytes in either the source or binary of OpenSSL to break it entirely (i.e. disable certificate checking).

    Reproducible builds remove a single point of failure for authenticating binaries – now anyone can do it, not just the person with the private keys.

    Dwedit(10000) 7 days ago [-]

    Reproducibility is at odds with Profile-Guided-Optimization. Especially on anything that involves networking and other IO that isn't consistent.

    michaelt(10000) 7 days ago [-]

    Why should it be?

    Does the profiler not output a hprof file or whatever, which is the input to the compiler making the release binary? Why not just store that?

    gnulinux(3239) 7 days ago [-]

    It's not at odds at all but it'll be 'Monadic' in the sense that the output of system A will be part of the input to system A+1 which is complicated to organize in a systems setting, especially if you don't have access to a language that can verify. But it's absolutely achievable if you do have such a tool, e.g. you can do this in nix.

    zbobet2012(10000) 7 days ago [-]

    That's only the case if you did PGO with 'live' data instead of replays from captured runs, which is best practice afaik.

    nrvn(2497) 7 days ago [-]

    from Go documentation[0]:

    > Committing profiles directly in the source repository is recommended as profiles are an input to the build important for reproducible (and performant!) builds. Storing alongside the source simplifies the build experience as there are no additional steps to get the profile beyond fetching the source.

    I very much hope other languages/frameworks can do the same.

    [0]: https://go.dev/doc/pgo#building

    nyrikki(10000) 7 days ago [-]

    This is one of the 'costs' of reproducible builds, just like the requirement to use pre-configured seeds for pseudo random number generators etc.

    It does hit real projects and may be part of the reason that '99%' is called out but Fedora also mentions that they can't match the official reproducible-builds.org meaning in the above just due to how RPMs work, so we will see what other constraints they have to loosen.

    Here is one example of where suse had to re-enable it for gzip.

    https://build.opensuse.org/request/show/499887

    Here is a thread on PGO from the reproducible-builds mail list.

    https://lists.reproducible-builds.org/pipermail/rb-general/2...

    There are other costs like needing to get rid of parallel builds for some projects that make many people loosen the official constraints. The value of PGO+LTO being one.

    gcda profiles are unreproducible, but the code they produce is typically the same. If you look into the pipeline of some projects, they just delete the gcda output and then often try a rebuild if the code is different or other methods.

    While there are no ideal solutions, one that seems to work fairly well, assuming the upstream is doing reproducible builds, is to vendor the code, build a reproducible build to validate that vendored code, then enable optimizations.

    But I get that not everyone agrees that the value of reproducibility is primarily avoiding attacks on build infrastructure.

    However reproducible builds as nothing to do with MSO model checking etc... like some have claimed. Much of it is just deleting non-deterministic data as you can see here with debian, which fedora copied.

    https://salsa.debian.org/reproducible-builds/strip-nondeterm...

    As increasing the granularity of address-space randomization at compile and link time is easier than at the start of program execution, obviously there will be a cost (that is more than paid for by reducing supply chain risks IMHO) of reduced entropy for address randomization and thus does increase the risk of ROP style attacks.

    Regaining that entropy at compile and link time, if it is practical to recompile packages or vendor, may be worth the effort in some situations, probably best to do real PGO at that time too IMHO.

    barotalomey(10000) 7 days ago [-]

    The real treasure was the friend I found along the way

    https://github.com/keszybz/add-determinism

    m463(2487) 7 days ago [-]

    I kind of wonder if this or something similar could somehow nullify timestamps so you could compare two logfiles...

    further would be the ability to compare logfiles with pointer addresses or something

    AshamedCaptain(10000) 6 days ago [-]

    Which is I guess the NIH version of https://salsa.debian.org/reproducible-builds/strip-nondeterm... ...

    apatheticonion(10000) 6 days ago [-]

    Another thing I'd love to see is more statically linked binaries. Something like Python, for instance, is a nightmare to install and work with

    theteapot(10000) 6 days ago [-]

    I think general consensus is against you. Fedora packaging policy [1]:

    > Packages including libraries should exclude static libs as far as possible (eg by configuring with --disable-static). Static libraries should only be included in exceptional circumstances. Applications linking against libraries should as far as possible link against shared libraries not static versions.

    [1]: https://docs.fedoraproject.org/en-US/packaging-guidelines/

    hashstring(10000) 6 days ago [-]

    What do you mean with "a nightmare to install and work with" exactly?

    supriyo-biswas(10000) 6 days ago [-]

    For Python, take a look at the musl builds in python-build-standalone[1], which are statically linked.

    I also have a tiny collection of statically linked utilities available here[2].

    [1] https://github.com/astral-sh/python-build-standalone

    [2] https://github.com/supriyo-biswas/static-builds

    throwaway48476(10000) 6 days ago [-]

    Were stuck with a computing paradigm from 50 years ago.

    Ideally everything would be statically linked but thr sections would be marked and deduped by the filesystem.

    kpcyrd(3301) 5 days ago [-]

    Due to the python reference I think you mean 'compiles into a single binary', not necessarily 'static linking'.

    This binary may be statically linked, or link to system libraries. Quite a few times the only system library being linked is libc though.

    But yes, I also hope this gets more prevalent instead of the python approach.

    binarymax(2527) 7 days ago [-]

    I often see initiatives and articles like this but no mention of Nix. Is it just not well known enough for comparison? Because to me that's the standard.

    esseph(10000) 7 days ago [-]

    It's an article about Fedora, specifically.

    djha-skin(1904) 7 days ago [-]

    It's very, very complicated. It's so far past the maximum effort line of most linux users as to be in its own class of tools. Reproducibility in the imperative package space is worth a lot. Lots of other tools are built on RPM/DEB packages that offer similar advantages of Nix -- Ansible, for one. This is more of a 'rising tide raises all boats' situation.

    steeleduncan(3185) 7 days ago [-]

    I use Nix extensively, but the Nix daemon doesn't do much of use that can't be achieved by building your code from a fixed OCI container with internet turned off. The latter is certainly more standard across the industry, and sadly a lot easier too. Nix is not a revolutionary containerisation technology, nor honestly a very good one.

    The value in Nix comes from the package set, nixpkgs. What is revolutionary is how nixpgks builds a Linux distribution declaratively, and reproducibly, from source through purely functional expressions. However, nixpkgs is almost an entire universe unto itself, and it is generally incompatible with the way any other distribution would handle things, so it would be no use to Fedora, Debian, and others

    lima(3269) 7 days ago [-]

    Contrary to popular opinion, Nix builds aren't reproducible: https://luj.fr/blog/is-nixos-truly-reproducible.html

    12345hn6789(10000) 7 days ago [-]

    Nix is to Linux users what Linux is to normies.

    __MatrixMan__(10000) 7 days ago [-]

    In the near term it makes more sense to position nix as a common interface between app developers and distro maintainers and not as a direct-to-user way to cut their distro maintainers out of the loop entirely (although it is quite useful for that).

    Ideally, a distro maintainer would come across a project packaged with nix and think:

    > Oh good, the app dev has taken extra steps to make life easy for me.

    As-is, I don't think that's the case. You can add a flake output to your project which builds an .rpm or a .deb file, but it's not commonly done.

    I'm guessing that most of the time, distro maintainers would instead hook directly into a language specific build-tool like cmake or cargo and ignore the nix stuff. They benefit from nix only indirectly in cases where it has prevented the app dev from doing crazy things in their build (or at least has made that crazyness explicit, versus some kind of works-on-my-machine accident or some kind of nothing-to-see here skulduggery).

    If we want to nixify the world I think we should focus less on talking people out of using package managers which they like and more on making the underlying packages more uniform.

    skrtskrt(10000) 7 days ago [-]

    Because Nix is a huge pain ramp up on and to use for anyone who is not an enthusiast about the state of their computer.

    What will happen is concepts from Nix will slowly get absorbed into other, more user-friendly tooling while Nix circles the complexity drain

    diffeomorphism(10000) 7 days ago [-]

    Different notions of reproducible. This project cares specifically about bit-for-bit identical builds (e.g. no time stamps, parallel compile artifacts etc). Nix is more about being declarative and 'repeatable' or whatever a good name for that would be.

    Both notions are useful for different purposes and nix is not particularly good at the first one.

    https://reproducible-builds.org/citests/

    jzb(3175) 7 days ago [-]

    Oh, I assure you, it's hard to escape knowing about Nix if you write about this sort of thing. Someone will be along almost immediately to inform you about it.

    Nix wasn't mentioned (I'm the author) because it really isn't relevant here -- the comparable distributions, when discussing what Fedora is doing, are Debian and other distributions that use similar packaging schemes and such.

    patrakov(3600) 7 days ago [-]

    This goal feels like a marketing OKR to me. A proper technical goal would be 'all packages, except the ones that have a valid reason, such as signatures, not to be reproducible'.

    RegnisGnaw(10000) 7 days ago [-]

    As someone who dabbles a bit in the RHEL world, IIRC all packages in Fedora are signed. In additional the DNF/Yum meta-data is also signed.

    IIRC I don't think Debian packages themselves are signed themselves but the apt meta-data is signed.

    0zymandiass(10000) 7 days ago [-]

    If you'd bothered to read:

    ```This definition excludes signatures and some metadata and focuses solely on the payload of packaged files in a given RPM:

        A build is reproducible if given the same source code, build environment and build instructions, and metadata from the build artifacts, any party can recreate copies of the artifacts that are identical except for the signatures and parts of metadata.```
    eru(2960) 7 days ago [-]

    At Google SRE we often had very technical OKRs that were formulated with some 'number of 9s'. Like 99.9999% uptime or something like that. So getting two 9s of reproducibility seems like a reasonable first goal. I hope they will be adding more nines later.

    charcircuit(10000) 7 days ago [-]

    This is a waste of time compared to investing in sandboxing which will actually protect users as opposed to stopping theoretical attacks. Fedora's sandbox capabilities for apps is so far behind other operating systems like Android that it is much more important of an area to address.

    johnny22(10000) 7 days ago [-]

    I think you have to do both sandboxing and this.

    AshamedCaptain(10000) 7 days ago [-]

    I am yet to see a form of sandboxing for the desktop that is not:

    a) effectively useless

    or b) makes me want to throw my computer through the window and replace it with a 1990's device (still more useful than your average Android).

    fsflover(2571) 7 days ago [-]

    If you want security through compartmentalization, you should consider Qubes OS, my daily driver, https://qubes-os.org.

    PhilippGille(10000) 7 days ago [-]

    > Fedora's sandbox capabilities for apps

    Do you mean Flatpaks or something else?

    colonial(10000) 7 days ago [-]

    Defaulting to Android-style nanny sandboxing ('you can't grant access to your Downloads folder because we say so' etc.) is unlikely to go over well with the average Linux distro userbase.

    Also, maximally opt-in sandboxes for graphical applications have been possible for a while. Just use Podman and only mount your Wayland socket + any working files.

    preisschild(10000) 6 days ago [-]

    Flatpak, which Fedora Workstation uses by default, is already very similar in capabilities to Android's sandboxing system.

    trod1234(10000) 6 days ago [-]

    Can someone provide a brief clarification about build reproducibility in general?

    The stated aim is that when you compile the same source, environment, and instructions the end result is bit identical.

    There is, however; hardware specific optimizations that will naturally negate this stated aim, and I don't see how there's any way to avoid throwing out the baby with the bathwater.

    I understand why having a reproducible build is needed on a lot of fronts, but the stated requirements don't seem to be in line with the realities.

    At its most basic, there is hardware, where the hardware may advertise features it doesn't have, or doesn't perform the same instructions in the same way, and other nuances that break determinism as a property, and that naturally taints the entire stack since computers rely heavily on emergent design.

    This is often hidden in layers of abstraction and/or may be separated into pieces that are architecture dependent vs independent (freestanding), but it remains there.

    Most if not all of the beneficial properties of reproducible builds rely on the environment being limited to a deterministic scope, and the reality is manufacturers ensure these things remain in a stochastic scope.

    amarshall(3665) 6 days ago [-]

    Well the point is that if N of M machines produce the same output, it provides the opportunity to question why it is different on the others. If the build is not reproducible then one just throws up their arms.

    It's not clear if you're also talking about compiler optimizations—a reproducible build must have a fixed target for that.

    Crestwave(10000) 6 days ago [-]

    > hardware specific optimizations that will naturally negate this stated aim

    Distro packages are compiled on their build server and distributed to users with all kinds of systems; therefore, by nature, it should not use optimizations specific to the builder's hardware.

    On source-based distros like Gentoo, yes, users adding optimization flags would get a different output. But there is still value in having the same hardware/compilation flags result in the same output.

    dmtfullstack(10000) 5 days ago [-]

    > There is, however; hardware specific optimizations that will naturally negate this stated aim

    These are considered to be different build artifacts, which are also reproducible.





    Historical Discussions: Jellyfin as a Spotify alternative (April 17, 2025: 430 points)

    (430) Jellyfin as a Spotify alternative

    430 points 1 day ago by coppolaemilio in 3401st position

    coppolaemilio.com | Estimated reading time – 6 minutes | comments | anchor

    When I stopped using Spotify I tried a few different solutions until I found the perfect replacement for me. If you want the tl;dr: I now use Jellyfin. But if you want to know how I got here, follow me through each step of the way.

    I started gathering all my music files (mp3, or flac) in my computer, and from there I wanted to just listen to them the old way. The first issue I encountered was that none of the available music players were any good.

    Winamp 2 default Base Skin

    We all love the nostalgic look of Winamp in screenshots, but in reality those players are very limited. They work (kinda) okay for playing a single album, but I struggle to browse my library or create a playlist with them. I tried tons of programs, but none of them satisfied me. I guess music players left the zeitgeist so the technology of playing files locally didn't improve much lately. For a few days, I went along with the good old VLC player, but I was surprised to find how bad it is at handling flac files.

    I gave foobar2000 another go, and remember how much of a clusterfuck setting it up is. After a few days of trial and error I decided that it wasn't worth the effort.

    foobar2000's Midnight theme that probably took hundreds of hours to make.

    Since I was feeling adventurous and I wanted an excuse to learn htmx, I ended up building a rudimentary web music player that worked surprisingly well. The player streamed music from my library on a browser, so I could spin up a local server and access to all my music remotely from anywhere.

    This worked well for a while, and it was a nice learning exercise, but it all fell apart when I had to go on a trip. Without internet or having the laptop running to host the server I wasn't able to listen to any music on my phone, so it made some flights particularly long. I knew I could take the project to the next level and add some sort of "download to listen offline" feature, but the browser storage is not enough for that, so I would had to bundle the website into a "proper app". I wasn't going to spend more time on this side project, so it was time to look for another solution.

    My last resort and the option I ended up using the most was Apple's Music app. It is a bloated program with vestiges of what itunes was. It tries very hard to sell their subscription service, but below all noise, there is a music player that's actually not bad. It has all kinds of sorting, and an up to date interface. You can sync the music library with your phone or other devices and you won't have any issues if you are offline. No more boring train rides!

    Unfortunately, having your entire music library in every device takes too much space, so you have to start playing some sort of storage battle royale, and decide which music you won't want to listen anymore. This shouldn't be a big deal (none of the issues I'm listing here are), but when you are competing with the knowledge of something like Spotify existing, it is hard to voluntarily make things harder than they should be.

    Fortunately for me, YouTube decided to shove a video down my throat:

    I didn't know Jeff Geerling, but I've been a happy subscriber since :) he has a lot of good videos and he always carries a contagious enthusiasm about any topic he covers.

    The video I linked covers how Jellyfin can replace something like Disney+ or Netflix, but it can also replace Spotify. It has all the features that I was looking for! There is only one downside compared to Spotify: you have to host it yourself.

    Self-hosting might sound scary to some, and of course it is not something I would recommend to everyone. But I promise that you can set up Jellyfin without much hustle even if you are not a programmer! To do so you don't need to buy a NAS or any fancy extra equipment. If you have an old computer around, it is probably good enough as a home server.

    Jellyfin has everything I hoped for and more. I tried running it locally in my computer at first, and I was surprised of how easy it was to get it up and running. Then I discovered that there are apps that communicate with your Jellyfin server and allow you to download music from your library for offline listening. Fintunes, Manet, Finamp, and the list goes on. Finamp is the one I ended up daily driving in my phone.

    A screenshot of my Jellyfin music library in the browser

    In the past few months, the world started shifting significantly, so I wanted to give another step in my journey of digital autonomy. I bought a mini pc to start self-hosting apps like Jellyfin from home. Since the experience was so good, I started looking into other things I could start self-hosting, and I'm now running Immich as well. Immich is like a much better Google Photos, but that's a story for another time.

    If you read until here, and you are curious about self-hosting, I encourage you to give it a try! It doesn't take much time and it is totally doable as a hobby/side project. If you have some minimal knowledge of how to use a terminal, you won't have any problems to set things up. And once it's running, you will be able to enjoy your entire library from any device anywhere.

    A screenshot of my Jellyfin with an album from a band you should know about.

    If software like this keeps getting better, I can imagine a future where we don't have to depend on some other's peoples computers to access to our own music, movies, photos or memories. We just have to make it easier and better, like open-source always does. It might take longer to get there, but I'm damn sure we will.




    All Comments: [-] | anchor

    knowknow(10000) 1 day ago [-]

    What's wrong with Spotify?

    temp0826(10000) 1 day ago [-]

    Afaik it's not terribly good to the artists. One of my favorite bands left the platform; I'm not there yet but if it happens en masse (or at least enough to effect me noticeably) then I'm out too.

    thebluelad(10000) 1 day ago [-]

    If you listen on high-end equipment the audio quality is noticeably worse than many other solutions and depending on your music taste, Spotify often removes content or doesn't have it in the first place.

    tomrod(677) 1 day ago [-]

    'You'll own nothing and you'll be happy': https://en.wikipedia.org/wiki/You%27ll_own_nothing_and_be_ha...

    chillfox(10000) 1 day ago [-]

    I got a smartwatch with a cell connection, some good earbuds and started going to the gym, then I learned that their watch app is complete garbage. It refuses to play the music I want, either playing something else or nothing at all. It will play it out loud on my phones speaker in the locker instead of through my earbuds. It refuses to download the playlists I want. It refuses to stream the music.

    None of that is a problem with the Apple Music app, so it's 100% a Spotify problem.

    Also, Music sometimes disappears from my playlists.

    bni(10000) 1 day ago [-]

    It's a shit company that I don't want to support.

    etra0(10000) 1 day ago [-]

    I recently bought a mini pc too and gave the self-host shenanigans a roll. It was definitely worth it.

    Using traefik + tailscale + dns challenge with CloudFlare, I was able to self-host and make my services available only through the vpn without loosing HTTPS on all the subdomains. It's lovely!

    udev4096(460) 1 day ago [-]

    This is partly self-hosting. You are relying on clownflare and tailscale for your services to be accessible. Do better

    detaro(695) 1 day ago [-]

    Anyone have opinions on Jellyfin vs music-specific servers like Navidrome?

    sodality2(2563) 1 day ago [-]

    Personally I switched Navidrome since I found the clients to be better and the scanner to be lighter, but there's a few things I miss: casting was nice, as well as centralizing my media on one everything-app.

    JLO64(10000) 1 day ago [-]

    I use Navidrome with Amperfy on MacOS/iOS and love it.

    TiredOfLife(652) 1 day ago [-]

    For me Navidrome actually could run on my raspberry pi with my library.

    panopticon(10000) 1 day ago [-]

    I embarked on a similar journey last year after YouTube Music took down some albums I listened to religiously.

    I settled on Plex + Plexamp instead. I'm mostly satisfied, but there are some rough edges like Chromecast and web playback.

    akdor1154(10000) 1 day ago [-]

    Plexamp is awesome and i miss it a bit as a Jellyfin user... But i don't trust the plex codebase. My suspicions were firmed up when Lastpass got hacked literally through Plex.

    someonehere(10000) 1 day ago [-]

    If you haven't been keeping up with Plex, self-hosters like myself and others are up in arms over the client rewrite. It feels like the Sonos update for us. Broken features. Useful functionality removed. UI that's more streaming focused than self-hosting like it used to be.

    If you haven't gone down the Plex path yet, don't right now as the community and developers sort out their roadmap. Plex seems to be open to feedback, but a lot of us feel betrayed. They had open user testing for the new apps but they didn't implement or fix any of the reported issues.

    al_borland(10000) 1 day ago [-]

    I've been using Plex since it was a Mac only XBMC fork. While it's drastically different than where it started, I haven't noticed any recent changes. I do 99% of my viewing via the AppleTV app and it hasn't changed. I removed all the shortcuts for their streaming stuff long ago.

    I'm running the server in Docker and pretty lazy about updating it. Is that the side that changed? It looks like I'm running 1.27 and 1.41 is out now. Should I be sticking with what I have?

    DarkCrusader2(2287) 1 day ago [-]

    I moved away from Plex when they started shoving free B/C movies with lewd posters on my home page and made is very hard and confusing to remove (if removing it completely was even an option, I still don't know).

    The whole reason I host plex is that I want an offline experience that I curate myself. The requirement for internet to authenticate and shoveling crapware in my face pushed me towards trying Jellyfin. The Jellyfin UI on TV and mobile is not as flashy and polished as Plex, but it is extremely functional and respects users choices.

    Been a happy Jellyfin customer for years now though I only use it to organize and browse my library now. Actual playback is either MPV on PC or Kodi over NFS on TV. After trying many many players, these were the two I found best for respective platforms, nothing else even comes close.

    anthonypz(10000) 1 day ago [-]

    What about their plexamp app for streaming music? It looks pretty nice and seems like a good deal if you purchase the lifetime plan for 50% off during Black Friday.

    npodbielski(10000) about 23 hours ago [-]

    I never really understood what is the point od running something locally and then registering on .com domain. Like if I will loose internet connection I cant listem my own music? Seemed radicoulous. But I guess it does nit require much knowledge and people keep using it.

    wallstprog(10000) about 14 hours ago [-]

    On another note, Jellyfin can look inside .iso files, which afaict Plex is not able to. Very handy with my collection of ripped dvd's.

    dhosek(10000) 1 day ago [-]

    My strategy for syncing my music library with my phone is that I have four smart playlists:

    - songs rated 5 stars which I haven't listened to for at least 8 months1

    - songs rated 4 stars which I haven't listened to for at least 16 months

    - songs rated 3 stars which I haven't listened to for at least 32 months

    - the 20GB of least-played music

    (there are some other strictures as well, like eliminating Christmas music and some music files I have in my library more for archival purposes than anything else, but this is a decent approximation).

    This gives me a reasonably fresh selection of music and at least at the moment, with my daily sync habit, when I listen to a song it goes out of rotation for a while which could be anywhere from a week to years.

    1. This was originally 6/12/24 months, but I ended up boosting that time frame as storage grew tight on my phone.

    joshuaturner(10000) 1 day ago [-]

    This reminds me of my smart playlist on Apple Music.

    It's called 'long time no see' and it includes any songs I've listened to more than 10 times but haven't listened to in the last year. I've been using the same music library for nearly two decades now, so it works really well for me. It's like a constantly rotating nostalgia playlist.

    HexPhantom(10000) 1 day ago [-]

    Love that you've got archival stuff and Christmas music filtered out - feels like everyone with a big library has a few odd folders that shouldn't be in regular rotation

    HumblyTossed(10000) 1 day ago [-]

    I self host Navidrome. Works pretty well.

    makeitdouble(10000) 1 day ago [-]

    Do you use a local client that accepts caching/offline playback of the content ?

    I'm looking through the android clients and none seem to fully embrace keeping the most played tracks on device ('offline mode'). Tempo[0] has in on the wip list, while StreamMusic straight removed in it the latest update[1], so as of now it looks like a pretty tough feature to get.

    Listening to music in remote places is nice, and that was the main reason for paying for Spotify for me.

    [0] https://github.com/CappielloAntonio/tempo#readme [1] https://music.aqzscn.cn/docs/versions/latest/

    ishanjain28(10000) 1 day ago [-]

    I wish more artists would sell their music on Bandcamp. I use jellyfin for music but acquiring music is difficult.

    OsrsNeedsf2P(2632) 1 day ago [-]

    If you can't find a place to pay for it, then just do what Spotify did when they launched. I recommend Nuclear[0] for that

    [0] https://nuclearplayer.com/

    iamdamian(3625) 1 day ago [-]

    Why is acquiring music difficult? If it's DRM you're worried about, the iTunes Store is all (or at least primarily) DRM-free.

    kretaceous(748) 1 day ago [-]

    I self-host a couple of things including an Emby server to watch movies. Self-hosting a music library seems interesting. But I discover and listen to music far more than I watch movies.

    This article tells me how good Jellyfin is, but the music collection process is not here. Do you download them manually? Do you buy records?

    I grew up downloading music into my PC and then transferring them to my SD card which I used in my phone. Once I had a Spotify, it was just... easier. I can discover music faster with the 'song radio' feature in Spotify. I can find and listen to an album as soon as I come across it.

    I'd absolutely love to have a better media player and 'frontend' than Spotify but I haven't solved the collection part of it. What can be done there?

    johntitorjr(10000) 1 day ago [-]

    I think the unstated assumption is that the reader has an existing music library. Where that library came from is an excercise left to the reader. I use bittorrent, which I admit is a little morally smelly, but I justify it by buying vinyl albums of any artists I listen a lot to. It'd take a lot of Spotify listens to match the money to the artist of buying a single album from the band website. Lots of vinyl comes with digital downloads too. When I'm at home, physical media is fucking rad. I mean, I can unplug the turntable, spin it by hand, and hear the music directly from the needle. No software, no gadgets. It's so primal, like the artist is whispering to me. I hadn't realized how much I lost switching to Pandora until I switched back to physical media.

    Given an existing collection - Is there an easy way to auto sort & tag everything? e.g. Merge the artists 'Guns N Roses' and 'Guns and Roses' into the most correct one.

    I can't justify the time to do it manually and feel like if I just wait long enough a turn-key AI solution will pop up.

    alisonatwork(10000) 1 day ago [-]

    I've never used Spotify so can't compare to that, but Bandcamp is like a much better version of the local record store. You can follow artists and record labels you like, which will give you email notifications whenever they release something. You can browse new and old music by all kinds of esoteric tags and subgenres. Every week or so you get an email of some new releases in your favorite genres. You can download in multiple formats, personally I download FLAC for backup and 320 for listening. It's easy to search for tracks or artists you discovered elsewhere, it's easy to listen to and scrub through tracks... Just great. If you're a gamer, it's like the Steam of music.

    My only complaint is that when I buy a bunch of songs my credit card gets charged a bunch of times (one for each artist/label) which has triggered fraud warning in the past, but I guess they do that to avoid the hassle of routing money to each artist in their own currency... It seems mildly customer unfriendly to me but in a world where people charge a can of coke to their credit card maybe not all that weird any more.

    LeoPanthera(954) 1 day ago [-]

    I really want to use Jellyfin for music, but unfortunately it separates albums based on directories and not by reading the metadata, so if you have an album separated into 'Disc 1', 'Disc 2', etc, each disc shows up as a separate album.

    I really don't want to restructure my library just for Jellyfin, so I basically can't use it.

    meonkeys(10000) about 23 hours ago [-]

    Pretty sure it does use metadata and folder/filename as fallback.

    Musicbrainz Picard is great for normalizing metadata for music files/albums, maybe give that a shot.

    crossroadsguy(10000) 1 day ago [-]

    My problem stays the same — finding all my music that is on Spotify from elsewhere. It costs a lot to buy those music files and that too if they are available (which isn't always the case) and even after I buy I am not sure what were the T&C from that particular place I bought - whether I really own it, I don't, a bit but not fully - etc. Finding from Linux ISO sites is a nightmare and an extra bad nightmare if we are talking about some 2K - 0.6K songs (because I have 600 from before I started streaming). I wish there was an easy way for this - plug and play kinda.

    OsrsNeedsf2P(2632) 1 day ago [-]

    This is a vendor lock-in more than anything. As someone who listens to mostly dubstep and EDM and built my playlist off of Spotify, I can't move to Spotify because they don't have half my playlists

    bhaney(10000) 1 day ago [-]

    > I wish there was an easy way for this - plug and play kinda

    I can click a button in Lidarr to auth with Spotify and automatically search usenet for every album of every artist I follow on spotify, download them all, and make them available in Jellyfin. It'll even monitor the spotify account and import new additions. Getting the whole stack set up is pretty much the exact opposite of plug and play, but once you have it all installed it's amazing how much becomes smooth sailing. 2K songs is nothing for this kind of stack.

    jjulius(3016) 1 day ago [-]

    >It costs a lot to buy those music files...

    And the artists and everyone who worked on it thank you very much for paying for an album/song instead of just paying a streaming subscription fee.

    HexPhantom(10000) 1 day ago [-]

    And like you said, even when you do buy tracks, the T&C are murky. Some platforms basically treat it like a long-term lease rather than true ownership. Honestly, what we need is a modern, ethical 'one-click' export + purchase system that lets you grab your current library in lossless format and actually own it.

    thaumasiotes(3580) 1 day ago [-]

    > It costs a lot to buy those music files and that too if they are available (which isn't always the case)

    Virtually all music, particularly modern music, is made available for free on YouTube. You can download it and it's yours.

    For example, here's the official release of Taylor Swift's album 'Evermore' for YouTube ('Provided to YouTube by Universal Music Group'): https://www.youtube.com/watch?v=qxrMpCMdYwk&list=OLAK5uy_m-v... . You should be able to pass the playlist to yt-dlp and automatically extract all the audio tracks.

    I don't really want wholesale quantities of music, so I do this manually, but I wouldn't be surprised if there's tooling around for it.

    chillfox(10000) 1 day ago [-]

    Self-hosting stuff is awesome if you have the skills.

    I have been on a mission for the last 2 years to replace as many subscriptions as possible with self-hosted solutions. The subscriptions really had gotten out of hand, it had gotten to about $200 (AUD) a month.

    Quick napkin math is that I have cancelled about ~$150 a month worth of subscriptions so far. The $500 office desktop I got for a home server is struggling at this point, but it's already paid for itself, so I will likely upgrade it to something much better later this year.

    Currently I am in the process of replacing all the movie streaming services with Emby.

    Spotify and Adobe lightroom is still on the todo list.

    I will likely end up with Youtube, Fastmail and Borgbase being my remaining subscriptions once I am done.

    Inviz(10000) 1 day ago [-]

    What do we do about Lightroom? Capture one? How about sharing galleries?

    anthropodie(2680) 1 day ago [-]

    >Self-hosting stuff is awesome if you have the skills.

    >I have been on a mission for the last 2 years to replace as many subscriptions as possible with self-hosted solutions.

    I have been doing the same for quite some time now but it's only recently I realized all these subscriptions services are just making rich richer. We should encourage self hosting as much as possible. I mean why should we pay huge corporations more money just for storage?

    layoric(10000) 1 day ago [-]

    This reflects a lot of what I've been through as well. My subscriptions exploded when AU got a lot of different streaming platforms, and I think when paramount+ came out and took Star Trek off of another one I drew the line. I realised I still owned all the physical media, so time to make backups. Previous to that I moved off Gmail, that was by far the hardest, and still somewhat ongoing after 8+ years.

    The hardest to kick for me now is YouTube Premium.. And in AU it's like $33/month AUD, but I just can't stand ads.

    Now I self host:

    - Own Mastodon instance - Photos (Synology) - Videos (Synology) - Audio (Synology) - Storage (Minio) - Code/Build (Forejo) - Security (Synology)

    My NAS is blocked from the internet, while web facing stuff is on a separate server (old dell workstation). And now have added a PI hole to another older dell box. My partner's laptop will be moving to Linux and will also be a Windows free household. I used Windows since 3.1, I liked it up until around Windows 7. I'm glad I've moved to Linux, but disappointed to see what has happened to Windows in general.

    I want to self host more services for family, but the experience isn't there yet without quite a lot of work.

    The tags #homelab and #selfhost are pretty decent to follow on Mastodon btw!

    mrheosuper(10000) 1 day ago [-]

    Don't forget the electricity cost come with home server. A quick math will show that it's not insignificant

    bane(244) 1 day ago [-]

    Unraid makes a lot of the home lab stuff pretty easy. There's a very active community, good docs, frequent updates. It costs a little, but it's one time and worth it, and can grow as you have time and money to add stuff to it.

    smj-edison(10000) 1 day ago [-]

    What do you do for backups? I'm just setting up an Emby instance with a 4 TB hard drive attached, but I'm worried it'll fail and take everything with it.

    russelg(10000) 1 day ago [-]

    Is there a reason you went with Emby over Jellyfin (forked from Emby)?

    Ziggy_Zaggy(10000) 1 day ago [-]

    With all the SWE in the mix, why not just roll your own media player...? It's not THAT hard. Same for movie player btw (and one solution can do both ofc).

    HTML spec for media is pretty amazing these days, no real excuses outside of time.

    nadnad(10000) 1 day ago [-]

    Would love to hear more details about your setup.

    BrandoElFollito(3407) 1 day ago [-]

    > The $500 office desktop I got for a home server is struggling at this point

    I have a ~10 years old desktop as my server (intel skylake and 24GB of RAM). I host about 20 services and the server is not loaded at all.

    The services are the usual ones, nothing heavy such as LLMs, though

    HexPhantom(10000) 1 day ago [-]

    $150/month shaved off is no joke. It's funny how these subscriptions creep up until you're basically running a second rent in background services.

    bambax(2947) 1 day ago [-]

    Self hosting is absolutely awesome.

    I upgraded my NAS to a recent Asustor a year ago and it changed my life. JellyFin for video works perfectly everywhere in my home, on any device, and it can also be accessed remotely, securely, with Tailscale, so if I'm in a hotel somewhere with my iPad it still works.

    And my library is curated by me; it has classic movies and other movies I like, and zero fluff or random shows that I would never watch in a million years.

    But self hosting doesn't stop here. Using Docker (via Portainer) I can publish any app in minutes, on either Apache or Nginx, securely with a Cloudflare tunnel (free) without ever exposing my home IP to the world.

    This of course isn't as resilient as a proper server with a proper provider, but it's so much simpler and so much cheaper that for hobby projects it's largely good enough.

    zaphodias(10000) 1 day ago [-]

    I'm doing the same, I have family plans with my friends for pretty much anything so I don't think I ever reached such high monthly costs though.

    I started my home server for self hosting Immich, not only for the cost but because I like to have my images close to me.

    I also recently replaced Lightroom with ON1, it's definitely not the same quality but, as hobbyist, it didn't make much sense to pay that much for me anymore. It was by far the most expensive subscription I had.

    lhamil64(10000) about 21 hours ago [-]

    Where do you get media from? Piracy is an option, but if you want to do it semi-legally I guess you'd need to rip blu-rays, but that seems like it'd be more expensive than streaming services, and you'd have to wait for everything to be released on blu-ray (if it even does)

    _spduchamp(10000) about 21 hours ago [-]

    I bought a 4TB external hard drive from a thrift shop and found it is loaded with a huge unorganized treasure trove of MP3s that stops maybe around 2008. The tags and file names are a bit of a mess (looks like bad character encoding for anything with accents), and there is no genres or categorization. I'd love to use a subset of this archive on Jellyfin or Navidrome.

    Any suggestions for a tool that can clean up file names and tags, and apply some sort of genre categories? I've tried Picard, but the process seems too manual for such a large archive.

    _-_-__-_-_-(10000) about 20 hours ago [-]

    beets, it's ridiculously good, https://beets.readthedocs.io/en/stable/#

    quesera(10000) about 20 hours ago [-]

    I've used beets to import and tag a huge personal music library:

    https://beets.io/

    dankwizard(10000) 1 day ago [-]

    This article fails to mention the absolute butchering of features that takes place moving from a typical music streaming subscription to a self hosted Jellyfin library.

    A large part of my listening on YouTube Music is going to a particular song or band I like and clicking 'Radio', which generates a playlist of similar sounding songs. You can then fine tune it with a filter i.e 'Popular songs, deep cuts' or specific elements of the song 'More emo', 'Slow paced' etc. This exposes me to a lot of new music and keeps it fresh and if I'm lucky I'll discover a new artist or song to add to my rotations.

    You lose that.

    A lot of these services overtime build mixes which takes your listening habits and tries to categorize them into specific mixes made up of your existing library & new music.

    I don't browse any music forums and so apart from my favourite bands, I have no idea on when artists I like release new albums and would not encounter them on a self hosted solution, etc.

    Semaphor(3334) 1 day ago [-]

    > I have no idea on when artists I like release new albums and would not encounter them on a self hosted solution, etc.

    Depending on what you like, bandcamp makes it easy. You can follow any artist (which is also offered whenever you buy), and from then on get release notifications. But of course, what's available differs by genre. For metal, most bands are on BC, except most Japanese artists and major label stuff.

    I buy, download, and put the flacs on my Jellyfin server.

    There are, of course, also piracy solutions for that, pretty sure the *arr stuff has automatic downloading per artist.

    closewith(10000) 1 day ago [-]

    A slower speed on the hedonic treadmill is a feature of self-hosting, not a bug.

    jjulius(3016) 1 day ago [-]

    It's a 'YMMV' situation, because...

    I don't want that. At all. It's algorithmic and there's nothing stopping artists and labels paying for placement in there. I don't want that.

    I am a musician, and a DJ, and I've been digging deep through artist and label catalogues on my own for decades. The process of discovery via my preexisting routes is far more fruitful, enjoyable and rewarding than lazily letting an algorithm do the work.

    But I like doing that. This works for me, not for others.

    armSixtyFour(10000) 1 day ago [-]

    I would have agreed with you 3 years ago. But now not so much.

    Spotify 'Radio' feature just tends to want to give me music I've already listened to over new music. Whatever algorithm they are using has waaaay overfit to what I have already liked.

    There used to be curated playlists done by humans, now almost everything is 'made for you by Spotify' playlists which, have the exact same issue as the radio stations, suddenly it's all the same music you've already been listening to, very little new music. If you want new music, you need to find a playlist made by a user instead.

    ThrowawayTestr(10000) 1 day ago [-]

    I've discovered so many niche bands and subgenres since I got Spotify.

    maxglute(10000) 1 day ago [-]

    How are LLMs for music recommendation?

    Napster / audio galaxy... I mean your own legal burned music with AI generating a radio playlist.

    OccamsMirror(3652) 1 day ago [-]

    Plexamp is really good for this.

    The styles information that Plexamp has works really well and in my experience, as long as your library is large enough, works better than modern Spotify.

    It was Spotify's degradation of their radio service and terrible 'AI DJ' that finally got me off Spotify. Punishing them for platforming Joe Rogan was just icing.

    nicoco(10000) 1 day ago [-]

    I'll argue music algorithmic recommendation on these platforms is a bad thing anyway.

    First, the algorithm is opaque, so it can push stuff to you because the platform decide it has to get the spotlights. Maybe the label/producer/musician paid for it or whatever you want to imagine that is even worse. It is a well-known phenomenon that if some music is pushed to your ears, you'll end up appreciate it most often than not. This is how hits have been and are still made.

    But even if the algorithm was not gamed at all, I still think it is a bad thing. It is not going to push you out of your comfort zone. Listening to new stuff is usually not pleasant at first. You will only 'discover' things that are very similar to what you know and already enjoy.

    If these recommendation algorithms were about food, they would 'reason' like this: 'Hey, you've really enjoyed this whole pack of M&M's, I'm sure you'll like this Kit-Kat bar now! Oh and you've had a glass of wine, what about trying out meth, it's pretty good too.'. Do we really want our computers to reinforce such behavior?

    Go to concerts, buy merch, buy albums on bandcamp (it has not enshittified too much yet apparently), donate money to artists; discover music through your friends and other humans recommending it. Recommend what you like to your friends. Cancel your Spotify subscription, none of that money is going to artists anyway. And use soulseek.

    jszymborski(10000) 1 day ago [-]

    Leaving music streaming services has been a great excuse for me to rediscover music blogs like Gorilla vs. Bear and Stereogum, or even local culture magazines.

    Another great way for discovering music I've found is just perusing Bandcamp, which is where I buy most of my music anyway. Love finding local artists, so I just put in some genre filters and the location filter. Found multiple great bands this way.

    As for keeping abreast of new releases, Bandcamp is pretty good for that too. You can just follow artists and you get emails when new releases or merch or tours come around.

    hashhar(3574) 1 day ago [-]

    PlexAmp has DJs which allow you to get the song/playlist based radios.

    LMS (Logitech now Lyrion) also has something similar in MusicIP (not as good as PlexAmp).

    wintermutestwin(10000) 1 day ago [-]

    IMO using a streaming service's recommendations is a way to filter out bands that labels aren't promoting. The services have to be getting paid for pushing - right?

    If everyone is this lazy about music discovery, then music suffers. I am not using "lazy" as a pejorative. There are people who just couldn't be bothered and that's fine. Music just isn't that important to you. But if the people who deeply love music are corrupted by the ease and dopamine, it will deeply wound music as a whole.

    My problem isn't discovering new music, it is "discovering" my massive library. I love AM, but the fact that 3 of the five large icons taking up precious screen real estate are devoted to discovering music that Apple is paid to promote is infuriating.

    HexPhantom(10000) 1 day ago [-]

    I think for some people the goal shifts from discovery to ownership - knowing your library, building it intentionally, and not being nudged by what the algorithm thinks you should be into

    4k93n2(10000) 1 day ago [-]

    its hard to beat the convenience of being able to right click/radio to get new recommendations but there has to be other options that arent that much more effort either?

    i think you can add plugins to jellyfin. maybe there is a last.fm plugin? i know of some other last.fm alternatives like maloja or libre.fm but i cant comment on how good they are

    bcraven(10000) 1 day ago [-]

    I'd like to shoutout PG Vogt (from Reply All podcasting fame) for this episode of his new show:

    https://pjvogt.substack.com/p/how-am-i-supposed-to-find-new-...

    soraminazuki(2635) 1 day ago [-]

    By moving away from streaming services, you can once again own what you bought and paid for. Algorithmic playlists are nothing, nothing at all compared to the loss of ability to use your own player, edit your files, back them up, or not be nickel-and-dimed to get around artificial restrictions. Not to mention that with streaming services, music can be taken away from you after the purchase.

    boudin(10000) 1 day ago [-]

    I've never seen this work. Either it plays the stuff I've listened in the past in a loop or shove some random things I really dislike (maybe hidden promotional stuff?). Personally it's the reason I've cancelled subscriptions each time I've tried, I always ended up listening to radio instead as the value brought by Spotify etc... was really poor.

    vagab0nd(3491) 1 day ago [-]

    My favorite songs gravitate heavily towards 2 very different genres. This seems to confuse the hell out of Spotify. The 'discover weekly' is comically bad no matter how hard I try to prime my library.

    rolisz(2961) 1 day ago [-]

    My experience with Youtube Music is that the recommendations are quite poor. So I wouldn't miss that. But it's hard to replicate the breadth of coverage of YT music (even though sometimes songs just vanish from my playlists). But I have started buying a couple of albums every now and then and slowly I am building my owned music library.

    benterix(10000) 1 day ago [-]

    It was like this in the past, now it's crappy. The algorithmic optimization started eating its own tail. And it's a problem on all platforms, from Spotify to YouTube.

    Let's take YT. In very simple terms, instead of taking a bold move and suggesting a few outliers (similar to differentiating the population as it's done in evolutionary algorithms), it takes an easy shot and, if I'm identified as male, suggests some videos with females with big breasts and other generic junk many people just click on autopilot. It works well for them because most people click and click and spend their days uselessly hooked and feel bad, but in my particular case I lose what I had earlier, i.e. suggestions of interesting bands (they still do happen but the selection is of much lower quality).

    nsteel(10000) about 23 hours ago [-]

    > I don't browse any music forums and so apart from my favourite bands, I have no idea on when artists I like release new albums and would not encounter them on a self hosted solution,

    Music Brainz provides this at https://test.listenbrainz.org/explore/fresh-releases/

    There's also Music Butler: https://www.musicbutler.io/

    DontchaKnowit(10000) about 19 hours ago [-]

    I mean right at the top he says hes just trying to listen to his own music. I dont get how this is a downside if you wanna discover new shit you can always just go to youtube.

    Frankly 99.9 of my music listening is stuff I already know and enjoy. But I still like to listen to new stuff often. So this kinda thing is perfect for me 99% of the time.

    kgwxd(3429) about 15 hours ago [-]

    Those features can, and should, be made completely separate from the system that hosts the media. In fact, they used to be, with great success.

    HTTP418(10000) about 11 hours ago [-]

    Plexamp has this feature, i use it all the time.

    touristtam(2637) about 3 hours ago [-]

    I'll admit it, I have a fairly narrow range of music I like so the following works for me on this basis: I don't like Spotify and other music streaming services as they never are consistent with their licensing or good with their recommendations. And the adverts are obnoxious. What I like is radios like Radio Paradise: https://radioparadise.com/player or regular radios available through online streams (such as the French radio FIP: https://www.radiofrance.fr/fip). There is enough to discover on either and they are still mostly in the range of what I would/could listen should they not have existed.

    rappatic(10000) 1 day ago [-]

    At least in the case of a music player, self-hosting simply isn't good enough for me. I'm not willing to accept a single second of added latency or buffering or downtime because I don't have multimillion dollar server farms. The fact is that the vast majority of us don't have the resources to self-host a Jellyfin instance that can provide near-instantaneous access anywhere in the world to every song ever made at 320kbps. And that's the bar for music. I can deal with a little added latency vs. Netflix on a Plex server or something. But I'm not willing to compromise with music.

    This isn't even to mention the numerous features that Spotify has which are difficult or impossible to replicate on self-hosting. The 'radio' feature, song recommendations, the DJ, AI playlists, stations, automatic playlist enhancement, social features, Canvas... the list goes on. And of course I never have to worry about managing a library of mp3 files. When an artist I like drops a new album, it'll be on Spotify at 12:00am exactly and work perfectly. This isn't possible with self-hosting.

    When you look at it this way, the chance to pay 6 bucks a month to get all these extra features and ignore the headache of self-hosting is a no-brainer.

    jjulius(3016) 1 day ago [-]

    >I'm not willing to accept a single second of added latency or buffering or downtime...

    >... near-instantaneous access anywhere in the world to every song ever...

    Nobody needs this. You think you do, but nobody needs everything everywhere all at once. If being wholly unwilling to wait 'a single second' isn't sarcasm, then... yeesh.

    udev4096(460) 1 day ago [-]

    It's no-brainer for people who do not care about freedom or file preservation. Spotify can pull the plug on whatever your favorite song is and there is NOTHING you can do about it. Then again, spotify has hundreds of millions of clueless subscribers, such as yourself, who will willfully consume the most crappy audio codec and praise them for it

    bigstrat2003(10000) 1 day ago [-]

    > The fact is that the vast majority of us don't have the resources to self-host a Jellyfin instance that can provide near-instantaneous access anywhere in the world to every song ever made at 320kbps.

    The fact also is that the vast majority of us don't have a requirement to be able to access our media from anywhere in the world. Most people aren't traveling the world on a regular basis, they stay in one area except for maybe an occasional vacation.

    > And of course I never have to worry about managing a library of mp3 files. When an artist I like drops a new album, it'll be on Spotify at 12:00am exactly and work perfectly. This isn't possible with self-hosting.

    If that's important to you, then indeed self-hosting will never be able to match it. But for me at least, my music listening has been 95% static since about 20 years ago. On occasion I hear something new that I add to the collection, but for the most part I listen to the same music I did some time ago. Paying $6/mo to Spotify just to listen to the same things I already have in my collection would be a gross waste of money. So for me it's the exact opposite: self hosting is a no-brainer because I simply would not get any value for my $6/mo.

    PhilipRoman(10000) 1 day ago [-]

    I'm a bit confused, why would you have problems with latency for music? This is not real time sound mixing where you need millisecond latencies, the client can just download the whole thing and play it. Even high quality audio files are tiny (unless you're listening to 4 hour classical operas).

    tastysandwich(10000) 1 day ago [-]

    For music, Navidrome is superior.

    It is just crazy how easy it is to set this stuff up nowadays. I run both Navidrome and Jellyfin in docker containers. Then I use NordVPN Meshnet to securely connect to them outside of the home.

    The experience is absolutely flawless. In Navidrome you can host an entire FLAC library and then transcode to Opus on the fly.

    It's been over a year now and I have pretty much no issues whatsoever.

    I highly highly recommend it

    Edit - Opus not Opal!

    mixmastamyk(3343) 1 day ago [-]

    Do you mean Opus?

    vander_elst(10000) 1 day ago [-]

    +1 for Navidrome, I self host both jellyfin and Navidrome. Navidrome wins hands down for music. With Jellyfin it's harder to categorize and then search, Navidrome provides a great experience out of the box.

    twilo(10000) 1 day ago [-]

    Is it better than plexamp?

    apwell23(10000) 1 day ago [-]

    > run both Navidrome and Jellyfin in docker containers

    > use NordVPN Meshnet to securely connect to them outside of the home

    > host an entire FLAC library and then transcode to Opus on the fly.

    i really have no idea what any of these words mean. Spotify's future is secure.

    bladeee(10000) 1 day ago [-]

    I understand that Navidrome is more specialized for music, but what specifically makes it superior to Jellyfin, in your opinion?

    mfld(3428) about 24 hours ago [-]

    Can Navidrome/Jellyfish integrate with Sonos? For me, the Sonos app still is not able to reliably index/play music from a network share.

    dash2(3324) about 24 hours ago [-]

    > It is just crazy how easy it is to set this stuff up nowadays. I run both Navidrome and Jellyfin in docker containers....

    Wow, I'll get grandma to do it! Ha ha, just kidding, but I'll try it myself. Ha ha, just kidding.

    Honestly, I just want to scream "self-hosting isn't going to happen, stop trying to make it happen." I absolutely welcome the hobbyists doing this fun stuff in their free time, but the idea that they will ever win over ordinary users is total fantasy. And it's accompanied by reality-denying stuff like how "you don't need" feature X or Y. Sure, I long to go back to organising my own mp3 files like it's 2002. And because you're angry about corporate power, Spotify or whoever definitely provide no features of value to anyone! This is all pure mood affiliation.

    Sorry. Don't get me wrong, I'm glad your setup works for you. But I think you are not using the word "easy" in the same way as most people.

    bergon(10000) about 23 hours ago [-]

    I've never tried NordVPN Meshnet, but just want to add an alternative I've fallen in love with: Tailscale. It's amazingly simple to set up and use. Today all my devices are connected to each other, and my jellyfin service is reachable through my chromecast, phone, computer and Ipad. As well as my filehost VPS.

    I've been self-hosting for quite awhile now, and these days it's such a breeze.

    udev4096(460) 1 day ago [-]

    Anyone concerned about recommendations might wanna look at musicbrainz. You can write a script for fetching the recommendations based on your current library every week

    dandersch(10000) 1 day ago [-]

    Can you elaborate? I'm not aware of musicbrainz having any recommendations/discovery features.

    sandreas(3670) 1 day ago [-]

    I personally use Jellyfin ONLY for Video stuff.

    AudioBookShelf[1] is for audiobooks and podcasts.

    For music I use

      navidrome [2]
    
    The smart playlist feature[5] is awesome. Having 3 services instead of one seems overkill, but specialized apps instead of one generic one feels different. One interesting aspect of navidrome is, that it has implemented the Subsonic API, which MANY Apps make use of. My personal favorite is

      Substreamer [3]
    
    but you could also go with DSub[4] or others.

    1: https://www.audiobookshelf.org/

    2: https://www.navidrome.org/

    3: https://substreamerapp.com/

    4: https://f-droid.org/en/packages/github.daneren2005.dsub/

    5: https://www.navidrome.org/docs/usage/smartplaylists/

    anthonypz(10000) 1 day ago [-]

    Neat! Can you stream navidrome to a smart TV? I have speakers connected to them and I usually stream to it using airplay on iOS.

    hypercube33(10000) about 24 hours ago [-]

    The thing I miss and can't find a replacement for is lastfm inside of Spotify. It gave two things and did it exceptionally well:

    1. Helped me take something I like or am super into at the time (band or song) and give me a playlist

    2. actually suggested with a high hit rate something I didn't know about and it was available to play right now.

    other streaming or stations just loop into what I already have which sucks. Side note that I'm into pretty niche non mainstream music such as Melodic Death Metal and Industrial so self hosting seems interesting but I also spend a good chunk of my time looking for more music. (Most of the bands I am really into only have sub-20k plays a month on Spotify).

    I really miss Napster letting you browse people's music when you found someone who was also into things you liked - pure gold mine only second to a LAN party where you could dig through the file server.

    mystified5016(10000) about 21 hours ago [-]

    Thanks for mentioning audiobookshelf. I'd totally given up on using jellyfin for audiobooks. It just absolutely butchers any book split into multiple files, which is basically all of them.

    I'll give audiobookshelf a look!

    HexPhantom(10000) 1 day ago [-]

    I went through a similar phase where I thought, how hard can it be to just manage my own music like it's 2008 again? Turns out, kind of annoyingly hard. The part about music players being stuck in time really hit. Winamp nostalgia aside, most local players feel like they haven't evolved in a decade

    INTPenis(10000) 1 day ago [-]

    That's why the author moved beyond that stage and to apps that connect to existing music libraries hosted on jellyfin. Apparently there are a lot more options out there than I knew about.

    iamacyborg(2536) 1 day ago [-]

    Roon is where it's at if you want a decent music player. Not free but well worth the price, imo.

    Sheeny96(10000) 1 day ago [-]

    If there were a recommendation algorithm plugin for Jellyfin (even if it just calls out to the API of some existing external web service), that might pull me over. Until that's the case, the recommendations will keep me on Spotify

    nsteel(10000) about 23 hours ago [-]

    Assuming there's last.fm/listenbrainz reporting plugins for Jellyfun, then both those services will provide recommendations based on what you have listened to. Maybe not as good as Spotify's but it's something.

    https://listenbrainz.org/my/recommendations

    https://www.last.fm/player/station/user/{username}/recommend...





    Historical Discussions: BPS is a GPS alternative that nobody's heard of (April 13, 2025: 427 points)
    BPS is a GPS alternative that nobody's heard of (April 08, 2025: 12 points)

    (427) BPS is a GPS alternative that nobody's heard of

    427 points 5 days ago by sksxihve in 3454th position

    www.jeffgeerling.com | Estimated reading time – 3 minutes | comments | anchor

    I came to the NAB (National Association of Broadcasters) show this year with my Dad to learn more about time in broadcast and live production.

    I was expecting to learn more about grandmaster clocks, AV sync, timing in protocols like Dante, Livewire, AES67, and more—and I have. But then on the first day here I found this odd little corner of the building with a completely empty booth:

    When you see an oscilloscope that costs 3x the value of your car on a trade show floor... well, let's just say my interest was piqued.

    I looked at it, and found something interesting—the trigger was on a GPS PPS timing signal output from a u-blox GPS receiver. But the 2nd channel was monitoring KSNV-TV, a US television station broadcasting an ATSC 3.0 signal.

    The scope showed a PPS output (Pulse Per Second) demonstrating a pulse sync of +/- 10 ns between GPS and the TV signal output—which so happens to be BPS (Broadcast Positioning System), an experimental timing standard that may be incorporated into the ATSC 3.0 rollout in the US (there are currently about 1,700 TV stations that could be upgraded).

    After seeing the demo, I found out there are a few people who've heard of BPS... and many of them were presenting on it, as they were also the ones who were doing the initial rollout and experimentation.

    ATSC 3.0 is a newer IP broadcast standard being rolled out in some countries—my own home city has two TV stations broadcasting it right now, under the 'NEXTGEN TV' moniker. But so far only a few TV stations are participating in the BPS testing.

    Because accurate timing is critical in many areas, from media, to the power grid, to 5G and communications, having a reliable terrestrial backup to GPS—especially one that can be hardened against different types of jamming attempts—may be important to our economy, communications and power grid... or people like who just want to have a good time!

    And speaking of time stuff at the NAB Show... can you guess what I'm pointing to in this photo, from the ASUS booth?

    If you guessed built-in PPS in/out connectors on a consumer Intel motherboard that syncs to TGPIO (Time-Aware GPIO) on an Intel CPU... you'd be right! And if you have no clue what that means, well, I'll cover it more in depth later this year :)

    Anyway, I am still learning about BPS, so I'll probably go deeper into it later in my timing series on my YouTube channel, but for now, I'll leave with with a quick video showing the demo (below), and a couple links for those who want to learn more:

    More resources:




    All Comments: [-] | anchor

    dieselerator(10000) 5 days ago [-]

    If planning/designing a timing system like this using existing antenna, why wouldn't you choose to use cellular base stations? The cellular network reaches most places with overlapping coverage and carries network time. The lowest cellular frequencies are adjacent the upper broadcast TV channels. Aren't modern cellular receivers what we call software defined radios? They can choose which channels to receive.

    michaelt(10000) 5 days ago [-]

    Interestingly, cellular base stations are one of the major customers for high precision timing systems.

    They use precise timing to coordinate timed broadcast slots between base stations with overlapping coverage.

    throw84848484(10000) 5 days ago [-]

    This system should be shutdown. What if enemies use it to guide their rockets?

    Calwestjobs(10000) 5 days ago [-]

    your phone ai can recognize dogs in your photos, and militaries have all kinds of aerial survey, satellit photos of your house, so do they really need to use external radio signals, or is it enough for them to use fully internal system with just cameras and khadas mind 2 ?

    fortran77(109) 5 days ago [-]

    A alternative, but only for timing and as GPS supplement. Unless you're in a place where you can pick up 4 ATSC transmitters at different locations you won't get position or navigation with it.

    chipsa(10000) 5 days ago [-]

    So if you can get more than 3 different TV stations you should be good. Most stations don't share transmission towers, AFAIK.

    There are places, especially in the mountains where you don't get the requisite number of towers, but large portions of the US will, and the required signal to noise ratio is better than to decode regular TV signals, so you have a larger area covered than for TV.

    geerlingguy(249) 5 days ago [-]

    Note that this blog post (and the associated video) were a quick off-the-cuff thing while I was on the NAB show floor—I have been talking to a few of those involved in the testing at NIST, Sinclair, and Avateq (among others), and will hopefully have a lot more in a follow-up.

    Right now it's in the experimental stage, with only 6 towers total deployed (only 5 were operational during NAB, and only one in Nevada... so timing, not navigation yet).

    The ultimate plan—which is probably dependent on how well ATSC 3.0 rolls out (which has plenty of hurdles[1])—is to encourage broadcasters to add on the necessary timing equipment to their transmitter sites, to build a mesh network for timing.

    That would allow the system to be 100% independent of GPS (time transfer could be done via dark fiber and/or ground-satellite-ground directly to some 'master' sites).

    The advantages for BPS are coverage (somewhat) inside buildings, the ability to have line of sight nearly everywhere in populated areas, and resilience to jamming you can't get with GPS (a 100 kW transmitter signal 10 miles away is a lot harder to defeat than a weak GPS signal hundreds of miles away in the sky).

    The demo on the show floor was also using eLoran to distribute time from a site in Nevada to the transmitter facility on Black Mountain outside Vegas, showing a way to be fully GPS-independent (though the current eLoran timing was sourced from GPS).

    [1] ATSC 3.0, as it is being rolled out in the US, doesn't even add on 4K (just 1080p HDR), and tacks on 'features' like 'show replay' (where you tap a button and an app can stream a show you're watching on OTA TV through the Internet... amazing! /s), DRM (at stations' discretion, ugh), and 'personalized ad injection' (no doubt requiring you to connect your TV to the Internet so advertisers can get your precise location too...). Because ATSC 3.0 requires new hardware, consumers have to be motivated to buy new TVs or converter boxes—I don't see anything that motivates me to do so. I feel like it may be a lot like the (forever ongoing) HD Radio rollout.

    toast0(10000) 5 days ago [-]

    I bought an atsc 3 tuner, and the experience turned me off of OTA tv. Since then, things managed to get worse as when I was poking around, DRM wasn't in use, but now it is.

    I was hoping to get better fidelity between the roughly 2x bitrate per channel, and the video codec update. And probably overly optimistically was hoping the 1080p feed source was progressive so there wouldn't be a deinterlacing step.

    Otoh, local broadcasters use an audio codec I can't easily use, integration with mythtv is poor, and there's no sign anything is going to get better soon.

    Maybe if I had a tv with an atsc 3 tuner, live tv would be an option, but I'm not buying a tv for that.

    ATSC 1.0 took a while before gathering momentum, so maybe that's going to be the same here, and in another few years, it might make sense to consider a transition. OTOH, maybe the writing is on the wall and OTA broadcasting will die on this hill. I was an OTA enthusiast, but between ATSC 3 being terrible, and the reallocation of spectrum that means cellular base stations sometimes overwhelm my pre-amp, it's not much fun anymore. (I have a filter post-pre-amp but it'd be better if I got on the roof to put it pre-pre-amp, but roofs are scary) Maybe I'm just getting curmudgeonly though.

    The_Double(10000) 5 days ago [-]

    How does it solve for time without location? With GPS location and time are one solution to an equation with 4 unknowns (x,y,z,t). Without location you won't know the time delay between you and the transmitter.

    throw0101d(1901) 5 days ago [-]

    > The demo on the show floor was also using eLoran to distribute time from a site in Nevada to the transmitter facility on Black Mountain outside Vegas, showing a way to be fully GPS-independent (though the current eLoran timing was sourced from GPS).

    There's been a consistent call by many people that there needs to be a diversity of options for navigation and timing:

    * https://rntfnd.org/2025/02/04/pnt-gps-critical-issue-for-new...

    China has GNSS (BeiDou, plus plans for LEO), plus terrestrial navigation (eLoran), plus a fibre-based network for accurate timing:

    * https://rntfnd.org/2024/10/03/china-completes-national-elora...

    * https://rntfnd.org/2024/03/01/patton-read-their-book-chinas-...

    * https://rntfnd.org/2024/11/29/china-announces-plan-to-furthe...

    Russia has a Loran-equivalent:

    * https://en.wikipedia.org/wiki/CHAYKA

    ksec(119) 5 days ago [-]

    Why is US ATSC 3.0 so bad? It is nearly a decade since it was South Korea have it deployed and operational. The standard itself is no longer 'next gen'. Brazil's TV 3.0, also uses ATSC 3.0 is so much better in every aspect.

    Even if someone mandate it as requirement for TV sold next year all the tech inside are at least 10 years old ( HEVC ? ) . Not to mention the roll out time. Do Americans only watch Cables and Netflix? And not Free to Air TV? Which is what I belief what most of the worst still do to a larger extend other than Internet streaming.

    They might as well look into the standards before putting a mandate into it.

    lsaferite(3605) 5 days ago [-]

    Did you actually mention what BPS actually stands for in the article? I read the whole thing and don't recall reading that. Yes, I'm capable of searching and finding the information myself, but in an article about something something esoteric like this, explaining the acronym would be useful.

    Edit: Broadcast Positioning System for anyone that didn't figure it out.

    teleforce(414) 5 days ago [-]

    >an oscilloscope that costs 3x the value of your car on a trade show floor

    Typical high end microwave measurement system cost as much as a Ferrari car.

    Good cable and connectors can set you back by several thousand dollars.

    It's a very good business space prime for disruption (hint SDR - or software-defined radio).

    Fun facts, the grand daddy of Silicon Valley start-up is HP (then Agilent, and now Keysight) selling function signal generator.

    concrete_head(10000) 5 days ago [-]

    Interesting. Though he didn't say what kind of car he drives, it could be a real shitter

    mindcrime(738) 5 days ago [-]
    Good cable and connectors can set you back by several thousand dollars.

    Another domain where that is true involves logic analyzers. A few years ago, on a bit of a lark, I bought a (used) fairly high-end Keysight logic analyzer. The kind of thing that cost like $20,000 or more when it was brand new. But I got a sweet deal on it, so I bought it. Only... it came with no test leads. And then I started shopping for the leads.

    Yikes.

    I forget the exact numbers now, but as best as I can recall, the leads came in 64pin sets, where the device supported up to 4 test lead sets, for 256 total channels. And just one of the 64pin test lead sets cost something like $1500. So a full set would cost another $6000 on top of the device itself. I think that was about what I paid for the analyzer itself in the first place!

    Now I don't regret buying it and in truth I never needed to use 256 channels anyway, so I only bought 1 of the test lead sets so far. But yeah... test leads / cables /etc. for high bandwidth / low latency / high frequency applications get pretty damn expensive.

    wildzzz(10000) 5 days ago [-]

    I've got a rack of equipment that sometimes requires a special calibration where I need to lug over a signal generator. Of course the only ones we have available that go to the necessary frequency weigh like 50lbs. I've recently been eyeing a little gadget that costs about 1/10 or 1/20 the cost of the Keysight units, interfaces using USB oe Ethernet, and is about the size of a deck of cards. The accuracy isn't perfect on its own but that's what a 10MHz ref clock is for. It's amazing how far tech has come and it's amazing how much we are still paying for these dinosaur pieces of test equipment.

    RyanShook(2193) 5 days ago [-]

    Slide deck of BPS (Broadcast Postioning System) https://www.gps.gov/governance/advisory/meetings/2022-11/mat...

    louwhopley(10000) 5 days ago [-]

    Thanks for sharing this. It creates a clear picture of its use cases and roll out plans.

    GPS is such a critical infrastructure component to modern society- knowing that a redundancy system like this is in the works is great.

    master_crab(3278) 5 days ago [-]

    This sounds interesting but it most likely will only be of use in populated areas where there is enough signal overlap from broadcast towers. You'll still need GPS in the countryside and on water.

    bri3k(10000) 5 days ago [-]

    In a lot of cities the broadcast towers are concentrated in the same place, I wonder how effective it could be.

    publicola1990(10000) 5 days ago [-]

    While this is is interesting, the 'nobody's heard of' phrase is rather condescending and such phrases leave a bad taste in the mind.

    jen729w(10000) 5 days ago [-]

    Hmm it's just a turn of phrase. I would bet you $100 that no more than 0.001% of the population have heard of BPS. I hadn't. That's functionally 'nobody'.

    Calwestjobs(10000) 5 days ago [-]

    yes saying jeff geerling is the nobody who never heard of that thing is offensive to me. XD

    p_ing(10000) 5 days ago [-]

    It's a curiosity gap headline; it's a lazy form of headline that insults the intelligence of the audience. It also extends into clickbait.

    Poor form. Do better.

    Iwan-Zotow(10000) 5 days ago [-]

    GLONASS? Baidu?

    toomuchtodo(160) 5 days ago [-]

    Controlled by other nation states.

    Lammy(786) 5 days ago [-]

    I hope it will still be possible to receive a BPS timing signal privately and anonymously with ATSC 3 like one can with GPS. ATSC 3 has the Dedicated Return Channel because marketers """need""" to spy on every-fucking-thing we do: https://www.atsc.org/wp-content/uploads/2024/04/A323-2024-04...

    "Conventional linear TV services alone (albeit ultra-high-definition) may not be sufficient to sustain the terrestrial broadcasting business which requires a large amount of highly coveted spectrum resources. Intelligent media delivery and flexible service models that maximize the network Return on Investment (ROI) is of paramount importance to the broadcasting industry in the new era."

    That's a lot of fancy words to say 'we're doing this because it makes us more money' lol

    "Recent studies have shown that interactivity between media customers and service providers and between users themselves will be one of the most important features in the next-generation media service. In this document, this unique opportunity is addressed by defining a Dedicated Return Channel (DRC) system for the next-generation broadcasting system."

    geerlingguy(249) 5 days ago [-]

    Yeah... and that's one of the most innocuous new 'features' in ATSC 3.0.

    Almost everything I've seen (besides BPS, and maybe HDR if you're one of the few who has a really good home theater setup) is a benefit for broadcasters and advertisers, and a bit worse for consumers (especially requiring new hardware/decoders... and sometimes persistent Internet connections!).

    m463(2487) 5 days ago [-]

    Just like 5G, which provides unexpected connectivity for IoT devices.

    Search for 'miot' or 'mmtc'

    kmeisthax(10000) 5 days ago [-]

    Wait, to be clear, this 'dedicated return channel' is just for TVs to broadcast back to the station that they're watching the adverts? I thought ATSC 3.0 was going to rely on IP backhaul for that. Actually broadcasting back seems... impractical at best.

    I mean, let's keep in mind, even ATSC 1.0 had really awful reception issues; compared to analog NTSC where there was enough redundancy that you could just tune into a garbage station from way too far away and see something. Now imagine trying to make that already unreliable channel bidirectional. I just really hope all the return channel stuff is optional, because it sure as hell isn't going to work without way more stations broadcasting on more channels, and OOPS you've reinvented LTE.

    prox(10000) 5 days ago [-]

    I just used bullshit remover : "Conventional TV ain't enough. Need new tech to make more money. Gotta maximize that ROI, yo."

    So you got that right.

    swores(2007) 5 days ago [-]

    > 'That's a lot of fancy words to say 'we're doing this because it makes us more money' lol'

    You say that as if they're using lots of words to obfuscate that fact, but the quote you pasted has them saying entirely directly 'maximize the network Return on Investment', which is just normal business terminology (and only one word more than your 'it makes us more money'!)

    Obviously this has no impact on whether that's a good or bad thing, I'm just pointing out that they weren't using a lot of words to hide that fact.

    xattt(10000) 5 days ago [-]

    I just realized the BPS is there to augment the return channel. Not only can the advertiser figure out what you are watching, but also where you are located.

    throw0101d(1901) 5 days ago [-]

    For anyone who wants to know about ATSC 3.0 the Antenna Man channel covers over the air (OTA) stuff in the US:

    * https://www.youtube.com/watch?v=cw3W7MoafR4

    * https://www.youtube.com/@AntennaMan/videos

    ATSC 3.0 allows for DRM/encryption as the parent comment mentions.

    giantg2(10000) 5 days ago [-]

    I might have missed it just skimming, but what's the physical method they are planning to use for return channel?

    karaterobot(10000) 5 days ago [-]

    > Recent studies have shown that interactivity between media customers and service providers and between users themselves will be one of the most important features in the next-generation media service. In this document, this unique opportunity is addressed by defining a Dedicated Return Channel (DRC) system for the next-generation broadcasting system.

    Wow, that's one of the best uses of corporate-speak euphemism I've seen. Everybody who reads it knows what it really means, but if you just don't say it, it's fine. Recent studies indeed!

    elzbardico(10000) 5 days ago [-]

    We should create technology that deliberately feeds trash data to marketers, in mind-boggling volumes. Drowning the signal in a biblical flooding of noise.

    We should make things so useless and annoying for them, as they did for us.

    kristopolous(3570) 5 days ago [-]
    https://www.nab.org/bps/

    for people who don't want to watch videos

    geerlingguy(249) 5 days ago [-]

    The OP link is a blog post, which includes links out to the primary resources (much more in depth than the BPS landing page). The video is a byproduct of my conversations at NAB, and both are just preliminary... I've been working on a more in depth look at GPS and BPS (and other alternatives).

    lxgr(10000) 5 days ago [-]

    High-power, and ideally authenticated, alternatives to space-based GNSS are desperately needed, given the sharp uptick in jamming and spoofing incidents in many places.

    In a true 'end of history' moment, the US and other NATO members discontinued both of their ground-based systems (which are inherently harder to jam due to their much higher transmission power, since transmitters are not power limited) – Omega in the late 1990s and Loran-C in the early 2010s – in favor of GPS, while Russia kept their equivalent functional, and China completed an eLoran network last year.

    Add to that the FAA's reduction of their ground-based VOR/DME station network that lets planes navigate when GPS is unavailable...

    GPS jamming, and much more concerningly spoofing, will probably quickly come within reach of non-nation-states and smaller groups of all kinds, and ultimately individual actors, and that can't possibly end well for civil aviation if robust countermeasures don't become available very soon.

    mindcrime(738) 5 days ago [-]
    GPS jamming, and much more concerningly spoofing, will probably quickly come within reach of non-nation-states and smaller groups of all kinds, and ultimately individual actors

    It may already be so:

    https://hal.science/hal-03456365v1

    jeffbee(1275) 5 days ago [-]

    A university of Texas research group demonstrated more than ten years ago that they could spoof GPS in the vicinity of an automatically navigating UAV, and force it to land at a point of their choosing. This has been within the reach of garage hackers for a long time.

    typewithrhythm(10000) 5 days ago [-]

    You can't really beat a jammer, sure you can compete for power output, but there is no real stopping it.

    Aircraft and military positioning concepts are evolving towards more map and dead reckoning, lessening the benefit of GPS jamming.

    skissane(3426) 5 days ago [-]

    Is there any DVB-T equivalent?

    Calwestjobs(10000) 5 days ago [-]

    Czech technical university - 2018 - https://www.radioeng.cz/fulltexts/2018/18_04_1155_1165.pdf

    But concepts are translatable to other technologies, for example mobile phone network signals (even without decrypting it) which in most populated areas can have hundreds frequencies by itself.

    there are literally literally thousands of radio signals around us which can be used for various unintended / non-cooperative purposes. also not only ground based signals, satellites are transmitting all kinds of signals towards earth, some for communication, some for remote sensing / earth observation.

    Or not only is it possible to use non-cooperative signals for timing, but also for passive radar. For example DVB-T - you receive bounces/echoes of signal from airplanes, drones and measure its characteristics.

    NATO public document - UAV Detection and Localization Using Passive DVB-T Radar MFN and SFN - https://www.sto.nato.int/publications/STO%20Meeting%20Procee...

    Good community is around GNURadio, they have all kinds of enthusiast and profesional usecases, explorations, videos, ...

    Or just simple 30$ RTL-SDR + laptop, you can sit next to road and listen for tire pressure monitoring sensors data, they contain unique ids, so you can know when postman enters your street...

    rwg(10000) 5 days ago [-]

    I want to like this — I think having ground-based alternatives to GPS and other space-based PNT systems is a very good thing! But after reading the paper at https://www.nab.org/bps/Broadcast_Positioning_System_Using_A... and other BPS information on the NAB's website, I think the NAB is being wildly optimistic about BPS:

    • ATSC 3.0's physical layer can already transmit GPS time in a way that receivers could get it back out. What BPS brings to the table is a requirement and specification for accurately and consistently filling in the physical layer preamble fields containing the time data, along with a new physical layer pipe (think 'low-level data stream') that contains additional information about the transmitter and, optionally, its neighboring transmitters.

    • BPS is capable of producing time fixes when the receiver only has a lock on one source. This isn't surprising at all — GPS receivers can do the same thing. But either type of receiver with only one source would see a clock offset proportional to the path delay, which it wouldn't be able to compute and back out without knowing its position.

    • BPS is only designed for 2-D position fixes. While that's a reasonable design decision (the vertical position error would be massive), it also makes BPS less useful for the NAB's 'indoor positioning for first responders' use case, especially in areas with multi-story buildings.

    • The need to receive and process/decode multiple, most likely non-adjacent 6 MHz channels for positioning increases receiver complexity and cost.

    • The NAB claims that 1 kilometer of separation between two BPS transmitters is 'sufficient for useful position determination.' I don't buy it, especially in the face of poor transmitter geometry.

    • They note that 16 TV stations in the New York City area broadcast from One World Trade Center, so for the purposes of BPS, they're effectively one station. This kind of transmitter colocation is incredibly common, both in urban areas (ten TV stations broadcast from Sutro Tower in San Francisco) and in more rural areas (six TV stations in the Roanoke-Lynchburg DMA broadcast from towers within ~1 mile of each other on the ridgeline of Poor Mountain). Even if every ATSC TV station became an ATSC 3.0 w/ BPS transmitter, bad transmitter geometries would destroy BPS's position accuracy in lots of markets.

    • What's the business case for broadcasters? BPS won't be free for broadcasters to implement, and there doesn't seem to be a path to it generating revenue except for a hand-wavy 'maybe one day televisions will be able to determine their locations without Internet connections using BPS, and then broadcasters can do location-targeted advertising with those TVs!'

    My uncharitable take is that BPS will never be a usable standalone PNT system. A timing system in the 'rebroadcasts GPS' sense? Maybe. Standalone positioning? No way. Broadcasters implementing BPS (or ATSC 3.0 at all) without being forced to by the government? I don't see it.

    geerlingguy(249) 5 days ago [-]

    > What's the business case for broadcasters?

    My uneducated guess is government funding, plus becoming part of a new 'essential backbone' infrastructure, thus guaranteeing incentives to stay operational for a longer period of time.





    Historical Discussions: You might not need WebSockets (April 11, 2025: 415 points)

    (415) You might not need WebSockets

    415 points 7 days ago by hntrl in 3466th position

    hntrl.io | Estimated reading time – 23 minutes | comments | anchor

    What's a WebSocket?

    If you're new to web development or you haven't heard of a WebSocket before, they're a way to open a two-way communication channel between the client and server using HTTP as the transport protocol. In less nerdy terms, it's a way to keep an open line of communication between the client and server so that both can send and receive messages at any time. (MDN Reference)

    Because of how it's advertised on the tin, it's natural to think of a WebSocket as the best (and sometimes only) way to orchestrate a long-living stream of data between client and server, like for instance a real time application. In practice though, it turns out there are a few reasons why you might not want to use them:

    WebSocket messages aren't transactional

    I see a lot of instances where WebSockets are used as the way of maintaining consistency for some kind of state object. For instance, you use the transmitting side of the socket to represent mutations to some object, and the receiving side of the socket to represent state as it gets changed by those mutations. That way if you have multiple clients listening to the same object, they'll all see the same state changes without having to refresh the page.

    # Client 1
    >>> { command: 'increment', amount: 5 }
    <<< { event: 'count', value: 5 }
    >>> { command: 'decrement', amount: 2 }
    <<< { event: 'count', value: 3 }
    # Client 2
    <<< { event: 'count', value: 5 }
    <<< { event: 'count', value: 3 }

    But what if you placed some kind of invariant condition on the state object? For instance, you want to make sure that the count is never negative:

    <<< { event: 'count', amount: 5 }
    >>> { command: 'decrement', amount: 6 }
    <<< { error: 'count cannot be negative' }

    The issue here is that there's no association between the mutation and error message since the error message will be received on the same stream as every other message. We can't reliably say "the next message" received on the stream is the result of the previous command since the server could have sent any number of messages in between now and then.

    If we wanted to update the UI to show the error, we'd have to link the error event somehow (like providing an associative request id in the command and the error message):

    >>> { command: 'decrement', amount: 6, requestId: '123' }
    <<< { error: 'count cannot be negative', requestId: '123' }

    This becomes even more awkward because now you have to keep track of every message you send, and you have to find some way to bubble the error event back to the UI in an idempotent way. The same goes if you wanted to have some kind of indication that the command was received by the server. In that case, now you're also dealing with certain hard-to-track edge cases:

    • What if the socket closes before the server can process the command?
    • What if you never receive a response message on the socket for some reason?
    • What if you're dealing with a huge number of concurrent requests?

    It creates too many unknowns and complexity for something that should be simple. If you're dealing with messages where you need to know if they were received or not, you're better off with using a more transactional protocol like REST to represent the sending side of the socket.

    ( < > ) = HTTP
    ( <<< >>> ) = WebSocket
    
    # Success
    > POST /increment '{ value: 5 }'
    < 200 OK
    <<< { event: 'count', value: 5 }
    #- (the update message still gets sent to all connected clients)
    
    # Failure
    > POST /decrement '{ value: 6 }'
    < 400 Bad Request
    #- (no update gets sent because the request failed)

    We've effectively ditched the transmitting side of the socket altogether and replaced it with HTTP, which means we're now leaning on WebSockets to represent only one stream of data (the receiving side). As it turns out there's other ways to do that don't require the overhead of a full duplex connection. (we'll get into this later)

    If you're sending messages that don't necessarily need to be acknowledged (like a heartbeat or keyboard inputs), then Websockets make a great fit. Hence the title of this post, you might not need Websockets.

    You have to manage the socket lifecycle

    When you use WebSockets, you're not just sending and receiving messages at will—your application also has to respond to the opening and closing of the connection. This means handling events like "open" and "close" (or "error"), deciding what to do during reconnect attempts, and cleaning up resources when the connection is no longer needed.

    For example, a basic lifecycle for a WebSocket in the browser might look like this:

    const socket = new WebSocket('wss://example.com/socket');
    
    socket.addEventListener('open', () => {
      console.log('Socket opened');
    });
    
    socket.addEventListener('message', (event) => {
      console.log('Received message:', event.data);
    });
    
    socket.addEventListener('error', (err) => {
      console.error('Socket error:', err);
    });
    
    socket.addEventListener('close', () => {
      console.log('Socket closed. Attempting to reconnect...');
      // Potentially restart or schedule a new socket connection here
    });

    In a typical application, you might need to restart a closed connection, buffer messages while the socket is down, and handle retries with exponential backoff. Ignoring any of these steps can lead to lost messages, clumsy user experiences, or lingering connections. By contrast, with a simpler request/response model like HTTP, the lifecycle is more straightforward: each request starts, completes (or fails), and then you move on.

    The extra complexity of a WebSocket's lifecycle is one of the main reasons you might not need it—unless there's absolutely no alternative to socket based messaging (partially demonstrated in the previous section), then you're better off with a simpler communication pattern.

    It makes your server code more complex

    When a new WebSocket connection is initiated, your server has to handle the HTTP "upgrade" request handshake. Instead of completing an ordinary request, the server checks for the special headers indicating a WebSocket handshake and then upgrades the connection from HTTP to a persistent socket. That means for every initial connection, the server must parse and validate WebSocket headers like "Sec-WebSocket-Key" and respond with the correct "Sec-WebSocket-Accept" header. (MDN Reference)

    The upgrade mechanism itself requires additional plumbing: you need to create a listener for the upgrade event on your server, confirm the request is valid, finalize the handshake, and then start broadcasting or receiving data. This not only adds more moving parts (compared to standard request/response flows) but also means comprehension of HTTP alone isn't enough for debugging or troubleshooting—now you're dealing with a specialized connection protocol.

    If you're also dealing with similar request/response semantics as we've detailed above, it can introduce even more complexity since now your server code is written with the durable nature of sockets in mind, not the ephemeral nature of HTTP. Additionally, your application will need to manage all the edge cases: what if the client tries upgrading in an unsupported way? What if the handshake fails mid-stream or times out? What about partial data frames that need to be reassembled?

    While libraries and frameworks do a really good job of hiding some of these details under the hood, all these potential points of failure point back to a single truth: if you don't truly need the power of a bidirectional, always-on socket, the handshake cost and the expanded error states can overshadow any performance or real-time benefits.


    So what's the alternative?

    We touched on it very briefly in the previous sections, but if we can abstract away the transmitting side of the socket and only be left with a one-way stream of data on the receiving side, we can use a much simpler communication pattern.

    HTTP Streaming

    If you look deeper into how HTTP works, you'll find that it's actually a protocol designed for streaming data. If it wasn't, we couldn't stream video without loading the entire file first, or load huge websites without downloading the whole page.

    As it turns out that data stream doesn't have to be split up chunks of some large blob of data. We can use the same principle to represent any arbitrary stream of data, like the real time updates that we were leaning on WebSockets for.

    Here's an example in server-side JavaScript of how this would look using our counter example from before:

    let counter = 0;
    let resolvers = new Set();
    
    // this returns a promise that resolves when the next
    // value is available.
    async function nextValue() {
      return new Promise((resolve) => resolvers.add(resolve));
    }
    
    // look up what an `async generator` is if you're lost
    // looking at this syntax. explaining it is out of scope
    // for this post.
    async function* valueGenerator() {
      // (this loop gets broken when the response stream is closed.)
      while (true) {
        // every time we get the next value from the iterator,
        // we yield the return from an awaited promise that resolves
        // when the next value is available.
        yield await nextValue();
      }
    }
    
    async function processCommand(command) {
      // this is what handles our 'state updates'
      counter = nextCounterValue(command);
      // for each iterator (i.e. client that called `/stream`)
      // that's waiting on a value, we resolve the promise with
      // the new value
      for (const resolver of resolvers) {
        resolver(counter);
        resolvers.delete(resolver);
      }
    }
    
    // this is the function that computes the next state
    // based on the command, and enforces any invariants
    // that we want to have on the state.
    function nextCounterValue(command) {
      let next = counter;
      if (command.type === 'increment') {
        next += command.amount;
      } else if (command.type === 'decrement') {
        next -= command.amount;
      }
      if (next < 0) {
        throw new Error('count cannot be negative');
      }
      return next;
    }
    
    // we use hono/express like syntax here, but you can
    // use any server framework you want.
    
    app.post('/increment', async (req, res) => {
      try {
        const { value } = await req.json();
        processCommand({ type: 'increment', amount: value });
        return new Response('OK', 200);
      } catch (error) {
        return new Response(error.message, 400);
      }
    });
    
    app.post('/decrement', async (req, res) => {
      try {
        const { value } = await req.json();
        processCommand({ type: 'decrement', amount: value });
        return new Response('OK', 200);
      } catch (error) {
        return new Response(error.message, 400);
      }
    });
    
    app.get('/stream', (req, res) => {
      // We can create a stream from any async iterator, so
      // we can pass the generator function that yields counter
      // updates as they become available.
      const stream = ReadableStream.from(valueGenerator());
      return new Response(stream);
    });

    We can then use the Stream API on the browser side to read the data as it comes in, and update our UI according to whatever the server sends.

    const response = await fetch('/stream');
    const reader = response.body.getReader();
    const decoder = new TextDecoder();
    
    while (true) {
      // wait for the next chunk of data
      // (will only come when a state update is made)
      const { done, value } = await reader.read();
      // when the server is done sending data, we break out of the loop
      if (done) break;
      // decode the chunk since data gets encoded over the network
      const chunk = decoder.decode(value);
      // update the UI with the new state
      updateUI(chunk);
    }

    With this setup we've completely eliminated the need for WebSockets while still maintaining real-time updates between multiple clients!

    Bonus: Making it easy with eventkit

    This is a little bit of a shameless plug, but it's my post so you're just going to have to live with it.

    I've been working on a library called eventkit that makes it easy to compose and observe asynchronous streams of data. If you're familiar with the observable pattern or RxJS, it's very similar but with better side effect management and built with generators.

    To harp on the counter example a little bit more, here's how you could use eventkit to implement the same functionality:

    // server.ts
    import { Stream, AsyncObservable } from 'eventkit';
    
    let counter = 0;
    const stateUpdates$ = new Stream();
    
    // when a new value is pushed into the stream,
    // we update the counter
    stateUpdates$.subscribe((value) => {
      counter = value;
    });
    
    function nextCounterValue(command) {
      let next = counter;
      if (command.type === 'increment') {
        next += command.amount;
      } else if (command.type === 'decrement') {
        next -= command.amount;
      }
      if (next < 0) {
        throw new Error('count cannot be negative');
      }
      return next;
    }
    
    function processCommand(command) {
      const next = nextCounterValue(command);
      stateUpdates$.push(next);
    }
    
    app.post('/increment', async (req, res) => {
      try {
        const { value } = await req.json();
        processCommand({ type: 'increment', amount: value });
        return new Response('OK', 200);
      } catch (error) {
        return new Response(error.message, 400);
      }
    });
    
    app.post('/decrement', async (req, res) => {
      try {
        const { value } = await req.json();
        processCommand({ type: 'decrement', amount: value });
        return new Response('OK', 200);
      } catch (error) {
        return new Response(error.message, 400);
      }
    });
    
    app.get('/stream', (req, res) => {
      // We can use the `Stream` class as an async iterator
      // to create a stream from it in the exact same way.
      const stream = ReadableSteam.from(stateUpdates$);
      return new Response(stream);
    });
    // client.ts
    import { AsyncObservable, map } from 'eventkit';
    
    const response = await fetch('/stream');
    const decoder = new TextDecoder();
    const counter$ = AsyncObservable.from(response.body);
    
    counter$
      .pipe(map((value) => decoder.decode(value)))
      .subscribe(updateUI);

    I wouldn't be a good project maintainer if I didn't tell you to at least go check it out. We also wrote a separate HTTP Streaming guide that goes a little bit deeper into this topic in case you're interested.

    I learned about the capabilities of the Stream API while building it and think it's a really good candidate for your next real-time/event-based application. If you say otherwise, please open an issue and tell me why.


    Thanks for reading this wall of text! If you have any questions/comments, I'm around on X/Twitter. I also post more schizo ramblings on there, so I would appreciate the follow if that's the sort of thing you're into.

    (END)




    All Comments: [-] | anchor

    RajT88(10000) 6 days ago [-]

    The world needs more of these 'you might not need' articles.

    Too many technology fads make things needlessly complicated, and complexity makes systems unreliable.

    You might not need Kubernetes

    You might not need The Cloud

    You might not need more than SQLite

    ...and so on.

    morsecodist(10000) 6 days ago [-]

    Genuine question because I agree that there are a lot of over complicated systems. I often see people say all you need is SQLite. Do you implement replication yourself? Or you are just accepting that if something happens to your server your data is just gone? I always default to managed Postgres and that seems to be the simplest most boring solution.

    lelanthran(3620) 6 days ago [-]

    I'm still waiting for 'You might not need React'

    Dwedit(10000) 6 days ago [-]

    WebSockets can't go through proxies.

    kingforaday(10000) 6 days ago [-]

    I think what you are getting at is that websockets aren't as simple as http traffic through a proxy, but you absolutely can use proxies and ws connections just fine and for a variety of reasons.

    Austizzle(10000) 6 days ago [-]

    I've definitely used websockets through nginx

    paxys(10000) 6 days ago [-]

    Says who?

    bastawhiz(10000) 6 days ago [-]

    This isn't based on any facts

    mad_vill(10000) 6 days ago [-]

    For all the other comments, parent is probably talking about forward proxies and to their point many forward/enterprise proxies have configurations which cause websockets to break and it is a pain to debug this if you have many enterprise customers.

    gregors(3512) 6 days ago [-]

    Works completely fine in Haproxy

    shadowangel(10000) 6 days ago [-]

    I use them though nginx/cloudflare. they work fine.

    xiphias2(10000) 6 days ago [-]

    With HTTP streaming the browser shows that it's still loading data. Is there some mitigation for it after the initial loading?

    panic(118) 6 days ago [-]

    I'm guessing you would use JS to fetch() the stream resource separately.

    bob1029(10000) 6 days ago [-]

    The fetch API is asynchronous. The initial page load would deliver the payload that then initiates the streaming connection in the background.

    lxgr(10000) 6 days ago [-]

    That sounds less like a problem with HTTP streaming (initiated from JavaScript) and more like a page with some hanging resource.

    ramesh31(3343) 6 days ago [-]

    You probably do. Reliable SSE is a complete nightmare.

    koakuma-chan(10000) 6 days ago [-]

    Why?

    albuic(10000) 2 days ago [-]

    Can you explain ?

    almosthere(10000) 6 days ago [-]

    I liked vert.x's strategy of seamlessly downgrading the form of connection based on what is available.

    winrid(10000) 6 days ago [-]

    Vert.x is great! I'm missing it lately with Node. At least with Vert.x you get a stack trace when you block the event loop by accident...

    notpushkin(1263) 6 days ago [-]

    > Bonus: Making it easy with eventkit

    Why not just use SSE? https://developer.mozilla.org/en-US/docs/Web/API/Server-sent...

    kordlessagain(2482) 6 days ago [-]

    SSE is the way to roll.

    supahfly_remix(10000) 6 days ago [-]

    Do CDN, such as Cloudflare, support SSE? The last time I looked, they didn't, but maybe things have changed.

    hntrl(3466) 6 days ago [-]

    I've noticed some weird behaviors with the EventSource impl that browsers ship with. Chief among them being the default behavior is to infinitely reconnect after the server closes the stream, so you have to coordinate some kind of special stop event to stop the client from reconnecting. You wouldn't have that problem with the stream object from Response.body

    The SSE protocol is actually just a long-running stream like I mentioned but with specific formatting for each chunk (id, event, and data fields)

    as a side note, eventkit actually exports utilities to support SSE both on client and server. The reason you'd want to use eventkit in either case is because it ships with some extra transformation and observability goodies. https://hntrl.github.io/eventkit/guide/examples/http-streami...

    jongjong(10000) 6 days ago [-]

    I don't know why people keep trying desperately to avoid the simplicity and flexibility of WebSockets.

    A lot of times, what people need is a bidirectional connection yet somehow they convince themselves that SSE is better for the job... But they end up with two different types of streams; HTTP for writes and responses and SSE for passively consuming real-time data... Two different stream types with different lifecycles; one connection could fail while the other is fine... There is no way to correctly identify what is the current connection status of the app because there are multiple connections/statuses and data comes from multiple streams... Figuring out how to merge data coming from HTTP responses with data coming in passively from the SSE is messy and you have no control over the order in which the events are triggered across two different connections...

    You can't enforce a serial, sequential, ordered flow of data over multiple connections as easily, it gets messy.

    With WebSockets, you can easily assign an ID to requests and match it with a response. There are plenty of WebSocket frameworks which allow you to process messages in-order. The reason they work and are simple is because all messages pass over a single connection with a single state. Recovering from lost connections is much more straight forward.

    osigurdson(10000) 6 days ago [-]

    Based on my read, this basically is SSE but doesn't use the same protocol.

    tbeseda(10000) 6 days ago [-]

    SSE is great. Most things with websockets would be fine with SSE.

    Also I don't see it being much easier here than a few primitives and learning about generator functions if you haven't had experience with them. I appreciate the helper, but the API is pretty reasonable as-is IMO

    apitman(519) 6 days ago [-]

    SSE doesn't support binary data without encoding to something base64 first. These days I'd recommend a fetch stream with TLV messages first, followed by WebSocket.

    shadowangel(10000) 6 days ago [-]

    It's javascript, anything simple needs a framework.

    colesantiago(839) 6 days ago [-]

    One thing I couldn't get working with websockets is how do you keep websocket connections active during code deployments without disconnecting current connected clients?

    Sounds very tricky to me to get right even at scale.

    paxys(10000) 6 days ago [-]

    The trick is to make the connection stateless, i.e. any client can connect to any server (just like plain HTTP). Then when there's a new deployment the websocket connection will be terminated and the client can reconnect instantly, automatically finding the next available server.

    hombre_fatal(10000) 6 days ago [-]

    It's a minor point in the article, but sending a RequestID to the server so that you get request/response cycles isn't weird nor beyond the pale.

    It's pretty much always worth it to have an API like `send(message).then(res => ...)` in a serious app.

    But I agree. The upgrade request is confusing, and it's annoying how your websocket server is this embedded thing running inside your http server that never integrates cleanly.

    Like instead of just reusing your middleware that reads headers['authorization'] from the websocket request, you access this weird `connectionParams` object that you pretend are request headers, heh.

    But the idiosyncrasies aren't that big of a deal (ok, I've just gotten used to them). And the websocket browser API is nicer to work with than, say, EventSource.

    syspec(10000) 6 days ago [-]

    It's a good well worn tactic. You list in very high detail every single step of any process you don't like. It makes that process seem overly complex, then you can present your alternative and it sounds way simpler.

    For example, making a sandwich: You have to retrieve exactly two slices of bread after finding the loaf in the fridge. Apply butter uniformly after finding the appropriate knife, be sure to apply about a 2.1mm level of coating. After all of that you will still need to ensure you've calibrated the toaster!'

    hntrl(3466) 6 days ago [-]

    > sending a RequestID to the server so that you get request/response cycles isn't weird nor beyond the pale.

    To me the sticking point is what if the 'response' message never comes? There's nothing in the websocket protocol that dictates that messages need to be acknowledged. With request/response the client knows how to handle that case natively

    > And the websocket browser API is nicer to work with than, say, EventSource.

    What in particular would you say?

    ricardobeat(3634) 6 days ago [-]

    That's basically RPC over WS.

    This article conflates a lot of different topics. If your WebSocket connection can be easily replaced with SSE+POST requests, then yeah you don't need WebSockets. That doesn't mean there aren't a ton of very valid use cases (games, anything with real time two-way interactivity).

    hliyan(1215) 6 days ago [-]

    This is how I used to do it over TCP, 20 years ago: each request message has a unique request ID which the server echoes and the client uses to match against a pending request. There is a periodic timer that checks if requests have been pending for longer than a timeout period and fails them with an error bubbled up to the application layer. We even had an incrementing sequence number in each message so that the message stream can resume after a reconnect. This was all done in C++e, and didn't require a large amount of code to implement. I was 25 years old at the time.

    What the author and similar web developers consider complex, awkward or difficult gives me pause. The best case scenario is that we've democratized programming to a point where it is no longer limited to people with highly algorithmic/stateful brains. Which would be a good thing. The worst case scenario is that the software engineering discipline has lost something in terms of rigor.

    cryptonector(10000) 6 days ago [-]

    IMAP uses request IDs.

    crabmusket(10000) 6 days ago [-]

    > sending a RequestID to the server so that you get request/response cycles isn't weird nor beyond the pal

    There's even a whole spec for that: JSON-RPC, and it's quite popular.

    theteapot(10000) 6 days ago [-]

    Reads like a series of strawman arguments if you replace 'WebSockets' with socket.io.

      - 'messages aren't transactional': You can process request and return a value to sender in socket.io application layer. Is that transactional enough?
      - 'If you're sending messages that don't necessarily need to be acknowledged (like a heartbeat or keyboard inputs), then Websockets make a great fit'. But socket.io has acknowledgements.
      - 'When a new WebSocket connection is initiated, your server has to handle the HTTP "upgrade" request handshake.'. You can bypass handshake and go straight to WS even in Websockets, and if you don't socket.io handles upgrade for you pretty nicely so you not parsing HTTP header ..
    hntrl(3466) 6 days ago [-]

    It's a good thing I didn't then :shrug:

    Websockets are a web standard, socket.io is a userland framework

    osigurdson(10000) 6 days ago [-]

    >> If it wasn't, we couldn't stream video without loading the entire file first

    I don't believe this is correct. To my knowledge, video stream requests chunks by range and is largely client controlled. It isn't a single, long lived http connection.

    dangoodmanUT(10000) 6 days ago [-]

    Correct

    EE84M3i(10000) 6 days ago [-]

    I believe that's standard for Netflix, etc, but is it also true for plain webms and mp4s in a <video> tags? I thought those were downloaded in one request but had enough metadata at the beginning to allow playback to start before the file is completely downloaded.

    ejoso(10000) 6 days ago [-]

    Correct. HLS and Dash are industry standards. Essentially the client downloads a file which lists the files in various bitrates and chunks and the client determines which is best for the given connectivity.

    motorest(10000) 6 days ago [-]

    > I don't believe this is correct.

    Yes, the statement is patently wrong. There are a few very popular video formats whose main feature is chunking through HTTP, like HTTP Live Streaming or MPEG-DASH.

    wewewedxfgdf(10000) 6 days ago [-]

    I wrote a subsystem the other day that used websockets for a server to distribute video conversion tasks.

    After futzing with silly things like file transfers and communication protocols I chucked it out and rewrote it so the client does HTTP long polling of the server and uploads its renders via hTTP POST.

    So much easier.

    ricardobeat(3634) 6 days ago [-]

    That used to be called "Comet" back in the early 2000s.

    Did you try using an established library like socket.io, connectRPC etc? They handle a lot of the complexity.

    noduerme(10000) 6 days ago [-]

    Long polling is great for most things that don't need a realtime push. It just gets to be a strain on a server if you've got to set up and tear down lots of those connections from lots of users. Keeping a socket alive is a lot less resource intensive. Maybe it sounds stupid, but I've even converted PHP code that responded to long polling to handle the same polling over a socket to save resources. Most of my apps that need some kind of lazy updates actually work this way, and fall back to REST polling the same services if the socket is down.

    austin-cheney(10000) 6 days ago [-]

    WebSockets are full duplex, so both sides of a connection are equally transmitting sides. There first section fails to understands this and then builds some insane concern for state on top of this faulty notion. WebSockets don't care about your UI framework just like your car doesn't care what time you want to eat dinner.

    > You have to manage the socket lifecycle

    You have to do the very same thing with HTTP keep-alive or use a separate socket for each and every HTTP request, which is much slower. Fortunately the browser makes this stupid simple in regards to WebSockets with only a few well named events.

    > When a new WebSocket connection is initiated, your server has to handle the HTTP "upgrade" request handshake.

    If the author cannot split a tiny string on CRLF sequences they likely shouldn't be programming and absolutely shouldn't be writing an article about transmission. There is only 1 line of data you really need from that handshake request: Sec-WebSocket-Key.

    Despite the upgrade header in the handshake the handshake is not actually HTTP. According to RFC6455 it is a tiny bit of text conforming to the syntax of RFC2616, which is basically just: lines separated by CRLF, terminated by two CRLFs, and headers separated from values with a colon. Really its just RFC822 according to RFC2616.

    This is not challenging.

    I take it this article is written by a JavaScript framework junkie that cannot program, because there is so much in the article that is just wrong.

    EDITED: because people get sad.

    skrebbel(3604) 6 days ago [-]

    You're very confrontational yet your post doesn't really refuse the author's main points.

    What the author means with 'transactional' is that WebSockets have no built-in request-response mechanism, where you can tell which response belongs to which request. It's a weird word choice, but alas.

    I do agree that the bit about 'handshakes are hard' feels a bit ill-advised btw, but it's not the core argument nor the core idea of this post. The core idea is 'do request-response via HTTP, and then use some sort of single-direction stream (maybe over WS, doesn't matter) to keep client state in sync'. This is a pretty good idea regardless of how well or how badly you know the WebSocket RFCs by heart.

    (I say this as someone who built a request-response protocol on top of websockets and finds it to work pretty well)

    socketcluster(10000) 6 days ago [-]

    The problem with HTTP2 is that the server-push aspect was tacked on top of an existing protocol as an afterthought. Also, because HTTP is a resource transfer protocol, it adds a whole bunch of overheads like request and response headings which aren't always necessary but add to processing time. The primary purpose of HTTP2 was to allow servers to preemptively push files/resources to clients to avoid round-trip latency; to reduce the reliance on script bundles.

    WebSockets is a simpler protocol built from the ground up for bidirectional communication. It provides a lot more control over the flow of data as everything passes over a single connection which has a single lifecycle. It makes it a lot easier to manage state and to recover cleanly from a lost connection when you only have one logical connection. It makes it easier to process messages in a specific order and to do serial processing of messages. Having just one connection also greatly simplifies things in terms of authentication and access control.

    I considered the possibility of switching the transport to HTTP2 for https://socketcluster.io/ years ago, but it's a fundamentally more complex protocol which adds unnecessary overheads and introduces new security challenges so it wasn't worth it.

    koakuma-chan(10000) 6 days ago [-]

    How can server push be a problem with HTTP/2 if nobody supports server push? It's dead. And what about multiplexing and header compression? Not worth it?

    mountainriver(10000) 6 days ago [-]

    Agree after banging my head against http2 for years, I now really enjoy how simple websockets are and their universal support

    tsimionescu(10000) 6 days ago [-]

    Server push is dead though, SSE is a different idea with completely different semantics (and tradeoffs).

    alt227(10000) 6 days ago [-]

    > The primary purpose of HTTP2 was to allow servers to preemptively push files/resources to clients to avoid round-trip latency; to reduce the reliance on script bundles.

    The primary purpose for HTTP2 was to allow multiple simultaneous asynchoronous http calls, which is a massive loading performance boost for most websites. Server push was very much a tacked on afterthought.

    aseipp(3479) 5 days ago [-]

    > The primary purpose of HTTP2 was to allow servers to preemptively push files/resources to clients to avoid round-trip latency; to reduce the reliance on script bundles.

    No, it was not. The primary goal of HTTP/2 was to get over traditional connection limits through connection multiplexing because browsers treat TCP connections as an extremely scarce resource. Multiplexing massively improves the ability to issue many asynchronous calls, which are very common -- and H2 went on to make the traditional HTTP stack more efficient across the board (i.e. header compression.) Some of the original HTTP/2 demo sites that popped up after Google first supported it in Chrome were of loading many images over HTTP/1 vs HTTP/2, which is very common. In one case of my own (fetching lots of small < 1kb files recursively from S3, outside the browser) HTTP/2 was like a 100x performance boost over HTTP/1 or something.

    You're correct Server Push was tacked on and known to be flawed very early on, and it took a while before everyone pulled the plug on it, but people fixated on it because it just seemed really cool, from what I can tell. But it was never the lynchpin of the thing, just a (failed and experimental) boondoggle.

    collingreen(10000) 6 days ago [-]

    Oof, what a headline to be top of hn the day after you implement websockets into a project.

    sampullman(10000) 6 days ago [-]

    Websockets work great, don't worry too much about it.

    bonestamp2(10000) 6 days ago [-]

    We've had a production app with them for over 10 years and it's generally great. The only thing to be aware of is this Chrome bug:

    https://issuetracker.google.com/issues/362210027?pli=1

    You can add a recurring ping/pong between the client/server so you can know with some recency that the connection has been lost. You shouldn't have to do that, but you probably want to until this bug is fixed.

    jFriedensreich(3625) 6 days ago [-]

    I don't know why the topic of websockets is so weird. 80% of the industry seem to have this skewed idealised perception of websockets as the next frontier of their web development career and cannot wait to use them for anything remotely connected to streaming/ realtime use cases. When pointing out the nuances and that websockets should actually be avoided for anything where they are not absolutely needed without alternatives people get defensive and offended, killing every healthy discussion about realistic tradeoffs for a solution. Websockets have a huge number of downsides especially losing many of the niceties and simplicity of http tooling, reasonability, knowledge and operations of http. As many here pointed, the goto solution for streaming server changes is h2 / h3 and SSE. Everything that can be accomplished in the other direction with batching and landing in the ballpark of max 0.5req/s per client does NOT need websockets.

    austin-cheney(10000) 6 days ago [-]

    There is no reason to avoid WebSockets. This is a conclusion people come to because they are familiar with HTTP round trips and cannot imagine anything different.

    There are no nuances to understand. It's as simple as fire and forget.

    The only downside to WebSockets is that they are session oriented. Conversely, compared to WebSockets the only upside to HTTP is that its sessionless.

    efortis(10000) 6 days ago [-]

    You can also use long polling, which keeps alive a connection so the server can respond immediately when there's new data. For example:

    Server

      const LONG_POLL_SERVER_TIMEOUT = 8_000
      function longPollHandler(req, response) {
        // e.g. client can be out of sync if the browser tab was hidden while a new event was triggered
        const clientIsOutOfSync = parseInt(req.headers.last_received_event, 10) !== myEvents.count
        if (clientIsOutOfSync) {
          sendJSON(response, myEvents.count)
          return
        }
        function onMyEvent() {
          myEvents.unsubscribe(onMyEvent)
          sendJSON(response, myEvents.count)
        }
        response.setTimeout(LONG_POLL_SERVER_TIMEOUT, onMyEvent)
        req.on('error', () => {
          myEvents.unsubscribe(onMyEvent)
          response.destroy()
        })
        myEvents.subscribe(onMyEvent)
      }
    
    Client (polls when tab is visible)

      pollMyEvents()
      document.addEventListener('visibilitychange', () => {
        if (!document.hidden)
          pollMyEvents()
      })
      pollMyEvents.isPolling = false
      pollMyEvents.oldCount = 0
      async function pollMyEvents() {
        if (pollMyEvents.isPolling || document.hidden)
          return
        try {
          pollMyEvents.isPolling = true
          const response = await fetch('/api/my-events', {
            signal: AbortSignal.timeout(LONG_POLL_SERVER_TIMEOUT + 1000),
            headers: { last_received_event: pollMyEvents.oldCount }
          })
          if (response.ok) {
            const nMyEvents = await response.json()
            if (pollMyEvents.oldCount !== nMyEvents) { // because it could be < or >
              pollMyEvents.oldCount = nMyEvents
              setUIState('eventsCount', nMyEvents)
            }
            pollMyEvents.isPolling = false
            pollMyEvents()
          }
          else
            throw response.status
        }
        catch (_) {
          pollMyEvents.isPolling = false
          setTimeout(pollMyEvents, 5000)
        }
      }
    
    Working example at Mockaton: https://github.com/ericfortis/mockaton/blob/6b7f8eb5fe9d3baf...
    hattmall(10000) 6 days ago [-]

    Yep, have used long polling with no downsides for ~20 years. 95% of the time I see web sockets it's unnecessary.

    lxgr(10000) 6 days ago [-]

    > We can't reliably say "the next message" received on the stream is the result of the previous command since the server could have sent any number of messages in between now and then.

    Doing so is a protocol decision though, isn't it?

    If the protocol specifies that the server either clearly identifies responses as such, or only ever sends responses, and further doesn't send responses out of order, I don't see any difference to pipelined HTTP: The client just has to count, nothing more. (Then again, if that's the use case, long-lived HTTP connections would do the trick just as well.)

    scheme271(10000) 6 days ago [-]

    What happens if a message somehow gets lost? Dropped packets, error, etc? Or is that completely precluded by using http streaming?

    suzzer99(3590) 6 days ago [-]

    Me: For this POC you've given me, I will do an old-fashioned HTTP form submit, no need for anything else.

    Architect: But it must have websockets!

    Me: Literally nothing in this POC needs XHR, much less websockets. It's a sequential buy flow with nothing else going on.

    Architect: But it has to have websockets, I put them on the slide!

    (Ok he didn't say the part about putting it on the slide, but it was pretty obvious that's what happened. Ultimately I caved of course and gave him completely unnecessary websockets.)

    ticoombs(10000) 6 days ago [-]

    I always try and push back on those beliefs, about reasonings why they believe it will be faster or more efficient than some other solution.

    I've found , if you could type cast those people, they would be a tech architect who only uses 'web scale' items. (Relevant link: https://www.youtube.com/watch?v=5GpOfwbFRcs )

    kigiri(10000) 6 days ago [-]

    My strategy for this kind of situation is to avoid direct rejection. Instead of saying stuff like 'it's unnescessary' or 'you are wrong', I push for trying first without.

    I would say:

    > Once we have a working MVP without websockets we can talk again to think about using websocket.

    Most times, once something is working, they then stop to care, or we have other priorities then.

    0xbadcafebee(3056) 6 days ago [-]

    I just realized that modern web applications are a group form of procrastination. Procrastination is a complex thing. But essentially, it's putting something off because of some perceived pain, even though the thing may be important or even inevitable, and eventually the procrastination leads to negative outcomes.

    Web applications were created because people were averse to creating native applications, for fear of the pain involved with creating and distributing native applications. They were so averse to this perceived pain that they've done incredibly complex, even bizarre things, just so they don't have to leave the web browser. WebSockets are one of those things: taking a stateless client-server protocol (HTTP) and literally forcing it to turn into an entirely new protocol (WebSockets) just so people could continue to do things in a web browser that would have been easy in a native application (bidirectional stateful sockets, aka a tcp connection).

    I suppose this is a normal human thing. Like how we created cars to essentially have a horseless buggy. Then we created paved roads to make that work easier. Then we built cities around paved roads to keep using the cars. Then we built air-scrubbers into the cars and changed the fuel formula when we realized we were poisoning everyone. Then we built electric cars (again!) to try to keep using the cars without all the internal combustion issues. Then we built self-driving cars because it would be easier than expanding regional or national public transportation.

    We keep doing the easy thing, to avoid the thing we know we should be doing. And avoiding it just becomes a bigger pain in the ass.

    bonestamp2(10000) 6 days ago [-]

    I agree with a lot of that. But, it's a lot easier to get someone to try your web app than install a native app. It's also easier to get the IT department to allow an enterprise web app than install a native app. Web apps do have some advantages over native apps.

    crabmusket(10000) 6 days ago [-]

    You left out the part where you explain why native apps are so much better for users and developers than web apps?

    I can't tell why you think WebSockets are so bizarre.

    flomo(10000) 6 days ago [-]

    > bidirectional stateful sockets, aka a tcp connection

    Which is not 'easy' to do over the internet, so the native app folks ended-up using HTTP anyway. (Plus they invented things like SOAP.)

    gabesullice(3339) 6 days ago [-]

    This feels ill advised and I don't believe that HTTP streaming was designed with this pattern in mind

    Perhaps I'm wrong, but I believe HTTP streaming is for chunking large blobs. I worry that if you use this pattern and treat streaming like a pub/sub mechanism, you'll regret it. HTTP intermediaries don't expect this traffic pattern (e.g., NGINX, CloudFlare, etc.). And I suspect every time your WiFi connection drops while the stream is open, the fetch API will raise an error as if the request failed.

    However, I agree you probably don't need WebSockets for many of the ways they're used—server-sent events are a simpler solution for many situations where people reach for WebSockets... It's a shame SSEs never received the same fanfare.

    hobofan(10000) 6 days ago [-]

    With the current AI/LLM wave SSE have received a lot of attention again, and most LLM chat frontends use them. At least from my perception as a result of this, support for SSEs in major HTTP server frameworks has improved a lot in the last few years.

    It is a bit of a shame though, that in order to do most useful things with SSEs you have to resort to doing non-spec-compliant things (e.g. send initial payload with POST).

    skrebbel(3604) 6 days ago [-]

    > I don't believe that HTTP streaming was designed with this pattern in mind

    > server-sent events are a simpler solution

    Fwiw Server-Sent Events are a protocol on top of HTTP Streaming.

    In fact I'm somewhat surprised that the article doesn't mention it, instead rolling their own SSE alternative that looks (to my non-expert eyes) like a lower level version of the same thing. It seems a bit weird to me to use chunks as a package boundary, I'd worry that that has weird edge cases (eg won't large responses be split into multiple chunks?)

    osigurdson(10000) 6 days ago [-]

    The issue I have with SSE and what is being proposed in this article (which is very similar), is the very long lived connection.

    OpenAI uses SSE for callbacks. That works fine for chat and other 'medium' duration interactions but when it comes to fine tuning (which can take a very long time), SSE always breaks and requires client side retries to get it to work.

    So, why not instead use something like long polling + http streaming (a slight tweak on SSE). Here is the idea:

    1) Make a standard GET call /api/v1/events (using standard auth, etc)

    2) If anything is in the buffer / queue return it immediately

    3) Stream any new events for up to 60s. Each event has a sequence id (similar to the article). Include keep alive messages at 10s intervals if there are no messages.

    4) After 60s close the connection - gracefully ending the interaction on the client

    5) Client makes another GET request using the last received sequence

    What I like about this is it is very simple to understand (like SSE - it basically is SSE), has low latency, is just a standard GET with standard auth and works regardless of how load balancers, etc., are configured. Of course, there will be errors from time to time, but dealing with timeouts / errors will not be the norm.

    runeks(3352) 6 days ago [-]

    > Perhaps I'm wrong, but I believe HTTP streaming is for chunking large blobs.

    You are wrong in the case of Chrome and Firefox. I have tried it and streaming e.g. unordered list elements are displayed instantly.

    But for Safari, 'text/html' streaming happens in 512 byte chunks[1].

    [1] https://bugs.webkit.org/show_bug.cgi?id=265386

    andersmurphy(10000) 6 days ago [-]

    You don't need websockets SSE works fine for realtime collaborative apps.

    Websockets sound great on paper. But, operationally they are a nightmare. I have had the misfortune of having to use them at scale (the author of Datastar had a similar experience). To list some of the challenges:

    - firewalls and proxies, blocked ports

    - unlimited connections non multiplexed (so bugs lead to ddos)

    - load balancing nightmare

    - no compression.

    - no automatic handling of disconnect/reconnect.

    - no cross site hijacking protection

    - Worse tooling (you can inspect SSE in the browser).

    - Nukes mobile battery because it hammers the duplex antenna.

    You can fix some of these problems with websockets, but these fixes mostly boil down to sending more data... to send more data... to get you back to your own implementation of HTTP.

    SSE on the other hand, by virtue of being regular HTTP, work out of the box with, headers, multiplexing, compression, disconnect/reconnect handling, h2/h3, etc.

    If SSE is not performant enough for you then you should probably be rolling your own protocol on UDP rather than using websockets. Or wait until WebTransport is supported in Safari (any day now ).

    Here's the article with a real time multiplayer Game of Life that's using SSE and compression for multiplayer.

    https://example.andersmurphy.com

    It's doing a lot of other dumb stuff explained a bit more here, but the point is you really really don't need websockets (and operationally you really don't want them):

    https://andersmurphy.com/2025/04/07/clojure-realtime-collabo...

    EarthLaunch(10000) 6 days ago [-]

    Useful take, thanks for mentioning specifics. Some of these I wasn't aware of.

    - What makes load balancing easier with SSE? I imagine that balancing reconnects would work similar to WS.

    - Compression might be a disadvantage for binary data, which WS specializes in.

    - Browser inspection of SSE does sound amazing.

    - Mobile duplex antenna is way outside my wheelhouse, sounds interesting.

    Can you see any situation in which websockets would be advantageous? I know that SSE has some gotchas itself, such as limited connections (6) per browser. I also wonder about the nature of memory and CPU usage for serving many clients on WS vs SSE.

    I have a browser game (few players) using vanilla WS.

    realharo(10000) 6 days ago [-]

    What do you mean by 'inspect in browser'? All major browsers' devtools have supported WebSocket inspecting for many years.

    Many of the other issues mentioned are also trivial to solve (reconnects, cross-origin protection).

    Also, doesn't WebTransport have many of the same issues? (e.g. with proxies and firewalls). And do you have any data for the mobile battery claim? (assuming this is for an application in foreground with the screen on)

    Voultapher(10000) 6 days ago [-]

    Having deployed WebSockets into production, I came to regret that over the next years. Be it ngnix terminating connections after 4/8 hours, browsers not reconnecting after sleep and other issues, I am of the opinion that WebSockets and other forms of long standing connections should be avoided if possible.

    bonestamp2(10000) 6 days ago [-]

    Not to mention, some major parts of the websocket API have been broken in Google Chrome for over two years now.

    Chrome no longer fires Close or Error events when a websocket disconnects (well, at least not when they happen, they get fired about 10 minutes later!). So, your application won't know for 10 minutes that the connection has been severed (unless the internet connection is also lost, but that isn't always the case when a websocket is disconnected).

    Here's the chrome bug:

    https://issuetracker.google.com/issues/362210027?pli=1

    From that bug report it looks like the Chrome bug is less than a year old, but the Chrome bug is originally mentioned here in April 2023 for a similar bug in iOS (the iOS bug has been resolved):

    https://stackoverflow.com/questions/75869629/ios-websocket-c...

    I kind of suspect Chrome is actually doing this intentionally. I believe they do this so a tab can recover from background sleep without firing a websocket close event. That's helpful in some cases, but it's a disaster in other cases, and it doesn't matter either way... it breaks the specification for how websockets are expected to work. WebSockets should always fire Close and Error events immediately when they occur.

    Sammi(10000) 5 days ago [-]

    If you want to use websockets, then you are most definitely going to need some library that wraps the websocket, because websockets themselves are very simple and don't do things like reconnect on their own.

    This one is pretty simple and pretty great: https://github.com/lukeed/sockette

    I did my own which provides rpc functionality and type safety: https://github.com/samal-rasmussen/smolrpc

    dontlaugh(10000) 5 days ago [-]

    Even load balancers force you to have a frequent heartbeat all the way to the client for each connection.





    Historical Discussions: Trump exempts phones, computers, chips from 'reciprocal' tariffs (April 12, 2025: 406 points)

    (406) Trump exempts phones, computers, chips from 'reciprocal' tariffs

    406 points 6 days ago by tosh in 3rd position

    www.bloomberg.com | | comments | anchor

    Why did this happen?

    Please make sure your browser supports JavaScript and cookies and that you are not blocking them from loading. For more information you can review our Terms of Service and Cookie Policy.

    Need Help?

    For inquiries related to this message please contact our support team and provide the reference ID below.

    Block reference ID:7220d670-1c47-11f0-84be-572f4bf5c1bd

    Get the most important global markets news at your fingertips with a Bloomberg.com subscription.

    SUBSCRIBE NOW



    All Comments: [-] | anchor

    pcurve(10000) 6 days ago [-]

    Not full exemption. They're still subject to the 20% tariff (instead of the ridiculous 145%) so Trump can save his face.

    CapsAdmin(10000) 6 days ago [-]

    I was trying to find out of this is still the case.

    How did you reach that conclusion?

    giarc(10000) 6 days ago [-]

    Smartphones getting exemptions? Didn't the administration talk about how American's would be tightening screws on iPhones as they brought back these jobs? I'm starting to think they don't know what they are doing.... /s

    grandempire(10000) 6 days ago [-]

    Are the tariffs good or bad?

    perihelions(137) 6 days ago [-]

    This reads to me as 'we're doubling-down on 145%+ tariffs for everyone else'.

    This is getting frighteningly close to a Russian-style economy. As in, a handful of powerful, connected 'insiders' will be allowed to operate businesses, and will dominate... while everyone else gets wiped out, by acts of government. The furthest system possible from the free-market paradigm that built the American economy as it stands today.

    Russia is not a prosperous nation.

    hackernewds(10000) 6 days ago [-]

    It opens up avenues to all sorts off oligarchy style bribery and lack of market competition. ultimately, the country will be looted, since the most successful businesses will not thrive on its merits

    jader201(391) 6 days ago [-]

    > a handful of powerful, connected 'insiders' will be allowed to operate businesses, and will dominate... while everyone else gets wiped out, by acts of government

    Note that this is not an exemption for companies, but an exemption for goods:

    > A new list of goods to be exempted from the latest round of tariffs on U.S. importers was released, and it includes smartphones, PCs, servers, and other technology goods, many of which are assembled in China.

    nabla9(144) 6 days ago [-]

    This reeks 'pay to play' very typical for banana republics.

    Donations to presidential inauguration fund to get access to the president was already tradition in the US. Trump government just exploits it without shame.

    bitsage(3576) 6 days ago [-]

    The prevailing school of economic thought in America, until Nixon, is actually what Trump idealizes. Protectionism from outside "threats", on the basis of security and sufficiency, and a loosely regulated internal market. In comparison, Russia has a lot of regulatory capture and straight up corruption that stifles the internal market.

    Spooky23(3545) 6 days ago [-]

    We're building a hybrid of Italian Fascist and a Argentinian Peronist like state.

    The desire for transactional wins and power overshadows all. Trump will unironically ally himself with a turd like Elon, or a turd like the UAW dude who glazed him on 'Liberation Day'. The state control of business is missing... perhaps we'll see that develop with Tesla.

    It's a weird movement, because the baseline assumption is that the country is ruined. So any marginal win is celebrated, any loss is 'priced in' politically.

    grandempire(10000) 6 days ago [-]

    I didn't know HN was coming around to how regulation and bureaucracy are anti-competitive.

    xbmcuser(579) 6 days ago [-]

    The US economy was not built on a free-market. US private capitalists have been built on a free market; now that their profits are under attack because they are being outcompeted by China, so they are running away with the ball. American economy real growth, where most white Americans gained wealth, came after World War II, where it was government led and controlled. It was the same for Europe, where they had to rebuild all that was destroyed after the war. It was all mostly government controlled and financed.

    The problem today is that US and European capitalists are in power and do not want to admit that the Chinese economic model of government-controlled economic direction, though not perfect, would work better and help all the world's people rather than the select few. As China becomes the dominant economy, the rest of the world has to follow to stay competitive. So these are the death knell of a dying economic and government system. The US had the chance to bring real change for the people with Bernie Sanders, but that was scuttled by the capitalist non-democratic forces, allowing for the rise of Trump. US citizens have been hoodwinked by linking socialist thought, where caring about your fellow man is undemocratic, i.e., socialism.

    01100011(10000) 6 days ago [-]

    No. This reads as capitulation by Trump who is now finding out his long held, half-baked economic theories are wrong. Trump got spanked by the bond market and realized how weak his position was. He can't walk it all back overnight without appearing even weaker than he already is. He's going to slowly roll back most consequential tariffs to try to escape blame for damaging the economy.

    aswanson(10000) 6 days ago [-]

    Exactly. I hope our government can survive the next 4 years for criminal investigations into this era. We can't become Russia.

    dev_l1x_be(10000) 6 days ago [-]

    I thought this is what happened during covid already. We wiped out a large number of small businesses.

    https://www.ons.gov.uk/aboutus/transparencyandgovernance/fre...

    amelius(2195) 6 days ago [-]

    Speaking of which, what are the tariffs for Russia?

    eej71(10000) 6 days ago [-]

    There will be a new aristocracy. The aristocracy of pull. #iykyk

    ModernMech(10000) 6 days ago [-]

    It's not the furthest thing from the American economy as it stands today, but the inevitable conclusion of the 'free-market' capitalism we've been practicing over the past number of decades.

    Donald Trump is the poster child of American capitalism gone right, he's an aspiration for wealthy capitalists across the nation. Generally people have felt that if only we could get an American businessman like Trump in charge of the country, running things the way a true capitalist would (as opposed to how those dirty awful communists/socialists tend to run things), then the country would start going right for a change.

    Well now we have that, and in short order the country has Russian-style crony capitalism from the top. This would not happen in a country that actually cares about free markets. But we don't. Everything we consume is owned by like 10 companies. If you want to get a start in the market you have to get access to capital they control, or meet regulations they set, because they've captured the government regulators through bribes.

    Trump is just taking this whole system of favoritism we've been living under and making it official. I for one am for it because honestly people pretending there is no corruption is worse than the corruption at this point.

    g0db1t(10000) 6 days ago [-]

    * stood yesterday

    ghusto(10000) 6 days ago [-]

    Slightly off-topic, but is the result of the USA tariff 'trade-war' mean that we get to trade at a discount with China in Europe? What I mean is, since it's cheaper for China to trade with us in Europe now, does that mean we gain some bargaining power?

    mrweasel(10000) 6 days ago [-]

    One danger is that all the cheap Chinese crap will be redirected at Europe. It does have to upside of cheaper goods for Europe overall, which is fine for everything we don't make and which is overall adding value. The risk is that we also get all cheap plastic junk, unless EU regulations can keep it out environmental concerns.

    seafoamteal(10000) 6 days ago [-]

    Has the Proton CEO acknowledged just how farcically off base he was when he said the GOP was the party of small businesses?

    wwweston(10000) 6 days ago [-]

    Demand for Proton services is probably up.

    9283409232(10000) 6 days ago [-]

    I was thinking about this yesterday and how stupid a comment it was to make.

    techpineapple(10000) 5 days ago [-]

    The thing that's really been getting to me, is that, I'm liberal, not pro-Trump, but the MAGA American heartland story has been really getting to me. I want to see small business, manufacturing, small town American succeed. And there's some part of me that thought maybe Trump, as much as I don't like him, is the thing that is needed to make that happen, but man it seems like he's really fucking over the people who supported him the most.

    techpineapple(10000) 6 days ago [-]

    Wasn't Howard Lutnick on TV recently explicitly saying they wanted to bring iPhone assembly here? How is one to understand the union of these two perspectives?

    https://fortune.com/2025/04/07/howard-lutnick-iphones-americ...

    ceejayoz(900) 6 days ago [-]

    > How is one to understand the union of these two perspectives?

    Only one perspective actually matters right now, and it's notoriously mercurial.

    Administration officials often have about as much knowledge of what's to come as we do.

    sidvit(10000) 6 days ago [-]

    Howard Lutnick got pulled from the TV sidelines over stuff like this apparently. Bessent is running the show now which is probably why they're actually responding to the bond market punching them in the face this week

    yodsanklai(10000) 6 days ago [-]

    Who would have guessed.

    BearOso(10000) 6 days ago [-]

    Yeah, they're really exemplifying the 'shoot first and ask questions later' model.

    vdupras(10000) 6 days ago [-]

    Nothing means anything anymore. This of course will change completely on monday, then again on tuesday. Of course in the spirit of insider plundering. This circus will go on until we hear the magic words 'the chocolate rations have been increased by 20g'.

    tines(10000) 5 days ago [-]

    Things started to make more sense to me once I realized that human beings hate freedom and love tyranny. Once you accept this, it all falls in place. Deporting citizens to foreign prisons? Sounds great. Incoherent foreign and economic policy? Love it. Freedom of the press? Who needs it! Destruction of democracy? Own the libs! Legalize bribery of foreign officials? Even the playing field! And finally, words don't need to mean anything because they are simply evocations intended to stir up certain emotions. They are more akin to a hunter's duck call. The hunter doesn't speak duck and doesn't care whether that sounds he's making have any meaning, he simply makes noise and looks for a result. Not getting the desired result? Just change the noise a little.

    This is why democracy will eventually fail and autocracy will rise in its place. And no one will ever learn.

    ajross(10000) 6 days ago [-]

    Ugh. Note that this is a capitulation. China's retaliatory import tariff rate remains in effect, and they get to decide which industries to relax, if any. The net effect is that if you're in one of the handful of businesses that export to China, the Trump administration threw you under the proverbial bus.

    vdupras(10000) 6 days ago [-]

    While we're at it, China might as well impose a 145% export tax on phones, computers and chips, just to taunt.

    kevin_thibedeau(10000) 6 days ago [-]

    Seems a bit anti-business to have an unequal playing field just for the star-bellied sneetches. Also silly that those with the biggest piles of capital are getting exemptions when the whole purpose of this exercise is to spur local investment in manufacturing. If anything, small businesses below some threshold of revenue/staff should be getting exemptions.

    croes(347) 6 days ago [-]

    That's how oligarchies work.

    bogwog(10000) 6 days ago [-]

    Wdym? It's entirely merit-based, with the 'merit' being $1 million dollar totally-not-a-bribe dinner with the president: https://www.tomshardware.com/tech-industry/artificial-intell...

    FranzFerdiNaN(10000) 6 days ago [-]

    America has finally become the banana republic it has accused others of being.

    integricho(10000) 6 days ago [-]

    not just a bit, this is so unfair and smells of corruption, only the richest companies getting exemptions, give me a break. this is what organized crime looks like.

    victor106(3603) 6 days ago [-]

    You are right.

    Do you think all the tech CEO's attended his inauguration for nothing?

    I never imagined I would see such public corruption in any western country. I am saying this as someone who supported some the current administrations agenda

    buzzerbetrayed(10000) 6 days ago [-]

    Companies aren't getting exemptions. The product categories are. The headline is misleading. And while you might already be aware of that, most the people responding to you clearly aren't.

    jm4(10000) 6 days ago [-]

    It's total bullshit. Part of my business involves direct import and that's now impacted by tariffs. The cherry on top is that what I import is not and cannot be produced in the U.S. I source a number of other products from suppliers in the U.S. and literally every single one of them is impacted by tariffs somehow, whether it's ingredients, packaging, etc. that comes from somewhere else. Some of my materials originate in the Dominican Republic, which is now subject to a 10% tariff, although it's more common for others in my industry to source those same materials from China. Now that China is prohibitively expensive, they will be quickly pivoting to other suppliers, which will further drive up prices. Supply chains are in chaos right now.

    It burns me up that massive companies like Apple and Nvidia get a free pass while everyone else is subject to the most brain dead economic policy anyone alive today has ever lived through.

    kgwgk(248) 6 days ago [-]

    'Star-bellied sneetches' maybe, but it's not about 'biggest piles of capital' as much as about importing things with the following codes:

    8471 8473.30 8486 8517.13.00 8517.62.00 8523.51.00 8524 8528.52.00 8541.10.00 8541.21.00 8541.29.00 8541.30.00 8541.49.10 8541.49.70 8541.49.80 8541.49.95 8541.51.00 8541.59.00 8541.90.00 8542

    d0gsg0w00f(10000) 6 days ago [-]

    I'm reaching here but....

    Apple has already 'committed' to investing in US manufacturing. Also, many companies have committed to AI investments on US soil which would be heavily NVIDIA dependent. Could be a justification for the exemption.

    dyauspitr(10000) 6 days ago [-]

    This is probably the most corrupt, pay to play government in the history of the US. Merit has no place here.

    wnc3141(10000) 5 days ago [-]

    Trump is pro business in the same way Putin is. It's not good to be in the Russian oil business, unless you are Putin's chosen friend.

    jmclnx(10000) 6 days ago [-]

    I cannot read it, but didn't China restrict the export of some tech related items as part of their tariffs ?

    I remember hearing those items are need to make assemble some components needed for some boards.

    I hope Wall Street is still hammering this admin. on why these tariffs are bad.

    timbit42(10000) 5 days ago [-]

    You're thinking of rare elements.

    chvid(10000) 6 days ago [-]

    What imports of size from China are then under full tariffs?

    Seems silly just to mess up a few toy importers.

    SonOfKyuss(3356) 6 days ago [-]

    Auto parts come to mind. Also there are plenty of products on shelves at big box retailers like Walmart that are made in China and won't fall into the exempted categories.

    greatgib(3476) 5 days ago [-]

    Sextoys...

    relyks(10000) 5 days ago [-]

    Clothing. A lot of apparel and accessory retailers have significant portions of their items produced in China.

    t-writescode(10000) 5 days ago [-]

    Board games; medium-tier manufacturing; non-computer, intermediate parts manufacture

    yellow_lead(2832) 6 days ago [-]

    This link is better:

    https://wccftech.com/trumps-reciprocal-tariffs-have-reported...

    Or, the primary source seems to be:

    https://content.govdelivery.com/bulletins/gd/USDHSCBP-3db9e5...

    But you'd have to look up those codes to know they're for PCs, smartphones

    instagib(10000) 6 days ago [-]

    Thanks for a great free article.

    The title is sensationalism when it should be phone and computer associated parts are exempted from tariffs or something like that.

    crawsome(10000) 6 days ago [-]

    It's so painful watching this administration be forced to react to their preventable mistakes in-real-time with no repercussions.

    One thing is throwing and seeing what sticks, but at the seat of the presidency, it seems like such an antipattern for leadership. And yet, the support is unwavering. It's exhausting.

    northrup(10000) 6 days ago [-]

    oh, they'll be repercussions. We, as a nation, will be paying for this for years and years to come.

    ajross(10000) 6 days ago [-]

    Pointed it out in the other thread, but this is a capitulation. China imposed retaliatory tariffs that remain in effect! There are a handful of businesses that do indeed export to China, and the net effect here is that they've all been thrown under the bus. China gets to kill/pick/control them at will now.

    dave4420(10000) 6 days ago [-]

    How will China react to this, I wonder.

    cinbun8(3678) 6 days ago [-]

    From an outsider's perspective, it's difficult to discern any coherent U.S. strategy—assuming one even exists. One day it's a 145% tariff on China. The next, it's "Well, it's still 145%, but Apple and Nvidia are exempted because their stock prices might take a hit." Then comes a 90-day pause, adding to the confusion.

    It's not clear whether Jamieson Greer is actually steering this, or if any of it was thoroughly thought through.

    ArinaS(10000) 6 days ago [-]

    > 'assuming one even exists'

    I actually doubt it does. Everyting is just too chaotic to be a strategy.

    whalesalad(363) 6 days ago [-]

    chaos is the strategy

    _Algernon_(10000) 6 days ago [-]

    If there is a strategy it is probably dominating the news cycle with this chaotic bullshit, while they navigate towards the real goal in the shadows.

    dkrich(10000) 6 days ago [-]

    There is no plan. Talk tough, reverse under pressure, rinse repeat. Anyone surprised must not have watched season one which aired in 2019.

    jmull(10000) 6 days ago [-]

    > assuming one even exists

    Why would you assume that?

    I don't know why people keep expecting Trump to be different than what he has consistently shown us for all these years. There's no subtle plan. There's no long-term plan. He's cranking the levers immediately available to him for the drama, as he has always done.

    People around him may have ideas and plans. They can sometimes get him to agree to one of these, but it never lasts long.

    andreygrehov(1663) 6 days ago [-]

    When it comes to global impact, can you even confidently say you're being strategic? It almost feels like staying tactical is the only viable strategy - there are simply too many variables. The chances are high that any strategy you come up with is doomed to fail.

    jonplackett(10000) 6 days ago [-]

    This is the only explanation that has made sense to me so far. And it makes even more sense based on these exemptions.

    https://www.instagram.com/share/_jW_V1hwM

    This is Senator Chris Murphy explaining it's not economic policy, it's an attempt to blackmail corporations into submission by making a deal with him in return for sanctions relief.

    Keep an eye out for what Apple and nvidia might have agreed to give.

    pkulak(10000) 6 days ago [-]

    The plan is to make every country and CEO grovel at the feet of the boss to be exempted from the tariffs. I'd say it's corruption, but it's more like a protection racket.

    I wonder what these companies had to offer?

    TheSwordsman(3068) 6 days ago [-]

    As an American, I regret to inform you that you're trying to use logic to understand a situation where it seems like logic wasn't used (in terms of the economic impact). These are the same fuckwits that tried to claim a trade deficit is the same as a tariff.

    coliveira(3662) 6 days ago [-]

    That's how corruption works in a banana republic. Good things for my friends, hell for everyone else. It is the furthest you can be from free trading capitalism that the US was preaching while it was good for them.

    vFunct(10000) 6 days ago [-]

    There is no planned strategy. Planning requires learning about entire systems, and Trump isn't smart enough to do that. He can only act on things placed before him. If he sees foreigners making money by selling into the US, he has to raise tariffs on it. There is no second order, third order, or any deeper level of understanding of what's going on. It's purely superficial action, on things Trumps eyes sees, not what his brain sees. There is no brain in there that can predict what would happen if tariffs were raised. He can only raise tariffs.

    To be smart is to have systemic understanding, and Trump & the Republicans are incapable of that.

    It's exactly what happened in his first term, when he got rid of the nation's pandemic preparedness because why would anyone ever need that, right?

    throwaway48476(10000) 6 days ago [-]

    Every company that wants an exemption has to pay. It's a personal tax system.

    reaperducer(10000) 6 days ago [-]
    it's difficult to discern any coherent U.S. strategy—assuming one even exists

    The strategy is to keep everyone unsure what might come next.

    It's like in boxing. When you hit your opponent and leave them confused and uncertain what you might do next, you use that to your advantage and keep on hitting. It's how you 'win.'

    As if there are any winners here.

    ranger207(10000) 6 days ago [-]

    It's vibe governing, just like any other populist government

    stefan_(1849) 6 days ago [-]

    Import Chinese battery: 145% tariff

    Import Chinese battery inside Chinese laptop: 20% tariff

    Import Chinese battery inside Vietnamese laptop: 0% tariff

    Truly this will bring back American manufacturing!

    voisin(442) 6 days ago [-]

    The strategy is to sow fear and uncertainty to drive capital from stocks to government bonds and drive down the bond yield. Bessant is pretty clear about this. Once they get the bond yields down and refinance a lot of the short term debt into longer term debt they free up operating budget. Combine with Elon's DOGE cutting costs and Lutnick raising some capital from tariffs, and it is a pretty good strategy. I don't agree with Trump's policies generally nor am I American, but this is a good short term strategy.

    foogazi(10000) 6 days ago [-]

    > but Apple and Nvidia are exempted because their stock prices might take a hit

    They already took a hit - which they monetized by both ways

    codedokode(3471) 6 days ago [-]

    Can we use Occam's Razor and assume that nobody knows what would be the optimal tariff rates and if you don't have a reliable mathematical model the only choice you are left with is experimentation and A/B tests.

    Glyptodon(10000) 6 days ago [-]

    I'd say it's clear that none of it was thoroughly thought through at the least.

    theropost(10000) 6 days ago [-]
    jayd16(10000) 6 days ago [-]

    I think its crystal clear there is no actual plan.

    TZubiri(10000) 6 days ago [-]

    "Well, it's still 145%, but Apple and Nvidia are exempted because their stock prices might take a hit."

    That's a massive misread. You are confusing the direction of influence between secondary public stock markets and federal executive orders.

    The tariffs are supposed to strengthen self sufficiency, and discourage imports of stuff the US can do on their own.

    Chip manufacturing, (which by the way is often only the manufacturing and not the design or IP of the chips), is an exception for whatever reason, may be labour costs, but it may also be that chips are a mineral heavy and diverse product, so it's one of the few products where autarky isn't feasible or very rewarding.

    And there would be situations without exemptions where the US may have been incentivized to import the raw materials and rebuild megachip factories, of which there are only like a dozen in the world, creating a huge output inefficiency due to political reasons on two fronts.

    Exceptions are reasonable.

    rpgbr(1592) 6 days ago [-]

    The plan: What if we ran the richest, more powerful country on history as if it were a giant meme stock geared to benefit those in charge?

    joe_the_user(3127) 6 days ago [-]

    To understand this, I think you have to neither overestimate or underestimate Trump and Musk.

    Both Trump and Musk seem be to essentially ideologues, visionary tough-talkers, who have actually succeed (or appeared to succeed) to various endeavors through having underling who work to shape their bluffs into coherent plans. This works well for various as long as the delicate balance of competent handlers and loud-mouthed visionaries is maintained.

    The problem is the process of Trump winning, losing and then winning again all him to craft an organization and legal framework to put forth he vision uncorrected, unbalanced and lacking all guardrails.

    And that's where we are.

    csomar(2194) 6 days ago [-]

    It makes sense if you understand how Trump became president. He'll test something (through a tweet) to his audience and then amplify or kill it based on the response. It worked great with 50% of the US population or so; it doesn't seem to be working at all with the Chinese political elite.

    rchaud(10000) 6 days ago [-]

    It's far from the only place the policy is incoherent. They fired the top ranking officer at the US base in Greenland for having the temerity to tell their host nation 'I do not presume to understand current politics, but what I do know is the concerns of the US administration discussed by Vice-President Vance on Friday are not reflective of Pituffik Space Base.'

    https://www.bbc.com/news/articles/creq99l218do

    ineedaj0b(10000) 5 days ago [-]

    they thought it up and published a report back on it in nov 2024.

    here's the plan, you can use it to advise your investments:

    https://www.hudsonbaycapital.com/documents/FG/hudsonbay/rese...

    the media is garbage and they can't cover anything well enough to inform. but i bet clicks are up!

    davesque(2388) 5 days ago [-]

    As a thoughtful person, you've got to learn to curb your instinct to make sense out of things like this. It's a waste of calories.

    jppope(1694) 5 days ago [-]

    > it's difficult to discern any coherent U.S. strategy—assuming one even exists

    Not sure why there is a presumption that one exists or that its coherent. With even the slightest critical eye its easy enough to discern that this isn't about economic policy or trade and that the proposed 'policy' doesn't make any sense. The guy in charge of this stuff is either seeing what he can get away with, fucking with people, or building a narrative...

    that is to say what you are watching isn't 'real'.

    sagarpatil(10000) 5 days ago [-]

    Feels like they are just winging it based on public response.

    Animats(2975) 5 days ago [-]

    > It's not clear whether Jamieson Greer is actually steering this, or if any of it was thoroughly thought through.

    We know for sure that Greer isn't steering this. Greer was testifying before a congressional committee when Trump announced huge changes to tariffs on China. Greer hadn't even been told.

    raffraffraff(3241) 5 days ago [-]

    It feels like we just hired a recent graduate, who is an egotistical know-nothing, to manage our databases. And he just decided to migrate all of the DBs to the cloud in the middle of the day without testing it, or checking any metrics. Now he wants to fail some of them back and thinks that should be 'a cinch' but doesn't actually understand how anything works under the hood.

    lonelyasacloud(3157) 5 days ago [-]

    His goal is to create confusion; to 'flood the zone'.

    Him and his cronies know when that flood is coming and can profit from it.

    It's only confusing if there is any expectation that he is working for the good of anyone else.

    mppm(10000) 6 days ago [-]

    This is pretty much how I expected this to play out, at least for now. Trump acts all tough and doesn't back down publicly, but China actually doesn't back down. So what happens is that some businesses get exemptions to mitigate the impact. Then some fine print gets changed about how the rules are enforced. Like, suddenly it turns out that Kiribati is a major electronics supplier to the US :)

    End result - US economy takes a hit, China takes a smaller hit. Trade balance widens further, most likely. The rich get richer, while many small companies struggle to survive.

    jmull(10000) 6 days ago [-]

    > doesn't back down publicly

    Seems like he has been backing down publicly all week. Quickly too.

    This has been a massive catastrophe, though I suspect you're right about the end result.

    A1vis(10000) 6 days ago [-]

    The media coverage seems a bit weird to me. The primary source was released 12 hours ago, but when I did a bit of research 4 hours ago I only saw a few reports from dubious Chinese sources like this: https://www.zhitongcaijing.com/content/detail/1277768.html

    Then about 2 hours ago all major media outlets were covering it.

    joe_guy(10000) 6 days ago [-]

    You're likely seeing the effect of timezones.

    It was announced at 11pm and American news companies didn't feel it urgent enough to report before their usual morning weekend staff's shift.

    safgasCVS(10000) 5 days ago [-]

    May I propose a tinfoil hat perspective on tariffing China: America is prepping the ground for a full war with China. That's the only position that make sense to me other than the obvious 'these guys are all corrupt idiots'. I don't know which is which but at least the war perspective makes more sense to me. I believe we are at the propaganda stage where allies will be 'encouraged' into adopting similar positions and portraying China as a global threat. Nations such as India, Philippines, Taiwan, South Korea and Australia are already encouraged to act highly aggressively towards China whenever possible. Given most of those countries political elite worship America and long to send their kids to Harvard they will comply and willingly allow their countries to be used as cannon fodder to maintain Western hegemony. The sweet-talking of Russia is an attempt to recreate the Sino-Russian split during the cold war and at least ensure Russia doesn't fight alongside China in a war. None of this is related to bringing jobs back, nation building or caring one bit about blue collar workers. its an attempt to maintain the American global hegemony that China very clearly threatens. If Trump and his close supporters can get filthy rich from this then all the better.

    grey-area(180) 5 days ago [-]

    There is no grand geopolitical strategy here. Trump and his advisors really are this stupid and think that huge worldwide tariffs are a good idea. That they have kept 10% worldwide tariffs (also insane) shows they still think they are a good idea that will bring back manufacturing to the US. The damage to US soft power is irreversible unfortunately and the trust of former allies will never return. I suspect you'll find as the empire declines people no longer aspire to send their kids - would you if they might be detained for weeks in inhumane conditions and deported for having opinions or a skin colour the regime doesn't like?

    Yes China is the current rival and thus was hit hardest, but they've already had to retract a lot of tariffs days after introduction simply because they had no idea what impact it would cause on borrowing costs.

    Yes if Trump sees an opportunity to demand fealty from anyone with power or money he will take it, and enjoy it, but he genuinely thinks that is his due anyway.

    You could say they have a plan in project 2025, but that's more about destroying the US government and retaining power. If it were a functioning democracy he'd be removed after the damage he's done.

    Ylpertnodi(10000) 5 days ago [-]

    From my conversations with Europeeps...we'll side with China.

    randoomed(10000) 5 days ago [-]

    Unfortunately if this was the plan it massively backfired. By imposing a global tariff the US also hit its allies in the region. This in turn causes these allies to look for trade deals with others in the region, like China.

    We have already seen South Korea and Japan announce new trade deals with China. So the US is actually pushing away its allies in the region (which doesn't sound ideal when trying to start a war).

    cranium(3649) 5 days ago [-]

    The 145% tariff is so absurd I wouldn't be surprised to see cheap chips glued to the item to exploit the exceptions.

    'Oh yeah, that's not a shoe: it's the protective case for an ESP32 WiFi router'.

    ben_w(10000) 5 days ago [-]

    Perhaps one could say they are 'Smart shoes': https://en.wikipedia.org/wiki/File:DonAdams.jpg

    SOLAR_FIELDS(10000) 5 days ago [-]

    For those who think this is ridiculous, this happens already on a regular basis with batteries to get around the regulations and fees around shipping them. Instead of getting the battery in the mail you'll get a cheap flashlight in the mail with a battery inside it.

    xbmcuser(579) 5 days ago [-]

    The moment they put tariffs I was thinking they just supercharged smuggling and illegal border crossing with multi trillion dollar market.

    alistairSH(3420) 5 days ago [-]

    Sort of the inverse, but didn't Ford import Turkish-built Transit Connect vans with full interiors, only to strip those out upon arrival in Baltimore, as a means of skirting the Chicken Tax?

    __s(3499) 5 days ago [-]

    Nathan Fielder was ahead of things calling smoke detectors instruments: https://www.youtube.com/watch?v=3x87jemLFyo

    atomicbeanie(10000) 5 days ago [-]

    Time to just call these tariffs: sales tax. Extra money for the government on all goods imported are taxes. The rest of the complexity distracts from the basic cash flow and the inevitable results. More money spent and consumed by the government.

    otterley(3404) 5 days ago [-]

    They're worse than sales taxes, because the goods imported are subject to levies even if they're unsold and eventually destroyed.

    Loughla(10000) 5 days ago [-]

    Is nice that my family's small business is set to get absolutely crushed by tariffs at the end of the month while large tech companies are exempt. Thank goodness for America first policies. So cool. Very cool.

    iugtmkbdfil834(10000) 5 days ago [-]

    This whole thing has multiple layers of annoying to even a slightly reasonable person. Naturally, further consolidating strength of the existing major behemoths is among those as well.

    cosmicgadget(10000) 5 days ago [-]

    You should go to one of the million dollar dinners at Mar-a-Lago.

    righthand(10000) 6 days ago [-]
    SpicyLemonZest(10000) 6 days ago [-]

    The articles you've linked are about threats of 10% to 25% tariffs in the context of active trade negotiations between the US and China. Here, there's an actually imposed tariff of 145% and no talks at all as far as has been reported. It's not the same situation.

    djeastm(10000) 6 days ago [-]

    Wow you had these at-the-ready, didn't you. Thanks.

    *I've read through a few of these and it seems like perhaps Trump still thinks it's 2018/19, but China's position has only gotten stronger.

    It seems the attempt to jack up tariffs so high this time was a bluff to 'show' how strong we can be, but he miscalculated on how shaky the stock/bond markets actually currently are and the financial players know we're not in a position to go it alone.

    And China knows this and they know they can wait us out. I believe it will be considered a misstep, at best and a catastrophe at worst.

    standardUser(10000) 6 days ago [-]

    The tariffs from 8 years ago were a seemingly rational policy and were largely upheld by the Biden administration.

    These tariffs look designed to rapidly eject the US from the global economic order and hand over the reins to China. Though saying they were 'designed' at all seems extravagantly generous.

    Aurornis(10000) 5 days ago [-]

    > Why is no one highlighting how this is repeating history 8 years ago?

    Because it's not? The tariffs which are currently in effect or soon to go into effect are so far out of line with anything in modern history that there is no comparison.

    The reason everyone is panicking is because people expected more of the same as 8 years ago but instead we got something massively worse, without a hint of cohesive strategy, and that has gone into effect rapidly and on the whims of one person who can't even appear to get on the same page as his advisors.

    Everyone knows there's some element of bluffing going on, but that's also the problem; This administration knows their bluffs would be transparent this time so they decided to go extra big to make a point. This becomes a problem for all of the people and companies whose business was suddenly upended by out of control tariffs with little time to prepare (compared to the smaller tariffs everyone was preparing for)

    They're banking on the damage either not being directly noticed by their voter base, or being able to convince their voter base that the damage is actually a good thing. I'm already seeing people applaud these actions as if they were narrowly targeted at cheap Chinese goods on Amazon or fast fashion, without realizing how much of the inputs to our economy go through one of the countries with tariffs ranging from 25-145%.

    Some people are determined to adopt contrarian positions and act like they're above it all, but the people who have to deal with the consequences of this stuff (myself included) are taking a lot of damage from these supposedly no big deal negotiations. It's not being handled well. Even if they were to disappear tomorrow, a lot of damage has been done and they're hoping people like you will find a way to rationalize it away as not a big deal

    melagonster(10000) 5 days ago [-]

    Because last time the US government required alliances to participate in the trade war. Maybe it is not rational, but the US is the leader, so most countries just thought, 'Ok, if you really need it...'. But this time, the trade war is against the whole world. Everyone is confused.

    n1b0m(1263) 5 days ago [-]

    "Trump's first term would probably have seen a version of this week's debacle if he had chosen different advisers, and if he had not later been knocked off course by Covid.

    For the first two years of his first term, in 2017-18, his instincts were largely kept in check by his economic adviser Gary Cohn, a former chief operating officer at Goldman Sachs, who dampened Trump's determination to use tariffs to end trade deficits."

    https://www.theguardian.com/world/2025/apr/12/did-trump-tari...

    1oooqooq(10000) 5 days ago [-]

    the most important tidbit

    > Apple already pays tariffs on products including the Apple Watch and AirPods, but hasn't raised its prices in the United States.

    so, they fear tariffs because their price is already at the highest their products would sell? that's an interesting point most people don't understand. the tariffs were only 15% then, but still interesting to see how it played out.

    steveBK123(10000) 6 days ago [-]

    So we are exempting all the tech transfer & natsec risk items but maintaining new embargo-level tariffs on cameras, children's toys, and t-shirts.

    Makes a lot of sense if you don't think about it.

    polski-g(10000) 5 days ago [-]

    American children yearn to work in a sock factory.

    TheAlchemist(1801) 6 days ago [-]

    It's not even a week since Secretary of Commerce Lutnick was explaining how he wants to bring back millions of jobs 'screwing the little screws in iPhones' to Amercia ?

    There is really a good chance that we will develop a deep understanding of how the French Revolution happened and why they went straight to guillotines.

    kristopolous(3570) 6 days ago [-]

    They gave every strong indication of their incompetence possible - over years. A bunch of people said 'yay for incompetence' and here we are.

    These are the people who score in the bottom 20% and make up conspiracy theories on how they were right and it's the establishment who's wrong.

    Any random person waiting at a bus stop would likely have managed things better.

    lo_zamoyski(10000) 6 days ago [-]

    The idea that you could 'bring industry back' into the US with blanket tariffs is delusional and demonstrates a complete ignorance of the complexity of economic ecosystems and industrial culture. It takes a long time for sustained expertise and the needed supply chains to grow and form and mature in an economy.

    You could argue that perhaps a selective application of tariffs might help the formation of such domestic industry, but tariffs are not something to wield lightly.

    belter(63) 6 days ago [-]

    "I don't know how you can be that stupid. How do you get to be president and then you're stupid?"

      - Donald Trump (actual quote)
    stevenwoo(3570) 6 days ago [-]

    They just spouted two different justifications, jobs will come back to America, and robots will do the jobs. I guess the most generous explanation is jobs for people making robots in America by combining the two separate statements, but that's not even close to what they said.

    9283409232(10000) 6 days ago [-]

    Nothing about the tariffs make any sense. The want to use the tariffs to negotiate with countries but also say they want to use tariffs to bring back manufacturing. If you are using tariffs to negotiate then once the country gives you what you want, you have to lift the tariff thus the free market keeping manufacturing overseas. If you want to bring back manufacturing then you can't use the tariff to negotiate.

    I am genuinely at a loss at how his supporters don't understand this.

    dyauspitr(10000) 6 days ago [-]

    It's the looting of America while they use the same old racial ideologies so their supporters don't break rank even under abuse.

    senderista(10000) 6 days ago [-]

    The French Revolution didn't go "straight to guillotines", not even close.

    refurb(2851) 5 days ago [-]

    The French Revolution was against the establishment.

    I wouldn't argue Trump represents the establishment.

    Hikikomori(10000) 6 days ago [-]

    Art of the deal.

    randcraw(10000) 6 days ago [-]

    Art of the bribe, actually.

    inverted_flag(10000) 6 days ago [-]

    I've noticed that the pro-trump posters have been quiet on this site recently, pretty funny.

    fells(10000) 6 days ago [-]

    Because, in reality, they voted for his regressive cultural policies, not his regressive economic policies.

    Though in November I'm sure they were telling us how good he would be on the economic front.

    aoeusnth1(10000) 6 days ago [-]

    One of the most surprising things about this announcement is that it didn't happen during business hours where the insiders could buy call options before hand.

    dyauspitr(10000) 6 days ago [-]

    Insiders already bought call before market close on the previous day.

    owenversteeg(10000) 6 days ago [-]

    I'm not seeing anyone discuss this here, so I figured I'd raise an important point: this style of tariffs is crushing for US manufacturing. While a universal tariff with no exceptions incentivizes domestic manufacturing, a selective tariff with specific industry exceptions is absolute poison.

    You might think, as the authors of this exemption did, "well then we will exempt computer parts." Then people will simply import the parts. But if you manufacture those parts in the US, you are suddenly at a massive disadvantage. Your computer parts factory likely runs using a large amount of imported raw materials, imported machines, and imported tooling, and there are no tariff exemptions for those broad categories... so you're screwed. Oftentimes there is no reasonable domestic substitute. You will go out of business in favor of someone importing the parts, which now happens tariff-free under an exemption. That's why, generally speaking, tariff exemptions are deadly to domestic manufacturing.

    jopsen(10000) 6 days ago [-]

    Even universal tariffs with no exceptions is a problem.

    Many things cross US/Canada/Mexico border in the process being manufactured. And tariffs will stack up.

    Many advanced products (tech/chip, etc) are not entirely made in any single place. Some stuff is imported, and some is exported again, and tariffing the world, will also make the world tariff you.

    I think this is all around bad. Best case scenario the US has elected a president who decided to burn all political capital, alliances and credibility in search of a slightly better deal.

    Doing this sort maximum pressure economic extortion style policies, *might* getter you a slightly better deal. But at what cost?

    Can EU countries buy US military equipment, when it turns out that the US will withhold support for equipment we've bought and paid for, in order to pressure a democracy, fighting for its existence, into surrender.

    Trump may get a win in the headlines, because everyone thinks he'll go away if he get a win.

    jijijijij(10000) 6 days ago [-]

    > Your computer parts factory likely runs using a large amount of imported raw materials, imported machines, and imported tooling, and there are no tariff exemptions for those broad categories... so you're screwed.

    All the planning charts and demolition orders have been on display at your local 24/7 news feed for more than eight years, so there's no point in acting surprised about it. You've had plenty of time to lodge any bribe worth the president's time and it's far too late to start making a fuss about it now. Oh, for heaven's sake, Americans, President Trump did a crypto scam on his supporters before being sworn in, you know. I'm sorry, but if you can't be bothered to take an interest in local affairs, that's your own lookout.

    I've no sympathy at all.

    quasse(10000) 5 days ago [-]

    Universal tariffs with no exception don't even incentivize domestic manufacturing when it cuts local manufacturers off from an outside market that's bigger than the domestic one.

    My company manufactures equipment in North America, with the most expensive input coming domestically from Ohio. Guess what though? Retaliatory tariffs from the global community means that the most rational course of action is now to move that manufacturing *out of the US* so that we can sell to the global market without penalty.

    Sorry Ohio, but Mexico is currently *not* engaged in a trade war with Canada and half the EU so the rational decision for a company who wants to sell in those markets is to divest from the US.

    numpad0(10000) 5 days ago [-]

    People don't want incentivization of American domestic manufacturing. That's where the fundamental disagreement is, after all. People don't have confidence in American products built on US soil by upper middle class Americans. It's going to take long to (re?)build trust to reverse that.

    Renaud(3541) 5 days ago [-]

    Universal import taxes on everything make no sense.

    If you want to protect strategic production, you apply selective tariffs to support that local production while ensuring it can ramp up and import what it needs until it becomes self-sufficient.

    Most countries, the US included, have used selective tariffs for this purpose. Applying a blanket tax on every type of import just increases inflation, as you can't possibly manufacture everything locally. For many products—especially cheap ones that were outsourced to China—there's no way to produce them cheaply enough for your internal market to absorb all production.

    And you can't export them either, because their higher production cost makes them uncompetitive compared to cheaper alternatives from low-cost countries.

    The secondary effects of import taxes are wide-ranging: they help when applied selectively and carefully; they don't when applied capriciously and without thought.

    The mere fact that high taxes were slapped on phone imports so 'phones could be made in the US,' only to backtrack mere days later, demonstrates that this is either the work of an insanely bright economist nobody understands, the scheme of a grifter aiming to benefit personally, or the capriciousness of a borderline dementia patient who cannot act rationally.

    energy123(10000) 5 days ago [-]

    It's the opposite! A universal tariff is a tariff on all inputs that manufacturers need to be competitive. How will Ford or Tesla ever be competitive if all their inputs are 24% more expensive than Toyota's inputs?

    Autarky doesn't work. Juche doesn't work. Comparative advantage works, both theoretically and in practice if we study economic history.

    beloch(10000) 5 days ago [-]

    Factories, tooling, machinery, etc. must be amortized over a market and production run. If you're making toilet paper, the cost is relatively low and the market is huge. The TP you make today will still be good TP in a decade. No one toilet paper factory can serve the world, so you'll need many of them in many markets. The inputs can be found within the U.S.. Why not build one in the U.S.?

    A factory that produces a specific model of phone is only going to be able to run for a few years before it needs to retool for a newer model. That means a huge investment goes into such a factory on a continual basis. If one factory can serve the entire world demand for that model, why build two?

    If you're going to build just one factory, are you going to build it in a market that's walled off behind trade barriers, both for outputs and inputs? Only if that market is significantly bigger than the rest of the world combined. If the rest of the world is bigger, than you build outside the trade barriers and people inside of them will just have to pay more.

    Tariff's might bring low-end, high-volume manufacturing back to the U.S.. Chip fabs, phone factories, or anything so high-end/low-volume that it must be amortized over a global market is not going to return to the U.S. because of tariff's. An administration that changes their minds every few hours only makes matters worse. Whether Trump has recognized this and is conceding defeat or he's bowing to pressure from companies like Apple is immaterial. That kind of factory is not coming to the U.S. anytime soon.

    Aurornis(10000) 5 days ago [-]

    > While a universal tariff with no exceptions incentivizes domestic manufacturing

    Not really. Efficient manufacturing requires access to a lot of different inputs from all over, from the machines that make things to the raw materials.

    Putting tariffs on everything only incentivizes companies to move to a location where they can freely buy what they need and manufacture it for the world.

    The US is not the only consumer of most manufactured goods. Making them in a country with cheap labor and no extra import tariffs makes more sense than in a country where everything is under tariffs

    atoav(10000) 4 days ago [-]

    Relocating a factory to the US is expensive both as an investment and in its operation. Thst means you're thinking on a time horizon of decades not years. So if you're the CEO of a corp that is expected to be incentivized to move production to the US you would want to know how long those tariffs are going to last.

    And lets face it, even if Trump instigated those tariffs via executive order at day 0 and didn't touch them till the expected end of his office that would not be enough incentive to relocate production. (1) because he could change the tariffs literally at any point (and he did just that) and (2) because any president after could just reverse the executive order immidately.

    The erratic way Trump installed, modfied and communicated the tariffs run counter to the communicated purpose. E.g. why of all things excempt computers and electronic devices now from the tariffs? Why put a 10% tariff on goods from dirt-poor countries whose goods you already buy at an rate bordering on exploitation to your own benefit.

    The way I see it, either he has no idea what the hell he is doing, or he is doing it for another purpose, e.g. insider trading. And I see myself exceedingly tired of journalists trying to read the tea leaves on a madman.

    throw0101d(1901) 6 days ago [-]

    There are valid reasons for tariffs:

    * https://www.noahpinion.blog/p/when-are-tariffs-good

    Especially when it comes to certain areas of the economy:

    > Democratic countries' economies are mainly set up as free market economies with redistribution, because this is what maximizes living standards in peacetime. In a free market economy, if a foreign country wants to sell you cheap cars, you let them do it, and you allocate your own productive resources to something more profitable instead. If China is willing to sell you brand-new electric vehicles for $10,000, why should you turn them down? Just make B2B SaaS and advertising platforms and chat apps, sell them for a high profit margin, and drive a Chinese car.

    > Except then a war comes, and suddenly you find that B2B SaaS and advertising platforms and chat apps aren't very useful for defending your freedoms. Oops! The right time to worry about manufacturing would have been years before the war, except you weren't able to anticipate and prepare for the future. Manufacturing doesn't just support war — in a very real way, it's a war in and of itself.

    * https://www.noahpinion.blog/p/manufacturing-is-a-war-now

    > China has rapidly established itself as the world's dominant shipbuilding power, marginalizing the United States and its allies in a strategically important industry. In addition to building massive numbers of commercial ships, many Chinese shipyards also produce warships for the country's rapidly growing navy. As part of its "military-civil fusion" strategy, China is tapping into the dual-use resources of its commercial shipbuilding empire to support its ongoing naval modernization

    * https://www.csis.org/analysis/ship-wars-confronting-chinas-d...

    But none of the current 'reasons'—which may simply be rationalizations / retcons by underlings for one man's fickle will—really make much sense:

    * https://www.noahpinion.blog/p/all-the-arguments-for-tariffs-...

    lazyeye(10000) 6 days ago [-]

    I think we need to also consider that 'conventional economic thinking' got us into this mess (de-industrialized, vulnerable economy, hollowed out working/middle class, enormous debt/deficit). There never seems to be any accountability for this though. I suspect it's because a particular section of society has done very well from the status quo.

    XorNot(10000) 5 days ago [-]

    Except tarrifs rarely help any of that: there's already extensive regulations in place to require local sourcing for defence critical components, all the way down the supply chain.

    And tarriffing imports doesn't make a difference in the case of something like shipbuilding where the real problem is the government hasn't got a consistent order-book to keep factories staffed, operating and training - nor a plan to allow that capacity to leverage into being self supporting.

    Like a much better plan has always been defence exports: increase your customer base to spread risk and reduce per unit prices. The F-35 and it's adoption was a great idea in this regard...right up till the US started threatening NATO allies and cutting off avionics support to partner nations (Ukraine) in the middle of a war.

    You don't get a defence manufacturing industry without actually paying for a defence manufacturing industry. The whole 'bring manufacturing back' idea is almost wholly disconnected from it: a ton of factories extruding plastic childrens roys aren't suddenly going to start making anti-shipping missiles - in fact this is related to a secondary problem which is that it's not remotely clear that a peer/near-peer conflict would look anything like the long wars that WW2 represented due tot he delivery timelines on advanced weapons systems. You basically go to war with the military you have.

    throw310822(3586) 5 days ago [-]

    > you find that B2B SaaS and advertising platforms and chat apps aren't very useful for defending your freedoms.

    The analysis is reasonable, but let's just replace 'defending your freedoms' with 'reaping the benefits of being the biggest bully in town'. This is what China's competition means, not the risk of being attacked and losing your freedoms, but that of losing the power you got used to and profited from.

    otterley(3404) 5 days ago [-]

    > The right time to worry about manufacturing would have been years before the war, except you weren't able to anticipate and prepare for the future

    People were worrying about this as early as the 1970s when Japan started importing cars, and in the 1990s when Chinese markets started to open up under the condition that the Western companies partner with Chinese ones and effectuate technology transfers to them. These folks foresaw the future, but politicians and corporate managers didn't care; they were focused on expansion at all costs.

    Now that the future is today, all they can say is "I told you so," which isn't much comfort to anyone.

    jeswin(2367) 6 days ago [-]

    I am among the few who think it might eventually prove itself a good idea.

    To start with, Europe has no good cards to play. Ultimately, Europe will side with the United States while it builds self-sufficiency on several fronts, especially defense. Europe also recognizes that the complete relocation of production capacity into China wasn't good in the long run; it's just that they had no ability to act on their own.

    The US has repeatedly suggested publicly that it's not entirely about tariffs, and more might have been said privately. The tariffs the EU and Britain will drop are probably not what the US is after; what the US wants is to reduce global demand for Chinese manufacturing. Europe will find it easier to sell this—bringing manufacturing back and protectionism even at the cost of say, welfare and environment—to the public due to the violent shakedown over the past two weeks, as well as what happened with Ukraine and Russia. Ongoing European emergency measures to increase defense spending will be followed by incentives to rebuild strategic industry—like how China supported civilian–military partnership with policy.

    Meanwhile the Indian government is already looking for ways to replace Chinese imports with US imports, where it can [1]. Japan and North Korea will follow suit; Trump is already saying that Korea needs to pay for US troops.

    The US is (in my view) on solid footing here. At the very least, they get better trade deals from everyone else—Europe, India, Korea, Japan, Taiwan, etc. A number of companies will move production back into the US, and the government can prioritize those with more military value (chip-making, batteries, cars, shipbuilding [2] , etc.). And if the US can convince others to start decoupling from China, this will weaken Chinese manufacturing capacity.

    Given the pain it's going to inflict in the short term, Trump is the only person who could have started this trade war. There might have been ways to do this without such a shake-up, but I am not convinced that this was a stupid move.

    This was an anti-China move right from the beginning, disguised as an outrage against everyone's tariffs.

    [1]: https://www.financialexpress.com/business/industry/replace-c...

    [2]: https://www.scmp.com/economy/china-economy/article/3306177/u...

    To clarify: none of this is China's fault. They did a fantastic job for their country, pulling hundreds of millions of people out of poverty.

    Spooky23(3545) 6 days ago [-]

    I think EU will be fine, it really depends on how much the US cares about advancing Russian interests.

    Long game, the UK may transform into being a sort of vassal of the US, assuming it survives as an entity. The EU interest may align more with China. If the US is de-empathizing NATO, they need a counterweight to the Russia/US axis.

    It's the end of pax americana, and the future is more uncertain.

    oa335(10000) 6 days ago [-]

    China is the EUs largest export market. I'm not so sure the EU will align with the US here.

    eagleislandsong(10000) 6 days ago [-]

    > at the cost of... welfare

    If politicians no longer care about winning elections, then they might campaign on this.

    stafferxrr(10000) 5 days ago [-]

    I also imagine this is maximum negative sentiment.

    I follow the Chinese economy pretty closely and I just can't imagine 2025 passes without a deal.

    Of course, neither Trump or Xi were going to back down here before a big meeting. I don't see how this is sustainable on any real time frame though for either economy.

    Some people seem to be framing this as some kind of win for China. That is crazy. Chinese stocks had been in the toilet for a while, got a slight bump and that was mostly erased last week. I am far more confident in my US bets than China bets here.

    realusername(3429) 5 days ago [-]

    I have the complete opposite opinion. The US has no cards to play in the EU and is screwed in the medium and long term.

    The only reason the EU was tolerating those massive tech companies which contribute close to nothing in the EU was because the US was pulling its weight in EU defense.

    Now that Trump openly sided with Putin, that's gone. Trump has no card to play in the EU anymore. He could even insult EU leaders publicly if he wanted to but pushing out Zelensky like he did was the only thing he could not afford to do.

    Then on the investment side, the EU will now seen as a more stable and better environment than the US which changes policies every Tuesdays. The US will be experiencing a similar effect to brexit but longer and more severe.

    The status of the dollar is clearly questioned as well. Will the US remain the top economic power with those tech companies atrophied and a local recession? I'm not so sure.

    walterbell(23) 6 days ago [-]

    Per Bloomberg, 20% fentanyl tariff on China still applies and these categories may yet receive their own unique tariff, https://archive.is/jKupW

    The exemption categories include components and assembled products, https://content.govdelivery.com/bulletins/gd/USDHSCBP-3db9e5...

      8471       ADP (Automatic Data Processing) Machines: PCs, servers, terminals.
      8473.30    Parts for ADPs: keyboards, peripherals, printers.
      8486       Machines for producing semiconductors & ICs: wafer fab, lithography.
      8517.13    Mobile phones and smartphones.
      8517.62    Radios, router, modems.
      8523.51    Radio/TV broadcasting equipment.
      8524       2-way radios.
      8528.52    Computer monitors and projectors (no TVs).
      8541.10    Diodes, transistors and similar electronic components
      8541.21    LEDs
      8541.29    Photodiodes and non-LED diodes
      8541.30    Transistors
      8541.49.10 Other semiconductors that emit light
      8541.49.70 Optoelectronics: light sensors, solar cells
      8541.49.80 Photoresistors
      8541.49.95 Other semiconductor devices
      8541.51.00 LEDs for displays
      8541.59.00 Other specialized semiconductor devices
      8541.90.00 Semiconductor parts: interconnects, packaging, assembly
      8542       Electronic ICs
    
    Industrial-scale workarounds were developed for previous tariffs, https://news.ycombinator.com/item?id=43652823. Such loopholes will need to be addressed in any new trade agreements.
    codedokode(3471) 6 days ago [-]

    > 8486 Machines for producing semiconductors & ICs: wafer fab, lithography.

    Does US buy them from China too?

    CodeCrusader(10000) 6 days ago [-]

    Seems like the tariffs are becoming a lot more complicated, and it is possible that it is happening by design

    enaaem(10000) 6 days ago [-]

    Tariffs can be very expensive to enforce, so you want to keep it as simple as possible.

    dashtiarian(10000) 6 days ago [-]

    It actually feels nice to see US people having a taste of the kind of government their intelligence service force other nations to have by coups, except that it does not feel nice at all. I'm sorry guys.

    UncleSlacky(10000) 5 days ago [-]

    Fascism is when colonialism comes home.

    peteforde(2434) 5 days ago [-]

    I listened the book 'Lucky Loser' (Craig/Buettner) a few months back. It's a well-researched timeline of how the Trump fortune was made, and to be really kind, how monumentally terrible DJT is at business on a fundamental level. The shady deals and repulsive ethics are not exceptions but the status quo. The only reason he's in the situation he's in is because the guy who created Survivor saw an opportunity. Now the whole world is paying the price.

    I listened because I thought it would be funny, but the shitty behaviour and unapologetic corruption is just so naked that it actually left me feeling pretty upset for all of the obvious reasons.

    I'd say that I don't understand how anyone can be charmed by this con artist, but the truth is that I have simply lost a ton of faith in the 'average' person.

    andrekandre(10000) 5 days ago [-]

      > I'd say that I don't understand how anyone can be charmed by this con artist, but the truth is that I have simply lost a ton of faith in the 'average' person.
    
    the same could probably be said about the 'average' person with regards to buttoned-up polished politicians with which trump contrasts himself to; he looks authentic to many people....
    jfengel(10000) 5 days ago [-]

    From what I am hearing, he seems to have appealed on culture war issues. On economic issues, it was assumed that Biden had been doing something bad and Trump would end it, but they didn't much care past that.

    There is still a halo of 'Democrats are bad at the economy' dating from the 1970s and rooted in the New Deal.

    jpster(10000) 5 days ago [-]

    I suspect it would be a good idea if the US abolished the presidency and moved to a parliamentary system. Turns out that concentrating so much power in a single position is a bad idea.

    YZF(10000) 5 days ago [-]

    You still often have one man with all the power in a parliamentary system. The Prime Minister. Take Canada as an example. JT had basically complete power over government. It's as rate for the prime minister party or coalition to go against him as it is for a president in the US to be impeached.

    I think the trick has to be to just get better people into those positions. Which means better people need to have some incentive to get into politics. It's a tough one for sure.

    fjfaase(10000) 5 days ago [-]

    The president has all the power that the congress and the senate gives him. Previous presidents were not given this much power. The bad guys are in the congress and the senate for not upholding the constitution.

    _heimdall(10000) 5 days ago [-]

    We don't need to abolish the presidency or entirely change our system for a parliamentary model. We do need to drastically shrink the executive branch and its powers though.

    I've found it interesting that so many are seriously concerned with what Trump is doing but not why the executive branch has the authority to do it in the first place.

    Aurornis(10000) 5 days ago [-]

    Our current system should allow Congress to control this.

    They're not. That's the problem.

    You could swap it out for a parliamentary structure with the same characters and you'd get the same result. There's a weird personality cult thing going on and everyone is waiting to see who will break ranks first, lest they get crushed by the retaliatory wrath of Trump calling his followers to oppose a person and Elon Musk dumping a mega war chest on them.

    There are signs that people are starting to break ranks, but it looks like they want to see him have to face the consequences of his decisions before they jump in to save him.

    This current policy is so bad that they'd be doing him a political favor by jumping in to disallow it. The problem for them is that he would be guaranteed to turn around and blame it on Congress. "My tariff plan was going to work, but Congress interfered!"

    lifeinthevoid(10000) 5 days ago [-]

    Can the other countries implement "export tariffs" on said goods? Would be a nice move to mess with Trump.

    mppm(10000) 5 days ago [-]

    It would be karmically appropriate, but I'd guess nobody has an actual interest in doing so. Export restrictions are also easier to circumvent than import restrictions by routing through third countries. Unless, of course, you apply the export tariff to everyone, which again nobody has an interest in doing.

    Animats(2975) 6 days ago [-]

    Since last night, anyway. The people who make shipping work are frantically trying to keep up. One of the biggest customs brokers posts updates twice a day on weekdays. Last update 4 PM Friday, so they haven't caught the biggest reversal. If tariff rates change while in transit, the bond paid before the item was shipped may now be insufficient. So the container goes into storage (where?) until Customs and Border Protection gets paid. Some recipients don't have the cash to pay. Low-end resellers who order on Alibaba and sell on Amazon, for example.

    Port operators hate this. Unwanted containers clog up the portside sorting and storage systems. Eventually the containers are either sent back or auctioned off by CBP, like this stuff.[1]

    Some shippers outside the US have stopped shipping to the US until this settles. This includes all the major laptop makers - Lenovo, Acer, Dell, etc.[2] Nobody wants to be caught with a container in transit, a big customs bill due on receipt, and storage charges. That will recover once the rates are stable for a few weeks. Probably.

    Customs and Border Protection is trying to keep up. Sometimes you have to pay more because Trump raised tariffs. Sometimes you can get a credit back because Trump dropped tariffs. Those are all exception transactions, with extra paperwork and delays.

    Where's the Flexport guy from YC? He should be able to explain all this.

    Consumer version: expect to see some empty shelves, rejected orders, and higher prices for the next few weeks.

    [1] https://bid.cwsmarketing.com/auctions/catalog/id/167

    [2] https://www.techspot.com/news/107504-trump-tariffs-force-maj...

    TeaBrain(10000) 5 days ago [-]

    Ryan Petersen was on the Bloomberg Odd Lots podcast a few days ago.

    re-thc(10000) 5 days ago [-]

    > Consumer version: expect to see some empty shelves, rejected orders, and higher prices for the next few weeks.

    Make that the next few years at this rate.

    > Customs and Border Protection is trying to keep up.

    There are still people there? DOGE hasn't hit them up?

    Eavolution(10000) 5 days ago [-]

    Hang on are tarriffs not effective on date of purchase? I'm not American but it seems madness to apply them at any other time as then no one knows what will actually need paid if you've someone like Trump changing them frequently.

    Y_Y(3528) 5 days ago [-]
    https://bid.cwsmarketing.com/lot-details/index/catalog/167/l...

    I love that I can buy a pallet of miscellaneous medical supplies, and also that someone who specifically wanted them but now can't pay for them has to go without.

    ashoeafoot(10000) 5 days ago [-]

    So does the trump tarif noise average out to something you can plan with ?

    Animats(2975) 5 days ago [-]

    Update: Possible pending reversal today (Sunday) on temporary exemption to emergency China tariff for computers and smartphones.[1][2] Trump and the Secretary of Commerce are saying different things on social media. Trump says he will look at the 'whole electronic supply chain.' The Wall Street Journal and Bloomberg are trying to keep up with the announcements.

    [1] https://www.wsj.com/livecoverage/stock-market-trump-tariffs-...

    [2] https://www.bloomberg.com/news/articles/2025-04-13/trump-say...

    jmward01(10000) 6 days ago [-]

    This is a massive sign that Trump's double down strategy is failing badly. He only has one play: Be a bully and double down any time someone fights back. It works when you have the leverage but as soon as you don't anymore you loose, big. The US just ran out of leverage. I don't know about everyone else but I just started looking into how to move money and investments outside the US.

    timmg(10000) 5 days ago [-]

    > I don't know about everyone else but I just started looking into how to move money and investments outside the US.

    Based on tweets I've seen, you are not the only one engaging in 'capital flight'. Not great for the US.

    One would like to think this will be a good lesson for the administration. But I'm worried that they are not acting completely rationally.

    wnc3141(10000) 5 days ago [-]

    My cynical read is that there will eventually be complete corporate capture of these tariffs. Then firms will try to protect their carveouts that make unfair advantages.

    Its about their corporate supporters choosing winners and losers. Its the only reason I can conjure that corporate America has otherwise been silent.

    roland35(10000) 5 days ago [-]

    Will be? Seems like it already happened! All for a low price of a $1M dinner.

    differentView(10000) 5 days ago [-]

    95+% of his tariffs will be walked back within a year.

    Ylpertnodi(10000) 5 days ago [-]

    But travel (to the us) income will forever be lost.





    Historical Discussions: "Most promising signs yet" of alien life on a planet beyond our Solar System (April 17, 2025: 402 points)

    (402) "Most promising signs yet" of alien life on a planet beyond our Solar System

    402 points 1 day ago by fuidani in 10000th position

    www.skyatnightmagazine.com | Estimated reading time – 7 minutes | comments | anchor

    Astronomers say they've found 'the most promising signs yet' of chemicals on a planet beyond our Solar System that could indicate the presence of life on its surface.

    Using the James Webb Space Telescope, the team found a possible 'biosignature' – the potential fingerprint of life – within its atmosphere, although they say they're remaining 'cautious', and that this isn't a confirmed detection.

    The chemicals detected are the same as those produced by marine-dwelling organisms on Earth.

    The team, led by the University of Cambridge in the UK, detected signs of dimethyl sulfide and dimethyl disulfide in the atmosphere of exoplanet K2-18b.

    This planet orbits its star in the habitable zone (sometimes called the Goldilocks Zone), which is the region around a star in which an orbiting planet might have conditions suitable for the emergence of life, such as the ability for liquid water to exist on its surface.

    K2-18b is 8.6 times as massive and 2.6 times as large as Earth and lies 124 lightyears away from our planet.

    An artist's impression showing exoplanet K2-18b, its host star and an accompanying planet in this system. Credit: ESA/Hubble, M. Kornmesser. Credit: ESA/Hubble, M. Kornmesser

    Building a bigger picture

    This isn't the first study of exoplanet K2-18b.

    A 2023 study of K2-18b by the same team identified methane and carbon dioxide in the planet's atmosphere.

    This in itself was a huge discovery: the first time carbon-based molecules had been found in the atmosphere of an exoplanet – a planet beyond our Solar System – in the habitable zone.

    Astronomers say the 2023 results showed K2-18b could be a 'Hycean' planet, meaning a habitable world with a liquid ocean and a hydrogen-rich atmosphere.

    That earlier study found a tantalising hint of dimethyl sulfide and dimethyl disulfide, but this latest study has made a more promising detection.

    This graph shows detections of chemicals in the atmosphere of K2-18b by the James Webb Space Telescope, as part of the 2023 study

    'We didn't know for sure whether the signal we saw last time was due to DMS, but just the hint of it was exciting enough for us to have another look with JWST using a different instrument,' says Professor Nikku Madhusudhan from Cambridge's Institute of Astronomy, who led the research.

    The team say that on Earth, dimethyl sulfide and dimethyl disulfide are only produced by life, mainly microbial life like phytoplankton we see in our oceans.

    However, there could be another explanation for the detection of the chemical.

    Another unknown chemical process could be the source of the molecules detected in K2-18b's atmosphere.

    Artist's impression of exoplanet K2-18b. Credit: A. Smith, N. Madhusudhan (University of Cambridge)

    Nevertheless, the team say 'the results are the 'strongest evidence yet' that life may exist on a planet outside our Solar System.

    They say their observations have reached the 'three-sigma' level of statistical significance.

    This means there's a 0.3% probability the detection occurred by chance.

    And to reach the accepted level that would mean scientific discovery, observations would have to meet the five-sigma threshold.

    In other words, there would need to be below a 0.00006% probability they occurred by chance.

    Artistic ilustration of planet K2-18b, its star K2-18 and another planet in the system. Credit: Alex Boersma, www.alexboersma.com

    Detecting life on faraway worlds

    How can scientists know what chemicals exist on a planet orbiting a star beyond our Solar System?

    Key to analysing exoplanets' atmospheres is analysing the light from their host stars.

    As a planet passes in front of its host star from our perspective on Earth – known as a transit – light from that star passes through the planet's atmosphere.

    That starlight picks up chemical fingerprints as it passes through the atmosphere, so astronomers can analyse the light to learn more about the atmosphere.

    A dip in starlight can indicate a planet 'transiting' that star. But as well as detecting exoplanets, transits can be used by astronomers to learn more about an exoplanet's atmosphere

    The tentative detection of dimethyl sulfide in 2023 was made using the James Webb Space Telescope's NIRISS (Near-Infrared Imager and Slitless Spectrograph) and NIRSpec (Near-Infrared Spectrograph) instruments.

    This 2025 study used the Webb Telescope's MIRI (Mid-Infrared Instrument), which observes in a different wavelength of light, offering the team a new look at this intriguing world.

    'This is an independent line of evidence, using a different instrument than we did before and a different wavelength range of light, where there is no overlap with the previous observations,' says Madhusudhan.

    'The signal came through strong and clear.'

    'It was an incredible realisation seeing the results emerge and remain consistent throughout the extensive independent analyses and robustness tests,' says co-author Måns Holmberg, a researcher at the Space Telescope Science Institute in Baltimore, USA.

    Astronomers can detect biosignatures to determine whether a planet may host life.

    Does K2-18b have life?

    The team say dimethyl sulfide and dimethyl disulfide are molecules from the same chemical family, and could be 'biosignatures'.

    This is a term used to describe chemicals that, when detected around a distant planet, could indicate the presence of biological processes, i.e. life.

    Yet the concentrations of dimethyl sulfide and dimethyl disulfide in K2-18b's atmosphere are different from those on Earth.

    On Earth, dimethyl sulfide and dimethyl disulfide are below one part per billion by volume. On K2-18b, they're thought to be thousands of times stronger, over ten parts per million.

    'Earlier theoretical work had predicted that high levels of sulfur-based gases like dimethyl sulfide and dimethyl disulfide are possible on Hycean worlds,' says Madhusudhan.

    'And now we've observed it, in line with what was predicted. Given everything we know about this planet, a Hycean world with an ocean that is teeming with life is the scenario that best fits the data we have.'

    The team now hope to carry out more research into whether dimethyl sulfide and dimethyl disulfide can be produced non-biologically at the level they're currently seeing.

    Credit: NASA GSFC/CIL/Adriana Manrique Gutierrez

    'The inference of these biosignature molecules poses profound questions concerning the processes that might be producing them' says study co-author Subhajit Sarkar of Cardiff University.

    'Our work is the starting point for all the investigations that are now needed to confirm and understand the implications of these exciting findings,' says co-author Savvas Constantinou, also from Cambridge's Institute of Astronomy.

    'It's important that we're deeply sceptical of our own results, because it's only by testing and testing again that we will be able to reach the point where we're confident in them,' says Madhusudhan. 'That's how science has to work.

    'Decades from now, we may look back at this point in time and recognise it was when the living universe came within reach.

    'This could be the tipping point, where suddenly the fundamental question of whether we're alone in the universe is one we're capable of answering.'




    All Comments: [-] | anchor

    throwaway290(10000) 1 day ago [-]

    TL;DR

    - K2-18b

    - detected dimethyl sulfide and dimethyl disulfide, false positive possibility is now very low

    - 'produced by marine-dwelling organisms on Earth', possibility they were produced by other processes (unrelated to life as we know it) not high but maybe unknown unknowns

    - other factors like distance from the star are in favor of life & water

    - previous studies detected methane and carbon dioxide

    sph(683) 1 day ago [-]

    > false positive possibility is very low

    No, it means we will soon discover how these compounds form naturally. Would love to be wrong, of course.

    guax(10000) 1 day ago [-]

    > The observations also provided a tentative hint of dimethyl sulfide (DMS), a possible biosignature gas, but the inference was of low statistical significance.

    From the source paper. It is a very important result but not definitive, false positive is still possible as well as us finding a new way in which DMS can form without a biological process.

    Still freaking exciting and fantastic scientific achievement. JWST is already bearing incredible fruits.

    energy123(10000) 1 day ago [-]

    > false positive possibility is now very low

    It's not that low, unfortunately. From the article:

    > They say their observations have reached the 'three-sigma' level of statistical significance. This means there's a 0.3% probability the detection occurred by chance. And to reach the accepted level that would mean scientific discovery, observations would have to meet the five-sigma threshold. In other words, there would need to be below a 0.00006% probability they occurred by chance.

    ZiiS(10000) 1 day ago [-]

    Astronomers have yet again found possible signs of alien life.

    sgt(3284) 1 day ago [-]

    You're not thinking like a journalist. This is a breakthrough! Alien life has been found! SETI is making contact as we speak.

    weberer(3513) 1 day ago [-]

    Here's the primary source

    https://iopscience.iop.org/article/10.3847/2041-8213/adc1c8

    They possibly detected dimethyl sulfide, which is only known to be produced by living organisms.

    metalman(10000) 1 day ago [-]

    only know to be produced.....is a whoa bessy phrase,?¿ as in 70 years ago an undergraduate figured out that dimethyl sulfide was produced by living organisms and he asked his professor what else made it, and got shrug and 'nothing else I know of' and everybody has been cutting and pasting since, OR, an international team spent years and millions working on the chemistry behind dimethyl sufide in an epic known to all quest to determine it's origins. Science does have an issue with cutting and pasting ancient mistakes, and then bieng exceptionaly reluctant to change and move forward, not to mention that SETI, and the rest of 'alien' research is most definitly tainted with public fantasy and entertainment industry influence, so even with one of the notoriously oderiferous sulfide compounds present, I wont hold my breath

    perihelions(137) 1 day ago [-]

    I'm not convinced about the methods. It looks a lot like p-hacking to me: they have a highly specific hypothesis drawn from a large universe—that dozen or so molecules (§3.1) in their infrared spectrum model they're fitting experimental data against. I don't buy the way they created that hypothesis. The put a handful of highly specific biosignature gases into it, things that were proposed by exobiology theory papers. One very specific hypothesis out of many, and a low likelihood one. And that's the hypothesis they get some borderline ~3σ signals for? Really?

    edit: Any chance someone might have the charity to explain why my criticism is so far off-base, according to the HN consensus?

    belter(63) about 24 hours ago [-]

    > which is only known to be produced by living organisms.

    Comets with DMS: https://arxiv.org/abs/2410.08724

    And the interstellar medium.

    'On the abiotic origin of dimethyl sulfide: discovery of DMS in the Interstellar Medium' - https://arxiv.org/abs/2501.08892

    '...Although the chemistry of DMS beyond Earth is yet to be fully disclosed, this discovery provides conclusive observational evidence on its efficient abiotic production in the interstellar medium, casting doubts about using DMS as a reliable biomarker in exoplanet science...'

    teamonkey(2742) about 21 hours ago [-]

    A lot of science papers are like "we found a hint of this thing, we need to do more research" and it's reported as "ALIENS??!?"

    I understand why this is the case but I think it can lead to a loss in trust in science when the reporting jumps to conclusions that aren't supported by the research itself.

    In this case the abstract is far more grounded. In particular,

    > The observations also provided a tentative hint of dimethyl sulfide (DMS), a possible biosignature gas, but the inference was of low statistical significance.

    > We find that the spectrum cannot be explained by most molecules predicted for K2-18 b, with the exception of DMS and dimethyl disulfide (DMDS), also a potential biosignature gas.

    > More observations are needed to increase the robustness of the findings and resolve the degeneracy between DMS and DMDS. The results also highlight the need for additional experimental and theoretical work to determine accurate cross sections of important biosignature gases and identify potential abiotic sources.

    dguest(10000) about 19 hours ago [-]

    also: https://arxiv.org/abs/2504.12267

    (if you want a cleaner interface)

    seanhunter(3193) 1 day ago [-]

    Firstly that is completely badass science. The idea that you can use observations to detect the chemical composition of an exoplanet millions of kilometres away is an absolute triumph of the work of thousands of people over hundreds of years. Really amazing and deeply humbling to me.

    Secondly, my prior was always that life existed outside of earth. It just seems so unlikely that we are somehow that special. If life developed here I always felt it overwhelmingly likely that it developed elsewhere too given how incredibly unfathomably vast the universe is.

    ta8645(10000) 1 day ago [-]

    If life is very common in the universe, then that is probably bad news for us. It means that civilizations should exist that are millions of years more technologically advanced than us; and should be leaving telltale signatures across the sky that we'd likely have detected by now. And the absence of those signs would be relatively strong evidence that life, while common, isn't long-lived. Suggesting that our demise too, will come before too long.

    If, on the other hand, life is relatively rare, or we're the sole example, our future can't be statistically estimated that way.

    thrance(10000) 1 day ago [-]

    The only place we know for sure life exists on is Earth. You can't reason about the likelihood of life existing elsewhere with a sample of N=1.

    otabdeveloper4(10000) 1 day ago [-]

    > It just seems so unlikely that we are somehow that special.

    That prior is formed by sci-fi media, not science.

    > I always felt it overwhelmingly likely that it developed elsewhere too

    'Life' is an information complexity characteristic. We know that information complexity is not uniformly distributed in the universe, and in fact the vast majority of the universe is extremely information-poor. Logically from the scientific data you'd assume that 'life' in the universe also has a very lopsided distribution.

    tgv(10000) 1 day ago [-]

    > given how incredibly unfathomably vast the universe is ... we ...

    But the probability of developing a highly developed civilization can be much, much smaller than 1 / number of planets in the universe.

    goognighz(10000) 1 day ago [-]

    Interestingly we can't actually know that we are correct in our calculations of what a planet lightyears away has as its atmosphere because we will never be able to go there and make sure we are correct. It's a calculation and nothing more. For all we know that planet may not even exist. That's what's mind blowing about astronomy. We really don't have any way of proving anything about what we are observing. All we can say is we are observing. That's the only thing science can offer us.

    icemelt8(10000) 1 day ago [-]

    we are alone as God only populated earth.

    bufferoverflow(3152) 1 day ago [-]

    > millions of kilometres away

    Yes, millions, but that's a major understatement.

    It's 124 light years away. Which is around a million billion km away. (a.k.a quadrillion)

    It's just so damn far.

    Someone(853) 1 day ago [-]

    > an exoplanet millions of kilometres away

    Not millions, not even billions. 124 light years is about 1015 kilometers, or a million billion kilometers.

    ninetyninenine(10000) 1 day ago [-]

    I never got this. Someone eventually wins the lottery. Someone eventually gets struck by lightning. How lucky a lucky person feels doesn't influence the cold hard probabilities. So this feeling is mostly a delusion.

    And frankly we don't know how probable or improbable it is for life to form because we aren't actually clear how it formed in the first place. The fact that the event has not and can't (so far) be reproduced by us means that it is already highly likely to an extremely low probability event.

    The question is how low? Low enough such that there is another planet that has it within 124 light years. I actually don't think so.

    I think the probability of finding a planet that has biosignatures of life but doesn't have any life at all is a higher probability then actually finding planets that actually have life. No matter what you think the likelihood of finding life is, I think most people agree that the above should be true.

    qudat(3277) about 21 hours ago [-]

    The universe is so big that even very rare anomalies are common. There is life outside of earth, that is all but confirmed.

    hackeraccount(10000) about 21 hours ago [-]

    My prior is that life is not uncommon in the universe, multicellular eukaryotic type life less common and intelligent (whatever that means) life less common still.

    If the closest prokaryotic type life is 100 light year away then the the closest intelligent life might is pretty far away.

    I base this on almost nothing - other then the time it took for prokaryotic and eukaryotic life to emerge on Earth; which to my mind is surprisingly quick for the former an weirdly long for the later.

    sph(683) 1 day ago [-]

    A bit clickbaity of OP to skip the operative word 'promising' signs of life.

    isolli(2928) 1 day ago [-]

    To be fair, the original title goes above HN's character limit, but the omission is almost worthy of a flag, in my opinion...

    quaintdev(998) 1 day ago [-]

    This should be higher up

    bathtub365(3476) 1 day ago [-]

    And it isn't actually signs of life. The first paragraph:

    > Astronomers say they've found 'the most promising signs yet' of chemicals on a planet beyond our Solar System that could indicate the presence of life on its surface.

    eecc(2477) 1 day ago [-]

    JSWT... again the most formidable piece of equipment ever shot into outer space. That think is going to shake our understanding of the Universe to its foundations a couple times around

    merek(10000) 1 day ago [-]

    I think you mean JWST, not to be confused with JSON Web Tokens :)

    londons_explore(10000) 1 day ago [-]

    This is happening 124 light years away from earth.

    That means if we develop a way to make a space ship accelerate at 1g for a long period of time, you could go there in just 10 relativistic years.

    Unfortunately, whilst science allows such a rocket, our engineering skills are far from being able to build one.

    DiogenesKynikos(10000) 1 day ago [-]

    It would still be >124 years from the perspective of people on Earth, though.

    lucb1e(3525) 1 day ago [-]

    If you find that sort of thing interesting... I don't always know how seriously to take the things on this channel, but I discovered Fraser Cain not so long ago and find the ideas mentioned in the interviews to be fascinating, for example 'Interstellar Travel Without Breaking Physics with Andrew Higgins' https://www.youtube.com/watch?v=SkGRVvA23qI (warning: it's over an hour)

    mr_mitm(10000) 1 day ago [-]

    Calling it simply an engineering issue is not properly conveying the ridiculousness of such a journey. For a small space ship of 1000 tons, this would take ten thousand times the current yearly energy consumption of mankind. So we'd need to figure out how to generate the energy and then store it on a space ship before even thinking about the engineering.

    And that's ignoring the mass of the fuel. The classical rocket equation has the mass going exponentially with the velocity, which makes this endeavor even more mind bogglingly ridiculous. We'd actually need 2 million years worth of our current yearly energy consumption.

    It's fun to think about, but being clear about the challenges puts quite the damper on it.

    ta1243(10000) about 23 hours ago [-]

    If you can somehow make a ship capable of constant acceleration at 1G, and had enough shielding on it to protect it against the radiation, you can travel to any point in the observable universe, in a human lifetime.

    If you just keep accelerating and left as a 20 year old, you'd be in your 50s when you saw the final stars born and die in 100 trillion (earth) years time.

    That's how crazy relativity and torchships are

    tomelders(10000) 1 day ago [-]

    My understanding is that the great filter theory means this is bad news for us humans here on earth. And considering the state of the world right now, it's especially ominous. Fate loves irony.

    StopDisinfo910(10000) 1 day ago [-]

    The great filter is only one of the possible explanations of the Fermi paradox however. There are other far less bleack including that there is actually no paradox at all: life is indeed frequent and but we are just bad at detecting it/have not been looking for it long enough.

    mtlmtlmtlmtl(10000) 1 day ago [-]

    How so? If great filters exist at all, which is not a given, there could be multiple ones, first of all. They could be somewhere between our level of biological complexity and the kind hypothesised to be responsible for this signal. Endosymbiosis is a very plausible such filter. The evolution of language and the bootstrapping of cultural evolution is another one. Both n=1 on our planet. Probably there are others I can't think of right now.

    encrypted_bird(10000) about 15 hours ago [-]

    With due respect, the Great Filter is a hypothesis, not a theory.

    That being said, I agree. I read in a similar thread yesterday someone confused how this would be bad news rather than good news—that there are many other intelligent species indicates that such a filter either doesn't exist or is very easy to pass. But, like your point does, I think it's important to recognize that such a 'good news' position is predicated on the notion that we as a species are already past the Great Filter, rather than that we're still behind it and the others are ahead.

    MrPapz(10000) 1 day ago [-]

    Maybe now we can stop this nonsense of competing among each other and start dedicating efforts to an international space program.

    Guthur(10000) 1 day ago [-]

    Why exactly? I'd prefer we'd just build some more houses so that owning one didn't require a life time of work to pay for.

    afroboy(10000) about 15 hours ago [-]

    Maybe let's just try to stop genocide happening here first and try not send innocents people to prisons in el Salvador.

    milesrout(10000) 1 day ago [-]

    Is there a source for this that isn't plastered with banner ads? I can't read more than a sentence at a time without having to scroll past adverts.

    I do wonder why I was stupid enough to pay for a phone with a bigger screen as it just seems to mean more and bigger ads on screen at once and the same amount of content.

    mkl(10000) about 24 hours ago [-]

    Why are you not using an ad blocker? Ads are optional - I didn't see a single one.

    tjpnz(3481) 1 day ago [-]

    How far off are we from being able to image an exoplanet?

    t8sr(10000) 1 day ago [-]

    Directly imaging an exoplanet has been done about 20 times (maybe more, by now). If you're asking how far are we from resolving an exoplanet to more than a single point of light, the answer is we will never be able to do that from this distance.

    dguest(10000) about 3 hours ago [-]

    Depends on what you mean by 'image'. We might be able to capture blurry blobs with our current telescopes. Let's say you want to take a picture of Alien Manhattan 100 light years away, where you can see e.g. bridges and buildings, stuff about 10m across. I think we could do it pretty well if we could launch around 50,000 space telescopes, each 30 km across.

    My math is below.

    Note: I'm not an astronomer.

    ----

    The angular resolution limit for a telescope is roughly the wavelength of the light it's sensitive to over the diameter.

    If we want to sense things 10m across, with light at the shorter end of the visible spectrum (400 nm), we'd need a telescope with a diameter of about 1/4th of an AU (i.e. the distance from the earth to the sun), around 40 million kilometers.

    More practically we could use a telescope array with this diameter, which could conveniently be in lot of orbits about 1 AU out. But the area is still a problem: assuming this 100m^2 object is as bright as it would be on earth under midday sun, it's going to be reflecting around 100 kw of energy. One of these photons has an energy of around 3 eV, so we're getting 2e23 of them a second. Unfortunately these spread out over a sphere with a surface area of 1e31 km^2 by the time they reach earth, meaning we see one every second if we have a telescope array with an area of 50 million square km.

    Ok, so let's go kind of sci-fi and say we can build a 30 km diameter space telescope. It would be impressive (and unprecedented) but since it's floating in space and could be made of thin material you might be able to imagine it with today's technology and a lot of coordination. That gets us around 1000 square km! Now we just do it 50,000 more times.

    Great, now we have 1 Hz of photons coming from each 100 m^2 patch of Alien Manhattan! I'm sure in the process of building 50k mega-projects we'll figure out a way to filter out the noise, and with a few years of integration we'll have a nice snapshot!

    davedx(2524) 1 day ago [-]

    Some speculation

    On DMS:

    - DMS is a very specific configuration that's rarely the endpoint of non-living chemical cycles.

    - The simplicity of DMS doesn't make it less indicative of life—it actually makes it a very selective molecule, which only shows up in large quantities when life is involved (at least in Earth-like chemistry).

    - Until we find a compelling abiotic pathway, high DMS remains a strong biosignature, especially in the context of a planet with a potential ocean and mild temperatures

    Possible origins:

    We're looking at some form of life that can:

    - Thrive in a hydrogen-rich atmosphere

    - Possibly live in or on top of a global ocean

    - Generate large amounts of DMS—potentially thousands of times more than Earth

    The closest Earth analogy is:

    - Marine phytoplankton, particularly species like Emiliania huxleyi, produce DMS as a byproduct of breaking down DMSP, a molecule they use to regulate osmotic pressure and protect against oxidative stress.

    - If something similar is happening on K2-18 b, we'd be talking about an ocean teeming with such microbes—perhaps far denser than Earth's oceans.

    Possibly 'Giant photosynthetic mats' or sulfuric 'algae'

    If there's some landmass or floating structures, maybe the DMS producers are:

    - Photosynthetic, sulfur-metabolizing analogues to cyanobacteria

    - Living in dense floating colonies or mats like microbial reefs

    - Using dimethylated sulfur compounds in their metabolism, and leaking DMS as waste or signaling molecules

    ===========

    Of course there have been lots of ocean planets in sci-fi literature, but I'm most reminded of the 'Pattern Juggler' Planet Ararat from Alastair Reynolds' 'Revelation Space' series.

    This is incredibly exciting news!

    rsynnott(10000) 1 day ago [-]

    > Of course there have been lots of ocean planets in sci-fi literature, but I'm most reminded of the 'Pattern Juggler' Planet Ararat from Alastair Reynolds' 'Revelation Space' series.

    Erk. Couldn't you pick something from a less... apocalyptic universe? :)

    belter(63) 1 day ago [-]

    Not that exciting until they find other different biomarkers.

    Dead Comets have DMS: https://arxiv.org/abs/2410.08724

    And the interstellar medium.... 'On the abiotic origin of dimethyl sulfide: discovery of DMS in the Interstellar Medium' - https://arxiv.org/abs/2501.08892

    '...Although the chemistry of DMS beyond Earth is yet to be fully disclosed, this discovery provides conclusive observational evidence on its efficient abiotic production in the interstellar medium, casting doubts about using DMS as a reliable biomarker in exoplanet science...'

    nonethewiser(3585) about 18 hours ago [-]

    Or, megafauna. Some Leviathan in the deep.

    jmyeet(10000) 1 day ago [-]
    aurareturn(3425) 1 day ago [-]

    Even if this has 5% of being right, it should still be upvoted all the way to the top of HN. It's that important.

    andreygrehov(1663) 1 day ago [-]

    Let's assume there is alien life on many planets beyond our solar system. Now what? What's the practical benefit?

    foxglacier(10000) about 23 hours ago [-]

    Let's assume I wake up tomorrow still alive. Then what? You're basically asking what's the meaning of life.

    kstrauser(2909) about 21 hours ago [-]

    Suppose it were somehow possible to prove that alien life exists. Like, we get a radio signal saying 'hey, Earth! We see you looking at us!' that's conclusive and undeniable.

    That would upend a lot of religious teachings which say we're unique and that the world was given to us, as the unique creations of a creator, to consume for our own benefit.

    It seems like there could be many practical benefits to showing that's not true. Hey, maybe the concept of infinite exponential growth is a bad idea. Maybe we shouldn't burn the skies and boil the seas. Maybe we should be nice to other intelligent animals, at the very least.

    martopix(3517) about 20 hours ago [-]

    What's the practical benefit of Beethoven?

    skc(10000) about 22 hours ago [-]

    Every once in a while for a good chuckle I visit r/UFOs or r/aliens where people go gaga over blurry videos of balloons in the sky.

    I've never understood how that stuff seems to capture the imagination more than actual science like this.

    throwaway743(10000) about 21 hours ago [-]

    User5 on youtube.

    Phelinofist(10000) about 22 hours ago [-]

    Aren't we looking into the past when looking at things this far away? So, just assuming here, that these are indeed signs of life, would that mean that 'they' might have been primitive when these signatures were sent out into space and are now further developed?

    ChicagoBoy11(10000) about 21 hours ago [-]

    Yes, but isn't it 'just' 124 light years away. So, we're looking at it 124 years ago, which, in the scale of evolution, isn't particularly long ago?

    southernplaces7(3239) about 16 hours ago [-]

    It would be somewhat worrisome to actually find signs of primitive extraterrestrial life because of the Fermi Paradox. Given the age of the universe, and how long it took both complex life to develop on earth and for a creature such as us to emerge from that, finding life elsewhere would beg a return to Fermi's question of 'Where is everyone?' implying that something comes along and causes evolving civilizations to be exterminated before they ever show signs to their presence to the wider galaxy.

    If life, even of a very primitive sort, were found, it would stand to reason that it had done so in the past and that other civilizations, possibly even many of them, had formed in our huge galaxy long ago, giving them time to develop enough to be detectable even to us, so then, where are they?

    Then again of course, there are probably many, many known unknowns and unknown unknowns lurking amidst all of the above supposition.

    rossant(1737) about 15 hours ago [-]

    Maybe sufficiently advanced civilizations just stay under the radar to avoid being exterminated by others.





    Historical Discussions: JSLinux (April 14, 2025: 389 points)

    (389) JSLinux

    389 points 4 days ago by TechTechTech in 3512th position

    www.bellard.org | Estimated reading time – 2 minutes | comments | anchor

    JSLinux

    Run Linux or other Operating Systems in your browser!

    The following emulated systems are available:

    CPU OS User Interface VFsync access Startup Link TEMU Config Comment
    x86 Alpine Linux 3.12.0 Console Yes click here url
    x86 Alpine Linux 3.12.0 X Window Yes click here url
    x86 Windows 2000 Graphical No click here url
    x86 FreeDOS VGA Text No click here url
    riscv64 Buildroot (Linux) Console Yes click here url
    riscv64 Buildroot (Linux) X Window Yes click here url
    riscv64 Fedora 33 (Linux) Console Yes click here url
    riscv64 Fedora 33 (Linux) X Window Yes click here url



    All Comments: [-] | anchor

    skerit(10000) 4 days ago [-]

    I can't seem to get the Linux VMs running (I'm just getting a CORS error when it tries to fetch the little text file at `https://vfsync.org/u/os/buildroot-riscv64/head` for example), but the Windows 2000 one does work. Quite smoothly even.

    dvdkon(10000) 3 days ago [-]

    It only allows bellard.org, not www.bellard.org. Changing the domain loads the same webpage, but with CORS working as intended.

    tombert(10000) 3 days ago [-]

    Fabrice is amazing. The amount of stuff this guy has built is utterly incredible.

    If I built any one of the things he's built (ffmpeg, qemu, tinyc) I would never stop bragging about it. Instead, he just keeps hacking on other cool stuff.

    wruza(10000) 3 days ago [-]

    Yeah why don't we learn what he wants and just give it to him, in return he'll properly rewrite all the broken shit we have. Phones, operating systems, desktop environments, countries, appstores, etc.

    danielEM(10000) 3 days ago [-]

    100% agree, would like to meet that guy one day

    p0w3n3d(10000) 3 days ago [-]

    I love this guy. Half of the world's android development has been made easier due to his courtesy, and it's getting more (his qemu is ubiquitous)

    xorcist(10000) 3 days ago [-]

    Also the same person who wrote LZEXE, which might be familiar to people who used DOS.

    jorvi(10000) 3 days ago [-]

    Don't forget VLC! Probably his most well-known project.

    jebarker(10000) 3 days ago [-]

    I'd love to know how he chooses what to work on. I wonder if he just follows his interest?

    rmac(10000) 3 days ago [-]

    Kohei Tokunaga has the next generation of this

    https://ktock.github.io/container2wasm-demo/

    with emscripten Browser networking via fetch, or a Posix compat websocket proxy

    https://ktock.github.io/container2wasm-demo/amd64-debian-was...

    roschdal(3231) 3 days ago [-]

    JSLinux is too slow to be used for anything.

    Where is the complete source code for this?

    ofrzeta(2743) 3 days ago [-]

    On the TinyEMU page? https://bellard.org/tinyemu/

    jgtrosh(10000) 3 days ago [-]

    I find it perfect for technical interviews over screen sharing, since we test for some basic degree of ease on remote linux systems.

    s-macke(2409) 3 days ago [-]

    This emulator does basically the same but is much more speed optimized. It uses the OpenRISC architecture and even has networking. For what do you want to use such an emulator?

    [0] https://github.com/s-macke/jor1k

    someoneontenet(10000) 3 days ago [-]

    My dream is have a in browser nixos vm on wasm. If I could have a bare vm, I can bootstrap it easily with a nixos config. From there I can start thinking about running web services in browser tabs instead of physical hardware.

    londons_explore(10000) 3 days ago [-]

    Pretty sure this is possible already... What's stopping you?

    pveierland(3678) 3 days ago [-]

    Considering the extremes of prolific developers gives interesting contrast to dogmas such as 'functions/files should never be above x lines', where `quickjs.c` is 50k lines and has functions that are hundreds of lines long:

    https://github.com/bellard/quickjs/blob/master/quickjs.c

    (Obviously different approaches suits different circumstances.)

    lifthrasiir(2959) 3 days ago [-]

    The answer is simple: Bellard can recall all 50K lines of context, while most can't. I too happen to have a larger working memory and only later realized that my threshold for files and functions is way higher than most others. The dogma is only required when the file is to be read and written by multiple people.

    wiseowise(10000) 3 days ago [-]

    Because people you're working with are not Fabrice. It is easier to say "don't do X at all" than explain when it is safe to break the rule.

    Also, this would depend on language of choice. JVM, for example, might not inline function above certain threshold of bytecode instructions.

    saghul(3611) 3 days ago [-]

    I work on that codebase (we forked it off to QuickJS-ng) and while daunting at first, it's somewhat easy to work with, with the right editor! Many of them choke on such a large file, alas.

    While it being a very large file, it's sorted somewhat semantically, so it's easy to work on adding a new iterator method, for example, since they are all close to each other.

    txdv(10000) 3 days ago [-]

    I think this person creates these marvels entirely by himself. There is no need for collaboration rules.

    larschdk(10000) 3 days ago [-]

    Rather one long function than does one thing well than multiple function that are strongly coupled and difficult to reason about. Programmers who apply dogmas can be harmful.

    worewood(10000) 3 days ago [-]

    Case in point: .NET's garbage collector which is a single 54k loc C++ file.

    klarko(10000) 3 days ago [-]

    In the age of advanced IDEs/text editors with goto definition, find references/usage, fuzzy search, etc, what is even the point of multiple files?

    I never navigate by files in my code bases, it's all based on search and 'jump to' type navigation.

    tombl(10000) 3 days ago [-]

    Fabrice does a great job at building these self-contained pieces of software which often grow to have lives of their own. As a lesser known example, JSLinux's terminal emulator was forked a few times and is now known as xterm.js, which has become the predominant web embeddable terminal emulator.

    This all comes full circle, because now I'm building a true successor to JSLinux that's way faster because I've natively compiled the kernel/userspace to wasm, and of course I'm using xterm.js for the terminal emulation.

    If you like buggy demos that probably shouldn't be shared yet, you should check out https://linux.tombl.dev, but note that it's currently just a busybox shell and nothing else, so I hope you're good with `echo *` instead of `ls`.

    fsiefken(10000) 3 days ago [-]

    Awesome, I suppose it's more energy efficient then jslinux and can be run on iOS, it might be a good alternative for A-Shell or iSH. I tried it on my a MacBook, but the keyboard input doesn't register.

    agumonkey(1393) 3 days ago [-]

    is there any command working ? ps, cat, vi, ed .. they all crash (I don't know enough about embedding busybox to know what to do)

    pantalaimon(295) 3 days ago [-]

    This produces

            attempted to munmap
            ------------[ cut here ]------------
            WARNING: CPU: 3 PID: 36 at kernel/exit.c:812 0x00000000
            CPU: 3 PID: 36 Comm: sh Not tainted 6.1.132 #
            Stack:
                at vmlinux.o.__warn (https://linux.tombl.dev/dist/vmlinux-NLTKI6YG.wasm:wasm-function[278]:0x17655)
                at vmlinux.o.warn_slowpath_fmt (https://linux.tombl.dev/dist/vmlinux-NLTKI6YG.wasm:wasm-function[279]:0x1772b)
                at vmlinux.o.do_exit (https://linux.tombl.dev/dist/vmlinux-NLTKI6YG.wasm:wasm-function[329]:0x1985e)
                at vmlinux.o.task_entry_inner (https://linux.tombl.dev/dist/vmlinux-NLTKI6YG.wasm:wasm-function[154]:0x12249)
                at vmlinux.o.task_entry (https://linux.tombl.dev/dist/vmlinux-NLTKI6YG.wasm:wasm-function[153]:0x12155)
                at self.onmessage (https://linux.tombl.dev/dist/worker-MHWHWELT.js:151:53)
            ---[ end trace 0000000000000000 ]---
    
    on any command
    chjj(3639) 3 days ago [-]

    This brings back memories. I haven't looked at it in a while, but I'm glad to see the fork[1] of my fork[2] from 12 years ago is still thriving. Looks like it's been mostly rewritten. Probably for the better.

    [1] https://github.com/xtermjs/xterm.js [2] https://github.com/chjj/term.js

    apitman(519) 3 days ago [-]

    I like to say Fabrice creates side projects that others spend their entire careers maintaining.

    I knew about QEMU, ffmpeg, his LTE stuff, and QuickJS. I had no idea xterm.js started with him too.

    DyslexicAtheist(92) 3 days ago [-]

    for now I get a kernel panic due to NoScript.

    But does this support recursion? I'd like to run JSLinux in my browser and then point its Browser to https://www.bellard.org/jslinux/ which then starts another JSLinux which opens the browser on JSLinux which ...

    JSLinux isn't another Linux but a landmark of postmodern philosophy, and OP most def forgot to credit Baudrillard.

    crazy cool.

    jeroenhd(3638) 3 days ago [-]

    If you host your own OS image that auto-starts a browser that runs JSLinux and a config file like https://www.bellard.org/jslinux/alpine-x86.cfg, you can create such a link yourself. CORS may be your biggest enemy, there's no reason JSLinux can't do what you're proposing (albeit extremely slowly).

    ridruejo(1925) 3 days ago [-]

    JSLinux was our inspiration for creating Endor (https://endor.dev) and his qemu work is also powering a lot of other Wasm-related browser projects

    pveierland(3678) 3 days ago [-]

    Are there any open details on how the VM / container / WASM-native approaches are implemented?

    throwaway2037(2851) 3 days ago [-]

    Does anyone know how Fabrice Bellard gets paid? This guy's output of open source project is simply stunning. Is there anyone in his class? It is hard to compare. I assume that someone like VMWare would try to hire him, or Google to work on video codecs, V8, Chromium rendering, or ffmpeg.

    throwaway2037(2851) 3 days ago [-]

    Ok, it looks like he runs his own company: https://www.amarisoft.com/company/about-us

    keepamovin(521) 3 days ago [-]

    I have to say there are some extremely talented, creative and productive 'software artists' or ICs coming out of France. Not sure if that's a French thing (the Ecoles or whatever) or something else, but it's noticable.

    justin66(2613) 3 days ago [-]

    Can you name some that invite comparison with FB?

    ptsneves(10000) 3 days ago [-]

    Bootlin is a French company and they are a major open source contributor. I worked with them and I recommend them.

    French tech used to have a reputation for Renault old car quality, but I did not see it. Even in Renault and Citroen I came to admire them. On the other hand working with German SE is hard because they are incredibly set on not invented here. My generalisation for whatever it is worth.

    In general the issue of Europe tech scene is simple: we suck at selling and optimise for resource efficiency(competitive salary means never pay above rate no matter what). Americans optimise for growth and will risk paying for higher so they can amortise costs with growth.

    On a final note, where I come from there is lots of sneer that France is a dump due to immigration. While that is a point of view, it is definitely true they have also brain drained their colonies and have very capable productive individuals coming from there. Myself I had my master's tutor from cot-de-Ivoir and in bootlin also worked with top of the shelf engineers that have non francophone names.

    DrNosferatu(10000) 3 days ago [-]

    - What about a WASM flavor of this, Fabrice? ;)

    haunter(277) 3 days ago [-]

    Not by him but it does exist

    https://ktock.github.io/container2wasm-demo/

    patwolf(10000) 3 days ago [-]

    I played around in Windows 2000 for the first time in 20 years. I know nostalgia can be blinding, but I would go back to that UI in a heartbeat. The uncluttered taskbar, the simple start menu that isn't full of useless recommendations and ads—such a joy!

    Tepix(2905) 3 days ago [-]

    Related:

    'Windows 2000 Server named peak Microsoft. Readers say it's all been downhill since Clippy'

    https://www.theregister.com/2025/04/11/windows_2000_best_mic...

    https://news.ycombinator.com/item?id=43653421

    edoceo(10000) 3 days ago [-]

    The reason I've been on Xfce since at least 2010, it still works the same.

    I feel like open-source inherently has alignment with users and blockers to enshitification

    steeleduncan(3185) 3 days ago [-]

    I don't remotely want to use Windows 2000 again, but it is interesting to see a version of Windows where the UI was consistent. Currently it is a mishmash of four generations of GUI toolkits, some UI is in one style, some UI is another, etc, etc

    jsd1982(10000) 3 days ago [-]

    I tried to install Visual Basic 6 on it but couldn't get past SSL errors in the installed Firefox version to even download the ISO. Sad.

    a3f(10000) 3 days ago [-]

    We are using JSLinux over at https://barebox.org/webdemo to let potential users see the conveniences of the bootloader's shell without having to flash it to actual hardware.

    I am glad to see all the forks mentioned here, need to see which one runs bareDOOM best and if any have working sound perhaps..

    a3f(10000) 3 days ago [-]
    https://barebox.org/demo being the correct link..




    Historical Discussions: It's easier than ever to de-censor videos (April 15, 2025: 381 points)

    (381) It's easier than ever to de-censor videos

    381 points 3 days ago by DamonHD in 911th position

    www.jeffgeerling.com | Estimated reading time – 4 minutes | comments | anchor

    Last month I asked people to hack part of my YouTube video, specifically to de-pixelate the contents of a folder I had pixelated starting at the 4:57 mark.

    Your browser does not support the video tag.

    For years, people have used the censor tool to blur or pixelate out parts of videos where there's sensitive information. And for years, every time I've used it, I get a few comments from people saying that's not a safe way to censor information.

    So is that true?

    I wanted to find out, so I put a message saying I'd send fifty bucks to anyone who could tell me what it said under the pixelation. And you know what? Less than a day later, three people solved it, using three slightly different techniques—scary!

    This blog post is a lightly edited transcript of the following video:

    How did they do it?

    But how did they do it? I asked each of them, and they were more than happy to share. For most of us who like reverse-engineering or tinkering, it's fun to share the craft. And even more fun when it's sanctioned fun. Add on a little monetary reward, and that's just icing on the cake.

    GitHub user KoKuToru was kind enough to share an entire GitHub repo with the process and the code, along with two different ways that user tried to depixlate my footage.

    First a brute-force attempt to extract aligned images of just the window, with some code using TensorFlow to extract pixel data and aggregate it into a somewhat-fuzzy (but almost clear enough to read) picture:

    Your browser does not support the video tag.

    The idea here is the pixelation is kind of like shutters over a picture. As you move the image beneath, you can peek into different parts of the picture. As long as you have a solid frame of reference, like the window that stays the same size, you can 'accumulate' pixel data from the picture underneath.

    Due to the slight error in selecting the window by hand, the final result was slightly blotchy. For the second attempt, GIMP was used to get a better window selection algorithm with ffmpeg, and with a slight bit more data (more frames extracted), a perfectly legible result:

    Your browser does not support the video tag.

    Any way to prevent it?

    Blurring or pixelating video, especially moving video, may lead to similar results as you saw here. Years ago it would've required a supercomputer and a PhD to do this stuff. But today, between AI assistance with the trickier bits of coding, and how fast neural networks run on computers, it's easier and faster than ever to de-pixelate video!

    If there's one thing computers are good at, it's finding order in seeming chaos, like how modern tools can pull a clean voice out of a horrible recording.

    The more motion in the video, the more data points the reverse engineering has to play with. And thus, the better the confidence in the results.

    If I hadn't moved around my Finder window in the video, I don't think it would've worked. You might get a couple letters right, but it would be very low confidence.

    Moving forward, if I do have sensitive data to hide, I'll place a pure-color mask over the area, instead of a blur or pixelation effect.

    Intuitively, blur might do better than pixelation... but that might just be my own monkey brain talking. I'd love to hear more in the comments if you've dealt with that kind of image processing in the past.

    It's amazing what people can do with a neural network, ingenuity, and time.

    I guess the moral of the story is if you don't want people to read censored data... don't post it online.

    tl;dr - check out KoKoToru's de-pixelate GitHub repo for all the details on how it was done.




    All Comments: [-] | anchor

    JKCalhoun(3408) 3 days ago [-]

    Yeah, that is pretty wild.

    I recall a co-worker doing something related(?) for a kind of fun tech demo some ten years or so ago. If I recall it was shooting video while passing a slightly ajar office door. His code reconstructed the full image of the office from the 'traveling slit'.

    I think about that all the time when I find myself in a public bathroom stall.... :-/

    Agree2468(10000) 3 days ago [-]

    Line scan cameras operate on this principle, and are still used in various ways to this days. I'm especially partial to the surreal photos generated by them at the end of cycling races

    https://finishlynx.com/photo-finish-trentin-sagan-tour-de-fr...

    nkrisc(10000) 3 days ago [-]

    > I think about that all the time when I find myself in a public bathroom stall.... :-/

    Walk past a closed bathroom stall fast enough and you can essentially do that with your own eyes. Or stand there and quickly shift your head side to side. Just don't do it on one that's occupied, that's not cool.

    MisterTea(10000) 3 days ago [-]

    > His code reconstructed the full image of the office from the 'traveling slit'.

    This method is commonly used in vision systems employing line scan cameras. They are useful in situations where the objects are moving, e.g. along conveyors.

    rosswilson(10000) 3 days ago [-]

    This reminds me of https://github.com/jo-m/trainbot, a neat example of stitching together frames of passing trains to form a panorama.

    This frontend presents them nicely: https://trains.jo-m.ch

    quietbritishjim(10000) 3 days ago [-]

    Sorry if you're already aware, but in case not: The weird huge gap around the edge of cubical doors in pubic toilets is specific to the US. (For those that don't know, it's literally 1 or 2 cm.) In Europe you just get a toilet door that shuts properly and there's no slit to reconstruct.

    I remember my first visit to a toilet in the plush US office of a finance company and thinking WTF are they doing with their toilet cubicle? I only found out later that it's common there.

    nzach(10000) 2 days ago [-]

    And if you 'reverse' this idea you can make a 'holographic(?) display'[0].

    [0] - https://www.youtube.com/watch?v=ric-95ig5oE

    its-summertime(10000) 3 days ago [-]

    Speaking of, the Lockpicking Lawyer's 'Thank you' video https://www.youtube.com/watch?v=CwuEPREECXI always irked me a bit, yeah its blurred, but as can be seen, (and as was possible back then, and way before then too, recovering poor data from windowed input has been a thing for 50+ years (e.g. radio signals, scanning tools, etc), if you think about it, its a cheap way to shift costs from physical improvement to computational improvement, just have a shutter), and yet he didn't block the information out, only blurred it

    IshKebab(10000) 3 days ago [-]

    That's a totally different scenario. You can't unblur that video.

    wodenokoto(3676) 2 days ago [-]

    To save others a click: the video is a pile of customers packages with addresses ready to send.

    "It's" are the Address lines, which are blurred instead of blacked or whited out, potentially revealing customers private information.

    brunosutic(2870) 3 days ago [-]

    I like this Jeff Geerling guy.

    ge96(10000) 3 days ago [-]

    he's like THE or was THE raspberry pi guy

    formerly_proven(10000) 3 days ago [-]

    > Intuitively, blur might do better than pixelation... but that might just be my own monkey brain talking. I'd love to hear more in the comments if you've dealt with that kind of image processing in the past.

    A pixelization filter at least actively removes information from an image, a Gaussian blur or box blurs are straight up invertible by deconvolution and the only reason that doesn't work out of the box is because the blurring is done with low precision (e.g. directly on 8-bit sRGB) or quantized to a low precision format afterwards.

    danjl(10000) 3 days ago [-]

    Exactly. Do not use blur to hide information. Blurring simply 'spreads out' the data, rather than removing it. Just search (you know, on Google, without an LLM) for 'image unblur'.

    kccqzy(2074) 3 days ago [-]

    Even if the precision is low, the deconvolution process you described is still good enough to reconstruct the original text in the majority of cases.

    AdmiralAsshat(1929) 3 days ago [-]

    My Windows-98 approved method for redacting a screenshot:

    1) Open screenshot in MS-Paint (can you even install MS-Paint anymore? Or is it Paint3D now?)

    2) Select Color 1: Black

    3) Select Color 2: Black

    4) Use rectangular selection tool to select piece of text I want to censor.

    5) Click the DEL key. The rectangle should now be solid black.

    6) Save the screenshot.

    As far as I know, AI hasn't figured out a way to de-censor solid black yet.

    jebarker(10000) 3 days ago [-]

    That's going to be a lot of work for a YouTube video though

    JimDabell(2160) 3 days ago [-]

    It's possible, depending upon the circumstances. If you are censoring a particular extract of text and it uses a proportional font, then only certain combinations of characters will fit in a given space. Most of those combinations will be gibberish, leaving few combinations – perhaps only one – that has both matching metrics and meaning.

    its-summertime(10000) 3 days ago [-]

    There was a programming competition, can't remember which, similar to IOCCC but more about problematic software? where the redaction was reversible despite being pure black, due to the format chosen allowing for left over information in the image (vastly reduced quality but it was enough to allow text to be recovered!) [edit: see replies!]

    There was also the Android (and iOS?) truncation issue where parts of the original image were preserved if the edited image took up less space. [edit: also see replies!]

    Knowing some formats have such flaws (and I'm too lazy to learn which), I think the best option I think is to replace step 6 with 'screenshot the redacted image', so in effect its a completely new image based on what the redacted image looks like, not on any potential intricacies of the format et al.

    eviks(10000) 3 days ago [-]

    this method looks worse than pixelation/blurry style, those 'just' need to be updated to destroy info first instead of faithfully using the original text

    Arubis(2979) 3 days ago [-]

    What I love about this method is that it so closely matches what actual US govt censors do with documents pending release: take a copy, black it out with solid black ink, then _take a photocopy of that_ and use the photocopy for distribution.

    layman51(10000) 3 days ago [-]

    This is odd because when I follow your steps up to Step 5, the rectangle that gets cut out from the screenshot is white. I did remember to follow steps 2 and 3.

    layer8(860) 3 days ago [-]

    If you want the blurred/pixelated look, blur/pixelate something else (like a lorem ipsum) and copy it over to the actual screenshot.

    SoftTalker(3552) 3 days ago [-]

    7) Print the screenshot

    8) Scan the printed screenshot

    a2128(10000) 3 days ago [-]

    > can you even install MS-Paint anymore? Or is it Paint3D now?

    Paint3D, the successor to MSPaint, is now discontinued in favor of MSPaint, which doesn't support 3d but it now has Microsoft account sign-in and AI image generation that runs locally on your Snapdragon laptop's NPU but still requires you to be signed in and connected to the internet to generate images. Hope that clears things up

    layer8(860) 3 days ago [-]

    Don't do this on a PDF document though. ;)

    gruez(10000) 3 days ago [-]

    >2) Select Color 1: Black

    You don't need this step. It already defaults to black, and besides when you do 'delete' it doesn't use color 1 at all, only color 2.

    lynndotpy(3619) 3 days ago [-]

    Solid color would convey far less information, but it would still convey a minimum length of the secret text. If you can assume the font rendering parameters, this helps a ton.

    As a simple scenario with monospace font rendering, say you know someone is censoring a Windows password that is (at most) 16 characters long. This significantly narrows the search space!

    Retr0id(1781) 3 days ago [-]

    > AI hasn't figured out a way to de-censor solid black yet.

    I did though, under certain circumstances. Microsoft's Snipping Tool was vulnerable to the 'acropalypse' vulnerability - which mostly affected the cropping functionality, but could plausibly affect images with blacked-out regions too, if the redacted region was a large enough fraction of the overall image.

    The issue was that if your edited image had a smaller file size than the original, only the first portion of the file was overwritten, leaving 'stale' data in the remainder, which could be used to reconstruct a portion of the unedited image.

    To mitigate this in a more paranoid way (aside from just using software that isn't broken) you could re-screenshot your edited version.

    sva_(3428) 3 days ago [-]

    Maybe silly, but I'd always take a screenshot of the final thing and then paste that to a new file... just to be sure.

    al_borland(10000) 3 days ago [-]

    Back in the TechTV days one of the hosts used Photoshop to crop a photo of herself before posting it online. One would think a crop, completely removing the part of the image would be even better than solid black. However, with the way Photoshop worked in 2003, it didn't crop the embedded Exif thumbnail, which people were able to use to get the uncropped image.

    il-b(10000) 2 days ago [-]

    ...somehow, it uses 99.9% opacity for the fill...

    Funes-(862) 3 days ago [-]

    Japanese porn is being 'decensored' with AI as we speak, in fact. It looks a tad uncanny, still, but finding a 'decensored' clip in the wild was quite the thing for me a couple of weeks ago.

    internetter(10000) 3 days ago [-]

    This is a completely different process — the AI is inferencing what goes there, it isn't actually using any information from the pixels so it wouldn't work in this case.

    Not to mention deeply and disturbingly unethical

    zoky(10000) 3 days ago [-]

    I also have a network share named "mercury" connected to my Mac, and that last example nearly made me shit myself.

    geerlingguy(249) 3 days ago [-]

    Ha! I name most of my shares after celestial bodies... Jupiter is the big 100 TB volume for all my archives. Mercury is an all-NVMe volume for speed, for my video editing mostly.

    HPsquared(10000) 3 days ago [-]

    I wonder how much random noise (or other randomness) would have to be added to the pixelated version to make this method unusable.

    miki123211(1034) 3 days ago [-]

    If you really want that blur effect so badly, you can just replace your content with something innocuous, and then blur that innocuous content.

    This is what you actually have to do with websites, e.g. when you want some content blurred when it's behind a paywall. If you leave the original text intact, people can just remove the CSS blur in dev tools.

    Some implementations get this slightly wrong, and leave the placeholder content visible to accessibility tools, which sometimes produces hilarious and confusing results if you rely on those.

    wlesieutre(10000) 3 days ago [-]
    > If I hadn't moved around my Finder window in the video, I don't think it would've worked. You might get a couple letters right, but it would be very low confidence.

    > Moving forward, if I do have sensitive data to hide, I'll place a pure-color mask over the area, instead of a blur or pixelation effect.

    Alternately - don't pixelate on a stationary grid when the window moves.

    If you want it to look nicer than a color box but without giving away all the extra info when data moves between pixels, pixelate it once and overlay with a static screenshot of that.

    For bonus points, you could automate scrambling the pixelation with fake-but-real-looking pixelation. Would be nice if video editing tools had that built in for censoring, knowing that pixelation doesn't work but people will keep thinking it does.

    geerlingguy(249) 3 days ago [-]

    That's another good way to do it.

    I wonder if it might be good for the blur/censor tools (like on YouTube's editor even) to do an average color match and then add in some random noise to the area that's selected...

    Would definitely save people from some hassle.

    IshKebab(10000) 3 days ago [-]

    Yeah this scenario is purposefully chosen specifically to make this attack possible. It's basically irrelevant in the real world.

    42lux(10000) 3 days ago [-]

    Bad blackout jobs are in the news since the 50s and every time an expert tells the same solution. If you want to censor something remove the information.

    nightpool(10000) 3 days ago [-]

    Easier said than done if you're using a proportional font though

    lynndotpy(3619) 3 days ago [-]

    > Years ago it would've required a supercomputer and a PhD to do this stuff

    This isn't actually true. You could do this 20 years ago on a consumer laptop, and you don't need the information you get for free from text moving under a filter either.

    What you need is the ability to reproduce the conditions the image was generated and pixelated/blurred under. If the pixel radius only encompasses, say, 4 characters, then you only need to search for those 4 characters first. And then you can proceed to the next few characters represented under the next pixelated block.

    You can think of pixelation as a bad hash which is very easy to find a preimage for.

    No motion necessary. No AI necessary. No machine learning necessary.

    The hard part is recreating the environment though, and AI just means you can skip having that effort and know-how.

    cogman10(10000) 3 days ago [-]

    In fact, there was a famous de-censoring that happened because the censoring which happened was a simple 'whirlpool' algorithm that was very easy to unwind.

    If media companies want to actually censor something, nothing does better than a simple black box.

    thehappypm(10000) 3 days ago [-]

    this gets exponentially harder with a bigger blur radius, though.

    nartho(10000) 3 days ago [-]

    Noob here, can you elaborate on this ? if you take for example a square of 25px and change the value of each individual pixels to the average color of the group, most of the data is lost, no ? if the group of pixels are big enough can you still undo it ?

    bob1029(10000) 3 days ago [-]

    It would seem techniques like this have been used in domains like astronomy for a while.

    > The reconstruction of objects from blurry images has a wide range of applications, for instance in astronomy and biomedical imaging. Assuming that the blur is spatially invariant, image blur can be defined as a two-dimensional convolution between true image and a point spread function. Hence, the corresponding deblurring operation is formulated as an inverse problem called deconvolution. Often, not only the true image is unknown, but also the available information about the point spread function is insufficient resulting in an extremely underdetermined blind deconvolution problem. Considering multiple blurred images of the object to be reconstructed, leading to a multiframe blind deconvolution problem, reduces underdeterminedness. To further decrease the number of unknowns, we transfer the multiframe blind deconvolution problem to a compact version based upon [18] where only one point spread function has to be identified.

    https://www.mic.uni-luebeck.de/fileadmin/mic/publications/St...

    https://en.wikipedia.org/wiki/Blind_deconvolution

    dopadelic(10000) 3 days ago [-]

    This makes sense for blurring, but not for pixelation mosaicking.

    vault(10000) 3 days ago [-]

    I noticed the link in Jeff's post to RX 10 Elements Noise Reduction. The audio in their YouTube presentation was not horrible at all though. Has anybody tried it with some real horrible recording? Like those from a blink mini camera in a room without furniture.

    geerlingguy(249) 3 days ago [-]

    I have, I was going to go for a more extreme example but couldn't find one quickly on their channel.

    It's not perfect, by any means, but you can get intelligible speech from a pretty terrible recording at least. Adobe has their AI assist tool too, it works pretty well though I've found it can't isolate a speaker when there are a lot of other people talking nearby.

    taf2(3076) 3 days ago [-]

    Giving the final image at 13 seconds to ChatGPT and I wonder if this is pretty close... https://x.com/taf2/status/1912260125278032228

    istjohn(10000) 3 days ago [-]

    It's clearly not. In the original screenshot there are 6 files with the prefix 'I.2J', but in the GPT version, there are only four.

    feverzsj(10000) 3 days ago [-]
    netsharc(10000) 2 days ago [-]

    I once thought the publishers of those videos would use a reversible algorithm, as malicious compliance...

    Or having the pixelated parts be a particular pattern, and then releasing an XOR video to get the original footage..




    (380) Intuit, Owner of TurboTax, Wins Battle Against America's Taxpayers

    380 points about 9 hours ago by leotravis10 in 873rd position

    prospect.org | Estimated reading time – 7 minutes | comments | anchor

    For nearly three decades, a cold war has raged through the halls of Congress and in high-end shellfish restaurants perched precariously on Washington, D.C.'s southern coast. The battle lines have shifted between successive administrations, sometimes tilting toward proletariat victory, and sometimes cutting fast toward total surrender to corporate America.

    This month, thanks to the whims of the president and hefty sums of cash, Donald Trump has amended an old axiom to guarantee that nothing in life is certain but death, and paying money to file your taxes.

    According to a report by the Associated Press this week, the IRS is moving to shut down its free tax filing program known as Direct File, with employees working on the program told to stall work on future iterations. The news comes after Intuit, the maker of TurboTax and the biggest player in tax preparation software, spent years tirelessly fighting any attempt by the government to bring the nightmarish American system of tax collection into line with European nations that have streamlined most citizens' filing process down to the click of a button.

    More from Daniel Boguslaw

    Even when the Biden administration broke through in the Inflation Reduction Act to fund a pilot program for Direct File, which expanded to 25 states this tax season, Intuit didn't stop fighting. Instead, it continued cajoling lawmakers and the White House into forcing millions of Americans to shell out hundreds, sometimes thousands, of dollars to file with expensive and confusing tax prep software.

    A glance at Intuit's 2025 first-quarter lobbying disclosures gets at this continued, quarter-century saga. The company shelled out $240,000 to lobby members of Congress on tax-related issues. Forty thousand dollars was doled out to Raffaniello & Associates to curry favor on issues like "Tax Administration & tax system integrity" and "Regulation of tax return preparers." It also lobbied on implementation of Public Law 117-169, which is the statute that created IRS Direct File.

    Jake Perry + Partners received $30,000 to lobby on the same issues, including personal outreach to Elon Musk's lackeys in Congress. According to the firm's filing, at least part of that money was spent on "Communications with DOGE Caucus members regarding tax simplification, waste, fraud and abuse."

    Wilmer Cutler Pickering Hale and Dorr LLP, a law firm targeted with legal sanction by the Trump administration for employing special counsel Robert Mueller, received $60,000 for its work on behalf of Intuit. Its services included advocacy to "Enhance tax administration and tax system integrity" and "support tax simplification and voluntary compliance." WilmerHale is suing the Trump administration over attacks on their firm, while also cozying up to Republicans to make tax filing more expensive. Money talks.

    Intuit shelled out $240,000 to lobby members of Congress on tax-related issues in the first quarter of 2025.

    This work has paid off. In December, 29 House Republicans wrote to then-President-elect Trump at Mar-a-Lago, asking him to end Direct File on day one. A report from Public Citizen showed that these lawmakers have received $1.8 million in campaign contributions from opponents of Direct File over their political careers.

    The relatively paltry first-quarter lobbying sum pales in comparison to the big kahuna spend that Intuit made last year: a direct payment to Trump's inaugural committee. As Politico reported in December, Intuit handed Trump $1 million for inaugural festivities that were eventually sent indoors due to bad weather. This was a common bribe-like substance from corporate America intended to show fealty to Washington's new overlords.

    A company spokesperson told Politico that the donation was "part of our decades-long commitment to bipartisan advocacy ... Intuit is committed to ensuring our customers' voices are heard on important issues, and our expanded participation in the democratic process reflects our growth as a company and the variety of policy issues that impact the approximately 100 million diverse consumers and businesses we serve."

    "Congratulations to President @realDonaldTrump and Vice President @JDVance on your inauguration," Intuit CEO Sasan Goodarzi, who made $27 million last year, tweeted on January 21st. "We encourage Washington to promote innovation to strengthen small businesses that are the backbone of the economy and to simplify the tax code to help Americans prosper."

    Intuit certainly knew the importance of persuading Trump to ditch the IRS free filing program. In its quarterly financial statement to investors, Intuit listed among its risk factors "increasing competition from the public sector," specifically IRS Direct File, which "could expand with increased awareness of and government support for the program ... federal and state governments are or could become publicly funded direct competitors of the U.S. tax services industry and of Intuit. Government funded services that curtail or eliminate the role of taxpayers in preparing their own taxes could potentially have material and adverse revenue implications on us."

    They should have been scared. Customer satisfaction with Direct File was high, with over 90 percent of users ranking it as excellent or above average in surveys.

    In 2019, ProPublica published an extensive investigation into Intuit's efforts to safeguard a business model it long marketed as consumer-friendly, despite the millions of dollars lifted off of everyday Americans attempting to file their taxes on time. Intuit focused on carrying out two simultaneous objectives to ensure a maximum windfall: "stoking innovation in Silicon Valley while stifling it in Washington." In a confidential document obtained by ProPublica, Intuit outlines the maneuvers it undertook from 1997 to 2006 to block any attempt at making tax filing cheaper and easier for consumers. "For a decade proposals have sought to create IRS tax software or a ReturnFree Tax System; All were stopped," the title slide reads.

    Since 2002, Intuit and other tax preparation services have been legally required to offer a free private-sector version of what the government should have built and provided all along. But Intuit's playbook has been to create a booby-trapped version of its expensive software, with embedded code that once hid the free offering from search engines like Google, making it exceedingly difficult for those seeking free filing to discover.

    In 2023, Intuit was forced to pay out over $100 million in a multistate class action lawsuit that accused the firm of tricking customers into overpaying for services that the firm is legally required to offer for free. 4.4 million consumers nationwide received checks as the result of the multistate settlement. "By requiring consumers to pay for tax-return services that should have been available for free, Intuit cheated taxpayers out of their hard earned money," then-Pennsylvania Attorney General Michelle Henry said at the time. "Intuit's deceptive practices and aggressive advertising campaign were unnecessary and illegal; especially when the IRS offers free tax-return services for eligible consumers."

    On April 15, tax filing day, Sen. Elizabeth Warren (D-MA), long a sworn foe of for-profit tax filing companies, slammed the Trump administration for its failures to simplify the filing process.

    "Despite Treasury Secretary Bessent's promise to keep Direct File going through the 2025 tax filing season, the long-term future of the program continues to be threatened, in no small part due to Intuit's lobbying," Warren wrote. "Intuit has spent nearly $4 million in 2023 and again in 2024 attempting to kill the program. During the 2024 election cycle, Intuit joined other commercial tax preparation companies to make large donations to Republican congressmembers who later worked to eliminate Direct File."

    Yet after tens of millions in lobbying, hundreds of millions in lawsuits, and a cool million for Trump's inauguration, it seems that Intuit's ceaseless spending has paid off.




    All Comments: [-] | anchor

    mandeepj(10000) about 6 hours ago [-]

    Mr. Bessent (Treasury Secretary) was repeatedly asked during his confirmation hearing whether he would protect DirectFile and he said 'Yes' :-)

    A small snippet of that conversation. The video recording has much more details -

    Do you agree with the Government Accountability Office's (GAO) report finding that the Direct File pilot was successful and should be expanded?

    Answer: As noted during the hearing, I commit that for this tax season, Direct File will be operative to prevent any disruptions for taxpayers. And if confirmed, I will consult and study the program and understand it better, and evaluate whether it works to serve the best interests of taxpayers.

    From page 36 at https://www.finance.senate.gov/imo/media/doc/responses_to_qu...

    So he evaluated not to expand :-(

    lolinder(2685) about 6 hours ago [-]

    That's not a Yes, that's a pretty clear No. You just don't speak fluent Politician.

    atrettel(10000) about 8 hours ago [-]

    Regardless of what happens to Direct File, I recommend people learn how to do their tax returns by hand. I do it by hand every year. Yes, it is tedious, but I am not beholden to anyone and I don't need a 'product' (paid or otherwise) to solve it for me. It takes me between 10 to 15 hours per year for both my federal and state tax returns. That is all. Once you get a hang of it, it is not that bad.

    (I recognize that not everyone can do this, but if you have the technical skill to handle the math, I do still recommend it.)

    SoftTalker(3552) about 8 hours ago [-]

    I do the same thing. There's a free spreadsheet that is a great help, you can search for it.

    vel0city(10000) about 8 hours ago [-]

    10-15 hours? Turbo Tax usually costs me like $50 or so after discounts through my bank and I can nail out my taxes in under an hour with all it can auto-import in my situations. If it saves me 14 hours of labor its definitely worth $50 to me, and I'd say I'm massively overpaying compared to the free filing tools out there!

    It shouldn't be this hard.

    kamranjon(10000) about 8 hours ago [-]

    Would love to read a blog post on this. 10 - 15 hours is probably too much but I bet if I learned how to do it I could figure out how to optimize it with all the tools that are available today. Would love if TurboTax just died because everyone figured out they could do taxes on their own with just a little supplemental help from local models or something similar.

    2muchcoffeeman(10000) about 8 hours ago [-]

    Do you have a more complicated return eg other income, investments, etc or is this the average of how long it takes?

    That's insane.

    tombert(10000) about 8 hours ago [-]

    I have the technical skill to handle the math, but there is no way that I'm spending fifteen hours to do my taxes when there's a free-to-low-cost thing readily available that will do a similar or better job in like 45 minutes.

    I used CashApp taxes this year, and I liked it. It was actually free and it didn't do any upsell in the process.

    zingerlio(3176) about 8 hours ago [-]

    I second this. Although I only hand filed for two years and then transitioned to FreeTaxUSA. The benefit is that after going through their wizard/interface, I can confidently check the generated IRS forms to make sure it's filled to my intent.

    whyenot(3590) about 8 hours ago [-]

    It took me 58 minutes to do my not that simple taxes (both state and federal) using Turbo Tax. The cost was about $200. Based on your time estimate, it saved me 1-2 work days of time. That seems like a good bargain to me.

    What I don't like with Intuit is the sleazy ways they try to upsell you and to trick you into allowing them to use your financial information for non-tax purposes.

    neilv(3544) about 8 hours ago [-]

    Your mileage may vary. I did taxes by hand for a few years, probably 20+ hours each year, every hour stressful.

    For example, at some point, I'm fatigued and surprised how much work it was thus far, but I think I can see the finish line on the horizon, but then one line in a form triggers a cascade of additional schedules and many more hours.

    Then, finally, the federal forms are done, and it's a stack... And the state forms are somehow not just a 1-pager of quickly copying key numbers from federal 1040, but seem (subjectively) to more than double the work, and produce a second stack.

    The last 2 tax years, I decided it was a really unhealthy amount of stress, so I've bought TurboTax Home & Business. I run it in a KVM instance that gets airgapped, on principle, so my data doesn't get sent to corporate surveillance capitalism HQ.

    Though I don't assume that TurboTax in airgapped VM will keep working every year. But, hopefully, before they inevitably break it some year, and I'd have to do taxes by hand again, I will be killed by a crocodile.

    chickenzzzzu(10000) about 8 hours ago [-]

    This is equivalent to compiling every package from source for your Linux install. You don't end up learning too many useful things, all you've done is a very repetitive tedious task that doesn't give you much financial return.

    fooker(10000) about 8 hours ago [-]

    It's easy if you just have W2 income.

    If you have multiple brokerage accounts, RSUs from an employer or two, maybe some consulting income, etc, it's annoying and tedious.

    And if you have a business, doing it by hand basically means you'll overpay by a good extent.

    kazinator(10000) about 8 hours ago [-]

    I do my (Canadian) taxes by hand also, but not exactly.

    I calculate all the fields using my homebrew software. All calculations are done there.

    The software produces a report which is organized by form and field. I can go through it and just put the indicated values into the right forms.

    The forms are fillable PDFs. I copy and paste most of the values.

    The last few years, I had perfect results; no correction came back from the Canada Revenue Agency.

    This year, that d1ckhead Justin Trudeau left us with a surprise; complications to the Canada Pension Plan. Something like a 40% of all the line items in my tax calculation are from the new CPP schedule 5. It has multiple brackets now. I had to redo that section of my system (redefine the model). That is tedious. Anything same as last year is a breeze.

    I had to model a whole new form this year since I worked for two employers and overpaid EI (employment insurance). The common forms handle CPP overpayment. For EI overpayment there is no 'heads up' in the workflow at all. Since there is a deduction for EI payments, you have to do it right; you can easily screw up and naively calculate and claim the overpayment, while keeping the deduction calculated from the the overpaid amount.

    Anyway, when I used to work with just pen and calculator, it took me about, oh, a bit over an hour or so. 10 to 15 hours seems crazy for personal tax. Is this for a moderately complicated corporation, where you're saving money by not hiring an accountant?

    chneu(10000) about 7 hours ago [-]

    10-15 hours is insane. What are you doing?

    I do my personal, my 2 LLCs in under 2 hours. I also do my roommates W2 which takes 10 minutes. The whole thing costs like $35.

    Seriously, how does it take you 10+ hours? I do not understand at all. Lol

    chrismcb(10000) about 7 hours ago [-]

    Why? Why do you recommend it? What does one gain by doing it themselves?

    DeepYogurt(10000) about 7 hours ago [-]

    I file my own too, but we live in 2025 though. We deserve some civility

    furyofantares(10000) about 7 hours ago [-]

    Excellent advertisement for turbo tax. Luckily there's lots of more normal replies to this.

    gostsamo(3330) about 7 hours ago [-]

    There is an old russian tv series called Kitchen where the mc starts his job by stripping the labels from bananas. On the question 'what is the sense of that', he gets the answer 'for balance in the universe - somewhere there, there is someone else putting the labels on the bananas'.

    Spending 15 hours on filling data that the government mostly knows and can calculate is exactly one of those balancing acts of the universe that nobody needs.

    yoyohello13(10000) about 7 hours ago [-]

    My greatest pet peeve in life is when people make ME work to pay THEM money. I don't understand why the gov can't just tell me how much I owe and I pay.

    wyclif(385) about 7 hours ago [-]

    Maybe I'm just out of touch because I haven't done taxes by hand for years, but 10 to 15 hours? After I read your first sentence, I seriously expected you to say 2-3 hours.

    I don't doubt that it really takes that long for you. I just think it's ridiculous that anyone should spend that amount of time on something that should be a lot more simple, streamlined, and efficient.

    gblargg(10000) about 7 hours ago [-]

    Free tax programs are what allow taxes to become so complex that you need a program (or paid CPA) to help fill them out. I refuse to have to get a program to fill them out.

    A big benefit of filling out yourself is knowing how to minimize the tax burden. Using a program or CPA you never really understand how tax is calculated and the tax consequences of various financial choices you make throughout the year.

    beej71(10000) about 7 hours ago [-]

    I recommend this, as well, especially if you have repetitive taxes.

    I spend just a few hours doing taxes by hand when it's are similar to the previous year. With an accountant, I have to spend a bunch of time getting things ready, anyway. I only pay them when something weird happens.

    Also, fuck Intuit.

    bigfatkitten(10000) about 7 hours ago [-]

    My Australian tax return takes me about 20 minutes.

    The system prefills 99% of the details that they've obtained from my employer, bank, health insurer, stock broker etc directly. All I need to do is fill out my deductions from a running spreadsheet I've maintained throughout the year.

    jmward01(10000) about 8 hours ago [-]

    I believe companies should use every inch of leeway in existing laws to do business. It isn't evil, it is rational. However, I believe evil companies are the ones that attempt to change laws to do business. Businesses should not have a voice in law. Intuit is an evil company and they are making the lives of every person in the US worse in order to make a profit.

    chasing(10000) about 7 hours ago [-]

    > I believe companies should use every inch of leeway in existing laws to do business.

    No. You can do things that are immoral, harmful, predatory, and generally shitty while still being perfectly legal.

    And people who want fewer regulations hampering businesses need to realize that this only works if businesses work within ethical guidelines that are not mandated by law. Otherwise the government will need to step in and protect people.

    But to reiterate: Just because it's legal doesn't mean it's not evil.

    oblio(1840) about 7 hours ago [-]

    Guess what, all medium to big sized companies bribe their way to changing laws (lobbying).

    Corporations need to be redefined to serving society first, a sort of Prime Directive.

    smt88(10000) about 7 hours ago [-]

    > I believe companies should use every inch of leeway in existing laws to do business.

    So dark patterns are good? It was good for cigarette companies to discover tobacco is addictive and take advantage of that by selling cigarettes to kids?

    After all, this was legal until people fought a brutal grassroots war against tobacco companies to fix it.

    yoyohello13(10000) about 7 hours ago [-]

    I think companies should focus on being helpful to humanity instead of being profit maximizing machines, but that probably won't happen in my lifetime.

    maronato(10000) about 6 hours ago [-]

    This take is inconsistent. Lobbying is perfectly legal, so Intuit isn't being evil, just being rational.

    Companies, like people, can be evil while not committing any crimes. Intuit is not even that evil when compared to most larger companies in the US. We only remember it exists during tax season.

    The really evil companies manipulate markets, evade labor laws, crush unions, exploit vulnerable users, enable authoritarian surveillance, trivialize wars.

    All without breaking a single law.

    beej71(10000) about 8 hours ago [-]

    Oregon made its own turbo tax competitor and it's great, and getting better every year. I was really looking forward to Direct File. (I used an accountant this year so I didn't get my chance.) Back to filing my own returns by hand next year.

    Thank you, DOGE brainiacs who decided I had to keep doing it the inefficient way.

    adgjlsfhk1(10000) about 7 hours ago [-]

    Massachusetts also has a really good website for online filing (unfortunately state taxes only).

    DeepYogurt(10000) about 7 hours ago [-]

    Cali too

    mvdtnz(10000) about 6 hours ago [-]

    USA doesn't need a TurboTax competitor (of which there are many - I worked for one which struggled in the US market). It needs reform. TurboTax should be unnecessary.

    wnevets(10000) about 8 hours ago [-]

    The average tax payer takes the standard deduction and doesn't require anything special. There is absolutely no reason for this process to be privatized for the typical American.

    nbbaier(10000) about 8 hours ago [-]

    It's INCREDIBLY infuriating to me that it is.

    jimbob45(2509) about 7 hours ago [-]

    There's also no reason for anyone not to make coffee at home with what affordable modern coffee machines can do but Starbucks remains in business, against all odds.

    krupan(3151) about 7 hours ago [-]

    My dream is that the government puts Intuit out of business by massively simplifying tax laws, but I am most definitely not holding my breath

    blasphemers(10000) about 6 hours ago [-]

    This is the way

    Ericson2314(10000) about 6 hours ago [-]

    It is easier for them to do that after they put Intuit out of business with Direct File.

    irrational(10000) about 7 hours ago [-]

    Use Free Tax USA. Federal is free. If you need to file state, it is $15. I've used it for years and it works great. For a number of years I prepared my taxes on both Turbo Tax (without actually paying for it) and Free Tax USA. They always came up with the same numbers.

    kristopolous(3570) about 7 hours ago [-]

    Second this. Been using them for years. Took under an hour.

    Never give money, business or data to Intuit

    jmathai(3368) about 7 hours ago [-]

    I prefer Free Tax USA over Turbo Tax. Switched several years ago and haven't looked back.

    The last 2 years, I paid the $8 for chat support to answer some questions I had and both times their answers saved me a lot more than the $8. Very knowledgeable and can see my numbers to give me specific guidance and answers.

    jolt42(10000) about 7 hours ago [-]

    Wish I hadn't been funding Intuit after using FreeTaxUSA this year. Maybe the import isn't as great, but I found it overall a bit more intuitive than TurboTax

    metadat(287) about 7 hours ago [-]

    Does it handle RSUs?

    lolinder(2685) about 7 hours ago [-]

    I find that FreeTaxUSA has a much better interface than TurboTax. They don't play games with fake loading screens needlessly making you wait (when we both know that the math involved takes just a few CPU cycles) and make the whole experience much easier with fewer upsells, but the biggest deal for me is that they're far more transparent about how everything maps to the underlying documents.

    TurboTax wants you to be scared of the tax forms, so they make it really hard to see what it is that you're actually doing and signing. FreeTaxUSA actively encourages you to look at and understand the forms you're filling out and signing at every step of the way. After a few years with them I actually feel that I could fill out my taxes by hand, but I don't want to because their interface is a genuine improvement on the tax forms, as opposed to TurboTax's which is very much not.

    My understanding of the tax code has shot up dramatically since switching to them, and I feel much safer submitting taxes now than I ever did with TurboTax because I understand every single line I submitted.

    el_benhameen(3591) about 6 hours ago [-]

    Another vote for Free Tax USA. I'm angry that free file is gone, but these folks seem like they care about the craft of building good software and good interfaces, and I'm happy to pay them for the state return even though it's easy enough to just copy and paste into the state website.

    mmooss(10000) about 6 hours ago [-]

    What is the security story, including confidentiality and their capability to secure your information?

    Edit: A partial answer: https://news.ycombinator.com/item?id=43724779

    abawany(2347) about 3 hours ago [-]

    There is also Open Tax Solver (https://opentaxsolver.sourceforge.net/), which has been available since 2003.

    aorth(3519) 3 minutes ago [-]

    I've heard about this for a few years but never tried. Do they handle like if you have rental income, foreign bank accounts, and other complications? Thanks!

    dmart(2420) about 9 hours ago [-]

    I used Direct File this year. Super fast and simple, no upsells or bullshit. Feels like every little thing just gets worse and worse lately.

    cardamomo(2366) about 8 hours ago [-]

    Feels like every big thing just gets worse and worse too.

    rootsudo(10000) about 9 hours ago [-]

    $240,000 is really inexpensive in the end. Makes you wonder why most Companies aren't; If not already are doing the same.

    hrldcpr(1809) about 9 hours ago [-]

    The article does also mention other bribes they've given recently, including $1 million to Trump

    Jtsummers(2180) about 9 hours ago [-]

    That's just 2025 Q1 lobbying money. They've been at this for quite a while and spent a lot more than just $240k. They just finally got an administration in office that's openly willing to make the government less efficient and less cost effective.

    cortesoft(10000) about 9 hours ago [-]

    That's what I was thinking... I would expect the lobbying to be way more of their budget, since their entire business model depends on keeping the status quo.

    astrange(3628) about 9 hours ago [-]

    Because that's not why it happened. There's just an assumption in American politics that whenever anything bad happens it's because of 'corporations' and not ideology.

    Republicans are against easy tax filing because Grover Norquist makes them all sign pledges against it, not because of lobbying.

    alephnerd(3583) about 8 hours ago [-]

    Most companies do lobby - they just prefer donating to industry coalitions, because it helps reduce the chances of negative press one way or the other.

    That said, ime the RoI isn't that hot for the amount of time and effort spent, as relationships do matter more than money as some point political donations have diminishing returns.

    tmshapland(10000) about 8 hours ago [-]

    yes, so true! Even if they've been spending around that much every year, it's still an amazingly good ROI for Intuit to pay off lawmakers.

    stevenpetryk(10000) about 9 hours ago [-]

    FreeTaxUSA only cost me like... $20? in California this year and had very little upsells. Highly recommend!

    linsomniac(10000) about 8 hours ago [-]

    I've used it the last couple years and I've been happy with it.

    j_bum(10000) about 8 hours ago [-]

    Another +1 for FreeTaxUSA. This is my second year using it, and I think they do a great job. It's more "hands on", but I think they offer a strong value.

    fracus(10000) about 8 hours ago [-]

    That's a strange name for something that cost money.

    ativzzz(10000) about 8 hours ago [-]

    Same, been using it for years. $15 for state tax, free if your state has no income tax

    tombert(10000) about 8 hours ago [-]

    I was actually fairly impressed with CashApp taxes. It seemed to work fine, it handled my State and Federal taxes just fine. Granted, I don't think my taxes are terribly complicated, but I think they're comparable to a vast number of users.

    CashApp taxes is free and had zero upsell. I don't know what information they are farming out of this and if I did it might end up disturbing me, but at least it's free and was easy to use.

    haberman(3353) about 8 hours ago [-]

    I also found that FreeTaxUSA helped me understand my taxes better. A few areas where TurboTax performed some calculation automatically, FreeTaxUSA made me aware that I had eg. maxed out a particular deduction, in a way that helped me change my behavior accordingly.

    jasonriddle(10000) about 8 hours ago [-]

    Just so that you are aware, TaxHawk (which owns and operates FreeTaxUSA) may choose to sell your information in the event of a 'business transition' (bankruptcy, merger, etc)

    From https://www.freetaxusa.com/privacy

    >> Business transitions

    > In the event TaxHawk evaluates or conducts a business transition whether as a going concern or as part of bankruptcy, liquidation, or similar proceeding, such as a merger, being acquired by another company, or selling a portion of its assets, the personal information of users will, in most instances, be part of the assets transferred. Users will be notified, via email and/or through a prominent notice on the FreeTaxUSA.com website, 30 days prior to a change of ownership or control of their personal information.

    nbbaier(10000) about 8 hours ago [-]

    This is what my wife and I used this year and it was a great experience!

    temporallobe(10000) about 8 hours ago [-]

    This may be confusing as e-file is not the same as DirectFile and it should have little or no impact to most taxpayers since you can still always file your taxes for free. DirectFile is just an in-house "competitor" to software such as Turbotax and is only available if you made less than $250k jointly. BTW I've been using FreetaxUSA for about 10 years with no issues.

    pastage(10000) about 7 hours ago [-]

    That is more than 85% of households being cheated by a non progressive tax.

    twothreeone(10000) about 7 hours ago [-]

    I must be missing something.. why is nobody mentioning Free File Fillable Forms? I use it every year.. it's great! Super easy and seems like completely separate from both Direct File and Free File options.

    somat(10000) about 6 hours ago [-]

    FFF is... ok... I guess.

    It does bother me that if you watch your network requests, You find out it is an intuit product. I mean the IRS has one job, to receive taxes, why do I have to go through a third party company that I do not trust, to do this.

    As backwards and stupid as it is in this internet enabled age, I still file paper forms. At least until the irs can get it's online act together, (Based on the information in the parent article, this may be never)





    Historical Discussions: How many supernova explode every year? (April 12, 2025: 367 points)

    (367) How many supernova explode every year?

    367 points 6 days ago by rbanffy in 11th position

    badastronomy.beehiiv.com | Estimated reading time – 7 minutes | comments | anchor

    Blog Jam

    [Of course I picked this one to highlight because the title made me laugh. From Tuesday's article. Credit: S. Safi-Harb et al (2022)]

    Astro Tidbit

    A brief synopsis of some interesting astronomy/science news

    I also mentioned it had a supernova in it, called SN2021 afdx. And I have to say, when I first saw that designation I actually muttered an obscenity or two under my breath.

    Why? Because it's all in the name.

    Way back when, supernovae — exploding stars — were named after the year they were seen, or maybe given the name of the astronomer who described them. That's how we get Tycho's Supernova, and Kepler's Supernova, which are also called SN 1572 and SN 1604 since that was the year they were seen.

    That was fine when naked eye supernovae occurred once a century or so. But then we did something irritating: We invented telescopes.

    To compound that we also invented photography, allowing long exposures to reveal fainter objects. And suddenly instead of once a century astronomers were seeing several supernovae per year, occurring in distant galaxies to faint to have been seen earlier!

    Rings of gas around the exploded star Supernova 1987A, which is the blob in the middle of the central ring. I studied that bright inner ring for my degree. Credit: Jason Pun (NOAO) and SINS Collaboration

    Anyway, the second supernova seen in 1987 was 1987B, and the third 1987C, and so on. If, in a given year, more than 26 supernovae are seen, then the 27th is given the year plus the letters aa (yes, A-Z are capital letters but then the ones after are lower case, because astronomers are nothing if not maddening even when trying to codify naming conventions logically), the 28th would be ab, etc. The 52nd supernova of that year would be az, and so the 53rd would be called ba.

    If, at some unlikely point in the future, the naming convention people reasoned, we actually found 26 + 26 x 26 = 702 (26 for the single letters, then 26 x 26 for all the doubles) supernovae in a single year (like that would ever happen) then the 703rd would be SNXXXXaaa. Und so weiter.

    Flash forward a few years. Our telescopes and cameras are not only way way better than they used to be, we now have robotic telescopes surveying huge chunks of the night sky automatically and software that analyzes the images looking for things that change from night to night, like, say, a supernova getting brighter. They discover a lot of supernovae that way.

    And that brings me back to the Cartwheel. The supernova found in it was seen in late November, so there was nearly a whole year's worth of explodey stars seen before it.

    And it's designated SN2021 afdx.

    That means a whole lot of supernovae were seen that year before it. How many?

    Yeah, there's math. 26 for the single letters gets you to z, and 26 + (26 x 26) = 702 gets you to zz. That means 26 + (26 x 26) + (26 x 26 x 26) = 18,278 gets you to zzz.

    Still with me? The next one is aaaa, and that would be the 18,279th. To get to abaa would take 26 x 26 more, or 18,955. You have to do that four more times to get to afaa, or 18,955 + 4 x 26 x 26 = 21,659. You have to go through all 26 letters three more times to get to afda, or 21,659 + (3 x 26) = 21,737. Then finally, 23 more letters to get to afdx.

    That means — assuming I did this math right, and I have maybe a 50/50 chance of that — SN2021 was the 21,760th supernova seen in that year*.

    Twenty-one thousand seven hundred and sixty. Wow. That's a whole lotta stars blowing up.

    And that is why I swore when I saw the Cartwheel supernova's designation. 21,760! In one year. That number is so high I thought I must be wrong, but I found this page that calculates the totals, and it says there were 21,081 supernovae seen in 2021. These are candidates, actually, only some of which are confirmed, and so many are seen every night that the discrepancy between 21,081 and 21,760 is understandable — probably just a cataloguing issue.

    The point being, we are now finding tens of thousands of supernovae every year!

    That's the 54th day of the year, and it was the first one seen.

    November 23, 2021 was the 327th day of that year. If we take 21,760 as the total seen by then, that means there were, on average, 66.5 supernovae seen per day in 2021. By February 23 of that year, that average works out to 3,593 supernovae. That's somewhere in the low-to-mid triple letters.

    If you want to know how much astronomy has improved in just 35 years, look to supernovae. We went from seeing one star explode by February 23, 1987 to well over 3,000 in the same amount of time in 2021.

    I've seen a lot of numbers estimating the number of supernovae per galaxy per century, and there's a big spread, but let's say it's one per century per galaxy. There are possibly 2 trillion galaxies in the Universe, but that includes small ones with much fewer stars, so let's again wave our hands and say there are 100 billion galaxies, averaging over size. That's one hundred billion supernovae per century, or a billion per year, or about 30 per second.

    THIRTY SUPERNOVAE PER SECOND, over the entire observable Universe.

    Cripes. We've come a long way observing them, but there's a helluva long way to go.

    * Another way to think about it: Going through single letters takes 26 supernovae. Going through the double letters takes 26 x 26 or 26^2, and triple letters 26^3. To get to "f" in the quadruple letters means going through the double letters 5 times (aaaa – aezz), getting to "d" means going through the single letters three times, and "x" is the 24th letter, so the equation iszz), getting to "d" means going through the single letters three times, and "x" is the 24th letter, so the equation is

    26 + 26^2 + 26^3 + (5 x 26^2) + (3 x 26) + 24 = 21,760

    P.S. My thanks to my friend and fellow supernova-studier Sarafina Nance for indulging me in a conversation about this.

    Et alia




    All Comments: [-] | anchor

    ben_w(10000) 3 days ago [-]

    Hmm...

    So that's cool, but now I'm thinking: the distant galaxies are redshifted and time-dilated in equal proportion, and also more densly packed because the universe was smaller in the past, so I expect the actual rate of supernovas to be significantly smaller than simply multiplying 1/century/galaxy by 1e11 galaxies.

    Edit: also I don't know if rate of supernovas changes over history thanks to different steller environments giving the population-1/2/3 generations of stars...

    wolfram74(2837) 3 days ago [-]

    I would imagine the supernova rate to be higher in the early universe, as we've already passed peak stellar formation rates and the heavier (and shorter lived) stars were more likely to be formed earlier when the average density of the universe was higher.

    ls612(10000) 3 days ago [-]

    It probably isn't wildly lower today, we know of at least five or six big supernovae in the Milky Way in the past millennium. For 200B stars in our galaxy the size normalized rate implied by that would be like one ever 300 years. So if you extrapolated the Milky Way alone in (cosmological) modernity you would get 10/sec not 30/sec.

    kakuri(10000) 3 days ago [-]

    I really feel like this article should also mention the rate of formation of new stars. According to [1] Universe Magazine the James Webb telescope has revealed that more than 3,000 stars are formed every second.

    [1] https://universemagazine.com/en/james-webb-comes-closer-to-r...

    Taek(3093) 3 days ago [-]

    I don't understand this comment. Like yes, 3000 stars per second, cool fact. But why would that fact make sense in the article? The article was about being surprised by the name 'SN 2021 afdx', which has nothing to do with star formation.

    In my opinion the article was great and is also complete. More cool astronomy facts belong in some other article or format.

    citizenpaul(10000) 3 days ago [-]

    Based on this about 5.5 million stars are created every 30 minutes and only about 1 start goes supernova in the same period? This seems like it really reinforces the we are still in the early stages of the universe theory if the ratios are that imbalanced.

    Still though the imbalance in those events makes me suspicious that we are missing something.

    thih9(2817) 3 days ago [-]

    > [Supernova discovery statistics for 2021] says there were 21,081 supernovae seen in 2021

    > When the Vera Rubin survey telescope goes online, it's expected to see hundreds of thousands of supernovae per year by itself.

    whoisthemachine(10000) 3 days ago [-]

    Maybe they will have to transition from Base 26 counting to Base 64!

    selectnull(2767) 3 days ago [-]

    Astronomers will find out that naming is hard once they need to name 119741st supernova.

    pelagicAustral(10000) 3 days ago [-]

    I think it will be far before that, once they start hitting supernovae name jackpots like SN2026 cu*t et al.

    lifeisstillgood(2085) 3 days ago [-]

    No wonder the Millennium Falcon takes so longer to calculate its jump to hyperspace.

    Tens of thousands a year is one an hour!

    There are so many supernovae you really could bounce too close to one and that would end your trip real quick

    ninkendo(3250) 3 days ago [-]

    Star Wars takes place entirely within one galaxy, and the number of supernova per galaxy is something like 1 per century, so, nah, Han was just bullshitting to stall for time while his busted-ass computer cobbled together numbers.

    croes(347) 3 days ago [-]

    Was surprised by the „Und so weiter" in the text.

    weard_beard(10000) 3 days ago [-]

    Das ist mir Wurst

    tialaramex(10000) 3 days ago [-]

    That's one of my favourite hints in Outer Wilds. You will see a Supernova. Not with a fancy telescope, it's visible to the naked eye, and if you watch the sky you'll see another soon enough. You can see this right at the start, and unlike the random direction of the probe launch you don't need any game lore to, if you're smart enough, put two and two together.

    SwtCyber(10000) 3 days ago [-]

    Honestly one of those rare games that makes you feel like a real explorer, not just someone following a path the devs laid out.

    me_me_me(10000) 3 days ago [-]

    I hope that game will be treated like LothR or Shakespeare, it is truly special experience.

    marklar423(10000) 3 days ago [-]

    It's funny, I noticed I happening and thought it was proof of the opposite - that there had to be some artificial cause for the supernovae (including the Sun), because a real supernova takes many years to progress, not 20 minutes.

    Even after visiting the Sun Station I didn't believe it and thought it was a narrative red herring....so the ending was a surprise to me. Somehow.

    darthrupert(10000) 6 days ago [-]

    The whole things seems like such a massive living system that I cannot help guessing that what we think of as universe is just a somewhat large single creature.

    ndsipa_pomu(10000) 3 days ago [-]

    It's an appealing idea, but surely there'd be insurmountable problems with the distance/time involved for any part to communicate to another part? It'd be like trying to run a computer with a clock that takes millions (billions?) of years to make a single tick. I just don't see that it's at all feasible and that's without even trying to guess as to how different parts can change behaviour depending on its environment (one commonly used requirement of 'life').

    Cyphase(10000) 3 days ago [-]

    This reminds me of this quote from Jill Tarter of SETI, specifically the last sentence:

    "Might it be the discovery of a distant civilization and our common cosmic origins that finally drives home the message of the bond among all humans? Whether we're born in San Francisco or Sudan or close to the heart of the Milky Way Galaxy, we are the products of a billion-year lineage of wandering stardust. We, all of us, are what happens when a primordial mixture of hydrogen and helium evolves for so long that it begins to ask where it came from."

    source: https://www.ted.com/talks/jill_tarter_join_the_seti_search (@ 3:02)

    SwtCyber(10000) 3 days ago [-]

    There's something kinda poetic (and maybe even logical) about the idea that what we perceive as scattered galaxies and physics is actually just the internal processes of something far bigger than we can comprehend.

    aoeusnth1(10000) 3 days ago [-]

    Well, if physicalism is true then consciousness is a phenomenon of quantum fields, which span the universe. So yes, stretching the definition of creature, this could be interpreted as literally true.

    deadbabe(10000) 3 days ago [-]

    Can the thread title be rewritten to be less obnoxious? "How many supernova explode every year?" is fine. This isn't Reddit. Thread titles should not imply some kind of personality or use cliche meme speak. The all caps is definitely an abomination.

    fooker(10000) 3 days ago [-]

    Please read the article along with bikeshedding titles. It's a good one.

    Timwi(10000) 3 days ago [-]

    Agree. For the record (in case it gets changed), the title at time of writing is "Wait. HOW MANY supernova explode every year?".

    drbig(10000) 3 days ago [-]

    The universe is vast and full of nothing...

    Which in case of explodey stars is a very good thing indeed!

    subscribed(10000) 3 days ago [-]

    It's fun to think that at some point it will be actually vast and completely dark

    layer8(860) 3 days ago [-]

    It's full of radiation everywhere, regardless in which direction we look and how highly we resolve it.

    herendin2(3656) 3 days ago [-]

    If I got the math right, then about 1 in every 32,000 stars in the universe goes supernova each year. That's scary. But I think I'm getting the math very wrong.

    edit: I guess my error might be related to confusing a probability factor with the number of incidents in a period.

    edit: The right answer is probably up to 1 in every 10bn stars go supernovae in the universe each year (or 1 in 10bn die and a fraction are supernovae). Thanks: yzydserd and zild3d

    Someone(853) 3 days ago [-]

    > If got the math right, then about 1 in every 32,000 stars in the universe goes supernova each year

    Can't be right, can it? It would make the Sun (over 4 billion years old) an enormous outlier.

    It also would mean stars, on average, do not get very old. Over 10% of the stars that the ancient Greeks saw in the sky would have to have gone supernova since then.

    zild3d(10000) 3 days ago [-]

    He mentioned a rough estimate of one per century per galaxy. Estimate for average stars per galaxy is 100 million, which would be 1 in 10 billion stars every year

    yzydserd(3513) 3 days ago [-]

    A star 'lasts' about 10 billion years, so you'd expect about 1 in 10 billion stars to 'die' each year, but only a tiny proportion (the very largest) go supernova.

    Numbers are huge. Even tiny ratios mean something like 10-100 stars go supernova every single second somewhere in the universe.

    Sounds a lot? Only about 1 star per galaxy goes supernova per century. A lot of galaxies.

    Mindblowing.

    dostick(10000) 3 days ago [-]

    Isn't the answer infinity? We don't know what's beyond observed part of universe, and there's infinity number of universes. If our emerged then there's others.

    SwtCyber(10000) 3 days ago [-]

    Absolutely mind-blowing how much our ability to observe the universe has exploded

    a3w(10000) 3 days ago [-]

    exploded, he-he.

    Wobbles42(10000) 3 days ago [-]

    Arguably, our ability to observe in any meaningful sense is still limited to light waves occuring inside a volume not much larger than the earth itself. I mean this in more than just a semantic sense surrounding the verb 'observe' -- for all practical purposes everything outside of our solar system is indistinguishable from a preprogrammed light show being projected on a sphere centered on our sun with a diameter of less than a light year. There is a decent chance that will never change. The sheer size of the universe traps us in the ultimate version of Plato's cave.

    layer8(860) 3 days ago [-]

    Now many minds per second does it blow?

    roenxi(10000) 3 days ago [-]

    We're dealing with the sum total of everything, if the true nature of things is that there are a finite number of supernovas I'd be surprised. The real shock is how small the number of supernovas is and how young everything seems to be in the known universe (the age of the observed universe is estimated at maybe double digit billion years).

    These are tiny numbers given that we're quite possibly dealing with infinity in both time and space. I judge it one of the stronger arguments in favour of the universe being constructed (or, more likely, there is a lot out there we can't see). If god built a universe numbers like 1 supernova a century make some sense for artistic value.

    eurekin(10000) 3 days ago [-]

    Isn't the observable universe finite? There can't be a infinite number of anything in a space of radius R, even if R is very big.

    foxglacier(10000) 3 days ago [-]

    You can't compare a number of years or events with infinity. Saying it's tiny or huge makes no sense whatsoever.

    What amazes me is how young the universe is compared to life. The universe is only about 4 times as old as life on Earth.

    mrep(3573) 3 days ago [-]

    > 1 supernova a century

    A century being the amount of time it takes earth, one specific planet to orbit its star 100 times? What about all the other planets and stars?

    yzydserd(3513) 3 days ago [-]

    its 1 supernova per century per galaxy. there are many galaxies: more than 10 stars go supernova every second across the universe. tens of thousands have gone supernova since the article was posted to HN. tiny percentages in a large sample are huge numbers, you might even say 'astronomical'.

    jampekka(10000) 3 days ago [-]

    I couldn't spot the supernova and there's no answer to where it is. :'(

    ndsipa_pomu(10000) 3 days ago [-]

    It's in NGX 1566

    pansa2(10000) 3 days ago [-]

    Bottom-left corner

    dwighttk(2766) 3 days ago [-]

    Cross your eyes and lay the two images over each other and it pops out (bottom left of the ring)

    rookderby(10000) 3 days ago [-]

    First off, dont look at the outer wilds discussion on here, just play the game. Second - they didnt say how many letters we need to encode all of the observable supernova in a given year! So 100 billion galaxies, 1 per year per galaxy, we have around 1 billion to encode. Sorry two edits this moring, first one was right. due to math without coffee. 1e9/26^6 is about 3, 1e9/26^7 is less than one. So we might see 'SN2050aaaaaah'!

    danso(6) 3 days ago [-]

    LOL just started replaying OW for the first time in years, and my immediate reaction to seeing this headline was to go to the comments and make an OW reference

    criddell(10000) 3 days ago [-]

    I bought Outer Wilds based on recommendations like yours and I found it kind of boring. The world is mostly empty and the repetitiveness wore me down. I didn't finish it.

    It's a great looking game though and the first hour or two I had a blast.

    ur-whale(2802) 3 days ago [-]

    Spoiler alert:

    > THIRTY SUPERNOVAE PER SECOND, over the entire observable Universe.

    Wobbles42(10000) 3 days ago [-]

    If we have events occuring at some rate in the entire observable universe, and that rate is one a human can easily visualize (e.g. '30'), then the answer to the question 'how often do supernovas occur' is probably best summarized as 'almost never'.

    jxf(3432) 3 days ago [-]

    I think this says less about supernovas and a lot more about how staggeringly, incomprehensibly vast the observable universe it.

    daxfohl(10000) 3 days ago [-]

    Or how small we are

    BitwiseFool(10000) 3 days ago [-]

    It would be a tragic shame for life to inhabit such a vast universe only for faster than light travel to be impossible.

    sexy_seedbox(2687) 3 days ago [-]

    Now let us all stop thinking about the incomprehensible and go back to providing value to our shareholders.

    didgetmaster(10000) 3 days ago [-]

    Two questions come to mind.

    1) When was the last supernova observed in our own galaxy?

    2) How close would one have to be to be observed with the naked eye?

    ardel95(3371) 3 days ago [-]

    1604. One could say we are overdue. I'm not sure about dust or other obstacles blocking it, but based on brightness alone a supernova in our galaxy should be visible with naked eye.

    coryfklein(2402) 3 days ago [-]

    Near the top he shows two photos of the Cartwheel galaxy, one from 2014 and one from 2021 with the caption:

    > Can you spot Supernova 2021 axdf?

    Are you supposed to be able to spot the supernova?

    All I've noticed is a couple of small stars that disappear in the latter photo, but this mostly seems to be because it's more blurry.

    piaste(10000) 3 days ago [-]

    Use the cross-eye trick to superimpose the two pictures, then it becomes quickly noticeable as it will appear to blink.

    pansa2(10000) 3 days ago [-]

    Bottom-left corner





    Historical Discussions: US Government threatens Harvard with foreign student ban (April 17, 2025: 351 points)

    (351) US Government threatens Harvard with foreign student ban

    351 points 1 day ago by intunderflow in 963rd position

    www.bbc.com | Estimated reading time – 3 minutes | comments | anchor

    Trump administration threatens Harvard with foreign student ban

    Harvard President Alan Garber has flatly rejected the White House's sweeping list of demands

    The US government has threatened to ban Harvard University from enrolling foreign students - after the institution said it would not bow to demands from President Donald Trump's administration and was hit with a funding freeze.

    The White House has demanded the oldest university in the US make changes to hiring, admissions and teaching practices - to help fight antisemitism on campus.

    Homeland Security Secretary Kristi Noem has asked for records on what she called the 'illegal and violent' activities of its foreign student visa-holders.

    Harvard earlier said it had taken many steps to address antisemitism, and that demands were an effort to regulate the university's 'intellectual conditions'.

    'The university will not surrender its independence or relinquish its constitutional rights,' Harvard President Alan Garber wrote in a message on Monday to the Harvard community.

    The new request from Noem said the institution would lose the 'privilege of enrolling foreign students' if it did not comply with the demand for records.

    Harvard said it was aware of the new request from Noem, which was made in a letter, the Reuters news agency reported.

    International students make up more than 27% of Harvard's enrolment this year. Even before Noem's statement, billions of dollars hung in the balance for the university, after the freeze of some $2.2 bn (£1.7bn) in federal funding.

    Trump has also threatened to also remove Harvard's valuable tax exemption, the loss of which could cost Harvard millions of dollars each year. US media reports suggest the Internal Revenue Service (IRS) has started drawing up plans to enact this.

    Harvard has said there is 'no legal basis' to remove its tax exemption, and that 'such an unprecedented action would endanger our ability to carry out our educational mission'.

    Trump launched a renewed attack on the university on Wednesday, saying it could 'no longer be considered even a decent place of learning'.

    The administration's attacks on Harvard are not isolated. The government's antisemitism task force has identified at least 60 universities for review.

    During his presidential campaign, Trump pitched a funding crackdown on universities, painting them as hostile to conservatives. He and Vice-President JD Vance have long railed against higher education institutions.

    Polling by Gallup last year suggested that confidence in higher education had been falling over time among Americans of all political backgrounds, particularly Republicans - in part due to a belief that universities push a political agenda.

    Since taking office, Trump has focused particularly on universities where pro-Palestinian protests have taken place. Some Jewish students have said they felt unsafe and faced harassment on campus.

    In March, Columbia University agreed to several of the administration's demands, after $400m in federal funding was pulled over accusations the university failed to fight antisemitism.

    These included replacing the official leading its Middle Eastern, South Asian and African Studies department and pledging to take on a review to 'ensure unbiased admission processes'.

    Harvard too has made concessions - including by dismissing the leaders of its Center for Middle Eastern Studies, who had come under fire for failing to represent Israeli perspectives.

    But it has drawn the line at the White House's recent list of demands.

    Watch: 'It's not right' - Students react to Trump freezing Harvard's federal funding




    All Comments: [-] | anchor

    DarkmSparks(10000) about 24 hours ago [-]

    60 Universities, the only reason Harvard is interesting here is the revelation its administration are just another average bunch of crayon munching racist idiots.

    Down vote all you want, wont make blocking students from class because they are Jewish and hiring people based on their race or sexual preferences any less moronic.

    Breath of fresh air to see that idiocy burn.

    mjburgess(10000) about 23 hours ago [-]

    Harvard isn't burning. It has 60bn.

    What's 'burning' is the hospitals, military research, medical research and the vast array of technical R&D that congress has requested harvard to perform.

    This is just an attack on americans. Harvard is secure regardless of what destruction the presidency does to the projects congress has asked of it.

    t0lo(10000) about 23 hours ago [-]

    Quite vitriolic. I wonder if you have any personal biases you might be bringing into this discussion?

    xnx(1016) about 24 hours ago [-]

    Economic, educational, reputational ... it's hard to think of a dimension that the current administration is not destroying the US on.

    sanderjd(10000) about 23 hours ago [-]

    Well, so far, they have just waffled on whether or not to drastically defund the military.

    armini(10000) about 23 hours ago [-]

    "A nation is not lost because of the actions of the wicked, but because of the silence of the just." — Napoleon Bonaparte

    ffsm8(10000) about 23 hours ago [-]

    It's not destroying the US media industry like tv/movies/games. Why can I say that with confidence you ask? Simple! Because they have been doing that for ages before the current administration got to power ( * ́ ω ` * )

    grafmax(10000) about 22 hours ago [-]

    Cowing Harvard - one of the world's greatest universities - would mark a pivotal victory for the dictatorship taking shape before our eyes. Dictatorships derive their power from the submission of a society's key institutions. That's what's at stake here.

    Braxton1980(10000) about 24 hours ago [-]

    I hope people realize that protesting or being angry at Trump/Republicans is pointless.

    The power is bestowed upon them by Republican voters and they are to blame. Voting for one issue, lack of education, or desire to tune out politics isn't a reasonable excuse.

    Edit

    I have no issue with downvotes but offer up arguments why voters aren't responsible.

    JojoFatsani(10000) about 24 hours ago [-]

    What is the point of this post? There are a lot of people to be angry at here. Demonstrating displeasure to elected officials is our first amendment right.

    belter(63) about 24 hours ago [-]

    This post shouldn't be downvoted. Just as it's well known that a majority of the Russian population supports the invasion of Ukraine, not solely due to misinformation. So too must the majority of U.S. voters who elected the current, legitimately constituted administration bear collective responsibility for their choices and the consequences that follow.

    __alexs(10000) about 23 hours ago [-]

    They are in the chain of responsibility but they are not the proximate cause of the issue.

    viraptor(1797) about 23 hours ago [-]

    Protesting is not for Trump. Nobody there expects him to step down just because enough people showed up. People are showing how many got fed up enough to be loud and encourage/enable others.

    submeta(2850) about 24 hours ago [-]

    What a shitshow. All because of a tiny country with enormous influence over the US government via AIPAC or large donors like Sheldon Adelson's widow who donated over 100mil USD to Trump. MAGA is now MIGA.

    Edit: All to silence ANY criticism of that country and its laughter in Palestine. With their leaders being wanted war criminals. Where's freedom of speech now? When Marco Rubio complained before the election „You are one click away from being jailed.", now you are one click away from being deported under his administration.

    DobarDabar(10000) about 24 hours ago [-]

    Always has been. Many such cases.

    GuestFAUniverse(10000) about 24 hours ago [-]

    You get what you pay for. Capitalism 1.0 -- just unveiled being like it ever was. That's what happens when a (pseudo-) democracy never gets fixed, because everyone in the upper class thinks to get away best _with_ all the loopholes.

    A president reigning at will, no court being able to really stop the shit show and undoing former president's pardons while using pardons as a tool to side-track courts. The whole construct didn't -- and doesn't -- make sense, if you still aim for anything not being despotism.

    bloppe(10000) about 23 hours ago [-]

    This has nothing to do with Israel or even antisemitism. The administration just doesn't like Harvard and they'll use whatever justification they think has the best chance of of holding up in court

    gwd(10000) about 23 hours ago [-]

    That has nothing to do with it. They made the same demands of Colombia, who agreed to their demands; the result was just more demands. This is about exercising power and establishing dominance, not about Israel.

    bryanrasmussen(227) about 24 hours ago [-]

    for some reason the article has IRS as Inland Revenue Service (at this time)?

    fidotron(2952) about 24 hours ago [-]

    In the UK the equivalent to the IRS was the Inland Revenue, so when attempting to reverse the abbreviation they probably arrived there. I've made that exact same mistake myself.

    felixthehat(10000) about 23 hours ago [-]

    Internal Revenue Service for anyone else wondering!

    rayiner(2712) about 24 hours ago [-]

    Harvard's openly defiant response to the Supreme Court's SFFA decision,[1], gave the administration the ammo it needed for this fight. It's great that Harvard is fighting this, the discovery in the federal government's lawsuit against it will be amazing.

    [1] https://www.city-journal.org/article/harvards-attempt-to-dod...

    sealeck(2832) about 24 hours ago [-]

    The defiant response of... asking students how they will contribute to Harvard in admissions essays??

    DarkWiiPlayer(10000) about 24 hours ago [-]

    What's gonna be next? Banning Ukrainian students to help combat 'Russophobia'?

    This is just so weird. How do people support this stuff only to then go on and complain about 'free speech' the second you tell them something they said was kind of a little bit mean?

    belter(63) about 24 hours ago [-]

    Next is to make sure every Top 500 company has a MAGA approved member on the board, or you will be barred from selling to the US government.

    Chris Krebs, director of the US Cybersecurity and Infrastructure Security Agency was forced to resign from SentinelOne Inc - https://www.bloomberg.com/news/articles/2025-04-17/ex-cyber-...

    UncleMeat(10000) about 23 hours ago [-]

    Because it was never about free speech. It was only ever a rhetorical cover to be able to do things like defend professors calling people the n-word without defending it directly. The actual goal has always been the re-entrenchment of 'natural' hierarchies. Man over woman. White over black. Straight over gay.

    kowabungalow(10000) about 24 hours ago [-]

    Universities had better hurry up with opening up their foreign campuses.

    bko(2635) about 23 hours ago [-]

    The universities could have expanded campuses and opened up new ones at any time. Harvard could easily double or triple their class size with no negative effects in terms of student body. They choose not to because it would dilute their brand and they're more about an exclusive club than educating students or doing research. Their doner class that finds their hedge fund doesn't like that.

    So let's not pretend these institutions are noble

    devsda(10000) about 23 hours ago [-]

    American universities are in demand among foreigners because they are a gateway to network and life in the US.

    If it's a choice between say Princeton in the US and Harvard outside the US, Princeton will be the choice for many.

    farmdawgnation(10000) about 24 hours ago [-]

    This seems like a relatively empty threat considering many international folks don't want to come here anyhow. There are some parallels here to when my toddler tries to give me consequences for doing things she doesn't like.

    odo1242(10000) about 24 hours ago [-]

    A lot of international people go to Harvard. Like the article mentions, it's 27% of all students.

    apwell23(10000) about 23 hours ago [-]

    every college grad in my neighborhood in india is planning to go to usa after graduation.

    its seen as something odd when someone decides not to.

    t0lo(10000) about 23 hours ago [-]

    There is near infinite demand for western universities. As I've been experiencing personally in Australia

    OZYMANDIASAK(10000) about 24 hours ago [-]

    At what point can we say that the US truly has fallen from being the leader of the world?

    Each and every decision taken by the current administration is bringing the US closer to an age of darkness and idiocy.

    I'm from Europe, I'm not saying the US was ever perfect but I don't understand how it came to this.

    My bet is a on a combination of extreme individualism due to a poor internalisation of the ideals of liberalism combined with a predatory capitalistic environment.

    It's sad to see what happens to a society that has the highest concentration of the brightest minds in world mostly working towards money related goals. So many great people that could work for the greater good and are dutifully tuning algorithms for the 0.01% capturing everyone's attention and ideas.

    Sad state of the world but I guess you can't stop "progress".

    apercu(10000) about 24 hours ago [-]

    >At what point can we say that the US truly has fallen from being the leader of the world?

    When a ridiculous, obtuse con man was elected President in 2016 and his party lost whatever little desire they had left for a functional government?

    Of course, I would argue it was when 'W' was elected for the second term.

    t0lo(10000) about 23 hours ago [-]

    >I'm from Europe, I'm not saying the US was ever perfect but I don't understand how it came to this

    Because 30+ different countries were able to wage information war on a population for 15+ years with unrestricted access and no recourse.

    inglor_cz(10000) about 23 hours ago [-]

    Unfortunately us HNers have a lot to do with this, even though approximately none of us had this in mind when coding the relevant stuff.

    This is how politics looks like when the radical fringes from social networks take over national parties and squeeze out the so-much-mocked 'enlightened centrists' from their seats. Missing them yet?

    The same problem in Europe is somewhat tamed by proportional voting systems, but various edgelords have invaded our politics as well. Slovakia, right next to Czechia, is a horrible political circus. AfD in Germany mostly built its electorate online etc.

    sanderjd(10000) about 23 hours ago [-]

    I'm sympathetic to your perspective that it's a broad cultural thing.

    But from my point of view, it's more of a demonstration of the problem with governments that are designed to have a very strong executive. Eventually you get an executive that really sucks, and when that happens they can do a lot of damage.

    One of the biggest influences on my thinking from listening to Dan Carlin's Hardcore History is a point he made about hereditary monarchy, that among its problems is that sometimes the next ruler in line is just a total dud, and you're just stuck with them.

    Well, you can get a dud through voting as well. Ideally having fairly short terms mitigates this risk, but there is still a lot of damage that can be done in a short term, and there is a 'who watches the watchmen' problem with the executive being required to fairly run the election to potentially replace them.

    If we make it through this period with elections that remain fair and with successful transitions of power, I hope we'll find ways to weaken the presidency.

    swat535(2885) about 23 hours ago [-]

    > At what point can we say that the US truly has fallen from being the leader of the world?

    It's easy to talk about the 'decline' of the U.S. in abstract geopolitical terms, but let's be honest: the day the global tech community stops posting on Hacker News, stops building with U.S origin technologies, and stops looking to Silicon Valley as a benchmark, that's the day we can seriously start talking about America's fall from global leadership.

    Until then, we're all still running our infrastructure on AWS, building apps with React, debating threads on HN, and watching YC Demo Day like it's the Super Bowl. The world may grumble, but it's still plugged in, literally and figuratively, to American innovation.

    BlueTemplar(3415) about 22 hours ago [-]

    Easy : when the dollar stops being the reserve currency of the world.

    (Well, easy in retrospect, I guess it might be hard to realise that/when this is happening when you are in the middle of it ? Reading about the other times it happened might help ?)

    HEmanZ(10000) about 22 hours ago [-]

    I find some Marxist-ish ideology always wants to blame these things on the material conditions, wealth. My personal network is a sea of trump worshipers (quite literally, like my cousins say a prayer for trump at every dinner since 2016), and I think the analysis that this is a wealth thing is wrong.

    Everyone has pet theories. Mine is that a section US society, urban coastal highly educated elites, coalesced around one set of ideas (I'm not exactly sure why, but probably in part because this group is less religious and very urban) and formed a very powerful ideological block that wasn't in the US pre 1980s. This Trump thing is a reaction of the people who don't fit into this political block (religious, less educated, rural, culturally not urban) against them.

    It's fundamentally identity politics, not some material conditions thing. People have a hard time believing this, because some people think the world is all about money, and ideas and identity mean nothing to people, but I really think the money-only view of human politics is flat wrong.

    I say this because of my personal network of family, friends, and acquaintances from my hometown. When I try to gently get to the bottom of it, what I really find is a deep deep hatred for the coastal elites. They feel belittled and marginalized, not monetarily but culturally. They feel no one from those backgrounds has any right to tell them what to do. They feel that a coastal expert has no right to contradict their feelings on a topic, because that expert is not "one of them", not because that expert is wealthy.

    The network I have does not feel this way because they are economically struggling. Europeans often imply this is the case, but in my experience after 40 years in America, it is just not. Many of the people you see wearing maga hats and waving maga flags at rallies have mansions, 5 trucks, a vacation home in Hawaii, etc. my extended family and network has plenty of money. But they feel anyone who is an educated, coastal liberal is out to destroy them. They feel so completely culturally and identity wise different from the coastal elites, that they bristle under the thought that someone with an "education" could know more about something than them.

    I think Republicans gained power in the last few years because of the economy, and Trump gained control of the republicans because of identity. This isn't going away by "solving" the wealth gap.

    carlosjobim(10000) about 20 hours ago [-]

    The reason why you don't understand the American perspective on the world and on life, is because everybody in Europe who didn't think exactly like you think moved to America, and everybody who thinks exactly like you think stayed in Europe.

    No matter if you think the European or the American mindset is better, there was an enormous split of nations with the mass migration of Europeans to America. And it was a certain kind of person who would stay and a certain kind of person who would go. It's still that way.

    AlecSchueler(10000) about 18 hours ago [-]

    > At what point can we say that the US truly has fallen from being the leader of the world?

    About six weeks ago.

    apwell23(10000) about 24 hours ago [-]

    anyone know why trump didn't do any of these betwee 2016-2020 when he ran on exact same platform. But this time he hit the ground running.

    What's different this time?

    hollywood_court(10000) about 24 hours ago [-]

    Putin demanded results this time.

    tohnjitor(10000) about 24 hours ago [-]

    Better preparation most likely. He had a staff of about 1,000 already hired before being sworn in. Some of them had probably been working on this stuff since 2020.

    DragonStrength(10000) about 24 hours ago [-]

    He mostly had standard GOP appointees last time who weren't on board. This time he has staffed his administration with loyalists, which is why so many have so little experience. They are there to do what they're told.

    JensRantil(3324) about 24 hours ago [-]

    One reason is the ruling by the Supreme Court on July 1st 2024 that says Agent Orange has legal immunity for most actions he does as a president.

    UncleMeat(10000) about 23 hours ago [-]

    In 2016 Trump had not remolded the GOP yet. He was surrounded by 'traditional' republicans who weren't fully on board with his insane, vindictive, authoritarian impulse. Republicans in Congress were also skeptical of him, making resistance from the legislature much more likely.

    In 2024 the entire Republican Party had evicted the non-MAGA people. Trump could staff everything with absolute sycophants. And there is no way that the Republicans in Congress will lift a finger to change anything.

    Further, Trump had years of vindictive rage bottled up after losing in 2020. Every organization and institution he spent years raging about on Truth Social suddenly becomes his target. No actual governance. Just revenge.

    sanderjd(10000) about 23 hours ago [-]

    The sycophancy - within the administration itself, and in Congress - is pretty much universal now, which it was not in the last administration.

    morkalork(10000) about 22 hours ago [-]

    Last time they lost their majority in Congress in the mid-terms and were a little kneecaped after that. It seems they've learned from that episode and are trying to achieve as much as possible before it happens again.

    myvoiceismypass(10000) about 21 hours ago [-]

    There was a handful of adults in the room the first time around. Now, only loyalists and sycophants. Plus, he saw that he can get away with anything (Jan 6, storing boxes of classified shit in his bathroom) and the Supreme Court backed him up.

    There are no guard rails, there is no emergency stop this time.

    Braxton1980(10000) about 18 hours ago [-]

    1. Revenge 2. Term limits so there's no reason for him to care what voters think

    gridder(10000) about 16 hours ago [-]

    Now he doesn't have anything to lose anymore. He's very old, he had to run again to avoid prison and bankruptcy. He will do anything he can to remain in power until he dies. This is my very personal opinion

    hersko(2944) about 22 hours ago [-]

    Tangentially related question: Why do universities like Harvard (who has a ~$60bn endowment) get federal funding at all? Between tuition and donors are they not profitable?

    guax(10000) about 22 hours ago [-]

    Research grants, laboratories, partnerships. Government funding of universities are usually not handouts but investments.

    For example:

    | Sarah Fortune, a professor and chair of the department of immunology and infectious diseases at Harvard T.H. Chan School of Public Health, woke up Tuesday to a stop-work order for a large contract focused on unraveling how the immune system fights tuberculosis, with the goal of creating better detection and vaccines.

    insane_dreamer(10000) about 21 hours ago [-]

    It's not funding for student tuitions, rather Harvard research labs bid on research grants just like all universities do. Government sponsored university research since WW2 has been a primary driver of innovation in the US and a key element in the US becoming and maintaining its position as the #1 economy.

    It's investment, not charity.

    Braxton1980(10000) about 19 hours ago [-]

    How is this related? The issue is government overreach.

    Who said they aren't profitable

    tzs(2985) about 17 hours ago [-]

    They are already spending billions a year from the endowment, which is around the maximum that can be spent from it sustainably.

    UncleMeat(10000) about 16 hours ago [-]

    Alice is a professor at Harvard. She wants to research some topic. She applies to the NSF for a grant. The NSF says 'wow that research sounds awesome and aligned with our priorities' and funds her lab to perform that research. She and the lab perform the research and share it with the scientific community for free.

    That's what federal funding for universities looks like.

    alwa(10000) about 23 hours ago [-]

    I have a hard time imagining this specific threat to be more than bluster. Would someone with relevant legal expertise be able to comment on how likely a ban on foreign enrollments would be to fly in the courts?

    Surely the administration have a substantial degree of discretion with respect to student visas, but can they precipitate a blanket revocal on something as nakedly coercive (and speech-involved) as this?

    (Edit: at a casual, non-expert glance it seems that a student can apply for a student visa at any SEVP-certified school, and the regulations governing SEVP certification seem to be at [0]. They list a lot of potential reasons to withdraw approval once it's issued, but they all seem pretty specific: falsifying records, lying on your application, failing to keep proper records in relation to the students' enrollment, and so on. Does it feel like maybe the mechanic here is claiming that tracking students' speech is part of that essential record-keeping task?)

    [0] https://www.ice.gov/sevis/schools/reg#2144

    curious_curios(10000) about 23 hours ago [-]

    NAL but we're in uncharted territory here with the administration ignoring court orders.

    rayiner(2712) about 23 hours ago [-]

    The Supreme Court has held that the government can use its control over funds to condition speech in ways it couldn't directly: https://firstamendment.mtsu.edu/article/government-funding-a...

    The Supreme Court has also held that the government can revoke tax exempt status of a private organization where it furthers a compelling government policy: https://en.wikipedia.org/wiki/Bob_Jones_University_v._United...

    Control over federal funding is also the hook for Title VI's application of non-discrimination laws to private universities.

    The government also has the trump card up its sleeve that Harvard is almost certainly violating Title VI through extensive programs of race consciousness. It's well established that the civil rights laws apply equally to whites as to non-whites. Harvard has many programs for non-whites where, if those programs were for whites instead, that would be a Title VI violation that would jeopardize Harvard's federal funding. E.g. Harvard had various racially segregated graduation parties last year: https://www.nationalreview.com/news/harvard-university-to-of.... If you can't have a "White Celebration" then you can't have a "Black Celebration" either. If Harvard doesn't settle they'll get hit with a Title VI lawsuit and they're going to lose it.

    grafmax(10000) about 23 hours ago [-]

    Given the ongoing pattern of bullying, power grabs, and disregard for the law - including the trampling of constitutional rights - dismissing this latest threat as mere bluster seems less like reason and more like denial.

    titaphraz(10000) about 23 hours ago [-]

    '... he's America's Hitler'

    -- JD Vance, Vice president of USA on Trump, President of USA.

    (Before JD became VP)

    ta1243(10000) about 23 hours ago [-]

    The full quote:

    > I go back and forth between thinking Trump is a cynical a*hole like Nixon who wouldn't be that bad (and might even prove useful) or that he's America's Hitler. How's that for discouraging?

    blueflow(3670) about 23 hours ago [-]

    Do you think it is appropiate to compare Trump with Hitler? I think industrial-scale genocide is a bit of a different league than Trump.

    southernplaces7(3239) about 17 hours ago [-]

    Why the hell would this be flagged? Perfectly valid, debate-worthy and absolutely relevant in the context of many non-flagged submissions on this site. Again it would be nice if the HN admin stop letting any random orangutan flag anything they like out of their own shitty little naval-gazing ideological fixations.

    wltr(10000) about 17 hours ago [-]

    Oh, I found /active just recently. And turned out many, if not most, interesting topics are censored. While some mediocre and irrelevant things are not. However, I'm not surprised, being a long time visitor, and seeing very dang questionable moderation practices.

    edanm(3676) about 17 hours ago [-]

    Please find a way to contribute more politely to HN. Regardless of whether I agree with you on whether this post should be flagged, calling your fellow HNers 'random orangutans' that act out of 'shitty little naval-gazing ideological fixations' is rude, mean, stupid, and wrong.

    bloopernova(10000) about 16 hours ago [-]

    HN mods/leadership appear to have taken the stance that this is a non-political site.

    Why it's being flagged? People hiding behind the non-political rule are suppressing information and discussion.

    This site is owned by ycombinator, who have a motivation to 'not rock the boat', so such suppression is ignored.

    I guess in time we'll see whether that's a good decision for them or not.

    postalrat(10000) about 15 hours ago [-]

    Id assume to prevent hn from becoming a tiny version of reddit.

    marius_k(10000) about 14 hours ago [-]

    These days I go to https://news.ycombinator.com/active And search for [flagged] items first.

    mvdtnz(10000) about 14 hours ago [-]

    I flag American politics because it's boring and irrelevant to me. Nothing to do with ideological fixations, although it does please me that people like get so worked up about it.

    NalNezumi(3367) about 14 hours ago [-]

    I'll be honest, I prefer it this way. Thanks people flagging the political stuff (I can't be bothered).

    If you want the political stuff & the controversial stuff, you can add /active after the URL to HN main page.

    The fact that there is an /active tab and flagged submissions can still be voted & commented on, tells me that while dang don't want it to be the face of HN, he's fine that people discuss it (as long as you comment with civility). If there was some tinfoil conspiracy, the tab would've been deleted.

    I'm guilty that l now usually check /active and main page.

    You know, some of the high-horse, HN readers are quick to say 'social media, bad' and anything bashing social media (including blogs) sky rocket up to main page. 'reddit sucks' is another common one. I mean I usually agree to that sentiment, but if you check /active posts, the comments, where things go, it resembles any other social media slop more than HN.

    I spend more time on /active, sadly. Maybe those navel-gazing orangutans are actually the ones making sure this is not reddit or Facebook for techies rather than boomers

    dang(143) about 8 hours ago [-]

    Most probably the users who flagged it are tired of the repetition, because HN had a huge frontpage discussion about this topic just a few days ago:

    Harvard's response to federal government letter demanding changes - https://news.ycombinator.com/item?id=43684536 - April 2025 (1399 comments)

    Avoiding too much repetition is a core principle of this place [1]. To get a sense of how repetitive these discussions are, just look at the comments in the current thread—they could just as easily have been posted to the previous thread.

    The way HN operates with respect to political stories is clear and stable, and has been for many years: some stories with political overlap are ok [2], but there isn't room on the frontpage for all of them (not even 5% of them, really). Frontpage space is the scarcest resource that exists here [3], and HN is not a current affairs site [4].

    If you, or anyone, will familiarize yourselves with the explanations in these links, and then still have a question that I haven't answered there, I'd be happy to take a crack at it.

    [1] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

    [2] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...

    [3] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

    [4] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

    mjburgess(10000) about 23 hours ago [-]

    I had been somewhat neutral on trump -- the grievances of the american right were real and under served. Major civil institutions of power and culture had been monopolised by the left; there had been a 'default preference' for wealth over income, capital over labour. Immigration had been treated as a purely economic question, with little regards to the suddenness of population and cultural changes metered out on communities which took on the highest levels.

    I had thought the leftwing reaction to accuse this of authoritarianism, overblown. Many of the actions that had been taken were taken by previous leftwing administrations, just with less publicity (, and so on).

    However I think the rubicon has been crossed. The president now believes he has impunity to engage in extrajudicial rendition to enslave people, including citizens, in foreign prisons. He attacks the centres of civil power: universities, law firms, (likekly soon, ) the mass media. And rival state power: ignoring the supreme court, congress (ie., reorganising federal gov beyond his power), and the institional professional class in the executive.

    All the while, increasingly I see people on the centre-right in the mass media credulously believing the president's account of his actions. Identifying with the president as an expression of their power, and believing likewise, that the whole of civil society is legitimately brought under state ideological control. That the presidency is the state, that state is society, and that society must 'for democratic reasons' be brought to the state's heel.

    The next phase of this will be very dangerous for the american people. I think civil resistance will be target for at best, imprisonment -- perhaps even rendition to a foreign prison. All one needs to say is that the resistance protestors are domestic terroists, and trump has a wide base of people credulously willing to believe it -- no doubt looting and other things will occur. It is very easy to imagine state elections being brought under 'federal control' and a process of election rigging soon following.

    As far as I can see there are two forces acting against the possibility of an american tyranny: trump's own desire to perform what's he's doing completely destabilises his plans (eg., on the economy especially). Secondly, the federalism of the american system.

    It seems now plausible to me to imagine a future in which a democractic state violently expels federal forces, esp., eg., if ICE are used to rendition american citizens. It will be almost an obligation of states to suspend federal police presense. This, in the end, may make totalisation of federal state power difficult.

    Dumblydorr(10000) about 23 hours ago [-]

    How can you have been neutral on Trump until just now and then wrote that? This both-sides-ism looks a bit ludicrous. Neither side is perfect but one is a propagandistic cult and the other is a reasonable status quo party. One wants to throw hand grenades into every room of the government out of spite and out of desire to enrich and empower the billionaire class. And you're now having this huge intellectual reckoning? Where were you the last 9 years of Trump?

    sanderjd(10000) about 23 hours ago [-]

    Is there a mea culpa here? This was all clear for a decade during which you were 'somewhat neutral on Trump' and everyone was telling those of us warning people about it that we were hysterical and deranged.

    But now I see posts like this and it's like 'how could we have known this was going to happen?'. Well, you could have! At least maybe you can update your priors on how seriously to take warnings that a political movement is dangerous?

    surgical_fire(10000) about 23 hours ago [-]

    > Major civil institutions of power and culture had been monopolised by the left; there had been a 'default preference' for wealth over income, capital over labour.

    I am not from the US, and I watch with mild amusement its slide into full blown banana republic dictatorship with a sprinkle of last century European fascism - I mean, at this point ICE is basically a secret police that disappears people, not unlike Stasi or Gestapo from years past.

    But you thought that Trump was an answer to 'wealth over income' or 'capital over labor'? Even without knowing that much about the intricacies of US politics this sounds pretty naive.

    thesuperbigfrog(3661) about 22 hours ago [-]

    Other factors to consider in the 'states versus federal' conflicts that could occur are that each state has its own National Guard forces and equipment which are under the state governor's control. The National Guard are under dual control in that they can respond to the state's needs or to federal needs. But they are still citizens of that state who put on the uniform when needed.

    This could lead to National Guard versus federal forces stand-offs as was seen in the 1960s over Civil Rights disagreements between state and federal governments:

    https://en.wikipedia.org/wiki/Little_Rock_Nine#National_Guar...

    Another factor that differentiates the United States in conflicts of the people against their government is how heavily armed and resourceful the US populace is. In the War on Terror, US Armed Forces faced insurgency militias in Iraq and Afghanistan. If similar insurgency militias were to arise in the United States in response to illegal federal government actions, it would probably have similar results.

    senordevnyc(10000) about 16 hours ago [-]

    I share the frustration of many commenters that you're just now coming to believe that Trump is a dangerous threat to our entire system. It's bewildering to hear people say some variation of 'how could we have known??', when it has all seemed so obvious to many of us for years that this is the road we were going down.

    That said, I do deeply appreciate your willingness to change your mind, and to talk about it publicly. The reality is that a third of our society is in Trump's thrall. At my best, I don't want those people to disappear, or suffer in powerlessness for years. I want them to change their mind, and I know how hard that can be. So thank you!

    oefrha(10000) about 23 hours ago [-]

    I'm a foreigner who was U.S.-educated, spent a substantial part of my life there (left a little while ago), and still have family there. I'm seriously advising prospective students from pursuing a college education in the U.S. now. A partial education is now a very real concern under this mad man, especially for someone with the "wrong" nationality/ethnicity, which could completely upend one's life and torpedo a lot of career prospects. Not to mention concerns for personal safety.

    As an aside, I faced casual racism plenty of times in the country; pretty sure no one ever gave a shit. Trump country would cheer for it, actually.

    d3nj4l(3507) about 23 hours ago [-]
    Especially after the recent actions canceling visas for legal, free speech and minor traffic violations, I cannot in good conscience recommend anyone to come to the US to study. It's the US's loss more than anything else - a good portion of the cutting edge research that happens in US Schools is done by international grad students.
    inglor_cz(10000) about 23 hours ago [-]

    This comment is not meant to defend this sort of blackmail, rather a tangential thought that struck me.

    Most successful franchises try to expand abroad. Why not build a Harvard branch in London, Dubai, Sydney, Mumbai or Tokio?

    Each of those would likely be subject to some pressures over time, but those times and pressures would vary.

    Nowadays it is a 'all eggs in one basket' situation.

    oefrha(10000) about 23 hours ago [-]

    Universities are not fast food restaurants, the reputation resides in the faculty, and they're not replicable like fast food recipes or supply chains. "Harvard London" will be a completely different school with its own reputation (with a little bit of halo effect from the brand of course), just like no one mistakes UC Riverside for Berkeley. Unless you're advocating for some remote teaching sort of deal.

    Edit: In addition, some people only attend Harvard and co. for the networking opportunities.

    beardyw(1864) about 22 hours ago [-]

    It's fairly common for UK universities to have overseas campuses.

    HeavyStorm(10000) about 23 hours ago [-]

    So, are you guys realizing that this is _already_ a dictatorship?

    inverted_flag(10000) about 22 hours ago [-]

    It feels like the early days of Covid right now, where everyone is still living their lives normally but you know things are about to rapidly change for the worse.

    Molitor5901(10000) about 23 hours ago [-]

    Tangentially related, but at my university international students paid the most to attend - by far - than any other student. It has been rumored by some that universities may have an unfair preference for international students because of this. I wonder if this thinking is playing into the policy making.

    Balgair(2598) about 22 hours ago [-]

    Based on the previous actions of this administration, I can with 100% confidence say that absolutely no thinking played into this policy making.

    onetimeusename(10000) about 20 hours ago [-]

    They almost certainly do. There were rumors of a Chinese student ban during Trump's last tenure and I remember reading news stories (https://www.insidehighered.com/news/2018/11/29/university-il...) about universities having insurance policies to protect themselves from revenue loss. There should be enough Americans to fill the empty seats so it makes you wonder if university finances rely on international student tuition. So you would expect that to translate into admissions changes.

    Reading other comments on here it almost seems like people feel it would be bad if American universities like Harvard had more Americans. Like there is something morally wrong with that. So that's probably a factor also.





    Historical Discussions: Kermit: A typeface for kids (April 16, 2025: 351 points)

    (351) Kermit: A typeface for kids

    351 points 2 days ago by nmcfarl in 3545th position

    microsoft.design | Estimated reading time – 4 minutes | comments | anchor

    While we haven't implemented automatic prosody yet, Kermit allows us to explore expressive writing to elevate comprehension for children and adults alike.

    Helping severe dyslexics

    Dyslexia is a very active area of research. Fifty years ago, people thought dyslexics saw letters backwards. Now, it's primarily seen as a phonological problem in which dyslexics have difficulty with sounds in language. The most successful dyslexia programs to date focus on teaching phonemic awareness (e.g. that the spoken word "cat" has three sounds) and phonics (mapping letters to sounds). This success might make it seem like dyslexia is all about sounds, but it's not clear yet if phonological problems are dyslexia's cause.

    In 2010, researchers Trichur Vidyasagar and Kristen Pammer suggested a new theory on the cause of dyslexia: dyslexic brains might have issues with visuo-spatial processing. In other words, dyslexic brains may process visual information differently, making the order of letters unclear and reading difficult.

    To understand this, let's take a trip inside your brain. Light enters your eyes and shines on the retina. The retina processes the light, sending neural signals on a long journey from your eyes to the back of your head where your brain processes images, forwarding them through the visual cortex.

    This journey takes two parallel paths: the high road and the low road, literally. The high road, or dorsal pathway, physically runs along the top path through your brain, carrying information about where things are, such as the sky is up, pavement is down, or the order of letters on a page. It is the "where" signal.

    The low road, or ventral pathway, runs below the high road, carrying information about what objects are, e.g. the blue thing is the sky, the grey thing, pavement, and the two lines leaning against each other with a crossbar is an A. It is the "what" signal.

    These two roads meet at a little neural town called the Visual Wordform Area, which combines the "what" and "where" signals to form words—hence the name. This is where we recognize words.

    This neural town has a big spotlight in it, controlled partially by signals from the high road. As we read, the spotlight should smoothly move from one letter to the next, focusing our attention on a letter from the low road, identifying it, then moving to the next. If anything goes wrong along the high road—and there are many things that can go wrong—the spotlight will not move smoothly or focus attention as well, disrupting reading.

    According to Vidyasagar & Pammer's theory, dyslexics may have something wrong in their high road, weakening signals about letter locations. That in turn makes it hard to understand the order that letters are coming in on the low road, making it more challenging to recognize words.

    This smooth spotlight movement is something we have to learn. Before we learn to read, our eyes and attention unconsciously flit about, painting a picture of our world. The more we read, the more we train our brain to control our spotlight smoothly. But, if a child can't recognize words due to weak high road signals, they won't read as much. The neurological systems needed for proficient reading won't get exercised, but they will get exercised in neurotypical classmates who read more. The dyslexic child gets left behind.

    When these systems are underdeveloped, a child may not develop strong phonological associations or smooth visual scanning (remember, our eyes and brains have to be trained to do this; it isn't natural). The number of potential issues along the high road might explain the variety of dyslexia subtypes.

    So, what does all of this have to do with a font?

    The high road doesn't just carry location information; it carries motion signals, too. Adding motion to letters might boost the high road signal, helping dyslexics get control of their spotlight of attention and improve their reading. To help, we created a special version of Kermit that is animated, with letters that draw themselves.

    A font that draws itself

    How do you create an animated font?

    Because Kermit is built as a Variable Font, it is not limited to Light, Regular, or Bold styles. It can produce any level of boldness thanks to Variable Font technology.




    All Comments: [-] | anchor

    hersko(2944) 2 days ago [-]

    I get: 'Site is unreachable'

    williamscales(3638) 2 days ago [-]

    My DNS blocks it as a tracking domain.

    internetter(10000) 2 days ago [-]

    Its very slow to load for me. Baffling that Microsoft may very well be hugged to death by HN

    dimitrisnl(10000) 2 days ago [-]

    I remember this getting posted again, on a different domain, and with different messaging, with no mention of kids.

    ActionHank(10000) 2 days ago [-]

    I'm also not buying the point that it's for kids any more than comic sans is.

    anonymousiam(3434) 2 days ago [-]

    Name already taken: https://www.columbia.edu/kermit

    lcnPylGDnU4H9OF(10000) 2 days ago [-]

    See also: https://en.wikipedia.org/wiki/Kermit_the_Frog

    On a serious note, that doesn't appear to be a font named Kermit, so it's unlikely that there will be confusion with this if someone is talking about replacing their typeface.

    > a way to set up microcomputers as terminals to our central mainframes and allow files to be transferred reliably back and forth so students could archive their files on floppy diskettes

    lcnPylGDnU4H9OF(10000) 2 days ago [-]

    > While we haven't implemented automatic prosody yet

    That is a really interesting use for LLMs I would never have even considered. The example video with JFK's speech is pretty compelling.

    giarc(10000) 2 days ago [-]

    I think the JFK video is actually not a great example. When the video turns the sound off, the audio clip is so well known that I think your brain fills in the inflection and JFKs way of speaking. I think a better example would be to take a relatively unknown speech and do the same thing to see if the subtitles can communicate the prosody of speech or not.

    sambeau(2267) 2 days ago [-]

    If you want to read it on a site that doesn't mess with scrolling, try here :

    https://kermit-font.com

    38(10000) 1 day ago [-]

    that website is also terrible - no scrolling issues because you cannot scroll at all - and no idea what to click because not a single labeled button/link on the entire page, only some vague unlabeled icons. even on hover you get nothing

    MikeTheGreat(10000) 2 days ago [-]

    Is this open / free / something we can download and try out?

    I did a super-brief search on the page but 'download' didn't turn up any results. Does anyone else know where we can download this from?

    idle_zealot(10000) 1 day ago [-]

    They're using it on the page, which presumably means that your browser already downloaded it! You can probably dig around the page source/network tab to find it.

    c0balt(3607) 1 day ago [-]

    There us no mentioned license, neither on the original post or the website. It is only mentioned that it will be added to M$ office indicating (to me) that it will be proprietary/part of the product.

    yapyap(10000) 2 days ago [-]

    Kermit Sans

    WorldMaker(10000) 2 days ago [-]

    Comic Sans Pro for Kids 2025 Edition

    A_Cunning_Plan(10000) 2 days ago [-]

    For all their talk about how they think this will help kids read, I didn't see any evidence that they actually did any studies on whether or not this font has any affect at all.

    7bit(10000) 2 days ago [-]

    Excellent point, thanks for raising this.

    Freak_NL(3289) 2 days ago [-]

    All I saw were the two references about representing prosody typographically.

    primitivesuave(10000) 2 days ago [-]

    This is unfortunately the threshold of scrutiny that most online education apps operate along - 'it looks good so kids must love it'.

    whalesalad(363) 2 days ago [-]

    Scroll hijacking on this website is atrocious. Ironic for a site that is focused on good design.

    ratatoskrt(10000) 2 days ago [-]

    Came here to say this. I don't get why this is necessary at all - it's literaly just bog-standard scrolling content?

    iNic(2146) 2 days ago [-]

    Is there any evidence that any font has a positive impact on reading (beyond obviously bad fonts being slow)? I'm very suspicious of this whole idea.

    miningape(10000) 2 days ago [-]

    There has been efficacy for people with dyslexia. Fonts like comic sans are closer to their own writing and therefore are easier to read.

    You can also look at the Geronimo Stilton book series, a lot of words appear in different colors / fonts to emphasise words. These books are often easier for children and those with dyslexia to read.

    Note: I still feel like calling it a typeface that makes reading easier is inappropriate. No study has specifically been conducted on this typeface, and drawing conclusions from (limited, and arguably unrelated) studies and and anecdotes is dubious at best.

    maxloh(10000) 2 days ago [-]

    It was claimed that OpenDyslexic could mitigate some of the common reading errors caused by dyslexia.

    https://en.m.wikipedia.org/wiki/OpenDyslexic

    hajile(10000) 2 days ago [-]

    There's certainly a large amount of anecdotal evidence that a decent percentage of dyslexic people benefit from using Comic Sans. I don't know if there has ever been a formal study though.

    There's also a view that all dyslexia doesn't have a single cause. If that is true, then there may be different things that are helpful depending on the exact cause.

    o_m(10000) 2 days ago [-]

    I remember reading somewhere that reading a text with an unfamiliar font face you spend more time reading it, so you're using more cognitive load and are more likely to understand the text. Which might suggest it is just the novelty impacting the reading and not the font face itself.

    martin_a(3611) 2 days ago [-]

    That heavily depends on your definition of 'positive impact'. In design/typesetting theory there are different 'kinds of reading' and some fonts have positive effects, as in 'works well with that kind of reading', while others are not very well suited for a specific task.

    For example letters with very distinct shapes and different heights between lower and uppercase letters, like often found in serif fonts, are generally said to be easier to process for your eyes and brain.

    Your brain learns to 'read without reading' by scanning for known shapes and groups of shapes and just recognizing letters and words by that. You start to skip words, letters, whatever, once your brain has internalized that font.

    That effect helps with reading faster and with less 'stress' which is ideal for longer texts like in a book. Combine that with a good mixture of line length, font size and line height and you can create long texts that can be read very well.

    Now take the same font, set it really tiny because you're working on an Encyclopaedia and don't want it to have 300 pages more and those font features that helped you before, actually make it more difficult to read.

    Fine shapes might break away in the printing process or run up and your text will be harder to read. A sans-serif font might be better suited here. Straight crisp lines, that can be reproduced very well might actually make a better job here.

    So... Fonts can have a positive impact on reading, depending on your definition of impact. ;-)

    Pxtl(3644) 2 days ago [-]

    Maybe it's easy for kids to read, but I found the font too bold and the letters too close-together to read comfortably. I gave up before I could read all their justifications for those decisions.

    But that might've also been the weird scrolling behavior of the page that ruined it for me.

    SirMaster(10000) 2 days ago [-]

    Yeah, I found this a lot harder to read and more strain on my eyes than something simple like the font used in the comments here.

    It definitely seems too thick to me.

    abanana(10000) 2 days ago [-]
    > letters too close-together

    The CSS has { letter-spacing: -.04rem; } It's across the entire site - no exclusion for this page (or for their .kermit-font class). So it appears they've missed the fact that they're altering the look-and-feel of the very font they're presenting in this post.

    trustinmenowpls(10000) 2 days ago [-]

    Yikes, I gave up reading this after about 20 seconds, idk what it was but this font is unreadable.

    WXLCKNO(10000) 2 days ago [-]

    I found it enjoyable to read.

    Obviously some placebo effect from the context but it felt fun.

    tantalor(2090) 2 days ago [-]

    Agreed, this is hard to read.

    My initial impression was I can't read it fast, and when I try to read it fast then I miss words and have to go back.

    If anything, it forces you to slow down. Maybe that's good for people who are learning to read. But for experienced readers, that seems bad.

    On the plus side, the feeling of reading this is nice. It is easy on the eyes.

    This might be a good fit for educational material. But I would not use this for journalism or literature.

    dole(10000) 2 days ago [-]

    I feel like the lowercase lacks risers, it's kerned too tightly to be legible quickly. It's ornamental but I don't feel easier, it's more difficult to read if anything.

    Someone1234(10000) 2 days ago [-]

    It feels fatiguing to read; and I'm supposedly in one of their target demographics.

    Personally I've always found Monospace fonts the easiest like Microsoft's Courier New or Consolas. It feels like you're time travelling back to the 1980s visually, but they're so comfortable to read because your brain can make assumptions which are accurate.

    flusteredBias(3486) 2 days ago [-]

    This is anecdotal and I hope someone who has some research experience can say whether this is true or not generally, but I recently got a Kindle and found that if I use really large font sizes where there are fewer than 50 words on a page it's easier for me to stay engaged. Maybe this has something to do with cognitive load or chunking information. Some fonts look quite a bit better at these large sizes. So for me I don't think typography alone is sufficient. I think the interaction between a large font size and a typography that looks pleasing at a large font size helps with engagement.

    hajile(10000) 2 days ago [-]

    I knew someone who would with an opaque ruler with a hole on one end. They would read the words through the hole and I guess it helped them stay focused on just the word or two they were reading. It sounds somewhat similar to what you are describing.

    JKCalhoun(3408) 2 days ago [-]

    At the same time, don't all fonts, typographically, look better larger?

    I don't know what the DPI of the Kindle display is. But since you called it out specifically, perhaps the issue you are having is more specific to that device. Contrast with how you perceive reading on a high-DPI laptop display perhaps.

    browningstreet(10000) 2 days ago [-]

    When I've done that I feel like I'm reading a text message, not a book (fiction or non-fiction). Possibly not a universal experience.

    WillAdams(10000) 2 days ago [-]

    The normal standard for line length is 2--3 alphabets worth of text.

    I find that shorter ones break up and slow down my reading, while too-long lines make reading wearisome to the point where I actually bought the Kindle version of:

    https://www.goodreads.com/book/show/37858510-the-inklings-an...

    to read rather than the print edition.

    Freak_NL(3289) 2 days ago [-]

    Trying to find out how this font is licenced is painfully impossible on both the linked Microsoft website and the atrocious https://kermit-font.com/ homepage.

    Regardless of the claimed merits of this font (I'm not dyslectic and this font just strains my eyes), I hold the opinion that any effort like this by a megacorp like Microsoft should be approached by them from a charitable angle. If this font isn't permissively licenced (I.e., Microsoft bought it and liberated it from creator Underware) and is just an Office exclusive, it is pointless, and possibly harmless (like that font which OpenDyslexic is based on).

    interloxia(10000) 2 days ago [-]

    I found the following at the end of https://microsoft.design/articles/introducing-kermit-a-typef...

    'The basic styles of Kermit (Regular, Bold, Italic, and Bold Italic) are available today in Office, with the remaining 38 styles arriving in early May.'

    It's listed here: https://support.microsoft.com/en-us/office/cloud-fonts-in-of...

    I didn't find an actual license. The typography faq presumably applies to the cloud fonts: https://learn.microsoft.com/en-us/typography/fonts/font-faq

    silveira(3385) 2 days ago [-]

    +1 The first thing I did was search for the license. The license is what can make it or break it in this kind of project. The absence of clear and permissive licensing is a red flag for me.

    replwoacause(10000) 2 days ago [-]

    I really like this. Just some anecdata from someone without a reading disability but who doesn't love reading, I feel like does make reading easier for me. Maybe it's just because I like the way it looks more than most fonts, I'm not sure, but I'm happy this exists and research is being done in this area. I'll be trying this out in my email client and other applications if the fonts are available for download.

    hfgjbcgjbvg(10000) 2 days ago [-]

    I like it too. It reminds me of the font they use on Tik Tok for some reason.

    dmje(3113) 2 days ago [-]

    It's a nice looking font but kind of hilarious that the official website [0] is entirely baffling! What do those icons mean? What is the license? And mainly: how the f can I GET the damn thing???

    Talk about being a bit over-clever with your design...

    [0] https://kermit-font.com/

    doodpants(3463) 2 days ago [-]

    From the last paragraph of the article, it's availabile in Microsoft Office. It seems that they're not distributing it separately.

    cl3misch(3421) 2 days ago [-]

    Apparently it's only available in MS Office:

    > The basic styles of Kermit (Regular, Bold, Italic, and Bold Italic) are available today in Office, with the remaining 38 styles arriving in early May.

    ...from the last paragraph of the linked article.

    shuggy999(10000) 2 days ago [-]

    In the fonts used on the website; https://kermit-font.com/_css/KermitRoman-VF.otf, https://kermit-font.com/_css/KermitItalic-VF.otf, the license is:

    Beta version of a custom font for Microsoft by Underware. Only for internal testing, not meant for any other kind of usage. Email [email protected] for more information

    Seems to be a rushed release that they had a deadline to get to put a press release for.

    cosmotic(10000) 2 days ago [-]

    When new fonts are released, they always include what they tried to improve: readability, comprehension, etc. Just once I'd like to know what they sacrificed.

    parsimo2010(3126) 2 days ago [-]

    In this case they sacrificed a feeling of professionalism. Helvetica is 'serious' and used by real publications. Kermit would probably not be used by a major publication (like NYT or WaPo) because people wouldn't take them seriously even if it's easier to read.

    codexb(10000) 2 days ago [-]

    Variable font width, height, and kerning is more difficult and slower to read. It's fine if you're reading a short childrens book at out loud, but if you're reading an entire novel silently formatted like that, it would become exhausting quickly.

    seba_dos1(3618) 2 days ago [-]

    It's super hard to read when you hijack scrolling (and do a poor job of it), regardless of the font used.

    sambeau(2267) 2 days ago [-]

    Here's one that doesn't. (yes it dives me mad, too)

    https://kermit-font.com

    scelerat(10000) 2 days ago [-]

    Very annoying. Designers, ui developers: please don't do this, it sucks.

    p0w3n3d(10000) 2 days ago [-]

    For some strange reason this font appeals also to me - 41 y.o. adult

    bshacklett(3588) 2 days ago [-]

    What's more strange is that we've generally decided that adults aren't "allowed", or supposed to enjoy fun things.

    FjordWarden(10000) 2 days ago [-]

    > unpublished study is finding that adding prosody to text improves children's comprehension.

    As a dyslexic software engineer who knows by heart a good number of the 50 tables in the open font type specification, I'd like to look into this in more detail but there is no code or paper published about this (yet).

    In the mean time, it would be nice for people stop using dyslexics as an excuse to motivate for their own special interests. I've suffered my entire formative years under this low-key Munchausen by proxy from all sort of educators gass-lighting me into believing I should use some technology that in the fullness of time proved to be counter productive.

    But ok, the variable speed HOI animation looks cool, I'll give you that.

    cjs_ac(10000) 2 days ago [-]

    As a former teacher who's done original research in educational psychology, I'd like to add that educational psychology is just a grab-bag of weak correlations whose discovery was motivated by, 'When I was a teacher, I saw ______ and that made me sad.' Any 'theory' is a just-so story that the researcher assembled from ideas they found aesthetically pleasing. It's not science; it's activity without achievement, because the individual pieces of research can't be assembled into a coherent body of knowledge.

    The typeface looks nice though.

    FjordWarden(10000) 2 days ago [-]

    I did some more thinking on this. Font technology like this could be useful for a better stylo + touch-screen interface where the handwriting is translated to real characters while still having the same visual quality of the handwriting. You'll need lots more styles though, and very complicated user interaction in the background.

    jedberg(3314) 1 day ago [-]

    As a dyslexic font nerd, I have a question for you. Does Comic Sans actually help? Lots of people claim it's the easiest for dyslexics to read. I'm not dyslexic, but I set all my chat windows to Comic Sans because I've found that it helps me read it.

    Curious if the claims have truth to them.

    losvedir(2821) 1 day ago [-]

    As someone teaching their 4 year old to read right now, I don't buy it. The text is long on 'friendly' and random stuff like that, but that's not what I'm looking for in a font for kids.

    Just off the top of my head the 'v' in there doesn't have a point on the bottom, which is one of the confusions my daughter has ('u' vs 'v'). And I don't think the 'n' needs the serif on the right foot, as that's not the 'platonic' shape of a lower case N. I do appreciate that their lower case 'a' is more like a handwritten one, as is the lower case 'g'.

    I've been going through the Teach Your Child to Read[0] book, and it introduces a 'learner-friendly' font, which actually helps. It has special glyphs for 'th', for example, and other font tricks like making silent letters smaller, and different variants for the vowels depending on their sound. Eventually, those tricks are minimized and the kid is reading a normal font, though.

    In other words, I'm interested in the idea of a font that's useful for early readers, but this font doesn't seem to be concretely designed in that way, and I'm put off by the vague 'friendly' type stuff it seems to be focusing on.

    [0] https://www.amazon.com/Teach-Your-Child-Read-Lessons/dp/0671...

    dmboyd(10000) 1 day ago [-]

    Totally get where you're coming from — I had a similar experience when going through Teach Your Child to Read with my eldest. The book's emphasis on phoneme recognition over rote memorization really worked for us too. That said, we hit a bit of a wall in that transitional stage in terms of reading content — our kid was still relying on those visual cues (like ligatures and vowel variants), and jumping straight to standard text was a stretch.

    To bridge that, I actually built a font that keeps those phonics-aligned features and allowed us to use stories from things like Project Gutenberg. It's based on the open-source TeX Gyre Schola, ( kind of like what is used in the Spot books) with OpenType features that auto-connect common digraphs (like "th", "sh", "ch")— but in a way that can gradually phase out. Just put it up on GitHub if you're curious: Reading Guide Font. Open for any feedback or criticism!

    https://github.com/dmboyd/Reading-Guide

    0xWTF(10000) 1 day ago [-]

    My wife is a pediatric occupational therapist. I showed her the Kermit page and she said 'Whoever's doing this ... this is total bologna.'

    Also, to your struggles ... she's a fan of Handwriting Without Tears.

    upofadown(3019) about 24 hours ago [-]

    >I'm interested in the idea of a font that's useful for early readers, ...

    I stumbled across Andika[1] while looking for examples of high legibility typefaces. It's supposed to be all about making the problem characters more easily distinguishable for new readers.

    [1] https://software.sil.org/andika/

    empressplay(1488) about 21 hours ago [-]

    Open Dyslexic kind of looks like a kid-font while being easy to read: https://opendyslexic.org

    Voultapher(10000) 1 day ago [-]

    Please don't mess with scrolling, it's such a needless turn off. Didn't continue reading afterwards.

    kh_hk(10000) 1 day ago [-]

    only microsoft, on their design blog no less





    Historical Discussions: Emacs Lisp Elements (April 12, 2025: 348 points)

    (348) Emacs Lisp Elements

    348 points 6 days ago by robenkleene in 410th position

    protesilaos.com | Estimated reading time – 9 minutes | comments | anchor

    In the most basic case of Emacs Lisp code, you have lists that are either evaluated or not (Symbols, balanced expressions, and quoting). If you get a little more fancy, you have lists that are only partially evaluated (Partial evaluation inside of a list). Sometimes though, you look at a piece of code and cannot understand why the normal rules of quoting and evaluation do not apply. Before you see this in action, inspect a typical function call that also involves the evaluation of a variable:

    (concat my-greeting-in-greek ' ' 'Πρωτεσίλαε')
    

    You encountered this code in the section about partial evaluation. What you have here is a call to the function concat, followed by three arguments. One of these arguments is a variable, the my-greeting-in-greek. When this list is evaluated, what Emacs actually does is to first evaluate the arguments, including my-greeting-in-greek, in order to get their respective values and only then to call concat with those values. You can think of the entire operation as follows:

    • Here is a list.
    • It is not quoted.
    • So you should evaluate it.
    • The first element is the name of the function.
    • The remaining elements are arguments passed to that function.
    • Check what the arguments are.
    • Evaluate each of the arguments to resolve it to its actual value.
    • Strings are self-evaluating, while the my-greeting-in-greek is a variable.
    • You now have the value of each of the arguments, including the value of the symbol my-greeting-in-greek.
    • Call concat with all the values you got.

    In other words, the following two yield the same results (assuming a constant my-greeting-in-greek):

    (concat my-greeting-in-greek ' ' 'Πρωτεσίλαε')
    (concat 'Γεια σου' ' ' 'Πρωτεσίλαε')
    

    This is predictable. It follows the basic logic of the single quote: if it is quoted, do not evaluate it and return it as-is, otherwise evaluate it and return its value. But you will find plenty of cases where this expected pattern is seemingly not followed. Consider this common case of using setq to bind a symbol to the given value:

    (setq my-test-symbol 'Protesilaos of Cyprus')
    

    The above expression looks like a function call, meaning that (i) the list is not quoted, (ii) the first element is the name of a function, and (iii) the remaining elements are arguments passed to that function. In a way, this is all true. Though you would then expect the my-test-symbol to be treated as a variable, which would be evaluated in place to return its result which would, in turn, be the actual argument passed to the function. However, this is not how setq works. The reason is that it is a special case that internally does this:

    (set 'my-test-symbol 'Protesilaos of Cyprus')
    

    This is where things are as expected. There is no magic happening behind the scenes. The setq, then, is a convenience for the user to not quote the symbol each time. Yes, this makes it a bit more difficult to reason about it, though you get used to it and eventually it all makes sense. Hopefully, you will get used to such special forms, as you find them with setq but also with defun, among many others. Here is a function you have already seen:

    (defun my-greet-person-from-country (name country)
      'Say hello to the person with NAME who lives in COUNTRY.'
      (message 'Hello %s of %s' name country))
    

    If the normal rules of evaluation applied, then the list of parametes should be quoted. Otherwise, you would expect (name country) to be interpreted as a function call with name as the symbol of the function and country as its argument which would also be a variable. But this is not what is happening because defun will internally treat that list of parameters as if it was quoted.

    Another common scenario is with let (Control flow with if-let* and friends). Its general form is as follows:

    ;; This is pseudo-code
    (let LIST-OF-LISTS-AS-VARIABLE-BINDINGS
      BODY-OF-THE-FUNCTION)
    

    The LIST-OF-LISTS-AS-VARIABLE-BINDINGS is a list in which each element is a list of the form (SYMBOL VALUE). Here is some actual code:

    (let ((name 'Protesilaos')
          (country 'Cyprus'))
      (message 'Hello %s of %s' name country))
    

    Continuing with the theme of special forms, if let was a typical function call, the LIST-OF-LISTS-AS-VARIABLE-BINDINGS would have to be quoted. Otherwise, it would be evaluated, in which case the first element would be the name of the function. But that would return an error, as the name of the function would correspond to another list, the (name 'Protesilaos'), rather than a symbol. Things work fine with let because it internally does the quoting of its LIST-OF-LISTS-AS-VARIABLE-BINDINGS.

    Expect similar behaviour with many special forms as well as with macros such as the popular use-package, which is used to configure packages inside of your Emacs initialisation file. How each of those macros works depends on the way it is designed. I will not delve into the technicalities here, as I want the book to be useful long-term, focusing on the principles rather than the implementation details that might change over time.

    To learn what a given macro actually expands to, place the cursor at the end of its closing parenthesis and call the command pp-macroexpand-last-sexp. It will produce a new buffer showing the expanded Emacs Lisp code. This is what is actually evaluated in the macro's stead.

    With those granted, it is time to write a macro. This is like a template, which empowers you to not repeat yourself. Syntactically, a macro will most probably depend on the use of the quasi-quote, the comma operator, and the mechanics of splicing (Partial evaluation inside of a list). Here is a simple scenario where we want to run some code in a temporary buffer while setting the default-directory to the user's home directory.

    (defmacro my-work-in-temp-buffer-from-home (&rest expressions)
      'Evaluate EXPRESSIONS in a temporary buffer with `default-directory' set to the user's home.'
      `(let ((default-directory ,(expand-file-name '~/')))
         (with-temp-buffer
           (message 'Running all expression from the `%s' directory' default-directory)
           ,@expressions)))
    

    In this definition, the &rest makes the following parameter a list. So you can pass an arbitrary number of arguments to it, all of which will be collected into a single list called EXPRESSIONS. The judicious use of partial evaluation ensures that the macro will not be evaluated right now but only when it is called. The arguments passed to it will be placed where you have specified. Here is a call that uses this macro:

    (progn
      (message 'Now we are doing something unrelated to the macro')
      (my-work-in-temp-buffer-from-home
       (message 'We do stuff inside the macro')
       (+ 1 1)
       (list 'Protesilaos' 'Cyprus')))
    

    If you place the cursor at the closing parenthesis of my-work-in-temp-buffer-from-home, you will be able to confirm what it expands to by typing M-x (execute-extended-command) and then invoking the command pp-macroexpand-last-sexp. This is what I get:

    (let ((default-directory '/home/prot/'))
      (with-temp-buffer
        (message 'Running all expression from the `%s' directory' default-directory)
        (message 'We do stuff inside the macro')
        (+ 1 1)
        (list 'Protesilaos' 'Cyprus')))
    

    Piecing it together with the rest of the code in its context, I arrive at this:

    (progn
      (message 'Now we are doing something unrelated to the macro')
      (let ((default-directory '/home/prot/'))
        (with-temp-buffer
          (message 'Running all expression from the `%s' directory' default-directory)
          (message 'We do stuff inside the macro')
          (+ 1 1)
          (list 'Protesilaos' 'Cyprus'))))
    

    With this example in mind, consider Elisp macros to be a way of saying "this little thing here helps me express this larger procedure more succinctly, while the actual code that runs is still that of the latter."

    The above macro I wrote has its body start with a quasi-quote, so you do not get to appreciate the nuances of evaluation within it. Let me show you this other approach, instead, where I write a macro that lets me define several almost identical interactive functions (Make your interactive function also work from Lisp calls).

    (defmacro my-define-command (name &rest expressions)
      'Define command with specifier NAME that evaluates EXPRESSIONS.'
      (declare (indent 1))
      (unless (symbolp name)
        (error 'I want NAME to be a symbol'))
      (let ((modifined-name (format 'modified-version-of-%s' name)))
        `(defun ,(intern modifined-name) ()
           (interactive)
           ,(message 'The difference between `%s' and `%s'' modifined-name name)
           ,@expressions)))
    

    The my-define-command can be broadly divided into two parts: (i) what gets evaluated outright and (ii) what gets expanded for further evaluation. The latter part starts with the quasi-quote. This distinction is important when we call the macro, because the former part will be executed right away so if we hit the error, it will never expand and then run the EXPRESSIONS. Try pp-macroexpand-last-sexp with the following to see what I mean. For your convenience, I include the macro expansions right below each case.

    (my-define-command first-demo
      (message 'This is what my function does')
      (+ 1 10)
      (message 'And this'))
    ;; =>
    ;;
    ;; (defun modified-version-of-first-demo nil
    ;;   (interactive)
    ;;   'The difference between 'modified-version-of-first-demo' and 'first-demo''
    ;;   (message 'This is what my function does')
    ;;   (+ 1 10)
    ;;   (message 'And this'))
    (my-define-command second-demo
      (list 'Protesilaos' 'Cyprus')
      (+ 1 1)
      (message 'Arbitrary expressions here'))
    ;; =>
    ;;
    ;; (defun modified-version-of-second-demo nil
    ;;   (interactive)
    ;;   'The difference between 'modified-version-of-second-demo' and 'second-demo''
    ;;   (list 'Protesilaos' 'Cyprus')
    ;;   (+ 1 1)
    ;;   (message 'Arbitrary expressions here'))
    (my-define-command 'error scenario'
      (list 'Will' 'Not' 'Reach' 'This')
      (/ 2 0))
    ;; => ERROR...
    

    Do you need macros? Not always, though there will be cases where a well-defined macro makes your code more elegant. What matters is that you have a sense of how evaluation works so that you do not get confused by all those parentheses. Otherwise you might expect something different to happen than what you actually get.




    All Comments: [-] | anchor

    algo_lover(10000) 6 days ago [-]

    One of the good things to happen in emacs was the inclusion of `seq.el`. It makes easy functional operation over sequences, so no longer need `dash.el` or `cl-lib.el`. (dash still has many more functions inspired by clojure which is awesome when you need them)

    But I still wish the emacs community could adopt a modern data structure library. It's difficult to consolidate usage of sequences (lists/vectors) with alists and plists. This would make it so much more accessible.

    tom_(10000) 6 days ago [-]

    Thanks for the tip. I'd managed to miss the addition of these. I had somehow noticed the addition of the newer string- functions though, and immediately found them a huge improvement over the mishmash of randomly-named crap that was there before, so I expect seq- to be similarly transformative.

    I immediately notice there's seq-filter, which can kill off one of my helper routines. And then (now I'm looking...) I've discovered this was always equivalent to cl-remove-if-not. But I never realised, because of the mystery meat naming conventions.

    ajross(10000) 6 days ago [-]

    Seems clear and useful. That said, there's nothing particularly bad or inaccessible about the actual Emacs Lisp manual: https://www.gnu.org/software/emacs/manual/html_mono/elisp.ht...

    Or the official tutorial: https://www.gnu.org/software/emacs/manual/html_mono/eintr.ht... (which to be clear I haven't read, but have heard nice things about).

    Of all the things for which emacs can be criticised, documentation rigor is really not one.

    db48x(2985) 6 days ago [-]

    Agreed; Emacs is the gold standard for documentation. It comes with a reference manual (400k words), an Emacs Lisp reference (600k words), _and_ 64 other manuals for individual Emacs modes or features including one just for Calc mode (250k words), a manual just for Org mode (130k words), one for Gnus (180k words) etc. All told it adds up to about 2.6 million words of documentation.

    Still, another manual written from a different perspective probably won't hurt anything.

    spudlyo(2037) 5 days ago [-]

    One reasons Prot himself was able to become a bonafide Emacs Guru in just a few years is because he's the kind of person who reads manuals. He speaks highly of the included docs, and often credits them for his knowledge.

    tikhonj(3216) 6 days ago [-]

    I've had a great time using Emacs Lisp over the past 15 years: it's one of the easiest ways to quickly whip up personalized tools for my own use, and, at the same time, my code has been surprisingly resilient and stable over this time.

    And this is despite the fact that Emacs Lisp routinely flouts every single software engineering 'best practice'. The language is dynamically scoped by default! It simply doesn't have namespaces! Static types? Hah! (And I, an inveterate Haskeller, don't even miss them.) You can—and people routinely do—hook directly into all sorts of implementation details from other parts of the codebase.

    And yet it just works. And it works remarkably well.

    My theory: what matters isn't 'best practices', it's have a coherent conceptual design and code that reflects that design. Emacs is designed around a small but expressive set of core concepts that it uses in a consistent manner. Text with properties, buffers, modes, commands, customization variables... Almost everything more complex in Emacs is structured out of these (+ a handful more), and, once you've internalized them, it's surprisingly easy to both learn new higher-level tools and to write your own.

    The design of both the user interface and the code directly reflect these concepts which gives us a naturally close connection between the UI and the code (it's almost trivial to jump from an interaction to the code that powers it), makes both UI and code units effortlessly composable and generally makes it easier to understand what's going on and how we can change it.

    nothrabannosir(10000) 6 days ago [-]
    > My theory: what matters isn't 'best practices', it's have a coherent conceptual design and code that reflects that design.

    Just because something has a >0 level of success doesn't mean there are no negatives. 'Best practices don't matter because Emacs Lisp doesn't follow them and it just works' isn't a valid argument: it could very well be that Emacs (Lisp) would shine fifteen-fold brighter if it did also follow those practices. It just happens that having those elements you mentioned as positives are enough to keep the train going in spite of the shortcomings.

    I use Emacs and program in Emacs Lisp to scratch my itches, and I agree that there are elements that make it work and hey, I'm still here, but I will bet good money that a parallel universe with Emacs Lisp', Now With Namespaces would work even better.

    'Working' isn't binary.

    golly_ned(10000) 6 days ago [-]

    I've consistently failed to make writing elisp net positive for me for basically anything. I use it as a configuration language, and even then, for functions longer than a few lines, it's still a lot of coding for very little benefit. I just can't find things to improve in such a way that it'll actually be worth writing elisp code for, especially compared to other tools (like a quick Python script or even a bash one-liner), or things within Emacs. What are the things you've written in elisp that have helped you?

    pkkm(3280) 5 days ago [-]

    > My theory: what matters isn't 'best practices', it's have a coherent conceptual design and code that reflects that design.

    I think so too; that said, the language could definitely be better. It suffers from a lot of primitive obsession. Instead of structs, you often find either vectors or lists with predefined element positions; instead of map, ordered map, and multimap types, it's just various kinds of lists everywhere. They're not even used consistently: for the same thing, one package may use an alist and another a plist.

    funcDropShadow(10000) 5 days ago [-]

    Don't forget the self-documenting aspect. The manual is included, the api documentation is included, you can ask emacs which command is executed when you click somewhere or when you press a button. I recently tried to do the same thing in Intellij, pita. Not only, can you find documentation, you can jump to the source code, inspect variable values at runtime, and debug or profile everything. All of that from inside the environment.

    sauercrowd(10000) 6 days ago [-]

    Prot - the author - is a pretty incredible guy. He maintains a bunch of nice Emacs packages and themes.

    But maybe even more remarkable: he got kicked out of his flat in Greece, couldn't afford a new place, bought a small plot of land in the mountains and started building a hut from materials he was able to afford or from things neighbours gave him. Really the bare minimum (he often sat in his hut with a jacket in winter cause it wasn't well isolated/heated)

    Absolutely inspiration, all documented on his YouTube channel https://youtube.com/@protesilaos?si=MnjV7MhKtsT5RDSM

    hollerith(3600) 6 days ago [-]

    A video of the inside of the hut: https://www.youtube.com/watch?v=W2T_lihBs9I

    kleinishere(10000) 6 days ago [-]

    He also offers 20EUR/hr eMacs coaching. For those jumping in or graduating to a new level.

    https://protesilaos.com/coach/

    sakesun(10000) 5 days ago [-]

    A digital rishi.

    phforms(10000) 5 days ago [-]

    My Emacs wouldn't be the same without Prots modus themes[1], which I found to be a great foundation to build my own theme on top of. I am grateful for all the work he did for the Emacs community.

    I also enjoy watching his videos where he talks about various philosophical topics from a very clear, pragmatic and down-to-earth perspective. My impression is that he is a really kind and humble person and that he lives by his philosophical insights, without bragging about his lifestyle or judging about how other people live their lifes.

    [1]: https://protesilaos.com/emacs/modus-themes

    ghfhghg(10000) 5 days ago [-]

    Thank you for sharing this. This is really interesting.

    qiine(10000) 5 days ago [-]

    Dam emacs people are really built different ;p

    ptx(3630) 5 days ago [-]

    Hmm, there seems to be no mention of dynamic vs. lexical binding, which is a difference from some other Lisps I was hoping to gain some insight on.

    sctb(10000) 5 days ago [-]

    If you're still interested: https://www.gnu.org/software/emacs/manual/html_node/elisp/Le.... Basically modern Emacs Lisp works like Common Lisp.





    Historical Discussions: Generate videos in Gemini and Whisk with Veo 2 (April 15, 2025: 347 points)

    (347) Generate videos in Gemini and Whisk with Veo 2

    347 points 3 days ago by meetpateltech in 78th position

    blog.google | Estimated reading time – 2 minutes | comments | anchor

    Starting today, Gemini Advanced users can generate and share videos using our state-of-the-art video model, Veo 2. In Gemini, you can now translate text-based prompts into dynamic videos. Google Labs is also making Veo 2 available through Whisk, a generative AI experiment that allows you to create new images using both text and image prompts, and now animate them into videos.

    How to create videos with Gemini

    Veo 2 represents a leap forward in video generation, designed to produce high-resolution, detailed videos with cinematic realism. By better understanding real-world physics and human motion, it delivers fluid character movement, lifelike scenes and finer visual details across diverse subjects and styles.

    To generate videos, select Veo 2 from the model dropdown in Gemini. This feature creates an eight-second video clip at 720p resolution, delivered as an MP4 file in a 16:9 landscape format. There is a monthly limit on how many videos you can create, but we will notify you as you approach it.

    Creating videos with Gemini is simple: just describe the scene you want to create — whether it's a short story, a visual concept, or a specific scene — and Gemini will bring your ideas to life. The more detailed your description, the more control you have over the final video. This opens up a world of fun creative possibilities, letting your imagination go wild to picture unreal combinations, explore varied visual styles from realism to fantasy, or quickly narrate short visual ideas.

    One of the best parts of creating is sharing with others. Sharing your video on mobile is easy: simply tap the share button to quickly upload engaging short videos to platforms like TikTok and YouTube Shorts.




    All Comments: [-] | anchor

    delichon(10000) 3 days ago [-]

    I think I would buy 'yes' shares in a Polymarket event that predicts a motion picture created by a single person grossing more than $100M by 2027.

    kevingadd(1592) 3 days ago [-]

    I think the obstacles there are distribution and IP rights. I think we will see content like that find widespread appeal and success but actually turning it into $100m in revenue requires having the copyright (at present, not possible for AI-generated content) and being able to convince a distributor to invest in it. Those both seem like really tough things to solve.

    NitpickLawyer(10000) 3 days ago [-]

    I think you might need qualifiers on that. Are we talking an unknown / unrelated person living in the proverbial basement, or are we talking a famous movie director? I could see Spielberg or Cameron managing to make something like that happen on their name + AI alone.

    If we're talking regular people, the best chance would be someone like Andy Weir, blogging their way to a successful book, and working on the side on a video project. I wouldn't be surprised if something along these lines happens sooner or later.

    xnx(1016) 3 days ago [-]

    We've got a pretty good datapoint along that trajectory with Flow. Almost entirely one person and has grossed $36 million. https://en.wikipedia.org/wiki/Flow_(2024_film)

    silksowed(10000) 3 days ago [-]

    very excited to play around. will be attempting to see if i can get character coherence between runs. the issue with the 8s limit is its hard to stitch them together if characters are not consistent. good for short form distribution but not youtube mini series or eventual movies. another comment about IP license is indeed an issue but its why i am looking towards classical works beyond their copyright dates. my goal is to eventually work from short form, to youtube to eventual short films. tools are limited in their current form but the future is promising if i get started now.

    jddj(10000) 3 days ago [-]

    I came to the same sort of conclusion when watching Kitsune, which I think was one person and VEO https://vimeo.com/1047370252

    Granted, 5 minutes isn't 1h30 but it's not a million miles away either.

    tracerbulletx(10000) 3 days ago [-]

    Everyone keeps ignoring supply and demand when talking about the impacts of AI. Let's just assume it really gets so good you can do this and it doesn't suck.

    Yes the costs will get so low that there will be almost no barrier to making content but if there is no barrier to making content, the ROI will be massive, and so everyone will be doing it, you can more or less have the exact movie you want in your head on demand, and even if you want a bespoke movie from an artist with great taste and a point of view there will be 10,000 of them every year.

    colesantiago(839) 3 days ago [-]

    My prediction is on track to this and this was made only 4 months ago.

    https://news.ycombinator.com/item?id=42368951

    SirMaster(10000) 3 days ago [-]

    Well text generation is way ahead of video generation. Have we seen anyone create something like a best selling or high grossing novel with an LLM yet?

    bookofjoe(20) 3 days ago [-]

    Me too. Sam Altman recently predicted that we will see a one-person unicorn company in the near future.

    hammock(949) 2 days ago [-]
    https://en.wikipedia.org/wiki/Flow_(2024_film)

    $36 million dollars and an Academy Award. A l m o s t done by just one person. And entirely with open source software.

    The guy's previous movie was a true one-man show but didn't really get screenings: https://en.wikipedia.org/wiki/Away_(2019_film)

    baxtr(2973) 2 days ago [-]

    I can't exactly say why, but I find this 'single person $1B company' meme utterly annoying.

    tclancy(10000) 2 days ago [-]

    But will it cost less than $100M to render?

    minimaxir(32) 3 days ago [-]

    Whisk itself (https://labs.google/fx/tools/whisk) was released a few months ago under the radar as a demo for Imagen 3 and it's actually fun to play with and surprisingly robust given its particular implementation.

    It uses a prompt transmutation trick (convert the uploaded images into a textual description; can verify by viewing the description of the uploaded image) and the strength of Imagen 3's actually modern text encoder to be able to adhere to those long transmuted descriptions for Subject/Scene/Style.

    torginus(10000) 3 days ago [-]

    Why text? why not encode the image into some latent space representation, so that it can survive a round-trip more or less faithfully?

    cubefox(1892) 3 days ago [-]

    > This tool isn't available in your country yet

    > Enter your email to be notified when it becomes available

    (Submit)

    > We can't collect your emails at the moment

    j45(3605) 2 days ago [-]

    Seems to require a paid subscription to actually use all the way thru.

    strangattractor(10000) 3 days ago [-]

    Google is the new Microsoft in the sense that they can Embrace, extend, and extinguish their competition. No matter what xAI or OpenAI or 'anything'AI tries to build Google will eventually copy and destroy them at scale. AI (or A1 as our Secretary of Education calls it) is interesting because it is more difficult to protect the IP other than as trade secrets.

    mritun(3640) 3 days ago [-]

    > Google will eventually copy...

    Weird take given Google basically invented and released through well written papers and open-source software the modern deep learning stack which all others build on.

    Google was being disses because they failed to make any product and were increasingly looking like Kodak/Xerox one trick pony. It seems they have woken up from whatever slumber they were in

    navigate8310(10000) 2 days ago [-]

    > ... Google will eventually copy and destroy them at scale

    Google Wave and Google + are a fine example of how they tried to extinguish the then nascent Facebook

    smallnix(10000) 3 days ago [-]

    Brave to make ads with the Ghibli style. Would have thought that's burned by now.

    minimaxir(32) 3 days ago [-]

    Looking at the video, I think there's shenanigans afoot. The anime picture they input as a sample image is more generic anime, but the example output image is clearly Ghibli-esque in the same vein as the 4o image generations.

    gh0stcat(10000) 3 days ago [-]

    No one has any morals or soul at this point. It's all garbage in, garbage out.

    torginus(10000) 3 days ago [-]

    I am not really technical in this domain, but why is everything text-to-X?

    Wouldn't it be possible to draw a rough sketch of a terrain, drop a picture of the character, draw a 3D spline for the walk path, while having a traditional keyframe style editor, and give certain points some keyframe actions (like character A turns on his flashlight at frame 60) - in short, something that allows minute creative control just like current tools do?

    Rebelgecko(10000) 3 days ago [-]

    You can do image+text as well (although maybe the results are better if you do raw image to prompted image to video?)

    minimaxir(32) 3 days ago [-]

    Everything is text-to-X because it's less friction and therefore more fun. It's more a marketing thing.

    There are many workflows for using generative AI to adhere to specific functional requirements (the entire ComfyUI ecosystem, which includes tools such as LoRAs/ControlNet/InstantID for persistence) and there are many startups which abstract out generative AI pipelines for specific use cases. Those aren't fun, though.

    nodja(10000) 3 days ago [-]

    Dataset.

    To train these models you need inputs and expected output. For text-image pairs there exists vast amounts of data (in the billions). The models are trained on text + noise to output a denoised image.

    The dataset of sketch-image pairs are significantly smaller, but you can finetune an already trained text->image model using the smaller dataset by replacing the noise with a sketch, or anything else really, but the quality of the output of the finetuned model will highly depend on the base text->image model. You only need several thousand samples to create a decent (but not excellent) finetune.

    You can even do it without finetuning the base model and training a separate network that applies on top of base text->image model weights, this allows you to have a model that essentially can wear many hats and do all kinds of image transformations without affecting the performance of the base model. These are called controlnets and are popular with the stable diffusion family of models, but the general technique can be applied to almost any model.

    wepple(3453) 3 days ago [-]

    LLMs were entirely text not that long ago.

    Multi modality is new; you won't have to wait too long until they can do what you're describing.

    TacticalCoder(10000) 2 days ago [-]

    I want ...-to-3D-scene. Then I can use Blender to render the resulting picture and/or vid. Be it 'text-to-3D-scene' or 'image-to-3D-scene'.

    And there's a near infinity of data out there to train 'image-to-3D-scene' models. You can literally take existing stuff and render it from different angles, different lighting, different background, etc.

    I've seen a few unconclusive demos of '...-to-3D-scene' but this 100% coming.

    I can't wait to sketch out a very crude picture and have an AI generate me a 3D scene out of that.

    > ... in short, something that allows minute creative control just like current tools do?

    With 3D scenes generated by AI, one shall be able to decide to just render it as it (with proper lighting btw) or one shall all all the creative control he wants.

    I want this now. But I'll settle with waiting a bit.

    P.S: same for songs and sound FX by the way... I want the AI to generate me stuff I can import in an open-source DAW. And this is 100% coming too.

    fragmede(1245) 2 days ago [-]

    image-to-image speech-to-speech exists; yes almost everything is text-to, but there are exceptions

    spyder(10000) 2 days ago [-]

    Huh 'everything text-to-X'? Most video gen AI has image-to-video option too either as a start or end frame or just as a reference for subjects and environment to include in the video. Some of them even has video-to-video options too, to restyle the visuals or reuse motions from the reference video.

    volkk(10000) 3 days ago [-]

    this is semi-relevant -- and I do love how technically amazing this all is, but a massive caveat for someone who's been dabbling hard in this space, (images+video) -- I cannot emphasize enough how draining text-2-<whatever> is. even when a result comes out that's kind of cool, I feel nothing because it wasn't really me who did it.

    I would say 97% of the time, the results are not what I want (and of course that's the case, it's just textual input) and so I change the text slightly, and a whole new thing comes out that is once again incorrect, and then I sit there for 5minutes while some new slop churns out of the slop factory. All of this back and forth drains not only my wallet/credits, but my patience and my soul. I really don't know how these 'tools' are ever supposed to help creatives, short of generating short form ad content that few people really only want to work on anyway. So far the only products spawning from these tools are tiktok/general internet spam companies.

    The closest thing that I've bumped into that actually feels like it empowers artists is https://github.com/Acly/krita-ai-diffusion that plugs into Krita and uses a combination of img2img with masking and txt2img. A slightly more rewarding feedback loop

    dsign(3098) 2 days ago [-]

    > So far the only products spawning from these tools are tiktok/general internet spam companies.

    Help me here. If tiktok becomes filled with these, will it mean that watching tiktok 'curated' algorithmic results will be about digesting AI content? Like, going to a restaurant to be served rubber balloons full of air that then people will do their best to swallow whole?[^1]. Could this be it? The demise of the algorithm? Or will people just swallow rubber balloons filled with air?

    [^1]: Do please use this sentence as a prompt :-)

    hu3(2897) 3 days ago [-]

    is there a tool to generate AI videos that doesn't change the original picture so much?

    Whisk redraws the entire thing and it barely resembles source picture.

    vunderba(10000) 3 days ago [-]

    Wan 2.1 can do a decent job with i2v.

    https://comfyanonymous.github.io/ComfyUI_examples/wan

    CSMastermind(3197) 3 days ago [-]

    You want Kling: https://klingai.com/global/

    Everything else performs terribly at that task, though a bunch including Sora technically have that functionality.

    Google's tool forcing you to redraw the image is silly.

    rishabhjain(10000) 2 days ago [-]

    Try Snowpixel https://snowpixel.app/

    wewewedxfgdf(10000) 3 days ago [-]

    1: Press release about amazing AI development.

    2: 'Try it now!' the release always says.

    3: I go try it.

    4: Doesn't work. In this case, I give it a prompt to make a video and literally nothing happens, it goes back to the prompt. In the case of the breathtakingly astonishing Gemini 2.5 Coding - attach to source code file to the prompt 'file type not supported'.

    That's the pattern - I've come to expect it and was not disappointed with Google Gemini 2.5 coding nor with this video thing they are promoting here.

    siva7(10000) 3 days ago [-]

    you're using it wrong. change file ending to .txt instead

    throwup238(465) 3 days ago [-]

    On the contrary I had completely written off Google until a few days ago.

    Gemini 2.5 Pro is finally competitive with GPT/Claude, their Deep Research is better and has a 20/day limit rather than 10/month, and now with a single run of Veo 2 I've gotten a much better and coherent video than from dozens of attempts at Sora. They finally seem to have gotten their heads collectively unstuck from their rear end (but yeah it sucks not having access).

    martinald(10000) 3 days ago [-]

    I really don't know why Google especially seems to struggle with this so much.

    While Google have really been 'cooking' recently, every launch they do is like that. Gemini 2.5 was great but for some reason they launched it on web first (which still didn't list it) then a day or so later on app, at which point I thought it was total vapourware.

    This is the same - I have gemini advanced subscription, but it is nowhere to be seen in mobile or app. If you're having scale/rollout issues how hard is it to put the model somewhere and say 'coming really soon'? You don't know if it's not launched yet or you are missing where to find it.

    nolist_policy(10000) 2 days ago [-]

    On Chrome you can share your whole Project directory to Gemini. I think it uses the File System Access api which Firefox doesn't support.

    ninininino(10000) 3 days ago [-]

    As usual with Gen AI the curated demo itself displays misunderstanding and failure to meet the prompt. In the 'Glacial Cavern' demo, the 'candy figures' are not within the ice walls but are in the foreground/center of the scene.

    These things are great (I am not being sarcastic, I mean it when I say great) if and only if you don't actually care about all of your requirements being met, but if exactness matters they are mind-bogglingly frustrating because you'll get so close to what you want but some important detail is wrong.

    dsign(3098) 2 days ago [-]

    Indeed.

    Even a bad VFX artist has so much more control over what they do. I think that the day 'text-to-video' reaches the level of control that said bad VFX artist has from week one, it will be because we have sentient AIs which will, for all ends and purposes, be people.

    That's not to say that there is no place for AI-generated content. Worst case scenario, it will be so good at poisoning the well that people will need to find another well.

    deyiao(10000) 2 days ago [-]

    Content moderation is incredibly frustrating — it might even be the key reason why Veo2 and even Gemini could ultimately fail. I just want to make some fun videos where my kid plays a superhero, but it keeps failing.

    itake(10000) 2 days ago [-]

    I have the same issues with OpenAI. Supposedly Grok is better, but their quality isn't as high.

    voxic11(10000) 2 days ago [-]

    Are you trying to make your kid play a superhero or a specific copyrighted superhero? I'm just asking because I would expect them to attempt to prevent copyright infringement but I'm not sure why they would prevent you from depicting superheros which don't infringe on copyright. Maybe they are attempting to prevent any depictions of children, superhero or otherwise?

    Palmik(2404) 2 days ago [-]

    There's also Google Vids, also using Veo 2 under the hood. Product confusion :) https://workspace.google.com/products/vids/

    j45(3605) 2 days ago [-]

    This seems very different and much more developed in a different direction.





    Historical Discussions: I ditched my laptop for a pocketable mini PC and a pair of AR glasses (April 12, 2025: 345 points)

    (345) I ditched my laptop for a pocketable mini PC and a pair of AR glasses

    345 points 6 days ago by T-A in 439th position

    www.tomsguide.com | Estimated reading time – 16 minutes | comments | anchor

    I work best seated at my desk setup with multiple screens in front of me However, when I travel or just need to get out of the house for a bit, I can't bring my setup with me—or at least I thought I couldn't.

    Now I know what you're thinking. Why don't I just go with one of the best laptops instead? Well, I've tried and while my trusty ThinkPad hasn't let me down yet, I still end up using it with extra peripherals and oftentimes, a portable monitor too, which kind of defeats the purpose of using a laptop in the first place.

    Over the past few years, I've also downsized from a full desktop and I now do the majority of my work from one of the best mini PCs. I like the experience of using a mini PC over a desktop or a laptop so much that I even took a mini PC with me to Taiwan last summer.

    You may like

    Of all the mini PCs I've tested and reviewed, one in particular has stuck with me and that's due to how portable it is and the fact that it uses a USB-C port for power instead of a barrel port connector. After trying out a pair of AR glasses for the first time when I spent two weeks with the iPad mini, I decided why not combine the two together and throw one of the best power banks into the mix for good measure. Then, I could truly work from anywhere just like I do from the comfort of my home office.

    I've been using a pocketable mini PC, a pair of AR glasses and a massive 25,000 mAh power bank together for the past two weeks and it's completely transformed the way I work. Here's how I came up with this novel approach to on-the-go computing to better fit my unique workflow.

    Pocketable meets private

    (Image credit: Tom's Guide)

    Last year, I got to try out the Khadas Mind and even now, there's no mini PC quite like it. Instead of being rectangular or having a cube-like shape, the Mind looks a lot more like one of the best external hard drives. Not only is it powerful, it's also pocketable thanks to its tiny 5.75 x 4.13 x 0.79-inch frame.

    Another thing that sets the Khadas Mind apart from other mini PCs is that Khadas has created a whole ecosystem of accessories around it. There's the standard Mind Dock which adds more ports, dual-monitor support, a fingerprint reader and even a volume knob on the side, as well as the premium Mind Graphics dock which adds even more ports as well as a full-size Nvidia RTX 4060 Ti desktop graphics card.

    Khadas is also working on a portable display with a keyboard that magnetically attaches to this mini PC like both of these docks do if you prefer a more laptop-like experience.

    (Image credit: Tom's Guide)

    The main reason I chose the Khadas Mind for this project is because of its ports and portability though. Like I said before, it uses a USB-C port for power (on the far left) but it also has a second, full-featured USB-C port for video out in addition to two USB-A ports and an HDMI port. With one USB-C port for power and another for video, the Khadas Mind turned out to be the perfect fit for this one-of-a-kind mobile setup.

    After I reviewed the original Mind last year, Khadas unveiled the Mind 2S at CES back in January as a more powerful followup to the Mind 2. I had already sent the Mind 1 back, so I reached out to Khadas directly and they sent over this new more powerful mini PC for this project, though I am working on a review of it too.

    While the Mind 1 handled everything I threw at quite well, the Mind 2S is an absolute powerhouse with an Intel Core Ultra 7 255H processor, 64GB of LPDDR5X RAM and a 2TB SSD. Khadas also upgraded its two USB-C ports to Thunderbolt 4 ones for faster data transfer speeds and enhanced display capabilities.

    (Image credit: Future)

    As I haven't had a chance to try out a pair of the best smart glasses yet, I had my colleague Jason England recommend a pair for this project. He suggested the Xreal One AR glasses as they have Xreal's X1 spatial computing chip built-in. This chip gives you full control over the glasses' 3 Degrees of Freedom tracking and also lets you go from a standard 16:9 display to an ultrawide one at the push of a button.

    Another thing that I really like about the Xreal One glasses is that unlike the Meta Quest 3 or even the Apple Vision Pro, they don't have a built-in battery. Not only does this make them lighter and more comfortable to wear, you don't have to worry about charging which would make my mini PC/AR glasses setup more difficult to use at a moment's notice. Instead, they draw their power from the device they're connected to.

    (Image credit: Tom's Guide)

    After unboxing the Xreal Ones that Xreal sent over to help me turn this dream into reality, I was very impressed by how they worked immediately when plugged into the iPad mini. I didn't have to configure anything and they were truly plug and play.

    If you're thinking about trying out a pair of AR glasses yourself, just make sure that your smartphone, tablet or whatever device you want to use them with comes equipped with DisplayPort Alt Mode over USB-C. Otherwise, you're going to need an adapter, which adds a bit of bulk and makes using them slightly more complicated.

    Powered and portable

    (Image credit: Tom's Guide)

    With the Khadas Mind 2S and the Xreal One AR glasses in hand, I just needed one more thing: a way to power them. At home or at a hotel, I was able to power this whole setup using a GaN charger but I wanted a way to use it during those times when there wasn't an outlet nearby.

    To that end, I decided to pick up the 25,000 mAh version of the Ugreen Nexode Power Bank. I've always had a great experience with Ugreen's chargers, cables and other products in the past, so I wanted to see how well its largest power bank performed. Another reason that I picked this particular power bank is that it's flight approved but more on that later.

    (Image credit: Tom's Guide)

    With two USB-C ports at the top with one capable of putting out 100 watts and the other able to deliver 140 watts of power, I had more than enough power on hand for both the Khadas Mind 2S and the Xreal One AR glasses. I paired the two devices with my favorite budget mechanical keyboard ($40, Amazon) and a mini trackball mouse from Elecom.

    (Image credit: Tom's Guide)

    Much to my surprise, it didn't take long at all to get used to working while wearing AR glasses. Maybe it was because this was the same mechanical keyboard/trackball mouse combo I always bring with me while traveling but I settled in to using this setup in no time at all.

    Now though, it was time to take it out into the world and see whether it was really better for me than using a laptop.

    A whole new way to work

    (Image credit: Tom's Guide)

    As silly as I felt taking this picture at my local coffee shop, I actually didn't get any odd looks from the other patrons there. After connecting to the Wi-Fi, I got to work editing reviews and writing stories just like if I was back home.

    One thing that I really liked about wearing the Xreal One glasses is that you can change their transparency. For instance, at home, I really enjoyed using the completely black theater mode while watching content online. However, while at the coffee shop, I switched them to clear mode so I was still aware of my surroundings.

    (Image credit: Tom's Guide)

    You can't take native screenshots directly from within the Xreal One glasses but what you see inside them is a sight to behold. With ultrawide mode enabled, I was able to have two full-size Chrome windows side by side just like on the dual-monitor setup I put together last month. However, clear mode took the whole experience up a notch as it made it feel like I was using a transparent monitor straight out of Minority Report.

    Sitting at the window, I was able to watch the cars go by while I worked as if my desk was right up next to a window with a great view. I used to work in coffee shops a whole lot more back before I set up a home office in my house. With this setup though, I could easily see myself getting back out of the house and doing so again.

    (Image credit: Tom's Guide)

    When it was time to head out, packing everything up into my bag was a cinch and only took me a minute or so. Surprisingly, the Ugreen Nexode Power Bank is the heaviest item in this setup at just over 500 grams while the Khadas Mind 2S weighs 435 grams and the Xreal One glasses weigh 84 grams.

    Not just for coffee shops

    (Image credit: Tom's Guide)

    Now for the kicker. I actually brought this mini PC/AR glasses setup with me to New York last week when I went to try out the Nintendo Switch 2.

    As the tray table on an airplane is known for being notoriously small, I decided to break out all my gear and try to set it up there too. It was cramped getting everything out of my bag but once I had it all set up, I was able to get the full desktop experience while cruising at over 30,000 feet up in the air.

    After checking into my hotel, I broke out everything again and got to work. In fact, I wrote my entire hands-on review of the Switch 2 using this setup over the course of a few hours. Even though I was far from my desk, I felt right at home typing away with a pair of AR specs on my head. I even gave my guide on the best office chairs a big update while using this mobile setup and that too went surprisingly well.

    I know that even if you made it this far, you still might be wondering why I don't just work from a laptop instead. The big reason for me is that I love the feel of one of the best mechanical keyboards under my fingertips while typing along with the level of control and customizable buttons that I get with a trackball mouse. Another thing that has always turned me off from laptops is that you can't easily swap out a broken keyboard or upgrade their components, that is unless you get one from Framework. Likewise, I've yet to see a laptop with an ultrawide display and I doubt I will anytime soon.

    This setup has been kicking around in my head for months now but thanks to Khadas and Xreal's help, I got to make it a reality. And after using it for the past two weeks, I can honestly say it's even better than I expected it would be.

    So what about you? Could you see yourself spending a full day working with smart glasses instead of using a monitor? Likewise, would you try this setup out if you had the chance? Let me know in the comments!

    More from Tom's Guide




    All Comments: [-] | anchor

    java-man(3399) 5 days ago [-]

    I don't understand. Is this an ad?

    How long this setup lasts on a single charge? For half the price, one can get a macbook air with fantastic battery life and a good keyboard.

    tocs3(10000) 5 days ago [-]

    I was looking at wearable computer stuff years ago but gave up. The display was always the limiting factor. It would sometimes be nice to to walk around taking notes without holding a phone.

    sandspar(10000) 5 days ago [-]

    Guys who write about tech for a living tend to enjoy working with gadgets in their spare time. He's probably just having fun with a nifty idea.

    jasonjmcghee(2863) 5 days ago [-]

    it's hard to beat apple silicon MacBook airs right now. Used M1s sell for $300-400 (and $130 to have apple replace the battery if/when needed). If you buy an Anker battery pack (~25k mAh - $150 on sale) you can get another full charge.

    morninglight(10000) 5 days ago [-]

    Doesn't look like an ad but it may have been intended for the Onion.

    If you break out laughing while reading this, you are not alone.

    bee_rider(10000) 5 days ago [-]

    The guy in the article is using a mechanical keyboard. MacBooks keyboards are fine for what they are but generally enthusiasts prefer mechanical.

    The glasses... I mean, it's a totally different type of device, right? If nothing else, I'd love to never hunch over a laptop again. I dunno, haven't tried them, but they seem quite interesting.

    Spine replacements are pricey I think.

    specproc(10000) 5 days ago [-]

    The writing was nauseating. I lost track of the number of times the author said 'the best'.

    I honesty can't see the benefit over a small laptop.

    With the glasses, you're carrying more things, it's an expensive setup, you look like a gargoyle, you're partially blinded.

    I'm not sold at all.

    raffraffraff(3241) 5 days ago [-]

    You're right. This setup just doesn't work for most people. I've tried it (slightly different hardware but effectively a pair of 1080p OLED glasses with myopia dials, wirelesS 75% mech keyboard + mouse, MeLe Quieter 4C with battery pack. It's unwieldy, low res and awkward in real life. The battery doesn't last as long as a decent laptop.

    The only setup like this that works is the Apple, but it's due-wateringly expensive and heavy.

    If I was going to expand my mobile setup I'd just get a portable rechargeable monitor to stick beside my laptop.

    tcherasaro(10000) 5 days ago [-]

    This setup reminds me of "Snow Crash" that Neal Stephenson novel.

    eesmith(10000) 5 days ago [-]

    It reminds me of Steve Mann's WearComp. https://en.wikipedia.org/wiki/Wearable_computer#History

    plun9(10000) 5 days ago [-]

    Using AR glasses instead of computer monitors can prevent nearsightedness (myopia) because the virtual image is several meters away.

    kinow(10000) 5 days ago [-]

    Is that a fact? Has anything about it been published?

    ctrlp(10000) 5 days ago [-]

    What do you do if you already wear prescription lenses?

    system2(10000) 5 days ago [-]

    There are companies selling lenses for Xreal one. I saw one youtube reviewing them.

    raffraffraff(3241) 5 days ago [-]

    The Viture Pro XR have myopia dials. They work well. But I couldn't recommend them for any type of productivity. They're a novelty toy that suits in my drawer, depreciating.

    regularfry(3415) 5 days ago [-]

    They have a mounting for a lens clip where you can put custom prescription lenses. When I bought my Airs they came with a lens frame and opticians' blanks - basically reference lenses that show where the eyeline is - which you can take to an optician for them to use.

    I got my first set done at a high street optician - Specsavers in the UK - and they were able to do it based on some lens blanks they already had that were close enough in size to what XReal sent. Took less than a week to let me know they were done.

    But also there's a partner that XReal advertise on their site to do the job. When I got a new prescription recently I gave them a try, and they results are just as good. A little better, actually, but I can't tell what's them and what's having a newer prescription.

    I should point out that my lenses mainly correct astigmatism, so any models which only have myopia correction wouldn't be any good to me at all. It's got to be custom lenses for me, and it's fine.

    tocs3(10000) 5 days ago [-]

    Can AR glasses be used as just a monitor? I am under the impression that they are sort of smart devices. How do they get a video signal from the computer?

    fragmede(1245) 5 days ago [-]

    xreal air just have a usb-C wire coming down from the back of the glasses

    Borealid(10000) 5 days ago [-]

    VR headsets are usually quite smart.

    For better or for worse - and I personally think it's very much for the better - many AR glasses are a DisplayPort monitor that you wear on your face. They have inertial sensors and speakers, but the interface to the PC is Displayport-over-USBC for video to the glasses, USB Audio Class for the speakers, and usually a proprietary USB peripheral for the inertial measurements.

    Some AR glasses attempt to require being paired with a dedicated video phone-like device, largely to attempt to extract subscription revenue. Most do not.

    It's perfectly possible to drive a pair of AR glasses from an Android smartphone, a video-game-focused SBC, or a miniPC. Anything with DisplayPort video out at 1080p or better (3840x1080 if you want 3D videos).

    kotaKat(1999) 5 days ago [-]

    So, the Xreal glasses are (generally) a dumb USB-C DisplayPort alt-mode device. Plug glasses in, get video to the little displays in your eyes. With a companion app (not needed) you can have your computer do some heavy lifting and make virtual displays out of it.

    The new 'One' unit referenced in this review is the same but does have some smarts to do on-glasses processing of the virtual displays itself instead, if I understand it.

    Xreal also sells you some companion devices that are just little Android bricks to cast media to and from and play things from as well.

    bobsmooth(10000) 5 days ago [-]

    Checkout Voidstar Labs. He hacked a set of AR glasses to use as a teleprompter. https://www.youtube.com/watch?v=qAuwW7Wzrng

    tdeck(3637) 5 days ago [-]

    I thought it would be a mini laptop like these:

    https://gpdstore.net/product-category/gpd-mini-laptop/

    But no, he carries around a little Nuc style machine and a full, separate keyboard and charger. It's cool and all, but there's no way this whole jumble would fit in a pocket or be convenient to use on the go.

    Borealid(10000) 5 days ago [-]

    There was a recent announcement of a mini PC that was itself built into a folding keyboard - no screen. That would be the ideal device for this lifestyle.

    ganoushoreilly(3546) 5 days ago [-]

    I was hoping to see this too. I regularly travel with my vision pro and it has been fantastic. It's definitely bulky though. I also tend to carry a couple laptops for work and recently switch my windows laptop to a GPD pocket. While I like it (using right now), the keyboard has many nuances you have to adjust to. Both of those options end up with me bringing a small keyboard and mouse.

    I own a previous gen Xreal set and it just wasn't there for me resolution wise. I may have to try this newer gen and see.

    benoau(10000) 5 days ago [-]

    I think a 'NUC' is the logical conclusion if you don't want the screen and don't like the keyboard compromises! There's a lot of room for powerful devices in that space like discrete GPUs, or AI crunching, etc.

    But I think what really sells this concept to me - unless I'm on a MacBook, I'll have to carry my keyboard, mouse and maybe powerbank or charger anyway. It's definitely more compact than that!

    regularfry(3415) 5 days ago [-]

    I've made myself a split wireless keyboard in part so that it can be more portable than having the style of keyboard in the article in my bag. And that's replacing an Atreus, so already relatively compact.

    But then, there are degrees of portability. This sort of thing is fine for a coffee shop. Better, in some ways, than a laptop because it's usable in full sunlight.

    It's only the fact that everything's wireless that makes it practical, really. I'd be tempted to print up a chassis for the NUC and the power bank so that they become a single unit, then the only setup is the glasses cable.

    user070223(3585) 5 days ago [-]

    Truly, it should have been a smartphone, their performant today is better than my 10 years old (totally fine) desktop

    mrbonner(10000) 5 days ago [-]

    The Xreal is a nice device. I got the first gen for $199. I'm able to plug this into the MacBook pro and watch Netflix in bed. The fonts do look a bit blurry and small. I don't think I can work with it full time. I don't have myopia (or my number is small to notice).

    jwr(10000) 5 days ago [-]

    Thanks for posting this! I'd be very interested in more real-life usage comments from people, I don't trust YouTube 'reviewers' (who get stuff for free and want cosy relationships with companies).

    I wonder specifically if their high-end devices (Xreal One Pro?) would be OK for some amount of coding work, or is it just a movie-watching screen. Even if it is only for watching movies, it might still be interesting for flights, though.

    KolibriFly(10000) 5 days ago [-]

    Watching Netflix in bed with a giant virtual screen sounds pretty ideal though, not gonna lie

    zabzonk(10000) 5 days ago [-]

    All those wires! Far more than my laptop (basically none, or one if I am charging). And what is the total weight and volume of all this stuff?

    supermatt(3661) 5 days ago [-]

    The wires are inside your laptop. I'm more confused as to why he wants to put it all on the desk rather than operate it from his bag.

    From the article it sounds like less than a macbook: 'Surprisingly, the Ugreen Nexode Power Bank is the heaviest item in this setup at just over 500 grams while the Khadas Mind 2S weighs 435 grams and the Xreal One glasses weigh 84 grams.'

    herpdyderp(10000) 5 days ago [-]

    How good are actual VR headsets at being virtual desktop screens? Specifically I've been interested in the Bigscreen Beyond 2 due to its extreme lightweight, but most people seem to use them for gaming instead of doing work. I want more screens (or, even better, an infinite screen) but I don't have the desk space for them. I know the Vision Pro sort of does this but I need the full power of my maxed out MacBook Pro, the Vision Pro is too heavy, and it's way too expensive.

    jbellis(10000) 5 days ago [-]

    The ones that require base stations like the BB are not very portable.

    heelix(10000) 5 days ago [-]

    I picked up a Quest 3 headset, with the thoughts of using it coding when I had to deal with a hotel style work desk. The text was just not sharp enough to be usable for programming.

    plun9(10000) 5 days ago [-]

    They're pretty good. It's just that they get uncomfortable to use for long periods of time.

    dr_kiszonka(10000) 5 days ago [-]

    I am very curious about BB2 too. I can't really imagine using them outside (cafe, train) because without a pass through I wouldn't feel comfortable, but at home it shouldn't be a problem. (Unless, you have cats maybe.)

    raffraffraff(3241) 5 days ago [-]

    These AR glasses are not. It feels like sitting at my desk looking at a single static 27' monitor with 1080p res. The fully immersive ones like the Quest 3 or Apple Vision are better.

    KronisLV(3660) 5 days ago [-]

    I remember using my old Quest 2 with an app called Immersed that ran on the Quest too and rendered the environment there, seemingly streaming the monitors in what felt like higher resolution vs the Quest Link. It was really pleasant until the Immersed app removed support for physical monitors and I could no longer use my 4 monitor PC setup in VR: https://www.reddit.com/r/virtualreality/comments/1cm2niy/imm...

    I actually enjoyed it, because having nothing other than a black void or space or whatever in my vision was surprisingly zen and nice. It wasn't quite like my 1080p monitors, a bit closer to what felt like 720p, though the absolute biggest issue was the pressure on my head which meant that it became uncomfortable after a few hours, even with a custom strap - something that had gotten better in the more recent hardware.

    Aside from that, I'd say that Virtual Desktop is pretty nice but also has artificial limitations on how many screens it can display: https://www.uploadvr.com/virtual-desktop-multiple-monitors-u...

    I've never really found that sweet spot that I had between discovering Immersed and them ruining the app for me again.

    sathackr(10000) 5 days ago [-]

    I've been doing this a few months now with an xreal one and minisforum um790.

    Same ability to power via usb-C and have other ports available.

    It's worked very well, the 1920x1080 resoultion of the glasses is pretty clear but I find 'anchoring' the screen to be most usable because the edges do get a little blurry, but with the screen anchored you can just 'look around' a little to bring them into focus.

    The biggest drawback is the resolution. While still very sharp and clear, it's tough going from a framework laptop 2256x1504 to 1920x1080.

    I'm just used to everything being a little smaller and being able to fit more info into my FoV vs having to look around a 'larger' screen for it.

    senectus1(10000) 5 days ago [-]

    yeah this is whats holding me back... if it was half the price i could handle that resoultion just for the portability benefit, but double the res and I'll dump my monitor

    cma(3612) 5 days ago [-]

    Are any of them 4:4:4 1080? Previous gen was only green at full resolution I think which wasn't great for text

    raffraffraff(3241) 5 days ago [-]

    Same with the Viture Pro. The OLED is crisp and colourful but the resolution is too low to be useful for productivity unless they really nail the head tracking, and can support lots of virtual monitors (and they haven't done that).

    KolibriFly(10000) 5 days ago [-]

    I feel like resolution is kind of the last big hurdle for AR glasses to really feel like a true laptop replacement

    eternityforest(10000) 5 days ago [-]

    Seems like the thing that actually makes this all work is the built in battery on the mini PC. Without it, accidentally unplugging the power bank would be a big problem.

    bee_rider(10000) 5 days ago [-]

    It is as bad as yanking the cord on your computer. I mean, not the greatest thing to do, but not the end of the world with modern filesystems.

    I used a NUC with some battery pack for ages, accidentally unplugging wasn't a big problem really. (Sadly smart glasses weren't where they are now at the time, so I had to lug around some kind of display sometimes).

    jareds(10000) 5 days ago [-]

    I got excided looking at this hoping there was a laptop with out a screen. I'm totally blind so the power draw of a screen is pointless. I currently use my ROG Alli with a Bluetooth keyboard to connect to my more powerful laptop which has a keyboard that's going bad. While this setup works well and the battery life is pretty good it would be much nicer if I didn't have to put a keyboard on my lap, and the Alli on a table. At least the Alli doesn't need to be somewhere where I can look at it.

    nemomarx(10000) 5 days ago [-]

    Would one of those computer in a keyboard set ups work, like the rapsberry pi one?

    tmzt(10000) 5 days ago [-]

    I'm not sure if this would work for you, but there are inexpensive devices that plug into an HDMI port. They appear to the computer as a monitor. I use them for screen sharing to a remote display, but they should enable to think there is a monitor attached. It negotiates the display information as if it was an actual monitor.

    Here's the pack of three I purchased on Amazon.

    Woieyeks 3 Pack HDMI Dummy Plug https://www.amazon.com/dp/B0CKKLTWMN

    CasperH2O(10000) 5 days ago [-]

    Since you mentioned the ROG Ally, if you are looking for a handheld without a screen (basically a controller with a built in computer) you may like the Tecno Pocket Go.

    Also, great pun with being blind and 'excited looking at this'.

    KolibriFly(10000) 5 days ago [-]

    Honestly surprised no one's really leaned into that as a product category yet. Seems like there could be a small but very appreciative market for it

    stoltzmann(10000) 5 days ago [-]

    You could take a normal laptop and remove the screen.

    justincormack(2391) 5 days ago [-]

    He says he uses the Khadas Mind / Khadas Mind 2 which is a mini pc that has a battery so its pretty much a screenless laptop. Not clear the battery is very large but he uses an external one too as its usb c powered.

    nashashmi(10000) 5 days ago [-]

    There is a handheld keyboard you can get called the mini keyboard. It has a trackpad for a mouse. Connects by Bluetooth.

    balfirevic(3664) 5 days ago [-]

    On my Macbook Air, if I bring the screen brightness all the way down the screen appears to be completely off.

    lnrd(10000) 4 days ago [-]

    Google 'headless macbook', there is a community of people making macbooks without displays.

    The idea started from recovering macs with a broken display and using them like a mac mini. It's possible to find 'broken macs' for cheap in second hand market and if the problem is only the display you can go for the headless approach and have macOS with Apple Silicon for very cheap.

    Apple Silicon has outstanding battery life, without a screen I would think even more.

    tippytippytango(10000) 5 days ago [-]

    These glasses give me an instant headache and 1080p is abysmal if you are used to 5K displays. I love the idea, hate the actual glasses.

    system2(10000) 5 days ago [-]

    Have you tried Xreal One? I heard nothing but good things about them. Although only YouTubers reviewing these are not from the United States, that makes me think Xreal has a different market in the EU.

    KolibriFly(10000) 5 days ago [-]

    The concept is super cool in theory, but in practice it kinda feels like early VR all over again

    LeonM(10000) 5 days ago [-]

    Well in all fairness, the first laptops were also barely usable. Ever seen the horrible LCD screen on an early 90s laptop?

    Being an early adopter will always have downsides, but give it a few more years and the glasses will get better.

    ohgr(10000) 5 days ago [-]

    Yeah. Also most of the people who review these things seem to have eyes that don't work.

    raffraffraff(3241) 5 days ago [-]

    I tried this with the Viture Pro XR glasses last year and it sucks. Can't use it with Linux, except in dumb monitor mode. No head tracking unless you're using a supported OS. Android app sucks becaus you can't use it with any old app, eg productivity apps (their app is like a demo of head tracking that only supports stuff like YouTube and local media). Maybe I should have purchased the Xreals?

    0x400000(10000) 5 days ago [-]

    The open-source Breezy GNOME is worth a try. It has head tracking and multi-monitor in beta with GNOME DE.

    https://github.com/wheaney/breezy-desktop

    regularfry(3415) 5 days ago [-]

    The first gen XReal glasses are similar in that you need software running on the host to get anything other than dumb monitor mode. With these newer models they've moved a bunch of the functionality into hardware on the glasses themselves, so you get virtual monitor and wider device support out of the box.

    There are a couple of projects that are trying to get better open source support of the Airs on linux; I've not kept up with their progress.

    gattr(10000) 5 days ago [-]

    I'd like to try this kind of setup (coding from a lounge chair with just a keyboard tray & trackball, yay!), 'dumb monitor' would be sufficient - but since switching to high-DPI displays in 2016 I really need this to be 4K.

    hoppp(10000) 5 days ago [-]

    I have xreal air 2 that gets zero use. I dont recommend them that much, working on a laptop is better and since they constantly making newer versions its worth the wait to not buy anything current and wait for the next one which which will be better. I had the buyers regret, wishing I waited longer for the newer version but unless I buy a steam deck to play games Ill probably never use them.

    georgewsinger(3043) 5 days ago [-]

    SimulaVR[1][2] is releasing our standalone (belt-strappable) compute packs this year, which will (ii) come pre-installed with our FOSS Linux VR Desktop compositor and (ii) work with AR headsets like the Rokid Max series (and potentially the XReal headsets). So basically: you'll get full Linux Desktop apps in AR (not just Android ones) with actual VR window management (not just 'dumb monitor mode').

    [1] I know we're taking forever D: But we intend for this to be a way to release an intermediate product (which we've been making anyway for our full headsets).

    [2] Our next blog update will be about this. Here's a video video preview: https://youtube.com/shorts/Y67D8DkqScU?si=LpdSpjmfGn2k2rxP

    psyclobe(10000) 5 days ago [-]

    No Linux? Full stop.

    rendaw(3067) 4 days ago [-]

    The drivers here https://github.com/wheaney/XRLinuxDriver mark Viture as 'recommended' with the best support. I do see some mention that head tracking is a desktop responsibility, but I presume that means some support in the driver... do you have more informatio non this?

    vaxman(10000) about 22 hours ago [-]

    If only there were credit-card sized, LiPol-battery powered 'puter with built-in wireless networking and a GPU-accelerated remote streaming app that output HDMI, all made and distributed by a Five Eyes alliance country for less than $15 each. If only... /s

    The choice of a trusted HMD is a little more complex, but very solvable ;)

    Abishek_Muthian(2101) 5 days ago [-]

    AR glasses brings great accessibility improvements, especially those who are bedridden; I wrote the need-gap for wearable low latency computer displays[1] ~6 years ago when I was in bed recovering from a spinal fusion surgery as the only option available to me were those unwieldy bed mounts for monitors and it requires help from others to adjust the angles.

    [1] https://needgap.com/problems/16-wearable-low-latency-display...

    wordpad(10000) 5 days ago [-]

    Since when is having a laptop on your lap or by your side a problem in bed? That's my default wfh setup. I even have a 2nd monitor on a standard arm mounted to my bedrest for when I need it. I do also use Xreal One but only when I'm trying not to wake my partner.

    EVa5I7bHFq9mnYK(10000) 5 days ago [-]

    I am partially bedridden ... so far mackbook air remoting to my desktop PC looks like the best solution - it's light, sturdy, stays cool, has decent resolution and excellent battery life. The only thing I don't like is non-standard keyboard.

    supermatt(3661) 5 days ago [-]

    How can the xreal one glasses be 3Dof and stay in place while this guy is moving forward and backwards in his chair?

    https://us.shop.xreal.com/cdn/shop/videos/c/vp/bc70020e90a74... https://us.shop.xreal.com/cdn/shop/videos/c/vp/a2b82ae2ea714...

    I appreciate its a marketing video, but this is just a lie, no?

    What is the actual supported input resolution of the display? How do virtual monitors work - are they just a composite screen that needs to fit in that max input resolution, or is there some virtual viewport that is being managed by the connected device?

    There is so little information about these on the website, and the few reviews I can find are basically people who got them for free (youtube is seemingly full of these right now) and clearly don't use multi-monitor setups to any great extent.

    wordpad(10000) 5 days ago [-]

    You can check discord for a lot of people trying these out in various ways.

    The screen gets anchored to a direction and distance from you, so yes, leaning in would push the screen back (which feels natural, especially when you walk around).

    skykooler(10000) 5 days ago [-]

    They do have accelerometers as well as gyroscopes, so technically they could integrate acceleration twice to keep track of position...but in practice it's way more reliable to just keep it at a constant distance from the head.

    KolibriFly(10000) 5 days ago [-]

    I love the creativity, but for me? If I forget one cable, the whole mission falls apart and I'm back to scribbling in a notebook like it's 1995.

    LeonM(10000) 5 days ago [-]

    Author's setup doesn't really have that problem as far as I can see. AFAIK the cable from the xreal glasses don't even detach, keyboard and mouse are wireless. I guess you could forget the USB-C cable for power of the minipc, but you can get a USB-C cable literally anywhere. Or borrow one from someone whose laptop is already charged.

    The 'problem' you describe is not much different from forgetting to bring the charger for your laptop. USB-C being ubiquitous made this so much less of a problem.

    nicbou(3055) 5 days ago [-]

    I don't see it mentioned, but I'd feel completely ridiculous using this in a coffee shop or on the train.

    LeonM(10000) 5 days ago [-]

    That was also my first thought when reading the title. But then looking at it, these just look like any regular sunglasses. Maybe slightly more bulky, but there are plenty of people wearing 'designer' sunglasses bigger than that. This is already a huge step up from full head units like Quest/VisionPro/etc.

    Just remember that only a couple of years ago that Apple introduced the wireless earbuds and people also thought they looked ridiculous, now they're everywhere and nobody even notices anymore.

    I feel like I'm defending this article a lot here in this topic, but for one, I am genuinely excited about this concept. Tech is not really there yet, but I can totally see me ditching my laptop for such a setup.

    roland35(10000) 5 days ago [-]

    Doesn't seem as dorky as a vr headset!

    ThrowawayR2(10000) 5 days ago [-]

    The question though is whether it is more ridiculous than everyone staring at a little glowing rectangular object held in their hands in a coffee shop or on the train was 25 years ago? Norms can change.

    Mortiffer(10000) 5 days ago [-]

    Sounds like sponsored content. Every other review I have read people say they go back to laptop because the text fidelity, eye strain and keyboard on lap is just the best productivity setup

    jbs789(10000) 5 days ago [-]

    I thought the same. Notice he doesn't say it's better than a laptop, only better than he expected. Then he goes on to explain what he doesn't like about laptops generally, without explaining what he doesn't like about this set up.

    videogreg93(10000) 5 days ago [-]

    I had trouble believing anything in the article since every sentence or 2 has a link to 'the best laptop' or 'the best powerbank'. Just seems like a hub for a bunch of links to sponsored content.

    andybak(2721) 5 days ago [-]

    I'm over 50 and need reading glasses as well as distance glasses. I actually find working in the Quest 3 better than a laptop in many ways. The balance betweeen (virtual) screen size and focussing distance seems to be easier to balance. With a laptop the distance sweet spot for vision isn't always the same as the comfort sweet spot for posture. I could probably optimize my desk setup to improve this - but the point of a laptop is freedom from being chained to a desk.

    If I could get a remote keyboard/trackpad with a better range then I wouldn't need a laptop at all but currently I also use a laptop and Chrome Remote Desktop when I need text entry or a regular mouse.

    regularfry(3415) 5 days ago [-]

    I really, really wanted the SimulaVR headset to work out because of th attention they were paying to text rendering. The hardware feels dead but the virtual desktop project might still have legs: https://github.com/SimulaVR/Simula

    As far as eye strain goes, I think there's room for argument: having virtual screens cinema-screen-distance away from you is less straining than something under a meter away, but only if the text rendering is up to the job.

    layer8(860) 5 days ago [-]

    Laptops are pretty bad ergonomically, compared to a proper desktop setup. It's true that current AR tech is even worse for most.

    NBJack(10000) 5 days ago [-]

    I use a pair of Air Ones with prescription lens inserts and a DIY nose pad for comfort. I can't beat my desktop monitors for clarity, but it is fantastic if you have to read a lot of documentation and like distraction free environments. My job let's me book up my Samsung phone for basic access to documents, and I enjoy reading up on things as I get away from my desk for a change of pace. To say nothing of flying coach with my steam deck on a massive screen.

    ikurei(10000) 5 days ago [-]

    I've seen a couple of this kinds of setups online and I'm intrigued, as I'm just done with the laptop form factor, but I don't think this is it.

    I see the appeal of the XR glasses for immersion and monitor real state, but if you wanted to be outside and went to a coffee shop... I woudldn't cover my eyes and immerse myself totally on the computer; for starters, I wouldn't feel safe. Also, I don't think anyone would also wear headphones with that in a public place, so I hope you don't get a particularly chatty group on the next table over...

    There's many situations where I want to look at a display but I don't want to cover my eyes.

    On the other hand, this kind of on-the-go-but-with-a-desktop-pc only works with glasses. Some have tried it with a portable display and it seems like way too much fussiness to set up and carry.

    I doubt this guy actually ditched his laptop. He did an experiment for content (nothing wrong with that) and I reckon he'll be back on a laptop sooner rather than later.

    Philpax(761) 5 days ago [-]

    They're AR, not VR, so you can still see your surroundings.

    danielEM(10000) 5 days ago [-]

    Having nreal air. It is so freaking inconvenient to wear for longer that every time I see someone posting how they replaced regular screen with ar glasses or vr (yes, tried also Quest 2) I laugh HARD!

    andybak(2721) 5 days ago [-]

    I regularly work for a few hours at a time in a Quest 3. Feel free to laugh.

    Tepix(2905) 5 days ago [-]

    It's sad to read through this article on Tom's that

    a) awfully reads like an ad and

    b) manages not to mention the screen resolution of AR glasses used as a desktop replacement!

    dazzawazza(10000) 5 days ago [-]

    After a few paragraphs I just assumed it was a marketing post and moved on.

    laweijfmvo(10000) 5 days ago [-]

    That battery pack is too large to fly with, unless they changed the regulations? Used to be 10,000 mAH no?

    daggersandscars(10000) 5 days ago [-]

    The limit (in the US?) is 100 Wh. If this is the right battery, the specs page says it's 90 Wh.

    https://www.ugreen.com/products/ugreen-nexode-power-bank-250...

    nashashmi(10000) 5 days ago [-]

    He mentions that it is FAA compliant.

    jdietrich(10000) 5 days ago [-]

    The limit is any number of spare batteries of up to 100Wh, or no more than two batteries of over 100Wh but less than 160Wh. 25000mAh worth of lithium cells works out to about 90Wh.

    Tepix(2905) 5 days ago [-]

    100 Wh, with LiIon usually around 27Ah

    contingencies(3614) 5 days ago [-]

    Going to try this. Gentoo desktop at home but have a few solid months of business travel coming up and need hardware I can rely on. Went to the Apple store recently and was shocked they try to sell Macbooks with screens with a chunk out of them... what? I asked the salesperson and they said it's been this way for 2-3 years. Shows how much of a rock I live under, but gee Steve Jobs is surely glitching in his grave!

    Looked seriously at framework but too slow and expensive here in Oz.

    Nobody else seems to have decent ARM mini PC hardware. Therefore despite a strong distate for Apple I'm looking at Mac Mini + glasses (for flights) + bluetooth input + portable screen / 27K mAh 140W USB powerbank (for occasional mobile use). Hidden in a backpack I think it'll be a better roaming experience than a laptop (more keyboard choice, larger screen, screen position flexible, improved ergonomics) for a fraction of the Macbook (much less Apple vision!) price. Also, unlike a modern Macbook the IO devices and power bank can be upgraded and Asahi Linux will eventually run well on the things, which lends an air of potential longevity.

    Final cost: Mini (24GB) @ USD$940 + 24k mAh USB powerbank @ USD$69 + 18' 2.5K screen @ USD$239 + VITURE Pro XR/AR @ USD$470 = $1718. Ordered some different input options, basically will be under $150 depending what I don't send back. So definitely under $1850. Entry level Macbook Pro with non-square screen, no glasses, lower specs, smaller fixed screen, annoying keyboard, zero repairability is $2500. I'll put the extra $650 toward upgrades later.

    contingencies(3614) 4 days ago [-]

    ... and glasses don't fit. So much for that notion!





    Historical Discussions: Vacheron Constantin breaks the world record for most complicated wristwatch (April 11, 2025: 331 points)

    (331) Vacheron Constantin breaks the world record for most complicated wristwatch

    331 points 7 days ago by bookofjoe in 20th position

    www.hodinkee.com | | comments | anchor

    Unlike the Berkley Grand Complication, which was made on commission, the Solaria is a fully Vacheron-driven project. One watchmaker, yes, just one, was given carte blanche to go hog wild and make the most incredible feat of horology he could, and spent eight years on the task. He certainly took full advantage of the brief. There was no budget, and there is no price tag but the watch is for sale. In fact, the Solaria is actually called "the Premiere" to end its official name, because the program is open to orders with future examples modified in ways to keep them all unique. Yet each would have the full suite of complications. We will have a list of all the complications at the end, but here are some highlights.




    All Comments: [-] | anchor

    bslalwn(10000) 6 days ago [-]

    That strap... way to ruin it

    w-ll(10000) 6 days ago [-]

    I kinda agree here, many threads look lose. Even the attach arms look outta place.

    nextos(3666) 6 days ago [-]

    Given the price tag, it's surely a custom order and I imagine you can tweak lots of details. That's the case for much cheaper Dornbluth & Sohn and other small boutique watchmakers.

    light_triad(10000) 6 days ago [-]

    If you're interested in the functioning of mechanical watches, they're amazing:

    https://ciechanow.ski/mechanical-watch/

    Previously on HN in 2022: https://news.ycombinator.com/item?id=31261533

    dang(143) 6 days ago [-]

    Thanks! Macroexpanded:

    Mechanical Watch (2022) - https://news.ycombinator.com/item?id=38591084 - Dec 2023 (163 comments)

    Mechanical Watch - https://news.ycombinator.com/item?id=31749299 - June 2022 (1 comment)

    Mechanical Watch - https://news.ycombinator.com/item?id=31261533 - May 2022 (413 comments)

    ecoffey(10000) 6 days ago [-]

    Bartosz links to it in the Further Reading section, but wanted to highlight the Wristwatch Revival YouTube channel[0] as well. Really great content and very understandable after reading the article!

    0: https://www.youtube.com/c/WristwatchRevival/videos

    LeafItAlone(10000) 6 days ago [-]

    That is one of the coolest demonstration sites I have ever seen. What a neat way to learn about watches. Kudos to whomever created that page

    nradov(537) 6 days ago [-]

    41 complications and no GPS? How am I supposed to upload my runs to Strava?

    layer8(860) 6 days ago [-]

    It does allow you to determine your longitude. So just run East or West, I guess?

    simpaticoder(10000) 6 days ago [-]

    I wonder if a mechanical watch could communicate something via radio with some clever placement of magnets and copper on the movement via Faraday induction. Imagine movement that encodes a simple BT handshake. On the more science fiction side, a very tiny Difference Engine that fits on your wrist (I am reminded of a Young Ladies Primer from The Diamond Age, where the compute was nano-mechanical).

    RobertDeNiro(10000) 6 days ago [-]

    Are watches going to be tariffed?

    kjellsbells(10000) 6 days ago [-]

    Yes. 31%, at least for now. The administration is...mercurial.

    Although one might argue that an additional 31% on a watch that retails for six figures is not going to make a difference to the kind of buyer that spends six figures on a watch. Even if a US watchmaker existed, this kind of buyer seems unlikely to substitute a Vacherin or a Patek for something made in Cleveland.

    https://www.swissinfo.ch/eng/workplace-switzerland/adding-up...

    rswail(10000) 6 days ago [-]

    Not if you wear it on your wrist as you arrive by your private jet to get the personalized immigration and customs service that whisks you through the private areas of the airport to your waiting limo.

    dole(10000) 6 days ago [-]

    I can nowhere near afford them, but I love most everything about Vacheron Constantin except for that godawful, cheap, brash font they use for their logo. The font on this piece is fine, their overall design and language is great, I'm glad a company like VC pushes the technological limits and industry forward, but that Helvetica-lookin font is visual fingernails-on-a-chalkboard.

    folkrav(10000) 6 days ago [-]

    I'll be honest, to me, it looks like every other luxury brand logo that happens to use a sans-serif font.

    russelldjimmy(10000) 6 days ago [-]

    Not just that, but it also appears to be stretched vertically!

    litoE(10000) 6 days ago [-]

    I'm impressed, but with my declining eyesight I don't think I could read most of the dials, even with glasses - I can't even read the date on my Timex. I would love to see a copy of the User's Guide for this watch though.

    boznz(3573) 6 days ago [-]

    They probably just throw a MechEng PhD Professor in for a year as part of the deal.

    charcircuit(10000) 6 days ago [-]

    A smartwatch is going to be much more complicated than this. Millions and millions if lines of code is not simple.

    umanwizard(10000) 6 days ago [-]

    Not what "complicated" means in this context (having complications).

    motohagiography(10000) 6 days ago [-]

    do timepiece complications have theoretical limits that might originate from the '7-fold limit' in origami, or huffman's work on folding curves in origami?

    I realize watch complications are stacked disc segments and not folds, but intuitively if you are dealing with a material in a fixed space you either run up against limits in the stiffness of parts down to sheets of atoms, or some theoretical folding limit relative to the thickness of the case. a watch that expressed the proof might be worth the indulgence.

    ggm(1620) 6 days ago [-]

    Mechanical losses in cog and ratchet. At some point, friction won.

    pests(10000) 6 days ago [-]

    Didn't mythbusters do 8 folds?

    gennarro(3590) 6 days ago [-]

    Model name is "The Veblen"

    anigbrowl(54) 6 days ago [-]

    [expensive chuckling]

    walrus01(869) 6 days ago [-]

    Seems like a Good name

    rsynnott(10000) 5 days ago [-]

    I mean, you could say that of the product category as a whole, really. Mechanical watches have been entirely impractical for some time now.

    pixelpoet(10000) 6 days ago [-]

    If only my software were valued by number of complications...

    Everything about the high end 'movement' scene rubs me the wrong way (I had a friend into it), but most of all, the pompous terminology.

    m463(2487) 6 days ago [-]

    software does have tail recursion.

    This might be more like wrist recursion.

    EDIT: I wonder if a nixie wristwatch would be a middle ground?

    slt2021(10000) 6 days ago [-]

    in the B2B SAAS world these are called 'features' or 'integrations'.

    Software with the most integrations and features is usually ends up being the most preferred solution

    LeoPanthera(954) 6 days ago [-]

    > If only my software were valued by number of complications...

    Amateur radio software would win:

    https://sv1cal.com/wp-content/uploads/2022/11/image.png

    __loam(10000) 6 days ago [-]

    You can get a watch that's more accurate and more complex than one of these for under $1000 in an Apple watch or a Casio.

    For me, this feels like one of the less harmful things rich people do. Ultimately you're paying a bunch of skilled labor in a developed state to maintain an artistic craft that uses very little energy and material, for a device that has worse functionality than one under $100. The only issue is where you got your money I suppose, and whether that money would have been better spent elsewhere.

    GuB-42(10000) 6 days ago [-]

    > If only my software were valued by number of complications...

    If it fits within a size and power budget, then you essentially described sizecoding. In its extreme form, it is not practical, but it is an art form.

    JumpCrisscross(69) 6 days ago [-]

    > Everything about the high end 'movement' scene rubs me the wrong way (I had a friend into it)

    Why? I'm not a watch guy. But I think the engineering is beautiful. It's also super niche, so there isn't a financing model outside this to fund it.

    konart(10000) 6 days ago [-]

    I (surely I'm not alone here) know many people who would say the same thing about software development 'scene'.

    Hell, even _inside_ the software development 'scene' you can easily find similar cases. Like when web developer who builds (relatevily) simple web apps on top of Rails earns notably more then someone who works with a complex hardware.

    tmnvix(10000) 6 days ago [-]

    Impressive. Here I am struggling to design a decent UI for a screen of at least 13 inches. I shudder to think how much harder it would be if the only means of interaction were a scroll wheel.

    TimByte(10000) 6 days ago [-]

    Imagine spending 8 years on a project where your entire user interface is literally tiny hands turning a crown the size of a lentil

    jsheard(301) 6 days ago [-]

    No price given. Needless to say, if you have to ask...

    boomboomsubban(10000) 6 days ago [-]

    A quick look at their website suggests it's probably several hundred thousand dollars.

    edit a look at their Wikipedia article, tens of millions seems more likely, if they even sell one.

    brikym(3525) 6 days ago [-]

    I don't think it looks very nice. But the whole point of it is for someone to show they have so much excess wealth they can thoughtlessly spend it on something useless and ugly.

    bradfitz(3179) 6 days ago [-]

    'most complicated' as if that's something's to be proud of! :)

    smugglerFlynn(3658) 6 days ago [-]

    This is a word play - in the watch world "complication" means "feature", and this watch has 41 features, which requires tricky design decisions and high precision to house everything in a case that is still wearable.

    Something to be proud of, for sure.

    internetter(10000) 6 days ago [-]

    Did anyone else struggle to read this article? It felt very circulatory and... complicated

    user3939382(3301) 6 days ago [-]

    Still can't tell time accurately over a long period. The ultimate irony of these collectible expensive watches. I like them anyway out of respect for the engineering but still.

    bslalwn(10000) 6 days ago [-]

    Quartz can't either :)

    DennisP(3447) 6 days ago [-]

    Achieving the accuracy they do, with just mechanical parts powered by a spring, seems reasonably impressive to me.

    It's basically the same technology that John Harrison used to win the Longitude Prize in the 1700s, revolutionizing navigation on the high seas.

    umanwizard(10000) 6 days ago [-]

    This is sort of like complaining that an expensive dress isn't very good at protecting the wearer from the elements.

    guax(10000) 6 days ago [-]

    This one I believe is not the collectible one. I think is the marketing one. Is the concept car of watch world. The LaFerrari that makes people buy the expensive but cheaper Purosangue.

    atonse(10000) 6 days ago [-]

    I've never heard of this company but according to the video below, they're large enough to have a huge building.

    How do these economics work? I'm guessing they're a maker of very expensive low volume products. But are there that many buyers?

    https://www.hodinkee.com/articles/video-vacheron-constantin-...

    Same with Richard Mille. Never heard of them but they're rich enough to sponsor the Ferrari F1 team.

    umanwizard(10000) 6 days ago [-]

    They are both extremely well-known luxury watch manufacturers. The fact that you haven't heard of them has nothing to do with them, it just means you're not into luxury watches.

    dharmab(10000) 6 days ago [-]

    To give you an idea of margins:

    - A real Rolex dive watch costs $5k-15k.

    - A similar Swiss-made dive watch from a less famous brand costs $2k-4k.

    - A similar Japanese-made dive watch from a famous brand costs $500-1000.

    - A Chinese-made replica/fake Rolex, mechanically identical to a real one, and only distinguishable by an expert under high magnification, costs about $400-800.

    - There are some low-volume watches that are sold for 4-6 figure sums to repeat buyers. Richard Mille in particular has done one-offs for celebrities in the range of 7-8 figures.

    As you can imagine you don't need a high volume with margins that large.

    lossolo(3427) 6 days ago [-]

    Richard Mille is well known to anyone interested in watches, especially very rich people. You probably haven't heard of Jacob & Co? Or maybe you've heard of Hublot? It's the same story with Loro Piana when it comes to clothing, and Koenigsegg or Pagani when it comes to cars.

    In certain circles, all of these brands are as common as Nike or Mercedes are to the general public.

    __loam(10000) 6 days ago [-]

    Vacheron Constantin is one of the big 3 Swiss watch brands that also include Patek Phillipe and Audemars Piguet. These are a tier above Rolex and Omega and they specifically trade on scarceness and exclusivity. You haven't heard of them because they advertise in very specific places to watch nerds and the very wealthy. Each watch can be like $30,000 to $50,000, or even $120,000 for small run products with unique complications.

    There's more interesting brands like Moritz Grossman and Bovet that make even rarer pieces but fewer people have heard of them.

    sbassi(10000) 6 days ago [-]

    Richard Mille watches, priced at $500,000 or more per piece, are primarily used by wealthy individuals, elite athletes, and Hollywood stars.

    bitmasher9(10000) 6 days ago [-]

    > economics

    * Margin. A relatively low prestige Swiss brand (Tag) has stated they charge 3x bill of materials for their watches. The more exclusive the brand, the higher this number goes.

    * Volume might be higher than you think. Popular Swiss models sell in the tens of thousands of units a year. Not bad if you're charging four or five figures per unit.

    * Consolidation. There's a handful of actual parent companies for watch making that are responsible for most sells. Swatch, Citizen, Rolex. They share resources between each other.

    * Common suppliers. Some movements are used in multiple brands, even across multiple parent companies. Sometimes a company will buy a movement, modify the movement, and completely rebrand it. This allows better economics of volume for the most complicated aspects of watches.

    * Marketing works. There's no practical reason to buy a $10k (or $40k) Rolex compared to a $25 Casio. There's a reason James Bond wears expensive watches and that reason is product placement. Some watch conglomerates are publicly traded, so you can look at how much they spend on marketing.

    * The fact that you haven't heard of the brand is part of the point. If you're wearing >$100k on your wrist you probably don't want everyone to know. Even at this price point, it's a highly liquid asset in some cities.

    quickthrowman(10000) 6 days ago [-]

    Vacheron is part of Richemont, a watchmaking conglomerate/holding company.

    https://en.m.wikipedia.org/wiki/Richemont

    It works like any other luxury company, charge an arm and a leg, control the supply so you don't overproduce, spend a ton on marketing.

    Almost all Swiss watch brands (by volume) are owned by either Richemont, Swatch Group, or LVMH. Rolex, Patek, Audemars Piguet, Breitling, and Chopard are the last of the big Swiss independents, but there are smaller ones like Czapek and Cie, H Moser & Cie, Gruebel Forsey, Richard Mille.

    7373737373(10000) 6 days ago [-]

    I do hope watchmakers start to integrate 'computational' (instead of temporal) complications into their watches, like a mechanical turing machine or other tiny mechanical computers or calculators which I believe have never been constructed this small.

    Inspiration:

    Wooden Turing Machine: https://youtube.com/watch?v=vo8izCKHiF0

    Curta Calculator: https://youtube.com/watch?v=ZDn_DDsBWws

    Zuse Z1 Computer: https://youtu.be/R5XnuT6ZLKg?t=283

    Maybe also analog ones!: https://youtube.com/watch?v=s1i-dnAH9Y4

    appplemac(10000) 6 days ago [-]

    It feels like a lot of complications the watchmakers are building now are stuck in the early 20th century. Sure, perpetual calendars will always be useful, but what about:

    * pomodoro focus timers * multiple TZ support - like GMT watches but more than one additional TZ shown at once * timers * alarms

    microtherion(3037) 6 days ago [-]

    What I really want is a mechanical bluetooth implementation. It would open up so much other functionality...

    iFire(10000) 6 days ago [-]

    Would an Apple iPhone 16 Pro be considered a very expensive wristwatch and would the number of transistors break a record?

    kijin(10000) 6 days ago [-]

    Apple watch maybe. Most people don't wear full-size phones on their wrists...

    kyledehovitz(10000) 6 days ago [-]

    So cool that Dan Flashes makes wristwatches now

    pmdev03(10000) 6 days ago [-]

    These watches are my EXACT style

    mofunnyman(10000) 6 days ago [-]

    For those of you that don't know a lot about Swiss mechanical movements, this watch isn't just nuts, it's fuckin nuts.

    TimByte(10000) 6 days ago [-]

    Right?? This is like mechanical watchmaking turned all the way up to 11, took a left turn into madness, and just kept going

    dyauspitr(10000) 6 days ago [-]

    I'm always impressed by the Swiss. They manage to charge an arm and leg for regular things that a lot of the world makes nearly as well on purely mystique and vibes. Watches, chocolates, diamonds, banking etc.

    eqvinox(10000) 6 days ago [-]

    I don't think 'a lot of the world' makes a clock like this.

    Also that 'mystique and vibes' is essentially 'a reputation of quality', which has to be earned, and I'd say they did that. Whether it still holds is another question.

    TimByte(10000) 6 days ago [-]

    I love that we've apparently reached the 'absurd flex' stage of watchmaking where it's less about telling time and more about seeing just how much ridiculous wizardry you can cram into a tiny mechanical space

    Hauthorn(10000) 6 days ago [-]

    I think watchmakers have been pushing this for quite a while.

    If you want more recent examples, see Richard Mille.

    quickthrowman(10000) 6 days ago [-]

    This is not new to watchmaking in the slightest. Highly complicated watches have been made for over 200 years.

    Henry Graves Supercomplication was made by Patel Philippe in 1933, which was 92 years ago; https://en.m.wikipedia.org/wiki/Patek_Philippe_Henry_Graves_...

    An even older example is the Marie Antoinette watch by Abraham Breguet, which was started in 1783, 243 years ago: https://en.m.wikipedia.org/wiki/Marie_Antoinette_(watch)

    mrweasel(10000) 6 days ago [-]

    Is anyone actually going to use those complications? That's really my question for most high-end watches. I can see a diver using the features on their watch, but how many are actually using a Rolex or an Omega as their regular dive watch?

    Chronographs, while cool, isn't exactly a useful why of measuring speed these days, and how often do you really need to do that anyway.

    On a mechanical watch having the date might be useful, I know I keep forgetting the exact date, but do I really need a watch to remind me that it's Saturday?

    I really love mechanical wristwatches, the mechanics of it is amazing and they are beautiful pieces or engineering and works great as an accessory/jewellery, but I don't understand the need for many of the complications.

    ZiiS(10000) 6 days ago [-]

    The watch with the most complications is any $200 WearOS. You will need to have spent over $1,000,000 on their other watches before they will talk to you about a price for this one; practicality is not a factor to consider.

    mytailorisrich(10000) 6 days ago [-]

    These are special, collectors items and pieces of art. Of course there is no 'need' for all these complications, but it isn't the point.

    barbs(3409) 6 days ago [-]

    I use the day-of-the-week indicator on my Casio watch an embarrassing amount!

    JacobiX(10000) 6 days ago [-]

    What I like about mechanical watches is that, having survived a near-death experience when quartz watches were introduced, they've evolved into a completely different kind of product. It's fascinating that, unlike most other businesses and products, people don't buy them for their utility, and the less automated their production process, the better. Brands like A. Lange & Söhne even pride themselves on assembling their movements twice.

    When inefficiency and craftsmanship are considered features rather than flaws, you have an industry that won't easily be replaced by AI or robots.

    wiether(10000) 6 days ago [-]

    > people don't buy them for their utility

    That's called luxury goods and that's not limited to watches.

    rlupi(10000) 6 days ago [-]

    Isn't this the closest thing to a portable antikitera?

    HarHarVeryFunny(10000) 6 days ago [-]

    Plot twist : the antikitera mechanism was worn round the neck as a piece of bling (jk).

    offsky(10000) 6 days ago [-]

    I became interested in complicated watches several years ago and knew I could never afford one, so I made a website with simulated watch dials. Just for fun and education. It was also a great way for me to learn svg animations. https://www.complication.watch/

    netsharc(10000) 6 days ago [-]

    The next step up from this would be to simulate all the internal mechanisms as 3D models that interact with each other...

    eddyg(2367) 6 days ago [-]

    Nice!

    I loved the Emerald Chronometer(1)app for iOS / iPadOS and all its various "calibres" that you could flip over and show in day or night mode. Sadly the dev has removed the apps from the App Store, but it still runs (for now.) It's a fun use for an older iPad on a stand.

    Wanted to mention it in case it gives you some inspiration. :)

    (1)https://emeraldsequoia.com/h/

    primax(10000) 6 days ago [-]

    There is a giant world of high end replica watches that are so close to the original that they take expert mechanics to tell apart. I've got a few $500 watches that are identical to $10-40k watches.

    Worth checking out reptime to scratch that itch without selling a kidney.

    Mainan_Tagonist(10000) 6 days ago [-]

    I happen to work in this industry, and just a word for those that compare this with an Apple Watch or a Casio, this Vacheron-Constantin will likely be around 200 years from now, it will still be a testimony of the refinement and engineering of a fine craft that few can achieve, a highly valued item with specialist technicians marvelling on the talent of its builders, just as is the case today with 200 year old timepieces.

    you'll be very lucky if your Casio can last as long. Your mass commoditised Apple watch will likely be worthless.

    Personaly, I like the IWC on my wrist as much as I like my Casio G-Shock, both are wonderful in their own way.

    The Apple watch on my wife's wrist is a fine computer i guess, but at some point, it will have the same 'quaint charm' as the IBM Thinkpad she owned 23 years ago.

    TimByte(10000) 6 days ago [-]

    My $40 Casio surviving everything from camping trips to getting dunked in a sink still feels like its own kind of masterpiece

    LeafItAlone(10000) 6 days ago [-]

    >this Vacheron-Constantin will likely be around 200 years from now

    I'm interested to hear more. Typically things that are "most complicated" and lost lasting don't go hand-in-hand.

    ZiiS(10000) 6 days ago [-]

    But I can buy an Apple Watch Ultra every year for the next 200 years for less.

    StopDisinfo910(10000) 6 days ago [-]

    > this Vacheron-Constantin will likely be around 200 years from now, it will still be a testimony of the refinement and engineering

    Usual playbook of the luxury watch market since marketing somehow made it relevant in the mid to end of the 20th century. Thank Haye for not being able to stand near a Swiss mechanical watch without someone uttering the world 'timeless'. This is the second best achievement of marketing after making people believing that diamonds are valuable.

    These watches use small mechanical pieces (which are still very far away from the state of the art - a watch is an engineering achievement by the standard of 200 years ago). They require very regular maintenance to keep working and this maintenance is very expensive. They are not in anyway 'timeless'.

    This is an expensive piece of jewellery, subject to everything related to expensive pieces of jewellery including fashion. It's basically a Veblen good signalling wealth.

    userbinator(1056) 6 days ago [-]

    The Apple Watch has billions of transistors in its microcircuits, mass-produced repeatably at very low cost. It's a different type of engineering but I think it's nonetheless impressive too (and I'm not actually a fan of Apple either.)

    jasode(10000) 6 days ago [-]
    >I happen to work in this industry, and just a word for those that compare this with an Apple Watch or a Casio, this Vacheron-Constantin will likely be around 200 years from now, it will still be a testimony of the refinement and engineering of a fine craft that few can achieve, a highly valued item [...] The Apple watch on my wife's wrist is a fine computer i guess,

    My friend does not work in the watch industry so maybe that's why she came to the opposite conclusion from yours. She has several high-end watches Omega, Ebel, Cartier ... and when she got the Apple Watch almost 10 years ago, it instantly demoted all her expensive jewelry watches to the drawer.

    The cheaper 'disposable' Apple Watch instantly cured her from wanting any new expensive jewelry watches. She let the batteries die off in the old watches and has never replaced them. Instead, she just loves having the weather, timers, task notifications, etc on her Apple Watch. Sure, the classic watches have 'diamond encrusted bezel, gold wristband, Swiss mechanical movement yada yada yada...' but all that is negated by the useful features of the smart watch.

    It's a rare situation where a cheap product completely replaces an expensive product.

    I had a a similar evolution in thinking when technology made me re-evaluate products I once coveted. When I was young before the internet existed, I drooled over this Geochron illuminated framed wall map $4000 : https://www.geochron.com/clocks/boardroom/

    A lot of expensive offices had that and I thought I had to have it too. But then I bought cheap atomic clocks you never had to set and the web had dynamic maps I could explore. Even the new Geochron units don't automatically set to the radio signal from atomic clocks. New technology completely cured me of wanting to buy a Geochron. People used to want tall grandfather clocks in the house foyer as an elegant piece of accent furniture. Now you can't even give away those clocks for free on craigslist. Everybody has clocks on their smartphones so buying a grandfather clock for the house isn't a priority anymore. Even if we romanticize grandfather clocks with descriptions about 'heirloom furniture craftsmanship, intricate wood carvings, etc', it still won't entice most people today to want one.

    lm28469(10000) 6 days ago [-]

    Mostly because it'll be worn by a rich dude who uses it one day per week and sends it for CLAa every 5 years, treating it like some sort of religious idol every step of the way. The most extreme thing it'll go through is the swing of a golf club

    Gud(10000) 6 days ago [-]

    I've had my Casio g-shock for 20 years, including bringing it to two war zones. I have a physical job and I abuse the shit out of it.

    I'll take my chances with my Casio.

    diego_moita(10000) 6 days ago [-]

    > this Vacheron-Constantin will likely be around 200 years from now

    And why should I care? I won't be alive 50 years from now.

    Besides, right now, what I care about is functionality. And, right now, my old Pebble offers far more of it than this jewelry for millionaires.

    This thing is just a stupid Veblen Good[1], like a diamond ring, a Hugo Boss suit or a Porche Carrerra.

    Remember, 150 years ago, millionaires used beaver fur top hats to show off. Have you seen any billionaire wearing them?

    [1] https://en.wikipedia.org/wiki/Veblen_good

    coldtea(1697) 6 days ago [-]

    >you'll be very lucky if your Casio can last as long

    The Casio would last even longer - and would be closer to the right time even without touching it in between.

    zx10rse(10000) 6 days ago [-]

    There is craftsmanship in software.

    It is just the reality that we live in you are not gonna exactly hear from A list celebrity talking about what a wizard Ken Thompson is but you are gonna spot the celebrity secure a brand deal wearing some monstrosity like RM.

    As much as like and appreciate mechanical watches let's not kid ourselves you are talking about CNC machines and cad models rest of it is marketing from the 70's quartz crisis.

    Given that just Apple watch outsold the whole swiss watch industry I am not sure if VC we will be here in 200 years but some piece of software will be probably still running.

    crazygringo(10000) 6 days ago [-]

    Sure, but I wear watches to tell the time or (mainly) as a fashion accessory. Not as an object to donate to a museum someday...

    And 200 years from now, I'm sure there will be a few Apple Watches in museums as well. And some Casios too.

    wenc(3513) 6 days ago [-]

    I own mechanical watches and had the hardest time switching to an Apple Watch.

    But one thing sold me on it. Apple Pay. It's so convenient to be able to wrist tap things without whipping out my phone. I can pay for things in 1 second. With express transit I can tap to ride subways and buses.

    I gave up the status of a mechanical watch wearer for this convenience. And the status is often more limited than we think — I realized no one except other mechanical watches enthusiasts really notice what watch I was wearing. You can wear a Vacheron Constantin and realistically 99% of people you meet will not know what it is and likely will not notice it.

    xvokcarts(10000) 6 days ago [-]

    One could argue that the potential number of complications in any smartwatch is practically limitless, and also that the sophistication and craftsmanship required to make it, including the hardware part, is the ultimate testimony of refinement and engineering.

    If you took an Apple Watch and this Vacheron 2000 years in the past, which one would the people of the time find more impressive (until the juice runs out, that is)? In other words - which one looks more like magic?

    We're just used to microprocessors we can't see tick and maybe don't always appreciate the complexity.

    boznz(3573) 4 days ago [-]

    Original iPhone 1 is worth quite a lot actually.

    _xtrimsky(10000) 4 days ago [-]

    I agree with what you said. But unfortunately I find this watch to have very little use. If I got it for free, I'd love it's value, wear it once or twice a year to some events, but that's pretty much it. On the other hand I sleep with my Garmin smartwatch, and use it every day. Between vibrating alarm clocks, notification synching (which allows me to use my phone less), NFC wallet, and all the fitness tracking for triathlons, it is one of the electronics I use the most.

    I got so used to all the value my Garmin provides, I don't think I could handle replacing it with a watch that does nothing. It would be like going from a smartphone to an old nokia. I'd go crazy not being able to flip my wrist just to check the outdoor temperature.

    staplung(3641) 6 days ago [-]

    Many moons ago, William Gibson did a piece for Wired about his obsession with mechanical watches[1]. The whole thing is worth a read but this bit is worth quoting:

    ''' Mechanical watches are so brilliantly unnecessary.

    Any Swatch or Casio keeps better time, and high-end contemporary Swiss watches are priced like small cars. But mechanical watches partake of what my friend John Clute calls the Tamagotchi Gesture. They're pointless in a peculiarly needful way; they're comforting precisely because they require tending.

    And vintage mechanical watches are among the very finest fossils of the pre-digital age. Each one is a miniature world unto itself, a tiny functioning mechanism, a congeries of minute and mysterious moving parts. Moving parts! And consequently these watches are, in a sense, alive. They have heartbeats. They seem to respond, Tamagotchi-like, to 'love,' in the form, usually, of the expensive ministrations of specialist technicians. Like ancient steam-tractors or Vincent motorcycles, they can be painstakingly restored from virtually any stage of ruin. '''

    https://web.archive.org/web/20240930092315/https://www.wired...

    philshem(2835) 6 days ago [-]

    Another nice longform essay, from the NYer (2017)

    https://www.newyorker.com/magazine/2017/03/20/confessions-of...

    CSSer(10000) 6 days ago [-]

    It reminds me of the Theo Jansen's Strandbeests

    nayuki(3299) 6 days ago [-]

    > mechanical watches are among the very finest fossils of the pre-digital age

    Clocks have discrete ticks. They are digital devices. Even a base-60 second hand is digital because the number of states is finite.

    Mechanical and digital are not mutually exclusive concepts. For example, 'The analytical engine was a proposed digital mechanical general-purpose computer designed by English mathematician and computer pioneer Charles Babbage.' -- https://en.wikipedia.org/wiki/Analytical_engine

    Going further, I could argue that the digital age is very old. Humans who wrote numbers for accounting purposes were engaging in a digital activity; only the numbers matter, not the medium they were written on or the exact handwriting style of the scribe who wrote those numbers. DNA is a form of digital data conveyed through a sequence of 4 possible symbols, and DNA predates humans by billions of years.

    The pedantic phrase substitution for 'pre-digital age' would be something like 'age before widespread digital electronic computers on solid-state microchips' (thus differentiating from analog electronic computers and vacuum tubes).

    snovv_crash(10000) 6 days ago [-]

    I have a feeling we'll feel the same looking back on combustion engine cars.

    DonDhump(10000) 6 days ago [-]

    Well that's certainly an achievement but not water resistant though.

    azinman2(3422) 6 days ago [-]

    what about this is practical?!





    Historical Discussions: Erlang's not about lightweight processes and message passing (2023) (April 11, 2025: 330 points)

    (330) Erlang's not about lightweight processes and message passing (2023)

    330 points 7 days ago by todsacerdoti in 1st position

    stevana.github.io | Estimated reading time – 28 minutes | comments | anchor

    Erlang's not about lightweight processes and message passing...

    Table of contents

    Posted on Jan 18, 2023

    I used to think that the big idea of Erlang is its lightweight processes and message passing. Over the last couple of years I've realised that there's a bigger insight to be had, and in this post I'd like to share it with you.

    Erlang has an interesting history. If I understand things correctly, it started off as a Prolog library for building reliable distributed systems, morphed into a Prolog dialect, before finally becoming a language in its own right.

    The goal seemed to have always been to solve the problem of building reliable distributed systems. It was developed at Ericsson and used to program their telephone switches. This was sometime in the 80s and 90s, before internet use become widespread. I suppose they were already dealing with "internet scale" traffic, i.e. hundreds of millions of users, with stricter SLAs than most internet services provide today. So in a sense they were ahead of their time.

    In 1998 Ericsson decided to ban all use of Erlang. The people responsible for developing it argued that if they were going to ban it, then they might as well open source it. Which Ericsson did and shortly after most of the team that created Erlang quit and started their own company.

    One of these people was Joe Armstrong, which also was one of the main people behind the design and implementation of Erlang. The company was called Bluetail and they got bought up a couple of times but in the end Joe got fired in 2002.

    Shortly after, still in 2002, Joe starts writing his PhD thesis at the Swedish Institute of Computer Science (SICS). Joe was born 1950, so he was probably 52 years old at this point. The topic of the thesis is Making reliable distributed systems in the presence of software errors and it was finished the year after in 2003.

    It's quite an unusual thesis in many ways. For starters, most theses are written by people in their twenties with zero experience of practical applications. Whereas in Joe's case he has been working professionally on this topic since the 80s, i.e. about twenty years. The thesis contains no math nor theory, it's merely a presentation of the ideas that underpin Erlang and how they used Erlang to achieve the original goal of building reliable distributed systems.

    I highly commend reading his thesis and forming your own opinion, but to me it's clear that the big idea there isn't lightweight processes and message passing, but rather the generic components which in Erlang are called behaviours.

    I'll first explain in more detail what behaviours are, and then I'll come back to the point that they are more important than the idea of lightweight processes.

    Erlang behaviours are like interfaces in, say, Java or Go. It's a collection of type signatures which can have multiple implementations, and once the programmer provides such an implementation they get access to functions written against that interface. To make it more concrete here's a contrived example in Go:

    // The interface.
    type HasName interface {
            Name() string
    }
    
    // A generic function written against the interface.
    func Greet(n HasName) {
        fmt.Printf('Hello %s!\n', n.Name())
    }
    
    // First implementation of the interface.
    type Joe struct {}
    
    func (_ *Joe) Name() string {
            return 'Joe'
    }
    
    // Second implementation of the interface.
    type Mike struct {}
    
    func (_ *Mike) Name() string {
            return 'Mike'
    }
    
    func main() {
            joe := &Joe{}
            mike := &Mike{}
            Greet(mike)
            Greet(joe)
    }

    Running the above program will display:

    Hello Mike!
    Hello Joe!

    This hopefully illustrates how Greet is generic in, or parametrised by, the interface HasName.

    Next lets have a look at a more complicated example in Erlang taken from Joe's thesis (p. 136). It's a key-value store where we can store a key value pair or lookup the value of a key, the handle_call part is the most interesting:

    -module(kv).
    -behaviour(gen_server).
    
    -export([start/0, stop/0, lookup/1, store/2]).
    
    -export([init/1, handle_call/3, handle_cast/2, terminate/2]).
    
    start() ->
      gen_server:start_link({local,kv},kv,arg1,[]).
    
    stop() -> gen_server:cast(kv, stop).
    
    init(arg1) ->
      io:format('Key-Value server starting~n'),
      {ok, dict:new()}.
    
    store(Key, Val) ->
      gen_server:call(kv, {store, Key, Val}).
    
    lookup(Key) -> gen_server:call(kv, {lookup, Key}).
    
    handle_call({store, Key, Val}, From, Dict) ->
      Dict1 = dict:store(Key, Val, Dict),
      {reply, ack, Dict1};
    handle_call({lookup, crash}, From, Dict) ->
      1/0; %% <- deliberate error :-)
    handle_call({lookup, Key}, From, Dict) ->
      {reply, dict:find(Key, Dict), Dict}.
    
    handle_cast(stop, Dict) -> {stop, normal, Dict}.
    
    terminate(Reason, Dict) ->
      io:format('K-V server terminating~n').

    This is an implementation of the gen_server behaviour/interface. Notice how handle_call updates the state (Dict) in case of a store and lookups the key in the state. Once gen_server is given this implementation it will provide a server which can handle concurrent store and lookup requests, similarly to how Greet provided the displaying functionality.

    At this point you might be thinking "OK, so what? Lots of programming languages have interfaces...". That's true, but notice how handle_call is completely sequential, i.e. all concurrency is hidden away in the generic gen_server component. "Yeah, but that's just good engineering practice which can be done in any language" you say. That's true as well, but the thesis pushes this idea quite far. It identifies six behaviours: gen_server, gen_event, gen_fsm, supervisor, application, and release and then says these are enough to build reliable distributed systems. As a case study Joe uses one of Ericsson's telephone switches (p. 157):

    When we look at the AXD301 project in chapter 8, we will see that there were 122 instances of gen_server, 36 instances of gen_event and 10 instances of gen_fsm. There were 20 supervisors and 6 applications. All this is packaged into one release.

    Joe gives several arguments for why behaviour should be used (pp. 157-158):

    1. The application programmer only has to provide the part of the code which defines the semantics (or "business logic") of their problem, while the infrastructure code is provided automatically by the behaviour;

    2. The application programmer writes sequential code, all concurrency is hidden away in the behaviour;

    3. Behaviours are written by experts, and based on years of experience and represent "best practices";

    4. Easier for new team members to get started: business logic is sequential, similar structure that they might have seen before elsewhere;

    5. If whole systems are implemented reusing a small set of behaviours: as behaviour implementations improve the whole systems will improve without requiring any code changes;

    6. Sticking to only using behaviours enforces structure, which in turn makes testing and formal verification much easier.

    We'll come back to this last point about testing later.

    Lets come back to the behaviours we listed above first. We looked at gen_server, but what are the others for? There's gen_event which is a generic event manager, which lets you register event handlers that are then run when the event manager gets messages associated with the handlers. Joe says this is useful for, e.g., error logging and gives the following example of an simple logger (p. 142):

    -module(simple_logger).
    -behaviour(gen_event).
    
    -export([start/0, stop/0, log/1, report/0]).
    
    -export([init/1, terminate/2,
             handle_event/2, handle_call/2]).
    
    -define(NAME, my_simple_event_logger).
    
    start() ->
      case gen_event:start_link({local, ?NAME}) of
        Ret = {ok, Pid} ->
          gen_event:add_handler(?NAME,?MODULE,arg1),
          Ret;
      Other ->
        Other
      end.
    
    stop() -> gen_event:stop(?NAME).
    
    log(E) -> gen_event:notify(?NAME, {log, E}).
    
    report() ->
      gen_event:call(?NAME, ?MODULE, report).
    
    init(arg1) ->
      io:format('Logger starting~n'),
      {ok, []}.
    
    handle_event({log, E}, S) -> {ok, trim([E|S])}.
    
    handle_call(report, S) -> {ok, S, S}.
    
    terminate(stop, _) -> true.
    
    trim([X1,X2,X3,X4,X5|_]) -> [X1,X2,X3,X4,X5];
    trim(L) -> L.

    The interesting part is handle_event, trim and report. Together they let the user log, keep track and display the last five error messages.

    The gen_fsm behavior has been renamed to gen_statem (for state machine) since thesis was written. It's very similar to gen_server, but more geared towards implementing protocols, which often are specified as state machines. I believe any gen_server can be implemented as a gen_statem and vice versa so we won't go into the details of gen_statem.

    The next interesting behavior is supervisor. Supervisors are processes which sole job is to make sure that other processes are healthy and doing their job. If a supervised process fails then the supervisor can restart it according to some predefined strategy. Here's an example due to Joe (p. 148):

    -module(simple_sup).
    -behaviour(supervisor).
    
    -export([start/0, init/1]).
    
    start() ->
      supervisor:start_link({local, simple_supervisor},
      ?MODULE, nil).
    
    init(_) ->
      {ok,
      {{one_for_one, 5, 1000},
      [
       {packet,
         {packet_assembler, start, []},
         permanent, 500, worker, [packet_assembler]},
       {server,
         {kv, start, []},
         permanent, 500, worker, [kv]},
       {logger,
         {simple_logger, start, []},
         permanent, 500, worker, [simple_logger]}]}}.

    The {one_for_one, 5, 1000} is the restart strategy. It says that if one of the supervised processes (packet_assembler, kv, and simple_logger) fail then only restart the failing process (one_for_one). If the supervisor needs to restart more than 5 times in 1000 seconds then the supervisor itself should fail.

    The permanent, 500, worker part means that this is a worker process which should be permanently kept alive and its given 500 milliseconds to gracefully stop what it's doing in case the supervisor wants to restart it.

    "Why would the supervisor want to restart it if it's not dead already?", one might wonder. Well, there are other restart strategies than one_for_one. For example, one_for_all where if one process fails then the supervisor restarts all of its children.

    If we also consider that supervisors can supervise supervisors, which are not necessarily running on the same computer, then I hope that you get an idea of how powerful this behaviour can be. And, no, this isn't "just Kubernetes", because it's at the thread/lightweight process level rather than docker container level.

    The idea for supervisors and their restart strategies comes from the observation that often a restart appears to fix the problem, as captured in the Have You Tried Turning It Off And On Again? sketches from IT Crowd.

    Knowing that failing processes will get restarted coupled with Jim Gray's idea of failing fast, that's either produce the output according to the specification or signal failure and stop operating, leads to Joe's slogan: "Let it crash!" (p. 107). Another way to think of it is that a program should only express its "happy path", should anything go wrong on its happy way it should crash, rather than trying to be clever about it and try to fix the problem (potentially making it worse), and another program higher up the supervisor tree will handle it.

    Supervisors and the "let it crash" philosophy, appear to produce reliable systems. Joe uses the Ericsson AXD301 telephone switch example again (p. 191):

    Evidence for the long-term operational stability of the system had also not been collected in any systematic way. For the Ericsson AXD301 the only information on the long-term stability of the system came from a power-point presentation showing some figures claiming that a major customer had run an 11 node system with a 99.9999999% reliability, though how these figure had been obtained was not documented.

    To put this in perspective, five nines (99.999%) reliability is considered good (5.26 minutes of downtime per year). "59% of Fortune 500 companies experience a minimum of 1.6 hours of downtime per week", according to some report from a biased company. Notice per year vs per week, but as we don't know how either reliability numbers are obtained its probably safe to assume that the truth is somewhere in the middle – still a big difference, but not 31.56 milliseconds (nine nines) of downtime per year vs 1.6 hours of downtime per week.

    I'm not sure if application and release technically are behaviours, i.e. interfaces. They are part of the same chapter as the other behaviours in the thesis and they do provide a clear structure which is a trait of the other behaviours though, so we'll include them in the discussion.

    So far we've presented behaviours from the bottom up. We started with "worker" behaviours gen_server, gen_statem and gen_event which together capture the semantics of our problem. We then saw how we can define supervisor trees whose children are other supervisor trees or workers, to deal with failures and restarts.

    Next level up is an application which consists of a supervisor tree together with everything else we need to deliver a particular application.

    A system can consist of several application and that's where the final "behaviour" comes in. A release packages up one or more applications. They also contain code to handle upgrades. If the upgrade fails, it must be able to rollback to the previous stable state.

    I hope that by now I'm managed to convince you that it's not actually the lightweight processes and message passing by themselves that make Erlang great for building reliable systems.

    At best one might be able to claim that lightweight processes and supervisors are the key mechanisms at play, but I think it would be more honest to recognise the structure that behaviours provide and how that ultimately leads to reliable software.

    I've not come across any other language, library, or framework which provides such relatively simple building blocks that compose into big systems like the AXD301 ("over a million lines of Erlang code", p. 167).

    This begs the question: why aren't language and library designers stealing the structure behind Erlang's behaviours, rather than copying the ideas of lightweight processes and message passing?

    Let's take a step back. We said earlier that behaviours are interfaces and many programming languages have interfaces. How would we go about starting to implement behaviours in other languages?

    Lets start with gen_server. I like to think its interface signature as being:

    Input -> State -> (State, Output)

    That's it takes some input, its current state and produces a pair of the new updated state and an output.

    How do we turn this sequential signature into something that can handle concurrent requests? One way would be to fire up a HTTP server which transforms requests into Inputs and puts them on a queue, have an event loop which pops inputs from the queue and feeds it to the sequential implementation, then writing the output back to the client response. It wouldn't be difficult to generalise this to be able to handle multiple gen_servers at the same time, by giving each a name and let the request include the name in addition to the input.

    gen_event could be implemented by allowing registration of callbacks to certain types of event on the queue.

    supervisors is more interesting, one simple way to think of it is: when we feed the gen_server function the next input from the queue, we wrap that call in an exception handler, and should it throw we notify its supervisor. It gets a bit more complicated if the supervisor is not running on the same computer as the gen_server.

    I haven't thought about application and releases much yet, but given that configuration, deployment and upgrades are difficult problems they seem important.

    Writing a post solely about stealing from Erlang doesn't seem fair, even though it's the right thing to do, so I'd like to finish off with how we can build upon the insights of Joe and the Erlang community.

    I've been interesting in testing for a while now. Most recently I've been looking into simulation testing distributed systems à la FoundationDB.

    Simulation testing in a nutshell is running your system in a simulated world, where the simulation has full control over which messages get sent when over the network.

    FoundationDB built their own programming language, or dialect of C++ with actors, in order do the simulation testing. Our team seemed to be able to get quite far with merely using state machines of type:

    Input -> State -> (State, [Output])

    where [Output] is a sequence of outputs.

    The idea being that the simulator keeps track of a priority queue of messages sorted by their arrival time, it pops a message, advances the clock to the arrival time of that message, feeds the message to the receiving state machine, generates new arrival times for all output messages and puts them back into the priority queue, rinse and repeat. As long as everything is deterministic and the arrival times are generated using a seed we can explore many different interleavings and get reproducible failures. It's also much faster than Jepsen, because messaging is done in-memory and we advance the clock to the arrival time, thereby triggering any timeouts without having to wait for them.

    We used to say that programs of this state machine type where written in "network normal form", and conjectured that every program which can receive and send stuff over the network can be refactored into this shape. Even if we had a proof, "network normal form" always felt a bit arbitrary. But then I read Joe's thesis and realised that gen_server and gen_statem basically have the same type, so I stopped being concerned about it. As I think that if a structure is found to be useful by different people, then it's usually a sign that it isn't arbitrary.

    Anyway, in, at least, one of Joe's talks he mentions how difficult it's to correctly implement distributed leader election.

    I believe this is a problem that would be greatly simplified by having access to a simulator. A bit like I'd imagine having access to a wind tunnel would make building an airplane easier. Both lets you test your system under extreme conditions, such as unreliable networking or power loss, before they happen in "production". Furthermore, this simulator can be generic in, or parametrised by, behaviours. Which means that the developer gets it for free while the complexity of the simulator is hidden away, just like the concurrent code of gen_server!

    FoundationDB is a good example of simulation testing working, as witnessed by this tweet where somebody asked Kyle "aphyr" Kingsbury to Jepsen test FoundationDB:

    "haven't tested foundation[db] in part because their testing appears to be waaaay more rigorous than mine."

    Formal verification is also made easier if the program is written a state machine. Basically all of Lamport's model checking work with TLA+ assumes that the specification is a state machine. Also more recently Kleppmann has shown how to exploit the state machine structure to do proof by (structural) induction to solve the state explosion problem.

    So there you have it, we've gone full circle. We started by taking inspiration from Joe and Erlang's behaviours, and ended up using the structure of the gen_server behaviour to make it easier to solve a problem that Joe used to have.

    There are a bunch of related ideas that I have started working on:

    • Stealing ideas from Martin Thompson's work on the LMAX Disruptor and aeron to make a fast event loop, on top of which the behaviours run;
    • Enriching the state machine type with async I/O;
    • How to implement supervisors in more detail;
    • Hot code swapping of state machines.

    Feel free to get in touch, if you find any of this interesting and would like to get involved, or if you have have comments, suggestions or questions.




    All Comments: [-] | anchor

    whalesalad(363) 7 days ago [-]

    I disagree. Interfaces are a trivial concept that can get bolted-on to any language. Even in languages without an official interface construct, you can replicate them in the program space.

    The BEAM succeeds because you can run 1M processes on a single node, represent complex distributed state machines with ease, and restart portions of the system with zero downtime. Among many other things.

    I really don't think behaviors/interfaces is the most critical piece.

    hinkley(10000) 7 days ago [-]

    I haven't used it enough to be able to say yet, but I believe the BEAM avoids part of the problem Ian Cooper (Where Did It All Go Wrong?) rediscovered, which is that microservices don't min-max the inter- versus intra-modular friction in systems.

    I would not say that Beam eliminates this problem in any way, but I do think it lowers the slope of the line. The self-consistent idioms and functionality, especially with deployment, auto recovery and load balancing, reduce the inter-module friction. It makes a system where 12 engineers can easily manage 30 endpoints, and your surface area can still follow a power law.

    rdtsc(3656) 7 days ago [-]

    I see your point to a degree.

    That's kind of how Erlang is. At first, anything Erlang has, some other system has too:

    Isolated process heaps? - Just use OS processes

    Supervision trees? - Use kubernetes.

    Message passing? - Not a big deal, I can write two threads and a shared queue in Java.

    Hot code loading? - Java can do that too

    Low latency processing? - I can tune my LMAX disruptor to kick Erlang's butt any day.

    Now getting all that into one platform or library that's the main idea. OS processes are heavyweight. Running 2M of them on a server is not easy. You could use some green threads or promises but now you lost the isolated heap bit.

    You can use kubernetes to some degree but it does not do nested supervision trees well. I guess it would work, but now you have your code, and you have pods and controllers, and volumes and all the shit.

    You can do message passing with an 'actors' libraries in many language. But you cannot do pattern matching on receive, and it doesn't transparently integrate with sending it across nodes to another thread.

    You can do hot code loading, but how do you deal with runtime data structure and state. Erlang is built around that: gen_servers since the state is immutable and explicit has callbacks to upgrade not just the code but the state itself.

    myth_drannon(476) 7 days ago [-]

    'In February 1998 Erlang was banned for new product development within Ericsson—the main reason for the ban was that Ericsson wanted to be a consumer of software technologies rather than a producer.' - The creator of the language banned any use of it internally.

    vvpan(3674) 7 days ago [-]

    But from the quote it seems that for reasons unrelated to the language itself?

    zdragnar(10000) 7 days ago [-]

    Being a consumer rather than a producer of tech is strictly a business decision. There's significant cost to producing and maintaining a language, and Ericsson no longer wanted to pay the upkeep.

    That's not necessarily an indictment on the language itself. The alternative would have been to keep using it while also open sourcing it, but I'm guessing they just wanted to be able to hire cheaper C developers or whatever the flavor of the time was.

    adamkittelson(10000) 7 days ago [-]

    It is wildly disingenuous to just copy paste that line from wikipedia and not the rest of the paragraph.

    > In February 1998, Ericsson Radio Systems banned the in-house use of Erlang for new products, citing a preference for non-proprietary languages.[15] The ban caused Armstrong and others to make plans to leave Ericsson.[16] In March 1998 Ericsson announced the AXD301 switch,[8] containing over a million lines of Erlang and reported to achieve a high availability of nine '9's.[17] In December 1998, the implementation of Erlang was open-sourced and most of the Erlang team resigned to form a new company, Bluetail AB.[8] Ericsson eventually relaxed the ban and re-hired Armstrong in 2004.

    - edit, poster was quoting a quote in the article, not wikipedia, the article is the one omitting the context

    debugnik(10000) 7 days ago [-]

    No, the company the creators worked for. And six years later they hired Armstrong again and silently lifted the ban.

    bcardarella(2307) 7 days ago [-]

    The amazing thing about Erlang and the BEAM is it's depth of features. To the OP the Behaviour/Interface of Erlang is their biggest take away. For me I believe it is how you require far far less development resources to build complex systems than you would require in any other language (provided comparable experience in both stacks). And for many the lightweight processes and programming model.

    OTP itself has so much in it. We've been working on compiling Elixir to run on iOS devices. Not only can we do that through the release process but through using the ei library provided in Erlang we can compile a Node in C that will interface with any other Erlang node over a typical distributed network as you would for Erlang, Elixir, Gleam, etc... furthermore there is a rpc library in Erlang where from C we can make function calls and interface with our Elixir application. Yes, the encoding/decoding has an overhead and FFI would be faster but we're still way within our latency budget and we got this stood up in a few days without even have heard of it before.

    The larger point here is that Erlang has been solving many of the problems that modern tech stacks are struggling with and it has solved for scale and implementation cost and it solved these problems decades ago. I know HN has a bit of a click-bait love relationship with Erlang/Elixir but it hasn't translated over to adoption and there are companies that are just burning money trying to do what you get out of the box for free with the Erlang stack.

    relistan(10000) 7 days ago [-]

    C nodes are under appreciated. We have one (Cgo) for communicating between Go and Elixir services running in the same Kubernetes pod. The docs are also pretty good for Erlang and its C libs.

    agent281(10000) 7 days ago [-]

    > I know HN has a bit of a click-bait love relationship with Erlang/Elixir but it hasn't translated over to adoption and there are companies that are just burning money trying to do what you get out of the box for free with the Erlang stack.

    Do you or the community have a sense why that is?

    hosh(10000) 7 days ago [-]

    I went from a company that used Elixir in the backend to one that uses Nodejs.

    I had gone in neutral about Nodejs, having never really used it much.

    These projects I worked on were backend data pipeline that did not even process that much data. And yet somehow, it was incredibly difficult to isolate exactly the main bug. Along the way, I found out all sorts of things about Nodejs and when I compare it with Elixir/Erlang/OTP, I came to the conclusion that Node.js is unreliable by design.

    Don't get me wrong. I've done a lot of Ruby work before, and I've messed with Python. Many current-generation language platforms are struggling with building reliable distributed systems, things that the BEAM VM and OTP platform had already figured out.

    paradox460(10000) 6 days ago [-]

    Adding to this, the primitives erlang, and descendants, give you are very easy to work with, and therefore very easy to test.

    Take GenServer. The workhorse of most BEAM systems. Everything it does it basically just calling various functions with simple parameters. So you can test it just by call l calling those functions and manually passing parameters to it, and asserting on its output. No need to set up complex testing systems that are capable of dealing with asynchronous code, no need to handle pauses and wait for coffee to finish running in your tests. It's something a lot of juniors tend to miss, but it's liberating when figured out

    jerf(3620) 7 days ago [-]

    'This begs the question: why aren't language and library designers stealing the structure behind Erlang's behaviours, rather than copying the ideas of lightweight processes and message passing?'

    Because the function signatures of Erlang's behaviors are critically tied to Erlang's other functionality, specifically its unusual use of immutability. You need a separate init call for its servers because of that, and a very distinct use of the state management to work exactly the same way.

    But to achieve the same goals in other languages, you almost always shouldn't directly copy what Erlang is doing. In fact when I see 'Look! I ported gen_server into $SOME_OTHER_LANGUAGE' and I see exactly and precisely the exact interface Erlang has, I know that the port doesn't deeply understand what Erlang is doing.

    When I ported the idea of supervisor trees into Go [1], I did so idiomatically. It turns out in modern Go the correct interface for 'a thing that can be supervised' is not precisely the same signature that Erlang has, but

        type Service interface {
            Serve(context.Context)
        }
    
    That's all you need and all you should use... in Go. Your other language may vary. Go doesn't need a 'handle_event/2' because it has channels, and you should use those, not because they are 'better' or 'worse' but because that's what this language does. In another language you may use something else. In another infrastructure you may end up sending things over Kafka or some cloud event bus rather than 'calling a handle_event/2'. The key is in building an event-based system, not copying the exact implementation Erlang has.

    A peculiar issue the Erlang community has is getting excessively convinced that there's something super-mega-special about the exact way Erlang does it, and that if you do it any other way it is ipso facto wrong and therefore not reliable. This may have been true in 2005; it is not true in 2025. Where once Erlang had almost the only sensible answer, in 2025 the problem is poking through the ocean of answers deluging us! While I recommend learning from Erlang about reliable software, I strongly recommend against just blind-porting out the exact way Erlang achieves it into any other language. It is in almost any other language context the wrong answer. Even other immutable languages generally vary enough that they can't just copy the same structure.

    [1]: https://jerf.org/iri/post/2930/

    asa400(10000) 7 days ago [-]

    To follow on from your excellent post, I think a reasonable next question is, 'why have these kinds of approaches and ideas in other languages and systems succeeded in gaining market adoption, but Erlang/Elixir has not?'

    This to me is the most interesting question about Erlang, and I say this as someone who works professionally in Elixir.

    It's _clear_ that there is incredible appetite for tools that help us design reliable concurrent systems given the wild success of things like k8s, Kafka, AWS's distributed systems products, etc., but why hasn't Erlang/Elixir been able to capture that share?

    My friends and I debate this all the time, but I don't know the answer.

    klabb3(10000) 7 days ago [-]

    Go is my favorite language but:

    > Go doesn't need a 'handle_event/2' because it has channels, and you should use those

    Of what type? But most importantly, channels are local to the process, so you need glue to make it networked. (I assume erlang has networked message handling abstracted away). In addition I've seen 3-4 different variations of your proposed pattern for long-running server like things.

    I agree fully that porting should make use of idiomatic constructs. But I also think languages can have hidden mechanics that loses the valuable essence while porting – a form of anti-relativism of PLs if you will.

    It's entirely possible to me that this "oh a channel? just wrap it in X" is much more detrimental to interop than what it sounds like. For instance take http.Handler in Go. Similarly simple but what are the real world implications of having it in std? An ecosystem of middleware that is largely compatible with one another, without pre-coordination (a non-std http server X can be used with auth middleware Y and logging middleware Z). Similar things can be said about io.Reader and friends. These extremely simply interfaces are arguably more valuable than the implementations.

    If, and I'm speculating here, Erlang got many of the interfaces for reliable distributed systems right, that can be what enables the whole.

    senderista(10000) 7 days ago [-]

    For me the most interesting concept in Erlang/BEAM is that partial recovery is built in from the ground up. When an unexpected state is encountered, instead of either killing the entire process or trying to proceed and risking corruption, you just roll back to a known good state, at the most granular level possible. This idea was researched many years ago under the name of 'microreboots'(associated with 'crash-only software'), but only Erlang/BEAM made it a first-class concept in a production system.

    benmmurphy(3473) 7 days ago [-]

    You still have to be careful with supervision trees and parts of the tree restarting. For example your system might work if the whole erlang operating system process is suddenly killed and restarted but your system might start corrupting data if parts of the erlang process tree is restarted. Erlang gives you a good model to work with these problems but it doesn't allow you to completely turn off your brain. If you walk in thinking that you can just let things restart and everything will be fine then you might end up getting burnt.

    groestl(10000) 7 days ago [-]

    > When an unexpected state is encountered, instead of either killing the entire process or trying to proceed and risking corruption, you just roll back to a known good state, at the most granular level possible.

    > but only Erlang/BEAM made it a first-class concept in a production system.

    Exceptions?

    Towaway69(10000) 7 days ago [-]

    I've just gotten back into Erlang becuase of the lightweight processes and message passing, so far behaviour has been secondary (i.e. just learning about them)!

    The project is about bring visual Flow Based Programming(FBP)[1] to Erlang. FBP seems to be made for Erlang and I was surprised there was something already but there does not seem to be.

    My goto tool for FBP is Node-RED and hence the basic idea is to bolt a Node-RED frontend on to an Erlang backend and to have every node being a process. Node-REDs frontend is great for modelling message passing between nodes, hence there is a very simply one-to-one mapping to Erlangs processes and messages.

    I've implemented some basics and started to create some unit tests as flows to slowly build up functionality. I would really like this to be 100% compatiable to Node-RED the NodeJS backend. For more details, the github repo --> https://github.com/gorenje/erlang-red

    Overall Erlang is amazingly well suited to this and astonished that no one else has done anything like this - or have they?

    [1] = https://jpaulm.github.io/fbp/index.html

    __jonas(10000) 7 days ago [-]

    Oh that's really cool to see! I always thought a visual programming language on the BEAM would be fun

    mcintyre1994(10000) 6 days ago [-]

    This is a really cool idea!

    runlaszlorun(10000) 6 days ago [-]

    Love the idea as well! Would I be wrong in thinking that, at a high-level, fbp is like erlang processes where message flow is one way?

    travisgriggs(3630) 7 days ago [-]

    To me, Erlang/Elixir's power is not necessarily the Actor model implementation, the matching from prolog, immutability, behaviors, etc, but Joes desire to demonstrate you could do more with less.

    It is a well thought out and trued system of computation that has a consistency rarely witnessed in other languages, much less the "web". It is not perfect. But it is pretty impressive.

    Unfortunately, I find the appreciation and uptake for what simplicity empowers in the software world pretty under appreciated. Complexity allows people to become specialists, managers to have big teams and lots of meetings, experts to stay experts.

    Erlang was being developed in a period where companies were trying to implement software solutions with smaller headcounts, limited horsepower, etc. A multi decade outpouring of cash into the domain has made the value of "less will mean more for all of us in good ways" less of an attractor.

    zelphirkalt(10000) 6 days ago [-]

    Reminds me of Rich Hickey's talk about Simple VS Easy.

    runlaszlorun(10000) 6 days ago [-]

    You've just convinced me to spend some more time with Erlang! I've dabbled a bit and, at least on the surface, prefer erlang syntax over elixir.

    roeles(10000) 6 days ago [-]

    Alan Kay has once said that you get simplicity by choosing a slightly more complicated building block.

    It appears to me that erlang does this.

    hinkley(10000) 7 days ago [-]

    I've worked with a few individuals, mostly managers, who intended to write books informed by our experiences. It was always frustrating for me to see that we disagreed about what aspects of our work made us successful. There was always something they minimized as being nice that I felt was essential.

    And here we see someone claiming that lightweight processes and message passing aren't the secret sauce, missing that Erlang as Communicating Sequential Processes is indivisible from those qualities, and then repeatedly mentioning CSP as part of the secret sauce.

    Examples:

    > The application programmer writes sequential code, all concurrency is hidden away in the behaviour;

    > Easier for new team members to get started: business logic is sequential, similar structure that they might have seen before elsewhere;

    > Supervisors and the "let it crash" philosophy, appear to produce reliable systems. Joe uses the Ericsson AXD301 telephone switch example again (p. 191):

    Behaviors are interesting and solve a commonly encountered problem in the 80's that was still being solved in some cases in the 00's, but it's a means as much as an end in Erlang. It's how they implemented those other qualities. But I don't know if they had to, to make Erlang still mostly be Erlang.

    sitkack(10000) 7 days ago [-]

    Managers make up their own narrative based on vibes.

    silisili(10000) 7 days ago [-]

    Is Erlang considered CSP? I've always thought it wasn't really, and had its own thing called 'actors' which are id'd and can communicate directly, vs CSP which are anonymous and use channel messaging.

    I've always thought the actor model made more sense, but highly YMMV.

    fidotron(2952) 7 days ago [-]

    Erlang isn't CSP, it's the Actor model. https://en.wikipedia.org/wiki/Actor_model

    CSP is what inspired the golang channels, via occam and some other languages. The whole synchronization on unbuffered channels is the most obvious differentiator, though there are others like the actor concept of pattern matching over a mailbox.

    The whole CSP vs actor debate is quite interesting when you get down to it because they superficially look kind of similar but are radically different in implications.

    Supersaiyan_IV(10000) 7 days ago [-]

    'In February 1998 Erlang was banned for new product development within Ericsson'

    False statement. Ericsson still uses Erlang, for example in their MME. Source: I used to work at Ericsson.

    bee_rider(10000) 7 days ago [-]

    Is there any additional context here? (Is this a common misperception that you've come across?)

    4ad(1753) 7 days ago [-]

    It is simultaneously possible that Ericsson banned Erlang in 1998 (a statement claimed multiple times by the creators of Erlang) and that Ericsson rescinded the ban later in 2004, when they hired back Joe Armstrong.

    jesperwe(10000) 7 days ago [-]

    And there is a small team of Ericsson full time devs working on developing the language itself and the BEAM.

    lysace(10000) 7 days ago [-]

    My impression from Ericssonland:

    Around year 2008 being an Erlang coder was often more or less seen as being a COBOL coder in Sweden. Bluetail had sort of failed, having burned lots of VC, iirc.

    So Erlang was something weird and custom that Ericsson used to build software for legacy phone exchanges. I remember that a colleague's wife working at Ericsson had received on-the-job training from essentially zero programming knowledge to become an Erlang developer in order to maintain some phone exchange software.

    It's been fascinating to see it morph into something cool. Whatsapp, etc.

    whorleater(10000) 7 days ago [-]

    Yeah, I don't know why this falsehood continues to persist. WhatsApp and Ericsson engineers continue to work together to evolve Erlang, alongside a bunch of other people across the industry.

    Source: I work at WhatsApp

    lamuswawir(10000) 6 days ago [-]

    It's not false, Erlang was indeed banned at Ericsson, which caused Joe Armstrong to leave. They later reversed course and brought him, together with the language back. This is a well documented fact in the history of the language.

    sbuttgereit(1237) 6 days ago [-]

    '5.2 Erlang is banned

    Just when we thought everything was going well, in 1998, Erlang was banned within Ericsson Radio AB (ERA) for new product development. This ban was the second most significant event in the history of Erlang: It led indirectly to Open Source Erlang and was the main reason why Erlang started spreading outside Ericsson.

    The reason given for the ban was as follows:

    The selection of an implementation language implies a more long-term commitment than the selection of a processor and OS, due to the longer life cycle of implemented products. Use of a proprietary language implies a continued effort to maintain and further develop the support and the development environment. It further implies that we cannot easily benefit from, and find synergy with, the evolution following the large scale deployment of globally used languages. [26] quoted in [12].

    In addition, projects that were already using Erlang were allowed to continue but had to make a plan as to how dependence upon Erlang could be eliminated. Although the ban was only within ERA, the damage was done. The ban was supported by the Ericsson technical directorate and flying the Erlang flag was thereafter not favored by middle management.'

    And to be completely fair....

    '6.2 Erlang in recent times

    In the aftermath of the IT boom, several small companies formed during the boom have survived, and Erlang has successfully rerooted itself outside Ericsson. The ban at Ericsson has not succeeded in completely killing the language, but it has limited its growth into new product areas.

    The plans within Ericsson to wean existing projects off Erlang did not materialise and Erlang is slowly winning ground due to a form of software Darwinism. Erlang projects are being delivered on time and within budget, and the managers of the Erlang projects are reluctant to make any changes to functioning and tested software.

    The usual survival strategy within Ericsson during this time period was to call Erlang something else. Erlang had been banned but OTP hadn't. So for a while no new projects using Erlang were started, but it was OK to use OTP. Then questions about OTP were asked: "Isn't OTP just a load of Erlang libraries?"—and so it became "Engine," and so on.'

    A History of Erlang Joe Armstrong Ericsson AB

    ©2007 ACM 978-1-59593-766-7/2007/06-ART6

    https://lfe.io/papers/%5B2007%5D%20Armstrong%20-%20HOPL%20II...

    There's probably a discussion on precisely what this means, but such descriptions as 'Erlang is banned' has significant and credible precedent.

    behnamoh(120) 7 days ago [-]

    Is it just me or does Erlang's syntax look a little bit nicer than Elixir's?

    Capricorn2481(10000) 7 days ago [-]

    I'm an outsider to this ecosystem, but I've seen a few people share that same opinion. They prefer the explicitness of Erlang.

    SoftTalker(3552) 7 days ago [-]

    It's inspired/descended from Prolog, and my impression is that many people find it a bit odd. It is at first, but I quickly adjusted to it and quite like it now.

    whalesalad(363) 7 days ago [-]

    gleam is probably my favorite middle ground between elixir and erlang.

    bmitc(3567) 7 days ago [-]

    Elixir came from Ruby developers and thus has similarly verbose syntax and macros. Erlang's syntax came from Prolog, which was used to implement the first compiler and is why Erlang's syntax is more concise.

    ValtteriL(3545) 7 days ago [-]

    I learned Erlang at school and used to prefer its syntax for years. However, after giving Elixir a chance and writing 1000 loc I was converted. Now I look at snippets of Erlang in docs with mild disgust.

    pton_xd(10000) 7 days ago [-]

    Erlangs syntax takes a bit of getting used to but it's very pleasant to use once you're familiar with it. I like it a lot.

    layer8(860) 7 days ago [-]

    From this article and others, it's still unclear to me what the state-handling and state-sharing model of Erlang is. Presumably, the granularity of the crashing/restarting sequential processes is also the granularity of in-memory state sharing. But what about external state, like databases, queues, file systems? For example, if a process has taken an item off a queue and then crashes before having fully processed it, how is that accounted for? Or you might not even know from the outside if it has been fully, partially, or not at all processed yet. This is an example where correct error handling or not crashing is crucial, in my experience. Or what about processing pipelines where a component in the middle crashes. Is there something like that in Erlang? Is there an article explaining Erlang from that perspective?

    fidotron(2952) 7 days ago [-]

    > For example, if a process has taken an item off a queue and then crashes before having fully processed it, how is that accounted for?

    I have worked with people that had deployed huge amounts on the BEAM that had a real problem with the answer to that, and resort to magical thinking.

    When erlang processes 'crash', assuming the whole system didn't crash, they almost certainly alerted a monitoring process of the fact, so that a process can be quickly restarted. This is the core of how supervision trees in erlang are built.

    There are a lot of subtleties to that. The whole system may or may not be a single BEAM instance, and if more than one then they can be distributed, i.e. processes on one machine receive failure messages from processes on others, and can restart the processes elsewhere. These mechanisms on a practical basis are sufficient to automatically pick up the majority of transient failures. (I should add there are two classic ways to blow up a BEAM instance which make this less good than it should be: a bad C function call 'NIF' for native something function, or posting messages to a process faster than it can consume them, which will eventually cause an OOM).

    But this differs from the underlying philosophy of the runtime, which is that things are only done when they're done, and you should expect failures at any time. This maps on to their messaging paradigm.

    What you actually sound like you want is a universe more like FoundationDB and QuiCK https://www.foundationdb.org/files/QuiCK.pdf where the DB and worker queue all live in one single transactional space, which certainly makes reasoning about a lot of these things easier, but have nothing to do with erlang.

    sshine(10000) 7 days ago [-]

    > what about [...] if a process has taken an item off a queue and then crashes before having fully processed it

    > you might not even know from the outside if it has been fully, partially, or not at all processed yet

    Erlang does not propose a unique solution to distributed problems, just good primitives.

    So the answer would be the same; you'd keep track in the queue if the element was partially popped, but not completed, and you report back to the queue that the processing failed and that the element should be fully put back.

    So in Erlang you might monitor a worker process and requeue items handled by processes that failed.

    ramchip(3304) 6 days ago [-]

    > For example, if a process has taken an item off a queue and then crashes before having fully processed it, how is that accounted for?

    I'm not sure I understand the question - all queue systems I've used separate delivery and acknowledgement, so if a process crashes during processing the messages will be redelivered once it restarts.

    Do you have a concrete example of a flow you're curious about?

    Maybe these could help:

    - https://ferd.ca/the-zen-of-erlang.html

    - https://jlouisramblings.blogspot.com/2010/11/on-erlang-state...

    procaryote(10000) 6 days ago [-]

    Erlang at least used to come with an in-memory database called Mnesia, that in the places I've encountered it depended on replicating all the state to every server, which usually caused some scaling issues.

    There's nothing outright stopping you from doing proper design and building separate erlang services that exchange state with regular protocols, but there does seem to be a temptation to just put all erlang in one big monolith and then run into very hard memory and scaling issues when usage and data grows.

    One high profile erlang user in the payment industry was mainly constrained by how big a server they could buy, as all their code ran on a single server with a hot standby. They have since moved to java, and rethought how they managed shared state

    Facebook managed to get ejabberd, the xmpp server written in erlang, to back their first Messenger, but it involved sharding to give each ejabberd-instance a small enough data set to cope, and a clever way to replicate presence data outside of erlang (storing it in compact memory blocks on each ejabberd server, and shipping them wholesale to a presence service at a regular cadence).

    Pretty soon they tore ejabberd out, metaphorically burned it in a field and salted the earth... but how much of that was the fault of erlang itself, and how much it was the issue of having one corner with erlang in a largely C++ world isn't known to me.

    geophile(2926) 7 days ago [-]

    In 2003 I joined a startup building a horizontally scalable archive. You could add nodes to add capacity for storing data and metadata, and the system could tolerate up to a configured number of failures and carry on without loss of data or service. (This was not a general-purpose file system, it was for write-once/read-many objects.)

    We built the system in Java and C. The distribution layer was done completely in Java. It was only after the system was done that I discovered Erlang. I REALLY wish I had known about it earlier. Erlang solved so many of the problems we had to solve by ourselves.

    DarkNova6(10000) 6 days ago [-]

    Even these says, now that Java got Virtual Threads?

    jiggawatts(10000) 7 days ago [-]

    Someone explain to me why I should prefer Erlang/BEAM/Elixir over something like Akka.NET?

    With the latter I get a huge ecosystem of packages and wide compatibility with platforms and tooling and also a robust and scalable actor model.

    Learning Erlang or any related language meanwhile feels like learning Tolkien's Elvish for the purposes of international trade.

    neonsunset(3115) 6 days ago [-]

    _Supposedly_ they are more convenient if you are willing to tolerate abysmally subpar efficiency, exotic semantics and lacking ecosystem.

    dqv(10000) 6 days ago [-]

    No, we can't explain to you why our blub language should be preferred to your blub language. It's your job to make that determination on your own.

    I can come back in 5 years to explain to you what is annoying about Akka.NET compared to the BEAM and vice versa. An expert in the BEAM who lacks experience in C# is not going to be able explain to an expert in C# who lacks experience in the BEAM why BEAM is better.

    You're asking for something incredibly rare - a person who is an expert in both runtimes and can concisely explain to you the tradeoffs of each.

    neonsunset(3115) 6 days ago [-]

    If you want to do exclusively distributed computing at the application level - Erlang/Elixir will be better. They can offer nice Northstar of where the UX of Akka.net/Orleans should sit at (and, arguably, Orleans is not exactly nice to use in comparison).

    Otherwise, aside from educational purposes, they are not worth spending your time on. Just skip to F# over Elixir because Elixir is not a serious language, lacking base language primitives and operations one would expect standard library to offer. It's not productive nor fast.

    HeavyRain266(10000) 7 days ago [-]

    Erlang, OTP, and the BEAM offer much more than just behaviours. The VM is similar to a virtual kernel with supervisor, isolated processes, and distributed mode that treats multiple (physical or virtual) machines as a single pool of resources. OTP provides numerous useful modes, such as Mnesia (database) and atomic counters/ETS tables (for caching), among others. The runtime also supports bytecode hot-reloading, a feature used to apply patches without any system downtime. While the syntax is not very screen reader-friendly, it is digestable.

    Apache Mesos[1] is the only thing that comes to my mind as a similar platform to BEAM in its ability to treat multi-machine resources as a single pool.

    Over a year ago, my private consulting company decided to adopt Erlang as our backend language. After some time, we started exploring BEAM's internals to, for example, replace the TCP-based stack with QUIC and integrate some Rust patches. A truly fantastic choice for lightweight and high-throughput systems that are only failing in case of kernel panic or power loss. We are currently working on very 'busy', concurrent software like a film/game production tracker and pipeline manager, and are now also preparing R&D for a private hospital management services.

    [1]: https://mesos.apache.org/

    HeavyRain266(10000) 7 days ago [-]

    Before you ask, we're not going to ever fully adopt Elixir (or Gleam) as its ecosystem is built around Phoenix framework and external services/databases. We would have to maintain internal bindings/implementations of things that are unmaintained on Elixir's side. Also worth to mention that it has a large amount of syntax sugar and its users have that weird fetish for abstracting stuff into DSL interfaces.

    spott(10000) 6 days ago [-]

    A question about erlang:

    Haskell taught me a lot about programming, things that I still use now, even though I only write Python.

    Does learning erlang teach you a new way of thinking? Or does it just make you wish you had erlang language features and libraries when not writing erlang?

    lgas(10000) 6 days ago [-]

    IMHO it will teach you a new way of thinking but that way is not as generally applicable as what most people take away from Haskell.

    unoti(10000) 6 days ago [-]

    I came here looking for information about why Ericsson stopped using Erlang, and for more information about Joe's firing.

    The short answer seems to be that they pivoted to Java for new projects, which marginalized Erlang. Then Joe and colleagues formed Bluetail in 1998. They were bought by Nortel. Nortel was a telecom giant forming about a third of the value of the Toronto Stock Exchange. In 2000 Nortel's stock reached $125 per share, but by 2002 the stock had gone down to less than $1. This was all part of the dot com crash, and Nortel was hit particularly hard because of the dot com bubble burst corresponding with a big downturn in telecom spending.

    It seems safe to look at Joe's layoff as more of a 'his unit was the first to slip beneath the waves on a sinking ship' situation, as they laid off 60,000 employees or more than two thirds of their workforce. The layoff was not a sign that he may not have been pulling his weight. It was part of a big move of desperation not to be taken as a sign of the ineffectiveness of that business unit.

    cmrdporcupine(2889) 6 days ago [-]

    It's very weird to me to see the word 'fired' in this context. 'Laid off' is more appropriate. 'Fired' is very value-laden and implies fault and termination with cause. Which I'm sure if that was somehow actually true the original article author would know nothing about, nor would it be any of their business.

    cmdrk(10000) 6 days ago [-]

    Erlang is my favorite language but getting a job writing Erlang feels impossible. I make it a habit to ctrl-F every Who's Hiring? thread and find Elixir occasionally and Erlang never.

    gavmor(10000) 6 days ago [-]

    Can you articulate the kinds of business problems Erlang is particularly well-suited to solve?

    When you choose Erlang for a project, what kind of return on investment do you think it typically offers? Does it lead to significant cost savings or help generate more revenue in ways that other languages might not?

    In situations where Erlang is chosen, what are some concrete examples of how it has demonstrably increased efficiency, reduced errors, or enabled new business opportunities that wouldn't have been as feasible with other technologies?

    Edit: I guess if I'd done any research myself before asking, I might've found this: https://www.erlang-solutions.com/blog/which-companies-are-us...

    LtdJorge(10000) 6 days ago [-]

    To me the most important aspect of Erlang is the runtime's scheduler, which is preemptive instead of cooperative. This allows the message passing, sequential code and lightweight processes to be much more effective than in any other general language or framework using cooperative scheduling (like async runtimes or coroutines in Rust, .Net, Kotlin, Lua).

    You can write actually synchronous code in Erlang and the runtime makes it so that no process blocks any other process by preempting them on a schedule.

    assbuttbuttass(10000) 6 days ago [-]

    Sounds a lot like Go





    Historical Discussions: Potatoes in the Mail (April 17, 2025: 327 points)

    (326) Potatoes in the Mail

    326 points about 14 hours ago by mooreds in 17th position

    facts.usps.com | Estimated reading time – 2 minutes | comments | anchor

    Trademarks

    The Sonic Eagle Logo, the trade dress of USPS packaging, the Letter Carrier Uniform and the Postal Truck and the following marks are among the many trademarks owned by the United States Postal Service: Click-N-Ship®, Deliver The Win®, EDDM®, ePostage®, Every Door Direct Mail®, Express Mail®, First-ClassTM, First-Class Mail®, First-Class Package International Service®, Forever®, Global Express Guaranteed®, IMb®, Informed Delivery®, Intelligent Mail®, Label BrokerTM, Parcel Select®, P.O. BoxTM, Post Office®, Pony Express®, Postal Inspection ServiceTM, PostalOne!®, Postal Police®, #PostalProud®, Priority Mail Express International®, Priority Mail Flat Rate®, Priority Mail International®, Priority: You®, Registered MailTM, Standard Mail®, The Postal Store®, United States Postal Inspection Service®, United States Postal Service®, U.S. Mail®, U.S. Postal InspectorTM, U.S. Postal Service®, USPS®, USPS BlueEarth®, USPS Mobile®, USPS Operation Santa®, USPS Tracking®, usps.com®, We are people delivering to peopleTM, ZIP+4® and ZIP CodeTM. This is not a comprehensive list of all Postal Service trademarks.

    Non-Postal Trademarks

    Dollar General®, Forest Stewardship Council®, How2Recycle®, McDonald's®, National Dog Bite Prevention Week®, Starbucks®, Subway®, Sustainable Forestry Initiative®, The Climate Registry®.

    Postal Facts 2024 provides the public with information about the U.S. Postal Service. The facts in this publication may be reproduced for the purpose of stating the fact itself, in a business, informational or academic context and the like, and in the body of text discussing factual subject matter relevant to the fact being presented. However, these facts may become outdated after publication and seeking the latest information is advised.

    Produced by U.S. Postal Service Corporate Communications

    © 2024 United States Postal Service. All rights reserved.

    facts.usps.com

    © 2016-2025 United States Postal Service. All rights reserved.




    All Comments: [-] | anchor

    memhole(10000) about 12 hours ago [-]

    USPS will mail all sorts of things. WIRED would let you mail them tons of interesting things. Working remotely I thought it would be hilarious to have everyone try and mail each other weird stuff as a company event.

    mooreds(17) about 12 hours ago [-]

    What was the weirdest thing that got through?

    eagerpace(10000) about 12 hours ago [-]

    Can you do it for just one stamp or do you need to weigh and label it?

    jkaplowitz(10000) about 12 hours ago [-]

    The linked article says you need it weighed for appropriate postage.

    null0ranje(2710) about 12 hours ago [-]

    You have to weigh it.

    bredren(10000) about 12 hours ago [-]

    How is postage attached? Can you just use stamps if you know the right amount? what if they fall off?

    dheera(3125) about 9 hours ago [-]

    Superglue and smear epoxy on top of it. If that doesn't work, bust out the Gorilla glue.

    neilv(3544) about 12 hours ago [-]

    On a childhood trip, to visit family in sunny Hawaii, we mailed back this coconut from the family yard, by writing our rainy Portland address on the coconut in Sharpie.

    (The coconut was one of the large, oblong ones, with a smooth surface. Not the small, spherical things in the grocery store. So there was plenty of room for a legible address.)

    When we got home, we planted it in a large indoor planter, hanged a lamp over it, and grew a sizable palm tree in our living room.

    nightfly(10000) about 11 hours ago [-]

    Lol, I remember seeing a coconut in the student mail receiving area at PSU in 2010 or so. So I like how this has been done multiple times

    suriya-ganesh(10000) about 11 hours ago [-]

    What am I getting wrong? you planted a coconut but grew a palm tree ?

    thaumasiotes(3580) about 9 hours ago [-]

    > The coconut was one of the large, oblong ones, with a smooth surface. Not the small, spherical things in the grocery store.

    You say that like you think those are different things.

    veunes(10000) about 5 hours ago [-]

    That's such a perfect blend of wholesome and chaotic

    m463(2487) about 12 hours ago [-]

    I wish I could find the article.

    Years ago, someone tried mailing a lot of stuff through the post office.

    I remember they mailed a $20 bill, and tried sneaking something oversized like skiis into a mail truck.

    can't find the article though - search has really been SEO'd to death by companies involved in mail.

    jen729w(10000) about 12 hours ago [-]

    Random UK postage fact. Our postcodes are so specific, it's sufficient to write the house number and the postcode.

    We sent ourselves a postcard from Spain addressed to:

    1

    S_3 _S_ (redacted)

    UK

    – and it arrived.

    rahimnathwani(2039) about 12 hours ago [-]

    My parents' house shares a postcode with just one other house.

    When I was in secondary school, one of my classmates didn't believe a letter would reach me if the envelope had only my name and postcode (no house number or street name), so I gave him a stamp and challenged him to try.

    I brought the letter to school a couple of days later.

    bigfatkitten(10000) about 11 hours ago [-]

    No need for an address in Ireland, a general description of the recipient will do.

    https://www.irishpost.com/life-style/irish-postman-somehow-d...

    yellowapple(10000) about 9 hours ago [-]

    Allegedly the US ZIP code system is similarly precise if you use the extra four digits plus the last two digits of the address number. For example, 89434-8669-35 should be enough to send mail to my favorite bar in town (assuming said bar accepts mail there; can't say I've ever tried).

    ipcress_file(10000) about 11 hours ago [-]

    My wife and I moved our stuff across Canada -- from Alberta to Nova Scotia -- by mail. That's when I found out about the 'monotainer,' a giant palletized wire box that they fill with items heading to a common destination. Our boxes all went in a monotainer and made it to Halifax before we did.

    The nicest part: Canada Post moved us in! Everything was waiting in our new apartment when we arrived.

    0xbadcafebee(3056) about 10 hours ago [-]

    You used to be able to ship via Amtrak, but they suspended the service. You could basically send up to a 500lbs pallet. You could also ship a bicycle, or a dead body. All three required correct packaging.

    A bunch of us used the service to ship cheap PCs and CRT monitors up to New York for HOPE one year. The shipping cost more than the computers, but it wasn't much (a couple hundred bucks). Public Terminal Cluster was a huge success. Afterward we didn't want to ship them back home, so we gave away two pallets worth of old computer gear to whoever passed by on 33rd St. Took about an hour.

    zkms(10000) about 12 hours ago [-]

    There are even multiple services that will mail a Potatoe to the recepient, possibly anonymously: https://potatoparcel.com https://www.mailaspud.com https://www.anonymouspotato.com https://mysterypotato.com (the only one I have used is 'anonymouspotato').

    ipjrms(10000) about 1 hour ago [-]

    Are they services or just middlemen who turn around and use USPS?

    rriley(10000) about 7 hours ago [-]

    USPS actually allows a bunch of odd items if they meet basic requirements:

    - Potato: write the address directly on the skin and add postage - Coconut: often mailed from Hawaii gift shops - Brick: just needs postage and an address - Inflated beach ball: address it directly, ships like a parcel - Plastic Easter egg: fill it, tape it shut, and label it - Flip-flop: address the sole and send it off - Small pumpkin: allowed if it's dry and not restricted by ag rules - Live queen bees (plus attendants): surface mail only, special label - Day-old chicks: special packaging and timing required

    IndrekR(3388) about 3 hours ago [-]

    Have mailed live queen bees in Europe as well. Funniest was when receiving some (I think it was from Denmark to Estonia before we joined the EU) and one delivery got stuck in customs due to unpaid alcohol tax — someone had misread "Live Bees" as "Live Beer". Fortunately this was cleared out within two days and bees were still alive (but a little short on food).

    weinzierl(233) about 5 hours ago [-]

    I once sent a beer coaster without envelope and just with an address scribbled on and a stamp to a beer loving friend from a holiday. We both were surprised it worked.

    Also in the late 90s I remember my favourite computer mag having a picture of a 5 1/4 inch floppy sent to them. Complete with postmarked stamp. Allegedly it survived the procedure.

    dcminter(1039) about 2 hours ago [-]

    Ha! I did that a few times with 31⁄2' disks - address and stamp on the label and slap a bit of tape over the shutter to prevent dust ingress. No issues.

    I don't think I'd have risked it with 51⁄4' floppies though, they were a lot less robust and I can't imagine the franking machines would have been good for them.

    paulkrush(3106) about 11 hours ago [-]

    I love parcels. Always have. My mom worked at the post office.

    Cheap postage hack: Nearly all U.S. stamps issued since World War II don't have value. You can buy old stamps on eBay for about 60–75 % of face value as "face" stamps—and they're perfectly valid for mailing.

    Unconventional postcards: A thin sheet of plywood with a Sharpie address label is a fun postcard. (it just costs a lot more than a normal postcard)

    Small Flat Rate Box physics: With a 70 lb limit, you'd need something exotic—say, a primordial black hole—to exceed the weight cap.

    Spare the carrier's back: A Medium Flat Rate Box packed with 10,000 pre 1982 copper pennies tips the scale at roughly 68 lb. Maybe ship the coins another way—your postal carrier will thank you!

    wileydragonfly(10000) about 11 hours ago [-]

    For a few years, your money was better spent investing in Forever stamps vs the stock market..

    abound(10000) about 10 hours ago [-]

    > Unconventional postcards: A thin sheet of plywood

    Can confirm, I laser cut wedding invitations out of 1/4' plywood and mailed them out like that. I think it required some 'non-machineable' stamp or similar, but they all arrived at their intended destinations.

    iterance(10000) about 10 hours ago [-]

    Several friends and I have been tossing around the idea of sending a solid billet of osmium in a small flat rate box, matching its size. 'One rate, any weight,' right?

    Sadly this experiment would cost in the high tens of thousands of dollars. We may try with titanium some day. That would only be ten thousand dollars.

    chneu(10000) about 10 hours ago [-]

    Back when flat rates originally came out I don't think they had an actual weight limit.

    A buddy of mine used to cast and paint figurines. Well, someone ordered a bunch of lead ones and they used a flat rate to ship it. The box weighed something like 80lbs. It was basically just a block of lead

    It's probably coincidence but a few months later a weight limit was placed on flat rate boxes. It's still crazy high. We always thought the timing was funny.

    SoftTalker(3552) about 8 hours ago [-]

    > Nearly all U.S. stamps issued since World War II don't have value

    'Forever stamps' were introduced in 2007. What other stamps before then didn't have a face value? I don't remember any.

    WalterBright(3248) about 8 hours ago [-]

    > Nearly all U.S. stamps issued since World War II don't have value.

    That's true of pretty much all stamps from all countries since WW2. Postal agencies have discovered that collectors will buy new issues and never mail them, preserving them as 'mint'. So it's pretty much free money for the Postal agency. Many countries (including the USPS) constantly come up with new designs to sell to collectors.

    I noticed that when I began collecting as a boy, thinking the post WW2 issues were all just 'soup can labels' and had zero interest in them.

    Scoundreller(10000) about 5 hours ago [-]

    On the opposite of the spectrum:

    From a set of year 2000 USPS experiments:

    > Helium balloon. The balloon was attached to a weight. The address was written on the balloon with magic marker; no postage was affixed. Our operative argued strongly that he should be charged a negative postage and refunded the postal fees, because the transport airplane would actually be lighter as a result of our postal item. This line of reasoning merely received a laugh from the clerk. The balloon was refused; reasons given: transportation of helium, not wrapped.

    https://improbable.com/airchives/paperair/volume6/v6i4/TMP-1...

    Image links are dead, including on archive.org :(

    veunes(10000) about 5 hours ago [-]

    The old stamp trick is genius! There's something extra satisfying about mailing a letter covered in vintage stamps like it's on a time-travel mission

    pmags(3338) about 13 hours ago [-]

    I thought there must be some sort of URL spoofing or invisible unicode character action going on. But no, I typed in the URL by hand and it appears to be real!

    I now know with certainty what sort of 'card' my siblings are getting for their next b-days!

    mooreds(17) about 13 hours ago [-]

    potato or coconut?

    htrp(3478) about 13 hours ago [-]

    Wait until you find out you can send chickens by mail

    https://facts.usps.com/shipping-chicks/

    thehappypm(10000) about 12 hours ago [-]

    I was just at a historical farm and they explained this to me! They said that it can often go badly though, like if there's a storm that delayed shipments, they can all die, which is super sad

    dmckeon(3337) about 12 hours ago [-]

    Various live animals, queen bees and up to 8 attendant bees by air, but bee hives by ground only. Fair warning: the recipient of mailed bee hives may get a phone call at any time of day or night to 'please come get them ASAP'.

    https://about.usps.com/posters/pos138/pos138__v04_revision_0... https://pe.usps.com/PUB52_Archive/NHTML/PUB52_Archive_202204...

    Dunan(10000) about 12 hours ago [-]

    People have sent children by mail:

    https://www.smithsonianmag.com/smart-news/brief-history-chil...

    ...I don't think they let you do this anymore.

    scottcha(3671) about 12 hours ago [-]

    My Grandparents lived in a very small farming town (pop 500) and word would get around town when chicks had arrived and she would take us down there to see them.

    thecosas(3385) about 13 hours ago [-]
    amccollum(10000) about 12 hours ago [-]

    The story of the bank built from bricks sent through the mail reminds me of the time I completed a move from Austin to Boston by packing all my possessions into rubber tubs and sending them by parcel post.

    The delivery date was a range, and I wasn't there on the day of the first attempted delivery. When I called the post office about it, their response (in a thick Boston accent) was, 'oh, so you're the tub guy, huh?'

    All in all, it was a really convenient way to execute a cross-country move, assuming you don't have a lot of stuff!

    drunkonvinyl(10000) about 12 hours ago [-]

    Flail and flail, it's just another brick in the mail.

    shoo(10000) about 11 hours ago [-]

    That history of the bank of Vernal was fascinating, thank you for sharing. Parcel post offered for packages of up to 50 pounds + price charged to post parcels from Salt Lake City to Vernal being less than half the cost charged by private carriers ==> lots of freight to Vernal starts getting sent by post! Then, bank director wanting pressed bricks for the front the new bank building in Vernal + closest pressed brick manufacturer to Vernal being in Salt Lake City + post still the cheapest freight option to Vernal ==> 37.5 tons of pressed bricks packed into 50 pound crates and posted!

    Anyone interested in the history of freight & trade may also enjoy reading Marc Levinson's book 'The Box' about the shipping container. https://press.princeton.edu/books/paperback/9780691170817/th...

    josephscott(680) about 11 hours ago [-]

    Looks like the bank built with bricks via the mail is still there - https://www.google.com/maps/@40.4555831,-109.528633,3a,75y,2...

    uticus(2824) about 9 hours ago [-]

    going up one level in url to facts.ups.com, then navigating to fun, lots of quirky stuff there.

    tptacek(94) about 8 hours ago [-]

    I don't understand how there can be 94 comments on this thread and not one of them is from someone who attempted (or succeeded) in mailing someone a potato. I am a homeowner. I have a address. I will receive a potato, or send one to whomever wants one. What's important about this story is 'is is true?'. Who's going to test it with me?

    andrewflnr(10000) about 8 hours ago [-]

    There's at least one who posted just a little bit before you. ;) https://news.ycombinator.com/item?id=43724688

    bigyabai(10000) about 7 hours ago [-]

    If you're willing to give your address to a Hacker News user then you need to spend more time researching your cohorts.

    fahrnfahrnfahrn(10000) about 6 hours ago [-]

    I sent a banana in the mail. I also sent a paperback book without any sort of box or wrapper. I think it was as Hitchhikers Guide to the Galaxy.

    jakebasile(10000) about 5 hours ago [-]

    I would like a potato. Emailed you.

    buu700(3142) about 4 hours ago [-]

    I did something pretty similar with USPS around 15 years ago. Walked into the post office, handed them a banana, they slapped a label on it, and off it went. A few weeks later I heard from my friend in Monaco that her mom had gone to check the mail and found her hand covered in rotten banana. Whoops.

    jedberg(3314) about 4 hours ago [-]

    > What's important about this story is 'is is true?'

    The URL is at usps.com, so I'm guessing this is about as official as it gets.

    I've mailed a coconut before and it worked. Never done a potato.

    9dev(2881) about 3 hours ago [-]

    I'm still wondering if they are going to potato internationally, in which case I would very gladly exchange some continental taters with a colony-grown variety with you!

    blululu(3013) about 2 hours ago [-]

    I have mailed a potato before. Sent it to a friend to celebrate Columbus Day (this was back when we overlooked his atrocities because it was a cool Italian guy who trafficked exotic nightshades across the Atlantic). It arrived just fine. The postal worker was quite helpful about wrapping it up with the appropriate postage. Post your address on the public internet and I'm sure you will get a lot more potatoes than you would expect.

    1024core(10000) about 11 hours ago [-]

    I was working for a postal contractor and we had to go to the local P&DC (warehouse sized building where all the local mail comes in to be sorted and then shipped to various destinations).

    The local foreman was giving us a lecture about safety and things not to do in there, and we were standing there listening to him. To my right about 10' away were a couple of boxes around 2' tall each. I was listening and my eyes were wandering, taking in the gigantic space when suddenly, out of the corner of my eye, I saw the box move! It like tilted a little and there was definite movement inside (it had a slit in it)! I yelped like a little kid: 'that box moved!'

    The foreman nonchalantly dismissed it saying, 'yeah those are ducks being mailed'. I was shocked to say the least.

    pixl97(10000) about 9 hours ago [-]

    Back in the late 90s and early 2000s a buddy of mine caught and mailed a lot of live snakes.

    Never heard of one getting out. Bet it would have been exciting if one did.

    GrantMoyer(10000) about 8 hours ago [-]

    How cruel.

    nonethewiser(3585) about 10 hours ago [-]

    Like it or not, this is a bad look for a service that many argue is a waste of money.

    yellowapple(10000) about 9 hours ago [-]

    Anyone who argues that USPS is a 'waste of money' is either grossly misinformed or lying through one's teeth; USPS is self-funded through postage and other fees, not through taxpayer funding. You still have to pay for postage to mail a potato.

    dsr_(1201) about 8 hours ago [-]

    It's easy to demonstrate that it is not a waste of money compared to commercial services, but let us argue counterfactually for the moment that it is the most expensive alternative.

    It is the only universal (in the USA) communications service, and therefore a necessary service which is not filled or reasonably filled by private alternatives.

    ForOldHack(10000) about 3 hours ago [-]

    If the post office mandate was only to be profitable, it would have been disbanded decades ago. It is a communication organization mandated by the constitution, by the founding fathers. Profit was never ever part of rurual postage service, neither was rural electrification, not rural phone service, and rural internet. The service that shows the most profit is the war machine.

    How many people does the post office unalive?

    The post office is loved by children, young adults, and senior citizens. Is the profitable military as popular amoung the people who call our veterans loosers? This comes from a propoganda machine of the oligarks who want, instead of government service, want only their own selfish profits.

    War is a waste of money, and arguing about it is a waste of time.

    To the many who think mail to rural people is a waste of money? I would rather recieve a letter from someone than a list of war dead.

    Thr many who think that profit is the reason for the existence of the post office, left a Marine for dead in Africa, lied about it, and never learned to pronounce his name to his mother.

    At least a coconut in the mail is not as empty headed as most of the political party that wants to run the entire government as a profitable business only to bankrupt it like a casino.

    How do they bankrupt a casino?

    Show me the first politician who ran on a platform of a profitable war machine? Pretty sure it was the German socialist Democratic party, who were never thet socialist not democratic.





    Historical Discussions: DolphinGemma: How Google AI is helping decode dolphin communication (April 14, 2025: 324 points)

    (324) DolphinGemma: How Google AI is helping decode dolphin communication

    324 points 4 days ago by alphabetting in 655th position

    blog.google | Estimated reading time – 1 minutes | comments | anchor

    Sharing DolphinGemma with the research community

    Recognizing the value of collaboration in scientific discovery, we're planning to share DolphinGemma as an open model this summer. While trained on Atlantic spotted dolphin sounds, we anticipate its potential utility for researchers studying other cetacean species, like bottlenose or spinner dolphins. Fine-tuning may be required for different species' vocalizations, and the open nature of the model facilitates this adaptation.

    By providing tools like DolphinGemma, we hope to give researchers worldwide the tools to mine their own acoustic datasets, accelerate the search for patterns and collectively deepen our understanding of these intelligent marine mammals.

    The journey to understanding dolphin communication is long, but the combination of dedicated field research by WDP, engineering expertise from Georgia Tech and the power of Google's technology is opening exciting new possibilities. We're not just listening anymore. We're beginning to understand the patterns within the sounds, paving the way for a future where the gap between human and dolphin communication might just get a little smaller.

    You can learn more about the Wild Dolphin Project on their website.




    All Comments: [-] | anchor

    srean(10000) 4 days ago [-]

    Can a powerful model become a fantastic autocomplete for dolphins ? Sure. Someday soon that's very likely to happen. But that alone would tell us almost nothing of what dolphin dialogue means.

    To understand their language we need shared experiences, shared emotions, common internal worlds. Observation of dolphin-dolphin interaction would help but to a limited degree.

    It would help if the dolphins are also interested in teaching us. Dolphins or we could say to the other '... that is how we pronounce sea-cucumber'. Shared nouns would be the easiest.

    The next level, a far harder level, would be to reach the stage where we can say 'the emotion that you are feeling now, that we call 'anger''.

    We will no quite have the right word for 'anxiety that I feel when my baby's blood flow doesn't sound right in Doppler'.

    Teaching or learning 'ennui' and 'schadenfreude' would be a whole lot harder.

    This begs a question can one fully feel and understand an emotion we do not have a word for ? Perhaps Wittgenstein has an answer.

    Postscript: I seem to have triggered quite a few of you and that has me surprised. I thought this would be neither controversial nor unpopular. It's ironic in a way. If we can't understand each other, understanding dolphin 'speech' would be a tough hill to climb.

    ruthvik947(10000) 4 days ago [-]

    Indeed! As Witt once said, 'if a lion could speak, we would not understand it.' (https://iep.utm.edu/wittgens/#H5)

    weard_beard(10000) 4 days ago [-]

    I think you are describing more of an edge case than you might think for a vertebrate, mammal, social, warm blooded, air breathing, earth living, pack hunter.

    charcircuit(10000) 4 days ago [-]

    >To understand their language we need shared experiences, shared emotions, common internal worlds

    Why? With modern AI there exists unsupervised learning for translation where you don't have to explicitly make translation pairs between the 2 languages. It seems possible to eventually create a way to translate without having to have a teaching process for individual words like you describe.

    Mystery-Machine(10000) 4 days ago [-]

    The fact that you cannot wrap your head around something doesn't mean that it's not possible. I do not claim that it is surely possible nor that it isn't. But it sure as hell looks possible. You also probably don't have kids. For example: how do you teach a child to speak? Or someone a new language? You show them some objects and their pronunciation. The same with the seagrass and/or a scarf. That's one way. Dolphins can also see (divers with) some objects and name them. We can also guess what they are saying from the sounds plus the actions they do. That's probably how we got 'seagrass' in the first place.

    For all the word that they don't have in their language, we/they can invent them. Just like we do all the time: artificial intelligence, social network, skyscraper, surfboard, tuxedo, black hole, whatever...

    It might also be possible that dolphins' language uses the same patterns as our language(s) and that an LLM that knows both can manage to translate between the two.

    I suggest a bit more optimistic look on the world, especially on something that's pretty-much impossible to have any negative consequences for humanity.

    sarreph(3372) 4 days ago [-]

    I'm pretty sure by the time we decode what they're saying it'll be "so long, and thanks for all the fish"

    nottorp(3629) 4 days ago [-]

    That's the good outcome.

    The bad outcome is the 'AI' will translate our hellos as an insult, the dolphins will drop the masquerade, reveal themselves as our superiors and pound us into dust once and forever.

    Picture the last surviving human surrounded by dolphins floating in the air with frickin laser beams coming out of their heads... all angrily asking 'why did you say that about our mother?'.

    And in the background, ChatGPT is saying 'I apologize if my previous response was not helpful'.

    nikolayasdf123(10000) 4 days ago [-]

    so, did it work?... anyone knows what is the result of this work?

    rideontime(10000) 4 days ago [-]

    The article says that they've only just begun deploying it, and that it will merely be used to speed up the process of recognizing patterns.

    > WDP is beginning to deploy DolphinGemma this field season with immediate potential benefits. By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication — a task previously requiring immense human effort. Eventually, these patterns, augmented with synthetic sounds created by the researchers to refer to objects with which the dolphins like to play, may establish a shared vocabulary with the dolphins for interactive communication.

    xena(679) 4 days ago [-]

    This looks like a marine biologist desperately wanted to keep their job in spite of the 'nothing that's not AI' mandate so they made up some bullshit.

    vlovich123(10000) 4 days ago [-]

    They've been working on decoding dolphin sounds for a long time - Thad was telling me about this project in 2015 and it had been ongoing for a while. One challenge is doing this realtime is extremely difficult because of the frequency the dolphin speech occurs in. And they want to do this realtime which adds to the difficulty level. The other challenge on the AI side is that traditional AI is done using supervised learning whereas dolphin speech would require unsupervised learning. It would be interesting to learn more about how Gemma is helping here.

    Philpax(761) 4 days ago [-]

    That is a surprisingly cynical take; the marine biologists in question seemed pretty enthusiastic in the video!

    ZeroCool2u(3084) 4 days ago [-]

    Wow, there's a lot of cynicism in this thread, even for HN.

    Regardless of whether or not it works perfectly, surely we can all relate to the childhood desire to 'speak' to animals at one point or another?

    You can call it a waste of resources or someones desperate attempt at keeping their job if you want, but these are marine biologists. I imagine cross species communication would be a major achievement and seems like a worthwhile endeavor to me.

    davedigerati(10000) 4 days ago [-]

    I for one am simply happy to see us trying to apply LLMs to something other than replacing call centers... humankind SHOULD be exploring and learning sometimes even when there isn't an ROI.

    morkalork(10000) 4 days ago [-]

    I'd be less cynnical if researchers hadn't announced the same thing like 10 years ago

    https://www.nytimes.com/2017/12/08/science/dolphins-machine-...

    garciasn(10000) 4 days ago [-]

    Gemini supposedly allows for conversational speech w/your data. Have you tried it? We have; it's laughably bad and can't get the most basic stuff right from a well-crafted datamart.

    If it can't do the most basic stuff, please explain to be how in the fuck it is going to understand dolphin language and why would should believe its results anyway?

    janalsncm(10000) 4 days ago [-]

    Don't understand the cynicism either. Is this not way cooler than the latest pre-revenue series F marketing copy slop bot startup?

    To me this task looks less like next token prediction language modeling and more like translating a single "word" at a time into English. It's a pretty tractable problem. The harder parts probably come from all the messiness of hearing and playing sounds underwater.

    I would imagine adapting to new vocab would be pretty clunky in an LLM based system. It would be interesting if it were able to add new words in real time.

    amarant(3401) 4 days ago [-]

    It's trendy to hate Google, and even more trendy to hate anything AI.

    The cynicism on display here is little more than virtue signalling and/or upvote farming.

    Sad to see such thoughtless behaviour has reached even this bastion of reason.

    Nifty3929(10000) 4 days ago [-]

    I'm as or more cynical than the next guy - but it seems to me that being able to communicate with animals has high utility for humans. Partly from an emotional or companionship perspective as we've been doing with dogs for a long time, but maybe even on purely utilitarian grounds.

    If we want to know something about what's going on in the ocean, or high on a mountain or in the sky or whatever - what if we can just ask some animals about it? What about for things that animals can naturally perceive that humans have trouble with - certain wavelengths of light or magnetic fields for example? How about being able to recruit animals to do specific tasks that they are better suited for? Seems like a win for us, and maybe a win for them as well.

    Not sure what else, but history suggests that the more people have been able to communicate with each other, the better the outcomes. I assume this holds true more broadly as well.

    lukev(10000) 4 days ago [-]

    It's not even about the communication! Just having more insight into the brains and communication of other mammals has a ton of scientific value in its own right.

    Sometimes it's good just to know things. If we needed to find a practical justification for everything before we started exploring it, we'd still be animals.

    j45(3605) 4 days ago [-]

    The ability to understanding bee's communication was made possible, so I'm not sure why dolphins would seem harder?

    nsonha(10000) 3 days ago [-]

    Childhood dream aside, this to me seems like a much more legit use of AI than, say, generative art, so lame and pointless.

    neuroelectron(10000) 4 days ago [-]

    SeaQuest anyone? I still have the first comic.

    exe34(10000) 4 days ago [-]

    Darwin likes!

    canyon289(3676) 4 days ago [-]

    I work at Google on the Gemma team, and while not on the core team for this model, participated a bit on this project.

    I personally was happy to see this project get built. The dolphin researchers have been doing great science for years, from the computational/mathematics side it was quite neat see how that was combined with the Gemma models.

    moffkalast(10000) 4 days ago [-]

    It's great that dolphins are getting audio decoders in language models first, does the Gemma team intend to roll that out for human speech at some point eventually too?

    rcarmo(121) 4 days ago [-]

    The only output I'll believe from this is 'So long, and thanks for all the fish!'

    rcarmo(121) 4 days ago [-]

    I guess Douglas Adams isn't something a lot of people read these days.

    Imnimo(10000) 4 days ago [-]

    This sounds very cool at a conceptual level, but the article left me in the dark about what they're actually doing with DolphinGemma. The closest to an answer is:

    >By identifying recurring sound patterns, clusters and reliable sequences, the model can help researchers uncover hidden structures and potential meanings within the dolphins' natural communication — a task previously requiring immense human effort.

    But this doesn't really tell me anything. What does it mean to 'help researchers uncover' this stuff? What is the model actually doing?

    bjt(10000) 4 days ago [-]

    As far as I can tell, it hasn't actually done anything yet.

    The article reads like the press releases you see from academic departments, where an earth shattering breakthrough is juuuuust around the corner. In every single department, of every single university.

    It's more PR fluff than substance.

    lukev(10000) 4 days ago [-]

    Tangential, but this brings up a really interesting question for me.

    LLMs are multi-lingual without really trying assuming the languages in question are sufficiently well-represented in their training corpus.

    I presume their ability to translate comes from the fact that there are lots of human-translated passages in their corpus; the same work in multiple languages, which lets them figure out the necessary mappings between semantic points (words.)

    But I wonder about the translation capability of a model trained on multiple languages but with completely disjoint documents (no documents that were translations of another, no dictionaries, etc).

    Could the emerging latent 'concept space' of two completely different human languages be similar enough that the model could translate well, even without ever seeing examples of how a multilingual human would do a translation?

    I don't have a strong intuition here but it seems plausible. And if so, that's remarkable because that's basically a science-fiction babelfish or universal translator.

    beernet(10000) 4 days ago [-]

    My hunch is it would work somewhat, but poorly.

    Languages encode similar human experiences, so their conceptual spaces probably have natural alignments even without translation examples. Words for common objects or emotions might cluster similarly.

    But without seeing actual translations, a model would miss nuances, idioms, and how languages carve up meaning differently. It might grasp that 'dog' and 'perro' relate to similar concepts without knowing they're direct translations.

    ahartman00(10000) 4 days ago [-]

    >lots of human-translated passages in their corpus

    Yes. I remember reading that the EU parliamentary proceedings in particular are used to train machine translation models. Unfortunately, I cant remember where I read that. I did find the dataset: https://paperswithcode.com/dataset/europarl

    glomgril(10000) 3 days ago [-]

    Check out this recent benchmark MTOB (Machine Translation from One Book) -- relevant to your comment, though the book does have parallel passages so not exactly what you have in mind: https://arxiv.org/pdf/2309.16575

    In the case of non-human communication, I know there has been some fairly well-motivated theorizing about the semantics of individual whale vocalizations. You could imagine a first pass at something like this if the meaning of (say) a couple dozen vocalizations could be characterized with a reasonable degree of confidence.

    Super interesting domain that's ripe for some fresh perspectives imo. Feels like at this stage, all people can really do is throw stuff at the wall. The interesting part will begin when someone can get something to stick!

    > that's basically a science-fiction babelfish or universal translator

    Ten years ago I would have laughed at this notion, but today it doesn't feel that crazy.

    I'd conjecture that over the next ten years, this general line of research will yield some non-obvious insights into the structure of non-human communication systems.

    Increasingly feels like the sci-fi era has begun -- what a time to be alive.

    zoogeny(10000) 4 days ago [-]

    Not directly related, but one of those stories that is so bizarre you almost can't believe it isn't made up.

    There was a NASA funded attempt to communicate with Dolphins. This eccentric scientist created a house that was half water (a series of connected pools) and half dry spaces. A woman named Margaret Howe Lovatt lived full-time with the Dolphins attempting to learn a shared language between them.

    Things went completely off the rails in many, many ways. The lead scientist became obsessed with LSD and built an isolation chamber above the house. This was like the sensory deprivation tanks you get now (often called float tanks). He would take LSD and place himself in the tank and believed he was psychically communicating with the Dolphins.

    1. https://www.theguardian.com/environment/2014/jun/08/the-dolp...

    srean(10000) 4 days ago [-]

    Know the story. Such a tragic end.

    maebert(3654) 4 days ago [-]

    arguably the best episode of Drunken history has duncan trussel retelling this story: https://www.youtube.com/watch?v=p7ruBotHWUs

    Paraphrasing carl sagan: 'You don't go to Japan and kidnap a Japanese man start jking him off, give him fing acid, and then ask him to learn English!'

    meindnoch(10000) 3 days ago [-]

    >A woman named Margaret Howe Lovatt lived full-time with the Dolphins attempting to learn a shared language between them.

    She also had sex with a male dolphin called Peter.

    >He would take LSD and place himself in the tank and believed he was psychically communicating with the Dolphins.

    He eventually came to believe he was communicating with a cosmic entity called ECCO (Earth Coincidence Control Office). The story of the Sega game 'Ecco the Dolphin' [1] is a tongue-in-cheek reference to this. I recommend watching the Atrocity Guide episode on John C. Lily and his dolphin 'science' [2]. It's on par with The Men Who Stare at Goats (the non-fiction book [3], not the movie).

    He has a website that looks like it's been untouched since his death, 2001: http://www.johnclilly.com/

    [1] https://en.wikipedia.org/wiki/Ecco_the_Dolphin

    [2] https://www.youtube.com/watch?v=UziFw-jQSks

    [3] https://en.wikipedia.org/wiki/The_Men_Who_Stare_at_Goats

    trollied(3259) 3 days ago [-]

    Remember the game Ecco The Dolphin? Related... https://www.vice.com/en/article/the-ketamine-secrets-of-sega...

    amy214(10000) 2 days ago [-]

    It's funny you were thinking that, because I was thinking, 'how would you teach a japanese man english?.' The obvious answer is to jerk him off and give him high doses of LSD first. I immediately came to the same conclusion with this AI-dolphin stuff. Have they tried jerking off the dolphin and giving it LSD first? Apparently - yes.





    Historical Discussions: Reproducing Hacker News writing style fingerprinting (April 16, 2025: 322 points)

    (322) Reproducing Hacker News writing style fingerprinting

    322 points 2 days ago by grep_it in 2286th position

    antirez.com | Estimated reading time – 10 minutes | comments | anchor

    antirez 1 day ago. 54575 views. About three years ago I saw a quite curious and interesting post on Hacker News. A student, Christopher Tarry, was able to use cosine similarity against a vector of top words frequencies in comments, in order to detect similar HN accounts — and, sometimes, even accounts actually controlled by the same user, that is, fake accounts used to uncover the identity of the writer. This is the original post: https://news.ycombinator.com/item?id=33755016 I was not aware, back then, of Burrows-Delta method for style detection: it seemed kinda magical that you just needed to normalize a frequency vector of top words to reach such quite remarkable results. I read a few wikipedia pages and took mental note of it. Then, as I was working with Vectors for Redis I remembered about this post, searched the web only to discover that the original page was gone and that the author, in the original post and website, didn't really explained very well how the data was processed, the top words extracted (and, especially, how many were used) and so forth. I thought I could reproduce the work with Vector Sets, once I was done with the main work. Now the new data type is in the release candidate, and I found some time to work on the problem. This is a report of what I did, but before to continue, the mandatory demo site: you can play with it at the following link: https://antirez.com/hnstyle?username=pg&threshold=20&action=search NOTE: since the dataset takes 700MB of RAM, in my tiny server, in the next months I may take this down. However, later in this post you will find the link and the Github repository with the code to reproduce everything from scratch. NOTE2: I hope the web site will survive, it's a very crude Python script. I benchmarked the VSIM command in such a small server and yet it can deliver 80k VSIM per second! The wonders of int8 quantization, together with a few more optimizations. But the Python script is terrible, creates a new Redis connection each time and so forth. Fingers crossed. # Raw data download and processing Well, the first problem I had, in order to do something like that, was to find an archive with Hacker News comments. Luckily there was one with apparently everything posted on HN from the start to 2023, for a huge 10GB of total data. You can find it here: https://huggingface.co/datasets/OpenPipe/hacker-news and, honestly, I'm not really sure how this was obtained, if using scarping or if HN makes this data public in some way. Since I'm not a big fan of binary files, in the specific case of public datasets at least, I used two Python scripts in order to convert the Parquet files into something smaller and simpler to handle. The first script, gen-top-words.py, takes the binary files and generates a txt file with the list of the top N words used in the dataset. It generates 10k words by default, but for the statistical analysis a lot less are needed (or, actually: if you use too many words you no longer capture the style, but the kind of content a user is talking about!). Then, another Python script, accumulates all the comments for each single user and generates a very big JSONL file where there are just two keys: the user name and the frequency table of all the words used by a given user in all the history from HN starts to 2023. Each entry is like that: {'by': 'rtghrhtr', 'freqtab': {'everyone': 1, 'hates': 1, 'nvidia': 1, 'but': 1, 'treats': 1, 'ati': 1, 'as': 1, 'an': 1, 'afterthought': 1, 'another': 1, 'completely': 1, 'useless': 1, 'tool': 1, 'to': 1, 'throw': 1, 'on': 1, 'the': 1, 'pile': 1}} At this point, the final script, insert.py, could do all the real work: to apply the Borrows method for each user, create the user style vector, and insert it into Redis. The advantage of pre-processing the files (a slow operation) is that the insertion script could be called more easily with different parameters (especially the number of top words to use) in order to see the different results more promptly, without the need to re-process the Parquet files each time. # How the Burrow method works? In the original post, Christopher wrote that you just need to normalize the frequency of the words usage and apply cosine similarity. Actually the process is a bit more involved. First, let's ask ourselves, how this method actually works, in its essence? Well, it wants to capture words that each specific user over-uses or under-uses compared to the expected "average" language. To do so, we actually use the following steps (from the Python code). That's what we do for each of the top words: # Convert to relative frequency rel_freq = frequency / total_words # Standardize using z-score: z = (freq - mean) / stddev mean = word_means.get(word, 0.0) stddev = word_stddevs.get(word, 1.0) # Default to 1.0 to avoid division by zero z_score = (rel_freq - mean) / stddev # Set the z-score directly in the vector at the word's index vector[word_to_index[word]] = z_score So we start by "centering" the frequency the user used a given word, by subtracting the *global* usage frequency for that word. This way, we have a number that describes how much the user under (negative) or over (positive) used such word. But, if you think at it, words that have a much higher variance among usage of different writers are less important, when they change. We want to amplify the signal of words that are under of over used by this user in a much greater way compared to the normal variance of the word. This is why we divide the centered frequency by the global standard deviation of the word. Now we have what is called the "z score", an adjusted measure of how much a given word is an outlier in one or the other direction. Now, we are ready to insert the word into a Redis vector set, with just: VADD key FP32 [blob with 350 floats] username (I'll not cover the details of vector sets here since you can find the doc here -> https://github.com/redis/redis/blob/unstable/modules/vector-sets/README.md) Note that Redis performs L2 normalization of the inserted vectors, but remembers the L2 value in order to return back the values when VEMB is used to retrieve the associated vector, so the z_score was set as it is. Finally, with VSIM, we can get similar users: 127.0.0.1:6379> vsim hn_fingerprint ele pg 1) 'pg' 2) 'karaterobot' 3) 'Natsu' 4) 'mattmaroon' 5) 'chc' 6) 'montrose' 7) 'jfengel' 8) 'emodendroket' 9) 'vintermann' 10) 'c3534l' All the code (but the webapp itself) can be found here: https://github.com/antirez/hnstyle The README file explains how to reproduce every part. # Why 350 words? One of the things missing in the original post that stimulated this blog post, is how many top words one should use. If you use too many words, you'll see many comments of mine about Redis, since Redis is one of the top 10k words used. Guess what? I did exactly this error, initially, and VSIM continued to report users that talked about similar topics than myself, not with similar *style*. But fortunately the Internet Archive cached the Christopher results for the "pg" account, here: https://web.archive.org/web/20221126235433/https://stylometry.net/user?username=pg So now I could tune my top-k words to get similar results. Also, reading the original papers, I discovered that, with my surprise, for the analysis to work well you need even as little as 150 words. And in general the range from 150 to 500 is considered to be optimal. Warning: don't believe that when you search for a user you'll find mostly fake accounts. For many fake accounts there is too little data, as often people create throw away accounts, write a few comments, and that's it. So most of the accounts associated with a given user style will be just other people that have a similar writing style. This method I believe is quite powerful in distinguishing who is a native speaker and who is not. This is especially clear from the vectors visualization below. # Validate and visualize... Another thing that I reproduced (also an idea from OP) was to try inserting the same users in two variants, like antirez_A and antirez_B, using two different set of comments. Then check if asking for similar users to antirez_A would report B. Indeed, for *most* of the users I tested this against, it worked very well, and often times it was the top result. So we know that actually our method works. But since from the vectors it is so easy to "see" a style, what about our naked eyes? Recently I switched to Ghostty as my terminal, and it supports the Kitty graphics protocol, so you can display bitmaps directly in the terminal window. It is quite some time I want to play with it. Finally I had a good reason to test this feature. What's happening above is that we call the VEMB command, that returns just a list of floats (the vector). Then the vshow utility, also part of the repository, will care to find the smallest square that can contain the vector and show positive values in red, negative in green. As you can see, as a non native speaker I over-use very simple words and under-use more sophisticated words. Other authors stress certain specific words, others are much more "plain", showing less artifacts. At some point I was curious about what was really happening there: what words I would use too much and too little? So in the demo website you can also press the button to analyze a given user, and see the top 10 words over-used and under-used. Well, a few of mine are definitely due to my issues with English grammar :D Ok, enough with this investigation! Vector sets are now in Redis 8 RC1 and I have more work to do, but this was fun, and I believe it shows that vectors were definitely cool even before AI. Thanks for reading such a long post. EDIT: I forgot to say that the insert.py script also inserts the JSON metadata with the total words written by the user. So you can use FILTER in order to only show matches with a given number of words. This can be useful to detect duplicated accounts since often they are used only seldom, when the identity must be covered: 127.0.0.1:6379> vsim hn_fingerprint ele pg FILTER '.wordcount < 10000' 1) 'montrose' 2) 'kar5pt' 3) 'ryusage' 4) 'corwinstephen' 5) 'ElfinTrousers' 6) 'beaned' 7) 'MichaelDickens' 8) 'bananaface' 9) 'area51org' 10) 'william42' EDIT2: In case the matches look suspicious to you (meaningless), like tptacek noted in a comment in the HN submission of this blog post, here is a 'visual' match that shows how, for instance, montrose and pg are really similar in the words usage patterns: Please enable JavaScript to view the comments powered by Disqus.
    rss feed | twitter | google group | old site:



    All Comments: [-] | anchor

    mtlynch(187) 1 day ago [-]

    >Well, the first problem I had, in order to do something like that, was to find an archive with Hacker News comments. Luckily there was one with apparently everything posted on HN from the start to 2023, for a huge 10GB of total data.

    This is actually super easy. The data is available in BigQuery.[0] It's up to date, too. I tried the following query, and the latest comment was from yesterday.

        SELECT 
          id,
          text,
          `by` AS username,
          FORMAT_TIMESTAMP('%Y-%m-%dT%H:%M:%SZ', TIMESTAMP_SECONDS(time)) AS timestamp
        FROM 
          `bigquery-public-data.hacker_news.full`
        WHERE 
          type = 'comment'
          AND EXTRACT(YEAR FROM TIMESTAMP_SECONDS(time)) = 2025
        ORDER BY 
          time DESC
        LIMIT 
          100
    
    https://console.cloud.google.com/bigquery?ws=!1m5!1m4!4m3!1s...
    leetrout(3303) 1 day ago [-]

    My favorite which is also up to date is the ClickHouse playground.

    For example:

      SELECT * FROM hackernews_history ORDER BY time DESC LIMIT 10;
    
    https://gh-api.clickhouse.tech/play?user=play#U0VMRUNUICogRl...

    I subscribe to this issue to keep up with updates:

    https://github.com/ClickHouse/ClickHouse/issues/29693#issuec...

    And ofc, for those that don't know, the official API https://github.com/HackerNews/API

    laborcontract(10000) about 22 hours ago [-]

    ...i can't believe i've been running a script to ingest the data for the last six hours. thank you.

    scoresomefeed(10000) 1 day ago [-]

    The original version nailed all of my accounts with terrifying accuracy. Since then I make a new account every few days or weeks. Against the rules I know. And I've learned a lot about HN IP tracking and funny shadowbanning-like tricks they play but dont cop to. Like I get different error messages based on the different banned ips I use. And j see different behavior and inconsistency with flagged messages (like one that got upvoted a day after it was flagged and not visible to other users).

    AlexeyBelov(10000) about 5 hours ago [-]

    What you're doing makes HN worse, unfortunately.

    keepamovin(521) 1 day ago [-]

    This is great example of what's possible and how true anonymity, even online, is only 'technological threshold' anonymity. People obsessed with biometrics might not consider this is another biometric.

    Instead of just HN, now do it with the whole internet, imagine what you'd find. Then imagine that it's not being done already.

    consp(10000) 1 day ago [-]

    None of my throwaways and not even my old account shows up. We are not at that level yet. ymmv.

    tptacek(94) 2 days ago [-]

    This is an interesting and well-written post but the data in the app seems pretty much random.

    antirez(1163) 2 days ago [-]

    Thank you, tptacek. I was able to verify, thanks to the Internet Archive caching of 'pg' for the post of 3 years ago, that the entries are quite similar in the case of 'pg'. Consider that it captures just the statistical patterns in very common words, so you are not likely to see users that you believe are 'similar' to yourself. Notably: montrose may likely be a really be a secondary account of PG, and was also found as a cross reference in the original work of three years ago.

    Also note that vector similarity is not reciprocal, one thing can have a top scoring item, but such item may have much more items nearer, like in the 2D space when you have a cluster of points and a point nearby but a bit far apart.

    Unfortunately I don't think this technique works very well for actual duplicated accounts discovery because often times people post just a few comments in fake accounts. So there is not enough data, if not for the exception where one consistently uses another account to cover their identity.

    EDIT: at the end of the post I added the visual representations of pg and montrose.

    formerly_proven(10000) 2 days ago [-]

    I'm surprised no one has made this yet with a clustered visualization.

    antirez(1163) 2 days ago [-]

    Redis supports random projection to a lower dimensionality, but the reality is that projecting a 350d vector into 2d is nice but does not remotely captures the 'reality' of what is going on. But still, it is a nice idea to use some time. However I would do that with more than 350 top words, since when I used 10k it strongly captured the interest more than the style, so 2D projection of this is going to be much more interesting I believe.

    layer8(860) 2 days ago [-]

    Given that some matches are "mutual" and others are not, I don't see how that could translate to a symmetric distance measure.

    PaulHoule(97) 2 days ago [-]

    Personally I like this approach a lot

    https://scikit-learn.org/stable/modules/generated/sklearn.ma...

    I think other methods are more fashionable today

    https://scikit-learn.org/stable/modules/manifold.html

    particularly multi-dimension scaling, but personally I think tSNE plots are less pathological (they don't have as many of these crazy cusps that make me think it's projecting down from a higher-dimensional surface which is near-parallel to the page)

    After processing documents with BERT I really like the clusters generated by the simple and old k-Means algorithm

    https://scikit-learn.org/stable/modules/generated/sklearn.cl...

    It has the problem that it always finds 20 clusters if you set k=20 and a cluster which really oughta be one big cluster might get treated as three little clusters but the clusters I get from it reflect the way I see things.

    giancarlostoro(3167) 2 days ago [-]

    I tried my name, and I don't think a single 'match' is any of my (very rarely used) throw away alts ;) I guess I have a few people I talk like?

    antirez(1163) 2 days ago [-]

    When they are rarely used (a small amount of total words produced), they don't have meaningful statistical info for a match, unfortunately. A few users here reported finding actual duplicated accounts they used in the past.

    delichon(10000) 2 days ago [-]

    I got 3 correct matches out of 20, and I've had about 6 accounts total (using one at a time), with at least a fair number of comments in each. I guess that means that my word choices are more outliers than yours or there is just more to match. So it's not really good enough to reliably identify alt accounts, but it is quite suggestive.

    38(10000) 2 days ago [-]

    this got two accounts that I used to use

    antirez(1163) 2 days ago [-]

    Great! Thanks for the ACK.

    weinzierl(233) 2 days ago [-]

    How does it find the high similarity between 'dang' and 'dangg' when the 'dangg' account has no activity (like comments) at all?

    https://antirez.com/hnstyle?username=dang&threshold=20&actio...

    antirez(1163) 2 days ago [-]

    Probably it used to have when the database was created. Then the comments got removed.

    hammock(949) 2 days ago [-]

    The 'analyze' feature works pretty well.

    My comments underindex on 'this' - because I have drilled into my communication style never to use pronouns without clear one-word antecedents, meaning I use 'this' less frequently that I would otherwise.

    They also underindex on 'should' - a word I have drilled OUT of my communication style, since it is judgy and triggers a defensive reaction in others when used. (If required, I prefer 'ought to')

    My comments also underindex on personal pronouns (I, my). Again, my thought on good, interesting writing is that these are to be avoided.

    In case anyone cares.

    antirez(1163) 2 days ago [-]

    That's very interesting as I noticed that certain outliers seemed indeed conscious attempts.

    croemer(3663) 2 days ago [-]

    Since you seem to care about your writing, I'm wondering why you used 'that' here?

    > I use 'this' less frequently that I would otherwise

    Isn't it 'less than' as opposed to 'less that'?

    Joker_vD(10000) 2 days ago [-]

    > I prefer 'ought to'

    I too like when others use it, since a very easy and pretty universal retort against 'you ought to...' is 'No, I don't owe you anything'.

    jcims(10000) 2 days ago [-]

    I (also?) felt the 'words used less often' were much easier to connect to as a conscious effort. I pointed chatgpt to the article and pasted in my results and asked it what it could surmise about my writing style based on that. It probably connected about as well as the average horoscope but was still pretty interesting!

    tobr(421) 2 days ago [-]

    > Again, my thought on good, interesting writing is that these are to be avoided.

    You mean, "I think this should be avoided"? ;)

    milesrout(10000) 1 day ago [-]

    Should is a commonly used word and a fine one. You should feel free to use it. If someone gets hot under the collar because you said he should do something then he is an idiot.

    'Ought to' is essentially a synonym. Anyone that gets upset when you said they should do something but is fine when you say that they ought to do something is truly a moron.

    WhyNotHugo(2949) 1 day ago [-]

    I think "should" and "ought to" end up being equivalent.

    I prefer to avoid such absolutes and portray causality instead.

    For example, in place of "you should not do drugs at work" I prefer "if you take drugs at work you'll get in trouble".

    throwaway290(10000) 1 day ago [-]

    Now if you only underindex on 'underindex'... There's a good alternative that everyone understands, 'use less'

    alganet(10000) 2 days ago [-]

    Cool tool. It's a shame I don't have other accounts to test it.

    It's also a tool for wannabe impersonators to hoan their writing style mimic skills!

    shakna(1921) 2 days ago [-]

    I don't have other accounts, but still matched at 85+% accuracy for a half dozen accounts. Seems I don't have very original thoughts or writing style.

    andrewmcwatters(10000) 2 days ago [-]

    Well, well, well, cocktailpeanuts. :spiderman_pointing:

    I suspect, antirez, that you may have greater success removing some of the most common English words in order to find truly suspicious correlations in the data.

    cocktailpeanuts and I for example, mutually share some words like:

    because, people, you're, don't, they're, software, that, but, you, want

    Unfortunately, this is a forum where people will use words like 'because, people, and software.'

    Because, well, people here talk about software.

    <=^)

    Edit: Neat work, nonetheless.

    alganet(10000) 2 days ago [-]

    That seems to be a misconception.

    The usage frequency of simple words is a powerful tell.

    cratermoon(344) 2 days ago [-]

    I noted the 'analyze' feature didn't seem as useful as it could be because the majority of the words are common articles and conjunctions. I'd like to see a version of analyze that filters out at least the following stop words: a, an, and, are, as, at, be, but, by, for, if, in, into, is, it, no, not, of, on, or, such, that, the, their, then, there, these, they, this, to, was, will, with

    xnorswap(10000) 2 days ago [-]

    I wonder how much accuracy would be improved if expanding from single words to the most common pairs or n-tuples.

    You would need more computation to hash, but I bet adding frequency of the top 50 word-pairs and top 20 most common 3-tuples would be a strong signal.

    ( The nothing the accuracy is already good of course. I am indeed user eterm. I think I've said on this account or that one before that I don't sync passwords, so they are simply different machines that I use. I try not to cross-contribute or double-vote. )

    antirez(1163) 1 day ago [-]

    Maybe there isn't enough data for each user for pairs, but I thought about mixing the two approaches (but had no time to do it), that is, to have 350 components like now, for the single word frequency, plus other 350 for the most common pairs frequency. In this way part of the vector would remain a high enough signal even for users with comparable less data.

    Frieren(10000) 2 days ago [-]

    It works for me. The accounts I used long time ago are there in high positions. I guess that my style is very distinctive.

    But I also have seen some accounts that seem to be from other non-native English speakers. They may even have a Latin language as their native one (I just read some of their comments, and, at minimum, some of them seem to also be from the EU). So, I guess, that it is also grouping people by their native language other than English.

    So, maybe, it is grouping many accounts by the shared bias of different native-languages. Probably, we make the same type of mistakes while using English.

    My guess will be that native Indian or Chinese speakers accounts will also be grouped together, for the same reason. Even more so, as the language is more different to English and the bias probably stronger.

    It would be cool that Australians, British, Canadians tried the tool. My guess is that the probability of them finding alt-accounts is higher as the populations is smaller and the writing more distinctive than Americans.

    Thanks for sharing the projects. It is really interesting.

    Also, do not trust the comments too much. There is an incentive to lie as to not acknowledge alt-accounts if they were created to remain hidden.

    gostsamo(3330) 2 days ago [-]

    I discover 2 people in my top 20 who I can bet are from the same country as me and it is not a big country.





    Historical Discussions: Hacking a Smart Home Device (2024) (April 15, 2025: 314 points)
    Hacking a ESP32-Based Smart Home Device (February 11, 2025: 2 points)
    Hacking a Smart Home Device (ESP32 Smart Device –> HomeAssistant) (February 05, 2024: 2 points)

    (314) Hacking a Smart Home Device (2024)

    314 points 3 days ago by walterbell in 23rd position

    jmswrnr.com | Estimated reading time – 105 minutes | comments | anchor

    How I reverse engineered an ESP32-based smart home device to gain remote control access and integrate it with Home Assistant.

    Recently, I've been slightly obsessed with connecting anything and everything in my house to open_in_newHome Assistant. There's something so satisfying about having everything connected and automated in one application; I can finally forget every random mobile app for a different brand of smart product.

    But there is one product I own that stubbornly doesn't connect to anything other than its own mobile app. It's a sleek air purifier that is unfortunately let down by its disappointing app.

    So many modern products depend on an internet connection and cloud account for basic functions, and who knows what unnecessary data they collect or technical vulnerabilities they add to the home network?

    I want to control this expensive air purifier just like the rest of my smart gadgets. And that marks the start of this challenging yet undoubtedly fun journey.

    It's time to hack an air purifier! 😆

    By the way, if you enjoy my content, you can open_in_newBuy Me a Coffee to support my content creation!

    warning Disclaimer

    The contents of this post are intended for educational purposes on the process of reverse engineering IoT smart devices and network protocols.

    Hacking can be a scary term, so I'd like to make it clear that my intentions were solely to upgrade the smart device I've purchased to integrate with my smart home system. Doing so does not affect any other instances of this product or its cloud services. Therefore, any sensitive product-specific data, such as private keys, domains, or API endpoints, have been obfuscated or redacted from this post.

    Tinkering with your devices will likely void any warranty and carries a risk of permanently damaging the device; do so at your own risk.

    If we're going to hack this device to be controlled by custom software, we're going to need to understand its current capabilities and plan a point of attack, requiring the least amount of work to achieve our goal.

    The device already supports remote control with its own mobile app, which annoyingly requires a cloud account to use. By toggling my phone's Bluetooth, WiFi, and 5G, I was able to confirm that the app required an internet connection to control the device. Remote control was not possible locally via Bluetooth or WiFi.

    This means the mobile app and device must be connected to a cloud server for the remote control to be possible. So, somewhere in that network, data between the device and its cloud server must be the fan speed and everything else the app controls.

    So, that is our point of attack:

    • If we can intercept the device's network traffic and change those values, we have control of the device.

    • If we can emulate all of the server responses, we have control of the device without depending on an internet connection and its cloud server.

    One of the first things I looked into was the remote control mobile app. This can be a quick way to gather some information, as Android apps can be relatively simple to pull apart.

    Apps on Android are stored as a .apk file. With a quick search online, you can find a website to download a specific app's latest .apk. If you didn't know, the format of an .apk is technically a .zip file! you can simply extract them to browse the app's contents.

    Android apps include compiled Java executables, usually named classes.dex. You can convert these to a .jar file with open_in_newdex2jar and use open_in_newjd-gui to browse the contents as reconstructed source code.

    Locating the app MainActivity.class revealed that it is built with React Native!

    package com.smartdeviceapp;
    
    import com.facebook.react.ReactActivity;
    
    public class MainActivity extends ReactActivity {
      protected String getMainComponentName() {
        return 'SmartDeviceApp';
      }
    }

    For Android apps built with React Native, you can find the JavaScript bundle in assets/index.android.bundle.

    A quick scan of the app's bundle revealed it uses a secure WebSocket connection:

    self.ws = new WebSocket('wss://smartdeviceapi.---.com');

    There isn't too much interest here in this Android app; as expected, it connects with their cloud server in order to remote control the smart device. It's worth a quick look due to the simplicity of getting some readable source code. We can always reference this bundle to see if any shared values or logic can be found there.

    Next up, it's time to have a look at the network traffic between the device and its cloud server; this is what we're trying to intercept and, ideally, emulate.

    I use Pi-hole locally, which is a DNS server that blocks tracking and some ads, but it also has a useful feature to browse DNS queries by device. By navigating to the Tools > Network page and selecting the device's local network address, we can see it's querying the DNS server for the address of the cloud server's domain:

    So now we know the cloud server's domain it's connecting to, we can use the Local DNS feature to send that network traffic to my local workstation (192.168.0.10) instead of their cloud server:

    We can then use open_in_newWireshark to take a look at the traffic coming in from the smart device. We can do this by monitoring the workstation network interface with a filter of ip.addr == 192.168.0.61 (smart device address).

    By doing this, I was able to see UDP packets being sent from the smart device to the workstation on the port 41014!

    So, we know the smart device uses UDP to communicate with its cloud server. But right now, it's trying to communicate with my workstation and is expecting it to respond like its cloud server.

    We can use a simple UDP proxy for our workstation to act as a relay between the smart device and its cloud server.

    I used open_in_newCloudflare's DNS resolver (1.1.1.1) to look up the real IP address for their cloud server (because my Pi-hole DNS would have just resolved to my workstation's local IP address). Then I used open_in_newnode-udp-forwarder as a simple method to relay the traffic to their cloud server:

    udpforwarder \
    --destinationPort 41014 --destinationAddress X.X.X.X \
    --protocol udp4 --port 41014

    X.X.X.X being the real IP address of their cloud server.

    Looking at Wireshark again, we can see all the network traffic between the smart device and its cloud server!

    When booting the device, it would send a packet to the server with data like this:

    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
     
    00000000  55 00 31 02 01 23 45 67  89 AB CD EF FF 00 01 EF  U.1..#Eg........
    00000010  1E 9C 2C C2 BE FD 0C 33  20 A5 8E D6 EF 4E D9 E3  ..,....3 ....N..
    00000020  6B 95 00 8D 1D 11 92 E2  81 CA 4C BD 46 C9 CD 09  k.........L.F...
    00000030  0E                                                .

    The server would then respond with the following:

    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
     
    00000000  55 00 2F 82 01 23 45 67  89 AB CD EF FF 37 34 9A  U./..#Eg.....74.
    00000010  7E E6 59 7C 5D 0D AF 71  A0 5F FA 88 13 B0 BE 8D  ~.Y|]..q._......
    00000020  ED A0 AB FA 47 ED 99 9A  06 B9 80 96 95 C0 96     ....G..........

    All of the packets after this seemed to share a similar structure. They did not include any readable strings but were full of what appeared to be random bytes of data; this could be the open_in_newAvalanche effect pointing toward encryption.

    I searched around to see if this packet structure was an existing protocol. I read that DTLS is used by some smart devices and that it is based on UDP.

    However, Wireshark does support the detection of DTLS packets but listed this packet as UDP, which means it couldn't determine a UDP-based protocol from the data. I double-checked with the DTLS specification, but that described a header format different from what we see in the packet, so we know DTLS isn't used here.

    At this point, we hit a blocker; we don't understand how the data is formatted in these packets, which means we can't manipulate or emulate anything yet.

    This would have been a lot easier if it used a well-documented protocol, but where's the fun in that?

    We know there are 2 applications that understand how to read this packet data: the smart device and its cloud server. And well, I don't have their cloud server handy, so it's time to take a look inside the smart device!

    It was quite easy to disassemble with a few easily accessible screws. Inside was the main PCB containing the microcontroller, a port connecting to the fan, and a ribbon cable to the control panel on the front.

    The main controller is labeled as an ESP32-WROOM-32D. This microcontroller is commonly used in smart devices and features WiFi and Bluetooth.

    I stumbled across the open_in_newESP32-reversing GitHub repo, which contained a nice list of ESP32-related reverse engineering resources.

    The ESP32 contains a flash chip, which is where the firmware containing application logic is most likely stored.

    The manufacturer of the ESP32 provides a utility called open_in_newesptool to communicate with the ROM bootloader in the ESP32. With this tool, it's possible to read data from the flash, but first, we must establish a serial connection!

    Referencing the open_in_newESP32 datasheet, we can find the pin layout diagram:

    Here, we can see the TXD0(35) and RXD0(34) pins. We need to connect a wire to both of these pins and a ground pin for a serial connection.

    The device PCB had a few pin holes, which are commonly connected to the pins for debugging and flashing; I was able to visually follow the traces from both of these serial pins to the holes! This allowed me to easily solder on breakout headers that I could temporarily plug jumper wires into. Otherwise, I would have likely carefully soldered directly to the chip pins.

    With a multimeter set to continuity mode, I was able to locate which hole was ground by referencing the GND(38) pin on the ESP32.

    Now, we need a port to handle this UART serial communication. I used my open_in_newFlipper Zero, which has a handy USB-UART Bridge application under the GPIO category.

    Using 3 jumper wires, I connected them together:

    • Flipper Zero TX <--> RX ESP32

    • Flipper Zero RX <--> TX ESP32

    • Flipper Zero GND <--> GND ESP32

    info Note

    The TX and RX wires are intentionally crossed here; we want to transmit data to the other device's receiving line!

    In Windows Device Manager, under the Ports (COM & LPT) category, I found my Flipper Zero UART device as COM7. Using open_in_newPutty configured to a Serial connection on COM7 at 115200 speed, I was able to successfully connect to the Flipper Zero. While searching around, I saw this speed was often used for the ESP32, so I decided to go with it here.

    When booting up the smart device, I noticed a bunch of log data from the serial output:

    rst:0x1 (POWERON_RESET),boot:0x13 (SPI_FAST_FLASH_BOOT)
    configsip: 0, SPIWP:0xee
    clk_drv:0x00,q_drv:0x00,d_drv:0x00,cs0_drv:0x00,hd_drv:0x00,wp_drv:0x00
    mode:DIO, clock div:2
    load:0x3fff0030,len:4476
    ho 0 tail 12 room 4
    load:0x40078000,len:13512
    ho 0 tail 12 room 4
    load:0x40080400,len:3148
    entry 0x400805f0
    ********************************
    **    Starting SmartDevice    **
    ********************************
    This is esp32 chip with 2 CPU core(s), WiFi/BT/BLE, silicon revision 1, 4MB external flash
    Minimum free heap size: 280696 bytes
    nvs_flash_init ret: 0
    Running app from: factory
    Mounting FAT filesystem
    csize: 1
    122 KiB total drive space.
    0 KiB available.
    FAT filesystem mounted
    SERIAL GOOD
    CapSense Init
    Opening[rb]: /spiflash/serial
    Serial Number: 0123456789abcdefff
    Opening[rb]: /spiflash/dev_key.key
    Device key ready
    Base64 Public Key: **REDACTED**
    Opening[rb]: /spiflash/SmartDevice-root-ca.crt
    Opening[rb]: /spiflash/SmartDevice-signer-ca.crt
    Addtimeout: 10000, id: 0
    RELOAD FALSE
    Opening[rb]: /spiflash/server_config
    MP PARSE DONE
    Server: smartdeviceep.---.com:41014

    We can pick out some useful information from this output:

    • The device has a 4MB flash chip.

    • The application runs from factory, which is a common partition name for the default application flashed at the factory.

    • A FAT filesystem is mounted.

    • The application reads files for:
      • Serial number

      • Device key

      • Two CA certificates (root and signer)

      • Server config

    Awesome, now we have a working serial connection, we can focus on dumping the flash, hoping it contains information on how to read these packets!

    To read the flash, we need to boot the ESP32 in a different mode, specifically what it calls the Download Boot mode. This is technically explained in the Strapping Pins section of the datasheet. But TL;DR, I held a jumper wire from a GND port on my Flipper Zero to the IO0(25) pin on the ESP32 while it boots.

    Checking the serial output with Putty, we can see this successfully boots the smart device into the Download Boot mode:

    rst:0x1 (POWERON_RESET),boot:0x3 (DOWNLOAD_BOOT(UART0/UART1/SDIO_REI_REO_V2))
    waiting for download

    Now we can close Putty and switch over to a Terminal to use esptool.

    We're able to dump the entire 4MB of flash data from the ESP32 with the following command:

    esptool -p COM7 -b 115200 read_flash 0 0x400000 flash.bin

    I dumped the flash a couple of times to ensure I had a good read and backed them up in case we accidentally brick something because then we can flash back the dump.

    info Note

    To read the flash successfully using the Flipper Zero, I had to change its config to specify the baud rate of 115200 instead of Host.

    We have the ESP32 flash dumped into a single binary file, and now we need to make sense of it. I found open_in_newesp32knife to be the best utility for this.

    It reads the flash file and extracts a bunch of useful information. It was also the only utility that successfully reformatted this dump into ELF format with correctly mapped virtual memory, but more on that later! Let's see what we can find:

    python esp32knife.py --chip=esp32 load_from_file ./flash.bin

    This logs out a lot of information and saves the output data to a ./parsed folder.

    The first file of interest here is partitions.csv, this table maps areas of data in the flash:

    # ESP-IDF Partition Table
    # Name,   Type, SubType,  Offset,   Size, Flags
    nvs,      data, nvs,      0x9000,   16K,
    otadata,  data, ota,      0xd000,   8K,
    phy_init, data, phy,      0xf000,   4K,
    factory,  app,  factory,  0x10000,  768K,
    ota_0,    app,  ota_0,    0xd0000,  768K,
    ota_1,    app,  ota_1,    0x190000, 768K,
    storage,  data, fat,      0x250000, 1M,
    

    Here, we can see a few interesting entries:

    • There are three application partitions. Two are labeled ota, which is where over-the-air firmware updates are written. The other is labeled factory, and we know from the serial output during boot this is the application partition that is currently used.

    • That storage partition has the FAT type, this like likely the FAT filesystem we saw mounting in the serial output.

    • nvs is a key-value storage partition, there may be some useful data here.

    📌 Update

    Other readers have mentioned that this flash dump could have been protected if the device had enabled flash encryption (which it does not in this case).

    I was initially curious to see what data was in the nvs key-value storage partition.

    The latest state of this data was extracted to part.0.nvs.cvs, and the only interesting data I could see was my WiFi SSID and password. But I also found the full historical changelog of values in part.0.nvs.txt and that revealed a couple of previously used WiFi credentials; what!? did someone use this thing before me?😆

    Following that, it was time to look at the contents of the FAT storage partition. I found open_in_newOSFMount to be a great Windows application for this; it mounts the filesystem image as a virtual disk and allows writing to it!

    This revealed a few interesting files that we saw from the serial output earlier:

    dev_info
    dev_key.key
    serial
    server_config
    SmartDevice-root-ca.crt
    SmartDevice-signer-ca.crt
    wifi_config

    I inspected the contents of these files and found:

    • dev_info - a UUID labeled firmware, likely the version installed

    • dev_key.key - 256-bit private key (prime256v1), the public key for this was printed to the serial output labeled Device key!

    • serial - the serial number

    • server_config - the address and port number we found earlier

    • SmartDevice-root-ca.crt - a CA certificate with a 256-bit public key (prime256v1)

    • SmartDevice-signer-ca.crt - a CA certificate with a 256-bit public key (prime256v1) and the root certificate as its CA (certificate authority)

    • wifi_config - my WiFi SSID and password

    The dev_key.key file started with -----BEGIN EC PRIVATE KEY----- which is an Elliptic Curve private key; I used open_in_newopenssl to verify this with:

    openssl ec -in dev_key.key -text -noout

    And the two .crt files started with -----BEGIN CERTIFICATE----- which I also verified using openssl with:

    openssl x509 -in ./SmartDevice-root-ca.crt -text -noout
    openssl x509 -in ./SmartDevice-signer-ca.crt -text -noout

    Having the certificates and device key stored on the device strongly indicates they are used to encrypt the UDP network packet data.

    Now we've taken a look at the storage, it's time to look at the application which runs on the device.

    We know it's running the factory partition, so I opened the part.3.factory file in the open_in_newGhidra CodeBrowser. Ghidra is a free and open-source suite of reverse engineering tools from the NSA; it's an alternative to the paid open_in_newIDA Pro.

    This file we're opening is the partition image direct from the flash; it's comprised of multiple segments of data, each getting mapped to different virtual memory regions on the ESP32. For example, data at offset 0x17CC4 in the partition image is actually mapped to 0x40080ce0 in the device's virtual memory, so although this file contains all of the application logic and data, Ghidra won't understand how to resolve any absolute memory references, at least for now. There will be more on this later!

    The ESP32 microprocessor uses the Xtensa instruction set, and Ghidra has recently added support for this! When loading the image, you can select the language Tensilica Xtensa 32-bit little-endian. We can run the auto analysis; although it won't give us great results just yet, we can still look at any defined strings it is able to find.

    Text strings in a compiled application are a fast-track way of locating and understanding logic when reverse engineering; they can reveal a lot about the application.

    Because this compiled file only contains bytecode instructions for the processor, there are no function names, data types, or parameters. It can initially seem like a giant blob of nonsense, but as soon as you a string reference like Failed to read wifi config file, you can start to piece together what the logic is doing. Reverse engineering compiled applications can be difficult, but it is certainly a rewarding challenge.

    So, I had a look through the Defined Strings window in Ghidra to see what I could find, and noticed all of the strings we saw in the serial output, such as:

    000031c4	'Serial Number: %s\r\n'
    000031fc	'Device key ready\r'
    00003228	'Base64 Public Key: %s\r\n'

    As expected, the address is the string's location in the partition image. Ideally, this should be the address in the virtual memory when running on the ESP32; that way, we can see any bytecode that references this string. We'll tackle that soon!

    In close proximity to these strings were some others of interest:

    000030d0	'Message CRC error\r'
    00003150	'Seed Error: %d\r\n'
    000031c4	'Serial Number: %s\r\n'
    000031fc	'Device key ready\r'
    00003228	'Base64 Public Key: %s\r\n'
    00003240	'Error reading root cert!!!!\r'
    00003260	'Error reading signer cert!!!!\r'
    00003280	'PRNG fail\r'
    0000328c	'ECDH setup failed\r'
    000032a0	'mbedtls_ecdh_gen_public failed\r'
    000032c0	'mbedtls_mpi_read_binary failed\r'
    000032e0	'Error copying server key to ECDH\r'
    00003304	'mbedtls_ecdh_compute_shared failed: 0x%4.4X\r\n'
    00003334	'Error accessing shared secret\r'
    00003354	'####### MBED HKDF failed: -0x%4.4X ########\r\n'
    00003384	'Sign failed\n  ! mbedtls_ecp_group_copy returned 0x%4.4X\n'
    000033c0	'Sign failed\n  ! mbedtls_ecp_copy returned 0x%4.4X\n'
    000033f4	'Sign failed: 0x%4.4X\r\n'
    3f403d30	'Write ECC conn packet\r\n'

    There is so much useful information that we can extract from these strings. Even without reading the assembly, we can start to assume what it's doing with the data.

    Here's what I noticed:

    • CRC error code: this is a checksum algorithm that could be part of the packet data.

    • open_in_newmbedtls is an open-source library implementing cryptographic primitives, X509 certificate manipulation, and SSL/TLS and DTLS protocols.
    • ECDH and HKDF primitive functions are used directly from mbedtls. We already know it's not using the DTLS protocol, so we can assume it's using them to implement a custom protocol.

    • We can also assume the files mentioned nearby are also related:
      • Serial number

      • Device key

      • Root certificate

      • Signer certificate

    • An 'ECC conn packet' is sent from the client; this is part of the ECDH key exchange process; we'll also get to that later!

    Ok, it's about time we configure Ghidra to analyze this ESP32 application better.

    First up, esp32knife supports reformatting the binary partition image for the application into an ELF format, which Ghidra can better understand. I had to make a small tweak for it to support the RTC_DATA segment, which I've pushed to my fork on GitHub: open_in_newfeat: add support for RTC_DATA image segment.

    We can then import the more useful part.3.factory.elf instead of the part.3.factory binary partition image.

    But when importing this time, we want to do a couple of things before running the auto analysis, so let's opt out of doing that for now.

    Next, we can use the open_in_newSVD-Loader-Ghidra script to import the peripheral structs and memory maps from the official open_in_newesp32.svd file.

    We can also use the built-in SymbolImportScript script to load labels for all ROM functions. I've published a file with all ROM function labels for the ESP32 ready for Ghidra here: open_in_newESP32_ROM_LABELS.txt. This will help us identify common ROM functions like printf.

    Finally, we run the auto-analysis from the menu bar Analysis > Auto Analyze.

    Let's see what that does to the strings we found earlier:

    3f4031c4	'Serial Number: %s\r\n'
    3f4031fc	'Device key ready\r'
    3f403228	'Base64 Public Key: %s\r\n'

    We can now see the same strings are mapped correctly to their virtual memory addresses, meaning the analysis will detect any pointers or instructions that reference them!

    info Note

    There are multiple versions of the ESP32, such as ESP32c2, and ESP32s2. The ROM labels and .svd file I've linked are for the default ESP32. if you have a different version, you'll need to import the specific .svd and create specific ROM labels following the README in my gist.

    Up until this point, I have the PCB awkwardly positioned to keep the fan and control panel connected. So, I wanted to see if it would still function with them unplugged. Unfortunately, it did not; the serial logged the following:

    I2C read reg fail1
    No Cap device found!
    REGuru Meditation Error: Core  0 panic'ed (IllegalInstruction). Exception was unhandled.
    Memory dump at 0x400da020

    Now we have Ghidra configured nicely, I took a look at the address mentioned in the log; it was assembly right next to a reference for the No Cap device found! string, and at the start of the function, it logs 'CapSense Init\r'. This must be for the control panel that uses capacitive sensing input!

    I named this function in Ghidra to InitCapSense:

    void InitCapSense()
    {                       
      FUN_401483e0('CapSense Init\r');
      // ... CapSense logic
    }

    I then followed the references to this function back to another function that appeared to be starting as a task/service; I renamed this one StartCapSenseService:

    void StartCapSenseService()
    {
      _DAT_3ffb2e2c = FUN_40088410(1, 0, 3);
      FUN_4008905c(InitCapSense, &DAT_3f40243c, 0x800, 0, 10, 0, 0x7fffffff);
      return;
    }

    Again, I followed the function references and found the function that calls StartCapSenseService. Using Ghidra's Patch Instruction feature, I replaced the call instruction with a nop (no operation) instruction to remove the function call:

    // Original
    400d9a28  25 63 af    call8     FUN_4008905c
    
    400d9a2b  65 31 00    call8     StartCapSenseService
    
    400d9a2e  e5 37 00    call8     FUN_400d9dac
    
    
    // Patched
    400d9a28  25 63 af    call8     FUN_4008905c
    
    400d9a2b  f0 20 00    nop
    
    400d9a2e  e5 37 00    call8     FUN_400d9dac

    We want to flash this change to the ESP32, so I replaced the bytes that were modified, not in this ELF file, but in the part.3.factory binary partition image, because that is in a raw format directly from the flash, so it will be easy to write back. I use a hex editor to find & replace the bytes:

    2564af 653100 e53700 -> 2563af f02000 e53700

    Then, I wrote this modified image to the ESP32 flash at the offset 0x10000, that is the offset from the partition table for the factory partition:

    esptool -p COM7 -b 115200 write_flash 0x10000 ./patched.part.3.factory

    But when trying to boot this, we get an error from the serial output:

    E (983) esp_image: Checksum failed. Calculated 0xc7 read 0x43
    E (987) boot: Factory app partition is not bootable

    Alright, so there is a checksum. Luckily, the code inside esptool knows how to calculate this, so I threw together a quick little script to fix the checksums for an application partition image: open_in_newfeat: add image checksum repair script.

    Now, we can use this to repair the checksums and flash the repaired image:

    python esp32fix.py --chip=esp32 app_image ./patched.part.3.factory
    
    esptool -p COM7 -b 115200 write_flash 0x10000 ./patched.part.3.factory.fixed

    I tried booting the device without the control panel again; everything now works ok! We have successfully just modified the smart device's firmware!

    Let's get back to focusing on the packets. We know the packets do not follow a well-known protocol, meaning we must figure out the structure ourselves.

    I captured the packets from the device booting numerous times and compared them to each other. I noticed the first thirteen bytes were similar to other packets, while the rest of the packet seemed to be encrypted.

    Here's the first packet received from the server between boots; you can see the data matches up until the offset 0x0D:

    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
     
    00000000  55 00 2F 82 01 23 45 67  89 AB CD EF FF 37 34 9A  U./..#Eg.....74.
    00000010  7E E6 59 7C 5D 0D AF 71  A0 5F FA 88 13 B0 BE 8D  ~.Y|]..q._......
    00000020  ED A0 AB FA 47 ED 99 9A  06 B9 80 96 95 C0 96     ....G..........
    
    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
     
    00000000  55 00 2F 82 01 23 45 67  89 AB CD EF FF 81 85 3F  U./..#Eg.......?
    00000010  8A 10 F5 02 A5 F0 BD 28  73 C2 8C 05 71 6E E4 A3  .......(s...qn..
    00000020  A6 36 FD 5C E0 D5 AC 3E  1A D5 C5 88 99 86 28     .6.\...>......(

    It wasn't too difficult to figure out the first couple of values, then I noticed the remaining nine bytes matched the serial number from the device's serial output, and there we have the packet header format:

    55 // magic byte to identity the protocol
    00 31 // length of the packet in bytes
    02 // message identifier
    01 23 45 67 89 AB CD EF FF // device serial
    • A magic byte is commonly used to identify a piece of data in a specific format uniquely.

    • A size-related byte and message ID are very common to expect in a packet like this.

    The packets first sent and received had a slightly different format to those that followed; there were always the bytes 00 01 after the header in the client packet, and it was the only packet with the message ID of 0x02.

    Comparing it to the other packets, I noticed a pattern with the message ID:

    • 0x02 - First packet sent from smart device

    • 0x82 - First packet received from cloud server

    • 0x01 - All other packets sent from smart device

    • 0x81 - All other packets received from cloud server

    You can see the higher bits in this value represent if it's a client request (0x00) or a server response (0x80). And the lower bits are different between the first exchange (0x02) and all other packets (0x01).

    We noticed a string in the application earlier that said 'Message CRC error\r' which implied there is a CRC checksum in the packet. It would be helpful to know if there is a checksum in the data so it doesn't interfere with any decryption attempts.

    I followed the references to this string, and a single function references it.

    Let's take a look at the Decompiled code for that function:

    // ...
    iVar1 = FUN_4014b384(0, (char *)(uint)_DAT_3ffb2e40 + 0x3ffb2e42);
    iVar2 = FUN_400ddfc0(&DAT_3ffb2e44, _DAT_3ffb2e40 - 2);
    if (iVar1 == iVar2) {
      if (DAT_3ffb2e47 == '\x01') {
        FUN_400db5c4(0x3ffb2e48, _DAT_3ffb2e40 - 6);
      }
      else if (DAT_3ffb2e47 == '\x02') {
        FUN_401483e0(s_Connection_message_3f4030e4);
      }
      pcVar3 = (char *)0x0;
      _DAT_3ffb3644 = (char *)0x0;
    }
    else {
      FUN_401483e0(s_Message_CRC_error_3f4030d0);
      pcVar3 = (char *)0x0;
      _DAT_3ffb3644 = (char *)0x0;
    }
    // ...

    We can see the s_Message_CRC_error label being used in the else block, so the if statement must verify the CRC data for a message.

    This logic compares the results of 2 functions FUN_4014b384 and FUN_400ddfc0. If this is verifying the checksum of a packet, one must generate a checksum for the packet data, and the other must read the checksum value from the packet.

    We could use the arguments to help us decide which is which, but let's take a look at both:

    uint FUN_4014b384(int param_1, byte *param_2)
    {
      uint uVar1;
      
      if (param_1 == 0) {
        uVar1 = (uint)*param_2 * 0x100 + (uint)param_2[1];
      }
      else {
        uVar1 = (uint)*param_2 + (uint)param_2[1] * 0x100;
      }
      return uVar1 & 0xffff;
    }
    

    This doesn't look like a CRC function. It actually looks like a function that reads a 16-bit uint with configurable endianness; here's why:

    • Multiplying a value by 0x100 (256) is the equivalent of shifting left by 8 bits (half of a 16-bit value), so 0x37 becomes 0x3700. The logic in the first if code block adds this to the byte at index[1]; this is the next byte after it in memory, so that's basically reading a big-endian uint16 from the param_2 pointer

    • The logic of the else code block is similar but shifts the second byte instead of the first, thus reading a little-endian uint16. So, the param_1 parameter configures the endianness of the result.

    • The return statement does a bitwise AND (&) operator on the return value with 0xFFFF, this restricts the value to 16 bits of data by zeroing out any higher bits.

    uint FUN_400ddfc0(byte *param_1, uint param_2)
    {
      uint uVar1;
      uint uVar2;
      byte *pbVar3;
      
      pbVar3 = param_1 + (param_2 & 0xffff);
      uVar1 = 0xffff;
      for (; pbVar3 != param_1; param_1 = param_1 + 1) {
        uVar1 = (uint)*param_1 << 8 ^ uVar1;
        uVar2 = uVar1 << 1;
        if ((short)uVar1 < 0) {
          uVar2 = uVar2 ^ 0x1021;
        }
        uVar1 = uVar2 & 0xffff;
      }
      return uVar1;
    }

    Now, this looks a lot more like a checksum function; there's a for loop with a bunch of bitwise operators inside.

    I open up one of the captured packets into open_in_newImHex, a hex editor for reverse engineers. This has a handy feature to show the checksum of the currently selected data.

    Because the other function reads a 16-bit uint, I select CRC-16 and start selecting regions of bytes that would likely be hashed, leaving 2 bytes unselected where I think the 16-bit hash could be.

    No luck so far, but then I noticed you can configure the CRC-16 parameters in ImHex. So, I tried a cheap shortcut and set up ImHex to calculate CRC-16 checksums with a bunch of different parameter combinations using the values found in the decompiled function.

    Success! The last 2 bytes of the packet turned out to be a CRC checksum of all other data in the packet, specifically CRC-16 with 0x1021 polynomial and 0xFFFF initial value. I checked this with other packets, and they all passed the checksum.

    Now we know the last 2 bytes of every packet are a CRC-16 checksum and can exclude it from any decryption attempts!

    Earlier, we noticed mbedtls primitives labeled as ECDH and HKDF. So, what exactly are they?

    ECDH (Elliptic Curve Diffie–Hellman Key Exchange) is a key agreement protocol that allows 2 parties (like the smart device and its cloud server), each having an elliptic-curve public–private key pair, to establish a shared secret over an insecure channel (UDP). I found a great explanation of this in more detail in 'Practical Cryptography for Developers': open_in_newECDH Key Exchange.

    Essentially, if the smart device and server generate an EC key pair and exchange their public keys, they can use the other's public key with their private key to compute a shared secret key. This shared secret key could be used to encrypt and decrypt the packets! And even though they exchange public keys over the insecure network, you still need one of the private keys in order to compute the shared key.

    This is ideal for securing packets like this, and the first packet sent by the client is actually named the ECC conn packet in the logs:

    UDP Connect: smartdeviceep.---.com
    smartdeviceep.---.com = 192.168.0.10
    UDP Socket created
    UDP RX Thread Start
    Write ECC conn packet

    This is great progress; we know the first packet exchange is likely exchanging EC public keys to establish an ECDH key agreement to encrypt all the other packets.

    If we ignore the packet header (13 bytes from the start) and checksum (2 bytes at the end), we can see the contents of the packets for this potential key exchange are both 32 bytes (256 bits), which would be a valid size for a public key. Even though the client's request has 00 01 at the start, we can assume this is some unimportant data descriptor as it doesn't change value between boots:

    // Client request packet contents:
    
    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
    
    00000000  00 01 D1 C2 B3 41 70 17  75 12 F7 69 25 17 50 4A  .....Ap.u..i%.PJ
    00000010  C5 DD D4 98 06 FE 24 6B  96 FD 56 14 4A 70 7E 51  ......$k..V.Jp~Q
    00000020  55 57                                            UW
    
    // Server response packet contents:
    
    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
     
    00000000  07 A8 02 73 52 42 1F 1F  C1 41 B4 E4 5B D9 A9 9A  ...sRB...A..[...
    00000010  5A DD 0F 94 F1 AB 9E E8  86 C7 99 7E 08 68 52 C5  Z..........~.hR.

    Ok, so what is the HKDF? That is HMAC-based key derivation. It can be used to convert shared secrets computed from Diffie–Hellman into key material suitable for use in encryption. Wow, that makes a lot of sense; it's most likely doing exactly that to derive a key to encrypt and decrypt the other packets.

    To be able to decrypt these packets, we need to understand exactly how the key for encryption is generated. That includes any possible input data as well as configurable options.

    It's safe to assume the ECDH and HKDF functions are used for the packet data, so focusing on the key generation process, I summarize the variables we need to understand:

    • ECDH:

    • HKDF
      • Hashing method

      • Output key size

      • Optional salt

      • Optional info

    The smart device and its cloud server both exchange 256 bits of data during what we assume is the key exchange process. But remember, the smart device firmware also loads the following keys from storage:

    • 256-bit device key pair (private & public)

    • 256-bit cloud server 'root' public key

    • 256-bit cloud server 'signer' public key

    There are a lot of possibilities here, so I take another look at the application in Ghidra. By following the error strings, I located the function which generates this key! I steadily work my way through labeling functions and variables by comparing the assembly to the mbedtls source code. I was able to annotate and simplify it to the following pseudocode:

    int GenerateNetworkKey(uchar *outputKey, uchar *outputRandomBytes)
    {
      // Generate an ECDH key pair
      char privateKey1 [12];
      char publicKey1 [36];
      mbedtls_ecdh_gen_public(
        ecpGroup, 
        privateKey1, 
        publicKey1, 
        (char *)mbedtls_ctr_drbg_random, 
        drbgContext
      );
    
      // Overwrite generated private key?
      mbedtls_mpi_read_binary(privateKey1, (uchar *)(_DAT_3ffb3948 + 0x7c), 1);
    
      // Overwrite generated public key?
      mbedtls_ecp_copy(publicKey1, (char *)(_DAT_3ffb3948 + 0x88));
    
      // Load another public key?
      char publicKey2 [36];
      mbedtls_ecp_copy(publicKey2, (char *)(_DAT_3ffb38cc + 0x88));
      
      // Compute shared secret key using privateKey1 and publicKey 2
      char computedSharedSecret [100];
      uchar binarySharedSecret [35];
      mbedtls_ecdh_compute_shared(
        ecpGroup,
        computedSharedSecret,
        publicKey2,
        privateKey1,
        (char *)mbedtls_ctr_drbg_random,
        drbgContext
      );
      mbedtls_mpi_write_binary(computedSharedSecret, binarySharedSecret, 0x20);
    
      // Generate random bytes
      mbedtls_ctr_drbg_random(globalDrbgContext, outputRandomBytes, 0x20);
    
      // Derive key
      mbedtls_md_info_t *md = mbedtls_md_info_from_type(MBEDTLS_MD_SHA256);
      uchar* deviceSerialNumber = (uchar *)GetDeviceSerialNumber();
      mbedtls_hkdf(
        md, 
        binarySharedSecret, // salt
        0x20,
        outputRandomBytes, // input
        0x20,
        deviceSerialNumber, // info
        9,
        outputKey,
        0x10
      );
    }

    Being able to interpret assembly or even the decompiled code in Ghidra is certainly an acquired skill; I'd like to emphasize this took a while to figure out, with many breaks in between!

    This function does something unusual; here's what we can learn from it:

    • The generated ECDH key pair is discarded and replaced by keys loaded from somewhere else in memory, which is strange. Because the ECDH key pair generation function isn't used elsewhere in the application, it's likely these keys are the files from the firmware storage we saw earlier.

    • The algorithm used for the HKDF is SHA-256.

    • The computed shared secret is used as the HKDF salt.

    • Random bytes are generated as the HKDF input.

    • The device serial number is used as the HKDF info.

    • The HKDF output key size is 0x10 (16 bytes / 128 bits).

    We now have a much better understanding of how the smart device generates the potential encryption key.

    It's useful to keep in mind that their cloud server also has to generate this key, meaning it needs to have all the same input variables to the HKDF.

    Knowing this, we can recap the three dynamic inputs to the HKDF function and understand how the server will also have them:

    • salt - Shared secret: The server must have access to the same private and public keys used for the ECDH shared secret computation or use the public to our private and the private to our public.

    • input - Random bytes: The server must have access to these randomly generated bytes on the smart device; either we send these bytes to the server, or technically, the server could recreate the pseudo RNG method used. However, the generated bytes have the size of 0x20 (32 bytes / 256 bits) which exactly matches the size of the data sent in the key exchange packet, so it's highly likely we're sending it there!

    • info - Device serial number: We already know the device serial number is part of the packet header, so the server easily has access to this value.

    Curious to know what the application did with these randomly generated bytes, I checked what the calling function did with them:

    stack[0] = 0x00;
    stack[1] = 0x01;
    GenerateNetworkKey(&KeyOutput, stack[2]);
    log(2, 2, 'Write ECC conn packet\r\n');
    SendPacket((int)param_1, 2, stack[0], 0x22);

    We can see the random bytes from GenerateNetworkKey are written out to the stack, and better yet, the 00 01 bytes are written to the stack just before it, and then all 0x22 bytes are sent in the packet. That exactly matches the format we saw in the key exchange packet!

    Much progress has been made via static analysis, and the final value we need to calculate the decryption key is the shared secret.

    At this point of reverse engineering, I hadn't reversed the functions as cleanly as shown in this blog post and wanted to try to dynamically obtain keys directly from the device.

    Debugging via JTAG would be the sensible choice here. However, I didn't notice breakout points for these pins on the PCB, and I wanted to avoid soldering directly to the ESP32 pins, so I thought I'd challenge myself to patch the firmware to print it over serial!

    The CapSense service is still disabled, so I thought I'd write a function over that logic to print out the shared secret key and call it right after it was computed!

    So, planning in pseudocode, I'd want to add my function call to the GenerateNetworkKey function. Right after it has generated the key.:

    int GenerateNetworkKey(uchar *outputKey, uchar *outputRandomBytes)
    {
      // ... 
      
      // Add my function call:
      print_key(binarySharedSecret);
    }
    
    // Custom function saved over unused logic:
    void print_key(char *key)
    {
      for (int i = 0; i < 32; i++) {
        log('%2.2x', key[i]);
      }
    }

    While referring to the open_in_newXtensa instruction set architecture manual, I threw together some assembly like this:

    // Original
    400dbf2d  25 4b 6c    call8     GetDeviceSerialNumber
    
    // Patched
    400dbf2d  e5 ff fd    call8     print_key
    
    // print_key:
    400d9f2c  36 41 00    entry     a1, 0x20
    400d9f3b  42 c2 20    addi      a4, a2, 0x20
    400d9f3e  52 a0 02    movi      a5, 0x2
    400d9f41  61 ea db    l32r      a6, PTR_s_%2.2x // '%2.2x'
    400d9f44  d2 02 00    l8ui      a13, a2, 0x0
    400d9f47  60 c6 20    mov       a12, a6
    400d9f4a  50 b5 20    mov       a11, a5
    400d9f4d  50 a5 20    mov       a10, a5
    400d9f50  22 c2 01    addi      a2, a2, 0x1
    400d9f53  25 ed 05    call8     log
    400d9f56  27 94 ea    bne       a4, a2, LAB_400d9f44
    400d9f59  22 a0 00    movi      a2, 0x0
    400d9f5c  90 00 00    retw
    

    We patch over the GetDeviceSerialNumber function call because this is directly after the generation of the shared secret key, and the pointer to the key is still in the register a2.

    I flashed the modified firmware, booted up the device, and checked the serial output:

    Write ECC conn packet
    e883eaed93c63d2c09cddebce6bb15a7f4cb5cedf00c1d882b8b292796254c9c

    Success! We've printed out the shared secret key!

    I rebooted the device numerous times to see if the key changed, and it remained the same. It is most likely computed using the keys in the firmware storage, but now we have the computed static value, we don't need to reverse the computation process.

    Alright, we now understand the method to derive the decryption key and have all input values; it looks something like this:

    const hkdfOutputKey = hkdf({
      method: 'SHA-256',
      salt: Buffer.from(
        'e883eaed93c63d2c09cddebce6bb15a7f4cb5cedf00c1d882b8b292796254c9c', 'hex'
      ),
      input: randomBytesFromDeviceKeyExchangePacket,
      info: deviceSerialNumber,
      outputKeySize: 0x10,
    });

    To be on the safe side, I wrote another firmware patch to print the key output from the HKDF call and tried recreating the key from captured packets. It works! That confirms we have correctly reverse-engineered the key creation function and are able to replicate the key creation logic in our own application.

    But now we need to find which encryption algorithm is used. I refer back to the function which formats packets and found the call to the encryption function:

    char randomBytes [16];
    
    // Write device serial
    memcpy(0x3ffb3ce0, deviceSerialNumber, 9);
    
    // Generate and write random bytes
    mbedtls_ctr_drbg_random(globalDrbgContext, randomBytes, 0x10)
    memcpy(0x3ffb3ce9, randomBytes, 0x10);
    
    // Write packet data
    memcpy(0x3ffb3cf9, data, dataSize);
    
    // Pad with random bytes
    mbedtls_ctr_drbg_random(globalDrbgContext dataSize + 0x3ffb3cf9, paddingSize);
    
    // Run encryption on the data + padding
    FUN_400e2368(0x3ffb3cf9, dataSize + paddingSize, &HKDFOutputKey, randomBytes);

    I noticed that after the device serial number is copied to the packet, 16 random bytes are generated and copied directly after it. These bytes are also provided to the encryption function. So, we know they are an input variable to the encryption algorithm.

    We know the key is 128 bits, with another 128 bits of additional random data.

    I looked into the encryption function, which is very clearly crypto-related due to the looping of a bunch of bitwise operations, and noticed a reference to a static block of data.

    This data started with 63 7C 77 7B F2 6B 6F C5, a search in the mbedtls source code revealed it is the open_in_newAES Forward S-Box!

    I decided to jump straight into attempting AES decryption on the captured packets and successfully decrypted a packet!! 🎉

    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
     
    00000000  00 00 65 00 53 00 82 A4  74 79 70 65 AF 6D 69 72  ..e.S...type.mir
    00000010  72 6F 72 5F 64 61 74 61  5F 67 65 74 A4 64 61 74  ror_data_get.dat
    00000020  61 85 A9 74 69 6D 65 73  74 61 6D 70 CF 00 00 01  a..timestamp....
    00000030  8D 18 05 31 FB A9 46 41  4E 5F 53 50 45 45 44 00  ...1..FAN_SPEED.
    00000040  A5 42 4F 4F 53 54 C2 A7  46 49 4C 54 45 52 31 00  .BOOST..FILTER1.
    00000050  A7 46 49 4C 54 45 52 32  00 07 07 07 07 07 07 07  .FILTER2........

    The algorithm was AES-128-CBC and the additional random data was used as the IV (Initialization vector).

    We can now create an MITM (man in the middle) attack that does not require any firmware patching. This is because the private key of the device is now known, the key derivation logic has been reverse-engineered, and any required dynamic data is exposed over the insecure network.

    If it correctly implemented ECDH, the smart device would have a unique private key that isn't exposed, and our easiest route of attack would be to generate our own server key pair and do any firmware modifications so the device accepts our custom public key.

    But because of their custom protocol's design, we can write an MITM script that can intercept, decrypt, and potentially modify network communications without any modifications to the smart device. So, that's what we're going to do!

    The main aim now is to decrypt and log as much data as possible; then, we can reference that to write a local server endpoint that entirely replaces their cloud server.

    I hack together a quick Node.js script to do this:

    const dns = require('dns');
    const udp = require('dgram');
    const crypto = require('crypto');
    const hkdf = require('futoin-hkdf');
    const fs = require('fs');
    
    // Key Gen
    
    const sharedSecretKey = Buffer.from(
      'e883eaed93c63d2c09cddebce6bb15a7f4cb5cedf00c1d882b8b292796254c9c',
      'hex'
    );
    
    function calculateAesKey(deviceSerialNumber, inputData) {
      return hkdf(inputData, 16, {
        salt: sharedSecretKey,
        info: deviceSerialNumber,
        hash: 'SHA-256',
      });
    }
    
    // Packet Parsing
    
    let latestAesKey = null;
    let packetCounter = 0;
    const proxyLogDir = path.join(__dirname, 'decrypted-packets');
    
    function decryptPacket(data, deviceSerial) {
      const IV = data.subarray(0xd, 0x1d);
      const encryptedBuffer = data.subarray(0x1d, data.length - 2);
      const decipher = crypto.createDecipheriv(
        'aes-128-cbc',
        latestAesKey,
        parsed.IV
      );
      decipher.setAutoPadding(false);
      return Buffer.concat([decipher.update(encryptedBuffer), decipher.final()]);
    }
    
    function logPacket(data) {
      const messageId = data.readUInt8(3);
      const deviceSerial = data.subarray(4, 4 + 9);
    
      if (messageId === 2) {
        // Key Exchange
        const randomlyGeneratedBytes = data.subarray(0xf, data.length - 2);
        latestAesKey = calculateAesKey(deviceSerial, randomlyGeneratedBytes);
      } else {
        // Encrypted Packets
        fs.writeFileSync(
          path.join(proxyLogDir, `packet-${id}.bin`),
          decryptPacket(data)
        );
      }
    }
    
    // Networking
    
    dns.setServers(['1.1.1.1', '[2606:4700:4700::1111]']);
    
    const PORT = 41014;
    const cloudIp = dns.resolve4('smartdeviceep.---.com')[0];
    const cloud = udp.createSocket('udp4');
    let latestClientIp = null;
    let latestClientPort = null;
    
    cloud.on('message', function (data, info) {
      logPacket(data);
      local.send(data, latestClientIp, latestClientPort);
    });
    
    const local = udp.createSocket('udp4');
    local.bind(PORT);
    
    local.on('message', function (data, info) {
      logPacket(data);
      latestClientIp = info.address;
      latestClientPort = info.port;
      cloud.send(data, PORT, cloudIp);
    });
    

    Here, we combine all of our research to implement an MITM attack.

    Just like when we first captured packets, we configure Node.js to use Cloudflare's DNS resolver to bypass our local DNS server.

    We create a UDP socket locally to accept packets from the smart device and also a socket to communicate with the cloud server.

    • Anything we receive from the smart device, we log and send to the cloud server

    • Anything we receive from the cloud server, we log and send to the smart device

    We treat packets with the messageId of 2 to be the key exchange packet where the smart device send the random bytes to the server, we then calculate the AES key used to decrypt future packets.

    While capturing, I used their mobile app to remotely control the smart device so we could reference the logs and replicate the logic ourselves.

    We now have the decrypted packet data, but the data is still in a serialized binary format:

    Hex View  00 01 02 03 04 05 06 07  08 09 0A 0B 0C 0D 0E 0F
     
    00000000  01 00 64 00 29 00 82 A4  74 79 70 65 A7 63 6F 6E  ..d.)...type.con
    00000010  6E 65 63 74 A8 66 69 72  6D 77 61 72 65 C4 10 00  nect.firmware...
    00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 83  ................

    My mind was deep in the world of reverse engineering, and I managed to reverse the structure for all packets and hack together some JavaScript to convert the data to and from JSON.

    The header was quite simple, again just some IDs and length, but in little endianness:

    • 01 00 - packet ID

    • 64 00 - transaction ID

    • 29 00 - serialized data length

    And with some tinkering, I figured out the serialized format:

    • 82 - Map

    • A4 - String of 4 length

    • A7 - String of 7 length

    This was fun to reverse because the typing was more described in bits, but it's clearly readable from the bytes for these simple cases.

    Looking back on this, I'm not sure why I didn't look for an existing solution that matches this serialized binary data format; I was expecting everything to be a custom solution at this point. But having a search now, this is just open_in_newMessagePack, so I guess I just reverse-engineered and wrote a partial msgpack implementation 😆

    Switching over to a popular implementation, we can see the data is easily unpacked into JSON:

    const { unpack, pack } = require('msgpackr');
    
    const packedData = Buffer.from(
      '82A474797065A7636F6E6E656374A86669726D77617265C41000000000000000000000000000000000', 
      'hex'
    );
    
    const unpackedData = unpack(packedData);
    
    // unpackedData:
    {
      type: 'connect',
      firmware: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00>
    }

    In preparation for writing a custom local server for the smart device, let's take a look at the unpacked network logs we've captured:

    🔑 Key Exchange Packet:

    The smart device sends random bytes to the server to be used in the HKDF.

    // Smart Device Request
    D1C2B34170177512F7692517504AC5DDD49806FE246B96FD56144A707E515557
    
    // Server Response
    00000000000000000000000000000000

    ↙️ Get Device State:

    The smart device fetches its initial state from the server when it boots.

    // Smart Device Request
    { type: 'mirror_data_get' }
    
    // Server Response
    {
      type: 'mirror_data_get',
      data: {
        timestamp: 1705505010171n,
        FAN_SPEED: 0,
        BOOST: false,
        FILTER1: 0,
        FILTER2: 0
      }
    }

    🔗 On Connect:

    When the smart device connects to the server, it sends its current firmware UUID. The server responds with the potential UUID for a firmware or config update that could be downloaded.

    // Smart Device Request
    {
      type: 'connect',
      firmware: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00>
    }
    
    // Server Response
    {
      type: 'connect',
      server_time: 1706098993961n,
      firmware: <Buffer ab cd ef ab cd ef ab cd ef ab cd ef ab cd ef ab>,
      config: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00>,
      calibration: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00>,
      conditioning: <Buffer 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00>,
      server_address: 'smartdeviceep.---.com',
      server_port: 41014,
      rtc_sync: { ss: 13, mm: 23, hh: 12, DD: 24, MM: 1, YYYY: 2024, D: 3 }
    }

    ⤵️ Server Updates Smart Device State:

    When the server wants to update the smart device's state, it will send a packet like this.

    // Server Request
    { 
      type: 'mirror_data',
      data: {
        FAN_SPEED: 1,
        BOOST: false
      }
    }

    ⤴️ Smart Device Updates Server State:

    The smart device sends its latest state to the server whenever it changes.

    // Smart Device Request
    {
      type: 'mirror_data',
      data: {
        timestamp: 1706105072142n,
        FAN_SPEED: 1,
        BOOST: false,
        FILTER1: 0,
        FILTER2: 0
      }
    }
    
    // Server Response
    { type: 'mirror_data' }

    🛜 Keep Alive:

    The smart device frequently sends a keep-alive packet to the server so the server can potentially use the open connection to send state updates.

    // Smart Device Request
    {
      type: 'keep_alive',
      stats: {
        rssi: -127n,
        rtt: 684,
        pkt_drop: 1,
        con_count: 1,
        boot_str: '',
        uptime: 100080
      }
    }
    
    // Server Response
    { type: 'keep_alive' }

    We're going to need a way to connect Home Assistant to our custom server, which handles the smart device networking. open_in_newMQTT is ideal for this; it's a protocol designed for IoT messaging and can be easily configured within Home Assistant. For this, I set up the open_in_newMosquitto addon for Home Assistant, an open-source MQTT broker that connects everything together.

    The connection chain will look like this:

    Home Assistant <--> MQTT Broker <--> Custom Server <--> Smart Device.

    The custom server logic in pseudocode would look something like this:

    function HandleSmartDeviceRequest(req) {
      switch (req.type) {
        case 'mirror_data_get': {
          // Device wants state, send latest MQTT state or default fallback
          device.send({ fan_speed: mqtt.get('fan_speed') || 0 });
          return;
        }
        case 'mirror_data': {
          // Device state has changed, publish and retain in MQTT broker
          mqtt.publish('fan_speed', req.fan_speed, { retain: true });
          return;
        }
      }
    }
    
    function HandleMQTTMessage(topic, msg) {
      switch (topic) {
        case 'set_fan_speed': {
          // MQTT wants to change state, forward to device
          device.send({ fan_speed: msg.fan_speed });
          return;
        }
      }
    }

    This logic seems quite minimal but is carefully designed. The latest state is retained in the MQTT broker. However, the source of truth for state updates is always the device, meaning the state will never update in the MQTT broker unless the device updates it via the custom server. This covers a couple of edge cases:

    • If the state update was unsuccessful, we should not display the state as updated.

    • The state update should be reflected via the MQTT broker if the smart device was updated via its physical control panel.

    The three main cases we are supporting here are:

    • When the smart device boots and initially connects to the custom server, it requests the latest state; we can attempt to obtain this from the MQTT broker's retained value or fall back to a default state.

    • When Home Assistant wants to update the state, it will send a command to the MQTT broker. We can subscribe to this command topic from the custom server and forward the request to the smart device.

    • When the smart device's state changes for any reason, it sends the mirror_data packet to update the server state; we send this value to the MQTT broker to update the state and tell it to retain the data as the latest value.

    I run this custom server alongside Mosquitto and Home Assistant on my small home automation server. Then configured my Pi-hole local DNS to resolve the cloud server's domain to my custom server.

    The final step in this process is configuring Home Assistant to map the MQTT topics to a device type. For my air purifier, the closest integration was an open_in_newMQTT Fan; in my configuration.yaml I added something like this:

    mqtt:
      fan:
        - name: 'Air Purifier'
          unique_id: 'air_purifier.main'
          state_topic: 'air_purifier/on/state'
          command_topic: 'air_purifier/on/set'
          payload_on: 'true'
          payload_off: 'false'
          percentage_state_topic: 'air_purifier/speed/state'
          percentage_command_topic: 'air_purifier/speed/set'
          speed_range_min: 1
          speed_range_max: 4

    I added topics to control the fan speed and turn the device on and off.

    Everything works! I've been running this for a couple of weeks now, and it has worked fine without any issues! I've even set up a little automation, so if my separate air monitor's PM2.5 or VOC level gets too high, it boosts the air purifier for a while!

    For better or worse, the engineers behind the service decided not to implement a standard protocol like DTLS. They created a custom solution which introduced some downsides to the system:

    • We're not certain if each device has its own unique private key, but whether it does or not, both have downsides:
      • If all devices share the same firmware private key, the attacker needs to reverse engineer just a single device to MITM attack any other devices.

      • However, if every device has its own unique private key, the server must keep a data store mapping device serial numbers to the key of each device. So, In the case of any data loss, the server would entirely lose the ability to respond to any device communications; that is a scary thought for the business. Unless there is an insecure network fallback in place, which is equally alarming and time-consuming to develop

    • Because the firmware contains a private key that is static, an attacker needs a single firmware dump to obtain the key and perform an MITM attack. Whereas, if an EC private key was instead generated at runtime, write access would be required in order to patch the server public key or application firmware, which could be protected by other means.

    Also, the mobile app has a 1-star review on the app store. It makes me wonder if there is a correlation between the unexpectedly custom technical implementation and the abnormally poor end-user app experience. Building a custom system is far more than just the initial development; systems need support, and bugs need fixing.

    Overall, it wasn't a bad implementation from a security perspective; you'd still need physical access to attack the device; there are pros and cons to everything and variables that aren't visible from our perspective.

    The custom implementation increased the obscurity of network communication. However, open_in_newSecurity through obscurity is simply a short-term win. While it may deter generic attacks on standard technical implementations. In the bigger picture, it's just an annoying yet passable hoop for an attacker to jump through.

    I've had a few conversations recently about why engineers build from the ground up vs. using proven standards. And that's a very interesting topic; I'll save that for another post!

    What a crazy journey that was!

    I'd like to emphasize that the reverse-engineering process was not as smooth as it may seem from this post; I've done my best to format everything to be best read by you. But in reality, I was often in the dark, unsure if the next thing would work or not, and juggling many tasks and theories, iteratively making progress in multiple places to test my assumptions ASAP.

    I tried some things that hit dead-ends and weren't worth dedicated sections in this post:

    • I tried running the firmware in open_in_newEspressif's fork of QEMU, patched out the CapSense service, and loaded virtual e-fuses to match the MAC address from the firmware, all to find out it doesn't support WiFi emulation. It was fun to see it booting virtually, though!
    • I also tried flashing a different serial number, device key, and certificates to see if that affected anything before I got around to fully reversing the application logic. I didn't get much from this. Turns out this likely would have just affected the computed shared secret used for the HKDF salt, which we dumped anyway.

    I've certainly sharpened a variety of skills from this project. I'm also proud I achieved my goal of adding this device to Home Assistant! The moment I managed to successfully decrypt the first packet was great; everything just clicked into place.

    I'm still curious to explore creating an open-source project to de-cloud and debug smart home products; I've learned much more about the technical aspects of achieving that.

    Thanks for reading! I hope you found some value in this post. I put a massive amount of effort into creating it, probably more than I did actually doing the project itself. It would be amazing to receive feedback on the format!

    I'd also really appreciate it if you could help share the post.

    You can drop a follow on open_in_newX to stay updated with what I'm doing.

    If you found it helpful and would like to support my content creation, you can open_in_newBuy Me a Coffee! Your support helps me continue creating content and sharing my passion for reverse engineering!

    Take it easy 👋




    All Comments: [-] | anchor

    paranoidrobot(10000) 3 days ago [-]

    As far as I can tell it doesn't mention which air purifier.

    Knowing that might help influence purchasing decisions for those also interested in a 'sleek' air purifier that contains an ESP32.

    rx_tx(3271) 3 days ago [-]

    I suspect hiding the manufacturer/model was very much on purpose, they blurred the markings on the PCB and hid the domain name for the manufacturer's API calls (and in the console logs as well).

    deanc(10000) 3 days ago [-]

    I highly suspect that this is a Levoit air purifier. I recently purchased a Levoit 300S and had the same issue. The VeSync app connects the device directly over the internet and you can control it via an API on their domain with a username and password. Your air purifier is then a backdoor to your home network. I just put it on a guest network now rather than go through this.

    rickdeckard(10000) 3 days ago [-]

    I guess that is on purpose. After all the article could easily be rewritten as a successful attack on the manufacturer infra using a private key extracted from a device.

    So the Authors Home Assistant Integration could be at risk to stop working quite quickly...

    hxii(10000) 3 days ago [-]

    I've got a power station (Ugreen) with an ESP32 that I'd also love to connect to HomeAssistant, instead of their app which provides me no benefit.

    This is definitely beyond my capabilities at this point but it could be interesting to go through a similar process once mentally ready.

    walterbell(23) 3 days ago [-]

    Imagine a mental price tag alongside IoT cybersecurity label, https://arstechnica.com/information-technology/2023/07/the-c...

    NoMoreNicksLeft(10000) 3 days ago [-]

    It's not. Get a usb-serial cable. Open it up, attach that, load Tasmota firmware. Takes a little bit of fiddling to figure out which gpio goes to which relay sometimes, but once you've gotten the pattern you can upload it so others don't have to figure it out next time.

    walterbell(23) 3 days ago [-]

    For vendors of ESP32-based IoT devices:

      Give a man a fish, and you feed him for a day.
    
    > My intentions were solely to upgrade the smart device I've purchased to integrate with my smart home system. Doing so does not affect any other instances of this product or its cloud services.. sensitive product-specific data, such as private keys, domains, or API endpoints, have been obfuscated or redacted.

    For owners of ESP32-based IoT devices:

      Teach a man to fish, and you feed him for a lifetime.
    
    > Creating an open-source project to de-cloud and debug smart home products; I've learned much more about the technical aspects.. I put a massive amount of effort into creating [this post].. probably more than.. the project itself. It would be amazing to receive feedback on the format!

    blog author: https://x.com/jmswrnr

    brettermeier(10000) 3 days ago [-]

    Doesn't he have Bluesky? I refuse to use twitter.

    Edit: whoever downvotes this can rot in hell :D

    simgt(10000) 3 days ago [-]

    Very nice article!

    Every time I was part of a team designing IoT devices, there would be a slightly more security-focused engineer who would manage to have some level of protection for the boot. I'm surprised there was no resistance here to dump and reflash the firmware. Why would they not even bother encrypting the flash? How common is that?

    It would have been nice to give the product name.

    walterbell(23) 3 days ago [-]
    > I'm surprised there was no resistance here to dump and reflash the firmware.

    Some devices are purchased because their firmware is easy to replace. Upcoming regulations on IoT cybersecurity might make it harder to sell such devices. ESP32-based devices have been successful in several niches, https://hn.algolia.com/?query=esp32

    Oxodao(10000) 3 days ago [-]

    For initial RE, I'd highly suggest jadx-gui over dex2jar+jd-gui it has a lot of nice feature

    grishka(10000) 3 days ago [-]

    Not only that, jadx operates on dex files directly and the conversion from dex to regular JVM classes can sometimes be lossy. So you tend to get better decompilation with jadx vs dex2jar and any regular Java decompiler.

    jqpabc123(10000) 3 days ago [-]

    The ultimate long term solution --- refuse to buy any home product that defies local control.

    If a wifi password is required to make full use of the device, I will return it.

    If some users want to sacrifice security and privacy for 'convenience', that's on them. But if you want to sell me the product, at least provide the option to decline without loss of functionality. Otherwise, no sale.

    As an example, I refuse to buy a doorbell camera that doesn't support RTSP.

    123pie123(10000) 3 days ago [-]

    I've been doing this for years, but it's hard work trying to get information on how bad these devices could spy on you - before you buy them

    I just guess now and make sure the company has a good returns policy

    mrheosuper(10000) 3 days ago [-]

    > If a wifi password is required to make full use of the device, I will return it.

    By that logic, you will not buy any 'smart' devices

    A camera doorbell, in your example, need wifi password so that it can stream video.

    A smart lightbuld need wifi connection to change brightness or color.

    Without wifi connection, it will lose a part of functionality

    dzikimarian(10000) 3 days ago [-]

    Basically full local home assistant support or I'm not buying. Some products start to have badge on the box.

    fidotron(2952) 3 days ago [-]

    > As an example, I refuse to buy a doorbell camera that doesn't support RTSP.

    This is a good example of conflicting security requirements.

    Not wanting the video to go to the cloud is fine, but most cameras with RTSP enabled allow any other device on the network to trivially get the camera stream, and sometimes also control the camera. This is why some camera companies require you jump through hoops to unlock RTSP - I don't like it but I can see why they do it.

    This is one reason I've come to believe it's necessary that every device must see a totally different network universe from every other, able only to see the local controller server. (This is how I ended up playing with on AP video relays in my profile, as an effort to see what's involved). Things like multicast discovery is cool, but an absolute privacy and security disaster area.

    mzajc(10000) 3 days ago [-]

    > If a wifi password is required to make full use of the device, I will return it.

    This is one of my favourite uses of OpenWRT, or any other firmware that gives you proper control over the router - for WiFi-networked IoT devices, I set up a separate wireless network with no WAN/LAN access and client isolation. I can connect to the device, but it can't connect to WAN, any other devices on the IoT network, or my LAN.

    Of course this won't work for cloud-tethered devices, but many will expose their functionality directly over network.

    fcpk(10000) 3 days ago [-]

    One overlooked variable here is that price is a huge consideration factor into IoT acceptance. Convenience is one thing, but having to pay 10x more is another.

    China(up to now, now with tariffs stuff... who knows) has been exceptional in that they produced IoT devices for many use cases at very reasonnable prices. Want a water leak detector that's zigbee connected? that's only 5 bucks. if I want to buy one from a 'western' company(still produced in china) it instantly gets marketed to a premium market and costs 10x or 20x more.

    They have no incentive to make their products work in pure local when companies like Tuya provide SDKs, chips, and frameworks at a low price and easy entry barrier. But of course that locks into their ecosystem.

    It's possible that a company making an open toolkit with easy integration for esp32/etc could gain enough traction to get many devices to use that, but at this point it's unlikely.

    As for HA... I love it and run it locally, but it's not for the faint of heart. And spending dozens of hours modifying devices and configuration to get everything running is a priviledge few have the skills, time and knowledge to do.

    As always... this is a case of 'the only incentive is money and hence the system will lock itself'.

    Wouldn't it be great if the EU could force these companies to surrender local control?

    VladVladikoff(10000) 3 days ago [-]

    Can you tell me which one you arrived on in your research? I would like a local controlled doorbell camera

    systemtest(3547) 3 days ago [-]

    The result of this process is that the air purifier boosts when the air quality inside drops.

    I feel like that is something that doesn't or at least shouldn't require a string of IoT devices, apps, wireless communication and hubs. Why not leave all of that out and just attach an air quality sensor to the air purifier and a small LCD to adjust the settings?

    The light in my hallway turns on automatically when I walk past. No cloud, no HomeAssist, no WiFi, no Zigbee, no apps, no batteries to change. Just a motion sensor hardwired to the light fixture. Hasn't failed me once in the past ten years. Works great even if the network goes down.

    cheschire(3350) 3 days ago [-]

    While the author gave a contrived need of controlling this device like the others, they may be simplifying their motivations for the purpose of focusing the article.

    homeassistant allows you to perform follow on work or even long term analysis. For example the author could use the information to decide what times of day during which seasons are best for airing out the house (more popular in Europe than North America), or if air quality dips happen to coincide with their leaky clothes dryer spewing fibers and soap particles out into the home, or when they cook on their gas range, etc.

    Some people just like to explore and discover. Low threat information is nice these days.

    viraptor(1797) 3 days ago [-]

    > Just a motion sensor hardwired to the light fixture. Hasn't failed me once in the past ten years.

    Funny you mention that, because I'm putting in smart movement sensors to make sure the lights don't come up at night in the garage where the dog sleeps, but also so that I can force the light on for a long period, when I'm doing some work in the same area. People have different needs/expectations.

    turtlebits(10000) 3 days ago [-]

    AQ sensors add cost. I've also never seen a reliable AQ sensor on a air filter. I have several Coway which go into turbo mode at random times and a couple of others that never go above fan speed 1, even when my dedicated AQ sensor shows elevated PM2.5.

    A dumb device without leds/screens/connectivity that I can control with a smart plug via HA is much easier to deal with.

    lgunsch(10000) 3 days ago [-]

    I've seen a number of ESP32 IoT devices here on HN, and I haven't heard many of them use firmware encryption with an eFuse.

    In this case, it would have been pretty hard to create a certificate if you couldn't read the firmware.

    But, also pretty impressed at the same time. I think this is the first Hacker News article I've read about an ESP32 IoT device which has any encryption at all.

    gh02t(10000) 3 days ago [-]

    Even if they use firmware encryption, the footprint for most of the ESP32 packages is really easy to desolder and replace with a fresh one under your control with basic tools. This option is harder if the ESP32 is speaking some digital protocols to various devices, but having re-brained another air purifier myself they often are just flipping some GPIO lines to signal different components to turn on. Easy in that case to just stare at it for a bit then re-flash or replace and re-flash the ESP32 with your own firmware.

    smjburton(10000) 3 days ago [-]

    > For better or worse, the engineers behind the service decided not to implement a standard protocol like DTLS.

    > We're not certain if each device has its own unique private key, but whether it does or not, both have downsides ... If all devices share the same firmware private key, the attacker needs to reverse engineer just a single device to MITM attack any other devices.

    If anything, this article further highlights that security on these type of devices isn't as rigorous as other consumer electronics like laptops or smartphones. Anyone using smart devices should look into DD-WRT, OpenWrt, Tomato, or Asuswrt-Merlin and isolate these devices in their own VLAN away from the rest of your private network.

    vsviridov(10000) 3 days ago [-]

    If anything, devices of that nature should have local control via Bluetooth LE, and not require some crappy proprietary cloud

    Havoc(10000) 3 days ago [-]

    The recent drama around the unitree robot being effectively a beachhead on network has made me much more wary of connecting anything. Think I'll stick to tasmota and zigbee going forward

    simonjgreen(3494) 3 days ago [-]

    Can you tell me more about the Unitree drama?

    harg(10000) 3 days ago [-]

    I wonder if it would be possible to figure out which pins are connected to what on the device's board and just flash the thing completely with ESPHome and write a custom yaml config for it, rather than adapting the existing vendor firmware.

    ddeck(3677) 3 days ago [-]

    It's certainly possible. Tracing the MCUs IO lines to LEDs/buttons/relays etc on a PCB is usually pretty straightforward.

    I have just finished doing this and writing replacement firmware for the Aqara E1 series of Zigbee switches, after getting fed up with them not supporting basic Zigbee binding functionality.

    stereo(3677) 3 days ago [-]

    On top of that, it looks like it would be relatively easy to spoof the cloud server and make the device believe that there is a firmware update available to then feed it esphome, a bit like the switchbota hack.

    MadnessASAP(10000) 3 days ago [-]

    That would've been my go-to, and has been with most of the other 'smart' devices in my house.

    alright2565(10000) 3 days ago [-]

    It would be really easy. I'm not sure why the author has gone through so much effort to hide what filter this is, but I'm assuming J2 is the blower power output and J3 is touchpad controls.

    I've done exactly this on my own air filter, and it's about 200 lines of config. The hardest part is mapping binary outputs to a percentage:

        switch:
          - platform: gpio
            pin: GPIO21
            id: fan_low
            interlock_wait_time: 250ms
            interlock: &interlock_group [fan_low, fan_mid, fan_high, fan_turbo]
          - platform: gpio
            pin: GPIO25
            id: fan_mid
            interlock_wait_time: 250ms
            interlock: *interlock_group
          - platform: gpio
            pin: GPIO22
            id: fan_high
            interlock_wait_time: 250ms
            interlock: *interlock_group
          - platform: gpio
            pin: GPIO17
            id: fan_turbo
            interlock_wait_time: 250ms
            interlock: *interlock_group
        output:
          - platform: template
            id: fan_speed_output
            type: float
            write_action:
              - lambda: |-
                  id(fan_low).turn_off();
                  id(fan_mid).turn_off();
                  id(fan_high).turn_off();
                  id(fan_turbo).turn_off();
                  auto light = ((AddressableLight*)id(status_light).get_output());
                  for (int i = 6; i <= 9; i++) {
                    light->get(i).set(Color::BLACK);
                  }
                  if (state < 0.24) {
                  } else if (state < 0.26) {
                    id(fan_low).turn_on();
                    light->get(6).set(Color(255,0,0,0));
                  } else if (state < 0.51) {
                    id(fan_mid).turn_on();
                    light->get(7).set(Color(255,0,0,0));
                  } else if (state < 0.76) {
                    id(fan_high).turn_on();
                    light->get(8).set(Color(255,0,0,0));
                  } else {
                    id(fan_turbo).turn_on();
                    light->get(9).set(Color(255,0,0,0));
                  }
                  light->schedule_show();
        fan:
          - platform: speed
            name: 'Filter Speed'
            output: fan_speed_output
            speed_count: 4
            id: my_fan




    Historical Discussions: OpenAI is building a social network? (April 15, 2025: 313 points)

    (313) OpenAI is building a social network?

    313 points 3 days ago by noleary in 1586th position

    www.theverge.com | Estimated reading time – 2 minutes | comments | anchor

    OpenAI is working on its own X-like social network, according to multiple sources familiar with the matter.

    While the project is still in early stages, we're told there's an internal prototype focused on ChatGPT's image generation that has a social feed. CEO Sam Altman has been privately asking outsiders for feedback about the project, our sources say. It's unclear if OpenAI's plan is to release the social network as a separate app or integrate it into ChatGPT, which became the most downloaded app globally last month. An OpenAI spokesperson didn't respond in time for publication.

    Launching a social network in or around ChatGPT would likely increase Altman's already-bitter rivalry with Elon Musk. In February, after Musk made an unsolicited offer to purchase OpenAI for $97.4 billion, Altman responded: "no thank you but we will buy twitter for $9.74 billion if you want."

    Entering the social media market also puts OpenAI on more of a collision course with Meta, which we're told is planning to add a social feed to its coming standalone app for its AI assistant. When reports of Meta building a rival to the ChatGPT app first surfaced a couple of months ago, Altman shot back on X again by saying, "ok fine maybe we'll do a social app."

    A social app would also give OpenAI its own unique, real-time data that X and Meta already have to help train their AI models. Musk's Grok surfaces content from X in its results (Musk recently went so far as to merge X and xAI into the same company), while Meta trains Llama on its vast trove of user data.

    One idea behind the OpenAI social prototype is to have AI help people share better content. "The Grok integration with X has made everyone jealous," says someone working at another big AI lab. "Especially how people create viral tweets by getting it to say something stupid."

    OpenAI has a lot going on, of course, and it's unclear if its early-stage social media project will ever see the light of day. But its existence inside OpenAI shows how the company is thinking about expansion at a time when expectations for its future growth are sky high.




    All Comments: [-] | anchor

    paride5745(3590) 2 days ago [-]

    It makes no sense to build a social network nowadays.

    With Mastodon and Bluesky around, users have free options. Plus X and Threads, and you can see how the market is more than saturated.

    IMHO they should look into close collaboration/minority stake with Bluesky or Reddit instead. You have a huge pool of users already, without the need to build it up from the ground up from scratch.

    Heck, OpenAI probably has enough money to just buy Reddit if they want.

    b1n(10000) 2 days ago [-]

    Also, what is their USP? 'Join our social network so we can train our models on your data!'

    seafoamteal(10000) 2 days ago [-]

    I don't know about Reddit, but Bluesky would never in a million years partner themselves publicly with OpenAI. I can't comment on the opinions of the team themselves because I just don't know, but the users would revolt. Loudly.

    sharathnarayan(10000) 2 days ago [-]

    May be they need the social media data to improve their models? X and Meta have an edge here

    rvnx(837) 2 days ago [-]

    Data quality on social networks like Twitter/Meta, is very low compared to what you see in Wikipedia or Reddit

    antirez(1163) 2 days ago [-]

    Isn't Gemini 2.5 the proof you don't need social network alike data for training?

    dktp(10000) 2 days ago [-]

    Google has a deal with Reddit to scrape its content for training AI. It also has Youtube

    anentropic(10000) 2 days ago [-]

    > "The Grok integration with X has made everyone jealous," says someone working at another big AI lab. "Especially how people create viral tweets by getting it to say something stupid."

    It's awesome to see the amazing value for society being created by big tech these days.

    sph(683) 2 days ago [-]

    To think that even a year ago the idea of Instagram-style social media where all posts are openly AI-generated sounded very dystopian, now I can clearly so it is something people would pay for and HN people would gladly build. I wasn't always a Luddite, but damn they made me one.

    HPsquared(10000) 2 days ago [-]

    Are you not entertained?

    xyst(3582) 2 days ago [-]

    And at the expense of consuming massive amounts of energy and depleting our resources—-water, energy—-at an alarming rate.

    zombot(10000) 2 days ago [-]

    But "have AI help people share better content" is so indispensable! How could humanity ever survive without that?

    Even better, soon none of us will have to use social media at all, our AI bots will do it for us. Then we will finally find peace.

    kccqzy(2074) 2 days ago [-]

    In George Orwell's 1984, there is a machine called the versificator that generates music and literature without any human intervention, presumably for the 'entertainment' of the proletarians.

    kookamamie(10000) 2 days ago [-]

    It's also very dangerous, I think. Grok is used on X to arbitrating ground-truth for topics I think it has no chance assessing.

    thih9(2817) 2 days ago [-]

    I don't use X/Twitter - does anyone have an example of a viral tweet like this?

    moogly(10000) 2 days ago [-]

    I guess YTMND.com would've blown their mind if they had been alive and conscious 20 years ago.

    tempodox(818) 2 days ago [-]

    Each time I think I've seen dystopia and the pinnacle of stupidity someone finds a new way to top it. Either that's an amazing superpower, or I'm infected with incurable optimism.

    dkkergoog(10000) 2 days ago [-]

    Do you think there is any value by sending rockets to space?

    jrflowers(10000) 2 days ago [-]

    When your definition of "everyone" is like two, three guys tops

    Duanemclemore(10000) 2 days ago [-]

    I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.

    But I really don't see why anyone would even use an open ai 'social network' in the first place.

    It does allow one thing for open ai. Other than training data which admittedly will probably be pretty low quality. It is a natural venue for ad sales.

    Duanemclemore(10000) 2 days ago [-]

    Oh I get one thing - other than ads. So the idea of an LLM filter to algorithmically tailor your own consumption has some utility.

    The logical application would be an existing social network -using- chat gpt to do this.

    But all the existing ones have their own models, so if they can't plug in to an existing one like goooooogle did to yahoo in the olden days, they have to start their own.

    That makes a certain amount of (backward) sense for them. I don't think it'll work. But there's some logic if you're looking from -their- worldview.

    SecretDreams(10000) 2 days ago [-]

    Social media is a plague, including LinkedIn. Anything that lets you follow others and/or erodes your anonymity is just different degrees of cancer waiting to happen.

    The best I ever enjoyed the internet was the sweet spot between dial up and DSL where I was gaming in text based/turn based games, talking on forums, and chatting using IRC.

    bufferoverflow(3152) 2 days ago [-]

    LOL, you're on a social network right now. HN is one. Yeah, it's semi-anonymous, but there are many users with known names here.

    interludead(10000) 2 days ago [-]

    Stepping away from social media can feel like getting your brain back

    timeon(10000) 2 days ago [-]

    > I haven't been happier online in the last 10 years than after I stopped checking social media. And in that miserable time it wasn't even a naked beg for training data like this.

    Meta/Twitter/etc. are drug dealers.

    > But I really don't see why anyone would even use an open ai 'social network' in the first place.

    I really don't see why anyone would even use Heroin yet they do.

    throw_m239339(3625) 2 days ago [-]

    What would be the point? Why would it even need real members?

    paxys(10000) 2 days ago [-]

    Ads

    lukev(10000) 2 days ago [-]

    This kind of news should be a death-knell for OpenAI.

    If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

    pyfon(10000) 2 days ago [-]

    It is a Threads. How is that doing?

    Nuzzerino(10000) 2 days ago [-]

    > If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

    I'm not a big fan of OpenAI but this seems a little unfair. They have (or at least had) a pretty kick ass product. Great brand value too.

    Death-knell? Maybe... but I wouldn't read into it. I'd be looking more at their key employees leaving. That's what kills companies.

    robotresearcher(10000) 2 days ago [-]

    AGI is a technology or a feature, not a product. ChatGPT is a product. They need some more products to pay for one of the most expensive technologies ever (to not be delivered yet).

    parhamn(10000) 2 days ago [-]

    There could be too-many-cooks in the AI research part of their work.

    Also, I don't think Sama thinks like a typical large org managers. OpenAI has enough money to have all sorts of products/labs that are startup like. No reason to standby waiting for the research work.

    make3(10000) 2 days ago [-]

    this might just be a way to generate data

    ChuckMcM(700) 2 days ago [-]

    Alternative is that OpenAI is being quickly locked out of sources of human interactions because of competition, one way to 'fix' that is build you're own meadow for data cows.

    xAI isn't allowing people to use the Twitter feed to train AI

    Google is keeping it's properties for Gemini

    Microsoft, who presumably could let OpenAI use it's data fields appears (publicly at least) to be in a love/hate relationship with OpenAI these days.

    So you plant a meadow of tasty human interaction morsels to get humans to sit around and munch on them while you hook up your milking machine to their data teats and start sucking data.

    saltysalt(3476) 2 days ago [-]

    Indeed! Ultimately, all online business models end at ad click revenue.

    westoncb(3004) 2 days ago [-]

    I think it might just be about distribution. Grok gets a lot of interesting opportunities for it over X, then throw in the way people reacted to new 4o image gen capabilities.

    ben_w(10000) 2 days ago [-]

    OpenAI's idea of 'shortly' offering AGI is 'thousands' of days, 2000 days is just under 5.5 years.

    kromem(10000) 2 days ago [-]

    Don't underestimate the importance of multi-user human/AI interactions.

    Right now OAI's synthetic data pipeline is very heavily weighted to 1-on-1 conversations.

    But models are being deployed into multi-user spaces that OAI doesn't have access to.

    If you look at where their products are headed right now, this is very much the right move.

    Expect it to be TikTok style media formats.

    pjc50(1402) 2 days ago [-]

    Someone down below mentioned ads, and I think that might well be the route they're going to try: charging advertisers to influence the output of the AI.

    As for whether it will work, I don't know how they're possibly going to get the 'seed community' which will encourage others to join up. Maybe they're hoping that all the people making slop posts on other social networks want to cut out the middleman and have communities of people who actually enjoy that. As always, the sfw/nsfw censorship line will be an important definer, and I can't imagine them choosing NSFW.

    weatherlite(10000) 2 days ago [-]

    > If you've built your value on promising imminent AGI then this sort of thing is purely a distraction, and you wouldn't even be considering it... unless you knew you weren't about to shortly offer AGI.

    Or even if you did come up with AGI, so would everyone else. Gemini is arguably better than ChatGPT now.

    sevensor(10000) 2 days ago [-]

    Adding social media to your thing is so 2018. Is the next big thing really just a warmed over version of the last big thing? Is sama just completely out of ideas to save his money-burner?

    jacobsenscott(10000) 2 days ago [-]

    Everything devolves to ad sales. Do you know the minute details about their lives people type into chat gpt prompts? It's a gold mine for ads.

    NewUser76312(10000) 2 days ago [-]

    I think it was a strategic mistake for Sam et al to talk about 'AGI'.

    You don't need some mythical AI to be a great company. You need great products, which OpenAI has, and they keep improving them.

    Now they've hamstrung themselves into this AGI nonsense to try and entice investors further, I guess.

    jug(10000) 2 days ago [-]

    AI as we know it (GPT based LLM's) have peaked. OpenAI noticed this sometime autumn last year when would-be GPT-5 was unimpressive despite the huge size. I still think ChatGPT 4.5 was GPT-5, just rebranded to set expectations.

    Google Gemini 2.5 Pro was remarkably good and I'm not sure how they did it. It's like an elite athlete doing a jump forward despite harsh competition. They probably have excellent training methodology and data quality.

    DeepSeek made huge inroads in affordability...

    But even with those, intelligence itself is seeing diminishing returns while training costs are not.

    So OpenAI _needs_ to diversify - somehow. If they rely on intelligence alone, then they're toast. So they can't.

    9rx(10000) 2 days ago [-]

    On the other hand, if you knew AGI was on the near horizon, you'd know that AGI will want to have friends to remain happy. You can give AGI a physical form so it can walk down to the bar – or you can, much more simply, give it an online social network.

    bhouston(2119) 3 days ago [-]

    I've always thought that the social networks like X and BlueSky are sort of like the distributed consciousness of society. It is what society, as a whole / in aggregate, is currently thinking about and knowing its ebbs and flows and what it responds to are important if you want to have up to date AI.

    So yeah, AI integrated with a popular social network is valuable.

    ahartmetz(10000) 3 days ago [-]

    Social networks tend to reflect the character of their founders. Do you really want to see what Sam Altman can do?

    chazeon(2625) 3 days ago [-]

    I think a social network is not necessarily a timeline-based product, but an LLM-native/enabled group chat can probably be a very interesting product. Remember, ChatGPT itself is already a chat.

    sho_hn(10000) 3 days ago [-]

    What's a 'LLM-native/enabled group chat'?

    simple10(10000) 3 days ago [-]

    Yes, this. That's my bet if OpenAI follows through with social features.

    Extend ChatGPT to allow multiple people / friends to interact with the bot and each other. Would be interesting UX challenge if they're able to pull it off. I frequently share chats from other platforms, but typically those platforms don't allow actual collaboration and instead clone the chat for the people I shared.

    sdwr(10000) 3 days ago [-]

    Yeah, the dream is the AI facilitating 'organic' human connection

    candiddevmike(3183) 3 days ago [-]

    What else are they going to spend billions on to turn a profit?

    grg0(10000) 2 days ago [-]

    I don't know, but a weight bench goes under $200 and Sam needs some chest gains fast.

    pontus(10000) 3 days ago [-]

    Is this just a data play? Need more data. Start a social network. Own said data.

    sva_(3428) 3 days ago [-]

    I think its more likely that they're desperate to find a profitable business model.

    guywithahat(10000) 2 days ago [-]

    Honestly I wonder if it's because Altman loves X and is threatened by Grok

    prvc(3000) 3 days ago [-]

    Is making yet another twitter clone really the way to build a path towards super-intelligence? A worthy use of the organization's talent?

    arcatech(10000) 3 days ago [-]

    Collecting millions of people's thoughts and interactions with each other IS probably on the path to better LLMs at least.

    blitzar(10000) 2 days ago [-]

    Another twitter clone will help the decline of human intelligence, the dumber humans are the smarter the Ai appears.

    rglover(3294) 2 days ago [-]

    I speculated a ways back [1] that this was why Elon Musk bought Twitter. Not to 'control the discourse' but to get unfettered access to real, live human thought that you can train an AI against.

    My guess is OpenAI has hit limits with 'produced' content (e.g., books, blog posts, etc) and think they can fill in the gaps in the LLMs ability to 'think' by leveraging raw, unpolished social data (and the social graph).

    [1] https://news.ycombinator.com/item?id=31397703

    godelski(10000) 2 days ago [-]

    But collecting more data is just a naive task. The reason scale works is because of the way we typically scale. By collecting more data, we also tend to collect a wider variety of data and are able to also collect more good quality data. But that has serious limits. You can only do this so much before you become equivalent to the naive scaling method. You can prove this yourself fairly easily. Try to train a model on image classification and take one of your images and permute one pixel at a time. You can get a huge amount of scale out of this but your network won't increase in performance. It is actually likely to decrease.

    chewbacha(3349) 2 days ago [-]

    If that were the case he (Musk) wouldn't have turned it into a Nazi-filled red pilled echo chamber.

    beloch(10000) 3 days ago [-]

    >One idea behind the OpenAI social prototype, we've heard, is to have AI help people share better content. "The Grok integration with X has made everyone jealous," says someone working at another big AI lab. "Especially how people create viral tweets by getting it to say something stupid."

    This would be a decent PR stunt, but would such a platform offer anything of value?

    It might be more valuable to set AI to the task of making the most human social platform out there. Right now, Facebook, TikTok, Reddit, etc. are all rife with bots, spam, and generative AI junk. Finding good content in this sea of noise is becoming increasingly difficult. A social media platform that uses AI to filter out spam, bots, and other AI with the goal of making human content easy to access might really catch on. Set a thief to catch thieves.

    Who are we kidding. It's going to be Will Smith eating spaghetti all the way down.

    add-sub-mul-div(10000) 3 days ago [-]

    No, nothing of value. If you ever want to lose faith in the future of humanity search '@grok' on Twitter and look at all the interactions people have with it. Just total infantilism, people needing tl;drs spoon-fed to them, needing summarization and one-word answers because they don't want to read, arguing with it or whining to Musk if they don't get the answer they want to confirm what they already believe.

    ein0p(10000) 3 days ago [-]

    You also can get Grok to fact check bullshit by tagging @grok and asking it a question about a post. Unfortunately this is not realtime as it can sometimes take up to an hour to respond, but I've found it to be pretty level headed in its responses. I use this feature often.

    dom96(2791) 2 days ago [-]

    Why would AI be any better at filtering out spam than developers have so far been with ML?

    The only way to avoid spam is to actually make a social network for humans, and the only way to do so is to verify each account belongs to a single human. The only way I've found that this can be done is by using passports[0].

    0 - https://onlyhumanhub.com

    TheOtherHobbes(10000) 2 days ago [-]

    An interesting use for AI right now would be using it as a gatekeeping filter, selecting social media for quality based on customisable definitions of quality.

    Using it as a filter instead of a generator would provide information about which content has real social value, which content doesn't, and what the many dimensions of 'value' are.

    The current maximalist 'Use AI to generate as much as possible' trend is the opposite of social intelligence.

    timeon(10000) 2 days ago [-]

    > This would be a decent PR stunt, but would such a platform offer anything of value?

    Like all those start-ups that are on the 'mission' to save the world with an app. Not sure if it is PR for users or VCs.

    ceroxylon(10000) 2 days ago [-]

    Sam's last social media project included users verifying their humanity, so there is hope that something like that slips into the new platform.

    kittikitti(10000) 3 days ago [-]

    I would try to make a platform like Deviantart or Tumblr except OpenAI pays you to make good content that the AI is trained on.

    malux85(10000) 3 days ago [-]

    Nice in theory but don't know how practical it is to actually do.

    How do you define "good"? Theres obvious examples at the extremes but a chasm of ambiguity between them.

    How do you compute value? If an AI takes 200 million images to train, wait let me write that out to get a better sense of the number:

    200,000,000

    Then what is the value of 1 image to it? Is it worth the 3 hours of human labour time put into creating it? Is it worth 1 hour of human labour time? Even at minimum wage? No, right?

    paxys(10000) 2 days ago [-]

    You really think an OpenAI-sponsored social network is going to attract people who create and share original content?

    pjc50(1402) 2 days ago [-]

    How do you stop people gaming this by feeding it the output of other AIs?

    (not to mention defining 'good')

    siva7(10000) 3 days ago [-]

    Sam got a jawline lift, anyone noticed?

    dlivingston(10000) 3 days ago [-]

    Did he? Flipping back and forth between old vs. new photos of him, his facial structure seems roughly the same.

    beeflet(10000) 2 days ago [-]

    Yes, I've been cataloging the mewing and lookmaxxing progress of hundreds of public figures

    labrador(2669) 3 days ago [-]

    It'd be cool to see Google+ resurrected with OpenAI branding. Google+ was actually a pretty well designed social network

    WJW(2595) 3 days ago [-]

    Not well designed enough to live, though.

    bluetux01(10000) 3 days ago [-]

    that would be cool, google+ was very unique and i was kinda sad google killed it off

    swyx(159) 3 days ago [-]

    what did you like about it?

    piva00(10000) 3 days ago [-]

    I don't believe it was well designed, it felt clunky to use, concepts weren't intuitive enough to understand after a few uses.

    I tried to use it for a few months after release, always got frustrated to the point I didn't feel like reaching out to friends to be part of it.

    The absurd annoyance of its marketing, pushing it into every nook and cranny of Google's products was the nail in the coffin. I'm starting to feel as annoyed by the push with Gemini, it just keeps popping up at annoying times when I want to do my work.

    tiffanyh(3390) 3 days ago [-]

    My guess ... it's probably less of a 'social network' and more of a 'they are trying to build a destination (portal) where users go to daily'.

    E.g. old days of Yahoo (portal)

    sho_hn(10000) 3 days ago [-]

    They just want the next wave of Ghibli meme clicks to go to them, really.

    This will be built on the existing thread+share infra ChatGPT already has, and just allow profiles to cross-post into conversations, with UI and features more geared toward remixing each other's images.

    beepbopboopp(10000) 3 days ago [-]

    The answer seems more obvious to me. They dont even care if its competitive or scales too much. xAI has a crazy data advantage firehousing Twitter, llama FB/IG and CGPT just has, well, the internet.

    Id hope they have some clever scheme to acquire users, but ultimately they want the data/

    latency-guy2(10000) 2 days ago [-]

    I actually would love this. I hate having to go to another website to share some thoughts I had using tools in a platform.

    I miss the days when experiences would actually choose to integrate other platforms into their experiences, yes I was sort of a fan of the FB/Google share button and Twitter side feed (not the tracking bits though).

    I wasn't a fan of LLM and the whole chat experience a few years ago, I'm a very mild convert now with the latest models and I'm getting some nominal benefit, so I would love to have some kind of shared chat session to brain storm, e.g. on a platform better than Figma.

    The one integration of AI that I think is actually neat is Teams + AI Note taking. It's still a hit or miss a lot of the time, but it at least saves and notes something important 30% of the time.

    Collaboration enhancements would be a wonderful outcome in place of AGI.

    mushufasa(10000) 3 days ago [-]

    Sounds like they are thinking about instagram, which originated as a phone app to apply filters to a camera and share with friends (like texting or emailing them or sending them a link to a hosted page), and evolved into a social network. Their new image generation feature has enough people organically sharing content that they probably are thinking about hosting that content on pages, then adding permissions + follow features to all of their existing users' accounts.

    honestly it's not a terrible idea. it may be a distraction from their core purpose, but it's probably something they can test and learn from within a ~90 day cycle.

    CharlieDigital(10000) 3 days ago [-]

    Sounds like some crossover with Civit.ai

    janalsncm(10000) 3 days ago [-]

    An idea which sounds horrifying but would probably be pretty popular: a Facebook like feed where all of your "friends" are bots and give you instant gratification, praise, and support no matter what you post. Solves the network effect because it scales from zero.

    samcgraw(10000) 3 days ago [-]

    I'm sorry to say this exists: https://socialai.co

    clonedhuman(10000) 3 days ago [-]

    AI bots already make up a significant percentage of users on most social networks. Might as well just take the mask off completely--soon, we'll all be having conversations (arguments, most likely) with 'users' with no real human anywhere near them.

    api(1616) 3 days ago [-]

    I've been saying for a while that the next innovation beyond TikTok, Instagram, and YouTube is to get rid of human creators entirely. Just have a 100% AI-generated slop-feed tailor made for the user.

    There's already a ton of AI slop on those platforms, so we're like half way there, but what I mean is eliminating the entire idea of humans submitting content. Just never-ending hypnotic slop guided by engagement maximizing algorithms.

    frabona(10000) 3 days ago [-]

    Feels like a natural next step, honestly. If they already have users generating tons of content via ChatGPT, hosting it natively and adding light social features might just be a way to keep people engaged and coming back. Not sure if it's meant to compete with Twitter/Instagram, or just quietly become another daily habit for users

    pclmulqdq(1741) 2 days ago [-]

    This would be a natural step if it were 2010. In 2025, it sounds like a lack of imagination to me.

    rnotaro(10000) 2 days ago [-]

    FYI there's already an (early) social feed in OpenAI Sora.

    https://sora.com/explore?type=videos

    paulvnickerson(10000) 3 days ago [-]

    Sam Altman is retaliating against Musk for Grok and Musk's lawsuit against OpenAI, trying to ride the wave of anti-Musk political heat, and figure out a way to pull in more training data due to copyright troubles.

    If they launch, expect a big splash with many claiming it is the X-killer (i.e. the same people that claimed the same of Mastadon, Threads, and Bluesky), especially around here at HN, and then nobody will talk about it anymore after a few months.

    AlienRobot(10000) 3 days ago [-]

    Here's how to kill Twitter and Bluesky AND Mastodon:

    1: use an LLM to extract the text from memes and relatable comics.

    2: use an LLM to extract the transcriptions of videos.

    3: use an LLM to censor all political speech.

    OpenAI, I believe in you. You can do it. Save the Internet.

    If you can clean my FYP of current events I'll join your social media before you can ask a GPT how to get more users.

    randomor(10000) 2 days ago [-]

    Controversial opinion: it's not about the generator of the content, human or not, but about the originality of the content itself. Human with the help of AI will generate more good quality as a result.

    Humans are just as good as bots in generating rubbish content, if not more so.

    Twitter reduced content production cost significantly, AI can take it another step down.

    At minimum, a social network where people share good prompt engineering techniques will be valuable to people who are on the hunt for prompts. Just like the Midjourney website, except creating a high quality image is no longer a trip to the beach, but a thought experiment. This will also significantly cut down the cold start friction and in combination with some free credits, people may have more reasons to stay, as the current chat based business model may reach it's limit for revenue generation and retention, as it's just single player mode.

    godelski(10000) 2 days ago [-]

      > but about the originality of the content itself
    
    Your metric is too ill-defined. Here, have some highly unique content

      gZbDrttzP6mQC5PoKXY2JNd9VIIxBUsV
      ClRF73KITgz5DVnSO0YUxMB6o7P9gh8I
      1ttcQiNdQuIs4axdAJvjaFXXkxq0EvGq
      Pd0qwVWgSvaPw8volLA0SWltnqcCNJiy
    
    If we need unique valid human language outputs I'll still disagree. Most human output is garbage. Good luck on your two tasks: 1) searching for high quality content 2) de-duplicating. Both are still open problems and we're pretty bad at both. De-duping images is still a tough task, before we even begin to address the problem of semantic de-duplication.
    gorgoiler(10000) 2 days ago [-]

    The analogy is with Iain Banks' The Culture.

    Anyone can be anything and do anything they want in an abundant, machine assisted world. The connections, cliques, friends and network you cultivate are more important than ever before if you want to be heard above the noise. Sheer talent has long fallen by the wayside as a differentiator.

    ...or alternatively it's not The Culture at all. Is live performance the new, ahem, rock star career? In fifty years time all the lawyers and engineers and bankers will be working two jobs for minimum wage. The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.

    Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to "not give up the night job" — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.

    comrade1234(10000) 2 days ago [-]

    Naah... in the culture you could change your sex at will, something soon to be illegal.

    retransmitfrom(10000) 2 days ago [-]

    The Culture is about a post-capitalist utopia. You're describing yet another cyberpunk-esque world where people have still have to do wage-labor to not starve.

    idiotsecant(10000) 2 days ago [-]

    The culture presents such a tempting world view for the type of people who populate HN.

    I've transitioned from strongly actually believing that such a thing was possible to strongly believing that we will destroy ourselves with AI long before we get there.

    I don't even think it'll be from terminators and nuclear wars and that sort of thing. I think it will come wrapped in a hyper-specific personalized emotional intelligence, tuned to find the chinks in our memetic firewalls just so. It'll sell us supplements and personalized media and politicians and we'll feel enormously emotionally satisfied the whole time.

    Nursie(10000) 2 days ago [-]

    > The real high earners will be the ones who can deliver live, unassisted art that showcases their skills with instruments and their voice.

    We already have so many of those that it's very hard to make any sort of living at it. Very hard to see a world in which more people go into that market and can earn a living as anything other than a fantasy.

    Cynically - I think we'd probably end up with more influencers, people who are young, good looking and/or charismatic enough to hold the attention of other people for long enough to sell them something.

    ur-whale(2802) 2 days ago [-]

    > Those who are truly passionate about the law will only be able to pursue it as a barely-living-wage hobby while being advised to "not give up the night job" — their main, stable source of income — as a cabaret singer. They might be a journalist or a programmer in their twenties for fun before economics forces them to settle down and get a real, stable job: starting a rock band.

    Controversial stance probably, but this very much sounds like a world I'd love to live in.

    beambot(2366) 2 days ago [-]

    Makes me (further) believe that Reddit is heavily undervalued...

    alphazard(10000) 2 days ago [-]

    Alright, I'll bite. What's a reasonable price for Reddit? Aren't most of their users bots?

    blitzar(10000) 2 days ago [-]

    Discord is the real play.

    aussieguy1234(3672) 2 days ago [-]

    LLM -> Social Media Platform -> Tiktok clone.

    That would be an interesting evolution.

    arizen(10000) 2 days ago [-]

    Social media is becoming TikTok's clone army, with algorithms hooked on short-form videos for max engagement.

    Text, images, and long-form content are getting crushed, forcing creators into bite-sized video to be favored by almighty algorithm.

    It's like letting a kid pick their meals - nothing but sugar and candy all day.

    pluto_modadic(10000) 2 days ago [-]

    They know AI can be addictive (people will prompt it far too often), so mixing it with social media can captivate users even more effectively.

    hybrid_study(10000) 2 days ago [-]

    and they can own all the data

    eagerpace(10000) 2 days ago [-]

    I thought they were building a new search engine. Now it's a social network. Tomorrow it will be robots. It's all a distraction from ClosedAI.

    bsima(10000) 2 days ago [-]

    Also rumored to be building a phone at one point? They are playing the media

    empath75(2913) 2 days ago [-]

    It already is a search engine and has been for a while.

    I think you don't recognize it as such because it's incorporated into the chat box, but I use chatgpt as my search engine 90% of the time and almost never use google any more.

    I think the social stuff will also just be incorporated into the chat interface in the form of 'share this image', etc, and isn't going to be like twitter with a bunch of bots posting.





    Historical Discussions: Intel sells 51% stake in Altera to private equity firm on a $8.75B valuation (April 14, 2025: 312 points)

    (312) Intel sells 51% stake in Altera to private equity firm on a $8.75B valuation

    312 points 4 days ago by voxadam in 666th position

    newsroom.intel.com | Estimated reading time – 8 minutes | comments | anchor

    SANTA CLARA, Calif.; SAN JOSE, Calif.; and MENLO PARK, Calif., April 14, 2025 – Intel Corporation today announced that it has entered into a definitive agreement to sell 51% of its Altera business to Silver Lake, a global leader in technology investing.

    The transaction, which values Altera at $8.75 billion, establishes Altera's operational independence and makes it the largest pure-play FPGA (field programmable gate array) semiconductor solutions company. Altera offers a proven and highly scalable architecture and tool chain and is focused on driving growth and FPGA innovation to meet the demands and opportunities of an AI-driven market.

    Intel will own the remaining 49% of the Altera business, enabling it to participate in Altera's future success while focusing on its core business.

    Intel also announced that Raghib Hussain will succeed Sandra Rivera as chief executive officer of Altera, effective May 5, 2025. Hussain is a highly accomplished and visionary technology executive with strong business acumen and engineering credentials. He joins Altera from his previous role as president of Products and Technologies at Marvell. Prior to joining Marvell in 2018, Hussain served as chief operating officer of Cavium, a company he co-founded. Prior to Cavium, Hussain held engineering roles at both Cisco and Cadence and helped found VPNet, an enterprise security company.

    "Today's announcement reflects our commitment to sharpening our focus, lowering our expense structure and strengthening our balance sheet," said Lip-Bu Tan, chief executive officer of Intel. "Altera continues to make progress repositioning its product portfolio to participate in the fastest growing and most profitable segments of the FPGA market. We are grateful for Sandra's strong leadership and lasting impact throughout her 25-year Intel career and wish her continued success as she begins a new chapter. Raghib is a superb executive we selected to lead the business forward based on his vast industry experience and proven track record of success. We look forward to partnering with Silver Lake upon closing of the transaction, as their industry expertise will help to accelerate Altera's efforts and unlock additional economic value for Intel."

    "This investment represents a once-in-a-generation opportunity to invest in a scale leader in advanced semiconductors. Together with Raghib, we will be focused on strengthening Altera's technology leadership position and investing in emerging AI-driven markets such as edge computing and robotics," said Kenneth Hao, chairman and managing partner of Silver Lake. "We look forward to working closely with Intel as a strategic partner who will continue to provide U.S.-based foundry services and complementary engagement with customers."

    "I am excited to lead Altera in its next chapter, and this milestone with Silver Lake furthers Altera's journey to be the world's No. 1 FPGA solutions provider," said Hussain. "Backed by Silver Lake's strong track record and now with clarity of focus as an independent company, Altera is well-positioned to build on its momentum and deliver breakthrough FPGA-based solutions that are shaping the future of compute driven by AI. I am grateful for the impact Sandra has made and the team she has built as we begin Altera's next phase of growth."

    Altera has been at the forefront of driving FPGA innovations for more than 40 years. The company provides leading programmable solutions that are easy-to-use and deploy in a range of strategically important segments such as industrial, communications, data center and military, aerospace, and government, as well as emerging markets such as AI/edge and robotics. Its broad portfolio of programmable semiconductor solutions, software and development tools deliver the reliability and flexibility needed to accelerate customer technology innovation.

    The transaction is expected to close in the second half of 2025, subject to customary closing conditions.

    Upon closing, Intel expects to deconsolidate Altera's financial results from Intel's consolidated financial statements. In Fiscal Year 2024, Altera generated revenues of $1.54 billion, GAAP gross margin of $361 million and GAAP operating loss of $615 million. Altera's Fiscal Year 2024 non-GAAP gross margin was $769 million and non-GAAP operating income was $35 million. Reconciliations between the GAAP and non-GAAP measures are provided below.

    Morgan Stanley & Co. LLC acted as financial advisor to Intel.

    Forward-Looking Statements

    This release contains forward-looking statements that involve a number of risks and uncertainties, including with respect to the terms and anticipated timing of closing the agreed upon sale of a controlling interest in Altera and the potential benefits of such sale to Intel and Altera. Such statements involve risks and uncertainties that could cause actual results to differ materially from those expressed or implied, including: the risk that the transaction may not be completed in a timely manner or at all, including as a result of a failure to receive regulatory approvals; the occurrence of any event, change or other circumstance that could give rise to the termination of the transaction; the risk that the expected benefits of the transaction, including as a result of the increased independence of Altera, may not be realized; the risk of future loss of the Altera business by Intel as a result of the sale of a controlling interest in Altera; disputes or potential litigation related to the transaction or the ownership, control and operation of the Altera business, including as it relates to Intel; unanticipated costs related to the transaction or the Altera business that may be incurred; risks as to the retention of key Altera personnel and customers; risks related to the diversion of management's attention during the pendency of the transaction; potential adverse reactions or changes to business relationships resulting from the announcement or completion of the transaction; changes in demand for Altera's semiconductor products; the high level of competition and rapid technological change in the semiconductor industry; and other risks and uncertainties described in Intel's 2024 Form 10-K and our other filings with the SEC.

    Given these risks and uncertainties, readers are cautioned not to place undue reliance on such forward-looking statements. Readers are urged to carefully review and consider the various disclosures made in this release and in other documents we file from time to time with the SEC that disclose risks and uncertainties that may affect our business.

    All information in this press release reflects Intel management views as of the date hereof unless an earlier date is specified. Intel does not undertake, and expressly disclaims any duty, to update such statements, whether as a result of new information, new developments, or otherwise, except to the extent that disclosure may be required by law.

    Non-GAAP Financial Measures

    This release contains references to non-GAAP financial measures: Altera non-GAAP gross margin and Altera non-GAAP operating income / (loss) measures. Set out below are reconciliations of these measures to the most directly comparable GAAP financial measures. The non-GAAP financial measures disclosed herein should not be considered a substitute for, or superior to, the financial measures prepared in accordance with GAAP. Please refer to "Explanation of Non-GAAP Measures" in Intel's earnings release dated Jan. 30, 2025 for a detailed explanation of the adjustments made to the comparable GAAP measures, the ways management uses the non-GAAP measures, and the reasons why management believes the non-GAAP measures provide investors with useful supplemental information.

    Twelve Months Ended
    (in Millions; Unaudited) Dec 28, 2024
    GAAP gross margin $ 361
    Acquisition-related adjustments 402
    Share-based compensation 6
    Non-GAAP gross margin $ 769
    GAAP operating income / (loss) $ (615)
    Acquisition-related adjustments 491
    Share-based compensation 122
    Restructuring and other charges 37
    Non-GAAP operating income / (loss) $ 35

    About Intel

    Intel (Nasdaq: INTC) is an industry leader, creating world-changing technology that enables global progress and enriches lives. Inspired by Moore's Law, we continuously work to advance the design and manufacturing of semiconductors to help address our customers' greatest challenges. By embedding intelligence in the cloud, network, edge and every kind of computing device, we unleash the potential of data to transform business and society for the better. To learn more about Intel's innovations, go to newsroom.intel.com and intel.com.

    About Altera Altera is a leading supplier of programmable hardware, software, and development tools that empower designers of electronic systems to innovate, differentiate, and succeed in their markets. With a broad portfolio of industry-leading FPGAs, SoCs, and design solutions, Altera enables customers to achieve faster time-to-market and unmatched performance in applications spanning data centers, communications, industrial, automotive, and more. For more information, visit www.altera.com.

    About Silver Lake Silver Lake is a global technology investment firm, with approximately $104 billion in combined assets under management and committed capital and a team of professionals based in North America, Europe and Asia. Silver Lake's portfolio companies collectively generate nearly $252 billion of revenue annually and employ approximately 433,000 people globally.




    All Comments: [-] | anchor

    bigfatkitten(10000) 4 days ago [-]

    It was a silly acquisition in the first place, and their justification clearly came from a coke-addled fever dream.

    Intel soon discovered the obvious, which is that customers with applications well-suited to FPGAs already use FPGAs.

    mschuster91(2748) 4 days ago [-]

    > Intel soon discovered the obvious, which is that customers with applications well-suited to FPGAs already use FPGAs.

    Yes, but pairing an FPGA somewhat tightly integrated with an actually powerful x86 CPU would have made an interesting alternative to the usual FPGA+some low end ARM combo that's common these days.

    georgeburdell(10000) 4 days ago [-]

    If AMD did the same thing years later, was it really that foolish?

    komadori(10000) 4 days ago [-]

    Do you think AMD's decision to buy Xilinx was any better or not?

    danielmarkbruce(10000) 4 days ago [-]

    There was some hope at the time that FPGAs could be used in a lot more applications in the data center. It is likely still feasible. Remember Hennessy published:

    https://www.doc.ic.ac.uk/~wl/teachlocal/arch/papers/cacm19go...

    And maybe this is/was a pipe dream - maybe there aren't enough people with the skills to have a 'golden age of architecture'. But MSFT was deploying FPGAs in the data center and there were certainly hopes and dreams this would become a big thing.

    matt3210(10000) 4 days ago [-]

    It made their stock pop for a while which was all that mattered to Brian Krzanich who took the bonus and left the mess in the hands of Bob Swan who did the same things and left the mess ... (recursion her).

    nativeit(3656) 4 days ago [-]

    > Intel soon discovered the obvious, which is that customers with applications well-suited to FPGAs already use FPGAs.

    So selling FPGA's was a bad move? Or was the purchase price just wildly out-of-line with the--checking...$9.8B annual market that's expected to rise to $23.3B by 2030?

    dbuder(10000) 3 days ago [-]

    It was a forced acquisition, iirc they made promises to Altera to get them to use their foundry, failed to keep those promises and could either get sued and embarrassed or just buy Altera outright for about what they were worth before the deal.

    Alupis(1304) 4 days ago [-]

    I wonder if we'll see more Intel sell-offs, as Tan et al try to get things under control.

    Will we see an AMD-esque fab spin-off?

    nxobject(3638) 4 days ago [-]

    Would market regulators allow a single buyer to acquire all of Intel's fabs in one go?

    jsight(10000) 4 days ago [-]

    I'd guess that they'll continue to sell off mobileye over time.

    DebtDeflation(10000) 3 days ago [-]

    Beyond ensuring adequate cash flow, they need to be 100% focused on getting 18A shipping in volume as soon as possible rather than financial engineering stuff.

    mastax(3442) 4 days ago [-]

    Intel acquired Altera in December 2015 for $16.7 billion in cash.

    nativeit(3656) 4 days ago [-]

    If only someone could have come up with a plausibly profitable use-case for advanced FPGA's and highly performant, efficient, real-time processing or hardware acceleration in those intervening years? What are ya gonna do?

    rsp1984(3005) 4 days ago [-]

    Should change title. They sold 51% at a valuation of $8.75B, so cash in is ~ $4.29B.

    voxadam(666) 4 days ago [-]

    I've updated the title as best as I could within the constraints of the max length.

    svnt(10000) 4 days ago [-]

    For those keeping score at home, 51% sold at a total valuation of $8.75B, which means they are bringing in around $4.5B, and recognizing a loss of roughly 50% on what was their biggest deal ever when it took place in 2015.

    Jach(2888) 4 days ago [-]

    'In December 2015, Intel acquired Altera for $16.7 billion in cash.' $21.5 bn inflation adjusted. Amazing ten year performance.

    scottyah(10000) 4 days ago [-]

    Or they got what they wanted from it and are selling off the rest, like when Google bought Motorola Wireless for the Patents then sold off the non-googly employees, culture, and brand for cheap.

    thot_experiment(10000) 4 days ago [-]

    Rest in Peace Altera I guess? I still drink out of my color changing Altera mug (that's long stopped changing color) most days. PE ruins everything so it's only a matter of time before they're gutted and sold for scraps by the vultures at Silver Lake. (though honestly the writing was on the wall since the Intel acquisition I had held onto some hope) If only we had a functioning government interested in actually maintaining our technological dominance and enforcing/expanding antitrust legislation. I wrote my first Verilog on an Altera chip and I'll remember them fondly.

    neilv(3544) 3 days ago [-]

    > [...] my color changing Altera mug (that's long stopped changing color) most days. PE ruins everything [...]

    I don't think PE is responsible for that one.

    jhallenworld(10000) 3 days ago [-]

    Mug? Well I got a Cubic Cyclonium- annoyingly no current tools support it and even the last version which did is no longer available.

    https://datasheet.octopart.com/CUBIC-CYCLONIUM-Altera-datash...

    ACAVJW4H(10000) 3 days ago [-]

    Quick search shows Altera held 30% of the FPGA market. That puts AMD's $50B acquisition of Xilinx (which holds ~50% of the market) in an awkward light. Using some extremely crude math, Xilinx's fair market value might now be closer to ~$15B.

    Did AMD massively overpay, or has the FPGA market fundamentally shifted? Curious to see how this new benchmark ripples into AMD's stock valuation.

    timewizard(10000) 3 days ago [-]

    The FPGA market shifted. For a brief moment they were allowed to be on BOMs of end user devices due to the rest of the computing field lagging behind somewhat. That period, as far as I can tell, is over.

    My anecdotal example would be high end broadcast audio processors. These do quite a bit beyond the actual processing of audio, in particular, into baseband or even RF signal generation.

    In any case these devices used to be fully analog, then when they first went digital were a combination of DSPs for processing and FPGAs for signal output. Later generations dropped the DSP and did everything in larger FPGAs as the larger FPGAs became available. Later generations dropped the whole stack and just run on an 8 core Intel processor using real time linux and some specialized real time signal processing software with custom designed signal generators.

    The high core and high frequency CPUs became good enough and getting custom made chips became exceptionally cheap as well. FPGAs became rather pointless in this pipline.

    The US military, for a time, had a next generation radio specification that specifically called for the use of FPGAs, as that would allow them to make manufacturer agnostic radios and custom software for them. That never panned out but it shows the peak use of FPGAs to manage the constraints of this time period.

    fuzzythinker(3358) 3 days ago [-]

    Not all market share are equal, like iphone vs. android. Also, the value for the leader will cost more than the second in line.

    TheMagicHorsey(10000) 3 days ago [-]

    I used to work at Intel (around 1999) in their Jones Farm campus in Oregon. My employee stock grants from that time are still underwater.

    This was the heyday at Intel. I left within a year because I noticed that the talent that was respected, compensated and influential at Intel was the sales engineers. I can't pretend to have known that would lead to the decline of the company, but I knew that as an engineer uninterested in sales, that it wasn't the place for me.

    ChrisGammell(10000) 3 days ago [-]

    I'd love to hear more about how the 'sales engineers were the influential ones' manifested. I have an idea in my head, but I'm curious about details.

    skeptrune(3507) 3 days ago [-]

    What would sales engineers be responsible for at a company like intel? I thought that was more of a SaaS thing.

    flanfly(3678) 3 days ago [-]

    Props to Intel duping AMD to buy Xillix for whooping $50B

    Panzer04(10000) 3 days ago [-]

    AMD bought an overpriced company with their own overpriced stock. Probably not as bad as it might look.

    roughly(10000) 4 days ago [-]

    Without arguing the merits of the Altera investment or divestment, a common pattern for Intel seems to be a wild see-sawing between an aggressive and a defensive market posture - it's a regular occurrence for Intel to announce a bold new venture to try to claim some new territory, and just as regular that they announce they're halting that venture in the name of "consolidating" and "focusing on their core." The consequence is that they never give new ventures time to actually succeed, so they just bleed money creating things they murder in the cradle, and nobody born before last Tuesday is investing in bothering to learn the new Intel thing because its expected lifespan is shorter than the average Google product.

    Intel either needs to focus or they need to be bold (and I'd actually prefer they be bold - they've started down some cool paths over time), but what they really need is to make up their goddamn minds and stop panicking every other quarter that their "ten-year bets" from last quarter haven't paid off yet.

    jrockway(3560) 4 days ago [-]

    This seems to be common for corporate America in general. I used to work at a YC startup. We kiiiiiinda maaaaaaaybe ran out of money (not my department) and happened to get bought by a large investor that also happens to be a US-based hardware manufacturer. Two years and countless reorgs later, they laid everyone off and as far as I know, are no longer in the business of selling the software products they bought. They never figured out how software worked, never had anyone managining the division for more than 6 months, and got bored. I think they thought by moving everyone over to Microsoft Word and Windows laptops (peppered with a half-hearted threat about RTO), they would just magically make billions of dollars the first month. It didn't happen.

    I am beginning to think M&A are just some sort of ego thing for bored megacorp execs, rather than serious attempts to add efficiency and value to the marketplace. (Prove me wrong, bored megacorp execs. I'll wait.)

    wmf(2049) 4 days ago [-]

    And Intel's acquisitions kill off promising startups. At least Altera is being sort of spun off instead of outright destroyed.

    smallmancontrov(10000) 4 days ago [-]

    M&A churn is a way for management to monetize their power. Efficacy is a distant second concern.

    thunder-blue-3(10000) 4 days ago [-]

    Speaking from personal experience, many director-level and above positions at Intel, especially in growth related areas are filled through nepotism and professional connections. I've never seen a headline about Intel's decline and thought, 'Wow, how could that happen?'

    nine_k(3565) 3 days ago [-]

    But, well, it was a ten-year bet: Altera was acquired in 2015.

    If they could not figure how to make it profitable, maybe somebody else should try. (Of course I don't think that the PE company is going to do just that.)

    rqtwteye(3305) 3 days ago [-]

    Seems they should read Andy Grove's books.

    ethbr1(3611) 3 days ago [-]

    > it's a regular occurrence for Intel to announce a bold new venture to try to claim some new territory, and just as regular that they announce they're halting that venture in the name of "consolidating" and "focusing on their core." [...] [Intel's new thing's] expected lifespan is shorter than the average Google product.

    You got there in the end. You get the same outcome with the same corporate incentive.

    Both Intel and Google prioritize {starting something new} over {growing an existing thing}, in terms of corporate promotions and rewards, and therefore employees and leaders self-optimize to produce the repeated behavior you see.

    The way to fix this would be to decrease the rewards for starting a new thing and increase the rewards for evolving and growing an existing line of business.

    wombatpm(10000) 3 days ago [-]

    I worked for a former Fortune 300 company that had an active internal investment strategy. They wanted the next billion dollar business, guaranteed, in 12 months. And wanted to invest more than 1 million dollars. Sadly they are now bankrupt and owned by PE.

    evertedsphere(10000) 3 days ago [-]

    > a wild see-sawing between an aggressive and a defensive market posture

    tick, tock

    fredoralive(10000) 3 days ago [-]

    My personal theory is that desktop / laptop / server x86 (usually) is such a giant money printer that a) Intel can invest in anything (Altera, antivirus, Optane...) but b) when they do, they quickly realise that this isn't a giant profit margin machine like x86, so why bother?

    bigfatkitten(10000) 3 days ago [-]

    They fuck their customers when they do that. A good friend of mine had a product designed around Quark that was about to go into production when Intel pulled the rug out from under him.

    apercu(10000) 3 days ago [-]

    It could just be a stock play.. Need the stock to move up? Buy a company.

    Stock down again? Sell the company you bought 2 years ago.

    From the top to the bottom the problem with late stage capitalism is misaligned incentives.

    Edit: I wrote 'the problem' and I should have written 'among the many, many problems'

    matt3210(10000) 4 days ago [-]

    Intel's problem is that they're trying to deliver short term shareholder value instead of long term stable value.

    sambull(10000) 4 days ago [-]

    They'll give any market a good 18 months and then dip

    lvl155(10000) 4 days ago [-]

    Not farfetched to think they're maybe 6-8 quarters away from imploding. They need to survive.

    varispeed(10000) 4 days ago [-]

    Seems quite cheap. If I was a state I'd buy it. Possibly give stake to the suitable university and then create internships and other learning opportunities. I would also subsidise products to SMEs and then invest more to ensure company can supply defence and other industries, decoupling the country from dependence on other countries from crucial tech.

    I mean it's a pipe dream, but why not.

    fc417fc802(10000) 3 days ago [-]

    I think nationalization is usually frowned on in the west, but your comment about universities got me wondering. It seems small enough that the state could donate it to a consortium of research universities. That'd have to be better than PE in terms of serving the national interest, wouldn't it?

    Jach(2888) 4 days ago [-]

    Man I remember being excited when Intel bought Altera, maybe they'd bring FPGAs to the masses, then they proceeded to do nothing with them...

    jeffparsons(10000) 3 days ago [-]

    I was excited, too. I was also excited when Intel announced Larrabee.

    That was before I learnt about the many and varied ways in which Intel sabotages itself, and released that Intel's underperformance has little to do with a lack of good technical ideas or talent.

    I.e. I was young and naive. I am now considerably less young, and at least a little less naive.

    bjourne(1594) 4 days ago [-]

    Apparently, the FPGA industry wasn't large enough for two major players. Maintaining an extremely specialized developer ecosystem for a relatively small niche can't have been cheap. Almost zero cross-over too, since FPGA tooling is much too foreign to be repurposed for other architectures. I suspect this move will make it a bit harder for Intel to collect 'developer mindshare' for their other hyped up stuff because no one likes having the rug pulled out from under them. Hope AMD can make a better job with Xilinx than what Intel could with Altera.

    rasz(3448) 4 days ago [-]

    Intel FPGA venture made tons more sense than AMD following it. FPGAs are great at filling up your idle fabs and honing engineering skills on reaching high yields.

    Selling now also makes sense. There was only one serious competitor in 2015. Now you got Tariffs both ways to the main place where everything is build, and said place has own homegrown vendors like GOWIN, Sipeed, Efinix. But the biggest reason is amount of stuff designed in the West/Taiwan is falling with China taking over actual product design.

    https://itif.org/publications/2024/08/19/how-innovative-is-c...

    >In 2015, China released its "Made in China 2025" (MIC 2025) strategy, which refined some of these targets, setting a goal of achieving 40 percent self-sufficiency in semiconductors by 2020 and 70 percent by 2025.

    https://en.wikipedia.org/wiki/Made_in_China_2025

    >In 2024, the majority of MIC 2025's goals were considered to be achieved, despite U.S. efforts to curb the program.

    Products coming out of China no longer use STM microcontrollers, Vishay/Analog mosfets/diodes and Altera/Xilinx FPGAs. Its all Chinese semiconductor brands you never heard about. Good example is this teardown of Deye SUN-5K-SG04LP1 5kW hybrid solar inverter https://www.youtube.com/watch?v=n0_cTg36A2Q

    d-moon(10000) 4 days ago [-]

    As someone who's worked at Xilinx before and after the merger, it's a surprise they were even able to sell it for that much. Altera has been noncompetitive to Xilinx in performance and to Lattice in terms of low-end/low-power offerings for at least the last 2 generations.

    I'm concerned about the future of FPGAs and wonder who will lead the way to fix these abhorrent toolchains these FPGA companies force upon developers.

    gscott(242) 3 days ago [-]

    It seems FPGA can now do things for LLM's so there might be some future in that

    https://www.achronix.com/blog/accelerating-llm-inferencing-f...

    aswanson(10000) 3 days ago [-]

    Alteras tools seemed more civilized than Xilinx's, in my limited experience.

    snvzz(2530) 3 days ago [-]

    >wonder who will lead the way to fix these abhorrent toolchains these FPGA companies force upon developers.

    Some FPGA vendors are contributing to and relying, partially or completely, on the open source stack (mainly yosys+nextpnr).

    It is still perceived as not being 'as good' as the universally hated proprietary tools, but it's getting there.

    imtringued(10000) 3 days ago [-]

    Yeah I personally wondered if AMD was just copying Intel, because apparently every CPU manufacturer also needs to manufacture FPGAs, or they actually have a long term strategy where it is essential for both the FPGA and CPU departments to cooperate.

    I think Xilinx did a fine job with their AI Engines and AMD decided to integrate a machine learning focused variant on their laptops as a result. The design of the intel NPU is nowhere near as good as AMD's. I have to say that AMD is not a software company though and while the hardware is interesting, their software support is nonexistent.

    Also, if you're worried about FPGAs that doesn't really make much sense, since Effinix is killing it.

    almostgotcaught(10000) 3 days ago [-]

    You worked at Xilinx and you're not aware that FPGA is not a growing segment?

    HelloNurse(10000) 3 days ago [-]

    So Intel found optimists who think they can make Altera more competitive? It's a success. Success with Intel products would be better, and excellence at M&A is hard to convert into excellence at chipmaking, but it's better than nothing.

    tliltocatl(10000) 3 days ago [-]

    Altera toolchain was a tad nicer than xilinx as of 2020, just saying. Still horrible, but at least the IDE wasn't a laggy Electron abomination.

    hermitShell(10000) 3 days ago [-]

    Agree on both. As things like the PIO on the rp line of micros gets more common, micros will have IO that can match FPGAs. For low end, micros are generally good enough or gain NPU compute cores. It's the IO that differentiates FPGAs.

    unethical_ban(10000) 3 days ago [-]

    Was altera the thing they bought to do some really cool networking/switching/SDN stuff? Paging bcantrill.

    saagarjha(10000) 3 days ago [-]

    You might be talking about Tofino?

    wmf(2049) 3 days ago [-]

    You're thinking of Barefoot which is also dead. (And Fulcrum before that.)

    MangoCoffee(10000) 3 days ago [-]

    What a waste! I can never understand corporate thinking and how CEOs get such massive fucking pay for decisions like this.

    Intel paid $16.7 billion in 2015 and sold it for $8.75 billion?! What about all the money dumped into Altera from 2015 to 2025? How much was that? Is Intel just handing over the FPGA market to AMD?

    petermcneeley(10000) 3 days ago [-]

    Right but they are only selling 51% of it.

    https://download.intel.com/newsroom/2021/archive/2015-12-28-...

    throwaway2037(2851) 3 days ago [-]

        > Is Intel just handing over the FPGA market to AMD?
    
    Maybe? But who cares. From all of the comments above, I learned that the FPGA market is stalled or shrinking. Even AMD likely overpaid for Xilinx.
    dtquad(3667) 3 days ago [-]

    GPGPUs ended up becoming the AI/cloud accelerators that FPGAs promised to be back when Intel bought Altera.

    FPGAs are not ideal for raw parallel number crunching like in AI/LLMs. They are more appropriate for predictable real-time/ultra-low-latency parallel things like the the modulation and demodulation of signals in 5G base state stations.

    AlotOfReading(3629) 3 days ago [-]

    FPGAs might not be ideal, but AMD's NPU IP originated with Xilinx.

    Intel was an early player to so many massive industries (e.g. XScale, GPGPU, hybrid FPGA SoCs). Intel abandoned all of them prematurely and has been left playing catch-up every time. We might be having a very different discussion if literally any of them had succeeded.





    Historical Discussions: The dark side of the Moomins (April 13, 2025: 307 points)

    (307) The dark side of the Moomins

    307 points 5 days ago by SebaSeba in 3572nd position

    www.newstatesman.com | Estimated reading time – 13 minutes | comments | anchor

    "I could vomit over Moomintroll," Tove Jansson confided in her notebook in the late 1950s. A decade after the hippo-like creature with low self-esteem made his debut appearance in 1945, Scandinavian homes had become versions of Moominvalley, with Moomin-themed aprons, curtains, wallpaper and crockery, while department stores stocked Moomins modelled in marzipan, ceramic and white leather (Jansson drew the line at Moomin sanitary towels). This world of whimsy bore little relation to the Finnish artist's initial conception of the Moomintrolls.

    The Moomins and the Great Flood, the 60-page picture book not translated into English until 2005 and now celebrating its 80th anniversary, was written during the Winter War in 1939, when Russia's invasion of Finland left 300,000 Finns homeless. (The Moomin estate is marking the anniversary by partnering with Counterpoints Arts and Refugee Week to commission artists to create public artworks inspired by the book.) A tale of displaced people and dangerous predators and living on borders, the first of the nine Moomin books begins with Moomintroll and Moominmamma arriving, "late in the afternoon one day at the end of August", in "the deepest part of the great forest". August, Jansson believed, was "the border between summer and winter" and twilight "the border between day and night".

    Part-Finnish and part-Swedish, part-storyteller and part-illustrator, a lover of both men and women, and an artist appealing equally to adults and children, Jansson was a border-dweller herself. A scratchy ink illustration on page one shows two tiny dark shapes, which might be roots or rocks, suspended beneath trees the size of Giant redwoods. Mother and son are in search of somewhere "snug" in which to hibernate, but they are also in search of Moominpappa, who long ago disappeared with the "mostly invisible" Hattifatteners: it is striking how many of the characters in Jansson's stories are searching for something, waiting for something, and in need of a home. The Moomins find another lost creature who will, in the later books, become Moomintroll's best friend and foster-brother, Sniff. There was a time, Moominmamma tells the small boys, when Moomins made their homes behind the stoves in other peoples' houses and did not need to "travel through fearsome forests and swamps in order to find somewhere to live".

    The Moomin stories were born, Jansson wrote to her friend Eva, "when I was feeling sad and scared of bombs and wanted to get away from gloomy thoughts... I would creep into an unbelievable world where everything was natural and friendly – and possible." The first book "had to be a fairy tale" with a "happy ending", and so when the Moomins find Moominpappa they move into his stove-shaped house, which a flood has transplanted, Ark-like, to the valley where they will live, we are told, for "the whole of their lives". There were no illustrations in Jansson's first draft of The Moomins and the Great Flood. She had trained as a painter but during the war she "stood still" as an artist and was no longer able to think in colour, so "it felt completely pointless to try to create pictures". Putting the pages in a drawer, she forgot about them for the next six years until a friend suggested that they could, with pictures, be turned into a children's book. The Moomins and the Great Flood, illustrated in sepia and black ink, was published only in Sweden, selling 219 copies in the first year.

    The Moomins, at this point in their gestation, were broad-backed with trunk-like noses, horn-like ears, and flattish stomachs. Their waistlines increased with fame, but their characteristics remained the same: anxious, romantic Moomintroll, dependable Moominmamma, and Moominpappa, the reckless, self-absorbed melancholic whose longing for adventure threatens to destroy them all. Jansson had found her cast, her perfect length – short to medium – and the balance between words and pictures that would prove her genius. The writing is spare, weighed down with silences, the images saying what the words elide. The Moomins and the Great Flood ends with the creation of Moominvalley, the kind of place that the psychotherapist Donald Winnicott – in whom Jansson had a strong interest – would call a "holding environment" where we can be determinedly ourselves. United in solipsism and contained by the love of Moominmamma, the Moomins and their eccentric friends live out their philosophies, compulsions, obsessions, paranoias, and various neuroses.

    Five further Moomin books followed in quick succession: Comet in Moominland (1946), in which a fireball is seen "diving headlong" towards Moominvalley and the Moomins wait in a cave for extinction (a response to the Soviet bombing of Helsinki and the American bombings of Nagasaki and Hiroshima); Finn Family Moomintroll (1948), a celebration of Jansson's first affair with a woman, the theatre director Vivica Bandler ("O, to be a newly awakened Moomin dancing in the glass-green waves while the sun is rising"); The Memoirs of Moominpappa, a parody of the life of the 16th-century Italian sculptor Benvenuto Cellini and of male pomposity ("When people read this book," Moomintroll tells his father, "they are going to believe you are famous"); and Moominsummer Madness (1954), when another flood renders the creatures once again homeless.

    The sixth novel, Moominland Midwinter (1958), written when Jansson was ready to "vomit" over her creation, contains the most devastating account of depression in 20th-century literature. Waking up early during the annual hibernation, Moomintroll finds himself snowed in and utterly alone in an alien world whose pleasure principle has disappeared. From now on in the books, things get darker. Family relations break down completely in Moominpappa at Sea (1965) when Moominpappa, realising that he is a failed artist, drags his family away from Moominvalley to an uninhabited rock in the middle of the sea that is "completely silent and terribly, terribly cold". Here, in his attempt to control the waves, he loses his mind, while a desolate Moominmamma hides inside the mural of Moominvalley that she's painted on the wall and Moomintroll, in love with a seahorse and profoundly depressed, finds a patch of earth on which to sleep. The island, meanwhile, shrinks with unhappiness.

    Subscribe to The New Statesman today from only £8.99 per month

    The final book, Moominvalley in November (1970), a spin on Waiting for Godot, takes place during the family's absence. Their friends, not knowing where they have gone or why they left without saying goodbye, wait in the Moomins' abandoned house (the one in which they would live for "the whole of their lives") for their return. There is no happy ending, and the readers who drank out of their Moomin mugs and slept beneath their Moomin duvet covers felt angry and cheated. But Jansson, aged 56, was at last free of her Frankenstein's monster. A book in which nothing happens save the passing of time, Moominvalley in November is an absurdist masterpiece. There is an aesthetic satisfaction to the series, which begins and ends with disappearance. It is Moominpappa who vanishes in the first book, and the entire family in the last. One of the oddest aspects of the Moomin phenomenon is how these complex tales of apocalypse, breakdown and disfunction have been consistently misread as cutesy celebrations of domestic life.

    Jansson's characters were a canvas for her own personality traits. Photo by Eva Konikoff

    Tove Jansson was born in Helsinki in August 1914. Her father, Viktor (known as "Faffan") was a sculptor from Finland's Swedish-speaking minority and her mother, Signe Hammarsten ("Ham") was a well-known draughtswoman, the daughter of a Swedish clergyman. Faffan's work did not sell and so Ham was the principal breadwinner. By the time she was 14, Tove was also contributing to the family finances by drawing cartoons for the satirical magazine Garm. In her early twenties, her satires of Hitler and Stalin were placed on Garm's cover. Faffan, who had returned from the Finnish Civil War (January-May 1918) a broken man, now fervently supported Germany and so he and his daughter were at loggerheads.

    The Janssons saw themselves as bohemians but there is nothing relaxed about the family portrait Tove painted in 1942, which shows five stiff figures in a cramped room, each locked in their own isolation and looking in different directions. Ham and Faffan are in white overalls, one of Tove's two brothers is in military uniform, while Tove herself, standing in the middle in a black hat, coat and gloves, looks as though her suitcase is packed and she is ready to board a train. "Faffan and I have said we hate each other," she told a friend during this same year. "It's hell to be still living at home."

    Jansson had lived with Moomins since childhood, when her uncle told her tales about the trolls behind the kitchen stove who would, if she stole jam from the larder, rub their cold snouts against her legs until she froze. By the time she was in her teens the trolls had evolved in her imagination into frightening "house ghosts" who made their presence known by breathing cold air on her neck: "Terrified, I turned the key in the lock and jumped into bed like a shot, and all that night I could hear the Moomintrolls pulling my slippers backwards and forwards under my bed." Jansson's first Moomin illustration ("the ugliest thing I could think of") was on the lavatory wall of the family's island summer house, where it can still be admired by tourists.

    The creatures had turned, by her late teens, into what Jansson's biographer Boel Westin describes as "ominous creatures associated with dreams, confusion and emptiness", drawn in a series of "expressive landscapes of boulders, seas, dark islands and deserted roads, fenced around with agitation, uncertainty and anguish". By her early twenties Moomintroll had become Jansson's "angry signature character". It is easy to overlook Moomintroll's anger, which expresses itself largely as fear, but it comes to the surface when his amour propre is challenged, such as in the comic strip story Moomin on the Riviera, where his girlfriend, Snorkmaiden, runs off with the movie star Mr Brisk and Moomintroll challenges him to a duel.

    The Moomintrolls were first introduced to an English audience in 1954 in the form of a comic strip in the London Evening News (circulation: 12 million) which by 1956 had been syndicated to 120 other papers in 20 further countries. These stories are funnier than those in the books and focus on what Jansson called "psychological moments" and Winnicott would call "nameless dread". Jansson had inadvertently become the analyst of the postwar psyche, but it was her own psyche she was exploring. The Moomin stories were, she said, "abreactions", a psychoanalytical term for catharsis ("I abreacted hugely through this book," she wrote of Moominpappa at Sea), and Jansson distributed herself throughout her characters: she was as dutiful and unassertive as Moomintroll, as misanthropic and frustrated as Moominpappa, as empathetic and reliable as Moominmamma, and as wild as the furious urchin Little My.

    She hoped that the income from the comic strips would allow her to return to painting, but it became clear by 1957 that this would never happen. As well as containing the world's fears, Jansson now singlehandedly controlled the Moomin merchandise industry, which involved answering by hand each of the 2,000 letters she received every year. "We look forward to your valued reply soonest concerning Moomin motifs on toilet paper in pastel shades," reads one letter." "Hi, my name is Olavi," reads another. "You write well but last time you did not make a happy ending. Why do you do this?" "What shall I do with my parents?" reads a third. "They're becoming more and more hopeless. Write!"

    Jansson, like the Moomins, wanted only to hibernate but instead she found herself snowed in beneath "an avalanche of things", her world now composed, she said, of "newspapers, telephones, telegrams, post post post in heaps, stacks, avalanches, strangers, lectures, conversations, conversations, masses of words and myriads of children. And never alone. Never ever really alone". One of the mysteries of Jansson's personality is why she allowed the mass commercialisation of her delicate, subtle work; another is why, given the urgency of her creative drive, she didn't employ a secretary to take over the administrative burden.

    In 1969, around the same time that she completed the Moomin books with Moominvalley in November, Jansson drew her last comic strip and killed off her main character. Moomintroll is diagnosed by a psychiatrist, Dr Schrünkel, with numerous complexes, and prescribed medication which makes him shrink until he completely disappears. The following year, Jansson's younger brother Lasse took over the cartoons. Moomintroll was now resurrected, after which the stories continued to run until 1975.

    Tove Jansson is not the first writer to fall out with her characters. Arthur Conan Doyle tried to kill off Sherlock Holmes by throwing him down the Reichenbach Falls, and after 30 years of living with Hercule Poirot, Agatha Christie described him as a "detestable, bombastic, tiresome, egocentric little creep". What distinguishes Jansson is that she detested her readers even more than her characters. They are satirised in her first cartoon, Moomin and the Brigands, as the hordes of uninvited guests who exploit Moomin's generosity and, once they have eaten him out of house and home, eat the home itself: "It's so difficult to tell your guests that you'd like your own bed sometimes," Moomintroll confides to Sniff. "I must learn to say No".

    In 1963, Jansson and her partner, the graphic artist Tuulikki Pietilä, built a cabin on the "angry little skerry" of Klovharu, a rocky and isolated island which could be circumnavigated in four and a half minutes. Even here, where for the next 30 summers she did her best to disappear, she was pursued by boatloads of Moomin fans. "Seven strangers came... to have coffee, drinks and soft drinks and talk and 'look at me'", Jansson wrote in her diary. "Kiss my arse... Threw stones. Angry."

    Frances Wilson's "Burning Man: The Ascent of DH Lawrence" is published by Bloomsbury

    [See also: David Hockney writ large]

    Content from our partners




    All Comments: [-] | anchor

    hiAndrewQuinn(2317) 5 days ago [-]

    My favorite piece of Moomin lore is that the very first proto-Moomin sketch was a caricature of Immanuel Kant Tove made to tease her sister, who was a big fan of that guy.

    buovjaga(1157) 5 days ago [-]

    I read that same story a long time ago, but apparently it had things mixed up and this is the way it actually went down: https://www.moomin.com/en/blog/the-story-of-moomintrolls/

    'On a summer day, she was discussing literary philosophy with her brother Per Olov Jansson next to the outhouse at their summer cottage in the archipelago. Tove quoted Immanuel Kant, who Per Olov immediately downplayed. To get back at her brother, Tove drew the ugliest creature she could imagine on the outhouse wall. That drawing, out of chance, is the first glimpse of a Moomin-like figure, although Tove called it a Snork.'

    Arn_Thor(10000) 5 days ago [-]

    Grew up watching Moomin on TV and it left with life lessons, good values and deep trauma...

    monero-xmr(10000) 5 days ago [-]

    Somehow encompasses the life outlook of all my Finnish relatives

    baq(3579) 5 days ago [-]

    Now read the comic books...

    binarysneaker(10000) 5 days ago [-]

    Same. Here's the first season (in English) for anyone who's interested https://archive.org/details/moomin-season-1/%5BMoomin+Master...

    amiga386(10000) 5 days ago [-]

    Same here. I'm not sure what the 'not translated into English until 2005' in TFA is meant to mean; sure, maybe that specific book wasn't translated until that date, but much of Europe watched the Polish fuzzy-felt TV adaption in 1978 or 1985.

    tikotus(10000) 5 days ago [-]

    I'm not sure how tongue in cheek this was, but I assume it's serious. Either way, it's a fun and smart read.

    The article spots well the dark side of the moomins, but in my opinion goes too deep into it. My disagreements boil down to this: 'One of the oddest aspects of the Moomin phenomenon is how these complex tales of apocalypse, breakdown and disfunction have been consistently misread as cutesy celebrations of domestic life.' Yes, all these things exist, but the point to me has always been that they are cutesy despite that! The stories paint a very typical family dynamic (at least of the time, at least in a Finnish swedish speaking family like Tove's), throws it into weirdest situations, and they all survive together thanks to, and despite, their dysfunctions. And Moominmamma is the most wholesome character ever, period.

    philips(2100) 5 days ago [-]

    I love the books, I have read them all to my kids, and I agree that I think the article takes its thesis too far.

    The books are strange tales. They have dark undertones. And sometimes the adults take actions that only someone with life experience would really understand (e.g. Moominpappa wanting to suddenly upend everything in the families life and move to an isolated island). But, my kids mostly pick up on the adventure and the friendships.

    I feel that the Moomins are like most media that is enjoyable by both children and parents in this way (e.g. Bluey, Pixar films, etc.).

    bazoom42(10000) 5 days ago [-]

    The cutesy family parts kind of evaporates towards the later books though. The last book is about longing for a moominmamma which is no longer there.

    To be fair, Jansson never claimed she wrote for kids in the first place.

    fsloth(3038) 5 days ago [-]

    Spot on. I think the author did not think through their argument: ''One of the oddest aspects of the Moomin phenomenon is how these complex tales of apocalypse, breakdown and disfunction have been consistently misread as cutesy celebrations of domestic life.''

    But that's exactly what makes domestic life worth celebrating - at best it sustains you through disaster and hardship. What better way to celebrate it than to show it's strength?

    xg15(2454) 5 days ago [-]

    I wonder if the title was tongue in cheek. Dark Side of the Moo(mi)n?

    TeMPOraL(1657) 5 days ago [-]

    I've been listening to Moomin audiobooks and reading some of the books to my wife in recent years, and I started to spot some of the more adult/darker subtext in it (I'm still processing the one where the Moominpappa makes the entire family move to a lighthouse, and Moominmamma is desperately trying to cope with growing depression). Still, I have an answer for the author's conundrum, that's accurate for a significant fraction of the readerbase:

    > 'One of the oddest aspects of the Moomin phenomenon is how these complex tales of apocalypse, breakdown and disfunction have been consistently misread as cutesy celebrations of domestic life.'

    It's actually really simple. Here in Poland, myself and my entire generation grew up watching the children cartoon adaptation of the Moomins. It was cute, it was happy, it had nice art and music, it was suitable for small children but engaging even to older ones, and it was aired when all kids would be watching[0]. This was our generation's intro to the Moomins, and it colored how we read the books.

    I imagine the case is similar all across Europe. A whole generation primed to read these stories as positive and light-hearted, because of a TV adaptation.

    --

    [0] - https://en.wikipedia.org/wiki/Wieczorynka - public TV (TVP1), every day at 19:00, just before the evening news slot. In times I grew up, watching this was pretty much a national tradition for any family with children.

    nanis(3639) 5 days ago [-]

    First time I heard about the Moomins. I thought this was about Mumins[1].

    [1]: https://en.wikipedia.org/wiki/Mumin

    selimthegrim(2528) 5 days ago [-]

    The crossover waiting to happen

    bazoom42(10000) 5 days ago [-]

    Multiple comments here referring to tv-shows. Just be aware that Tove Jansson wrote and illustrated books and comics but did not produce tv shows. What you have seen was not created by Tove Jansson.

    The comics and the books are different in genre, even if they use the same characters and storylines. The comics are darkly satirial of modern life while the illustrated books feels more poetic and timeless.

    Fun fact: Jansson illustrated The Hobbit and drew Gollum as a giant. Tolkien realized he never described the size of Gollum and made adjustments to later editions.

    franek(10000) 5 days ago [-]

    > Fun fact: Jansson illustrated The Hobbit and drew Gollum as a giant. Tolkien realized he never described the size of Gollum and made adjustments to later editions.

    For those curious like me, here are some low-res images:

    https://zepe.de/tjillu/hobbit/index.html

    And here an article about the illustrations (haven't read) with a a few images in higher resolution (including Gollum):

    https://tovejansson.com/hobbit-tolkien/

    gs17(10000) 5 days ago [-]

    I don't think there's any reason to gatekeep this so strongly. The original anime and it's sequel, maybe, but both Tove and Lars Jansson were heavily involved with other series.

    https://en.wikipedia.org/wiki/The_Moomins_(TV_series) :

    > It is, in contrast to the 1990s series, widely believed to be the most faithful TV adaptation of Tove Jansson's stories, and much closer to her vision. Tove herself had a great deal of involvement during the series' production and was very happy with it (as revealed in an interview with Anne Wood in Simon Sheridan's 2007 book The A to Z of Classic Children's Television). The scripts for each episode were translated from Polish into Swedish and sent to Tove and Lars Jansson, who, if they felt that anything needed to be changed, corrected the script, expanding or rewriting it; afterwards, the scripts were sent back and only then did production of the particular episode begin.

    https://en.wikipedia.org/wiki/Moomin_(1990_TV_series) :

    > Tove and Lars Jansson were also involved with the screenplay by doing certain changes in scripts.

    briandw(10000) 5 days ago [-]

    I lived in Finland for a couple years. Finns, like the Moomins, are whimsical yet profound, like midsummer's fleeting joy before the long winter. They mirror Finland's love of nature and quiet isolation, with their cozy valley echoing the Finns forest cabins by a lake. The happy vibe hides struggles—tough winters, heavy drinking—but the Moomins' warmth reflects the Finns' wholesome character.

    Paianni(10000) 5 days ago [-]

    Finns (or at least, the successors to tribes that assimilated into the modern-day Finnish nation) were exposed to Christianity later than most of Europe. Pre-Christian religions generally held a higher regard for relationships with nature, that might explain what you're getting at.

    weregiraffe(10000) 4 days ago [-]

    You might reconsider trying to explain a nation of millions through a few books.

    lifeisstillgood(2085) 5 days ago [-]

    They are children's tales - which are designed to hide lessons and warnings on the dark side of life in a wrapper that does not traumatise - like an inoculation against what comes.

    Everything the Grimms brothers collected and Disney sanitised still hides warnings.

    I have read all my children "The Tiger who came to Tea" as well as taken them to theatre performances- and the author ran from Germany hours before the Gestapo came knocking and it affected much of her life and writing ("Hitler Stole Pink Rabbit" is the autobiography I think)

    So yeah. It's got layers onion boy, layers.

    Still have fond memories of my kid hugging a six foot moomin in Covent Garden.

    logifail(10000) 5 days ago [-]

    'Kerr, however, stated more than once that the tiger represents nothing more than a tiger, and had no relevance to her upbringing'[0]

    [0] https://en.wikipedia.org/wiki/The_Tiger_Who_Came_to_Tea

    tejas911(10000) 5 days ago [-]

    It's striking how Jansson's cozy Moomin universe is layered with existential dread and the realities of a war-torn era.

    hiAndrewQuinn(2317) 5 days ago [-]

    There is a fascinating throughline between the themes of Moomin universe and Adventure Time I've been waiting to see someone much more familiar than me with both sources spool out into a 3 hour long YouTube video I can set on in the background.

    account-5(10000) 5 days ago [-]

    I never read any of the books, didn't actually know they were originally a book. I grew up with the TV show though. Hated it. I've never watched TV or film for the feels. Tv and film for me is escapism, I don't want to be depressed or have to think. I'm assuming this is why I never liked the moomins.

    fsckboy(10000) 5 days ago [-]

    Tove wrote them as escapist escape:

    (FTA)

    The Moomin stories were born, Jansson wrote to her friend Eva, "when I was feeling sad and scared of bombs and wanted to get away from gloomy thoughts... I would creep into an unbelievable world where everything was natural and friendly – and possible."

    raptorraver(3368) 5 days ago [-]

    Don't have time to read through the whole article. But just wanted to point out that there are also Moomin cartoons which have really politically uncorrect stories: like Moomins travelling to spain, trying to buy opium but eating some weird drugs instead and then staring for sea for a week and missing their fligth back.

    biorach(3625) 5 days ago [-]

    To be fair, that's an uncannily accurate prediction of many visitors' experiences when visiting Ibiza

    buovjaga(1157) 5 days ago [-]

    Moomins at Torrelorca: https://www.oocities.org/ghb17/muumit.html

    Relevant pages:

    https://www.oocities.org/ghb17/muumi/18.jpg

    https://www.oocities.org/ghb17/muumi/19.jpg

    https://www.oocities.org/ghb17/muumi/20.jpg

    https://www.oocities.org/ghb17/muumi/21.jpg

    'Waiter, four marijuanas' - they end up scoring LBJ pills instead as marijuana was so last season.

    Note that the comic is by Lars Jansson, Tove's brother.

    nabla9(144) 5 days ago [-]

    "How can I be so thirsty when I've been drinking all night?" – Moomintroll (in the Cartoon)

    designerarvid(10000) 5 days ago [-]

    Tove Jansson also drew political satire cartoons during WW2. Before Mumintrollen.

    https://tovejansson.com/sv/story/illustrator-barnboksforfatt...

    culebron21(10000) 5 days ago [-]

    Question to Swedes: what were you child impressions of 'Pettson och Findus'? I read it to children as an adult, and impressions are that it tells of the funny & sad sides of taking care of children, and I sympathize to Pettson, of course. I wonder how you saw it as children.

    On topic: interesting read. I'd never think these stories had so much dark side to them.

    I got all 9 stories in 3 books at the age of 11 and read most of them, and was very happy with the stories, never noticing any of the dread the article speaks about.

    Especially the Midwinter story was fascinating - we lived not that North, but in cold winter mid-continent, and the story was like looking out watching for the first signs of the spring, that eventually always comes, but you shouldn't celebrate any of those too early -- when day temperature comes above 0 in March, you know it's going to be freezing in the evening. (Later I was stunned with foreigners in our city complain of this March weather, call it 'winter' and be depressed!)

    A few years ago someone on social networks posted her impressions from reading them out loud to children -- that indeed it's depressive.

    So I guess, the conclusion is that people make opposite meanings and moods of the same events.

    impossiblefork(10000) 5 days ago [-]

    I liked Pettson because he's awesome and invents things. I think he's like a physical version of the guy who writes a bunch scripts that together are able to do all his work.

    Findus is more of experimenter. He comes up with an idea about something, and ends up following that idea so that it gets tested. He isn't a systematic, scientific experimenter though, since he's a cat.

    I also liked all the little animals. To contrast that with the Moomin stories, I only saw it on TV, but it was immediately obvious that they were very austere and very Finnish, even though of course, the author is a Finland-Swede. It's good stuff, but can be, not scary, but something adjacent, to watch as a child. It might be worth it since it allows you to understand these characters in this very austere, isolated environment.

    patall(10000) 4 days ago [-]

    Not a swede (yet) but grew up with the books (and merch): I never identified Findus as a child as he was, obviously, a cat. It was fun comic around 3-9 but I cannot say the lesson ever really made sense to me since it was just too abstract. Just funny, like the other Nordquist books. I also liked the associated PC games, which where interesting as they where quite challenging at a certain age with lots of engineering puzzles. But at that point it is really not much about Findus anymore, just the general mood that comes from the comics. Oh, and my brother loved the pancake-cake, whose receipt we somehow got from the book.

    justaswede(10000) 4 days ago [-]

    I did like me some Petsson och Findus. Besides agreeing with sibling commenter, the melancholic story with the fox and the fireworks was impactful. The dark moments and their resolution were in general the most meaningful. Fully agree with the notion that it's misguided to deprave ('spare') children from struggles and difficult questions of life. Nothing graphical or depraved but you get the point.

    As for the Moomins, I don't know what you all are on about in the comments. I'm with OP on this one. Lasting child Moomin impressions:

    - Original comic: Dark, heavy, existential, anxious, depressed, sarcastic, 'this is probably not for kids'. Still loved them and still find them underrated and wish more people read them.

    - Mainstream TV cartoon: Fun fantastical times. And Groke (aka Mårran) was indeed nightmare material

    - Newspaper comic: Couldn't keep track

    - TV live action: Now this was the true nightmare material. I think it was supposed to be lighthearted but my brother at 37 still talks of how it traumatized him.





    Historical Discussions: Whenever: Typed and DST-safe datetimes for Python (April 13, 2025: 286 points)
    Typed and DST-safe datetimes for Python, available in Rust or pure Python (January 26, 2025: 4 points)

    (286) Whenever: Typed and DST-safe datetimes for Python

    286 points 5 days ago by pkkm in 3280th position

    github.com | Estimated reading time – 9 minutes | comments | anchor

    Typed and DST-safe datetimes for Python, available in Rust or pure Python.

    Do you cross your fingers every time you work with Python's datetime—hoping that you didn't mix naive and aware? or that you avoided its other pitfalls? There's no way to be sure...

    ✨ Until now! ✨

    Whenever helps you write correct and type checked datetime code, using well-established concepts from modern libraries in other languages. It's also way faster than other third-party libraries—and usually the standard library as well. If performance isn't your top priority, a pure Python version is available as well.

    RFC3339-parse, normalize, compare to now, shift, and change timezone (1M times)

    ⚠️ Note: A 1.0 release is coming soon. Until then, the API may change as we gather feedback and improve the library. Leave a ⭐️ on GitHub if you'd like to see how this project develops!

    Why not the standard library?

    Over 20+ years, Python's datetime has grown out of step with what you'd expect from a modern datetime library. Two points stand out:

    1. It doesn't always account for Daylight Saving Time (DST). Here is a simple example:

      bedtime = datetime(2023, 3, 25, 22, tzinfo=ZoneInfo('Europe/Paris'))
      full_rest = bedtime + timedelta(hours=8)
      # It returns 6am, but should be 7am—because we skipped an hour due to DST!

      Note this isn't a bug, but a design decision that DST is only considered when calculations involve two timezones. If you think this is surprising, you are not alone.

    2. Typing can't distinguish between naive and aware datetimes. Your code probably only works with one or the other, but there's no way to enforce this in the type system!

      # Does this expect naive or aware? Can't tell!
      def schedule_meeting(at: datetime) -> None: ...

    There are two other popular third-party libraries, but they don't (fully) address these issues. Here's how they compare to whenever and the standard library:

    Whenever datetime Arrow Pendulum DST-safe ✅ ❌ ❌ ⚠️ Typed aware/naive ✅ ❌ ❌ ❌ Fast ✅ ✅ ❌ ❌

    Arrow is probably the most historically popular 3rd party datetime library. It attempts to provide a more 'friendly' API than the standard library, but doesn't address the core issues: it keeps the same footguns, and its decision to reduce the number of types to just one (arrow.Arrow) means that it's even harder for typecheckers to catch mistakes.

    Pendulum arrived on the scene in 2016, promising better DST-handling, as well as improved performance. However, it only fixes some DST-related pitfalls, and its performance has significantly degraded over time. Additionally, it's in maintenance limbo with only one release in the last four years, and many issues remaining unaddressed.

    • 🌐 DST-safe arithmetic
    • 🛡️ Typesafe API prevents common bugs
    • ✅ Fixes issues arrow/pendulum don't
    • ⚖️ Based on proven and familiar concepts
    • ⚡️ Unmatched performance
    • 💎 Thoroughly tested and documented
    • 📆 Support for date arithmetic
    • ⏱️ Nanosecond precision
    • 🦀 Rust!—but with a pure-Python option
    • 🚀 Support for the latest GIL-related improvements (experimental)
    >>> from whenever import (
    ...    # Explicit types for different use cases
    ...    Instant,
    ...    ZonedDateTime,
    ...    LocalDateTime,
    ... )
    # Identify moments in time, without timezone/calendar complexity
    >>> now = Instant.now()
    Instant(2024-07-04 10:36:56Z)
    # Simple, explicit conversions
    >>> now.to_tz('Europe/Paris')
    ZonedDateTime(2024-07-04 12:36:56+02:00[Europe/Paris])
    # A 'naive' local time can't accidentally mix with other types.
    # You need to explicitly convert it and handle ambiguity.
    >>> party_invite = LocalDateTime(2023, 10, 28, hour=22)
    >>> party_invite.add(hours=6)
    Traceback (most recent call last):
      ImplicitlyIgnoringDST: Adjusting a local datetime implicitly ignores DST [...]
    >>> party_starts = party_invite.assume_tz('Europe/Amsterdam')
    ZonedDateTime(2023-10-28 22:00:00+02:00[Europe/Amsterdam])
    # DST-safe arithmetic
    >>> party_starts.add(hours=6)
    ZonedDateTime(2023-10-29 03:00:00+01:00[Europe/Amsterdam])
    # Comparison and equality
    >>> now > party_starts
    True
    # Rounding and truncation
    >>> now.round('minute', increment=15)
    Instant(2024-07-04 10:30:00Z)
    # Formatting & parsing common formats (ISO8601, RFC3339, RFC2822)
    >>> now.format_rfc2822()
    'Thu, 04 Jul 2024 10:36:56 GMT'
    # If you must: you can convert to/from the standard lib
    >>> now.py_datetime()
    datetime.datetime(2024, 7, 4, 10, 36, 56, tzinfo=datetime.timezone.utc)

    Read more in the feature overview or API reference.

    • 🧪 0.x: get to feature-parity, process feedback, and tweak the API:

      • ✅ Datetime classes
      • ✅ Deltas
      • ✅ Date and time of day (separate from datetime)
      • ✅ Implement Rust extension for performance
      • 🚧 Tweaks to the delta API
    • 🔒 1.0: API stability and backwards compatibility

      • 🚧 Customizable parsing and formatting
      • 🚧 Intervals
      • 🚧 Ranges and recurring times
      • 🚧 Parsing leap seconds
    • Supports the proleptic Gregorian calendar between 1 and 9999 AD
    • Timezone offsets are limited to whole seconds (consistent with IANA TZ DB)
    • No support for leap seconds (consistent with industry standards and other modern libraries)

    Versioning and compatibility policy

    Whenever follows semantic versioning. Until the 1.0 version, the API may change with minor releases. Breaking changes will be meticulously explained in the changelog. Since the API is fully typed, your typechecker and/or IDE will help you adjust to any API changes.

    ⚠️ Note: until 1.x, pickled objects may not be unpicklable across versions. After 1.0, backwards compatibility of pickles will be maintained as much as possible.

    Whenever is licensed under the MIT License. The binary wheels contain Rust dependencies which are licensed under similarly permissive licenses (MIT, Apache-2.0, and others). For more details, see the licenses included in the distribution.

    This project is inspired by—and borrows most concepts from—the following projects. Check them out!

    The benchmark comparison graph is based on the one from the Ruff project. For timezone data, Whenever uses Python's own zoneinfo module.




    All Comments: [-] | anchor

    wesselbindt(10000) 5 days ago [-]

    Ah nice it solves the Liskov violation that the standard library has. In the standard library, dates can be compared with <, and datetimes are dates. But compare a datetime with a date with <, and you get an error. This drove me nuts at work recently.

    I wonder what benefits this choice has that outweigh the risks of this behavior.

    heavenlyblue(10000) 5 days ago [-]

    What do you expect? There are so many ways to handle this behvaiour it's pretty obvious why this is not allowed. Do you take datetime.date and then compare? Do you assume all dates are datetimes at midnight?

    OJFord(667) 5 days ago [-]

    What would you do about equality comparisons?

    wodenokoto(3676) 5 days ago [-]

    Funny it doesn't add comparison to date times in pandas, which is probably used to handle more dates than any of the others.

    jiggunjer(10000) 5 days ago [-]

    Pandas uses stdlib or numpy for it seems.

    Kwpolska(3586) 5 days ago [-]

    > available in Rust or pure Python.

    Hard pass. The complexity of having to use binary packages or build things is not worth the performance benefit. The pure-Python version requires building from source and passing special flags, so it is not possible to specify it in requirements.txt.

    stavros(1602) 5 days ago [-]

    That seems like an easy fix, they could release it as `whenever[pure]`. It would probably take less time to write up the issue than to write your comment.

    OJFord(667) 5 days ago [-]

    > The pure-Python version requires building from source and passing special flags, so it is not possible to specify it in requirements.txt.

    You can put any flags in requirements.txt, including -r[equiring] another txt etc.

    Your point may apply to modern pyproject.toml tooling though, or at least that it wouldn't be simply another entry in the dependencies array.

    BiteCode_dev(2837) 5 days ago [-]

    Ah, so you are not using pyQT, numpy, any database driver, pillow or anything using cryptography, then?

    apeters(10000) 5 days ago [-]

    Am I the only one to stick with the std lib, read the docs and changelogs carefully, and implement functions I really need the way my application makes use of them?

    I learned the hard way, that dependencies kill projects.

    Not saying this isn't great, thanks for creating it! It does have its use cases, of course.

    pkkm(3280) 5 days ago [-]

    I'm not the creator, the credit for that goes to Arie Bovenberg. I just wanted to show this to people.

    EdwardDiego(3564) 5 days ago [-]

    There are so many footguns in the datetime lib.

    That's why I use a Flake8 plugin to prohibit especially egregious footguns.

    https://github.com/jkittner/flake8-ban-utcnow

    stavros(1602) 5 days ago [-]

    > Am I the only one to stick with the std lib, read the docs and changelogs carefully

    I work in healthcare. If I have a choice between 'reading docs/changelogs carefully, implementing functions', and 'adding an extra dependency', I'm taking the dependency every single time.

    I don't want footguns in my code, I don't want code I have to write and test myself, and I don't want to have to become an expert in a domain before I can write something that serves my purpose.

    For the datetime library, specifically, I'm switching to whenever for everything, because I've been bitten by conversions and naive/aware datetime confusion too many times.

    sgarland(10000) 5 days ago [-]

    You are a sad minority, IME. I'm right there with you. I extended the uuid library to generate UUIDv7, based off of the RFC. It's pretty easy to implement, as it turns out. Overruled, because "we don't want to have to maintain additional code." As if the ABI for bitshifts is going to change?!

    mvanbaak(10000) 5 days ago [-]

    As others stated, there are many rough edges and footguns in the stdlib. BUT ... in my (and yours apparently) opinion, it's a matter of knowing those edges/guns, and work with them. Like you, I also prefer to create my own code around those instead of bringing in some library that brings in their own foot guns and possibly sub-dependencies and and and...

    mr_mitm(10000) 5 days ago [-]

    Are you saying you never pull in dependencies? Why stop there, why not re-implement the std lib as well? Surely there is a sensible middle ground: If you only need a small part of a dependency, consider implementing it. If you make heavy use of a dependency and want to benefit of years if not decades of dedicated developers testing and maturing its code, with a large community who has already stepped in all pitfalls you might step into and collectively encountered all the edge cases, just use the dependency.

    xandrius(10000) 5 days ago [-]

    Creating from scratch also creates hidden debt, it's just moved onto yourself. Especially when working with dates and timezones.

    dmos62(10000) 5 days ago [-]

    Curious about examples of projects being killed by dependencies.

    foolfoolz(10000) 5 days ago [-]

    this is a great idea if you want to slow down your project. most projects start with few rules and "best practices" like this. everyone is free to pull in dependencies as needed. because they are needed. but then once the project grows larger, those who have been around longer want to reverse course and gatekeep dependencies. but this is the opposite of what helped the project grow initially. and later contributors have a harder time making similar progress because they have to fight to add basic libraries. ensuring that efficiency per engineer goes down

    johnfn(10000) 5 days ago [-]

    I think this is fairly unrealistic. Does all your datetime manipulation involve proper use of the fold parameter as indicated in the article?

    BiteCode_dev(2837) 5 days ago [-]

    Functions that you have to document, test and maintain of course. You do that, right? And all the people in your team, they do that and will keep doing that once you leave, right? And they all understand the business domain and all the pitfalls that come with it and have the skill, time, and resources to take care of it, right?

    And this for every single problem: time, text, maths, network, parsing, formatting, validating, authenticating...

    snvzz(2530) 5 days ago [-]

    A tangent, but I hope the world gets its shit together and gets rid of DST.

    I am currently enjoying DST-free life in Japan, and feel that people around the world deserve to get this much respect from their own official clocks.

    Mountain_Skies(10000) 5 days ago [-]

    Almost everyone wants to get rid of the twice annual clock changes but are nearly evenly divided on if DST should be permanent or cease to exist. It's a strange artifact of wanting clock noon to be the midpoint of the workday but also wanting to maximize the hours of daylight after work.

    layer8(860) 5 days ago [-]

    I would wish for that as well, but it's unlikely to happen. In the EU for example, some countries would be on the losing side, either by getting "bad" hours or by having to move to a different time zone than their neighbor, which has significant economic consequences. Such countries won't agree to a DST abolishment that disadvantages them.

    And for program code, it wouldn't really help as long as it's still expected to be able to correctly handle dates in the past.

    BrandoElFollito(3407) 5 days ago [-]

    Dates and HTTP requests are the two things I always manipulate through libraries (no matter the language, except maybe for timestamps). It is so much simpler that way.

    I am an amateur dev, though, so maybe someone who masters the language will be better off using the raw standard libraries.

    scott_w(10000) 5 days ago [-]

    Honestly, no. There are times when you want to get low level but, when you do, you need to commit to learning that domain as well as the problem domain you're being paid to solve. If those are disjoint, well, have fun!

    vjerancrnjak(10000) 5 days ago [-]

    Does someone know when these performance issues matter? My understanding is that datetime is a shortlived object, you wouldn't want thousands of datetime objects all over the codebase.

    Almost all of the time UTC is enough, if I need to filter/bucket/aggregate by some range, I can reach for datetime with tz for these filter/bucket/aggregate criteria, convert them to UTC and on continues `int` comparison.

    I'd imagine all of the cases handled by Whenever are mostly when datetime is a long lived object, which I don't see a need for at all.

    I use it purely for allowing tz input from client, convert to UTC immediately when it arrives, or, if I really need the tz, then save it separately, which is rare (one example is calendar, where tz should be stored, although probably not even next to every UTC but at the user level, another is workforce scheduling, where 8am-4pm or 8pm-4am can mean different things for different locations -- but this is no longer datetime, it's purely time in a timezone).

    crazygringo(10000) 5 days ago [-]

    In my experience it's for calendar-related stuff. You need to store things permanently with the timezone, especially for recurring events. You don't want your scheduled lunch to move from 12 to 1 because it's DST.

    And so anything server-related with calendars will be making tons of these conversions constantly. And you can't cache things long-term in UTC because the conversions of future events can change, when countries change DST etc.

    Hasnep(10000) 5 days ago [-]

    If you've not read the blog post that explains why this library exists I recommend it. It's called 'Ten Python datetime pitfalls, and what libraries are (not) doing about it'

    https://dev.arie.bovenberg.net/blog/python-datetime-pitfalls...

    JodieBenitez(10000) 5 days ago [-]

    Excellent read.

    jwilk(2140) 5 days ago [-]

    Discussed on HN back then:

    https://news.ycombinator.com/item?id=39417231 (147 comments)

    barbazoo(2418) 5 days ago [-]

    I am a seasoned programmer but whenever I deal with datetime objects I do my best with unit tests and then just hope none of these "edge" cases apply to us. Meaning: I have no idea really how it works under the hood.

    Now at least there's an LLM that might spot a bug every now and then so that's nice.

    qwertox(10000) 5 days ago [-]

    > If performance isn't your top priority, a pure Python version is available as well.

    Then it would have been nice to see the benchmarks of the pure Python implementation as well. What if it's worse than arrow?

    ariebovenberg(10000) 5 days ago [-]

    Author here. It's answered briefly in the FAQ

    > In casual benchmarks, the pure-Python version is about 10x slower than the Rust version, making it 5x slower than the standard library but still (in general) faster than Pendulum and Arrow.

    '(in general)' here since the speed compares differently per operation, while the Rust version is faster across the board. That said, there's no operation that is _significantly_ (or unnecessarily) slower than Arrow or Pendulum.

    edit: I'm considering adding comparison to the pure Python version once I get the time for a more expanded 'benchmarks' page in the docs

    iknownothow(10000) 5 days ago [-]

    I've read the link and the GitHub readme page.

    I'm sure I'm in the top 1% of software devs for the most number of timestamps parsed. [1]

    DST is not a problem in Python. It's parsing string timestamps. All libraries are bad, including this one, except Pandas. Pandas does great at DST too btw.

    And I'm not shilling for Pandas either. I'm a Polars user who helicopters Pandas in whenever there's a timestamp that needs to be parsed.

    Pandas has great defaults. Here's string timestamps I expect to be paesed by default. I'm willing to pass timezone in case of naive timestamps:

    * All ISO 8601 formats and all its weird mutant children that differ by a tiny bit.

    * 2025-05-01 (parsed not as date, but as timestamp)

    * 2025-05-01 00:00:00 (or 00.0 or 00.000 or 0.000000 etc)

    * 2025-05-01 00:00:00z (or uppercase Z or 00.0z or 00.000z or 0.000000z)

    * 2025-05-01 00:00:00+02:00 (I don't need this converted to some time zone. Store offset if you must or convert to UTC. It should be comparable to other non naive timestamps).

    * 2025-03-30 02:30:00+02:00 (This is a non existent timestamp wrt European DST but a legitimate timestamp in timestamp representation, therefore it should be allowed unless I specify CET or Europe/Berlin whatever)

    * There's other timestamps formats that are non standard but are obvious. Allow for a Boolean parameter called accept_sensible_string_parsing and then parse the following:

      \* 2025-05-01 00:00 (HH:mm format)
      \* 2025-05-01 00:00+01:00 (HH:mm format)
    
    [1] It's not a real statistic, it's just that I work with a lot of time series and customer data.

    Disclaimer: I'm on the phone and on the couch so I wasn't able to test the lib for its string parsing before posting this comment.

    ariebovenberg(10000) 5 days ago [-]

    Author here. It's indeed a hard problem to parse 'All ISO 8601 formats and all its weird mutant children that differ by a tiny bit.' Since the ISO standard is so expansive, every library needs to decide for itself what to support. The ISO standard allows all sorts of weird things, like 2-digit years, fractional months, disallowing -00:00 offset, ordinal days, etc.

    Javascript's big datetime redesign (Temporal) has an interesting overview of the decisions they made [1]. Whenever is currently undergoing an expansion of ISO support as well, if you'd like to chime in [2].

    [1] https://tc39.es/proposal-temporal/#sec-temporal-iso8601gramm... [2] https://github.com/ariebovenberg/whenever/issues/204#issueco...

    mixmastamyk(3343) 5 days ago [-]

    Sounds like we need an industry/language-wide test suite to check these many date/time/calendar libraries against. Like the browser acid tests, though focused to baseline functionality only.

    https://en.wikipedia.org/wiki/Acid3

    I like this new lib (Thank You) but the name unfortunately implies the opposite of what it is. 'Whenever' sounds like you don't care, but you'd only be using this if you did care! Also Shakira, haha. Hmm, pedantic is taken. Timely, precise, punctual, meticulous, ahorita, pronto, etc. I like that temporal name.

    Finally, none of these links mention immutability, but it should be mentioned at the top.

    mdaniel(3640) 5 days ago [-]

    Without the slightest sense of irony, I actually strongly suspect such a test suite would only be valid at one moment in time, since the timezone legislation is almost continuously in flux. That's why <https://www.iana.org/time-zones> and its friend <https://www.oracle.com/java/technologies/javase/tzupdater-re...> exist. As if to illustrate my point, the latest update was 2025-03-22, presumably nuking any such conformance test from Mar 21st

    kelseydh(10000) 4 days ago [-]

    A big revelation for me in solving so much timezone insanity came from realising that timezones should be expressed as locations rather than zones.

    Avoid general terms like 'Pacific Standard Time' and stick to location-specific ones like: 'Vancouver/Canada'. The latter is how people expect their time to work, and correctly handles whatever quirky choices jurisdictions choose to do with their time.

    throwaway2037(2851) 4 days ago [-]

    In my experience, all worthy date/time libraries use time zone IDs from the 'tz database'. Ref: https://en.wikipedia.org/wiki/Tz_database

    Searching the list here: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones

    I cannot find an entry for 'Pacific Standard Time' nor 'Vancouver/Canada', but I can see: 'America/Vancouver'.

    JimDabell(2160) 4 days ago [-]

    The rule of thumb is: Use UTC to record when things happened (e.g. logging), use local time + timezone name (e.g. `Europe/London`) to schedule things for the future (e.g. meetings).





    Historical Discussions: What Is Entropy? (April 14, 2025: 285 points)

    (285) What Is Entropy?

    285 points 4 days ago by jfantl in 3380th position

    jasonfantl.com | Estimated reading time – 39 minutes | comments | anchor

    People say many things about entropy: entropy increases with time, entropy is disorder, entropy increases with energy, entropy determines the arrow of time, etc.. But I have no idea what entropy is, and from what I find, neither do most other people. This is the introduction I wish I had when first told about entropy, so hopefully you find it helpful. My goal is that by the end of this long post we will have a rigorous and intuitive understanding of those statements, and in particular, why the universe looks different when moving forward through time versus when traveling backward through time.

    This journey begins with defining and understanding entropy. There are multiple formal definitions of entropy across disciplines—thermodynamics, statistical mechanics, information theory—but they all share a central idea: entropy quantifies uncertainty. The easiest introduction to entropy is through Information Theory, which will lead to entropy in physical systems, and then finally to the relationship between entropy and time.

    Information Theory

    Imagine you want to communicate to your friend the outcome of some random events, like the outcome of a dice roll or the winner of a lottery, but you want to do it with the fewest number of bits (only 1s and 0s) as possible. How few bits could you use?

    The creator of Information Theory, Claude Shannon, was trying to answer questions such as these during his time at Bell labs. He was developing the mathematical foundations of communication and compression, and eventually he discovered that the minimum number of bits required for a message was directly related to the uncertainty of the message. He was able to then formulate an equation to quantify the uncertainty of a message. When he shared it with his physicist colleague at Bell Labs, John von Neumann, von Neumann suggested calling it entropy for two reasons:

    Von Neumann, Shannon reports, suggested that there were two good reasons for calling the function "entropy". "It is already in use under that name," he is reported to have said, "and besides, it will give you a great edge in debates because nobody really knows what entropy is anyway." Shannon called the function "entropy" and used it as a measure of "uncertainty," interchanging the two words in his writings without discrimination. — Harold A. Johnson (ed.), Heat Transfer, Thermodynamics and Education: Boelter Anniversary Volume (New York: McGraw-Hill, 1964), p. 354.

    Later we will see that the relationship between Shannon's entropy and the pre-existing definition of entropy was more than coincidental, they are deeply intertwined.

    But now let us see how Shannon found definitions for these usually vague terms of "information" and "uncertainty".

    In Information Theory, the information of an observed state is formally defined as the number of bits needed to communicate that state (at least for a system with equally likely outcomes with powers of two, we'll see shortly how to generalize this). Here are some examples of information:

    • If I flip a fair coin, it will take one bit of information to tell you the outcome: I use a 0 for head and a 1 for tails.
    • If I roll a fair 8-sided dice, I can represent the outcome with 3 bits: I use 000 for a 1, 001 for 2, 010 for 3, etc.

    The more outcomes a system can have, the more bits (information) it will require to represent its outcome. If a system has equally likely outcomes, then it will take bits of information to represent an outcome of that system.

    Entropy is defined as the expected number of bits of information needed to represent the state of a system (this is a lie, but it's the most useful definition for the moment, we'll fix it later). So the entropy of a coin is 1 since on average we expect it to take 1 bit of information to represent the outcome of the coin. An 8-sided dice will have an entropy of 3 bits, since we expect it to take an average of 3 bits to represent the outcome.

    It initially seems that entropy is an unnecessary definition since we can just look at how many bits it takes to represent the outcome of our system and use that value, but this is only true when the chance of the outcomes are all equally likely.

    Imagine now that I have a weighted 8-sided dice, so the number 7 comes up % of the time while the rest of the faces come up % of the time. Now, if we are clever, we can reduce the expected number of bits needed to communicate the outcome of the dice. We can decide to represent a 7 with a 0, and all the other numbers will be represented with 1XXX where the Xs are some unique bits. This would mean that % percent of the time we only have to use 1 bit of information to represent the outcome, and the other % of the time we use 4 bits, so the expected number of bits (the entropy of the dice) is 2.5. This is lower than the 3 bits of entropy for the fair 8-sided dice.

    Fortunately, we don't need to come up with a clever encoding scheme for every possible system, there exists a pattern to how many bits of information it takes to represent a state with probability . We know if such as in the case of a coin landing on heads, then it takes 1 bit of information to represent that outcome. If such as in the case of a fair 8-sided dice landing on the number 5, it takes 3 bits of information to represent that outcome. If such as in the case of our unfair 8-sided dice landing on the number 7, then it takes 1 bit of information, just like the coin, which shows us that all that matters is the probability of the outcome. With this, we can discover an equation for the number of bits of information needed for a state with probability .

    This value is usually called information content or surprise, since the lower the probability of a state occurring, the higher the surprise when it does occur.

    When the probability is low, the surprise is high, and when the probability is high, the surprise is low. This is a more general formula then "the number of bits needed" since it allows for states that are exceptionally likely (such as % likely) to have surprise less then 1, which would make less sense if we tried to interpret the value as "the number of needed bits to represent the outcome".

    And now we can fix our definition of entropy (the lie I told earlier). Entropy is not necessarily the expected number of bits used to represent a system (although it is when you use an optimal encoding scheme), but more generally the entropy is the expected surprise of the system.

    And now we can calculate the entropy of systems like a dice or a coin or any system with known probabilities for its outcomes. The expected surprise (entropy) of a system with possible outcomes each with probability (all adding up to 1) can be calculated as

    And notice that if all the probabilities are the same (so ), then the entropy equation can simplify to

    Here are some basic examples using .

    • The entropy of a fair coin is
    • The entropy of a fair 8-sided dice is
    • The entropy of an unfair 8-sided dice, where the dice lands on one face % of the time and lands on the other faces the remaining % of the time with equal probability (about % each), is

    Hopefully it is a bit more intuitive now that entropy represents uncertainty. An 8-sided dice would have higher entropy than a coin since we are more uncertain about the outcome of the 8-sided dice than we are about the coin (8 equally likely outcomes are more uncertain than only 2 equally likely outcomes). But a highly unfair 8-sided dice has less entropy than even a coin since we have very high certainty about the outcome of the unfair dice. Now we have an actual equation to quantify that uncertainty (entropy) about a system.

    It is not clear right now how this definition of entropy has anything to do with disorder, heat, or time, but this idea of entropy as uncertainty is fundamental to understanding the entropy of the universe which we will explore shortly. For reference, this definition of entropy is called Shannon entropy.

    We will move on now, but I recommend looking further into Information Theory. It has many important direct implications for data compression, error correction, cryptography, and even linguistics, and touches nearly any field that deals with uncertainty, signals, or knowledge.

    Physical Entropy

    Now we will see entropy from a very different lens, that of Statistical Mechanics. We begin with the tried-and-true introduction to entropy which every student is given.

    Balls in a box

    I shall give you a box with 10 balls in it, through , and we will count how many balls are on the left side of the box and on the right side of the box. Assume every ball is equally likely to be on either side. Immediately we can see it is highly unlikely that we count all the balls are on the left side of the box, and more likely that we count an equal number of balls on each side. Why is that?

    Well, there is only one state in which we count all the balls on the left, and that is if every ball is on the left (truly astounding, but stay with me). But there are many ways in which the box is balanced: We could have through one side and the rest on the other, or the same groups but flipped from left to right, or we could have all the even balls on one side and the odd on the other, or again flipped, or any of the other many possible combinations.

    This box is a system that we can measure the entropy of, at least once I tell you how many balls are counted on each side. It can take a moment to see, but imagine the box with our left and right counts as a system where the outcome will be finding out where all the individual balls are in the box, similar to rolling a dice and seeing which face it lands on.

    This would mean that the box where we count all the balls on the left side only has one possible outcome: all the balls are on the left side. We would take this to mean that this system has entropy (no expected surprise) since we already know where we will find each individual ball.

    The box with balanced sides (5 on each) has many possible equally likely outcomes, and in fact, we can count them. A famous equation in combinatorics is the N-choose-k equation, which calculates exactly this scenario. It tells us that there are 252 possible ways in which we can place 5 balls on each side. The entropy for this system would then be . This is the same as calculating the entropy of a 252-sided dice.

    And if we were to increase the number of balls, the entropy of the balanced box would increase since there would then be even more possible combinations that could make up a balanced box.

    We should interpret these results as: The larger the number of ways there are to satisfy the large-scale measurement (counting the number of balls on each side), the higher the entropy of the system. When all the balls are on the left, there is only one way to satisfy that measurement and so it has a low entropy. When there are many ways to balance it on both sides, it has high entropy.

    Here we see 1000 balls bouncing around in a box. They will all start on the left, so the box would have 0 entropy, but once the balls start crossing to the right and changing the count on each side, the entropy will increase.

    In Statistical Mechanics, the formal term for the large-scale measurement is the macrostate, and the specific states that can satisfy that measurement are microstates. We would call the measurement of the number of balls on each side of the box the macrostate, and the different combinations of positions of individual balls the microstates. So rephrasing the above: There is only one microstate representing the macrostate of all balls being counted on one side, and there are many microstates representing the macrostate of a balanced box.

    But why did we decide to measure the number of balls on the left and right? We could have measured a different macrostate, and the entropy would be different.

    Macrostates

    Imagine instead of selecting the left and right halves of the box to count the number of balls, we instead count how many balls are in each pixel of the box. In this scenario, the entropy would almost always be maximized, as the balls rarely share a pixel. Even if all the balls were on the left side of the box, they would likely still each occupy a different pixel, and the measured entropy would be the same as if the balls were evenly distributed in the box.

    If we use an expensive instrument to measure the box and track the balls with high precision, then the entropy would rarely change and would be very high. If we instead use an inexpensive instrument that can only tell if a ball is on the left or right of the box, then the entropy will be low and could very easily fluctuate if some of the balls temporarily end up on the same side of the box.

    Let's run exactly the same simulation of 1000 balls in the box again, still starting with the balls on the left. But, this time we count how many balls are in each cell in a 50x50 grid, as opposed to the previous two cells (the left and right cells). The entropy will be high since there are many microstates that represent a bunch of cells with only 1 ball in it, and the entropy won't change much since two balls rarely share the same cell. Recall that if two balls share the same cell, the count would go up, and there are fewer microstates that satisfy a cell with a count of 2 compared to two cells with a count of 1 in each.

    Entropy is not intrinsic to the physical system alone, but rather to our description of it as well — i.e., the macrostate we're measuring, and the resolution at which we observe it.

    This process of measuring a lower-resolution version of our system (like counting how many balls are on the left or right side of a box) is called coarse-graining.

    How we choose/measure the macrostate, that is, how we coarse-grain the system, is dependent on the problem we are solving.

    • Imagine you have a box of gas (like our balls in a box, but at the scale of balls in the box), and we place a temperature-reader on the left and right side of the box. This gives us a macrostate of two counts of the average ball speed on the left and right sides of the box. We can then calculate the entropy by comparing when the temperature-readers are equal to when they are different by degrees. Once we learn how time and entropy interact, we will use this model to show that the two temperature-readers are expected to converge to the same value over time.
    • Imagine you sequence the genome of many different people in a population, you could choose many different macrostates based on what you care about. You could count how many of each nucleotide there are in all the sequences, allowing you to quantify how variable the four nucleotides are in DNA. You could calculate the entropy of every individual position in the DNA sequence by counting how many nucleotide types are used in that position across the population, allowing you to identify portions of DNA that are constant across individuals or vary across individuals.

    How you choose to measure the macrostate can come in many forms for the same system, depending on what you are capable of measuring and/or what you care about measuring.

    But once we have a macrostate, we need a way to identify all the microstates and assign probabilities to them.

    Microstates

    When we were looking at the positions of balls in a box in equally sized cells, it was easy to see that every ball was equally likely to be in any of the cells, so each microstate was equally likely. This made calculating the entropy very simple, we just used the simplified version of to find that for microstates that satisfy a given macrostate, the entropy of the system is . It isn't too hard to extend this idea to microstates that are not equally likely.

    For example, let's calculate the entropy of a box with 5 balls on the left and 5 balls on the right, but we replace one of the balls in the box with a metal ball that is pulled by a magnet to the left. In this case, the probability of each microstate is no longer equally likely. If we assume there is an % chance that the metal ball is on the left side instead of the right side, then the entropy of the box can be calculated as follows: For all of the 252 microstates, 126 of them have the metal ball on the left, which has a chance of being true, and the other 126 have the metal ball on the right with a chance. This means using the we get an entropy of

    This is a little less than the box with normal balls which had entropy. This is exactly what we should expect, we are a bit more certain about the outcome of this system since we knew where one of the balls was more likely to be.

    But this raises a subtle question: why did we choose this particular set of microstates? For example, if we have the macrostate of 5 balls on the left and 5 balls on the right, but we decide to use the 50x50 grid of cells to describe the microstates, then there are far more microstates that satisfy the macrostate compared to when we were using the 2x1 grid of left and right.

    Let's calculate the entropy for those two examples. Keep in mind they both have the same macrostate: 5 balls on the left and 5 balls on the right.

    • If we choose to use the microstates of looking at the position of individual balls between two cells splitting the box in half, then we can use n-choose-k to calculate that there are 252 possible combinations of balls across the two cells. This gives us an entropy of .
    • If we choose to use the microstates of looking at the position of individual balls between 50x50 (2500) cells splitting the box into a grid, then we can use n-choose-k to calculate that there are 252 possible combinations of balls across the two halves of the box, for each of which every ball could be in any of 50x25 (1250) cells. This gives us an entropy of .

    This result lines up very well with our Information-theoretic understanding of entropy: when we allow more microstates to represent the same macrostate, we are more uncertain about the microstate our system is in. But this result does raise some concerns.

    If different microstates give different entropy, how do we choose the right microstates for our problem? Unlike the macrostate, this decision of which microstates to use is not determined by our instruments or the scope of the problem, it has to be determined by the person making the calculation. Often for physical systems people will use the set of microstates that capture all the relevant information related to the macrostate. For example, if our macrostate is about balls on the left or right side of a box, then we probably don't care about the ball's velocity or mass or anything else but the ball position.

    Another concern is that it feels wrong that the same physical system with the same macrostate can have different entropies depending on the microstate representation we use. Usually, we expect physical systems to have invariant measurements regardless of the internal representation we decide to use for our measurement. But this is incorrect for entropy. We need to recall that entropy is the uncertainty of a system and that the definition of entropy is completely dependent on what we are uncertain about, which for physical systems are the microstates. This would be similar to someone asking "How many parts make up that machine?", to which we should respond "How do you define a 'part'?". When we ask "What is the entropy of this macrostate?", we need to respond with "What microstates are we using?".

    With all that said, there is some small truth to what our intuition is telling us, although it doesn't apply to the general case. While the entropy of the system changes when we change the microstates, the relative differences in entropy across macrostates will be equal if the new microstates uniformly multiply the old microstates. That is, if each original microstate is split into the same number of refined microstates, then the entropy of every macrostate increases by a constant. We're getting lost in the terminology, an example will demonstrate.

    Let us again take the 10 balls in a box, and we will calculate the entropy of the system for a few different macrostates and microstate representations. We indicate the number of balls on each side of the box with (L, R), where L is the number of balls on the left and R is the number of balls on the right. Then we calculate the entropy using the microstate of a 2x1 grid of cells (just the left and right halves of the box) and for the 50x50 grid of cells.

    (10,0) (9,1) (8,2) (7,3) (6,4) (5,5) (4,6) (3,7) (2,8) (1,9) (0,10)
    2x1 0.00000 3.32193 5.49185 6.90689 7.71425 7.97728 7.71425 6.90689 5.49185 3.32193 0.00000
    50x50 102.87712 106.19905 108.36898 109.78401 110.59137 110.85440 110.59137 109.78401 108.36898 106.19905 102.87712

    And if we look, we will see that the entropy in the 50x50 grid microstate values is just the 2x1 grid values plus a constant. The relative entropy in both cases would be identical. This is even more clear if we mathematically show how the entropy is calculated. For the 2x1 grid we use the equation , and for the 50x50 grid we use . Mathematically we can see that it is the same as the entropy of the 2x1 grid offset by .

    You can imagine if we added another dimension along the microstates that we would increase the entropy again by a constant. For example, if each of the 10 balls could be one of 3 colors, then the number of microstates would grow by a factor of , and so the entropy of the whole system would increase by .

    Our intuition was correct when we used different microstates that are multiples of each other, but that intuition fails if the microstates are not so neatly multiples of each other. An easy example of this is if we represent the left side of the box as one cell and the right as a 50x25 grid of cells, then the entropy looks very different. Below is the table again, but with the added row of our non-homogenous microstates. An example of how we calculate the entropy of macrostate is: there are 120 equally likely ways to place 3 balls on the left and 7 balls on the right, but the balls on the right can also be in different states, so the entropy is .

    (10,0) (9,1) (8,2) (7,3) (6,4) (5,5) (4,6) (3,7) (2,8) (1,9) (0,10)
    2x1 0.00000 3.32193 5.49185 6.90689 7.71425 7.97728 7.71425 6.90689 5.49185 3.32193 0.00000
    50x50 102.87712 106.19905 108.36898 109.78401 110.59137 110.85440 110.59137 109.78401 108.36898 106.19905 102.87712
    mixed 0.00000 13.60964 26.06728 37.77003 48.86510 59.41584 69.44052 78.92088 87.79355 95.91134 102.87712

    A funny thing to note is that when all the balls are on the left, the entropy is zero, but when all the balls are on the right, the entropy is maximized. And again, hopefully, this makes sense from our understanding of entropy, that it measures uncertainty relative to our microstates. If we know all the balls are on the left, then we know they must be in the single left cell, so no uncertainty. If we know the balls are all on the right, then they could be in any of microstates, so high uncertainty.

    Clearly, we need to be careful and aware of what microstates we are choosing when measuring the entropy of a system. Fortunately, for most physical systems we use the standard microstates of a uniform grid of positions and momentums of the balls (particles) in the system. Another standard microstate to use is the continuous space of position and momentum.

    Continuous Microstates

    So far, we've looked at discrete sets of microstates — such as balls in cells. But in physical systems, microstates are often continuous: positions and momenta can vary over a continuum. How do we compute entropy in this setting? This is not related to the rest of the explanation, but it is an interesting tangent to explore.

    Let's return to our 10 balls in a 2D box. If each ball can occupy any position in the square, then the microstate of the system is a point in a -dimensional space (2 dimensions per ball). The number of possible microstates is infinite — and each individual one has infinitesimal probability.

    In this setting, we use a probability density function , and entropy becomes a continuous integral:

    This is called differential entropy. It generalizes Shannon entropy to continuous systems, though it has some subtleties — it can be negative, and it's not invariant under coordinate transformations.

    If the density is uniform, say over a region of volume , then the entropy becomes:

    So entropy still grows with the logarithm of the accessible state volume, just as in the discrete case.

    This formalism is particularly natural in quantum mechanics, where the wavefunction defines a probability density . Consider a 1D Gaussian wavefunction:

    Its entropy (in bits) is:

    This shows that wider distributions have higher entropy, as expected: a more spread-out wavefunction indicates more uncertainty in the particle's location.

    For instance:

    Which again should make sense: When we are less certain about a system, like where a particle will be when measured, the more entropy it has.

    And a quick issue to address: If the state space is unbounded, like momentum in classical mechanics, then the entropy can diverge. This isn't a problem in practice because physical systems typically have probability distributions (like Gaussians) that decay quickly enough at infinity to keep the entropy finite. When that's not the case, we either limit the system to a finite region or focus on entropy differences, which remain well-defined even when absolute entropy diverges.

    But let's get back to our main topic, and we'll get back into it with a historical overview.

    Standard Usage of Entropy

    Eighty years before Claude Shannon developed Information Theory, Ludwig Boltzmann formulated a statistical definition of entropy for an ideal gas. He proposed that the entropy of a system is proportional to the logarithm of the number of microstates consistent with a given macrostate:

    This equation should look familiar: it's the equal-probability special case of the Shannon entropy we've been using, just with a change of base (from to ) and a scaling factor (Boltzmann's constant). The connection between Boltzmann's statistical mechanics and Shannon's information theory is more than historical coincidence—both quantify uncertainty, whether in physical states or messages.

    A few years later, Josiah Willard Gibbs generalized Boltzmann's definition to cases where microstates are not equally likely. His formulation remains the standard definition of entropy in modern physics:

    This is formally identical to Shannon entropy, again differing only in logarithm base and physical units. But Gibbs's generalization was a profound leap: it enabled thermodynamics to describe systems in contact with heat baths, particle reservoirs, and other environments where probability distributions over microstates are non-uniform. This made entropy applicable far beyond ideal gases—covering chemical reactions, phase transitions, and statistical ensembles of all kinds.

    Now that we have a formal understanding of entropy with some historical background, let's try to understand how entropy relates to our universe and in particular to time.

    Time

    How does time play a role in all of this?

    When you drop a spot of milk into tea, it always spreads and mixes, and yet you never see the reverse where the milk molecules spontaneously separate and return to a neat droplet. When ocean waves crash into the shore, the spray and foam disperse, but we never see that chaos reassemble into a coherent wave that launches back into the sea. These examples are drawn from this lecture on entropy by Richard Feynman. If you were shown a reversed video of these events, you'd immediately recognize something was off. This sounds obvious at first, but it actually isn't clear this should be true if we just look at the laws of physics. All the known laws of physics are time-reversible (the wave function collapse seems to be debatable), which just means that they do look the same playing forward and backward. The individual molecules all obey these time-reversible laws, and yet the cup of tea gets murky from the milk always mixing in.

    This highlights a fundamental paradox: the microscopic laws of physics are time-reversible, but the macroscopic world is not. If you took a video of two atoms bouncing off each other and played it backward, it would still look physically valid, but play a video of milk mixing into coffee backward, and it looks obviously wrong.

    We want to build a simplified model of time in a way that reflects both the time-reversibility of microscopic laws and the time-asymmetry of macroscopic behavior. Let's imagine the complete state of a physical system, like a box of particles, as a single point in a high-dimensional space called phase space, with each dimension corresponding to a particle's position and momentum. As time evolves, the system traces out a continuous trajectory through this space.

    The laws of physics, such as Newton's equations, Hamiltonian mechanics, or Schrödinger's equation, all govern this trajectory. They are deterministic and time-reversible. That means if you reverse the momenta of all particles at any moment, the system will retrace its path backward through state space.

    So far everything is time-reversible, including this view of how the universe moves through time. But we will see that even in this toy model, time appears to have a preferred direction, an arrow of time.

    The key lies in coarse-graining. When we observe the world, we don't see every microscopic detail. Instead, we measure macrostates: aggregate properties like temperature, pressure, position of an object, or color distribution in a cup of tea. Each macrostate corresponds to many underlying microstates — and not all macrostates are created equal.

    For example, consider a box sliding across the floor and coming to rest due to friction. At the microscopic level, the system is just particles exchanging momentum, and all time-reversible. But we certainly would not call this action time-reversible, we never see a box spontaneously start speeding up from stand-still. But, if we took the moment after the box comes to a rest due to friction, and you reversed the velocities of all the particles (including those in the floor that absorbed the box's kinetic energy as heat), the box would spontaneously start moving and slide back to its original position. This would obey Newton's laws, but it's astronomically unlikely. Why?

    The number of microstates where the energy is spread out as heat (the box is at rest, and the molecules in the floor are jiggling) vastly outnumber the microstates where all that energy is coordinated to move the box. The stand-still macrostate has high entropy while the spontaneous-movement macrostate has low entropy. When the system evolves randomly or deterministically from low entropy, it is overwhelmingly likely to move toward higher entropy simply because there are more such microstates.

    If you had perfect knowledge of all particles in the universe (i.e., you lived at the level of microstates), time wouldn't seem to have a direction. But from the perspective of a coarse-grained observer, like us, entropy tends to increase. And that's why a movie of tea mixing looks natural, but the reverse looks fake. At the level of physical laws, both are valid. But one is typical, and one is astronomically rare, all because we coarse-grained.

    To drive the point home, let's again look at the balls in a box. We'll define macrostates by dividing the box into a grid of cells and counting how many balls are in each bin.

    Now suppose the balls move via random small jitters (our toy model of microscopic dynamics). Over time, the system will naturally tend to explore the most probable macrostates, as the most probable macrostates have far more microstates for you to wander into. That is, entropy increases over time, not because of any fundamental irreversibility in the laws, but because high-entropy macrostates are far more typical.

    If we started the simulation with all the balls packed on the left, that's a very specific (low entropy) macrostate. As they spread out, the number of compatible microstates grows, and so does the entropy.

    This leads to a crucial realization: Entropy increases because we started in a low-entropy state. This is often called the Past Hypothesis, the postulate that the universe began in an extremely low-entropy state. Given that, the Second Law of Thermodynamics follows naturally. The arrow of time emerges not from the dynamics themselves, but from the statistical unlikelihood of reversing them after coarse-graining, and the fact that we began in a low-entropy state.

    You could imagine once a system reaches near-maximum entropy that it no longer looks time-irreversible. The entropy of such a system would fluctuate a tiny bit since entropy is an inherently statistical measure, but they would be small enough not to notice. For example, while it is clear when a video of milk being poured into tea (a low-entropy macrostate) is playing forward as opposed to backward, you couldn't tell if a video of already-combined milk and tea (a high-entropy macrostate) being swirled around is playing forward or backward.

    While there are tiny fluctuations in entropy, they are not enough to explain the large-scale phenomena that sometimes seem to violate this principle that we just established of entropy always increasing with time.

    Violations of the Second Law?

    Some real-world examples seem to contradict the claim that entropy always increases. For instance, oil and water separate after mixing, dust clumps into stars and planets, and we build machines like filters and refrigerators that separate mixed substances. Aren't these violations?

    The issue is we have only been considering the position of molecules, while physical systems have many different properties which allow for more microstates. For example, if we start considering both the position and velocity of balls in a box, then the entropy can be high even while all the balls are on the left side of the box since every ball could have a different velocity. If the balls were all on the left and the velocities were all the same, then the entropy would be low. Once we consider velocity as well, entropy can increase both from more spread out positions and more spread out velocities.

    When water and oil separate, the positions of the molecules separate into top and bottom, which appears to decrease positional entropy. However, this separation actually increases the total entropy of the system. Why? Water molecules strongly prefer to form hydrogen bonds with other water molecules rather than interact with oil molecules. When water molecules are forced to be near oil molecules in a mixed state, they must adopt more constrained arrangements to minimize unfavorable interactions, reducing the number of available microstates. When water and oil separate, water molecules can interact freely with other water molecules in more configurations, and oil molecules can interact with other oil molecules more freely. This increase in available microstates for molecular arrangements and interactions more than compensates for the decrease in positional mixing entropy. So, while the entropy decreases if we only consider the general positions of molecules (mixed versus separated), the total entropy increases when we account for all the molecular interactions, orientations, and local arrangements. This demonstrates why we need to consider all properties of a system when calculating its entropy.

    When stars or planets form together from dust particles floating around in space and clump together from gravity, it would seem that even when we consider position and velocity of the particles that the entropy might be decreasing. Even though the particles speed up to clump together, they slow down after they collide, seemingly decreasing entropy. This is because we are again failing to consider the entire system. When particles collide with each other, their speed decreases a bit by turning that kinetic energy into radiation, causing photons to get sent out into space. If we considered a system where radiation isn't allowed, then the kinetic energy would just get transferred from one particle to another through changes in velocity, and the entropy of the system would still be increasing because of the faster velocities. Once we start considering the entropy of the position, velocity, and all particles in a system, we can consider all the microstates that are equally likely and calculate the correct entropy.

    Similarly, once we consider the entire system around a refrigerator, the decrease in entropy disappears. The entropy from the power generated to run the refrigerator and the heat moved from the inside to the outside of the refrigerator will offset the decrease in entropy caused by cooling the inside of the refrigerator. Local decreases in entropy can be generated, as long as the entropy of the entire system is still increasing.

    Ensure that the entire system is being considered when analyzing the entropy of a system, with the position, velocity, other interactions of particles, that all particles are included, and that the entire system is actually being analyzed.

    Disorder

    Entropy is sometimes described as "disorder," but this analogy is imprecise and often misleading. In statistical mechanics, entropy has a rigorous definition: it quantifies the number of microstates compatible with a given macrostate. That is, entropy measures our uncertainty about the exact microscopic configuration of a system given some coarse-grained, macroscopic description.

    So where does the idea of "disorder" come from?

    Empirically, macrostates we label as "disordered" often correspond to a vastly larger number of microstates than those we consider "ordered". For example, in a child's room, there are many more configurations where toys are scattered randomly than ones where everything is neatly shelved. Since the scattered room corresponds to more microstates, it has higher entropy.

    But this connection between entropy and disorder is not fundamental. The problem is that "disorder" is subjective—it depends on human perception, context, and labeling. For instance, in our earlier example of 1000 balls bouncing around a box, a perfectly uniform grid of balls would have high entropy due to the huge number of possible microstates realizing it. And yet to a human observer, such a grid might appear highly "ordered."

    The key point is: entropy is objective and well-defined given a macrostate and a set of microstates, while "disorder" is a human-centric heuristic concept that sometimes, but not always, tracks entropy. Relying on "disorder" to explain entropy risks confusion, especially in systems where visual symmetry or regularity masks the underlying statistical structure.

    Conclusion

    So here are some thoughts in regard to some common statements made about entropy:

    • Entropy is a measure of disorder.
      • "disorder" is a subjective term for states of a system that humans don't find useful/nice, and usually has much higher entropy than the "ordered" macrostate that humans create. Because of this, when entropy increases, it is more likely that we end up in disordered state, although not guaranteed.
    • Entropy always increases in a closed system.
      • This is a statistical statement that for all practical purposes is true, but is not guaranteed and can fail when you look at very small isolated systems or measure down to the smallest details of a system. It also assumes you started in a low-entropy state, giving your system space to increase in entropy. This has the neat implication that since our universe has been observed to be increasing in entropy, it must have begun in a low-entropy state.
    • Heat flows from hot to cold because of entropy.
      • Heat flows from hot to cold because the number of ways in which the system can be non-uniform in temperature is much lower than the number of ways it can be uniform in temperature, and so as the system "randomly" moves to new states, it will statistically end up in states that are more uniform.
    • Entropy is the only time-irreversible law of physics.
      • All the fundamental laws of physics are time-reversible, but by coarse-graining and starting from a lower-entropy state, a system will statistically move to a higher-entropy state. This means if a system is already in a near-maximum entropy state (either because of its configuration or because of the choice for coarse-graining) or we don't coarse-grain, then entropy will not look time-irreversible.

    And here is some further reading, all of which I found supremely helpful in learning about entropy.




    All Comments: [-] | anchor

    glial(10000) 4 days ago [-]

    One thing that helped me was the realization that, at least as used in the context of information theory, entropy is a property of an individual (typically the person receiving a message) and NOT purely of the system or message itself.

    > entropy quantifies uncertainty

    This sums it up. Uncertainty is the property of a person and not a system/message. That uncertainty is a function of both a person's model of a system/message and their prior observations.

    You and I may have different entropies about the content of the same message. If we're calculating the entropy of dice rolls (where the outcome is the 'message'), and I know the dice are loaded but you don't, my entropy will be lower than yours.

    ninetyninenine(10000) 4 days ago [-]

    Not true. The uncertainty of the dice rolls is not controlled by you. It is the property of the loaded dice itself.

    Here's a better way to put it. If I roll the dice infinite times. The uncertainty of the outcome of the dice will become evident in the distribution of the outcomes of the dice. Whether you or another person is certain or uncertain of this does not indicate anything.

    Now when you realize this you'll start to think about this thing in probability called frequentists vs. bayesian and you'll realize that all entropy is, is a consequence of probability and that the philosophical debate in probability applies to entropy as well because they are one and the same.

    I think the word 'entropy' confuses people into thinking it's some other thing when really it's just probability at work.

    empath75(2913) 4 days ago [-]

    > If we're calculating the entropy of dice rolls (where the outcome is the 'message'), and I know the dice are loaded but you don't, my entropy will be lower than yours.

    That's got nothing to do with entropy being subjective. If 2 people are calculating any property and one of them is making a false assumption, they'll end up with a different (false) conclusion.

    pharrington(10000) 3 days ago [-]

    Are you basically just saying 'we're not oracles'?

    Geee(2632) 3 days ago [-]

    It's both. The system or process has it's actual entropy, and the sequence of observations we make has a certain entropy. We can say that 'this sequence of numbers has this entropy', which is slightly different from the entropy of the process which created the numbers. For example, when we make more coin tosses, our sequence of observations has an entropy which gets closer and closer to the actual entropy of the coin.

    gozzoo(2320) 4 days ago [-]

    The visualisation is great, the topic is interesting and very well explained. Can sombody recomend some other blogs with similar type of presentation?

    floxy(10000) 4 days ago [-]

    If you haven't seen it, you'll probably like:

    https://ciechanow.ski/archives/

    alganet(10000) 4 days ago [-]

    Nowadays, it seems to be a buzzword to confuse people.

    We IT folk should find another word for disorder that increases over time, specially when that disorder has human factors (number of contributors, number of users, etc). It clearly cannot be treated in the same way as in chemistry.

    soulofmischief(10000) 4 days ago [-]

    Maybe you're confused by entropy? It's pretty well established in different domains. There are multiple ways to look at the same phenomenon, because it's ubiquitous and generalized across systems. It comes down to information and uncertainty. The article in question does attempt to explain all of this if you read it.

    petsfed(10000) 4 days ago [-]

    When I use it in an IT (or honestly, any non-physics or non-physics) context, I typically mean 'how many different ways can we do it with the same effective outcome?'.

    To whit, 'contract entropy': how many different ways can a contractor technically fulfill the terms of the contract, and thus get paid? If your contract has high entropy, then there's a high probability that you'll pay your contractor to not actually achieve what you wanted.

    bargava(10000) 4 days ago [-]

    Here is a good overview on Entropy [1]

    [1] https://arxiv.org/abs/2409.09232

    perihelions(137) 4 days ago [-]

    Here's the HN thread about that overview on Entropy,

    https://news.ycombinator.com/item?id=41037981 ('What Is Entropy? (johncarlosbaez.wordpress.com)' — 209 comments)

    nihakue(10000) 4 days ago [-]

    I'm not in any way qualified to have a take here, but I have one anyway:

    My understanding is that entropy is a way of quantifying how many different ways a thing could 'actually be' and yet still 'appear to be' how it is. So it is largely a result of an observer's limited ability to perceive / interrogate the 'true' nature of the system in question.

    So for example you could observe that a single coin flip is heads, and entropy will help you quantify how many different ways that could have come to pass. e.g. is it a fair coin, a weighted coin, a coin with two head faces, etc. All these possibilities increase the entropy of the system. An arrangement _not_ counted towards the system's entropy is the arrangement where the coin has no heads face, only ever comes up tails, etc.

    Related, my intuition about the observation that entropy tends to increase is that it's purely a result of more likely things happening more often on average.

    Would be delighted if anyone wanted to correct either of these intuitions.

    fsckboy(10000) 4 days ago [-]

    >purely a result of more likely things happening more often on average

    according to your wording, no. if you have a perfect six sided die (or perfect two sided coin), none/neither of the outcomes are more likely at any point in time... yet something approximating entropy occurs after many repeated trials. what's expected to happen is the average thing even though it's never the most likely thing to happen.

    you want to look at how repeated re-convolution of a function with itself always converges on the same gaussian function, no matter the shape of the starting function is (as long as it's not some pathological case, such as an impulse function... but even then, consider the convolution of the impulse function with the gaussian)

    russdill(10000) 4 days ago [-]

    This is based on entropy being closely tied to your knowledge of the system. It's one of many useful definitions of entropy.

    867-5309(3644) 4 days ago [-]

    > 'actually be' and yet still 'appear to be'

    esse quam videri

    tshaddox(10000) 3 days ago [-]

    > My understanding is that entropy is a way of quantifying how many different ways a thing could 'actually be' and yet still 'appear to be' how it is. So it is largely a result of an observer's limited ability to perceive / interrogate the 'true' nature of the system in question.

    When ice cubes in a glass of water slowly melt, and the temperature of the liquid water decreases, where does the limited ability of an observer come into play?

    It seems to me that two things in this scenario are true:

    1) The fundamental physical interactions (i.e. particle collisions) are all time-reversible, and no observer of any one such interaction would be able to tell which directly time is flowing.

    2) The states of the overall system are not time-reversible.

    karpathy(10000) 4 days ago [-]

    What I never fully understood is that there is some implicit assumption about the dynamics of the system. So what that there are more microstates of some macrostate as far as counting is concerned? We also have to make assumptions about the dynamics, and in particular about some property that encourages mixing.

    tomnicholas1(10000) 4 days ago [-]

    Yes, that assumption is called the Ergodic Hypothesis, and generally justified in undergraduate statistical mechanics courses by proving and appealing to Liouville's theorem.

    [1] https://en.wikipedia.org/wiki/Ergodic_hypothesis

    oh_my_goodness(10000) 3 days ago [-]

    In equilibrium we don't have to make an assumption about the dynamics or the mixing. We just expect to see the most probable state when we measure.

    It's interesting to try to show that the time average equals the ensemble average. It's very cool to think about the dynamics. That stuff must be happening. But those extra ideas aren't necessary for applying the equilibrium theory.

    TexanFeller(10000) 4 days ago [-]

    I don't see Sean Carroll's musings mentioned yet, so repeating my previous comment:

    Entropy got a lot more exciting to me after hearing Sean Carroll talk about it. He has a foundational/philosophical bent and likes to point out that there are competing definitions of entropy set on different philosophical foundations, one of them seemingly observer dependent: - https://youtu.be/x9COqqqsFtc?si=cQkfV5IpLC039Cl5 - https://youtu.be/XJ14ZO-e9NY?si=xi8idD5JmQbT5zxN

    Leonard Susskind has lots of great talks and books about quantum information and calculating the entropy of black holes which led to a lot of wild new hypotheses.

    Stephen Wolfram gave a long talk about the history of the concept of entropy which was pretty good: https://www.youtube.com/live/ocOHxPs1LQ0?si=zvQNsj_FEGbTX2R3

    infogulch(2777) 3 days ago [-]

    Half a year after that talk Wolfram appeared on a popular podcast [1] to discuss his book on the Second Law of Thermodynamics [2]. That discussion contained the best one-sentence description of entropy I've ever heard:

    > Entropy is the logarithm of the number of states that are consistent with what you know about a system.

    [1]: Mystery of Entropy FINALLY Solved After 50 Years? (Stephen Wolfram) - Machine Learning Street Talk Podcast - https://www.youtube.com/watch?v=dkpDjd2nHgo

    [2]: The Second Law: Resolving the Mystery of the Second Law of Thermodynamics - https://www.amazon.com/Second-Law-Resolving-Mystery-Thermody...

    gsf_emergency(10000) 3 days ago [-]

    By Jeeves, it's rentropy!!

    Sean and Stephen are absolutely thoughtful popularizers, but complexity, not entropy, is what they are truly interested in talking about.

    Although it doesn't make complexity less scary, here's something Sean's been working on for more than a decade. The paper seems to be more accessible to the layman than he thinks..

    https://arxiv.org/abs/1405.6903 https://scottaaronson.blog/?p=762

    [When practitioners say 'entropy', they mean RELATIVE ENTROPY, which is another can of worms.. rentropy is the one that is observer dependent: 'That's Relative as in Relativity'. Entropy by itself is simple, blame von Neumann for making it live rent-free]

    https://en.wikipedia.org/wiki/Relative_entropy

    @nyrikki below hints (too softly, imho) at this:

    >You can also approach the property that people often want to communicate when using the term entropy as effective measure 0 sets, null cover, martingales, kolmogorov complexity, compressibility, set shattering, etc...

    anon84873628(10000) 4 days ago [-]

    Nitpick in the article conclusion:

    >Heat flows from hot to cold because the number of ways in which the system can be non-uniform in temperature is much lower than the number of ways it can be uniform in temperature ...

    Should probably say 'thermal energy' instead of 'temperature' if we want to be really precise with our thermodynamics terms. Temperature is not a direct measure of energy, rather it is an extensive property describing the relationship between change in energy to change in entropy.

    johan_felisaz(10000) 3 days ago [-]

    Nitpick of the nitpick... Temperature is actually an intensive quantity, i.e. combining two subsystems with the same temperature yields a bigger system with the same temperature, not twice bigger.

    kgwgk(248) 3 days ago [-]

    I think you used "extensive" in the sense of "defined for the whole system and not locally". It's true that thermodynamics is about systems at equilibrium.

    hatthew(10000) 3 days ago [-]

    I'm not sure I understand the distinction between 'high-entropy macrostate' and 'order'. Aren't macrostates just as subjective as order? Let's say my friend's password is 6dVcOgm8. If we have a system whose microstate consists of an arbitrary string of alphanumeric characters, and the system arranges itself in the configuration 6dVcOgm8, then I would describe the macrostate as 'random' and 'disordered'. However, if my friend sees that configuration, they would describe the macrostate as 'my password' and 'ordered'.

    If we see another configuration M2JlH8qc, I would say that the macrostate is the same, it's still 'random' and 'unordered', and my friend would agree. I say that both macrostates are the same: 'random and unordered', and there are many microstates that could be called that, so therefore both are microstates representing the same high-entropy macrostate. However, my friend sees the macrostates as different: one is 'my password and ordered', and the other is 'random and unordered'. There is only one microstate that she would describe as 'my password', so from her perspective that's a low-entropy macrostate, while they would agree with me that M2JlH8qc represents a high-entropy macrostate.

    So while I agree that 'order' is subjective, isn't 'how many microstates could result in this macrostate' equally subjective? And then wouldn't it be reasonable to use the words 'order' and 'disorder' to count (in relative terms) how many microstates could result in the macrostate we subjectively observe?

    vzqx(10000) 3 days ago [-]

    I think you need to rigorously define your macrostates. If your two states are 'my friend's password' and 'not my friend's password' then the macrostates are perfectly objective. You don't know what macrostate the system is in, but that doesn't change the fact that the system is objectively in one of those two macrostates.

    If you define your macrostates using subjective terms (e.g. 'a string that's meaningful to me' or 'a string that looks ordered to me') then yeah, your entropy calculations will be subjective.

    Ono-Sendai(10000) 3 days ago [-]

    Anyone else notice how the entropy in the 1000 bouncing balls simulation goes down at some point, thereby violating the second law of thermodynamics? :)

    thowawatp302(10000) 3 days ago [-]

    Over long enough scales there is no conservation of energy because the universe does not have temporal symmetry.

    xavivives(10000) 3 days ago [-]

    Over the last few months, I've been developing an unorthodox perspective on entropy [1] . It defines the phenomenon in much more detail, allowing for a unification of all forms of entropy. It also defines probability through the same lens.

    I define both concepts fundamentally in relation to priors and possibilities:

    - Entropy is the relationship between priors and ANY possibility, relative to the entire space of possibilities.

    - Probability is the relationship between priors and a SPECIFIC possibility, relative to the entire space of possibilities.

    The framing of priors and possibilities shows why entropy appears differently across disciplines like statistical mechanics and information theory. Entropy is not merely observer-dependent, but prior-dependent. Including priors not held by any specific observer but embedded in the framework itself. This helps resolve the apparent contradiction between objective and subjective interpretations of entropy.

    It also defines possibilities as constraints imposed on an otherwise unrestricted reality. This framing unifies how possibility spaces are defined across frameworks.

    [1]: https://buttondown.com/themeaninggap/archive/a-unified-persp...

    3abiton(10000) 3 days ago [-]

    I am curious why the word 'entropy' encompasses so many concepts? Wouldn't it have made sense to just give each concept a different word?

    FilosofumRex(10000) 3 days ago [-]

    Boltzmann and Gibbs turn in their graves, every time some information theorist mutilates their beloved entropy. Shanon & Von Neumann were hacking a new theory of communication, not doing real physics and never meant to equate thermodynamic concepts to encoding techniques - but alas now dissertations are written on it.

    Entropy can't be a measure of uncertainty, because all the uncertainty is in the probability distribution p(x) - multiplying it with its own logarithm and summing doesn't tell us anything new. If it did, it'd violate quantum physics principles including the Bell inequality and Heisenberg uncertainty.

    The article never mentions the simplest and most basic definition of entropy, ie its units (KJ/Kelvin), nor the 3rd law of thermodynamics which is the basis for its measurement.

    "Every physicist knows what entropy is. Not one can write it down in words." Clifford Truesdell

    kgwgk(248) 3 days ago [-]

    > Shanon & Von Neumann were hacking a new theory of communication, not doing real physics

    Maybe I'm misunderstanding the reference to von Neumann but his work on entropy was about physics, not about communication.

    kgwgk(248) 3 days ago [-]

    > Entropy can't be a measure of uncertainty

    Gibbs' entropy is derived from "the probability that an unspecified system of the ensemble (i.e. one of which we only know that it belongs to the ensemble) will lie within the given limits" in phase space. That's the "coefficient of probability" of the phase, its logarithm is the "index of probability" of the phase, the average of that is the entropy.

    Of course the probability distribution corresponds to the uncertainty. That's why the entropy is defined from the probability distribution.

    Your claim sounds like saying that the area of a polygon cannot be a measure of its extension because the extension is given by the shape and calculating the area doesn't tell us anything new.

    quietbritishjim(10000) 3 days ago [-]

    I like the axiomatic definition of entropy. Here's the introduction from Pattern Recognition and Machine Learning by C. Bishop (2006):

    > The amount of information can be viewed as the 'degree of surprise' on learning the value of x. If we are told that a highly improbable event has just occurred, we will have received more information than if we were told that some very likely event has just occurred, and if we knew that the event was certain to happen we would receive no information. Our measure of information content will therefore depend on the probability distribution p(x), and we therefore look for a quantity h(x) that is a monotonic function of the probability p(x) and that expresses the information content. The form of h(·) can be found by noting that if we have two events x and y that are unrelated, then the information gain from observing both of them should be the sum of the information gained from each of them separately, so that h(x, y) = h(x) + h(y). Two unrelated events will be statistically independent and so p(x, y) = p(x)p(y). From these two relationships, it is easily shown that h(x) must be given by the logarithm of p(x) and so we have h(x) = − log2 p(x).

    This is the definition of information for a single probabilistic event. The definition of entropy of a random variable follows from this by just taking the expectation.

    dkislyuk(10000) 3 days ago [-]

    This is a great characterization of self-information. I would add that the `log` term doesn't just conveniently appear to satisfy the additivity axiom, but instead is the exact historical reason why it was invented in the first place. As in, the log function was specifically defined to find a family of functions that satisfied f(xy) = f(x) + f(y).

    So, self-information is uniquely defined by (1) assuming that information is a function transform of probability, (2) that no information is transmitted for an event that certainly happens (i.e. f(1) = 0), and (3) independent information is additive. h(x) = -log p(x) is the only set of functions that satisfies all of these properties.

    tshaddox(10000) 3 days ago [-]

    According to my perhaps naive interpretation of that, the 'degree of surprise' would depend on at least three things:

    1. the laws of nature (i.e. how accurately do the laws of physics permit measuring the system and how determined are future states based on current states)

    2. one's present understanding of the laws of nature

    3. one's ability to measure the state of a system accurately and compute the predictions in practice

    It strikes me as odd to include 2 and 3 in a definition of 'entropy.'

    overu589(10000) 3 days ago [-]

    How can that be axiomatic?

    I offer a coherent, concise dissenting view.

    Information is the removal of uncertainty. If it does not remove uncertainty it is not information. Uncertainty is state unresolved (potential resolves to state through constructive and destructive interference.)

    Entropy is the existential phenomenon of potential distributing over the infinite manifold of negative potential. "Uncertainty."

    Emergence is a potential outcome greater than the capacity found in the sum of any parts.

    Modern humanity's erroneous extrapolations:

    - asserting P>=0 without account that in existential reality 0 is the infinite expanse of cosmic void, thus the true mathematical description would be P>=-1

    - confuse heat with entropy. Heat is the ultimate universal expression as heat is a product of all work and all existence is winding down (after all). Entropy directs thermodynamics, thermodynamics is not the extent of entropy.

    - entropy is NOT the number of possible states in a system. Entropy is the distribution of potential; number of states are boundary conditions which uncalculated potential may reconfigure (the "cosmic ray" or murfy's rule of component failure.) Existential reality is interference and decay.

    - entropy is not "loss". Loss is the entropy less work achieved.

    - this business about "in a closed system " is an example of how brilliant minds lie to themselves. No such thing exists anywhere accessible by Man. Even theoretically, the principles of decay and the "exogenous" influence of one impercieved influence over a "contained system." Or "modeled system", for one self deception is for the scientist or engineer to presume these speak for or on behalf of reality.

    Emergence is the potential (the vector space of some capacity) "created" through some system of dynamics (work). "Some" includes the expressive space of all existential or theoretical reality. All emergent potential is "paid for" by burning available potential of some other kind. In nature the natural forces induce work in their extremes. In natural systems these design for the "mitigation of uncertainty" [soft form entropy], aka "intelligence."

    Entropy is the existential phenomenon of potential distributing over negative potential.

    Information is the removal of uncertainty. If it does not remove uncertainty, it is not information. (And intelligence is the mitigation of uncertainty.)

    Emergence is a potential outcome greater than the capacity found in the sum of any parts.





    Historical Discussions: Why Fennel? (April 13, 2025: 277 points)
    Why Fennel? (September 13, 2023: 238 points)
    The Fennel programming language: rationale (August 26, 2020: 4 points)

    (277) Why Fennel?

    277 points 5 days ago by behnamoh in 120th position

    fennel-lang.org | Estimated reading time – 4 minutes | comments | anchor

    Fennel is a programming language that runs on the Lua runtime.

    Why Lua?

    The Lua programming language is an excellent and very underrated tool. Is it remarkably powerful yet keeps a very small footprint both conceptually as a language and in terms of the size of its implementation. (The reference implementation consists of about nineteen thousand lines of C and compiles to 278kb.) Partly because it is so simple, Lua is also extremely fast. But the most important thing about Lua is that it's specifically designed to be put in other programs to make them reprogrammable by the end user.

    The conceptual simplicity of Lua stands in stark contrast to other 'easy to learn' languages like JavaScript or Python--Lua contains very close to the minimum number of ideas needed to get the job done; only Forth and Scheme offer a comparable simplicity. When you combine this meticulous simplicity with the emphasis on making programs reprogrammable, the result is a powerful antidote to prevailing trends in technology of treating programs as black boxes out of the control of the user.

    And yet...

    So if Lua is so great, why not just use Lua? In many cases you should! But there are a handful of shortcomings in Lua which over time have shown to be error-prone or unclear. Fennel runs on Lua, and the runtime semantics of Fennel are a subset of Lua's, but you can think of Fennel as an alternate notation you can use to write Lua programs which helps you avoid common pitfalls. This allows Fennel to focus on doing one thing very well and not get dragged down with things like implementing a virtual machine, a standard library, or profilers and debuggers. Any library or tool that already works for Lua will work just as well for Fennel.

    The most obvious difference between Lua and Fennel is the parens-first syntax; Fennel belongs to the Lisp family of programming languages. You could say that this removes complexity from the grammar; the paren-based syntax is more regular and has fewer edge cases. Simply by virtue of being a lisp, Fennel removes from Lua:

    • statements (everything is an expression),
    • operator precedence (there is no ambiguity about what comes first), and
    • early returns (functions always return in tail positions).

    Variables

    One of the most common legitimate criticisms leveled at Lua is that it makes it easy to accidentally use globals, either by forgetting to add a local declaration or by making a typo. Fennel allows you to use globals in the rare case they are necessary but makes it very difficult to use them by accident.

    Fennel also removes the ability to reassign normal locals. If you declare a variable that will be reassigned, you must introduce it with var instead. This encourages cleaner code and makes it obvious at a glance when reassignment is going to happen. Note that Lua 5.4 introduced a similar idea with <const> variables, but since Fennel did not have to keep decades of existing code like Lua it was able to make the cleaner choice be the default rather than opt-in.

    Tables and Loops

    Lua's notation for tables (its data structure) feels somewhat dated. It uses curly brackets for both sequential (array-like) and key/value (dictionary-like) tables, while Fennel uses the much more familiar notation of using square brackets for sequential tables and curly brackets for key/value tables.

    In addition Lua overloads the for keyword for both numeric 'count from X to Y' style loops as well as more generic iterator-based loops. Fennel uses for in the first case and introduces the each form for the latter.

    Functions

    Another common criticism of Lua is that it lacks arity checks; that is, if you call a function without enough arguments, it will simply proceed instead of indicating an error. Fennel allows you to write functions that work this way (fn) when it's needed for speed, but it also lets you write functions which check for the arguments they expect using lambda.

    Other

    If you've been programming in newer languages, you are likely to be spoiled by pervasive destructuring of data structures when binding variables, as well as by pattern matching to write more declarative conditionals. Both these are absent from Lua and included in Fennel.

    Finally Fennel includes a macro system so that you can easily extend the language to include new syntactic forms. This feature is intentionally listed last because while lisp programmers have historically made a big deal about how powerful it is, it is relatively rare to encounter situations where such a powerful construct is justified.

    For a more detailed look at the guiding principles of Fennel from a design perspective see the Values of Fennel.

    Home source for this site



    All Comments: [-] | anchor

    cardanome(10000) 5 days ago [-]

    Fennel is pretty nice.

    I wish it had gradual typing support though or at least allowed for type annotation for static tooling. Not that dynamic typing isn't a valid choice but with more and more languages getting gradual typing support it is hard to go back.

    I guess we could build something like Coalton but for Lua.

    codr7(10000) 5 days ago [-]

    I've been working on something along those lines in eli:

    https://github.com/codr7/eli?tab=readme-ov-file#type-checkin...

    HexDecOctBin(10000) 5 days ago [-]

    I did find this, though it seems runime only: https://github.com/dokutan/typed-fennel

    Maybe a static system can be built upon it.

    R4tY9jQ2(10000) 5 days ago [-]

    Fennel's approach of compiling to Lua while maintaining meta-programming capabilities is elegant. The syntax reminds me of Clojure, but without the JVM overhead. For embedded systems or game development, having both functional idioms and Lua's tooling seems like a powerful combination.

    giraffe_lady(10000) 5 days ago [-]

    Another spot it's great for is in legacy lua programs that you inherit from who knows where, which in my experience is a lot of the live lua out there. It hooks into the module loader system so you can just freely mix functions and tables between the two.

    quectophoton(10000) 5 days ago [-]

    > Fennel's approach of compiling to Lua while maintaining meta-programming capabilities is elegant.

    Yeah, it is very nice to work with.

    The only tiny 'complaint' I have is that it doesn't compile to pure Lua, but instead assumes you'll be running it together with Lua's libraries.

    I say this because, for me, the places where I'd like to use Fennel have a lot of overlap with the places where I'd like to use Lua without loading any of the provided libraries (e.g. embedding Lua into other software, instead of using it standalone).

    benwilber0(3368) 5 days ago [-]

    I love seeing new languages targeting the Lua runtime. I've been adding Lua scripting support to pretty much everything I make now. I recently made my SSE server programmable with Lua and it's extended the functionality far beyond what I would have had the patience and time to do myself. Highly recommend Lua with mlua-rs Rust bindings.

    [0] https://tinysse.com

    [1] https://github.com/benwilber/tinysse

    [2] https://github.com/mlua-rs/mlua

    ronsor(2793) 5 days ago [-]

    I don't have any use cases in mind right now, but this looks cool. You should try posting another Show HN.

    giraffe_lady(10000) 5 days ago [-]

    I would love to see a stripped back ML-style language that targets lua, just something like ocaml's type system and exhaustive pattern match right on top would be all I need. There have been a few attempts but nothing I know of that got usably far and is maintained.

    There might be a way to get standard ML to output lua or something but I'm not that familiar with it. I think it would be an incredible fit for a third backend for gleam, but they say they aren't adding any more beyond erlang and js.

    duncanfwalker(10000) 5 days ago [-]

    The comparison with Closure is really interesting. They make the point that they do less reinvention of Lua than Closure does with Java - functions, standard library, tooling. I'd love to know why. Is it just that Lua has less problems than old-Java

    macmac(10000) 5 days ago [-]

    Clojure

    giraffe_lady(10000) 5 days ago [-]

    I'm not sure if this was the up front reasoning but a lot of lua code is run in situations where you don't have full control over the runtime or distribution method.

    So anything that requires C libs would automatically rule out fennel for a lot of projects that are essentially using someone's lua api as the target platform. Roblox, mud client scripting, openresty, that sort of thing.

    And these environments usually have so much added to them, pcre, stdlib extensions, class systems etc fennel works best not making any assumptions about any of that. It's just straight up the lua semantics, and so anywhere lua works it works. I've used it a lot and originally recoiled from this decision but now I think it is genius.

    frogulis(10000) 4 days ago [-]

    I get the impression that Fennel is intended as a different/better interface for Lua.

    In contrast, Clojure is intended as the language Rich Hickey wanted for writing the sort of applications he wrote, and the JVM happened to be a powerful (and already existing) platform that was suitable for doing that.

    nine_k(3565) 4 days ago [-]

    JVM has no notion of a function, only of a method! You don't have something to piggy-back on. Java stdlib from 15 years ago (to say nothing of 25) is a realm of heavy OOP and mutability everywhere, something you may not want to adapt your Lisp code to.

    TinkersW(10000) 4 days ago [-]

    I'd guess a major reason is that Java is statically typed, and Lua/Fennel are dynamic; making it easier to reuse any existing libraries.

    torginus(10000) 5 days ago [-]

    I do not understand the appeal of LISPy languages. I get that the parser is simple and elegant, but I believe the developer (of the compiler in this case) should serve the convenience of the user, not the other way around.

    Writing code like this is cumbersome and unnecessarily symbol heavy, and reading it isn't really nice as well.

    I'd rather have the language add that extra complexity into the parser than have me stare down these endless paretheses. Parsing something C-like is not that, hard, trust me, I've done it

    n4ture(10000) 5 days ago [-]

    I do not understand the appeal of non-LISPy languages. I get that most people are used to reading it and that they are efficent, but I believe the developer (of the compiler in this case) should serve the convenience of the user, not the other way around.

    Writing code like this is combersome and unnecessarily symbol heavy, and reading it isn't really nice as well.

    I'd rather have the language add those extra parens into the parser than have me stare down these endless semi-colon, linebreaks or indentation. Parsing something Lisp-like is not that, hard, trust me, I've done it.

    vbezhenar(3496) 5 days ago [-]

    The unmatched beauty of the Lisp is the elegance of writing code generators (macros).

    Code is list and main structure is list. This is genius.

    skavi(10000) 5 days ago [-]

    i have never used a lisp, but i'd assume due to its focus on macros, you are alternately the developer of a compiler and the user of that compiler. so making it easy on the compiler dev makes it easy on you.

    Zambyte(10000) 5 days ago [-]

    Are you interested in learning the appeal?

    endgame(3654) 5 days ago [-]

    Focusing on the runtime's parser is a red herring and I think a common error in lisp advocacy.

    Even if I didn't the full power of a lisp macro system, it is an absolute joy to manipulate programs written in s-expressions. Being able to cut/copy/paste/jump-[forward/back] by sexpr is really convenient, and often done nowhere near as well in other languages. I think this is because until the invention of tree-sitter and LSPs (and the former isn't yet widely adopted in editor tech), most editors had regex-based syntax highlighting and some kind of ad-hoc 'parser' for a language. This makes them less aware of the language the developer is editing, but was probably a pragmatic design decision by editor implementers: it's easier than writing a full parser and means the editor can still assist even if a program is syntactically ill-formed.

    yuppiemephisto(3670) 5 days ago [-]

    > I'd rather have the language ...

    check out Lean 4 then. Its syntax system is based on Racket but —instead of parens— implements stuff like [JSX syntax](https://github.com/leanprover-community/ProofWidgets4/blob/d...) and a [maze](https://github.com/dwrensha/lean4-maze)

    unchar1(10000) 5 days ago [-]

    The first thing that comes to mind is macros.

    chongli(10000) 4 days ago [-]

    The appeal can be seen with paredit-style [1] editor plugins. They give you the power of working on trees rather than text. When you master the paredit way of editing you'll wish you could do that with every language.

    [1] https://paredit.org/

    caddzooks(10000) 4 days ago [-]

    Consider the following LISP function that performs a transformation of an MxN matrix:

    (defun transpose (matrix) (apply #'mapcar #'list matrix))

    Based on my own experience I think I can say that It isn't until one has acquired a reasonable amount of experience with the language that they can fully appreciate its power.

    evdubs(10000) 4 days ago [-]

    Try defining data in C. Try extracting data from that data you've defined in C.

    If you can understand the appeal of having JSON in JavaScript, you can understand some of the appeal of Lisp.

    shakna(1921) 4 days ago [-]

    Most Lisp-y language have multiple parsers. The frontend may be that one, or it might be another. Racket has hundreds of frontends [2], Scheme has Wisp [0], and so on.

    The ideal part of it comes down to the language being able to manipulate itself. Make the tokens an array, that you can shift, inject and/or mould into what you need.

    That being said, that power isn't isolated to just Lisp-y. A few stack languages have it, like Forth, or to plug myself [1]. However, stack languages are a bit harder to optimise.

    It isn't that they don't want a complicated parser. It's that you want to be able to easily modify that parser as its running, without hitting TeX levels of performance slowdowns.

    [0] https://srfi.schemers.org/srfi-119/srfi-119.html

    [1] https://git.sr.ht/~shakna/jstack

    [2] https://doi.org/10.1145/3580417

    nimih(10000) 4 days ago [-]

    I personally find lisp-y syntax to be pleasant to write, and to generally be straightforward and easy to read. It's interesting to hear you have the opposite opinion, though.

    mmcromp(10000) 4 days ago [-]

    I tried fennel for a game jam and honestly was pretty disappointed. The way lisp languages are pitched here, I thought I was in for a mind opening experience, but instead the end experience was pretty much identical to lua in ever meaningful way, the only differences felt surface level (I.e. using closures, and parentheses).

    I'm forever indebt to lisp for giving JS it's saving graces (closures and FN as first class citizens), but I think we some honestly on what the end experience really is.

    dimitar(3642) 4 days ago [-]

    And yet people write a ton of XML, JSON or YAML by hand.

    Graziano_M(10000) 5 days ago [-]

    Fennel is nice. I converted my neovim config[1] to fennel and haven't looked back.

    [1]: https://github.com/Grazfather/dotfiles/blob/master/nvim/fnl/...

    qrobit(10000) 5 days ago [-]

    Fennel is indeed nice and I rewrote my config in it too, but looked back ~2 years later and rewrote it again in Lua. I think Fennel for configuration is not justified and just adds complexity. Also the tools are not there: two existing language servers[1][2] can't compete with Sumneko's Lua language server[3] and they are fennel-exclusive and clueless about Lua code. I still like Fennel for writing more complicated code (my plugins: [4][5]) because of neat features like pattern matching and structural decomposition, both are surprisingly robust.

    [1]: https://git.sr.ht/~xerool/fennel-ls/

    [2]: https://github.com/rydesun/fennel-language-server

    [3]: https://github.com/LuaLS/lua-language-server

    [4]: https://gitlab.com/repetitivesin/16cm.nvim/-/tree/main

    [5]: https://gitlab.com/repetitivesin/madol.nvim

    hyperbrainer(10000) 5 days ago [-]

    If only there was an editor which could act as an interpreter for Lisp directly ...

    threatofrain(1307) 5 days ago [-]
    https://janet-lang.org

    Also by the same author.

    2mlWQbCK(10000) 5 days ago [-]

    I prefer Janet, but Fennel is great in places Lua is already supported, like in Löve2D.

    https://git.sr.ht/~benthor/absolutely-minimal-love2d-fennel

    sgt(3284) 5 days ago [-]

    Dammit, Janet! Ok, looks good. I'll need to look into it.

    AlienRobot(10000) 5 days ago [-]

    >by the same author

    What? People are just creating new languages these days as if they were Javascript libraries?

    Let's say I wanted to make my own programming language. What's the easiest way to prototype it in a way I can share it with the world? Are the programming language development toolkits that come with a tokenizer library and things like that? Should I write my own program to output machine code? Or maybe it's easier to just transpile to Javascript?

    grzm(402) 5 days ago [-]

    I believe Fennel was originated by Phil Hagelberg (technomancy)

    https://git.sr.ht/~technomancy/fennel-lang.org

    Janet looks like is by Calvin Rose (bakpakin) https://github.com/janet-lang/janet/graphs/contributors

    monomers(10000) 5 days ago [-]

    I like Janet a lot, and have been using it for small personal projects for about a year.

    But it does come with some design decisions that I'm a bit ambivalent about and for which I haven't found a good explanation:

    - No persistent data structures. I guess this has something to do with limitations of the GC?

    - unhygienic macros combined with lack of namespaces. XOR those two choices would be fine, but the combination is janky

    - Somewhat peculiar choices in syntax. It's neither Scheme, nor is it Clojure. # starts comments, ; is splice, @ marks literals as mutable...

    zitterbewegung(359) 5 days ago [-]

    Linking to this without the fennel-lang.org main page which states the following

    'Fennel is a programming language that brings together the simplicity, speed, and reach of Lua with the flexibility of a lisp syntax and macro system.' is a bad idea. Not having this sentence on your justification is ill advised.

    Not to detract from the language or anything I have found many programming languages justification to just not have an elevator pitch and I have a hard time understanding why this is the case. Unfortunately people's attention spans are extremely short.

    fredrikholm(10000) 5 days ago [-]

    > Not to detract from the language or anything I have found many programming languages justification to just not have an elevator pitch and I have a hard time understanding why this is the case.

    But they do have one, that you just copied?

    kras143(10000) 5 days ago [-]

    I believe that people who complain about parens have not coded in Lisp (atleast not enough)! Once you get over the 'parens', the homogeneity of the language shines through and you will appreciate why some people like me never get over Lisp.

    lerax(10000) 5 days ago [-]

    it's kinda funny that whole noise about parenthesis. For a experienced Lisper parenthesis are so meaningless that can be ignored by reading good indented code, however... for a newbie, the amount of parenthesis can be a real nightmare. All that can just be properly solved by using a decent editor that support good parenthesis edition... like emacs. Truly funny. I've been on this community for more than 10 years and always the same thing.

    ersiees(10000) 4 days ago [-]

    I don't love fennel, it usually dominates the whole taste of a dish for me

    stronglikedan(10000) 4 days ago [-]

    But in the spirit of answering the headline's question, it's because nothing else tastes quite like it!





    Historical Discussions: Kezurou-Kai #39 (April 14, 2025: 269 points)

    (269) Kezurou-Kai #39

    269 points 4 days ago by nabla9 in 144th position

    www.bigsandwoodworking.com | Estimated reading time – 12 minutes | comments | anchor

    Last weekend I went to the 39th annual Kezurou-kai event in Itoigawa, Niigata. It was my first time going to the event here in Japan, and it was such a blast. For those who are unfamiliar with kezurou-kai, it's an event where people compete to take the thinnest shavings of wood using Japanese planes. But more than that it's really a gathering of people who are passionate about woodworking and carpentry, sharpening and hand tools, who are pushing their skills to the absolute limits of what is possible.

    The event takes place over two days, with preliminary planing running all through the first day, and ending around mid-day on day 2. Throughout that time competitors have three chances each day to bring a plane shaving up for official measurement. 5 individuals with the thinnest shavings then go on to the final planing contest toward the end of the day on day 2.

    The main contest required using 70 mm kanna, and the material was limited to hinoki at 55 mm wide by 1800 mm long. Hinoki has become the standard wood for thin planing, since it cuts beautifully and can be planed down to an extreme level without breaking up. For preliminary planing each competitor or group was required to bring their own material for planing. The final contest however involved planing material selected by the event organizers, with the final 5 competitors all planing the same board.

    The event took place in a gymnasium which was filled with planing benches shared by teams and individuals. When I arrived on day 1 I met up with my friends from Somakosha and we pretty much started taking shavings right away. Here's Yamamoto-san getting things started.

    We all came with a few different planes, and myself I brought 2 kanna, an old Ishido blue steel blade and another from an unknown maker which I'm pretty confident is some type of white steel. We also had a Mitutoyo digital micrometer for measuring our shavings.

    Given than none of us had been doing any kind of practice our shavings on day one were pretty decent. We were all able to take really clean and consistent shavings in the 10-12 micron range without too much trouble. It was getting under 10 microns that was the real challenge.

    This is something that I've faced before when having "kezurou-kai nights" with friends. With careful sharpening and tuning of the dai, it's fairly straightforward to get really clean consistent shavings in the 10-15 micron range. But pushing past 10 microns requires a whole other level a fastidiousness when it comes to every aspect of planing. In any case, on that first day at Kezuroukai we struggled a bit, but we kept sharpening and adjusting out planes trying to break the sub-10 micron barrier.

    Once you had a good shaving you could take it up for official measurement. The shaving needed to be full length and free of tears, splits, etc. Simple jigs were provided which allowed you to clamp a 1 meter section of the shaving for the purpose of bringing it up for official measurement. Here's a line of people waiting to get their shavings measured on day 1. You can see everyone holding a the jig with their shavings clamped.

    And here is the official measuring device; three digital calipers which were pneumatically controlled to measure each shaving with a consistent pressure. When you brought your shaving up, you had to carefully set it below the calipers, and when everything was set the operator would push a button and all three calipers simultaneously plunged down. The calipers were offset along the length of the shaving, but also across the width, giving measurements which revealed the overall consistency.

    If the measurement was satisfactory you could then take it over and paste it on the boards seen below. Shavings on the far right were all 5 microns and less. The other two boards were for the remainder of the shavings, most of which were between 6-12 microns.

    Outside the venue was a space setup for sharpening. There was a good mix of people using synthetic and natural stones. I personally stuck with a variation on my usual routine, 1000 grit Hibiki, an 8000 King or 8000 Hibiki, and a 12000 grit Kagayaki stone, doing a micro-bevel on the 8000 and 12000 stones.

    Day 1 went fast. I planed a lot but I also spent a fair amount of time catching up with old friends. In terms of shaving I wasn't able to break through the 10 micron barrier with a consistent shaving. It's easy enough to have parts of a shaving break below that barrier, but getting a consistent shaving for the full length and width of the board is really difficult. On one hand it's frustrating but it's also becomes an interesting puzzle figuring out how to improve things. At the Izakaya that night pretty much all we talked about was sharpening and how to improve our results.


    Day 2 was a fair amount busier, with more people showing up to plane. All of us from team Somakosha experimented with some different sharpening techniques to see if we could get thinner shavings. Some things seemed to work better than others, but more than our sharpening technique or dai adjustments, it became clear that our material was a big limiting factor. As you approach ultra thin sub-10 micron shavings the quality of the material becomes a huge factor in how thin you can go. The evenness and density of the grain, and especially the moisture content of the wood are really important factors.

    Overall we had really nice material, with nice even straight grain, but it was definitely on the drier side. It was really interesting to see how much other competitors cared for and maintained their material. Most people had their planing blanks wrapped in plastic to prevent moisture loss, and many went to great lengths to protect the wood when not planing by protecting it with blankets or foam packing.

    The two guys who we shared a bench with were Kezurou-kai veterans, having started some 20 years ago, and they had 2 planing beams that they were rotating in and out as they planed. Whenever they set aside a board they would cover it will moist towels to maintain a high moisture content in the wood. In another case Yamamoto-san went over to a friend's bench and was able to take some shavings from their hinoki which was definitely higher quality and well maintained. He had been pulling shavings in the 10-12 micron range on our board, but taking the same plane, without resharpening to the his friend's higher quality board, he was able to plane down to 6 microns. Pretty amazing how much of a difference the quality of material and moisture content makes.

    As day 2 went on you could sense the energy level rising as everyone worked to take ultra-thin shaving before time was up. About an hour before the deadline for preliminary planing and the leaderboards really started to fill up.


    Back at our bench we started to try every possible trick we could think of to improve our results. What seemed to work best was simply wiping the board with a lightly damp rag prior to planing. It would definitley be better to have the wood "pre-soaked" rather than wiping the wood before hand, since exceess moisture on the surface of the wood can cause the dai to move, but given the situation and with time running out we did what we needed to do. And it did help, a lot. The quality of shaving between really dry wood and moist wood is completely different.

    In the end one of my last shavings turned out to be my best. With a freshly sharpened blade, and a touch of moisture on the wood, I was able to pull a really clean shaving. I took it up to the judges for measurement and the results were 10, 6, and 9 microns. I'm pretty happy with that result. It'd be great if the whole thing came out around 6, but I'm glad to have gotten a really clean full length/width shaving at that level.

    Here are the top 5 winners from the preliminary contest and their numbers. Insanity! Crazy thin and consistent.

    With the preliminary contest over, the top 5 went on to the final challenge which was planing a 3 meter quartersawn piece of sugi (Japanese cedar). Compared to hinoki, sugi is not an easy wood to plane, especially thin. This time the rules for the final round also changed, and each person had just a few minutes (I think it was 3-4) for both setting their planes and planing. In otherwords, before the timer started your blade had to be loose in the dai. Then once the clock started ticking you could begin setting the blade in dai and start planing. Kind of intense given the time allotted and overall pressure of the situation.

    Here's the first person up, taking a fairly thick shaving.

    With sugi theres a fine line between planing too thick and too thin. Too thin and the shaving just falls apart.

    Each person only had one chance to have a complete shaving measured, which means you have to really gauge the material and your capabilities. It's all about taking the thinnest shaving you can manage and knowing when to stop. Spend too much time trying to get a thin shaving and you risk running out of time. But it's also tricky to gauge the thickness of the shaving until you ask the judges to measure it. In reality it may look thinner than it actually is.

    The winning shaving from the final round of 5 competitors was somewhere around 50 microns (it may have been 48), which just goes to show you how different sugi is from hinoki. It also reveals how different it is to plane material that is of unknown quality versus planing your own moisture controlled material.

    I love the challenge of ultra-thin planing, and it's fascinating to see the skill and dedication it takes to plane at a this level. But planing in the sub-10 micron range really requires a high level of control over the material (not to mention the kanna), which as a woodworker/carpenter is pretty far from the reality of day-to-day work. So I like the idea of a contest which requires people to plane an unknown piece of wood, which is more or less how the final competition here goes. I'd also love to see some sort of tear-out challenge, where the goal is to plane a really gnarly piece of wood with knots or difficult grain, and try to perfect the surface. A challenge like that would be really beneficial for folks looking to use kanna for real work.


    Throughout the event I was pretty focused on visiting with friends and planing, but I did take a quick lap towards the end of day 2 to snap some photos of other some of the other things taking place.

    In one corner of the venue a craftsman was demonstrating carving a sumitsubo. (I didn't realize until later when I edited these photos that he also had carved wooden shoes in the foreground!)

    Next to him was a guy demonstrating how to cut a new kanna dai. If you search for Kezuroukai videos you can find a good video of this same person chopping a dai at a previous event.

    Outside near the sharpening area were several people demonstrating hewing, and brave spectators could also give it a go with a bit of supervision.

    Back inside the venue were also plenty of vendors selling anything and everything related to planes and handtools. Here was one of the natural sharpening stone vendors.

    The NSK company who are making a new variety of diamond sharpening stones were also present. They made their stones available to try for anyone who was interested.

    And of course there were plenty of kanna for sale...


    There's a lot I wasn't able to cover but that's the quick story behind Kezuroukai #39. It really was a busy couple of days, and hard to take everything in. I'd love to go back and try my hand at planing again, but I'd also love to just go as a spectator and spend more time watching. There's so much you can learn at Kezuroukai, and also so many really passionate and inspired people to meet. I highly recommend a visit to anyone who can make the trip to Japan, but if not then definitely seek out a more local event or start one up! In the US now we have Kezuroua-kai USA along with a few other kez events like Jason Fox's Maine event. So go, plane wood, and help spread the joy of hand tools and craft!




    All Comments: [-] | anchor

    zkmon(10000) 4 days ago [-]

    Wondering why it is so satisfying. It tells that what you pursue doesn't matter. It can be a wood-planing contest or some silly hobby. What matters is that you are motivated to pursue it. You believe in improving that pursuit, you see others doing the same, you believe it is the social norm, you see that it is valued and respected. And most importantly you feel good about it.

    Talk about things like investing in stocks, being known as a great techie or entrepreneur, exiting a great startup, running a venture capital, making a few million, becoming US citizen, having a great home etc. These goals are not bad. Just that they cost more, for the same returns (satisfaction). You are more successful when your happiness doesn't cost you a life-time running around or some herculean effort.

    snarf21(10000) 4 days ago [-]

    It is all about purpose and hope and expectations. It is why 90% of the satisfaction of a vacation is from the planning. We highly underestimate the mental health benefits of a hobby. They are also a great place to make friends and connect with others, especially as we get older. People deep in a hobby will gladly spend hours helping n00bs and will talk your ear off about all the ins and outs. There are also lots of hobbies that have almost no barrier to entry, just the willingness to try something new.

    We'd all be a lot happier if we spent more time on a hobby and less time streaming shows.

    numpad0(10000) 4 days ago [-]

    These guys aren't privileged ruling class elites. They have no skills and paths and connections needed to see successes in such ventures. I actually think that is how China now has 'football fields full of engineers', the competitive environment in Far East regions had been so over the top that qualities that should make somebody cream of the crop globally only float them halfway down the mug locally.

    tcholewik(10000) 3 days ago [-]

    I have taken up woodworking last year(so I am a super beginner), and in summary it nourishes my mind and soul in ways that tech world keeps failing to. Instead of getting instant gratification of buying things on Amazon, I spend weeks to build one item, and that item brings me unparalleled satisfaction, trains my patience, concentration, and unlike coding which does check some of above boxes it pulls me away from my computer which is responsible for half a dozen of my bad habits. In a way it takes me back to era before content overload, consumerism, and capitalism.

    I know that this very vague but there is a lot that is coursing through my head as I'm reading your question. I am happy to answer any more specific questions. I also took up couple (black)smithing projects and they are very satisfying as well, just harder to start with.

    WJW(2595) 4 days ago [-]

    Wow 10 micron is a lot smaller than I thought a handmade wood shaving would be. Th champions are even better in the single digits consistently.

    temp0826(10000) 4 days ago [-]

    The picture of the winners had '3 4 5' and '4 4 4' which I think is 3 measurements on each of the cuts

    tcholewik(10000) 3 days ago [-]

    10 microns is about the size of cells that make up wood shaving. And those traditional hand planners used in this competition existed for couple thousand years now with very few changes(most recent on being the chip breaker brought to japan about 200 years ago). I might be repeating stuff for the article but I read this one long ago so no longer sure if it includes this. Lol

    bamboozled(3414) 4 days ago [-]

    Japanese hand plane has to be one of the most satisfying tools to use...if you're into wood working, really worth trying one.

    cinntaile(3393) 4 days ago [-]

    What's the difference between a regular hand plane and a Japanese one? They look quite similar to me?





    Historical Discussions: JSX over the Wire (April 15, 2025: 261 points)

    (261) JSX over the Wire

    261 points 3 days ago by danabramov in 816th position

    overreacted.io | Estimated reading time – 163 minutes | comments | anchor

    Suppose you have an API route that returns some data as JSON:

    app.get('/api/likes/:postId', async (req, res) => {
      const postId = req.params.postId;
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      const json = {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        friendLikes: friendLikes,
      };
      res.json(json);
    });

    You also have a React component that needs that data:

    function LikeButton({
      totalLikeCount,
      isLikedByUser,
      friendLikes
    }) {
      let buttonText = 'Like';
      if (totalLikeCount > 0) {
        // e.g. 'Liked by You, Alice, and 13 others'
        buttonText = formatLikeText(totalLikeCount, isLikedByUser, friendLikes);
      }
      return (
        <button className={isLikedByUser ? 'liked' : ''}>
          {buttonText}
        </button>
      );
    }

    How do you get that data into that component?

    You could pass it from a parent component using some data fetching library:

    function PostLikeButton({ postId }) {
      const [json, isLoading] = useData(`/api/likes/${postId}`);
      // ...
      return (
        <LikeButton
          totalLikeCount={json.totalLikeCount}
          isLikedByUser={json.isLikedByUser}
          friendLikes={json.friendLikes}
        />
      );
    }

    That's one way of thinking about it.

    But have another look at your API:

    app.get('/api/likes/:postId', async (req, res) => {
      const postId = req.params.postId;
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      const json = {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        friendLikes: friendLikes,
      };
      res.json(json);
    });

    Do these lines remind you of anything?

    Props. You're passing props. You just didn't specify where to.

    But you already know their final destination—LikeButton.

    Why not just fill that in?

    app.get('/api/likes/:postId', async (req, res) => {
      const postId = req.params.postId;
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      const json = (
        <LikeButton
          totalLikeCount={post.totalLikeCount}
          isLikedByUser={post.isLikedByUser}
          friendLikes={friendLikes}
        />
      );
      res.json(json);
    });

    Now the "parent component" of LikeButton is the API itself.

    Wait, what?

    Weird, I know. We're going to worry about whether it's a good idea later. But for now, notice how this inverts the relationship between components and the API. This is sometimes known as the Hollywood Principle: "Don't call me, I'll call you."

    Your components don't call your API.

    Instead, your API returns your components.

    Why would you ever want to do that?



    There is a fundamental tension between how we want to store information and how we want to display it. We generally want to store more things than we display.

    For example, consider a Like button on a Post. When we store Likes for a given Post, we might want to represent them as a table of Like rows like this:

    type Like = {
      createdAt: string, // Timestamp
      likedById: number, // User ID
      postId: number     // Post ID
    };

    Let's call this kind of data a "Model". It represents the raw shape of the data.

    So our Likes database table might contain data of that shape:

    [{
      createdAt: '2025-04-13T02:04:41.668Z',
      likedById: 123,
      postId: 1001
    }, {
      createdAt: '2025-04-13T02:04:42.668Z',
      likedById: 456,
      postId: 1001
    }, {
      createdAt: '2025-04-13T02:04:43.668Z',
      likedById: 789,
      postId: 1002
    }, /* ... */]

    However, what we want to display to the user is different.

    What we want to display is the number of Likes for that Post, whether the user has already liked it, and the names of their friends who also liked it. For example, the Like button could appear pressed in (which means that you already liked this post) and say "You, Alice, and 13 others liked this." Or "Alice, Bob, and 12 others liked this."

    type LikeButtonProps = {
      totalLikeCount: number,
      isLikedByUser: boolean,
      friendLikes: string[]
    }

    Let's call this kind of data a "ViewModel".

    type ViewModel = LikeButtonProps;

    A ViewModel represents data in a way that is directly consumable by the UI (i.e the view). It is often significantly different from the raw Model. In our example:

    • ViewModel's totalLikeCount is aggregated from individual Like models.
    • ViewModel's isLikedByUser is personalized and depends on the user.
    • ViewModel's friendLikes is both aggregated and personalized. To calculate it, you'd have to takes the Likes for this post, filter them down to likes from friends, and get the first few friends' names (which are likely stored in a different table).

    Clearly, Models will need to turn into ViewModels at some point. The question is where and when this happens in the code, and how that code evolves over time.


    The most common way to solve this problem is to expose some kind of a JSON API that the client can hit to assemble the ViewModel. There are different ways to design such an API, but the most common way is what's usually known as REST.

    The typical way to approach REST (let's say we've never read this article) is to pick some "Resources"—such as a Post, or a Like—and provide JSON API endpoints that list, create, update, and delete such Resources. Naturally, REST does not specify anything about how you should shape these Resources so there's a lot of flexibility.

    Often, you might start by returning the shape of the Model:

    // GET /api/post/123
    {
      title: 'My Post',
      content: 'Hello world...',
      authorId: 123,
      createdAt: '2025-04-13T02:04:40.668Z'
    }

    So far so good. But how would you incorporate Likes into this? Maybe totalLikeCount and isLikedByUser could be a part of the Post Resource:

    // GET /api/post/123
    {
      title: 'My Post',
      content: 'Hello world...',
      authorId: 123,
      createdAt: '2025-04-13T02:04:40.668Z',
      totalLikeCount: 13,
      isLikedByUser: true
    }

    Now, should friendLikes also go there? We need this information on the client.

    // GET /api/post/123
    {
      title: 'My Post',
      authorId: 123,
      content: 'Hello world...',
      createdAt: '2025-04-13T02:04:40.668Z',
      totalLikeCount: 13,
      isLikedByUser: true,
      friendLikes: ['Alice', 'Bob']
    }

    Or are we starting to abuse the notion of a Post by adding too much stuff to it? Okay, how about this, maybe we could offer a separate endpoint for a Post's Likes:

    // GET /api/post/123/likes
    {
      totalCount: 13,
      likes: [{
        createdAt: '2025-04-13T02:04:41.668Z',
        likedById: 123,
      }, {
        createdAt: '2025-04-13T02:04:42.668Z',
        likedById: 768,
      }, /* ... */]
    }

    So a Post's Like becomes its own "Resource".

    That's nice in theory but we're going to need to know the likers' names, and we don't want to make a request for each Like. So we need to "expand" the users here:

    // GET /api/post/123/likes
    {
      totalCount: 13,
      likes: [{
        createdAt: '2025-04-13T02:04:41.668Z',
        likedBy: {
          id: 123,
          firstName: 'Alice',
          lastName: 'Lovelace'
        }
      }, {
        createdAt: '2025-04-13T02:04:42.668Z',
        likedBy: {
          id: 768,
          firstName: 'Bob',
          lastName: 'Babbage'
        }
      }]
    }

    We also "forgot" which of these Likes are from friends. Should we solve this by having a separate /api/post/123/friend-likes endpoint? Or should we order by friends first and include isFriend into the likes array items so we can disambiguate friends from other likes? Or should we add ?filter=friends?

    Or should we include the friend likes directly into the Post to avoid two API calls?

    // GET /api/post/123
    {
      title: 'My Post',
      authorId: 123,
      content: 'Hello world...',
      createdAt: '2025-04-13T02:04:40.668Z',
      totalLikeCount: 13,
      isLikedByUser: true,
      friendLikes: [{
        createdAt: '2025-04-13T02:04:41.668Z',
        likedBy: {
          id: 123,
          firstName: 'Alice',
          lastName: 'Lovelace'
        }
      }, {
        createdAt: '2025-04-13T02:04:42.668Z',
        likedBy: {
          id: 768,
          firstName: 'Bob',
          lastName: 'Babbage'
        }
      }]
    }

    This seems useful but what if /api/post/123 gets called from other screens that don't need this information—and you'd rather not slow them down? Maybe there could be an opt-in like /api/post/123?expand=friendLikes?

    Anyway, the point I'm trying to make here is not that it's impossible to design a good REST API. The vast majority of apps I've seen works this way so it's at the very least doable. But anyone who designed one and then worked on it for more than a few months knows the drill. Evolving REST endpoints is a pain in the ass.

    It usually goes like this:

    1. Initially, you have to decide how to structure the JSON output. None of the options are clearly better than the rest; mostly you're just guessing how the app will evolve.
    2. The initial decisions tend to settle down after a few back-and-forth iterations... until the next UI redesign which causes ViewModels to have slightly different shapes. The already existing REST endpoints don't quite cover the new needs.
    3. It's possible to add new REST API endpoints, but at some point you're not really "supposed to" add more because you already defined all the possible Resources. For example, if /posts/123 exists, you likely won't add another "get post" API.
    4. Now you're running into issues with calculating and sending either not enough or too much data. You either aggressively "expand" fields in the existing Resources or come up with an elaborate set of conventions for doing it on-demand.
    5. Some ViewModels are only needed by a subset of screens but they're always included in the response because that's easier than making them configurable.
    6. Some screens resort to cobbling their ViewModels together from multiple API calls because no single response contains all the necessary information anymore.
    7. Then the design and the functionality of your product changes again. Repeat.

    There's clearly some fundamental tension here, but what is causing it?

    First, note how the shape of the ViewModels is determined by the UI. It's not a reflection of some platonic idea of a Like; rather, it's dictated by the design. We want to show "You, Ann, and 13 others liked this", therefore we need these fields:

    type LikeButtonProps = {
      totalLikeCount: number,
      isLikedByUser: boolean,
      friendLikes: string[]
    }

    If this screen's design or functionality changes (for example, if you want to show the avatars of your friends who liked the post), the ViewModel will change as well:

    type LikeButtonProps = {
      totalLikeCount: number,
      isLikedByUser: boolean,
      friendLikes: {
        firstName: string
        avatar: string
      }[]
    }

    But here's the rub.

    REST (or, rather, how REST is broadly used) encourages you to think in terms of Resources rather than Models or ViewModels. At first, your Resources start out as mirroring Models. But a single Model rarely has enough data for a screen, so you develop ad-hoc conventions for nesting Models in a Resource. However, including all the relevant Models (e.g. all Likes of a Post) is often impossible or impractical, so you start adding ViewModel-ish fields like friendLikes to your Resources.

    But putting ViewModels in Resources also doesn't work very well. ViewModels are not abstract concepts like "a post"; each ViewModel describes a specific piece of UI. As a result, the shape of your "Post" Resource grows to encompass the needs of every screen displaying a post. But those needs also change over time, so the "Post" Resource's shape is at best a compromise between what different screens need now, and at worst a fossilized record of everything they've ever needed in the past.

    Let me put this more bluntly:

    REST Resources don't have a firm grounding in the reality. Their shapes are not sufficiently constrained—we're making up concepts mostly out of thin air. Unlike Models, they're not grounded in the reality of how the data is stored. And unlike ViewModels, they're not grounded in the reality of how the data is presented. Unfortunately, nudging them in either direction only makes things worse.

    If you keep REST Resources close to the Models, you'll hurt the user experience. Now things that could be fetched in a single request would require a couple or, god forbid, N calls. This is especially noticeable in products from companies where the backend team "hands off" a REST API to the frontend team and takes no feedback. The API may look simple and elegant but it is completely impractical to consume.

    On the other hand, if you nudge REST Resources to stay closer to the ViewModels, you're hurting maintainability. ViewModels are fickle! Most ViewModels are going to change the next time the corresponding piece of UI is redesigned. But changing the shape of REST Resources is hard—the same Resources are being fetched by many screens. As a result, their shape gradually drifts away from the needs of the current ViewModels, and becomes difficult to evolve. There's a reason the backend teams often resist adding UI-specific fields to the response: they'll likely get stale!

    This doesn't necessarily mean that REST itself, as it's broadly understood, is broken. It can be very nice to use when the Resources are well-defined and their fields are well-chosen. But this often goes against the client's needs, which is to get all the data for a particular screen. There's something missing in the middle.

    We need a translation layer.


    There is a way to resolve this tension.

    You have some latitude with how exactly you could approach it but the main idea is that your client should be able to request all data for a specific screen at once.

    It's such a simple idea!

    Instead of requesting "canonical" REST Resources from the client such as:

    GET /data/post/123       # Get Post Resource
    GET /data/post/123/likes # Get Post Likes Resource

    you request a ViewModel for a specific screen (i.e. a route):

    GET /screens/post-details/123 # Get ViewModel for the PostDetails screen

    This data would include everything that screen needs.

    The difference is subtle but profound. You're no longer trying to define a universal canonical shape of a Post. Rather, you send whatever data the PostDetails screen needs in order to display its components today. If the PostDetails screen gets deleted, this endpoint gets deleted too. If a different screen wants to display some related information (for example, a PostLikedBy popup), it will gets its own route:

    GET /screens/post-details/123 # Get ViewModel for the PostDetails screen
    GET /screens/post-liked-by/123 # Get ViewModel for the PostLikedBy screen

    Okay, but how does this help?

    This avoids the trap of "ungrounded" abstraction. The ViewModel interface for every screen precisely specifies the shape of the server response. If you need to change it or fine-tune it, you can do that without affecting any other screens.

    For example, a PostDetails screen ViewModel might look like this:

    type PostDetailsViewModel = {
      postTitle: string,
      postContent: string,
      postAuthor: {
        name: string,
        avatar: string,
        id: number
      },
      friendLikes: {
        totalLikeCount: number,
        isLikedByUser: boolean,
        friendLikes: string[]
      }
    };

    So that's what the server would return for /screens/post-details/123. Later, if you want to display avatars of friend likes, you'd just add it to that ViewModel:

    type PostDetailsViewModel = {
      postTitle: string,
      postContent: string,
      postAuthor: {
        name: string,
        avatar: string,
        id: number
      },
      friendLikes: {
        totalLikeCount: number,
        isLikedByUser: boolean,
        friendLikes: {
          firstName: string
          avatar: string
        }[]
      }
    }

    Note that you'd only have to update that screen's endpoint. You're no longer forced to balance what one screen needs with what another screen needs. There are no questions like "which Resource does this field belong to?", or whether it should be "expanded". If some screen needs more data than others, you can just include more data in that screen's response—it doesn't have to be generic or configurable. The shape of the server response is exactly determined by each screen's needs.

    This does solve the stated problems with REST.

    It also introduces a few novel questions:

    1. There's going to be a lot more endpoints than with REST Resources—an endpoint per screen. How will these endpoints be structured and kept maintainable?
    2. How do you reuse code between the endpoints? Presumably there would be a lot of duplicated data access and other business logic between those endpoints.
    3. How do you convince the backend team to pivot from their REST APIs to this?

    The last question is probably the first we need to resolve. The backend team will likely have very warranted reservations about this approach. At the very least, if this approach proves terrible, it would be good to have a way to migrate back.

    Luckily, there's no need to throw anything away.


    Instead or replacing your existing REST API, you can add a new layer in front of it:

    // You're adding new screen-specific endpoints...
    app.get('/screen/post-details/:postId', async (req, res) => {
      const [post, friendLikes] = await Promise.all([
        // ...which call your existing REST API here
        fetch(`/api/post/${postId}`).then(r => r.json()),
        fetch(`/api/post/${postId}/friend-likes`).then(r => r.json()),
      ]);
      const viewModel = {
        postTitle: post.title,
        postContent: parseMarkdown(post.content),
        postAuthor: post.author,
        postLikes: {
          totalLikeCount: post.totalLikeCount,
          isLikedByUser: post.isLikedByUser,
          friendLikes: friendLikes.likes.map(l => l.firstName)
        }
      };
      res.json(viewModel);
    });

    This is not a new idea. Such a layer is often called BFF, or Backend for Frontend. In this case, the job of the BFF is to adapt your REST API to returning ViewModels.

    If some screen needs more data, a BFF lets you serve more data to it without changing your entire data model. It keeps screen-specific changes scoped. Crucially, it lets you deliver all the data any screen needs in a single roundtrip.

    The BFF doesn't have to be written in the same language as your REST API. For reasons we'll get into later, it's advantageous to write BFF in the same language as your frontend code. You can think of it as a piece of the frontend that happens to run on the server. It's like the frontend's "ambassador" to the server. It "adapts" the REST responses into the shape that each screen of the frontend UI actually wants.

    Although you can get some of the benefits of BFF with client-only per-route loaders like clientLoader in React Router, there's a lot you unlock by actually deploying this layer on the server close to where the REST endpoints are deployed.

    For example, even if you do have to make several REST API requests serially one after another to load all the necessary data for a screen, the latency between the BFF and your REST API would be much lower than when making multiple serial requests from the client. If your REST API responses are fast on the internal network, you can cut down literal seconds of what used to be client/sever waterfalls without actually parallelizing the (sometimes inevitable) serial calls.

    A BFF also lets you apply data transformations before sending data to the client, which can significantly improve performance on low-end client devices. You can even go as far as to cache or persist some computations on the disk, even between different users, since you have access to the disk—and to server caches like Redis. In that sense, a BFF lets a frontend team have their very own little slice of the server.

    Importantly, a BFF gives you a way to experiment with alternatives to your REST APIs without affecting the client application. For example, if your REST API has no other consumers, you can turn it into an internal microservice and avoid exposing it to the world. Moreover, you could turn it into a data access layer rather than an HTTP service, and simply import that data access layer in-process from your BFF:

    import { getPost, getFriendLikes } from '@your-company/data-layer';
     
    app.get('/screen/post-details/:postId', async (req, res) => {
      const postId = req.params.postId;
      const [post, friendLikes] = await Promise.all([
        // Reads from an ORM and applies business logic.
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      const viewModel = {
        postTitle: post.title,
        postContent: parseMarkdown(post.content),
        postAuthor: post.author,
        postLikes: {
          totalLikeCount: post.totalLikeCount,
          isLikedByUser: post.isLikedByUser,
          friendLikes: friendLikes.likes.map(l => l.firstName)
        }
      };
      res.json(viewModel);
    });

    (Of course, this part only works if you can write lower-level backend logic in JS.)

    This can help you avoid problems like loading the same information many times from the database (no fetch calls means database reads can be batched). It also lets you "drop down" the abstraction level when needed—for example, to run a fine-tuned stored database procedure that isn't neatly exposed over the REST API.

    There's a lot to like about the BFF pattern. It solves quite a few problems but it also raises new questions. For example, how do you organize its code? If each screen is essentially its own API method, how do you avoid duplication of code? And how do you keep your BFF synchronized with data requirements of the front-end side?

    Let's try to make some progress on answering those.


    Suppose you're adding a new PostList screen. It's going to render an array of <PostDetails> components, each of which needs the same data as before:

    type PostDetailsViewModel = {
      postTitle: string,
      postContent: string,
      postAuthor: {
        name: string,
        avatar: string,
        id: number
      },
      friendLikes: {
        totalLikeCount: number,
        isLikedByUser: boolean,
        friendLikes: string[]
      }
    };

    So the ViewModel for PostList contains an array of PostDetailsViewModel:

    type PostListViewModel = {
      posts: PostDetailsViewModel[]
    };

    How would you load the data for PostList?

    Your first inclination may be to make a series of requests from the client to the existing /screen/post-details/:postId endpoint which already knows how to prepare a ViewModel for a single post. We just need to call it for every post.

    But wait, this defeats the entire purpose of the BFF! Making many requests for a single screen is inefficient and is precisely the kind of compromise that we've been trying to avoid. Instead, we'll add a new BFF endpoint for the new screen.

    The new endpoint might initially look like this:

    import { getPost, getFriendLikes, getRecentPostIds } from '@your-company/data-layer';
     
    app.get('/screen/post-details/:postId', async (req, res) => {
      const postId = req.params.postId;
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      const viewModel = {
        postTitle: post.title,
        postContent: parseMarkdown(post.content),
        postAuthor: post.author,
        postLikes: {
          totalLikeCount: post.totalLikeCount,
          isLikedByUser: post.isLikedByUser,
          friendLikes: friendLikes.likes.map(l => l.firstName)
        }
      };
      res.json(viewModel);
    });
     
    app.get('/screen/post-list', async (req, res) => {
      // Grab the recent post IDs
      const postIds = await getRecentPostIds();
      const viewModel = {
        // For each post ID, load the data in parallel
        posts: await Promise.all(postIds.map(async postId => {
          const [post, friendLikes] = await Promise.all([
            getPost(postId),
            getFriendLikes(postId, { limit: 2 }),
          ]);
          const postDetailsViewModel = {
            postTitle: post.title,
            postContent: parseMarkdown(post.content),
            postAuthor: post.author,
            postLikes: {
              totalLikeCount: post.totalLikeCount,
              isLikedByUser: post.isLikedByUser,
              friendLikes: friendLikes.likes.map(l => l.firstName)
            }
          };
          return postDetailsViewModel;
        }))
      };
      res.json(viewModel);
    });

    However, note that there's significant code duplication between the endpoints:

    import { getPost, getFriendLikes, getRecentPostIds } from '@your-company/data-layer';
     
    app.get('/screen/post-details/:postId', async (req, res) => {
      const postId = req.params.postId;
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      const viewModel = {
        postTitle: post.title,
        postContent: parseMarkdown(post.content),
        postAuthor: post.author,
        postLikes: {
          totalLikeCount: post.totalLikeCount,
          isLikedByUser: post.isLikedByUser,
          friendLikes: friendLikes.likes.map(l => l.firstName)
        }
      };
      res.json(viewModel);
    });
     
    app.get('/screen/post-list', async (req, res) => {
      const postIds = await getRecentPostIds();
      const viewModel = {
        posts: await Promise.all(postIds.map(async postId => {
          const [post, friendLikes] = await Promise.all([
            getPost(postId),
            getFriendLikes(postId, { limit: 2 }),
          ]);
          const postDetailsViewModel = {
            postTitle: post.title,
            postAuthor: post.author,
            postContent: parseMarkdown(post.content),
            postLikes: {
              totalLikeCount: post.totalLikeCount,
              isLikedByUser: post.isLikedByUser,
              friendLikes: friendLikes.likes.map(l => l.firstName)
            }
          };
          return postDetailsViewModel;
        }))
      };
      res.json(viewModel);
    });

    It's almost like there is a notion of "PostDetails ViewModel" begging to be extracted. This should not be surprising—both screens render the same <PostDetails> component, so they need similar code to load the data for it.


    Let's extract a PostDetailsViewModel function:

    import { getPost, getFriendLikes, getRecentPostIds } from '@your-company/data-layer';
     
    async function PostDetailsViewModel({ postId }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      return {
        postTitle: post.title,
        postContent: parseMarkdown(post.content),
        postAuthor: post.author,
        postLikes: {
          totalLikeCount: post.totalLikeCount,
          isLikedByUser: post.isLikedByUser,
          friendLikes: friendLikes.likes.map(l => l.firstName)
        }
      };
    }
     
    app.get('/screen/post-details/:postId', async (req, res) => {
      const postId = req.params.postId;
      const viewModel = await PostDetailsViewModel({ postId });
      res.json(viewModel);
    });
     
    app.get('/screen/post-list', async (req, res) => {
      const postIds = await getRecentPostIds();
      const viewModel = {
        posts: await Promise.all(postIds.map(postId =>
          PostDetailsViewModel({ postId })
        ))
      };
      res.json(viewModel);
    });

    This makes our BFF endpoints significantly simpler.

    In fact, we can go a bit further. Look at this part of PostDetailsViewModel:

    async function PostDetailsViewModel({ postId }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      return {
        postTitle: post.title,
        postContent: parseMarkdown(post.content),
        postAuthor: post.author,
        postLikes: {
          totalLikeCount: post.totalLikeCount,
          isLikedByUser: post.isLikedByUser,
          friendLikes: friendLikes.likes.map(l => l.firstName)
        }
      };
    }

    We know that the purpose of the postLikes field is to eventually become props for the LikeButton component—i.e. this field is LikeButton's ViewModel:

    function LikeButton({
      totalLikeCount,
      isLikedByUser,
      friendLikes
    }) {
      // ...
    }

    So let's extract the logic preparing these props into LikeButtonViewModel:

    import { getPost, getFriendLikes, getRecentPostIds } from '@your-company/data-layer';
     
    async function LikeButtonViewModel({ postId }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      return {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        friendLikes: friendLikes.likes.map(l => l.firstName)
      };
    }
     
    async function PostDetailsViewModel({ postId }) {
      const [post, postLikes] = await Promise.all([
        getPost(postId), // It's fine to getPost() here again. Our data layer deduplicates calls via an in-memory cache.
        LikeButtonViewModel({ postId }),
      ]);
      return {
        postTitle: post.title,
        postContent: parseMarkdown(post.content),
        postAuthor: post.author,
        postLikes
      };
    }

    Now we have a tree of functions that load data as JSON—our ViewModels.

    Depending on your background, this might remind you of a few other things. It might remind you of composing Redux reducers out of smaller reducers. It might also remind you of composing GraphQL fragments out of smaller fragments. Or it might remind you of composing React components from other React components.

    Although the code style is a little verbose now, there is something oddly satisfying in breaking apart a screen's ViewModel into smaller ViewModels. It feels similar to writing a React component tree, except that we're decomposing a backend API. It's like the data has its own shape but it roughly lines up with your React component tree.

    Let's see what happens when the UI needs to evolve.


    Suppose the UI design changes, and we want to display friends' avatars too:

    type LikeButtonProps = {
      totalLikeCount: number,
      isLikedByUser: boolean,
      friendLikes: {
        firstName: string
        avatar: string
      }[]
    }

    Assuming we use TypeScript, we'll immediately get a type error in the ViewModel:

    async function LikeButtonViewModel(
      { postId } : { postId: number }
    ) : LikeButtonProps {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      return {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        // 🔴 Type 'string[]' is not assignable to type '{ firstName: string; avatar: string; }[]'.
        friendLikes: friendLikes.likes.map(l => l.firstName)
      };
    }

    Let's fix it:

    async function LikeButtonViewModel(
      { postId } : { postId: number }
    ) : LikeButtonProps {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      return {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        friendLikes: friendLikes.likes.map(l => ({
          firstName: l.firstName,
          avatar: l.avatar,
        }))
      };
    }

    Now the BFF response for every screen that includes a LikeButton ViewModel will use the new friendLikes format, which is exactly what the LikeButton React component wants. There are no further changes to make—it just works. We know that it works because LikeButtonViewModel is the only place generating props for a LikeButton, no matter which screen we're requesting from the BFF. (For now assume that this is true; we're still yet to decide how exactly to tie them.)

    I'd like to call attention to the previous fact because this is quite profound.

    When was the last time you could clearly trace the correspondence between a deeply nested piece of server code generating a fragment of data, and a deeply nested piece of the client code consuming that data? We're clearly onto something.


    You might have noticed that ViewModel functions can take parameters. Importantly, these parameters can be specified by the "parent" ViewModel functions and plumbed down—so the client doesn't need to be aware of them.

    For example, suppose you wanted to make the Post List page only display the first paragraph of every post's content. Let's add a parameter to its ViewModel:

    async function PostDetailsViewModel({
      postId,
      truncateContent
    }) {
      const [post, postLikes] = await Promise.all([
        getPost(postId),
        LikeButtonViewModel({ postId }),
      ]);
      return {
        postTitle: post.title,
        postContent: parseMarkdown(post.content, {
          maxParagraphs: truncateContent ? 1 : undefined
        }),
        postAuthor: post.author,
        postLikes
      };
    }
     
    app.get('/screen/post-details/:postId', async (req, res) => {
      const postId = req.params.postId;
      const viewModel = await PostDetailsViewModel({
        postId,
        truncateContent: false
      });
      res.json(viewModel);
    });
     
    app.get('/screen/post-list', async (req, res) => {
      const postIds = await getRecentPostIds();
      const viewModel = {
        posts: await Promise.all(postIds.map(postId =>
          PostDetailsViewModel({
            postId,
            truncateContent: true
          })
        ))
      };
      res.json(viewModel);
    });

    The JSON response for the post-details endpoint still includes the entire posts, but the post-list JSON endpoint will now only serve their abridged summaries. This is a view model concern, and now we have a natural place to express it in code.


    Next, suppose you wanted to include avatars only on the Details screen. Let's edit LikeButtonViewModel to take and respect an includeAvatars parameter:

    async function LikeButtonViewModel({
      postId,
      includeAvatars
    }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: 2 }),
      ]);
      return {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        friendLikes: friendLikes.likes.map(l => ({
          firstName: l.firstName,
          avatar: includeAvatars ? l.avatar : null,
        }))
      };
    }

    Now you can plumb it down all the way from the BFF endpoints:

    async function PostDetailsViewModel({
      postId,
      truncateContent,
      includeAvatars
    }) {
      const [post, postLikes] = await Promise.all([
        getPost(postId),
        LikeButtonViewModel({ postId, includeAvatars }),
      ]);
      return {
        postTitle: post.title,
        postContent: parseMarkdown(post.content, {
          maxParagraphs: truncateContent ? 1 : undefined
        }),
        postAuthor: post.author,
        postLikes
      };
    }
     
    app.get('/screen/post-details/:postId', async (req, res) => {
      const postId = req.params.postId;
      const viewModel = await PostDetailsViewModel({
        postId,
        truncateContent: false,
        includeAvatars: true
      });
      res.json(viewModel);
    });
     
    app.get('/screen/post-list', async (req, res) => {
      const postIds = await getRecentPostIds();
      const viewModel = {
        posts: await Promise.all(postIds.map(postId =>
          PostDetailsViewModel({
            postId,
            truncateContent: true,
            includeAvatars: false
          })
        ))
      };
      res.json(viewModel);
    });

    Again, the client doesn't pass ad-hoc parameters like ?includeAvatars=true to the server to ensure that the avatars are included in the JSON response. Instead, the post-list BFF endpoint itself knows a Post List shouldn't include avatars, so it can pass includeAvatars: false to PostDetailsViewModel, which plumbs it down to LikeButtonViewModel. The client code doesn't need to be aware of the server logic at all—all it cares about is that it gets the props that it wants.

    For the case when we do show avatars of friends, we might want to show five rather than two. We can make that change directly in LikeButtonViewModel:

    async function LikeButtonViewModel({
      postId,
      includeAvatars
    }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: includeAvatars ? 5 : 2 }),
      ]);
      return {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        friendLikes: friendLikes.likes.map(l => ({
          firstName: l.firstName,
          avatar: includeAvatars ? l.avatar : null,
        }))
      };
    }

    Since the LikeButtonViewModel function exists solely to generate the LikeButton props, adding more presentational logic here feels natural. It's a view model, right? If another view wanted to show a different number of avatars, it could do that. Unlike with REST, there is no canonical notion of a "post"—so any UI can specify exactly the data it needs, from a screen all the way down to a button.

    Our ViewModels evolve in the exact lockstep with the needs of the client.


    Something interesting is taking shape. We've started to split our BFF endpoints into units of reusable logic, and we've found that these units let us encapsulate data loading in a similar way as we've been encapsulating the user interface. If you squint at ViewModels, you might even see some parallels to components.

    And yet the end result of the ViewModel tree is not a UI tree—it's just JSON.

    // GET /screen/post-list
    {
      /* Begin screen/post-list ViewModel */
      posts: [{
        /* Begin PostDetailsViewModel */
        postTitle: 'JSX Over The Wire',
        postAuthor: 'Dan',
        postContent: 'Suppose you have an API route that returns some data as JSON.',
        postLikes: {
          /* Begin LikeButtonViewModel */
          totalLikeCount: 8,
          isLikedByUser: false,
          friendLikes: [{
            firstName: 'Alice'
          }, {
            firstName: 'Bob'
          }]
          /* End LikeButtonViewModel */
        }
        /* End PostDetailsViewModel */
      }, {
        /* Begin PostDetailsViewModel */
        postTitle: 'React for Two Computers',
        postAuthor: 'Dan',
        postContent: 'I've been trying to write this post at least a dozen times.',
        postLikes: {
          /* Begin LikeButtonViewModel */
          totalLikeCount: 13,
          isLikedByUser: true,
          friendLikes: [{
            firstName: 'Bob'
          }]
          /* End LikeButtonViewModel */
        }
        /* End PostDetailsViewModel */
      }]
    }

    But what should we do with that JSON?

    In the end, somehow we want the props generated by LikeButtonViewModel to end up in the LikeButton component. Likewise, somehow we want the props generated by PostDetailsViewModel to get to the PostDetails component. We don't want to generate a huge ViewModel tree of JSON just to manually plumb every piece of it down exactly to the component that needs that ViewModel's data.

    We're building two parallel hierarchies in the two worlds.

    But these worlds are not connected yet.

    Something is missing.


    • For any UI, the data begins its life as Models and ends its life as ViewModels. The transformation between Models and ViewModels has to happen somewhere.
    • The shape of ViewModels is fully dictated by the design of our user interface. This means that they will evolve over time together with our designs. Also, different screens need different ViewModels aggregated from the same underlying Models.
    • Modeling data from the server as REST Resources creates a tension. If REST Resources are close to raw Models, it may require multiple roundtrips and complex ad-hoc conventions to obtain the necessary ViewModels for a screen. If REST Resources are close to ViewModels, they get too coupled to the initial screens they were designed to represent, and don't evolve together with the needs of the client.
    • We can resolve this tension by creating another layer—a Backend For Frontend (BFF). The job of the BFF is to translate the needs of the client ("give me data for this screen") to REST calls on the backend. A BFF can also evolve beyond being a facade for REST, and instead load data directly using an in-process data layer.
    • Since the BFF's job is to return all the data needed for each screen as a piece of JSON, it is natural to split up the data loading logic into reusable units. A screen's ViewModel can be decomposed into a tree of ViewModels, corresponding to the pieces of server data that different components will want to receive on the client. These individual ViewModels can then be recombined and composed together.
    • These ViewModel functions can pass information to each other. This lets us customize the JSON we're sending depending on the screen. Unlike with REST, we're no longer trying to design canonical shapes like a "a post object" used throughout all responses. At any point, we can diverge and serve different ViewModels for the same information to different screens—whatever they want. These ViewModels are view models. They can—should?—have presentation logic.
    • We're beginning to realize that ViewModels form a very similar structure to React components. ViewModels are like components, but for generating JSON. However, we still haven't figured out how to actually pass the JSON they're generating on the server to the components that need it on the client. It's also annoying to deal with two parallel hierarchies. We're onto something, but we're missing something.

    What are we missing?


    JSON, MVVM, BFF, what the hell was that?!

    What an incredibly overengineered way to make a website. These React complexity peddlers are so out of touch. If only they knew the history.

    Back in my days, we'd just write a bit of HTML and call it a day.

    My index.html homepage would look like this:

    <html>
      <body>
        <h1>Welcome to my blog!</h1>
        <h2>Latest posts</h2>
        <h3>
          <a href='/jsx-over-the-wire.html'>
            JSX Over The Wire
          </a>
        </h3>
        <p>
          Suppose you have an API route that returns some data as JSON. [...]
        </p>
        <h3>
          <a href='/react-for-two-computers.html'>
            React for Two Computers
          </a>
        </h3>
        <p>
          I've been trying to write this post at least a dozen times. [...]
        </p>
        ...
      </body>
    </html>

    Then my jsx-over-the-wire.html post details page would look like this:

    <html>
      <body>
        <h1>JSX Over The Wire</h1>
        <p>
          Suppose you have an API route that returns some data as JSON.
        </p>
        ...
      </body>
    </html>

    I'd put these files on a box with Apache and that would be it!

    Now suppose I wanted to add a footer to all my pages. That couldn't be easier. First, let me create a file called includes/footer.html with my footer:

    <marquee>
      <a href='/'>overreacted</a>
    </marquee>

    Now I can include my footer on any page with Server-Side Includes (SSI):

    <html>
      <body>
        <h1>Welcome to my blog!</h1>
        <h2>Latest posts</h2>
        ...
        <!--#include virtual='/includes/footer.html' -->
      </body>
    </html>

    In fact, I don't want to copy and paste the first paragraph of each blog post into my index.html file so I might use SSI together with CGI to generate my index page:

    <html>
      <body>
        <h1>Welcome to my blog!</h1>
        <h2>Latest posts</h2>
        <!--#include virtual='/cgi-bin/post-details.cgi?jsx-over-the-wire&truncateContent=true' -->
        <!--#include virtual='/cgi-bin/post-details.cgi?react-for-two-computers&truncateContent=true' -->
        <!--#include virtual='/includes/footer.html' -->
      </body>
    </html>

    Likewise, the details page will delegate to the same post-details.cgi script:

    <html>
      <body>
        <!--#include virtual='/cgi-bin/post-details.cgi?jsx-over-the-wire&truncateContent=false' -->
        <!--#include virtual='/includes/footer.html' -->
      </body>
    </html>

    Finally, the post-details.cgi script might talk to the database:

    #!/bin/sh
    echo 'Content-type: text/html'
    echo ''
     
    POST_ID='$(echo '$QUERY_STRING' | cut -d'&' -f1 | tr -cd '[:alnum:]._-')'
    TRUNCATE='$(echo '$QUERY_STRING' | grep -c 'truncateContent=true')'
     
    TITLE=$(mysql -u admin -p'password' -D blog --skip-column-names -e \
      'SELECT title FROM posts WHERE url='$POST_ID'')
    CONTENT=$(mysql -u admin -p'password' -D blog --skip-column-names -e \
      'SELECT content FROM posts WHERE url='$POST_ID'')
     
    if [ '$TRUNCATE' = '1' ]; then
      FIRST_PARAGRAPH='$(printf '%s' '$CONTENT' | sed '/^$/q')'
      echo '<h3><a href=\'/$POST_ID.html\'>$TITLE</a></h3>'
      echo '<p>$FIRST_PARAGRAPH [...]</p>'
    else
      echo '<h1>$TITLE</h1>'
      echo '<p>'
      echo '$CONTENT'
      echo '</p>'
    fi

    We're in the nineties, okay?

    So far everything is very simple, even if a bit tedious to write. What we have here is a server that returns all the data necessary for any given screen in one roundtrip.

    (Hmm...)

    Of course, different screens may need the same data, and we don't want to duplicate the logic. Luckily, we can reuse dynamic includes such as post-details.cgi. We can even pass parameters to them like truncateContent.

    The most annoying thing about this code is that working in Bash is really not for the faint-hearted (i.e. not for me). Let's see if we can improve on that part.


    We could translate this entire example to old school PHP, which gives us better control flow, function calls, variables, and so on. However, I want to skip ahead.

    No, not to the modern PHP MVC frameworks.

    I want to skip ahead to XHP.

    You see, the problem with the early PHP programs was that they relied on string manipulation of HTML. In that sense the PHP version doesn't improve by much:

    if ($truncate) {
      $splitContent = explode('\n\n', $content);
      $firstParagraph = $splitContent[0];
      echo '<h3><a href=\'/$postId.php\'>$title</a></h3>';
      echo '<p>$firstParagraph [...]</p>';
    } else {
      echo '<h1>$title</h1>';
      echo '<p>$content</p>';
    }

    Manipulating HTML as strings leads to code that's tangled, insecure, and difficult to maintain. Most people in the web development community took that as a signal to embrace Rails-style MVC where all the HTML was safely moved out of the code into separate files called templates (and all the data fetching moved to controllers).

    However, that's not what happened at Facebook.

    At Facebook, they had a different idea.

    The problem with PHP, said Facebook engineers, was not the manipulation of markup per se. What was bad is unprincipled manipulation of markup, i.e. treating markup as a plain string. Markup has a certain shape to it—stuff contained in other stuff. What we need is a way to build and manipulate that markup without accidentally destroying its contents or interpolating unsafe content into it:

    if ($truncate) {
      $splitContent = explode('\n\n', $content);
      $firstParagraph = $splitContent[0];
      echo
        <x:frag>
          <h3><a href={'/{$postId}.php'}>{$title}</a></h3>
          <p>{$firstParagraph} [...]</p>
        </x:frag>;
    } else {
      echo
        <x:frag>
          <h1>{$title}</h1>
          <p>{$content}</p>
        </x:frag>;
    }

    These tags are not strings of HTML! They're objects than can be turned into HTML.

    Now that we've moved markup into our code in a maintainable way, we can create our own abstractions. For example, we can define our own <ui:post-details>:

    class :ui:post-details extends :x:element {
      protected function render(): XHPRoot {
        if ($this->:truncateContent) {
          $splitContent = explode('\n\n', $this->:content);
          $firstParagraph = $splitContent[0];
          return
            <x:frag>
              <h3><a href={'/{$postId}.php'}>{$this->:title}</a></h3>
              <p>{$firstParagraph} [...]</p>
            </x:frag>;
        } else {
          return
            <x:frag>
              <h1>{$this->:title}</h1>
              <p>{$this->:content}</p>
            </x:frag>;
        }
      }
    }

    And then we can render it to the page:

    echo
      <ui:post-details
        postId='jsx-over-the-wire'
        truncateContent={true}
        title='JSX Over The Wire'
        content='Suppose you have an API route that returns some data...'
      />;

    In fact, we can build an entire web application this way. Tags render other tags, which render other tags, and so on. By eschewing the Rails-style MVC model, we've accidentally discovered a much older principle: function composition.

    One downside of XHP is that it isn't very well-suited to client interactivity. Since XHP executes on a server that emits HTML, the most that you can do relatively seamlessly is to replace parts of an existing markup with the newly generated HTML markup from the server by updating innerHTML of some DOM node.

    Replacing innerHTML wasn't working out particularly well—especially for the highly interative Ads product—which made an engineer (who was not me, by the way) wonder whether it's possible to run an XHP-style "tags render other tags" paradigm directly on the client computer without losing state between the re-renders. As you might gave guessed, this led to the invention of JSX and React.

    Who cares about React though?

    We're here to shill XHP.


    Earlier, <ui:post-details> got title and content from the calling code:

    echo
      <ui:post-details
        postId='jsx-over-the-wire'
        truncateContent={true}
        title='JSX Over The Wire'
        content='Suppose you have an API route that returns some data...'
      />;

    It was not reading title or content on its own—after all, reading them from a database is (ideally) an asynchronous operation, while XHP tags are synchronous.

    Were.

    At some point, engineers at Facebook realized that XHP tags would be a lot more powerful if they could load their own data. Async XHP tags were born:

    class :ui:post-details extends :x:element {
      use XHPAsync;
     
      protected async function asyncRender(): Awaitable<XHPRoot> {
        $post = await loadPost($this->:postId);
        $title = $post->title;
        $content = $post->content;
        // ...
      }
    }

    Now the <ui:post-details> can load its own data based on postId alone:

    class :ui:post-list extends :x:element {
      protected function render(): XHPRoot {
        return
          <x:frag>
            <ui:post-details
              postId='jsx-over-the-wire'
              truncateContent={true}
            />
            <ui:post-details
              postId='react-for-two-computers'
              truncateContent={true}
            />
            ...
          </x:frag>;
      }
    }

    This approach lets you write the entire UI as asynchronous tags rendering other asynchronous tags—until the final HTML is generated. It's a powerful way to think about UI and data. It lets you write self-contained components that load their own data, and then plug those components anywhere in the tree with a one-liner. And since XHP tags run on the server, the entire screen is resolved in a single roundtrip.

    <ui:post-list /> // An entire page of HTML

    I need to emphasize this again. Async XHP allowed self-contained components that load their own data — but! — displaying a screen took a single client/server roundtrip. There aren't many UI frameworks that satisfy both of these points.

    If you're making a similar framework, there's a few details you should think about:

    1. You want the siblings to be resolved in parallel. For example, the two <ui:post-details> above should loadPost around the same time. Async XHP did this.
    2. You also need some way to unblock the rest of the page if a particular branch of the tree is taking too long. Facebook had a BigPipe "pagelet" system that flushes the tree "in parts" with explicitly designed loading states acting as the seams.
    3. Ideally, you want a data access layer that's able to batch reads and share an in-memory cache across different parts of the request. This ensures that even if tags deeper in the tree start "fetching" later than their parents, you're utilizing both CPU and IO well—there are always some tags to render while waiting for the DB.

    Overall, async XHP was an incredibly productive mental model to work with—as long as your app was not very interactive. Unfortunately, for highly interactive apps, emitting HTML is not enough. You need to be able to navigate, handle mutations, and refresh content without losing the client-side state. Since XHP targeted HTML, it was a poor fit for rich interfaces, and React gradually took over.

    Still, as interfaces were being converted to React, there was a noticeable loss in conceptual simplicity. The UI and the data that it needs—two things that are so naturally described together—were being pulled apart into separate codebases.

    GraphQL with Relay were somewhat bridging that gap and contributed some very important innovations, but using them never felt as direct as writing async XHP.


    XHP had an unlikely comeback at Facebook.

    The mental model it offered was so productive that people didn't just want to write web interfaces with it. They also wanted to make native apps with it.

    Think about it.

    This piece of XHP is an object:

    <x:frag>
      <h1>{$this->:title}</h1>
      <p>{$this->:content}</p>
    </x:frag>

    Yes, it can be turned into a piece of HTML:

    <h1>JSX Over The Wire</h1>
    <p>Suppose you have an API route that returns some data as JSON</p>

    But it could also be turned into another representation, such as JSON:

    {
      type: 'x:frag',
      props: {
        children: [{
          type: 'h1',
          props: {
            children: 'JSX Over The Wire'
          }
        },
        {
          type: 'p',
          props: {
            children: 'Suppose you have an API route that returns some data as JSON'
          }
        }]
      }
    }

    There's nothing that actually constrains you to the primitives available in HTML. For example, <ui:post-details> could have been emitting iOS views instead:

    <x:frag>
      <ios:UITextView>{$this->:title}</ios:UITextView>
      <ios:UITextView>{$this->:content}</ios:UITextView>
    </x:frag>

    These tags could be transported as JSON over the network to a native iOS app that would read that JSON and construct a native iOS view hierarchy from these tags.

    {
      type: 'x:frag',
      props: {
        children: [{
          type: 'ios:UITextView',
          props: {
            children: 'JSX Over The Wire'
          }
        },
        {
          type: 'ios:UITextView',
          props: {
            children: 'Suppose you have an API route that returns some data as JSON'
          }
        }]
      }
    }

    Meanwhile, on the server, you can define your own tags that render those tags:

    class :ui:post-list extends :x:element {
      protected function render(): XHPRoot {
        return
          <x:frag>
            <ui:post-details
              postId='jsx-over-the-wire'
              truncateContent={true}
            />
            <ui:post-details
              postId='react-for-two-computers'
              truncateContent={true}
            />
            ...
          </x:frag>
      }
    }

    In other words, you'd have a server endpoint that returns the entire data that any particular screen needs in a single roundtrip. Where the "data" is the native UI.

    <ui:post-list /> // A screen of iOS components

    You might think this wouldn't work because a native app can't rely on a backend in the critical path. However, that's a misunderstanding of the approach. All you need to ensure is that you request more UI in the same situations as when you would make an API call, and not more often. You'll also want to have a fallback UI (like a spinner) available instantly just like when making an API call. In fact, you can even bundle the JSON for some of the initial screens directly within your app's binary.

    In practice, system components like ios:UITextView are a bit too low-level to be good primitives for this kind of format. You really want to have a good "palette" of highly interactive primitives since you want some interactions to "skip the server" and be entirely local. For example, you might implement an ios:ColorPicker primitive in the native code so that it follows your finger's movement, but persist the chosen color using a call to the API that will serve you the next screen as JSON.

    Also, if you made the primitives platform-agnostic (which Facebook did), you could use the same server codebase to assemble screens for both iOS and Android:

    <nt:flexbox flex-direction='column'>
      <nt:text font-size={24} font-weight={FontWeight::BOLD}>
        {$this->:title}
      </nt:text>
      <nt:text font-size={18}>
        {$this->:content}
      </nt:text>
    </nt:flexbox>

    Okay, returning an entire screen as JSON, has anyone done this before?


    This is not a novel idea.

    This is not even a controversial idea.

    You've heard of HTML, right? This is like HTML, but with your design system. Imagine an API endpoint that returns some UI as JSON. Let's use the JSX syntax:

    app.get('/app/profile/:personId', async (req, res) => {
      const [person, featureFlags] = await Promise.all([
        findPerson(req.params.personId),
        getFeatureFlags(req.user.id)
      ]);
     
      const json = (
        <Page title={`${person.firstName}'s Profile`}>
          <Header>
            <Avatar src={person.avatarUrl} />
            {person.isPremium && <PremiumBadge />}
          </Header>
     
          <Layout columns={featureFlags.includes('TWO_COL_LAYOUT') ? 2 : 1}>
            <Panel title='User Info'>
              <UserDetails user={person} />
              {req.user.id === person.id && <EditButton />}
            </Panel>
     
            <Panel title='Activity'>
              <ActivityFeed userId={person.id} limit={3} />
            </Panel>
          </Layout>
        </Page>
      );
     
      res.json(json);
    }

    But since you're essentially coding an API endpoint, you can do anything your API can do—check the feature flags, run server-only logic, read from the data layer.

    Again, this is not a new idea.

    In fact, it's how many of the top native apps are built. Instagram does this, Airbnb does this, Uber does this, Reddit does this, etc. These companies use in-house frameworks that implement this pattern. Many web developers are completely unaware of this pattern which is ironic because the pattern is incredibly "webby".

    In the native sphere, the pattern is known as "SDUI"—"server driven UI". This sounds fancy but essentially it's just JSON endpoints that return UI trees:

    // /app/profile/123
    {
      type: 'Page',
      props: {
        title: 'Jae's Profile',
        children: [{
          type: 'Header',
          props: {
            children: [{
              type: 'Avatar',
              props: {
                src: 'https://example.com/avatar.jpg'
              }
            }, {
              type: 'PremiumBadge',
              props: {},
            }]
          }
        }, {
          type: 'Layout',
          props: {
            columns: 2,
            children: [
              // ...
            ]
          }
        }]
      }
    }

    Then, on the native side, you have some concrete implementations of those primitives—Page, Header, Avatar, PremiumBadge, Layout, and so on.

    Ultimately, this feels like passing props from the server to the client.

    So if we ever find ourselves in a situation where we have a bunch of data prepared on the server, and we need to find a good way to pass pieces of that data to a bunch of functions declared on the client, a format like this might turn out to be handy.

    Let's keep that in mind.


    • From the beginning of time, making web apps involved responding to request for a specific screen with all the data needed for that screen. (HTML is data, too.)
    • From the beginning of time, people looked for ways to make the generation of that "data" dynamic, to split it into reusable logic, and to pass parameters to that logic.
    • In the early days of the web, it was common to compose HTML by string manipulation. Unfortunately, it was easy to mess up and led to many issues.
    • This led many in the web community to banish markup to templates. But at Facebook, XHP proposed another approach: markup that produces objects.
    • It turns out that making markup a first-class coding primitive naturally leads to tags "returning" other tags—instead of MVC, we got functional composition.
    • XHP evolved into Async XHP, which allowed to keep the logic for rendering some UI close to the logic for loading the data it needs. This was extremely powerful.
    • Unfortunately, producing HTML as the primary output format is a dead end for interactive applications. You can't "refresh" HTML in-place without blowing away the state, and state is important.
    • However, nothing actually constraints us to HTML. If tags are objects, they can be sent as JSON. Many of the most successful native apps are built this paradigm. (And if you need HTML, you can always turn JSON into HTML later on.)
    • Returning a tag of client primitives as a JSON tree is a nice way to represent "passing props" to the client.

    So far, we've explored two separate lines of thought:

    • Directly calling REST APIs from the client layer ignores the realities of how user interfaces evolve. We can solve this by adding a new backend layer that assembles the data on the server according to what each screen needs. This layer can be split into functions that each specify how to load data for a particular part of the screen. Then these functions can be composed together. However, we're not sure how to actually tie those functions to the components whose props they are preparing.
    • We can also start from plain HTML and "server includes". If we avoid early MVC-ification and instead explore treating markup as objects, we'll eventually invent the concept of asynchronous tags that load their own data and return more tags. This approach is very powerful because it lets us build self-contained components without causing multiple client/server roundtrips for fetching a single screen. Emitting HTML as the only target format is a dead end, but as proven by many top native applications using this approach, emitting JSON retains all the benefits. All you need is a set of client-side primitives that can be composed from the server.

    It turns out that these are two different ways to talk about the same thing. Ultimately, all we want is a system with these five properties:

    1. Our system lets us split a user interface into rich, interactive components.
    2. Components should have a direct connection with the logic that specifies how their server data is computed. If a component receives some information from the server, you should be a single Ctrl+Click or "Find All References" away from every place on the server where that particular component's props are being calculated. It should be straightforward to change which data is received by which component.
    3. There should be a way to make pieces of UI truly self-contained—including their server data dependencies and corresponding server logic. You should be able to nest a piece of UI inside another piece of UI without worrying what data it needs.
    4. A navigation to a new screen should be possible to complete in one client/server roundtrip. Even if you have hundreds of components that each want to load some data, from the client's perspective, a screen should arrive as a single response. In fact, we'd like our system to stand in the way of creating client/server waterfalls.
    5. We'd like our system to fully support rich interactivity. This means that, even if some parts of it run on the server, it is unacceptable to require full-page refreshes on navigation or after a mutation. In fact, the system should support in-place refreshing of server data directly within an interactive tree. A component should be able to "receive new props" from the server without losing any client state.

    Do you know any such systems? (Try scoring the frameworks you know.)

    If not, let's invent one right now.


    Let's get back to the last version of LikeButtonViewModel from earlier:

    async function LikeButtonViewModel({
      postId,
      includeAvatars
    }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: includeAvatars ? 5 : 2 }),
      ]);
      return {
        totalLikeCount: post.totalLikeCount,
        isLikedByUser: post.isLikedByUser,
        friendLikes: friendLikes.likes.map(l => ({
          firstName: l.firstName,
          avatar: includeAvatars ? l.avatar : null,
        }))
      };
    }

    This function is a slice of the backend that prepares the props for the LikeButton:

    {
      totalLikeCount: 8,
      isLikedByUser: false,
      friendLikes: [{
        firstName: 'Alice',
        avatar: 'https://example.com/alice.jpg'
      }, {
        firstName: 'Bob',
        avatar: 'https://example.com/bob.jpg'
      }]
    }

    Eventually we were hoping that the LikeButton will receive these props:

    function LikeButton({
      totalLikeCount,
      isLikedByUser,
      friendLikes
    }) {
      // ...
    }

    However, we haven't come up with any mechanism to connect the two sides yet. Who's gonna pass the JSON returned by the LikeButtonViewModel to the LikeButton component? How do we tie the ViewModels to their components?

    What if we took a page out of SDUI and expressed that by returning a tag:

    async function LikeButtonViewModel({
      postId,
      includeAvatars
    }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: includeAvatars ? 5 : 2 }),
      ]);
      return (
        <LikeButton
          totalLikeCount={post.totalLikeCount}
          isLikedByUser={post.isLikedByUser}
          friendLikes={friendLikes.likes.map(l => ({
            firstName: l.firstName,
            avatar: includeAvatars ? l.avatar : null,
          }))}
        />
      );
    }

    As we know from earlier, we can represent this JSX as a tree of JSON. In fact, it's almost like the original JSON, but now it specifies the receiving component:

    {
      type: 'LikeButton',
      props: {
        totalLikeCount: 8,
        isLikedByUser: false,
        friendLikes: [{
          firstName: 'Alice',
          avatar: 'https://example.com/alice.jpg'
        }, {
          firstName: 'Bob',
          avatar: 'https://example.com/bob.jpg'
        }]
      }
    }

    Then React on the client would know to pass these props to the LikeButton:

    function LikeButton({
      totalLikeCount,
      isLikedByUser,
      friendLikes
    }) {
      // ...
    }

    And so we've finally stitched the ViewModel and its component together!

    We've tied the code generating the props with the code consuming those props. Now our ViewModel and our component are a Ctrl+Click away from each other. Since JSX expressions are typechecked, we also get full typechecking for free.

    Have a look at the complete picture:

    async function LikeButtonViewModel({
      postId,
      includeAvatars
    }) {
      const [post, friendLikes] = await Promise.all([
        getPost(postId),
        getFriendLikes(postId, { limit: includeAvatars ? 5 : 2 }),
      ]);
      return (
        <LikeButton
          totalLikeCount={post.totalLikeCount}
          isLikedByUser={post.isLikedByUser}
          friendLikes={friendLikes.likes.map(l => ({
            firstName: l.firstName,
            avatar: includeAvatars ? l.avatar : null,
          }))}
        />
      );
    }
    function LikeButton({
      totalLikeCount,
      isLikedByUser,
      friendLikes
    }) {
      let buttonText = 'Like';
      if (totalLikeCount > 0) {
        // e.g. 'Liked by You, Alice, and 13 others'
        buttonText = formatLikeText(totalLikeCount, isLikedByUser, friendLikes);
      }
      return (
        <button className={isLikedByUser ? 'liked' : ''}>
          {buttonText}
        </button>
      );
    }

    Our ViewModel is just like an Async XHP tag, passing some information to our own <LikeButton> primitive that lives on client (just like in SDUI). Together, they represent a self-contained piece of UI that knows how to load its own data.

    Let's do this again with another ViewModel.


    Now let's revisit the PostDetailsViewModel from this section:

    async function PostDetailsViewModel({
      postId,
      truncateContent,
      includeAvatars
    }) {
      const [post, postLikes] = await Promise.all([
        getPost(postId),
        LikeButtonViewModel({ postId, includeAvatars }),
      ]);
      return {
        postTitle: post.title,
        postContent: parseMarkdown(post.content, {
          maxParagraphs: truncateContent ? 1 : undefined
        }),
        postAuthor: post.author,
        postLikes
      };
    }

    We've never explicitly written it down, but suppose that there was a matching PostDetails component that can take that JSON and actually render the post:

    function PostDetails({
      postTitle,
      postContent,
      postAuthor,
      postLikes,
    }) {
      // ...
    }

    Let's connect them together.

    First, let's change PostDetailsViewModel to return a PostDetails tag:

    async function PostDetailsViewModel({
      postId,
      truncateContent,
      includeAvatars
    }) {
      const [post, postLikes] = await Promise.all([
        getPost(postId),
        LikeButtonViewModel({ postId, includeAvatars }),
      ]);
      return (
        <PostDetails
          postTitle={post.title}
          postContent={parseMarkdown(post.content, {
            maxParagraphs: truncateContent ? 1 : undefined
          })}
          postAuthor={post.author}
          postLikes={postLikes}
        />
      );
    }

    Now the JSON it returns will be wrapped into a PostDetails JSX element:

    {
      type: 'PostDetails',
      props: {
        postTitle: 'JSX Over The Wire',
        postAuthor: 'Dan',
        postContent: 'Suppose you have an API route that returns some data as JSON.',
        postLikes: {
          type: 'LikeButton',
          props: {
            totalLikeCount: 8,
            isLikedByUser: false,
            friendLikes: [{
              firstName: 'Alice'
            }, {
              firstName: 'Bob'
            }]
          }
        }
      }
    }

    On the client, React will take these props and pass them to PostDetails:

    function PostDetails({
      postTitle,
      postContent,
      postAuthor,
      postLikes,
    }) {
      return (
        <article>
          <h1>{postTitle}</h1>
          <div dangerouslySetInnerHTML={{ __html: postContent }} />
          <p>by {postAuthor.name}</p>
          <section>
            {postLikes}
          </section>
        </article>
      );
    }

    And that connects the ViewModel with its component!


    Notice how postLikes in the last example is rendered directly into UI:

    <section>
      {postLikes}
    </section>

    We can do this because it's the <LikeButton> with its props already preconfigured by LikeButtonViewModel. It was right here in the JSON:

    {
      type: 'PostDetails',
      props: {
        // ...
        postLikes: {
          type: 'LikeButton',
          props: {
            totalLikeCount: 8,
            // ...
          }
        }
      }
    }

    You might recall that we obtained it by calling LikeButtonViewModel:

    async function PostDetailsViewModel({
      postId,
      truncateContent,
      includeAvatars
    }) {
      const [post, postLikes] = await Promise.all([
        getPost(postId),
        LikeButtonViewModel({ postId, includeAvatars }),
      ]);
      // ...

    However, having ViewModels manually call other ViewModels inside Promise.all quickly gets very tedious. So we'll adopt a new convention. Let's assume that a ViewModel can embed another ViewModel by returning a JSX tag.

    This will let us clean up the code quite a bit:

    async function PostDetailsViewModel({
      postId,
      truncateContent,
      includeAvatars
    }) {
      const post = await getPost(postId);
      return (
        <PostDetails
          postTitle={post.title}
          postContent={parseMarkdown(post.content, {
            maxParagraphs: truncateContent ? 1 : undefined
          })}
          postAuthor={post.author}
          postLikes={
            <LikeButtonViewModel
              postId={postId}
              includeAvatars={includeAvatars}
            />
          }}
        />
      );
    }

    After this change, calling PostDetailsViewModel will return "unfinished" JSON:

    {
      type: 'PostDetails', // ✅ This is a component on the client
      props: {
        postTitle: 'JSX Over The Wire',
        // ...
        postLikes: {
          type: LikeButtonViewModel, // 🟡 We haven't run this ViewModel yet
          props: {
            postId: 'jsx-over-the-wire',
            includeAvatars: false,
          }
        }
      }
    }

    The code responsible for sending JSON to the client will see that it's a ViewModel (so it still needs to run!), and will call LikeButtonViewModel to get more JSON:

    {
      type: 'PostDetails', // ✅ This is a component on the client
      props: {
        postTitle: 'JSX Over The Wire',
        // ...
        postLikes: {
          type: 'LikeButton', // ✅ This is a component on the client
          props: {
            totalLikeCount: 8,
            // ...
          }
        }
      }
    }

    ViewModels will get recursively unfolded as they each contribute their part of the JSON. This might remind you of how XHP tags can recursively render other XHP tags. The final JSON will be turned on the client into a React component tree.

    <PostDetails
      postTitle='JSX Over The Wire'
      // ...
      postLikes={
        <LikeButton
          totalLikeCount={8}
          // ...
        />
      }
    />

    To make the JSX look slightly nicer, we can also rename postLikes to children. This will let us nest LikeButtonViewModel as a JSX child of PostDetails.

    Here's the entire code so far. Notice how the data flows down:

    async function PostDetailsViewModel({
      postId,
      truncateContent,
      includeAvatars
    }) {
      const post = await getPost(postId);
      return (
        <PostDetails
          postTitle={post.title}
          postContent={parseMarkdown(post.content, {
            maxParagraphs: truncateContent ? 1 : undefined
          })}
          postAuthor={post.author}
        >
          <LikeButtonViewModel
            postId={postId}
            includeAvatars={includeAvatars}
          />
        </PostDetails>
      );
    }
     
    async function LikeButtonViewModel({
      postId,
      includeAvatars
    }) {
    const [post, friendLikes] = await Promise.all([
      getPost(postId),
      getFriendLikes(postId, { limit: includeAvatars ? 5 : 2 }),
    ]);
    return (
      <LikeButton
        totalLikeCount={post.totalLikeCount}
        isLikedByUser={post.isLikedByUser}
        friendLikes={friendLikes.likes.map(l => ({
          firstName: l.firstName,
          avatar: includeAvatars ? l.avatar : null,
        }))}
      />
    );

    All of the server logic above will execute while generating the JSON. This includes both getPost, parseMarkdown, and getFriendLikes. The response will contain the data for the entire screen, satisfying one of our key requirements:

    {
      type: 'PostDetails', // ✅ This is a component on the client
      props: {
        postTitle: 'JSX Over The Wire',
        // ...
        children: {
          type: 'LikeButton', // ✅ This is a component on the client
          props: {
            totalLikeCount: 8,
            // ...
          }
        }
      }
    }

    From the client's perspective, everything will appear precomputed:

    function PostDetails({
      postTitle,
      postContent,
      postAuthor,
      children,
    }) {
      return (
        <article>
          <h1>{postTitle}</h1>
          <div dangerouslySetInnerHTML={{ __html: postContent }} />
          <p>by {postAuthor.name}</p>
          <section>
            {children}
          </section>
        </article>
      );
    }
     
    function LikeButton({ totalLikeCount, isLikedByUser, friendLikes }) {
      // ...
    }

    In particular, by the time PostDetails runs, the children it receives will be the <LikeButton> tag itself with predefined props. The ViewModels configure the props for the client. This is why on the client, all the props are "already there".

    Spend some time with the code above and make sure it sinks in.

    Yes, this is weird.

    It is also glorious.

    What we found is a way to compose tags across client-server boundaries where the server parts can be freely wrapped in the client parts, the client parts can be freely wrapped in the server parts, and not only do they just work—we're also performing the data loading for all of the server parts in a single roundtrip.

    In fact, this approach satisfies every point on my checklist.

    Now let's tidy it up and clean up some loose ends.


    As we refactor our ViewModels to use JSX (for the JSX-sceptical readers—the point here isn't just the syntax, although the syntax is nice—but lazy evaluation), we might realize that we don't actually need separate Express routes for every screen.

    Instead, we might want to do something like this:

    app.get('/*', async (req, res) => {
      const url = req.url;
      const json = await toJSON(<RouterViewModel url={url} />); // Evaluate JSX
      res.json(json);
    });

    Then we'd have a Router ViewModel that matches screens to routes:

    function RouterViewModel({ url }) {
      let route;
      if (matchRoute(url, '/screen/post-details/:postId')) {
        const { postId } = parseRoute(url, '/screen/post-details/:postId');
        route = <PostDetailsRouteViewModel postId={postId} />;
      } else if (matchRoute(url, '/screen/post-list')) {
        route = <PostListRouteViewModel />;
      }
      return route;
    }

    And then each route would also be a ViewModel:

    function PostDetailsRouteViewModel({ postId }) {
      return <PostDetailsViewModel postId={postId} />
    }
     
    async function PostListRouteViewModel() {
      const postIds = await getRecentPostIds();
      return (
        <>
          {postIds.map(postId =>
            <PostDetailsViewModel key={postId} postId={postId} />
          )}
        </>
      );
    }

    On the server, it's ViewModels all the way down.

    This might seem superfluous at this point. But moving the routing logic into the ViewModel world would let RouterViewModel wrap its output into a client-side <Router> that could re-request the JSON when you navigate to another screen.

    function RouterViewModel({ url }) {
      let route;
      if (matchRoute(url, '/screen/post-details/:postId')) {
        const { postId } = parseRoute(url, '/screen/post-details/:postId');
        route = <PostDetailsRouteViewModel postId={postId} />;
      } else if (matchRoute(url, '/screen/post-list')) {
        route = <PostListRouteViewModel />;
      }
      return (
        <Router>
          {route}
        </Router>
      );
    }
    function Router({ children }) {
      const [tree, setTree] = useState(children);
      // ... maybe add some logic here later ...
      return tree;
    }

    This could also let us—if we wanted to—implement a more granular router that can split the path into segments, prepare the ViewModels for each segment in parallel when it receives a request, and even re-request individual segments on navigation. This way, we would no longer have to re-request the entire page whenever we need to go to another screen. Of course, we wouldn't want to implement this kind of logic within the app. Ideally, a framework would do this.


    We can drop the pretense now—we're describing React Server Components:

    • Our "ViewModels" are Server Components.
    • Our "Components" are Client Components.

    There are good reasons to call both of them Components. Although in the first part of this post, Server Components began their journey as ViewModels, their lineage can be equally convincingly traced back to Async XHP tags. Since they no longer have to return JSON objects, and because in practice you'll often import the same components from both "sides", it makes sense to say Components. (In fact, in my incomplete example, all Client Components could be moved to the Server.)

    In this post, we haven't discussed the actual mechanism "connecting" the module systems of Server and Client worlds. This will be a topic for another post, but in short, when you import something from a module with 'use client', you don't get the real thing—you just get a reference which describes how to load it.

    import { LikeButton } from './LikeButton';
     
    console.log(LikeButton);
    // 'src/LikeButton.js#LikeButton'
     
    async function LikeButtonViewModel({
      postId,
      includeAvatars
    }) {
    const [post, friendLikes] = await Promise.all([
      getPost(postId),
      getFriendLikes(postId, { limit: includeAvatars ? 5 : 2 }),
    ]);
    return (
      <LikeButton
        totalLikeCount={post.totalLikeCount}
        isLikedByUser={post.isLikedByUser}
        friendLikes={friendLikes.likes.map(l => ({
          firstName: l.firstName,
          avatar: includeAvatars ? l.avatar : null,
        }))}
      />
    );
    'use client';
     
    export function LikeButton({
      totalLikeCount,
      isLikedByUser,
      friendLikes
    }) {
      let buttonText = 'Like';
      if (totalLikeCount > 0) {
        // e.g. 'Liked by You, Alice, and 13 others'
        buttonText = formatLikeText(totalLikeCount, isLikedByUser, friendLikes);
      }
      return (
        <button className={isLikedByUser ? 'liked' : ''}>
          {buttonText}
        </button>
      );
    }

    So the generated JSON will contain an instruction for loading the LikeButton:

    {
      type: 'src/LikeButton.js#LikeButton', // ✅ This is a Client Component
      props: {
        totalLikeCount: 8,
        // ...
      }
    }

    React will read that instruction and load it as a <script> tag (or read it from the bundler cache). The format is bundler-specific, which explains why React Server Components requires a bundler integration. (Parcel just released theirs which isn't tied to a framework, so it's perfect if you want to play with how RSC works.)

    It's important that React Server Components emit JSON rather than HTML:

    • Server tree can be refetched in-place without losing state. (React will just do its "virtual DOM" thing, i.e. apply the new props to the already existing components.)
    • You can target other platforms than web. (Here's a cool demo.)
    • You can still turn that JSON into HTML by executing all the Client Components within it! That's not required by RSC, but it is definitely doable. That's why "Client" components may run on the "server"—to output HTML, you'd run both "sides".

    To conclude this post, I'll say the following. I know that React Server Components have not been everyone's cup of tea. It twists your brain but I think it twists it in a good way. I'll be posting more about why I'm excited about RSC and will try to distill some of these explanations into shorter posts. But in the meantime, I hope that this post provided some historical background on the motivation behind RSC, what it can do, as well as how you could arrive at RSC through your own thinking.

    (By the way, if you enjoy more philosophical and whimsical longreads, check out my last post which arrives at RSC from the first principles without any history.)


    • React Server Components solve the problems outlined in the first part by using techniques outlined in the second part. In particular, they let you "componentize" the UI-specific parts of your API and ensure they evolve together with your UI.
    • This means that there is a direct connection between your components and the server code that prepares their props. You can always "Find All References" to find from where on the server the data is flowing into each of your components.
    • Because React Server Components emit JSON, they don't "blow away" the state of the page on refetches. Your components can receive fresh props from the server.
    • React Server Components emit JSON, but that JSON can also be (optionally) turned to HTML for first render. It's easy to make HTML out of JSON, but not the inverse.
    • React Server Components let you create self-contained pieces of UI that take care of preparing their own server data. However, all this preparation occurs within a single roundtrip. Although your code is modular, their execution is coalesced.
    • RSC is mindbending, I won't lie. Sometimes you have to think inside-out. But personally, I think RSC is awesome. The tooling is still evolving but I'm excited for its future. I hope to see more technologies thoughtfully blending the boundaries.

    While this isn't a runnable application (I bet you could get there with Next or Parcel) and might contain mistakes, here's the complete code example. I've done a few renames to drop the "ViewModel" terminology so it looks more idiomatic.

    import { PostDetails, LikeButton } from './client';
     
    export function PostDetailsRoute({ postId }) {
      return <Post postId={postId} />
    }
     
    export async function PostListRoute() {
      const postIds = await getRecentPostIds();
      return (
        <>
          {postIds.map(postId =>
            <Post key={postId} postId={postId} />
          )}
        </>
      );
    }
     
    async function Post({
      postId,
      truncateContent,
      includeAvatars
    }) {
      const post = await getPost(postId);
      return (
        <PostLayout
          postTitle={post.title}
          postContent={parseMarkdown(post.content, {
            maxParagraphs: truncateContent ? 1 : undefined
          })}
          postAuthor={post.author}
        >
          <PostLikeButton
            postId={postId}
            includeAvatars={includeAvatars}
          />
        </PostLayout>
      );
    }
     
    async function PostLikeButton({
      postId,
      includeAvatars
    }) {
    const [post, friendLikes] = await Promise.all([
      getPost(postId),
      getFriendLikes(postId, { limit: includeAvatars ? 5 : 2 }),
    ]);
    return (
      <LikeButton
        totalLikeCount={post.totalLikeCount}
        isLikedByUser={post.isLikedByUser}
        friendLikes={friendLikes.likes.map(l => ({
          firstName: l.firstName,
          avatar: includeAvatars ? l.avatar : null,
        }))}
      />
    );
    'use client';
     
    export function PostLayout({
      postTitle,
      postContent,
      postAuthor,
      children,
    }) {
      return (
        <article>
          <h1>{postTitle}</h1>
          <div dangerouslySetInnerHTML={{ __html: postContent }} />
          <p>by {postAuthor.name}</p>
          <section>
            {children}
          </section>
        </article>
      );
    }
     
    export function LikeButton({
      totalLikeCount,
      isLikedByUser,
      friendLikes
    }) {
      let buttonText = 'Like';
      if (totalLikeCount > 0) {
        buttonText = formatLikeText(totalLikeCount, isLikedByUser, friendLikes);
      }
      return (
        <button className={isLikedByUser ? 'liked' : ''}>
          {buttonText}
        </button>
      );
    }

    Happy stitching!




    All Comments: [-] | anchor

    nop_slide(2834) 3 days ago [-]

    Just use Django/HTMX, Rails/Hotwire, or Laravel/Livewire

    pier25(1375) 3 days ago [-]

    Phoenix/Liveviews

    Fresh/Partials

    Astro/HTMX with Partials

    cpursley(3464) 2 days ago [-]

    LiveView is the OG and absolutely smokes those in terms of performance (and DX), but ecosystem is lacking. Anyways, I'd rather use full stack React/Typescript over slow and untyped Rails or Python and their inferior ORMs.

    spellboots(10000) 3 days ago [-]

    This feels a lot like https://inertiajs.com/ which I've really been enjoying using recently

    chrisvenum(10000) 3 days ago [-]

    I am a huge fan of Inertia. I always felt limited by Blade but drained by the complexity of SPAs. Inertia makes using React/Vue feel as simple as old-school Laravel app. Long live the monolith.

    danabramov(816) 3 days ago [-]

    Yeah, there is quite a bit of overlap!

    tillcarlos(10000) 2 days ago [-]

    This. We started using it with Rails and it's been great.

    I do like scrappy rails views that can be assembled fast - but the React views our FE dev is putting on top of existing rails controllers have a much better UX.

    motoboi(10000) 3 days ago [-]

    Step by step coming back go JSF.

    Tade0(10000) 3 days ago [-]

    Or back to its PHP roots.

    merb(10000) 3 days ago [-]

    or webforms, I hate it.

    altbdoor(10000) 3 days ago [-]

    IMO this feels like Preact 'render to string' with Express, though I might be oversimplifying things, and granted it wouldn't have all the niceties that React offers.

    Feels like HTMX, feels like we've come full circle.

    danabramov(816) 3 days ago [-]

    In my checklist (https://overreacted.io/jsx-over-the-wire/#dans-async-ui-fram...), that would satisfy only (2), (3) if it supports async/await in components, and (4). It would not satisfy (1) or (5) because then you'd have to hydrate the components on the client, which you wouldn't be able to do with Preact if they had server-only logic.

    esco27(10000) 3 days ago [-]

    Yes, another case of old school web dev making a comeback. "HTML over the wire" is basically server-rendered templates (php, erb, ejs, jinja), sent asynchronously as structured data and interpreted by React to render the component tree.

    What's being done here isn't entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook's old Async XHP explored similar patterns. The twist is using JSX to define the component tree server-side and send it as JSON, so the view model logic and UI live in the same place. Feels new, but super familiar, even going back to CGI days.

    [1] https://hotwired.dev

    danabramov(816) 3 days ago [-]
    >What's being done here isn't entirely new. Turbo/Hotwire [1], Phoenix LiveView, even Facebook's old Async XHP explored similar patterns.

    Right, that's why it's in the post: https://overreacted.io/jsx-over-the-wire/#async-xhp

    Likewise with CGI: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi

    Agree there's echoes of 'old' in 'new' but there are also distinct new things too :)

    gavmor(10000) 2 days ago [-]

    Right? Right. I had similar thoughts (API that's the parent of the view? You mean a controller?), and quit very early into the post. Didn't realize it was Dan Abramov, or I might've at least skimmed the 70% and 99% marks, but there's no going back now.

    Who is this written for? A junior dev? Or, are we minting senior devs with no historical knowledge?

    bk496(10000) 3 days ago [-]

    Another great post!

    I like the abstraction of server components but some of my co-workers seem to prefer HTMX (sending HTML rather than JSON) and can't really see any performance benefit from server components.

    Maybe OP could clear up - Whether HTML could be sent instead (depending on platform), there is a brief point about not losing state but if your component does not have input elements or can have it state thrown away then maybe raw HTML could work? - prop size vs markup/component size. If you send a component down with a 1:9 dynamic to static content component. Then wouldn't it be better to have the the 90% static preloaded in the client, then only 10% of the data transmitted? Any good heuristic options here? - 'It's easy to make HTML out of JSON, but not the inverse'. What is intrinsic about HTML/XML?

    --

    Also is Dan the only maintainer on the React team who does these kind of posts? do other members write long form. would be interesting to have a second angle.

    tbeseda(10000) 3 days ago [-]

    A second angle from the same team?

    Or reference the 2+ decades written about the same pattern in simpler, faster, less complex implementations.

    skydhash(10000) 3 days ago [-]

    Everything old is new again, and I'm not even that old to know that you can return HTML fragments from AJAX call. But this is worse from any architectural point view. Why?

    The old way was to return HTML fragments and add them to the DOM. There was still a separation of concern as the presentation layer on the server didn't care about the interface presented on the client. It was just data generally composed by a template library. The advent of SPA makes it so that we can reunite the presentation layer (with the template library) on the frontend and just send the data to be composed down with the request's response.

    The issue with this approach is to again split the frontend, but now you have two template libraries to take care of (in this case one, but on the two sides). The main advantages of having a boundary is that you can have the best representation of data for each side's logic, converting only when needs. And the conversion layer needs to be simple enough to not introduce complexity of its own. JSON is fine as it's easy to audit a parser and HTML is fine, because it's mostly used as is on the other layer. We also have binary representation, but they also have strong arguments for their use.

    With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.

    tshaddox(10000) 3 days ago [-]

    > The old way was to return HTML fragments and add them to the DOM. There was still a separation of concern as the presentation layer on the server didn't care about the interface presented on the client.

    I doubt there were many systems where the server-generated HTML fragments were generic enough that the server and client HTML documents didn't need to know anything about each other's HTML. It's conceivable to build such a system, particularly if it's intended for a screen-reader or an extremely thinly-styled web page, but in either of those cases HTML injection over AJAX would have been an unlikely architectural choice.

    In practice, all these systems that did HTML injection over AJAX were tightly coupled. The server made strong assumptions about the HTML documents that would be requesting HTML fragments, and the HTML documents made strong assumptions about the shape of the HTML fragments the server would give it.

    danabramov(816) 3 days ago [-]

    It feels like you haven't read the article and commented on the title.

    >The old way was to return HTML fragments and add them to the DOM.

    Yes, and the problem with that is described at the end of this part: https://overreacted.io/jsx-over-the-wire/#async-xhp

    >JSON is fine [..] With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.

    I really don't know what you mean; the transport literally is JSON. We're not literally sending JSX anywhere. That's also in the article. The JSON output is shown about a dozen times throughout, especially in the third part. You can search for 'JSON' on the page. It appears 97 times.

    rapnie(314) 3 days ago [-]

    > Everything old is new again

    An age ago I took interest in KnockoutJS based on Model-View-ViewModel and found it pragmatic and easy to use. It was however at the beginning of the mad javascript framework-hopping marathon, so it was considered 'obsolete' after a few months. I just peeked, Knockout still exists.

    https://knockoutjs.com/

    Btw, I wouldn't hop back, but better hop forward, like with Datastar that was on HN the other day: https://news.ycombinator.com/item?id=43655914

    aylmao(3486) 3 days ago [-]

    > The main advantages of having a boundary is that you can have the best representation of data for each side's logic, converting only when needs.

    RSC doesn't impede this. In fact it improves it. Instead of having your ORM's objects, to be converted to JSON, sent, parsed, and finally manipulated to your UIs needs, you skip the whole 'convert to JSON' part. You can go straight from your ORM objects (best for data operations) to UI (best for rendering) and skip having to think about how the heck you'll serialize this to be serialized over the wire.

    > With JSX on the server side, it's abstraction when there's no need to be. And in the wrong place to boot.

    JSX is syntactic sugar for a specific format of JavaScript object. It's a pretty simple format really. From ReactJSXElement.js, L242 [1]:

      element = {
        // This tag allows us to uniquely identify this as a React Element
        $$typeof: REACT_ELEMENT_TYPE,
        // Built-in properties that belong on the element
        type,
        key,
        ref,
        props,
      };
    
    As far as I'm aware, TC39 hasn't yet specified which shape of literal is 'ok' and which one is 'wrong' to run on a computer, depending on wether that computer has a screen or not. I imagine this is why V8, JSC and SpiderMonkey, etc let you create objects of any shape you want on any environment. I don't understand what's wrong about using this shape on the server.

    [1] https://github.com/facebook/react/blob/e71d4205aed6c41b88e36...

    low_tech_punk(10000) 3 days ago [-]

    The X in JSX stands for HTMX.

    recursivedoubts(2853) 3 days ago [-]

    unfathomably based

    danabramov(816) 3 days ago [-]

    Yes

    wild_egg(10000) 3 days ago [-]

    Deja vu with this blog. Another overengineered abstraction recreating things that already exist.

    Misunderstanding REST only to reinvent it in a more complex way. If your API speaks JSON, it's not REST unless/until you jump through all of these hoops to build a hypermedia client on top of it to translate the bespoke JSON into something meaningful.

    Everyone ignores the 'hypermedia constraint' part of REST and then has to work crazy magic to make up for it.

    Instead, have your backend respond with HTML and you get everything else out of the box for free with a real REST interface.

    danabramov(816) 3 days ago [-]
    >Another overengineered abstraction recreating things that already exist.

    This section is for you: https://overreacted.io/jsx-over-the-wire/#html-ssi-and-cgi

    >Everyone ignores the 'hypermedia constraint' part of REST and then has to work crazy magic to make up for it.

    Right, that's why I've linked to https://htmx.org/essays/how-did-rest-come-to-mean-the-opposi... the moment we started talking about this. The post also clarifies multiple times that I'm talking about how REST is used in practice, not its 'textbook' interpretation that nobody refers to except in these arguments.

    timw4mail(10000) 3 days ago [-]

    The hypermedia constraint is crazy magic itself. It's not like HATEOAS is fewer steps on the application and server side.

    aylmao(3486) 3 days ago [-]

    We already have a way one way to render things on the browser, everyone. Wrap it up, there's definitely no more to explore here.

    And while we're at it, I'd like to know, why are people still building new and different game engines, programming languages, web browsers, operating systems, shells, etc, etc. Don't they know those things already exist?

    /s

    Joking aside, what's wrong with finding a new way of doing something? This is how we learn and discover things.

    gherkinnn(3616) 3 days ago [-]

    There is a part of my brain that is intrigued by React Server Components. I kinda get it.

    And yet, I see nothing but confusion around this topic. For two years now. I see Next.js shipping foot guns, I see docs on these rendering modes almost as long as those covering all of Django, and I see blog lengthy blog posts like this.

    When the majority of problems can be solved with Django, why tie yourself in to knots like this? At what point is it worth it?

    danabramov(816) 3 days ago [-]

    I think the rollout is a bit messy (especially because it wasn't introduced as a new thing but kind of replaced an already highly used but different thing). There are pros and cons to that kind of rollout. The tooling is also yet to mature. And we're still figuring out how to educate people on it.

    That said, I also think the basic concepts or RSC itself (not 'rendering modes' which are a Next thing) are very simple and 'up there' with closures, imports, async/await and structured programming in general. They deserve to be learned and broadly understood.

    chacham15(10000) 3 days ago [-]

    The main thing that confuses me is that this seems to be PHP implemented in React...and talks about how to render the first page without a waterfall and all that makes sense, but the main issue with PHP was that reactivity was much harder. I didnt see / I dont understand how this deals with that.

    When you have a post with a like button and the user presses the like button, how do the like button props update? I assume that it would be a REST request to update the like model. You could make the like button refetch the like view model when the button is clicked, but then how do you tie that back to all the other UI elements that need to update as a result? E.g. what if the UI designer wants to put a highlight around posts which have been liked?

    On the server, you've already lost the state of the client after that first render, so doing some sort of reverse dependency trail seems fragile. So the only option would be to have the client do it, but then you're back to the waterfall (unless you somehow know the entire state of the client on the server for the server to be able to fully re-render the sub-tree, and what if multiple separate subtrees are involved in this?). I suppose that it is do-able if there exists NO client side state, but it still seems difficult. Am I missing something?

    danabramov(816) 3 days ago [-]
    >When you have a post with a like button and the user presses the like button, how do the like button props update?

    Right, so there's actually a few ways to do this, and the 'best' one kind of depends on the tradeoffs of your UI.

    Since Like itself is a Client Component, it can just hit the POST endpoint and update its state locally. I.e. without 'refreshing' any of the server stuff. It 'knows' it's been liked. This is the traditional Client-only approach.

    Another option is to refetch UI from the server. In the simplest case, refetching the entire screen. Then yes, new props would be sent down (as JSON) and this would update both the Like button (if it uses them as its source of truth) and other UI elements (like the highlights you mentioned). It'll just send the entire thing down (but it will be gracefully merged into the UI instead of replacing it). Of course, if your server always returns an unpredictable output (e.g. a Feed that's always different), then you don't want to do that. You could get more surgical with refreshing parts of the tree (e.g. a subroute) but going the first way (Client-only) in this case would be easier.

    In other words, the key thing that's different is that the client-side things are highly dynamic so they have agency in whether to do a client change surgically or to do a coarse roundtrip.

    yawaramin(3635) 3 days ago [-]

    I skimmed over this and imho it would be better to cut like 30% of the exposition and split it up into a series of articles tackling each style separately. Just my 2c.

    danabramov(816) 3 days ago [-]

    I'm hoping someone will do something like that. I try to write with the audience of writers in mind.

    android521(10000) 3 days ago [-]

    Very well written. It is rare to see these kinds of high quality articles these days.

    danabramov(816) 3 days ago [-]

    Thanks!

    wallrat(10000) 3 days ago [-]

    Very well written (as expected) argument for RSC. It's interesting to see the parallels with Inertia.js.

    (a bit sad to see all the commenters that clearly haven't read the article though)

    jeppester(10000) 2 days ago [-]

    I was immediately thinking of inertia.js.

    Inertia is 'dumb' in that a component can't request data, but must rely on that the API knows which data it needs.

    RSC is 'smarter', but also to it's detriment in my opinion. I have yet to see a 'clean' Next project using RSC. Developers end up confused about which components should be what (and that some can be both), and 'use client' becomes a crutch of sorts, making the projects messy.

    Ultimately I think most projects would be better off with Inertia's (BFF) model, because of its simplicity.

    mattbessey(2182) 3 days ago [-]

    This was a really compelling article Dan, and I say that as a long time l advocate of 'traditional' server side rendering like Rails of old.

    I think your checklist of characteristics frames things well. it reminds me of Remix's introduction to the library

    https://remix.run/docs/en/main/discussion/introduction > Building a plain HTML form and server-side handler in a back-end heavy web framework is just as easy to do as it is in Remix. But as soon as you want to cross over into an experience with animated validation messages, focus management, and pending UI, it requires a fundamental change in the code. Typically, people build an API route and then bring in a splash of client-side JavaScript to connect the two. With Remix, you simply add some code around the existing 'server side view' without changing how it works fundamentally

    it was this argument (and a lot of playing around with challengers like htmx and JSX like syntax for Python / Go) that has brought me round to the idea that RSCs or something similar might well be the way to go.

    Bit of a shame seeing how poor some of the engagement has been on here and Reddit though. I thought the structure and length of the article was justified and helpful. Concerning how many peoples' responses are quite clearly covered in TFA they didn't read...

    swyx(159) 2 days ago [-]

    its absolutely ridiculous and sad the level of responses failing basic comprehension and this is a topic i happen to know well... makes you wonder how much to trust the avg hn comment where i am NOT knowledgeable...

    Vinnl(132) 2 days ago [-]

    There are a couple of 'red flag' quips that if I hear them coming out of my mouth (or feel the urge to do so), I have to do a quick double take and reconsider my stance. 'Everything old is new again' is one of them — usually, that means I'm missing some of the progress that has happened in the meantime.

    parthdesai(10000) 2 days ago [-]

    Not aware of remix, but how do you manage connection pooling, read vs write queries in these use cases?

    h14h(10000) 3 days ago [-]

    Excellent read! This is the first time I feel like I finally have a good handle on the 'what' & 'why' of RSCs.

    It has also sparked a strong desire to see RSCs compared and contrasted with Phoenix LiveView.

    The distinction between RSCs sending 'JSX' over the Wire, and LiveViews sending 'minimal HTML diffs'[0] over the wire is fascinating to me, and I'm really curious how the two methodologies compare/contrast in practice.

    It'd be especially interesting to see how client-driven mutations are handled under each paradigm. For example, let's say an 'onClick' is added to the `<button>` element in the `LikeButton` client component -- it immediately brings up a laundry list of questions for me:

    1. Do you update the client state optimistically? 2. If you do, what do you do if the server request fails? 3. If you don't, what do you do instead? Intermediate loading state? 4. What happens if some of your friends submit likes the same time you do? 5. What if a user accidentally 'liked', and tries to immediately 'unlike' by double-clicking? 6. What if a friend submitted a like right after you did, but theirs was persisted before yours?

    (I'll refrain from adding questions about how all this would work in a globally distributed system (like BlueSky) with multiple servers and DB replicas ;))

    Essentially, I'm curious whether RSCs offer potential solutions to the same sorts of problems Jose Valim identified here[1] when looking at Remix Submission & Revalidation.

    Overall, LiveView & RSCs are easily my top two most exciting 'full stack' application frameworks, and I love seeing how radically different their approaches are to solving the same set of problems.

    [0]: <https://www.phoenixframework.org/blog/phoenix-liveview-1.0-r...> [1]: <https://dashbit.co/blog/remix-concurrent-submissions-flawed>

    sophiebits(3169) 2 days ago [-]

    React offers a useOptimistic Hook that is designed for client-side optimistic updates and automatically handles reverting the update upon failure, etc: https://react.dev/reference/react/useOptimistic

    rwieruch(1712) 2 days ago [-]

    I have used RSCs only in Next.js, but to answer your questions:

    1./2.: You can update it optimistically. [0]

    3.: Depends on the framework's implementation. In Next.js, you'd invalidate the cache. [1][2]

    4.: In the case of the like button, it would be a 'form button' [3] which would have different ways [4] to show a pending state. It can be done with useFormStatus, useTransition or useActionState depending on your other needs in this component.

    5.: You block the double request with useTransition [5] to disable the button.

    6.: In Next, you would invalidate the cache and would see your like and the like of the other user.

    [0] https://react.dev/reference/react/useOptimistic

    [1] https://nextjs.org/docs/app/api-reference/functions/revalida...

    [2] https://nextjs.org/docs/app/api-reference/directives/use-cac...

    [3] https://www.robinwieruch.de/react-form-button/

    [4] https://www.robinwieruch.de/react-form-loading-pending-actio...

    [5] https://react.dev/reference/react/useTransition

    esprehn(10000) 2 days ago [-]

    The big challenge with the approach not touched on in the post is version skew. During a deploy you'll have some new clients talk to old servers and some old clients talk to new servers. The ViewModel is a minimal representation of the data and you can constrain it with backwards compatibility guarantees (ex. Protos or Thrift), while the UI component JSON and their associated JS must be compatible with the running client.

    Vercel fixes this for a fee: https://vercel.com/docs/skew-protection

    I do wonder how many people will use the new React features and then have short outages during deploys like the FOUC of the past. Even their Pro plan has only 12 hours of protection so if you leave a tab open for 24 hours and then click a button it might hit a server where the server components and functions are incompatible.

    yawaramin(3635) 2 days ago [-]

    Wouldn't this be easy to fix by injecting a a version number field in every JSON payload and if the expected version doesn't match the received one, just force a redirect/reload?

    cadamsdotcom(10000) 2 days ago [-]

    Really like this pattern, it's a new location of the curve of "how much rendering do you give the client". In the described architecture, JSX-as-JSON provides versatility once you've already shipped all the behavior to the client (a bunch of React components in a static JS that can be cached, the React Native example really demonstrated this well)

    One way to decide if this architecture is for you, is to consider where your app lands on the curve of "how much rendering code should you ship to client vs. how much unhydrated data should you ship". On that curve you can find everything from fully server-rendered HTML to REST APIs and everything in between, plus some less common examples too.

    Fully server-rendered HTML is among the fastest to usefulness - only relying on the browser to render HTML. By contrast in traditional React server rendering is only half of the story. Since after the layout is sent a great many API calls have to happen to provide a fully hydrated page.

    Your sweet spot on that curve is different for every app and depends on a few factors - chiefly, your app's blend of rate-of-change (maintenance burden over time) and its interactivity.

    If the app will not be interactive, take advantage of fully-backend rendering of HTML since the browser's rendering code is already installed and wicked fast.

    If it'll be highly interactive with changes that ripple across the app, you could go all the way past plain React to a Redux/Flux-like central client-side data store.

    And if it'll be extremely interactive client-side (eg. Google Docs), you may wish to ship all the code to the client and have it update its local store then sync to the server in the background.

    But this React Server Components paradigm is surprisingly suited to a great many CRUD apps. Definitely will consider it for future projects - thanks for such a great writeup!

    _heimdall(10000) about 17 hours ago [-]

    > from fully server-rendered HTML to REST APIs and everything in between

    Fully server-rendered HTML is the REST API. Anything feeding back json is a form of RPC call, the consumer has to be deeply familiar with what is in the response and how it can be used.

    modal-soul(10000) 2 days ago [-]

    I like this article a lot more than the previous one; not because of length.

    In the previous article, I was annoyed a bit by some of the fluffiness and redefinition of concepts that I was already familiar with. This one, however, felt much more concrete, and grounded in the history of the space, showing the tradeoffs and improvements in certain areas between them.

    The section that amounted to 'I'm doing all of this other stuff just to turn it into HTML. With nice, functional, reusable JSX components, but still.' really hit close to how I've felt.

    My question is: When did you first realize the usefulness of something like RSC? If React had cooked a little longer before gaining traction as the client-side thing, would it have been for 'two computers'?

    I'm imagining a past where there was some 'fuller stack' version that came out first, then there would've been something that could've been run on its own. 'Here's our page-stitcher made to run client-side-only'.

    acemarke(3157) 2 days ago [-]

    Sounds like another one of Dan's talks, 'React from Another Dimension', where he imagines a world in which server-side React came first and then extracted client functionality:

    - https://www.youtube.com/watch?v=zMf_xeGPn6s

    csbartus(3326) 2 days ago [-]

    What happened to the very elegant GraphQL? Where the client _declares_ its data needs, and _that's all_, all the rest is taken care by the framework?

    Compared to GraphQL, Server Components are a big step back: you have to do manually on the server what was given by default by GraphQL

    moi2388(10000) 2 days ago [-]

    N+1, security, authorisation, performance, caching, schema stitching..

    hyuuu(10000) 2 days ago [-]

    I was just going to say, all of this has been solved with graphql, elegantly.

    anentropic(10000) 2 days ago [-]

    Couldn't you have both?

    I assumed RSC was more concerned with which end did the rendering, and GraphQL with how to fetch just the right data in one request

    eadmund(3321) 2 days ago [-]

    > the very elegant GraphQL

    The GraphQL which 'elegantly' returns a 200 on errors? The GraphQL which 'elegantly' encodes idempotent reads as mutating POSTS? The GraphQL which 'elegantly' creates its own ad hoc JSON-but-not-JSON language?

    The right approach, of course, is HTMX-style real REST (incidentally there needs to be a quick way to distinguish real REST from fake OpenAPI-style JSON-as-a-service). E.g., the article says: 'your client should be able to request all data for a specific screen at once.' Yes, of course: the way to request a page is to (wait for it, JavaScript kiddies): request a page.

    The even better approach is to advance the state of the art beyond JavaScript, beyond HTML and beyond CSS. There is no good reason for these three to be completely separate syntaxes. Fortunately, there is already a good universal syntax for trees of data: S-expressions. The original article mentions SDUI as 'essentially it's just JSON endpoints that return UI trees': in a sane web development model the UI trees would be S-expressions macro-expanded into SHTML.

    5Qn8mNbc2FNCiVV(3481) about 14 hours ago [-]

    That's the thing, this brings the benefits of GraphQL without requiring GraphQL (+Relay). This was one of the main drivers of RSC (afaik).

    Obviously if you have a GraphQL backend, you could care less and the only benefit you'd get is reducing bundle size f.e. for content heavy static pages. But you'll lose client-side caching, so you can't have your cake and eat it too.

    Just a matter of trade-offs

    hcarvalhoalves(3569) 2 days ago [-]

    > REST (or, rather, how REST is broadly used) encourages you to think in terms of Resources rather than Models or ViewModels. At first, your Resources start out as mirroring Models. But a single Model rarely has enough data for a screen, so you develop ad-hoc conventions for nesting Models in a Resource. However, including all the relevant Models (e.g. all Likes of a Post) is often impossible or impractical, so you start adding ViewModel-ish fields like friendLikes to your Resources.

    So, let's assume the alternative universe, where we did not mess up and got REST wrong.

    There's no constraint saying a resource (in the hypermedia sense) has to have the same shape as your business data, or anything else really. A resource should have whatever representation is most useful to the client. If your language is 'components' because you're making an interactive app – sure, go ahead and represent this as a resource. And we did that for a while, with xmlhttprequest + HTML fragments, and PHP includes on the server side.

    What we were missing all along was a way to decouple the browser from a single resource (the whole document), so we could have nested resources, and keep client state intact on refresh?

    yawaramin(3635) 2 days ago [-]

    And this is exactly what we get with htmx.

    bastawhiz(10000) 2 days ago [-]

    This article doesn't mention 'event handlers' a single time. Even if you get past the client and server getting out of sync and addressing each component by a unique id that's stable between deploys (unless it's been updated), this article doesn't show how you might make any of these components interactive. You can't add an onClick on the server. The best I can figure, you pass these in with a context?

    Ultimately this really just smooshed around the interface without solving the problem it sets out to solve: it moves the formatting of the mail markup to the server, but you can't move all of it unless your content is entirely static (and if you're getting it from the server, SOMETHING has to be interactive).

    wonnage(10000) 2 days ago [-]

    you put interactivity in client components, that seemed pretty clear to me

    rwieruch(1712) 2 days ago [-]

    It's not really the scope of the article, but what about adding a client directive [0] and dropping in your event handler? Just like that, you're back in a familiar CSR React world, like in the 'old' days.

    [0] https://react.dev/reference/rsc/use-client

    kassner(10000) 2 days ago [-]

    I feel the article could have ended after Step 1. It makes the point that you don't have to follow REST and can build your own session-dependent API endpoints, and use them to fetch data from a component.

    I don't see a point in making that a server-side render. You are now coupling backend to frontend, and forcing the backend to do something that is not its job (assuming you don't do SSR already).

    One can argue that its useful if you would use the endpoint for ESI/SSI (I loved it in my Varnish days) but that's only a sane option if you are doing server-side renders for everything. Mixing CSR and SSR is OK, but that's a huge amount of extra complexity that you could avoid by just picking one, and generally adding SSRs is mostly for SEO-purposes, which session-dependent content is excluded anyway.

    My brain much prefers the separation of concerns. Just give me a JSON API, and let the frontend take care of representation.

    barrkel(3584) 2 days ago [-]

    The point of doing a server-side render follows from two other ideas:

    * that the code which fetches data required for UI is much more efficiently executed on the server-side, especially when there's data dependencies - when a later bit of data needs to be fetched using keys loaded in a previous load

    * that the code which fetches and assembles data for the UI necessarily has the same structure as the UI itself; it is already tied to the UI semantically. It's made up out of front end concerns, and it changes in lockstep with the front end. Logically, if it makes life easier / faster, responsibility may migrate between the client and server, since this back end logic is part of the UI.

    The BFF thing is a place to put this on the server. It's specifically a back end service which is owned by the front end UI engineers. FWIW, it's also a pattern that you see a lot in Google. Back end services serve up RPC endpoints which are consumed by front end services (or other back end services). The front end service is a service that runs server-side, and assembles data from all the back end services so the client can render. And the front end service is owned by the front end team.

    hu3(2897) 2 days ago [-]

    Random JSX nugget:

    JSX is a descendant of a PHP extention called XHP [1] [2]

    [1] https://legacy.reactjs.org/blog/2016/09/28/our-first-50000-s...

    [2] https://www.facebook.com/notes/10158791323777200/

    zarzavat(10000) 2 days ago [-]

    I'm annoyed to learn that even the original PHP version had `class=` working.

    Ambroos(10000) 2 days ago [-]

    Internally at Facebook you could also just call React components from XHP. Not very relevant on what you see on Facebook now as a user, but in older internal tools built with XHP it made it very easy to just throw in React components.

    When you'd annotate a React component with ReactXHP (if I remember correctly), some codegen would generate an equivalent XHP components that takes the same props, and can just be used anywhere in XHP. It worked very well when I last used it!

    Slightly less related but still somewhat, they have an extension to GraphQL as well that allows you to call/require React components from within GraphQL. If you look at a random GraphQL response there's a good chance you will see things like `'__dr': 'GroupsCometHighlightStoryAlbumAttachmentStyle.react'`. I never looked into the mechanics of how these worked.





    Historical Discussions: Clolog (April 15, 2025: 257 points)

    (257) Clolog

    257 points 3 days ago by todsacerdoti in 1st position

    github.com | Estimated reading time – 30 minutes | comments | anchor

    Full-featured logic programming (AKA 'Prolog') embedded in/callable from and supporting calls to Clojure. In the spirit of LogLisp, Lisp Machine Prolog, and Franz Inc.'s Allegro Prolog, with some extra goodies. Emphasis on expressive power and execution transparency, supporting rapid prototyping, proof-of-concept development, and outer-loop reasoning (i.e., not real fast, so far).

    Highlights, with examples

    • Clojure-based, Lispy (i.e., homoiconic) syntax, e.g., ...

      (do 
          ;; Set up, clear knowledge base.
          (initialize-prolog)
          ;; Create unit assertion.    
          (<- (has-subtype vertebrate mammal)) 
          ;; Execute query.
          (? ?x ; Answer template
             (has-subtype vertebrate ?x) ; Goal.
             )
          )
        [mammal] ; Answer(s) in vector (perhaps empty).
    • Logical variable- ('?var')-containing Clojure seqs (so, lists) and vectors as 'complex' terms---in assertion statements and answer templates

      > (? (?a ?b)
           (same [?a 2] [1 ?b]))
      [(1 2)]
    • Clojure calling predicates

      • Truthiness check: truthy?

        > (? true (truthy? (+ 1 2)))
        [true]
      • ?var-bearing term unification: evals-from?

        > (? ?x (evals-from? ?x (+ 1 2)))
        [3]
      • Side effect: do

        > (? nil (do (println 'Hello')))
        Hello
        [nil]
    • Access to ?var bindings in Clojure calls---even within quoted expressions

      > (do (<-- (male laban))
            (? ?y (male ?x) (evals-from? ?y (list '?x))))
      [(laban)]
    • Negation as failure: not

      > (do (initialize-prolog) ; Clear knowledge base.
            (? :nothing (not (Huh?))))
      [:nothing]
    • Facilitated access to Clojure values (evals-from? shorthand ->?) in goals with Clojure-calling predicates

      > (binding [*leash* true]
          (? true (same (->? (+ 0 1)) 1)))
      0. Processing query: ((same (->? (+ 0 1)) 1))
       Applied ->? transform
       (evals-from?): Entering (evals-from? ??-0:0 (+ 0 1))
       (evals-from?): Succeeded (evals-from? 1 (+ 0 1))
       (same): Entering (same 1 1)
       (same): Succeeded (same 1 1)
      Recorded answer: true
      Answer limit reached. ; Because answer template `true` has no ?vars.
      [true]
    • Built-in term [non-]matching predicates: same, different

      > (? (?a ?b)
           (same [?a 2] [1 ?b]))
      [(1 2)]
      > (? (?a ?b)
           (different [?a 2] [1 ?b]))
      []
    • Built-in term inspection predicates: var, ground

      > (? ?x (same ?x 1) (ground ?x))
      [1]
    • Built-in unconditional predicates: true, false

    • Nestable built-in logical operators: and, or, not, if

      > (? ?x (and (if (false)
                     (same ?x :succeed)
                     (same ?x :fail))
                   (evals-from? ?x :fail)
               (or (true) (false))))
      [:fail]
    • 'Cut' operator: first

      > (do (initialize-prolog)
            (<- (sister laban rebecca))
            (<- (sister rachel leah))
            (? [?sibling ?sister]
               (first (sister ?sibling ?sister))))
       [[laban rebecca]]
    • User-custom predicate transforms, supporting (e.g.) varieties of if, cond, optional

      > (create-predicate-transform '((if% ?if ?then ?else)
                                    (if (first ?if) ?then ?else)))
    • Full leashing of predicates, including operators

      > (binding [*leash* true]
          (? [?sibling ?sister ?x] 
            (if% (sister ?sibling ?sister)
                 (evals-from? ?x true)
                 (evals-from? ?x false))))
      0. Processing query: ((if% (sister ?sibling ?sister) (evals-from? ?x true) (evals-from? ?x false)))
       (if%): Applying logic transform (if% ?if ?then ?else)
       (if): Entering (if (first (sister ?sibling:0 ?sister:0)) (evals-from? ?x:0 true) (evals-from? ?x:0 false))
       (if): Checking 'if' condition (if (first (sister ?sibling:0 ?sister:0)) (evals-from? ?x:0 true) (evals-from? ?x:0 false))
        (if first): Entering first (first (sister ?sibling:0 ?sister:0))
         1. Entering 'sister/2': (sister ?sibling:0 ?sister:0)
         1. Matched head (sister laban rebecca): (sister laban rebecca)
         1. Succeeded 'sister/2': (sister laban rebecca)
        (if first): Succeeded, cutting (first (sister laban rebecca))
       (if): Taking 'then' branch of (if (first (sister laban rebecca)) (evals-from? ?x:0 true) (evals-from? ?x:0 false))
        (if evals-from?): Entering (evals-from? ?x:0 true)
        (if evals-from?): Succeeded (evals-from? true true)
       (if): Succeeded (if (first (sister laban rebecca)) (evals-from? true true) (evals-from? true false))
      Recorded answer: [laban rebecca true]
        (if first): Failed (first (sister ?sibling:0 ?sister:0))
       (if): Failed (if (first (sister ?sibling:0 ?sister:0)) (evals-from? ?x:0 true) (evals-from? ?x:0 false))
      0. Exhausted query: ((if% (sister ?sibling ?sister) (evals-from? ?x true) (evals-from? ?x false)))
      [[laban rebecca true]]
    • Symbols interpreted as logic terms or predicates, regardless of their Clojure values

      > (do (<- (false true))
            (? ?x (false ?x)))
      [true]
      > (do (<- (neg? 3))
            (? true (neg? 3)))
      [true]
    • Arbitrary Clojure things as terms or predicates, e.g., ...

      • Strings (supporting, e.g., RDF URIs)

        > (do (<- ('false' true))
              (? ?x ('false' ?x)))
        [true]
      • Numbers

        > (do (<- (3 neg?))
              (? ?x (3 ?x)))
        [neg?]
      • Complex terms

        > (do (initialize-prolog)
              (<- ([treasure] (buried ?x)))
          (? ?r ([treasure] ?r)))
        [(buried ?unbound-0)]
    • Predicates that are ?var-bearing complex terms

      > (do (initialize-prolog)
            (<- ([treasure chest] (buried ?x)))
        (? [?r ?thing] ([treasure ?thing] ?r)))
      [[(buried ?unbound-0) chest]]
    • Predicates that are ?vars

      > (do (initialize-prolog)
            (<- (male jacob))
        (? ?pred (?pred jacob)))
      [male]
    • Variadic (variable-tail/arity) predicates and complex terms

      > (do (initialize-prolog)
            (<- (variadic))
            (<- (variadic 1))
            (<- (variadic 1 2))
            (? ?rest (variadic & ?rest)))
      [() (1) (1 2)]
      > (do (initialize-prolog)
            (<- (variadic-term [1]))
            (<- (variadic-term [1 2]))
        (? ?rest (variadic-term [1 & ?rest])))
      [[] [2]]
    • Goals that are ?vars

      > (do (initialize-prolog)
            (<- (male jacob))
        (? ?goal ?goal)) ; Tell me everything you can prove.
      [(male jacob)]
      > (do (initialize-prolog)
            (<- (male jacob))
        (? ?goal (unasserted) ?goal)) ; ...with what you know so far.
      []
    • Anonymous ?vars

      > (do (initialize-prolog)
            (<- (sister laban rebecca))
            (<- (sister rachel leah))
            (? true (sister ?_person ?_person)))
      [true]
      > (? true (sister ? ?))
      [true]
    • Suppression of answers that are (under ?var renaming) duplicates

      > (do (initialize-prolog)
            (<- (male laban))
        (<- (male jacob))
        (binding [*leash* true]
              (? ?x (or (male ?x) (male ?x)))))
      0. Processing query: ((or (male ?x) (male ?x)))
       (or): Entering (or (male ?x:0) (male ?x:0))
        1. Entering 'male/1': (male laban)
        1. Matched head (male laban): (male laban)
        1. Succeeded 'male/1': (male laban)
      Recorded answer: laban
        1. Backtracking into 'male/1': (male ?x:0)
        1. Succeeded 'male/1': (male jacob)
      Recorded answer: jacob
        1. Backtracking into 'male/1': (male ?x:0)
        1. Failed 'male/1': (male ?x:0)
       (or): Backtracking into (or (male ?x:0) (male ?x:0))
        1. Entering 'male/1': (male laban)
        1. Matched head (male laban): (male laban)
        1. Succeeded 'male/1': (male laban)
      Duplicate answer (not recorded): laban
        1. Backtracking into 'male/1': (male ?x:0)
        1. Succeeded 'male/1': (male jacob)
      Duplicate answer (not recorded): jacob
        1. Backtracking into 'male/1': (male ?x:0)
        1. Failed 'male/1': (male ?x:0)
       (or): Failed (or (male ?x:0) (male ?x:0))
      0. Exhausted query: ((or (male ?x) (male ?x)))
      [laban jacob]
    • Optional suppression of answers subsumed by other answers

      > (do (initialize-prolog)
            (<- (sister laban rebecca))
            (<- (sister ?x ?y))
            (binding [*leash* true]
              (? [?x ?y] (sister ?x ?y))))
      0. Processing query: ((sister ?x ?y))
       1. Entering 'sister/2': (sister laban rebecca)
       1. Matched head (sister laban rebecca): (sister laban rebecca)
       1. Succeeded 'sister/2': (sister laban rebecca)
      Recorded answer: [laban rebecca]
       1. Backtracking into 'sister/2': (sister ?x:0 ?y:0)
       1. Succeeded 'sister/2': (sister ?x:0 ?y:0)
      Recorded subsuming answer (discarded 1 subsumed answer(s)):  [?x ?y]
       1. Backtracking into 'sister/2': (sister ?x:0 ?y:0)
       1. Failed 'sister/2': (sister ?x:0 ?y:0)
      0. Exhausted query: ((sister ?x ?y))
      [[?x ?y]]
    • Failure (i.e., not system error) when no assertions have been defined for a called logic predicate and arity

      > (do (initialize-prolog)
                     (binding [*leash* true]
                       (? answer (undefined ?arity-1))))
      0. Processing query: ((undefined ?arity-1))
       1. Entering 'undefined/1': (undefined ?arity-1:0)
       1. Failed 'undefined/1': (undefined ?arity-1:0)
      0. Exhausted query: ((undefined ?arity-1))
      []

    In production rules below, ...

    • Angle brackets surround a grammar <element>.
    • <element>+ denotes one or more of <element>.
    • <element>* denotes zero or more of <element>.
    • ':-' separates rules' left- and right-hand sides.
    • '|' separates right-hand sides' alternatives.

    <assertion>: (<head-statement>+ <body-statement>*)

    <head-statement> :- <statement>

    <body-statement> :- <statement>

    <statement> :- <fixed-arity-statement> | <variable-arity-statement>

    <fixed-arity-statement> :- (<predicate>+ <argument-term>*)

    <argument-term> :- <term>

    <variable-arity-statement> :- (<predicate>+ <term>* & <?var>)

    <predicate> :- <special-predicate> | <assertion-predicate>

    <special-predicate> :- <built-in-predicate> | <transform-predicate>

    <built-in-predicate> :- <operator> | <Clojure-calling-predicate> | same | different | var | ground | true | false

    <operator> :- and | or | if | not | first

    <Clojure-calling-predicate> :- truthy? | evals-from? | do

    <transform-predicate>: A predicate constant registered using create-predicate-transform

    <assertion-predicate>: A predicate all of whose assertions (if any) are from calls to one of the <-... macros or assert<-... functions

    <term> :- <transparent-term> | <opaque-term>

    <transparent-term> :- <?var> | <complex-term>

    <complex-term> :- <fixed-artiy-complex-term> | <variable-arity-complex-term>

    <fixed-arity-complex-term> :- (<term>*) | [<term>*]

    <variable-arity-complex-term> :- (<term>* & <?var>) | [<term>* & <?var>]

    <opaque-term> :- Any Clojure value supporting Clojure = (so, not a regex) that is not a transparent term

    <?var> :- <binding-?var> | <anonymous-?var>

    <anonymous-?var> :- ? | <_-anonymous-?var>

    <_-anonymous-?var>: Symbol whose name begins with '?_'

    <constant>: An opaque term or a ?var-free complex term

    <answer-template> :- <term>

    Note:

    • All predicates are terms.

    • All ?vars are symbols.

    • Statements and assertions, being lists, are terms.

    • The arguments of operators are statements. See our Built-in predicates section.

    • Outside of Clojure-calling predicates' Clojure form arguments: Symbols appearing in statements are taken at face value, not evaluated. A symbol used in Prolog otherwise has no relationship to its value (or the lack thereof) in Clojure.

    Additional terminology and conventions

    Considering for the moment only assertion (not special) predicates, logic programming search processes (or calls), in turn from left to right, each goal in an (implicitly) conjunctive query by...

    • Identifying assertions whose head statement matches the goal

    • Prepending a matching assertion's body statements (AKA the assertion's goals) to the query's remaining goals, after applying the match's ?var bindings to each such goal

    • Processing remaining goals, recursively, ...

      • Backtracking to remaining matching assertions, when matching a given assertion fails

      • When no goals remain, succeed by...

        • Recording an answer that realizes the query's answer template according to ?var matches made along the search path

        • Backtracking to search for any additional answers.

    Search generally proceeds depth-first and from left to right.

    We match two statements or transparent terms by associating their respective terms and ?vars, position by position, with consistent matching for non-anonymous ?vars. In matching (AKA 'unification'), ...

    • A ?var matches a ?var, a transparent term, or a constant.

    • Constants match equal (Clojure =) constants.

    • Complex terms match recursively.

    • A tail ?var (last in a statement or complex term, and preceded by &) matches the (possibly empty) seq or vector of terms remaining in the parallel traversal of its opposing complex term.

    One term subsumes another if the two terms match and---considering ?var occurrences---the former is at least as general as the latter.

    A ground term has no ?vars (none outside of any opaque included terms, where they are not treated as ?vars).

    Here---and in leash (execution tracing) reports---the notation <predicate>/<integer> (e.g., sibling/2) refers to the <integer> arity of <predicate>.

    By convention, we take the first argument of a 2-ary statement to be the predicate's subject, the second to be its object. Thus, in (brother Jane John), we take Jane to be the subject (or agent), John to be the object (or patient). ('A brother of Jane is John.')

    A unit assertion has only a head statement, no body statements.

    Clear the knowledge base and any existing special predicate transforms, then execute the transform definitions in function create-predicate-transforms.

    Knowledge base and predicate transform contexts

    Bind *assertions* and/or *predicate-transforms*, per their doc strings, to set up contexts for different knowledge bases and/or transform definitions.

    Creating assertions---macros and functions

    We provide four assertion creation functions and four corresponding macros. The macros, which don't require quoting arguments, so are simpler to use at the REPL or from top level in a file, take their statement arguments at top-level. The functions take theirs in a list.

    An assertion's head statement...

    • May not be a ?var.

    • May be variadic, but must require arity >= 1 (i.e., must not start with &).

    • Must not have a built-in special predicate in its predicate position. We don't flag assertions to transform predicates; however, once a predicate has been used on the left-hand side of a transform's defining production rule, we refrain from exercising same-predicate assertions.

    See the functions' doc strings for other fine points.

    The following forms have equivalent effect: Add the assertion with head statement (sibling ?x ?y) and lone goal statement (brother ?x ?y) to the knowledge base.

    (<- (sibling ?x ?y) (brother ?x ?y)) ; Macro.
    (assert<- '((sibling ?x ?y) (brother ?x ?y))) ; Function.

    The following place their constant-predicate, fixed-arity assertion first for consideration in search. We provide no explicit control over the order in which (less conventional) assertions with variadic, variable, or non-ground complex head statement predicates are examined during backtracking search.

    (<-0 (sibling ?x ?y) (brother ?x ?y)) ; Macro.
    (assert<-0 '((sibling ?x ?y) (brother ?x ?y))) ; Function.

    The following clear sibling/2 before making their assertion.

    (<-- (sibling ?x ?y) (brother ?x ?y)) ; Macro.
    (assert<-- '((sibling ?x ?y) (brother ?x ?y))) ; Function.

    The following clear the entire knowledge base of all but special transforms before making their assertion.

    (<--- (sibling ?x ?y) (brother ?x ?y)) ; Macro.
    (assert<--- '((sibling ?x ?y) (brother ?x ?y))) ; Function.

    The following---when employed systematically---avoid subsumed-subsuming assertion pairs in the knowledge base, by declining to add would-be-subsumed assertions and by retracting subsumed assertions.

    (<-_ (sibling ?x ?y) (brother ?x ?y)) ; Macro.
    (assert<-_ '((sibling ?x ?y) (brother ?x ?y))) ; Function.

    We retrieve assertions once upon calling a predicate, and assertion or retraction operations otherwise relevant to that predicate will be reflected during the call.

    We provide three functions for retrieving assertions by matching their heads against a statement pattern. Each returns a vector containing the knowledge base's assertions whose head statements exhibit the function's required relationship to statement-pattern.

    Get assertions whose head matches statement-pattern.

    (get-matching-head-assertions statement-pattern)

    Get assertions whose head is subsumed by statement-pattern.

    (get-subsumed-head-assertions statement-pattern)

    Get assertions whose head subsumes statement-pattern.

    (get-subsuming-head-assertions statement-pattern)

    We provide two similar functions that match assertions against a full assertion pattern.

    Get assertions entirely subsumed by assertion-pattern.

    (get-subsumed-assertions assertion-pattern)

    Get assertions entirely subsuming assertion-pattern.

    (get-subsuming-assertions assertion-pattern)

    We provide two functions, and two corresponding macros, for retracting assertions by matching their head statements against a pattern and one function to retract assertions entirely matching an assertion pattern.

    The following have equivalent effect. As in the assertion retrieval functions, statement-pattern refers to assertions' head statements.

    (retract-subsumed-head-assertions statement-pattern)
    (--- statement-pattern)

    The following have equivalent effect. Here, assertion must be equal (Clojure =, including equal ?var symbols) to an assertion in the knowledge base, for the latter to be retracted.

    (retract-specific-assertion assertion) ; Function.
    (-- statement-pattern) ; Macro.
    (retract-subsumed-assertions '((?pred deceased-person)))

    The following macro and function are equivalent---except that the macro does not support keyword arguments (instead, bind the default-value globals). With a truthy limit, terminate search upon having recorded so many answers.

    (? answer-template & goals) ; Macro.
    (query answer-template goals ; Function.
           :limit *answer-count-limit*
           :discard-subsumed *discard-subsumed-answers*)

    For now, leashing is an all-or-nothing proposition. Perform any query with *leash* bound truthy, for goal-by-goal reports describing execution.

    (binding [*leash* true]
      ;; Query form(s) in here.
      )

    As demonstrated in our Highlights section and in test/prolog/leash-tests.txt, leashing reports...

    • Entry into and success or failure of goals
    • Backtracking into...
      • Remaining matching assertions of goals with assertion predicates
      • Remaining disjuncts (remaining alternatives goals) of or goals
    • first operator-induced cuts
    • Application of predicate transforms
    • The discovery of answers and their disposition
    • Search termination upon reaching an answer count limit.

    Leashing also...

    • Indexes reports per depth of assertion nesting
    • Indicates the nesting of built-in predicates for the current assertion
    • Left-pads reports per nesting of assertion and built-in predicate goals.

    When *pprint-leash-statements* is truthy, ...'Entering', ...

    • 'Matched head' leash reports are omitted.
    • 'Succeeded', and 'Failed' leash reports pprint (vs. print) statement content, starting on a new line, with indentation, as in...
    clolog.core> (binding [*leash* true
                           *pprint-leash-statements* true]
                   (query '[?h ?w ?z] '((zebra ?h ?w ?z)) :limit 1))
    0. Processing query: ((zebra ?h ?w ?z))
     1. Entering `zebra`/3:
        (zebra ?h:0 ?w:0 ?z:0)
      1. (same): Entering...
                 (same
                  ?h:0
                  ((house norwegian ?anon-0:1 ?anon-1:1 ?anon-2:1 ?anon-3:1)
                   ?anon-4:1
                   (house ?anon-5:1 ?anon-6:1 ?anon-7:1 milk ?anon-8:1)
                   ?anon-9:1
                   ?anon-10:1))
      1. (same): Succeeded...
                 (same
                  ((house norwegian ?anon-0:1 ?anon-1:1 ?anon-2:1 ?anon-3:1)
                   ?anon-4:1
                   (house ?anon-5:1 ?anon-6:1 ?anon-7:1 milk ?anon-8:1)
                   ?anon-9:1
                   ?anon-10:1)
                  ((house norwegian ?anon-0:1 ?anon-1:1 ?anon-2:1 ?anon-3:1)
                   ?anon-4:1
                   (house ?anon-5:1 ?anon-6:1 ?anon-7:1 milk ?anon-8:1)
                   ?anon-9:1
                   ?anon-10:1))
      2. Entering `member`/2:
         (member
          (house englishman ?anon-11:1 ?anon-12:1 ?anon-13:1 red)
          ((house norwegian ?anon-0:1 ?anon-1:1 ?anon-2:1 ?anon-3:1)
           ?anon-4:1
           (house ?anon-5:1 ?anon-6:1 ?anon-7:1 milk ?anon-8:1)
           ?anon-9:1
           ?anon-10:1))

    We support the following built-in predicates. We borrow some notation from our Grammar section and allow ourselves to introduce types via obvious naming (e.g., a <condition-statement> is a <statement>---distinguished merely by its role/argument position in the built-in predicate if). We invoke the exclued middle: If a goal does not succeed, then it fails.

    • (and <statement>*) succeeds if, proceeding from left to right, every conjunct statement succeeds.

    • (or <statement>*) succeeds if, proceeding from left to right, some disjunct statement succeeds (and remaining disjuncts are ignored). Backtracking will explore first alternative ways to satisfy a failing statement, then subsequent statements.

    • (if <condition-statement> <then-statement> <else-statement>) succeeds if either:

      • The condition statement succeeds and the then statement succeeds (in which case we do not examine the else statement under the bindings for the condition statement's ?vars)

      • The condition statement fails and the else statement succeeds (in which case we do not examine then-statement).

      Backtracking will explore alternative ways to satisfy the argument statements.

    • (not <statement>) succeeds if the wrapped statement fails.

    • (first <statement>) succeeds if the argument statement succeeds. This form (AKA Prolog 'cut') skips backtracking to explore other ways of satisfying the statement, upon its first success.

    • (same <term> <term>) succeeds if the two terms match.

    • (true) succeeds unconditionally.

    • (false) fails unconditionally.

    • (var <term>) succeeds if the argument term is a ?var.

    • (ground \<term\>) succeeds if the argument term is ground.

    • (truthy? <form>) succeeds if the argument form is ground and the result of its evaluation (in Clojure) is truthy.

    • (evals-from? <term> <form>) succeeds if the argument form is ground and the result of its evaluation (in Clojure) matches the argument term (often a ?var).

    • (do <form>*) succeeds if the whole do expression is ground, evaluating it (in Clojure) for side effect, only.

    Creating special transforms

    The function call below---performed by initialize-prolog---seeds Clolog with some transforms for predicates we have found useful in other Lisp-based Prologs. As we intend this facility to support customization, you may wish to copy our version of create-predicate-transforms and edit it to your liking.

    (create-predicate-transforms)

    create-predicate-transforms includes calls to create-predicate-transform. Each call is a production rule. During search, a goal matching source-statement is transformed---via de-referencing---into target-statement.

    (create-predicate-transform source-statement target-statement)

    The execution machinery for transform predicates applies the first matching transform irrevocably, with no backtracking in case of failure. Compared to an assertion predicate defined using using one assertion per transform and the same statements in each transform-assertion pair, it is as if the transform predicate's goal always were wrapped with first. We consider predicate transforms to be 'macros' for Prolog, affording us cleaner leashing than would similar assertion predicates. Assertion predicatess more verbose leashing may nonetheless be helpful in prototyping and debugging prospective transforms. It may help to call create-predicate-transforms with optional argument debugging? truthy---and either disregard any effects resulting from backtracking into prospective transform predicates ultimately intended or (as in tests/clolog/core_tests.clj) avoid backtracking by limiting the count of answers found.

    Potential future enhancements

    We might pursue some of the following ideas towards increasing expressivity/leashing, robustness/scale, and efficiency, given motivating use cases.

    • Potential enhancements to expressiveness and leashing:

      • Accommodate non-ground Clojure expressions in Clojure-calling forms---in case a called form would use these in crafting subsequent goal (e.g.).

      • Make the local/lexical environment accessible within called Clojure forms.

      • Support RDF, RDFS, selected aspects of OWL (e.g., inverses, functional dependencies).

      • Selective leashing, considering (e.g.) predicate, arity, report type (e.g., answer disposition).

      • Selective detail in leashing, e.g., re if subgoals

      • Greater precision in leash report prefixes for n-ary operators and, or (e.g., indexing potentially like-predicate conjuncts, disjuncts).

    • Potential enhancements to robustness and scale

      • Error-check user/application inputs more pervasively.

      • Support Prolog stack limits, breakpoints, stepping/debugger integration.

      • Support database integration---access to unit ground assertions.

    • Potential efficiency enhancements

      • Perform further indexing, including trie-based indexing.

      • Qualify seq/vector matching with early check for compatible lengths of candidate-matching seqs and vectors.

      • Decline to explore alternative satisfactions of a ground goal.

      • Skirt search branches that cannot instantiate an answer template ?var.

      • Support parallelism and/or laziness.

    Copyright © 2023 Robert Carl Schrag

    This program and the accompanying materials are made available under the terms of the Eclipse Public License 2.0 which is available at http://www.eclipse.org/legal/epl-2.0.

    This Source Code may also be made available under the following Secondary Licenses when the conditions for such availability set forth in the Eclipse Public License, v. 2.0 are satisfied: GNU General Public License as published by the Free Software Foundation, either version 2 of the License, or (at your option) any later version, with the GNU Classpath Exception which is available at https://www.gnu.org/software/classpath/license.html.




    All Comments: [-] | anchor

    mark_l_watson(3619) 3 days ago [-]

    Very cool! I just cloned the repository, will play with it later.

    BTW, Clojure was such a brilliant name (from Rich): Whenever I see a project starting with 'Clo' I pay attention.

    EDIT: had a chance to try it: a very cool resource!

    iLemming(10000) 3 days ago [-]

    > Clojure was such a brilliant name

    IIRC Rich wanted a name that has CLR and J in it - Clojure initially was officially to be supported on both .Net and Java stacks. Later he realized that keeping it completely compatible on both platforms is an uphill battle. CLR Clojure still exists, but it's not 'an officially supported' runtime.

    mindcrime(738) 2 days ago [-]

    > Whenever I see a project starting with 'Clo' I pay attention.

    You're going to love my 'Cobol in Clojure' project 'Clobol' then!

    sterlind(10000) 3 days ago [-]

    really happy to see something of a revival of interest for logic programming lately. it's an extremely powerful tool if you know when to reach for it.

    MarkMarine(10000) 3 days ago [-]

    When would you reach for it?

    paddy_m(10000) 3 days ago [-]

    I'm working on a problem that I think logic programming might be a fit for. And I already have a lisp. Anyone interested in giving me some feedback on a mini language for heuristics?

    https://marimo.io/p/@paddy-mullen/notebook-b79pj7

    jdminhbg(3389) 3 days ago [-]

    Can anybody comment on when or why to choose this over core.logic?

    drob518(10000) 2 days ago [-]

    Clolog is more of a direct translation of Prolog into Clojure, with an s-expression syntax rather than Prolog's standard syntax, but close. Core.logic is a translation of Mini-Kanren into Clojure and doesn't use anything close to Prolog's syntax, even one based on s-expressions. Prolog and Mini-Kanren, while both logic programming systems also use different search algorithms. Prolog uses a depth-first exploration of the solution space, whereas Mini-Kanren uses a breadth-first search. Consequently, Prolog can be more memory efficient (remember, it was created in the 1970s), but it can get stuck in infinite parts of the solution tree and never find a solution. Mini-Karen is less memory efficient as it explores the solution tree more broadly, but it can find solutions even if the solution tree has infinite branches.

    So, why/when to choose this? When you want something much more Prolog-like, using the same search algorithm as Prolog. That said, they both do logic programming. I haven't benchmarked, but from comments in the README, I suspect core.logic will be more performant as it compiles down to Clojure function calls which are then compiled down to Java function calls. It's sort of like choosing between Python and Java. They both do imperative programming with objects but they both have their own strengths and weaknesses.

    Blackthorn(10000) 2 days ago [-]

    core.logic has a lot of limitations you pretty quickly run into, and they languish on their bug tracker for years now because nobody actually works on it.

    cpdean(10000) 3 days ago [-]

    I absolutely love the aesthetic of a repo having a giant README.md

    SOLAR_FIELDS(10000) 3 days ago [-]

    I think about docs a lot and the best docs are the ones that are easiest to find. There is few things right in front of you more than README.md

    AtlasBarfed(3590) 2 days ago [-]

    So is prolog just a big SAT solver?

    drob518(10000) 2 days ago [-]

    No, but they share logic as the foundation. A SAT solver merely solves a series of Boolean equations, typically in conjunctive normal form. Prolog has deduction capabilities that go far beyond that, where you can reason over a tree data structure, computing various parts of it according to a set of constraints. A SAT solver is not Turing complete. Prolog is. You could use Prolog to write a SAT solver (though it wouldn't be very competitive with solvers written in C or other languages).

    alex-robbins(10000) 2 days ago [-]

    It strikes me as too bad that this API is so imperative. You can see a pattern over and over in the README where they have `do` blocks, in which they clear some global state (`initialize-prolog`), then add some assertions back into that global state (via side-effectful API calls), and finally run a query (which is implicitly a query on the state that's been built up). Why not represent the knowledge base as a normal Clojure data structure, rather than as the effect of a sequence of API calls? Then it can be passed in by the caller alongside the query, instead of being stored as mutable state.

    This isn't just a style thing, either; there are consequences. REPL-driven development loses a lot of its slickness when some expressions can only be eval'd in a special context.

    Also, what am I supposed to do if two different parts of my program want to use Clolog? If I'm using Clolog, and one of my dependencies is using it too, will we end up trashing each other's global state? (The case of external dependencies is just an example. Let's say I have an application that sets up the Clolog state and runs a query. Part of the setup involves calls to other parts of my application. At the time the code was written, those other parts didn't use Clolog, but now they do, and there's a bug, because those other parts are trashing the Clolog state that their caller had set up, before it runs its query.) Of course, you could get around this by adding something like a dynamically bound variable that points to the instance of state to use, but at that point you're jumping through hoops to support a subpar (stateful) paradigm that Clojure developers expect to be free of.

    Pet_Ant(10000) 2 days ago [-]

    Submit a PR? If you have an idea how it would look better, submit it. It's nice that they got it to this point as is. Lets make it better.





    Historical Discussions: Datastar: Web Framework for the Future? (April 11, 2025: 255 points)

    (255) Datastar: Web Framework for the Future?

    255 points 7 days ago by 1659447091 in 3623rd position

    chrismalek.me | Estimated reading time – 34 minutes | comments | anchor

    Datastar is a new hypermedia framework that makes building real-time web applications simpler and more efficient. It prioritizes server-side logic, uses "signals" for automatic UI updates, and leverages Server-Sent Events for lightning-fast performance. If you're looking for a streamlined alternative to traditional JavaScript frameworks or HTMX, Datastar is worth exploring.

    However, it requires that you approach web development with a fresh perspective, embracing server-driven architecture and reactive programming.

    I've been diving into hypermedia lately looking at frameworks and libraries to build a new product and to help quickly create proof of concepts and web tools for clients.

    HTMX at the time of writing was getting basically all the attention in the Hypermedia world. It demos really well and the examples are great. However, this article is NOT about HTMX.

    I believe hypermedia and HTMX offer a promising direction, but when I tried to develop a new product using HTMX, I felt stuck due to challenges in figuring out the project structure, the HTML structure combined with excessive HTMX tags and realizing HTMX cannot handle the front-end interactively for which you have to bring in something like AlpineJS. (Did I mention I hate javascript?). HTMX is cool but I think before you start a new project with it you might want to look at Datastar as well.

    I had looked at Datastar in the past while evaluating HTMX but I did not grasp its potential over HTMX until I took a second look at started to feel some HTMX pain. Your results may vary.

    First let's understand my biased perspective. Everyone's background is different and I think it is important to understand where I am coming from.

    • I am an expert in the PeopleSoft ERP platform creating "enterprise" applications
      • PeopleSoft is a large ERP system that is used by many large organizations. It is a very powerful but a bit dated. Since it is the hub of most corporate data it is not going away anytime soon but SAAS apps are slowly chipping away at it.
      • I spent most of my time creating business CRUD applications that are used by business users and students.
      • These are always "config" driven because PeopleSoft makes it easy to create setup/config tables and GUIs to manage the data. Think of PeopleSoft has one big ball of "feature flags" in the form of setup tables.
      • PeopleSoft completely abstracts the front-end away. You never worry about the front-end. The back-end controls the front-end. PeopleSoft was built 30 years ago and it's meta-data architecture allowed the porting from a client-server architecture to a web architecture. Developers can deploy applications and never worry about JavaScript or even HTML and CSS at all. This is amazing.
        • This has kept me away from the front-end for most of my career. I have always been a back-end developer but with PeopleSoft a back-end developer can easily deploy front-end user-facing applications. So I am used to handling and delivering solutions to clients that are 100% server-side but also user facing. You don't need a front-end developer to deploy a PeopleSoft application.
      • These applications contain some of the most sensitive corporate data from payroll, biographic data, student data, financial data, etc. You cannot "move fast and break things" in these applications. You have to be very careful with the data and the business logic.
      • In these applications, you cannot trust code running in the browser and your backend code must protect access to the data.
    • I use Go for most of my side projects. I like the simplicity, speed and type safety of Go.

    I have been looking for a framework or a set of libraries to build a new product that is modern and efficient. I had been looking at Hypermedia to do this.

    My rough requirements are:

    • A modern web framework that is efficient and can handle real-time updates.
    • A framework that can handle the front-end and the back-end but rely 100% on the back-end where possible.
    • A framework that can handle the front-end state and interactions.
    • Simplicity, Simplicity, Simplicity
    • A framework where I can use Go on the back-end.
    • Rapid development and prototyping
    • Avoid Javascript and NPM as much as possible or totally from a development perspective.
      • Javascript makes me queezy 🤢 and every time I see NPM I get a headache 😵‍💫 and my instincts tell me to run away. I am not a fan of the JavaScript ecosystem.
    • A "stable" platform that I can deploy something and just have it work for years without me worrying about it.
    • Rapid deployment to the cloud like Fly.io.
    • Freedom to use any CSS framework I want because those seem to change with the wind.
    • Avoid the split-team, JSON API approach of modern web apps where frontend and backend are disjointed.

    Previously I was looking at Phoenix but did not want to shift to another language. If you google "hypermedia" you will see a lot of articles about HTMX and it seems GitHub is full of projects using it. I think it is the most popular hypermedia library at the moment. I had looked hard at the Live Golang Library .

    ## My First Impressions of HTMX

    Of course, I looked at HTMX and started to get excited about its potential. So I started to develop a real application after deploying some fairly simple "web tools" that my clients use for some one-off tasks.

    I created some simple tools for myself and clients. I did not need any front-end state or interactions. I was just updating the UI with new HTML fragments from the server mostly as a result of field change or clicks. In these simple cases, HTMX handles those well.

    In my HTMX prototyping of a more complex application, the HTML code became a mess with HTMX tags for some parts that were non-trivial. I found myself struggling to understand the project structure and myriad of HTMX tags and how to manage the front-end and the back-end. I also needed some front-end functionality and state and HTMX is NOT designed to handle that. With HTMX you have to import AlpineJS and I broke out in hives because I hate JavaScript 😢.

    It was when I got to the more complex parts of the application that I felt HTMX was sort of getting in the way. I was starting to a get a huge lint ball building up. My intuition told me I was headed in the wrong direction. I hit the pause button and started to look for alternatives because I felt I was working too hard and the code was getting too complex.

    # Revisiting Datastar and a Turning Point

    I was busy working at client sites and I had put a pause on my research for HTMX alterntives or some non-trival TODO application examples. Then on my YouTube feed one day, I saw an interview with creator of Datastar , and it had me look again. If I had not had some experience with HTMX I would not have "been ready" for that interview and understood some of the points he was making.

    Here are some AI Generated main points of the interview. The ones I put in bold are the ones that got me to look again at Datastar:

    1. Delaney explores hypermedia beyond HTMX, focusing on real-time applications.
    2. HTMX is seen as solving 1999's hypermedia issues, but not pushing current boundaries.
    3. Server-sent events (SSE) offer efficient real-time updates, surpassing traditional AJAX.
    4. Datastar, a modular alternative to HTMX, aims to simplify code and enhance performance.
    5. Game development's efficiency can inspire web development's speed and capability.
    6. SSE enables microsecond updates, challenging the limitations of polling in HTMX.
    7. The event-driven architecture is vital for scalable, efficient, real-time applications.
    8. Datastar's declarative approach reduces complexity compared to HTMX.
    9. Emphasizing server control, SSE can optimize and simplify web interactions.
    10. Delaney argues for a paradigm shift towards smarter use of hypermedia technology.

    I had looked at the Datastar documention previously when I was evaluating HTMX. I probably found it from some discussions on Reddit. However, I previously struggled to grasp its purpose and found its documentation confusing and dense. Honestly, I think it was just over my head when I first read the Datastar docs. I was not ready to understand it. It claims to be a better alternative to HTMX and AlpineJS. Another thing that turned me off was the project did not have a huge amount of contributors. However, after watching the interview from the Datastar author, I realized he might have some serious insights and I should give it another look. He is also a contributor to HTMX.

    The two things that I originally found confusing about Datastar were:

    • Use of SSE (Server-Sent Events) for real-time updates.
      • I had no experience with SSE and didn't understand how it could be used in a real-time application. I had some vague memories from reading about how they did not scale or they suffered from issues with the connection being dropped. I had not looked at them in years. I have zero experience with them.
    • The concept of "signals" for reactive programming.
      • I did not realize how this can drastically simplify your code.
      • The term signal was confusing and I just did NOT get it on the first read. I had no experience with reactive programming. It turns out signals can help me avoid a lot of front-end code and state management but I did not realize that at first.

    That interview had me look at the documentation again which had undergone some updates. After I spent some time reading and re-reading the documentation and looking at the examples then trying some "hello world" examples on my own then the light bulb went off.

    Datastar might be the library I had been looking for. It looked promising after I started to peel off some onion layers.

    It seems the author delaneyj is taking some base primitives of HTMX and Hypermedia and making it easier to use and Datastar is his answer. Those are the claims at least. At the time of writing, I am still creating my first application with it. I am not ready to give a full review. But I am excited about the potential.

    It seems that the Author is also a big fan of Go which helps me because any examples and libraries will have Go examples.

    First let me clarify. I am not an expert in Datastar. I am just starting to learn it. I am also NOT a contributor or author to Datastar in any way. I am NOT taking any credit. I just want to spread the word about it as I don't think it is getting the attention it deserves.

    From my current understanding of Datastar, there are some key concepts that form the foundation of Datastar:

    • Signals: Reactive programming primitives that automatically update the UI when data changes.
      • We will explore what the heck these are shortly.
      • You as the developer will decide what signals you want and put some special tags on the HTML elements that will trigger the server to send back updates to the signals. This will be associated with some sort of server state.
    • Server-Sent Events (SSE): Efficient data streaming for real-time updates and page changes.
      • These are just the repsonses sent back from the server to the client. They are just text over HTTP and are generally HTML fragments that update the UI. You can do many other things but let's not get ahead of ourselves.
    • Actions: HTTP verbs (GET, POST, PUT, DELETE) that trigger server-side logic and UI updates.
      • These are the HTML tags that you put on HTML elements that trigger the server to send back updates to the signals or new HTML fragments that update the UI.
    • Fragments: HTML snippets that update the UI based on server-side logic and user interactions.
      • Your server side has to be structured to send back these HTML fragments that will update the UI.

    You include the Datastar JavaScript library in your HTML and then you can start to use these concepts to build your application. You will also need to structure your server to handle the SSE requests and the GET/POST/PUT/DELETE requests.

    Your backend choice does not matter.

    I will mostly compare it to HTMX because that is the current perspective I have and HTMX is getting a ton of ink and attention.

    • With HTMX to build a real application you need:

      • Front End
        • HTML
        • HTMX JavaScript and Tags to handle the triggers to backend updates
        • AlpineJS (or other JavaScript framework) to handle front end logic and interactions and state.
      • Backend
        • HTML fragments that are that is dependent on your UX
        • Routes and Code to handle the GET/POST/PUT/DELETE
    • With Datastar to build a real application you need:

      • Front End
        • HTML
        • Datastar JavaScript to handle the triggers to backend updates and all the UI state and interactions.
      • Backend
        • HTML fragments that are dependent on your UX
        • Routes and Code to handle the GET/POST/PUT/DELETE
        • SSE routes to handle the updates to the signals

    So just from looking at the dependencies, Datastar gives you a single JavaScript library that can handle state on the front-end (signals) and making HTML attributes perform actions (GET/POST/PUT/DELETE) and handle the updates from the server. The server is 100% responsible for generating the HTML fragments and the updates to the signals.

    • Datastar provides the benefits of HTMX and AlpineJS under a single library. You get the best of both worlds.
    • You can ditch much of what a front-end framework like React or Vue.js would provide and use Datastar. (Bold claim)
    • Your server is 100% responsible for generating HTML snippets and templates very similar to what you would do with HTMX.
    • It is back-end agnostic and can be used with Go, Node.js, PHP, etc. I prefer Go but it does NOT matter.
    • It relies heavily on Server-Sent Events (SSE) for real-time updates but after you pull the cover off of SSE it is just text HTTP responses with a some different headers.

    ## What is a "Signal" and what is "Reactive Programming"?

    I think one of the biggest things that I missed in my initial read of the docs was the concept of a signal. This was not invented by Datastar and I believe is implemented in Datastar using a library from another developer.

    I'm sure I'm just behind the curve and you may already know what a signal. I'm going to try to explain it. Getting an fundamental understanding of what a signal represents and can do for you is what give Datastar its power. It makes creating user interfaces much simpler and more maintainable.

    Before we talk about signals, let's talk about reactive programming because they are related. Reactive programming makes your application code automatically "react" to changes in data and propagate those changes automatically through the application. Instead of telling the computer how to do things step-by-step, you tell it what should happen when data changes and the computer figures out how to do it. It allows you to define a relationship between data sources and data consumers. When the data source changes, the data consumer is automatically updated. In a non-reactive system, you would have to manually update the data consumer when the data source changes. This is generally in the form of "on-change" Javascript events and functions to bind all the data and UI together.

    Ok, Ok that is still too much jargon!!!!

    ## Understanding Reactive Programming via Spreadsheets

    I think the best way to understand reactive programming is to think about a spreadsheet.

    • A spreadsheet application like Microsoft Excel or Google Sheets is the best example here.
      • If you have any experience working with a complex spreadsheet in the engineering or financial realms, you have already worked with reactive programming.
      • I am NOT talking about using excel as a CSV viewer.
      • I am talking about using it as a tool to do calculations, and you "build up layers" of intermediate calculations to get to a final result.
        • Very often you have intermediate calculations you need for other calculations or just doing checks for mistakes. This leads to a series of calculations that are dependent on each other. This is represented in formulas in spreadsheet cells that reference other cells. For complex calculations, you can have a "pipeline" or spiderweb of calculations that are dependent on each other.
        • I have an engineering degree and worked for engineering firms while in college and after college, I have used Excel for some very complex calculations and engineering modeling for HVAC cooling and plumbing systems. Excel is an great tool for this.

    The amazing thing about Spreadsheets is that it is reactive. When you change a value in a cell, all the dependent cells are automatically updated. This is the essence of reactive programming. In you define the relationships between the "cells" or data elements and the underlying framework progates changes when data changes. This is very powerful and can simplify your code and make it more maintainable.

    Here is a rough schematic of that where the arrows represent the dependencies between cells:

    Datastar gives you some of these some of these same capabilities in a web application via the concept of a signal.

    I conceptually think of a Datastar signal as a link between "cell" or HTML elements. I did not make this connection at first.

    • In Datastar, signals are used to update the UI when the data changes.
    • It can also trigger back end posts/gets/puts/deletes.

    Signals are part of the glue of a Datastar application. You place signals on the page and the UI can be automatically updated. See the Model Binding Example and the Signals change example

    The server can send down updates to the "signals" (spreadsheet cells contents) or even send down a new HTML fragment that update the UI. In spreadsheet terms, this would be like adding new cells, charts, etc from the server.

    There is more you can do with signals. If you read the docs and still don't understand, I would re-read them. I had to read them a few times to get it.

    # Actions - GETS/POSTS/PUTS/DELETES

    When/if you start looking at HTMX you see that you can trigger actions on the server with a GET/POST/PUT/DELETE. This is the same in Datastar. You can trigger these actions with a signal.

    HTMX and Datastar both trigger this server request to the server to update or get updated UI elements. The difference is that Datastar uses SSE to get the updates back to the client. I was scratching my head on this until I started reading more about it and looking at the examples.

    SSE is very simple. It is just text which I maybe had read before but since I had no real experience doing development work I did not understand. I work daily with HTTP web services and have a firm grasp on how HTTP works.

    You can add some "tags" to the HTML elements (button on click, input on change, etc) and then when the user interacts with the page, the server can send down updates to the signals.

    From the Datastar examples :

    
    <div id='contact_1'>
      <label>First Name: John</label>
      <label>Last Name: Doe</label>
      <label>Email: [email protected]</label>
      <div>
        <button data-on-click='@get('/examples/click_to_edit/contact/1/edit')'>
          Edit
        </button>
        <button data-on-click='@get('/examples/click_to_edit/contact/1/reset')'>
          Reset
        </button>
      </div>
    </div>
    

    The Datastar Javascript library running in the browser connects to the server with a connection that is kept open (until the server closes it). The server can send down updates to the signals. The server can also send down new HTML fragments that update the UI.

    Datastar and HTMX have a similar concept but Datastar is built out of the box to handle updating any part of the page using the "ID" of the element. This is possible in HTMX but requires some extra work/tags.

    Basically, the Datastar "actions" can do anything that HTMX can do.

    ## Understanding SSE - It is just text

    First let's quickly understand SSE. At we will see SSE is just text. It is not some magical protocol. It is just text over HTTP with some special headers and browsers support it out of the box.

    Datastar leverages SSE and the Javascript library expects and interprets the responses from the server in a certain way. The server can send down updates to the signals or new HTML fragments that will update the UI. The server can also close the connection when it is done.

    For the authoritative source, refer to the Datastar SEE Reference

    In Datastar, you add some "tags" to the HTML elements (button "on click", input "on change", etc) which causes the browser to send a request to the server to open an SSE connection. That connection stays open until the server closes it. The server can send down updates to the signals or new HTML fragments that will update the UI.

    For most CRUD applications you will be sending down HTML fragments that will update the UI. Then close the connection. If you were making some sort of real-time dashboard you would keep the connection open and send down updates to the UI as the server finds changes in the data. The server might be monitoring a database or some other data source and sending down updates to the UI as they change.

    Let's first look at the simplest case that is most like the HTMX examples which is more inline with CRUD applications.

    You will have some attribute on an HTML element that will trigger an SSE call to the server. For simplicity, let's say it is a button click.

    <button id='button1' data-on-click='@get('/example/buttonpress')'>
      Click Me
    </button>
    

    That triggers an HTTP call by the browser with the SSE header of Accept: text/event-stream to the server.

    GET /example/buttonpress HTTP/1.1
    Host: example.com
    Accept: text/event-stream
    Cache-Control: no-cache
    

    There are options to send extra data and Datastar will automatically send along any local signals on the page. This automatic signal sending is a feature of Datastar that is not in HTMX and I did not realize how powerful it can be. In the HTTP example above I am NOT showing any signals because they are not needed for this simple example.

    • Here the browser will keep the connection open and listen for updates from the server.
    • The server can send down updates to the signals or new HTML fragments that will update the UI.
      • In this example, we will focus on HTML Fragments
      • The server sends back a response with the event of Datastar-merge-fragments and the data of the new HTML fragment that will update the UI.
        • In this case, the server "knows" that its only job is to send back some HTML when the button in pressed and close the connection.
      • The HTTP response will look like this:
    HTTP/1.1 200 OK
    Content-Type: text/event-stream
    Cache-Control: no-cache
    Connection: close
    
    event: Datastar-merge-fragments
    data: fragments <div id='button1'>Button Pressed and removed.</div>
    

    In the example above, the server sends back a new HTML fragment that will replace the button that was clicked. The Datastar JavaScript running in the browser will match the ID of the element and replace it with the new HTML fragment.

    The server could have sent several fragments to update any part of the page. HTMX can do this but I think Datastar is built out of the box to handle this.

    • What is an example of a case where the SSE connection is kept open and the server sends down updates to the signals?
      • Imagine a case, where you have a web page that is tracking the location of a food delivery vehicle.
      • The server is monitoring the GPS location of the vehicle and sending down updates to the signals to update the location of the truck on the map. The server can also send down new HTML fragments that update the UI.
      • The browser keeps an SSE connection open and the server can send down updates.
      • The HTTP response from the server will look like the following where there is some time elapsed between each of those event and data pairs.
    HTTP/1.1 200 OK
    Content-Type: text/event-stream
    Cache-Control: no-cache
    Connection: keep-alive
    
    event: Datastar-merge-fragments
    data: fragments <div id='truckstatus'>The Truck is under a bridge</div>
    
    
    event: Datastar-merge-fragments
    data: fragments <div id='truckstatus'>The truck is at Jersey Mikes and the driver is enjoying a sandwich</div>
    

    In HTMX you would have to implement polling which works but is not as efficient as SSE.

    If you look at the Progress Bar Example you can see that there is an SEE endpoint there like this:

    GET https://data-star.dev/examples/progress_bar/data
    

    It sends back a stream of updates to both the title and div with id='progress_bar'. As the browser receives the updates, it updates the UI in real-time. The browser keeps an SSE connection open and the server can send down updates to the signals. The server can also send down new HTML fragments that update the UI.

    HTTP/1.1 200 OK
    cache-control: no-cache
    connection: keep-alive
    content-type: text/event-stream
    date: Thu, 16 Jan 2025 05:36:26 GMT
    fly-request-id: 01JHPSSQHJMTZ82JYZXE5T43BM-sjc
    server: Fly/3f202fc64 (2025-01-13)
    transfer-encoding: chunked
    via: 1.1 fly.io
    
    event: Datastar-merge-fragments
    retry: 1000
    data: fragments <div id='progress_bar'><svg width='200' height='200' viewbox='-25 -25 250 250' style='transform: rotate(-90deg)'><circle r='90' cx='100' cy='100' fill='transparent' stroke='#e0e0e0' stroke-width='16px' stroke-dasharray='565.48px' stroke-dashoffset='565px'></circle> <circle r='90' cx='100' cy='100' fill='transparent' stroke='#6bdba7' stroke-width='16px' stroke-linecap='round' stroke-dashoffset='559px' stroke-dasharray='565.48px'></circle> <text x='44px' y='115px' fill='#6bdba7' font-size='52px' font-weight='bold' style='transform:rotate(90deg) translate(0px, -196px)'>1%</text></svg></div>
    
    
    event: Datastar-merge-fragments
    retry: 1000
    data: selector title
    data: fragments <title>1%</title>
    
    
    event: Datastar-merge-fragments
    retry: 1000
    data: fragments <div id='progress_bar'><svg width='200' height='200' viewbox='-25 -25 250 250' style='transform: rotate(-90deg)'><circle r='90' cx='100' cy='100' fill='transparent' stroke='#e0e0e0' stroke-width='16px' stroke-dasharray='565.48px' stroke-dashoffset='565px'></circle> <circle r='90' cx='100' cy='100' fill='transparent' stroke='#6bdba7' stroke-width='16px' stroke-linecap='round' stroke-dashoffset='554px' stroke-dasharray='565.48px'></circle> <text x='44px' y='115px' fill='#6bdba7' font-size='52px' font-weight='bold' style='transform:rotate(90deg) translate(0px, -196px)'>2%</text></svg></div>
    
    
    ...
    

    ## Example of a Datastar Application

    Let's take a look at a high level of how a Datastar application might be structured. This is a very high level and I am still learning Datastar so don't nitpick me if there are mistakes here.

    Let's imagine you have a page that allows you to track your food delivery.

    • The driver has some application in his phone that reports up to a central server where they are.
    • There is a web page I can go to that will allow me to track the driver up until the order goes into "delivered" status.
    • That Datastar application might look like this:
      • The browser sends a GET request to the server to get the initial page at https://feedme.now/delivery/driver/location
      • The server sends back the full page HTML template.
        • This page has some specific HTML tags that presents a button that allows you to start the status updates of how far the driver is from their house. The page will automatically update as the driver gets closer.
      • The user presses the "Track Driver" button and the browser sends a GET request to the server to open an SSE connection to https://feedme.now/delivery/driver/location/_monitor (This can be any URL)
        • The browser will keep this SSE connection open until the server closes it.
        • The Datastar library is listening to those SSE events sent by the server.
          • They could come all at once or trickle in over a few minutes.
          • Since this is a delivery driver tracking it will probably take several minutes for the driver to get to the house.
        • In the simplest case, the server sends back "html fragments" in the SSE event and Datastar figures out how to update the DOM.
          • There will be an event every few minutes to update the UI with:
            • The driver is 3 miles away.
            • The driver is 2 miles away.
            • The driver is 1 mile away.
            • The driver has entered your driveway
            • The food was delivered. {Close connection}

    The sequence diagram version of this follows.

    # Rethinking Web Development

    If you've done any web development in the past decade, it's worth reconsidering how you approach projects and develop proof-of-concept ideas. You might be surprised at how much simpler and more maintainable your code can become. While having everything on the server is advantageous, it requires a different approach to project structure.

    With Datastar, a significant portion of your application logic and state management can reside on the server. This shift in perspective may require you to rethink traditional web development paradigms.

    Additional Considerations:

    • Project Structure: Carefully plan how you'll organize your server-side code to handle SSE connections, manage signals, and generate HTML fragments efficiently.
    • State Management: Leverage Datastar's signals to manage your application state primarily on the server. This can simplify your client-side code and reduce the need for complex frontend frameworks.
    • Templating: Choose a templating engine that allows you to easily create and send dynamic HTML fragments. Consider using a template language that promotes code reusability and maintainability.
    • Real-time Updates: Explore the power of SSE for real-time updates in your application. Think about how you can use real-time data to enhance the user experience.
    • Security: As with any web application, security should be a top priority. Ensure that your server-side code is secure and protects sensitive data.

    By rethinking your approach to web development and embracing the capabilities of Datastar, you can create highly efficient, maintainable, and real-time applications.

    Your server-side setup will involve a few key components:

    • An HTML templating system that's organized to send HTML fragments for UI updates. This likely means breaking down your HTML into smaller, manageable chunks that can be generated and sent to the client.
    • The concept of "routes" is central to all web frameworks. A route maps a URL to a function that handles requests and sends responses.
    • In Datastar, you'll often need a route to handle the initial HTML request and another to handle SSE requests. There are multiple ways to structure this on your backend.

    The server must also handle SSE requests, GET/POST/PUT/DELETE requests, and send back the HTML fragments that update the UI.

    ## Additional Considerations

    • SSE Endpoint: Establish a dedicated endpoint for handling SSE connections. This endpoint will be responsible for managing the persistent connections and sending updates to the client.
    • Data Handling: Implement server-side logic to process incoming data, update signals, and generate the appropriate HTML fragments to send back to the client.
    • Error Handling: Incorporate robust error handling to manage unexpected situations and ensure the stability of your application.
    • Scalability: If you anticipate high traffic or require your application to scale, consider using a backend technology that can handle a large number of concurrent SSE connections efficiently.
    • Deployment: Choose a deployment platform that supports SSE and can accommodate the requirements of your Datastar application.

    By carefully considering these server requirements, you can build a solid foundation for your Datastar application and ensure its performance, stability, and scalability.

    Datastar offers a fresh approach to web development, streamlining real-time applications and minimizing front-end dependencies. While it demands a shift in perspective, its potential for simplicity, efficiency, and maintainability makes it worth exploring for modern developers. With its unified architecture and focus on server-driven logic, Datastar stands out as a promising alternative to traditional frameworks.




    All Comments: [-] | anchor

    dpc_01234(10000) 7 days ago [-]

    This matches 100% my experience and thoughts.

    I really enjoy HTMX and it's a blessing for my small-scale reactivity web interfaces, but I can immediately tell: 'Well, this is hard to organize in a way that will scale with complexity well. It works great now, but I can tell where are the limits'. And when I had to add alpine.js to do client-side reactivity, it immediately was obvious that I'd love to have both sides (backend and frontent) unified.

    Still need more time opportunities to roll some stuff with datastar in it, but ATM I'm convinced datastar is the way to go.

    For reference, my typical 'web tech stack': Rust, axum, maud, datastar, redb.

    naasking(10000) 6 days ago [-]

    > And when I had to add alpine.js to do client-side reactivity, it immediately was obvious that I'd love to have both sides (backend and frontent) unified.

    https://alpine-ajax.js.org/

    PaulHoule(97) 7 days ago [-]

    ... was kinda inevitable that HTMX was going to bring about a Cambrian explosion in frameworks like the one it was built to escape.

    sudodevnull(10000) 7 days ago [-]

    Datastar started as an attempt to help shape HTMX2 before that was a thing... https://github.com/delaneyj/nothtmx2

    Not sure the negativity. It's a superset of HTMX and it's 40% smaller with more features. Can you please tell me issue? I'm to dumb dumb grug please teach me senpai

    devnull3(10000) 6 days ago [-]

    It will still be much lesser than perma Cambrian explosion in js frameworks.

    Infact, a lot of the patterns in the likes of HTMX will be standardised.

    dalmo3(10000) 7 days ago [-]

    Reading tfa I kept wondering 'is this yet another framework where every click is a server round trip?' Judging by the demos1, the answer is yes?

    If this is 'the Future', I'm branching off to the timeline where local-first wins.

    1. https://data-star.dev/examples/click_to_edit

    sudodevnull(10000) 7 days ago [-]

    Our free shared fly.io was not built to handle hackernews. We are looking into alternatives but in the mean time checkout https://andersmurphy.com/2025/04/07/clojure-realtime-collabo... as it's the same tech but on a slight better machine.

    tipiirai(1598) 6 days ago [-]

    A JavaScript framework, built by a person who hates JavaScript doesn't sound right

    infamia(10000) 6 days ago [-]

    idk if I'd put it quite that strongly. https://data-star.dev/examples/dbmon

    Also, multiplayer for free on every page due to SSE (if you want it).

    fbn79(2521) 6 days ago [-]

    Every time I read 'Web Framework' I run.

    Ripley: These techs are here to protect you. They're frameworks.

    Newt: It won't make any difference.

    zamalek(10000) 6 days ago [-]

    I think the happy place is somewhere in-between. Use JS to allow the user to build up a request/form (basically DHTML circa 2000), but use one of these hypermedia frameworks when interacting with the server. I think that these are successfully showing that BFFs were a mistake.

    tauroid(10000) 6 days ago [-]

    Counterexample with just local signals: https://data-star.dev/guide/getting_started#data-on

    resonious(10000) 6 days ago [-]

    Nitpicking but

    > SSE enables microsecond updates, challenging the limitations of polling in HTMX.

    How is this true? SSE is just the server sending a message to the client. If server and client are in opposite sides of the world, it will not be a matter of microseconds...

    ivanjermakov(10000) 6 days ago [-]

    Reminds me of the joke 'hey, check out the website I just made: localhost:8080'

    sudodevnull(10000) 6 days ago [-]

    Well obviously there's a difference between latency and throughput. Of course it's going to be microsecond plus your rtt/2. Sorry, we can't beat physics.

    andersmurphy(10000) 6 days ago [-]

    You can have microsecond updated, once the connection is established you can stream. Regardless of your latency.

    Say your ping is 100 (units are irrelevant here). It will take you 100 before you see your first byte but if the server is sending updates down that connection you will have data at whatever rate the server can send data. Say the server sends every 10.

    Then you will have updates on the client at 100 110 120 130 etc.

    brap(10000) 6 days ago [-]

    Future? Looking at some of the examples, this seems a lot like the same old web server frameworks we had like 15 years ago, maybe more. Granted they didn't have SSE but regardless the DX was pretty bad. I don't see a compelling reason to go back.

    sesm(10000) 6 days ago [-]

    People with 15+ years of experience are not the target audience for this framework.

    nhumrich(10000) 6 days ago [-]

    I love the idea of datastar, but wonder how does one test it without using e2e testing? Also, I think it would be amazing and so much simpler if instead of using SSE, it just included all the events in a response. Maybe with SSE as an option for those who need true server pushes? I feel like most apps don't require server push, and instead just need a list of events/things to update from an action.

    sudodevnull(10000) 6 days ago [-]

    So unlike HTMX we support merge in both fragments and signals. We also support custom events natively for purely local state. We just make the browser declaratively reactive, that's it

    memset(1429) 6 days ago [-]

    This is probably a silly question, but how do I use loops? For example, if my backend returns an array of TODO items, how can i iterate through that and display on the frontend?

    fnord123(10000) 6 days ago [-]

    Htmx and datastar use backend rendering. So you write the html in the backend and serve that. In the case of an array, you render them as Todo ítems.

    You might be using a template system for that. E.g. Jinja2, moustache, askama, templ, etc depending on your backend language and libraries.

    j13n(10000) 7 days ago [-]

    This is the second post I've seen praising Datastar in the last 24 hours, and once again no mention of the requirement to punch a gaping hole in one's Content-Security-Policy.

    If this is the framework of the future, cyber criminals are going to have a bright future!

    max_(901) 7 days ago [-]

    How does this compare to HTMX (security wise)?

    sudodevnull(10000) 7 days ago [-]

    That's the nature of anything that does this kind of work. React, Svelte, Solid. Alpine has a CSP version but it does so little that I recommend you just accept being a Web1 MPA basic site.

    I have ideas around ways around this but it's a per language template middleware.

    nchmy(10000) 7 days ago [-]

    could you please elaborate on this?

    andersmurphy(10000) 6 days ago [-]

    Please don't cargo cult CSP without understanding it.

    unsafe-eval constrained to function constructors without inline scripts is only a concern if you are rendering user submitted HTML (most common case I see is markdown). Regardless of your CSP configuration you should be sanitizing that user submitted HTML anyway.

    ilrwbwrkhv(3613) 7 days ago [-]

    The web framework of the future is for better or for worse what Vercel and YouTubers talk about.

    Original thinking is sorely lacking in the majority of the web dev community.

    sudodevnull(10000) 7 days ago [-]

    I hate how right you are. We are now the smallest (v1 is on track to be 11.4Kb), have the fastest signal implementation and looking like over 2x faster than idiomorph. So it's the smallest and fastest shim to build real-time apps or simple CRUD. Shocked how much tech world is vibes based but so be it.

    rphumulock(10000) 7 days ago [-]

    Guess we just gotta get some new YouTubers to cover other things then :)

    There was a funny convo about this a bit

    https://www.youtube.com/watch?v=y79L3fhJI3o&t=8636s

    johndevor(10000) 7 days ago [-]

    Also worth checking out is the recent release of RedwoodSDK: https://news.ycombinator.com/item?id=43657215

    imjonse(3634) 7 days ago [-]

    Oh good, they finally realized GraphQL was holding them back.

    aiiizzz(10000) 6 days ago [-]

    Haha, didn't realize redwood was already deprecated and forked into redwoodsdk under new management.

    Looks good though, like remix except without those pesky route handlers. Then again I didn't get around to using the RR version. I wish the doc had a 'differences with RR' section

    devrandoom(10000) 7 days ago [-]

    > fresh perspective, embracing server-driven architecture

    This is not fresh perspective. I used to be on 'team everything on server' but it's a mistake on insist on that today.

    sudodevnull(10000) 7 days ago [-]

    I think, at least as the creator, I've seen the 'fight' be MPA vs SPA. IMO, both are wrong. It's about state management. MOST state lives in the backend but you still need fine grain reactivity on the frontend. On the number line between React and HTMX; Datastar is complex :)

    sudodevnull(10000) 7 days ago [-]

    Datastar author here... AMA, but know that Datastar is pure yak shaving for me to do real work stuff so I have no golden calves, just approaches I've seen work at scale.

    theboywho(10000) 7 days ago [-]

    What do you think about the Hotwire stack (Stimulus, Turbo) as compared to Datastar ?

    vb-8448(10000) 7 days ago [-]

    Doesn't it make stateful the whole stack?

    buangakun(10000) 7 days ago [-]

    Hello, I've heard of Datastar before but didn't really pay attention to it since all the air in the room was sucked up by HTMX.

    I tried HTMX and I found that it is really, really hard to manage complexity once the codebase gets big.

    Is there an example of Datastar being used with Go in a highly interactive application that goes beyond just a TODO app so I could see how the project structure should be organized?

    postepowanieadm(10000) 6 days ago [-]

    So how are your server bills? Does Datastar supports caching/prerendering?

    andersmurphy(10000) 7 days ago [-]

    If you want a solid demo of what you can do with datastar. You can checkout this naive multiplayer game of life I wrote earlier in the week. Sends down 2500 divs every 200ms to all connected cliends via compressed SSE.

    https://example.andersmurphy.com/

    CharlesW(114) 7 days ago [-]

    Is sending 10,000 divs/sec the right solution for this problem, or is this an 'everything looks like a nail' solution?

    danesparza(10000) 7 days ago [-]

    'Sends down 2500 divs every 200ms to all connected cliends via compressed SSE.'

    If I didn't know better, I'd say this was an April Fool's joke.

    kaycebasques(481) 7 days ago [-]

    Wow, I've never done multiplayer GoL. Simple yet addictively fun. LONG LIVE THE ORANGE CIVILIZATION!!

    edit: damn, purple civilization got hands

    jgalt212(10000) 6 days ago [-]

    your server logs are going to be an intelligible mess. This framework will be a yuge money maker for AWS CloudWatch.

    thanhnguyen2187(10000) 7 days ago [-]

    Really well-written and well-structured post! I'll seriously evaluate Datastar in my next toy project because of the author's praises!

    For people who are looking for HTMX alternatives, I think Alpine AJAX is another choice if you are already using AlpineJS

    sudodevnull(10000) 6 days ago [-]

    Ian is great, if you want progressive enhancement it would be my go-to every time!

    midzer(3663) 7 days ago [-]

    The future is frameworkless.

    sudodevnull(10000) 7 days ago [-]

    I agree! That's kinda the point with Datastar. EVERYTHING is a plugin, the core is a < 300 LOC engine for parsing data-* attributes and making available to plugins. You can pick and choose what makes sense for you. If you want to have declarative spec compliant interfaces, it can't get any smaller from what I've seen in the wild. Happy to get help to shrink it even more!

    bitbasher(10000) 7 days ago [-]

    Correct me if I'm wrong, but isn't half the point of htmx to allow for adaptive web design (ie, if js fails to load or is disabled it can still function via the form submission)?

    It seems like Datastar is doing away with that entirely and binding the UI more tightly to JavaScript to function correctly.

    sudodevnull(10000) 7 days ago [-]

    I'm very much of the opinion that progressive enhancement leads to lowest common denominator and you should just do a static MPA (nothing wrong with that). Modern browsers are a combination of HTML+CSS+JS and you should just embrace that as what modern hypermedia is. We aren't fighting against the browser. If you want just links and forms, you should just do that and have less code to maintain. But in my experience that's not what most are looking for in their apps.

    CharlesW(114) 7 days ago [-]

    The TODOS mini application at data-star.dev is slow and doesn't work correctly for me (checking/unchecking items isn't reliable). To me, this highlights one common problem I've seen with frameworks that insist on doing everything on the server.

    tevon(10000) 7 days ago [-]

    Agreed, I have gig internet and a hardwire connection and still get more lag than I'd want from a web app.

    Potentially could be solved with some client side cache but still..

    tasqyn(10000) 7 days ago [-]

    I have the fastest internet in the whole country and I couldn't add new todo, also deleting the todo item is very slow.

    macmac(10000) 7 days ago [-]

    Link?

    sudodevnull(10000) 7 days ago [-]

    Yeah I'm seeing that too. We're getting ready for V1 and I probably missed a test around the Todo. My fault, didn't think we'd get hit by hackernews on a free shared fly.io server. I'll look into it now

    sudodevnull(10000) 7 days ago [-]

    UPDATE: I have no idea why fly.io hate the TODO, but https://example.andersmurphy.com/ is a decent example (that's way more fun) that's running now. I'm commenting out that demo until I have more time to investigate. If y'all find other ones that are acting up please let me know. Looks likes it might be time to actual host this thing on a real server.

    tcdent(10000) 7 days ago [-]

    > 'what is a signal?'

    it's another word for event

    evertedsphere(10000) 7 days ago [-]

    a signal is not a single event but rather a stream of events at given timestamps

    (or, if you wish, a stream where you have an Option<Event> at each timestamp)

    sudodevnull(10000) 7 days ago [-]

    Signals have dependencies and subscribers. It's a value and publisher and subscriber if you want to be more correct.

    udioron(10000) 6 days ago [-]

    From datastar's docs:

    > Backend Setup

    > Data star uses Server-Sent Events (SSE) to stream zero or more events from the web server to the browser. There's no special backend plumbing required to use SSE, just some syntax. Fortunately, SSE is straightforward and provides us with some advantages.

    As a django developer, this is very far from true. With htmx i get almost no backend changes (mainly in template code), where datastar would require me to rewrite it and may not be possible to implement at all.

    andersmurphy(10000) 6 days ago [-]

    Sounds like a django over abstraction problem. SSE is standard HTTP.

    If laravel can do it django can.





    Historical Discussions: Rust to C compiler – 95.9% test pass rate, odd platforms (April 12, 2025: 253 points)

    (253) Rust to C compiler – 95.9% test pass rate, odd platforms

    253 points 6 days ago by todsacerdoti in 1st position

    fractalfir.github.io | Estimated reading time – 15 minutes | comments | anchor

    This is an update on the progress I have made on my Rust to C compiler.

    I am experimenting a bit with a new article format: instead of an overarching theme, this is more of a collection of smaller bits and pieces, sewn together.

    The big news

    I will first start with the biggest news: I am going to be giving a talk about the project during Rust Week(in Utrecht, Netherlands).

    Creating this talk has been an interesting challenge: I tried to strike a good balance between being approachable for beginners, while still talking about a pretty advanced topic.

    So, if you are attending Rust Week, and are interested in what I have to say, you can come and hear it in person! If you see me during the conference, and want to talk, don't be shy to say hi.

    Now that this is out of the way...

    Passing more tests

    I have also been slowly working on fixing as many tests as possible, and I can already boast a 95.9 % core test pass rate. This is a nice bump from the 92% pass rate two months ago.

    There still are about 65 tests that need fixing, but they all seem to have pretty similar causes. So, fixing them should not be too difficult.

    The .NET side of the project has also heavily benefited from the fixes I implemented: now, 96.3 % of Rust core tests run in .NET.

    Bugfixes

    128 bit ints

    Most of the current improvements come from fixes to 128 bit intrinsics, checked arithmetics, and subslicing.

    The C popcount intrinsic has 3 variants: __builtin_popcount(unsigned int), __builtin_popcountl(unsigned long) and __builtin_popcountll(unsigned long long).

    It might seem logical to assume that the C intrinsic __builtin_popcountll works on 128 bit ints - it does not.

    It works on the unsigned long long type, which is not the same as __int128_t. At least on x86_64 Linux(with the GCC compiler), unsigned long and unsigned long long are both 64 bits in size. This is something I knew about, but I did not consider that 2 differently named intrinsics would end up just being one and the same thing.

    int pop_count64(long num) {
       return __builtin_popcountl(num);
    }
    int pop_count128(__int128_t num) {
       return __builtin_popcountll(num);
    }
    pop_count64:
           xor     eax, eax
           popcnt  rax, rdi
           ret
    pop_count128:
           xor     eax, eax
           popcnt  rax, rdi
           ret

    It turns out that my implementation of most of the bit counting intrinsics(count leading / trailing zeroes) have been silently truncating 128 bit ints to 64 bit ones, and only them performing the needed calculations. That obviously yields incorrect results.

    However, emulating those 128 bit intrinsics is not too difficult. The popcount intrinsic simply checks how many bits are set in an integer. So, I can add up the number of bits set in the lower and higher half of that integer, and get the correct result .

    static inline __uint128_t pop_count128(__uint128_t val) {
       return __builtin_popcountl((uint64_t)val) +  __builtin_popcountl((uint64_t)(val>>64));
    }

    I have also finally fully implemented the very last checked arithmetic operations. Checking for overflows during 128 bit int multiplication is hard. For quite some time. I have been trying to come up with some clever ideas for fast overflow checks. Sadly none of them ended up working out for 128 bit multiplication.

    After much deliberation, I decided to simply settle for the easy, but inefficient check. Basically, as long as (a * b) / b == a, and b is not zero, overflow did not occur.

    bool u128_mul_ovf_check(__uint128_t A0 ,__uint128_t A1 ){
    bb0:
    	if((A1) != (0)) goto bb1;
    	return false;
    bb1:
    	// Not UB: b != 0, unsigned overflow is well-defined.
    	return (((A0) * (A1)) / (A1)) == (A1); 
    }

    This is nothing groundbreaking, but they - at least it works, and it gets a few more tests to pass.

    Subslicing

    The subslicing bug was quite embarrassing: I forgot a sizeof, and was offsetting the slice's data pointer by bytes instead of elements. It is not hard to see why this is wrong.

    With how simple this bug is, you might wonder how on earth it has managed to stay undetected for so long. Well, the code was only broken for subslicing from the end of the slice, and not its beginning. To my knowledge, that sub slicing mode is mainly used in pattern matching.

    let ok = slice[2..5];
    let still_ok = slice[5..];
    // broken
    if let [start, reminder] = slice{
    	panic!();
    };

    So, subslicing was only broken for this specific pattern, and always worked fine for byte/string slices(bytes and UTF8 code units are a 1 byte in size). This allowed it to sneak past my own tests, and only showed up when running the whole Rust compiler test suite.

    Fallback intrinsics

    It turns out I did not have to implement some intrinsics manually - the Rust compiler already supports emulating them. For certain intrinsics, this is a god send - since they are a pain to implement by hand.

    For example, carrying_mul_add requires you to perform multiplication on an integer 2x larger than the input one. This is fine up to 64 bits, but... what integer is larger than 128 bits? LLVM supports 256 bit ints, but C(and .NET) does not.

    define void @carrying_mul_add(ptr dead_on_unwind noalias nocapture noundef writable writeonly sret([32 x i8]) align 16 dereferenceable(32) initializes((0, 32)) %_0, i128 noundef %a, i128 noundef %b, i128 noundef %c, i128 noundef %d) unnamed_addr #0 !dbg !7 {
      %0 = zext i128 %a to i256, !dbg !25
      %1 = zext i128 %b to i256, !dbg !25
      %2 = zext i128 %c to i256, !dbg !25
      %3 = zext i128 %d to i256, !dbg !25
      %4 = mul nuw i256 %1, %0, !dbg !25
      %5 = add nuw i256 %4, %2, !dbg !25
      %6 = add nuw i256 %5, %3, !dbg !25
      %7 = trunc i256 %6 to i128, !dbg !25
      %8 = lshr i256 %6, 128, !dbg !25
      %9 = trunc nuw i256 %8 to i128, !dbg !25
      store i128 %7, ptr %_0, align 16, !dbg !25
      %10 = getelementptr inbounds nuw i8, ptr %_0, i64 16, !dbg !25
      store i128 %9, ptr %10, align 16, !dbg !25
      ret void, !dbg !26
    }

    So, the ability to just use a built-in emulated version of this intrinsic is amazing: this means I don't need to fiddle around and find my own solution to this problem.

    This is also very interesting for another reason: since carrying_mul_add performs 256 bit multiplication and addition using 128 bit integers, it means it is capable of performing 128 bit operations using 64 bit ints.

    I am currently looking into understanding that fallback implementation a little bit better, in order to base my own emulation of 128 bit ints on that.

    While a lot of modern C compilers and platforms support 128 bit ints without a major hassle, I want to support as many platforms as possible.

    Supporting more C compilers.

    Besides that, I have been working on improving C compiler compatibility. You might have seen Rust code running on a Game boy, compiled to movs, or the April Fool's special of Rust running on Temple OS.

    The more obscure C compilers I support(to any degree) the higher the chance Rust code will run with proprietary C compilers I have no direct access to.

    This has been a bigger problem for the project as of late. Turns out, a lot of platforms are not supported for a good reason(lack of docs + lack of access). Not supporting those platforms is a bit of a hindrance for the project.

    To give an example: there have been discussions about writing some new parts of Git in Rust.

    Sadly, doing that would mean degrading / dropping Git support for the proprietary platform NonStop - since it does not support Rust(or LLVM or even GCC), at all.

    Originally, I was a bit optimistic about situations like this: if my project compiled Rust to C, it could eliminate this problem altogether.

    In theory Rust would be able to run anywhere C can. There are some big asterisks to this(I am still unsure if I can work around certain issues on all platforms), but hey - this might be the best shot at supporting Rust there, save for companies stepping in and adding LLVM support, which I feel is... unlikely.

    Recently, I wanted to check if 'Supporting Rust by compling it to C' is a viable strategy in a case like this.

    However, I immediately hit a stone wall. I could find no legal way of obtaining a compiler for this platform without buying a server, which is definitely way, way outside my budget.

    So, I don't belive Rust is going to run on a platform like this any time soon.

    Plan for now

    For now, the plan is to get as close to standard-compliant C99(or maybe even ANSI C) as possible, and only use standard POSIX APIs(I need some threading support to properly initialise thread-locals).

    That means I have my own fallbacks for certain intrinsics, and I am slowly but surely working on expanding that list. I have had some success running very, very simple Rust programs on ANSI C compilers, so there is definitely some hope.

    Fingers crossed, that'll mean that adding support for currently unviable platforms is easy enough when a need for that arises.

    Tiny perf improvements

    I have also worked on a wide variety of performance improvements. The smallest changes were related to integer literals. I realized that, for integers smaller than 2^32, their hex form is always bigger if not as big as their decimal form, due to the 0x prefix. Eg. 255 is a byte shorter than 0xFF, and so is 65536(0xFFFF). Only for 2^32 things start to even out. This may seem like a negligible change. However, I generate a lot of C code. In some more extreme cases(transpling the entire Rust compiler to C), I have generated up to 1GB of C source files. At that point, shaving even a fraction of a percent of the total file size has an impact.

    My way of embedding debug info(using the #line directive) also got a bit smarter - the source file name will not repeat, and is only included when that changes.

    So this:

    #line 1 great.rs
    L0 = A0 + A0;
    #line 2 great.rs
    L1 = L0 * 5.5;
    #line 1 amazing.rs
    L2 = L1 * L1 * L1;
    #line 4 great.rs
    L3 = L2 - A0

    Is written like this, instead:

    #line 1 great.rs
    L0 = A0 + A0;
    #line 2 
    L1 = L0 * 5.5;
    #line 1 amazing.rs
    L2 = L1 * L1 * L1;
    #line 4 great.rs
    L3 = L2 - A0

    It may seem like a tiny change, but it reduces file sizes by a lot(when using debug info).

    Refactors

    rustc_codegen_clr has seen some big, internal refactors. I have managed to split some parts of it into separate crates, which speeds up incremental builds. That makes development a bit easier.

    I am also progressing along with my move to a more memory-efficient interned IR. Along the way, I am also slowly removing some jank from the old IR.

    The main issue is that there exist some rather exotic r/lvalues which don't map too well to C. They are quite hard to show without going into some more obscure features of Rust, like dynamically sized types. You can safely skip this section.

    Consider this piece of Rust code:

    /// Custom DST.
    struct MyStr{
    	sized:u8,
    	s:str
    }
    impl MyStr{
    	fn inner_str(&self)->&str{
        		&self.s
    	}
    }

    This line &self.s may seem simple but it is not. Since MyStr is a dynamically sized type, so the pointer to it is "fat" - it contains metadata.

    Let us think about what kind of C code this function will produce.

    FatPtr_str inner_str(FatPtr_MyStr self){
    	// What goes here?
    }

    Here, we need to do 2 things: Offset the "data" pointer of our self fat pointer by 1(the size of the fixed-size fields) Create a new slice from that data pointer, and some metadata. This is quite easy to do in modern C.

    struct FatPtr_str inner_str(struct FatPtr_MyStr self){
       return (struct FatPtr_str){self.data + 1, self.meta};
    }

    However, compound literals were not part of the language until C99, and a lot of old/obscure compilers don't support that.

    Instead, we need to do something like this:

    struct FatPtr_str inner_str(struct FatPtr_MyStr self){
       struct FatPtr_str tmp;
       tmp.data = self.data;
       tmp.meta =  self.meta;
       return tmp;
    }

    This is an ANSI-C compliant way of doing things. However you might notice that 1 line of Rust(and MIR) now corresponds to multiple lines of C. That is a pain to manage on the IR level. The old IR had an odd way of dealing with this: it essentially allowed you to create an inner scope, with a temporary local, and some "sub-statements".

    This is quite messy, and frankly an idiotic way of dealing with this problem. Well, at least I now know that I will not be making this exact mistake again. The new way of doing things is a bit more complex in the setup phase, but it makes the whole IR much more simple.

    There are other cases where this "temporary scope" was useful, but now, only 1 of the most annoying cases like this remains. Once I get that solved, I'll be able to entirely get rid of this abomination of a feature.

    This will allow me to fully move to the new IR, which is going to be very neat.

    Conclusion

    I have made a fair bit of progress during the last few months. There definitely are some diminishing results to bug fixing: the less bugs there are, the more time I need to track them all down. Still, there is something new to learn about both C and Rust every day. I have been working on `rustc_codegen_clr` for 1.5 years now - that feels a bit... odd. A lot has happened in that time: both in my personal life, and in the wider world.

    Truth be told, that sometimes feels like it was a lifetime ago.

    In this strange, new world, there is a bit of comfort in the monotony of work - each day, I inch towards a grander goal. I learned a lot along the way, but with each passing moment, I see there is so much more to know. It is calming.

    But I digress - you have come here to hear about Rust, and compilers.

    I have some interesting things coming: I am working on finishing the part 2 of 'Rust panics under the hood' - a step by step explanation of the Rust panicking process. I am considering splitting that article in two: It is already 10 minutes long, and I have only just finished explaining how panic messages are created.

    Besides that, I have been working on a few odd things, including a tiny(2K LOC), but very accurate memory profiler for Rust. My schedule is quite tight, but I hope I will write something about this in the coming weeks.

    If you like this project(`rustc_codegen_clr`), and think somebody else might find my work interesting, feel free to share my posts on Bluesky and Linkedin.

    *If you want to know more about the project(and it's .NET roots), I have more articles about it you can find on the home page, under the rustc codegen clr category.




    All Comments: [-] | anchor

    dilawar(3671) 6 days ago [-]

    Is it LLVM IR --> C? Or Rust AST to C?

    dilawar(3671) 6 days ago [-]

    Found the answer in the project readme.

    > My representation of .NETs IR maps nicely to C, which means that I was able to add support for compiling Rust to C in 2-3K LOC. Almost all of the codebase is reused, with the C and .NET specific code only present in the very last stage of compilation

    epage(10000) 6 days ago [-]

    It is a rustc backend, ie an alternative to llvm, gcc, or the cranelift backends.

    It started as a .NET backend but they found that their approach could easily support C code generation as well so they added that. They do this by turning what rustc gives them into their own IR.

    OutOfHere(10000) 6 days ago [-]

    How is this not dangerous? How can one be assured that all of the compile-time safety features of the Rust compiler are still in effect? Handwaving does not help.

    HeliumHydride(10000) 6 days ago [-]

    It's as safe as LLVM IR is safe, assuming you trust the LLVM IR -> C translation step.

    grandempire(10000) 6 days ago [-]

    Because they happen at compile time?

    cv5005(10000) 6 days ago [-]

    How does the rust compiler assure that when compiling to machine code? Machine code is less safe than C after all.

    wiseowise(10000) 6 days ago [-]

    How can one be assured that all of the compile-time safety features of Java are is still in effect in bytecode?

    flomo(10000) 6 days ago [-]

    Of course, everyone votes up the headlines, but this link seems like premature WIP. Hopefully this will get posted for real after the presentation.

    ay(10000) 6 days ago [-]

    I clicked through to the project at https://github.com/FractalFir/rustc_codegen_clr - from a quick glance at it, with 1.8k stars and 17 contributors, it deserves a better treatment than a passive—aggressive dismissal like this as a top comment.

    It is a very impressive piece of work.

    xmodem(10000) 6 days ago [-]

    Yeah, exactly. Here on a website called 'Hacker News', we're only interested in projects when they're feature complete and mature enough for production deployment, not before. (/s)

    baq(3579) 6 days ago [-]

    This is Hacker News, not Product Hunt.

    EasyMark(3653) 6 days ago [-]

    If you read the article you'll see this is a status report and not a pitch for a final product.

    cod1r(3564) 6 days ago [-]

    this fractalfir person is super talented. See them on the rust reddit all the time. I'm not knowledgeable on compilers at all but others seem to really like their work.

    landr0id(10000) 6 days ago [-]

    I think they're pretty young too. Hoping for a bright future ahead of them!

    jokoon(10000) 6 days ago [-]

    At first I read it as C to rust compiler.

    What is the point of compiling rust to C?

    drdeca(3395) 6 days ago [-]

    I think there are probably C compilers for more platforms than there are rust compilers. So, if you want to compile your rust project on some obscure platform that doesn't have a rust compiler for it yet, you could compile to C and then compile the resulting C code for that platform?

    Just a guess.

    teo_zero(10000) 6 days ago [-]

    > What is the point of compiling rust to C?

    To address platforms that don't support Rust. TFA mentions NonStop, whatever it is.

    arghwhat(10000) 6 days ago [-]

    Using C compiler infrastructure, taking Rust where rustc/llvm does not go. Proprietary platforms with proprietary compilers for example.

    oulipo(3506) 6 days ago [-]

    I guess it's to target platforms (like some microcontrollers) which don't yet have a native Rust compiler, but often do have a C compiler?

    vblanco(2711) 6 days ago [-]

    Game consoles generally only offer clang as a possibility for compiler. If you can compile rust to C, then you can finally use rust for videogames that need to run everywhere.

    jeroenhd(3638) 6 days ago [-]

    To use rust in places where you can only use C. I imagine there are quite a few obscure microcontrollers that would benefit greatly from this pipeline.

    Hell, you might finally be able to get Rust into the Linux kernel. Just don't tell them the code was originally written in Rust to calm their nerves.

    1vuio0pswjnm7(974) 6 days ago [-]

    'Most components of std are about 95% working in .NET, and 80% working in C.'

    .NET

    Core tests 1662 39 12 97.02%

    C

    Core tests 1419 294 82.83%

    Missing from HN title: The '95%' pass rate only applies to .NET. For GCC/Clang it is only '80%'.

    FractalFir(10000) 4 days ago [-]

    Sorry, the README was out of date. Those numbers are from the beginning of the year, and now they are: | .NET Core tests | 1764 | 48 | 20 | 96.29% | | C Core tests | 1712 | 71 | 8 | 95.59% |

    db48x(2985) 6 days ago [-]

    I'm not convinced that it's worth spending any time supporting most proprietary systems. Maybe not even Windows, but especially the really expensive ones.

    o11c(10000) 6 days ago [-]

    You shouldn't spend your own effort; you should make it clear that you're open to users of such systems contributing.

    That's how GCC became so dominant - there were people already using all sorts of Unixen and they wanted a compiler, so they made it work.

    AlienRobot(10000) 6 days ago [-]

    Funny, because the average person is convinced it's not worth spending any time supporting Linux!

    EasyMark(3653) 6 days ago [-]

    I'm always convinced that people will pick up arbitrary projects that they are interested in and might not necessarily lead to a new pitch for venture capital or the next unicorn.

    iaaan(10000) 6 days ago [-]

    Lots of interesting use cases for this. First one that comes to mind is better interop with other languages, like Python.

    xmodem(10000) 6 days ago [-]

    What does this gain you that you can't already do with `extern 'c'` functions from rust?

    pornel(3085) 6 days ago [-]

    The interop is already great via PyO3, except when people want to build the Rust part from source, but are grumpy about having to install the Rust compiler.

    This hack is a Rust compiler back-end. Backends get platform-specific instructions as an input, so non-trivial generated C code won't be portable. Users will need to either get pre-generated platform-specific source, or install the Rust compiler and this back-end to generate one themselves.

    claudiojulio(10000) 6 days ago [-]

    Very cool. C to Rust would be fantastic.

    ndndjdnd(10000) 6 days ago [-]

    What benefit would you envision from this?

    Aurornis(10000) 6 days ago [-]

    > C to Rust would be fantastic.

    This would have to go into one big unsafe block for any nontrivial program. C doesn't convey all of the explicit things you need to know about the code to make it even compile in Rust.

    g-mork(10000) 6 days ago [-]

    Mark Russinovich recently gave a talk at a UK Rust conference that mentioned Microsoft's internal attempts at large scale C->Rust translation, https://www.youtube.com/watch?v=1VgptLwP588

    jeroenhd(3638) 6 days ago [-]

    Tools like those exist. The problem with them is that they use unsafe blocks a lot, and the code usually isn't very idiomatic. Translating global variable state machines into more idiomatic Rust state machines based on things like named enums, for instance, would be very difficult.

    With the help of powerful enough AI we might be able to get a tool like this, but as AI still very much sucks at actually doing what it's supposed to do, I don't think we're quite ready yet. I imagine you'd also need enough memory to keep the entire C and Rust code base inside of your context window, which would quickly require very expensive hardware once your code grows beyond a certain threshold. If you don't, you end up like many code assisting LLMs, generating code independently that's incompatible with itself.

    Still, if you're looking to take a C project and extend it in Rust, or perhaps slowly rewrite it piece by piece, https://c2rust.com/ is ready for action.

    Krutonium(10000) 6 days ago [-]

    But does it carry the Rusty guarantees?

    cryptonector(10000) 6 days ago [-]

    Why wouldn't it?

    GolDDranks(3223) 6 days ago [-]

    If the transpilation itself is bug-free, why not? For static guarantees, provided we transpile Rust code that already compiles on a normal Rust compiler, the guarantees are already checked and there, and the dynamic ones such as bounds checking can be implemented runtime in C with no problems.

    pixelfarmer(10000) 6 days ago [-]

    If I see something like 'At least on Linux, long and long long are both 64 bits in size.' my skin starts to crawl. Not only that, but GCC defines __builtin_popcount() with unsigned int / long / long long, respective, i.e. even in the text it should be mentioned correctly (unless a different compiler uses signed types there ... ugh). The call is done with unsigned, using uint64_t as a type-cast, but using a fixed __builtin_popcountl() which translates to unsigned long. There are systems where this will fail, i.e. the only safe bet to use here is __builtin_popcountll() as this will cover at least 64 bit wide arguments.

    Also, if a * b overflows within the result type, it is an undefined behavior according to the C standard, so this overflow check is at least not properly portable, either, and the shown code for that is actually buggy because the last A1 has to be A0.

    No idea why all that gets me so grumpy today ...

    dlahoda(10000) 6 days ago [-]

    thank for PR. very fast turn around.

    FractalFir(10000) 6 days ago [-]

    Correct me if I am wrong C, unsigned overflow is well-defined - at least the GCC manual says so, but I'll have to check the standard.

    https://www.gnu.org/software/c-intro-and-ref/manual/html_nod...

    Since signed multiplication is bitwise-equivalent to unsigned multiplication, I use unsigned multiplication to emulate UB-free signed multiplication. The signed variant of this overflow check is a bit harder to read because of that, but it still works just fine.

    bool i128_mul_ovf_check(__int128 A0 ,__int128 A1 ){

    bb0:

    if((A1) != (0)) goto bb1;

    return false;

    bb1:

    return (((__int128)((__uint128_t)(A0) * (__uint128_t)(A1))) / (A1)) == (A1);

    }

    As for using `__builtin_popcountll` instead - you are right, my mistake. Thanks for pointing that out :).

    I did not use the word 'unsigned' before long long for the sake of readability - I know that repeating a word so many times can make it harder to parse for some folk. The project itself uses the correct types in the code, I was just kind of loose with the language in the article itself. My bad, I'll fix that and be a bit more accurate.

    Once again, thanks for the feedback!

    zwnow(10000) 6 days ago [-]

    Why would I use a tool that doesn't pass all tests?

    01HNNWZ0MV43FF(10000) 6 days ago [-]

    To not write C

    haswell(10000) 6 days ago [-]

    The post is an update on the status of an ongoing project.

    > This is an update on the progress I have made on my Rust to C compiler.

    > There still are about 65 tests that need fixing, but they all seem to have pretty similar causes. So, fixing them should not be too difficult.

    cbmuser(10000) 6 days ago [-]

    I am still waiting for any of the alternative Rust front- or backends to allow me to bootstrap Rust on alpha, hppa, m68k and sh4 which are still lacking Rust support.

    Originally, the rustc_codegen_gcc project made this promise but never fulfilled it.

    Aurornis(10000) 6 days ago [-]

    > to allow me to bootstrap Rust on alpha, hppa, m68k and sh4

    Do you actually use all four of those platforms, or is this an arbitrary threshold for what you consider a complete set of platform support?

    hedgehog(10000) 6 days ago [-]

    Did they abandon that goal? Last I heard it was still under development.

    shakna(1921) 6 days ago [-]

    'm68k-unknown-linux-gnu' was merged as a Tier-3 target for Rust, wasn't it? [0]

    [0] https://github.com/rust-lang/compiler-team/issues/458

    jedisct1(2109) 6 days ago [-]

    rust still doesn't even support OpenBSD on x86_64...

    alexpadula(10000) 6 days ago [-]

    Rust to C? Why would someone do that. Just write C.. if you can figure rust out you surely can figure C out and be proficient.

    alexpadula(10000) 6 days ago [-]

    I will read further into the project just off the bat I don't get the point. Good luck it looks quite extensive :)

    AS04(10000) 6 days ago [-]

    Because of the niceties of Rust, combined with the widespread compatibility and architecture support of gcc / C compilers in general?

    Rust is a modern language, with package management, streamlined integrated build/testing tools, much less cruft, and lots of high-level features and syntax that people actually like. C is neat but complex codebases benefit from modern languages that help in building robust abstractions while still maintaining the speed of C. Not to mention, of course, the borrow checker and memory safety.

    AlotOfReading(3629) 6 days ago [-]

    So you can get the benefits of Rust on platforms that rustc doesn't support. Seems pretty straightforward.

    wolrah(10000) 6 days ago [-]

    It seems like there's a healthy dose of 'because it can be done' in play here, but also because there are a lot of platforms that are not supported by Rust where a Rust-to-C converter that generated standard-enough code could be used to bridge the gap.

    nullpoint420(10000) 6 days ago [-]

    Would it be possible for Rust to output LLVM IR? Would that make it easier to port if they have a LLVM frontend?

    guipsp(10000) 6 days ago [-]

    This comment is strange, given that LLVM is rust's most mature backend





    Historical Discussions: Behind the 6-digit code: Building HOTP and TOTP from scratch (April 11, 2025: 248 points)

    (248) Behind the 6-digit code: Building HOTP and TOTP from scratch

    248 points 7 days ago by dogacel in 3635th position

    blog.dogac.dev | Estimated reading time – 14 minutes | comments | anchor

    A while ago, I have started working on authorization and authentication at work. This taught me a lot about how modern authentication systems work. However I have always thought One-Time Password logins are the most mystical ones. A six-digit code that changes every time and can be used to verify your identity. How does the server know the newly generated one, and how is it really secure? In this post, I will explain what HOTP, TOTP is and how they work by sharing my own implementation from scratch.

    A sample OTP login code

    What Are OTPs?

    One-Time Passwords (OTPs) are a widely-used form of authentication. You've likely encountered them when using a "Secure Login" app like Google Authenticator, or during a "Forgot Password" flow where a temporary code is sent to your email or phone.

    Unlike traditional passwords, OTPs are only valid for a single use or a limited time window. This greatly reduces the risk of password replay attacks, where someone captures the password used to login and tries to reuse it.

    Passwords can be used repeatedly. When leaked, malicious actors can impersonate the user and access critical information.

    Like the traditional password authentication approach, the user and the authority (server) still needs to agree on a common secret key. During the regular password authentication, this secret key is directly communicated to the authority. There are many ways of doing this process safely, such as hashing the password or sending it over an encrypted network. However the risk still exists, as the password itself never changes, as long as we use our devices to type our passwords, there is some way those malicious actors can watch and get that information before it reaching the network.

    So instead of using a constant secret key, we can use something dynamic that changes over time. As a simple example, assume when those two people first met, they have set their secretly hidden clocks to a random time together.

    Using secret clocks as a basic OTP implementation

    Also in some examples like a password recovery, we can use also use a secret clock. This secret clock not shared with the user directly but rather server's generated one-time password is sent via a trusted medium, such as an email to the user.

    Edit: Several readers have warned me it is much easier to generate random numbers instead. The server has to store number of attempts to make sure it is not brute forced as well.

    Obviously a clock on its own is not secure, as in this example Plankton could have predicted the time-shift of the secret clock based on the real time. However for the sake of this example, I wanted to show how copying the 'password' is not enough on its own. Let's take a look at some strategies to build this 'secret clock' and make sure it is not possible to predict time just by knowing a single code in some point in time.

    There are two common types of OTP algorithms:

    • HOTP (HMAC-based One-Time Password) – based on a counter that increments every time an OTP is requested.
    • TOTP (Time-based One-Time Password) – based on the current time, typically using 30-second intervals.

    These methods are standardized in RFC 4226 (for HOTP) and RFC 6238 (for TOTP), and are used in many modern 2FA (two-factor authentication) implementations.

    A counter based password method is easier to understand. Imagine two people met and generated a totally random series of numbers. They both start from count 0, as in each attempt, user needs to communicate to the server with the secret key in the given index. However this comes with several problems,

    1. Clients needs to sync their counter, if there is a skew, they might get temporarily locked out.
    2. Malicious actors can collect upcoming login codes by phishing the user and those codes can be used for a long time.

    Therefore, instead of storing a counter, we can use the current time as the counter. That's how TOTP works. Using time makes synchronization easier, as many modern machines already use technologies such as NTP to sync their time and this prevents malicious actors from harvesting codes as their code will be valid for only next 30 seconds or so, not for a long sequence of future login attempts.

    How to Generate TOTPs?

    The analogy of two people met and decided on a totally random series of numbers is partially realistic. However it is not feasible to have such a huge list, you potentially need to have millions of secret numbers to support OTPs for a reasonable time. Therefore we should use algorithms that are cryptographically safe that generate values based on a secret key. It is important that this algorithm is not random, as both user and the authority will hold a copy of this secret key and they should be able to generate the same value given the same time.

    We have introduced HOTP first because the actual implementation of TOTPs are actually HOTP based. Instead of using a static counter, TOTPs use the time as the current counter. We can write the following formula to find the counter in any given time,

    c(t)=⌊t−t0X⌋ c(t) = \left\lfloor \frac{t - t_0}{X} \right\rfloor c(t)=Xtt0

    Here t0t_0t0 is the starting time, in most systems this is the default UNIX epoch timestamp, 1 January 1970. XXX is the period you want the code to rotate. For example, if you want the login code to change every 30 seconds, X should be 30 seconds.

    How to Actually Generate HOTPs?

    In order to generate an HOTP, you need to decide on three things:

    1. A secret key
    2. A hash function
    3. Number of digits you will output

    First, we need to start by hashing our secret key. For example, if we have chosen SHA-1 as our hashing algorithm, our output would be only 64 bytes. If secret key is shorted than 64 bytes, we can just pad it with zeroes. Otherwise, given KKK is our secret key and HHH is our hashing algorithm,

    Kpad=H(K) K_{pad} = H(K) Kpad=H(K)

    Later we do an XOR operation on text with some pre-defined magic constants IpadI_{pad}Ipad and OpadO_{pad}Opad.

    Ipad=[0x36,... ]Opad=[0x5c,... ] I_{pad} = [\texttt{0x36}, \dots] \newline O_{pad} = [\texttt{0x5c}, \dots] Ipad=[0x36,...]Opad=[0x5c,...]

    Those numbers are originally chosen by HMAC designers and any pair where Ipad≠OpadI_{pad} \neq O_{pad}Ipad=Opad could have been chosen. Their lenght should be also 64 bytes, same as our hashing algorithm's digest length. Later we define the famous HMAC \text{HMAC} HMAC, Hash-based Message Authentication Code, function. It outputs a crypthographic hash calculated using the given key and message.

    HMAC(K,M)=H(Kpad⊕Opad+H(Kpad⊕Ipad+M)) \text{HMAC}(K, M) = H(K_{pad} \oplus O_{pad} + H(K_{pad} \oplus I_{pad} + M)) HMAC(K,M)=H(KpadOpad+H(KpadIpad+M))

    This cryptographic hash function is secure, so that user can't infer the secret key Kpad K_{pad} Kpad even if they knew M M M and the resulting hash.

    Later we will define a new function to generate a 4-byte result. Here is the definition of DT from the original RFC,

        DT(String) // String = String[0]...String[19]
         Let OffsetBits be the low-order 4 bits of String[19]
         Offset = StToNum(OffsetBits) // 0 <= OffSet <= 15
         Let P = String[OffSet]...String[OffSet+3]
         Return the Last 31 bits of P

    This function allows us to shrink our 20 byte input to 4 bytes dynamically by choosing the bytes offsetted by the number that is represented using the last 4 bits of the input. The outputs of the DT on distinct counter inputs are uniformly and independently distributed.

    Finally, we can define our HOTP function as,

    HOTP(K,C)=DT(HMAC(K,C)) mod 10digits \text{HOTP}(K,C) = \text{DT}(\text{HMAC}(K,C)) \bmod 10^{\text{digits}} HOTP(K,C)=DT(HMAC(K,C))mod10digits

    Here we can replace our counter C C C with c(t) c(t) c(t) to get a TOTP code.

    There are many online resources with TOTP and HOTPs, however I have struggled to find a website that help me check my implementation as their secret-key representations were not standardized. Thus, I have published my own short demo app to showcase.

    OTP Generator

    Test and validate OTP workflows such as TOTP and HOTP.

    I have published this app on my website and also on GitHub, the implementation uses Kotlin.

    To recap: We've looked at how HOTP and TOTP work, explored how they're derived from HMAC, and saw how the server and client can generate matching codes without ever transmitting the password itself.

    Working on this project helped me understand how OTPs work at a much deeper level. What once felt like magic now feels like elegant design.




    All Comments: [-] | anchor

    3eb7988a1663(10000) 3 days ago [-]

    It is a bit terse, but there is a 20-line Python implementation which cleared up the ideas for me: https://github.com/susam/mintotp

    easterncalculus(2998) 3 days ago [-]

    I love this one. The neat thing about TOTP is that while the algorithm itself is simple, the algorithms it depends on are also relatively simple, at least for cryptography. For HMAC you just need SHA1, and that can be implemented relatively easily without much more code. As a learning exercise it's quite good.

    lifthrasiir(2959) 3 days ago [-]

    It is even shorter without boilerplates:

        def hotp(key, counter, digits=6, digest='sha1'):
            key = base64.b32decode(key.upper() + '=' * ((8 - len(key)) % 8))
            counter = struct.pack('>Q', counter)
            mac = hmac.new(key, counter, digest).digest()
            offset = mac[-1] & 0x0f
            binary = struct.unpack('>L', mac[offset:offset+4])[0] & 0x7fffffff
            return str(binary)[-digits:].zfill(digits)
        
        def totp(key, time_step=30, digits=6, digest='sha1'):
            return hotp(key, int(time.time() / time_step), digits, digest)
    SkiFire13(3545) 3 days ago [-]

    Those `>Q` and `>L` just make it more confusing for me, they just feel like a different language in the language...

    jillesvangurp(3201) 3 days ago [-]

    I adapted code for Java back in the day from here: https://github.com/j256/two-factor-auth/blob/master/src/main...

    A bit longer but most of it is just boilerplate Java stuff to deal with polymorphism and a base32 implementation. I recall, stripping most of that away in our internal adapted version of that.

    Key points:

    - generate a 16 character base32 secret and stuff it in a totp link. otpauth://totp/Alice:[email protected]?secret=JBSWY3DPEHPK3PXP&issuer=Alice

    - stuff that in a QR code and show it to the user so they point their phone authenticator app at it to store the secret. We used a js library for this.

    - store the secret with the user account in a secure way (we used aes encryption for this)

    - when verifying, use the secret, a timestamp in seconds after the epoch divided by 30 (simple normalization step applied on the client as well) and use the user provided number to construct a sha1 hmac and grab the last digits and prepend with zeros. The calculated string should be the same as what the user typed from their token app as long as their clock is in sync.

    - we actually implemented a grace period by calculating the before and after code as well so the user isn't screwed over if the number rotates while they were tapping out the code.

    While relatively easy to implement, we ran into a lot of friction rolling this out to normal users. Basically non technical people find this stuff super confusing and we had to hand hold quite a few people through the process and we also had to deal with people that lost their secret, or kept on using the wrong code (for a different account). The UX of this stuff is just terrible. Be prepared to deal with a lot of support overhead if you choose to roll this out. A non trivial percentage of users will manage to lock themselves out of their accounts.

    jqpabc123(10000) 2 days ago [-]

    I implemented TOTP as a command line app doing lookup and generation by pulling secrets from a locally encrypted password file.

    And before someone asks, the decrypt key is only stored in my head and the app fails silently after a significant delay if the decrypt fails.

    What I don't get is how HOTP is anything but a fail waiting to happen if used across an unreliable network. Maybe this explains why I have yet to encounter a real world deployment of HOTP.

    GoblinSlayer(10000) 2 days ago [-]

    In my experience HOTP works fine, why not. The real world deployment is a replacement for sms otp.

    ucarion(3561) 3 days ago [-]

    Six-digit verification codes for something like a 'forgot password' flow are OTPs -- they're only good for one login -- but they are not HOTP/TOTPs. HOTP/TOTP has a registration step, where you copy a server-generated secret to your phone through a QR-code-encoded otpauth:// URI (https://github.com/google/google-authenticator/wiki/Key-Uri-...). That doesn't happen in a 'forgot password' flow.

    Incidentally, if you think of TOTP as being HMAC(unix mod 30, secret), one idea would be to do public key crypto instead of symmetric HMAC stuff. That's basically what a security key is.

    If you additionally made it so that you couldn't phish the security key -- by having the OS + web browser know which apps can ask for which security keys -- you'd have reinvented WebAuthn.

    P.S.: Make you sure you have stuffing protection in place against these kinds of six-digit-code auth schemes. A million possibilities is often acceptable for a secondary factor, but it's useless if attackers can just try all million codes.

    Since they're in the thread, nice article 'dogacel! I've never seen an article on this that also took the time to dig into HMAC internals and that gnarly DT function.

    dogacel(3635) 3 days ago [-]

    All very valuable comments! Actually I had a small edit on the 'forget password' flow.

    I agree that an asymmetric key makes much sense. Secret key can be left at the user device while server only contains the public key. That sounds much more secure. I will dig deeper!

    True about the stuffing proteciton, I actually want to do further reading on how TOTP is secured from random attacks. Statistically you are expected to crack 1 account in every 1 million attempts in 6 digits codes. Those numbers look pretty huge in the context of security, and a bot-net can potentially brute force couple hundred accounts every day.

    anilakar(10000) 3 days ago [-]

    > HOTP/TOTP has a registration step, where you copy a server-generated secret to your phone through a QR-code-encoded otpauth:// URI

    RFC4226 and RFC6238 do not specify anything but the actual algorithm(s), which is exactly what OP implemented.

    dfox(10000) 3 days ago [-]

    Doing similar idea with asymetric cryptography is problematic due to the size of messages involved that are not exactly convenient to type. Lower bound for the signature size is going to be something on the order of 128bits if we include 'weird' signature algorithms (ie. string that looks like MS Product Key), 240b for Schnorr with safe-ish parameters, at least 512b for anything widely accepted.

    You can probably come up with something related to S/KEY (which was kind of a precursor to HOTP) that can be made to work with arbitrary sized one time passwords and is technically asymetric (and quantum resistant at that), but the security trade-offs involved in that and somewhat wild user registration step of S/KEY make HOTP/TOTP more sane choice.

    danieldk(3334) 3 days ago [-]
    Incidentally, if you think of TOTP as being HMAC(unix mod 30, secret), one idea would be to do public key crypto instead of symmetric HMAC stuff. That's basically what a security key is.

    If you additionally made it so that you couldn't phish the security key -- by having the OS + web browser know which apps can ask for which security keys -- you'd have reinvented WebAuthn.

    Another key part of FIDO2 phishing protection is challenge-response. The relying party sends some random material/nonce that the authenticator has to sign. This avoids replay attacks that e.g. a time-based method would have, since when a phisher tries to authenticate, the RP will send a different nonce and the phisher cannot sign it.

    notpushkin(1263) 3 days ago [-]

    > Also in some examples like Facebook's password recovery, this secret clock is not shared with the user directly but rather server's generated one-time password is sent via a trusted medium, such as an email to the user.

    I'm pretty sure Facebook just makes up a random number and stores it?

    SoftTalker(3552) 3 days ago [-]

    Yes if you're sending the number to the user, might as well just be random that's a lot easier.

    Clocks and secrets only needed if the user is providing a number generated on the remote side.

    dogacel(3635) 3 days ago [-]

    Good catch. In my mind storing that random number is similar to storing a plain-text password, thus I thought they were generating TOTPs. Let's hear from others how they implemented it.

    yuliyp(10000) 3 days ago [-]

    Facebook's login/account recovery codes are not TOTP/HOTP, but are random numbers. Also, the author struggled to check their implementation. One can easily compare an implementation of many websites by grabbing the QR codes they use for login and importing into your favorite authenticator app and also decoding the QR code to get the secret. In theory your code should produce the same codes at the same time as the app.

    dogacel(3635) 3 days ago [-]

    Hi,

    > Also, the author struggled to check their implementation. One can easily compare an implementation of many websites by grabbing the QR codes they use for login and importing into your favorite authenticator app and also decoding the QR code to get the secret.

    Can you clarify this? It's been some time since I have written the code, AFAIK it was working fine. Did you see any discrepencies when you tested the implementation against a real authenticator app?

    Erikun(3103) 3 days ago [-]

    Both RFC:s have test vectors you can use to write tests as well.

    coppsilgold(10000) 3 days ago [-]

    It's often a good idea to set up TOTP on accounts just because they may treat you differently due to having 2FA enabled. It would be harder to lose a gmail account to their 'security' systems if you add TOTP to it for example. In the case of gmail adding it is a hassle involving devtools to emulate a hardware key first then add TOTP and then delete the hardware 2FA.

    Some password managers such as KeepassXC have TOTP incorporated into them and you can have it available right next to the password. It may defeat the purpose of 2FA under some assumptions.

    dogacel(3635) 3 days ago [-]

    I personally use 1Password with hardware keys where possible.

    > It may defeat the purpose of 2FA

    True, I think this as a mid-step of smooth transition from plain-text passwords to secure keys. You kinda get the benefit of both.

    Also those apps are secured much better than a traditional password manager as browser auto-fill for example.

    rothfuss(10000) 3 days ago [-]

    Thanks for the read, I learnt something about HOTP/TOTP today.

    I would like to know why the clocks are all weird though - the numbers aren't in the right places. Were the images in this blog post 'AI' generated?

    dogacel(3635) 3 days ago [-]

    Nope not AI generated, I have used excalidraw. Only the cover page is AI generated.

    Clock drawing was an asset, I didn't really spent time trying to match the time on clock to the time mentioned by the actors.

    encom(10000) 3 days ago [-]

    Well I started reading, but then the page was blurred and blocked by a popup, so I only made it about a third down.

    dogacel(3635) 3 days ago [-]

    A simple click on a random place on screen should discard it. I wanted to connect with my readers so I have added that subscribe popup recently. As I have figured nobody subscribed to my newsletter yet :(

    Let me know if it doesn't work. Also would be glad if you can give browser / platform.

    ajsnigrutin(10000) 3 days ago [-]

    What is it with modern web design... can't even read a third of the page, and they already want my email to subscribe...

    dogacel(3635) 3 days ago [-]

    Clicking anywhere else discards it.

    I have removed the popup anyway, seems like most people don't like it.

    unethical_ban(10000) 3 days ago [-]

    I always thought it odd that companies would spend so much money on services like Symantec VIP, with their proprietary BS and high costs, when someone could implement TOTP in 15 minutes as an internal service.

    It's a little more complicated now with push notifications and more complex flows, but for generic TOTP?

    dogacel(3635) 3 days ago [-]

    Agree and disagree,

    Deciding on how to store the credentials is still a hard task. Even storing the secret. Ideally it shouldn't stay as a plain text in your database. If you use cloud, something like KMS can be used for additional security. Also you should still consider replay attacks, rate limits etc.

    I agree in the sense that TOTP is hard to implement, no it is not. I hope this article helped people understand how TOTP works.

    coolThingsFirst(10000) 3 days ago [-]

    > Like the traditional password authentication approach, the user and the authority (server) still needs to agree on a common secret key.

    Not sure what you mean by this, the server checks the hashed version of the password.

    dogacel(3635) 3 days ago [-]

    Hashing is done before storing the secret on the server side. Therefore they still need to communicate regarding the intial secret.

    sksxihve(3454) 3 days ago [-]

    On a side note, does anyone know why banks still rely on sms 2fa codes instead of TOTP? Is there some regulatory issue that makes it more difficult?

    UncleMeat(10000) 3 days ago [-]

    Everybody with a phone has SMS baked in. SMS also has a recovery process if you drop your phone in the toilet. Ultimately, this improved user experience outweighs the security benefit to TOTP for many organizations.

    TOTP also doesn't stop the biggest threat that SMS faces: phishing. Saving you from sim-swap attacks is just not a particular huge increase in security posture.

    My bank at least offers TOTP as an option, but the huge majority of people are going to enroll with SMS.

    Rygian(3479) 3 days ago [-]

    My two banks require additional approval via push notification to the phone app. No SMS involved.

    (In France.)

    dogacel(3635) 3 days ago [-]

    Some banks in Switzerland give customers a device that generates TOTP codes.

    coolThingsFirst(10000) 3 days ago [-]

    What is HMAC i still dont understand this part? Is it RSA encrytion?

    dogacel(3635) 3 days ago [-]

    No, RSA is asymetric, where it has a public/private key pair.

    HMAC is symetric, it only has a secret and it can be used to hash values one-way.





    Historical Discussions: Teuken-7B-Base and Teuken-7B-Instruct: Towards European LLMs (2024) (April 15, 2025: 246 points)
    Teuken-7B-Base and Teuken-7B-Instruct: Towards European LLMs (November 27, 2024: 7 points)
    SQFT: Low-Cost Model Adaptation in Low-Precision Sparse Foundation Models (October 29, 2024: 3 points)
    Unsupervised Human Preference Learning (October 24, 2024: 3 points)
    Agent Instructs Large Language Models to Be General Zero-Shot Reasoners (October 07, 2023: 5 points)
    Next-Generation OS Physical Memory Management for Terabyte-Scale NVMMs (October 14, 2023: 3 points)
    Exploring the Viability of Unikernels for ARM-Powered Edge Computing (December 11, 2024: 2 points)
    DroneARchery: Human-Drone Interaction Through Augmented Reality (November 02, 2022: 2 points)
    Shaping AI's Impact on Billions of Lives (December 07, 2024: 1 points)

    (246) Teuken-7B-Base and Teuken-7B-Instruct: Towards European LLMs (2024)

    246 points 3 days ago by doener in 22nd position

    arxiv.org | | comments | anchor

    arXivLabs: experimental projects with community collaborators

    arXivLabs is a framework that allows collaborators to develop and share new arXiv features directly on our website.

    Both individuals and organizations that work with arXivLabs have embraced and accepted our values of openness, community, excellence, and user data privacy. arXiv is committed to these values and only works with partners that adhere to them.

    Have an idea for a project that will add value for arXiv's community? Learn more about arXivLabs.




    All Comments: [-] | anchor

    smokel(10000) 3 days ago [-]

    A paper on languages that begins with a grammatical error in the first sentence does not inspire confidence:

    > LLMs represents a disruptive technology

    NitpickLawyer(10000) 3 days ago [-]

    Hey, at least it's not generated by chatgpt :D

    Funny how LLMs now write cleaner than humans in most cases.

    croes(347) 3 days ago [-]

    Given that it's about non-English languages it is forgivable

    JKolios(10000) 3 days ago [-]

    More diversity in the LLM space is always good. In my experience though, speaking as a native speaker of one of the less-used European languages, Mistral's models already use it pretty well.

    Etheryte(10000) 3 days ago [-]

    As a native of another small European language, no state of the art model comes anywhere close to not being laughably bad, so more work in this space is definitely welcomed as far as I'm concerned.

    debugnik(10000) 3 days ago [-]

    Really? In my experience, Le Chat eventually devolves into spanglish when trying to speak Spanish, so I would have expected worse from Mistral for minority languages.

    isodev(1173) 3 days ago [-]

    I live in a country with 3 national languages and I happen to use all of them + English + another one where most of our clients are based. Mistral is the only model atm which doesn't make a mess of it all. It's not perfect, but it doesn't force me to "pretranslate" things.

    kiru_io(3612) 3 days ago [-]

    Maybe someone should edit the title to mention this is from 2024: [Submitted on 30 Sep 2024 (v1), last revised 15 Oct 2024 (this version, v2)]

    dang(143) 3 days ago [-]

    Added. Thanks!

    KronisLV(3660) 3 days ago [-]

    I also quite liked the EuroLLM project: https://huggingface.co/blog/eurollm-team/eurollm-9b

    Was pretty good with Latvian (better than other models this size as well as variants of Llama or Qwen that I could run) and I assume probably with other EU languages as well.

    TheMatten(10000) 3 days ago [-]

    I've just tried it in one of the supported languages, and it seems to respond far better than any model under 24B that I've tried before. With its licensing, it sounds much more exciting to me than the OP.

    ozgune(10000) 3 days ago [-]

    I had a related, but orthogonal question about multilingual LLMs.

    When I ask smaller models a question in English, the model does well. When I ask the same model a question in Turkish, the answer is mediocre. When I ask the model to translate my question into English, get the answer, and translate the answer back to Turkish, the model again does well.

    For example, I tried the above with Llama 3.3 70B, and asked it to plan me a 3-day trip to Istanbul. When I asked Llama to do the translations between English <> Turkish, the answer was notably better.

    Anyone else observed a similar behavior?

    petesergeant(3553) 3 days ago [-]

    Fascinating phenomenon. It's like a new Sapir–Whorf hypothesis. Do language models act differently in different languages due to those languages or the training materials?

    mrweasel(10000) 3 days ago [-]

    Someone apparently did observe ChatGPT (I think it was ChatGPT) switch to Chinese for some parts of it's reasoning/calculations and then back to English for the final answer. That's somehow even weirder than the LLM giving different answers depending on the input.

    spacebanana7(10000) 3 days ago [-]

    I suspect this also happens in programming languages. Subjectively I get the feeling that LLMs prefer to write in Python or JS.

    Would be interesting to see whether they actually score better in leetcode questions when using python.

    hnfong(10000) 3 days ago [-]

    I'd mentally put this in the same box as 'chain of thought', where models perform better when explicitly describing the reasoning steps. The only difference in your case being that the model is undertrained in non-English data, so it's 'next token prediction' of non-English prompts is less robust, and thus explicitly converting to English and then back makes it better.

    This is probably the case for the 'deep reasoning' models as well. If you for example try DeepSeek R1, it will likely reason in either English or Chinese (where it presumably is well trained) even if the prompt is in other languages.

    laurent_du(10000) 3 days ago [-]

    ChatGPT is very informal and talks like a millennial when I ask questions in French. I hate it.

    mdp2021(1673) 3 days ago [-]

    Some studies are trying to ensure that the model reasons through abstractions instead of linguistic representations. (Of course the phenomenon of reasoning in substantially different quality depending on input language signals a fault - reasoning is beyond 'spoken' language.)

    In the past hours a related, seemingly important article appeared - see https://www.quantamagazine.org/to-make-language-models-work-...

    omneity(10000) 3 days ago [-]

    For most low-resource languages, support in LLMs is trained through translation pairs between english and the other languages, because translation data is easier to come across than say, conversations about coding, history, physics, basically the kind of data that is usually used for instruct training.

    This kind of training data typically looks like ChatGPT style conversations where all the prompts are all templated like "Translate the following text from X to Y: [text]" and the LLM's expected answer is the translated text.

    LLMs can generalize through transfer learning (to a certain extent) from these translation pairs to some understanding (strong) and even answering (weak) in the target language. It also means that the LLM's actual sweet spot is in translation itself since that's what was trained in, not just a generalization.

    anon291(10000) 3 days ago [-]

    I have observed this and this is what I would expect to have happened thinking from first principles.

    n49o7(10000) 3 days ago [-]

    I sometimes dream that they would internally reason in Ithkuil and gain amazing precision.

    quonn(10000) 3 days ago [-]

    Given the fact that LLMs like most neural networks work by passing their input through layers, wouldn't this be expected? There's no going back to an earlier layer and if the first layers are in some sense needed for 'translating' [0] to English, any other functionality in those layers cannot be used.

    [0] I am simplifying here, but it would make sense for an LLM to learn this, even though the intermediate representation is not exactly English, given the fact that much of the internet in English and the empirical fact that they are good at translating.

    dingdingdang(10000) 3 days ago [-]

    Indeed. I've thought from the beginning that LLMs should focus specifically on ONE language for this exact reason (i.e. mediocre/bad duplication of data in multiple languages). All other languages than English essentially 'syphon' off capacity/layers/weights that could otherwise have held more genuine data/knowledge. Other languages should not come into the picture afaics - dedicated translation LLMs/existing-solutions can handle this aspect just fine and there's just no salient reason to fold partial-multi-language-capacity in through fuzzy/unorganised training.

    miros_love(10000) 3 days ago [-]

    >European versions of ARC

    But this is an image-like benchmark. Has anyone looked at the article about the EU-ARC, what is the difference? Why can't you measure it on a regular one?

    I glanced through it, didn't find it right away, but judging by their tokenizer, they are learning from scratch. In general, I don't like this approach for the task at hand. For large languages, there are already good models that they don't want to compare with. And for low-resource languages, it is very important to take more languages from this language group, which are not necessarily part of the EU

    whiplash451(10000) 3 days ago [-]

    You might be confusing ARC-AGI and EU-ARC which is a language benchmark [1]

    [1] https://arxiv.org/pdf/2410.08928

    Etheryte(10000) 3 days ago [-]

    Why would they want more languages from outside of the EU when they've clearly stated they only target the 24 official languages of the European Union?

    tannhaeuser(1013) 3 days ago [-]

    I mean, Mistral AI is a Paris-based company, and theirs was considered on par or better than other open weight models such as llama3.1 and qwen2.5, and mistral-24b is currently beating oh-so-great gemma3-27b depending on tasks.

    Also, Stable Diffusion was originally (and still is I believe) developed in Munich.

    It's true though that raising capital and finding investors works wayyy better in the US (kindof needless to say on HN) and so was getting top talent - at least in the past. Don't get me started on energy prices ;) but I don't believe those contribute significantly in the end anyway.

    nickpsecurity(3676) 3 days ago [-]

    You don't think American companies raising hundreds of millions to ten billion for training models contributed to their model performance or market positions?

    I think a pile of money and talent is largely the cause of where they're at.

    jug(10000) 3 days ago [-]

    On this topic, don't miss the quite useful benchmark:

    https://euroeval.com

    anhner(10000) 3 days ago [-]

    ah, yes... Europe, the continent with 10 countries

    one of them with 50k population

    NKosmatos(1818) 3 days ago [-]

    There is also a Greek LLM from 2024.

    Meltemi: A large foundation Language Model for the Greek language

    https://huggingface.co/ilsp/Meltemi-7B-v1.5

    pehtis(10000) 3 days ago [-]

    Meltemi is ok, but it's 'old' and not that good by today's standards. If you need a good Greek local LLM try https://huggingface.co/ilsp/Llama-Krikri-8B-Instruct. Yes, I know it's based on LLama and not a foundation model, but it is still a LOT better than Meltemi.




    (243) Kagi Assistant is now available to all users

    243 points about 7 hours ago by angilr in 10000th position

    blog.kagi.com | Estimated reading time – 8 minutes | comments | anchor

    17 Apr, 2025

    At Kagi, our mission is simple: to humanise the web. We want to deliver a search experience that prioritises human needs, allowing you to explore the web effectively, privately, and without manipulation. We evaluate new technologies not for their acclaim but for their true potential to support our mission.

    Since its launch, Kagi Assistant has been a favorite for many users as it allows access to world top large language models, grounded in Kagi Search, all in one place in one beautiful user interface - and all that for +$15/mo upgrade from our Professional plan that provides unlimited Kagi Search.

    Today, we're excited to announce that Kagi Assistant is now available to all users across all plans, expanding from its previous exclusivity to Ultimate subscribers, as an added value to all Kagi customers, without increasing the price.

    An important note: We are enabling the Assistant for all users in phases, based on regions, starting with USA today. The full rollout for 'Assistant for All' is scheduled to be completed by Sunday, 23:59 UTC.

    Our approach to integrating AI is shaped by these realities and guided by three principles:

    1. AI serves a defined, search-relevant context: Kagi Assistant is a research aid.
    2. AI enhances, it doesn't replace: Kagi Search remains our core offering, functioning independently. Kagi Assistant is an optional tool you can use as needed.
    3. AI should enhance humanity, not diminish it: Our goal is to improve your research process by helping you synthesise information or explore topics grounded in Kagi Search results, not to replace your critical thinking.

    Kagi Assistant embodies these principles, working within the context of Kagi's search results to provide a new way to interact with information. It's built to make research easier while respecting your privacy and AI's limits.

    By making Kagi Assistant available to everyone, we're giving all users the choice to explore this capability as part of their Kagi toolkit - at no additional cost to their subscription. Use it when and how it suits your workflow, knowing it's built with privacy, responsibility, and human-centric values at its core.

    Let's talk about the specifics!

    AI grounded in Kagi search, guided by you

    When you enable web access, the Assistant has access to Kagi Search results. It will also respect your personalised domain rankings and allows the use of Lenses to narrow search scope.

    Or, if you'd prefer to discuss directly with the model, you can also turn off web access. It also supports file uploads, allowing you to provide additional context or information for your queries.

    Custom assistants tailored to your needs

    Create specialized assistants with unique instructions, defining their purpose, context, and web access preferences. Need help with coding, grammar reviews, or diagnosing an issue with your classic VW Bus? Build an assistant for it.

    Pro-tip: assign a custom bang (!name) for instant access via your browser's search bar.

    Refine and redirect with editing

    Conversations don't always go as planned. If a response misses the mark, Kagi Assistant lets you edit prompts, switch models, or adjust settings mid-thread. This ensures you stay in control and can redirect the conversation without starting over.

    Privacy as a foundation

    Your privacy is our priority. Assistant threads are private by default, automatically expire based on your settings, and your interaction data is not used to train AI models. This applies to both Kagi and third-party providers, under strict contractual terms.

    Please see Kagi LLMs privacy for additional information.

    A note on our fair-use policy

    Providing powerful AI tools requires significant resources. To ensure sustainability, we're starting to enforce our fair-use policy.

    Basically our policy states that you can use AI models based on your plan's value. For example, a $25 monthly plan allows up to $25 worth of raw token cost across all models (there is a 20% built-in margin that we reserve for providing searches, development and infrastructure for the service). From our token usage statistics, 95% of users should never hit this limit.

    While most users won't be affected, those exceeding the generous threshold will have the possibility to renew their subscription cycle instantly. Soon, we'll introduce credit top-ups for added flexibility. This approach ensures a fair, user-funded model while maintaining quality service and is a simple way to control usage, compared to arbitrary usage limits found in other services.

    Your favourite models are waiting for you

    Choose from a range of leading LLMs from OpenAI, Anthropic, Google, Mistral, and more. You can switch models mid-thread and explore their performance through our regularly updated open-source LLM benchmark. Choice of models in non-Ultimate plans will be limited compared to our full offering in the Ultimate plan, please see below.

    Access to your favourite LLMs makes Kagi Assistant mould to your requirements and query customisations, so we feature an array of models for you to choose from.

    Model Name Plan
    GPT 4o mini All
    GPT 4.1 mini All
    GPT 4.1 nano All
    Gemini 2.5 Flash All
    Mistral Pixtral All
    Llama 3.3 70B All
    Llama 4 Scout All
    Llama 4 Maverick All
    Nova Lite All
    DeepSeek Chat V3 All
    GPT 4o Ultimate
    o3 mini Ultimate
    o4 mini Ultimate
    GPT 4.1 Ultimate
    ChatGPT 4o Ultimate
    Grok 3 Ultimate
    Grok 3 Mini Ultimate
    Claude 3.5 Haiku Ultimate
    Claude 3.7 Sonnet Ultimate
    Claude 3.7 Sonnet with extended thinking Ultimate
    Claude 3 Opus Ultimate
    Gemini 1.5 Pro Ultimate
    Gemini 2.5 Pro Preview Ultimate
    Mistral Large Ultimate
    Llama 3.1 405B Ultimate
    Qwen QwQ 32B Ultimate
    Nova Pro Ultimate
    DeepSeek R1 Ultimate
    DeepSeek R1 Distill Llama 70B Ultimate

    Explore further

    This is just the beginning for Kagi Assistant. Explore more in our documentation.

    Happy fetching, Team Kagi.

    F.A.Q.

    Q: Does using a less costly model (like DeepSeek) compared to larger ones use fewer credits? A: Yes. The fair use policy calculates usage based on the actual cost charged by the model provider. Therefore, using smaller, less expensive models will allow for significantly more token usage compared to larger models.

    Q: Does Kagi receive discounted rates from AI model providers? A: No, Kagi does not receive discounts. However, we utilize smart caching techniques for the models to reduce operational costs, and these savings are passed on to the user.

    Q: Why did Kagi start enforcing the fair use policy? A: The policy was enforced due to excessive use. For instance, the top 10 users accounted for approximately 14% of the total costs, with some individuals consistently using up to 50 million tokens per week on the most advanced models. Our profit margins are already quite narrow. 95% of users should never hit any usage limits.

    Q: What is the specific usage limit? A: The limit corresponds directly to the monetary value of your Kagi plan, converted into an equivalent token amount. For example, a $25 plan provides $25 worth of token usage. This calculation includes a 20% margin for Kagi to cover search provision, development, and infrastructure costs. Savings achieved through prompt caching and other optimizations are passed on to you.

    Q: Where can I view my token usage? A: Currently, you can monitor your token usage on the Consumption page: https://kagi.com/settings?p=consumption. We plan to display cost and interaction details more prominently soon, potentially on the billing page or directly within the Assistant interface.

    Q: I can not access Assistant! A: We are doing staged rollout beginning with USA, full rollout scheduled by Sunday, 23:59 UTC. This will include other regions and even the trial plan.




    All Comments: [-] | anchor

    blissofbeing(10000) about 6 hours ago [-]

    It would be nice if all models where available on every plan too.

    shinryuu(2958) about 5 hours ago [-]

    Second that. Given that they have their fair use policy it should be in their interest as well I would believe since they have a baked in margin.

    AlotOfReading(3629) about 5 hours ago [-]

    That would eliminate the one differentiating feature on the (presumably) highest margin plan they have.

    Moving to a pay-as-you-go model across all their plans might be interesting, but could equally give the wrong impression to some audiences given that it's a pricing strategy usually reserved for budget brands in the consumer space and tends to scare people off.

    jjmarr(10000) about 5 hours ago [-]

    If you try openrouter you'll see why they have to charge $25/month for the best models. Pay per use and you'll intuitively feel the price.

    Valodim(10000) about 5 hours ago [-]

    They give more for free, and your only thought is 'sure would be nice if they gave even more for free'?

    JumpCrisscross(69) about 4 hours ago [-]

    > would be nice if all models where available on every plan too

    Would be nice if I had a lay-flat intercontinental jet.

    gaiagraphia(10000) about 2 hours ago [-]

    It'd be nice if you could see how much each request actually cost in relation to your plan, and to have some type of easily accessible meter.

    A lot of AI providers operate in black box territory when it comes to limits, which is quite annoying.

    moebrowne(10000) about 1 hour ago [-]

    > We plan to display cost and interaction details more prominently soon, potentially on the billing page or directly within the Assistant interface.

    I too want to see this soon. As a long time user of Ultimate it isn't uncommon for me to use 5M tokens per month and I have no idea if this will be covered by my subscription now.

    viraptor(1797) about 7 hours ago [-]

    If the staff sees this - please stop preventing zoom. Not only is that bad for accessibility, it makes the article less useful for everyone - there's a screenshot included showing off the feature, but it's too small to read on the phone and I can't zoom in.

    scary-size(10000) about 6 hours ago [-]

    I can zoom just fine on mobile Safari.

    catlikesshrimp(10000) about 6 hours ago [-]

    I can zoom in

    Android 14 Firefox 136.0.1 (Build #2016078447), hg-e7956a4db6c5+ GV: 136.0.1-20250310180126 AS: 136.0

    ublock origin enable zoom in all websites

    Edit: I know this is not what you are asking for, but try opening the image in a new tab. Can you zoom in there?

    https://kagifeedback.org/assets/files/2025-04-17/1744906741-...

    dean2432(10000) about 6 hours ago [-]

    This has been bugging me as well.

    Hasnep(10000) about 5 hours ago [-]

    Firefox on Android has an accessibility setting called 'Zoom on all web sites' that gets around this. Firefox's reader mode would help with this as well.

    It's a shame we need these workarounds instead of all websites being accessible by default :/

    GrayShade(3600) about 5 hours ago [-]

    You can open it in a new tab and zoom there.

    jeffhuys(10000) about 4 hours ago [-]

    What browser prevents this actually? None of the browsers (even mobile) I just quickly tested just... worked? No extensions.

    C4stor(10000) about 5 hours ago [-]

    The 'fair use' part takes a lot of place in this article.

    It talks a lot about what happens if you use more tokens than what you're allowed, but curiously doesn't pip a word about what happens if you use less - for example maybe with a partial rebate on your next billing cycle ?

    I think 'fair' should mean 'fair for all parties involved', currently it's rather a 'we don't want to incur any risk' policy, since I don't see how it's fair for my end of the contract. I'd rather pay for my actual usage at any other provider than pay for min(actual usage, 25$) at Kagi.

    jen729w(10000) about 5 hours ago [-]

    As an existing happy subscriber to Kagi, this statement is illogical.

    I currently pay for x. Soon I'll get x + y for the same money.

    That's better.

    Phenomenit(3314) about 5 hours ago [-]

    Yeah I concur.

    As an early adopter I first got forced off my grandfather plan to the regular one(at least I got a T-shirt). Now I have a limited number of searches that I have to keep track of and this has made me only use Kagi if necessary. This has dropped my number of searches significantly but at the end of the year I'm still being charged for renewing my plan even though I haven't used a quarter of my allotted searches.

    I don't care about LLMs so this brings nothing of value to me. Give me an email account or some backup storage and open source office suite and I would be willing to pay and pay more.

    I'm seriously considering not re-newing my subscription for the first time in ages.

    mediumsmart(10000) about 5 hours ago [-]

    That is a fair point. Considering the alternatives and realities Kagi is way too cheap for the life improvement it provides.

    maronato(10000) about 4 hours ago [-]

    It's the exact opposite. They are incurring a huge amount of risk with this.

    6 hours ago most users didn't have access to this feature at all. Now we have $4-8 of raw token credits a month to use on a well-built feature.

    I'm paying $9 a month with the annual subscription, and it was worth it just for Search. Now they're giving me $17 worth of value for the same price.

    Their margins must be razor thin, and they're only able to offer this much value because they're counting on most people not using all credits. If everyone did, or if they gave rebates, they'd go out of business.

    zuzulo(10000) about 4 hours ago [-]

    Is it fair enough to ask your favorite restaurant to lower the bill because you didn't eat the two last franch fries ?

    nirvdrum(2803) about 3 hours ago [-]

    Huh? The title of the blog post is 'Kagi Assistant is now available to all users!'. Their users are people paying for what up until now was just their search service. They're now rolling in Assistant as a value-add. Your subscription price didn't increase. You're strictly getting more for what you were already paying. If you don't use it all, you're no worse off than you were yesterday.

    If you want metered billing, there's no shortage of AI services that offer that option. Kagi even offers one by way of the FastGPT. You can also pay to use their search API if you don't think the subscription is worthwhile. You can cobble something together with Open WebUI pretty easily.

    I have Kagi Family plan for my household. I've been paying for the Ultimate upgrade for my account in order to access Assistant, but given how infrequently others in my family would use it, it never made sense to upgrade them. Still, it would have been convenient if they could occasionally access Assistant. And now they can. And my bill didn't increase. And they're being incredibly transparent about what the limits are and why they're there. I'm a really happy customer today.

    m1keil(10000) about 5 hours ago [-]

    Anyone used both Kagi assistant and perplexity and can share how was the experience?

    greatgib(3476) about 4 hours ago [-]

    I don't use the Kagi assistant yet, just the kind of AI response in search results. But regarding perplexity, I'm a little bit disappointed.

    I started to use Perplexity like 1 or 1.5 years ago when it was really good in term of efficiency to find good results with an efficient interface, compared to chatgpt and co. But nowadays I find the assistant response to be not that good at all, with a lot of the links provided or the suggested follow up questions on the same quality as Google SEO top results or ads.

    Despite having the paid plan or Perplexity, most of the time I try a request there and then still go to chatgpt or mistral to ask the question again.

    For Kagi, when I use the in search ai response, it is mostly good directly.

    loehnsberg(10000) about 4 hours ago [-]

    I use both but cancelled my Perplexity subscription.

    Kagi is the better version of Google search, especially if you learn how to use lenses, bangs, and all these features. Kagi Assistant is great if you're happy with basic no-frills chat, i.e. no usable voice input, no image gen, no canvas.

    Perplexity is not bad, but somewhat stuck in the middle between ChatGPT/Gemini and search. They provide sources for search results which are somewhat more spot-on than what I've seen elsewhere. For example it could find EV chargers with restaurants for a trip I made along a route, which ChatGPT, Gemini, Kagi Assist failed greatly).

    I found refining searches with Perplexity terse and it kept forgetting context once you started to reply. They have an AI generated news feed which lured me into more doom scrolling.

    Also, be aware that Perplexity free-tier may collect data about you, which Kagi does not.

    Tldr; Kagi is a superior search engine worth paying for. Perplexity seems good at queries that require context but quite expensive.

    spooneybarger(3391) about 4 hours ago [-]

    I use both. I only pay for Kagi because I have many models I can use and I can set up different contexts to use them in.

    I rarely use Kagi search anymore and instead search via assistant. Both it and perplexity give me much better results than I get from a traditional search engine.

    I've never been great at getting what I want from search engines. With assistant and perplexity, I type plain English with context and get what I am looking for a large chunk of the time. That's a godsend to me.

    I've found things that assistant does that make it worth paying for. I often use perplexity but what I use it for (deep research) isn't valuable enough at the time to pay for.

    I like the perplexity iOS app a lot and use it almost exclusively on my phone which isn't enough use to necessitate needing a subscription.

    Zambyte(10000) about 1 hour ago [-]

    Just typed this up elsewhere in the thread: https://news.ycombinator.com/item?id=43726582

    colonial(10000) about 5 hours ago [-]

    > A note on our fair-use policy

    > Basically our policy states that you can use AI models based on your plan's value.

    Although I likely won't use Assistant, stuff like this is why I love Kagi. My relationship with them as a customer feels refreshingly transparent; I can't think of any other consumer SaaS provider that automatically answers my reflexive 'how does this make money?' question.

    (Compare, say, Discord. It's best in class, but eternally unprofitable - which makes me wary that it might fold or go to hell at the drop of a hat.)

    weird-eye-issue(10000) about 4 hours ago [-]

    I've paid for a monthly subscription with Discord for years

    They also have ads in the app and they have other monetization features...

    fhd2(10000) about 5 hours ago [-]

    I wonder why the rollout is specifically over the weekend. I'd personally do something like that Monday to Wednesday rather than Friday to Sunday. It seems like the kind of thing that needs monitoring and quick reactions - can easily get expensive if something goes wrong.

    Maxion(10000) about 5 hours ago [-]

    Possible that they see lower usage on weekends.

    zuzulo(10000) about 5 hours ago [-]

    Lower WE usage. 'Let's see if it crash'

    deanc(10000) about 4 hours ago [-]

    On the other hand a huge number of countries have the whole Easter holiday off. Plenty of time to read these articles and sign up to stuff.

    haroldship(10000) about 3 hours ago [-]

    How do I get this to work? When I try to access the Assistant I just get the help page: https://help.kagi.com/kagi/ai/assistant.html

    j01(10000) about 3 hours ago [-]

    You have to login first.

    For some reason instead of redirecting you to login kagi.com/assistant redirects you to the wiki rather than a login page when you're not logged in.

    jacek(10000) about 3 hours ago [-]

    It's right there in the article:

    > An important note: We are enabling the Assistant for all users in phases, based on regions, starting with USA today. The full rollout for 'Assistant for All' is scheduled to be completed by Sunday, 23:59 UTC.

    louthy(3001) about 3 hours ago [-]

    Are you outside the US?

    Q: I can not access Assistant!

    A: We are doing staged rollout beginning with USA, full rollout scheduled by Sunday, 23:59 UTC. This will include other regions and even the trial plan.

    baobabKoodaa(2534) about 3 hours ago [-]

    I don't like how this was rolled out. I'm currently paying for 'Unlimited Kagi Assistant' and the Kagi website STILL advertises 'Unlimited Kagi Assistant'. And they stealthily rolled in limits? I pay the same amount, but it's no longer unlimited, and I only know about this because I happened to notice it on HN. Otherwise I would only know after hitting a limit.

    louthy(3001) about 3 hours ago [-]

    Fair-use limitations were always there. It sounds like they weren't actively enforcing it, but now they are because of some problem users. I don't think anything has changed for you unless you're one of the users this refers to:

    Q: Why did Kagi start enforcing the fair use policy?

    A: The policy was enforced due to excessive use. For instance, the top 10 users accounted for approximately 14% of the total costs, with some individuals consistently using up to 50 million tokens per week on the most advanced models. Our profit margins are already quite narrow. 95% of users should never hit any usage limits.

    mppm(10000) about 1 hour ago [-]

    Maybe they've changed it in the past hour, but as I write this comment the 25$ plan is called 'Ultimate' and promises unlimited search, but not unlimited assistant.

    I agree about the need for appropriate wording and advertising, but other than that, the new limits seem entirely reasonable and in line with what other aggregators like Abacus and Poe are doing. The paid plans of the major AI labs themselves always have usage limits too. It simply can't work any other way if you include costly models in the mix.

    greatgib(3476) about 4 hours ago [-]

    Obviously I'm happy to benefit from being able to use most model 'for free' in my paid non ultimate account.

    But I'm concerned that this will not rot the business model like that kind of thing happen for other services.

    I would have preferred that the full of my subscription cost goes to the core feature of developing the search engine and directly the related feature. And as of today, I pay a separate premium if I'm interested by the AI assistant.

    Now, with it being in all subscriptions, and knowing that anyway they can only work by paying the token price per request to all AI providers, it means less of my money going to the search index improvement, and what I'm more worried is that a forced increased of the subscription price in the coming years.

    Something like, as you know, our costs are high, so we need to raise the pricing to stay sustainable.

    Even if not the best reference, this remind me of Netflix saying look we are adding 'videogames' (that no one wants) to your subscription for free, but now we will have to raise our prices, because you know, inflation and all of that ...

    bayindirh(10000) about 4 hours ago [-]

    From my experience, Kagi always prefer to 'trickle down' features to lower tiers . First they removed search limits from some plans without increasing the price. Now they're allowing to use the AI assistant, if you want.

    The gist is, when you don't use the AI assistant, you still pay the base price, and that money goes to R&D, since your subscription money doesn't go to AI providers in the first place.

    For example, I have no interest in AI assistant, and I won't use it. As a result, my support will Kagi won't change.

    TekMol(1596) about 4 hours ago [-]

    'Privacy by default'

    I don't know. To me, requiring me to give them my email and then having all my searches associated with that email is the opposite of privacy to me.

    Yes, Google, Bing, Perplexity and Co could do fingerprinting and try fuzzy matching to cluster my searches. But at least that would be fuzzy and against the law in many places. While with Kagi, every search of mine would be clearly labeled as coming from me.

    dharmab(10000) about 4 hours ago [-]

    There is a feature where you can search anonymously, using IETF's Privacy Pass standard: https://help.kagi.com/kagi/privacy/privacy-pass.html

    fancy_pantser(10000) about 4 hours ago [-]

    Maybe their privacy pass is useful then?

    https://help.kagi.com/kagi/privacy/privacy-pass.html

    flexagoon(2659) about 3 hours ago [-]

    How is requiring an email 'the opposite of privacy' when making a one-time disposable email takes like 5 seconds?

    qwertox(10000) about 4 hours ago [-]

    I was given a free month of Kagi to test, and it had so many rough edges that during the last days of of the trial I was already using Google again.

    Notable issues for me:

    - maps (from Mapbox) are really bad. Sluggish performance and lack of information

    - barely any info boxes

    - no translation feature ('gründonnerstag englisch') gives me links to leo.org (which was a cool site in the 00s) and to other sites, but Google gives me a translation box with the result

    - no timezone calculations: '10 am PT' in Kagi: '= 10 Pt am (metric petaton attometers)' in Google: '10:00 Freitag Pacific Time (PT) entspricht 19:00 Freitag in ...'

    - no search history, which is sometimes really useful to have

    Other than that, the search results are really good.

    bigstrat2003(10000) about 4 hours ago [-]

    > Other than that, the search results are really good.

    I'm confused why anything else would matter. For example, I'll readily admit that Kagi maps sucks compared to Google maps. But I just use Google for map stuff, and use Kagi for searching. It doesn't seem like a big deal to me that it's a tool which does one thing and does it well.

    msdz(10000) about 4 hours ago [-]

    While I'm aware this is a case of 'you're holding it wrong' – !translate <phrase> should do the trick. And that's not an excuse for not having better detection for when an info box should exist, because they do have them, especially for, but not limited to the WolframAlpha integration stuff. (For example, a friend and fellow user was awed when searching 'internet speed test' and saw it integrated, no idea if Google has that too though).

    Other than that, make sure your region/locale is set correctly (I'm not getting the metric petaton, for example), and for everything else, they have an excellent feedback forum for suggestions/bug reports.

    hobofan(10000) about 2 hours ago [-]

    I think those are all perfectly valid points. As a Kagi early adopter they don't weigh as heavily that they ultimately make a difference, but it also feels like most things that are not AI related are not receiving that much attention nowadays, which is a bit disheartening to see.

    tomjen3(10000) about 2 hours ago [-]

    When did you try it?

    Because they have been working on all those issues. They even have their own translation now.

    i_love_retros(10000) 10 minutes ago [-]

    Kagi not saving search history is a big selling point for me. I don't want yet another tech company keeping tabs on me.

    And I wouldn't care if they dropped maps, I pay for kagi for search and the assistant.

    mocmoc(10000) about 2 hours ago [-]

    Good idea 1 year later. Perplexity is on top of the game

    Zambyte(10000) about 1 hour ago [-]

    I paid for both for close to half a year to see which one I wanted to keep. I decided to drop Perplexity in favor of Kagi, because Perplexity felt like it was trying to position / portray itself as a supernatural-esque Source of Truth, where as Kagi does a better job at letting you use the tools how you want.

    Perplexity is also much less flexible than Kagi Assistant. The most customization you can do on Perplexity is answer a few questions about yourself, and hope that the info you add is injected into relevant prompts (spoiler alert: hope isn't very powerful here). With Kagi, I created a lens about a year ago to filter search results down to sources I find useful relating to GNU Guix, which I use for my machines. When Kagi Assistant rolled out (I pay for Ultimate, so I have had this a while) I made an Assistant that only pulls search results from my GNU Guix lens. The practical comparison here between Kagi and Perplexity is that I can go to Kagi and search '!guixc How do I install nginx?' (or simply ask the question in the Assistant interface; the bang will bring me there from search) and I will get back the answer I want. I added info that I use GNU Guix on my Perplexity profile, and there is not a chance that my question would have been answered within the context of GNU Guix as I wanted.

    Perplexity is cool, but I found Kagi to simply be more useful.

    bwb(2547) about 1 hour ago [-]

    how do you actually get it? none of the links work and I am a paying user...

    just takes me to documentation.

    moebrowne(10000) about 1 hour ago [-]

    > We are enabling the Assistant for all users in phases, based on regions, starting with USA today. The full rollout for 'Assistant for All' is scheduled to be completed by Sunday, 23:59 UTC.





    Historical Discussions: HDR‐Infused Emoji (April 17, 2025: 239 points)

    (239) HDR‐Infused Emoji

    239 points about 21 hours ago by tabletcorry in 10000th position

    sharpletters.net | Estimated reading time – 1 minutes | comments | anchor

    Need a little more pop to your Slack emoji? Want to really stand out when you react with your favorite image?

    Turns out you can add HDR emoji to Slack, and they will be rendered in eye-searing brightness, at least on hardware that supports it. Works great in Chrome and Slack, and not at all on Android devices.

    Examples:#

    Note: These examples will work best when posted to Slack. Support in browsers and on devices varies, YMMV. Known to work in Chrome and Slack (mostly), and doesn't work in Safari (mostly).

    Script#

    brew install imagemagick
    
    # Adjust the Multiply value up or down to preserve color as opposed to brightness
    magick input.png \
      -define quantum:format=floating-point \
      -colorspace RGB \
      -auto-gamma \
      -evaluate Multiply 1.5 \
      -evaluate Pow 0.9 \
      -colorspace sRGB \
      -depth 16 \
      -profile 2020_profile.icc \
      output.png
    
    copy

    You will need the 2020_profile.icc downloaded to your working directory.




    All Comments: [-] | anchor

    muglug(2956) about 19 hours ago [-]

    Can confirm that this works, and can also confirm that people who post glaring HDR images to Slack are frequently peer-pressured to remove them shortly thereafter by everyone in the channel.

    tasuki(10000) about 6 hours ago [-]

    Do y'all have HDR screens? Apparently I don't! And judging by this thread, I'm not missing much?

    jchw(10000) about 20 hours ago [-]

    Looks like this works on Chrome for Android, but Firefox doesn't seem to support HDR at all.

    https://bugzil.la/hdr

    Maybe some day.

    lxgr(10000) about 20 hours ago [-]

    Neither does Safari on macOS – which honestly seems like the correct behavior, given that this will inevitably be used by websites in user-hostile ways.

    new_user_final(10000) about 19 hours ago [-]

    So many people push for more browser engines yet Firefox can't implement HDR in 6 years.

    matsemann(2434) about 19 hours ago [-]

    Feels like either Chrome or my android phone is cheating, because if I cover the hdr image with my finger and switch between Firefox and Chrome, the page background in Chrome is noticeable more grey than the one in Firefox.

    Groxx(10000) about 20 hours ago [-]

    This might be the best use of HDR I've ever seen.

    And will continue to see for quite some time when my eyes are closed.

    pier25(1375) about 18 hours ago [-]

    yes it's blinding on my MBP lol

    BoorishBears(10000) about 18 hours ago [-]

    > These examples will work best when posted to Slack.

    I should not have been clued into this power.

    joshuaturner(10000) about 20 hours ago [-]

    Time to make my Slack profile pic really stand out

    Hamuko(3097) about 17 hours ago [-]

    Oh god it fucking works. It's brilliant in every sense of the word.

    tuetuopay(10000) about 16 hours ago [-]

    oh god. off my evening goes tweaking the multiply value for proper effect.

    ionwake(10000) about 19 hours ago [-]

    Sorry for the noob question but I think finally someone in this thread can answer this for me. Sometimes when I see a youtube short video it looks like its HDR is whacked up by like 500% as per the image in this page, but Im confused how this could be done. Is video processing on the video before it is uploaded somehow giving it some sort of encoding which chrome just wacks up? Or is it the hardware doing it and encoding it a certain way?

    I am not talking about a slight brightness increase, I am talking Ill be scrolling youtube and suddenly this video is like a portal into another dimension its so bright.

    Can anyone explain how its done?

    harrall(10000) about 19 hours ago [-]

    Screens can't often do full brightness on the whole screen so if you come across a video or image that is supposed to have a higher contrast ratio, the system will darken everything and then brighten up the pixels that are supposed to be brighter.

    Yes, there are formats that able to store a higher contrast ratio so that's why it doesn't happen on non-HDR content but the actual brightening of a portal on your screen isn't because of the format but because of your hardware (and software) choosing to interpret the format that way.

    For more a practical example, if you had an 8-bit HDR image, 255 on the red channel (after inputting this number through a math function like HLG[1] to 'extract' a brightness number) might mean 'make this pixel really bright red' whereas 255 on a SDR format would mean 'just regular red.' However, each red channel is still a number between 0 and 255 on both formats but your hardware decided to make it brighter on the HDR format.

    (Although in reality, HDR formats are often 10-bit or higher because 256 values is not enough range to store both color and brightness so you would see banding[2]. Also, I have been using RGB for my example but you can store color/brightness number many other ways, such as with chroma subsampling[3], especially when you realize human eyes are more sensitive to some colors more than others so you could 'devote fewer bits' to some colors.)

    [1] https://en.wikipedia.org/wiki/Hybrid_log%E2%80%93gamma

    [2] https://en.wikipedia.org/wiki/Colour_banding

    [3] https://en.wikipedia.org/wiki/Chroma_subsampling

    detaro(695) about 19 hours ago [-]

    The video is marked as containing a different color space with a higher brightness/color range. That could either be because the initial camera recorded it that way (e.g. iPhones can do that) or because someone took a 'normal' video and edited it.

    kllrnohj(10000) about 19 hours ago [-]

    There's many factors in play from what your SDR white point is at, how your OS handles HDR video, what the content contains, and finally what your brain is doing.

    HDR10(+) & Dolby Vision, for example, encode content at absolute luminance, so they are basically completely trash formats since that's an insane thing to expect (the spec for authoring content in this format literally just goes 'lol idk do X if you think it's going to be seen in a movie theater of Y for TV and hope'). Sadly, they are also quite common. Mobile phones (both Android & iOS) are instead pushing HLG, which is better. Although then hilariously MacOS's handling of HLG was atrocious until the latest update which fixed it but only if the video contains a magic flag that iPhone sets, but isn't standard so nobody else sets it (the 'avme' tag https://developer.apple.com/documentation/technotes/tn3145-h... )

    There's then also just how your eyes & brain react. When HDR shows up and suddenly the white background of a page looks like a dim gray? That's 100% a perceptual illusion. The actual light being emitted didn't change, just your perception of it did. This is a very hard problem to deal with, and it's one that so far the HDR industry as a whole has basically just ignored. But it's why there's a push to artificially limit the HDR range in mixed conditions, eg https://github.com/w3c/csswg-drafts/issues/9074

    recursive(10000) about 19 hours ago [-]

    I don't think I understand HDR. It just looks brighter and more contrast. I can just do that with normal manipulations. What's this all about?

    Edit: Maybe my hardware doesn't support it. I'm using an LG monitor with Windows. There's also a good chance I've never actually seen anything in HDR.

    detaro(695) about 19 hours ago [-]

    > I can just do that with normal manipulations

    Then you are probably not viewing this with HDR-capable hardware and software. Otherwise it'd go past what you can just do with normal manipulation on an sRGB image.

    dangoodmanUT(10000) about 19 hours ago [-]

    HDR is terrible

    The fact that you can't turn it off system wide shows the macOS leadership is asleep at the wheel

    Night_Thastus(10000) about 19 hours ago [-]

    HDR is terribly implemented, in most cases. (Especially Windows)

    macOS handles it about the best of the bunch.

    What I hate is on Windows, you need to basically explicitly set the program, the OS, and the monitor into an 'HDR mode'. Then, once you're done, you need to un-set it or the colors and brightness will be screwed up.

    That is tedious AF. I refuse to use it until it doesn't require constantly toggling crap on and off.

    LoganDark(10000) about 19 hours ago [-]

    > The fact that you can't turn it off system wide shows the macOS leadership is asleep at the wheel

    You totally can, at least on Apple's XDR displays.

    Just go to System Settings -> Displays -> Preset and change it from 'Apple XDR Display (P3-1600 nits)' (or whatever) to 'Internet & Web (sRGB)'. You lose the ability to change screen brightness (I assume because you're locked to reference brightness), but HDR is fully off.

    pier25(1375) about 18 hours ago [-]

    I love HDR for movies/shows on OLED but other than that I agree. It really sucks you can't disable HDR in apps like Netflix etc. It does look terrible on non OLED TVs. In Chrome you can force a specific color profile in the settings. I believe sRGB shouldn't allow HDR content.

    Personally I think the biggest benefit of HDR is not even those super bright annoying colors but 10-12 bit colors and the fact that we can finally have dark content. If you look at movies from 10-20 years ago everything is so damn bright.

    tshaddox(10000) about 18 hours ago [-]

    That strikes me as an odd opinion. Surely the colorspaces and display technologies that predate HDR had as much dynamic range as they could reasonably squeeze out of the technology at the time. Is it the brightness specifically that bugs you? I could understand that, although brightness is not directly related to HDR (in the same way that loudness in digital audio is not directly related to bit depth).

    Of course I do agree that these things should be configurable. And on my MacBook Pro, I can set the built-in display to sRGB. Is that option not available on your particular Mac and display?

    bigstrat2003(10000) about 18 hours ago [-]

    Agreed. I've used it on my PS4, and all that it accomplished was an annoying screen blank and restart every time I started a game which used HDR. It didn't actually make anything look better. I turned it off after some experimentation and I don't plan to ever mess with it again with how underwhelming it was.

    MasterScrat(2721) about 18 hours ago [-]

    More HDR shenanigans from some time ago: https://news.ycombinator.com/item?id=36389285

    Demo: https://notes.dt.in.th/HDRQRCode

    Interestingly that one worked on iPhone, while the new emojis one doesn't

    WhyNotHugo(2949) about 18 hours ago [-]

    Nice! Using HDR to improve contraste of a QR code is a really neat idea.

    basisword(1073) about 17 hours ago [-]

    This worked well on my iPhone but my M3 MacBook Pro doesn't seem to render the HDR version of the image in Safari. Is that expected? Pretty sure the Photos app works with HDR.

    sgt(3284) about 3 hours ago [-]

    Yes, that is expected. I think it is intentional as it can be pretty disturbing.

    markrages(10000) about 7 hours ago [-]

    The Loudness War has come to Slack.

    https://en.wikipedia.org/wiki/Loudness_war

    globular-toast(10000) about 3 hours ago [-]

    It seems this is the sad inevitability whenever a high dynamic range format doesn't include loudness/brightness normalisation in the standard. We just can't help ourselves. If I understand correctly, things like Dolby Vision do include some kind of normalisation.

    donohoe(128) about 19 hours ago [-]

    I used (abused) HDR in an editorial project last year. We were working with an amazing illustrator doing a take on series of stories exploring the intersection of faith, storytelling, and technology.

    As the early versions of the images emerged we thought we could used HDR to provide more or a aura to some elements. We tried to make it subtle and not overwhelm.

    This example is my favorite:

    https://restofworld.org/2024/divinity-altered-reality-muslim...

    I think it worked well - and this technique would have been useful. We tried something similar but could not get it to work.

    Our method was to use a stretched HDR video in the background.

    Here are the steps I used:

    In Photoshop create white image to proportions required. Save as MP4:

      File > Export > Render Video
    
    Save as 'sample.mp4'

    With the MP4, generate a HDR version in WEBM:

      ffmpeg -i sample.mp4 -pix_fmt yuv420p10le -color_primaries 9 -color_trc 16 -colorspace 9 -color_range 1 -profile:v 2 -vcodec libvpx-vp9 sample.webm
    
    With the plain MP4, generate the HDR version:

      ffmpeg -i sample.mp4 -pix_fmt yuv420p10le -color_primaries 9 -color_trc 16 -colorspace 9 -color_range 1 -profile:v high10 -vcodec libx264 sample.mp4
    timciep(10000) about 19 hours ago [-]

    That looks amazing!

    shahahmed(10000) about 19 hours ago [-]

    these look so tasteful and well done

    BolexNOLA(10000) about 19 hours ago [-]

    Big fan of the final result. Very striking

    tobr(421) about 18 hours ago [-]

    Remember seeing this when it was published. Excellent work, great use of HDR.

    mzs(590) about 18 hours ago [-]

    Here's how RoW did it:

        .religion-atf__nav-chapter--current .religion-atf__nav-chapter__book {
            box-shadow: -4px -4px 50px 0 #fff,4px 4px 50px 0 #fff
        }
    InsideOutSanta(10000) about 18 hours ago [-]

    Wow, this is super smart, and the effect is really compelling and novel.

    razkarcy(10000) about 16 hours ago [-]

    This is a beautiful implementation all-around. It captures a similar 'wow-factor' that gilded pages in physical books provide. If this is the future of digital media I'm excited!

    jjcm(1979) about 14 hours ago [-]

    Incredibly well done. FWIW, the video hack is no longer needed. Originally that was required due to browsers only having hdr support with video, but recently support for PNGs were added as well. You can just use an all-white png with the rec2020 color space set.

    ValveFan6969(10000) about 14 hours ago [-]

    This is a lot of technical mumbo jumbo for a simple thing like brightness. HDR is a gimmick like 3D TVs. The best image quality is not the one with the most colors, which is entirely pointless, but instead a simple image, with no fancy features that only serve to distract the eye.

    Like in the famous case of the Apple logo in the 1990s. Steve Jobs, when asked why he uses a black and white Apple logo instead of a color one, said - 'color will only distract the eye from what's important'.

    ben0x539(10000) about 13 hours ago [-]

    What devices is this meant to work on? On my laptop I'm not seeing anything out of the ordinary.

    HatchedLake721(3368) about 11 hours ago [-]

    Have you done any magic with the scroll behavior?

    Usually the first rule of web development is to not touch scrolling, however, I'm on the iPhone and it's seems to be faster than native scroll, and surprisingly it feels very good!

    baobabKoodaa(2534) about 2 hours ago [-]

    Hey, could you please post a before/after HDR of one of the images?

    dmd(2344) about 19 hours ago [-]

    To forestall confusion: If the smiley face on the right is not much much brighter than the page background (which is #ffffff), then your hardware does not support this and you are not seeing what others are seeing.

    ZeWaka(3330) about 19 hours ago [-]

    To forestall more confusion: If your system is set to dark mode, the page background is not #fff, and is instead #1d1e20.

    zimpenfish(10000) about 17 hours ago [-]

    > If the smiley face on the right is not much much brighter than the page background [...] then your hardware does not support this

    Or you're using Safari because my hardware absolutely does support this (tested in Chrome and I am thankful that Safari does not support it because good grief.)

    nine_k(3565) about 15 hours ago [-]

    Works in mobile Chrome, not in mobile Firefox; increases the overall screen brightness a bit to add the dynamic range. Shines!





    Historical Discussions: Albert Einstein's theory of relativity in words of four letters or less (1999) (April 14, 2025: 239 points)
    Short Words to Explain Relativity (March 10, 2025: 2 points)
    Theory of Relativity Explained in Words of Four Letters or Less (June 25, 2023: 2 points)
    Albert Einstein's Theory of Relativity in Words of Four Letters or Less (February 03, 2023: 1 points)
    Albert Einstein's Theory of Relativity in Words of Four Letters or Less (November 11, 2020: 1 points)

    (239) Albert Einstein's theory of relativity in words of four letters or less (1999)

    239 points 4 days ago by signa11 in 14th position

    www.muppetlabs.com | Estimated reading time – 24 minutes | comments | anchor

    Albert Einstein's Theory of Relativity

    In Words of Four Letters or Less


    [ 0 ]

    So, have a seat. Put your feet up. This may take some time. Can I get you some tea? Earl Grey? You got it.

    Okay. How do I want to do this? He did so much. It's hard to just dive in. You know? You pick a spot to go from, but soon you have to back up and and go over this or that item, and you get done with that only to see that you have to back up some more. So if you feel like I'm off to the side of the tale half the time, well, this is why. Just bear with me, and we'll get to the end in good time. Okay?

    Okay. Let's see....

    [ I ]

    Say you woke up one day and your bed was gone. Your room, too. Gone. It's all gone. You wake up in an inky void. Not even a star. Okay, yes, it's a dumb idea, but just go with it. Now say you want to know if you move or not. Are you held fast in one spot? Or do you, say, list off to the left some? What I want to ask you is: Can you find out? Hell no. You can see that, sure. You don't need me to tell you. To move, you have to move to or away from ... well, from what? You'd have to say that you don't even get to use a word like 'move' when you are the only body in that void. Sure. Okay.

    Now, let's add the bed back. Your bed is with you in the void. But not for long -- it goes away from you. You don't have any way to get it back, so you just let it go. But so now we have a body in the void with you. So does the bed move, or do you move? Or both? Well, you can see as well as I that it can go any way you like. Flip a coin. Who's to say? It's best to just say that you move away from the bed, and that the bed goes away from you. No one can say who's held fast and who isn't.

    Now, if I took the bed back but gave you the sun -- just you and the sun in the void, now -- I'll bet you'd say that the sun is so big, next to you, that odds are you move and not the sun. It's easy to move a body like ours, and not so easy to kick a sun to and fro. But that isn't the way to see it. Just like with the bed, no one can say who's held fast.

    In a word, you can't find any one true 'at rest'. Izzy was the one who told us that. Izzy said that you can't tell if you move or are at rest at any time. You can say that you go and all else is at rest, or you can say that you are at rest and all else goes. It all adds up the same both ways. So we all knew that much from way back when.

    Aha, but now wait! The sun puts off rays! So: why not look at how fast the rays go past you? From that you'd see how fast you move, yes? For you see, rays move just the same if what puts them off is held fast or not. (Make a note of that, now.) Izzy had no way to know that, back then, but it's true. Rays all move the same. We call how fast that is: c. So, you can see how fast the rays go by you, and how far off that is from c will tell you how fast you move! Hell, you don't even need the sun for that. You can just have a lamp with you -- the one by your bed that you use to read by. You can have that lamp in your hand, and see how fast the rays go by you when you turn it on. The lamp will move with you, but the rays will move at c. You will see the rays move a bit more or less than c, and that will be how fast you move. An open-and-shut case, yes?

    Well, and so we went to test this idea out. Hey, you don't need to be in a void to do this test. We move all the time, even as we sit here. We spin, in fact. So they shot some rays off and took note of how fast they went east, and how fast they went west, and so on. Well, what do you know? The rays went just as fast both ways. All ways, in fact. They all went at c, just the same. Not an iota more or less.

    To say that we were less than glad to find that out is to be kind. It blew the mind, is more like it. 'What is up with that?' we said. And here is when old Al came in.

    [ II ]

    Old Al, he came out the blue and said, 'Not only do rays move at c if what puts them out is held fast or not: they move at c even if you are held fast or not.' Now that may not look like such a big deal on the face of it, but hold on. What this says is that you can move as fast or as slow as you want, and rays will go by you at c all the time. You can have a pal run past you and when you both look at a ray go by at the same time, you will both see the same ray go by at c! That is a bit wild, no? You, back in that void, you just can not say if you move or not -- with the lamp or no. Not that you can't tell: it can't be said. It's moot!

    But for that to be true, then time also has to get in on the act. For you and your pal to see the same ray go by at the same clip, her idea of time must be off from your idea of time!

    I can hear you say, 'No way. That can't be!' But I tell you it is. Old Al said so. He said, here, I'll show you. Get a load of this. We have Bert and Dana. Take a bus, and put Bert on the bus. The bus goes down the road. Dana, she sits here, on the side of the road. He's in the bus and she's on her ass. And now take a rock off of the moon, and let it fall at them. It hits the air and cuts in two. The two bits burn, and then land just as Bert and Dana are side by side. One hits the dirt up the road a ways, and one hits down the road a ways. Dana sees each rock at the same time, but Bert sees one rock and then sees the next rock. Now: if Bert and Dana both see Dana as the one who is 'at rest', they both will say that the two bits came down at the same time. Dana will say, 'I am 'at rest', and I saw them both land at the same time, so they both did, in fact, land at the same time.' And Bert will say, 'I move away from the rock down the road, so when I add that fact in, I can see that if I were 'at rest', I'd have seen both land at the same time. So it must be the case that they did land at the same time.' Okay, but what if Bert and Dana now see Bert as the one who is 'at rest'? Eh? You get to pick who is 'at rest' and who isn't, no? So make Bert be 'at rest'. Now Bert will say, 'I am 'at rest', so the one up the road beat the one down the road, on the way to the dirt, just the way I saw it.' And Dana will say, 'I saw them land at the same time, but I move away from the rock up the road, so when I add that fact in, I can see that the rock up the road must have beat the one down the road.'

    So you see, when you give up on the idea of a one true 'at rest', then you have to give up on the idea of a one true time as well! And even that is not the end of it. If you lose your one true way to see time, then you also lose your one true way to see size and your one true way to see mass. You can't talk of any of that, if you don't also say what it is you call 'at rest'. If you don't, then Bert or Dana can pick an 'at rest' that isn't the same as what you used, and then what they will get for time and size and mass won't be the same.

    What a snag, eh? I hope you can see how that gave some of them the fits, back when old Al told us that one. But even so, that ain't the half of it. I mean, most of us know that if old Al had got hit by a bus at age ten, we'd have got this far on our own in good time. No, it was what came next that was the real slap in the face.

    [ III ]

    Now, I've said a lot here on how to see (or how not to see) how fast you 'move'. What I need to tell you now is just what I mean by that word 'move'. When I say 'move', I also mean that you don't slow down or get sped up at any time, and that you don't veer to one side at all. When you move, you just keep all that the same as you go. How we say it is, you don't have any 'pull'. Why do I make a big deal out of that, you ask? Okay, let me tell you.

    Cast your mind back to Ari, from way way back when. He's the one who said that if you are at rest, you tend to stay at rest, and if you move, you tend to come to rest. He was off, you know, as he had no way to know that it was the air that has you come to rest. We had to wait a long time for Izzy to come by and say, 'No, Ari: if you move, you tend to just go on and on. To come to rest, you need to have a pull.' The air will give you a pull, a pull that has you come to rest. Then we also have the big pull, the one that says what is down and what is up, the one that has all of us in its grip. Izzy saw that this pull was the same pull that has the moon in its grip, too. I said that a pull can be a veer, yes? That is what the pull on the moon does. The moon has to veer all the time for it to stay with us. Were it not for that pull, it'd just go off in a line -- no veer -- and we'd just sit here and wave bye bye. Same with us and the sun. We veer, each hour, or else we'd get real cold real fast.

    But then, see, Izzy had to deal with the way that the pull acts. If a body has more mass, then it also has more pull, yes? That is why the sun is the axis we spin upon, and we are not the axis for the sun. But then why can't it go both ways? You take your ball of lead and your ball of wood and drop them, they land at the same time. But the lead ball has more mass, so it must get more pull. Izzy said, 'Well, see, a body has one more kind of pull. This pull is such that it will want to stay put all the time. And the more mass it has, the more it will want to stay put. That pull is the 'a body at rest will tend to stay at rest' part of the deal. So you see, that pull and the big pull are in a tug-of-war, and they work out so that any mass will fall just as fast.'

    I call it a 'new kind of pull', but it isn't so new: you feel it all the time. Get in a car and step on the gas -- you feel a pull back into your seat. Let up on the gas a bit, and the pull goes away. Make a left, and you feel a pull to the side. Stop, and you feel a pull out of your seat as you slow down. Or, go to the fair and get on a ride. As you spin, you feel a pull out, away from the ride. You spin: that is to say you veer, and veer and veer and veer, just like the moon. If you had no seat belt, you'd fly off the ride, and you'd fly off in a line. (Well, that is to say, you'd fly off in a line as a bird sees it. To be fair you'd also arc down at the same time. But put that to one side.)

    Okay but now, see, old Al's big idea did not work when you look at pull. Go back to when you were lost in the void. You can't say if you move or not, yeah, but you sure can say if you have a pull on you or not. If you did, you'd feel it, no? Sure. So then you have no one true 'at rest', no one true way to look at time, or mass, or size, but you do have one true way to look at a pull? Old Al said, 'Erm. I don't buy that.' We all said, 'Aah, why not? Just give it a rest, Al.' You can see why Al did not want to give it a rest, I bet. But this one was not such an easy nut.

    [ IV ]

    Izzy once said, Look here: say you have a disk that can spin, and so you put a pail of milk on it and you make it spin. You will see the milk go up the side of the pail, and fly over and out onto the disk. No big deal, eh? The spin will make a pull. But now what if you said that the pail of milk is your 'at rest'? Then you have you and the sky and all that in a big huge spin, and the disk with its pail of milk is the only body that is 'at rest', yes? How can you say then why the milk goes up? What can make the at-rest milk fly out of the pail like that?

    This is why Izzy came to say: Yes, we have no one true 'at rest', and when you move, some may say you do move and some may say you don't, and that is okay -- but not so with a pull! A pull is a pull, damn it.

    But old Al's mind was set. And he had a big clue that that was not the full tale. I told you that Izzy put a new kind of pull next to the old kind. Well, even he felt that this new pull was a tad bit odd. Not to put it down, mind you -- just that this new kind of pull was so much like the old kind of pull in a lot of ways. You know? Say I put you in a box, and then put that box out in a void. (But this time I don't need to have you in a true void. I just want you to be well away from any pull. You can have a star or two, or as many as you like, as long as you keep them far off. Okay?) Now, say I tied a rope from the box to a ship, and then I got in that ship and sent it up, so that it went fast, and more fast, and more fast ... I just burn up fuel as long as I have any left. As long as I see to it that you get sped up all the time, and at the same rate, you will feel a pull that will feel just like the pull you'd feel if you were back here, at home. If you have a ball of lead and a ball of wood in that box with you, you can drop them and they will both land at the same time. That is a bit odd, no? Puts a bug in your ear, yes? You can bet it put bugs in our ears. But no one had come up with a good way to say why that was so. Not yet.

    Old Al, he took that ball and ran with it. He went off for a year, and then ten more. Yep. That long. This was no walk in the park, let me tell you. In fact, some of us said that it was more like a walk off the deep end! For you see, when old Al came back, he said, 'This 'new' pull that Izzy gave us, it is just the old pull. Not just like it. It is it. The two are one and the same. And from this, you will then see that we have no 'one true pull'.'

    Do you see what he said, here? When you are in that box with the rope on the ship, the pull you feel won't just act like the pull back home: it is in fact the same kind of pull! So when you say, 'Hey! What if I want this box to be my 'at rest', huh? What then? Why does this ball fall down if I'm at rest and all?' -- old Al will say back at you, 'Well, you see, you have this big old void that goes by, and gets sped up all the time, and that has a pull on you and your box.' You'd say, 'Get out of here! The mass in this void is too far away to give me that big of a pull!' But old Al'd say, 'Nope. You don't get it. How much mass you have in your void is moot. It's the fact that it's all the mass in the void. All of it but you and your box, that is.'

    Same with the milk in the pail. If you say that the pail is at rest, then old Al will say that the spin of all else will pull on the milk, and make it jump out over the side.

    So here is what we get when we boil it all down. Izzy said that you can't tell if you move or are at rest at any time. You can say that you go and all else is at rest, or you can say that you are at rest and all else goes. It all adds up the same both ways. But old Al then said not only that, but that you can't even tell if you have a pull on you or not. So, at no time, in no way, can you act so that you can't be seen as 'at rest'. You can go this way or that way or jump up or down or what have you: even so, you can say that you are at rest -- and it will all add up just the same.

    This was the big one for old Al. He'd like to jump for joy, it all came out just so. But the rest of us, well, we felt more like it was time to lock Al up, what he said was so wild.

    [ V ]

    So some of us said, 'Al, you are mad. Look here: you want to make this pull, this pull that we need to keep next to the sun -- you want to make this very real pull into some kind of fake pull! I mean, what kind of pull is it that can go away and come back as you pick what to call your 'at rest'? That is no way for a pull to act.' And old Al said, 'Yeah, you hit the nail on the head. It is a fake pull.' And we said, 'Okay, that is it. You, Al, have lost it.' And old Al said, 'Feh. Read this and weep.' And we read it, or we gave it a try, more like. It was a real mess. Some of us got it, but most of us just went, 'Huh?' And some of us said that even if it was true, we'd just as soon stay with the old lie, Al's idea was so hard to make head or tail of.

    But Herb -- what? No, Herb isn't his real name, but I like to call him that -- But so then Herb was one of the ones who got it, and he went in with old Al and his new idea, and what they came up with goes like this.

    You know all the ways you can move, here. You have your up-and-down, and you have your east-and-west, and you have your fore-and-back. Well, Herb had said, we want to add one more way here: time. Yeah, time as just one more way to move in. Four ways, all told. And now Herb and old Al said, 'Let's take a look at what we can do when we look at here as a four-way here. Like, what if this four-way here can be bent? We don't mean that what is in a four-way spot gets bent: what if the very spot gets bent?' Some of us said, 'You two have got bent, is more like it.' But they said, 'Ha. Get a load of this.'

    They said, what if mass puts a bend in this four-way here of ours? The more mass you have in one spot, the more bent that spot gets. So now pick out a spot A and a spot B, one on each side of some mass, and each at its own time. What does it look like when a body goes from A to B? You will say: A line. Well, yes and no. It is a line, but it's also bent, as it goes past the bent spot. You see, this line will only look like a line if you can see all four ways! If you can't see one of the ways, if for you the way you can't see is what you call time, then you will see it as a line with a big old veer in it, half way in. Now, take a lot of mass, as much as our sun has, and pick spot A and spot B to be near the mass, and to be the same spot but for the time. Well, when you do that, the line from A to B in the four-way here will be an arc to you and me! An arc that will spin on and on, with that mass as the axis!

    'You see?' old Al said. 'You say that the sun has a pull, but when we spin with the sun as our axis, in the bent-up four-way here we just move in a line! We don't veer off at all! That is why I say that your pull is a fake pull. You don't need any pull if you just want to stay on a line!'

    A few more of us got it, then. But most of us just said, 'What are you two on? Put down the bong and get real! This is way too wild to be true.' But they just said, 'Just try and see if it isn't true.'

    So we came up with ways to test old Al's idea, and each time Al hit the gold. His idea had the sun's rays a tiny bit more red than what Izzy said. They were. His idea put Mars a tiny bit off from how Izzy had Mars. It was.

    The big one, the one that got told over and over, was the one with the dark-at-day time. You know, when the moon gets in the way of the sun. At that time you can get a real good look at a star when it's up next to the sun. (Next to it in the sky, that is. Not next to it for real. You know what I mean.) They went off and got a good look at a star that was very near the sun, and then they used a book to see just what spot that star was in. You see, the rays from the star pass so near the sun that they get bent, on the way to us. Old Al, his idea said just how much the rays get bent. With Izzy, the rays get bent, too, but only by half as much. So they took a look at the star, and they took at look at the big book, and ... well, I'll bet you can tell me as well as I can tell you just how far off that star was.

    A-yup.

    And then all of us, we all just sat back and said: 'Whoa.'

    And then we all went back to old Al and said to him, 'Al, you must have some kind of head on you, to pull an idea like that out of thin air.' We said, 'Why don't you quit this dumb job you have here and come with us?' We said, 'You know what, Al? We like you.'

    [ end ]

    And that is just the way it was. (Well, that is to say, more or less.) Oh dear me, look at the time! Sigh. I do know how to run on, don't I? It must be well past time to turn in. Let me show you out. It was very nice to have you over, and I hope I was of help.

    And y'all come back now, hear?


    Note: 'Herb' actually refers to Hermann Minkowski. (And 'Izzy' and 'Ari' are, of course, Isaac Newton and Aristotle.)

    Texts Brian Raiter




    All Comments: [-] | anchor

    crooked-v(10000) 4 days ago [-]

    People talk about the 'good old days' of the web, but boy, in a multi-tab environment it stucks to try and read something that doesn't put any effort at all into side margins.

    politelemon(2288) 4 days ago [-]

    Reader mode (FF) helps a lot here.

    hexo(10000) 4 days ago [-]

    And yet, it is 1000 times more readable than any 'modern' website.

    nxpnsv(10000) 4 days ago [-]

    the lack of large video ads really is jarring too

    dgoldstein0(3571) 4 days ago [-]

    Works great on mobile, fwiw

    globular-toast(10000) 4 days ago [-]

    What does multi-tab have to do with it? You are in control of your computer aren't you? Just make the window narrower.

    creata(10000) 4 days ago [-]

    It's annoying for sure, but at least you can resize the window.

    Side note: Dan Luu claims[0][1] that there's no readability advantage to narrow line width. I haven't really looked into it, but in my experience it feels like he's very wrong.

    [0]: https://danluu.com/slow-device/ [Appendix: this site vs. sites that don't work on slow devices or slow connections]

    [1]: https://nitter.net/danluu/status/1115707741102727168

    mdp2021(1673) 4 days ago [-]

    Open the 'developer's tools', find the '<body>', inject a 'margin' CSS - customize the page locally.

    flysand7(10000) 4 days ago [-]

    Folks, just for these kinds of websites I made an extension that trims the body of the text to 80 characters. I don't have a way to pay to get it on google's or firefox's extension marketplace, so you'd have to install it from source.

    https://github.com/flysand7/breader

    ghusto(10000) 4 days ago [-]

    We did have ways to create margins, you know :/ Aside from simple CSS, you could still do it with pure HTML.

    bslanej(10000) 4 days ago [-]

    Screens were much narrower then so constraining the width of text was not necessary.

    danadam(10000) 4 days ago [-]

    I have a bookmarklet, since forever, labelled 'sane width', with the following code:

      javascript:(function(){var newSS, styles='body { width: 800px ! important; margin: auto !important } '; if(document.createStyleSheet) { document.createStyleSheet('javascript:''+styles+'''); } else { newSS=document.createElement('link'); newSS.rel='stylesheet'; newSS.href='data:text/css,'+escape(styles); document.getElementsByTagName('head')[0].appendChild(newSS); } })();
    
    It forces the body width to 800px and centers it. Crude but it is enough for me.
    TZubiri(10000) 4 days ago [-]

    I haven't checked and I don't know how it would render. But it is worth noting that since this was designed against an earlier version of css, it might render differently in older browsers.

    For example, older monitors had less pixels, so it's likely that the wrapping was sensible in older monitor/browser configs.

    To say nothing of browser defaults being different, if this was pre-css, then the margins might have been baked into the default browser interpretation. In other words, pre-margin property, a webpage without margin didn't mean 'this has no margin', in the sense that a modern webpage without margin specified would mean 'DO NOT ADD MARGIN TO THIS!'.

    hkmaxpro(2459) 4 days ago [-]

    Reminds me of Yasha Berchenko-Kogan's excellent answer to the question "What do grad students in math do all day?"

    https://www.quora.com/Mathematics/What-do-grad-students-in-m...

    > a bit like trying to explain a vacuum cleaner to someone who has never seen one, except you're only allowed to use words that are four letters long or shorter.

    > What can you say?

    > 'It is a tool that does suck up dust to make what you walk on in a home tidy.'

    pavlov(3282) 4 days ago [-]

    Somehow the sequences of small words and ample syntax make this sentence quite difficult to parse.

    Maybe just go full pidgin:

    "Tool to suck dust, make tidy for walk in home."

    stevage(3583) 4 days ago [-]

    You don't need the awkward 'does'. I'd go with:

    It is a tool to suck up dust and dirt from rugs, wood or even tile.

    HPsquared(10000) 4 days ago [-]

    A tool to take away dust and dirt in the home.

    jaynetics(10000) 4 days ago [-]

    Reminds me of 'Gadsby', a 50.000 word novel without the letter 'e':

    https://en.m.wikipedia.org/wiki/Gadsby_(novel)

    koiueo(3516) 4 days ago [-]

    I imagine LLMs would excel in this kind of writing these days.

    But really impressive for the time.

    isolli(2928) 4 days ago [-]

    I'd be curious to know if it was easier or harder (or perhaps just as difficult) to write than the French equivalent. [0]

    The Wikipedia article goes on to discuss interesting aspects of how the book was translated in different languages, with different self-imposed constraints.

    [0] https://en.wikipedia.org/wiki/A_Void

    vodou(10000) 4 days ago [-]

    Georges Perec did the same with his novel 'La Disparition'.

    What is almost as impressive is that these novels (at least Perec's) have been translated to other languages.

    pyfon(10000) 4 days ago [-]

    8 of them on the cover!

    amelius(2195) 4 days ago [-]

    Reads like it could have been AI generated.

    Tepix(2905) 4 days ago [-]

    Not in 1999.

    ahazred8ta(10000) 4 days ago [-]

    For reference, Poul Anderson's 'Uncleftish Beholding' -- an essay on atomic theory written in modernized anglo-saxon.

    https://en.wikipedia.org/wiki/Uncleftish_Beholding

    Up Goer Five; rocket science explained using only the one thousand most common english words.

    https://www.explainxkcd.com/wiki/index.php/1133:_Up_Goer_Fiv...

    https://www.explainxkcd.com/wiki/index.php/Thing_Explainer

    rootbear(10000) 4 days ago [-]

    I love "Uncleftish Beholding", which someone said is written in "Anders-Saxon". I think it would be fun to do it live as a Power-Point presentation.

    TobTobXX(3456) 4 days ago [-]

    Reminds me also of the 'Up Goer Five'. An xkcd poster which roughly explains Saturn V with only the top 1000 used words in English[0]. Even better IMO is the collab video with MinutePhysics[1].

    [0]: https://xkcd.com/1133/

    [1]: https://www.youtube.com/watch?v=2p_8gx-XHJo

    erk__(10000) 4 days ago [-]

    Randall Munroe (of xkcd) went on to write a full book in that style: https://xkcd.com/thing-explainer/

    stavros(1602) 4 days ago [-]

    This essay is fantastic at demonstrating that putting a word length limit actually makes explaining things more complicated. I got lost at around chapter 5 because the author couldn't use words like 'gravity' and 'acceleration' and I got confused by which one is 'new pull' and which one is 'old pull'. It's too bad, as it was interesting up to that point.

    wizzwizz4(10000) 4 days ago [-]

    Of course you find it hard to distinguish the two! You don't have equipment for measuring tidal forces, and they are locally indistinguishable.

    4gotunameagain(10000) 4 days ago [-]

    > It's too bad

    I think that's the whole point. It was never meant as being easier to grok

    K0balt(10000) 4 days ago [-]

    There's a reason why vocabulary exists. It isn't to make things harder to understand. Sometimes the best way to explain something to someone with a limited vocabulary is to expand their vocabulary in the process.

    karmakaze(3671) 4 days ago [-]

    It's an exercise. I would have much preferred using the 20k most common words or something like that. The first thing that came to mind is 'elevator' which is where the equivalence eureka comes from. It can be done in British English as 'lift' but difficult otherwise.

    Elevators are cool like telephone booths. I've wondered what a dog thinks using them for the first time, then accepting what they do and how much they understand its geometries.

    chuckadams(10000) 4 days ago [-]

    Reminds me of Guy Steele making the point about big languages and small ones in his talk about Scheme. Started the whole lecture using only one-syllable words then gradually defined two-syllable words using only single syllables and so on.

    malfmalf(10000) 4 days ago [-]

    There was a talk at a university, where the presenter used only words of two or less SYLABLES , but he allowed himself to use more complicated words after explaining them (but kept that to a minimum).

    I can't find either the author or the talk. I think it was some 5 years ago.

    At first, I thought it was Randall Munroe, but I might be remembering this: https://xkcd.com/thing-explainer/

    I've also tried with Paul Graham, who has some articles trying to convey something similar, but no luck there.

    Edited to add : I think the original proponent of a similar idea was Richard Feynman : https://www.hpcdan.org/reeds_ruminations/2022/03/understandi...

    freetonik(3070) 4 days ago [-]

    It was interesting to notice that not all short words are necessarily simple. Words like 'void', 'iota', 'mass', or 'veer'.

    patates(10000) 4 days ago [-]

    Thanks to Javascript, I know void.

    Thanks to Go, I know iota.

    gcanyon(10000) 4 days ago [-]

    Four letters is an interesting constraint, but it doesn't guarantee simplicity. I'd replace

    > no one can say who's held fast

    with 'no one can what does move and what does not'

    gcanyon(10000) 2 days ago [-]

    ...and of course I missed a word. I meant to type:

    'no one can say what does move and what does not'

    api(1616) 4 days ago [-]

    I'm not sure if this is physically accurate, but the best description I've encountered for relativity is:

    You are always traveling at the same speed. That speed is 'c', the speed of light.

    If you are sitting still, you are 'falling' through the time dimension at 'c'. If you move in the X,Y,Z dimensions, you must move slower in the 't' dimension so that your velocity vector still sums to 'c'.

    quibono(3612) 4 days ago [-]

    An immediate follow-up is: why do we always travel at c?

    andai(3664) 4 days ago [-]

    I appreciate this, though the hard rule seems to be doing more harm than good. For example, one 5-letter word became 6 words, because 5-letter words aren't allowed!

    So while the vocabulary is kept low, the writing style becomes harder to process, at least for me. I wonder if there's a way to win on both fronts, to make it maximally comprehensible for all involved.

    I'd argue 'use normal words that everyone knows' (even if they are 5 letters!) would be included in such a strategy.

    Edit: Okay now I made it further in and I'm being asked to keep several different perspectives in my head simultaneously, perceiving different events at different rates of time... I think I need a diagram... or a microdose...

    lgeorget(10000) 4 days ago [-]

    Several variants of simplified English have been designed for the purpose of being understood by learners or people with only basic command of English as a foreign language. Wikipedia has a version in Simple English for instance: https://simple.wikipedia.org/wiki/Simple_English_Wikipedia.

    ActorNightly(10000) 4 days ago [-]

    The explanation still kinda sucks. I like this one:

    The easiest way to understand the relationship between time and space is repeat the thought experiment with the void, but assume that there is no consciousness there (i.e nothing running that can sense time passing).

    Now imagine the only action you can take is to fire particles (say photons) in a given direction. In a void, that action is meaningless - the particle fires and never comes back. No information exists.

    Now imagine there is a mirror somewhere in space. A particle fires, and then comes back. And maybe interacts with another particle. But still, this is generally meaningless and you cant derive any measurable thing from it, but you have a piece of information - particle comes back.

    Imagine there are 2 mirrors in different directions. What you do is you set up 2 identical devices. Each one fires a particle, and when the particle comes back, it triggers a certain color ball to fall down a common shared tube, and then the particle gets fired again.

    So with 2 mirrors, you get a sequence in the tube that looks something like blue, blue, blue, green, blue, blue, blue, green. Now you can make a measure of distance. You take the 'blue' mirror as your unit, and say green mirror is 2 away.

    You have also in fact created a clock. The tube contains information on how many cycles have passed - i.e in order to say that mirror is x away, you need to have counted x blue balls before that respective ball shows up. So you can see how distance and time is intimately intertwined. To measure distance, you have to necessarily have something that measures time.

    Now lets say that the 'green' mirror starts moving away from you, at a slow speed (i.e your particles are much faster. You start to see 3 balls in sequence, then 4, then 5, and so on. By comparing the difference in the subsequent position of the green balls, you can measure speed.

    What happens if the speed of the mirror is 99% of the particle speed? The particle takes its sweet time getting there, and sweet time coming back. Even if you fire the particle as the green mirror is close to the particle emitter, its going to result in a measurement of a very large distance.

    This is the relativistic effect where the space behind something moving fast increases.

    This whole experiment demonstrates that what we consider space is precisely defined by measurements, and relativistic effects alter these measurements, which alters our perception of space.

    You can do similar thought experiments to understand why space in front of you seems to shrink, why time dilation becomes a thing, and so on.

    arijun(10000) 4 days ago [-]

    That explanation seems like it would not line up with the mathematical reality of the situation. It seems like one of those handwave-y things that always confused me as a child. "Gravity is just massive objects deforming space like a weight deforming a sheet, and things fall into the well they make." Ok but what would make something fall into the well, there is no gravity.

    meindnoch(10000) 4 days ago [-]

    No. What you described is still 100% Galilean relativity. Special relativity cannot be explained with Galilean relativity.

    lifeisstillgood(2085) 4 days ago [-]

    I think I get it ... kinda. Thank you.

    notTooFarGone(10000) 4 days ago [-]

    Hi, as a person who can only read words with 4 or less characters your explanation is really confusing

    TZubiri(10000) 4 days ago [-]

    I personally don't find metaphorical explanations helpful, especially considering this is not the only time I have heard or will be hearing about relativity, so if I get another explanation I will have to either map the concept of balls to whatever metaphor another teacher uses, which is just more work. I'm fine with using generic words like 'information', which I can map more naturally to other explanation wordings like 'signal'.

    The same applies for explanations of bitcoin, or Machine Learning, or stock markets, just use the proper wording, difficulty, weights, secondary market. Metaphors are not teaching.

    janpmz(10000) 4 days ago [-]

    I turned this into a little audio book: https://www.pdftomp3.com/shared/67fcc7f933aa6c3115b114da

    no_news_is(10000) 4 days ago [-]

    No, you didn't. This doesn't match the original text.

    0:47 Added in text: 'Okay, here's the text prepared for reading aloud.'

    0:58

    Original: 'Okay, yes, it's a dumb idea,'

    Audio: 'Okay, yes, it's a bit of a strange idea'

    1:08

    Original: 'Or do you, say, list off to the left some? What I want to ask you is: Can you find out? Hell no. You can see that, sure.'

    Audio: 'Or do you drift off to the left a bit? The question is, can you figure it out? No, you can't. You can see that.'

    ---

    It appears you are using 'Variational Lossy Autoencoder (VLAE)' as the basis for your website[1], which might be good for simplifying more complex things but defeats the purpose here. It's using more than four letters in words, and censoring out 'dumb' and 'hell'?

    Why don't you try pointing that another explanation of the theory of relativity without this limitation? Seems like that'd be a more interesting exercise.

    [1a] https://www.pdftomp3.com/shared/67e178f428779824db2e06c6 [1b] https://pdf-reader-storage-f55b8c51173224-staging.s3.us-east...





    Historical Discussions: How to bike across the country (April 14, 2025: 239 points)

    (239) How to bike across the country

    239 points 4 days ago by benjbrooks in 3646th position

    www.brooks.team | Estimated reading time – 27 minutes | comments | anchor

    I spent 51 straight days on my bicycle last year, traveling 3,900 miles through high desert, mountain passes, endless prairies, and rolling hills from San Francisco, California to the eastern coast of Virginia. I did the majority of the route (Sacramento to Virginia) solo. Yet I didn't even own a bike until two weeks before the trip. How'd that happen?

    After shutting down my startup in summer 2024, I was burned out and unsure what to do next. Accordingly, I sat down to brainstorm a few crazy ideas in hopes of tackling a meaningful challenge and taking time to clear my head. I considered, but ultimately ruled out due to skill/weather issues, ideas to sail across an ocean or hike the Appalachian or Pacific Crest trails. Bicycling across the continent seemed like the perfect blend of crazy and possible.

    The Route

    All of my research led me to the Adventure Cycling Association (ACA) maps. The ACA sells digital routes as gpx files - each route contains turn-by-turn directions along with a detailed list of waypoints marking campgrounds, convenience stores, motels, and notable tourist attractions along the route.

    The most popular cross-country route is the TransAmerica Trail, created in 1976 for the United States Bicentennial. It starts in Astoria, Oregon and runs 4200 miles before finishing in Yorktown, Virginia. Although I was initially interested in following this route, due to my last minute planning (late August), I couldn't start the ride until September 21. By that time of year, sections of the TransAmerica in Montana and Wyoming are typically snowed out. All the advice I read recommended completing the TransAmerica before the time of year I decided to start.

    With the northern sections of the TransAmerica nixed, I looked further south. The Southern Tier Route (San Diego, California to Saint Augustine, Florida) looked compelling but I checked out a few Reddit reviews and everyone said it was boring, particularly the 1000 mile stretch through Texas. Besides complaints about how challenging the ride was, most folks had positive things to say about the Western Express (San Francisco, California to Pueblo, Colorado). The trail is largely remote so there's barely any suburban traffic and it weaves through several spectacular national parks and mountain ranges. That said, reviews were mixed about whether to attempt such a ride in October. I decided to roll the dice, hope there was no snow in the Rockies when I passed through Southern Colorado via the Western Express, and rejoin the TransAmerica route in Pueblo. I was confident I could pass through the Midwest and Mid-Atlantic without encountering bad weather, assuming I made it over the Continental Divide safely.

    The Prep

    There were a few important areas of preparation here:

    • Fitness Prep -> how do I bike every day without ripping my body apart?
    • Survival Prep -> how do I sleep comfortably every night without freezing/overheating and make sure I never run out of food & water?
    • Mechanical Prep -> how do I repair my bicycle when I don't have service and my tire goes flat?

    I bought a touring bike on Craigslist while assembling all my gear for the trip. In all honesty, my physical preparation was minimal - I only logged a couple short practice rides around Central Park, never exceeding 30 miles. Though I did worry about my ability to bike long distances, I was confident in my general fitness. At the time, I was halfway through training for the NYC Marathon (which I ultimately skipped to finish the bike tour), and I figured that aerobic fitness would carry over well, even if my quads weren't fully conditioned. My instincts were mostly right. I was definitely sore for the first two weeks of the ride but I never got injured. The key was to push through muscle soreness but never tendon soreness. If there was a pinching or pulling sensation in my knees while I pedaled, I needed to move to a lower gear. Once I started the trip, I just woke up every morning expecting some baseline quad soreness.

    One move I don't regret making is getting a bike fitting in advance of the ride. These are expensive (I got mine for $400 at enduranceWERX in Harlem) but it made the bicycle feel like an extension of my body and, so I hear, is the best way to prevent injuries.

    For the reader, preparing for camping every night might require a bit more practice. A typical backcountry night involved me collecting water from a nearby stream before running it through my filter, boiling water with my camp stove to make an instant meal, hand-washing my clothes before hanging them to dry, giving myself a "shower" with a travel washcloth, and setting up my tent (often in the dark). In the morning, I'd wake up 90+ minutes before sunrise to make a quick breakfast and break down camp into a small pack that re-attached to my bike before heading out for the day. I was fortunate to go on a bunch of backpacking trips during college (shout out Peaks & Professors) so I was comfortable with all of the above. For those less experienced, I'd recommend doing a couple practice overnight trips to get the hang of using your gear before setting off on the long haul journey.

    one pot dinner (Tunnel Hill, Illinois)

    all the essentials (Pilot Knob, Missouri)

    I was a complete noob on the cycling maintenance side before preparing for this trip. I had never changed a tire and didn't even own a bike (just had a NYC Citibike subscription). Before I flew to San Francisco to start the ride, I had a cycling shop disassemble my bike and pack it into a cardboard box so I could check it as an oversized bag. Rebuilding the bike to kick off the trip was a great way to get familiar with my tools and the bicycle itself. If I were to relay advice to my past self, I'd also say to be prepared to grease your chain, replace a flat tire on both wheels, and use a Quick Link on a broken chain. Luckily, I never had to fix a broken chain but my back tire did go flat 8 times during the trip and my chain would regularly detach from the gears.

    What's more important than any physical preparation is the willpower to get the trip done. There will be hard moments on the trail - I found myself crying on the side of the road more than once. No amount of training could have prepared me for the daily struggle of simply getting up and making progress. But this experience is not reserved for experts and athletes. People have been making the trek since long before the federal interstate existed and elderly cyclists manage it every year. The key is to just keep chugging along!

    The Gear

    Being on the road for several weeks straight requires a few key supplies. Full Disclosure: All the Amazon products below have affiliate hyperlinks.

    Bike: I bought an old school Trek 520 off Craigslist for less than $500. Although it's heavy, this frame is considered to be a classic bike touring frame. I got compliments on it throughout my ride. I will say that I didn't find any other good bikes on Craigslist/Facebook Marketplace in the Manhattan/Brooklyn area that were under $1000. A brand new touring bike will run you at least $1500. I'd recommend going used if possible.

    Bike Accessories: I used a Rockbros Rear Rack and two Rockbros Panniers to store the majority of my supplies. Rockbros makes high quality waterproof bags. I cycled and camped through torrential rainstorms in Missouri, Kentucky, and Virginia - water never made it inside the bags. The straps on the side of the panniers are also a great spot for air-drying wet clothes. I chose not to buy a front rack and loaded all the weight on my back tire, which I would not recommend. All the necessary gear can fit into two panniers, but I'd recommend putting one on a front rack and one on a rear rack. It's also worth having a bottle cage on the frame to have easy access to water while in the saddle - my bike came with one pre-installed.

    80 miles until the next town (Hite, Utah)

    Bike Maintenance & Repair: I used a leatherman multitool and crankbrothers multitool for all my tightening, pinching, and cutting needs. For tire-related maintenance, I used a small portable pump, a pair of levers, a patch kit (talk to any bike shop), and two spare tubes. I replaced my tires (and got another spare tube) about halfway through the trip. I brought along a roll of duct tape, which comes in handy when a thorn or piece of glass rips a sizable hole in the tire itself. I also carried a Quick Link but never ended up needing it.

    Safety: I used a MIPS helmet, reflective vest, re-chargeable front light, and re-chargeable rear light at all times. I didn't do any night riding unless an emergency demanded it. The primary risk of cycling is drivers hitting cyclists, usually because drivers can't see them. For better or worse, it's on cyclists to be undeniably visible - being appropriately cautious means being obnoxiously bright and reflective. Not only is it safer, wearing a crazy outfit is a fantastic conversation starter. I brought pepper spray to deal with loose dogs but never ended up using it - I've heard it's easier to use an air horn instead. I brought a 20 ft rope to string up my pack in bear country, sunscreen lotion for my face, and spray sunscreen for my legs/arms/neck. My first aid kit included Tylenol, eye drops, chapstick, bandaids, and even a small pack of Mylar thermal blankets.

    Shelter: I spent the majority of nights on my trip inside a Naturehike Cloud-Up 1 Person Tent. I was happy with the volume of the assembled tent as a 5'11' person. It was also a small, light (~1500 grams) piece of luggage. That said, the tent didn't make it through an overnight rainstorm without leaking water through the floor. I'd recommend finding another one :/ I also brought a 20-degree sleeping bag and a small sleeping pad. If I had taken the trip in the summer, a 40-degree bag would probably have sufficed.

    Water Storage: For water access in underpopulated areas, I used a Sawyer Squeeze water filter. It's important to note that these filters are rendered ineffective if exposed to sub-freezing temperatures. If it got chilly enough overnight, I'd throw the filter in my sleeping bag so it wouldn't freeze. I packed a 6-liter water container (I'd occasionally duct-tape this to the frame), a 2-liter camelbak, and a small Rockbros water bottle for water access while I was in motion. In retrospect, the camelbak tube was a pain to pack and it would often leak. In retrospect, I should have just used a couple big storage bags that are designed to be emptied out into a classic water bottle/cooking stove.

    Clothing: The rule of thumb I followed was 2+ sets of active wear and one set of camp clothes. For daily riding, I brought 2 Ultimate Black Bibs and 2 neon riding shirts. It's worth noting that the zippers on my riding shirts broke almost immediately. I brought a set of tights for riding on cold mornings. I brought three pairs of socks - I went for warm workout socks because of the season. I wore clip-in shoes for riding and Birkenstocks for camp. Noting I traveled from late September to early November: the temperature peaked at 85 in the Nevada Desert and dropped down to 28/29 overnight in Gunnison, Colorado. It stayed in the 40-60 range as I passed through the midwest and Appalachia.

    salt flats (outside Fallon, Nevada)

    Your browser does not support the video tag. frozen socks (Gunnison, Colorado)

    Electronics: I brought an iPhone 11, a Garmin Edge 530 (bike computer), a Garmin inReach (satellite phone), a portable charger, and necessary cords. The 530 was great at recording my daily rides and buzzing when I needed to make a turn to stay on the route but it isn't meant to be used for trip planning or waypoint finding. On iOS, I'd recommend downloading GPX files via GoodReader, then transferring those files over to EasyTrails for easy viewing/navigation. I used EasyTrails every morning to plan out my ride for the day. Notably, I did not bring headphones. It's more fun without them!

    Camping Accoutrements: For cooking, I brought a 900 mL kettle, SOTO windmaster stove, and a small canister of cooking fuel (to be replaced every 2-3 weeks). A Black Diamond headlamp helped me navigate around camp at night. For cleaning dishes, laundry, and showers, I brought a 4oz container of Dr. Bronner's (to be replaced every 2 weeks). I hear it can also be used as toothpaste but I never tested it out myself. Additionally, I carried a small washcloth that doubled as a towel. While nigh impossible to completely escape chafing and saddle sores on such a long ride, Chamois Butt'r made the discomfort more manageable.

    Sustenance

    Planning my food and water consumption was the most logistically challenging component of the trip. Because I was weight constrained and typically only packed a day's worth of food/water, I needed to constantly monitor where the closest grocery stores and water sources were. The most common grocery chains on the trail were Family Dollar & Dollar General. These chains are price-friendly. They have a variety of instant meals and calorie-dense snacks. I wouldn't recommend shopping there if you're looking for a nutritionally balanced diet, but they're perfect for a cyclist exclusively in need of carbs. My favorite instant meals included mashed potatoes, beans & rice, ramen, and pasta. I avoided canned food as it added unnecessary water weight to my pack.

    I'm vegetarian, and rural America doesn't lend itself well to that dietary restriction. That said, while there wasn't much variety, I could usually find something to eat. I reliably found mexican restaurants along the route, and if there was a bar that only served burgers I'd usually be able to convince them to make me a grilled cheese.

    a delicious veggie burrito (Dolores, Colorado)

    Given my daily average was about 7 hours on the bike, I generally ate between 4,000 and 6,000 calories a day.

    For breakfast, I would always have instant oatmeal and instant coffee. The key was to make the oatmeal inside the packets so there are fewer dishes to clean - the boiling water also warmed up my hands on cold mornings. Otherwise, 40-50% of my daily intake was trail mix. I would grab snacks at the grocery store to eat every 80 minutes during sunscreen breaks - usually it was peanut butter pretzels, Clover Valley's Monster Trail Mix, or Ritz Cheese Sandwiches. By the end of the ride, I was probably eating both lunch and dinner at some restaurant, which is definitely the more expensive way to do the ride. Subway was a great option - I could eat half of a footlong sub for lunch and save the rest for dinner at my campsite.

    I regularly bought electrolyte packets at the grocery store. In the desert, I was prepared to go through a full bottle (20 oz) of water every hour. And as much as every other bottle needed to be an electrolyte drink. As I moved further east, there'd be days when I'd only need a bottle of water every four hours, no electrolytes necessary. Regardless, I always tried to overpack on water. Running out of it in the wrong place is a death sentence. I needed all 8.5 liters of my storage on the longest stretch without services (between Hanksville and Blanding in SE Utah).

    Sleeping Spots

    The best source of truth for places to sleep is the ACA map. The waypoints will list a location, a phone number to call, and occasionally a website to visit for bookings. These range from a free place to sleep with no running water to a car campsite to a hotel with breakfast included. Most nights on my trip involved some form of camping, with varying access to bathroom/shower facilities. I did occasionally stay in Airbnbs or motels but I tried to use those as a last resort following particularly challenging riding days.

    I'd recommend making a Warmshowers account in advance of any bike tour. There are a number of generous Warmshowers hosts across the country that will let cyclists stay in their homes for free, occasionally even providing a free meal, access to laundry machines or showers. In Pueblo, Colorado, one such host drove across town to pick me up after my tire kept going flat. He fed me dinner and breakfast, gifted me a bicycle rack, and let me use his laundry machines. One desperate night in Utica, Kentucky, I called a host with 10 minutes notice and they ended up feeding me and letting me stay in their garage. I was blown away by the acts of kindness that complete strangers consistently offered me.

    I generally didn't plan sleeping arrangements more than 36 hours in advance. My mileage was highly dependent on weather, elevation change, and unforeseen delays so I had a hard time making multi-day plans. For example, when attempting to cross the Ohio River between Illinois and Kentucky, I didn't realize that I needed to take a ferry over the water. And the ferry was conveniently closed over the weekend when I arrived at the north bank at 3pm on a Sunday.

    There are loads of RV parks open for overnight bike tourists on the westside of the Rockies. They'll usually include a small campsite, access to running water, electricity, showers, and coin laundry machines. These are safe places to stay and great places to meet other traveling sightseers. They pretty much disappear once you cross over into Eastern Colorado though. There were always spots to camp along the route as I went eastward, but they were often small plots in public parks with limited access to water, showers, or bathrooms. I'd often have to call the local sheriff's office to let them know I would be spending the night at the park. In a pinch, you can always stop by the local fire department and ask them where the best campsite is. Sometimes they'll even let you sleep inside the station.

    Some notable places I spent the night:

    • a church basement in Kansas
    • a fire station in Kentucky
    • a greenhouse in Virginia
    • an old pony express station in Nevada
    • an RV park that doubled as a goat/pony farm in Colorado
    • a bed&breakfast for Appalachian Trail hikers
    • a shack outside a bar in Illinois

    a BnB with a dog and delicious homemade breakfast included (Damascus, Virginia)

    the fire chief came by to welcome me! (White Mills, Kentucky)

    Your browser does not support the video tag. setting up camp next to the flower beds (Draper, Virginia)

    On Safety

    Passing through strange towns on my route, I never felt unsafe around other people. I'd always leave my bike unlocked and, assuming the forecast didn't call for rain, my supply bags outside of my tent overnight. I'd often start conversations with strangers over lunch/dinner and chat with dog walkers as I set up camp in public parks. Admittedly, I can only speak to my own experience - a taller-than-average 25 year old white guy will have an easier time making their way through rural America than basically anyone else. With that said, I'd recommend traveling with a buddy and carrying a taser/pepper spray when possible.

    The biggest precaution I took was making sure my electronics were on me at all times. I'd always put my bike computer and phone in my pocket when heading into restaurants and grocery stores. I spoke with a westbound cross-country cycling group when making my way through Nevada and one man mentioned that his phone was stolen off his bike when he left it unattended at a convenience store.

    It is worth mentioning that much of the rural American West doesn't have cell service, making a satellite phone an absolute necessity. Being able to quickly make SOS calls in dire circumstances and notify loved ones of my exact location, regardless of cell tower availability, was crucial. I made sure to broadcast my coordinates twice a day and keep my family/girlfriend updated as to my travel plans every morning for my own safety. I was unexpectedly delayed by a flat tire on a mountain with no bars on two separate occasions - it was a godsend to be able to have that phone as a lifeline as I saw the sun moving closer to the horizon.

    It's important to have a general idea of the wildlife (and corresponding precautions to take) in every region on the trail. For example, passing through bear country required me to either string up all my scented supplies or put them in a bear canister. That was particularly important as I made my way through Eldorado National Forest between Folsom & South Lake Tahoe. Similarly, I didn't realize that Black Widow spiders were native to Nevada (and live in burrows right under the sand surface) until I found one crawling toward my tent. I triple-checked my cycling shoes for overnight visitors after that :)

    Joys of the Journey

    Cycling across the continent is like going through a season of Planet Earth. In a single week, I went from towering redwood forests shrouded by morning fog, to crystal clear alpine lakes surrounded by pungent evergreens, to bone-dry salt flats in sweltering heat. Miles from city lights in Nevada and Colorado, the night sky presented thousands of stars and the glowing band of the Milky Way. When I made it to Kansas, the blue sky was so vast, and the horizon so uninterrupted, that I felt like I was going to fall off the face of the Earth. Cruising along the Blue Ridge parkway at sunrise, I watched the mountains turn from black to purple to blue to brown and the rising sun race across the valleys down below. I heard mooing cows, neighing horses, clucking chickens, barking dogs, chirping crickets and singing birds. Never have I felt more connected to, or grateful for, natural wonders than during this ride.

    A bike touring outfit and gear collection makes a cyclist stick out like a sore thumb, inevitably attracting inquisitive strangers. People will usually have stories of their own - either they know someone who has done something similar or are into cycling themselves. These conversations with strangers were some of the high points of my trip. I met people from all walks of life, and oftentimes they would offer to help in any way they could. Complete strangers would hand me extra food, buy me dinner, pull up beside me on the road to ask if I needed help, offer a room in their house for the night, or get their hands dirty to help me replace a flat tire. Again, this was an n=1 experience, but I was pleasantly surprised by the warmth and hospitality I encountered pretty much everywhere. What's more, I would occasionally meet crazy cyclists like me. I bumped into an Australian couple heading from Vancouver to Argentina on mountain bikes, a crew of people cycling from Boston to San Francisco with van support, and two Frenchmen heading from Montreal to San Francisco.

    Another memorable aspect of riding along the TransAmerica trail was showing up to restaurants/rest stops in small towns that regularly see cyclists. In one such restaurant in Sugar City, Colorado, the owner took a quick look at me as I walked in before saying "I have something for you" and disappearing into a corner. She came back with a spiral notebook and told me to take a look. Inside were log book entries dating back years - every cross-country cyclist who'd passed through had taken time during their meal to write a personalized note. Some wrote little else but the date and the direction they were headed. Others wrote paragraphs of gratitude for the delicious meal they'd been served or diatribes on the struggles they'd endured to make it to this point. As I continued to head east from Sugar City, I found that these log books were not a rare occurrence, nearly every spot on my ACA map had something similar. The books, of all things, were what made me feel most connected to something bigger than myself during the ride.

    Your browser does not support the video tag. flipping through TransAmerica history (Sugar City, Colorado)

    I was lucky enough to have my dad join me on the ride from the Kentucky-Virginia border to the finish line in Yorktown. Weeks of riding solo was starting to make the trip feel like a never ending journey - seeing him and my mom in my home state was like getting an energy boost in the last mile of a marathon. Together, we pedaled through farm country, set up camp in tiny towns, and remarked at how little we'd previously seen of the state we'd both spent the majority of our lives in. It's rare to have extended quality time with family after moving out of the house and I know we'll both cherish this trip's memory for the rest of our lives. I pitched my dad on doing the whole thing with me but he, understandably, wasn't able to take two months off with such little notice - having completed it with him I can definitely imagine wanting to take a similar journey with my own children a few years down the line.

    setting off with my dad through morning fog (Breaks Interstate Park)

    Some notes for next time

    • The climb from Somerset, CA to the peak above South Lake Tahoe is sustained and there are long stretches without water.
    • If a dog starts running at you, it probably wants to bite you more than it wants to play with you.
    • There are basically six places to comfortably camp in Nevada so you better plan on biking some multiple of 70 miles per day or sleeping on the side of the road.
    • The 20 mile climb (4% grade) out of Cedar City, UT is unbelievably brutal.
    • The 10 mile climb (5% grade) to summit the Continental divide is similarly brutal.
    • Seven of the nine states I cycled through have a city named Eureka.
    • The 125 mile stretch between Hanksville and Blanding, Utah is nightmarish. There are NO SERVICES and only one water source (filter required).
    • Don't believe anybody when they talk about 'prevailing winds'. Eastern Colorado and Kansas are so flat and so windy (both directions) that it will reduce most sane riders to tears.
    • The Ozarks are hillier than expected. Several of my top 5 net elevation gain days were in Missouri.
    • This is one of the few times in life when it makes sense to have ice cream for dinner.
    • Appalachia is the steepest range on the route. There are several 2+ mile 7-8% grade climbs in Kentucky & Virginia.
    • If you get three flat tires in quick succession, there's probably something sharp embedded in your tire. Replacing your tube is necessary but not sufficient - it might just get punctured again.

    Resources I Found Useful

    My gear list and general trip prep was heavily informed by Sam Westby's How I Biked Across the U.S. video and corresponding doc. I'd highly recommend both - you can get 90% of the way there without consuming anything else.

    These videos captured the vibe of the Western Express quite well:

    crazyguyonabike.com has a bunch of great day-by-day accounts of previous TransAmerica journeys. This series, for example, captures the monotony and little details quite well.

    Shoutouts

    • Mitali, for being my emotional support and safety line at every step of the trip, coming down all the way from New York to see me finish, and helping me edit my writing.
    • My mom, for checking in on me every day, dropping my dad off at the Kentucky border, and meeting up with us in Charlottesville and Yorktown.
    • Mitali's parents, for hosting me before my ride, helping me pick up all kinds of random supplies, and driving me to the start.
    • Eric & Georgia, for sending me off at Ocean Beach.
    • Nate, for giving feedback on this write-up and joining me on the way to Sacramento.
    • Michael for joining the first 90 miles of the ride .
    • My dad, for joining me all the way through Virginia.

    P.S. If you want to follow every step of the route via Google Street View, check out the companion site. P.P.S. Check out all 51 Strava posts that comprised this journey.




    All Comments: [-] | anchor

    floriannn(10000) 4 days ago [-]

    I have 2500 miles so far this year and could do a century any random day without preparation and I'm doubting whether or not I can do GDMBR, meanwhile this guy didn't even own a bike, didn't even do more than 30 miles once he did, and just set off across the country. I guess I should just do it.

    beezlebroxxxxxx(10000) 4 days ago [-]

    At your fitness level, you're more than capable of doing a long bikepacking trail.

    The hard part isn't really fitness (for any moderately experienced biker unless your trip has a specific time or FKT goal), it's the logistics of food + shelter, the mental grind, and dealing with possible repairs.

    hackingonempty(10000) 4 days ago [-]

    If you haven't, check out Mat Ryder's videos on YouTube. He's newly retired guy in decent shape from jogging who buys a bike and does the GDMBR while making a bunch of videos. He shows everything and at the end talks about how much he spent and how much less he could have spent if he tried harder to be frugal. You can see how an average guy without any bikepacking experience do it. You can do it too!

    https://www.youtube.com/playlist?list=PL3-zVwEVdJ-UbC1DT4tSG...

    0_____0(10000) 3 days ago [-]

    Yep just do it.

    (Gdmbr 2022)

    juliogreff(10000) 3 days ago [-]

    I often do ultra races, always trying to be at the pointy end. I have all the training, and all the fancy equipment you could possibly imagine. Doing something of the magnitude of this article though still scares the hell out of me. Every year I watch the Transcontinental Race, and every year I say 'yeah, would love to do it, but next time'. I still haven't signed up.

    The gear, the legs, they help with going faster. Whether or not you can finish (barring catastrophical mechanical or injury) is all in your head!

    blindstitch(10000) 4 days ago [-]

    I think that finding free camping outside when you are in some shit nothing town is probably the most important skill to have, which is easy with satellite maps. Once you get the hang of it you realize that every town has at least one site where you can definitely get away with pitching a tent for one night. I think I have camped this way about 80 times and have never even been asked what I'm doing. That said, state and national park campgrounds are a great deal and you sometimes meet other tourers there, so they're good for a day when you want to take it easy for a morning. I sometimes get a kick out of zooming in on nowhere, USA and looking for spots.

    And some advice for anyone doing this for the first time and feels compelled to pay to camp - never stay at a KOA, consider them an absolute last resort. There is no bigger waste of money and RV culture is extremely cursed.

    zhivota(10000) 4 days ago [-]

    I did it this way back in 2007 when I didn't even have a smartphone, you can develop an eye for it at the ground level as well. I camped 8 nights without paying once and never had an issue. The only time I had to resort to help was in suburban Cleveland area, it got dark and it was too built up to stealth camp anywhere, so I ended up stopping at the fire station and they let me camp in their yard. They are there all night anyway so they are usually fine with it.

    The weirdest spot was in another suburban area, I camped behind a row of shrubs next to a cellphone tower installation haha. Wasn't the best setup but places like that usually don't get any traffic until business hours, so as long as you're in late and out early, you're fine.

    mvdtnz(10000) 4 days ago [-]

    What's a KOA?

    mauvehaus(10000) 3 days ago [-]

    Counterpoint on RV centered 'campgrounds': they have sweet amenities like a grill, a pool, laundry, and often a building with some air conditioning and some books previous guests have left behind.

    Not an every night kind of thing, and you're unlikely to find much in the way of grass to put a tent on, but I stayed at one with another guy who was bike touring and we get like kings for the night.

    When you're digging holes in the national forest to shit in, it doesn't take much!

    testing22321(10000) 3 days ago [-]

    I've camped thousands of nights in nearly 70 countries this way.

    seizethecheese(10000) 4 days ago [-]

    Of interest for the HN audience: the founder of grubhub has a memoir that tells the stories of biking across the country and starting grubhub in parallel. I found it an enjoyable read.

    unreal6(10000) 4 days ago [-]

    name/link?

    timonoko(10000) 3 days ago [-]

    I made it twice in 1980s. Or maybe trice, at least piecewise.

    -- Note about 'prepadness'. No need for that. I started at 70km per day, but eventually made 500 km in 24h. Because good back wind and too hot for camping by the road.

    It takes about two weeks to totally numbify youres backside. Thereafter rockhard professional saddle is the best.

    https://youtu.be/8D-S8nYCwjA?si=TZfnb2qrkiZdiYU6

    fransje26(10000) 3 days ago [-]

    > https://youtu.be/8D-S8nYCwjA?si=TZfnb2qrkiZdiYU6

    TIL that Pan Am was flying Airbuses!

    > It takes about two weeks to totally numbify your backside.

    Beat you sit-bones into submission until your pain-reporting nerves give up.. The week-and-a-half before you get there, though.. Ouch.

    fransje26(10000) 3 days ago [-]

    > https://youtu.be/8D-S8nYCwjA?si=TZfnb2qrkiZdiYU6

    Quite a helmet! Cool, eclectic video. With a bit of a teleport jump between the Pecos river and NY. :-)

    Chipen(10000) 3 days ago [-]

    My friend and I attempted to cycle over 1300 kilometers during our university summer vacation without any special training. We just started off. This experience was very memorable because we had limited funds; I didn't have a sun-protective suit, and the bicycle was borrowed. Some protective gear was even handmade by my friend's mother, for which I am truly grateful. There was a particularly embarrassing incident where I even used women's sanitary pads because the area had become numb after long hours of cycling, and I needed something to cushion it. My friend bought them for me and insisted I use them. We also met many interesting people along the way. In short, although it was challenging, it was very fun. I believe that in life, some out-of-the-ordinary things are worth doing, but please always pay attention to safety.

    Chipen(10000) 3 days ago [-]

    It took us ten days.

    ch33zer(10000) 4 days ago [-]

    Congrats! It was super interesting to read about the western express, when I did this a few years ago I did the astoria route: https://blaise.bike/

    Did you look into different tires? 8 flats seems like a lot. I got exactly one running schwalbe marathon plus tires.

    Overall what was your favorite part of the trip?

    dmwiens(10000) 4 days ago [-]

    Not OP, but I also went across America along the Northern Tier in 2023 with Schwalbe Marathon Plus's. I think I got 9 flats total, 7 of which were in Montana for some reason. I always tried to investigate and eliminate the source of the flat, but sometimes you are just repeatedly unlucky (in my experience).

    benjbrooks(3646) 4 days ago [-]

    i didn't look into different tires. my hypothesis is that most of my flats can be attributed to all the weight being on the back tire.

    favorite part was jumping into extended conversations with strangers. from a scenery perspective, coming down into Lake Tahoe from Eldorado was just absolutely stunning. same when I went past Bryce Canyon.

    googlryas(10000) 4 days ago [-]

    After getting 4 flats in 4 days on a bike trip, I had good luck with anti-puncture kevlar tire liner tape.

    rd(10000) 4 days ago [-]

    I'd love to do this one day! Curious - after reading, the part about wildlife scares me. Did you ever run into genuinely worrying situations with wildlife? Hearing about Black widow spiders alone makes me want to only do this with a van following behind me to sleep in at night!

    wincy(10000) 4 days ago [-]

    That black widow spider could be inside your house right now. Houses afford us protection but not immunity from these things. Spiders are notoriously resistant to pesticides as they require direct contact since they don't clean themselves like insects do (thus not ingesting the poison on the floor or wherever they're creeping along).

    bluGill(10000) 4 days ago [-]

    Most wildlife is somewhat afraid of humans so long as they are not taught otherwise. They know you are big and don't know if you are going to eat them so they stay away. Mountain lions are the only possible exception. So long as you don't get close and don't give them reason to get close they will generally leave you alone.

    The above is why it is critical to keep food either hung in a tree or in bear proof containers. So long as bears don't see humans and think 'I've found food near them' they will stay away - but once they realize humans mean food there is trouble. Wild areas rarely have problems - causal campers don't realize how important proper bear protection is and over time bears have figured it out.

    The black widow and a few other spiders and insects are exceptions - they will target you. (though mostly spiders leave you alone)

    JKCalhoun(3408) 4 days ago [-]

    Ha ha, I felt like you did when I moved to California and found them everywhere when I started looking for them. Never got bit in the 26 years I lived among them.

    And people there were freaked out when they heard I was from Kansas and thought little of having grown up around the perhaps more frightening Brown Recluse.

    You'll be fine.

    benjbrooks(3646) 4 days ago [-]

    i was a little worried about bears for the night or two i was in bear country but my fear of cars and weather was far more top of mind

    googlryas(10000) 4 days ago [-]

    I bike packed 2000 miles around Europe, and one time in the mountains outside San Sebastian I was chased by a black bear. Weird people were probably the most dangerous wildlife, but like OP, basically every interaction with strangers I had was positive. But, I did move my tent a few times after setting it up upon realizing that the weird person I interacted with earlier knew where I was sleeping.

    jaxtracks(10000) 4 days ago [-]

    One theme that pops out to me here is the reliance on other people being a positive experience for the author. In the software field, we tend to live pretty high up the economic value chain, which can abstract us a bit from participation in the more grassroots co-operative aspect of society. This can be alienating and warp worldview.

    When I'm hitchhiking to support packrafting trips or get back to where I launched my paraglider, I have no say in who I'm going to be chatting with and feeling gratitude towards. Initially that feeling of being reliant on whoever comes my way was difficult to adjust to after the false sense of individualism that a high paying job in a bubble of similar people brings.

    The benefit though is enormous. Now I stop to help anyone who's broken down on the side of the road despite the flash judgements their car or bumper stickers might bring. I'm much more aware of the value and interconnectedness of our society, and feel inspired to actively seek to contribute instead of remaining aloof. Most importantly, I realize that there's a whole lot of people out there looking to help people out at any turn, and that gives me a lot of faith.

    cynicalpeace(10000) 4 days ago [-]

    I hitchhiked Mainland China in 2019, and it's true that you are constantly relying on the kindness of other people.

    But I would argue that the type of person that does this kind of thing is very independent and thrives in an individualist environment.

    After all- it's you that's inserting yourself into an environment of strangers.

    When I was in China, people were bewildered as to why anyone would ever hitchhike. Whereas in America, a 5 year old knows what hitchhiking is.

    raffael_de(10000) 4 days ago [-]

    I made similar experiences - some also through hitch hiking. One major takeaway for me was how often my 'flash judgements' are wrong or unfair. I'd also say that asking for help and trusting is more of a strength one has to develop and nurture than a sign of weakness, which is what I used to believe.

    schmookeeg(10000) 4 days ago [-]

    Thank you for this. You gelled several ideas I was ruminating on over my morning tea -- my aloofness and my sneaking suspicion that self-sufficiency is isolating from society at large.

    I still pull over to help motorists. You've inspired me to look for more opportunities like those. :)

    dangus(10000) 3 days ago [-]

    I feel like this comment and the article itself together in context have kind of a sour taste for me.

    Just the fact that it takes such a great effort to experience first-hand how poorer people just help each other out because nobody has money, so they help. But for a tech bro to do that they have to engage in a self-indulgent hobby and cosplay as poor like they're on Undercover Boss.

    Ironically this effort to relate to other real live humans with normal incomes is only possible by indulging in the ultimate luxury, which is taking major time off of work rather than being stuck working a shit job.

    This is all done with a straight face while jamming a sentence full of words like 'paraglider' and 'packrafting.'

    This whole subject is all so stereotypical tech bro in such an unappealing way.

    Maybe this sounds unnecessarily bitter, but I think a valid alternate take on this is that privileged people are taking advantage of the kindness of others to get a bunch of help they don't need to help them achieve a goal that is a frivolous luxury. It's great we all get to feel warm fuzzy gratitude but it seems like the NPCs in this main character syndrome story are the people inconvenienced by the OP.

    Example: asking the fire department for a place to sleep, they probably feel bad so they let OP sleep in the fire department. But as a tech startup founder and software engineer, OP could have almost certainly afforded a basic motel each night with minimal to zero planning and effort and not resorted to inconveniencing other people.

    It feels a little bit like your CEO going to the food bank doesn't it? The median firefighter earns under $60k and dude who has probably outearns that salary in passive investment income is asking for a place to crash. I bet if the firefighter knew that they'd surely still be nice on the outside but they'd probably have a negative story to tell their spouse when they got home.

    I completely understand that not booking a motel facilitated human connection and all that loveliness but I sense that the benefit is very one-sided. In Zuckerberg-esque style, the tech bro gets to cosplay as a human with real emotions, while on the 'normie NPC' side they get to deal with a tech bro on a bicycle asking for weird shit while they're just trying to get through a shift.

    giantg2(10000) 3 days ago [-]

    'we tend to live pretty high up the economic value chain, which can abstract us a bit from participation in the more grassroots co-operative aspect of society.'

    I really don't see this as being directly true. Most sorts of interactions where we would depend on others/strangers would happen outside of a job, just like all the examples you give. Maybe there's some truth to the stereotype that us IT guys are nerds and participate in fewer IRL group hobbies, which could make your statement indirectly true. However, there's still communities build around stuff like MMORPGs, FOSS, etc where people are from different backgrounds and regions. But then again, maybe I'm the odd one out as a middle class developer with everyone making more than me.

    7402(3095) 3 days ago [-]

    > Now I stop to help anyone who's broken down on the side of the road

    I have a certain amount of fear about doing this sort of thing. I am ashamed of that, too.

    When I was in college (this was in a small city), I was walking at night by the library and I saw someone trip and fall in front of me. I asked if they were hurt and if I could help. He hobbled up and said yes, one leg was injured, but he just needed some help to get back to his car. I helped walk him four or five blocks, supporting his shoulder. In a darker bit of street, his friend tackled me to the ground and threatened to kill me with his gun. He took my wallet, ordered me not to stir from where he pushed me under a car, and they ran off. To be explicit here, the tripping and falling was fake.

    The campus police took me to the student health services; my knee was banged and slightly scraped from the tackle. I related the story to the doctor and he said, 'Well, you can't stop helping people.' On the other hand, the cop just said, if anything like that ever happens, I didn't have to handle it myself, just call them, they were happy to come and assist anyone who might need help on campus.

    I still help others when I can, but I am always cautious about my environment and assessing the circumstances

    soared(10000) 4 days ago [-]

    Props to the author for grinding through this, but I think a very strongly worded and formatted warning is needed at the top. Embarking on this trip with so little knowledge meant putting yourself far away from civilizations while criminally underprepared.

    I love the energy of Supertramps, but there is a reason they are controversial. It would be very easy to make a mistake and be in big trouble - underestimating water needs in a barren stretch, a hole in your tire (not tube) and not knowing how to fix it, etc. it's pure luck you didn't not over exert a small muscle or ligament locking you out of cycling during recovery.

    pavel_lishin(234) 4 days ago [-]

    For what it's worth, he did carry a satellite phone. But I do agree - this felt like a wildly optimistic decision to make :P

    1024core(10000) 4 days ago [-]

    Author was in the middle of prepping for the NYC marathon, so they were in decent shape physically.

    My fat ass would have given up before I even reached the Bay Bridge.

    That reminds me: the author did not mention how they crossed the Bay Bridge. There is no cycling path from SF to EB AFAICT.

    xandrius(10000) 4 days ago [-]

    Just to be fair, Supertramps are not controversial for those very valid reasons; those reasons require thought, empathy and actual understanding of the situation they are in.

    Those kinds of lifestyles generally create a knee-jerk reaction to people merely because they are different than the 'normalcy'. That is clear because, while some people are indeed being lucky/foolish in their endeavours (totally fine by me unless they don't directly hurt others with their choices), some other people have a pretty solid plan/foundation for being able to handle such a lifestyle and people still give them grief.

    My lifestyle is far from an extreme one and I still get puzzled questions and the usual 'oh, one day, you'll stop and grow up' kind of comments. Imagine if I had decided to drop everything and start cycling around the world.

    mturmon(10000) 4 days ago [-]

    Hmm, this take seems too all-or-nothing to me. (I made a similar trip with similar prep - bought the bike a month before going.)

    The first chunk of the trip is very civilized, and you can use that to build skills before you get out in rural Utah.

    If you have some experience with dry-country hiking, you understand about bringing water. That's the main threat. Most of the other mishaps you can think of are just inconvenient/unpleasant - 'made poor time, got stuck at dusk in the middle of nowhere with only the snacks in my panniers, and had to camp by the roadside'.

    The author did prep for some other gotcha's, including having safety gear and doing some physical training in advance.

    JKCalhoun(3408) 4 days ago [-]

    I confess that I am in the camp that is inclined to say, fuck it, throw caution to the wind.

    I reflect on the times in my life when I did just that and I have been amply rewarded with a life having been made just a little more worth having lived.

    Seeing people holed up because of their fears makes me sad. I suppose the thing that I am most afraid of is finding out too late that I am too old to do these sort of things with the few years that I may have left in the world.

    (And that goes as well to spending time with my daughters, wife, family.)

    cynicalpeace(10000) 4 days ago [-]

    nope, you really don't need so much prep to do this type of thing. I've done these types of things multiple times and whenever I prepped too much, the experience was actually worse- heavier bags, less spontaneity, etc

    zhivota(10000) 4 days ago [-]

    Life is risk. Compared to journeys undertaken by those in the past, this trip had an extremely minimal chance of disaster. I mean, the guy had a satellite phone! Unless he literally crashed his bike and died on the side of the road, the worst outcome here was a big bill from emergency services when they had to come rescue him from somewhere.

    I rode my bike around Lake Erie back in 2007 without even a smart phone. I didn't have a map of places to stay, I just scoped out surreptitious camping sites mostly if I didn't happen past a campground at the right time of day.

    PaulDavisThe1st(3579) 4 days ago [-]

    2 years ago, I rode solo from Santa Fe to Seattle (about 1600 miles). The ride crossed some of the emptiest terrain in the lower 48 states of the USA. I have done several significant bike tours in the past, have travelled throughout the west in a powered vehicle and generally know how to look after myself in the wilderness.

    I fully expected to face several significant sections where risks where high, notably from lack of water but also just general remoteness.

    The reality was quite different. Just the distribution of gas stations meant that water supply was rarely a problem (though I did have a fancy australian 4 liter bottle on my bike and water bladder on my trailer). There was one day when I came close to running out and that was a little scary, but tiny sips and another 12 miles got me to a gas station.

    But it wasn't just gas stations. There are not many places in the lower 48 where you can go 40 miles without passing some sort of human habitation if you're on a paved road. The Mojave and parts of Nevada might be an exception. I didn't need to get help from any such places, but I was always aware that I was passing by them.

    In addition, sure, some of the most back- of the backroads I took got very little traffic, it was still the case that there would be at least a car every 2 hours or so.

    My point is this: if you're travelling on paved roads in the lower 48, you are extremely unlikely to die from mistakes arising from unpreparedness. You might suffer a bit, but you will encounter someone who is very likely to be willing to help you.

    One thing I would say, however: in years and decades past, I would never have had any hesitation riding or walking down a farm/ranch driveway if I needed water or help. News events over the last few years involving shootings of 'strangers' in driveways now make me extremely reluctant to do such a thing. I contemplated this often on that ride, and if that situation had arisen, my plan was to stay on the road and make as much noise as I could before being OK'ed to cross their property line. A sad change for me, and for the country.

    mauvehaus(10000) 4 days ago [-]

    I did 7,000 miles of touring in the US in 2006 without a cell phone, relying mostly on a paper Rand McNally road atlas and partially on Adventure Cycling's paper maps. I did most of the Western Express, and a good chunk of the Trans-Am between where they join and Missouri.

    You are greatly overestimating the hazards associated with bike touring.

    Folks are decent, and if you're on Adventure Cycling's routes, they are familiar with seeing cyclists. People offer help and stop to ask if you're ok. The route is well travelled by cars; if you passed out from heat exhaustion in the middle of the road, you'd be no more than an hour from being found, and in most places, a good deal less.

    Water is pretty readily available, and most of the route passes through populated areas where you're a knock on a door away from a fillup if you're desperate. Mostly, I filled up with water at gas stations or where I camped in the evenings.

    If you can ride a bike, fix a flat (you'll likely get a lot. I did), camp in a tent, and cook over a camp stove, you can do what the author of TFA did. Maybe a little/lot slower (75 miles a day is hauling ass fully loaded touring) but it's totally doable.

    NB: Trek discontinued the 520 in 2023. Dozens of us are furious. The Surly Disc Trucker is well-recommended for touring, though I haven't been on one personally. Any bike that fits you with relaxed enough geometry, a long enough wheelbase, low enough gears, and the capacity to carry you and your gear will do.

    dharmab(10000) 3 days ago [-]

    I've done a lot of motorcycle touring and there's only a few things that concern me at all now:

    1. The few remaining 100 mile stretches of no services, when extreme weather is possible.

    2. Sundown towns, if you aren't white. Yes, they still exist.

    3. Running out of water.

    Especially nowadays, when cell phone + satellite coverage is nearly universal and affordable, you can run a phone off a small solar panel, and a credit card can fix any fuckup.

    petersteinberg(10000) 3 days ago [-]

    A very close friend de used to end his freshman year in Western Massachusetts by cycling home...

    to Portland, Oregon.

    In 1989.

    So before cell phones, satellite phones, Strava, electrolyte powders, websites full of helpful tips, Google Maps...

    He was likely criminally prepared and yet he says he had a great time. He mostly slept in the back yard of strangers and I vaguely recall that people offered him so much free food that for the entirety of the trip he spent about $35 and went through one giant tub of peanut butter (that he hauled with him). He got some sort of puncture-proof tires and never got a flat.

    Skipping the dessert southwest helped avoid the risk of water shortage and she clearly got lucky and avoiding a variety of problems and it's an n of 1, but it's a data point saying one doesn't have to plan to the nth degree.

    carabiner(1041) 4 days ago [-]

    Met an Austrian guy who biked from NYC to LA in the early '90s. He had a paper list of people across the country who were bike tourer friendly who could house him, and he'd call them on payphones. He didn't have a tent, so he'd also sleep in post offices.

    googlryas(10000) 4 days ago [-]

    I never slept in a post office, but rural firefighters were always very good to me on bikepacking trips. Plying me with food and letting me sleep in their gym or somewhere around the station.

    bryanlarsen(2963) 4 days ago [-]

    Another way to do it is the way my cousin did: do it over a period of 15 years. She took a week of vacation time during most of those years to do a chunk of the route.

    ghaff(3110) 4 days ago [-]

    Section hiking on long distance trails is pretty common as well. Most people aren't in a position to just take off and do the Appalachian Trail or Pacific Crest Trail in one shot.

    fifilura(10000) 4 days ago [-]

    Some do this in 8 days

    https://en.wikipedia.org/wiki/Race_Across_America

    But 51 days is also fantastic!

    yunusabd(3234) 4 days ago [-]

    Your brother learned to read at the age of 4, but learning it at 7 1/2 is also fantastic!

    JKCalhoun(3408) 4 days ago [-]

    Deciding whether to trade sleep for distance... Wild.

    juliogreff(10000) 3 days ago [-]

    Completely different experience though, since RAAM is a supported race, a very different kind of suffering. The Trans Am is a more comparable one (though still a race): https://en.wikipedia.org/wiki/Trans_Am_Bike_Race

    downut(10000) 4 days ago [-]

    For the people who are wondering whether this is a good idea or not, lemme tell you about some x-country cyclists I met on a ride. 3 years ago in the middle of summer I was climbing Iron Springs Rd on the west side of Prescott AZ. 3 youngish cyclists were paused on the side of the road with an apparent mechanical. They had a modest amount of camping gear in their panniers. Turns out they were French, had the barest grasp of English (I have the barest grasp of French), and needed a derailleur adjusted (no gears, no climb). I fixed them up and of course I was damned curious about their situation. Turns out, they on a whim flew into NYC, bought some not serious bikes and camping gear, and... just started biking across the country! In the middle of summer! In the wrong direction! Going to LA! And their pins... NOT CYCLISTS.

    The Iron Springs climb tops out at 6000' or so, the weather is awesome in summer. However that is the end of weather happiness for 300 miles or so, because it's a steady drop from there into the desert, all the way down to the Colorado River. Temps in the 100-115F range are normal. Water is scarcer there than on just about any roads in the country. I was pretty alarmed so I got it across that they needed to show me their route. As best I could I showed them the best way on maps to not die. I tried my damnedest to get across they should not bike in the afternoons. 'extra chaud!' etc.

    And off they went. Never found out if they made it or not, but... you just can't keep humans down. They will always find a way to do the craziest things.

    stevage(3583) 3 days ago [-]

    Yeah, I'm always amazed what young people can get away with on the spur of the moment.

    Was in Kyrgyzstan recently, and there's a popular hike that everyone does (Ala Kul). But it's HARD. And the people that do it are often not hikers. It's 3 days, but it involves a massive climb at altitude, and you have all these random backpackers attempting it because...well, that's what you do. And by and large they all seem to get through it ok.

    testing22321(10000) 3 days ago [-]

    I've bumped into scores of people doing the same around Africa, from Alaska to Argentina, all over Europe etc.

    There are tons of people out there having great adventures!

    RankingMember(3502) 4 days ago [-]

    Hey man, nice one! Only critique of the write-up is I'm sure you have more pics and would love to see them interspersed or in a gallery at the end!

    benjbrooks(3646) 4 days ago [-]

    check out the map! https://map.brooks.team





    Historical Discussions: Adipose tissue retains an epigenetic memory of obesity after weight loss (April 14, 2025: 235 points)
    Adipose tissue retains an epigenetic memory of obesity after weight loss (November 24, 2024: 4 points)
    Adipose tissue retains an epigenetic memory of obesity after weight loss (November 19, 2024: 3 points)
    Adipose tissue retains an epigenetic memory of obesity after weight loss (November 21, 2024: 3 points)
    DNA from ancient graves reveals the culture of a mysterious nomadic people (April 24, 2024: 2 points)
    DNA from ancient graves reveals the culture of a mysterious nomadic people (April 29, 2024: 2 points)
    DNA from ancient graves reveals the culture of a mysterious nomadic people (April 25, 2024: 1 points)

    (235) Adipose tissue retains an epigenetic memory of obesity after weight loss

    235 points 4 days ago by paulpauper in 104th position

    www.nature.com | Estimated reading time – 37 minutes | comments | anchor

    Data reporting

    No statistical methods were used to predetermine sample size. The experiments were not randomized, and the investigators were not blinded to allocation during experiments and outcome assessment.

    Clinical sample acquisition

    Human AT biopsies were obtained from three independent studies: MTSS, LTSS and NEFA.

    MTSS

    The MTSS samples comprised samples from omental visceral AT biopsies obtained in the context of a two-step BaS treatment, which included a sleeve gastrectomy as the first step (T0) and laparoscopic RYGB as the second step (T1)16. Individuals with syndromal, monogenic, early-onset obesity or individuals with other known concurrent diseases, including acute infections or malignant diseases, were not included in the study. Individuals were not required to adhere to any specific diet before or after surgery but received individual dietary recommendations during regular visits in the obesity management centre. Insulin resistance was determined using a hyperinsulinaemic–euglycaemic clamp technique or the homeostatic model assessment for insulin resistance (HOMA-IR). Only biopsies from individuals that (1) lost 25% or more of BMI between T0 and T1 (Extended Data Table 1), (2) had undergone surgery at the Municipal Hospital Karlsruhe or Municipal Hospital Dresden-Neustadt, (3) were not diagnosed with diabetes, and (4) did not receive any glucose-lowering medication were used for snRNA-seq in this study. AT samples were collected during elective laparoscopic abdominal surgery as previously described63, snap-frozen in liquid nitrogen and stored at −80 °C. Body composition and metabolic parameters were measured as previously described64. Samples of healthy individuals who were not obese were collected during routine elective surgeries such as herniotomies, explorative laparoscopies and cholecystectomies at the same hospitals. The study was approved by the Ethics Committee of the University of Leipzig under approval number 159-12–21052012 and was performed in agreement with the Declaration of Helsinki.

    LTSS

    The human study samples comprised samples from omental visceral and subcutaneous abdominal AT, collected in the context of a two-step BaS treatment. Following an initial sleeve gastrectomy (T0), a laparoscopic RYGB was made in the second step (T1)16. Individuals with syndromal, early-onset obesity or individuals with other known concurrent diseases, including acute infections or malignant diseases, were not included in the study. Individuals did not adhere to any specific diet before or after surgery but received individual healthy diet recommendations during regular visits in the obesity management centre. Insulin resistance was determined using HOMA-IR. Only individuals that (1) lost 25% or more of BMI between T0 and T1 (Extended Data Table 1), (2) had undergone surgery at the Leipzig University Hospital, (3) were not diagnosed with diabetes and (4) did not receive any glucose-lowering medication were included. AT samples were collected during elective laparoscopic abdominal surgery as previously described63, snap-frozen in liquid nitrogen and stored at −80 °C. Body composition and metabolic parameters were measured as previously described64. Samples from healthy donors that were not obese were collected during routine elective surgeries (herniotomies, explorative laparoscopies, cholecystectomies) at the same hospital. The study was approved by the Ethics Committee of the University of Leipzig under approval number 159-12–21052012 and performed in agreement with the Declaration of Helsinki.

    NEFA study

    The NEFA study (NCT01727245) comprises samples from subcutaneous abdominal AT from individuals before and after RYGB surgery, as well as healthy controls who had never been obese8,65. For this, biopsies were obtained under local anaesthesia before (T0) and 2 yr post-surgery (T1). Only samples from individuals that (1) lost more than 25% BMI between T0 and T1, (2) were not diagnosed with diabetes at T0 and T1 and (3) did not take glucose-lowering medication were included in the present study (Extended Data Table 1). Samples from control subjects were obtained from individuals that were BMI- and age-matched to RYGB patients at T1 as reported previously8. AT samples were handled as reported before65, snap-frozen in liquid nitrogen and stored at −80 °C. The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the Karolinska Institute, Stockholm (approval number 2011/1002-31/1).

    Mice

    All mice were kept on a 12-h/12-h light/dark cycle at 20–60% (23 °C) humidity in individually ventilated cages, in groups of between two and five mice, in a pathogen-free animal facility in the SLA building at ETH Zurich. The health of mice was monitored closely, and any mouse exhibiting persistent clinical signs of ill health or distress was excluded from this study. The 16- and 29-week-old male C57BL/6J diet-induced obesity mice (catalogue no. 380050) and diet-induced obesity control mice (catalogue no. 380056) were obtained from The Jackson Laboratory and were kept on the respective diets for another 2 weeks until tissue harvest or diet switch. Different mice were used for insulin tolerance tests and glucose tolerance tests. AdipoERCre66 and NuTRAP67 mice were maintained on a C57BL/N background. Homozygous NuTRAP and AdipoERCre mice were bred to generate AdipoERCre x NuTRAP mice. AdipoERCre x NuTRAP mice were kept on HFD or chow diet for 12 or 25 weeks before tissue harvest or diet switch. The HFD used contained 60% (kcal%) fat (diet no. 2127, Provimi Kliba); the low-fat chow diet used contained 10% (kcal%) fat (diet no. 2125, Provimi Kliba). During the WL period both experimental groups received chow diet (diet no. 3437, Provimi Kliba). All animal experiments were approved by the Cantonal Veterinary Office, Zurich.

    Tamoxifen application

    The 4–5-week-old AdipoERCre x NuTRAP mice were gavaged two times with 1 mg of tamoxifen dissolved in corn oil. Tamoxifen was washed out for 2 weeks before starting HFD.

    Physiological measurements

    Glucose tolerance test

    Mice were fasted for 6 h during dark phase before administration of 1 g of glucose per kg body weight by intraperitoneal injection. Blood was collected from the tail vein at 0, 15, 30, 60, 90 and 120 min and blood glucose concentrations were measured using an Accu-Check Aviva glucometer.

    Insulin tolerance test

    Mice were fasted for 6 h during dark phase before administration of 1 U per kg body weight of human insulin (insulin Actrapid HM, Novo Nordisk) by intraperitoneal injection. Blood was collected from the tail vein at 0, 15, 30, 60, 90 and 120 min and blood glucose concentrations were measured using a Accu-Check Aviva glucometer.

    In vivo indirect calorimetry

    Measurements were obtained from one 8-cage and one 16-cage Promethion Core Behavioral System that were in the same room. Mice were habituated to the system for 36 h before measurements were started.

    Live body composition

    Mice were fasted for 6 h during dark phase. Live mouse body composition was measured with a magnetic resonance imaging technique (EchoMRI 130, Echo Medical Systems). Fat and lean mass were analysed using EchoMRI 14 software.

    Fasting insulin

    EDTA plasma was isolated from fasted blood samples (fasting 6 h). Insulin was measured with Ultra Sensitive Mouse Insulin ELISA Kit (Crystal Chem, catalogue no. 90080).

    Postprandial insulin

    EDTA plasma (50 μl) was thawed on ice and used in a custom U-PLEX assay (Meso Scale Discovery) according to the manufacturer's instructions. A Mesoscale SI 2400 was used to read the plate.

    Postprandial leptin

    EDTA plasma (50 μl) was thawed on ice and used in a custom U-PLEX assay (Meso Scale Discovery) according to the manufacturer's instructions. A Mesoscale SI 2400 was used to read the plate.

    Liver triglycerides

    First, 50 mg of frozen liver was homogenized in 1 ml of isopropanol, lysed for 1 h at 4 °C and centrifuged for 10 min at 2,000g at 4 °C. The supernatant was transferred into a new tube and stored at −80 °C until use. Triglyceride levels were measured by mixing 200 μl of reagent R (Monlab, catalogue no. SR-41031) and 5 μl of sample or Cfas calibrator dilutions (Roche, catalogue no. 10759350; lot no. 41009301), then incubating for 10 min while shaking at room temperature and measuring optical density at 505 nm (OD505) with a plate reader (BioTek Gen5 Microplate Reader).

    Cell culture experiments

    AT digestion

    AT was minced and digested at 37 °C while shaking in collagenase buffer (25 mM NaHCO3, 12 mM KH2PO4, 1.3 mM MgSO4, 4.8 mM KCl, 120 mM NaCl, 1.2 mM CaCl2, 5 mM glucose, 2.5% BSA; pH 7.4) using 2 mg of collagenase type II (Sigma-Aldrich, catalogue no. C6885-1G) per 0.25 g of tissue. After 30 min tissues were resuspended, and for ingAT digestion continued for 15 min whereas epiAT was processed immediately. An equal volume of growth medium (DMEM (Gibco, catalogue no. 31966021), 10% FBS (Gibco, catalogue no. 10500-064, Lot no. 2378399H), 1% penicillin-streptomycin (Gibco, catalogue no. 15140-122)) was added and digested tissue was centrifuged for 4 min at 300g, and the floating fraction was transferred into a new Falcon tube and kept at 37 °C. The SVF was resuspended in 5 ml of erythrocyte lysis buffer (154 mM NH4Cl, 10 mM NaHCO3, 0.1 mM EDTA, 1% penicillin-streptomycin), incubated at room temperature for 5 min, filtered through a 40 μM mesh filter and centrifuged for 5 min, 300g. The SVF was resuspended in growth medium and counted.

    SVF differentiation

    A total of 10,000 cells were plated into one well of a collagen-coated (Sigma-Aldrich, catalogue no. C3867) 96-well plate and kept in culture until they reached confluency, with media change every 48 h. At 2 d post-confluence, medium was changed to induction medium (DMEM, 10% FBS, 1% penicillin-streptomycin, 10 nM insulin (Sigma-Aldrich, catalogue no. I9278), 0.5 mM 3-isobutyl-1-methylxanthin (Sigma-Aldrich, catalogue no. I7018-1G), 1 μM dexamethasone (Sigma-Aldrich, catalogue no. D4902), 1 μM rosiglitazone (Adipogen, catalogue no. AG-CR1-3570-M010)). After 48 h medium was changed to maintenance medium (DMEM, 10% FBS, 1% penicillin-streptomycin, 10 nM insulin). Medium was changed every 48 h for 8 d.

    AdipoRed assay

    The SVF was cultured as described and controls were either kept in growth medium or only maintenance medium without induction. On day 8 after induction, cells were washed twice in PBS, and AdipoRed (Lonza, catalogue no. LZ-PT-7009) reagent was used according to the manufacturer's instructions and read with a plate reader (BioTek Gen5 Microplate Reader).

    Primary adipocyte culture

    Primary floating adipocytes were cultured under membranes according to Harms et al.68. Packed adipocytes (30 μl) were seeded onto one membrane and kept in inverted culture for 48 h in maintenance medium (DMEM-F12 (Gibco, catalogue no. 31330095), 10% FBS, 1% penicillin-streptomycin, 10 nM insulin). After 48 h of maintenance, adipocytes were washed and serum and glucose starved overnight in KREBBS-Ringer buffer (120 mM NaCl, 4.7 mM KCl, 1.2 mM KH2PO4, 1.2 mM MgSO4, 2.5 mM CaCl2, 25 mM HEPES (Lonza, catalogue no. BEBP17-737E), pH 7.4) and 2.5% fat-free BSA (Sigma-Aldrich, catalogue no. A6003).

    Glucose uptake

    Glucose uptake from primary adipocytes was measured using the Glucose Uptake-Glo Assay Kit (Promega, catalogue no. J1341) according to the manufacturer's instructions. Adipocytes were preincubated with 5 nM insulin for 15 min before 2-deoxy-d-glucose was added at 1 mM final concentration. Protein concentration was measured using a Pierce 660 nm Protein Assay Kit (Thermo Fisher, catalogue no. 22662) and the Ionic Detergent Compatibility Reagent (Thermo Fisher, catalogue no. 22663). Both assays were read with a plate reader (BioTek Gen5 Microplate Reader).

    C16 uptake

    Starved adipocytes were incubated with 5 nM BODIPY-palmitate (Thermo Fisher, catalogue no. D3821) in the presence of 10 nM insulin for 1 h. Subsequently, adipocytes were washed twice and lysed in 200 μl of RIPA buffer. Then, 100 μl of lysate was used to measure BODIPY signal. Diluted lysate was used to measure protein concentration using a DC Protein Assay Kit II (Bio-Rad Laboratories, catalogue no. 5000112) for normalization. Both assays were read with a plate reader (BioTek Gen5 Microplate Reader).

    Histology

    Tissues were collected, fixed in 4% PBS-buffered formalin for 72 h at 4 °C and stored in PBS at 4 °C. Following paraffin embedding, tissues were sent to the pathology service centre at Instituto Murciano de Investigación Biosanitaria Virgen de la Arrixaca for sectioning, trichrome staining, haematoxylin and eosin staining, and imaging. Tissues from two independent experiments were sent for sectioning.

    Adipocyte size quantification

    Images of ingAT and epiAT were taken with 3DHISTECH Slide Viewer 2 and then analysed with Adiposoft69 using Fiji ImageJ70. Five to ten images were taken of each section belonging to a biological replicate (n = 4).

    Sample processing and library preparation

    Isolation of nuclei from mouse tissue

    Nuclei were isolated from snap-frozen epiAT in ice-cold Nuclei Extraction Buffer (Miltenyi, catalogue no. 130-128-024) supplemented with 0.2 U μl−1 recombinant RNase Inhibitor (Takara, catalogue no. 2313) and 1× cOmplete EDTA-free Protease Inhibitor (Roche, catalogue no. 5056489001) using the gentleMACS Octo Dissociator (Miltenyi, catalogue no. 130-096-427), using C-tubes (Miltenyi, catalogue no. 130-093-237). Nuclei were subsequently filtered through a 50 μm cell strainer (Sysmex, catalogue no. 04-0042-2317) and washed two times in PBS-BSA (1% w/v) containing 0.2 U μl−1 RNase inhibitor. For snRNA-seq, five mice were pooled per condition.

    Isolation of nuclei from human tissue

    Nuclei were isolated from snap-frozen human AT (10–50 mg) in ice-cold Nuclei Extraction Buffer (Miltenyi, catalogue no. 130-128-024) supplemented with 1 U μl−1 recombinant RNase Inhibitor (Takara, catalogue no. 2313), 1× cOmplete EDTA-free Protease Inhibitor (Roche, catalogue no. 5056489001) and 10 mM sodium butyrate using the gentleMACS Octo Dissociator (Miltenyi, catalogue no. 130-096-427), using C-tubes (Miltenyi, catalogue no. 130-093-237).

    The nuclei suspension was filtered through a 50 μm strainer, supplemented with PBS-BSA (1% w/v) containing 1× protease inhibitor and RNase inhibitor and centrifuged at 4 °C, at 500g for 10 min. The nuclei pellet was resuspended in 1 ml of PBS-BSA (1%, w/v) supplemented with RNase inhibitor (0.5 U μl−1) and 1× protease inhibitor and was transferred into a new 1.5 ml tube.

    snRNA-seq of AT

    Nuclei were counted using a haemocytometer and Trypan blue, concentration was adjusted to approximately 1,000 nuclei per μl and they were loaded onto a G-chip (10x Genomics, catalogue no. PN-1000127). Single-cell gene expression libraries were prepared using the Chromium Next GEM Single Cell 3′ v3.1 kit (10x Genomics) according to the manufacturer's instructions. To accommodate for low RNA content, two cycles were added to the complementary DNA amplification PCR. Libraries were pooled equimolecularly and sequenced in PE150 (paired-end 150) mode on a NovaSeq 6000 with about 40,000 reads per nucleus at Novogene or using a NovaSeqX at the Functional Genomics Center, Zurich.

    Paired TRAP–seq, CUT&Tag and ATAC–seq

    Paired TRAP–seq, CUT&Tag and ATAC–seq protocols were developed on the basis of published protocols67,71,72,73,74.

    Ribosome and nuclei isolation

    Nuclei and ribosomes were isolated from snap-frozen epiAT from AdipoERCre x NuTRAP mice in ice-cold Nuclei Extraction Buffer (Miltenyi, catalogue no. 130-128-024) supplemented with 0.2 U μl−1 recombinant RNase Inhibitor (Takara, catalogue no. 2313), 1× cOmplete EDTA-free Protease Inhibitor (Roche, catalogue no. 5056489001) and 10 mM sodium butyrate using the gentleMACS Octo Dissociator (Miltenyi, catalogue no. 130-096-427), using C-tubes (Miltenyi, catalogue no. 130-093-237). The nuclei suspension was filtered through a 50 μm strainer and centrifuged at 4 °C, 500g for 5 min. The supernatant was transferred into a new tube and supplemented with 2 mM dithiothreitol, 100 μg ml−1 cycloheximide (Sigma-Aldrich, catalogue no. 01810) and 1 mg ml−1 sodium heparin (Sigma-Aldrich, catalogue no. H3149-10KU) and kept on ice. The nuclei pellet was resuspended in 1 ml of PBS-BSA (1%, w/v) supplemented with 0.2 U μl−1 RNase inhibitor, 1× cOmplete EDTA-free Protease Inhibitor and 10 mM sodium butyrate and transferred into a new 1.5 ml tube. Nuclei were centrifuged and subsequently bound to Dynabeads MyOne Streptavidin C1 beads (Thermo Fisher, catalogue no. 65002) for 30 min at 4 °C followed by three washes with PBS-BSA (1% w/v).

    TRAP–seq

    Per sample, 25 μl of GFP-Trap Magnetic Agarose Beads (ChromoTEK, catalogue no. gtma-20) were washed in 2 ml of polysome lysis buffer (50 mM TRIS-HCl pH 7.5, 100 mM NaCl, 12 mM MgCl2, 1% Igepal CA-630 (Sigma-Aldrich, catalogue no. I8896), 1× protease inhibitor). The supernatant was mixed with the beads and incubated at 4 °C on a rotator for 1–2 h. Subsequently, tubes were put on a magnetic stand and the supernatant was removed. The beads were washed three times with polysome lysis buffer supplemented with 2 mM dithiothreitol (Sigma-Aldrich, catalogue no. D0632-10G), 100 μg ml−1 cycloheximide (Sigma, catalogue no. D0632-10G) and 1 mg ml−1 sodium heparin (VWR, catalogue no. ACRO411210010) and resuspended in 1 ml Trizol (Thermo Fisher, catalogue no. 15596). Trizol preserved samples were kept at −80 °C until RNA isolation. RNA was isolated by adding 200 μl of chloroform (Sigma-Aldrich, catalogue no. 288306) to samples, followed by shaking and centrifugation at 4 °C, 12,000g for 15 min. The aqueous phase was transferred into a new tube and RNA was isolated and DNase treated with the RNA Clean and Concentrator-5 kit (Zymo Research, catalogue no. R1016), following the manufacturer's instructions.

    RNA libraries were prepared by performing reverse transcription and template switching using Maxima H Minus reverse transcriptase (Thermo Fisher, catalogue no. EP0753), a template switch oligo and an oligodT primer to generate full-length cDNA. cDNA was amplified using the KAPA Hotstart 2x ReadyMix (Roche Diagnostics, catalogue no. 7958935001). Then, 1–3 ng of cDNA was tagmentated using 1.3 μg of Tn5 and amplified using KAPA HiFi plus dNTPs (Roche Diagnostics, catalogue no. 07958846001) and the following PCR settings: 72 °C 5 min, 98 °C 30 s, 10 cycles of 98 °C for 10 s, 63 °C for 30 s, 72 °C for 1 min, hold at 4 °C. Libraries were quantified using the KAPA library quantification kit (Roche Diagnostics, catalogue no. 079602), and sequenced in PE150 mode on a NovaSeq 6000 at Novogene.

    CUT&Tag

    CUT&Tag was performed as previously described with minor adjustments74,75. All buffers were supplemented with 1 x cOmplete EDTA-free Protease Inhibitor and 10 mM sodium butyrate. Briefly, nuclei bound to beads were aliquoted into 96-well LoBind plates (Eppendorf, catalogue no. 0030129547) and incubated with primary antibodies—anti-H3K4me3 (abcam, catalogue no. ab8580), anti-H3K27me3 (Cell Signaling Technology, catalogue no. C36B11), anti-H3K27ac (abcam, catalogue no. ab4729), anti-H3K4me1 (abcam, catalogue no. ab8895)—overnight at 4 °C. With the plate on a magnet, the primary antibody solution was removed, and the beads were resuspended in secondary antibody solution (guinea pig anti-rabbit IgG (antibodies-online, catalogue no. ABIN101961)) and incubated at room temperature. pA-Tn5 was bound to antibodies, and transposition was performed at 37 °C and stopped using TAPS-Wash solution. Nuclei were lysed and pA-Tn5 decrosslinked using SDS-release solution. PCR was performed using KAPA HiFi plus dNTPs (Roche Diagnostics, catalogue no. 07958846001) with the following PCR settings: 72 °C 5 min, 98 °C 30 s, 15 cycles of 98 °C 10 s, 63 °C 30 s, and 72 °C final extension for 1 min, hold at 4 °C.

    ATAC–seq

    Beads with nuclei were resuspended in ATAC–seq solution (10 mM TAPS pH 8.5, 5 mM MgCl2, 10% DMF (Sigma-Aldrich, catalogue no. D4551), 0.2 μg μl−1 transposase (Tn5)) and incubated at 37 °C for 30 min. Thereafter, 100 μl of DNA binding buffer (Zymo Research, catalogue no. D4003-1) was added and samples were stored at −20 °C. Then, DNA was extracted using Zymo DNA Clean and Concentrator-5 (Zymo Research, catalogue no. D4004). Library amplification was performed using KAPA HiFi plus dNTPs (Roche Diagnostics, catalogue no. 07958846001) and the following PCR settings: 72 °C 5 min, 98 °C 30 s, 10 cycles of 98 °C 10 s, 63 °C 30 s, 72 °C 1 min, hold at 4 °C.

    Both ATAC–seq and CUT&Tag libraries were cleaned using SPRI beads, eluted in nuclease-free water and pooled equimolecularly after library quantification using the KAPA library quantification kit (Roche Diagnostics, catalogue no. 079602). Libraries were sequenced in PE150 mode on a NovaSeq 6000 at Novogene.

    Sequencing data processing

    snRNA-seq data processing and analysis

    Data integration and differential expression analysis for mouse snRNA-seq

    The 10x Genomics Cell Ranger v.6.1.2 pipeline was used for demultiplexing, read alignment to reference genome mm10-2020A (10x Genomics), barcode processing and unique molecular identifier (UMI) counting with Include introns argument set to 'True'. The R package Seurat v.4.1.0 (ref. 76) was used to process, integrate and analyse datasets. scDblFinder77 was used to identify and remove doublets. Nuclei with unique feature counts less than 500 or greater than 3,000 and UMI counts greater than 40,000 were discarded during quality control (Extended Data Fig. 11a). Highly expressed genes such as mitochondrial genes, pseudogenes and Malat1 were excluded from the count matrix before normalization. SoupX78 was used to estimate potential ambient RNA contamination in all samples, but no sample required any correction. Samples were normalized using sctransform and integrated using the CCA (canonical correlation analysis) method built into Seurat. Filtered, normalized and integrated nuclei data were clustered by using the Louvain algorithm with a resolution of 0.4 using the first 30 principal components. Cluster markers were identified on the basis of differential gene expression analysis (Wilcoxon rank-sum test with |log2FC| > 0.25 and adjusted P < 0.05). Clusters were then annotated on the basis of known markers from literature34,36,37,46,79,80. Additionally, our manual cluster annotation was confirmed by reference mapping against a reference male mouse epiAT34 dataset (Extended Data Fig. 11b,c). Differential expression analysis (Wilcoxon rank-sum test with |log2FC| > 0.5 and adjusted P < 0.01) per cell type between different conditions was done using the FindMarkers function from Seurat. Differential expression analysis hits were intersected with a list of epigenetic modifier genes (see the Source Data to Extended Data Fig. 8) to investigate their expression dynamics. For visualization of snRNA-seq data we used the R package SCpubr v.1 (ref. 81).

    Data integration and differential expression analysis for human snRNA-seq

    The 10x Genomics Cell Ranger v.7.2.0 pipeline was used for demultiplexing, read alignment to reference genome GRCh38-2020-A (10x Genomics), barcode processing and UMI counting, with force cells set to 10,000. The R package Seurat v.4.1.0 (ref. 76) was used to process, integrate and analyse datasets. scDblFinder77 was used to identify and remove doublets. Nuclei with unique feature counts <300 or >4,000 (LTSS) / 6,000 (NEFA), UMI counts >15,000 (LTSS) / 25,000 (NEFA) and mitochondrial gene counts greater than 5% were discarded during quality control (Extended Data Fig. 12). SoupX78 was used to estimate and correct for potential ambient RNA contamination in all samples. Samples were normalized using sctransform and integrated using the CCA method built into Seurat. Filtered, normalized and integrated nuclei data were clustered by using Louvain algorithm using the first 30 principal components. For each study, the cluster resolution was determined using the R package clustree82. Cluster markers were identified on the basis of differential gene expression analysis (Wilcoxon rank-sum test with |log2FC| > 0.25 and adjusted P < 0.01). Clusters were then annotated on the basis of known markers from literature34,35,36,37,83. Additionally, our manual cluster annotation was confirmed by reference mapping against reference human white AT atlas34 (Extended Data Figs. 2 and 3). For each AT depot, adipocytes from two studies were integrated together using the first 20 principal components following the steps as mentioned above. Differential expression analysis (Wilcoxon rank-sum test with |log2FC| > 0.5 and adjusted P < 0.01) per cell type between different conditions was done using the FindMarkers function from Seurat. Differential expression analysis hits were validated using MAST and likelihood-ratio tests using the FindMarkers function from Seurat. For visualization of snRNA-seq data, we used the R package SCpubr v.1 (ref. 81).

    SNP-based demultiplexing of human snRNA-seq datasets

    To perform SNP calling and demultiplexing on the pooled samples, cellsnp-lite84 was first used to call SNPs on a cell level using the 1000 Genomes-based reference variant call file for hg38 at a resolution of 7.4 million SNPs. SNPs with less than 20 counts and a minor allele frequency of less than 10% were filtered out, as per the developer recommendations. Finally, the tool vireo85 was used to demultiplex the pooled data using the cellsnp-lite-derived genotype information.

    For each donor, we analysed tissue composition and removed nuclei belonging to donors in the case in which no nuclei were assigned as adipocytes (one case in NEFA) or more than 50% or nuclei were assigned as B cells (one case in MTSS; lean donor) after correspondence with surgeons.

    Transcriptional retention

    DEGs from obese and WL cells from mouse and human were overlayed, respectively. A DEG was considered restored if it was no longer deregulated in WL cells when compared with controls. If not restored, we considered a DEG part of a transcriptional memory. Clusters identified as similar cell types (for example, three clusters of endothelial cells) were merged for DEG quantification but not differential expression analysis itself. For human snRNA-seq, only cell types for which we obtained at least 30 cells per donor were considered for the retention analysis. T cells were not included in differential expression analysis or transcriptional retention analysis. For integrated human adipocyte differential expression analysis quantification, non-coding transcripts were excluded.

    TRAP–seq

    Quality control of the raw reads was performed using FastQC v.0.11.9. Raw reads were trimmed using TrimGalore v.0.6.6 (https://github.com/FelixKrueger/TrimGalore). Filtered reads were aligned against the reference mouse genome assembly mm10 using HISAT2 v.2.2.1. Raw gene counts were quantified using the featureCounts86 program of subread v.2.0.1. Differential expression analysis was performed using the R package EdgeR87, with |log2FC| ≥ 1 and nominal P < 0.01 as cut-offs.

    CUT&Tag and ATAC–seq data processing and analysis

    Quality control of CUT&Tag and ATAC–seq data and generation of bedgraph files was performed as described previously75. Peaks were called from CUT&Tag sequencing and ATAC–seq libraries on individual bedgraph files using SEACR88 v.1.3 in stringent mode with a peak calling threshold of 0.01. Peaks overlapping with mouse blacklist regions89 were filtered out. Called peaks were annotated using the R package ChIPSeeker90. Peak fold enrichment against genomic features was calculated using the formula: Σ(base pair (bp) overlap) × genome_size/[Σ(bp hPTM peak) × Σ(bp genomic feature)]. Genomic features tracks were downloaded from ENCODE using the R package annotatr91. Visual quality control of bam files was performed with Seqmonk92. Called peaks were combined to generate a union peak list and quantified using the R package chromVAR93 v.1.16, generating a raw peak count matrix.

    MOFA

    MOFA50,94 was run to identify the driving variation source across all conditions using all data modalities. For each modality, the top 3,000 variable features (genes or peaks) between all samples were selected using the R package DESeq2 (ref. 95) and used as input to train the MOFA model. The trained MOFA model represented data variability in terms of five latent factors, which were further explored and visualized.

    Generation of enhancer tracks of adipocytes

    Adipocyte chromatin states were identified using ChromHMM v.1.22 (ref. 96) in concatenated mode with binned bam files (200-bp bins) from each condition combining all hPTMs and ATAC–seq. After final model selection75 with eight chromatin states and emission parameter calculation of hPTMs and ATAC–seq, chromatin state fold enrichment was performed against genomic features and ENCODE candidate cis-regulatory elements. Enhancer states were selected on the basis of genomic localization and hPTM enrichment. Subsequently, an enhancer track was generated per condition and merged for differential analysis.

    Differential analysis of hPTMs and ATAC–seq

    Promoters

    Promoters were defined using the getPromoters function from ChIPSeeker with TxDb.Mmusculus.UCSC.mm10.knownGene as input and setting the TSSRegion to c(-2000, 2000). Peaks overlapping with promoters were extracted using the annotatePeak function from ChIPseeker90 by selecting peaks annotated as promoters. For differential analysis, our raw peak count matrix was filtered for these promoter regions and counts were aggregated at gene level. Differential analysis of the same hPTM between two conditions was performed using the R package EdgeR87 with nominal P < 0.01 and |log2FC| > 1 as cut-offs.

    Enhancers

    ChromHMM was used to identify regions in the genome that were marked by H3K4me1, H3K27ac and open (ATAC–seq) but not enriched for H3K4me3 and that were not promoters (Extended Data Fig. 9b–e). States 6 and 5 were selected as enhancer regions on the basis of their genomic locations (distal enhancer elements) (Extended Data Fig. 9b–e).

    Our raw peak count matrix was filtered for enhancer regions defined by chromHMM, and peaks around the TSS (±2,000 bp) were discarded. Linkage of putative enhancers to genes was done using the R package ChIPSeeker by selecting the closest gene (TSS or gene body) within 20,000 bp distance. Putative enhancers farther away than 20,000 from a TSS or gene body were not linked to any gene and were discarded from downstream GSEA.

    For each hPTM, the raw filtered peak matrices were log-normalized using the R package EdgeR and Pearson's correlation coefficient was computed using the cor function from the R package stats v.3.6.2.

    Differential analysis of the same hPTM between two conditions was performed using the R package EdgeR with nominal FDR < 0.05 and |log2FC| > 1 as cut-offs.

    PCA

    Raw gene and promoter/enhancer-specific peak count matrices were log-normalized using the R package EdgeR. PCA of the normalized count matrices was performed using the prcomp function of R package stats v.3.6.2.

    GSEA

    GSEA was performed using the R package enrichR97,98,99. For generation of heatmaps summarizing GSEA across cell types, significantly enriched terms were selected using the adjusted P value (<0.01) and the combined.score (enrichment score) was scaled and visualized.

    Visualization

    R v.4.2, GraphPad Prism v.9.5.1 and Seqmonk v.1.48.1 were used to generate plots and Affinity Designer and Publisher were used to adjust plots for clarity (for example, colour schemes).

    Statistical analysis of physiological parameters from mice

    GraphPad Prism v.9.5.1 was used to analyse physiological data from mice. Each dataset of physiological parameters was tested for normality using the Shapiro–Wilk test. On the basis of the results, parametric or non-parametric tests were used to compare experimental with age-matched control groups. Tests are indicated in figure legends and the Source Data.

    Reporting summary

    Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.




    All Comments: [-] | anchor

    meindnoch(10000) about 23 hours ago [-]

    Well, yeah. Adipocytes multiply when you get fat. But when you lose weight, they don't apoptose, they just shrink in volume by giving up their lipid stores.

    mkoubaa(10000) about 22 hours ago [-]

    I am pretty sure the only way to reduce the number of cells is liposuction

    chewbacha(3349) about 22 hours ago [-]

    Yea, this actually explains the transcriptional expression and weight gain very well. Strong than the methylation evidence imo. I didn't see any causal analysis only correlated and the cells still being there makes sense.

    phkahler(10000) about 22 hours ago [-]

    >> But when you lose weight, they don't apoptose

    Googled for 'Adipocyte apoptosis' and oh boy... It does happen, but I don't trust the AI summary. This looks like a deep rabbit hole.

    raincom(10000) about 21 hours ago [-]

    How do glp-1 drugs such as semaglutide, terzepatid and retatrutide impact apoptose?

    'Tirzepatide promotes M1-type macrophage apoptosis and reduces inflammatory factor secretion by inhibiting ERK phosphorylation' [1]

    [1] https://www.sciencedirect.com/science/article/abs/pii/S15675...

    inverted_flag(