Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: July 21, 2019 20:37
Reload to view new stories



Front Page/ShowHN stories over 4 points from last 7 days
If internet connection drops, you can still read the stories
If there were any historical discussions on the story, links to all the previous stories on Hacker News will appear just above the comments.

Historical Discussions: MITM on HTTPS traffic in Kazakhstan (July 18, 2019: 1323 points)

(1323) MITM on HTTPS traffic in Kazakhstan

1323 points 3 days ago by bzbarsky in 1663rd position

bugzilla.mozilla.org | Estimated reading time – 3 minutes | comments | anchor

(In reply to twolaw from comment #51)

The browser already intervenes in commercial affairs by blocking website trackers, because its catchword is to act for citizen privacy.

Different semantics - trackers are ingress whilst MitM is mostly egress (and only then ingress). Moreover, trackers are entities with a defined pattern to which the entire internet community is exposed, whilst governments with their decisions/laws are less so, and their domestic affairs mostly impact only their own population.

The whole purpose of the entity is clearly identified here as being against the major Mozilla mantra: privacy. That can be the criterion.

Not sure whether privacy in the Moz culture includes protection from governmental eavesdropping, which likely cannot be escaped at all - at least not without measures beyond the vanilla browser installation and the necessary configuration of the remote server node. Blocking the certificate entirely may have repercussions such as potential loss of access to governmental online services.


(In reply to cfi9pnik from comment #50)

The browser should not be neutral between privacy and eavesdropping, between authenticity and identity theft, between the truth and lies, between good and evil. It should support good. This was the point of firefox's existence from the very beginning. Otherwise why would people need it, if there is already Mi©®o$oft Inte®net Explo®e® on window$TM? And if being good means becoming political, let it become political.

That is a lot to ask of a browser, basically being a moral authority for all that. It would require its developers to be beyond reproach, and maybe even then it would not meet every user's own perspective/perception of the world.

After all, if someone with full understanding of the consequences wants to surrender to eavesdropping and identity theft, maybe he should have the option to do so with the default build of the browser; I don't think general common morality gives a solid answer here. But even then, the browser should not be neutral between the truth and lies; it should support the truth. In this particular case this means that the browser should make absolutely sure that the user understands the consequences of his decision to proceed with a connection signed by this certificate, for example by making the user type (type it himself, not just click 'ok' somewhere; dangerous actions require strong confirmation) something like 'I understand that if I proceed with this connection, Putin will know all data I transfer, including my credit card details and my username and password for internet banking. Moreover, he will be able to impersonate me in all activities I participate in using this connection, including taking loans under my name.'

An explicit warning is a different approach than the blocking of the certificate as being requested.




All Comments: [-] | anchor

JaRail(10000) 3 days ago [-]

I'm surprised at comments in the bug threads suggesting they do nothing. The idea being that fighting this would force governments to fork/change browsers, ultimately being a worse experience for users. Seems like betraying people's trust is a pretty bad user experience.

There will always be a fight over privacy. Giving up to a foreign government is a terrible idea. It would absolutely just let the problem spread and get worse.

I don't think Kazakhstan has the resources to replace outside online services. A move like this should simply result in them shooting themselves in the foot. This needs to be a firm line such that it's simply not practical for them to implement.

I understand that bugs/discussions should weigh both sides. And ultimately, we may need more than HTTPS. That's fine. The point is we don't just roll over and give up.

cpach(2777) 3 days ago [-]

If a government mandates its citizens to install the government's own root certificate, then it's not that easy to find a long-term technological solution. The problem here is not a technical one, IMHO. The problem is that the government of Kazakhstan is not respecting the freedom of its people.

Case in point: In 2018, Kazakhstan ranked #144 in the Economist Intelligence Unit's Democracy Index. Countries such as China, Cuba and Belarus had a better ranking. [See https://en.wikipedia.org/wiki/Democracy_Index#Democracy_Inde...]

IMO what's needed is first and foremost more democracy in Kazakhstan. That's not something that Firefox can solve.

With that said, perhaps anti-surveillance technology can assist the affected users. Maybe Tor. I've heard about some other, similar project but I can't recall its name right now.

[Edit: These were the anti-censorship applications I was thinking of: https://www.psiphon3.com/ and https://getlantern.org/. Can't vouch for their security though.]

k_sze(4147) 3 days ago [-]

I wonder if open source browser projects can have a license that prohibits forking for the purpose of mass (state) surveillance.

tomxor(3480) 3 days ago [-]

> Giving up to a foreign government is a terrible idea. It would absolutely just let the problem spread and get worse.

Their government isn't merely 'snooping' on its citizens due to a weak chain-of-trust architecture and a lack of ethical clarity... it's requiring by law that its citizens be snooped on and censored. In such a regime, a technological arms race is not going to change the legality of subverting their measures, and cannot help the masses without simultaneously jeopardising their safety.

2woowoowoo222(10000) 3 days ago [-]

I live in Kazakhstan and have not seen this yet. No MITM according to the provided tests (mholt)

austinheap(3966) 3 days ago [-]

It's only rolled out to 20-30% of the country right now: https://atlas.ripe.net/measurements/22372655/#!probes

DarkContinent(2324) 3 days ago [-]

Could someone explain to me what this means and/or why it's bad?

efxz(10000) 3 days ago [-]

Your internet provider will be able to read all your data in plaintext: logins, passwords, anything you send.

iknowstuff(10000) 3 days ago [-]

The government and ISPs will have access to everything people do on the internet in Kazakhstan.

Tepix(3919) 3 days ago [-]

The state of Kazakhstan is intercepting all encrypted web traffic. They may filter it, block it, inject nasty things (malware etc) or just spy on the populace.

periya(4150) 3 days ago [-]

The Govt of KZ in this case can monitor all internet traffic over HTTPS.

https://en.wikipedia.org/wiki/Man-in-the-middle_attack

karthikshan(10000) 3 days ago [-]

A Certificate Authority is an entity that verifies the identity of websites for your browser. There are only a few CA's in the world and their integrity is crucial because of how your browser trusts them. By making users install a government-issued CA, and also controlling their internet provider, the Kazakh government can pretend to be any website and therefore see all HTTPS traffic from the affected users.

coldpie(2577) 3 days ago [-]

When you connect to a website via HTTPS, your browser downloads the certificate from that website and validates it by checking that the website's certificate was cryptographically signed by an entity that the browser trusts. If the certificate is valid, then you can assume that your data will only be decrypt-able by the website owner, so the connection is secure. Your browser will display a happy green banner showing that the connection is secure, so you can feel safe sending private data to that website without it being eaves-dropped along the way.

The browser checks for validity by ensuring the website certificate is signed by a certificate that is shipped with the browser. These 'root certificates' are usually owned by Certificate Authorities, such as Verisign or any other number of CAs. CAs ought to verify that the entity creating a new certificate is who they claim to be (the website owner) before signing a certificate. This way, you trust Verisign to tell you that you can trust the target website.

What Kazakhstan has done is create their own root certificate and ask people who live there to install it in their browsers. They are also intercepting any connection to facebook.com and giving your browser a Kazakhstan-created certificate, which is then verified against the Kazakhstan-owned root certificate. Since it passes this check, the browser shows a happy green banner, even though the certificate is owned by Kazakhstan and not facebook.com. In other words, the data people in Kazakhstan send to facebook.com is now being intercepted and decrypted by Kazakhstan before being forwarded to facebook.com. Facebook is the example used in the linked bug; they can do this with any other website, too.
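
A quick way to see this in practice (facebook.com here is just the example host from the bug): the command below prints the subject and issuer of the certificate a server presents, in the same spirit as the openssl one-liners further down the thread. On a clean connection the issuer is a well-known public CA; on an intercepted one it would be the government-controlled root or its intermediate.

    # Print who the presented certificate claims to be, and who signed it.
    openssl s_client -connect facebook.com:443 -servername facebook.com \
        < /dev/null 2>/dev/null | openssl x509 -noout -subject -issuer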

worldofmatthew(4149) 3 days ago [-]

The best solution would be to blacklist rouge SSL certs.

ddalex(4147) 3 days ago [-]

Perhaps whitelist just the bleu-marine ones?

vbezhenar(3669) 3 days ago [-]

They will probably issue alternative Firefox and Chromium builds. That would be worse. Though if the entire tech industry fought this attempt - Windows stopping updates, macOS, iOS, Android, etc. - they would probably step back. But I don't think anyone would do so, especially because that scenario is legitimate and widely used by corporations to control their perimeter, and Kazakhstan is ultimately no different.

vkou(10000) 3 days ago [-]

Doing so will make the internet (In Kazakhstan) unusable, because everywhere you go, you will see an 'untrusted cert' warning.

coldpie(2577) 3 days ago [-]

Aw, let them use whatever color they like.

FabHK(3921) 3 days ago [-]

The pertinent page [1] of a local ISP, Kcell, is interesting - very devious.

> Kcell JSC informs its customers of the need to install Security Certificate on personal devices capable of connecting to the Internet

> Due to the increase of identity and personal data theft, including stealing money from bank accounts, introduced a security certificate as an effective tool to protect the country's information space from hackers, online fraudsters and other types of cyber threats.

And some FAQs are hilarious (as in hilariously devious):

> What happens if I do not install the Security Certificate? You may have problems accessing the Internet.

> Will installation of the Security Certificate affect the protection of my personal data? The security certificate has no access to your personal data.

Yeah, true that, literally, but you forgot to mention something there...

[1] https://www.kcell.kz/en/product/3585/658, might have to switch to EN in top right corner

leevlad(10000) 3 days ago [-]

Wow, if only everyone was as smart as Kazakhstan and figured out that this super awesome Security Certificate was 'an effective tool' to protect the entire country's information space. And I've been wasting time with strong passwords, 2FA, E2E encryption, full disk encryption, etc. /s

arpa(3607) 3 days ago [-]

Actually, a DNS CAA record could be of use in this scenario, to at least alert the client that the traffic has been intercepted. Then again, it is trivial to intercept and rewrite plain DNS requests, and DNS over HTTPS would also be subject to the same HTTPS MITM intercept... Maybe certificate pinning was the right idea.

vbezhenar(3669) 3 days ago [-]

CAA is for issuers, not for browsers. And yes, without DNSSEC it's easily spoofed.
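
For reference, CAA records can be looked up directly; as noted, they only constrain issuers, not browsers. A quick sketch with dig (example.com is a placeholder):

    # List which CAs the domain owner permits to issue certificates for it.
    dig +short CAA example.com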

peter_d_sherman(338) 3 days ago [-]

This is bad because we don't like it when a foreign government infringes on foreign citizens' rights, but it may also be good (in a limited sense) because it might bring a whole lot more public scrutiny (from all countries and their citizens) towards the issue...

mdhardeman(10000) 3 days ago [-]

This will _not_ be good.

To the extent that Kazakhstan succeeds with this, it will only make other governments jealous of the capability and want it for themselves.

altmind(10000) 3 days ago [-]

hmm, certificate pinning will not allow this gov-ca to work for a lot of high profile web sites. i wonder if these sites with cert pins are whitelisted by the kz gov?

--

somehow i missed that HPKP is dead and will be removed from chromium and all the derivative browsers. now google is focusing on Expect-CT

lpellis(10000) 3 days ago [-]

My understanding is pinning will not block this, locally installed trust anchors bypass pinning. https://groups.google.com/d/msg/mozilla.dev.security.policy/...

kccqzy(3235) 3 days ago [-]

Although pinned certificates have gone out of favor on the web, they are still very frequently used by iOS and Android apps. Last time I checked, the Facebook Messenger app refused to work when being MitM'ed.
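
A rough sketch of what such app-level pinning amounts to (the pinned value below is a placeholder, and real apps usually pin the public key rather than the whole certificate, but the idea is the same): compare the fingerprint of whatever certificate the server presents against a value baked into the app, and refuse to talk if it differs, so a MITM certificate fails even though it chains to an installed root.

    # Compare the served certificate's SHA-256 fingerprint to the pinned value.
    PINNED='AA:BB:CC:...'   # placeholder for the value shipped inside the app
    SEEN=$(openssl s_client -connect graph.facebook.com:443 \
        -servername graph.facebook.com < /dev/null 2>/dev/null \
        | openssl x509 -noout -fingerprint -sha256 | cut -d= -f2)
    [ "$SEEN" = "$PINNED" ] && echo 'pin OK' || echo 'pin mismatch - possible MITM'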

Tepix(3919) 3 days ago [-]

Question to local readers: Is Kazakhstan also blocking VPNs and SSH?

PeterZhizhin(10000) 3 days ago [-]

Not a local reader. I am from Russia.

The ZaTelecom Telegram channel (https://t.me/zatelecom) claims that not all ISPs have rolled out the MITM attack. For now, a good solution would be to switch to a different ISP (unlike in the US, each home has access to 2-5 different ISPs).

Also, users ask everyone to use a VPN. So, I think that they have access to VPNs.

gnull(10000) 3 days ago [-]

I checked these a few months ago and both services were working fine.

vbezhenar(3669) 3 days ago [-]

Kazakhstan is blocking some websites, including the home pages of Tor and popular VPN services. It also uses some sophisticated Tor blocking: it establishes the TCP connection but no bytes flow over it, so the Tor client just hangs there without error or traffic. I wasn't able to unblock it, though I did not try hard enough. I think they are blocking connections to popular VPN services as well, but I don't really know; without access to their home pages it's hard to connect anyway. I know people are successfully using some mobile apps as VPNs, so while they are trying to block VPNs, they are not trying hard enough.

I never had any problems with SSH, and I operate my own OpenVPN on a VPS using the standard port and have never had any problems with it either.

mholt(1750) 3 days ago [-]

Would someone with network access in Kazakhstan check if Caddy's MITM detector catches this please? https://caddyserver.com/docs/mitm-detection - or https://mitm.watch (Cloudflare's unofficial deployment of the same tech).

If it does not, could you file a bug report with a complete packet capture (and exact browser version - multiple browsers are preferred)? https://github.com/caddyserver/caddy/issues

(Edit: Reportedly, 'not all Internet providers have started MITM attacks yet' so if you do the test, make sure you are on an intercepted network... if safe to do so.)

lame-robot-hoax(10000) 3 days ago [-]

So uh, should I be concerned at all if my connection came back as a likely MITM from my home network in the US? Or is it most likely a false positive caused by my firewall or something?

I tested it both off a VPN and on a VPN from my iPhone yet still had the same result both times.

niij(4164) 3 days ago [-]

Would this detect MITM if you installed a root cert from ex: school, employer, etc?

cascom(3442) 3 days ago [-]

Does this do something substantially different than:

https://www.grc.com/fingerprints.htm

alexeykapustin(10000) 2 days ago [-]

In Russia, on attempting to open https://mitm.watch I get the provider's page with the message 'Requested IP is blocked'.

marmaduke(4156) 3 days ago [-]

Theoretical question: would a VPN in Kazakhstan allow you to test this?

filleokus(3670) 3 days ago [-]

I just tested with ExpressVPN via their Kazakhstan endpoint. On 46.244.29.XXX on the ISP A2B IP B.V. I don't seem to be MITM'ed at all, so I can't report anything about the MITM-detector.

AndyMcConachie(10000) 3 days ago [-]

What makes everyone so sure this isn't happening everywhere already?

The problem Kazakhstan had was that there was no existing CA they could already force to issue certs, so they had to make a new one. It would be foolish to assume that none of the many trust anchors your browser already trusts have been compelled by your local government to do exactly this.

Also, DANE and DNSSEC solves this problem.

tptacek(79) 3 days ago [-]

Not only does it not solve this problem, it actually makes controls like this easier to deploy: Kazakhstan controls the keys for its own ccTLD, and will simply require people to use .kz variants of services (and require services to provide those to deploy in Kazakhstan). Many of the most popular sites in Kazakhstan, including Google, are already reached on .kz names.

Ajedi32(1846) 3 days ago [-]

Certificate Transparency would make it blatantly obvious if any existing CA were being compelled by governments to issue fraudulent certs.

DANE is, unfortunately, not viable to implement in browsers right now for a variety of reasons: https://www.imperialviolet.org/2015/01/17/notdane.html

gnull(10000) 3 days ago [-]

What is interesting is that some local internet providers in Kazakhstan used to inject their own ads into http websites their users visit. I wonder if they will start doing the same with https now.

I noticed this behaviour last February with the Kazakhtelecom (telecom.kz) internet provider. When I opened an http website in my browser and started clicking randomly on parts of the page which are usually not clickable, sometimes such a click would open a pop-up window with ads. Those pop-ups also opened sometimes when I clicked on links on the page. It was unusual, because I had used the same websites just a few days before that from Russia and nothing like that happened.

To figure out what was going on, I opened the same webpage through a proxy and compared it with the locally opened one. The shell command for that was something like:

  diff <(curl http://website) <(proxychains curl http://website)

And the only difference was that the directly downloaded webpage contained a reference to some suspicious script in a place where the proxied one had a reference to a Google Analytics script. I reproduced this behaviour with multiple websites from two different homes, on two different laptops (Linux and Windows). So this is unlikely to be malware in my router, and I'm pretty sure it's not on my laptop.

I'll be back in Kazakhstan in 3-5 days, I'll try to reproduce this once again.

synackpse(10000) 2 days ago [-]

Hey there!

I had a similar experience with my ISP in Canada. In fact, I did a talk on how I worked out what was going on from a technology perspective: https://www.youtube.com/watch?v=_YeaYIPM-QI

If you want to conduct some testing, I'd be more than happy to help.

420codebro(10000) 3 days ago [-]

Very interesting. It would be neat to put a service together that could crawl a site from different restricted countries to observe / track what is injected (assuming they aren't just wanting to MITM for visibility).

cronix(3962) 3 days ago [-]

Comcast used to do this to me about 6 or 7 years ago to tell me, or someone, about torrent use on the connection and something or other about copyright infringement. They'd inject their messages into the html of websites and you'd have to dismiss them to continue to use the site. Not their site. All sites.

dspillett(4055) 3 days ago [-]

> What is interesting is that some local internet providers in Kazakhstan used to inject their own ads into http websites their users visit.

Several ISPs (including some big national players, not smaller local/struggling/other ISPs) trialled that sort of thing in the UK in the mid/late 00s but there was a big enough ruckus about it that they stopped.

The really egregious thing about what some of them were doing is that they replaced existing ads, so they were basically trying to take money from the sites (they were at the same time also trying to get sites to pay or be considered 'low priority' traffic, so they were trying to triple-dip: get paid by their primary consumer, get paid by the sites, and take the sites' ad money).

It doesn't surprise me that it is actively happening in places where there is less choice (so 'voting with your feet' is not an option for telling ISPs what you think) or where public outcry is less effective (or drowned out by more pressing issues the area might have).

djsumdog(1126) 3 days ago [-]

I had ads injected by some European and UK ISPs on my own website. It pushed me to finally get LetsEncrypt implemented and switch everything over to HTTPS.

break_the_bank(3522) 3 days ago [-]

This is so bad. I'm from India, and at my parents' place we have the government-run internet provider. They MITM and inject advertisements all the time, showing annoying popups whenever you open an http link. I don't even know how this is legal.

oneplane(10000) 3 days ago [-]

I wonder if they also proxy stuff like the Google endpoints where chrome does key pinning, or if they whitelist those. I imagine other large systems like those of facebook (when using the app) and Apple are actively remembering what the keys are supposed to look like. That would mean that even a custom CA wouldn't allow carte blanche MITM.

hi41(10000) 2 days ago [-]

Dumb question as I don't understand this well.

If I visit https://gmail.com, I expect all traffic to be encrypted because my browser checks that gmail is indeed using encrypted connection. How is this getting intruded upon?

More importantly, suppose I checkin to a hotel in USA as I usually do and use the hotel's wifi. Would they be able to intrude into my connection to https://gmail.com?

Could someone please give me some clarity on this.

jacobmassey(10000) 3 days ago [-]

Be careful.

noragami(10000) 2 days ago [-]

Hijacking the comment for better visibility. After getting some backlash, the government has already backed down. They claim that installing the certificate is entirely voluntary.

https://rus.azattyq.org/a/30064788.html

They have been talking about this stuff for some years, though. It will get implemented at some point. I have a feeling it was one of their 'test trials': can we boil the frog yet, or do we have to heat the water up a bit more?

LinuxBender(275) 2 days ago [-]

Folks should also periodically get fingerprints of sites they often use and keep logs of them, so that you might notice if something changes.

example:

    openssl s_client -servername www.paypal.com -connect www.paypal.com:443 < /dev/null 2>/dev/null | openssl x509 -fingerprint -noout -in /dev/stdin                                                 
    SHA1 Fingerprint=E8:20:7A:27:8C:BE:D4:D9:7F:44:32:89:E7:6B:13:DD:CE:58:50:F6

Perhaps put all the sites you visit into a text file and loop through them into a date-based file, or append with date stamps. If enough people did this, you could probably even spot when an entire region is doing something shady or a company was potentially compromised.
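
A minimal sketch of that loop (sites.txt and the log file name are placeholders), in the same spirit as the command above:

    # Append today's certificate fingerprint for each site in sites.txt
    # to a running log, so changes across days stand out.
    while read -r site; do
        fp=$(openssl s_client -servername "$site" -connect "$site:443" \
            < /dev/null 2>/dev/null | openssl x509 -noout -fingerprint)
        echo "$(date +%F) $site $fp" >> cert-fingerprints.log
    done < sites.txt
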
Ajedi32(1846) 2 days ago [-]

Are you sure that's really them backing down, and not just a way to obfuscate the issue? Technically yes, installing the certificate is voluntary; it's just that if you don't install it you won't be able to access the internet anymore when the government starts MITMing your connections.

vbezhenar(3669) 1 day ago [-]

They are not backing down, they are just saying meaningless words. My wife's phone is MITMed right now and she can't open Facebook, for example. How is that voluntary?

jedberg(2208) 3 days ago [-]

I find the social aspect of this interesting. Us 'smart tech people' have been pushing https everywhere for a few years now as a way of protecting internet privacy 'for the masses'.

And now the government found a very simple non-technical workaround. Send a message to everyone requiring a government root CA with an easy install, or their internet won't work.

Now 'us techies' have to find a new technical solution to a very social problem.

It never ends. :(

mtgx(151) 3 days ago [-]

I was certain I read a few years ago that Google would mandate that all OEMs would be forced to use a single unified certificate list, which I thought at the time was a way to pre-empt this sort of thing. But I can't find any new info about that anywhere. I only found an article about how to add new certificates on new Android versions in 2019, so I guess you can still change them.

I wonder if Google changed its mind about this once Sundar Pichai took over and then gave Project Dragonfly the greenlight.

jopsen(10000) 3 days ago [-]

Before we celebrate defeat, let's just acknowledge that these practices are not taking place in the US, EU, etc.

And compromising HTTPS in places with a functional judicial system (and human rights) would probably be blocked by an endless series of lawsuits.

lostmsu(3810) 3 days ago [-]

The solution to that problem was invented and reinvented hundreds of years ago. It is called gunpowder.

marcosdumay(4169) 3 days ago [-]

It only becomes a social problem after the society gets the tools to know that the government is messing with their communications.

HTTPS is that tool. It is a social problem now; it was a technical problem until just recently.

reeves23423(10000) 2 days ago [-]

It already exists: https://www.torproject.org .

miguelmota(4170) 3 days ago [-]

Unfortunately governments like that will continue to do low effort workarounds as long as they have police and military forces to respond to those who don't conform.

dmitrygr(2329) 3 days ago [-]

Steganography. With a good key and enough stuffing, it is undetectable

stefan_(4170) 3 days ago [-]

Except I look at the linked mailing list and you already get 'us techies' arguing 'uh yeah but uhm this isn't so different from the corporate CA intercept thing right so let's not blacklist it uhm'.

What the fuck.

cesarb(3121) 3 days ago [-]

One of our ('tech people') main failures was that, while we made a heavy push for server authentication, we didn't make a similarly strong push for client authentication. With client certificates, MITM like that is not possible, unless the server also trusts the MITM CA to authenticate its clients (and uses a CA for the client certificates in the first place, instead of a direct mapping between users and their certificates).
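
For illustration, mutual TLS can be exercised end to end with stock tooling; a minimal sketch (all file names are placeholders) in which the test server refuses any client that cannot present a certificate signed by the expected client CA:

    # Test server that demands a client certificate signed by client-ca.crt.
    openssl s_server -accept 8443 -cert server.crt -key server.key \
        -CAfile client-ca.crt -Verify 1 -www

    # Client presenting its certificate; without --cert/--key the handshake fails.
    curl --cacert server-ca.crt --cert client.crt --key client.key https://localhost:8443/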

tomjen3(4152) 3 days ago [-]

A first suggestion is to ban the cert in chrome/firefox and then keep banning certs as they issue them.

tzakrajs(4072) 3 days ago [-]

Maybe this sort of interruption is manageable when half of your 18 million people are rural and the economy isn't heavily dependent on internet traffic. Try doing this in a more urban populated country and you will see a much different outcome.

LoSboccacc(4149) 3 days ago [-]

To be fair, some of us said from the beginning that getting all users used to trusting the green check would cause this sort of trust fatigue, to the point where the majority would stop bothering with the actual certificate content and trust chain. You can search my history highlighting this very issue in relation to Let's Encrypt. It was a social issue from the very beginning, and I got downvoted heavily and repeatedly because apparently 'techies' can't be bothered with exceptions and failure modes once a catch-all solution is found.

But the warning signs were all there, e.g. https://news.ycombinator.com/item?id=17298747#17304077

cmroanirgo(4095) 3 days ago [-]

Will OCSP stapling be able to be used to detect 'something fishy' going on, since in that case the root CA wouldn't actually match? Do browsers compare the OCSP root with the root of the current chain?

Actually, if it's MITM then 'all bets are off', isn't it, because the KZ government can filter that out of the proxied response?

Still, if OCSP can assist at all, it's probably worth it that browsers check for a mismatch (if they don't already).

Edit: typos

baq(3445) 3 days ago [-]

Technology enables policies both good and not so good. This is just another example of that.

azernik(3470) 3 days ago [-]

It protects privacy for the masses in the countries where most techies live, which is what most of us were paying attention to.

In places like Kazakhstan and China it's a harder problem, and HTTPS is necessary but not sufficient to solve it.

cpeterso(61) 3 days ago [-]

But we are in a better place than before. Without HTTPS everywhere and governments needing to ask people to install new root certs, we would not have learned about this Kazakhstan MITM issue.

ignoramous(3569) 3 days ago [-]

> Now 'us techies' have to find a new technical solution to a very social problem.

Cert pinning does mitigate it for apps, doesn't it? The end user doesn't really need to worry about rogue root CAs, if my understanding is right.

Traditional VPNs, P2P VPNs, Tor as a proxy (decentralised net? dat/i2p/freenet/ipfs) could solve it generally across various use cases, of which VPNs are already mainstream.

mr_toad(4000) 3 days ago [-]

> Send a message to everyone requiring a government root CA with an easy install, or their internet won't work.

They're training their entire population to install things that they get in unsolicited emails that purport to be from a legitimate source.

What could go wrong?

tcd(10000) 3 days ago [-]

We need new measures to not allow these certificates to be installed unless they're verified, or at least the OS shows a massive giant warning 'DO NOT DO THIS unless you accept this cert gives $identity access to all your data'.

Seems a very solvable problem.

Florin_Andrei(10000) 3 days ago [-]

> It never ends.

Yeah. Fangs vs shells. Microbes vs white cells.

It's just the way this universe works. The struggle is eternal. Probably built into the root parameters of the Big Bang, if you could somehow trace it that far back in time and causality (which you probably can't, I dunno).

pcwalton(2829) 3 days ago [-]

I'm less pessimistic. The practical result of this is likely just going to be more business for the cottage industry of Great Firewall VPNs, which already compete with one another in traffic obfuscation against an adversary far more sophisticated than the government of Kazakhstan. Thankfully, this is currently a case in which the incentives of the market happen to align well with the goals of defeating censorship.

wnevets(10000) 3 days ago [-]

>Send a message to everyone requiring a government root CA with an easy install, or their internet won't work.

but at least we know

strictfp(3884) 3 days ago [-]

This is why I'm always advocating political engagement for fighting these kinds of issues. It's not exactly hard for a government to ban or forbid circumventing their monitoring. It does take time, but they're starting to catch up.

peterwwillis(2543) 2 days ago [-]

The solution is to warn users that their security+privacy is compromised, and let them make their own informed choice. Techies don't often see that their own wishes shouldn't trump those of individuals (but maybe we're getting into politics now)

Another technical solution would have been to allow security without privacy. If the purpose of the government action is just to monitor content, you can enable that without disabling security. The HTTP protocol could be modified to transmit checksums signed by a cert, so that a client can verify that content has not been modified; the content could optionally be left unencrypted, yet content-injection attacks still couldn't take place.

But privacy advocates don't like it, so the result is either you have total security + privacy (such as it is), or none at all.
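
The integrity-without-confidentiality idea above can be sketched with ordinary signing tools (key and file names are placeholders); whether anything like it belongs in HTTP is exactly the disagreement being described:

    # Server signs the digest of a response body with its private key...
    openssl dgst -sha256 -sign server.key -out body.sig body.html
    # ...and a client holding the matching public key verifies the body was not
    # altered in transit, even though the body itself could travel in the clear.
    openssl dgst -sha256 -verify server.pub -signature body.sig body.html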

privateSFacct(10000) 3 days ago [-]

They should just put a red dot on the browser bar somewhere indicating a non-normal root cert is being used (this would also help in dev / test scenarios).

taviso(3748) 3 days ago [-]

This is actually the subject of some debate, believe it or not, there is a good argument against it.

Here is the crux of the issue, many TLS middleware providers install their own root certificate for network monitoring, data loss prevention, security scanning and so on. I personally would like them to stop doing that or at least make it obvious to end users it's happening. However, in order to modify the root store, they must have been authorized to do so by the Administrator, and it's their network or hardware.

If we try to make it obvious to users that this inspection is happening, these providers will switch to using alternative methods, such as using Microsoft Detours - which would be even worse, now you have random vendors patching security critical code in such a way that is not discoverable for end-users. This cannot be prevented, because they must already have Administrator access or they wouldn't have been able to modify the root certificate store in the first place.

In this Kazakhstan scenario, imagine if adding the government certificate put a red dot that said 'You are being monitored'. If the government didn't like that, they could instead require you to install monitor.exe that had the exact same effect, but didn't show the dot by patching and hooking all the crypto APIs. I find this argument against adding an obvious indicator quite compelling.

jedberg(2208) 3 days ago [-]

That would help people who already know what a root cert is, but it's well known that most people ignore any indicator in a URL bar. Even 'smart people' ignore them. Do you actually check the lock status of every site you visit?

clinta(10000) 3 days ago [-]

Something like this is in FF 68. Not a red dot, but an indication when you click the padlock.

https://bugzilla.mozilla.org/show_bug.cgi?id=1549605

cj(3230) 3 days ago [-]

This would be fantastic.

Also, it would be great if there were a 'red dot' style warning when you manually click 'Proceed anyway' while viewing a https page with an invalid certificate (currently, the browser remembers the 'Proceed anyway' decision and accepts the invalid cert after the initial acceptance of the warning)

mdhardeman(10000) 3 days ago [-]

I blame, in part, TLS 1.3, E-SNI, and DoH for this.

Previously, a government could monitor what site a user is visiting just by looking at the TLS session startup. Even if it is hosted on a cloud provider and 100 different sites are hosted from the same IP, they could look at the TLS-SNI data in the plain text to choose to interrupt and block the connection.

A fallback would be to manipulate DNS queries and force all DNS queries to be directed to official DNS resolvers. But DoH makes that far harder to control.

This is a bluff being called. Tech said 'If we make it so that they have to spend all this money and build a massive scale intercept that actively participates in each TLS session, they won't buy into the cost.'

Costs keep going down for this sort of thing. Now there are large organizations and governments willing to work on this stuff.
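
For context, the hostname leak described above (the plaintext SNI in the TLS handshake) is easy to observe on the wire; a sketch with Wireshark's tshark (the interface name is a placeholder, and the field name assumes a recent Wireshark that labels the protocol 'tls' rather than 'ssl'):

    # Print the plaintext SNI hostname from TLS ClientHellos crossing the interface.
    tshark -i eth0 -Y 'tls.handshake.extensions_server_name' \
        -T fields -e ip.src -e tls.handshake.extensions_server_name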

jedisct1(3383) 3 days ago [-]

It's probably completely unrelated.

DoH is easy to block. They can look at SNI and cut DoH connections.

Being able to access all the content is far more valuable than hostnames.

djsumdog(1126) 3 days ago [-]

Have other governments requested their citizens to install country specific CAs? For some reason, I thought China already employed this practice (although I guess they wouldn't need to, as they just tend to block everything that isn't government approved).

vbezhenar(3669) 3 days ago [-]

AFAIK it has not happened anywhere yet, including China. Kazakhstan is kind of the 'leader' there. Though I'm sure more countries will follow.

lone_haxx0r(10000) 3 days ago [-]

> I think this CA should be blacklisted by Mozilla and Firefox should not accept it at all even user installed it manually.

> This will save privacy of all Internet users in Kazakhstan.

No. This will mean that users would simply switch to chrome, edge, brave, ... , n + 1.

In case all of them block this CA, the government will force people to install an older version or will patch any open source browser so that it works with their certificates.

IMO, this is also wrong from a philosophical point of view. Your browser should just be your browser and not take part in political disputes. It doesn't sit well with me that Firefox has anything to say in the politics of its users.

And finally, encryption doesn't solve violence.

taftster(10000) 3 days ago [-]

Yes, exactly.

In the United States, for example, it's legal (even expected?) that corporations install custom CAs into their users' browsers and prevent internet access to any browser without them installed. Is it Mozilla's job to prevent these CAs from being installed on users' workstations? Should Mozilla reject any certificate from Blue Snort, etc.?

Kazakhstan has likewise declared it legal (under their own sovereign legal authority) to prevent web access to its citizens without the required CA installed. Just like it's legal in the US for corporations to 'spy' on their employees, it's legal in Kazakhstan for it to spy on its citizens.

Laws of Western countries do not extend into other sovereign nations, regardless of what one thinks of those laws. It's not Mozilla's job to get involved in this case.

> And finally, encryption doesn't solve violence.

Nor abuse of freedom by nation states, unfortunately.

fragmede(3529) 3 days ago [-]

Given the rise of HTTPS, intentionally or unintentionally, politics now lives in the endpoint (smartphone/tablet/computer)'s tech stack. Some part - the browser (and thus its vendor), the operating system (and its vendor), or some add-on (and thus its vendor) - gets to determine who/what's considered a root CA, and is thus allowed to decrypt message contents.

effie(4170) 3 days ago [-]

You're forgetting the power a discontent population can have on government. In case all browsers took a stance against cooperating with silent MITM, this would be a great pressure on powerful entities not to do it. Governments can get away with suppressing people's online experience for a while, but if they have subpar experience and basic websites won't function properly due to dated browsers, the pressure on governments to get modern browsers may increase.

nurbo(10000) 3 days ago [-]

A fellow from Kazakhstan here.

Banning this certificate or at least warning the users against using it WILL help a lot.

Each authoritarian regime is authoritarian in its own way. Kazakhstan doesn't have a very strong regime, especially since the first president resigned earlier this year. When people protest strongly against something, the government usually backs down. For example, a couple of years ago the government withdrew its plans to lease land to foreign governments after backlash from ordinary people. If Kazakhs knew about the implications of installing this certificate, they would have been on the streets already.

If Firefox, Chrome and/or Safari block this certificate, the people will show their dissatisfaction and the law will be revoked.

Sometimes the people in authoritarian countries need a little bit of support from organizations to fight for their rights. I really hope the browser organizations would help us here.

reeves23423(10000) 2 days ago [-]

In the meanwhile, consider using Tor: https://www.torproject.org . It has different pluggable transports available that will make the traffic look like regular traffic and not Tor traffic.

winter_blue(3760) 2 days ago [-]

> If Firefox, Chrome and/or Safari block this certificate

For users who are already subject to the MITM, when they try to download a new version of their browser, couldn't the ISP simply serve them a slightly modified version of the new/updated Chrome/Firefox binary which does not blacklist their certificate? It would require a high level of technical expertise, but what could prevent this sort of thing?

jacquesm(43) 3 days ago [-]

I wish that all individuals in all countries had such an attitude towards their governments.

ftl64(10000) 3 days ago [-]

Couldn't agree more.

JoshTriplett(170) 3 days ago [-]

Please do post this feedback in the bugzilla bug and the linked discussion thread; this is the kind of thing that helps developers make a more informed decision rather than just speculating on what would help people more.

gnull(10000) 1 day ago [-]

I totally agree with your point. Blocking this certificate in major browsers or warning users would be of great use.

But there is a minor remark I have to make here. I'm sure you were talking about citizens of Kazakhstan when you used the word 'Kazakhs'. The word 'Kazakh' actually denotes an ethnicity, not a citizenship. And although Kazakhs are the predominant ethnicity in Kazakhstan (~65% of the population), there are many others, and it's incorrect to call them Kazakhs.

Wikipedia suggests 'Kazakhstani' as an English term for Kazakhstan citizens. I have also seen other people use the word 'Kazakhstanians'. Maybe one of these two would be a better choice than 'Kazakhs'.

vbezhenar(3669) 2 days ago [-]

I really don't like the idea that some third force would interfere with the internal politics of my country. A browser should work according to technical standards, not according to what US citizens decide is good or bad. If Firefox wants to forbid locally installed roots, I'm all for it, but implement it for everyone.

That said, I don't see how the government would step back. People are uninformed and generally passive; they wouldn't care enough. So, sadly, it might be the only way to push back against that decision. But I still don't like it.

mikorym(10000) 3 days ago [-]

What is the extent of the MITM attack that you can do with this certificate? Can you intercept all https traffic?

vbezhenar(3669) 3 days ago [-]

I'm from Kazakhstan, using the biggest Internet provider (Kazakhtelecom), and that's not true for me. No MITM here. Maybe not yet. Also checked a mobile provider (Activ) and no MITM there either. But I saw local news, so it's probably not fake, though I'm not sure if it'll be mobile internet only or all providers.

sbaha88(10000) 3 days ago [-]

Seems to affect Astana mobile users for now

ftl64(10000) 3 days ago [-]

Same here. Haven't yet got any SMS or notifications from my mobile/home ISPs, but a couple of my friends are reporting that they did, and there are many posts on social media proving the fact, like this one: https://www.instagram.com/p/B0Do5IOHjab/

Ajedi32(1846) 3 days ago [-]

This comment on the mozilla.dev.security.policy mailing list says that right now it's only for users in the nation's capital: https://groups.google.com/d/msg/mozilla.dev.security.policy/...

> At the moment, providers started to use the certificate in the capital of Kazakhstan - Nur-Sultan (ex. Astana).

So it may not be nationwide yet.

samat(4168) 3 days ago [-]

Does anyone have ideas about who the tech vendor is?

Russians? Chinese?

Fins(3443) 3 days ago [-]

They could perfectly well do it themselves.

yholio(10000) 3 days ago [-]

Can we, endpoints outside Kazakhstan, detect when a MITM'd client is connected and serve a boilerplate message 'Untrusted connection'?

If enough high level sites do this (Google, Cloudflare, Wikipedia etc) it might force the hand of the government since they are the ones effectively breaking the internet.

arpa(3607) 3 days ago [-]

No, MITM is not easily detected on the server side. It's a transparent proxy. You could start serving these messages to the whole KZ IP range, though.

nickysielicki(3799) 3 days ago [-]

Warning: what follows is completely baseless speculation, and let's concede that right off the bat.

Who's to say that this isn't happening in the US as well? The US has invested billions of dollars in dragnet surveillance that is allegedly useless for anything other than metadata in the context of HTTPS.

Is it out of the question to ask whether our secret courts could issue gag orders and claim that national security mandates CA root keys? Such a gag order would only be served to a small group of engineers at large companies, and those engineers would have no right to report on it. We're talking about only a few hundred FISA gag orders, and thousands are served annually. It would make their multibillion dollar infrastructure useful again, wouldn't surprise me if some (unnamed, anonymous) judge bought the argument.

To employees at vulnerable companies, what's your PKI like? Is anyone aware of a company that implements strong multi-party checks on accesses to important private keys? If the NSA wanted your keys, how many employees would need to be served gag orders? Is it on the order of dozens or hundreds?

durpleDrank(4162) 3 days ago [-]

Baseless?

https://en.wikipedia.org/wiki/Lavabit

I could have sworn there are instances where root authorities had brush-ins with the law; maybe I'm just remembering a theoretical rant I heard somewhere.

admax88q(10000) 3 days ago [-]

If the NSA was using their own certificates to MITM all HTTPS traffic it would be easily noticed by security researchers. It's not like they obtain the private keys of every US company. They'd have to make their own replacement certificate for every site they wish to intercept. That could easily be noticed by security professionals and targeted companies through monitoring.

filleokus(3670) 3 days ago [-]

Earlier this would probably have been a somewhat plausible approach, but not even then for mass-scale surveillance.

Assume the NSA et al. had access to trusted CA private keys; then they could generate certificates for arbitrary domains which would be trusted by clients. But if they MITM'ed _all_ connections (or even a large portion), surely someone would have noticed, like in the DigiNotar case [0].

But it's even harder (or better) now, with the advent of Certificate Transparency, since browsers periodically check certificates against the CT logs, which would fail for forged certificates [1].

However, stealing private keys from companies themselves is a practice that I can imagine happening on a small scale, like the Realtek signing keys for Stuxnet [2]. But doing that on a large scale is not really sustainable.

[0]: https://en.wikipedia.org/wiki/DigiNotar#Issuance_of_fraudule... [1]: https://security.stackexchange.com/questions/190096/how-will... [2]: https://en.wikipedia.org/wiki/Stuxnet#Windows_infection
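
As a rough illustration of how visible issuance has become, CT log entries for a domain can be queried from the command line (this assumes crt.sh's public JSON endpoint and field names, and requires jq):

    # List a few certificates the CT logs have recorded for a domain.
    curl -s 'https://crt.sh/?q=example.com&output=json' \
        | jq -r '.[:5][] | [.not_before, .issuer_name] | @tsv'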

cjbprime(2429) 3 days ago [-]

The larger the conspiracy theory is, the higher the risk that someone will simply decide it's worth whistleblowing on. The whistleblowing could even be effectively done anonymously in this case. Secret Government surveillance of end to end encrypted data in the US would be such a huge news story and public interest that it wouldn't be difficult to find engineers willing to take the risk of anonymously providing evidence of it to the press.

A 'few hundred' gag orders placed on engineers about something deeply outrageous sounds completely implausible as lasting secrecy. Secrets simply can't be kept at that scale.

Even NSA employees themselves would eventually refuse to keep such a program secret, as we saw with Snowden.

everdrive(10000) 3 days ago [-]

>Who's to say that this isn't happening in the US as well?

No one can say it because you just claimed it's all secret and can't be proven.

jupp0r(3615) 3 days ago [-]

So it took them only two weeks to take advantage of Firefox's new policy to automatically 'fix'[1] man in the middle attacks through enterprise/antivirus CAs. As expected, this 'convenience' will make us all less safe :(.

[1] https://blog.mozilla.org/security/2019/07/01/fixing-antiviru...

jopsen(10000) 3 days ago [-]

If an attacker can install a CA on the system, the attacker can probably also apply binary modifications to Firefox.

Or replace it with a compromised version.

austinheap(3966) 3 days ago [-]

At what point does this become an OFAC issue for browser vendors based in the states? I would be stunned if someone at Commerce isn't already circulating enforcement memos about this.

mdhardeman(10000) 3 days ago [-]

I think that's terribly optimistic. I think there are all kinds of people in the US government who would like to be able to point to a success story of a scheme like this, so that they'll have more ammunition in support of implementing it here.

As for designating that browser vendors can't distribute software to Kazakhstan: their government would just fork an open-source one, mod it to pre-include their MITM cert, and force their citizens to use that.

akersten(10000) 3 days ago [-]

Google, Mozilla, and Microsoft need to take a stand here and blacklist these certs. All of the efforts to move to HTTPS, and all of the rhetoric surrounding it, are just wasted time and empty words if we as a tech community allow this kind of behavior to go unchallenged. This sets such a dangerous precedent, and governments need to know that this kind of meddling will not be tolerated.

will4274(10000) 3 days ago [-]

Let's get Congress to do it - fix the collective action problem. Pass a law that says that all American companies are forbidden from serving traffic to countries which require their citizens to install compromised roots on their personal devices. Treat them like North Korea / Iran.

terlisimo(10000) 3 days ago [-]

Hello,

To continue using internet, you need to install our government-provided fork of Firefox that doesn't blacklist our government-provided root cert.

regards, your Tele2

Yizahi(10000) 3 days ago [-]

Let's say you have this root CA installed on your local machine and won't delete it. Can you protect yourself against HTTPS decryption by MITM in any way? Will a VPN help, or will they intercept the VPN connection too?

Tepix(3919) 3 days ago [-]

Use a browser that doesn't use the CA list of the OS (such as Firefox) and tunnel all the traffic via a stealthy VPN.

It would probably be illegal and if the regime finds out they'll put you into jail etc.

isostatic(3801) 3 days ago [-]

I have custom root certs for internal dev sites for my company. That's fine, but I'd like to add the root with a caveat that I control, saying 'I trust this root only for *.mycompany.com, mycompany.org', so that I know they wouldn't be able to proxy 'mybank.com'.

I don't think Firefox or Chrome can do that can it?

burtonator(2045) 3 days ago [-]

For a root cert? But you can add your own self-signed certs for domains, and they can be wildcards.

You only need a root if you're issuing other certificates.

vbezhenar(3669) 3 days ago [-]

Just issue self-signed certificate for *.mycompany.com, mycompany.org and then put it into trust store, without CA flag set.
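
A sketch of that suggestion (requires OpenSSL 1.1.1+ for -addext; file names are placeholders): a self-signed leaf covering only the company domains, explicitly marked as not a CA, so trusting it cannot be leveraged to impersonate other sites.

    openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
        -keyout mycompany.key -out mycompany.crt \
        -subj '/CN=*.mycompany.com' \
        -addext 'subjectAltName=DNS:*.mycompany.com,DNS:mycompany.org' \
        -addext 'basicConstraints=critical,CA:FALSE'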

y0ghur7_xxx(3451) 3 days ago [-]

There is a Name Constraints extension in X.509[1] that does exactly that, but to my knowledge no browser implements it.

[1] https://tools.ietf.org/html/rfc5280#section-4.2.1.10

JoshTriplett(170) 3 days ago [-]

I'd like that as well, for exactly the same purpose.

To the best of my knowledge, no browser can do this today, and I don't know of any other software that can do that either. (I'd want to have it in the system certificate store with the same constraint, as well.)

Name Constraints, as mentioned elsewhere in this thread, wouldn't solve the problem, for two reasons: most software doesn't support them (and silently ignores them rather than correctly failing closed), and they require the CA certificate to contain the constraints rather than system configuration adding the constraints.

We need a mechanism to place certificates in a certificate store (browser or system) with specific domain constraints configured by the administrator. To avoid the failure mode of Name Constraints, those certificates shouldn't be accessible to software that doesn't know to enforce domain constraints.

That would also require updates to various SSL libraries (and to browsers) to handle this new certificate store and enforce the constraints.

darkhorn(4009) 3 days ago [-]

> CAA creates a DNS mechanism that enables domain name owners to whitelist CAs that are allowed to issue certificates for their hostnames.

https://blog.qualys.com/ssllabs/2017/03/13/caa-mandated-by-c...

tialaramex(10000) 3 days ago [-]

You should be able to do this for Firefox, in two ways, neither of them very convenient, but if you're excited enough about this to put some work in:

1. The actual browser software has its own rules for what's trusted on top of the root trust store, often copy-pasted into other software (e.g. in the Python package Certifi). These rules include name constraints of the sort you're interested in; functionally they're used to constrain roots that might be fine but whose government operators promise they're only for some TLDs under the authority of that government, so in a sense for those names the government decides who 'really' owns them anyway.

Rather than link a horde of HN readers into Mozilla's poor mercurial server I'll point you more indirectly at a summary from their wiki:

https://wiki.mozilla.org/CA/Additional_Trust_Changes

You can fork Firefox and add your own rules to NSS, or if you're feeling really fancy, your own extension mechanism to add rules at runtime.

2. You can spin up a root, trust it, and use it to sign a suitably constrained intermediate. Then destroy the private key for the root and continue using the intermediate.

In this scenario Firefox gives total trust to your root, but the root can't possibly proxy mybank.com because you destroyed its private key; it can't do anything further at all. The intermediate, which can still issue, is constrained.

Google (I think? Some big tech firm with know-how) has used this trick for products that do loop-back HTTPS. It mints a root cert during install, issues one certificate and then destroys the private key; then it can trust the root cert without any danger of later getting MITM'd by the root, because that root's private key no longer exists anywhere.
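
A rough sketch of option 2 (file names are placeholders; the name-constraints line uses OpenSSL's x509v3 config syntax and needs a bash-capable shell for the process substitution):

    # Throwaway root, used exactly once to sign a name-constrained intermediate.
    openssl req -x509 -newkey rsa:2048 -nodes -days 3650 \
        -keyout root.key -out root.crt -subj '/CN=Throwaway Root'
    openssl req -newkey rsa:2048 -nodes \
        -keyout inter.key -out inter.csr -subj '/CN=Company Intermediate'
    openssl x509 -req -in inter.csr -CA root.crt -CAkey root.key -CAcreateserial \
        -days 3650 -out inter.crt \
        -extfile <(printf 'basicConstraints=critical,CA:TRUE\nnameConstraints=critical,permitted;DNS:.mycompany.com')
    rm root.key   # nothing else can ever be issued directly under this root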

schoen(875) 3 days ago [-]

No, although the root itself could be scoped that way with an X.509 name constraint. But if you add the root then I believe there's no browser policy to otherwise limit the names for which it can be trusted.

daukadolt(10000) 3 days ago [-]

Does such a certificate compromise non-browser traffic as well? Like SSH tunnels, mobile apps, Telegram etc.

tomxor(3480) 3 days ago [-]

SSH doesn't depend on certificate authorities; it's up to you to manage your own keys. Each endpoint also has a uniquely generated host key, which avoids MITM after first-time auth (including by taking over domains).

This is an HTTPS-only issue, and fundamentally it's the same problem as control over domains (ease of manipulation through centralisation).
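
For comparison, the SSH equivalent of checking the chain is recording the host key out of band; a sketch (example.com is a placeholder) so a later 'host key changed' warning can be checked against a known-good fingerprint:

    # Fetch the server's host key without connecting interactively, then fingerprint it.
    ssh-keyscan -t ed25519 example.com > hostkey.pub 2>/dev/null
    ssh-keygen -lf hostkey.pub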

ziftface(10000) 3 days ago [-]

I guess you'd have to install the certificate on your phone too. I guess that means that visitors to Kazakhstan won't have internet access during their stay, unless they install the malicious certificate on their phones as well. I really hope this doesn't set a precedent.

nhooyr(4070) 3 days ago [-]

SSH no, mobile apps and telegram probably if they use TLS.

vbezhenar(3669) 3 days ago [-]

Just don't install that certificate. If something stops working, you'll know that they tried to break that channel. If something's working, then it's OK. And if you need things to work, use VPN.

adsadasdas(10000) 3 days ago [-]

How does this work technically?

I understand that by making people install the government cert, any website with a cert signed by that government cert will happily speak TLS.

But, how can they read data transmitted between websites they don't control? When the client asks for Facebook's cert, wouldn't the government have to sneak in and show a fake cert signed by them instead? How does that work?

balowria(10000) 3 days ago [-]

It probably works as a MITM: the ISP, or whoever controls your net traffic, generates a fake certificate (signed by a trusted root cert) for the site you are browsing. See the example at: https://en.wikipedia.org/wiki/Man-in-the-middle_attack
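
Concretely, that is what the intercepting box does on the fly for each hostname; a sketch with placeholder file names (gov-root.* stands for the root users were told to install, and the process substitution assumes bash):

    # Mint a certificate for the requested hostname, signed by the installed root.
    openssl req -newkey rsa:2048 -nodes -keyout fake.key -out fake.csr \
        -subj '/CN=facebook.com'
    openssl x509 -req -in fake.csr -CA gov-root.crt -CAkey gov-root.key \
        -CAcreateserial -days 30 -out fake.crt \
        -extfile <(printf 'subjectAltName=DNS:facebook.com')
    # The proxy then terminates the client's TLS with fake.crt/fake.key and opens its
    # own TLS connection to the real site, relaying (and reading) the traffic in between.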

TazeTSchnitzel(2091) 3 days ago [-]

Tele2, a Swedish company and major phone network in Sweden.

Wouldn't be the first scandalous thing in the former Soviet Union that a Swedish phone network was involved in: https://en.wikipedia.org/wiki/Telecom_corruption_scandal

filleokus(3670) 3 days ago [-]

To be noted though, Tele2 has as of recently exited the Kazakhstani market: https://www.tele2.com/media/press-releases/2019/tele2-has-ag...





Historical Discussions: Epic Games Supports Blender Foundation with $1.2M (July 15, 2019: 731 points)

(731) Epic Games Supports Blender Foundation with $1.2M

731 points 6 days ago by brachi in 3500th position

www.blender.org | Estimated reading time – 2 minutes | comments | anchor

July 15, 2019 (Cary, NC) – Epic Games, as part of the company's $100 million Epic MegaGrants program, is awarding the Blender Foundation $1.2 million in cash to further the success of Blender, the free and open source 3D creation suite that supports the full range of tools empowering artists to create 3D graphics, animation, special effects or games.

The Epic MegaGrants initiative is designed to assist game developers, enterprise professionals, media and entertainment creators, students, educators, and tool developers doing outstanding work with Unreal Engine or enhancing open-source capabilities for the 3D graphics community.

The Epic MegaGrant will be delivered incrementally over the next three years and will contribute to Blender's Professionalizing Blender Development Initiative.

"Having Epic Games on board is a major milestone for Blender," said Blender Foundation founder and chairman Ton Roosendaal. "Thanks to the grant we will make a significant investment in our project organization to improve on-boarding, coordination and best practices for code quality. As a result, we expect more contributors from the industry to join our projects."

"Open tools, libraries and platforms are critical to the future of the digital content ecosystem," said Tim Sweeney, founder and CEO of Epic Games. "Blender is an enduring resource within the artistic community, and we aim to ensure its advancement to the benefit of all creators."

For more information about Epic MegaGrants, visit unrealengine.com/en-US/megagrants.

About Blender Foundation The Blender Foundation (2002) is an independent nonprofit public benefit corporation with the purpose to provide individual artists and small teams with a complete, free and open source 3D creation pipeline, managed by public projects on blender.org. Its spin-off Blender Institute hosts the foundation's offices and currently employs 15 people who work on the Blender software and creative projects to validate and stress Blender in production environments. For more information, visit blender.org/foundation.

About Unreal Engine Epic Games' Unreal Engine technology brings high-quality games to PC, console, mobile, AR and VR platforms. Creators also use Unreal for photorealistic visualization, interactive product design, film, virtual production, mixed reality TV broadcast and animated entertainment. Follow @UnrealEngine and download Unreal for free at unrealengine.com.

About Epic Games Founded in 1991, Epic Games is the creator of Fortnite, Unreal, Gears of War, Shadow Complex, and the Infinity Blade series of games. Epic's Unreal Engine technology, which brings high-fidelity, interactive experiences to PC, console, mobile, AR, VR and the Web, is freely available at unrealengine.com. The Epic Games store offers a handpicked library of games, available at epicgames.com. Follow @EpicGames for updates.




All Comments: [-] | anchor

Andrex(3283) 6 days ago [-]

In the spirit of supporting open software, it would be fantastic if Epic Games Store and its DRM warmed up to Linux next.

https://news.ycombinator.com/item?id=19844241

shmerl(3831) 6 days ago [-]

They should offer DRM-free options, and Linux options as well. And no more of that exclusive nonsense.

groovybits(10000) 6 days ago [-]

'"Open tools, libraries and platforms are critical to the future of the digital content ecosystem," said Tim Sweeney, founder and CEO of Epic Games.'

Contradictory sentiments like these are what lead me to believe this grant is mostly for marketing purposes. (tin-foil hat mode)

noahster11(4112) 6 days ago [-]

The anticheat they use (EAC) for fortnite checks if Wine is in use, and will fail to run the game if it is.

This is taken directly from the EAC dll:

https://i.imgur.com/a4wBVWz.jpg

badsectoracula(10000) 6 days ago [-]

> Epic Games Store and its DRM

EGS doesn't have DRM; it is up to each game to implement its own if it wants, but the store itself doesn't provide any.

Which is great, since most games do not bother :-P (although I'd prefer it if they had a DRM-free stance like GOG, but Epic is too publisher-friendly to allow that).

bhouston(2544) 6 days ago [-]

Commoditize one's compliments: https://jasongriffey.net/wp/2012/04/19/commoditizing-our-com...

Unreal Engine, like Unity, right now has to be used with Autodesk products like 3DS Max or Maya, or at least that is the case with the large majority of professionals. This is an impediment to using Unreal Engine and thus makes Epic Games sort of dependent on Autodesk.

It is probably no coincidence that the new general manager of Unreal Engine is an ex-Autodesk SVP, Marc Petit, who was in charge of Maya, 3DS Max, etc. for a decade (https://www.awn.com/news/marc-petit-leaves-autodesk). He definitely sees the value of building up Blender to remove Unreal Engine's dependence on expensive tools from other vendors.

cma(3440) 6 days ago [-]

Unreal has had gltf support for a while now, reducing the dependence on fbx for most common stuff.

pvg(4169) 6 days ago [-]

It's 'complements'. Commoditizing one's compliments is greeting cards.

soulofmischief(10000) 6 days ago [-]

I use Unreal+Blender just fine. What do you mean?

laurentb(10000) 5 days ago [-]

Interestingly, Marc Petit came to Autodesk after the Softimage acquisition and back in the day was a big figure in the Softimage community. Softimage then died down completely under Autodesk and was discontinued in 2015 (it was the third wheel among their animation packages, alongside Maya and 3DS Max).

chipperyman573(3876) 6 days ago [-]

I never understood why important people force deals with companies they used to work at. He doesn't work for Autodesk anymore, so why would he care how well they're doing? Is he being paid by Autodesk? I mean, I know he might still be friends with people from those companies, but he's actively hurting his own company by making these calls.

crimsonalucard(10000) 6 days ago [-]

I hope it's no strings attached.

Danieru(2602) 6 days ago [-]

We received a dev grant for our video game about two years back. The grants are super no-strings-attached. Epic is by far the nicest organization to work with in the video game industry. And that was even before the Fortnite money.

ralusek(4158) 6 days ago [-]

Me too, but I suspect at worst it would be like 'and make sure to develop and refine the export to UE4 functionality,' which is something the community benefits from anyway.

Someone1234(4161) 6 days ago [-]

You can read more about Epic's MegaGrants here:

https://www.unrealengine.com/en-US/megagrants

It doesn't look like it has many (any?) strings. Aside from being split over three years in Blender's case.

It is worth considering that this is in Epic's best interests as Blender + Unreal Engine is a common platform for startups/indie devs.

Making Blender better may make the games that people develop using Unreal Engine better, which might mean higher license income for Epic. So it is a win for Epic and a win for Blender/the community.

goobynight(10000) 6 days ago [-]

So, by my estimate, this will fund ~2-3 people over the next 3 years.

johnnyanmac(10000) 6 days ago [-]

Keep in mind the Blender Foundation is based in the Netherlands, not San Francisco. That money might go a much, much longer way than if Blender were based in the US (maybe; I do not know the average compensation for skilled programmers in that country).

gmueckl(3795) 6 days ago [-]

Yes, this does sound about right. Makes it feel like a drop in the bucket, though. The amount of ground that Blender is trying to cover is ridiculously large because they're deliberately trying to turn it into a jack of all trades (compositing, audio and video editing etc.). The result is that it is a true master of none.

binthere(4168) 6 days ago [-]

As someone who's done lots of Blender work as well as Maya and integrations with Unity and Unreal Engine, I still think there's quite a long way to go to make Blender a good option for game development. It's free, it kind of works but it's very painful when it comes to animation, texturing, and details.

Right now, this is what I usually do:

  - Basic Modeling: Blender
  - Sculpting + Details: ZBrush
  - Painting: Substance Painter
  - Animation: Maya
So yeah, the last 3 are paid and it's not cheap. Blender can do all three, but not as well. I hope that with this additional money the Blender Foundation team can hire more people to close these gaps. In 2.8 they've improved on different fronts but they are still quite behind in those departments.

Last but not least, they should make more effort to improve their keybindings. They've been monkey patching it in 2.8 with the 'industry-compatible' keybindings but when you use it a bunch of other things stop working.

mattferderer(10000) 6 days ago [-]

I'm curious why you would use Blender over Maya for modeling if you have both?

I tried Blender a decade ago, but since I had access to Maya & 3D Studio Max I didn't give it much of a chance; the UI seemed more confusing and the material available to learn was lacking.

nineteen999(4168) 6 days ago [-]

I have the same setup as you, except rather than use Maya for animation I use the free UE4Tools plugin for Blender which works with 2.79.

I had pretty good success with it, and wrote a Python addon which builds on top to retarget mocap data from the Perception Neuron suit to the UE4 Skeleton.

iamcreasy(2963) 6 days ago [-]

Could you please explain why Blender is not good at animation? What major features are missing?

Elizer0x0309(4170) 6 days ago [-]

Datapoint: I've had/have a really good experience with Blender. Countless opensource resources to fill any gaps in the export pipeline. Not to mention the relatively easy python api that gives you access to all the scene's data.

Rch_East(10000) 6 days ago [-]

Epic are doing a great job improving fairness in the gaming industry, and the economic conditions for developers. I'm looking forward to their Epic Store opening up to more (high quality) Indie games.

sgarman(4100) 6 days ago [-]

The % deal they are offering is a good start but I'm not sure buying exclusivity of game releases is good for 'fairness in the gaming industry.'

Lowkeyloki(4129) 6 days ago [-]

That's probably a drop in the bucket for Epic, but it's still much appreciated!

KingFelix(10000) 6 days ago [-]

Yup! So stoked on the work they are doing. Epic has been funding some awesome projects and people. I can only imagine if other companies with Epic levels of cash were doing the same thing. I love it and support all the stuff they are doing. Tim Sweeney is a good guy. Looking forward to seeing what else they come up with.

MrLeap(10000) 6 days ago [-]

This is great!

As a slightly OT aside, I really hope Blender's UI changes sloooow down. I've been using it for 16 years. 2.8 versus 2.7 is so different, it almost feels like they're making changes for change's sake.

I've been trying to use their various 2.8 RCs so I don't have to say I don't know how to use Blender anymore when it ships.

They made the following things toggleable at launch, but man these are some muscle memory breaking changes that are now the default.

- Changed selecting objects from right to left click.
- Space bar no longer opens up the function search. The search is like 50% of my workflow, so this one hit me in the gut until I changed the setting.

Then there's some other things that feel renamed for no discernible reason.

- I can't search for 'remove doubles' anymore. There are certain geometries that I've come to build by snapping to axis, extruding, snapping to vertices, remove doubles. Now it's buried in a menu at mesh->clean up->'merge by distance'. It also makes a GUI element pop with the distance argument, and there's no obvious way to 'apply'. So weird.
- Ambient occlusion in the view appears to have been renamed 'cavity'.
- The tool panel lost its words, it's now just icons. Functionality has to be discovered by hovering. This is the worst UX habit from mobile, I wish it would stay out of my desktop tooling.
- The view/selection/snap/etc settings bar has moved from the bottom of the screen to the top. Why?
- Properties tabs moved to the side from the top. I can see why.
- I can't make objects unselectable but visible in the outliner. Why?!
- Layers are gone, they're now 'collections' but they don't do the same thing at all!

This is the stuff I've hit in about 2 hours of messing around. I'm willing to acclimate to about 90% of these changes. So far it's just the layers / merge doubles thing that really sucks. Eevee is really pretty, so it's got that going for it, which is nice.

dtf(3653) 6 days ago [-]

Some ways to merge vertices by distance (aka 'remove doubles'), in edit mode:

1) Mesh > Clean Up > Merge by Distance

2) F3 operator search. Start typing 'mer...' and hit Enter.

3) Ctrl-V to show vertex menu, then Merge Vertices > By Distance

4) The quickest: Alt-M + B (show vertex merge menu, B to select 'By Distance')

I scratched my head a bit first too, but frankly 'merge vertices by distance' is far more descriptive than 'remove doubles'.

The UI that pops up is just the operator parameters that used to appear when you hit F6 (e.g. try adding a Circle and see the parameters that pop up). It's better that this is visible to users, rather than hidden behind some secret hotkey. As with previous Blender versions, operators are 'applied' by default - you can change the parameters via the UI until you like the result. Once you switch operator, move on, or change mode, the UI will disappear and the operation becomes final.

To make objects visible/selectable/renderable etc... open the filter menu on the Outliner (top right, looks like a funnel). This lets you add more toggles to the items.

To move the view/selection/snap settings to the bottom just right click, Header > Flip to Bottom/Top.

Collections replace layers and groups (eg you can instance a Collection), though I haven't completely got my head round them yet. But they look quite powerful.

failrate(10000) 6 days ago [-]

Yes, I understand your frustration, and 2.8 has options to use the legacy UI (IN PART).

sprafa(4024) 6 days ago [-]

Pretty sure you can change a lot of this, no?

Tbh the UI being terrible was one of the main reasons I stayed away from Blender. The fact that they changed it gives me hope that I can start using it seriously.

zlsa(10000) 6 days ago [-]

Right/left-click to select and the spacebar action can both be set on the first run of 2.8, or in the Keymap tab of the preferences.

Remove doubles can be done with Alt-M -> tap 'B'.

The 3D header (and any header in Blender) can be moved to the top or bottom: right-click the header, select 'Flip to Top' or 'Flip to Bottom'.

In the Outliner, you can enable the 'toggle selectable' button in the Filters popup. Click on the 'Filter' icon, then click on the mouse cursor to enable the button. Now, you can click the cursor next to any object to make it unselectable, and it will remain visible in the outliner.

Collections can do everything Layers could do, and many more things besides.

nailer(425) 6 days ago [-]

This is a really smart move that helps Epic more than you might think.

UE4's own tools (which are based around creating environments from existing models, placing actors, and realtime rendering) are WORLDS ahead of common Autodesk modelling tools eg 3DS Max and Maya (which Epic itself uses). UE4 is easy to pick up, but then you have to learn Maya and it becomes a grind.

There needs to be something as powerful as Maya with a better UX to stop modelling being a blocker for new UE4 users. Being Free As in Beer and Free as in Freedom help too.

TrevorJ(4057) 6 days ago [-]

There's no real comparison between a game engine and a modeling/animation package. They are built to do very different things. However, it is definitely in Epic's best interest to support a robust and freely available tool for the content creation side of things.

lone_haxx0r(10000) 6 days ago [-]

WTF I love Tim Sweeney now.

pedrocx486(10000) 6 days ago [-]

I still don't. His actions last month mean I won't be loving him for a while.

pixelbath(10000) 6 days ago [-]

Awesome. Blender is on the cusp of releasing a major UI overhaul (2.8) that will make it more accessible to newcomers (left-click is now the default!). I'm excited to see it getting some major support from the gaming industry as well as the film industry.

Ithildin(10000) 6 days ago [-]

That's great news. I've learned Max and Maya, but my brain could never wrap itself around Blender. I'd definitely try it again once the update comes out.

Vinnl(509) 6 days ago [-]

More info about the changes set to come here: https://www.blender.org/download/releases/2-80/

krautsourced(10000) 6 days ago [-]

It's not _that_ much of an overhaul imho. I still find it to be weird, coming from other apps (Cinema 4D in my case).





Historical Discussions: HTTP Security Headers – A Complete Guide (July 18, 2019: 692 points)

(693) HTTP Security Headers – A Complete Guide

693 points 3 days ago by BCharlie in 4066th position

nullsweep.com | Estimated reading time – 11 minutes | comments | anchor

Companies selling 'security scorecards' are on the rise, and have started to become a factor in enterprise sales. I have heard from customers who were concerned about purchasing from suppliers who had been given poor ratings, and in at least one case changed a purchasing decision based initially on the rating.

I investigated how these ratings companies calculate company security scores, and it turns out they use a combination of HTTP security header usage and IP reputation.

IP reputation is based on blacklists and spam lists combined with public IP ownership data. These should generally be clean as long as your company doesn't spam and can quickly detect and stop malware infections. HTTP security header usage is calculated similarly to how the Mozilla Observatory works.

Therefore, for most companies, their score is largely determined by the security headers being set on public facing websites.

Setting the right headers can be done quickly (usually without significant testing), can improve website security, and can now help you win deals with security conscious customers.

I am dubious about the value of this test methodology and the exorbitant pricing schemes these companies charge. I don't believe it correlates to real product security all that well. However, it certainly increases the importance of spending time setting headers and getting them right.

In this article, I will walk through the commonly evaluated headers, recommend security values for each, and give a sample header setting. At the end of the article, I will include sample setups for common applications and web servers.

Content-Security-Policy

A CSP is used to prevent cross site scripting by specifying which resources are allowed to load. Of all the items in this list, this is perhaps the most time consuming to create and maintain properly and the most prone to risks. During development of your CSP, be careful to test it thoroughly – blocking a content source that your site uses in a valid way will break site functionality.

A great tool for creating a first draft is the Mozilla Laboratory CSP browser extension. Install this in your browser, thoroughly browse the site you want to create a CSP for, and then use the generated CSP on your site. Ideally, also work to refactor the JavaScript so no inline scripts remain, so you can remove the 'unsafe-inline' directive.

CSP's can be complex and confusing, so if you want a deeper dive, see the official site.

A good starting CSP might be the following (this likely requires a lot of modification on a real site). In each directive, add the domains that your site loads content from.


Content-Security-Policy: default-src 'self'; img-src 'self' https://i.imgur.com; object-src 'none'; script-src 'self'; style-src 'self'; frame-ancestors 'self'; base-uri 'self'; form-action 'self';

Strict-Transport-Security

This header tells the browser that the site should only be accessed via HTTPS – always enable it when your site has HTTPS enabled. If you use subdomains, I also recommend enforcing this on any subdomains in use.

Strict-Transport-Security: max-age=3600; includeSubDomains

X-Content-Type-Options

This header ensures that the MIME types set by the application are respected by browsers. This can help prevent certain types of cross site scripting bypasses.

It also reduces unexpected application behavior where a browser may "guess" some kind of content incorrectly, such as when a developer labels a page "HTML" but the browser thinks it looks like JavaScript and tries to render it as JavaScript. This header will ensure the browser always respects the MIME type set by the server.

X-Content-Type-Options: nosniff

Cache-Control

This one is a bit trickier than the others because you likely want different caching policies for different content types.

Any page with sensitive data, such as a user page or a customer checkout page, should be set to no-cache. One reason for this is preventing someone on a shared computer from pressing the back button or going through history and being able to view personal information.

However, pages that change rarely, such as static assets (images, CSS files, and JS files), are good to cache. This could be done on a page by page basis, or using regex on the server configuration.


Header set Cache-Control "no-cache"
<FilesMatch "\.(css|jpg|jpeg|png|gif|js|ico)$">
    Header set Cache-Control "max-age=86400, public"
</FilesMatch>

Expires

This sets the time the cache should expire the current request. It is ignored if the Cache-Control max-age header is set, so we only set it in case a naive scanner is testing for it without considering cache-control.

For security purposes, we will assume that the browser should not cache anything, so we'll set this to a date that always evaluates to the past.

Expires: 0

X-Frame-Options

This header indicates whether the site should be allowed to be displayed within an iFrame.

If a malicious site puts your website within an iframe, the malicious site can perform a clickjacking attack: by overlaying a transparent frame of your site on top of its own bait content, it can trick users into clicking on your site and interacting with it on their behalf (not necessarily clicking where they think they clicked!).

This should always be set to deny unless you are specifically using frames, in which case it should be set to sameorigin. If you are using frames with another site by design, you can whitelist the other domain here as well.

It should also be noted that this header is superseded by the CSP frame-ancestors directive. I still recommend setting this for now to appease tools, but in the future it will likely be phased out.

X-Frame-Options: deny

Access-Control-Allow-Origin

This header tells the browser which other sites' front-end JavaScript code may make requests to the page in question. Unless you need to set this, the default (not sending the header at all) is usually the right setting.

For instance, if SiteA serves up some JavaScript which wants to make a request to siteB, then siteB must serve the response with the header specifying that SiteA is allowed to make this request. If you need to set multiple origins, see the details page on MDN.

This can be a little confusing, so I drew up a diagram to illustrate how this header functions:

Data flow with Access-Control-Allow-Origin
Access-Control-Allow-Origin: http://www.one.site.com

Set-Cookie

Ensure that your cookies are sent via HTTPS (encrypted) only, and that they are not accessible via JavaScript. You can only send Secure cookies if your site also supports HTTPS, which it should. You should always set the Secure and HttpOnly flags:

A sample Cookie definition:

Set-Cookie: <cookie-name>=<cookie-value>; Domain=<domain-value>; Secure; HttpOnly

See the excellent Mozilla documentation on cookies for more information.

X-XSS-Protection

This header instructs browsers to halt execution of detected cross site scripting attacks. It is generally low risk to set, but should still be tested before putting in production.

X-XSS-Protection: 1; mode=block

Web Server Example Configurations

Generally, it's best to add headers site-wide in your server configuration. Cookies are the exception here, as they are often defined in the application itself.

Before adding any headers to your site, I recommend first checking the observatory or manually looking at headers to see which are set already. Some frameworks and servers will automatically set some of these for you, so only implement the ones you need or want to change.

Apache Configuration

A sample Apache setting in .htaccess:

<IfModule mod_headers.c>

    Header set Content-Security-Policy "default-src 'self'; img-src 'self' https://i.imgur.com; object-src 'none'; script-src 'self'; style-src 'self'; frame-ancestors 'self'; base-uri 'self'; form-action 'self';"

    Header set X-XSS-Protection "1; mode=block"
    Header set Access-Control-Allow-Origin "http://www.one.site.com"
    Header set X-Frame-Options "deny"
    Header set X-Content-Type-Options "nosniff"
    Header set Strict-Transport-Security "max-age=3600; includeSubDomains"

    Header set Cache-Control "no-cache"
    Header set Expires "0"

    <FilesMatch "\.(ico|css|js|gif|jpeg|jpg|png|svg|woff|ttf|eot)$">
        Header set Cache-Control "max-age=86400, public"
    </FilesMatch>
</IfModule>

Nginx Configuration


add_header Content-Security-Policy "default-src 'self'; img-src 'self' https://i.imgur.com; object-src 'none'; script-src 'self'; style-src 'self'; frame-ancestors 'self'; base-uri 'self'; form-action 'self';";
add_header X-XSS-Protection "1; mode=block";
add_header Access-Control-Allow-Origin "http://www.one.site.com";
add_header X-Frame-Options "deny";
add_header X-Content-Type-Options "nosniff";
add_header Strict-Transport-Security "max-age=3600; includeSubDomains";
add_header Cache-Control "no-cache";
add_header Expires "0";
location ~* \.(?:ico|css|js|gif|jpe?g|png|svg|woff|ttf|eot)$ {
    try_files $uri @rewriteapp;
    add_header Cache-Control "max-age=86400, public";
}

If you don't have access to the web server, or have complex header setting needs, you may want to set these in the application itself. This can usually be done with framework middleware for an entire site, and on a per-response basis for one-off header setting.

I only included one header for brevity in the examples. Add all that are needed via this method in the same way.

Node and express:

Add a global mount path:

app.use(function(req, res, next) {
    res.header('X-XSS-Protection', '1; mode=block');
    next();
});

Java and Spring:

I don't have a lot of experience with Spring, but Baeldung has a great guide to header setting in Spring.

PHP:

I am not familiar with the various PHP frameworks. Look for middleware that can handle requests. For a single response, it is very simple.

header('X-XSS-Protection: 1; mode=block');

Python / Django

Django includes configurable security middleware that can handle all these settings for you. Enable those first.
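For illustration, a minimal sketch of what those settings can look like in settings.py (example values only, as of Django 2.x, and assuming SecurityMiddleware and XFrameOptionsMiddleware are enabled in MIDDLEWARE):

# settings.py -- example values; tune max-age and redirects to your own site
SECURE_HSTS_SECONDS = 3600              # Strict-Transport-Security max-age
SECURE_HSTS_INCLUDE_SUBDOMAINS = True
SECURE_CONTENT_TYPE_NOSNIFF = True      # X-Content-Type-Options: nosniff
SECURE_BROWSER_XSS_FILTER = True        # X-XSS-Protection: 1; mode=block
SECURE_SSL_REDIRECT = True              # redirect plain HTTP to HTTPS
SESSION_COOKIE_SECURE = True            # Secure flag on the session cookie
CSRF_COOKIE_SECURE = True               # Secure flag on the CSRF cookie
X_FRAME_OPTIONS = 'DENY'                # X-Frame-Options (XFrameOptionsMiddleware)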

For specific pages, you can treat the response like a dictionary. Django has a special way to handle caching that should be investigated if trying to set cache headers this way.

response = HttpResponse()
response['X-XSS-Protection'] = '1; mode=block'

Conclusions

Setting headers is relatively quick and easy. You will get a fairly significant increase in your site's security against data exposure, cross-site scripting, and clickjacking.

You also ensure you don't lose future business deals as a result of company security ratings that rely on this information. This practice seems to be increasing, and I expect it to continue to play a role in enterprise sales in future years.

Did I miss a header you think should be included? Let me know!




All Comments: [-] | anchor

undecidabot(4056) 3 days ago [-]

Nice list. You might want to consider setting a 'Referrer-Policy'[1] for sites with URLs that you'd prefer not to leak.

Also, for 'Set-Cookie', the relatively new 'SameSite'[2] directive would be a good addition for most sites.

Oh, and for CSP, check Google's evaluator out[3].

[1] https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Re...

[2] https://www.owasp.org/index.php/SameSite

[3] https://csp-evaluator.withgoogle.com
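As a concrete starting point along those lines (tune the values to your own site before relying on them):

Referrer-Policy: strict-origin-when-cross-origin
Set-Cookie: <cookie-name>=<cookie-value>; Secure; HttpOnly; SameSite=Lax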

will4274(10000) 3 days ago [-]

Referrer-Policy is nice, but browsers should just default to strict-origin-when-cross-origin and end the mess.

deftnerd(10000) 3 days ago [-]

This is a good basic overview of the basic headers, but I suggest spending some time on Scott Helme's blog. He runs securityheaders.io, a free service that scans your site, and assigns it a letter grade based on what headers and configurations you've applied.

For instance, his explanation of Content Security Policy headers is much more detailed than in the OP's link.

https://scotthelme.co.uk/content-security-policy-an-introduc...

t34543(10000) 2 days ago [-]

Mozilla Observatory does the same thing. https://observatory.mozilla.org/

el_duderino(161) 3 days ago [-]

securityheaders.io is now securityheaders.com

https://scotthelme.co.uk/security-headers-is-changing-domain...

joecot(10000) 3 days ago [-]

I'm a little confused by the examples for Access-Control-Allow-Origin:

> Access-Control-Allow-Origin: http://www.one.site.com

> Access-Control-Allow-Origin: http://www.two.site.com

And in the examples setting both. Because in my experience you cannot set multiple [1]. Lots of people instead set it to * which is both bad and restricts use of other request options (such as withCredentials). It looks like the current working solution is to use regexes to return the right domain [2], but I'm currently having trouble getting that to work, so if there's some better solution that works for people I'd love to hear it.

1. https://developer.mozilla.org/en-US/docs/Web/HTTP/CORS/Error... 2. https://stackoverflow.com/questions/1653308/access-control-a...

BCharlie(4066) 3 days ago [-]

You are right on this - I thought you could set multiple sites by setting multiple headers, but it doesn't work that way, which I should have known because headers don't work that way in general...

The recommended way to do multiple sites seems to be to have the server read the request header, check it against a whitelist, then dynamically respond with it, which seems terrible.

Thanks for catching this - I updated the post to reflect this and make it more clear.

jrockway(3381) 3 days ago [-]

I think the problem that people are running into with CORS is that their webserver was created before CORS was a thing, so it's tough to configure it correctly. What you want to do is if you allow the provided Origin, echo it back in Access-Control-Allow-Origin.

Envoy has a plugin to do this (envoy.cors), allowing you to configure allowed origins the way people want (['*.example.com', 'foo.com']) and then emitting the right headers when a request comes in. It also emits statistics on how many requests were allowed or denied, so you can monitor that your rules are working correctly. If you are using something else, I recommend just having your web application do the logic and supply the right headers. (You should also be prepared to handle an OPTIONS request for CORS preflight.)
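A minimal sketch of that application-side logic, using Flask purely as an illustrative framework (the allowed-origin list is invented for the example):

from flask import Flask, request

app = Flask(__name__)
ALLOWED_ORIGINS = {"https://app.example.com", "https://admin.example.com"}

@app.route("/api/data")
def data():
    return "ok"

@app.after_request
def add_cors_headers(response):
    # Runs for normal requests and for Flask's automatic OPTIONS (preflight) responses.
    origin = request.headers.get("Origin")
    if origin in ALLOWED_ORIGINS:
        # Echo back only origins we explicitly trust, never '*'.
        response.headers["Access-Control-Allow-Origin"] = origin
        response.headers.add("Vary", "Origin")  # keep caches from mixing origins
        response.headers["Access-Control-Allow-Credentials"] = "true"
        response.headers["Access-Control-Allow-Methods"] = "GET, POST, OPTIONS"
        response.headers["Access-Control-Allow-Headers"] = "Content-Type"
    return response

Reflecting the Origin only when it is on the whitelist, together with Vary: Origin, avoids both the wildcard problem and caches serving one origin's response to another.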

cujanovic(1189) 3 days ago [-]
kureikain(3921) 3 days ago [-]

This is such a great and complete guide. Lots of headers with examples and explanations. I have been looking for something like this.

I will include it in my newsletter[0] next Monday if you don't mind.

---

[0]: https://betterdev.link

the_common_man(4082) 3 days ago [-]

X-Frame-Options is obsolete. Most browsers complain loudly in the console or ignore the header. Use CSP instead.

floatingatoll(3889) 3 days ago [-]

For those wondering, CSP 'frame-ancestors' if I remember correctly.

will4274(10000) 3 days ago [-]

> X-frame-options is obsolete. Most browsers complain loudly on the console or ignore the header.

The deny option seems to work just fine. My default browser (Firefox) doesn't complain. MDN doesn't indicate any browsers have dropped support. Plus, dropping support would be an unmitigated and unnecessary unforced security error, by making old sites insecure. Do you have a link to an example of a browser ignoring the header?

Avamander(10000) 3 days ago [-]

Instead of X-Frame-Options one should use CSP's frame-ancestors option; it has wider support among modern browsers. But CSP deserves more than one paragraph in general.

He also missed Expect-Staple and Expect-CT. In addition, most security headers have the option to specify a URI where failures are reported, which is very important in production environments.
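For example, CSP supports a reporting directive and can even be rolled out in report-only mode first (the reporting endpoint below is a placeholder):

Content-Security-Policy-Report-Only: default-src 'self'; report-uri https://example.com/csp-reports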

tialaramex(10000) 3 days ago [-]

Expect-CT is pretty marginal. In principle a browser could implement Certificate Transparency but then only bother to enforce it if Expect-CT is present, in practice the policy ends up being that they'll enforce CT system-wide after some date. Setting Expect-CT doesn't have any effect on a browser that can't understand SCTs anyway, so that leaves basically no audience.

Furthermore, especially with Symantec out of the picture, there is no broad consumer market for certificates from the Web PKI which don't have SCTs. The audience of people who know they want a certificate is hugely tilted towards people with very limited grasp of what's going on, almost all of whom definitely need embedded SCTs or they're in for a bad surprise. So it doesn't even make sense to have a checkbox for 'I don't want SCTs' because 99% of people who click it were just clicking boxes without understanding them and will subsequently complain that the certificate doesn't 'work' because it didn't have any SCTs baked into it.

There are CAs with no logging for either industrial applications which aren't built out of a web browser (and so don't check SCTs) and are due to be retired before it'd make sense to upgrade them (most are gone in 2019 or 2020) or for specialist customers like Google whose servers are set up to go get them SCTs at the last moment, to be stapled later. Neither is a product with a consumer audience. Which means neither is a plausible source of certificates for your hypothetical security adversary.

As a result, in reality Expect-CT doesn't end up defending you against anything that's actually likely to happen, making it probably a waste of a few bytes.

BCharlie(4066) 3 days ago [-]

That is true! I do set frame-ancestors in the sample CSP for this reason. I could probably do a dedicated post on CSP to do it justice, but don't want to overwhelm anyone who just wants to start setting headers.

One good reason to set both options, as I mention in the post, is that scanners who rate site security posture may penalize site owners who don't set both - no harm in doing it that I know of.

yyyk(10000) 3 days ago [-]

The X-XSS-Protection header recommendation is a Zombie recommendation which is at best outdated and at worst harmful. Its origins are based on old IE bugs but it introduces worse issues.

IMHO, the best value for X-XSS-Protection is either 0 (disabling it completely like Facebook does) or not providing the value at all and just letting the client browser use its default. Why?

First, XSS 'protection' is about to no longer be implemented by most browsers. Google has decided to deprecate Chrome's XSS Auditor[0] and stop supporting XSS 'protection'. Microsoft has already removed its XSS filter from Edge[1]. Mozilla has never bothered to support it in Firefox.

So most leading net companies already think it doesn't work. Safari of course supports the much stronger CSP. So it's only possibly useful on IE - if you don't support IE, might as well save the bytes.

Second, XSS 'protection' protects less than one might think. In all implementing browsers, it has always been implemented as part of the HTML parser, making it useless against DOM-based attacks (and strictly inferior to CSP)[2].

Worse, the XSS 'protection' can be used to create security flaws. IE's default is to detect XSS and try to filter it out; this has been known to be buggy to the point of creating XSS on safe pages[3], which is why the typical recommendation has been the block behaviour. But blocking has itself been exploited in the past[4], and has side-channel leaks that even Google considers too difficult to catch[0], to the point of preferring to remove XSS 'protection' altogether. Blocking also has an obvious social exploitation which can create attacks or make attacks more serious.[5]

In short, the best idea is to get rid of browsers' XSS 'protection' ASAP in favour of CSP, preferably by having all browsers deprecate it. This is happening anyway, so might as well save the bytes. But if you do provide the header, I suggest disable XSS 'protection' altogether.

[0] https://groups.google.com/a/chromium.org/forum/#!msg/blink-d...

[1] https://developer.microsoft.com/en-us/microsoft-edge/platfor...

[2] e.g. https://github.com/WebKit/webkit/blob/d70365e65de64b8f6eaf1f...

[3] CVE-2014-6328, CVE-2015-6164, CVE-2016-3212..

[4] https://portswigger.net/blog/abusing-chromes-xss-auditor-to-...

[5] Assume that an attacker has enough access to normally allow XSS. If he does not, the filter is useless. If he does, the attacker can by definition trigger the filter. So trigger the filter, get a webpage blocked, and call the affected user posing as 'support'. From there the exploitation is obvious, and can be much worse than mere XSS. Now, remember that all those XSS filters in all likelihood have false positives that may not be blocked by other defences because they're not attacks. So it's quite possible the filter introduces a social attack that wouldn't be possible otherwise!

Hat tip: https://frederik-braun.com/xssauditor-bad.html, which gave me even more reasons to think browsers' XSS 'protection' is awful. I didn't know about [2] before reading his entry.

BCharlie(4066) 3 days ago [-]

Thanks for this response - lots of new information here that I'll have to read up on!

yyyk(10000) 3 days ago [-]

For [3] (exploiting IE's XSS filter default behaviour to create XSS) see also https://www.slideshare.net/codeblue_jp/xss-attacks-exploitin... .

The author recommends either changing the default behaviour to block or disabling the filter altogether. I believe experience has shown this protection method cannot be fixed.

Ultimately, safe code is code that can be reasoned about, but there never was even a specification for this 'feature'. By comparison, CSP has a strict specification. It covers more attacks, and has a better failure mode than the XSS protection's 'filter' and 'block the entire page load' behaviours.

Grollicus(10000) 3 days ago [-]

It should be mentioned for Access-Control-Allow-Origin that not setting the header is the safe default, and setting it weakens site security.

BCharlie(4066) 3 days ago [-]

Great point! I added a sentence to say that the default is all that's needed.

spectre256(4137) 3 days ago [-]

It's definitely worth repeating the warning that, while very useful, Strict-Transport-Security should be deployed with special care!

While the author's example of `max-age=3600` means there's only an hour of potential problems, enabling Strict-Transport-Security has the potential to prevent people from accessing your site if for whatever reason you are no longer able to serve HTTPS traffic.

Considering that another common setting is to enable HSTS for a year, it's worth enabling only deliberately and with some thought.

sjwright(10000) 3 days ago [-]

Not being able to serve HTTPS is not a real concern. It seems possible but in reality it simply won't happen. If it ever does break, you fix it, you don't change protocols.

Once you go HTTPS you're all in regardless whether or not you've set HSTS headers. Let's say your HTTPS certificate fails and you can't get it replaced. So what, you're going to temporarily move back to HTTP for a few days? Not going to happen! Everyone has already bookmarked/linked/shared/crawled your HTTPS URLs. There is no automated way to downgrade people to HTTP, so only the geeks who would even think to try removing the "s" will be able to visit. And most geeks won't even do that because we've probably never encountered a situation where that has ever helped.

BCharlie(4066) 3 days ago [-]

I think it's a good point which is why I set the time low, even though many other resources set it to a week or longer. I just don't like very long cache times for anything that can break, so that site owners have a little more flexibility in case something goes wrong down the line.

Someone1234(4161) 3 days ago [-]

It is a good point.

I would like to add that a lot of web-apps break if they aren't served over HTTPS regardless, due to the Secure flag being set on cookies. For example if we run ours in HTTP (even for development) it will successfully set the cookie (+Secure +HttpOnly) but cannot read the cookie back and you get stuck on the login page indefinitely.

So we just set ours to a year, and consider HTTPS to be a mission-critical-tier component. If it goes down, the site is simply 'down.'

HSTS is kind of the 'secret sauce' that gives developers coverage to mandate Secure cookies only. Before then we'd get caught in 'what if' bikeshedding[0].

[0] https://en.wiktionary.org/wiki/bikeshedding

zaarn(10000) 3 days ago [-]

I set HSTS to 10 years. My infrastructure isn't even capable of serving HTTP other than for LetsEncrypt certs. An outage on HTTPS is a full outage. Most of my sites handle user data in some way, so HTTPS is mandatory anyway, as per my interpretation of the GDPR.

barathvutukuri(10000) 3 days ago [-]

HSTS Preload is also nice. The common HSTS Preload configuration and the opt-in form are at https://hstspreload.org/
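For reference, the preload list's submission requirements roughly translate to a header like this (a max-age of at least one year, includeSubDomains, and the preload token):

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload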

mtgx(151) 3 days ago [-]

> if for whatever reason you are no longer able to serve HTTPS traffic

Isn't that how it should work? Would you rather use Gmail over HTTP if its HTTPS stopped working? Besides, just supporting HTTP fallback means you're much more vulnerable to downgrade attacks -- it's the first thing attackers will attempt to use.

pimterry(3457) 3 days ago [-]

The only risk is if you've served HTTPS traffic properly with HSTS headers to users, and then your server is later unable to correctly handle HTTPS traffic. Note that HSTS headers on a non-HTTPS response are ignored.

Whilst there's cases where you might fail to serve HTTPS traffic temporarily (i.e. if your cert expires and you don't handle it) almost all HTTPS problems are quick fixes, and are probably your #1 priority regardless of HSTS. If your HTTPS setup is broken and your application has any real security concerns at all then it's arguably better to be inaccessible, rather than quietly allow insecure traffic in the meantime, exposing all your users' traffic. I don't know many good reasons you'd suddenly want to go from HTTPS back to only supporting plain HTTP either. I just can't see any realistic scenarios where HSTS causes you extra problems.

tialaramex(10000) 3 days ago [-]

I don't get people who worry about _feature_ pinning like this.

I imagine them looking at a business continuity plan and being aghast - why are we spending money to manage the risk from a wildfire in California overwhelming our site there, yet we haven't spent ten times as much on a zombie werewolf defence grid or to protect against winged bears?

HSTS defends against a real problem that actually happens, like those Californian wildfires, whereas 'whatever reason you are no longer able to serve HTTPS traffic' is a fantasy like the winged bears that you don't need to concern yourself with.

ehPReth(1581) 3 days ago [-]

Speaking of HSTS... does anyone here know if Firebase Hosting (Google Cloud) plans to support custom HSTS headers with custom domains? I can't add things like includeSubDomains or preload at present, unfortunately.

txcwpalpha(4158) 3 days ago [-]

Unless your site is nothing but a dumb billboard serving nothing but static assets (and maybe even then...), the inability to serve HTTPS traffic should be considered a breaking issue and you shouldn't be serving anything until your HTTPS is restored. 'Reduced security' is not a valid fallback option.

That might not be something that a company's management team wants to hear, but indicating to your users that falling back to insecure HTTP is just something that happens sometimes and they should continue using your site is one of the worst things you can possibly do in terms of security.





Historical Discussions: Why did we wait so long for the bicycle? (July 15, 2019: 687 points)

(688) Why did we wait so long for the bicycle?

688 points 6 days ago by exolymph in 429th position

rootsofprogress.org | Estimated reading time – 19 minutes | comments | anchor

Why did we wait so long for the bicycle?

July 13, 2019 · 11 min read

The bicycle, as we know it today, was not invented until the late 1800s. Yet it was a simple mechanical invention. It would seem to require no brilliant inventive insight, and certainly no scientific background.

Why, then, wasn't it invented much earlier?

I asked this question on Twitter, and read some discussion on Quora. People proposed many hypotheses, including:

  • Technology factors. Metalworking improved a lot in the 1800s: we got improved iron refining and eventually cheap steel, better processes for shaping metal, and ability to make parts like hollow tubes. Wheel technology improved: wire-spoke (aka tension-spoked) wheels replaced heavier designs; vulcanized rubber (1839) was needed for tires; inflatable tires weren't invented until 1887. Chains, gears, and ball bearings are all crucial parts that require advanced manufacturing techniques for precision and cost.

  • Design iteration. Early bicycles were inconvenient and dangerous. The first version didn't even have pedals. Some versions didn't have steering, and could only be turned by leaning. (!) The famous "penny-farthing" design, with its huge front wheel, made it impossible to balance with your feet, was prone to tipping forward on a hard stop, and generally left the rider high in the air, all of which increased risk of injury. It took decades of iteration to get to a successful bicycle model.

  • Quality of roads. Roads in the 1800s and earlier were terrible by modern standards. Roads were often dirt, rutted from the passage of many carts, turning muddy in the rain. Macadam paving, which gave smooth surfaces to roads, wasn't invented until about 1820. City roads at the time were paved with cobblestones, which were good for horses but too bumpy for bicycles. (The unevenness was apparently a feature, assisting in the runoff of sewage—leading one Quora answer to claim that the construction of city sewers was what opened the door to bicycles.)

  • Competition from horses. Horses were a common and accepted mode of transportation at the time. They could deal with all kinds of roads. They could carry heavy loads. Who then needs a bicycle? In this connection, it has been claimed that the bicycle was invented in response to food shortages due to the "Year without a Summer", an 1816 weather event caused by the volcanic explosion of Mt. Tambora the year earlier, which darkened skies and lowered temperatures in many parts of the world. The agricultural crisis caused horses as well as people to starve, which led to some horses being slaughtered for food, and made the remaining ones more expensive to feed. This could have motivated the search for alternatives.

  • General economic growth. Multiple commenters pointed out the need for a middle class to provide demand for such an invention. If all you have are a lot of poor peasants and a few aristocrats (who, by the way, have horses, carriages, and drivers), there isn't much of a market for bicycles. This is more plausible when you realize that bicycles were more of a hobby for entertainment before they became a practical means of transportation.

  • Cultural factors. Maybe there was just a general lack of interest in useful mechanical inventions until a certain point in history? But when did this change, and why?


These are all good hypotheses. But some of them start to buckle under pressure:

The quality of roads is relevant, but not really the answer. Bicycles can be ridden on dirt roads or sidewalks (although the latter led to run-ins with pedestrians and made bicycles unpopular among the public at first). And historically, roads didn't improve until after bicycles became common—indeed it seems that it was in part the cyclists who called for the improvement of roads.

I don't think horses explain it either. A bicycle, from what I've read, was cheaper to buy than a horse, and it was certainly cheaper to maintain (if nothing else, you don't have to feed a bicycle). And it turns out that inventors were interested in the problem of human-powered vehicles, dispensing with the need for horses, for a long time before the modern bicycle. Even Karl von Drais, who invented the first two-wheeled human-powered vehicle after the Year without a Summer, had been working on the problem for years before that.

Technology factors are more convincing to me. They may have been necessary for bicycles to become practical and cheap enough to take off. But they weren't needed for early experimentation. Frames can be built of wood. Wheels can be rimmed with metal. Gears can be omitted. Chains can be replaced with belts; some early designs even used treadles instead of pedals, and at least one design drove the wheels with levers, as on a steam locomotive.

So what's the real explanation?


To understand this, I dug into the history of the bicycle.

The concept of a human-powered vehicle goes back many centuries. The earliest reference I have found is to Venetian engineer Giovanni Fontana, who in the early 1400s described a four-wheeled carriage powered by a driver pulling on a loop of rope connected by gears to the wheels (it's unclear if he ever even attempted to build such a machine; Fontana sketched a lot of strange things).

Giovanni Fontana's self-driving carriage

Another early concept was described in the book Bicycle by David V. Herlihy:

More than three centuries ago, the distinguished French mathematician Jacques Ozanam spelled out the theoretical advantages of a human-powered carriage "in which one can drive oneself wherever one pleases, without horses." Its owner could freely roam along the roads without having to care for an animal and might even enjoy a healthy exercise in the process. Moreover, this particular type of "self-moving" vehicle, in contrast to those that called for wind or steam for propulsion, would run on that most abundant and accessible of all resources: willpower. But how to construct such a valuable vehicle? That was the twenty-third of some fifty "useful and entertaining" problems Ozanam identified and addressed in his famous Récréations Mathématiques et Physiques, published in 1696.

Ozanam's book presented a proposed solution from another inventor: another four-wheeled carriage, driven by two people (one to steer, one to power the vehicle by stepping up and down on large treadles connected to the wheels by ropes, pulleys, and gears).

Human-powered carriage in Récréations Mathématiques Columbia University Libraries

It seems that for centuries, the carriage was the model for human-powered vehicles. Various inventors tried their hand at designs, and some were even built. There is a record in a London journal of an attempt in 1774 that went up to six miles an hour. French inventor Jean-Pierre Blanchard (who would later go on to fame in ballooning) built a human-powered carriage that went a dozen miles from Paris to Versailles. An American mechanic named Bolton built a version in 1804 that used mechanical leverage from interlocking gears. Presumably, all these attempts went nowhere because the machines were too large and heavy to be practical.

The key insight was to stop trying to build a mechanical carriage, and instead build something more like a mechanical horse. This step was taken by the aforementioned Karl von Drais in the early 1800s. Drais was an aristocrat; he held a position as forest master in Baden that is said to have given him free time to tinker. His first attempts, beginning in 1813, were four-wheeled carriages like their predecessors, and like them failed to gain the support of authorities.

But in 1817 (possibly motivated by the aforementioned food crisis and resultant shortage of horses, although this is unclear), he tried again with a new design: a two-wheeled, one-person vehicle that is a recognizable ancestor of the modern bicycle. It was made of wood, with iron tires. He called it the Laufmaschine, or "running machine"; it had no pedals, and instead was powered by directly pushing off the ground with one's feet. It was also called the "velocipede" (from the Latin for "swift foot") or the "draisine" (English) or "draisienne" (French) after its inventor; an improved version made by a London coachmaker was known in England as the "pedestrian curricle".

Draisine c. 1820, Kurpfälzisches Museum, Heidelberg Wikimedia Commons (CC BY-SA 3.0)
New-York Tribune, Sep 1894 Flickr / Karin Dalziel (CC BY-NC 2.0)

Without pedals or gears, this proto-bicycle couldn't achieve the speed or efficiency of modern designs. But, like the scooters still used by children today, it allowed you to coast, especially downhill, and it held your weight as you moved forward. Drais got up to 12 miles per hour on his machine. It became a fad in Europe in 1818–19, then faded. It seems the reasons were a combination of the potential for injury and the general annoyance of the public that these things were being driven through pedestrian areas such as sidewalks and parks (some things never change; we're repeating this today with the scooter wars in San Francisco and other cities).

Michaux "boneshaker", 1870 Wikimedia / Classic Motorcycle Archive (CC BY-SA 3.0)
"The American Velocipede", wood engraving by Theodore Davis, Harper's Weekly, Dec 1868

The next key advance didn't come until decades later, when someone put pedals on the bike. There are conflicting claims to first inventor (going back to 1839), but it was definitely done by the 1860s in France. In any case, it was in the 1860s that bicycle development really took off. Pedals allowed the rider to propel the machine faster and more efficiently. This model was manufactured in France, at first with wooden frames, later with iron, and became commonly known as the "boneshaker" (which gives you an idea of how rough the ride still was).

Michaux "boneshaker", 1870 Wikimedia / Classic Motorcycle Archive (CC BY-SA 3.0)
"The American Velocipede", wood engraving by Theodore Davis, Harper's Weekly, Dec 1868

At this point, though, there were still no gears or chains. The pedals were attached directly to the front wheel. This gave the rider little mechanical advantage: it's the same as a fixie with a 1:1 gear ratio (vs. the ratios most commonly used today which are closer to 3:1). Think of what it's like to pedal a bike that's in too low a gear: you pump your legs a lot without going very fast.

The only solution was to make the wheel larger, leading around 1870 to the ridiculous-looking "penny-farthing" or "high-wheel" design with the huge front wheel, which you've probably seen and may associate with the late 1800s. By around this time, bicycles were being made with metal frames, wire-spoke wheels, and solid rubber (not yet inflatable) tires. This design did give a faster ride, and a smoother one, since the large wheel absorbed shocks better. But it required acrobatic balance to ride, and as noted above it was prone to nasty spills and injuries, including "taking a header" if you stopped suddenly.

Penny-farthing bicycle Flickr / calitexican (CC BY-NC-SA 2.0)

The third and final key advance, then, was to separate the pedals from the wheel. Variations on this "safety bicycle", including at least one driven by treadles and levers, were attempted from the 1870s if not before. The first commercially successful model, using the familiar crank and chain design, was produced in 1885 by John Starley. Finally, in 1888, inflatable (pneumatic) tires were introduced by John Dunlop, cushioning the ride and eliminating the last advantage of the penny-farthing.

So, by the end of the 1880s, bicycles had evolved into the form we know today, with (approximately) equal-sized wheels, pedals, chains, metal frames, wire-spoke wheels, and inflatable rubber tires.


So what can we conclude?

First, the correct design was not obvious. For centuries, progress was stalled because inventors were all trying to create multi-person four-wheeled carriages, rather than single-person two-wheeled vehicles. It's unclear why this was; certainly inventors were copying an existing mode of transportation, but why would they draw inspiration only from the horse-and-carriage, and not from the horse-and-rider? (Some commenters have suggested that it was not obvious that a two-wheeled vehicle would balance, but I find this unconvincing given how many other things people have learned to balance on, from dugout canoes to horses themselves.) It's possible (I'm purely speculating here) that early mechanical inventors had a harder time realizing the fundamental impracticality of the carriage design because they didn't have much in the way of mathematical engineering principles to go on, but then again it's unclear what led to Drais's breakthrough.

And even after Drais hit on the two-wheeled design, it took multiple iterations, which happened over decades, to get to a design that was efficient, comfortable, and safe.

Early "velocipede" models, from an 1887 German encyclopedia. Many designs were tried Wikimedia Commons

Second, advances in materials and manufacturing were probably necessary for a commercially successful bicycle. It's a bit hard, from where I stand, to untangle which advances in design were made possible by new materials and techniques, and which were simply sparks of inventive imagination that hadn't been conceived or developed before. But the fact that people were willing to put up with the precarious high-wheeled design indicates to me that pneumatic tires were crucial. And it's plausible to me that advanced metalworking was needed to make small, lightweight chains and gears of high and consistent quality, at an acceptable price—and that no other design, such as a belt or lever, would have worked instead. It's also plausible to me that wooden frames just weren't light and strong enough to be practical (I certainly wouldn't be eager to ride a wooden bicycle today).

But we can go deeper, and ask the questions that inspired my intense interest in this question in the first place. Why was no one even experimenting with two-wheeled vehicles until the 1800s? And why was no one, as far as we know, even considering the question of human-powered vehicles until the 1400s? Why weren't there bicycle mechanics in the 1300s, when there were clockmakers, or at least by the 1500s, when we had watches? Or among the ancient Romans, who built water mills and harvesting machines? Or the Greeks, who built the Antikythera mechanism? Even if they didn't have tires and chains, why weren't these societies at least experimenting with draisines? Or even the failed carriage designs?

To even begin to answer this, we have to realize that it's part of a much wider phenomenon. I asked the same question of the cotton gin, which unlike the bicycle did not require advanced materials: it's a wooden box, a wire mesh, and a drum with wire teeth; in fact, it was so simple that once the concept was out, plantation owners made bootleg copies by hand (depriving Eli Whitney of most of his patent royalties). The same question can be asked of all of the key inventions of textile mechanization; Anton Howes, an economic historian who chimed in on the Twitter thread linked above, has noted of John Kay's flying shuttle:

Kay's innovation was extraordinary in its simplicity. As the inventor Bennet Woodcroft put it, weaving with an ordinary shuttle had been "performed for upwards of five thousand years, by millions of skilled workmen, without any improvement being made to expedite the operation, until the year 1733". All Kay added was some wood and some string. And he applied it to weaving wool, which had been England's main industry since the middle ages. He had no special skill, he required no special understanding of science for it, and he faced no special incentive to do it. As for institutions, the flying shuttle was technically illegal because it saved labour, the patent was immediately pirated by competitors to little avail, and Kay was forced to move to France, hounded out of the country by angry weavers who threatened his property and even his life. Kay faced no special incentives — he even innovated despite some formidable social and legal barriers.

There are also other stories in which an early attempt at invention was demonstrated, the idea found no backers if it wasn't already fully viable, and then development was dropped for decades. Richard Trevithick's early, failed experiments with locomotives come to mind.

In light of this, I think the deepest explanation is in general economic and cultural factors. Regarding economic factors, it seems that there needs to be a certain level of surplus to support the culture-wide research and development effort that creates inventions. Note that Karl von Drais was a baron who apparently had a cushy job and invented in his spare time. This is common of researchers of that era: they were often aristocrats or otherwise independently wealthy (and those who weren't had to scramble for support from wealthy patrons). Today we have research labs in both universities and corporations, plus venture capitalists to fund development of new products and services. The moment it becomes clear that a certain type of innovation might be possible, there are multiple teams funded and hustling to bring it to market. There are no multi-decade gaps in the innovation timeline anymore, or at least vastly fewer.

Looking at economic factors on the demand side, surplus also would seem to create markets for new products. Maybe GDP per capita just has to hit a certain point before people even have time, attention and energy to think about new inventions that aren't literally putting food on the table, a roof over your head, or a shirt on your back.

Finally, there are cultural factors. Howes says that "innovation is not in human nature, but is instead received. ... when people do not innovate, it is often simply because it never occurs to them to do so." Joel Mokyr says, similarly, that "progress isn't natural" (and his book on this topic, A Culture of Growth, helped inspire this blog). I agree with both.

Fully elucidating these economic and cultural factors is a major future project of this blog.


Sources and further reading: Excerpt from Bicycle, by David V. Herlihy, "Of Velocipedes and Draisiennes", "Who Invented the Bicycle?", and many Wikipedia articles including "History of the bicycle".




All Comments: [-] | anchor

peterwwillis(2543) 6 days ago [-]

Why did we wait so long for the elevator? Circular dependency and incentive. Combining a ratchet with a pulley is incredibly simple and both have been around since the ancient Greeks. But with a few exceptions, it didn't make sense to build buildings big enough to need elevators. I mean, how would you get up and down the buildings? Answer: an elevator... but it didn't exist yet... hence, nobody could imagine how to get up and down tall buildings efficiently. That, and steel-framed buildings that could support themselves over 10 or so floors didn't come into vogue until after the renaissance.

wmf(2007) 6 days ago [-]

And there wasn't enough water pressure for the upper floors.

iguy(10000) 6 days ago [-]

There's about five centuries in there between the renaissance and steel-framed buildings!

But of course cranes existed (on building sites, and in factories) since approximately the day after we invented the rope. But an elevator safe and reliable enough to make rich people buy an apartment above the 2nd floor? And cities dense enough that they were tempted to do so? Those took longer.

nkoren(1702) 6 days ago [-]

Innovation takes an unreasonably twisty path. An even more stark example: suitcases with wheels. If you have the technology to build a siege engine or a crossbow, then surely you can build a suitcase with wheels. And yet they post-date the Saturn V rocket. How reasonable is that?

sobani(10000) 5 days ago [-]

Why would you need a suitcase with wheels when you have a porter who carries your cases for you? Remember that if you could afford to fly in the sixties, you could easily afford to have someone else deal with your luggage.

tlb(1234) 6 days ago [-]

The unicycle was invented shortly afterwards. Here's a patent from 1869, for an over-complicated unicycle with a treadle mechanism instead of pedals on cranks. https://patents.google.com/patent/US87355

Many of the objections about how complicated the bicycle is don't apply to the unicycle. The Romans could have made a perfectly good one out of wood with brass cranks. But maybe it was even less obvious than the bicycle that it was possible to ride.

jessaustin(378) 6 days ago [-]

If unicycles had come first, would bicycles have ever surpassed them?

choiway(10000) 6 days ago [-]

I don't think the modern two wheeled in-line design is an obvious starting point or design iteration. It's not obviously user friendly either. Proof: 90% of the people reading this thread probably had to be 'taught' how to ride a two wheeled bicycle.

jessaustin(378) 6 days ago [-]

This might be a function of age and development? I couldn't learn to bicycle at the 'proper' age. (7? 9? I don't know exactly when it was but I could tell my parents were frustrated.) I just couldn't do it. Later as a teenager I tried it and it was easy.

otabdeveloper2(10000) 6 days ago [-]

Bad quality article.

There's a simple answer: ball bearings are relatively high tech that requires lots of innovation to come together at the same time.

mc32(4111) 6 days ago [-]

True for a modern bike, but a brass (whatever metal) bushing would work for a more primitive bike.

twic(3492) 6 days ago [-]

I went looking to see when ball bearings were invented, and came across this wildly detailed history, prepared for NASA in the 1980s:

https://ntrs.nasa.gov/archive/nasa/casi.ntrs.nasa.gov/198100...

It's page 5 before it even gets to the invention of the wheel!

SilasX(4032) 6 days ago [-]

Relatedly, my favorite example of 'why didn't this get invented before' is wheeled luggage which (from what I've read) wasn't sold until 1989, even though the wheelbarrow (the same basic idea) has been around since prehistoric times.

IIRC, the reason it didn't appear sooner was a combination of a) getting reliable castors[1] for wheels that small is a non-trivial problem, and b) there wasn't much demand for it until the 80s.

[1] These things: https://en.wikipedia.org/wiki/Caster

Robotbeat(10000) 6 days ago [-]

By the way, push bikes (i.e. the hobby horse) are very common among toddler and preschool age children in my circles. I don't see why a wooden version of this couldn't have been common much earlier than 1817. Wood is sufficient for its construction, although I do suppose a metal bearing would be superior.

jessaustin(378) 6 days ago [-]

At the very least it would be a fun way to descend steep hills.

simulate(2711) 6 days ago [-]

In 100 years will we wonder why recumbent bicycles took so long to catch on? MIT's David Gordon Wilson, the writer of the excellent book Bicycling Science, had been advocating for recumbents as a more efficient design to replace traditional modern bicycles for decades.

http://news.mit.edu/2017/mit-designer-inventor-author-bike-b... and https://mitpress.mit.edu/books/bicycling-science-third-editi...

UI_at_80x24(10000) 6 days ago [-]

As a bent rider myself, I too am baffled.

From the very first instant I saw one (in the late 1970s or early 80s, Dick Ryan was interviewed on broadcast TV, likely as a novelty), I was captivated. My child mind saw this as 'proper evolution' and I had to have one.

Here's a great summary of the history of the recumbent bike:

https://www.lightningbikes.com/riders/martin-krieg/recumbent...

Some people are intimidated by change, others are energized by challenging the status quo. Those will always be my heroes.

GhostVII(10000) 6 days ago [-]

I don't think I would ever regularly use a recumbent bicycle, since you don't seem to have as much control over the bike as with a traditional one. You can't jump a curb or swerve around parked cars very easily on a recumbent. Also it's nice to be over top of the bike so you can easily jump off or redistribute your weight whenever you have to.

davesmith1983(10000) 6 days ago [-]

I've ridden many different types of bicycle, from fixed gears and single-speed mountain bikes to more traditional mountain bikes and racing bikes. Recumbents are horrible to ride in traffic (you are below the height of most traffic) and the handling is atrocious.

Yes they are maybe more efficient but they are not pleasant to ride.

TBH the current design is efficient enough and well understood in terms of the technology. Almost all improvements now are iterative.

Also, more modern materials (carbon fibre, aluminium) are either expensive, brittle (carbon fibre is easily damaged in a crash) or both. A lot of riders (especially the fixed crowd) have a saying, 'Steel is Real': frames are normally cheap, easy to come by, and even after crashes easily repaired (you can just bend them back most of the time).

The same can be said about many of the more modern gearing systems. Anything past 2 chainrings up front and 8 speeds at the back is again typically more expensive to fix, parts are less easy to come by, and it is more prone to failure.

Shimano's hub gears need regular maintenance, whereas a traditional derailleur setup will go on for thousands of miles with only basic maintenance. Maintenance on a derailleur system is typically cleaning (fairy liquid is fine) and cheap lubrication that can be bought for a few pounds/dollars, which will suffice in most cases.

icebraining(3594) 6 days ago [-]

First mover advantage? Upright bikes are everywhere, and I could easily find one to learn, plus they're cheap. Recumbents simply don't offer enough perceived advantage to justify the investment to even try one out.

lazyjones(4088) 6 days ago [-]

> In 100 years will we wonder why recumbent bicycles took so long to catch on?

In 100 years, most likely only crazy hipsters will know what a bicycle is because everything will be electrified. E-Scooters will eradicate bicycles for most uses in the next 10 years or so.

giobox(4164) 6 days ago [-]

While efficient conversion of rider input to forward motion is obviously important in bicycles, it's not the only attribute that matters. Recumbent bikes make efficiency gains here in some riding scenarios at the cost of a great many other attributes, which is why they have never really caught on.

Also, they are really hard for young people to pop wheelies on, and that's important too :)

thedogeye(3319) 6 days ago [-]

I think the post downplays the value of a horse. Horses are incredible. Not only do they provide transport that is even better than bicycle (you don't have to pedal, and they can go over almost any terrain) but they also provide labor. Indeed, horses actually generate a net surplus of labor above the labor required to grow food to feed them. That is, a farmer with a horse can grow far more food than a farmer without a horse, even after all the food the horse eats.

If I could safely and legally keep a horse in front of my house and office I would totally get rid of my bike.

quickthrower2(1327) 6 days ago [-]

Then I hope you would clean up its poo.

hermitdev(10000) 6 days ago [-]

> City roads at the time were paved with cobblestones, which were good for horses but too bumpy for bicycles.

This is one of the draws for the Paris-Roubaix spring classic race, a 200+ mile, single-day race with dozens of sections of rural cobblestones. Though generally well maintained, the roads are narrow and brutal on modern road-racing carbon-fiber bikes on 23mm wide tires. Lots of attrition from accidents. Typically pretty wet weather for the (spring) race, and the cobbles are slippery and also get muddy. Riders tend to ride on the dirt shoulder when they can, because it's smoother, but this can lead to tire punctures. The sections of cobbles are also narrow, so getting around 200 riders through is harrowing by itself. The smallest mistake by one rider compounds quickly.

Also, mechanicals such as punctured tires and tossed chains are common. Rough corners/edges on the stones and gaps between cobbles easily puncture tires. The bikes have no suspension, so the chains bounce quite a bit, easily being tossed and, in extreme cases, bent or broken if they jump a chainring, requiring a replacement.

For their efforts, the trophy? A cobblestone with adornment.

The race is a spectacle. Probably one of the greatest one-day spring classics of the UCI Pro Tour. 6 hours or more of grueling, punishing cycling in crappy weather while being vibrated like crazy from the cobbles (the whole race isn't cobbles, but significant sections are, usually 20+ sections, each around a few kilometers in length).

garethrees(10000) 6 days ago [-]

> Typically pretty wet weather for the (spring) race

Paris–Roubaix is certainly memorable when it's wet and muddy, but the last time it happened was in 2002! Early spring is the driest part of the year in north-eastern France, and so the usual conditions for the race are dry and dusty. See the Inner Ring: https://inrng.com/2016/04/rain-for-roubaix/

jaclaz(3915) 6 days ago [-]

As a side-side note, ever asked someone (friends, family, etc.) to draw a bicycle by heart?

This guy did:

http://www.gianlucagimini.it/prototypes/velocipedia.html

dec0dedab0de(4121) 6 days ago [-]

This is cool. Except it's driving me nuts that I can't figure out what's wrong with the first picture, or at least why it would immediately break.

snazz(3535) 6 days ago [-]

Those renders are extremely cool. It looked like he made them without any 3D modeling at all which seems really challenging. The perspective is spot-on.

brookhaven_dude(10000) 6 days ago [-]
evilolive(10000) 6 days ago [-]

could there be any simpler explanations like a modern renovation of the temple where this was added?

are there other recorded appearances of this modern-looking but very-early bicycle in the region? why was there a gap in its existence until it was re-introduced in the 20th century?

kingludite(10000) 6 days ago [-]

It's a lovely article, I much enjoyed reading it. I now think of it as part of a long list of scattered publications that have tried to describe the same process, each with their own flower bed of opinions I don't agree with and soiled with technical misconceptions.

Having read a good number of patents, and with an interest in the innovation process, I had to giggle a bit at how the 100-year-old design was described as THE bicycle. As if we've reached the final destination. I'm sure they thought the same about the bone breaker and the face smasher designs long after better ones were made.

We used to have a local bicycle shop run by a guy who financially really didn't need to; it was his passion (and rumor had it that he did it to get away from his wife). I went there one day to ask him why he only sold normal bikes: you have a huge store but all the bikes are pretty much the same? He explained that he used to have 1 or 2 special designs, but that people looking to buy a bicycle changed their behavior from circling the shop 2 or 3 times then buying something, to making half a lap then standing there gazing at that 'weird' bike for a minute... and then they just left! He apparently put a good bit of thought into it, since the loss of sales didn't bother him. His eventual decision was that he didn't want to disrupt people's train of thought. They are here to look for a new bike, I should facilitate that to the best of my abilities.

Most bicycle mods or improvements are not useful, but they are all weird to people. I invented 2 myself that increase efficiency and the quality of the workout by a truly unbelievable amount. The process was wonderful; I made rusty old clunkers that felt like high-end bikes. While there was some encouragement from cycling enthusiasts, I didn't care much for other people's opinions; the manufacturing and marketing was already boring to me, but the moaning was truly something else. The funniest part was when I left a rusty old test bike parked among hundreds of other bikes, something I never did, and then someone let the air out of both tires. It was poetry to me, I wasn't even mad. How dare I have something weird on my bike.

more on the normality enforcement: https://bicycles.stackexchange.com/questions/59406/why-are-r...

j-conn(10000) 5 days ago [-]

Would love to hear more about your mods! Have any photos? Just curious as a layman who rides one nearly every day.

GlenTheMachine(4110) 6 days ago [-]

One possibly overlooked factor: the nature of travel has changed dramatically in the last 200 years. Right up until World War 1, my ancestors literally had no reason to ever go more than ten miles from home, and even that was rare. The entire extended family lived within a two mile radius. This was because they were all farmers, and farmers didn't need to travel often. Furthermore, if you did travel, well, you already had a horse. Which, to make a point not made in the article, didn't have to do with the cost of the horse; it had to do with the fact that you already needed to have a horse to farm. So the horse was a sunk cost.

If you did travel frequently, this was likely because you were in the business of shipping goods, ie tobacco or cotton or some other agricultural staple from where it was grown to a port, and hence speed wasn't your primary concern; inexpensive transport for bulk goods was. For all that is awesome about bicycles, bulk transports they aren't.

So: at least one factor was economic, but of an indirect sort. We had to have a critical mass of population that needed to travel regularly farther than they could conveniently walk, and for which horses were not sunk costs.

roywiggins(4056) 6 days ago [-]

You can look at the history of the fax machine. There were primitive fax machines very early, but they were a bit cumbersome, and more importantly nobody needed to send documents that fast anyway. It was easy and normal to send a guy on a horse, and for business purposes at the time, that was more than fast enough for the distances that were being covered by the first fax lines.

joe_the_user(3736) 6 days ago [-]

Indeed, bicycles have generally been transport within cities and close-by. A world where most people lived in rural areas would not be that practical for bicycle travel even now.

iguy(10000) 6 days ago [-]

Well, I'd argue that they didn't need to travel, because their world was set up for people who couldn't. The spacing of villages most places was set by how far you could walk to the fields & still do a good day's work. Riding around on horseback was for a tiny elite, most places -- a horse eats more than a person, in a world where food was the majority of income. Once people could travel, they did, and the world got re-organized around this assumption.

But still, bicycles took off as a technology when they were too expensive for farm-workers, as a sort-of upper-middle class leisure craze. Only later did they enable mass mobility.

[comment moved]

hyperpallium(2672) 6 days ago [-]

Yes, and like the Edison light bulb and Birdseye frozen fish, you need infrastructure: smooth roads. Horse and cart are more tolerant.

runarberg(3966) 6 days ago [-]

This is not entirely correct. If you are a farmer, you (or someone working with you) still need to travel for numerous reasons, including to bring your products to market, to round up your animals, to buy equipment and supplies, etc.

You say they had horses for those types of traveling, except they most likely didn't. Only a successful farmer could afford a horse. And even if they had a horse, chances were it was too busy or tired from farm work to be used for transportation.

Most people walked to where they needed to go, and if the modern bicycle had existed, a farmer's dream would be to have a successful crop one year so they could afford a bicycle. In fact a modern bicycle would have been a good investment for many farms, as it would allow farmers to bring more products to market quicker and more efficiently, in turn allowing the transporter to be back at their farming job quicker and less tired.

This can be seen in today's poorer regions where neither trains nor roads run, but the occasional farm has invested in a bicycle to help with transportation.

vipbababd(10000) 6 days ago [-]

One possibly overlooked factor: the nature of travel has changed dramatically in the last 200 years. Right up until World War 1, my ancestors literally had no reason to ever go more than ten miles from home, and even that was rare. The entire extended family lived within a two mile radius. This was because they were all farmers, and farmers didn't need to travel often. Furthermore, if you did travel, well, you already had a horse. Which, to make a point not made in the article, didn't have to do with the cost of the horse; it had to do with the fact that you already needed to have a horse to farm. So the horse was a sunk cost. If you did travel frequently, it was likely because you were in the business of shipping goods, ie tobacco or cotton or some other agricultural staple from where it was grown to a port, and hence speed wasn't your primary concern; inexpensive transport for bulk goods was. For all that is awesome about bicycles, bulk transports they aren't.

So: at least one factor was economic, but of an indirect sort. We had to have a critical mass of population that needed to travel regularly farther than they could conveniently walk, and for which horses were not sunk costs.

iguy(10000) 6 days ago [-]

[comment moved]

GlenTheMachine(4110) 6 days ago [-]

Plagiarism. Thanks dude.

wahlrus(4029) 6 days ago [-]

I really enjoyed the markup/css styling on this piece. While the content is certainly up for debate, it was presented in a very visually clear and informative manner.

jasoncrawford(3088) 6 days ago [-]

Thank you! I have actually spent a good amount of time getting the image layout just right

atoav(10000) 6 days ago [-]

Roads were a necessity for the invention of bicycles. And by roads I mean roads with a certain standard.

Another necessity was a certain type of middle class which would actually desire individual transport. If you are a worker, why would you prefer a bicycle over a wagon that transports all your tools? If you are an aristocrat, why prefer a bicycle over some comfortable carriage, especially when you don't even know the way?

It is no accident that the bike was invented by a certain type of rich middle class in cities with good infrastructure. More as a toy at first (basically a horizontal bar with two wheels that you had to push with your feet), but more seriously later.

loeg(3756) 6 days ago [-]

> Roads were a necessity for the invention of bicycles. And by roads I mean roads with a certain standard.

The chicken and egg actually went in the other direction. Bicycles became popular and bicyclists lobbied for better roads.

dboreham(3189) 6 days ago [-]

Same for steam engines: it turns out Watt spent most of his effort on getting components manufactured and having the associated manufacturing processes developed. The basic idea of the steam engine had been around for a long time prior. You can ask the same question of Romans and electricity. They had all the necessary tech: glass for insulators, acid for batteries, metallurgy for wire etc.

SilasX(4032) 6 days ago [-]

Reminds me of John Salvatier's point about the 'surprising level of detail' in reality, i.e. yes, the idea that 'steam can drive a wheel' is simple enough, but you have to get a lot of small details right that don't make it into textbooks like reliable manufacturing tolerances.

HN discussion: https://news.ycombinator.com/item?id=16184255

Retric(4024) 6 days ago [-]

The other half of this is that the efficiency of Roman steam engines was so low as to make them more novelty than useful.

Seeing an expensive device that does less than 1/100th the work of a horse does not obviously translate into a revolution.

Dumblydorr(10000) 6 days ago [-]

He also innovated the engine's processes, using heat efficiently and reducing waste. Effective machining and manufacturing then scaled his innovations.

lucideer(4115) 6 days ago [-]

> (Some commenters have suggested that it was not obvious that a two-wheeled vehicle would balance, but I find this unconvincing given how many other things people have learned to balance on, from dugout canoes to horses themselves.)

I'll be one of those commenters I guess.... What!??

Balancing in a canoe is in no way similar to a bicycle. I can jump in a canoe having never been in one, and it will balance itself. I can probably even manage a few trepidatious strokes of an oar (or hand, or arbitrary object) without tipping it. Without me, the canoe will stand on its own. If I fall, it will often right itself without me.

On top of that, a canoe is both an evolution on previous working designs (rafts), and analogous to nature (whether it be a fish, crocodile or a floating log).

The horse, with its four (not two!) legs is an even more ridiculous comparison.

It seems pretty clear that the development of the bicycle would've been significantly hampered by the fact that balancing on two wheels was:

(a) unintuitive as an idea with no analogue in nature or previous innovations

(b) had a marked learning curve

(c) that learning curve was separate and independent for each design. Riding a modern bicycle doesn't automatically make you an adept penny farthing, unicycle, nor tandem rider. Nor any of the umpteen other technological iterations.

I wouldn't be surprised if this trumped material/manufacturing limitations as a hurdle. See this whimsical video for 'evidence' of this [0]

[0] https://www.youtube.com/watch?v=0yJdz-kjfLk

tcmb(10000) 6 days ago [-]

If you saw off a disc-shaped slice from a log of wood, and it rolls a few meters, you experience first hand that a single wheel can balance itself. Some say this is how the wheel was invented in the first place [citation needed].

deathanatos(4170) 6 days ago [-]

> The horse, with its four (not two!) legs is an even more ridiculous comparison.

The comparison is, as I understood it, the rider balancing on the horse itself. I've only ever done this myself with saddle and stirrups, so I don't know if this is significantly more difficult without those.

I think you're being a little generous with the canoe. Boarding a canoe without a dock (that is, climbing in from the water) is tricky, and requires some thought about where your weight is. (Since the lip of the canoe, in my experience, sits well above the water line, and you're about to pull down on the edge to attempt to board, but that same motion is the motion that moves the edge closer to the water which will flood it.) I've seen plenty of people flip and subsequently flood a canoe after an unsuccessful attempt to board.

JackFr(3367) 6 days ago [-]

>It seems pretty clear that the development of the bicycle would've been significantly hampered by the fact that balancing on two wheels was:

> (a) unintuitive as an idea with no analogue in nature or previous innovations

> (b) had a marked learning curve

> (c) that learning curve was separate and independent for each design.

This! It's completely non-obvious that balancing will work and there is no analog in nature to copy.

analog31(10000) 6 days ago [-]

I read a biography of the Wright Brothers, which pointed out that they spent quite some time and effort with their gliders, learning how to fly. So they were already skilled pilots before they attempted powered flight. The book suggested that this idea occurred to none of the other airplane inventors.

There might have been a similar problem with the bicycle: Anybody who built the first one had no idea of how to ride it, and lacked the advantages of being six years old, namely closer to the ground and quicker to heal from minor injuries.

What I want to ask: Why are we still waiting so long for the bicycle? There just seem to be so many barriers to more widespread adoption.

mantap(10000) 6 days ago [-]

It does seem utterly counterintuitive that a vehicle that cannot balance by itself would be able to balance with the addition of a heavy rider elevating its center of gravity. Usually making an object top-heavy renders it even more unstable, and bicycles are not stable to begin with.

And adding a rider on top of a bicycle does make it more unstable, but the amazing flexibility of the human brain allows us to transform that instability into stability, kind of like how modern fighter jets are intentionally unstable to make them more manoeuvrable.

Bicycles are a marvel of physics and biology working together.

f_allwein(3176) 6 days ago [-]

Same can be said, perhaps more accurately, about the wheelbarrow. From https://www.penguinrandomhouse.ca/books/312366/the-knowledge... :

> The wheelbarrow, for instance, could have occurred centuries before it actually did—if only someone had thought of it. This may seem like a trivial example, combining the operating principles of the wheel and the lever, but it represents an enormous labor saver, and it didn't appear in Europe until millennia after the wheel (the first depiction of a wheelbarrow appears in an English manuscript written about 1250 AD).

brohee(4054) 5 days ago [-]

And the European wheelbarrow is vastly inferior to the Chinese one (see https://www.lowtechmagazine.com/2011/12/the-chinese-wheelbar...)

spraak(3898) 6 days ago [-]

> I don't think horses explain it either. A bicycle, from what I've read, was cheaper to buy than a horse, and it was certainly cheaper to maintain (if nothing else, you don't have to feed a bicycle).

I don't think this is so simple. A horse can haul a lot more than a human can. How much human bicycle power is equal to one horse power?

henryfjordan(10000) 6 days ago [-]

An average human can peak above 1HP for short amounts of time (think sprinting). They definitely cannot maintain that. I think the Horse Power was defined at a sustainable rate for the horse.

1 HP = 746 watts, and most people on a bike would cruise in the 100-250 watt range. The top cyclists can maintain more like 400 watts.

https://www.cyclinganalytics.com/blog/2018/06/how-does-your-...

tcmb(10000) 6 days ago [-]

'When considering human-powered equipment, a healthy human can produce about 1.2 hp (0.89 kW) briefly (see orders of magnitude) and sustain about 0.1 hp (0.075 kW) indefinitely; trained athletes can manage up to about 2.5 hp (1.9 kW) briefly and 0.35 hp (0.26 kW) for a period of several hours.'[1]

100W on a bike is easy to maintain for several hours, even for non-trained humans. An output of 1hp (750W) only lasts a few seconds.

[1] https://en.wikipedia.org/wiki/Horsepower
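
As a quick sanity check on the figures quoted above, here is a minimal sketch; the two rider outputs used below are just illustrative points taken from the quoted ranges, not additional data.

    HP_WATTS = 746  # one mechanical horsepower in watts, per the figure quoted above

    def riders_per_horsepower(watts_per_rider):
        """How many riders at a given sustained output add up to one horsepower."""
        return HP_WATTS / watts_per_rider

    # Illustrative sustained outputs from the ranges quoted above:
    print(round(riders_per_horsepower(100), 1))  # ~7.5 casual riders (~100 W each) per horsepower
    print(round(riders_per_horsepower(250), 1))  # ~3.0 strong riders (~250 W each) per horsepower

By these numbers, one horsepower of sustained output is on the order of several cyclists pedalling together.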

hprotagonist(2410) 6 days ago [-]

If I had to point to a specific technical factor, it's pneumatic tires.

asark(10000) 6 days ago [-]

In some past article linked on (I think) HN, I've read the answer to this very question given as advanced, efficient bearings (so, the ability to produce finely accurate small metal objects).

weberc2(4136) 6 days ago [-]

I recall hearing that it's tube steel. The previous iterations used solid steel (maybe cast iron?) and were super heavy.

Excel_Wizard(10000) 6 days ago [-]

>Yet it was a simple mechanical invention

As a mechanical engineer, this statement baffled me. All manufactured technology exists in the context of the manufacturing capabilities available to the designer. The manufacturing tech had to be tremendously complicated before a decent bike could be made. Hollow steel tubes aren't simple. Ball bearings aren't simple. There is a reductionist viewpoint among 'theory' people that misses the trees for the forest.

sinker(10000) 6 days ago [-]

Anyone who had tried to make anything, especially anything original and with mechanical components knows this to be true. Even simple machines require a lot of forethought, planning, and iteration to be able to perform consistently and reliably. When we see an every day object like a bicycle we take it for granted and think that it must be obvious. But that couldn't be further from the truth.

Take even just a single part from it: the chain, for example, probably represents centuries' worth of technology. Each link has to be uniform to operate smoothly on a chainring or sprocket, which indicates some form of mass production.

Chains must be hardened to withstand stress, resist stretching. Soft steel would wear and deform too quickly.

Each link is in itself a complex component, composed of a uniform bushing and pin, oblong in shape, symmetrical down the center, and also quite small. It must pivot smoothly.

I doubt there are many people, even skilled people, who could make a complete bicycle from raw metal stock.

astine(3838) 6 days ago [-]

Did you read the article or just that line? Most of what you mention is addressed plus a number of other factors.

'Technology factors are more convincing to me. They may have been necessary for bicycles to become practical and cheap enough to take off. But they weren't needed for early experimentation. Frames can be built of wood. Wheels can be rimmed with metal. Gears can be omitted. Chains can be replaced with belts; some early designs even used treadles instead of pedals, and at least one design drove the wheels with levers, as on a steam locomotive.'

'Second, advances in materials and manufacturing were probably necessary for a commercially successful bicycle. It's a bit hard, from where I stand, to untangle which advances in design were made possible by new materials and techniques, and which were simply sparks of inventive imagination that hadn't been conceived or developed before. But the fact that people were willing to put up with the precarious high-wheeled design indicates to me that pneumatic tires were crucial. And it's plausible to me that advanced metalworking was needed to make small, lightweight chains and gears of high and consistent quality, at an acceptable price—and that no other design, such as a belt or lever, would have worked instead. It's also plausible to me that wooden frames just weren't light and strong enough to be practical (I certainly wouldn't be eager to ride a wooden bicycle today).'

jedimastert(4052) 6 days ago [-]

Yeah, whoever said that should try to make one without being able to go to the hardware store.

admax88q(10000) 6 days ago [-]

Neither of those is necessary for a basic bicycle. A wooden frame with a fixed gear ratio and a chain or belt would work just fine for moderate riding.

WalterBright(4074) 6 days ago [-]

The article also didn't mention weight. The lighter a bike is, the more effective it is. One made out of wood and iron would be simply too heavy. Imagine a cast iron wheel.

Lots of people dismiss the Wright Bros as 'bicycle mechanics'. They kinda miss that lightweight bicycle technologies, like chain drives and steel wire, were essential for their working airplane.

dfeojm-zlib(10000) 6 days ago [-]

I swear Cannondale road bikes have more in common with aircraft than terrestrial vehicles. You gotta see the aluminum welds on insanely thin tubes.

Disclaimer: I lost my 20+-year-old baby-blue R800TT, which I bought while making $4.25/hour, to frame damage. #SadDayForMe

ajuc(3822) 6 days ago [-]

You don't need steel tubes for a bicycle, you don't even need steel at all. In fact there were (and still are) wooden bicycles and they work fine.

https://en.wikipedia.org/wiki/Wooden_bicycle

As for ball bearings, these are incremental improvements, not a necessity (see horse chariots and carriages with wheels used for millennia before we had ball bearings). For another example, see medieval wooden windmills - all cogs and bearings made from wood, with some animal fat and skin for bushings and lubrication - which worked well enough for centuries.

https://ethw.org/Wheels

> Tutankhamen's chariots give us an opportunity to study the details of wheels and axles. The aspect that is most striking to a present-day engineer is that the axles were made of wood and the wheels had wooden journals. The favored materials were elm and birch, which were imported because neither wood was native to Egypt. Anyone accustomed to modern practice finds it hard to believe that wood-on-wood could function as a bearing at all. This primitive arrangement was improved in a few cases by the addition of a leather bushing. Lubrication in the form of animal fat or tallow is known to have been used, although the exact composition has not been determined.

https://www.brown.edu/Departments/Joukowsky_Institute/course...

https://vimeo.com/14166672

The first mass-produced bicycle (the velocipede) had no ball bearings until a few years later.

Gibbon1(10000) 6 days ago [-]

There is an economic component in this. The people who would really want a bicycle had neither time nor money to develop them, nor money to afford one. People with time and money would immediately think that horses were more practical. And they wouldn't be wrong.

It wasn't until the mid 1800's that the people who would want a bicycle could afford one if such was available. You had a middle class with disposable income. And simple mechanical contraptions like bicycles were cheaper. Because steel got about 20 times cheaper from 1860 to 1890. And things like chain drives had become industrial commodity items.

dockd(10000) 6 days ago [-]

And let's not forget the pneumatic tire; Wikipedia suggests they were invented 1847-1888. (It says they were invented to help make bicycles/tricycles comfortable.)

If anyone thinks these tires aren't important, then why are almost all vehicle tires still pneumatic?

dotancohen(10000) 6 days ago [-]

It goes both ways. Imagine what motor vehicles and transportation in general would look like today if we were able to manufacture and process titanium alloys like we do steel. How far away are we from that breakthrough?

Conversely, what would the world look like today without steel? No airplanes? No WWI nor WWII? No huge panamax ships? No global markets?

chrisdhoover(10000) 6 days ago [-]

It is reductionist. Bicycles are complex machines, simple to us now because we can view examples and understand them readily. But to create one from nothing requires imagination. He hints at it with the cotton gin and flying shuttle. He also discounts the whole balance thing by stating that you balance on a horse. Not true: you sit on a horse, you don't balance on one. Further, the horse is firmly planted on the ground on four legs. It is not obvious that someone could balance on two wheels. It seems that the foot-powered "walking" bike was the development that taught us to balance.

What bothers me with such speculative and unsupported history is now it will likely be thought true. Everyone believes that there were two sleeps per night in the past or that doctors brought women to orgasm to relieve stress. These ideas have little substantiated documentation and are hard to believe yet they are repeated by know-it-alls frequently.

dalbasal(10000) 6 days ago [-]

Baffling, maybe. It's a good question, not because the answer is trite but because it's interesting.

The invention itself might have even occurred, or parts of it. If you pressed hard enough, maybe you might have gotten a decent prototype built in the bronze age.

History has to take its course, in a sense. It has to be practical and economical to manufacture, acquire and use one. A major part of that is what it takes to build one that's good and cheap enough. Another part is the path (roads, good steel parts like hollow tubes). Then it needs to be invented in a way that can lead to some decent number being made, demand of some sort considering that it still sucks. Enough people need to learn to ride one... There need to be engineers around with an interest...

It's dense with trees, but forests assume trees are inevitable. Nothing here is really. Maybe the invention is, on some level. But things usually need to be invented into existence over many iterations, events and chains before they totally stick.

mercules(10000) 6 days ago [-]

Can you imagine having a bicycle made of cast iron with the technology of the past? Tetanus would be a problem, people wouldn't be able to have bicycles close to the ocean, and it would even be dangerous if the person fell on the ground.

It certainly is amazing how people take for granted the marvels of current technology.

atoav(10000) 6 days ago [-]

Yes, but I don't think that was the factor here. Ancient technology was astoundingly precise (see the famous Antikythera mechanism).

I think the problem was roads. If you combine bad (or nonexistent) roads with airless hard tires, the result is much much less useful than any modern bike on any modern road.

The first bikes were indeed wooden with wooden wheels. No ball bearings, no hollow steel tubes etc. But streets.

I could now say something about how engineers are so specialized in their perspective that they cannot judge things without bringing current conventions into it, but hey, every profession comes with its weaknesses.

bambax(3490) 6 days ago [-]

It seems this comment was made before reading the whole of the article, which goes into much detail about the invention and the many iterations that were needed to arrive at the current design.

duxup(3919) 6 days ago [-]

The TV series 'Connections' did a great job demonstrating that, more often than not, we're not short on ideas or inventions or inventiveness.

Usually there are a lot of great ideas, but availability of materials in quantity, or a key part is what is missing, sometimes for centuries, and often comes from someplace you might not expect.

track_me_now(10000) 6 days ago [-]

This entire post sounds like it was written by an alien trying to compress a full theoretical understanding of the bicycle into an essay without ever having ridden one. super-weird tone to the whole thing; they discount many researched aspects of bicycles and present the information like it's brand new.

MadWombat(10000) 6 days ago [-]

This is addressed in the article to some degree. Wheels and the frame can be made of wood. Wheels can be lined with iron or bronze plating. Ball bearings are not strictly necessary. Chain can be replaced with belts or some alternative transfer mechanism. The result would not be as comfy as a road bike today, but it would be a functional vehicle.

rlue(3922) 6 days ago [-]

Your points are all literally stated in the article's first bullet point.

fastaguy88(10000) 5 days ago [-]

This entire discussion is very interesting, with arguments for and against the idea that the underlying materials technology was (or was not) sufficient for a bicycle to have emerged 50 or more years earlier.

This discussion is similar to flawed arguments about evolution. It assumes that at any given point in time, there was an exhaustive search of solution space (in this case for personal transportation). In discussions of evolution, people often propose that a particular solution occurred because it was optimal in some way.

But neither evolution, nor technology development, involves an exhaustive search. Things happened by chance, and some things were not tried as early as they might have been (or they were tried and tried and found lacking in some way), until, by chance, the right conditions allowed the technology (or evolutionary trait) to emerge.

The role of chance is often under appreciated.

aqsalose(3206) 5 days ago [-]

Yeah. Do not underestimate the amount of technology that goes into bicycles.

Wright brothers were bicycle mechanics, before they pivoted into inventing heavier-than-air flight craft.

eveningcoffee(4151) 6 days ago [-]

I think you are really overthinking it.

A bicycle can be very simple. You need just two wheels (which can be made of wood with some steel support, just like wagon wheels) and some frame to connect them (again, can be made from wood). You do not need any bearings or even a belt - you can connect the pedals directly to the front wheel (as was done in early designs).

All of this can be made with simple carpenter and smith tools.

ChrisRR(10000) 6 days ago [-]

Chains and gears are especially not simple

TheGRS(4157) 6 days ago [-]

I have seen bicycles built using bamboo. I mostly agree with the assessment that the tech and need just wasn't there in antiquity, but I would definitely stress that it could have plausibly been built using the materials available.

Animats(2017) 6 days ago [-]

Right. And one point that cannot be stressed enough.

Good steel, in quantity, only goes back to 1880.

Lots of things that 'could have been built in antiquity' foundered on that basic fact. Before Bessemer, steel was as exotic as titanium is now.

The alternatives are not good. Cast iron? Too heavy and too brittle. Wrought iron? Maybe, but not that great. Lead? Too soft. Brass? Could work, but expensive. A king's kid might have had a brass bicycle.

If the iron workers of Cizhou, China, who had an air-blown steel making process by about 1100 AD, had made the next step to a Bessemer converter, history would have been completely different. They were close. Right idea, limited steelmaking capability, existing iron industry. Coal was available, but apparently not too easily.

The few places in the world with easy access to both coal and iron ore started the Industrial Revolution. Then came railroads, and the resources didn't have to be so close.

WalterBright(4074) 6 days ago [-]

> a reductionist viewpoint

I once tried to figure out how to make iron from nothing. There are several 'from scratch' guides on the internet, but all of them include 'buy these needed chemicals from a supply house'.

derefr(3632) 6 days ago [-]

> Ball bearings

Couldn't you just use anything sufficiently round and uniform and Mohs-hard, given a lubricant oil to protect it from wear? Glass marbles? Pearls? Rocks after a ridiculous amount of tumbling?

alex_young(2114) 6 days ago [-]

Wooden bikes seem like an obvious materials argument. Here's one I saw in a shop recently:

https://photos.app.goo.gl/7mzTsrKgUgENoDkQ9

I suspect the answer is complexity. There are a bunch of inventions all together in a bicycle. From the frame to the spokes to the tires and headset, there is a lot of IP there.

noneeeed(10000) 5 days ago [-]

The same was very much true for Babbage and the Difference Engine. If I remember correctly, the level of repeatable precision and quality needed to make something like that was simply well beyond what engineers could manage even at the height of the industrial revolution. In theory everything that Babbage was designing was technically possible at small scale, but simply couldn't be done on the scale he imagined.

If memory serves, even the London Science Museum's partial reconstruction in the 1980s was a massive challenge of engineering.

lazyant(4147) 6 days ago [-]

And if you don't have smooth roads, you need rubber for the tires, no? There is a lot of technology involved in making a useful bicycle (i.e. not a toy).

c3534l(4066) 6 days ago [-]

Steel tubes are newer bicycle technology. The earliest bicycles were all wood. The bicycle was already a known and popular thing before steel. But they were thought of more as toys, or akin to a swan paddleboat, than a serious mode of transportation. They could be and were made by skilled woodworkers, so manufacturing technology is not the deciding factor. I think understanding and caring about complex mechanisms, brought on by the industrial revolution, is what prompted the devices; that is, it was cultural rather than technological factors that prompted their introduction.

gniv(10000) 5 days ago [-]

The most compelling counterargument, to me, is mentioned in the article: clocks. I still don't know how to make a clock and I certainly do not think it's easy. Yet they were making clocks centuries before the first bicycle was invented.

hyperpallium(2672) 6 days ago [-]

Why not bamboo or balsa or bone?

The wheel existed before ball bearings - why are they necessary for bicycles? (is it an efficiency thing, for human powered?)

The safety-bicycle chain requires cheap precision mass engineering, but rope, rubber, axial rod (as in a car), or interlocking gears also work. Or direct, as in a penny-farthing. Or, no wheel power transfer, but push along with your feet, as in the Kirkpatrick bicycle.

oarabbus_(10000) 6 days ago [-]

It's not a simple mechanical invention at all, I agree with you, this is a silly/absurd statement.

I recall in my freshman physics classes asking my professor 'why are all the modules using inclined planes and pulleys? wouldn't we be able to learn better if the problems involved something we're actually familiar with in everyday life like a bicycle?' (I attended the university with the most bicyclists in the world)

I only needed to see his reaction to realize that a bicycle is pretty damn complex.

loeg(3756) 6 days ago [-]

Bicycle chains are really complicated and have tons of moving parts.

hadlock(10000) 6 days ago [-]

Bicycle chains are pretty complicated, but steam powered devices (mills, saws, etc) were using flat leather belts for decades, and rope drive existed as well. Many modern bicycles come with belt drives. Also periodically people produce shaft drive bicycles which require a bit more machining, but have few moving parts and are quite reliable.

timonoko(4156) 6 days ago [-]

The bicycle was useless without paved roads. A horse was much more convenient. I have a personal anecdote about these issues. As a kid in 1967 I made a 250 km bicycle trip in two days. My father was amazed: 'How is this possible? I made exactly the same trip in 1929, also in two days, but I had a very good horse galloping half the time.' We were both camping in the same forest at the halfway point too.

bagacrap(4169) 6 days ago [-]

I think it's less about the road quality and more about pneumatic tires. Same for cars. Would you rather drive a car on a dirt road with modern tires, or on a highway, directly on metal rims? The answer seems obvious to me...

closeparen(4060) 6 days ago [-]

Why did we wait so long after the modern bicycle to see it as transportation rather than recreation? Awareness of climate change maybe? It seems that a) bicycles were not primary transportation even before cars, and b) the push to replace cars with bikes could have happened at any point, even before cars took off. So why now?

jonsnowman(10000) 6 days ago [-]

Bicycles were heavily used for transportation before cars, at least in those countries where they were available at reasonable cost. Anecdotal, but my grandpa and his pals rode bicycles their whole youth and even later; he got his driver's license when he was 50, and didn't give a damn about any environmental issues (though he was no fan of consumerism and made pretty much everything he needed himself and overall preferred a simple life, but also deeply hated 'those hippies who can't actually do anything useful with their hands', which still characterizes 99 percent of the so-called environmentalists...). Back then the distances were smallish, usually under 30 km, which made the bike the most convenient way of travelling.

I see nobody recognizing that bikes need pretty good roads to be efficient. That was the main reason to prefer walking in some areas even if you had a bike.

asark(10000) 6 days ago [-]

Bicycles seem really expensive to me, relative to cars, considering how much simpler they are and how much less total material is involved. Plus they're a whole lot easier to ship. Not TCO, sure, but sheer cost of the manufactured object at retail. And on the cheaper end they often barely work correctly at all anyway, and all of them seem to require a lot more maintenance than a car, per hour of use or (especially, by a long shot) per km travelled. I assume these are economy-of-scale issues that may be sorted out if bicycle use grows significantly—though I'm not sure about the reliability.

jaclaz(3915) 6 days ago [-]

Well, it depends on where; you seem to forget Europe in the years around (before and after) WWII, and of course the whole of China.

mc32(4111) 6 days ago [-]

So it looks like there were attempts but they had fatal design flaws and technology (metallurgy) was immature.

One question raised is why didn't we see a trike come about before the bike?

It's more stable and propulsion via direct front pedal isn't as awkward as the front pedal bike.

Gravityloss(3333) 6 days ago [-]

It's dangerous in a curve as you can't lean.

But a trike with a big front wheel could have had a decent speed with direct drive and yet be less dangerous than a penny farthing.

giobox(4164) 6 days ago [-]

While it's maybe surprising, have you ever tried riding something with a 1:1 drive ratio? There's a reason even bikes with a fixed gear still have gears...

Pedals connected directly to the front wheel would be truly awful for virtually all adult bicycle use cases.

burkaman(3859) 6 days ago [-]

We did. The ad for the Rover Safety Bicycle in this article says 'Safer than any Tricycle'. From Wikipedia you can see that trikes beat bikes by at least a hundred years, depending on your definition: https://en.wikipedia.org/wiki/Tricycle#History

benj111(4061) 6 days ago [-]

They did, ish. (From the article: https://rootsofprogress.org/img/early-bicycle-models.png)

I think the big problem is weight: you probably had to be fit to ride, so why pootle around on something safer and even heavier? It's like trying to get a MAMIL to ride a sit-up-and-beg.





Historical Discussions: Alan Turing to feature on new £50 note (July 15, 2019: 664 points)
New face of the Bank of England's £50 note is revealed as Alan Turing (July 20, 2019: 10 points)

(664) Alan Turing to feature on new £50 note

664 points 6 days ago by hanoz in 2473rd position

www.bbc.co.uk | Estimated reading time – 8 minutes | comments | anchor

Image copyright Bank of England

Computer pioneer and codebreaker Alan Turing will feature on the new design of the Bank of England's £50 note.

He is celebrated for his code-cracking work that proved vital to the Allies in World War Two.

The £50 note will be the last of the Bank of England collection to switch from paper to polymer when it enters circulation by the end of 2021.

The note was once described as the 'currency of corrupt elites' and is the least used in daily transactions.

However, there are still 344 million £50 notes in circulation, with a combined value of £17.2bn, according to the Bank of England's banknote circulation figures.

'Alan Turing was an outstanding mathematician whose work has had an enormous impact on how we live today,' said Bank of England governor Mark Carney.

Media caption Mark Carney praises Alan Turing's achievements

'As the father of computer science and artificial intelligence, as well as a war hero, Alan Turing's contributions were far-ranging and path breaking. Turing is a giant on whose shoulders so many now stand.'

Why was Turing chosen?

The work of Alan Turing, who was educated in Sherborne, Dorset, helped accelerate Allied efforts to read German Naval messages enciphered with the Enigma machine.

Less celebrated is the pivotal role he played in the development of early computers, first at the National Physical Laboratory and later at the University of Manchester.

In 2013, he was given a posthumous royal pardon for his 1952 conviction for gross indecency, following which he was chemically castrated. He had been arrested after having an affair with a 19-year-old Manchester man.

The Bank said his legacy continued to have an impact on science and society today.

Analysis: Paul Rincon, BBC News website science editor

Alan Turing played an absolutely crucial role in Allied victories through his codebreaking work. He is also considered a towering figure in the development of computing.

Alan Turing

1912 – 1954

  • 1912 Alan Mathison Turing was born in West London

  • 1936 Produced "On Computable Numbers", aged 24

  • 1952 Convicted of gross indecency for his relationship with a man

  • 2013 Received royal pardon for the conviction

Source: BBC

Yet for decades, the idea of Turing being featured on a banknote seemed impossible. This will be seen as an attempt to signal how much has changed in society following the long, ultimately successful campaign to pardon Turing of his 1952 conviction - under contemporary laws - for having a homosexual relationship.

His work helped cement the concept of the algorithm - the set of instructions used to perform computations - that are at the heart of our relationship with computers today. He was also a pioneer in the field of artificial intelligence: one of his best known achievements in this field is the Turing Test, which aims to measure whether a machine is 'intelligent'.

Former Manchester MP and gay rights campaigner John Leech, who campaigned for Alan Turing's pardon, said: 'This is a fitting and welcome tribute to a true Manchester hero.

'But more importantly I hope it will serve as a stark and rightfully painful reminder of what we lost in Turing, and what we risk when we allow that kind of hateful ideology to win.'

The Bank asked the public to offer suggestions for the scientist whose portrait should appear on the £50 note. In six weeks, the Bank received 227,299 nominations covering 989 eligible scientists.

A shortlist was drawn up by a committee, including experts from the field of science, before the governor made the final decision.

Image copyright Getty Images
Image caption Rosalind Franklin, Stephen Hawking and Ada Lovelace all appeared on the shortlist

The shortlisted characters, or pairs of characters, considered were: Mary Anning, Paul Dirac, Rosalind Franklin, William Herschel and Caroline Herschel, Dorothy Hodgkin, Ada Lovelace and Charles Babbage, Stephen Hawking, James Clerk Maxwell, Srinivasa Ramanujan, Ernest Rutherford, Frederick Sanger and Alan Turing.

The debate over representation on the Bank's notes could resurface after this decision.

Jane Austen will continue to be the only woman, apart from the Queen, whose image will be seen on the four notes.

There was also a campaign calling for a historic figure from a black and ethnic minority background (BAME) to feature on the new £50 note.

In response to Maidstone MP Helen Grant, who raised the issue in Parliament, the governor said: 'The Bank will properly consider all protected characteristics, and seek to represent on its banknotes characters reflecting the diversity of British society, its culture and its values.'

How will the banknote change?

Steam engine pioneers James Watt and Matthew Boulton appear on the current £50 note, issued in 2011.

The new £50 Turing note will enter circulation by the end of 2021, Mr Carney announced at the Science and Industry Museum in Manchester. It will feature:

  • A photo of Turing taken in 1951 by Elliott and Fry, and part of the National Portrait Gallery's collection
  • A table and mathematical formulae from Turing's 1936 paper 'On Computable Numbers, with an application to the Entscheidungsproblem' - foundational for computer science
  • The Automatic Computing Engine (ACE) Pilot Machine - the trial model of Turing's design and one of the first electronic stored-program digital computers
  • Technical drawings for the British Bombe, the machine specified by Turing and one of the primary tools used to break Enigma-enciphered messages
  • A quote from Alan Turing, given in an interview to The Times newspaper on 11 June 1949: 'This is only a foretaste of what is to come, and only the shadow of what is going to be'
  • His signature from the visitor's book at Bletchley Park in 1947
  • Ticker tape depicting Alan Turing's birth date (23 June 1912) in binary code. The concept of a machine fed by binary tape featured in Turing's 1936 paper.

Current Bank of England £5 and £10 notes are plastic - which the Bank says are more durable, secure and harder to forge. The next version of the £20, to enter circulation next year, will also be made of the same polymer.

So, the £50 note will be the last of the Bank's collection to change.

Why do we even have a £50 note?

In recent years, there have been doubts that the £50 note would continue to exist at all.

Fears that the largest denomination note was widely used by criminals and rarely for ordinary purchases prompted a government-led discussion on whether to abolish it.

The £50 note was described by Peter Sands, former chief executive of Standard Chartered bank, as the 'currency of corrupt elites, of crime of all sorts and of tax evasion'.

There has also been considerable discussion over the future of cash in the UK, as cards and digital payments accelerate and the use of notes and coins declines.

Nevertheless, in October, ministers announced plans for a new version of the note, to be printed in the UK.

What about other banknotes?

Polymer £5 and £10 notes are already in circulation, while a £20 design will be issued in 2020.

Jane Austen was chosen to appear on the plastic £10 note after a campaign to represent women other than the Queen on English notes.

Image copyright Getty Images

In 2015, a total of 30,000 people nominated 590 famous visual artists for the £20 note, before JMW Turner was selected with the help of focus groups. He will replace economist Adam Smith on the note in 2020.

Sir Winston Churchill appears on the polymer £5 note.

Image copyright PA

A host of different people have appeared on banknotes issued in Scotland and Northern Ireland. Ulster Bank's vertical £5 and £10 notes entered circulation in Northern Ireland in February.




All Comments: [-] | anchor

danharaj(3997) 6 days ago [-]

Chuang Tzu with his bamboo pole was fishing in the Pu river. The prince of Chu sent two vice-chancellors with a formal document: We hereby appoint you prime minister.

Chuang Tzu held his bamboo pole still. Watching the Pu river, he said: "I am told there is a sacred tortoise, offered and canonized three thousand years ago, venerated by the prince, wrapped in silk, in a precious shrine on an altar in the temple.

What do you think? Is it better to give up one's life and leave a sacred shell as an object of cult in a cloud of incense for three thousand years, or to live as a plain turtle dragging its tail in the mud?"

"For the turtle", said the vice-chancellor, "better to live and drag its tail in the mud!"

"Go home!", said Chuang Tzu. "Leave me here to drag my tail in the mud."

hi41(10000) 5 days ago [-]

I think when someone sees the new note, they will realize a terrible crime was committed by society based on whom he loved. They will leave that path of unrighteousness and follow the path of peace and understanding. So yes, it is useful.

lern_too_spel(4168) 5 days ago [-]

How does this story apply? Turing has no life to live whether he is on the notes or not, and somebody will be put on them.

dalbasal(10000) 6 days ago [-]

Well deserved. War hero (possibly the greatest of the war) & science hero of the highest order. Mistreated terribly by the state for his efforts.

arcturus17(10000) 6 days ago [-]

Hate to be that guy, but the Great War is WWI

jermy(10000) 6 days ago [-]

On a point of technicality - he was only two years old when the Great War started! (I think you meant the 2nd World War)

codeulike(2855) 6 days ago [-]

The formulae and the binary ticker tape are a nice touch.

thom(3932) 6 days ago [-]

Shame he didn't make it onto the £10 note tbh.

mpweiher(31) 6 days ago [-]

Apparently the binary spells out Alan Turing's birthday!

Someone1234(4161) 6 days ago [-]

Alan Turing is a good choice. There were many good candidates on the shortlist (Paul Dirac, Rosalind Franklin, Dorothy Hodgkin, Ada Lovelace and Charles Babbage, James Clerk Maxwell, etc) that hopefully get their day.

I will say Stephen Hawking absolutely deserves it, one day, but I feel like the notable person should be dead for a certain period before we throw them on currency (certainly longer than a year). Conceptually I like the '50 year rule.' Helps mitigate populism or overly politicizing it.

That results in the Queen (or King) being a remark on today, and the Scientist or other note-worthy contributor being a remark on our history (and values?).

balzss(4154) 6 days ago [-]

I totally agree with you. This is sad news for Stephen Hawking though because I'd be surprised if we still had physical cash in 50 years time.

marsRoverDev(10000) 6 days ago [-]

ESA has named their newest mars rover after Rosalind Franklin, a good start.

liberte82(10000) 6 days ago [-]

Oh man, I totally forgot Stephen Hawking died until you reminded me. :(

fortran77(4071) 6 days ago [-]

Stephen Hawking would not be a good choice. He wasn't a major force in saving a whole nation, and he had hateful, controversial political opinions.

Alan Turing was truly a great man.

physicsguy(4170) 6 days ago [-]

It's cool, but the circulation for these notes is really low - I only ever used to see £50 notes when working in a shop that sold timber to builders!

jacknews(10000) 6 days ago [-]

Perhaps it shows the state of Britain's economy/relative worth compared to the rest of the world, since I used them (true, only occasionally) in the late 90s/early 2000s, but they must be worth much, much less now, due to inflation, not to mention that referendum thing. It's not because prices are low in Britain aka Treasure-Island, that's for sure!

$100 bills are more than common, for example, and similar (actually quite a bit more) in value.

golergka(2552) 6 days ago [-]

Aside from the excellent choice of Turing, this is some godawful design, with all the worst Photoshop cliches from the 90s.

suaveybloke(4141) 6 days ago [-]

It's just a rough mock-up, the final design should be much better (at least that's what happened with the redesigns of the £5/£10 notes).

smcl(3873) 6 days ago [-]

Just a shame that this bank note is basically unusable

maffydub(10000) 6 days ago [-]

Can you elaborate on why they're unusable?

jotm(4100) 6 days ago [-]

Maybe they'll finally start using it in cash machines outside London once the pound drops haha

jonatron(4005) 6 days ago [-]

They're normal in casinos. Don't know about anywhere else though.

marsRoverDev(10000) 6 days ago [-]

The current batch of 50s are practically unusable because they are widely counterfeited and lack sufficient security features. The new one should significantly lower that risk - that having been said, it's likely that they will be hoarded by drug dealers etc so you won't be seeing a huge number of them in circulation.

epanchin(10000) 6 days ago [-]

I have never had trouble spending a 50. Although Scottish 50s are often refused.

jgrahamc(24) 6 days ago [-]

Mark Carney, Governor of the Bank of England, came to the Cloudflare London office opening party recently and he told me that it was entirely the Governor's discretion to choose who is on the bank notes. This was the first time the public had been solicited for ideas, and a committee had whittled the suggestions down to a short list.

Ultimately, Carney made this call. Thanks!

chongli(10000) 6 days ago [-]

Mark Carney is Canadian and was previously the governor of the Bank of Canada. We've been doing this sort of thing with our money for a long time now. Looks like he brought the tradition to England.

dreamcompiler(3982) 6 days ago [-]

When I saw the headline my first thought was JGC must have had something to do with this! Thank you for your efforts on behalf of Turing over the years.

Zeebrommer(10000) 6 days ago [-]

The article states that 'In 2015, a total of 30,000 people nominated 590 famous visual artists for the £20 note, before JMW Turner was selected with the help of focus groups.' so this doesn't appear to be the first time the public is involved.

bufferoverflow(3949) 6 days ago [-]

Good choice of Alan, but what a trainwreck of the design and the typography. Ugly as hell.

jw1224(10000) 6 days ago [-]

Pretty sure this is only a mockup — the note itself won't be out until the end of 2021.

I remember seeing a preview of the redesigned £5/£10 notes a few years ago and thinking exactly the same thing — seeing Trebuchet MS on a bank note was a bit of a surprise, but thankfully the end result looked far better.

robert_foss(3570) 6 days ago [-]

It is funny how the article doesn't mention him committing suicide as a result of being chemically castrated by the UK authorities for being gay.

justin66(2850) 6 days ago [-]

There's a sentence in the article that mentions his chemical castration, trial, and posthumous pardon and links to a more detailed article detailing all this and broaching the topic of suicide.

I bet you want an editorial (and I bet you want one that mirrors your own beliefs and is a bit preachy about it). That's not what this article is for.

mhh__(4127) 6 days ago [-]

This is common knowledge at least in the UK (in my experience).

dschuetz(3196) 6 days ago [-]

Yes, exactly. Doesn't even feel like an apology.

Luc(1513) 6 days ago [-]

You already knew about it, so obviously there was no need to mention it. Again.

wlll(673) 6 days ago [-]

TBH if it was me I think I'd prefer it if not every single mention of me in the future brought up what is ultimately a rather personal and unpleasant thing. Yes, it was an appalling thing to have been done to him, but if I were successful I think I'd rather my successes were celebrated standing alone.

mises(3046) 6 days ago [-]

I'm not sure it needed to? I don't think they tried to exclude it, and more that 'by the way, we castrated this guy decades ago' doesn't naturally come up in the article. Plus, they did actually issue a formal apology.

I would hope they're doing this to honor Turing, not to apologize for what they did to him. A face on a bill is nice, but it doesn't make up for ruining someone's life, so it would be a lousy apology anyway. So I kind of hope the media covers more of what Turing accomplished than the things England did.

simplicio(10000) 6 days ago [-]

FWIW, after reading Hodges' biography of Turing, I don't think it's anywhere near as clear that his death was due to suicide as is commonly stated, or that if he did commit suicide, it was due to his (unjust) indecency conviction.

mike-cardwell(1032) 6 days ago [-]

The short paragraph which talks about his mistreatment, links to a much longer article regarding his pardon, and his alleged suicide. This seems totally reasonable to me.

brighter2morrow(10000) 6 days ago [-]

>It is funny how the article doesnt mention him committing suicide as a result of being chemically castrated by the UK authorities for being gay.

Yeah I agree, I assume this is actually the main reason he is so well known and celebrated today. Mere obsession with computers would make Gödel, Alonzo Church, and others with equivalent models of computation just as famous.

billpg(1636) 6 days ago [-]

For context, the £50 note is worth around 63 USD.

(Used to be around $80 a few years ago. Thanks Brexit!)

criddell(4110) 6 days ago [-]

Do you know if the bank sells new notes specifically to collectors? I'd love to buy one of these if I can get one that is in great shape.

raverbashing(3671) 6 days ago [-]

And it's even funnier how there's a quote there about how the £50 note is being targeted as an elitist/money-laundering note.

It might have been a big amount of money 20/30 years ago, but today it's a bit weird that it's the biggest denomination.

arctangent(10000) 6 days ago [-]

For the benefit of some below, the design shown in the news article is only a concept image [1]. The final design will be revealed nearer the time the new note is issued.

[1] https://www.bankofengland.co.uk/news/2019/july/50-pound-bank...

tripzilch(4043) 4 days ago [-]

Oh, good! :) because the white glow looks like a really bad photoshop cutout job ...

wlll(673) 6 days ago [-]

The last time I saw a £50 note was in about 1997. I pay with cash a lot. I think it's fair to say they don't get used all that much.

epanchin(10000) 6 days ago [-]

Why would you receive a £50 when paying in cash?

There's no larger note for which you could be receiving it as change.

duiker101(2735) 6 days ago [-]

The more surprising thing is that they are making a new £50 note at all. I haven't seen one in a very long time and many shops don't even accept them any more.

Symbiote(4145) 6 days ago [-]

That just depends where you live and work.

In a part of London with many tourists, I saw them used by people ahead of me in the queue daily. There are some cash machines somewhere in London that dispense them -- possibly in Canary Wharf, I forget.

Conversely, I don't think I've seen a cash machine with fivers.

https://imgur.com/a/sJu9y (cash machine showing it has £20s and £50s).

iforgotpassword(10000) 6 days ago [-]

That's interesting. In Germany I might get a strange look if I pay for my coffee with a 50€ bill, but it's usually accepted unless they actually don't have enough change. It's certainly no problem at a supermarket; I've even paid with a 100€ bill a few times (because there is this stupid ATM near my workplace that doesn't ask which bills I want and just gives me a 100 if I choose that amount, and I forget almost every time). Judging from other comments here, it seems outside London you really have trouble paying with that £50 bill. So far I have seen one supermarket with a sign up saying they don't accept 200€ bills.

I'm curious about other countries now because I never really gave that a thought. I was surprised though when I was in China for the first time and learned the largest one is 100rmb which was around 10€ at that point. It's now more like 15€ and China is rapidly changing to a completely cashless society anyways but it was really weird back then; since using the ATM included a fee, you would get like 4000 bucks at once (depending on how long you stayed) and run around with 40 bills in your pocket.

sygma(3345) 6 days ago [-]

A bit off-topic: last time I tried to use a 50 GBP note in London I got strange looks and it was refused. Apparently anything beyond a 20 GBP note is seen as 'too high'. A worrying turn of events...

drcode(3255) 6 days ago [-]

I think this is true in most countries now, in that ATMs are more and more designed to just spit out a single denomination, which raises the statistical likelihood that any larger denomination note could be counterfeit.

kitd(3922) 6 days ago [-]

Wait for a few months after Oct 31st, then it should be fine.

byziden(10000) 6 days ago [-]

Some shops will do extra verification, such as rubbing it against paper to ensure the ink comes off, and use a UV light to see the hidden symbols. Even though it might be legal tender, sadly there's no obligation to accept it. Just as they might refuse Pokemon cards as payment too. https://www.bankofengland.co.uk/knowledgebank/what-is-legal-...

folli(2237) 6 days ago [-]

The £50 note was described (...) as the 'currency of corrupt elites, of crime of all sorts and of tax evasion'.

I recently dealt with a 1000 CHF (= £811 = $1018) banknote in Switzerland. It's not very commonly used, but it exists.

sigwinch28(10000) 6 days ago [-]

Although not commonly used, it seems the Swiss do not have such skepticism of high-value notes.

I would routinely pay for things (groceries, electronics, etc.) with 100CHF and 200CHF notes without the cashiers batting an eye, while I used the 1000CHF note once or twice for large purchases.

Grue3(10000) 6 days ago [-]

In Russia banks love giving out 500 euro notes when you exchange rubles to euros, the problem is most places in Europe don't accept 500 euro notes for payment, even for sums that are more than 500 euro (such as paying for hotel stay). From what I heard even most banks won't accept them unless you have an account, so they're totally useless to bring for a euro trip.

sytelus(311) 6 days ago [-]

Isn't this unusual for British tradition? If I understood correctly, the bank notes are issued under the authority and guarantee of the monarch, which is why they carry her picture. This applies to any country which has also accepted her as their queen (Canada, Australia, New Zealand etc). It is said that the Queen doesn't carry a passport or banknotes because both are issued under her name. So there is big symbolism there which will be gone?

npad(3854) 6 days ago [-]

The Queen is on the other side of the note.

tobylane(10000) 6 days ago [-]

It's said she carries cash. She gives money to the collection plate at Sandringham's church at Christmas.

civicsquid(4158) 6 days ago [-]

It really isn't that unusual. The five pound note for example has Churchill on one side and the Queen on the other.

twic(3492) 6 days ago [-]

As others have mentioned, it's quite normal to have the queen on the front of the note, and some other historical figure on the back. As far as i can tell, this started in 1970, with a £20 note with William Shakespeare on it:

https://www.bankofengland.co.uk/banknotes/withdrawn-banknote...

schnable(4146) 6 days ago [-]

Nice, but a click baity headline for HN...

julianwachholz(4131) 6 days ago [-]

It's fixed now

jeffwass(3305) 6 days ago [-]

Title should probably be updated to explicitly say it's Alan Turing who will feature on the note.

napolux(834) 6 days ago [-]

#nospoiler :-)

dwardu(10000) 6 days ago [-]

And lose clicks?

rolltiide(10000) 6 days ago [-]

they using bootleg Photoshop 6 at the Bank of England? yeesh

random42(4114) 6 days ago [-]

How do you know this?

ginko(4071) 6 days ago [-]

I sort of agree. The photograph seems cheap. What happened to engraved portraits on banknotes?

rvz(10000) 6 days ago [-]

I am going to have to agree with this; the other notes look professionally engraved, and what we have here looks like it is rushed and an airbrushed Photoshop crop for us to see.

ddalex(4147) 6 days ago [-]

Awesome recognition! I just wish people weren't such dicks to Turing in his lifetime.

pbhjpbhj(3949) 6 days ago [-]

I feel like people would still react negatively to a 40-something picking up unknown teens (19, in the one case I know the details of) for casual sex? Particularly if that person were considered a security risk?

People [wrongly] get called a paedophile for such behaviour and get pilloried in the social press.

Aside: it always seemed like the simplest explanation for the strangeness around his arrest was that he was compromised (in the national security sense).

Of course none of that relates to his fantastic achievements in computing.

Fry (Christian fundamentalist) and Newton (Christian fundamentalist writer) -- both of whom have been on bank notes -- probably wouldn't fare well in the court of public opinion nowadays either. Not sure Churchill would make it on anymore either tbh.

mathieubordere(4170) 6 days ago [-]

+1 It breaks my heart when I think about what they did to him.





Historical Discussions: At what time of day do famous programmers work? (July 18, 2019: 646 points)
At what time of day does famous programmers work? (July 15, 2019: 6 points)
At what time of day does famous programmers work? Part 2. Workweek vs. Weekend (July 18, 2019: 3 points)

(648) At what time of day do famous programmers work?

648 points 3 days ago by lionix in 10000th position

ivan.bessarabov.com | Estimated reading time – 10 minutes | comments | anchor

At what time of day do famous programmers work?

I was curious about when famous programmers do their work, and it turns out it is quite easy to find out. The result of a programmer's work is code. Code is stored in version control systems (VCS). When you put code into a VCS, the time is recorded.

One of the most popular version control systems is git. When you put code into it you create a thing called a 'commit'. Here is an example of the raw information in a git commit:

$ git cat-file commit 82be015
tree 496d6428b9cf92981dc9495211e6e1120fb6f2ba
author Ivan Bessarabov <[email protected]> 1563188141 +0300
committer Ivan Bessarabov <[email protected]> 1563188141 +0300
Initial commit

Here you can see the commit message ('Initial commit'), the ID of the tree object that stores the file structure ('tree 496d...'), the name and email of the commit author, and the most interesting thing: the timestamp (1563188141) and the timezone information (+0300). (Every git commit has an 'author' and a 'committer'; usually they are the same.)

The timestamp is the number of seconds since 1 January 1970. If we convert 1563188141 to a more human-readable date we get '2019-07-15 10:55:41', which is the time in the UTC timezone. Then we add the '03' hours and '00' minutes of the offset to that time and get '2019-07-15 13:55:41', which is the time the commit author saw on his wall clock when he made the commit.
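
For instance, the conversion can be reproduced on the command line (a minimal sketch assuming GNU coreutils date; the epoch value is the one from the commit above):

$ date -u -d @1563188141 '+%Y-%m-%d %H:%M:%S'   # epoch seconds rendered as UTC
2019-07-15 10:55:41
$ date -u -d @$((1563188141 + 3*3600)) '+%Y-%m-%d %H:%M:%S'   # apply the +0300 offset by hand
2019-07-15 13:55:41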

Any serious codebase stored in a VCS has lots and lots of commits and many commit authors. So we can write a simple program that walks all the commits, filters out only the commits by one person, takes the local time of each commit, and aggregates them by the hour at which the commit was made.
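
Here is a rough sketch of that idea using nothing but git and standard shell tools (this is not the author's script; --date=format:'%H' prints just the hour, rendered in the timezone recorded in each commit):

# count this author's commits per wall-clock hour; each output line is '<count> <hour 00-23>'
$ git log --author='Linus Torvalds' --pretty=format:'%ad' \
      --date=format:'%H' | sort | uniq -c | sort -k2,2n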

Linus Torvalds

Linus is the author of the Linux operating system, the author of the git VCS, and the author of the lesser-known program Subsurface (a dive log program for scuba divers).

Here is the graph with his commits hours to the repo https://github.com/torvalds/linux.

Linus gives the impression of being a totally normal person. Most of his commits are made at 10 in the morning, and there are practically no commits at night.

00 -   61 *
01 -   21
02 -   20
03 -   13
04 -   28
05 -  116 *
06 -  263 ****
07 -  793 *************
08 - 1802 ******************************
09 - 2578 *******************************************
10 - 2963 **************************************************
11 - 2670 *********************************************
12 - 2257 **************************************
13 - 2085 ***********************************
14 - 2039 **********************************
15 - 2139 ************************************
16 - 1955 ********************************
17 - 1736 *****************************
18 - 1365 ***********************
19 - 1023 *****************
20 -  853 **************
21 -  577 *********
22 -  240 ****
23 -  128 **

And here are his commits to the project https://github.com/git/git:

00 -    9 ****
01 -    7 ***
02 -    4 *
03 -    0
04 -    0
05 -    0
06 -    0
07 -   20 ********
08 -   27 ************
09 -   90 ****************************************
10 -  108 ************************************************
11 -  112 **************************************************
12 -   97 *******************************************
13 -   73 ********************************
14 -   70 *******************************
15 -  104 **********************************************
16 -   77 **********************************
17 -   59 **************************
18 -   54 ************************
19 -   49 *********************
20 -   58 *************************
21 -   49 *********************
22 -   31 *************
23 -   19 ********

Sebastian Riedel

Sebastian is the author of two popular Perl frameworks: Catalyst and Mojolicious.

His work schedule is insane. I envy his productivity.

These are his commits to the https://github.com/mojolicious/mojo repo, grouped by hour:

00 -  685 ***********************************************
01 -  553 **************************************
02 -  472 ********************************
03 -  414 ****************************
04 -  341 ***********************
05 -  334 ***********************
06 -  298 ********************
07 -  208 **************
08 -  147 **********
09 -  145 **********
10 -  198 *************
11 -  225 ***************
12 -  302 ********************
13 -  342 ***********************
14 -  488 *********************************
15 -  536 *************************************
16 -  630 *******************************************
17 -  678 **********************************************
18 -  723 **************************************************
19 -  641 ********************************************
20 -  626 *******************************************
21 -  628 *******************************************
22 -  686 ***********************************************
23 -  681 ***********************************************

Chris Lattner

Chris is the author of the LLVM compiler and the Swift programming language. He worked at Apple, spent a short period at Tesla, and now works at Google.

Looking at the distribution of his commits to the https://github.com/apple/swift repo, it looks like he is a night person:

00 -  324 **************************************
01 -  185 *********************
02 -   79 *********
03 -   77 *********
04 -  265 *******************************
05 -  426 **************************************************
06 -  313 ************************************
07 -  116 *************
08 -   31 ***
09 -   40 ****
10 -   41 ****
11 -   46 *****
12 -   30 ***
13 -   48 *****
14 -  105 ************
15 -  126 **************
16 -  229 **************************
17 -  245 ****************************
18 -  237 ***************************
19 -  151 *****************
20 -  300 ***********************************
21 -  394 **********************************************
22 -  387 *********************************************
23 -  341 ****************************************

Rob Pike

Rob's latest notable work is the Go programming language. Here is the graph of his commits to the repo https://github.com/golang/go:

00 -   29 ****
01 -    1
02 -    1
03 -    5
04 -    0
05 -    5
06 -   19 **
07 -   62 *********
08 -   80 ***********
09 -  126 ******************
10 -  240 ***********************************
11 -  338 *************************************************
12 -  184 ***************************
13 -  339 **************************************************
14 -  317 **********************************************
15 -  301 ********************************************
16 -  264 **************************************
17 -  224 *********************************
18 -   73 **********
19 -   69 **********
20 -   91 *************
21 -   79 ***********
22 -   64 *********
23 -   51 *******

Brad Fitzpatrick

Brad is the author of LiveJournal, he created memcached, and now he works on the Go programming language.

Here is the graph of when he committed to https://github.com/memcached/memcached:

00 -   11 ********************************
01 -   10 *****************************
02 -   17 **************************************************
03 -    7 ********************
04 -    7 ********************
05 -   13 **************************************
06 -    8 ***********************
07 -    8 ***********************
08 -    2 *****
09 -    0
10 -    3 ********
11 -    1 **
12 -    0
13 -    0
14 -    0
15 -    0
16 -    4 ***********
17 -    8 ***********************
18 -    9 **************************
19 -    9 **************************
20 -   12 ***********************************
21 -   10 *****************************
22 -   11 ********************************
23 -   14 *****************************************

And for the Go language https://github.com/golang/go:

00 -   44 *************
01 -   30 *********
02 -   26 ********
03 -   24 *******
04 -   26 ********
05 -   27 ********
06 -   21 ******
07 -   38 ***********
08 -   68 ********************
09 -  114 ***********************************
10 -  145 ********************************************
11 -  160 *************************************************
12 -  124 **************************************
13 -  130 ****************************************
14 -  148 *********************************************
15 -  160 *************************************************
16 -  162 **************************************************
17 -  158 ************************************************
18 -  143 ********************************************
19 -  127 ***************************************
20 -  104 ********************************
21 -  100 ******************************
22 -  115 ***********************************
23 -   69 *********************

Rasmus Lerdorf

The first developer of the PHP programming language.

https://github.com/php/php-src (this repo does not contain the first PHP versions, so these are work-time statistics for the recent PHP versions):

00 -   55 **************************
01 -   29 *************
02 -   21 **********
03 -   28 *************
04 -   42 ********************
05 -   52 *************************
06 -   41 *******************
07 -   22 **********
08 -   44 *********************
09 -   56 **************************
10 -   37 *****************
11 -   25 ************
12 -   30 **************
13 -   43 ********************
14 -   67 ********************************
15 -   71 **********************************
16 -  104 **************************************************
17 -  104 **************************************************
18 -   99 ***********************************************
19 -   56 **************************
20 -   56 **************************
21 -   82 ***************************************
22 -   96 **********************************************
23 -   78 *************************************

Guido van Rossum

Benevolent dictator for the Python programming language https://github.com/python/cpython:

00 -  346 *****************
01 -  233 ***********
02 -  304 ***************
03 -  247 ************
04 -  229 ***********
05 -  126 ******
06 -   67 ***
07 -   52 **
08 -  107 *****
09 -  186 *********
10 -  200 **********
11 -  230 ***********
12 -  317 ***************
13 -  572 ****************************
14 -  844 ******************************************
15 -  994 **************************************************
16 -  899 *********************************************
17 -  801 ****************************************
18 -  815 ****************************************
19 -  789 ***************************************
20 -  818 *****************************************
21 -  749 *************************************
22 -  750 *************************************
23 -  517 **************************

Fabrice Bellard

He created FFmpeg, QEMU and the Tiny C Compiler, and recently he created QuickJS.

Here are his working hours on the https://github.com/FFmpeg/FFmpeg project:

00 -   17 *******
01 -    4 *
02 -    1
03 -    0
04 -    6 **
05 -    5 **
06 -    0
07 -    4 *
08 -    4 *
09 -   15 ******
10 -   20 *********
11 -   10 ****
12 -   13 ******
13 -   41 ******************
14 -   47 *********************
15 -   23 **********
16 -   44 ********************
17 -   51 ***********************
18 -   50 ***********************
19 -   30 *************
20 -   31 **************
21 -   46 *********************
22 -  108 **************************************************
23 -   43 *******************

Errata

Some previous versions of this text had different numbers for Linus Torvalds' repos linux and git. I used '--author='[email protected]' --author='[email protected]'' to get all of Linus's commits, but there are some commits made by Linus with other emails. The current version of the text uses data for commits filtered with '--author='Linus Torvalds''.

And in the case of the linux repo there was a bug. I used 2 steps to generate the graph: first save the commit info to a file with 'git log --author='[email protected]' --author='[email protected]' --date=iso > all', and then feed that file to a one-liner: 'cat all | perl -nalE 'if (/^Date:\s+[\d-]{10}\s(\d{2})/) { say $1+0 }' | ...'. The first step did not finish correctly and the 'all' file had only partial data (the earliest commit saved in that file was f39e8409955fad210a9a7169cc53c4c18daaef3a).

Here are the previous versions of graphs:

Discussions

This post was shared on several sites and there were pretty interesting discussions about it:

The script

If you want to check when some other programmer works (or if you want to find such information about yourself), here is the script that I used to get that info. It is a one-liner that you need to execute in a working copy of the repository. You need to pass the --author option to the git command. In the simplest case you specify the name ('--author='Sebastian Riedel''), but it is also possible to use an email ('--author='[email protected]'') and to specify more than one '--author'.

https://gist.github.com/bessarabov/674ea13c77fc8128f24b5e3f53b7f094
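
For example, the filtering described above can be passed straight to git log (a sketch, not the gist itself; the email addresses below are placeholders):

# filter by display name, or by one or more author emails
$ git log --author='Sebastian Riedel' --pretty=format:'%ad' --date=format:'%H' | sort | uniq -c
$ git log --author='user@example.com' --author='other@example.com' \
      --pretty=format:'%ad' --date=format:'%H' | sort | uniq -c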

Links

There is a continuation of this post:

This text is also available in Russian.




All Comments: [-] | anchor

artur_makly(2782) 3 days ago [-]

TIL who the hell these 'famous' folks are!

smudgymcscmudge(4102) 3 days ago [-]

Who would you consider famous among currently active programmers that aren't on this list?

I knew about half of the list (mainly the authors of software I use). I could probably name a few who are equally famous, but not many who are more famous.

edit: I went back and counted. I recognized 4 of 7 names.

tshanmu(10000) 3 days ago [-]

There is a possible issue though: would they commit immediately after coding? How long do they need to work on a change before actually committing it?

glandium(3838) 3 days ago [-]

Another issue is that when you travel to a different timezone, you don't necessarily update the timezone of your system.

jfroma(3545) 3 days ago [-]

I think it is different for every person. I usually commit often, but I'd squash multiple commits into one.

So, my graph isn't exactly going to tell you at what hours I code, but rather at what times I sent pull requests to my coworkers, for example.

iosonofuturista(10000) 3 days ago [-]

For one or two isolated commits, that is a fair question. But with hundreds I think it evens out.

A more detailed analysis could also take in account the number of changes. How many files per commit? How many lines, maybe?

The day of week is also a very interesting metric, that is very visible e.g. on github's timeline. I know my busiest days in terms of commits are in the middle of the week, a marked decrease on Friday afternoons, and if there are commits in the weekend, they most likely contain swearing in the message.

Edit: I'm betting Brad Fitzpatrick's pattern in Go (current job?) and memcached (not current job?) would vary in terms of day of week, not just hour.

amelius(869) 3 days ago [-]

And perhaps they commit first thing in the morning, after a nightly test succeeded (or perhaps they simply forget to commit at night).

Another useful graph would be commit volume versus time (where volume is e.g. number of lines modified)

hmottestad(4046) 3 days ago [-]

I think they calculated the wall time incorrectly.

>> The timestamp is the number of seconds since 1st January 1970. If we convert 1563188141 to more human date we'll get '2019-07-15 10:55:41' — that is the time in UTC timezone. Then we add '03' hours and '00' minutes to that time and get '2019-07-15 13:55:41' — that is the time the commit author can see on his wall clock when he did commit.

Usually the +0300 indicates that the timestamp is at +3 hours from UTC, e.g. the timestamp is already in local wall time. The offset can be used to convert it to UTC.

So if the author did what they wrote, then I reckon all the data is actually in UTC and not local wall time.

vomitcuddle(4164) 3 days ago [-]

Unix time is always the number of seconds since 00:00:00 Thursday, 1 January 1970 UTC time. Git records the committer's time zone, since the time would normally be converted back into a human readable form.
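
A quick way to see both representations for a single commit (a sketch; %at is the raw epoch timestamp, while %ad with --date=iso renders the same instant using the offset recorded in the commit; output shown for the example commit from the article):

$ git show -s --format='%at %ad' --date=iso 82be015
1563188141 2019-07-15 13:55:41 +0300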

JDevlieghere(10000) 3 days ago [-]

The one for Chris Lattner is definitely off: https://twitter.com/clattner_llvm/status/1150992062365310977

segmondy(10000) 3 days ago [-]

Every time of the day; it's like asking at what time does a musician compose, or an artist paint, or an author write. Surely we can consider the physical activity work, but the mental activity is also work and precedes the physical.

The brain churns on the problem at hand most of the time, even when we are away from the computer. I would even be so bold as to say more so when we are away from the computer and not distracted by typing.

scottmcf(10000) 3 days ago [-]

Absolutely this. Working on something else entirely is often the best way to resolve difficult logic problems that I get stuck on.

Just throw it on the 'subconscious pile' and let my brain chew away on it in the background.

gwern(136) 3 days ago [-]

> Every time of the day, it's like asking at what time does a musician compose, or an artist paint or a author write.

That's a pretty useless way of seeing things. You might as well define eating or going on the toilet as 'writing' since hey, they might be thinking about it while they're doing it. If you look at actual writers talking about when they write (https://www.gwern.net/Morning-writing) many of them are quite explicit about taking breaks to do other work like correspondence or family/friends in order to not think about writing.

flr03(4155) 3 days ago [-]

It's difficult to draw any conclusion since we don't know their work pattern. If you commit at 12 it probably means you were working between 9 and 12, or between 11 and 12, or maybe it's just yesterday's work? Who knows.

panpanna(10000) 3 days ago [-]

I usually work 9-5 and commit before going home. I might pick it up again at home and commit my changes before I go to sleep, around 11.

According to this study I would be working 5 to 11pm

AdmiralAsshat(1407) 3 days ago [-]

Author is probably a non-native English speaker, so, gentle correction: 'At what time of day do non famous programmers work?'

zcrackerz(10000) 3 days ago [-]

The original article was written in Russian, so I'm assuming this is an automatic translation.

codingdave(10000) 3 days ago [-]

My gut reaction to this was the same as everyone else - to question the validity of using commits as a measurement of time worked.

But then I read further and realized that this isn't a research study, it is just a clever script to output some data from github. So I'm going to go run it on a couple of my projects, see what it says, and enjoy that someone put it together.

nitrogen(3919) 3 days ago [-]

I've run similar git stats scripts (I think it was a Debian package actually) on my own projects, and for me at least, they were pretty reflective of my own working habits, with a time lag of an hour or two in the mornings.

anaphor(2855) 3 days ago [-]

From John Carmack:

> I often worked nights and slept during the day, but I almost always got 8 hours of sleep. There are still 112 hours left in the week!

https://twitter.com/ID_AA_Carmack/status/932718658366857216

simplify(4165) 3 days ago [-]

Happy to see this being pointed out. Sleep is the single most important factor in code quality & correctness https://twitter.com/hillelogram/status/1119709859979714560

michaelg7x(10000) 2 days ago [-]

Full disclosure: I am definitely not famous, but might be considered a programmer.

I found that when hacking for several months on a solo prototype project, my hours drifted later and later, so that I ended up going to bed around 4 a.m. and rising at 11. It's really hard to correct! In terms of productivity, though, the wee small hours are hard to beat.

naltun(10000) 3 days ago [-]

> Linus is the author of the Linux operating system

Hmmmm... Not quite. I am a massive fan of the Linux kernel, but Torvalds is _not_ the author of the Linux operating system (if we assume that Linux is the equivalent of the GNU utilities + the Linux kernel).

z_open(10000) 3 days ago [-]

Why would we assume this? A kernel is absolutely an operating system.

noddingham(10000) 3 days ago [-]

Who cares? Why does it matter?

rvz(10000) 3 days ago [-]

Exactly, no-one cares. This industry is creating a culture of celebrity programmers and worshipping and glorifying the code they write.

It does not even remotely matter. What matters is who is going to maintain all of this in the years to come. (Or it could all be obsoleted or superseded anyway.)

miguelmota(4170) 3 days ago [-]

Are the commits represented in UTC time when using the script?

mikorym(10000) 3 days ago [-]

It's in local time as an offset of UTC. So, if you pool by repo then you could have two commits at 13:00 even though they were committed at different times of the day, but in different time zones.

pgcj_poster(10000) 3 days ago [-]

I wonder how much this is related to age.

8fingerlouie(10000) 3 days ago [-]

Quite a bit i'd imagine.

I used to write lots of code at all hours of the day, 8 hours at work, 6+ hours at home. Then i had kids, and while i still write code 8 hours a day at work, i rarely find the time to write anything in my spare time anymore.

Every now and then i have 'some project' that i'm working on, but it's mostly in bursts of 2-3 days, followed by 3-5 days of 'doing nothing'. The 2-3 days are _not_ weekends, and i usually end up losing sleep over it.

Kids are relentless, and no matter how late you stayed up, they still get up at 6 AM, need to be taken to kindergarten/school, and require you to be (mentally) present in the afternoon/evening.

I guess you could talk about a 'soft burnout' happening from the 2-3 days burst.

I'm trying to change the burst periods into more frequent, shorter periods of 1-2 hours, hoping the increased frequency will reduce the 'ramp up' time while getting 'into the zone'.

anaphor(2855) 3 days ago [-]

Probably also heavily related to whether you have kids or not. I remember Linus saying that before he had kids he used to work during the night, but he had to stop that so he could drive them to school, etc.

cptskippy(10000) 3 days ago [-]

I was wondering the same thing, and also how one's habits change over time. I wonder whether something like the Linux kernel repository would have sufficient history to show this?

flatline(3933) 3 days ago [-]

Do all these individuals work from a US time zone, or does git report the user's local time zone from which the commit was made?

dabei(4160) 3 days ago [-]

It is local time zone. The commit log has time zone info.

alexrbarlow(4130) 3 days ago [-]

Does this take into account timezones?

cptskippy(10000) 3 days ago [-]

4th paragraph.

wiradikusuma(2123) 3 days ago [-]

I'm a forgetful person, so I commit very often but with the label 'tmp', and later I squash them all into one proper commit before pushing. I'm sure I'm not alone, so commit time can't be used as a barometer.

samhh(10000) 3 days ago [-]

I dislike the apparent inability to label groups of changes with `git stash`, so instead before leaving a branch with in-progress changes I'll commit them all as 'WIP'. Then, when I return to the branch, I'll check if the previous commit was WIP (`git log -1`), and if so will reset it (`git reset HEAD~1`). It's the best workflow I've discovered for dealing with this problem, and it means that I don't lose my commit history which is otherwise very clean.
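
A sketch of that workflow as commands (assuming all in-progress changes should be parked and the tip commit is the only throwaway one):

$ git add -A && git commit -m 'WIP'   # park in-progress work before leaving the branch
$ git log -1 --format=%s              # later, back on the branch: is the tip commit 'WIP'?
$ git reset HEAD~1                    # if so, unwrap it back into the working tree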

lucb1e(2085) 3 days ago [-]

I never saw anyone do this. May I ask what the point is? If it's just going to be in your local version and you later have to do (a very small amount of) work to get it back into one commit, why bother committing random stuff intermediately at all?

bstar77(10000) 3 days ago [-]

When I'm working on a new change I always create a new branch and (almost) always just work against one commit. I'll just keep amending that commit. I generally work out my tickets so that they are small enough that I don't have too many concerns with each branch, so maintaining a commit history isn't all that useful. I'm pretty sure I do lose out on 'looking really busy with lots of commits' in my github history tracking, but I could care less.

holtalanm(10000) 3 days ago [-]

i do this all. the. time. i'll squash multiple in-progress commits into one before submitting a PR.

dangirsh(4154) 3 days ago [-]

You might like Magit's WIP feature: https://magit.vc/manual/magit/Wip-Modes.html

cyphar(3681) 3 days ago [-]

I would suggest making use of 'git commit --fixup' or 'git commit --squash' which creates a commit that is specially-named such that you can later squash everything together with `git rebase --autosquash'. It's really transformed how I work on large patchsets.
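
For reference, a minimal sketch of that flow ('abc1234' is a placeholder commit id):

$ git commit --fixup=abc1234             # creates a commit titled 'fixup! <subject of abc1234>'
$ git rebase -i --autosquash abc1234~1   # moves the fixup next to abc1234 and squashes it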

wyldfire(650) 3 days ago [-]

To those asking about whether time zones are considered, the author responded in [1]:

> The script uses the time the author saw on his wall clock when doing the commit. I can't imagine better time to use for such graphs

[1] https://gist.github.com/bessarabov/674ea13c77fc8128f24b5e3f5...

JCharante(10000) 2 days ago [-]

What does wall clock mean? How do they know the geographic position of the authors at their time of commit? I have my laptop's time set to my SO's because if I want to know my current timezone I can just look down at my watch.

CameronNemo(3979) 3 days ago [-]

I can imagine a better time: that of their home timezone. Perhaps it is +0400 where someone makes a commit, but if their home time zone is -0700, that is a bit different situation.

tuna-piano(3809) 3 days ago [-]

As someone who generally works a 9-5 job at a company where most people are expected to work 9-5, can anyone explain if there are any norms or rules that enable someone to work such abnormal hours at their companies?

Do companies like Google not care as much about work hours? I can't imagine anyone showing up at 2pm at any of the places I've worked, regardless of how productive they were.

filoleg(10000) 3 days ago [-]

As others have already mentioned, it depends on the manager and the company, but I noticed that most tech companies are super lax about it and don't care when and how much you work, as long as you get the expected work done, are responsive during a good chunk of the day (i.e., still have a lot of schedule overlap with your teammates, so that they can reach out to you for help), and are able to participate in team-related things like code reviews/meetings/etc.

For example, some of my teammates do 8 to 4, some noon to 8, and others do 9 to 1 then go to gym/do chores/whatever, and then 4 to 8. As long as you deliver and communicate well within your schedule, no one cares.

I personally like it, because some days I might not feel good at all in the morning, so I get in late, but then I get a weird burst of energy around 7pm when I get home, so I sit down and end up writing some code.

Keep in mind, this is a double-edged sword, because you might end up working late at night to get some burning feature done or fix a nasty bug blocking the rest of the team.

paco3346(10000) 3 days ago [-]

I think this really depends on the attitude of the company and specific managers. Each quarter I'm given a list of projects and just need to have them done by the end of the quarter.

Thankfully my boss has seen that I'm much more productive with my abnormal schedule, so I'm inclined to think this type of freedom probably comes on a case-by-case basis.

ddenisen(10000) 3 days ago [-]

I've never worked (as a software engineer) at a company that expects their employees to show up at 9am. Now, at the place where I work currently, most people _do_ tend to show up between 8-9am and leave by 5-5:30, so working at 6pm I often find myself among a single-digit number of people still at the office, but I think such schedules are usually driven by people's life obligations (having kids, mostly) and personal preferences (naturally early risers) rather than company requirements. I myself routinely wake up around 9am, and come in to the office between 10:30-11:15am. I've been here for 5+ years, gotten a substantial raise each year since I've started, and recently gotten promoted. In talking with my manager, my working hours have never ever been brought up as a negative (in fact, my current boss tends to get in around 10:30am himself, mostly to avoid rush hour traffic).

WWLink(10000) 3 days ago [-]

After you've proven yourself, some places will let you get away with a lot.

njharman(4089) 3 days ago [-]

Tl;dr: small companies, and 'earning' it.

After 6-12 months of getting shit done, probably including a 12-20 hour stint working until an 'oh shit, we're losing money' issue was fixed, and a discussion with my boss, I have the freedom to time-shift my work. I'm consistently present for meetings. But if I'm blocked by external factors I'll take Monday off so I can work on Saturday (unblocked on Wednesday) to meet the following Monday deadline. I often work late at night where I can get a solid block of 3-4 hours uninterrupted. I mostly work 10 to 7 to avoid traffic. I vaguely track hours and I average 36-40 or less, but they are so much more productive. Many people spend 8+ hours at work but get little done. My 6 hours are solid work. It also helps me manage burnout.

I'm slightly manic-depressive and partner/childless. Works great for me. YMMV.

golergka(2552) 3 days ago [-]

Game developer here, most of the companies I worked at had an expectation that you show up around 12 and leave around 8-9. Guess it's a culture thing.

Now that I'm a remote contractor, I can finally live according to my natural schedule and wake up around 5pm.

dahart(3736) 3 days ago [-]

Personally, I'd be very cautious about assuming that commit time has anything to do with work time.

For at least 15 years I've had a policy - and so have the people on my teams - to avoid merging or committing to public branches at night or just before & during weekends so that you don't accidentally hose other people on the team, who rely on automated builds & testing. We write code at all hours, but wait to commit/push/merge to master until the morning when everyone's there.

I realize that not everyone has policies like that, but these are high profile programmers who are likely to have their own complicating factors. Linus, for example, commits a lot of merges that other people depend on; his code reviews and code writing might be on separate schedules.

killjoywashere(3304) 3 days ago [-]

Interestingly, many pathologists have a policy of not signing out cases (do I commit this patient to a cancer-not-cancer diagnosis?) in the evenings. The move is generally more self-protective (tired minds make bad decisions), but certainly the 24-hour folks like hospitalists aren't looking for me to add more work to their nights and weekends either.

khalilravanna(10000) 3 days ago [-]

Looks like you already addressed that merge time != commit time. That being said, while it's hard to make a blanket statement about what part of the working process a commit falls in, I'd venture a guess that it's usually at the end of a period of work. Given that, a commit represents an indeterminate stretch of time before its timestamp. So having no commits at 6 AM doesn't mean the person isn't working at 6 AM, but it probably means they aren't working at 5 AM or 4 AM.

But that all being said, we only have a handful of people worth of data points and who's to say that Linus isn't someone who writes his code the previous day and commits it the next. Or that Guido isn't someone who makes lots of small commits as he's working. Hell, maybe they all squash multiple days worth of commits leaving it with a timestamp for a totally different time.

tl;dr: Yeah, I don't know that we can glean that much super reliably from this data set.

samstave(3736) 3 days ago [-]

YEP.

We instituted a policy of NO CHANGE FRIDAYS! To avoid fucking up people's weekends.

kawsper(3925) 3 days ago [-]

Isn't that what feature or topic branches are for? Commit in your own little world, merge when you are ready to share.

strunz(10000) 3 days ago [-]

These are commit times, not merge times.

dchest(610) 3 days ago [-]

Strange policy about commits — why not just push at night or during weekends? Why's everyone not working on branches?

novok(10000) 3 days ago [-]

I would avoid deploys before weekends, holidays, and vacations but you have problems with your pre-merge/pre-commit testing setup if it possibly screws people up when you land something.

spullara(1201) 3 days ago [-]

Commit time and push time are not the same time.

k__(3262) 3 days ago [-]

The work patterns would be more interesting.

Does Bellard think the whole project through for days and then simply hack it out in a few hours?

Does one of them use TLA+?

Do they use iterative approaches, where the first few versions are really buggy?

anaphor(2855) 3 days ago [-]

From the projects they mention, the only one I can see really benefiting from TLA+ would be memcached. For most things it doesn't make a whole lot of sense to use it, unless you're writing a potentially distributed app. If you just want to model a regular program, then you'd be better off looking at PlusCal, which is more oriented to regular programs that aren't designed to be distributed. It compiles down to TLA+ so you get the same benefits, but with an easier to understand syntax.

Edit: Also if you want to learn about how famous programmers work, then there's no better source than the book Coders at Work by Peter Seibel. http://www.codersatwork.com/

nojvek(3857) 3 days ago [-]

I guess only white male programmers are famous in programming ?

Good job further propagating the stereotype.

numlock86(10000) 3 days ago [-]

I already commented on this somewhere else. What's the problem with that? Are you implying female/black programmers are different?

[1] https://news.ycombinator.com/item?id=20469963

btucker(3975) 3 days ago [-]

It's unfortunate that the author chose to only include male programmers in the post.

numlock86(10000) 3 days ago [-]

Why is it unfortunate? Or am I missing some sort of implication here? And what about black programmers? Does it matter? If so, why?

barce(3582) 3 days ago [-]

Let's cargo cult the advice on this ftw. Also let's piss off our managers by saying, 'Yeah, I heard this on Hacker News.' Here for the downvotes to get back to 666.

throwaway3627(3849) 3 days ago [-]

Lmao. OP items like these reek of people looking for shallow poser shortcuts rather than digging in.

donatj(3525) 3 days ago [-]

It's interesting to me hearing people in the comments talk about committing after hours of work. I instinctively commit after every maybe 30 loc, especially if I feel it has value. I'll sometimes squash them down if the PR turns out particularly noisy. It would be interesting to compare the average size of commits as well I'd imagine.

nottorp(3930) 3 days ago [-]

I don't know, I commit logical units. Changes that do something. If they're a one liner, that's it. If it's 500 lines, that's it.

If it's more than a day of work I might commit and push unfinished stuff before i stop for the day, but otherwise no.

Edit: although the latter happens very rarely. IMO if a unit is a whole day of work, maybe it needs splitting.

alexhutcheson(4052) 3 days ago [-]

In my opinion, the project should be in a working state (everything builds, all tests pass, and the built binary or library would be usable) after every commit. Additionally, every commit should also contain unit tests for the code that's being changed, if possible.

Sometimes it's possible to meet these criteria after writing just 30 lines of code, but more commonly it will take 100-300 lines of code and several hours to get to that state.





Historical Discussions: What's Coming in Python 3.8 (July 17, 2019: 633 points)

(633) What's Coming in Python 3.8

633 points 4 days ago by superwayne in 3649th position

lwn.net | Estimated reading time – 11 minutes | comments | anchor

By Jake Edge July 17, 2019

The Python 3.8 beta cycle is already underway, with Python 3.8.0b1 released on June 4, followed by the second beta on July 4. That means that Python 3.8 is feature complete at this point, which makes it a good time to see what will be part of it when the final release is made. That is currently scheduled for October, so users don't have that long to wait to start using those new features.

The walrus operator

The headline feature for Python 3.8 is also its most contentious. The process for deciding on PEP 572 ('Assignment Expressions') was a rather bumpy ride that eventually resulted in a new governance model for the language. That model meant that a new steering council would replace longtime benevolent dictator for life (BDFL) Guido van Rossum for decision-making, after Van Rossum stepped down in part due to the 'PEP 572 mess'.

Out of that came a new operator, however, that is often called the 'walrus operator' due to its visual appearance. Using ':=' in an if or while statement allows assigning a value to a variable while testing it. It is intended to simplify things like multiple-pattern matches and the so-called loop and a half, so:

    m = re.match(p1, line)
    if m:
        return m.group(1)
    else:
        m = re.match(p2, line)
        if m:
            return m.group(2)
        else:
            m = re.match(p3, line)
            ...
becomes:
    if m := re.match(p1, line):
        return m.group(1)
    elif m := re.match(p2, line):
        return m.group(2)
    elif m := re.match(p3, line):
        ...
And a loop over a non-iterable object, such as:
    ent = obj.next_entry()
    while ent:
        ...   # process ent
        ent = obj.next_entry()
can become:
    while ent := obj.next_entry():
        ... # process ent
These and other uses (e.g. in list and dict comprehensions) help make the intent of the programmer clearer. It is a feature that many other languages have, but Python has, of course, gone without it for nearly 30 years at this point. In the end, it is actually a fairly small change for all of the uproar it caused.

Debug support for f-strings

The f-strings (or formatted strings) added into Python 3.6 are quite useful, but Pythonistas often found that they were using them the same way in debugging output. So Eric V. Smith proposed some additional syntax for f-strings to help with debugging output. The original idea came from Larry Hastings and the syntax has gone through some changes, as documented in two feature-request issues at bugs.python.org. The end result is that instead of the somewhat cumbersome:

    print(f'foo={foo} bar={bar}')
Python 3.8 programmers will be able to do:
    print(f'{foo=} {bar=}')
In both cases, the output will be as follows:
    >>> foo = 42
    >>> bar = 'answer ...'
    >>> print(f'{foo=} {bar=}')
    foo=42 bar=answer ...

Beyond that, some modifiers can be used to change the output: '!s' uses the str() representation rather than the default repr() value, and '!f' will be available to access formatting controls. They can be used as follows:

    >>> import datetime
    >>> now = datetime.datetime.now()
    >>> print(f'{now=} {now=!s}')
    now=datetime.datetime(2019, 7, 16, 16, 58, 0, 680222) now=2019-07-16 16:58:00.680222
    >>> import math
    >>> print(f'{math.pi=!f:.2f}')
    math.pi=3.14

One more useful feature, though it is mostly cosmetic (as is the whole feature in some sense), is the preservation of spaces in the f-string 'expression':

    >>> a = 37
    >>> print(f'{a = }, {a  =  }')
    a = 37, a  =  37
The upshot of all of that is that users will be able to pretty-print their debugging, log, and other messages more easily. It may seem somewhat trivial in the grand scheme, but it is sure to see a lot of use. F-strings have completely replaced other string interpolation mechanisms for this Python programmer and I suspect I am far from the only one.

Positional-only parameters

Another change for 3.8 affords pure-Python functions the same options for parameters that those implemented in C already have. PEP 570 ('Python Positional-Only Parameters') introduces new syntax that can be used in function definitions to denote positional-only arguments—parameters that cannot be passed as keyword arguments. For example, the builtin pow() function must be called with bare arguments:

    >>> pow(2, 3)
    8
    >>> pow(x=2, y=3)
    ...
    TypeError: pow() takes no keyword arguments

But if pow() were a pure-Python function, as an alternative Python implementation might want, there is no easy way to force that behavior. A function could accept only *args and **kwargs, then enforce the condition that kwargs is empty, but that obscures what the function is trying to do. There are other reasons described in the PEP, but many, perhaps most, are not things that the majority of Python programmers will encounter very often.

Those that do, however, will probably be pleased that they can write a pure-Python pow() function, which will behave the same as the builtin, as follows:

    def pow(x, y, z=None, /):
        r = x**y
        if z is not None:
            r %= z
        return r
The '/' denotes the end of the positional-only parameters in an argument list. The idea is similar to the '*' that can be used in an argument list to delimit keyword-only arguments (those that must be passed as keyword=...), which was specified in PEP 3102 ('Keyword-Only Arguments'). So a declaration like:
    def fun(a, b, /, c, d, *, e, f):
        ...
Says that a and b must be passed positionally, c and d can be passed as either positional or by keyword, and e and f must be passed by keyword. So:
    fun(1, 2, 3, 4, e=5, f=6)          # legal
    fun(1, 2, 3, d=4, e=5, f=6)        # legal
    fun(a=1, b=2, c=3, d=4, e=5, f=6)  # illegal
It seems likely that most Python programmers have not encountered '*'; '/' encounter rates are likely to be similar.

A movable __pycache__

The __pycache__ directory is created by the Python 3 interpreter (starting with 3.2) to hold .pyc files. Those files contain the byte code that is cached after the interpreter compiles .py files. Earlier Python versions simply dropped the .pyc file next to its .py counterpart, but PEP 3147 ('PYC Repository Directories') changed that.

The intent was to support multiple installed versions of Python, along with the possibility that some of those might not be CPython at all (e.g. PyPy). So, for example, standard library files could be compiled and cached by each Python version as needed. Each would write a file of the form 'name.interp-version.pyc' into __pycache__. So, for example, on my Fedora system, foo.py will be compiled when it is first used and __pycache__/foo.cpython-37.pyc will be created.

That's great from an efficiency standpoint, but may not be optimal for other reasons. Carl Meyer filed a feature request asking for an environment variable to tell Python where to find (and put) these cache files. He was running into problems with permissions in his system and was disabling cache files as a result. So, he added a PYTHONPYCACHEPREFIX environment variable (also accessible via the -X pycache_prefix=PATH command-line flag) to point the interpreter elsewhere for storing those files.
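
As a rough illustration (the /tmp/pycache path here is arbitrary), the prefix can be supplied through the environment variable or the command-line flag, and 3.8 also exposes the setting at runtime as sys.pycache_prefix:

    # Invocations shown as comments; /tmp/pycache is just an example path.
    #   PYTHONPYCACHEPREFIX=/tmp/pycache python3.8 foo.py
    #   python3.8 -X pycache_prefix=/tmp/pycache foo.py
    import sys
    print(sys.pycache_prefix)   # None unless one of the above was used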

And more

Python 3.8 will add a faster calling convention for C extensions based on the existing 'fastcall' convention that is used internally by CPython. It is exposed in experimental fashion (i.e. names prefixed with underscores) for Python 3.8, but is expected to be finalized and fully released in 3.9. The configuration handling in the interpreter has also been cleaned up so that the language can be more easily embedded into other programs without having environment variables and other configuration mechanisms interfere with the installed system Python.

There are new features in various standard library modules as well. For example, the ast module for processing Python abstract syntax trees has new features, as do statistics and typing. And on and on. The draft 'What's New In Python 3.8' document has lots more information on these changes and many others. It is an excellent reference for what's coming in a few months.

The status of PEP 594 ('Removing dead batteries from the standard library') is not entirely clear, at least to me. The idea of removing old Python standard library modules has been in the works for a while, the PEP was proposed in May, and it was extensively discussed after that. Removing unloved standard library modules is not particularly controversial—at least in the abstract—until your favorite module is targeted, anyway.

The steering council has not made a pronouncement on the PEP, nor has it delegated to a so-called BDFL-delegate. But the PEP is clear that even if it were accepted, the changes for 3.8 would be minimal. Some of the modules may start raising the PendingDeprecationWarning exception (many already do since they have been deemed deprecated for some time), but the main change will be in the documentation. All of the 30 or so modules will be documented as being on their way out, but the actual removal will not happen until the 3.10 release—three or so years from now.

The future Python release cadence is still under discussion; currently Python 3.9 is scheduled for a June 2020 release, much sooner than usual. Python is on an 18-month cycle, but that is proposed to change to nine months (or perhaps a year). In any case, we can be sure that Python 3.8 will be here with the features above (and plenty more) sometime before Halloween on October 31.





All Comments: [-] | anchor

singularity2001(2423) 4 days ago [-]

Do parsers of previous pythons emit warnings: 'this feature is not available in pythons 3.3 3.4 3.5 etc' ?

ben509(10000) 4 days ago [-]

No, just a SyntaxError.

Generally, library authors won't be able to use it if they want to support many versions; same as with f-strings.
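
A minimal way to see what that means in practice (the snippet below is just an illustration): the new syntax fails at compile time on older interpreters, so it cannot be guarded by an ordinary runtime check in the same file.

    import sys

    src = "if (n := 10) > 5:\n    pass\n"
    try:
        compile(src, "<walrus-test>", "exec")
        print("assignment expressions supported on", sys.version_info[:2])
    except SyntaxError:
        print("no assignment expressions before 3.8; running", sys.version_info[:2])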

philsnow(4166) 4 days ago [-]

I noticed some changes to pickle; do people still use pickle for Real Work?

Potential vulnerabilities aside, I got bitten by some migration issue back in the 2.2 to 2.4 transition where some built-in types changed how they did their __setstate__ and __getstate__ (iirc) and that caused objects pickled under 2.4 to not unpickle correctly under 2.2 or something like that. After that I never wanted to use pickle in production again.

brilee(3786) 4 days ago [-]

Pickle is only guaranteed to work within python versions and shouldn't be used as a long-term data storage strategy. It's really intended for quick-n-dirty serialization, or for multiprocessing communication, where the objects are ephemeral.

tasubotadas(4149) 4 days ago [-]

I'll just put a reminder here that it's the year 2019 and AMD and Intel have 10-core CPUs while Python is still stuck with the GIL ¯\_(ツ)_/¯

ben509(10000) 4 days ago [-]

It's the current year!

This is slated for 3.9: https://www.python.org/dev/peps/pep-0554/

dec0dedab0de(4121) 4 days ago [-]

I don't like the positional-only arguments.

Really, I don't like anything that tries to force a future developer into using your code the way you expect them to.

duckerude(10000) 4 days ago [-]

I think the main value is that function documentation becomes slightly less absurd.

If you run `help(pow)` as early as Python 3.5 it lists the signature as `pow(x, y, z=None, /)`. The first time I saw that `/` I was pretty confused, and it didn't help that trying to define a function that way gave a syntax error. It was this weird thing that only C functions could have. It's still not obvious what it does, but at least the signature parses, which is a small win.

Another thing it's good for is certain nasty patterns with keyword arguments.

Take `dict.update`. You can give it a mapping as its first argument, or you can give it keyword arguments to update string keys, or you can do both.

If you wanted to reimplement it, you might naively write:

  def update(self, mapping=None, **kwargs):
      ...
But this is wrong. If you run `d.update(mapping=3)` you won't update the 'mapping' key, you'll try to use `3` as the mapping.

If you want to write it in pure Python < 3.8, you have to do something like this:

  def update(*args, **kwargs):
      if len(args) > 2:
          raise TypeError
      self = args[0]
      mapping = None
      if len(args) == 2:
          mapping = args[1]
      ...
That's awful.

Arguably you shouldn't be using keyword arguments like this in the first place. But they're already used like this in the core language, so it's too late for that. Might as well let people write this:

  def update(self, mapping=None, /, **kwargs):
      ...
nneonneo(10000) 4 days ago [-]

One of the use-cases for positional-only arguments strikes me as being very sensible:

    def my_format(fmt, *args, **kwargs):
        ...
        fmt.format(*args, **kwargs)
suffers from a bug if you want to pass fmt as a keyword argument (e.g. `my_format('{fmt}', fmt='int')`). With positional-only arguments that goes away.

You could always force developers into using your code the way you expect by parsing args/kwargs yourself, so it's not like this really changes anything about the 'restrictiveness' of the language.
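
For what it's worth, a sketch of how the 3.8 spelling of that helper might look (my_format is hypothetical, as above); with fmt marked positional-only, a keyword argument named fmt simply lands in kwargs:

    def my_format(fmt, /, *args, **kwargs):
        # fmt can no longer be shadowed by a keyword argument,
        # so kwargs is free to contain a key called 'fmt'
        return fmt.format(*args, **kwargs)

    print(my_format('{fmt}', fmt='int'))   # -> int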

Animats(2017) 4 days ago [-]

The title made me think 'Be afraid. Be very afraid'. But it's all little stuff.

Unchecked type annotations remain the worst addition since 3.0. Actual typing might be useful; it allows optimizations and checking. But something that's mostly a comment isn't that helpful.

snicker7(10000) 4 days ago [-]

Both static checking and compilation can be implemented using third party libraries. I think projects like mypyc could be a real game-changer.

joshuamorton(10000) 4 days ago [-]

If you don't like unchecked annotations, then check them. It's not hard to do.
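
For example (the function here is made up for illustration), annotations do nothing at runtime, but a third-party checker such as mypy run over the file will flag the bad call:

    # example.py -- run `mypy example.py` to check the annotations
    def label_length(label: str) -> int:
        return len(label)

    print(label_length("walrus"))     # fine everywhere
    print(label_length([1, 2, 3]))    # runs (len of a list), but the checker flags it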

stakhanov(10000) 4 days ago [-]

Speaking as someone who has written Python code almost every day for the last 16 years of my life: I'm not happy about this.

Some of this stuff seems to me like it's opening the doors for some antipatterns that I'm consistently frustrated about when working with Perl code (that I didn't write myself). I had always been quite happy about the fact that Python didn't have language features to blur the lines between what's code vs what's string literals and what's a statement vs what's an expression.

keymone(4053) 3 days ago [-]

> what's a statement vs what's an expression

never understood the need for this. why do you even need statements?

if there's one thing that annoys me in python it's that it has statements. worst programming language feature ever.

ashton314(4137) 4 days ago [-]

Many languages don't distinguish between statements and expressions—in some languages, this is because everything is an expression! I'm most familiar with these kinds of languages.

I'm not familiar much with Python, beyond a little I wrote in my linear algebra class. How much does the statement/literal distinction matter to readability? What does that do for the language?

sametmax(3648) 4 days ago [-]

F-strings appeared two versions ago. All in all, the feedback has been overwhelmingly positive, including on maintenance and readability.

duckerude(10000) 4 days ago [-]

Do you have an example of bad code you'd expect people to use assignment expressions and f-strings for?

I don't think I've come across any f-string abuse in the wild so far, and my tentative impression is that there's a few patterns that are improved by assignment expressions and little temptation to use them for evil.

It helps that the iteration protocol is deeply ingrained in the language. A lot of code that could use assignment expressions in principle already has a for loop as the equally compact established idiom.

hk__2(2481) 4 days ago [-]

As someone who has written Python code almost every day for both professional and personal projects for a few years: I'm really happy about these assignment expressions. I wish Python would have more expressions and fewer statements, like functional languages.

vesche(3809) 4 days ago [-]

Was really hoping to see multi-core in 3.8, looks like we'll be waiting until 3.9

https://www.python.org/dev/peps/pep-0554/

https://github.com/ericsnowcurrently/multi-core-python/wiki

Stubb(4157) 4 days ago [-]

A map() function that isn't just an iterated fork() would be glorious. Let me launch a thread team like in OpenMP to tackle map() calls containing SciPy routines and I'll be unreasonably happy.

jasonrhaas(4085) 4 days ago [-]

The walrus operator does not feel like Python to me. I'm not a big fan of these types of one liner statements where one line is doing more than one thing.

It violates the philosophies of Python and UNIX where one function, or one line, should preferably only do one thing, and do it well.

I get the idea behind the :=, but I do think it's an unnecessary addition to Python.

coldtea(1198) 4 days ago [-]

>It violates the philosophies of Python and UNIX where one function, or one line, should preferably only do one thing, and do it well.

Python never had that philosophy... You might have confused it with 'there should be one, and preferably only one, obvious way to do anything'.

fatbird(10000) 4 days ago [-]

The unix philosophy of simplicity was on a per tool basis, not function or line of code. The walrus operator is Python version of what we can do now in C or in JS, doing plain assignment in an expression while evaluating it for truthiness. And more often than not, the point of that single-purposeness in Unix is so you can chain a bunch of piped commands that result in a perl-like spaghetti command that's three terminal widths long.

andrewf(3793) 4 days ago [-]

This has never felt like a Pythonic principle to me. Python has always seemed like a high-level language that enables dense code. Look at the docs for list comprehensions! https://docs.python.org/2/tutorial/datastructures.html#list-...

A lot of folks see Go as a Python successor which surprises me because I don't think the languages favor the same things at all. Maybe my perspective is weird.

jstimpfle(3582) 4 days ago [-]

I support your view, but want to make you aware that early unix did favour a little cleverness to reduce line counts (and even character counts). C's normal assignment operator does what python's walrus does, for example. Or look at pre/post increment/decrement operators. Or look at languages like sed, or bc, they try to be terse over anything else.

lordnacho(4097) 4 days ago [-]

Gotta ask how many of these changes are actually reflective of changing environments.

I could see with c++ that between 2003 and 2014 a fair few underlying machine things were changing and that needed addressing in the language.

But Python is not quite as close to the machine, and I don't see how something like the walrus is helping much. If anything it seems like you'd scratch your head when you came across it. For me, at least, one of the main attractions of Python is that you're hardly ever surprised by anything: the things that are there do what you'd guess, even if you hadn't heard of them. Function decorators, for instance: you might never have seen one, but when you did you knew what it was for.

Same with the debug strings. That seems to be a special case of printing a string, so why not leave it at that? I'm guessing a lot of people never read a comprehensive Python guide; what are they going to do when they see that?

thaumasiotes(3661) 4 days ago [-]

> I'm guessing a lot of people don't ever read a comprehensive python guide, what are they going to do when they see that?

My guess would be 'run it and see what it does'.

Areading314(4026) 4 days ago [-]

Very much seems like perlification, and we all know what happened to Perl.

Although that being said I always really liked Perl

lizmat(3244) 4 days ago [-]

Perhaps it's more Perl 6-ification?

xaedes(3846) 4 days ago [-]

Wow. Never would I have guessed that the amazing concept of assignment expressions would be so confusing for what seems like a lot of Python programmers. It really was time to introduce it to them.

jpetrucc(4146) 4 days ago [-]

It's not really that it's confusing, more so that it isn't necessarily 'pythonic'

wil421(4162) 4 days ago [-]

>Debug support for f-strings.

F-strings are pretty awesome. I'm coming from JavaScript and partly a Java background. JavaScript's string concatenation can become too complex, and I have difficulty with large strings.

>Python 3.8 programmers will be able to do: print(f'{foo=} {bar=}')

Pretty cool way to help with debugging. There are so many times, including today, I need to print or log some debug string.

"Debug var1 " + var1 + " debug var2" + var2...and so on. Forgot a space again.

joaolvcm(10000) 4 days ago [-]

By the way, this has nothing to do with f-strings, but for debugging JavaScript you can do something like

console.log({var1,var2,var3});

And the logged object will be created with each variable's content and the variable name as its key, so it will get logged neatly like

{var1: 'this is var1', var2: 2, var3: '3'}

GrumpyNl(4070) 4 days ago [-]

Why elif and not just elseif?

akubera(3812) 4 days ago [-]

Perhaps to align with the final 'else' clause, or it was familiar to c programmers due to the c-preprocessor directive https://gcc.gnu.org/onlinedocs/cpp/Elif.html, or they were mindful that every character counts when you want to push 80 character max-line-length style?

To be clear, that's not a new feature in 3.8.

dreary_dugong(10000) 4 days ago [-]

It fits in a single indent space. At least that's what my professor told us, and it seems to be confirmed by a quick online search.

mehrdadn(3331) 4 days ago [-]

Or just else if... but honestly elif is easiest to type and it's not hard to understand.

kbd(3817) 4 days ago [-]

Despite controversy, walrus operator is going to be like f-strings. Before: 'Why do we need another way to...' After: 'Hey this is great'.

People are wtf-ing a bit about the positional-only parameters, but I view that as just a consistency change. It's a way to write in pure Python something that was previously only possible to say using the C api.

ehsankia(10000) 4 days ago [-]

Was the controversy really about the need for the feature? I thought most people agreed it was a great feature to have, and most of the arguments were about `:=` vs re-using `as` for the operator.

adito(2744) 4 days ago [-]

I wonder if the controversial Go's error check function 'try' proposal[0] will also be similar to this situation.

[0]: https://github.com/golang/go/issues/32437

linsomniac(4014) 4 days ago [-]

I've literally been wanting something like the walrus operator since I first started using Python in '97. Mostly for the 'm = re.match(x, y); if m: do_something()' sort of syntax.

Scarblac(10000) 4 days ago [-]

By itself I agree, every now and then you write a few lines that will be made a little shorter now that := exists.

But there's a long standing trend of adding more and more of these small features to what was quite a clean and small language. It's becoming more complicated, backwards compatibility suffers, the likelyhood your coworker uses some construct that you never use increases, there is more to know about Python.

Like f-strings, they are neat I guess. But we already had both % and .format(). Python is becoming messy.

I doubt this is worth that.

craigds(3814) 3 days ago [-]

Yep, I can't wait to use the walrus operator. I just tried it out (`docker run -it python:3.8.0b2-slim`) and I'm hooked already.

Also, it's times like these I'm really glad docker exists. Trying that out before docker would have been a way bigger drama

choppaface(4063) 4 days ago [-]

Was the walrus operator really worth 'The PEP 572 mess'? https://lwn.net/Articles/757713/

That post makes a few things very clear:

* The argument over the feature did not establish an explicit measure of efficacy for the feature. The discussion struggled to even find relevant non-Toy code examples.

* The communication over the feature was almost entirely over email, even when it got extremely contentious. There was later some face-to-face talk at the summit.

* Guido stepped down.

stefco_(10000) 4 days ago [-]

f-strings are the first truly-pretty way to do string formatting in python, and the best thing is that they avoid all of the shortcomings of other interpolation syntaxes I've worked with. It's one of those magical features that just lets you do exactly what you want without putting any thought at all into it.

Digression on the old way's shortcomings: Probably the most annoying thing about the old 'format' syntax was for writing error messages with parameters dynamically formatted in. I've written ugly string literals for verbose, helpful error messages with the old syntax, and it was truly awful. The long length of calls to 'format' is what screws up your indentation, which then screws up the literals (or forces you to spread them over 3x as many lines as you would otherwise). It was so bad that the format operator was more readable. If `str.dedent` was a thing it would be less annoying thanks to multi-line strings, but even that is just a kludge. A big part of the issue is whitespace/string concatenation, which, I know, can be fixed with an autoformatter [0]. Autoformatters are great for munging literals (and diff reduction/style enforcement), sure, but if you have to mung literals tens of times in a reasonably-written module, there's something very wrong with the feature that's forcing that behavior. So, again: f-strings have saved me a ton of tedium.

[0] https://github.com/python/black

heydenberk(2339) 4 days ago [-]

I'd used f-string-like syntaxes in other languages before they came to Python. It was immediately obvious to me what the benefit would be.

I've used assignment expressions in other languages too! Python's version doesn't suffer from the JavaScript problem whereby equality and assignment are just a typo apart in, eg., the condition of your while loop. Nonetheless, I find that it ranges from marginally beneficial to marginally confusing in practice.

brummm(10000) 3 days ago [-]

I think in certain situations the walrus operator will probably be useful. But it definitely makes code less legible, which makes me cautious. The only useful use case I have found so far is list comprehensions where some function evaluation could be reduced to only one execution with the walrus operator.
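
A small sketch of that comprehension case (expensive() is a stand-in for whatever function would otherwise be called twice per element):

    def expensive(x):
        return x * x   # stand-in for a costly computation

    values = range(10)

    # Without the walrus operator: expensive() runs twice per element.
    old = [expensive(v) for v in values if expensive(v) > 10]

    # With it: expensive() runs once per element.
    new = [y for v in values if (y := expensive(v)) > 10]

    assert old == new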

Grue3(10000) 3 days ago [-]

>Python 3.8 programmers will be able to do: print(f'{foo=} {bar=}')

Ugh, how did this get approved? It's such a bizarre use case, and debugging by print should be discouraged anyway. Why not something like debug_print(foo, bar) instead (because foo and bar are real variables, not strings)?

jimktrains2(3398) 3 days ago [-]

I don't understand why you think print or log debugging is inherently bad.

Also, it's part of the format string and not a special print function so that it can be used for logs and other output as well, not just the console.

mottosso(10000) 4 days ago [-]

Very much looking forward to assignment expressions! It's something I've wanted to do every so often, only to realise that you can't. A worthy addition to the already intuitive Python language.

tomd3v(4020) 4 days ago [-]

Seriously. I recently came from PHP, and this is one feature I've been missing quite often and a lot.

ihuman(2759) 4 days ago [-]

How come they are using a new := operator instead of using equals?

andolanra(10000) 4 days ago [-]

They address this briefly in the PEP[0], and it's largely because they want to be very clear about when it's happening. The distance between

    if x = y:
and

    if x == y:
is very small, and easy to ignore, so insisting on

    if x := y:
makes it very clear that what's happening won't be mistaken for comparison at a quick glance.

[0]: https://www.python.org/dev/peps/pep-0572/#why-not-just-turn-...

jimktrains2(3398) 4 days ago [-]

That's literally the point: it's not the assignment operator.

The difference between = and == in an if causes many bugs in other languages. Using := for assignment in an expression, instead of =, means that a simple typo can't turn into a bug.

dragonwriter(4166) 4 days ago [-]

Which equals?

= (existing) is statement assignment

== (existing) is expression equality

:= (new) is expression assignment

president(4125) 4 days ago [-]

Anyone else think the walrus operator is just plain ugly? There is a certain aesthetic quality that I've always appreciated about the Python language and the walrus operator looks like something straight out of Perl or Shell.

brown9-2(1549) 4 days ago [-]

Looks pretty normal if you do any amount of Go.

ptx(3789) 4 days ago [-]

It's also used in Algol, Pascal, Modula and other perfectly respectable languages.

outerspace(4155) 4 days ago [-]

Does it make sense to use := everywhere (can it be used everywhere?) instead of just in conditionals? Just like Pascal.

ben509(10000) 4 days ago [-]

It's not valid in an assignment statement, so you can't use it everywhere.

FWIW, I agree with the sentiment; I use := for assignment in my language precisely because that's the correct symbol. But even there, my grammar accepts = as assignment as well because I type it from habit.

DonHopkins(3368) 4 days ago [-]

About as much sense as it makes to use ; after every Python statement. Just like Pascal.

(Yeah I know, ; is a statement separator, not a statement terminator in Pascal.)

As long as you're being just like Pascal, did you know Python supported Pascal-like 'BEGIN' and 'END' statements? You just have to prefix them with the '#' character (and indent the code inside them correctly, of course). ;)

    if x < 10: # BEGIN
        print('foo')
    # END
Waterluvian(4066) 4 days ago [-]

The lack of the 'nursery' concept for asyncio really sucks. Originally I heard it was coming in 3.8. Right now asyncio has this horrible flaw where it's super easy to have errors within tasks pass silently. It's a pretty large foot gun.
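
A minimal sketch of the failure mode and one common workaround (the names here are made up for illustration): without the done callback, the RuntimeError would typically only surface as a 'Task exception was never retrieved' log message when the task is garbage collected.

    import asyncio

    async def background_job():
        raise RuntimeError("boom")        # nothing ever awaits this task

    def report_crash(task):
        if not task.cancelled() and task.exception() is not None:
            print("background task failed:", task.exception())

    async def main():
        task = asyncio.ensure_future(background_job())
        task.add_done_callback(report_crash)   # surface the error explicitly
        await asyncio.sleep(0.1)               # the program otherwise carries on

    asyncio.run(main())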

sametmax(3648) 4 days ago [-]

You can code your own wrapper for this.

Like https://github.com/Tygs/ayo

It's not as good as having it in the stdlib, because people can still call ensure_future and not await it, but it's a huge improvement and completely compatible with any asyncio code.

ProjectBarks(4166) 4 days ago [-]

The changes to f-strings just seems like a step in the wrong direction. Don't make the string content implicit!

strictfp(3884) 4 days ago [-]

Also, why abandon printf-style? All languages tend to converge to printf over time, it's simply the most tried and tested model out there!

londons_explore(4155) 4 days ago [-]

I long for a language which has a basic featureset, and then 'freezes', and no longer adds any more language features.

You may continue working on the standard library, optimizing, etc. Just no new language features.

In my opinion, someone should be able to learn all of a language in a few days, including every corner case and oddity, and then understand any code.

If new language features get added over time, eventually you get to the case where there are obscure features everyone has to look up every time they use them.

alexhutcheson(4052) 4 days ago [-]

Lua is pretty close, and pretty close to Python in terms of style and strengths.

Edit: I actually forgot about the split between LuaJIT (which hasn't changed since Lua 5.1), and the PUC Lua implementation, which has continued to evolve. I was thinking of the LuaJIT version.

dingo_bat(3954) 3 days ago [-]

You're talking about C.

hu3(3936) 4 days ago [-]

From what I've seen, Go is the closest we have for mainstream language resistant to change.

dwaltrip(10000) 4 days ago [-]

All human languages change over time. It is the nature of language.

jnwatson(10000) 4 days ago [-]

The only frozen languages are the ones nobody uses except for play or academic purposes.

As soon as people start using a language, they see ways of improving it.

It isn't unlike spoken languages. Go learn Esperanto if you want to learn something that doesn't change.

owaislone(4144) 4 days ago [-]

That's what Go has been so far but it might see some changes soon after being 'frozen' for ~10 years.

colechristensen(10000) 4 days ago [-]

This is why a lot of scientific code still uses fortran, code written several decades ago still compiles and has the same output.

How long has the code which was transitioned to python lasted?

nerdponx(3577) 4 days ago [-]

> someone should be able to learn all of a language in a few days, including every corner case and oddity, and then understand any code.

Why should this be true for every language? Certainly we should have languages like this. But not every language needs to be like this.

Areading314(4026) 4 days ago [-]

Absolutely agree. How many times have you heard 'that was true until Python 3.4 but now is no longer an issue' or 'that expression is illegal for all Pythons below 3.3', and so on. Not to mention the (ongoing) Python 2->3 debacle.

markrages(10000) 4 days ago [-]

Python 2.7 is not far from that language.

chewxy(1344) 4 days ago [-]

Go? I moved a lot of my datascience and machine learning process to Go. Only thing really left in Python land is EDA

baq(3445) 4 days ago [-]

remember the gang of four book? such books happen when the language is incapable of expressing ideas concisely. complexity gets pushed to libraries which you have to understand anyway. i'd rather have syntax for the visitor pattern or whatever else is there.

poiuyt098(10000) 4 days ago [-]

Brainfuck has been extremely stable. You can learn every operator in minutes.

mr_crankypants(10000) 4 days ago [-]

Such languages exist. Ones that come to mind offhand are: Standard ML, FORTH, Pascal, Prolog.

All of which are ones that I once thought were quite enjoyable to work in, and still think are well worth taking some time to learn. But I submit that the fact that none of them have really stood the test of time is, at the very least, highly suggestive. Perhaps we don't yet know all there is to know about what kinds of programming language constructs provide the best tooling for writing clean, readable, maintainable code, and languages that want to try and remain relevant will have to change with the times. Even Fortran gets an update every 5-10 years.

I also submit that, when you've got a multi-statement idiom that happens just all the time, there is value in pushing it into the language. That can actually be a bulwark against TMTOWTDI, because you've taken an idiom that everyone wants to put their own special spin on, or that they can occasionally goof up on, and turned it into something that the compiler can help you with. Java's try-with-resources is a great example of this, as are C#'s auto-properties. Both took a big swath of common bugs and virtually eliminated them from the codebases of people who were willing to adopt a new feature.

fatbird(10000) 4 days ago [-]

All you're doing then is moving the evolution of the language into the common libraries, community conventions, and tooling. Think of JavaScript before ES2015: it had stayed almost unchanged for more than a decade, and as a result, knowing JavaScript meant knowing JS and jQuery, prototype, underscore, various promise libraries, AMD/commonjs/require based module systems, followed by an explosion of 'transpiled to vanilla JS' languages like coffeescript. The same happened with C decades earlier: while the core language in K&R C was small and understandable, you really weren't coding C unless you had a pile of libraries and approaches and compiler-specific macros and such.

Python, judged against JS, is almost sedate in its evolution.

It would be nice if a combination of language, libraries, and coding orthodoxy remained stable for more than a few years, but that's just not the technology landscape in which we work. Thanks, Internet.

diminoten(10000) 4 days ago [-]

Why can't you do this with Python? No one said you had to use any of these new features...

Though to me that's like saying, 'I want this river to stop flowing' or 'I'd prefer if the seasons didn't change.'

orangecat(10000) 4 days ago [-]

> In my opinion, someone should be able to learn all of a language in a few days, including every corner case and oddity, and then understand any code.

'Understanding' what each individual line means is very different from understanding the code. There are always higher level concepts you need to recognize, and it's often better for languages to support those concepts directly rather than requiring developers to constantly reimplement them. Consider a Java class where you have to check dozens of lines of accessors and equals and hashCode to verify that it's an immutable value object, compared to 'data class' in Kotlin or @dataclass in Python.

linsomniac(4014) 4 days ago [-]

I'm in operations and I've spent much of my career writing code for the Python that worked on the oldest LTS release in my fleet, and for a very long time that was Python 1.5...

I was really happy, in some ways, when Python 2 was announced as getting no new releases and Python 3 wasn't ready, because it allowed a kind of unification of everyone on Python 2.7.

Now we're back on the treadmill of chasing the latest and greatest. I was kind of annoyed when I found I couldn't run Black to format my code because it required a slightly newer Python than I had. But... f strings and walrus are kind of worth it.

DonHopkins(3368) 4 days ago [-]

The Turing Machine programming language specification has been frozen for a long time, and it's easy to learn in a few days.

So has John von Neumann's 29 state cellular automata!

https://en.wikipedia.org/wiki/Von_Neumann_cellular_automaton

https://en.wikipedia.org/wiki/Von_Neumann_universal_construc...

(Actually there was a non-standard extension developed in 1995 to make signal crossing and other things easier, but other than that, it's a pretty stable programming language.)

>Renato Nobili and Umberto Pesavento published the first fully implemented self-reproducing cellular automaton in 1995, nearly fifty years after von Neumann's work. They used a 32-state cellular automaton instead of von Neumann's original 29-state specification, extending it to allow for easier signal-crossing, explicit memory function and a more compact design. They also published an implementation of a general constructor within the original 29-state CA but not one capable of complete replication - the configuration cannot duplicate its tape, nor can it trigger its offspring; the configuration can only construct.

orwin(10000) 4 days ago [-]

Try C maybe? It is still updated, but only really minor tweaks for optimisation.

Also, the Common Lisp spec hasn't changed since the 90s and it's still useful as a 'quick and dirty' language, with little basic knowledge required. But the 'basic feature set' can do everything, so the 'understand any code' part is not really respected. Maybe Clojure is easier to understand (and also has a more limited base feature set, with no CLOS).

locoman88(10000) 4 days ago [-]

Fully agree. If we continue with this madness, in five years Python will be unrecognizable compared to the Python I learnt 10 years ago.

Seems the Golang people have much cooler heads. Time to push for Golang at my workplace

mbo(10000) 4 days ago [-]

Elixir?

From the v1.9 release just a few weeks ago: https://elixir-lang.org/blog/2019/06/24/elixir-v1-9-0-releas...

> As mentioned earlier, releases was the last planned feature for Elixir. We don't have any major user-facing feature in the works nor planned. I know for certain some will consider this fact the most exciting part of this announcement!

> Of course, it does not mean that v1.9 is the last Elixir version. We will continue shipping new releases every 6 months with enhancements, bug fixes and improvements.

plopz(10000) 4 days ago [-]

Isn't that what C is?

vindarel(10000) 4 days ago [-]

Common Lisp seems to tick the boxes. The syntax is stable and it doesn't change. New syntax can be added through extensions (pattern matching, string interpolation, etc). The language is stable, meaning code written in pure CL still runs 20 years later. Then there are de-facto standard libraries (bordeaux-threads, lparallel,...) and other libraries. Implementations continue to be optimized (SBCL, CCL) and to develop core features (package-local-nicknames) and new implementations arise (Clasp, CL on LLVM, notably for bioinformatics). It's been rough at the beginning but a joy so far.

https://github.com/CodyReichert/awesome-cl

coleifer(3016) 3 days ago [-]

Lua is a great small language.

sandGorgon(890) 4 days ago [-]

Does anyone know the status of pep-582 : https://www.python.org/dev/peps/pep-0582/

It's still marked as a 3.8 target

mixmastamyk(3461) 4 days ago [-]

Too late I think.

raymondh(3210) 4 days ago [-]

To me, the headline feature for Python 3.8 is shared memory for multiprocessing (contributed by Davin Potts).

Some kinds of data can be passed back and forth between processes with near zero overhead (no pickling, sockets, or unpickling).

This significantly improves Python's story for taking advantage of multiple cores.

aidos(3698) 4 days ago [-]

Oh no way. That has huge potential. What are the limitations?

quietbritishjim(10000) 3 days ago [-]

It looks like this will make efficient data transfer much more convenient, but it's worth noting this had always been possible with some manual effort. Python has had `mmap` support at least as long ago as Python 2.7, which works fine for zero-overhead transfer of data.

With mmap you have to specify a file name (actually a file number), but so long as you set the length to zero before you close it there's no reason any data would get written to disk. On Unix you can even unlink the file before you start writing it if you wish, or create it with the tempfile module and never give it a file name at all (although this makes it harder to open in other processes as they can't then just mmap by file name). The mmap object satisfies the buffer protocol so you can create numpy arrays that directly reference the bytes in it. The memory-mapped data can be shared between processes regardless of whether they use the multiprocessing module or even whether they're all written in Python.

https://docs.python.org/3.7/library/mmap.html
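
A rough sketch of the pattern being described, assuming NumPy is installed (file handling is simplified and not production-ready): the parent and child both map the same temporary file, so the child's writes are visible to the parent without any pickling of the array data.

    import mmap, os, tempfile
    from multiprocessing import Process
    import numpy as np                       # assumed to be available

    N = 1_000_000
    NBYTES = N * 8                           # float64

    def child(path):
        with open(path, "r+b") as f:
            mm = mmap.mmap(f.fileno(), NBYTES)
            arr = np.frombuffer(mm, dtype=np.float64)
            arr[:] = 42.0                    # writes go straight to the shared pages

    if __name__ == "__main__":
        fd, path = tempfile.mkstemp()
        os.ftruncate(fd, NBYTES)             # size the backing file
        with os.fdopen(fd, "r+b") as f:
            mm = mmap.mmap(f.fileno(), NBYTES)
            arr = np.frombuffer(mm, dtype=np.float64)
            p = Process(target=child, args=(path,))
            p.start()
            p.join()
            print(arr[0], arr[-1])           # 42.0 42.0 -- only the path was pickled
        os.unlink(path)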

acqq(1232) 4 days ago [-]

For us who didn't follow:

'multiprocessing.shared_memory — Provides shared memory for direct access across processes'

https://docs.python.org/3.9/library/multiprocessing.shared_m...

And it has the example which 'demonstrates a practical use of the SharedMemory class with NumPy arrays, accessing the same numpy.ndarray from two distinct Python shells.'

Also, SharedMemory

'Creates a new shared memory block or attaches to an existing shared memory block. Each shared memory block is assigned a unique name. In this way, one process can create a shared memory block with a particular name and a different process can attach to that same shared memory block using that same name.

As a resource for sharing data across processes, shared memory blocks may outlive the original process that created them. When one process no longer needs access to a shared memory block that might still be needed by other processes, the close() method should be called. When a shared memory block is no longer needed by any process, the unlink() method should be called to ensure proper cleanup.'

Really nice.
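
A minimal sketch of that documented API, run in a single process here for brevity (normally shm.name would be handed to a second process so it can attach):

    from multiprocessing import shared_memory

    # One process creates a named block and writes into it...
    shm = shared_memory.SharedMemory(create=True, size=16)
    shm.buf[:5] = b"hello"

    # ...and any other process can attach to the same block by name.
    other = shared_memory.SharedMemory(name=shm.name)
    print(bytes(other.buf[:5]))   # b'hello' -- no pickling or copying involved

    other.close()
    shm.close()
    shm.unlink()                  # release the block once nobody needs it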

agent008t(10000) 3 days ago [-]

Isn't that already the case?

I thought that when you use multiprocessing in Python, a new process gets forked, and while each new process has separate virtual memory, that virtual memory points to the same physical location until the process tries to write to it (i.e. copy-on-write)?

amelius(869) 4 days ago [-]

> no pickling, sockets, or unpickling

But still copying?

If not, then how does it interoperate with garbage collection?

aportnoy(2383) 4 days ago [-]

I've been waiting for this for a very long time. Thank you for mentioning this.

Would this work with e.g. large NumPy arrays?

(and this is Raymond Hettinger himself, wow)

gigatexal(4061) 4 days ago [-]

Agreed this is huge.

gclaugus(10000) 4 days ago [-]

Walrus operator looks like a great addition, not too much syntax sugar for a common pattern. Why were folks arguing about it?

joshuamorton(10000) 4 days ago [-]

It's not that common. There's one place where it's useful imo (comprehensions, to avoid duplicate calls), but even that can be handled case by case, and it certainly isn't a common thing.

chrisseaton(3025) 4 days ago [-]

I don't know how you read it - 'if x is assigned the value y'? Most other things in Python can just be read out loud.

hdfbdtbcdg(10000) 4 days ago [-]

Because it goes against 20 years of the principles behind the language.

adjkant(4147) 4 days ago [-]

I write a good deal of python and I can't think of a line of code that I would use it for besides the while loop on a non-iterable data source, which is such a once in a blue moon case. As mentioned by others, the operator invites more ways to do the same thing, which is not what Python has been viewed as being about.

sametmax(3648) 4 days ago [-]

It's the most controversial feature ever introduced because it goes against a lot of python culture and philosophy.

All in all the debate has been heated and long, but it has been decided that the python community will use it intelligently and rarely, but that when it matters, it can help a lot.

I'm against this feature, while I was pro f-string. However, I'm not too worried about misuse and cultural shift, because I've seen 15 years of this show going on and I'm confident it's going to be tagged as 'risky, use it knowing the cost' by everybody by the time 3.8 goes mainstream.

coldtea(1198) 4 days ago [-]

Change aversion.

BuckRogers(4166) 4 days ago [-]

The problem with modern Python is that it's trying to recreate C# or Java. Which leaves it with nothing, because it'll only end up an inferior version of the languages/platforms it's attempting to duplicate.

When I was into Python, I liked it because it was a tighter, more to the basics language. Not having 4 ways to format strings and so forth. I don't think Python can defeat Java by becoming Java. It'll lose there due to multiple disadvantages. The way Python 'wins' (as much as it could at least), is focusing on 'less is more'. They abandoned that a while ago.

My vision of a language like Python would be only 1-way to do things, and in the event someone wants to add a 2nd way, a vote is taken. The syntax is changed, and the old bytecode interpreter handles old scripts, and scripts written with the latest interpreter's bytecode only allows the new syntax. For me that's the joy of Python.

I think a lot of people wanted Python's original vision, 'one way to do things'. If I want feature soup, I'll use what I program in daily. Which I do want feature soup by the way, I just have no need to replace it with another 'feature soup' language like Python turned into because it's inferior on technical and for me, stylistic levels.

ptx(3789) 4 days ago [-]

The Zen of Python says that there should preferably be one obvious way to do it, not always strictly only one way.

Also, this motto should be interpreted in the appropriate historical context – as taking a position in relation to that of Perl, which was dominant when Python was gaining popularity and had the motto 'there's more than one way to do it'.

orangecat(10000) 4 days ago [-]

> My vision of a language like Python would be only 1-way to do things, and in the event someone wants to add a 2nd way, a vote is taken.

By that standard, the walrus operator is not only acceptable but essential. Right now there are at least 3 ways to process data from a non-iterator:

  # 1: loop condition obscures what you're actually testing
  while True:
      data = read_data()
      if not data:
          break
      process(data)
  # 2: 7 lines and a stray variable
  done = False
  while not done:
      data = read_data()
      if data:
          process(data)
      else:
          done = True
  # 3: duplicated read_data call
  data = read_data()
  while data:
      process(data)
      data = read_data()
There's too many options here, and it's annoying for readers to have to parse the code and determine its actual purpose. Clearly we need to replace all of those with:

  while (data := read_data()):
      process(data)
Yes, I'm being a bit snarky, but the point is that there is never just one way to do something. That's why the Zen of Python specifically says one 'obvious' way, and the walrus operator creates an obvious way in several scenarios where none exist today.

ohazi(3283) 4 days ago [-]

Also type hints for dictionaries with fixed keys:

https://www.python.org/dev/peps/pep-0589/

I know it's almost always better to use objects for this, but tons of code still uses dictionaries as pseudo-objects. This should make bug hunting a lot easier.
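
A small sketch of what PEP 589 looks like in practice (checked by external tools such as mypy, not enforced at runtime); in 3.8 the class-based form lives in the typing module:

    from typing import TypedDict

    class Movie(TypedDict):
        title: str
        year: int

    m: Movie = {"title": "Blade Runner", "year": 1982}

    # Still an ordinary dict at runtime, but a type checker will flag this:
    bad: Movie = {"title": "Blade Runner", "year": "1982"}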

ben509(10000) 4 days ago [-]

Oh, nice! I'll have to add that to json-syntax.[1]

[1]: https://pypi.org/project/json-syntax/

lxmcneill(10000) 4 days ago [-]

Huh, was totally unaware of this. For me this has good implications for ingesting CSVs/.xlsx to dicts. Clean-ups / type hinting is required at times for dirtier documents.

sleavey(4152) 4 days ago [-]

Without wanting to ignite a debate about the walrus operator (and having not read any of the arguments), I can guess why there was one. It's not clear to me what it does just from reading it, and being clear on first reading was always one of Python's beginner-friendly qualities.

coldtea(1198) 4 days ago [-]

>It's not clear to me what it does just from reading it

How isn't it entirely obvious? := is the assignment operator in tons of languages, and there's no reason not to have assignment be an expression (as is also the case in many languages).

traderjane(3077) 4 days ago [-]
ehsankia(10000) 4 days ago [-]

I don't know why the downvotes, but I personally much prefer this to the editorialized and incomplete list in the current article.

Looking at the module changes, I think my top pick is the changes to the `math` module:

> Added new function math.dist() for computing Euclidean distance between two points.

> Added new function, math.prod(), as analogous function to sum() that returns the product of a 'start' value (default: 1) times an iterable of numbers.

> Added new function math.isqrt() for computing integer square roots.

All 3 are super useful 'batteries' to have included.
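A quick illustration of the three additions (the commented values are what the 3.8 functions return):

    import math  # Python 3.8+

    math.dist((0, 0), (3, 4))   # 5.0 -- Euclidean distance between two points
    math.prod([2, 3, 4])        # 24  -- multiplicative counterpart of sum()
    math.isqrt(10)              # 3   -- integer square root, rounded down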

musicale(10000) 4 days ago [-]

All I care about is allowing a print statement in addition to the print function. There's no technical reason why both can't coexist in a perfectly usable manner.

mixmastamyk(3461) 4 days ago [-]

Try an editor snippet like I did years ago. It's even shorter to type:

    pr<TAB>  -->  print(' ')
                  #      ^ cursor
Alex3917(433) 4 days ago [-]

Have there been any performance benchmarks done on Python 3.8 yet? I'd be interested in seeing how it compares to 3.6 and 3.7, but haven't seen anything published.

gonational(4098) 3 days ago [-]

Absolutely this.

I think that the most important thing Python can do in each release is to improve performance, incrementally.

tasty_freeze(10000) 4 days ago [-]

I'm all in favor of the walrus operator for the for loop, but the first example given to justify it is code I'd never write. The first if does a return, so there is no need for the else: and indentation. I'm sure there are other code examples that would justify it, but this one is unconvincing.

duckerude(10000) 4 days ago [-]

The return statements make it a poor example. There's an example from the standard library in the PEP that has a similar shape:

  reductor = dispatch_table.get(cls)
  if reductor:
      rv = reductor(x)
  else:
      reductor = getattr(x, '__reduce_ex__', None)
      if reductor:
          rv = reductor(4)
      else:
          reductor = getattr(x, '__reduce__', None)
          if reductor:
              rv = reductor()
          else:
              raise Error(
                  'un(deep)copyable object of type %s' % cls)
Becomes:

  if reductor := dispatch_table.get(cls):
      rv = reductor(x)
  elif reductor := getattr(x, '__reduce_ex__', None):
      rv = reductor(4)
  elif reductor := getattr(x, '__reduce__', None):
      rv = reductor()
  else:
      raise Error('un(deep)copyable object of type %s' % cls)
wodenokoto(3969) 4 days ago [-]

Nice way in without walrus

    m = re.match(p1, line)
    if m:
        return m.group(1)
    m = re.match(p2, line)
    if m:
        return m.group(2)
    m = re.match(p3, line)
    ...
With walrus:

    if m := re.match(p1, line):
        return m.group(1)
    elif m := re.match(p2, line):
        return m.group(2)
    elif m := re.match(p3, line):
The example would have been better if it didn't have the return, but just a value assignment or a function call.
stefco_(10000) 4 days ago [-]

There's a lot of talk in this thread about Python going down-hill and becoming less obvious/simple. I rather like modern python, but I agree that some features (like async/await, whose implementation fractures functions and libraries into two colors [0]) seem like downgrades in 'Pythonicity'.

That said, I think some things have unquestionably gotten more 'Pythonic' with time, and the := operator is one of those. In contrast, this early Python feature (mentioned in an article [1] linked in the main one) strikes me as almost comically unfriendly to new programmers:

> Python vowed to solve [the problem of accidentally assigning instead of comparing variables] in a different way. The original Python had a single '=' for both assignment and equality testing, as Tim Peters recently reminded him, but it used a different syntactic distinction to ensure that the C problem could not occur.

If you're just learning to program and know nothing about the distinction between an expression and a statement, this is about as confusing as shell expansion (another context-dependent syntax). It's way too clever to be Pythonic. The new syntax, though it adds an extra symbol to learn, is at least 100% explicit.

I'll add that := fixes something I truly hate: the lack of `do until` in Python, which strikes me as deeply un-Pythonic. Am I supposed to break out of `while True`? Am I supposed to set the variable before and at the tail of the loop (a great way to add subtle typos that will cause errors)? I think it also introduces a slippery slope to be encouraged to repeat yourself: if assigning the loop variable happens twice, you might decide to do something funny the 2:Nth time to avoid writing another loop, and that subtlety in loop variable assignment can be very easy to miss when reading code. There is no general solution I've seen to this prior to :=. Now, you can write something like `while line := f.readline()` and avoid repetition. I'm very happy to see this.

[0] https://journal.stuffwithstuff.com/2015/02/01/what-color-is-...

[1] https://lwn.net/Articles/757713/

[edit] fixed typos

owlowlowls(10000) 4 days ago [-]

>I'll add that := fixes something I truly hate: the lack of `do until` in Python, which strikes me as deeply un-Pythonic. Am I supposed to break out of `while True`? Am I supposed to set the variable before and at the tail of the loop (a great way to add subtle typos that won't cause errors)?

This is relevant to what I've been doing in OpenCV with reading frames from videos! In tutorial examples on the web, you'll see exactly the sort of pattern that's outlined in the PEP 572 article.

    line = f.readline()
    while line:
        ...  # process line
        line = f.readline()

Just, replace readline() with readframe() and the like. So many off-by-one errors figuring out when exactly to break.

Asooka(10000) 4 days ago [-]

You are supposed to write

    for x in iter(f.readline, ''):
Or if you don't know what readline will return you can wrap it in your own lambda:

    for x in iter(lambda:f.readline() or None, None):
There is a lot you can do with iter to write the kind of loops you want but it's not well known for some reason. It's a very basic part of the language people seem to overlook. Walrus does however let you write the slightly more useful

    while predicate(x:=whatever()):
Which doesn't decompose easily into iter form.
thomasahle(4113) 4 days ago [-]

The problem with `while line := f.readline():` is that it takes pressure off library writers. You should really just do `for line in f:`. If the library only has a `next` function, it needs to be fixed.

voldacar(4165) 4 days ago [-]

Python looks more and more foreign with each release. I'm not sure what happened after 3.3 but it seems like the whole philosophy of 'pythonic', emphasizing simplicity, readability and 'only one straightforward way to do it' is rapidly disappearing.

amedvednikov(4151) 4 days ago [-]

I'm working on a language with a focus on simplicity and 'only one way to do it': https://vlang.io

The development has been going quite well:

https://github.com/vlang/v/blob/master/CHANGELOG.md

jnwatson(10000) 4 days ago [-]

Beyond the older-than-35 reason, I think a lot of folks are used to the rate of new features because there was a 5 year period where everyone was on 2.7 while the new stuff landed in 3.x, and 3.x wasn't ready for deployment.

In reality, the 2.x releases had a lot of significant changes. Off the top of my head: context managers, a new OOP/multiple inheritance model, division operator changes, and lots of new modules.

It sucks that one's language is on the upgrade treadmill like everything else, but language design is hard, and we keep coming up with new cool things to put in it.

I don't know about Python 3.8, but Python 3.7 is absolutely amazing. It is the result of 2 decades of slogging along, improving bit by bit, and I hope that continues.

nerdponx(3577) 4 days ago [-]

Can you give an example of something like this happening to the language? IMO 3.6+ brought many positive additions to the language, which I also think are needed as its audience grows and its use cases expand accordingly.

The walrus operator makes while loops easier to read, write and reason about.

Type annotations were a necessary and IMO delightful addition to the language as people started writing bigger production code bases in Python.

Data classes solve a lot of problems, although with the existence of the attrs library I'm not sure we needed them in the standard library as well.

Async maybe was poorly designed, but I certainly wouldn't complain about its existence in the language.

F strings are %-based interpolation done right, and the sooner the latter are relegated to 'backward compatibility only' status the better. They are also more visually consistent with format strings.

Positional-only arguments have always been in the language; now users can actually use this feature without writing C code.

All of the stuff feels very Pythonic to me. Maybe I would have preferred 'do/while' instead of the walrus but I'm not going to obsess over one operator.

So what else is there to complain about? Dictionary comprehension? I don't see added complexity here, I see a few specific tools that make the language more expressive, and that you are free to ignore in your own projects if they aren't to your taste.
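For readers who haven't tried them, a minimal sketch of one of the features mentioned above, data classes (the Point class is made up for illustration):

    from dataclasses import dataclass  # Python 3.7+

    @dataclass
    class Point:
        x: float
        y: float = 0.0

    p = Point(1.5)
    print(p)                # Point(x=1.5, y=0.0) -- __init__ and __repr__ are generated
    print(p == Point(1.5))  # True -- __eq__ is generated too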

JustSomeNobody(3931) 4 days ago [-]

I see what you're saying, but I kinda like the gets ':=' operator.

sametmax(3648) 4 days ago [-]

Most code still looks like traditional Python. Just like metaprogramming or monkey patching, the new features are used sparingly by the community. Even the less controversial type hints appear in maybe 10 percent of the code out there.

It's all about the culture. And Python culture has been protecting us from abuses for 20 years, while still letting us have cool toys.

Besides, in that release (and even the previous one), apart from the walrus operator, which I predict will be used in moderation, I don't see any alien-looking stuff. This kind of evolution speed is quite conservative, IMO.

Whatever you do, there will always be people complaining, I guess. After all, I also hear all the time that Python doesn't change fast enough, or lacks some black magic from functional languages.

pippy(4077) 4 days ago [-]

This is what happens when you lose a BDFL. While things become more 'democratic', you lose the vision and start trying to make everyone happy.

unethical_ban(4133) 3 days ago [-]

You can write very Python2.7 looking code with Python3. I don't think many syntax changes/deprecations have occurred (I know some have).

LaGrange(10000) 4 days ago [-]

In my experience, every technology focused on building a 'simple' alternative to a long-established 'complex' technology is doomed to discover exactly _why_ the other one became 'complex', and to spawn at least five 'simple' alternatives of its own.

Doesn't mean nothing good comes out of them, and if it's simplicity that motivates people then eh, I'll take it, but gosh darn the cycle is a bit grating by now.

baq(3445) 4 days ago [-]

I've been hearing this since 1.5 => 2.0 (list comprehensions), then 2.2 (new object model), 2.4 (decorators)...

Happy Python programmer since 1.5, currently maintaining a code base in 3.7, happy about 3.8.

hetman(10000) 4 days ago [-]

I'm someone who loves the new features even though I don't think they're 'pythonic' in the classical meaning of the term. That makes me think that being pythonic at its most basic level is actually about making it easier to reason about your code... and on that count I have found most of the new features have really helped.

tjpnz(10000) 4 days ago [-]

I don't think that philosophy was ever truly embraced to begin with. If you want evidence of that try reading the standard library (the older the better) and then try running the code through a linter.

llukas(10000) 4 days ago [-]

If you complained more specifically it would be possible to discuss. For what was described in the article I don't see anything 'foreign'. Python has always been about increasing code readability, and these improvements align well with that philosophy.

teddyh(2464) 4 days ago [-]

"I've come up with a set of rules that describe our reactions to technologies:

1. Anything that is in the world when you're born is normal and ordinary and is just a natural part of the way the world works.

2. Anything that's invented between when you're fifteen and thirty-five is new and exciting and revolutionary and you can probably get a career in it.

3. Anything invented after you're thirty-five is against the natural order of things."

― Douglas Adams, The Salmon of Doubt

chc(4059) 4 days ago [-]

The idea that str.format produced simpler or more readable code than f-strings is contrary to the experience of most Python users I know. Similarly, the contortions we have to go through in order to work around the lack of assignment expressions are anything but readable.

I do agree that Python is moving further and further away from the only-one-way-to-do-it ethos, but on the other hand, Python has always emphasized practicality over principles.
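For comparison, a minimal sketch of the three formatting styles under discussion (the variable names are made up):

    name, unread = 'world', 3
    'Hello %s, you have %d messages' % (name, unread)      # %-interpolation
    'Hello {}, you have {} messages'.format(name, unread)  # str.format
    f'Hello {name}, you have {unread} messages'            # f-string, Python 3.6+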

Razengan(4019) 4 days ago [-]

The '*' and '/' in function parameter lists for positional/keyword arguments look particularly ugly and unintuitive to me. More magic symbols to memorize or look up.
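For readers who haven't seen the new markers, a minimal sketch of what they mean in Python 3.8 (the clamp function is made up for illustration):

    def clamp(value, /, lo=0.0, hi=1.0, *, strict=False):
        # parameters before '/' are positional-only, those after '*' are keyword-only
        return max(lo, min(hi, value))

    clamp(1.5, hi=1.0)            # ok
    # clamp(value=1.5)            # TypeError: 'value' is positional-only
    # clamp(1.5, 0.0, 1.0, True)  # TypeError: 'strict' must be passed by keyword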

l0b0(4163) 4 days ago [-]

My first thought was the same as the snarky sibling comment, but after reading TFA I realized these are all features I've used in other languages and detest. The walrus operator and complex string formatting are both character-pinching anti-maintainability features.

vkaku(4133) 4 days ago [-]

That walrus operator has given me exactly what I wanted from C. Although I'd have preferred:

if val = expr():

hyperion2010(10000) 4 days ago [-]

That particular version opens the way for massive typo footguns and results in the insanity of defensive programming patterns like yoda expressions.
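To make the footgun concrete: in C-family languages `if (val = expr())` compiles and silently assigns where a comparison was intended, which is what drives 'yoda' orderings like `if (42 == val)`. A small sketch of how Python 3.8 handles the same situation:

    # A bare '=' inside a condition is rejected outright:
    #     if val = compute():   # SyntaxError: invalid syntax
    # The walrus operator makes assignment-in-condition explicit and hard to mistype:
    if (val := 6 * 7) == 42:
        print(val)  # prints 42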





Historical Discussions: Argdown (July 19, 2019: 607 points)

(614) Argdown

614 points 3 days ago by Qaphqa in 10000th position

argdown.org | Estimated reading time – 4 minutes | comments | anchor

Simple

Writing pro & contra lists in Argdown is as simple as writing a Twitter message. You don't have to learn anything new, except a few simple rules that will feel very natural.

Expressive

With these simple rules you will be able to define complex dialectical relations between arguments or dive into the details of their logical premise-conclusion structures.

Powerful

Your document is transformed into an argument map while you are typing. You can export your analysis as HTML, SVG, PDF, PNG or JSON. If that is not enough, you can easily extend Argdown with your own plugin.

# Learn Argdown in 3 Minutes

Argdown's formula consists of three ingredients:

# 1 Nested pro-contra-lists

Statement titles come in square brackets, argument titles in angle brackets.

    [Argdown is the best]: Argdown is the best
    tool for analyzing complex argumentation
    and creating argument maps.
      - <Editors easier>: Argument map editors
        are way easier to use. #pro-editor
        + <WYSIWYG>: In argument map editors what
          you see during editing is what you get
          at the end: an argument map. #pro-editor
      + <Pure Data>: With Argdown no user interface
        gets in your way. You can focus on writing
        without getting distracted.

Click on the Map button in the upper right corner to see the resulting argument map.

# 2 Premise-conclusion-structures

Let's logically reconstruct an additional argument in detail:

    <Word Analogy>
    (1) [Word @#*%!]: It is much easier to write
        and format a text with Markdown than it is with Word.
    (2) Markdown and Word are comparable in their ease of use
        to Argdown and argument map editors respectively.
    ----
    (3) It is much easier to analyze complex argumentation and
        create argument maps with Argdown than it is with
        argument map editors.
        -> <Editors easier>
    [Argdown is the best]
      - <Editors easier> #pro-editor
        + <WYSIWYG> #pro-editor
      + <Pure Data>

Click on the Map button in the upper right corner to see the resulting argument map.

# 3 Markdown-like text-formatting

    # Headings are used to group statement and arguments in the map
    You can use __many__ (though not all) *features* of [Markdown](http://commonmark.org/) to format Argdown text.
    And you can use #hashtags to color statements and arguments in the map.

For this example, no map will be generated, as the Argdown source code contains no statements or arguments connected by support or attack relations.

# Getting started

Now that you have learned the basics of Argdown you can:

  • Browser Sandbox: Try out Argdown in your browser. Includes a live preview of the generated map.
  • VS Code Extension: Install the Argdown VS Code extension for full Argdown language support in one of the best code editors around. Includes a live preview, syntax highlighting, content assist, code linting and export options.
  • Commandline Tool: If you prefer to work with the commandline, install the Argdown commandline tool. You can define custom processes in your config file and use them in a task runner to export several argument maps for the same document at once.

If you are getting unexpected results in your map, take a look at the syntax rules of Argdown and do not forget to separate top-level elements by empty lines.

MIT Licensed | Copyright © 2018-present Christian Voigt | Funded by Debatelab, KIT Karlsruhe




All Comments: [-] | anchor

daenz(511) 3 days ago [-]

Very cool. Better technology in the area of discussion and disagreements will go a long way (longer than I think most people realize). We need a 'bicycle for the mind' concept for online discussion.

toasterlovin(10000) 3 days ago [-]

I agree. But this is half the solution. The other half is an authoritative source that tracks the arguments and evidence for various hypotheses and theories. So a layperson can understand the basic argument and the strength of the evidence for or against it in about 15 minutes, but fractal in nature so that one can descend all the way down to the raw data if they choose. Something like Wikipedia, except for hypotheses and theories about how existence works. Then we can just link people to a source that outlines the flaws in their pet theories.

Side note: I think diet advocates and climate enthusiasts would be in for a bit of a shock from such a resource when they have to deal with the fact that epidemiology and model building are some of the weakest forms of evidence. But I digress.

TeMPOraL(3281) 2 days ago [-]

Strongly agreed, and I'm happy this was created, even though I'm not convinced it's going in the right direction. We do need people experimenting with tech that lets us streamline reasoning and arriving at group consensus.

DiabloD3(26) 3 days ago [-]

Who's the target market for this?

goerz(3603) 3 days ago [-]

Philosophers. We were using the early predecessor of this in our philosophy of science course 10 years ago or so (the TA was the guy who's now the professor behind this project). It's a pretty neat way to analyze informal logic.

natch(4095) 3 days ago [-]

Needs some syntax for referring to definitions of terms used in the arguments. Terms like "easier" and "complex" need formal, agreed on, definitions for the example arguments to make any sense.

rolling_robot(4164) 3 days ago [-]

It would be really nice to have an Emacs package that supports Argdown compilation and syntax.

imiric(10000) 3 days ago [-]

If you're already using Emacs, this use case seems like it could be handled with org-mode and a custom export implementation.

astro-codes(10000) 3 days ago [-]

The argdown theme in vscode (that is recommended to install) is surprisingly nice as well. Does anyone know if it's a derivative of something?

astro-codes(10000) 3 days ago [-]

Just realised that it is the default vscode light theme! I clearly have never used it ...

pithymaxim(10000) 3 days ago [-]

In case it's useful I immediately got a dead link https://argdown.org/guide/a-first-example/ from this page https://argdown.org/guide/

dmix(1329) 3 days ago [-]

Needs the .html, works on the left sidebar fortunately: https://argdown.org/guide/a-first-example.html

tingletech(2297) 3 days ago [-]

looks like it might make a good general purpose graph editor

stefco_(10000) 3 days ago [-]

The dot language used by graphviz is simple enough and portable to web implementations [0][1]. I think this is more useful for its specific domain.

[0] https://github.com/dagrejs/dagre-d3/wiki

[1] https://github.com/mdaines/viz.js

breck(365) 3 days ago [-]

This source repo is really fantastic code. If you are implementing a new language this is a nice one to emulate.

They've got a Language Server Protocol Implementation, VS Code Extension, Code Mirror mode, and more, and the code and even config files project wide are all very well done.

Great stuff.

broth(10000) 3 days ago [-]

Would you be able to provide some specific examples?

crooked-v(4157) 3 days ago [-]

So... what is it, exactly? That page has a lot of words, but they're not ones that make any sense to me.

confused-k(10000) 3 days ago [-]

Agreed.

I clicked on the link, and my thought process was something like:

The name invokes markdown, so it sounds like something for marking up args but I have no idea what args are.

> A simple syntax for complex argumentation

What does that even mean?

Clicked on 'Getting Started' and immediately it wants me to install something and/or use a sandbox. Well, I'm not going to install anything when I have no idea what it even is, and I'm also not going to bother trying out the sandbox because I have no idea what it is.

The next three text blocks on the main page:

> Simple - Writing pro & contra lists in Argdown is as simple as writing a Twitter message. You don't have to learn anything new, except a few simple rules that will feel very natural.

Oh, so it's a markup language for writing lists?

> Expressive - With these simple rules you will be able to define complex dialectical relations between arguments or dive into the details of their logical premise-conclusion structures.

No wait, it's a markup language for making mind maps?

> Powerful - Your document is transformed into an argument map while you are typing. You can export your analysis as HTML, SVG, PDF, PNG or JSON. If that is not enough, you can easily extend Argdown with your own plugin.

Well whatever this is, it can export to other file formats. Too bad the website never bothered telling me what it was, so by this point I just gave up.

aasasd(10000) 2 days ago [-]

The screenshot and example linked here are rather curious: https://news.ycombinator.com/item?id=20475870

Though personally I'd still like to read up on the technique.

paulfurley(3811) 3 days ago [-]

Agreed:

"argumentation", "pro & contra lists", "dialectical relations", "premise-conclusion structures"

...all appear within the first 100 words and I don't know what any of them mean. I'm a native English speaker with a technical education and I am lost!

Is the site aimed at some profession where these words are commonly used?!

injidup(10000) 3 days ago [-]

And there I was thinking that it was a CLI argument parser generator.

0xfi(10000) 1 day ago [-]

glad I wasn't the only one

garmaine(10000) 3 days ago [-]

At first I thought this was a parser to automatically convert help text for a command into an argument parser. Something I didn't realize I needed until right now. Someone write that thing too!

imurray(4101) 3 days ago [-]

http://docopt.org/ — turns help text in a Python doc string into an argument parser.

After a doc string put:

    import docopt
    # __doc__ must be the module's docstring, written in docopt's usage format
    args = docopt.docopt(__doc__, version='0.0.1')
Edit: krapht points out there is now support for multiple languages: https://github.com/docopt
szemet(3420) 2 days ago [-]

It would have been helpful to me if the page tried to explain and 'sell' not just the tool, but the idea of argument maps too (e.g. in a few initial paragraphs).

This was the first time I'd met the concept of an argument map, and I had to google it to gain some understanding: https://en.wikipedia.org/wiki/Argument_map

shusson(4149) 2 days ago [-]

> if the page would try to explain and 'sell'

It's a specific tool for argumentation. I don't think they need to explain what argumentation is.

voronoff(10000) 2 days ago [-]

I think this image from elsewhere in the comments does a reasonable job of showing both what an argument map looks like and why you might want one.

http://www.argunet.org/wordpress-argunet-2/wp-content/upload...

TeMPOraL(3281) 3 days ago [-]

Do pro/con trees actually work in real life? I played with something like this when using Kialo for a while, and my impression was that this technique doesn't help much. Shoehorning everything into a pro or con is one thing, duplication of points in many places in a tree is another.

I abandoned Kialo with a conclusion that pro/con trees don't map well to reality; we need graphs of facts and their relationships.

(Also, my gut feeling is that when you're talking about 'arguments' instead of 'facts', 'evidence' and probabilities, you're in business of convincing, not truth seeking).

But that's not to diss this project. As an implementation of pro/con trees it's excellent, and I'd prefer typing in this language a millon times more than clicking around Kialo.

ChrisKor(10000) 2 days ago [-]

They actually do have linking[0] and map creators are usually quite anal about only having one instance of each argument in a map. So instead of accepting your suggestion they will just link the already present one there, which can be a bit annoying.

[0] https://www.kialo.com/its-possible-that-early-belief-in-the-...

manx(4055) about 3 hours ago [-]

I built a graph-based argument mapping system with community moderation together with a friend for our CS master's theses: https://github.com/woost/wust

I'm happy to discuss this whole topic with interested people, as I think it is a very important problem. I believe it may be possible in the long term to fix politics with these kinds of technology. I wanted to do my PhD in this area, but wasn't able to get any funding. So I'm starting a startup now that does graph-based collaboration software (with the future hope of bringing more focus to argument mapping): https://app.woost.space

If interested, I can also send you my master's thesis, just reach out by email.

Brendinooo(4008) 2 days ago [-]

I have similar feelings about Kialo. I've been experimenting with a site concept that seeks to make canonical statements (fact/value statements, policy statements, questions) that could be associated with others while avoiding the pro/con binary. After all, one man's con can often be another man's pro, depending on the underlying assumptions, and things don't always fit neatly into those categories.

derefr(3632) 2 days ago [-]

> duplication of points in many places in a tree

It seems (but I might be wrong) that this syntax allows for nodes to also reference other nodes rather than embedding them (see e.g. the third example on the page, though it's too trivial to tell if that's what it's doing for sure.) I think they expect you to draft the argument map, then go back over it and iteratively reduce it by manually normalizing duplicate sub-arguments into one canonical sub-argument in one place + references to it in other places.

> Also, my gut feeling is that when you're talking about 'arguments' instead of 'facts', 'evidence' and probabilities, you're in business of convincing, not truth seeking.

Usually the point of this kind of software (argument-mapping software) is to, first, efficiently capture an argument that exists, either as a sort of "court stenographer" during the argument, or from a recording after the fact. You want the tree of pros and cons (really, rebuttals / consequents / syllogisms / a bunch of other smaller categories) because you're trying to capture the structure of the discussion itself.

Then, once you have captured that structure, argument-mapping software has tooling to allow you to massage (refactor!) the discussion from its original shape, into one that lets you more efficiently get at the truth. Turn things graphical, assign arguments weights, unify duplicate branches, etc.

Argument mapping is not just about pro/con trees; but pro/con trees are a nearly-lossless way to capture how people actually debate things, so they're a good "ingested primary source" format to keep around and refer back to when you're trying to summarize and judge a debate (rather than having to listen to the audio transcript over and over, or read through a linear stream of debate text.)

Errancer(10000) 2 days ago [-]

'(Also, my gut feeling is that when you're talking about 'arguments' instead of 'facts', 'evidence' and probabilities, you're in business of convincing, not truth seeking).'

That's funny since for me it's the other way around, in philosophy 'facts' are taken with great suspicion but arguments are fine.

duxup(3919) 2 days ago [-]

Some analytical types love these things and if they make a decision.... maybe that is enough.

I do find people who are analytical types sometimes greatly overestimate the 'known' and underestimate the unknown / nuances.

rcorrear(10000) 2 days ago [-]

A very long time ago I worked in https://groupidea.com/ where we tried solving this in a way much like the one you proposed.

blattimwind(10000) 3 days ago [-]

> (Also, my gut feeling is that when you're talking about 'arguments' instead of 'facts', 'evidence' and probabilities, you're in business of convincing, not truth seeking).

Depending on context pro/contra is probably what can be generated from facts when comparing things, so facts should be persistent, pro/contra dynamically generated.

Qaphqa(10000) 3 days ago [-]

A simple syntax (kind of Markdown) for complex argumentation, defining complex dialectical relations between arguments. Your document is transformed into an argument map while you are typing.

Screenshot: http://www.argunet.org/wordpress-argunet-2/wp-content/upload...

winter_blue(3760) 3 days ago [-]

Thank you. I was searching their homepage for an example like this. The picture shows very clearly what Argdown does.

zucker42(10000) 3 days ago [-]

This site desperately needs a picture like the one you just posted on its front page.

vinceguidry(10000) 2 days ago [-]

I would love to see this baked into a social network as a substrate for semi-structured debate. Bookmarking for later reference.

bloopernova(10000) 2 days ago [-]

I'd like to see something like this used to display people's arguments in social media. Something like a mythical 'machine learning argument analysis' process would look at comments and threads, then spit out something like Argdown. Then people could see what structure and flow different discussions have.

Could also be useful if there was such a mythical tool that could gently teach people to debate in a more logical manner. The user would write their comment, but before it is submitted, it is analyzed and helpful tips or edits shown to the user.

As an aside, I wonder how good grammar checkers have become since the Word 97 days? My grammar is utterly terrible and I wish I could get it proofread before I make comments.

dharmab(10000) 3 days ago [-]

The map view relies on color for information, which makes it hard to read as a colorblind user.

pas(10000) 3 days ago [-]

And there's no legend/explanation for the colors at all on the pictures :(

And the black text color doesn't work that well on most of the backgrounds :/

trumbitta2(2515) 2 days ago [-]

I always look in wonder at how Americans are so into argumentation. I mean, you even have a whole set of tools and now a 'markdown of sorts' for argumentation.

As a European, this never ceases to amaze me.

smallnamespace(10000) 2 days ago [-]

Americans are dualists [1] who believe there is both good and evil, and the best thing is to have both (all) sides fight it out to see who wins.

You can see this throughout the culture, from the legal system, to robust support for free speech, to the quite frankly Christian morality (a belief that there is original sin in the body politic that must be atoned for) that permeates even the most hardcore progressive, atheistic left.

[1] https://en.wikipedia.org/wiki/Dualistic_cosmology

aasasd(10000) 2 days ago [-]

> Funded by Debatelab, KIT Karlsuhe

And Christian Voigt apparently resides in Berlin.

diggan(824) 2 days ago [-]

Eh, I think most people of science in general are into argumentation because it allows you to flesh out ideas more.

I don't think, looking at the history of fields like logic, people from the US (guessing you meant that, not Americans in general) have more representation than other countries. Take a look at this list for example: https://en.wikipedia.org/wiki/List_of_logicians

jonathanstrange(10000) 2 days ago [-]

The main centres of argumentation research are in Europe (Amsterdam, Switzerland, Lisbon, and several groups in Germany and the UK) and in Canada (Univ. of Waterloo). In the US, there is a heavier focus on 'critical thinking' as a pedagogical tool.

lidHanteyk(10000) 2 days ago [-]

As an American, I was taught that our style of debate, with its understanding of dialectic and rhetoric, was ultimately grounded in ancient Greek traditions. I don't know how true that is, though.

Most public policy debate is so empty as to be worthless, and these tools don't change that. The typical English-language dialectic debate suffers from poor metaphysics and ontology, and in the USA this is compounded by a society that shuns math and logic skills. The typical USA debate is about feelings, not about evidence.

There are two investigations into philosophy, namely speech acts and pragmatics, which happened in English-language philosophy first, and may be due to English itself.





Historical Discussions: DuckDuckGo Expands Use of Apple Maps (July 16, 2019: 610 points)

(610) DuckDuckGo Expands Use of Apple Maps

610 points 5 days ago by doener in 50th position

spreadprivacy.com | Estimated reading time – 3 minutes | comments | anchor

In our quest to provide the best experience for local searches, earlier this year we announced that we're now using Apple's MapKit JS framework to power our mapping features. Since then, we've been continually working hard on further enhancements and we're excited today to show you some new improvements.

Map Re-Querying

Whereas previously each new map-related search required returning to a regular DuckDuckGo Search page, now it's possible to stay in our expanded map view where you can refine local searches instantly. This is useful for limiting generic searches like 'restaurants' to a specific area. Similarly, moving around the map or zooming in and out will enable you to update your search to include places within the field of view. For example, try a query such as 'coffee shop' and zoom in on the map to refine your search.

Local Autocomplete

Another time-saving enhancement is intelligent autocomplete within the expanded map view. Updating or typing new search queries will now dynamically show you search suggestions that are tailored to the local region displayed. For example, as you type 'coffee' we'll show you search suggestions related to coffee within the map area in view, rather than somewhere else in the world.

Dedicated Maps Tab

We now show a dedicated Maps tab at the top of every search results page. Previously we did this only for searches that we assumed were map-related, but for broader coverage you'll now consistently see Maps alongside Images, Videos and News. For example, an ambiguous query such as 'cupcakes' will give you the option to open the Maps tab, showing local places to enjoy delicious cupcakes.

Dark Mode

What about dark mode, you ask? We're pleased to say that when switching to DuckDuckGo's popular dark theme, Apple-powered maps now seamlessly switch to dark mode for a coherent look, whether you use it all the time or just for glare-free searching at night.

A lot has changed with using maps on DuckDuckGo making it an even smoother experience, but what hasn't changed is the way we handle your data—or rather, the way we don't do anything with your data. We are making local searches faster while retaining the privacy you expect.

How do we ensure your privacy when performing map and address-related searches? With Apple, as with all other third parties we work with, we do not share any personally identifiable information such as IP address. And for local searches in particular, where your approximate location information is sent by your browser to us, we discard it immediately after use. This is in line with our strict privacy policy. You can read more about our anonymous localized results here.

We believe there should be no trade-off for people wanting to protect their personal data while searching. Working with Apple Maps to enhance DuckDuckGo Search is an example of how we do this, and pushes us further in our vision of setting a new standard of trust online.


For more privacy advice, follow us on Twitter & get our privacy crash course.




All Comments: [-] | anchor

m8rl(4079) 5 days ago [-]

In my region (Germany), Apple Maps is not very helpful; compared to OpenStreetMap it is incomplete and/or years behind. I absolutely can't see why they chose to use it.

xenospn(10000) 5 days ago [-]

Probably comes down to API access and/or pricing. Privacy focused mapping services are quite rare. I'd do the same in their place.

larrysalibra(3870) 5 days ago [-]

I thought people complaining in the comments were just being critical, but I clicked the "coffee shops" example search in the post - I'm in Hong Kong - and it showed me only 2 results:

One coffee shop in Hong Kong and one on the other side of the Pearl River Delta in Macau. That's pretty bad. Screenshot here: https://twitter.com/larrysalibra/status/1151182624108318720?...

dmix(1329) 5 days ago [-]

That looks more like it has to do with query detection than the lack of results. It clearly didn't understand the Hong Kong part, which is the main issue.

WillyF(4103) 5 days ago [-]

One thing that I love about Apple Maps is that they have the name of every river, stream, creek, and ditch if you zoom in far enough. I can't find this information in Google Maps (maybe there's a way to find it, but zooming in doesn't do it). This was exceptionally helpful on my recent trip to Corsica, where I was searching for a specific stream with a genetically significant population of native trout. Apple Maps made finding it a breeze, and even had the names of all the tributaries that flow into it, which were essentially just trickles.

I subscribe to OnX Maps for most of my fishing and hunting research in the United States, but Apple Maps is a pretty great free option.

exadeci(10000) 5 days ago [-]

Because it's data from OSM

dsd(10000) 5 days ago [-]

I noticed that too. I like osmand (openstreetmap) for the same reason. It's like google maps decided to practice more minimalism than apple did.

burlesona(3867) 5 days ago [-]

This is cool to see, and I think should be a virtuous cycle. As I understand it, maps is the kind of thing where more usage really helps make the map better, and while DDG doesn't bring the scale of being the iphone's default map, it should be adding a non-trivial amount of traffic.

I've used Apple Maps as my primary map since it came out, and I've only gotten a wrong location one time in literally thousands of searches, and that was years ago. It wasn't really ready when it launched, but it has gotten consistently better over time. The UX is great, in many cases the satellite imagery is more up-to-date compared to Google, and it doesn't maul my battery to use. Not saying it's clearly better than Google, because it isn't, but for my usage it's more than "good enough," and I love to see Apple's privacy respecting products compete effectively with big G.

xenospn(10000) 5 days ago [-]

Same as you - been using it for years. In Southern California, it's just as good as Google Maps, and the updated version is miles ahead when it comes to actual map details. Google Maps looks spartan compared to the Apple version these days.

gog(10000) 5 days ago [-]

Apple Maps maybe works in the US, but in Europe, at least in my country, it's not even close to Google Maps.

JumpCrisscross(48) 5 days ago [-]

> Not saying it's clearly better than Google

If privacy is worth something to you, it's clearly better than Google.

I, too, use Apple as my primary map. In many cases, Apple Maps is better than Google. The ones in which it's behind are more than made up for by Apple's values.

jxdxbx(10000) 5 days ago [-]

I primarily use Apple Maps and its directions and the accuracy of roads have always been great for me. But sometimes I can't find a POI and searching for POIs can be significantly worse than Google Maps.

Xylakant(3459) 5 days ago [-]

> Not saying it's clearly better than Google, because it isn't, but for my usage it's more than "good enough"

I've just started testing Apple Maps as a Google Maps replacement, and the quality seems to be highly dependent on the location. Cities and densely populated areas seem fine, but I'm currently traveling somewhat rural Polish areas, and Apple Maps seems to make no distinction between solid country roads and unpaved paths. "If it shows up on the map, you can travel it" seems to be its credo.

dybber(10000) 5 days ago [-]

Apple Maps doesn't do bike routes which is reason enough not to use it.

Google Maps does a very nice job for bikes. If I should wish something of Google Maps it would be "I bring my bike on the train" routes, which is a very common and nice way to get around here in Denmark at least.

caiob(10000) 5 days ago [-]

Apple Maps works better than Google Maps in Montreal.

bin0(4117) 5 days ago [-]

Here's the issue with this: the live-traffic component of google maps, waze, etc. is very much a network effect. You get a lot of traffic info by measuring the speeds and stalls of your users. This creates the classic chicken-and-egg problem: need users to get data; need data to get users.

Spooky23(3604) 5 days ago [-]

The Google mapping properties are definitely on the decline. My theory is that they are less critical to the advertising business.

For navigation, Waze can't find a route 1/5 times for a longer trip, and Google Maps gets weirder and more hostile to use every few months. For me, they have tweaked their routing engine to be more 'creative' and direct you in weird ways. Big G is too fat and happy and they need a challenge.

Apple Maps is fine now from an accuracy POV. They are more conservative about routing and Apple gives the app privileged treatment on iOS that improves the UX while driving. Now that Apple seems to be embracing letting services move beyond their platform, I think we'll see them give Google a run for their money.

soneca(1480) 5 days ago [-]

Tangentially, I noticed that when I start typing a word in my Chrome URL input field, it started to privilege Google searches instead of websites I normally visit.

It is very annoying to type 'n <enter>' and it goes on to search any query starting with 'n' that I happened to have searched in the past instead of going to HN as it was the case for the last several years.

It is happening now with all my usual 'shortcuts'.

Chrome now is less a browser and more a Google widget.

I wonder if I change the default search engine to DuckDuckGo it would still be the case.

baobrain(10000) 5 days ago [-]

This is a chrome flag you can toggle: omnibox-drive-suggestions

Alternatively, use Firefox.

r00fus(4170) 5 days ago [-]

I avoid Chrome on all but specific work use-cases. Have noticed a big improvement in battery usage while it's not running.

vkaku(4133) 5 days ago [-]

What I'd want is Apple Maps and Here Maps to enter into a technology/business collaboration agreement.

Here Maps have excellent tech and offline packaging and Apple has the reach.

Put all this together with DDG, we have a winner.

manuelmagic(3954) 5 days ago [-]

I absolutely agree! I'm kinda disappointed that nobody else talked about Here Maps in the comments. Nobody use it?

SanchoPanda(10000) 5 days ago [-]

Why does neither maps.duckduckgo.com nor duckduckgo.com/maps take me to the maps interface?

That's the number one way I access Google Maps.

mperham(3398) 5 days ago [-]

Likewise. Note ddg.co works too, saves lots of typing.

oldgun(3640) 5 days ago [-]

Not sure if it's a lot to ask, but I'll consider it a killer app if DDG lets users choose which map source to use, e.g. some may prefer Google and some may prefer OpenStreetMap.

Just a thought.

maroonparty(10000) 5 days ago [-]

you can set your preferred Directions Source in the settings to google

Nightshaxx(4075) 5 days ago [-]

It already has this. It's in search settings.

gerash(10000) 5 days ago [-]

No, you don't get to choose. You have to be privacy first which narrows it down to DDG and Apple. Deal with it.

nvrspyx(10000) 5 days ago [-]

They at least let you choose which one to use for directions. Apple Maps is the default as far as I can tell, but if you go to the settings page on DDG [1] and scroll down to Directions Source, you can change to Bing, Google, HERE, and OpenStreetMap. However, this only affects what it directs you to when you click 'Get Directions' and you can't change the actual map source that's embedded or used for location results, although I believe they used to let you change that. It's a bummer that it was changed and that their Help page still refers to that feature [2].

[1] https://duckduckgo.com/settings

[2] https://help.duckduckgo.com/duckduckgo-help-pages/features/m...

bilbo0s(4155) 5 days ago [-]

Pretty sure you can forget about DDG supporting Google. The whole point of DDG is privacy. Not sure how they could use Google and still keep your location information private?

duskwuff(10000) 5 days ago [-]

They can't use Google Maps, because Google's Terms of Service for Maps explicitly prohibits using them alongside other map data providers:

> (e) No Use With Non-Google Maps. Customer will not use the Google Maps Core Services in a Customer Application that contains a non-Google map. For example, Customer will not (i) display Places listings on a non-Google map, or (ii) display Street View imagery and non-Google maps in the same Customer Application.

-- https://cloud.google.com/maps-platform/terms/

As far as OSM goes, Apple Maps is based heavily on OSM data. (I've submitted corrections to OSM and seen the changes propagate to Apple Maps.) So I don't see much point in supporting two different services which are that closely related.

ancorevard(10000) 5 days ago [-]

I love Apple Maps' Dark Mode.

ducktypegoose(10000) 5 days ago [-]

I think the true hero of the story is whoever made dark mode for maps a thing.

solarkraft(3986) 5 days ago [-]

Why support proprietary Apple Maps instead of Open Street Map? Is the data a lot more precise? Is the viewer smoother?

Is it just another step towards aligning with Apple for an eventual buyout/search engine standard?

That said: If privacy is the only concern Apple seems to be a pretty good ally, as the only major player with a significant interest in it.

kalleboo(3908) 4 days ago [-]

AFAIK, OSM is not happy when you use their tile servers for large commercial projects like DDG[0], so DDG would have to run, maintain and update that whole infrastructure themselves. For a service of their size it would probably require at least a dedicated engineer.

With Apple Maps, they just have to include some JavaScript and Apple deals with all of that.

[0] https://operations.osmfoundation.org/policies/tiles/

scrooched_moose(10000) 5 days ago [-]

Open Street Map is not remotely usable in many parts of the country. Picking largely at random:

South Minneapolis is pretty much unpopulated other than parks/schools/churches: https://www.openstreetmap.org/#map=14/44.9256/-93.2303

Same in a good chunk of Memphis: https://www.openstreetmap.org/#map=14/35.1364/-89.9691

and some major suburbs of Atlanta: https://www.openstreetmap.org/#map=14/33.7077/-84.2708

It appears to be a usability/functionality tradeoff. Apple Maps isn't perfect in privacy nor accuracy/completeness, but OSM is useless for many people.

Freak_NL(4145) 5 days ago [-]

> Is the data a lot more precise?

Perhaps in Apple's own backyard. In the Netherlands it's laughably bad. Cycle tracks? Mostly missing (in a country that has a huge cycling infrastructure). The map doesn't even have building outlines.

Google Maps is slightly better, but mostly because of the more extensive mapping of points-of-interest; because business owners add their own information with an almost religious zeal.

Bing interestingly enough uses OpenStreetMap (and properly attributes its usage) to gain access to the municipally contributed building outlines OpenStreetMap can use due to its permissive licence. The roads are their own though, and they are quite inaccurate at the lower end of the road hierarchy.

OpenStreetMap is probably the most complete map here in the Netherlands (disclaimer: I contribute to OpenStreetMap).

DuckDuckGo using Apple Maps instead of OpenStreetMap is a really weird choice for many countries, but perhaps it works better in the US?

philshem(4095) 5 days ago [-]

Does anyone else see Apple buying DDG in the near future?

burlesona(3867) 5 days ago [-]

Probably not. Apple is cautious about acquisitions and doesn't generally buy things that have a consumer brand, they don't really do advertising supported products, and they're also mindful of anti-trust. My guess is that in the search space they see DDG as a useful partner, much like they see Yelp, but they're not trying to expand into that business so an acquisition wouldn't do much for them.

scottmcf(10000) 5 days ago [-]

Would people continue to use it if that happened? I feel like that would defeat some of the point of the service.

kabacha(10000) 5 days ago [-]

I'm still so perplexed why they didn't go with OpenStreetMap, which is not only FLOSS but also infinitely better. Apple Maps is absolutely useless in my region, while OSM has always been at least a tolerable experience wherever I went in the world. Actually, OSM is often better than Google Maps - the only thing it really lacks is a better user review ecosystem.

kkarakk(10000) 4 days ago [-]

OSM doesn't have great APIs; you end up having to have GIS experts on your team in order to use it in your product. Source: someone who tried to use OSM stuff in an IoT fleet tracking system. Even Bing Maps is better than OSM.

goda90(10000) 5 days ago [-]

It seems to still have the dropdown to select which mapping service, but it doesn't change when you use it, and Apple maps isn't on the list. But the map it shows me does have the Apple logo in the corner.

gruez(3672) 5 days ago [-]

Are you talking about the drop down right under the 'directions' button? I believe that's for navigation only. ie. if you choose 'google', and click the directions button, it opens google maps in a new tab, and if you change it to bing, it opens bing maps in a new tab.

cletus(3028) 5 days ago [-]

I think of Apple Maps the same way I think of North Korea's missile program: I know it exists and it has continent-level accuracy.

ummonk(4118) 5 days ago [-]

Given that North Korea has been able to place satellites into sun synchronous orbit, its missile program has far better than continent-level accuracy.

mcs_(4169) 5 days ago [-]

one of the best comments this week

melling(1468) 5 days ago [-]

Last weekend I used Apple Maps to navigate to a movie theater from within the Trailers app.

Not sure which one got it wrong. We ended up in some neighborhood two miles from the theater.

I knew once we pulled off the main road and into that neighborhood that we were screwed.

bredren(4128) 5 days ago [-]

Isn't there an improved Apple Maps being rolled out city by city? What's the status of that?

rimliu(10000) 5 days ago [-]

I don't use NK missiles, but I do use Apple Maps and they are fine.

rootusrootus(10000) 5 days ago [-]

I use it all the time. In Portland it is as good as GMaps is at navigating fastest route during heavy traffic. Which is to say, not perfect, but adequate. It has yet to take me to the wrong location.

helix438745(10000) 5 days ago [-]

Oh my God, you're so hilarious! /s

Dude, Apple Maps jokes are so 2012.

Apple Maps has improved significantly in recent years. With iOS 13, it's going to annihilate Google Maps.

willio58(10000) 5 days ago [-]

I choose to use Apple maps for navigating most places within a city. On iPhone, I think it has far superior ux design. In between cities I use google maps because I feel it is more up to date.

iamtheworstdev(10000) 5 days ago [-]

That's a pretty funny comment, but I believe so far NK has actually failed to show continent level accuracy. They've only shown Pacific Ocean accuracy. XD

jakecopp(4143) 5 days ago [-]

It's a shame they didn't invest in OpenStreetMap.

Their values would align significantly, and OpenStreetMap has excellent road and path coverage in my experience (though struggles with Points of Interest).

Maxious(3126) 5 days ago [-]

They do invest in improving the project https://github.com/osmlab/appledata/

hardwaresofton(3322) 5 days ago [-]

My first instinct is that it's a money thing -- kind of like how Yelp results show up in DDG now which I only recently noticed. DDG is taking a page out of Mozilla's book and getting as many corporate partnerships as they can.

kbody(3781) 5 days ago [-]

Same here, hopefully it's not a pre-acquisition move and they'll instead be more aligned in the future like you said.

bad_user(3119) 5 days ago [-]

I live in Romania.

It depends on the country, but for search what really matters are the points of interest and Apple Maps in my country doesn't have any, whereas OSM and Google Maps are competing head to head.

Even for driving, the OSM apps available, while lower quality, are more reliable when I travel to Bulgaria for example. The penetration of Google Maps in Eastern Europe isn't great and Apple Maps isn't worth bothering with.

Anyway, I wonder why DuckDuckGo is choosing Apple Maps. It makes no sense IMO from a user experience perspective.

Remember that if you're in California or New York, those are the primary markets targeted by all tech companies, so your experience with Apple Maps is not representative of the rest of the world.

In my travels OSM fares quite well in terms of its POI database and is the only one that can compete with Google Maps in that regard.

thekid314(10000) 4 days ago [-]

I'd second this, Apple Maps is still useless in Egypt and most of Africa.

There is plenty of room for these tech companies to plow some of their profits into data sets from outside San Francisco.

raxxorrax(10000) 4 days ago [-]

In Germany I use OSM exclusively for navigating and it is very good. Maybe not quite on the level of Google maps concerning things like live traffic, but certainly good enough to reach your goal and then some. Love the project and would have liked to have DuckDuckGo support it instead of using proprietary data.

Google Maps has shown what can happen if you use it for anything business critical.

edit: Sadly OSM doesn't yet have services like forward address search (it might be too expensive to provide). It would enable many businesses to use it for address comparison to clean up their own data, for example. I think that could put OSM on the map, so to speak.

yoz-y(10000) 4 days ago [-]

If I am not mistaken OSM will only give you a database but not the map tiles. For that you need to get them from some service such as MapBox, Google or Apple. Usually these are paid by tiles and Apple is currently the cheapest.

r3bl(1269) 4 days ago [-]

> Anyway, I wonder why DuckDuckGo is choosing Apple Maps. It makes no sense IMO from a user experience perspective.

It makes sense from a privacy perspective. Both of the companies are probably aware of the fact that Apple Maps are really fucking bad at the moment. So was DDG at the beginning. It's incredibly difficult to offer a product that matches Google's when you're two decades behind. The only way to improve is to gather more data. You don't need to collect personal data in order to improve the service, just data in general.

Including Apple Maps in a privacy-first search engine gives Apple the marketing boost in their target market: relatively rich people that do have something to hide. If you use a privacy-aware search engine, and you see that search engine partnering with Apple, it's easier to believe that Apple truly is privacy aware. I still have my doubts, but they're slowly but surely diminishing.

From DDG's perspective, it gives them relevance. They're no longer just a small player in the market trying to make a name for themselves. They're big enough to be able to partner with Apple. This isn't their first collaboration either: Safari was the first major browser that included their search engine out of the box (Firefox was the second, about two months later).

It also makes perfect sense for them to stick together, because their ultimate goal is the same: to offer an alternative to surveillance capitalism. It still doesn't make much sense in the short run, but it makes perfect sense in the long run. The more people distrust Google/Facebook/Microsoft/Amazon, the more they're gonna look for the alternatives. DDG, Apple, and similar companies just need to be stubborn. The market will find them, not the other way around.

olah_1(10000) 5 days ago [-]

Very recently Qwant launched their Maps beta that is based on OpenStreetMaps. Discussion here: https://news.ycombinator.com/item?id=20304720

Sidenote: I use duckduckgo for Safari search. I saw an ad on twitter for something that I searched in a private window of Safari. Not sure whose fault that is, but it really disturbed me.

asdff(10000) 5 days ago [-]

Not ddg, probably whatever website you landed on.

Imo there's no point in dealing with ddg on an iphone if you can't even install adblockers.

scrooched_moose(10000) 5 days ago [-]

Interesting, but certainly not usable yet.

My 'shorthand' address missed my house by about 5 miles, and the precise mailing address (like I'd use on an envelope) brought up a steakhouse about 8 miles away. My company name dropped me in Saudi Arabia, and the exact address dropped me in New York (I'm in Minnesota).

It's the same issue I have with Open Street Maps, if you're not in SF/NYC/Chi they're damn near useless. OSM at least gets me to the correct block, although it's still off by about 500 feet.

Edit: Oh boy, this is like Cuil again. Grand Canyon brings up a mall in Israel, Burj Khalifa is somehow underwater, Eiffel Tower brings up Las Vegas, Roman Colosseum some residential street in Houston. Statue of Liberty and Taj Mahal are the only two landmarks I tried that it got correct. I get it's a 'beta', but ouch. If you can't get addresses or major landmarks correct this shouldn't even be public facing yet.

godelski(4135) 5 days ago [-]

There are ways to track besides the search engine. The website you landed on might have done it. Not saying it wasn't ddg, but that there are many possibilities (and what I think is a major part of the problem)

oldgun(3640) 5 days ago [-]

Twitter does provide a 'why am I seeing this ad' option for each ad it displays. Maybe that'll give you some clues?

baddox(4114) 5 days ago [-]

> For example, try a query such as 'coffee shops' and zoom in on the map to refine your search.

There is exactly one result for 'coffee shops' in San Francisco. The tech and privacy initiatives sound good, but unfortunately the data needs work to pass basic sanity checks.

https://duckduckgo.com/?q=coffee+shops&ia=web&iaxm=maps&stri...

Edit:

Searching for 'coffee shop' (singular) shows many more results. Perhaps the blog post should use that as its example.

https://duckduckgo.com/?q=coffee+shop&ia=web&iaxm=maps&stric...

banach(10000) 5 days ago [-]

'coffee shop' on the other hand yields 20 results. I agree that some support for fuzzy search is needed, but there is a reasonable amount of data there.

LeoPanthera(2908) 5 days ago [-]

Weirdly, 'coffee shops in san francisco' shows a lot.

exhilaration(10000) 5 days ago [-]

I just tried 'coffee shop' and 'coffee shops' near my location in eastern Pennsylvania (semi-rural) and the results are atrocious. 'Supermarket' shows all the possible options so it's not all bad.

Avamander(10000) 5 days ago [-]

It's a fun example, but imagine how garbage the search is in languages that have more than one grammatical case. :/

amanzi(3444) 5 days ago [-]

There are only two coffee shops in Wellington (NZ) according to DDG/AppleMaps.

https://duckduckgo.com/?q=coffee+shop&t=h_&ia=recipes&iaxm=m...

Edit: searching for 'cafe' gives better results but still not great

https://duckduckgo.com/?q=cafe&t=h_&ia=recipes&iaxm=places&s...

iamaelephant(10000) 5 days ago [-]

Now imagine how bad it is outside of the SF bubble. I searched for coffee shops and the closest result to me is 7 hours drive away.

pwinnski(10000) 5 days ago [-]

I was a big defender of Apple Maps, largely because I almost never saw any problems with the data. Then I moved into an apartment complex in which Apple had the driveway in the wrong place.

I've moved since, so I'll spell it out: https://duckduckgo.com/?q=5940+Arapaho+Rd%2C+75248&t=osx&ia=...

Apple Maps believes that the driveways are to the south and east, but in fact the front driveway--the main entrance--is to the north, and there is no direct passage from the east. So every set of directions to or from those apartments begins or ends incorrectly. When leaving, I just have to guess whether I should turn east on Arapaho to catch up to where Maps thinks I should have ended up on Preston to start out, or whether it will send me west on Arapaho once it realizes I'm already most of a block in that direction from Preston. It added a minute or two to every trip, and delivery people would fail to find my apartment unless I specifically said 'don't use Apple Maps.' So I started saying that to everyone, all the time.

Apple's commitment to privacy means that they deliberately don't track the beginning or end of any trip, but those are precisely the bits they needed to track to see that their routing was completely and totally wrong. So the problem will apparently never be fixed, at least until an Apple employee happens to want to visit a friend who lives in the Enclave at Prestonwood and realizes they can't get there.

So I've switched to Google Maps, and I loathe the lack of privacy, but I love the sharing option, so I guess I'm staying, even though I live elsewhere now.

nsilvestri(10000) 5 days ago [-]

Convenience and privacy are on a spectrum. More of one means less of the other.

mda(4162) 5 days ago [-]

I don't think there is much difference regarding privacy between apple maps and Google maps. What exactly do you loathe?

pwinnski(10000) 5 days ago [-]

I just checked OpenStreetMaps, and it gets the navigation right[0], so that issue is solely Apple's. That said, there's still a visual indication on OSM of a driveway to Preston Rd that does not exist in life.

[0] To test, I navigated from 'Renner Frankford Library Branch, 6400, Frankford Road, Dallas, Collin County, Texas, 75252, USA' to '5940, Arapaho Road, Dallas, Dallas County, Texas, 75248, USA' at https://www.openstreetmap.org/directions

saagarjha(10000) 5 days ago [-]

> So the problem will apparently never be fixed, at least until an Apple employee happens to want to visit a friend who lives in the Enclave at Prestonwood and realizes they can't get there.

Or you can report the issue yourself in-app?

josefresco(3934) 5 days ago [-]

What service did DDG use before Apple Maps?

Doctor_Fegg(3320) 5 days ago [-]

Mapbox.





Historical Discussions: Cracking My Windshield and Earning $10k on the Tesla Bug Bounty Program (July 15, 2019: 572 points)
Cracking My Windshield and Earning $10k on the Tesla Bug Bounty Program (July 15, 2019: 2 points)

(572) Cracking My Windshield and Earning $10k on the Tesla Bug Bounty Program

572 points 6 days ago by EdOverflow in 4072nd position

samcurry.net | Estimated reading time – 6 minutes | comments | anchor

One of the more interesting things I've had the opportunity to hack on is the Tesla Model 3. It has a built in web browser, free premium LTE, and over-the-air software updates. It's a network connected computer on wheels that drives really fast.

Early in the year I decided to purchase one and have had an absolute blast both messing with it and driving it. I've spent way too long sitting in my garage trying to make it do things it's not supposed to, but luckily got something interesting out of it.

April, 2019

The first thing I spent time messing with was the car's "Name Your Vehicle" functionality. This allowed you to set a nickname for your car and would save the information to your account so you could see it on the mobile app whenever you received push notifications (e.g. charging complete).

The "Name Your Vehicle" button in the upper right of the center square

Initially, I named my car "%x.%x.%x.%x" to see if it was vulnerable to format string attacks like the 2011 BMW 330i was, but sadly it didn't really do anything.

After spending more time messing with the input I saw that the allowed content length for the input was very long. I decided to name the Tesla my XSS hunter payload and continued toying around with the other functionalities on the car.

My idea for setting this name was that it may show up on some internal Tesla website for vehicle management or possibly from a functionality within my account.

The other thing I spent a lot of time messing with was the built in web browser. I wasn't able to get this to do anything even remotely interesting but had a fun time trying to get it to load in files or strange URIs.

I couldn't find anything that evening so I called it quits and forgot that I'd set my car name to a blind XSS payload.

June, 2019

During a road trip a huge rock came from somewhere and cracked my windshield.

I used Tesla's in app support to setup an appointment and continued driving.

The day after, I received a text message about the issue saying that someone was looking into it. I checked my XSS hunter and saw something really interesting.

Vulnerable Page URL: https://redacted.teslamotors.com/redacted/5057517/redacted
Execution Origin: https://redacted.teslamotors.com
Referer: https://redacted.teslamotors.com/redacted/5YJ31337

One of the agents responding to my cracked windshield fired my XSS hunter payload from within the context of the "redacted.teslamotors.com" domain.

This was super exciting.

The screenshot attached to the XSS hunter showed that the page was used to see the vital statistics of the vehicle and was accessed via an incremental vehicle ID in the URL. The referrer header had my vehicle's VIN number as an argument.

The XSS had fired on a dashboard used for pulling up and managing Tesla vehicles.

There was current information about my car shown in the attached XSS hunter screenshot like the speed, temperature, version number, tire pressure, whether it was locked, alerts, and many more little tidbits of information.

VIN: 5YJ3E13374KF2313373
Car Type: 3 P74D
Birthday: Mon Mar 11 16:31:37 2019
Car Version: develop-2019.20.1-203-991337d
Car Computer: ice
SOE / USOE: 48.9, 48.9 %
SOC: 54.2 %
Ideal energy remaining: 37.2 kWh
Range: 151.7 mi
Odometer: 4813.7 miles
Gear: D
Speed: 81 mph
Local Time: Wed Jun 19 15:09:06 2019
UTC Offset: -21600
Timezone: Mountain Daylight Time
BMS State: DRIVE
12V Battery Voltage: 13.881 V
12V Battery Current: 0.13 A
Locked?: true
UI Mode: comfort
Language: English
Service Alert: 0X0

Additionally, there were tabs about firmware, CAN viewers, geofence locations, configurations, and code named functionalities that sounded interesting.

Some of the functionality of the application

I had attempted to browse to the "redacted.teslamotors.com" URL but it timed out. It was probably an internal application.

The thing that was very interesting was that live support agents have the capability to send updates out to cars and, most likely, modify configurations of vehicles. My guess was that this application had that functionality based off the different hyperlinks within the DOM.

I didn't attempt this, but it is likely that by incrementing the ID sent to the vitals endpoint, an attacker could pull and modify information about other cars.

If I were an attacker attempting to compromise this I'd probably have to submit a few support requests but I'd eventually be able to learn enough about their environment via viewing the DOM and JavaScript to forge a request to do exactly what I'd want to do.

Reporting

At nearly 2:00 AM (after driving for 11 hours) I manically wrote a report to the Tesla bug bounty program. They triaged it as a P1, commented, and pushed out a hot fix within 12 hours.

After the hot fix, I was unable to reproduce the issue. About two weeks later, they paid out a $10,000 bounty and confirmed my suspicion that this was a serious issue.

Looking back, this was a very simple issue but understandably something that could've been overlooked or regressed somehow. Although I'm unsure of the exact impact of the vulnerability, it seems to have been substantial and at the very least would've allowed an attacker to view live information about vehicles and likely customer information.

Timeline

20 Jun 2019 06:27:30 UTC – Reported
20 Jun 2019 20:35:35 UTC – Triaged, hot fix
11 Jul 2019 16:07:59 UTC – Bounty and resolution

On a final note, Tesla's bug bounty program is fantastic. They provide a safe haven for researchers who are in good-faith trying to hack their cars. If you accidentally brick one, they'll even offer support in attempting to fix it.

Thanks to everyone who helped me review this before publishing.




All Comments: [-] | anchor

driverdan(1431) 5 days ago [-]

This is a great example of why it's terrible to have a car that can be remote controlled including the ability to push arbitrary updates. It should not be possible to use XSS to compromise a vehicle.

trilila(10000) 5 days ago [-]

Following this logic, nothing should be remotely controlled because there might be security risks. Including OS updates to laptops.

gowld(10000) 5 days ago [-]

XSS compromised a remote web app, not the vehicle. The vehicle hacked Tesla HQ, not vice versa

jxcl(10000) 5 days ago [-]

This bug probably existed because some developer thought 'this is an internal application, I don't need to apply the same rigorous input (edit: and output, as replies point out) sanitization as I do with normal sites because it's only accessible by VPN.'

As a consultant who gets to see a lot of 'internal only' applications, this is one of the misconceptions that my coworkers and I try to fight against. XSS is effective even if the attacker doesn't have access to the internal application, because it's not the attacker's computer making the requests.

dmix(1329) 5 days ago [-]

This stuff should be taken care of by your web framework wherever possible.

trilila(10000) 5 days ago [-]

Normally it's not the input that should be sanitised, but rather the output that should be properly encoded. It's easier to make sure that ANY type of input is displayed safely than to eliminate SOME of the known issues on the way in.

ec109685(4140) 5 days ago [-]

Output sanitization is what you want to bet on. Only your website / app knows where a piece of data will be displayed, so that is when you should apply appropriate encoding of the output stream.
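
A minimal Python sketch of the output-encoding approach described above (the function name and the sample payload are illustrative assumptions, not Tesla's code): encode at the point where the value is placed into HTML, so whatever the user typed renders as text.

import html

def render_vehicle_row(car_name: str) -> str:
    # Encode at output time: the car name is shown literally, so any
    # markup in the name cannot execute in the dashboard page.
    return "<td>{}</td>".format(html.escape(car_name))

if __name__ == "__main__":
    # A hostile "car name" comes out as inert text, not a script tag.
    print(render_vehicle_row('<script src="https://attacker.example/hook.js"></script>'))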

duxup(3919) 5 days ago [-]

Hell even in my past career supporting hardware network products a lot of companies had / have management ports that are vulnerable to all sorts of stuff. The industry standard response from engineers was 'well that should be behind a firewall'.

It's time we stop pretending the big bad internet is only 'out there' just because it should be; it is everywhere.

Thorrez(10000) 5 days ago [-]

Note that even if it's only accessible by VPN, attackers can still make HTTP requests to it because when an employee connected to the VPN visits attacker.com, attacker.com can make XHR calls to internalsite.com. The attacker can't read the response (unless there are other vulnerabilities), but if you don't have CSRF protection, the attacker can perform actions on the internal site.
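
Since the mitigation mentioned here is easy to sketch, below is a minimal Python illustration of per-session CSRF tokens (the function names and the in-memory store are hypothetical, not any real framework's API): a forged cross-origin request from attacker.com cannot include the right token, because the attacker cannot read the page that contains it.

import hmac
import secrets

# Hypothetical in-memory session store: session id -> CSRF token.
_sessions = {}

def issue_csrf_token(session_id: str) -> str:
    # Generate an unpredictable token and remember it server-side;
    # the internal app embeds it in its own forms.
    token = secrets.token_urlsafe(32)
    _sessions[session_id] = token
    return token

def is_state_change_allowed(session_id: str, submitted_token: str) -> bool:
    # Reject the request unless the submitted token matches the one issued
    # for this session. A cross-origin attacker can make the browser send
    # the request, but cannot supply a valid token.
    expected = _sessions.get(session_id)
    return expected is not None and hmac.compare_digest(expected, submitted_token)

if __name__ == "__main__":
    token = issue_csrf_token("employee-session")
    print(is_state_change_allowed("employee-session", token))    # True
    print(is_state_change_allowed("employee-session", "forged")) # False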

gwbas1c(3865) 5 days ago [-]

Could just be because the application was written by a less experienced programmer, or even outsourced?

brokenmachine(4170) 5 days ago [-]

All the comments on here seem to be praising Tesla for paying a bug bounty, but I'm just sitting here horrified at how much information a phone support guy is able to view remotely about owners' cars, not to mention the ability to send OTA updates.

No way am I buying a connected car.

reallydontask(10000) 5 days ago [-]

I think you might be out of options soon, if you want a new car that is. A while longer for used cars obviously.

Once all new cars are connected, the DuckDuckGo of cars will launch soon thereafter with the promise of a privacy centric connected car :)

gibolt(4109) 5 days ago [-]

What a great response and turnaround. The bug was fixed within 24 hours and paid out within a month.

I wouldn't expect any other car manufacturer to respond ever, most don't even own their software stack.

Someone1234(4161) 5 days ago [-]

A lot of other vehicle manufacturers couldn't anyway, they don't build the infotainment systems in-house, they simply just re-theme/re-badge the units from companies like Panasonic, Pioneer, Fujitsu-Ten, etc.

So if they got a bug report it would have to travel through ten layers of indirection before an engineer got to read it (let alone understand/respond). Particularly when there might be two or three different written word languages used between consumer and engineer (e.g. English -> Japanese -> Mandarin (Taiwan)).

Tesla (and Ford previously) were actually oddballs in that they didn't use 'off the shelf' infotainment units.

inlined(4069) 5 days ago [-]

> On a final note, Tesla's bug bounty program is fantastic. They provide a safe haven for researchers who are in good-faith trying to hack their cars. If you accidentally brick one, they'll even offer support in attempting to fix it.

This is an amazingly open and refreshing policy!

FireBeyond(3774) 5 days ago [-]

* Subject to Tesla's definition of 'good faith'.

Another hacker who discovered references to the Model 3 in his car before its announcement:

* had his vehicle firmware downgraded to a version that contained no such references

* had his vehicle blocked from receiving further firmware updates

* had his vehicle's Ethernet port disabled

* [deleted] caught some commentary from Musk about how his hacking behavior put himself and other drivers at risk. [/deleted]

hanniabu(3880) 5 days ago [-]

Does this mean you can legally mod your car under the guise of hacking it?

EdwardDiego(4145) 5 days ago [-]

I love that we're living in a world where you can accidentally brick your car. The future, man, the future.

nickip(10000) 5 days ago [-]

What would the fix for this be? Enabling CORS only for `https://garage.vn.teslamotors.com`?

bzbarsky(1663) 5 days ago [-]

CORS won't do it, because it protects the response target, not the response source.

CSP would do the trick, though.

The other fix is properly escaping things before sticking them in your markup.

bhhaskin(4119) 5 days ago [-]

That would be a good first step, but more importantly making sure any content is rendered in a safe way. In this instance safe means making sure HTML entities are properly encoded and escaped.

Someone1234(4161) 5 days ago [-]

Sanitization of the text input (e.g. < becomes &lt;, > becomes &gt;, etc). This is automatic/implicit on a lot of modern web frameworks (since text and Html are distinct types and output to a page are treated differently, with text sanitization being implied unless you opt out).

You shouldn't ever be running untrusted JavaScript. Content Security Policy and similar are just extra layers of protection if you mess up.

em-bee(3796) 5 days ago [-]

sanitize the output of the car name field so that any html tags are escaped.

yahelc(3056) 5 days ago [-]

In addition to properly escaping inputs, use Content Security Policy headers to restrict the hosts that the browser executes JavaScript from (e.g., script-src). https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Co...
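
As a concrete illustration of that header, here is a minimal Python sketch using only the standard-library http.server (purely illustrative, not Tesla's stack; the policy value is an assumption for the example) that attaches a restrictive Content-Security-Policy to every response.

from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>internal dashboard</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html; charset=utf-8")
        # Even if an escaping bug lets markup through, this policy blocks
        # inline scripts and scripts loaded from other origins.
        self.send_header("Content-Security-Policy", "default-src 'self'; script-src 'self'")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8000), Handler).serve_forever()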

komali2(10000) 5 days ago [-]

I mentioned this to my coworkers who brought up something I hadn't thought of - would this be illegal in the USA via something such as CFAA? https://en.wikipedia.org/wiki/Computer_Fraud_and_Abuse_Act He technically accessed Tesla's dashboard without authorization, for example.

EdOverflow(4072) 5 days ago [-]

(Obligatory: I am not a lawyer)

This is what the 'safe harbor' that the author was referring to is supposed to cover.

> Tesla considers that a pre-approved, good-faith security researcher who complies with this policy to access a computer on a research-registered vehicle has not accessed a computer without authorization or exceeded authorized access under the Computer Fraud and Abuse Act ('CFAA'). [1]

*.teslamotors.com, which is where the blind XSS payload fired, is in scope and therefore the safe harbor covers that asset too. For more on bug bounty safe harbors, I would highly recommend taking a look at Amit Elazari's work at https://amitelazari.com/%23legalbugbounty-hof and https://github.com/edoverflow/legal-bug-bounty.

[1]: https://bugcrowd.com/tesla

jerf(3326) 5 days ago [-]

Tesla authorizes certain activities through their Bug Bounty program: https://bugcrowd.com/tesla

This is the first clause in the 'in scope' section, so it is not unauthorized.

It would be bad if he used this to just wander around in their website, though. Nobody's contested whether this is worth a $10,000 payout yet, but this seems a decent place to point out that using https://beefproject.com, you can use that XSS vulnerability as a reverse proxy back into Tesla's network and browse through the support site authenticated as the user currently accessing the XSS payload. This isn't just an XSS; it was an authentication bypass that a real attacker could have leveraged into access to that internal web site full of sensitive info in just a few minutes.

Johnny555(4123) 5 days ago [-]

Interestingly, the car returned the (current?) speed:

Speed: 81 mph

I wonder if that, coupled with the GPS info (which wasn't included in the data returned, but I assume the car knows it) would be sufficient to issue a speeding ticket if the government had access to the data?

dalore(10000) 5 days ago [-]

Probably not, if there is a +/- 10% error rate.

samirm(4163) 5 days ago [-]

But how would they know who to ticket? Just because your car is moving, doesn't mean you're the one driving it. If they cannot prove who was operating the vehicle at the time of infraction, they cannot issue a ticket.

cjbprime(2429) 5 days ago [-]

Your cellphone already has all of this information too. Google Maps will tell me both my current speed and the road's speed limit while driving with navigation on.

mstade(4156) 5 days ago [-]

I know in some jurisdictions at least you also need to identify the driver, because the ticket needs to be made out to whoever was speeding, which may or may not be the owner of the vehicle.

ars(3006) 5 days ago [-]

A car's self reported speed is not accurate enough - for example if you slip on gravel or ice, the reported speed would momentarily be higher.

danaur(10000) 5 days ago [-]

Some insurance companies are already doing this, I think: they attach devices to your car, and if you stay under the speed limit you get discounts on your payments.

TomMarius(10000) 5 days ago [-]

In my country, any device that issues fines has to be checked every month by the authorities and made unmodifiable without oversight.

Freestyler_3(10000) 5 days ago [-]

They would have to verify beforehand how accurate the speedometer is.

penagwin(4151) 5 days ago [-]

It REALLY depends on your government.

Michigan doesn't allow automated speed cameras that automatically issue tickets, while New York does.

Going off how Michigan operates, then no, the GPS info wouldn't be enough.

In another government such as China the answer is very likely 'Yes'.

simonebrunozzi(669) 5 days ago [-]

We should always, always applaud and praise companies that are at least this serious about bounty programs.

Two years ago, although I wouldn't call myself the deepest technical person on the planet, I found a terrible bug that exposed 1.1M records for a bay area startup. (edit: the bug was really easy to find, it was a form of URL injection. I couldn't even believe that bug was there in the first place).

I reached out to them multiple times, only to realize they were going to ignore me in perpetuity. I didn't even want money, I would have been happy just to see the bug fixed. (I never helped fix a bug that another company had). Nada.

A less scrupulous person would have sold that information and exposed data for 1.1M people.

I am not naming the company here, even though they would totally deserve it.

brailsafe(3641) 5 days ago [-]

Please become less scrupulous! If that bug isn't fixed, that's just another in a long line of disposable bay area startups run by rich careless people (certainly none of which lurk on HN) who treat sensitive customer information like used tissue. I'm sure there's a way to do it where you don't expose the data, but I'd think of it as a favour to a million people.

mkagenius(1885) 5 days ago [-]

Right now I know of 5-10 serious bugs (affecting 100 million plus users' data in total) in multiple startups in India. I have reported them and haven't heard back. The problem is especially severe in India.

SkyBelow(10000) 5 days ago [-]

>I am not naming the company here, even though they would totally deserve it.

I do wonder to what extent the culture itself of how we approach bugs is designed to benefit companies over consumers. That we avoid naming and shaming due to a chilling effect of blow back, that we have disclosure windows, that the legal framework for reporting bugs is so flaky, that we are all accustomed to bad security practices and getting our data hacked, it all feels like it is architected to benefit companies who rarely suffer from hacks (sometimes there is a significant cost, but that rarely outweighs the profits).

It reminds me of identity theft. The entire concept that you lost money because your identity was stolen from you, and that the bank (or other company) that fell for the fake identity isn't even a party to the actual crime, pushes the costs onto consumers. Instead of seeing the banks as the victims, and thus responsible for bearing the costs that aren't recoverable from the criminals, it is their customers who are. Thus it reduces the cost to the bank of poor identity management. An entire culture that offloads the costs of the bank's penny pinching onto consumers.

Another such example is when the early automotive industry pushed for people to view jaywalking as the crime, shifting blame onto pedestrians for being in the way of cars.

gingabriska(4158) 5 days ago [-]

But I wonder: what if a developer purposely plants a bug, then asks his friend to report it and split the bounty? It seems easy to take advantage of such programs internally.

solarkraft(3986) 5 days ago [-]

Please name them.

> they would totally deserve it.

They do. It is important to warn their customers about their practices. They had their chance and proved they're absolutely incompetent and shouldn't have anyone's data.

avgDev(10000) 5 days ago [-]

I also found a terrible bug recently, that could cost this company millions of dollars.

Basically, the company has physical stores and also sells stuff online. Stuff bought online can be returned in store. However, if you bought an item online which was on sale, you could return in store for the full amount. I returned a laptop which I bought online for $999 and received $1399 back.

I think it was due to the fact that the store runs on iSeries/AS400 and the website is in .Net. I happen to work with both, and I can imagine that there is a lot of pain to make the systems work together.

glandium(3838) 5 days ago [-]

If you have been able to access that data, chances are someone else has too. And the data might as well be considered as having leaked already. I wonder if the right course of action would be to send it to haveibeenpwned.

deckar01(4013) 5 days ago [-]

I once reverse engineered a Gmail worm found in the wild. The underlying exploit ended up being a security scan bypass in Google docs. I spent a lot of time submitting a bounty report, but I made one fatal mistake: I used URL redirection in the PoC. It was automatically rejected even though that was an example of content that the scan normally detects, not the actual vulnerability. It was closed as not eligible, then silently fixed a week later.

Edit: I checked the emails to refresh my memory. A human acknowledged that it was a flaw in the security scanner and forwarded it to the drive team, then a bot (AFAICT) determined that it was not eligible based on metadata in the report.

Edit 2: I did get one thing out of it. They sent me an invitation to a Bounty Craft event in Las Vegas during Def Con, which I was attending that year (likely the actions of another bot scraping the email list). I got there early and accidentally sat down in the Microsoft Security Response team's couch area while they were all up getting food. They were nice people. They realized I never picked up swag on the way in and someone took me back to the door to get it. Apparently since I was with one of the event organizers and they said 'you forgot to give him a t-shirt', they assumed I was staff and gave me a staff t-shirt. The event was 100% about how the sponsor companies were investing in automated fuzzing technologies and basically didn't need bug bounty hunters anymore. Slap in the face.

rossng(4105) 4 days ago [-]

I assume this company has customers in the EU. If the bug still exists today, try dropping a GDPR complaint to one of the European data regulators. Though they have limited resources, they have started taking these things pretty seriously [1] and will look _very_ unkindly on a failure to report the breach or address it.

[1] https://ico.org.uk/about-the-ico/news-and-events/news-and-bl...

iforgotpassword(10000) 5 days ago [-]

I'm wondering what the right approach is in such a situation. If they don't fix the leak, do you keep quiet or go public? Going public puts them under much more pressure to fix their shit; on the other hand, bad actors probably have more than enough time to scrape the data. But the other scenario bears the risk of some other bad actor also having discovered it and silently abusing the data. If the leak goes unfixed and the company grows, they might at some point be able to scrape data on ten times as many people.

So would you rather actively help leak 1M records to the public, or potentially have someone else get 10M a year later, while having nothing to do with it directly?

Thinking about it, you might try to contact a bigger tech news site to get the company's attention.

nkrisc(4160) 5 days ago [-]

If I was their user, I'd want to know if they were so carelessly exposing my data.

j0e1(4050) 5 days ago [-]

Tangentially, how long did it take to get the windshield fixed? I've heard horror stories about their service.

zlz123(10000) 5 days ago [-]

I'm yet to fix it because the crack isn't too bad yet. Their windshield replacement is through retailers who fit their standards and not Tesla directly, so I assume it won't be too bad, as all they have to do is ship the windscreen.





Historical Discussions: History and Effective Use of Vim (July 19, 2019: 553 points)
History and Effective Use of Vim (July 19, 2019: 5 points)
History and Effective Use of Vim (July 20, 2019: 2 points)
History and Effective Use of Vim (July 19, 2019: 2 points)
History and Effective Use of Vim (July 19, 2019: 2 points)

(567) History and Effective Use of Vim

567 points 2 days ago by begriffs in 1395th position

begriffs.com | Estimated reading time – 42 minutes | comments | anchor

This article is based on historical research and on simply reading the Vim user manual cover to cover. Hopefully these notes will help you (re?)discover core functionality of the editor, so you can abandon pre-packaged vimrc files and use plugins more thoughtfully.

To go beyond the topics in this blog post, I'd recommend getting a paper copy of the manual and a good pocket reference. I couldn't find any hard copy of the official Vim manual, and ended up printing this PDF using printme1.com. The PDF is a printer-friendly version of the files $VIMRUNTIME/doc/usr_??.txt distributed with the editor. For a convenient list of commands, I'd recommend the vi and Vim Editors Pocket Reference.

Table of Contents

History

Birth of vi

Vi commands and features go back more than fifty years, starting with the QED editor. Here is the lineage:

  • 1966 : QED ("Quick EDitor") in Berkeley Timesharing System
  • 1969 Jul: moon landing (just for reference)
  • 1969 Aug: QED -> ed at AT&T
  • 1976 Feb: ed -> em ("Editor for Mortals") at Queen Mary College
  • 1976 : em -> ex ("EXtended") at UC Berkeley
  • 1977 Oct: ex gets visual mode, vi

You can discover the similarities all the way between QED and ex by reading the QED manual and ex manual. Both editors use a similar grammar to specify and operate on line ranges.

Editors like QED, ed, and em were designed for hard-copy terminals, which are basically electric typewriters with a modem attached. Hard-copy terminals print system output on paper. Output could not be changed once printed, obviously, so the editing process consisted of user commands to update and manually print ranges of text.

By 1976 video terminals such as the ADM-3A started to be available. The Ex editor added an "open mode" which allowed intraline editing on video terminals, and a visual mode for screen-oriented editing on cursor-addressable terminals. The visual mode (activated with the command "vi") kept an up-to-date view of part of the file on screen, while preserving an ex command line at the bottom of the screen. (Fun fact: the h,j,k,l keys on the ADM-3A had arrows drawn on them, so that choice of motion keys in vi was simply to match the keyboard.)

Learn more about the journey from ed to ex/vi in this interview with Bill Joy. He talks about how he made ex/vi, and some things that disappointed him about it.

Classic vi is truly just an alter-ego of ex – they are the same binary, which decides to start in ex mode or vi mode based on the name of the executable invoked. The legacy of all this history is that ex/vi is refined by use, requires scant system resources, and can operate under limited bandwidth communication. It is also available on most systems and fully specified in POSIX.

From vi to vim

Being a derivative of ed, the ex/vi editor was intellectual property of AT&T. To use vi on platforms other than Unix, people had to write clones that did not share in the original codebase.

Some of the clones:

  • nvi - 1980 for 4BSD
  • calvin - 1987 for DOS
  • vile - 1990 for DOS
  • stevie - 1987 for Atari ST
  • elvis - 1990 for Minix and 386BSD
  • vim - 1991 for Amiga
  • viper - 1995 for Emacs
  • elwin - 1995 for Windows
  • lemmy - 2002 for Windows

We'll be focusing on that little one in the middle: vim. Bram Moolenaar wanted to use vi on the Amiga. He began porting Stevie from the Atari and evolving it. He called his port "Vi IMitation." For a full first-hand account, see Bram's interview with Free Software Magazine.

By version 1.22 Vim was rechristened "Vi IMproved," matching and surpassing features of the original. Here is the timeline of the next major versions, with some of their big features:

1991 Nov 2 Vim 1.14: First release (on Fred Fish disk #591).
1992 Vim 1.22: Port to Unix. Vim now competes with Vi.
1994 Aug 12 Vim 3.0: Support for multiple buffers and windows.
1996 May 29 Vim 4.0: Graphical User Interface (largely by Robert Webb).
1998 Feb 19 Vim 5.0: Syntax coloring/highlighting.
2001 Sep 26 Vim 6.0: Folding, plugins, vertical split.
2006 May 8 Vim 7.0: Spell check, omni completion, undo branches, tabs.
2016 Sep 12 Vim 8.0: Jobs, async I/O, native packages.

For more info about each version, see e.g. :help vim8. To see plans for the future, including known bugs, see :help todo.txt.

Version 8 included some async job support due to peer pressure from NeoVim, whose developers wanted to run debuggers and REPLs for their web scripting languages inside the editor.

Vim is super portable. By adapting over time to work on a wide variety of platforms, the editor was forced to keep portable coding habits. It runs on OS/390, Amiga, BeOS and BeBox, Macintosh classic, Atari MiNT, MS-DOS, OS/2, QNX, RISC-OS, BSD, Linux, OS X, VMS, and MS-Windows. You can rely on Vim being there no matter what computer you're using.

In a final twist in the vi saga, the original ex/vi source code was finally released in 2002 under a BSD free software license. It is available at ex-vi.sourceforge.net.

Let's get down to business. Before getting to odds, ends, and intermediate tricks, it helps to understand how Vim organizes and reads its configuration files.

Configuration hierarchy

I used to think, incorrectly, that Vim reads all its settings and scripts from the ~/.vimrc file alone. Browsing random "dotfiles" repositories can reinforce this notion. Quite often people publish monstrous single .vimrc files that try to control every aspect of the editor. These big configs are sometimes called "vim distros."

In reality Vim has a tidy structure, where .vimrc is just one of several inputs. In fact you can ask Vim exactly which scripts it has loaded. Try this: edit a source file from a random programming project on your computer. Once loaded, run

:scriptnames

Take time to read the list. Try to guess what the scripts might do, and note the directories where they live.

Was the list longer than you expected? If you have installed loads of plugins the editor has a lot to do. Check what slows down the editor most at startup by running the following and look at the start.log it creates:

vim --startuptime start.log name-of-your-file

Just for comparison, see how quickly Vim starts without your existing configuration:

vim --clean --startuptime clean.log name-of-your-file

To determine which scripts to run at startup or buffer load time, Vim traverses a "runtime path." The path is a comma-separated list of directories that each contain a common structure. Vim inspects the structure of each directory to find scripts to run. Directories are processed in the order they appear in the list.

Check the runtimepath on your system by running:

:set runtimepath

My system contains the following directories in the default value for runtimepath. Not all of them even exist in the filesystem, but they would be consulted if they did.

~/.vim
The home directory, for personal preferences.
/usr/local/share/vim/vimfiles
A system-wide Vim directory, for preferences from the system administrator.
/usr/local/share/vim/vim81
Aka $VIMRUNTIME, for files distributed with Vim.
/usr/local/share/vim/vimfiles/after
The "after" directory in the system-wide Vim directory. This is for the system administrator to overrule or add to the distributed defaults.
~/.vim/after
The "after" directory in the home directory. This is for personal preferences to overrule or add to the distributed defaults or system-wide settings.

Because directories are processed in the order they appear in the list, the only thing that is special about the "after" directories is that they are at the end of the list. There is nothing magical about the word "after."

When processing each directory, Vim looks for subfolders with specific names. To learn more about them, see :help runtimepath. Here is a selection of those we will be covering, with brief descriptions.

plugin/
Vim script files that are loaded automatically when editing any kind of file. Called "global plugins."
autoload/
(Not to be confused with "plugin.") Scripts in autoload contain functions that are loaded only when requested by other scripts.
ftdetect/
Scripts to detect filetypes. They can base their decision on filename extension, location, or internal file contents.
ftplugin/
Scripts that are executed when editing files with known type.
compiler/
Definitions of how to run various compilers or linters, and of how to parse their output. Can be shared between multiple ftplugins. Also not applied automatically, must be called with :compiler
pack/
Container for Vim 8 native packages, the successor to "Pathogen" style package management. The native packaging system does not require any third-party code.

Finally, ~/.vimrc is the catchall for general editor settings. Use it for setting defaults that can be overridden for particular file types. For a comprehensive overview of settings you can choose in .vimrc, run :options.

Third-party plugins

Plugins are simply Vim scripts that must be put into the correct places in the runtimepath in order to execute. Installing them is conceptually easy: download the file(s) into place. The challenge is that it's hard to remove or update some plugins because they litter subdirectories in the runtimepath with their scripts, and it can be hard to tell which plugin is responsible for which files.

"Plugin managers" evolved to address this need. Vim.org has had a plugin registry going back at least as far as 2003 (as identified by the Internet Archive). However it wasn't until about 2008 that the notion of a plugin manager really came into vogue.

These tools add plugins' separate directories to Vim's runtimepath, and compile help tags for plugin documentation. Most plugin managers also install and update plugin code from the internet, sometimes in parallel or with colorful progress bars.

In chronological order, here is the parade of plugin managers. I based the date ranges on earliest and latest releases of each, or when no official releases are identified, on the earliest and latest commit dates.

  • Mar 2006 - Jul 2014 : Vimball (A distribution format and associated Vim commands)
  • Oct 2008 - Dec 2015 : Pathogen (Deprecated in favor of native vim packages)
  • Aug 2009 - Dec 2009 : Vimana
  • Dec 2009 - Dec 2014 : VAM
  • Aug 2010 - Nov 2010 : Jolt
  • Oct 2010 - Nov 2012 : tplugin
  • Oct 2010 - Feb 2014 : Vundle (Discontinued after NeoBundle ripped off code)
  • Mar 2012 - Mar 2018 : vim-flavor
  • Apr 2012 - Mar 2016 : NeoBundle (Deprecated in favor of dein)
  • Jan 2013 - Aug 2017 : infect
  • Feb 2013 - Aug 2016 : vimogen
  • Oct 2013 - Jan 2015 : vim-unbundle
  • Dec 2013 - Jul 2015 : Vizardry
  • Feb 2014 - Oct 2018 : vim-plug
  • Jan 2015 - Oct 2015 : enabler
  • Aug 2015 - Apr 2016 : Vizardry 2
  • Jan 2016 - Jun 2018 : dein.vim
  • Sep 2016 - Present : native in Vim 8
  • Feb 2017 - Sep 2018 : minpac
  • Mar 2018 - Mar 2018 : autopac
  • Feb 2017 - Jun 2018 : pack
  • Mar 2017 - Sep 2017 : vim-pck
  • Sep 2017 - Sep 2017 : vim8-pack
  • Sep 2017 - May 2019 : volt
  • Sep 2018 - Feb 2019 : vim-packager
  • Feb 2019 - Feb 2019 : plugpac.vim

The first thing to note is the overwhelming variety of these tools, and the second is that each is typically active for about four years before presumably going out of fashion.

The most stable way to manage plugins is to simply use Vim 8's built-in functionality, which requires no third-party code. Let's walk through how to do it.

First create two directories, opt and start, within a pack directory in your runtimepath.

mkdir -p ~/.vim/pack/foobar/{opt,start}

Note the placeholder "foobar." This name is entirely up to you. It classifies the packages that will go inside. Most people throw all their plugins into a single nondescript category, which is fine. Pick whatever name you like; I'll continue to use foobar here. You could theoretically create multiple categories too, like ~/.vim/pack/navigation and ~/.vim/pack/linting. Note that Vim does not detect duplication between categories and will double-load duplicates if they exist.

Packages in "start" get loaded automatically, whereas those in "opt" won't load until specifically requested in Vim with the :packadd command. Opt is good for lesser-used packages, and keeps Vim fast by not running scripts unnecessarily. Note that there isn't a counterpart to :packadd to unload a package.

For this example we'll add the "ctrlp" fuzzy find plugin to opt. Download and extract the latest release into place:

curl -L https://github.com/kien/ctrlp.vim/archive/1.79.tar.gz \
	| tar zx -C ~/.vim/pack/foobar/opt

That command creates a ~/.vim/pack/foobar/opt/ctrlp.vim-1.79 folder, and the package is ready to use. Back in vim, create a helptags index for the new package:

:helptags ~/.vim/pack/foobar/opt/ctrlp.vim-1.79/doc

That creates a file called "tags" in the package's doc folder, which makes the topics available for browsing in Vim's internal help system. (Alternately you can run :helptags ALL once the package has been loaded, which takes care of all docs in the runtimepath.)

When you want to use the package, load it (and know that tab completion works for plugin names, so you don't have to type the whole name):

:packadd ctrlp.vim-1.79

Packadd includes the package's base directory in the runtimepath, and sources its plugin and ftdetect scripts. After loading ctrlp, you can press CTRL-P to pop up a fuzzy find file matcher.

Some people keep their ~/.vim directory under version control and use git submodules for each package. For my part, I simply extract packages from tarballs and track them in my own repository. If you use mature packages you don't need to upgrade them often, plus the scripts are generally small and don't clutter git history much.

Backups and undo

Depending on user settings, Vim can protect against four types of loss:

  1. A crash during editing (between saves). Vim can protect against this one by periodically saving unwritten changes to a swap file.
  2. Editing the same file with two instances of Vim, overwriting changes from one or both instances. Swap files protect against this too.
  3. A crash during the save process itself, after the destination file is truncated but before the new contents have been fully written. Vim can protect against this with a "writebackup." To do this, it writes to a new file and swaps it with the original on success, in a way that depends on the "backupcopy" setting.
  4. Saving new file contents but wanting the original back. Vim can protect against this by persisting the backup copy of the file after writing changes.

Before examining sensible settings, how about some comic relief? Here are just a sampling of comments from vimrc files on GitHub:

  • "Do not create swap file. Manage this in version control"
  • "Backups are for pussies. Use version control"
  • "use version control FFS!"
  • "We live in a world with version control, so get rid of swaps and backups"
  • "don't write backup files, version control is enough backup"
  • "I've never actually used the VIM backup files... Use version control"
  • "Since most stuff is on version control anyway"
  • "Disable backup files, you are using a version control system anyway :)"
  • "version control has arrived, git will save us"
  • "disable swap and backup files (Always use version control! ALWAYS!)"
  • "Turn backup off, since I version control everything"

The comments reflect awareness of only the fourth case above (and the third by accident), whereas the authors generally go on to disable the swap file too, leaving one and two unprotected.

Here is the configuration I recommend to keep your edits safe:

" Protect changes between writes. Default values of
" updatecount (200 keystrokes) and updatetime
" (4 seconds) are fine
set swapfile
set directory^=~/.vim/swap//
" protect against crash-during-write
set writebackup
" but do not persist backup after successful write
set nobackup
" use rename-and-write-new method whenever safe
set backupcopy=auto
" patch required to honor double slash at end
if has('patch-8.1.0251')
	" consolidate the writebackups -- not a big
	" deal either way, since they usually get deleted
	set backupdir^=~/.vim/backup//
end
" persist the undo tree for each file
set undofile
set undodir^=~/.vim/undo//

These settings enable backups for writes-in-progress, but do not persist them after successful write because version control etc etc. Note that you'll need to mkdir ~/.vim/{swap,undo,backup} or else Vim will fall back to the next available folder in the preference list. You should also probably chmod the folders to keep the contents private, because the swap files and undo history might contain sensitive information.

One thing to note about the paths in our config is that they end in a double slash. That ending enables a feature to disambiguate swaps and backups for files with the same name that live in different directories. For instance the swap file for /foo/bar will be saved in ~/.vim/swap/%foo%bar.swp (slashes escaped as percent signs). Vim had a bug until a fairly recent patch where the double slash was not honored for backupdir, and we guard against that above.

We also have Vim persist the history of undos for each file, so that you can apply them even after quitting and editing the file again. While it may sound redundant with the swap file, the undo history is complementary because it is written only when the file is written. (If it were written more frequently it might not match the state of the file on disk after a crash, so Vim doesn't do that.)

Speaking of undo, Vim maintains a full tree of edit history. This means you can make a change, undo it, then redo it differently and all three states are recoverable. You can see the times and magnitude of changes with the :undolist command, but it's hard to visualize the tree structure from it. You can navigate to specific changes in that list, or move in time with :earlier and :later which take a time argument like 5m, or the count of file saves, like 3f. However, navigating the undo tree is one instance where I think a plugin, like undotree, is warranted.

Enabling these disaster recovery settings can bring you peace of mind. I used to save compulsively after most edits or when stepping away from the computer, but now I've made an effort to leave documents unsaved for hours at a time. I know how the swap file works now.

Some final notes: keep an eye on all these disaster recovery files, they can pile up in your .vim folder and use space over time. Also setting nowritebackup might be necessary when saving a huge file with low disk space, because Vim must otherwise make an entire copy of the file temporarily. By default the "backupskip" setting disables backups for anything in the system temp directory.

Vim's "patchmode" is related to backups. You can use it in directories that aren't under version control. For instance if you want to download a source tarball, make an edit and send a patch over a mailing list without bringing git into the picture. Run :set patchmod=.orig and any file 'foo' Vim is about to write will be backed up to 'foo.orig'. You can then create a patch on the command line between the .orig files and the new ones.

Include and path

Most programming languages allow you to include one module or file from another. Vim knows how to track program identifiers in included files using the configuration settings path, include, suffixesadd, and includeexpr. The identifier search (see :help include-search) is an alternative to maintaining a tags file with ctags for system headers.

The settings for C programs work out of the box. Other languages are supported too, but require tweaking. That's outside the scope of this article, see :help include.

If everything is configured right, you can press [i on an identifier to display its definition, or [d for a macro constant. Also when you press gf with the cursor on a filename, Vim searches the path to find it and jump there. Because the path also affects the :find command, some people have the tendency to add '**/*' or commonly accessed directories to the path in order to use :find like a poor man's fuzzy finder. Doing this slows down the identifier search with directories which aren't relevant to that task.

A way to get the same level of crappy find capability, without polluting the path, is to just make another mapping. You can then press <Leader><space> (which is typically backslash space) then start typing a filename and use tab or CTRL-D completion to find the file.

" fuzzy-find lite
nmap <Leader><space> :e ./**/

Just to reiterate: the path parameter was designed for header files. If you want more proof, there is even a :checkpath command to see whether the path is functioning. Load a C file and run :checkpath. It will display filenames it was unable to find that are included transitively by the current file. Also :checkpath! with a bang dumps the whole hierarchy of files included from the current file.

By default path has the value ".,/usr/include,," meaning the working directory, /usr/include, and files that are siblings of the active buffer. The directory specifiers and globs are pretty powerful, see :help file-searching for the details.

In my C ftplugin (more on that later), I also have the path search for include files within the current project, like ./src/include or ./include .

setlocal path=.,,*/include/**3,./*/include/**3
setlocal path+=/usr/include

The ** with a number like **3 bounds the depth of the search in subdirectories. It's wise to add depth bounds where you can to avoid identifier searches that lock up.

Here are other patterns you might consider adding to your path if :checkpath identifies that files can't be found in your project. It depends on your system of course.

  • More system includes: /usr/include/**4,/usr/local/include/**3
  • Homebrew library headers: /usr/local/Cellar/**2/include/**2
  • Macports library headers: /opt/local/include/**
  • OpenBSD library headers: /usr/local/lib/\*/include,/usr/X11R6/include/\*\*3

See also: :he [, :he gf, :he :find.

Edit ⇄ compile cycle

The :make command runs a program of the user's choice to build a project, and collects the output in the quickfix buffer. Each item in the quickfix records the filename, line, column, type (warning/error) and message of each output item. A fairly idiomatic mapping uses bracket commands to move through quickfix items:

" quickfix shortcuts
nmap ]q :cnext<cr>
nmap ]Q :clast<cr>
nmap [q :cprev<cr>
nmap [Q :cfirst<cr>

If, after updating the program and rebuilding, you are curious what the error messages said last time, use :colder (and :cnewer to return). To see more information about the currently selected error use :cc, and use :copen to see the full quickfix buffer. You can populate the quickfix yourself without running :make with :cfile, :caddfile, or :cexpr.

Vim parses output from the build process according to the errorformat string, which contains scanf-like escape sequences. It's typical to set this in a "compiler file." For instance, Vim ships with one for gcc in $VIMRUNTIME/compiler/gcc.vim, but has no compiler file for clang. I created the following definition for ~/.vim/compiler/clang.vim:

" formatting variations documented at
" https://clang.llvm.org/docs/UsersManual.html#formatting-of-diagnostics
"
" It should be possible to make this work for the combination of
" -fno-show-column and -fcaret-diagnostics as well with multiline
" and %p, but I was too lazy to figure it out.
"
" The %D and %X patterns are not clang per se. They capture the
" directory change messages from (GNU) 'make -w'. I needed this
" for building a project which used recursive Makefiles.
CompilerSet errorformat=
	\%f:%l%c:{%*[^}]}{%*[^}]}:\ %trror:\ %m,
	\%f:%l%c:{%*[^}]}{%*[^}]}:\ %tarning:\ %m,
	\%f:%l:%c:\ %trror:\ %m,
	\%f:%l:%c:\ %tarning:\ %m,
	\%f(%l,%c)\ :\ %trror:\ %m,
	\%f(%l,%c)\ :\ %tarning:\ %m,
	\%f\ +%l%c:\ %trror:\ %m,
	\%f\ +%l%c:\ %tarning:\ %m,
	\%f:%l:\ %trror:\ %m,
	\%f:%l:\ %tarning:\ %m,
	\%D%*\\a[%*\\d]:\ Entering\ directory\ %*[`']%f',
	\%D%*\\a:\ Entering\ directory\ %*[`']%f',
	\%X%*\\a[%*\\d]:\ Leaving\ directory\ %*[`']%f',
	\%X%*\\a:\ Leaving\ directory\ %*[`']%f',
	\%DMaking\ %*\\a\ in\ %f
CompilerSet makeprg=make

To activate this compiler profile, run :compiler clang. This is typically done in an ftplugin file.

Another example is running GNU Diction on a text document to identify wordy and commonly misused phrases in sentences. Create a "compiler" called diction.vim:

CompilerSet errorformat=%f:%l:\ %m
CompilerSet makeprg=diction\ -s\ %

After you run :compiler diction you can use the normal :make command to run it and populate the quickfix. The final mild convenience in my .vimrc is a mapping to run make:

" real make
map <silent> <F5> :make<cr><cr><cr>
" GNUism, for building recursively
map <silent> <s-F5> :make -w<cr><cr><cr>

Diffs and patches

Vim's internal diffing is powerful, but it can be daunting, especially the three-way merge view. In reality it's not so bad once you take time to study it. The main idea is that every window is either in or out of "diff mode." All windows put in diffmode (with :difft[his]) get compared with all other windows already in diff mode.

For example, let's start simple. Create two files:

echo 'hello, world' > h1
echo 'goodbye, world' > h2
vim h1 h2

In vim, split the arguments into their own windows with :all. In the top window, for h1, run :difft. You'll see a gutter appear, but no difference detected. Move to the other window with CTRL-W CTRL-W and run :difft again. Now hello and goodbye are identified as different in the current chunk. Continuing in the bottom window, you can run :diffg[et] to get "hello" from the top window, or :diffp[ut] to send "goodbye" into the top window. Pressing ]c or [c would move between chunks if there were more than one.

A shortcut would be running vim -d h1 h2 instead (or its alias, vimdiff h1 h2) which applies :difft to all windows. Alternatively, load just h1 with vim h1 and then :diffsplit h2. Remember that fundamentally these commands just load files into windows and set the diff mode.
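
One everyday application of these primitives is comparing the buffer you're editing against its saved version on disk. Vim's help (:he :DiffOrig) suggests a command along these lines; here it is as a sketch you could drop into a vimrc:

" compare the in-memory buffer against the file on disk, using a scratch window
command! DiffOrig vert new | set buftype=nofile | read ++edit # | 0d_ | diffthis | wincmd p | diffthis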

With these basics in mind, let's learn to use Vim as a three-way mergetool for git. First configure git:

git config merge.tool vimdiff
git config merge.conflictstyle diff3
git config mergetool.prompt false

Now, when you hit a merge conflict, run git mergetool. It will bring Vim up with four windows. This part looks scary, and is where I used to flail around and often quit in frustration.

+-----------+------------+------------+
|           |            |            |
|           |            |            |
|   LOCAL   |    BASE    |   REMOTE   |
+-----------+------------+------------+
|                                     |
|                                     |
|             (edit me)               |
+-------------------------------------+

Here's the trick: do all the editing in the bottom window. The top three windows simply provide context about how the file differs on either side of the merge (local / remote), and how it looked prior to either side doing any work (base).

Move within the bottom window with ]c, and for each chunk choose whether to replace it with text from local, base, or remote – or whether to write in your own change which might combine parts from several.

To make it easier to pull changes from the top windows, I set some mappings in my vimrc:

" shortcuts for 3-way merge
map <Leader>1 :diffget LOCAL<CR>
map <Leader>2 :diffget BASE<CR>
map <Leader>3 :diffget REMOTE<CR>

We've already seen :diffget, and here our bindings pass an argument of the buffer name that identifies which window to pull from.

Once done with the merge, run :wqa to save all the windows and quit. If you want to abandon the merge instead, run :cq to abort all changes and return an error code to the shell. This will signal to git that it should ignore your changes.

Diffget can also accept a range. If you want to pull in all changes from one of the top windows rather than working chunk by chunk, just run :1,$+1diffget {LOCAL,BASE,REMOTE}. The "+1" is required because there can be deleted lines "below" the last line of a buffer.
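
If you do that often, the range can be hidden behind a tiny command of your own (a hypothetical convenience; DiffAll is not a built-in name):

" :DiffAll LOCAL (or BASE, REMOTE) replaces the merge buffer with that entire side
command! -nargs=1 DiffAll execute '1,$+1diffget' <q-args>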

The three-way merge is fairly easy after all. There's no need for plugins like Fugitive, at least for presenting a simplified view for resolving merge conflicts.

Finally, as of patch 8.1.0360, Vim is bundled with the xdiff library and can create diffs internally. This can be more efficient than shelling out to an external program, and allows for a choice of diff algorithms. The "patience" algorithm often produces more human-readable output than the default, "myers." Set it in your .vimrc like so:

if has('patch-8.1.0360')
	set diffopt+=internal,algorithm:patience
endif

Buffer I/O

See if this sounds familiar: you're editing a buffer and want to save it as a new file, so you :w newname. After editing some more, you :w, but it writes over the original file. What you want for this scenario is :saveas newname, which does the write but also changes the filename of the buffer for future writes. Alternately, the :file newname command will change the filename without doing a write.

It also pays off to learn more about the read and write commands. Because r and w are Ex commands, they work with ranges. Here are some variations you might not know about:

:w >>foo          append the whole buffer to a file
:.w >>foo         append current line to a file
:$r foo           read foo into the end of the buffer
:0r foo           read foo into the start, moving existing lines down
:.,$w foo         write current line and below to a file
:r !ls            read ls output into cursor position
:w !wc            send buffer to wc and display output
:.!tr 'A-Za-z' 'N-ZA-Mn-za-m'   apply ROT-13 to current line
:w|so %           chain commands: write and then source buffer
:e!               throw away unsaved changes, reload buffer
:hide edit foo    edit foo, hide current buffer if dirty
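
Because these commands take ranges, they combine nicely with filters and mappings. As a hedged example (assuming the standard Unix fmt(1) utility is available), this mapping reflows the paragraph under the cursor through an external formatter:

" reflow the current paragraph by piping it through fmt
nnoremap <Leader>f vip:!fmt<CR>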

Useless fun fact: we piped a line to tr in an example above to apply a ROT-13 cypher, but Vim has that functionality built in with the g? command. Apply it to a motion, like g?$.

Filetypes

Filetypes are a way to change settings based on the type of file detected in a buffer. They don't need to be automatically detected, though; we can enable them manually to interesting effect. An example is hex editing. Any file can be viewed as raw hexadecimal values. GitHub user the9ball created a clever ftplugin script that filters a buffer back and forth through the xxd utility for hex editing.

The xxd utility was bundled as part of Vim 5 for convenience. The Vim todo.txt file mentions they want to make it more seamless to edit binary files, but xxd can take us pretty far.

Here is code you can put in ~/.vim/ftplugin/xxd.vim. Its presence in ftplugin means Vim will execute the script when filetype (aka "ft") becomes xxd. I added some basic comments to the script.

" without the xxd command this is all pointless
if !executable('xxd')
	finish
endif
" don't insert a newline in the final line if it
" doesn't already exist, and don't insert linebreaks
setlocal binary noendofline
silent %!xxd -g 1
%s/\r$//e
" put the autocmds into a group for easy removal later
augroup ftplugin-xxd
	" erase any existing autocmds on buffer
	autocmd! * <buffer>
	" before writing, translate back to binary
	autocmd BufWritePre <buffer> let b:xxd_cursor = getpos('.')
	autocmd BufWritePre <buffer> silent %!xxd -r
	" after writing, restore hex view and mark unmodified
	autocmd BufWritePost <buffer> silent %!xxd -g 1
	autocmd BufWritePost <buffer> %s/\r$//e
	autocmd BufWritePost <buffer> setlocal nomodified
	autocmd BufWritePost <buffer> call setpos('.', b:xxd_cursor) | unlet b:xxd_cursor
	" update text column after changing hex values
	autocmd TextChanged,InsertLeave <buffer> let b:xxd_cursor = getpos('.')
	autocmd TextChanged,InsertLeave <buffer> silent %!xxd -r
	autocmd TextChanged,InsertLeave <buffer> silent %!xxd -g 1
	autocmd TextChanged,InsertLeave <buffer> call setpos('.', b:xxd_cursor) | unlet b:xxd_cursor
augroup END
" when filetype is set to no longer be 'xxd,' put the binary
" and endofline settings back to what they were before, remove
" the autocmds, and replace buffer with its binary value
let b:undo_ftplugin = 'setl bin< eol< | execute "au! ftplugin-xxd * <buffer>" | execute "silent %!xxd -r"'

Try opening a file, then running :set ft. Note what type it is. Then :set ft=xxd. Vim will turn into a hex editor. To restore your view, :set ft=foo where foo was the original type. Note that in hex view you even get syntax highlighting, because $VIMRUNTIME/syntax/xxd.vim ships with Vim by default.
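
If certain files should always open this way, the switch can be automated. A small sketch (the *.bin glob is only an illustration; adjust it to whatever you actually treat as binary):

" open *.bin files straight into the hex view provided by the ftplugin above
augroup binary-hex
	autocmd!
	autocmd BufRead,BufNewFile *.bin setlocal filetype=xxd
augroup END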

Notice the nice use of "b:undo_ftplugin" which is an opportunity for filetypes to clean up after themselves when the user or ftdetect mechanism switches away from them to another filetype. (The example above could use a little work because if you :set ft=xxd then set it back, the buffer is marked as modified even if you never changed anything.)

Ftplugins also allow you to refine an existing filetype. For instance, Vim already has some good defaults for C programming in $VIMRUNTIME/ftplugin/c.vim. I put these extra options in ~/.vim/after/ftplugin/c.vim to add my own settings on top:

" the smartest indent engine for C
setlocal cindent
" my preferred 'Allman' style indentation
setlocal cino=Ls,:0,l1,t0,(s,U1,W4
" for quickfix errorformat
compiler clang
" shows long build messages better
setlocal ch=2
" auto-create folds per grammar
setlocal foldmethod=syntax
setlocal foldlevel=10
" local project headers
setlocal path=.,,*/include/**3,./*/include/**3
" basic system headers
setlocal path+=/usr/include
setlocal tags=./tags,tags;~
"                      ^ in working dir, or parents
"                ^ sibling of open file
" the default is menu,preview but the preview window is annoying
setlocal completeopt=menu
iabbrev #i #include
iabbrev #d #define
iabbrev main() int main(int argc, char **argv)
" add #include guard
iabbrev #g _<c-r>=expand('%:t:r')<cr><esc>VgUV:s/[^A-Z]/_/g<cr>A_H<esc>yypki#ifndef <esc>j0i#define <esc>o<cr><cr>#endif<esc>2ki

Notice how the script uses "setlocal" rather than "set." This applies the changes to just the current buffer rather than the whole Vim instance.

This script also enables some light abbreviations. For example, I can type #g and press Enter to insert an include guard based on the current filename:

#ifndef _FILENAME_H
#define _FILENAME_H
/* <-- cursor here */
#endif

You can also mix filetypes by using a dot ("."). Here is one application. Different projects have different coding conventions, so you can combine your default C settings with those for a particular project. The OpenBSD source code follows the style(9) format, so let's make a special openbsd filetype. Combine the two filetypes with :set ft=c.openbsd on relevant files.

To detect the openbsd filetype we can look at the contents of buffers rather than just their extensions or locations on disk. The telltale sign is that C files in the OpenBSD source contain /* $OpenBSD: in the first line.

To detect them, create ~/.vim/after/ftdetect/openbsd.vim:

augroup filetypedetect
        au BufRead,BufNewFile *.[ch]
                \  if getline(1) =~ 'OpenBSD:'
                \|   setl ft=c.openbsd
                \| endif
augroup END

The Vim port for OpenBSD already includes a special syntax file for this filetype: /usr/local/share/vim/vimfiles/syntax/openbsd.vim. If you recall, the /usr/local/share/vim/vimfiles directory is in the runtimepath and is set aside for files from the system administrator. The provided openbsd.vim script includes a function:

function! OpenBSD_Style()
	setlocal cindent
	setlocal cinoptions=(4200,u4200,+0.5s,*500,:0,t0,U4200
	setlocal indentexpr=IgnoreParenIndent()
	setlocal indentkeys=0{,0},0),:,0#,!^F,o,O,e
	setlocal noexpandtab
	setlocal shiftwidth=8
	setlocal tabstop=8
	setlocal textwidth=80
endfun

We simply need to call the function at the appropriate time. Create ~/.vim/after/ftplugin/openbsd.vim:

call OpenBSD_Style()

Now opening any C or header file with the characteristic comment at the top will be recognized as type c.openbsd and will use indenting options that conform with the style(9) man page.

Don't forget the mouse

This is a friendly reminder that despite our command-line machismo, the mouse is in fact supported in Vim, and can do some things more easily than the keyboard. Mouse events work even over SSH thanks to xterm turning mouse events into stdin escape codes.

To enable mouse support, set mouse=n. Many people use mouse=a to make it work in all modes, but I prefer to enable it only in normal mode. This avoids creating visual selections when I click links with a keyboard modifier to open them in my browser.

Here are things the mouse can do:

  • Open or close folds (when foldcolumn > 0).
  • Select tabs (beats gt gt gt...)
  • Click to complete a motion, like d<click!>. Similar to the easymotion plugin but without any plugin.
  • Jump to help topics with double click.
  • Drag the status line at the bottom to change cmdheight.
  • Drag edge of window to resize.
  • Scroll wheel.

Misc editing

This section could be enormous, but I'll stick to a few tricks I learned. The first one that blew me away was :set virtualedit=all. It allows you to move the cursor anywhere in the window. If you enter characters or insert a visual block, Vim will add whatever spaces are required to the left of the inserted characters to keep them in place. Virtual edit mode makes it simple to edit tabular data. Turn it off with :set virtualedit=.

Next are some movement commands. I used to rely a lot on } to jump by paragraphs, and just muscle my way down the page. However the ] character makes more precise motions: by function ]], scope ]}, paren '])', comment ]/, diff block ]c. This series is why the quickfix mapping ]q mentioned earlier fits the pattern so well.

For big jumps I used to try things like 1000j, but in normal mode you can actually just type a percentage and Vim will go there, like 50%. Speaking of scroll percentage, you can see it at any time with CTRL-G. Thus I now do :set noruler and ask to see the info as needed. It's less cluttered. Kind of the opposite of the trend of colorful patched font powerlines.

After jumping around between tags, files, or within a file, there are some commands to get your bearings. Try :ls, :tags, :jumps, and :marks. Jumping through tags actually creates a stack, and you can press CTRL-T to pop one back. I used to always press CTRL-O to back out of jumps, but it is not as direct as popping the tag stack.

In a project directory that has been indexed with ctags, you can open the editor directly to a tag with -t, like vim -t main. To find tags files more flexibly, set the tags configuration variable. Note the semicolon in the example below that allows Vim to search the current directory upward to the home directory. This way you could have a more general system tags file outside the project folder.

set tags=./tags,**5/tags,tags;~
"                          ^ in working dir, or parents
"                   ^ in any subfolder of working dir
"           ^ sibling of open file

There are some buffer tricks too. Switching to a buffer with :bu can take a fragment of the buffer name, not just a number. Sometimes it's harder to memorize those numbers than remember the name of a source file. You can navigate buffers with marks too. If you use a capital letter as the name of a mark, you can jump to it across buffers. You could set a mark H in a header, C in a source file, and M in a Makefile to go from one buffer to another.

Do you ever get mad after yanking a word, deleting a word somewhere else, trying to paste the first word in, and then discovering your original yank is overwritten? The Vim registers are underappreciated for this. Inspect their contents with :reg. Yanked text always lands in register "0 (as well as the unnamed register), while deleted text rotates through registers "1 - "9. So "0p pastes your most recent yank even if you've deleted something since. The special registers "+ and "* can copy/paste from/to the system clipboard. They usually mean the same thing, except in some X11 setups that distinguish the primary selection from the clipboard.
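
If that rotation is still too much to keep in your head, a small mapping makes the yank register explicit (a hedged convenience, not a built-in):

" paste the most recent yank, even if deletions have happened since
nnoremap <Leader>p "0p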

Another handy hidden feature is the command-line window. It's a buffer that contains your previous commands and searches. Bring it up with q: or q/. Once inside you can move to any line and press enter to run it. However, you can also edit any of the lines before pressing enter. Your changes won't affect the original line (the new command will merely be added to the bottom of the list).

This article could go on and on, so I'm going to call it here. For more great topics, see these help sections: views-sessions, viminfo, TOhtml, ins-completion, cmdline-completion, multi-repeat, scroll-cursor, text-objects, grep, netrw-contents.




All Comments: [-] | anchor

bayesian_horse(10000) 1 day ago [-]

I've been using Vim for a couple of years now and I almost can't stand any text editor without VIM-like extension.

However, recently I took a liking to Visual Studio Code (with VIM bindings of course). Yes I know, it's terribly bloated and consumes RAM like nobody's business, but the Browser DOM arguably is the successor of terminal emulation in terms of ubiquitous interfaces, and VSC does use the additional power quite smartly. There are graphical hints and tweaks which are next to impossible to achieve in a terminal emulation.

I'll still use VIM all the time, especially remotely. But VSC does provide similar extensibility. I somehow wish there was something like VSC, based on web/electron, but more like a Texteditor, less like an IDE. And preferably not controlled by a huge corporation.

cvshepherd(10000) 1 day ago [-]

have you had a look at https://github.com/onivim/oni ? heard good things about it.

brynjolf(10000) 1 day ago [-]

There are just so many bugs with Vim implementation in VSC. For example the two undo systems can lose sync and randomly delete half the document.

uberduper(10000) 2 days ago [-]

I very recently set out to start bringing my vimrc with me when logging into remote hosts and came up with this.

  Host * !github.com
    RemoteCommand echo -e 'syntax enable\nset ts=2\nset sw=2\n' > ~/.tmp_vimrc && bash -c 'set -a; vi() { vim -u ~/.tmp_vimrc "$@"; }; set +a;bash -l'
    RequestTTY yes
It gets ugly when you get a lot of options in there.

It's possible to just curl or scp something in place via RemoteCommand above, or `source: https://foo.io/vimrc` but my security paranoia led me to this instead.

jacobparker(3526) 2 days ago [-]

Curl it, check the sha256 of the download vs a hardcoded one and then source it if they match?

The URL you curl could be a GitHub blob URL (theoretically immutable) but if you check the hash you're not trusting GH for anything other than availability.

jcutrell(4124) 2 days ago [-]

I'm deep into vim and have recently seen some things that colleagues are doing with VS Code. I have to admit, I'm tempted.

But I'm so used to vim + tmux now, it's hard to imagine working with something different.

rhizome(4133) 1 day ago [-]

The more I use VS Code, the more I want a way not to have to use the mouse anymore.

lbebber(3961) 1 day ago [-]

Do try vscodevim, I made the switch and I haven't looked back. Best of both worlds for me.

stank345(4169) 1 day ago [-]

Can you elaborate? I'm also a vim + tmux user

_hardwaregeek(10000) 2 days ago [-]

I realized why Vim has always felt a little off to me. The primary navigational commands are mostly on the right hand, which as a left handed person feels very weird to me. While Emacs commands favor neither hand (if anything they favor the left with C-f, C-b, C-a, C-e, C-x C-s, M-x, etc). Just a small observation.

nightkoder(10000) 2 days ago [-]

You may enjoy a dvorak keyboard. Up and down (j and k) are the c and v keys. Left and right (h and l) are j and p.

Accacin(10000) 2 days ago [-]

I'm a React Developer that uses Vim, and whilst I have reduced my .vimrc down a lot since I started playing with vim, I still use about 15 plugins such as Deoplete, tern, ale, fzf, a language pack (I'm always messing around learning new languages), and then a selection of Tim Pope addons.

At 120 lines, I consider my .vimrc quite light.. Although I'm not sure what others will think.

jeremyjh(3967) 2 days ago [-]

I used to shell into servers a lot and open vim to edit files, so I'm quite comfortable with a sparse configuration. But when I'm developing software, I want features, and these days I spend most of my time doing development. My current setup is I use Spacemacs for development work, but I use a very minimalist vim configuration for quick edits. This gives me the best of both worlds - very quick, responsive editor that I can use reliably and efficiently. And a good development experience that is easy on my fingers.

kazinator(3772) 2 days ago [-]

> Some of the clones:

>

> nvi - 1980 for 4BSD

> [ ... ]

> elvis - 1990 for Minix and 386BSD

This is incorrect; nvi is in fact a mid 1990's fork of Elvis, worked over for better POSIX compliance by Keith Bostic.

> vim - 1991 for Amiga

Though that was the first public release, Moolenaar had worked on it since 1988. It was based on Tim Thompson's Stevie, which had been released in 1987 (noted in the table).

begriffs(1395) 2 days ago [-]

Thanks for the correction, can you give me a more precise date for nvi? I can update the article.

Foober223(10000) 2 days ago [-]

> Oct 2010 - Feb 2014 : Vundle (Discontinued after NeoBundle ripped off code)

Looks like Vundle has an MIT license. rip off usually implies something negative or dishonest. Copying MIT licensed code is a normal and encouraged activity.

begriffs(1395) 2 days ago [-]

Sorry, the way I phrased that was sloppy. I should have said, 'main author abandoned, saying NeoBundle ripped off code.'

http://www.gmarik.info/blog/2014/why-i-stopped-contributing-...

Izkata(10000) 2 days ago [-]

Also has a few commits since then. The impression I get from the repo is that it's 'done', not 'discontinued'.

newman8r(2009) 2 days ago [-]

To anyone who hasn't tried it - http://www.vimgolf.com/ is probably the quickest and most fun way to take your skills to the next level.

smitty1e(10000) 2 days ago [-]

https://vimvalley.com/ isn't cheap, but you get what you pay for.

Deimorz(3232) 2 days ago [-]

I recommend the book 'Practical Vim' too: https://pragprog.com/book/dnvim2/practical-vim-second-editio...

I remember learning a fair amount from it when I read it a few years ago (and should probably read through it again at some point).

stirfrykitty(10000) 2 days ago [-]

Been using vim since 1998 and rarely stray unless I'm typing notes for something unimportant, and then I use Nano.

Back in the day when I was a Unix admin, we often worked in full screen terminals and when editing a config file didn't like having to close the vim instance to go look at something, so learned about this little gem:

:sh (go back to shell and do your thing and leave vim running)

Ctrl-d to return to intact and running vim instance.

As an aside, if you decide to use nano to edit config files, make sure you use nano -w (no wrap), otherwise you may find yourself with a non-bootable OS instance.

Zelphyr(4049) 2 days ago [-]

You can also do

    :! ls -a
to execute a shell command and see the results from within Vim.
miguelmota(4170) 2 days ago [-]

Been using vim for almost a decade and embarrassed to not have known about how to go back to vim after doing `:sh` , so thanks for sharing that. I use tmux all the time so I tend to switch to a pane running bash and `ctrl-b z` to toggle the pane fullscreen

Syssiphus(10000) 2 days ago [-]

Or just CTRL+z and then 'fg' to go back.

bch(10000) 2 days ago [-]

> :sh (go back to shell and do your thing and leave vim running)

In nvi (at least), one can also open a buffer and :script to run a shell inside vi and have all the yank/paste/navigation/all-the-things features of vi. Mind you need to i[nsert] or a[ppend] after the prompt to issue your commands.

magduf(10000) 2 days ago [-]

I wouldn't even use nano for typing notes; I'm so used to vim that I don't see why I wouldn't want its powers, even for something as simple as writing notes. I might very well want to reorder the notes, for instance: that's very easy in vim with dd/p. nano might have some Ctrl-key combo that does the same thing, but why bother learning that when I already know vim?

andrewstuart(1050) 2 days ago [-]

I started my programming career determined to be a vim guru and do all my programming with vim.

This was misguided and after a year of wasted productivity and wasted time fiddling with plugs and dealing with broken stuff (surely by my own hand) I switched to a professional IDE and I rapidly became a much better developer.

I use vim constantly now but in its most plain vanilla form, for the purpose of editing files when logged into Linux systems. That's all I use it for.

I really wish however that every Linux system had a clone of the old DOS edit command, which was beautifully simple and straightforward and met most needs highly intuitively.

As a sidenote: can I just say that PyCharm is an incredible IDE and I can recommend it heartily to anyone. If your job is programming then it is many times over worth paying the money for the professional edition.

chimpburger(10000) 2 days ago [-]

You can install IdeaVim in PyCharm. I've been using IdeaVim with IntelliJ for 6 years and could never go back to non-vim style editing. This plugin provides the best of both worlds.

ryacko(10000) 2 days ago [-]

I use mcedit, it is part of midnight commander, usually available as mc.

Excellent for editing config files through the terminal.

SEJeff(2224) 2 days ago [-]

To each their own! I'm a very effective python AND go programmer / sysadmin who uses vim for most all of my IDE stuff. I've also managed Linux professionally since 2005 and played with it since 1998 so it isn't for everyone.

My main vim plugins for this use case:

* go.vim * nerdtree * git-gutter * YouCompleteMe * syntastic + flake8 * black.vim # python equiv of gofmt

This is only for my workstations. I prefer vanilla configs for servers.

AlexeyBrin(404) 2 days ago [-]

> I really wish however that every Linux system had a clone of the old DOS edit command which was beautifully simply and straightforward and met most needs highly intuitively.

The nano editor experience is pretty close to the old DOS edit feeling.

feiss(10000) 2 days ago [-]

I used vim for years, but then Sublime appeared.. although I miss the snappiness of vim, and wouldn't mind to come back. Specifically, I'd miss these nice features of Sublime:

1. Multiple cursors!! (and how easy is to use them)

2. Real-time preview of regex search

3. Package Manager (easy installation and discoverability of plugins)

4. Jump to file, jump to function, jump to css selector.. (ctrl+p, ctrl+r, using fuzzy search)

5. Project tree in small font (many files at sight)

Edit: Here there is a bunch of good stuff: https://medium.com/@huntie/10-essential-vim-plugins-for-2018...

jsjohnst(3762) 2 days ago [-]

1. Exists via plugin

2. Exists built in, but need to enable options

3. Exists if you install a plugin manager that has those features

4. Exists via plugin

5. Exists via plugin

kgwxd(2488) 1 day ago [-]

Sublime is really nice, I bought a license for V2. But after a lot of radio silence from the dev, I realized I never want to invest years building muscle memory and becoming dependent on specific plugins only to potentially have it disappear one day. Proprietary software is just not a good option for fundamental tools like a text editor.

dllthomas(2807) 2 days ago [-]

> 2. Real-time preview of regex search

Doesn't vim have this by default these days?

haolez(10000) 2 days ago [-]

I've grown quite dependent on multiple cursors myself. Supposedly, Kakoune is a vim-like editor with good multiple cursors support.

http://kakoune.org/

jLyrrad(10000) 2 days ago [-]

I remember the first time I ran `vimtutor` in my terminal. A lot changed since then! Although I just stick with defaults since I have my IDE for my day-to-day job.

dllthomas(2807) 2 days ago [-]

Also of note, the first part of https://vim-adventures.com is fun. I expect the rest is too, but I got sticker shock when I went to buy. Probably worth it for someone who doesn't already know vim well, though.

2bitencryption(10000) 2 days ago [-]

After years of a love/hate relationship with Vim (I love what it does, but hate configuring it), I had an epiphany: by sticking with the defaults, whether they are my preferred choices or not, I can instantly understand how to use Vim in any environment. Once you get used to biting the bullet and hitting escape instead of jj, or ctrl+c, it just works, everywhere.

If you abandon the urge to pimp out your Vim with a billion plugins, and just use it raw, it's a kind of editor Nirvana. Let go of your desires and live without want :)

Of course, that's just me. I understand why someone would want to turn Vim into their personalized powerhouse editor with IDE powers, with their .vimrc a 'git pull' away.

But I've learned to live with the humble defaults, and it's made life easy.

Diederich(10000) 2 days ago [-]

Yup.

I've been using vi[m] on an almost daily basis since about 1989...wow, that's 30 years.

In those decades, I have resisted putting anything in my .vi[m]rc except:

set tabstop=4

set expandtab

set shiftwidth=4

set shiftround

That's it. And I agree with you, it's a very happy place for me.

87zuhjkas(10000) 1 day ago [-]

> Once you get used to biting the bullet and hitting escape instead of jj ... it just works, everywhere.

Well you can also invest the small amount of time to type in:

:imap jj <Esc>

WorldMaker(10000) 2 days ago [-]

My biggest reason to not use the defaults has been keyboard layout. I've bounced back and forth between a version that deeply changed the Vim keys and one that tried to minimally change keys as best as possible. I'm back on the 'minimal change' layout preference, but the problem is still 'minimal change' is not 'no change' and definitely not quite 'nirvana'.

darkpuma(10000) 2 days ago [-]

I find that the muscle memory for my personal config is keyed off the visual appearance of my preferred color scheme. So I have no problem context switching between personal config and default config, provided the color schemes are right for each.

(Or anyway, that was the case until recently when I switched to evil-mode. I'm totally helpless in default config emacs.)

colordrops(10000) 2 days ago [-]

I don't find myself needing to use Vim in random environments enough to gnash my teeth about how it doesn't work as expected. Even for those who do, it's not hard to pimp out Vim while leaving the baseline behavior alone. Any new functionality I add I make sure not to override existing frequently used key bindings.

HeavyStorm(10000) about 22 hours ago [-]

I usually stick to this rule almost everywhere, with just a few exceptions for some shortcuts.

Reconfiguring means trouble when, for instance, helping a colleague or teaching someone, as well as the overhead whenever the configs are lost.

root_axis(10000) 2 days ago [-]

This makes sense if you don't use vim as an IDE, but if you do, configurations and plugins are essential. Of course, you wouldn't expect to have a fully configured jetbrains IDE when you SSH into a remote server, but that doesn't mean you can't use one on your local development machine.

coldtea(1198) 2 days ago [-]

>After years of a love/hate relationship with Vim (I love what it does, but hate configuring it), I had an epiphany: by sticking with the defaults, whether they are my preferred choices or not, I can instantly understand how to use Vim in any environment.

Which I've never found much important.

If you know Vim basics (modes, basic commands, movements, etc), then you can use vim in any environment that you SSH to or happen to have to work temporarily on.

But why wouldn't you want the Vim on your main driver laptop, which you use every day to not have a nice custom setup, and some good third party plugins (e.g. file search, linting, etc)?

It's not like using them will make you forgot the basic commands, movements, etc, to use vim in some unknown remote machine.

And it's also not like you should optimize for the random remote machine you'll get in, and not where you spend hours programming every day.

So unless one is a sysadmin and has no real 'main' machine he uses vim in, this makes no sense to me...

entelechy0(4135) 1 day ago [-]

I'm left-handed and depend on lefty vim nav, but otherwise I agree

pessimizer(2041) 2 days ago [-]

I'm the same way, and I hit the same spot with my desktop OS. I don't need any more individualization than I can put on a machine in 5-10 minutes.

The reason vi(m) is awesome is because it's everywhere and it's (pretty much) the same.

ifoundthetao(10000) 2 days ago [-]

I am right there with you. I did the same thing, and now when I jump into Vim from anywhere, it's so much easier.

Though I really do like having the Capslock key remapped to escape.

danaur(10000) 2 days ago [-]

Depending on your job there's no reason that you would ever need to use defaults. You should optimize your editor for your day-to-day use case

ggm(4001) 2 days ago [-]

this +

nvi had more direct lineage to true vi, Keith Bostic was in the Bill Joy headspace, but it kind-of got stuck. Vim diverged in ways which used to annoy me but I've come to accept.

I won't mention 'the other editor family' here but it is interesting to see the things in that one, which I miss in this one (for me at least) which are principally the ability to cut-paste regions, and a set of things which became screen and tmux. Now there IS screen and tmux, that matters less.

Not having 2D region cut-paste is a small price to pay for the joys of a modal editor.

arvinsim(10000) 1 day ago [-]

Still have to configure fonts. Can't stand the defaults.

dr01d(10000) 1 day ago [-]

I would agree, but then I found this: https://vim-bootstrap.com/ and it is sooooo nice. I no longer have to think about configuring vim or managing plugins because it is done. Esp for golang.

Qwertystop(4167) 2 days ago [-]

I go for a heap of plugins but don't remap the things that start off built-in, so I get the muscle-memory for the defaults when I need a remote shell somewhere, but still the useful plugins on my own computer.

ahallock(4165) 2 days ago [-]

I try to keep it minimal as well, but some plugins are too good not to install for my daily work. TabNine, Auto Pairs, Vim Surround, Endwise, and CtrlP are a few that come to mind.

WhatIsDukkha(10000) 2 days ago [-]

If you want portable then yeah empty out your .vimrc and get fluent with vims superpowers as they are.

If you want 'personalized powerhouse editor with IDE powers' you are FAR better off going with Emacs/Evil.

The Emacs plugin ecosystem is way ahead of Vim.

Plugins are clean, stable and compose well together. That was never my experience with Vim.

Also unlike Vimscript, Elisp is pretty clean useable language that many Emacs users actually learn. I'd bet the vast majority of vim users never get past step 1 on learning Vimscript?

Emacs/Evil for the workstation and clean Vim for the random server side work.

derefr(3632) 2 days ago [-]

I want syntax highlighting and correct tab behavior, though, and I use a language that not even Neovim has built-in syntax support for.

cgag(3785) 2 days ago [-]

I use a bunch of plugins and I've never had trouble remembering the defaults in the odd case I'm unable to use my own config.

I don't get why the top comment on all vim threads is always recommending using poor defaults your whole life so you can avoid learning to configure it or avoid a second of confusion when you're ssh'd somewhere.

Maybe everyone else besides me spends all their time ssh'd into random servers.

stefco_(10000) 2 days ago [-]

I've taken this approach halfway and only install plugins/config that I can easily do without. Things like git-gutter [0] and a line at the 80th column [1] are great, but I don't depend on them when I log into a remote machine. If I really want some features, I have them somewhat organized in my .vimrc [2]. It's a great compromise; I find that I haven't thought about major editor configuration in a couple of years.

[0] https://github.com/airblade/vim-gitgutter

[1] set colorcolumn=80 textwidth=79

[2] https://github.com/stefco/dotfiles/blob/master/linkfiles/.vi...

[edit] Just want to add that I 100% agree that erring towards vanilla vim is a great idea; I used to use emacs and switched to vim precisely because the starting point fit my use style far better than vanilla emacs.

styfle(2970) 1 day ago [-]

This is my philosophy with most software. I expect to get 80-90% of the way with zero configuration and then maybe for some advanced features I'll touch the config and personalize some things.

The added benefit to becoming proficient with default configuration is that it is really easy to upgrade to a new computer and start from scratch.

And of course with vim, the benefit is the ubiquity when using ssh or docker or VMs which typically use the defaults.

davidgerard(400) 1 day ago [-]

I tend to only mess slightly with my .vimrc, but I get fancy in my .gvimrc for this reason.

komali2(10000) 2 days ago [-]

On a lot of machines, C-[ (control + left bracket) maps to escape. But some programs don't seem to acknowledge this? Like in firefox, doing that changes tabs, but in Emacs + evil mode it's how I 'ESC.'

topmonk(4160) 2 days ago [-]

Just a quick note, you can use Ctrl +[ rather than Escape. Saves your fingers from constantly having to reach for that corner.

mythrwy(10000) 2 days ago [-]

I'd take a tremendous hit in productivity if I followed that advice.

The amount of time I spend in Vim not customized is negligible, and I can always hit Esc in those cases with a not even in the ballpark time saving over losing snips and all the custom plugins.

reidrac(3602) 1 day ago [-]

Some defaults aren't very usable, although 'Debian defaults' are very close. I had the same approach, but lately I've decided to add plugins or configuration to 'enhance' and never 'replace', and I have never been happier as I'm now with vim.

I also decided to invest some time to move to vim + tmux instead of using gvim and now that I'm used to the extras I get with tmux (yep, terminal support in vim 8 is nice but not as flexible), I think I can't go back!

TransAMrit(10000) 1 day ago [-]

I do exactly the same thing. It really relieves the stress of working with an unfamiliar environment, and you become that much more effective with stock vim!

asix66(10000) 1 day ago [-]

I'm a long time VI(m) user, who started with emacs. I'm proficient in both editors, but use vim as my daily driver.

When I started my career, my mentor was an emacs user, and the reason I started with emacs. I became very proficient with the editor, learning how to handle multiple buffers, window splits, rectangle selects, etc. At that time, I didn't know vi. Until I started working on another project, where the project manager made a statement to me that started my migration to vim. I was logged into a machine that didn't have emacs, and I was complaining about it. "Just use vi", he said. "I don't know vi", I said. Then came the statement that stuck with me, "Son, you got to know vi just enough so you don't look stupid."

So I learned vi, just enough to not look stupid, and eventually, as proficiency set in, turned to vi more often than emacs.

Now vim is my primary editor, and I can use emacs just enough to not look stupid. ;)

Back to defaults. I customize my vim with colorized syntax highlighting, and several plugins. But at its core my vim is still default with respect to key bindings. These are the defaults that matter. If you leave key bindings alone, then the core of vim use is identical anywhere you use vim, with or without your personal vimrc. You can make a "stick to the core defaults" argument for just about any editor. Just keep the core defaults, and customize away on systems you use daily. Then when you use the editor without your personalizations, the editor still behaves as usual.

My editor life is also easy, but with some more creature comforts for daily use.

cryptonector(4145) 1 day ago [-]

There's only one thing I need to improve my VIM experience: a way to set style options (and search paths) for each git workspace. I've yet to find a non-hairy way to do this. Help!

Seb-C(10000) 1 day ago [-]

.editorconfig files works well for me for the essential styles. And it can be useful to the other team members as well.

emsy(3600) 2 days ago [-]

I rarely ever use actual vim, mostly when I'm in the terminal. But I have vim plugins for most IDEs I use (VS code, intellij, XCode). It makes editing so much faster. When I have to get by without it I feel as if someone put weights around my wrists. The reason I don't use Vim is because it's frankly not a smooth experience for most languages (unless you fiddle around a lot, and even then I found Ide+vim plug-in superior). I do hope Neovim will solve this, though I didn't test it because the last time I checked windows support was experimental.

deergomoo(10000) 2 days ago [-]

After a several month effort to learn vim bindings and use them full-time, I eventually ended up weaning myself back off because the plugins for the editors I actually use all seemed to come with considerable downsides.

vscode-vim caused odd performance issues, and I encountered significant bugs with the undo stack (namely hitting 'u' would sometimes take out the last 10-15 changes instead of just one). IDEAvim would randomly go completely unresponsive for me, sometimes requiring just re-opening the file and sometimes requiring a restart of the entire IDE. And last I checked the Xcode plugin requires re-signing the entire binary with a self-signed certificate because the new plugin system won't support modal editing.

I was quite happy with neovintageous in Sublime, as well as Sublime's excellent performance in general, but no matter how many plugins I installed it could never seem to come close to the smarts of the other tools I was using.

Ultimately I just arrived at the conclusion that I'm never going to be happy with any editor and decided to make the best of what I could with a consistent set of keybindings across the tools I use.

That said, every time I edit my hosts file or something on a remote server and reach for vim, I wonder if I made the wrong choice. I'm really hoping Language Server Protocol becomes the standard and we reach the point where it no longer matters what editor we use.





Historical Discussions: The Raspberry Pi 4 needs a fan (July 17, 2019: 563 points)

(563) The Raspberry Pi 4 needs a fan

563 points 4 days ago by geerlingguy in 789th position

www.jeffgeerling.com | Estimated reading time – 10 minutes | comments | anchor

The Raspberry Pi Foundation's Pi 4 announcement blog post touted the Pi 4 as providing 'PC-like level of performance for most users'. The Foundation even offers a Raspberry Pi 4 Desktop Kit.

The desktop kit includes the official Raspberry Pi 4 case, which is an enclosed plastic box with nothing in the way of ventilation.

I have been using Pis for various projects since their introduction in 2012, and for many models, including the tiny Pi Zero and various A+ revisions, you didn't even need a fan or heatsink to avoid CPU throttling. And thermal images or point measurements using an IR thermometer usually showed the SoC putting out the most heat. As long as there was at least a little space for natural convection (that is, with no fan), you could do almost anything with a Pi and not have to worry about heat.

The Pi 4 is a different beast, though. Not only does the CPU get appreciably hot even under normal load, there are a number of other parts of the board that heat up to the point they are uncomfortable to touch.

Here's a thermal image taken with my Seek thermal imager highlighting the parts of the board generating the most heat after 5 minutes at idle:

The CPU/System-on-a-Chip (SoC) was around 60°C as well, but the metal casing helps spread that heat around the perimeter pretty well, and in the IR image, the heat radiating off the top of the CPU is somewhat masked by the reflective metal surface. You might notice, however, the bright white areas on the lower left. That's all the power circuitry coming off the USB-C power input. That area of the board is almost always putting out a pretty large chunk of heat, and the components in this area don't put off heat as well as the metal-bodied CPU.

Finally, this image was taken at idle, but if you have any activity on the USB ports, the USB controller chip on the right (that small red spot before you get to the far right of the image) lights up bright white and gets to be 60-70°C as well. A firmware update for the Pi 4 may help keep that chip a little cooler, but it will still get hot under load.

So imagine if you're truly using the Pi 4 as a desktop replacement, with at least one external USB 3.0 hard drive attached, WiFi connected and transferring large amounts of data, a USB keyboard and mouse, a few browser windows open (the average website these days might as well be an AAA video game with how resource-intense it is), a text editor, and a music player. This amount of load is enough to cause the CPU to throttle in less than 10 minutes, in my testing.

Why is throttling bad? Two reasons: First, throttling prevents you from getting the full CPU speed the Pi can offer, meaning things that you're doing will take longer. Second, it indicates that parts inside the Pi (usually just CPU, but likely other parts) are getting hot enough to reach their own internal safety limits. If you run computing hardware at its thermal capacity for long periods of time, this will cause more wear on the parts than if they are run well inside their limits.

If you're just doing extremely light browsing, reading Wikipedia and the like, it might not hit the point where it throttles. But watching videos, scrolling through more complex sites, and switching applications frequently gets the CPU up to 80°C pretty fast, especially if it's closed up inside a plastic box with no ventilation.

For my more formal testing, I started running stress --cpu 4 to make the CPU do lots of work, continuously. After a couple minutes, using vcgencmd measure_temp and vcgencmd get_throttled, I was able to see the CPU start throttling as it hit 80°C (176°F):

To install stress, run sudo apt-get install -y stress. You can monitor the current temperature in a terminal window by running the command watch -n 1 vcgencmd measure_temp. When the CPU throttles, the command vcgencmd get_throttled outputs 0x20002 (the first 2 indicates the Pi has throttled at some point between the prior boot and now; the last 2 indicates the Pi is currently throttling the CPU frequency).

Modding the official Pi case to have a fan

tl;dr: Watch the video (skip to 9:15 for the most exciting part, or read through the instructions below):

Without any ventilation, it's kind of a little plastic oven inside the Pi 4 case. A heat sink might help in some tiny way, but that heat has nowhere to go! So I decided to follow the lead of Redditor u/CarbyCarberson and put a fan in the top cover.

  1. I purchased a Pi-Fan (came in a 2-pack) from Amazon, since it fits nicely above the board and comes with the proper screws for mounting. It plugs directly into the Pi's GPIO pins and needs no modifications.
  2. The easiest way to make a hole for the fan is to use a 1 1/8" hole saw, drilling slowly.
  3. Put the hole saw on your drill, and either use the lower speed setting, or hold the trigger gently, and apply light pressure drilling while holding the Pi case steady.
    • If you spin the hole saw too fast, you'll either lose control and scratch up your Pi case, or burn the plastic and make it look pretty ugly.
  4. Drill directly over the center of the top of the case (make sure the Pi is not inside!), in the middle of the area opposite the Pi logo (this way the fan won't be hitting the network or USB jacks).
  5. Use a file and/or sandpaper (up to 600 grit for a really nice finish) to smooth out the cut after you drill through.
  6. Place the fan on top of the hole, lining it up as closely as you can, then use a mechanical pencil or some other method to mark where the screw holes are around the fan's perimeter.
  7. Use a 7/64" drill bit to drill out the fan screw holes you just marked.
  8. Use sandpaper to knock down the burrs from those screw holes on the inside.
  9. Place the fan under the top of the case, label sticking up (so you can see it through the top of the Pi case), and use the screws and nuts to secure the fan to the case.

When you're putting the Pi back into the case, put it in as normal, then connect the red wire from the fan to the Pi's pin 4 (5V), and the black wire to the Pi's pin 6 (ground). (Reference: GPIO pinout diagram.) Next time you plug in the Pi, the fan should start spinning right away. If it doesn't, either something is physically blocking the fan blades from turning, you have the fan plugged into the wrong GPIO pins, or your fan's a dud!

If you don't have a drill and/or don't want to purchase a 1 1/8" hole saw, you can also use a Dremel tool, though it takes a lot more care to drill through plastic without burning it or wrecking the rest of the top cover! Use a slow speed, and drill out a hole slightly less than the final size using a Dremel drill bit. Then use a round sanding bit to slowly cut back the last 1/8" or so of plastic to reach the fan's outline. Then use the Dremel drill bit again to drill out the screw holes. Not as simple as using a real drill, but it can work.

Temperatures after installing a fan

After installing the fan, I booted the Pi and ran stress --cpu 4 and let it go for an hour. The entire time, the CPU's temperature stayed at or under 60°C (140°F), a full 20°C lower than the throttling point:

I have also been running a Kubernetes cluster with four Raspberry Pi 4's (see more: Raspberry Pi Dramble), and with the built-in fans on the official PoE HAT, those Pi's processors do not throttle either, even when I'm running a suite of tests which stresses the entire system for an hour or more. The area around the Pis gets fairly warm (since the fans are moving that heat out), but that's a good thing; the heat can dissipate into the surrounding air instead of forming a bubble around the board itself!

The Pi-Fan that I am using produces 50 dB of sound at a distance of one foot (30 cm), so it's not silent, but it's actually a bit quieter than the little fans on the PoE HAT, which also have a higher pitched 'whine' to them that I found more distracting. When it's running, the fan also draws 80 mA of power, continuously, so if you're counting milliamps when supplying power to the Pi (e.g. when running off solar or battery), keep that in mind!

The Pi 4 needs a fan

A heatsink installed inside the Pi 4's official case will do precious little to avoid throttling the CPU (and likely other components, as they all get very hot). A case like the 'Flirc' heatsink-as-a-case might help a little, though it still only offers passive heat dissipation. The Pi 3 B+ was the first model I used a fan with for intensive computing (e.g. running a Kubernetes cluster), but it could be used for light computing fanless. The Pi 4 pretty much demands a fan, and I'm amazed that the Pi 4 case doesn't even include holes for better natural heat convection.

Here's to hoping the official Pi 4 B+ case includes some active ventilation, if we're going to keep increasing the speed and energy consumption of not only the SoC but all other Pi subsystems. Until then, I'll be modding my cases to include fans, or using something like the PoE board, with built-in ventilation, to keep my Pis cool.

There are some other options which may be even easier than modifying the official case, like the Fan Shim from Pimoroni or purchasing a 3rd party case with a fan built in. But this option was easy enough and all I needed to complete the project was a $4 fan and a $7 hole saw drill bit (which I can use for other projects in the future).




All Comments: [-] | anchor

stcredzero(3149) 4 days ago [-]

Raspberry Pi is becoming like a NUC. Meanwhile, there are people using Intel NUCs as fanless Hackintoshes:

https://www.youtube.com/watch?v=tUUP8K3RqAo

Fnoord(3910) 4 days ago [-]

Except the Raspberry Pi is an ARM SOC (ARM64 in this case), while Intel NUCs are x86-64. macOS requires x86-64 for now. If Apple ports it to ARM, I really doubt they'll make it easy to run on other ARM hardware.

novaRom(3993) 4 days ago [-]

I cannot find a data sheet for its CPU (BCM2711B0). Is Pi4 actually an open hardware system?

tssva(10000) 4 days ago [-]

None of the Pi's have been open hardware systems.

malensek(10000) 4 days ago [-]

Even the previous-gen Pis would easily throttle under load without a heat sink (although usually installing a decent heat sink was enough to resolve the problem, assuming decent airflow). As the article notes, the heat was mainly from the SoC in the past. The Pi 4 is on a whole different level indeed.

I imagine the logic here is the same as with many other Pi accessories: if you need it, you'll buy it or get it as part of a bundle. In some cases throttling is not a huge issue. But we're a far cry from the simple plug-and-play Pis of the past... Another issue is power -- you can't simply run the newer Pis reliably off any old cell phone charger you have laying around.

AnIdiotOnTheNet(3867) 4 days ago [-]

Yeah, I'm kinda wondering just what the Pi is supposed to be at this point. Is it still an inexpensive low power device for teaching basic computer science? Because it seems like it's more for making Kodi boxes or the world's most powerful LED blinkers.

geerlingguy(789) 4 days ago [-]

That's just it, though. The official Pi power supply is adequate and good for powering the Pi in any condition, but the case is a far cry from adequate. At best, it just toasts everything inside more evenly, at worst it contributes to throttling because now the heat from every other part of the Pi forms a hot thermal blanket over the CPU.

I'm amazed they didn't at least add a few passive ventilation holes or slots, somewhere. It gets crazy hot inside the case.

vvanders(3098) 4 days ago [-]

Your phone will do this too, that's why sandwiching icepacks between your iPhone makes it do an iCloud backup faster.

Lots of vendors disable thermal limiting during benchmarks to get better numbers.

opmac(10000) 4 days ago [-]

They sell cases with heatsinks and fans for $10-$20. I would just recommend buying one of those versus modding one like the guy in the article did.

devinjflick(10000) 4 days ago [-]

I feel like if I buy official parts, they shouldn't require modding to get the max performance out of the product. They should work without issue together. Currently you can barely get moderate performance with the official case.

This article and mod are helpful, even if obvious, for people like me who bought an official case with my RPi4 expecting them to work well together.

jsharf(10000) 4 days ago [-]

You needed to make that heatmap translucent and overlay it on top of the rpi with a diagram pointing to each chip and what it does. That would have been a super cool graphic.

vectorEQ(10000) 4 days ago [-]

most components should do fine under these temperatures no? on PC usually VRM's are heat sensitive, but the CPU can take pretty crazy temperatures itself.

is the issue similar on PI? What component is at risk at these temperatures? 60-70 seems a little hot for a small device , but is it really TOO hot?

Bootwizard(10000) 4 days ago [-]

Damaging temperatures are generally between 90-100°C. Throttling can happen at lower temps and those are usually CPU dependent (and cooler dependent). Also CPUs with less cores handle thermal throttling worse as they have less cores to fallback on if one overheats.

hajile(4155) 4 days ago [-]

Raspberry Pi tower cooler -- the cooler we deserve.

https://www.seeedstudio.com/ICE-Tower-CPU-Cooling-Fan-for-Ra...

thoughtpalette(3940) 4 days ago [-]

That link 404s for me unfortunately.

beckler(4159) 4 days ago [-]

well yeah, this thing pulls 3 amps.

even though it's mostly to push amperage down the USB 3.0 ports, those electrons going across the board are gonna generate some heat.

geerlingguy(789) 4 days ago [-]

Worst case (headless, at least) with a USB SSD powered off the bus, I was pulling 1.8 amps. But I can imagine if you're also driving some other USB devices, and one or two HDMI displays, 3A is not far off!

prashnts(4052) 4 days ago [-]

I've been using old ssd cases (aluminium ones), which would otherwise end up in trash. Drill four holes, use a screw to create threads in those holes. Lay some thermal tape, and mount the pi. I haven't seen it rise above 60degs — plus it looks alright in my opinion. Few pictures here [1].

I got rid of the ports because I don't need them (headless) and so they mostly fit entirely. I'll make some sorta cover though.

Edit: Forgot to mention that these cases are strong. It fell a few times and nothing visibly bad. I added Pimoroni shim in second iteration because I did want to keep hdmi there. Also left the Ethernet leds for indicator lights.

https://imgur.com/a/5DdWQ68

prashnts(4052) 4 days ago [-]

Adding to that, these cases are an exact fit for three raspberry pi zeros, laid flat. I made a pHAT display board with it. Pictures:

https://imgur.com/a/bhnESTS

penagwin(4151) 4 days ago [-]

I have over 4 RPi's and many SSDs and I've never thought of that, nice!

pwg(304) 4 days ago [-]

> use a screw to create threads in those holes

You'll get better quality threads, and it will likely be easier, if you use a proper tap to create the threads: https://en.wikipedia.org/wiki/Tap_and_die

sneak(3006) 4 days ago [-]

Has anyone made an RPi4 lava lamp yet? Extra credit if there is a camera built in that feeds into the camera connector that supplements the RNG.

vectorEQ(10000) 4 days ago [-]

lava lamp / hw crypto module :'D match made in heaven

husam212(4055) 4 days ago [-]

What about a bigger heatsink?

bjoli(10000) 4 days ago [-]

That has been tried to great success with the RPi 3, so I suspect it will at least make it less prone to throttling.

Edit: I just stacked a bunch of 10sec coins on the processor (without any thermal compound), which brought the idle temperature down by 3°C.

Edit2: I put a carbon tablet pipe on it, filled with coins. It hasn't reached steady-state temperature yet, but right now it is idling at about 45°C. It would probably work better with some fluid in it. I will probably keep this as my cooler.

Edit3: Seriously: I have wonderful thermal numbers. sysbench for 240 seconds, temperature 62°C.

This is an RPi 4 with a PoE HAT, so my passive cooling options are limited.

kees99(10000) 4 days ago [-]

Quote:

  the metal casing helps
  spread that heat around
It does, to a degree. But that is absolutely not what we are seeing in that IR picture. Shiny bare metal is a mirror at the wavelengths a thermal camera uses. So what you see on the thermal image where the wifi-module shield and CPU are is a reflection of the ambient room where this picture was taken. Try waving your hand around; you'll see it reflected there too.

To measure the real metal surface temperature by IR, you have to paint that metal (ideally with matte black paint), or apply a similarly textured sticker.

jeffalyanak(10000) 4 days ago [-]

Understanding the reflectivity and emissivity of the material you are measuring is something that almost nobody accounts for outside of critical, professional applications.

Not that it makes the measurements in question any less inaccurate, but it's very common to see these problems in temperature measurements.

JorgeGT(3849) 4 days ago [-]

Yep, a lot of people nowadays are using IR cameras without any calibration or understanding of the physics behind the measurement. The truth is, the camera requires an emissivity coefficient to associate the received radiation with the emitter body temperature. There are tables for different types of materials, but the best approach is to calibrate against a surface thermocouple or similar.

A corollary of this is: when you see an IR picture of a product with very different surfaces like rubber, shiny metal, matte paint, plastic, glass, etc., you can be almost sure the measurement is unreliable, because at most they calibrated the camera for the emissivity coefficient of one of the surfaces. And those coefficients vary quite a lot...
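To make the emissivity point concrete, here is a minimal sketch (my own illustration, not from the comment) of the simplified radiometric correction being described, assuming a Stefan-Boltzmann-style model that ignores atmospheric effects:

  SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

  def apparent_to_true_temp(t_apparent_c, emissivity, t_reflected_c=22.0):
      """Correct a camera reading taken at emissivity = 1 for the real emissivity."""
      t_app = t_apparent_c + 273.15
      t_refl = t_reflected_c + 273.15
      w_total = SIGMA * t_app ** 4        # radiance the camera attributed to the surface
      # Remove the reflected ambient component, then undo the emissivity scaling
      w_object = (w_total - (1.0 - emissivity) * SIGMA * t_refl ** 4) / emissivity
      return (w_object / SIGMA) ** 0.25 - 273.15

  # A shiny shield with emissivity ~0.10 that "reads" 30 C in a 22 C room
  # is actually much hotter (roughly 83 C under this simplified model).
  print(round(apparent_to_true_temp(30.0, 0.10), 1))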

burnte(10000) 4 days ago [-]

He's measuring the temp of the board next to the CPU, not the shiny IHS on the CPU.

hinkley(4069) 4 days ago [-]

To what degree does scanning it in a cold room help?

geerlingguy(789) 4 days ago [-]

True, I worded it a bit wrong. But the metal case is a huge improvement over the plastic die package on the Pi 3 B and earlier—a heat sink helps a little but you don't yet _need_ one if you use a fan, since the metal is much better at dispersing the heat.

logicallee(4077) 3 days ago [-]

how about touching a thermometer to it? (granted that is one spot, but I'd think it's most precise of all.)

tagrun(3896) 4 days ago [-]

Physicist here, and I disagree. Where else could that radiation with ~10 micron wavelength at that intensity with that localized spatial profile from that particular direction be coming from?

Yes, the environment will typically have some residual 'noise' at those wavelengths, whose intensity and spatial profile you can check by taking a 'dark frame' if you're in a strange environment and are really suspicious, but it's hardly going to alter what you're seeing in any qualitative way, assuming someone isn't sending a focused beam of exactly that size at exactly that spot at exactly the right angle at that particular wavelength.

pankajdoharey(4167) 4 days ago [-]

Check the Ice Tower cooler for raspberry pi https://youtu.be/RyUXC3886Ic

BaconJuice(10000) 4 days ago [-]

Just a bit off-topic, but what are you guys doing that you need to worry about CPU throttling? I just got a Pi 4 and I'm wondering if I'm missing out on some cool projects I can do with it :)

geerlingguy(789) 4 days ago [-]

The Pi 4 is the first generation that's actually not horrible at running Kubernetes, so I'm mostly having fun with that (ongoing saga at www.pidramble.com).

seanhandley(3449) 4 days ago [-]

Home media server. Encoding 1080p video brings the temp up.

reallydontask(10000) 4 days ago [-]

I'd guess that a heatsink and a case with heat pipes would be sufficient for most, though perhaps not all, uses, and silent too, which for me is pretty important on a Raspberry Pi.

Someone1234(4161) 4 days ago [-]

That's what Apple used to do with their MacBooks: heat pipes into the metal frame, which spreads the heat over a larger surface area, creating natural convection that carries the heat into the air.

The real downside is that mass plays a substantial role, meaning thicker metal and larger cases are better than thinner/smaller ones, whereas a fan can accomplish similar results for less cost and space (but not passively).

If people don't think passive can work they need to check out Apple's recent Mac Pro that passively cools two Radeon Pro 580Xs. There's no limit, except cost.

Personally I still feel a combination of heat pipe into heat sink, with a small progressively controlled fan, is the ultimate. If the heatsink is large enough the fan should stay off except under heavy load.

seanhandley(3449) 4 days ago [-]

I tried a heatsink but it didn't really help unless I had a desk fan providing some air flow.

geerlingguy(789) 4 days ago [-]

The problem is there are now a number of chips on the board—many of which are too small to pop a heatsink on them—which get crazy hot. If you have the Pi inside any kind of enclosed case, that heat just bakes everything else, even if you have heat sinks.

arendtio(10000) 4 days ago [-]

I think 'need' is a pretty strong word in this context. 'Could use' would be a better term, as the Pi 4 clearly works without a fan, just not at peak performance for more than a few minutes.

For many use-cases, I would prefer some performance reduction over the noise generated by a fan.

Marsymars(10000) 4 days ago [-]

Yes, if you're committed to a silent use-case, your options are essentially a processor that doesn't throttle, or a processor that starts faster and then eventually throttles down to a steady state matching the non-throttling processor's performance.

pslam(10000) 4 days ago [-]

This article is actually describing how the Raspberry Pi 4 does NOT need a fan.

This is not the 1990s. It is perfectly acceptable and even advantageous to design for a high peak:normal load ratio, with thermal throttling. In this case, it allows for a compact, cheap, fanless design for the vast majority of users.

There is no evidence the heat dissipated will impact lifespan. It is common for the components picked out in particular (power supply, USB-C controllers) to be deliberately designed to run hot. They aren't made on the same process as the SoC.

I feel like there is a missing piece of the software/hardware design art here. There are many takes like this on the Raspberry Pi 4 design. Why only one ethernet? Why no fan? Why not more USB-C? Because it's $35 and because, perhaps, you aren't the target majority market. It's going to satisfy the vast majority of people, and those it doesn't have very simple and cheap ways to mod it so it does.

dingo_bat(3954) 4 days ago [-]

> It is perfectly acceptable and even advantageous to design for a high peak:normal load ratio, with thermal throttling.

Thermal throttling is never acceptable. You should be able to peg the CPU at 100% indefinitely without any throttling.

geerlingguy(789) 4 days ago [-]

I am not directly the target market, no. And the throttling itself is an indication that the heat was designed/accounted for in the Pi's design.

However, to say that the majority of users don't need better cooling, and that it won't cause problems most of the time, seems inaccurate. If you use a microSD card to boot/run the Pi, how many of them are rated for blisteringly hot operating conditions 24x7? (Many people using the Pi as a computer will have it booted pretty much all day and just turn off the monitor or let the screensaver kick in.)

And I have already had one PoE HAT's PoE socket pop partly off the board when removing it from a Pi 3 B+. Partly due to the stress of flexing the board, definitely... but that kind of tiny solder joint on a part that is stressed when making a connection is exactly the kind of failure point that may not be directly caused by thermal stress, but is definitely not helped by it.

amelius(869) 4 days ago [-]

If you advertise a product with certain specs, and the user can only use the full capability during a fraction of the time, then that is misleading.

codesushi42(10000) 4 days ago [-]

Fans fail too.

The heat dissipated will probably shorten the lifetime of the board. At $35 who cares though? And you have more to worry about from the SD storage.

The RPi needs an SSD. :)

cyanoacry(3397) 4 days ago [-]

> There is no evidence the heat dissipated will impact lifespan.

I have to nitpick at this a little bit as a professional EE who works in high-reliability electronics. Wear-out rates absolutely do depend on temperature (and thus heat), so the chips used here will have a shorter lifetime than those with active cooling. Now, whether that lifetime will be long enough for the common user is another question (maybe it's 1 million hours of life reduced to 100,000 hours, so not normally noticeable).

morganvachon(10000) 4 days ago [-]

I'll respectfully disagree. Running my Pi 4 with no fan/heatsink reduces performance to an absolute crawl. It's on par with the desktop on the original Pi released in 2012, i.e. nigh unusable for anything beyond saying 'yep, it boots to desktop'. Thermal throttling effectively neuters the device.

I have it in an open-air acrylic 'sandwich' style case now, with the same Pi-Fan as the author, and it feels performant enough to use as a daily driver for web browsing and other light duties (basically on par with any Chromebook I've come across the past few years). It's still not 'desktop replacement' level due to the SD card performance hit, but it's finally good enough for its intended use case in education without being frustrating. Once boot-from-USB3 arrives it will likely be fast enough to use as a second Linux workstation in a serious capacity.

ryanmercer(3617) 4 days ago [-]

To add to this, I wonder if a properly designed case could create a chimney effect (not unlike the crossdraft kiln Primitive Tech made in his most recent video), where escaping heat draws in cooler air in such a fashion as to be adequate for the vast majority of applications.

Not that fans need to be very loud, or even spin very fast, for something of this size; I imagine relatively low RPMs (probably sub-100 RPM) could still achieve considerable cooling, especially with a crossdraft-optimized case.

ChuckMcM(606) 4 days ago [-]

Apropos of the thread: https://www.seeedstudio.com/ICE-Tower-CPU-Cooling-Fan-for-Ra... (a fan for the CPU that is reminiscent of the first 'big' Pentium fans). I asked on twitter if the RasPi had 'jumped the shark' here, being sucked into territory it wasn't really designed for.

As someone who has lived through the microcomputer revolution, it is amazing to see the '$100 desktop' at this point in history. If we arbitrarily pick 30 years as a delta (so 1989 vs 2019) and use BYTE[1] as a reference, a 20MHz 80386 desktop with a 40MB hard drive was $2,500 (as a clone; name brands were more).

An actually comparable computer (32-bit processor at 20MHz, 1MB RAM, 640x480 graphics, 40MB+ of storage, ignoring for the moment 'protected mode') today can be built for less than $10.

That is a huge step function to try to absorb.

[1] https://archive.org/details/byte-magazine-1989-10/page/n347

omni(3892) 4 days ago [-]

The giant heatsink seems like plain overkill, given the author of the article was able to stop it from throttling under stress tests with a much smaller and simpler fan install

akira2501(10000) 4 days ago [-]

> That is a huge step function to try to absorb.

It is a bit on the Heaviside, isn't it?

I'll show myself out.

adrianmonk(10000) 4 days ago [-]

> If you run computing hardware at its thermal capacity for long periods of time, this will cause more wear on the parts than if they are run well inside their limits.

So I have a moderately increased likelihood of, at some indefinite point in the future, having to spend a whole $35 to replace the Raspberry Pi. I can prevent this by spending extra money (and time) up front.

In some cases, it might be worth it. Maybe I'm using this Pi to control something, so downtime is bad. (But even then, fans have moving parts and a high failure rate, so plan to monitor the fan's condition and be prepared to replace it.)

In other cases, if the performance isn't important to you, it may make more sense to just accept a shorter lifespan. By the time it breaks, the next generation Pi may be out anyway. I don't love thinking of hardware as disposable, but at this price point, it may be smarter economically.

jrockway(3381) 4 days ago [-]

I think if you focus too much on '$35' you will be very disappointed with the rpi. You need a power supply, that's $10. You need an SD card, that's $10. You need a case, that's $10. You need a micro HDMI to HDMI cable or two, that's $10. You need a fan, that's $10.

If you mentally decide 'the Raspberry Pi is a $100 computer all-in' you will be much happier. You won't buy one to sit in a drawer (as many on HN have complained about), and you won't try to use a USB charger from your phone from 1992 and that SD card you got with your sandwich at IKEA and be disappointed that it's not very reliable.

junaru(10000) 4 days ago [-]

> If you run computing hardware at its thermal capacity for long periods of time, this will cause more wear on the parts than if they are run well inside their limits.

How true is this actually? My personal anecdote is an i5-2500K overclocked to 4.6GHz with a budget aftermarket cooler (it's at 4.6GHz constantly regardless of load). It's been running 24/7 in my desktop since 2011.

When will it fail? I see these disclaimers all the time and they sound logical, but does anyone have some numbers on failure rates under these 'prolonged extreme loads'?

spiderfarmer(4152) 4 days ago [-]

I would accept the shorter lifespan just because of the fan noise. Small and cheap fans are very noticeable in otherwise quiet rooms. I'd hate it.

seanhandley(3449) 4 days ago [-]

I can recommend Pimoroni's fan shim. They provide a python project that allows you to set temperature thresholds to kick the fan in so it's not on all the time. It's not silent but it's pretty quiet.

https://github.com/pimoroni/fanshim-python
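For readers who just want the general idea, here is a minimal sketch of threshold-based fan control with hysteresis. It is not the Pimoroni library's API; it reads the SoC temperature from sysfs and drives a fan via RPi.GPIO, and the pin number and temperature thresholds are illustrative assumptions:

  #!/usr/bin/env python3
  # Minimal sketch (not the fanshim-python API): poll the SoC temperature and
  # toggle a fan on a GPIO pin, with hysteresis so it doesn't flap on and off.
  import time
  import RPi.GPIO as GPIO   # only available on a Raspberry Pi

  FAN_PIN = 18        # hypothetical BCM pin driving the fan's transistor
  ON_TEMP = 65.0      # degrees C: turn the fan on above this
  OFF_TEMP = 55.0     # degrees C: turn it back off below this

  def cpu_temp_c():
      with open("/sys/class/thermal/thermal_zone0/temp") as f:
          return int(f.read()) / 1000.0

  GPIO.setmode(GPIO.BCM)
  GPIO.setup(FAN_PIN, GPIO.OUT, initial=GPIO.LOW)

  try:
      fan_on = False
      while True:
          t = cpu_temp_c()
          if not fan_on and t >= ON_TEMP:
              GPIO.output(FAN_PIN, GPIO.HIGH)
              fan_on = True
          elif fan_on and t <= OFF_TEMP:
              GPIO.output(FAN_PIN, GPIO.LOW)
              fan_on = False
          time.sleep(5)
  finally:
      GPIO.cleanup()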

geerlingguy(789) 4 days ago [-]

I thought about getting one of these, as they are a little more turnkey... however I couldn't find an easy way to purchase and get quick shipping in the US. Is there any other way besides ordering through the UK?

jtwigg(10000) 4 days ago [-]

I second this. The fan is amazing!

rvz(10000) 4 days ago [-]

Indeed, the fan shim is insanely cool and I just bought one of these the moment that it was back in stock together with my Raspberry Pi 4. It also works on the Pi 3 and 2.

Here's the link if anyone's interested: https://shop.pimoroni.com/products/fan-shim

TazeTSchnitzel(2091) 4 days ago [-]

If the Nintendo 64 was entirely passively cooled, surely the Pi can be? Mind you, Nintendo/SGI had much more surface area to work with, so this is probably an unfair comparison.

ChickeNES(3880) 4 days ago [-]

The Nintendo 64 has a U shaped piece of aluminum spanning the width of the console that screws through the RF shielding and into heatspreaders on the RDRAM, CPU, and RCP. It's about 2 RasPi's long by itself :P

bantunes(4028) 4 days ago [-]

The Pi 4 is much more powerful than an N64. So much so, it can emulate it in software.

LeonM(4061) 4 days ago [-]

So does the pi 3 if you stick it in a case, even with a heatsink.

I was using a Pi 3 with a heatsink in the official case to play an h265 movie (which is software-decoded on a Pi 3). After about 10 minutes I noticed it started dropping frames and it displayed the thermometer symbol in the top-right of the display.

When I removed the top lid from the case, the temperature dropped enough for the thermometer icon to disappear and playback to continue smoothly.

nullbyte(4154) 4 days ago [-]

I'm hosting a Node.js webserver from my RPi3, I have heatsinks and a case installed with no fan.

Seems to run fine, it's never overheated or throttled for me.

danaos(10000) 4 days ago [-]

Well, the Pi 3 does not support h265 acceleration natively; it would have been better to test with h264 movies.

teraflop(10000) 4 days ago [-]

Yeah, thermal insulation behaves in much the same way as electrical insulation. If you put a conductor in series with a resistor, the combination doesn't conduct all that well.

dleslie(10000) 4 days ago [-]

Flirc makes excellent passive cooling cases for RPis. Basically the entire shell is a heat sink.

Works for me, all my little media boxes run happily with them, despite running 24/7.

Florin_Andrei(10000) 4 days ago [-]

I've got a Pi 3 with a CPU heatsink, in a case. I have not seen it throttle after adding the heatsink.

dbg31415(3954) 4 days ago [-]

Over the years I've used Raspberry Pis as little media players, but they kept burning out every 3-4 months; the card reader (effectively their SSD) wouldn't work, or they'd just blink some other error message. I thought they were just cheaply made ($35 after all). Anyway, long story short, I got a fan and the last one has been working for over a year without issues.

y04nn(10000) 4 days ago [-]

I installed LibreELEC on my Raspberry Pi 4 the first day I had it. The CPU immediately started to heat up a lot (while playing 1080p video), to the point that it stalled. Since I added a motherboard chipset heatsink that I had lying around onto the CPU, it has worked flawlessly. (I keep it without any case.)

the_angry_angel(10000) 4 days ago [-]

This was exactly my intended use case - as a gift for relatives - the problem I've had is that I need it enclosed to prevent tiny little fingers from getting hurt/destroying the thing. Tried soak testing it one night and experienced the same thing :/

Currently I'm hoping that I can modify the case this weekend to fulfill my needs.

As something intended to be used by kids to learn about computing, I do wonder how many are going to get their fingers hurt by the heat output, from accidentally maxing the CPU and/or touching the USB ports, etc.




(549) Photorealistic Path Tracer

549 points about 18 hours ago by azhenley in 865th position

thume.ca | comments | anchor

In order to implement my final scene concept I needed portals, which for my case meant a way to have an object act as a portal where rays passing through it would end up in a different scene, bidirectionally.

I implemented this by giving all rays and surface hits a world number field. Then I modified my code so all scattering and light sampling would observe the world field and cast follow up and shadow rays with the same world.

Then I added a portal SceneNode subclass that checks the world ID of incoming rays and first tests the ray against that index of its children, as well as against a special portal node pointer. If the ray hits the portal before it hits anything in the current world, it spawns a new ray in the next world (modulo its number of children), doing the required remapping of ray t values so the new ray starts at the portal.
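To make the idea concrete, here is a toy one-dimensional sketch (my own illustration, not the author's code) of rays that carry a world index and a portal node that re-spawns them in the next world when the portal is hit first:

  from dataclasses import dataclass

  @dataclass
  class Ray:
      origin: float       # position along a line (1-D keeps the sketch tiny)
      direction: float    # +1.0 or -1.0
      world: int = 0

  @dataclass
  class Wall:             # stand-in for ordinary scene geometry
      position: float
      color: str
      def hit_t(self, ray):
          t = (self.position - ray.origin) / ray.direction
          return t if t > 1e-6 else None

  @dataclass
  class PortalNode:
      portal_position: float
      worlds: list        # worlds[i] is the geometry visible from world i

      def trace(self, ray):
          geom = self.worlds[ray.world]
          t_geom = geom.hit_t(ray)
          t_portal = Wall(self.portal_position, "portal").hit_t(ray)
          if t_portal is not None and (t_geom is None or t_portal < t_geom):
              # Portal hit first: continue in the next world, remapping the ray so
              # it starts at the portal (the epsilon in hit_t avoids re-hitting it).
              next_world = (ray.world + 1) % len(self.worlds)
              new_origin = ray.origin + t_portal * ray.direction
              return self.trace(Ray(new_origin, ray.direction, next_world))
          return geom.color if t_geom is not None else "background"

  scene = PortalNode(portal_position=5.0,
                     worlds=[Wall(10.0, "red"), Wall(8.0, "blue")])
  print(scene.trace(Ray(0.0, 1.0, world=0)))   # passes through the portal -> "blue"
  print(scene.trace(Ray(7.0, 1.0, world=0)))   # already past the portal -> "red"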




All Comments: [-] | anchor

duck2(10000) about 10 hours ago [-]

The path tracer is impressive, doing it in a month's worth of work is impressive too :) I took a ray tracing course in second grade, and had to leave everything else and write ray tracers during the term.

And, as an item of nostalgia, here is some of my glossy mirror code:

  if(matl->t == M_GLOSSMIRROR){
      double θ0 = acos(cosθ), Δθ = M_PI * matl->rough / 2, lθ, dθ = 2*Δθ/ray->w, dφ = 2*dθ, θ, φ;
      lθ = (θ0 - Δθ) < 0 ? 0 : ((θ0 + Δθ) < M_PI ? (θ0 - Δθ) : (M_PI - 2*Δθ));
      θ = lθ + (p+RAND1)*dθ; φ = -2*Δθ + (q+RAND1)*dφ;
      v4 z = hit->n, x = norv4(subv4(ray->p, scv4(dotv4(ray->p, z), z))), y = crv4(x, z);
      ...
sfkdjf9j3j(10000) about 10 hours ago [-]

Now that is impressive. When I was in second grade I was still wrapping my head around addition and subtraction.

magicalhippo(10000) about 15 hours ago [-]

One of the best books I've read on the subject is https://www.pbrt.org/ (which the article also mentions).

It manages to tackle both the math and the gritty details of implementing the math. Blew me away the first time I read it.

isatty(10000) about 14 hours ago [-]

I did a ray-tracing-based project in high school and can also confirm that PBRT is awesome. It gets the point across really well, and the authors are (or at least were) active on the mailing list.

trishume(2417) about 15 hours ago [-]

Yah PBRT was my most important resource for completing this project, way more important than anything I learned in course lectures.

I read it cover to cover before doing much, but then made sure I wrote things how I wanted without referencing the book except for a few bits of formula-heavy code I mention in the credits.

Definitely the best technical book I've ever read.

leowoo91(4127) about 13 hours ago [-]

I just wonder why the author calls it path tracing rather than ray tracing...

sampo(839) about 10 hours ago [-]

> I just wonder why author named ray tracing as path tracing..

Ray casting (1968) was simply casting rays from the camera, one per image pixel, until the ray hits the closest object. Then the pixel gets its color from that object.

Ray tracing (1979) made ray casting recursive for three cases:

1. Reflection: The object surface is a perfect mirror, so we calculate the reflection angle from the incoming ray angle, and continue the process.

2. Refraction: The object is transparent, like glass. The Fresnel equations give the refraction (transmission) and reflection angles and the distribution of energy between them. For some angles, one of these can be 0.

3. Shadow ray: Shadow rays are traced towards each light source. If the shadow ray is blocked by any other object, we're in shadow with respect to that light source. If a light source is visible, both the light source color and the object surface color contribute to the color of this pixel. The object surface can also have a reflectance model (BRDF), so it doesn't need to be a matte Lambertian surface, but the BRDF is only applied to light coming directly from the light sources.

An object can have partially reflective surface (like glossy plastic), and it can also be partially transparent (like colored glass). So at each intersection, we may generate up to all 3 types of rays.

What we miss here, is that all surfaces diffusely reflect light to some amount, and that the colors in all diffuse incoming light from every direction contributes to the color of a surface, not just direct light from visible light sources. (We also miss caustics. Most notably, a transparent glass ball between the surface and a light source will only give a shadow in ray tracing.)

Historically, there was a time when computers were too slow for sampling, even statistically weighted importance sampling, of diffuse light coming from all directions onto all surfaces. So the above model of handling only 3 types of rays (mirror reflection, transparent-object refraction, light directly from a light source) became both popular and established as ray tracing.

Then later, advances towards including all kinds of physically possible light paths adopted names other than ray tracing. They are still tracing rays, but they are not called ray tracing, because the name ray tracing is understood to mean only the first method that got widely popular, with all its limitations.
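To make the recursion concrete, here is a compact sketch of the Whitted-style cases described above, with shadow rays and mirror reflection (refraction omitted for brevity). The two-sphere scene, light and material values are made-up toy data, purely so the snippet runs:

  import math

  def sub(a, b): return [a[i] - b[i] for i in range(3)]
  def add(a, b): return [a[i] + b[i] for i in range(3)]
  def mul(a, s): return [x * s for x in a]
  def dot(a, b): return sum(a[i] * b[i] for i in range(3))
  def norm(a):
      l = math.sqrt(dot(a, a))
      return [x / l for x in a]

  # (center, radius, diffuse color, mirror reflectivity)
  SPHERES = [([0.0, 0.0, 5.0], 1.0, [0.8, 0.2, 0.2], 0.3),
             ([0.0, -101.0, 5.0], 100.0, [0.4, 0.4, 0.4], 0.0)]   # big sphere as "floor"
  LIGHT_POS, LIGHT_COLOR = [5.0, 5.0, 0.0], [1.0, 1.0, 1.0]

  def hit_sphere(origin, direction, sphere):
      center, radius, _, _ = sphere
      oc = sub(origin, center)
      b = dot(oc, direction)
      c = dot(oc, oc) - radius * radius
      disc = b * b - c
      if disc < 0:
          return None
      t = -b - math.sqrt(disc)
      return t if t > 1e-4 else None

  def closest_hit(origin, direction):
      best = None
      for s in SPHERES:
          t = hit_sphere(origin, direction, s)
          if t is not None and (best is None or t < best[0]):
              best = (t, s)
      return best

  def trace(origin, direction, depth=0):
      hit = closest_hit(origin, direction)
      if hit is None or depth > 3:
          return [0.1, 0.1, 0.2]                       # background color
      t, (center, _, color, mirror) = hit
      point = add(origin, mul(direction, t))
      normal = norm(sub(point, center))
      # Shadow ray: the light only contributes if nothing sits between us and it
      to_light = sub(LIGHT_POS, point)
      dist_light = math.sqrt(dot(to_light, to_light))
      to_light = mul(to_light, 1.0 / dist_light)
      shaded = [0.0, 0.0, 0.0]
      blocker = closest_hit(point, to_light)
      if blocker is None or blocker[0] > dist_light:
          lambert = max(dot(normal, to_light), 0.0)
          shaded = [color[i] * LIGHT_COLOR[i] * lambert for i in range(3)]
      # Mirror reflection: recurse along the reflected direction and blend it in
      if mirror > 0.0:
          refl = sub(direction, mul(normal, 2.0 * dot(direction, normal)))
          shaded = add(mul(shaded, 1.0 - mirror),
                       mul(trace(point, refl, depth + 1), mirror))
      return shaded

  print(trace([0.0, 0.0, 0.0], norm([0.0, 0.0, 1.0])))   # one primary ray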

iamnotacrook(10000) about 13 hours ago [-]

They didn't; they're talking about path tracing.

2_listerine_pls(4018) about 13 hours ago [-]

Impressive resume

person_of_color(4170) about 9 hours ago [-]

The definition of 10x

magicalhippo(10000) about 15 hours ago [-]

Nice work and presentation, always fun with ray tracing. Bonus points for the fractal :)

I assume you did this in RGB space, so no pretty prisms? I couldn't find any explicit mention. From the report I guess you didn't implement refraction at all?

Spectral leads to great images, but the color space conversions are such a pain to get correct.

alkonaut(10000) about 8 hours ago [-]

Also makes it a lot slower. For a really nice/readable (but therefore also "naive" and slow) spectral one, look at this one: https://github.com/TomCrypto/Lambda

It's a lot easier to learn from than the pbrt one in my opinion.

There are ways to avoid having to do single wavelength rays (hero wavelengths or basis functions) but it's tricky.

200px(10000) about 10 hours ago [-]

This is cool. I want to start learning this. What programming language would you suggest I write my code in while I learn? Is C++ pretty much necessary for performance reasons, or would other languages be okay too? Any specific recommendations?

sfkdjf9j3j(10000) about 10 hours ago [-]

Check out Peter Shirley's book series, starting with Ray Tracing in One Weekend. It's about ray tracing, not path tracing, but it's a fun introduction to the broader topic.

As for languages, I guess that depends on how long you are willing to wait for results. You might be able to use Go or something on the JVM without driving yourself insane.

alkonaut(10000) about 8 hours ago [-]

If you don't already know c++ I'd go for Rust.

namibj(10000) about 9 hours ago [-]

There are many recent languages that seem suitable. Like Julia and such.

xal(2667) about 9 hours ago [-]

Fun story about the OP: We brought Tristan on as a high school intern at Shopify. I heard about how fantastic he was off and on during his internship (I'm founder and CEO).

So when I heard that he was giving a town hall talk at the end of his stint I definitely wanted to check it out.

What I walked into was a 30-minute takedown of Liquid, the template language that I wrote. He systematically took every single design decision I made and dismembered it based on language design fundamentals.

That was still one of my favorite townhalls. As a self-taught programmer it also sent me off to read about compiler and language theory for the next few years, to better understand all the errors of my ways. Thanks Tristan!

trishume(2417) about 7 hours ago [-]

Thanks tobi <3

I remember hearing you mention my talk when I was back at Shopify a couple years later (context for others: my first internship I was in grade 11 but I went back 2 times, Shopify is great) and realizing that at many other companies giving a talk in front of everyone about design flaws in the CEO's code would be a career limiting move. But I think it's a testament to Shopify's culture that this never occurred to me in grade 11 and in fact it was instead appreciated.

rhaksw(10000) about 8 hours ago [-]

Hah, that's great. This is the kind of humility that keeps me coming back to hN.

mberning(10000) about 8 hours ago [-]

Can you share the critique? It would be an interesting read.

goblin89(1803) about 8 hours ago [-]

Fancy seeing your comment just as I'm listening to Shane Parrish interviewing you on his podcast (to others: The Trust Battery episode, recommended). Respect & keep up the good work!

macspoofing(10000) about 14 hours ago [-]

I recognized the final project structure of CS488. Great class, though quite time-consuming.

The author links to past year galleries: https://www.student.cs.uwaterloo.ca/~cs488/gallery-A5.html

I'm not sure if this was done in all the years, but one of the interesting (and slightly unfair) aspects of the final project was that you were not only graded on the quality of the final render, but also how closely you aligned to your plan as stated up front. That is, if you planned an ambitious render BUT had to scale down to something slightly less ambitious (due to difficulty or time-constraints), your mark would be lower than the individual who planned a simple render and executed that.

wjnc(4157) about 9 hours ago [-]

That seems like quite the wrong incentive for learning. A lot of ambitious plans fail. To be graded only on the final delivery gives adequate incentive to execute. I perhaps get the education signal that executing successfully on a plan is a worthwhile skill, but isn't learning also experimenting?

trishume(2417) about 7 hours ago [-]

I'm not sure if they changed the marking scheme by the time I took the course, but I think they added advice to pick 10 easy things for your plan even if you are ambitious, because the elements of your plan are only marked on completion not difficulty or quality. Then just do the other things as bonus objectives which have their own category, and the quality will be put in a different category.

There's also marks for artistry and humour, and getting perfect in the other categories doesn't require as much work as I put in. I didn't get a perfect mark and I suspect it's because I blew past the cap for technical difficulty but lost some marks on artistry and originality.

trishume(2417) about 18 hours ago [-]

Well I hoped to be on HN today for my high-effort blog post on generics and metaprogramming (http://thume.ca/2019/07/14/a-tour-of-metaprogramming-models-...) but at least I'm on HN for something :P

Edit: And at least right before I go to bed my post technically made it on the front page in the very last slot :)

I'm also glad people like my ray tracer, it was super satisfying to build, and nice to have a project that I know I put substantial effort into and can really be proud of.

gigatexal(4061) about 17 hours ago [-]

Hah you're obviously capable. Some big firm will scoop you up no problem.

azhenley(865) about 17 hours ago [-]

Oops! Your tweet about that blog post was in my feed but I clicked your profile and saw your pinned tweet about your path tracer, so I submitted it. It is quite impressive and you did a good job writing it up.

spencerflem(10000) about 15 hours ago [-]

Oh no way, you're Tristan Hume! My high school science research project was based on the eye tracking thing you made a while back :)

User23(3357) about 15 hours ago [-]

I also have a tungsten cube, although I rather like the archaic name wolfram. I fully second your 10/10 recommendation. Best paperweight I ever owned.

Oh yeah and your webpage has plenty of other great pieces too. Thanks for this!

JabavuAdams(1730) about 17 hours ago [-]

Great stuff! Aside from having the technical chops, you've done a remarkable job documenting your work, and honing your writing.

vkazanov(3853) about 8 hours ago [-]

the metaprogramming post is very, very good, btw, thank you

dblotsky(10000) about 13 hours ago [-]

Ayyyy, CS488. Lovely renders, and great write-up! What learnings from this assignment/course do you feel will be the most useful in future projects (graphics or otherwise)?

P.S. I wish Trains (CS452) had something so demo-able. ;)

dhruvil1514(10000) about 9 hours ago [-]

hii

robbrown451(4152) about 14 hours ago [-]

You're in the very first slot now. Congrats, this is super impressive, both in implementation and documentation.

isoprophlex(4160) about 14 hours ago [-]

In - fucking - credible, the amount of attention to detail in your implementation and in compositing the scene.

I haven't read the meta programming thing yet but I really enjoyed reading the thing about the path tracer.





Historical Discussions: What every computer science major should know (2011) (July 16, 2019: 535 points)
What every computer science major should know (2011) (May 05, 2015: 148 points)
What Every Computer Science Major Should Know (August 24, 2011: 81 points)
What every computer science major should know (July 15, 2014: 20 points)
What every computer science major should know (April 01, 2012: 11 points)
What every computer science major should know (July 13, 2012: 4 points)
What every CS major should know (January 18, 2015: 3 points)
What every computer science major should know (January 12, 2017: 2 points)
What every computer science major should know (August 19, 2015: 2 points)
What every computer science major should know (February 26, 2014: 1 points)
What every computer science major should know (September 05, 2018: 1 points)
What every CS major should know (June 16, 2018: 1 points)

(536) What every computer science major should know (2011)

536 points 5 days ago by rspivak in 443rd position

matt.might.net | Estimated reading time – 23 minutes | comments | anchor

Portfolio versus resume

Having emerged from engineering and mathematics, computer science programs take a resume-based approach to hiring off their graduates.

A resume says nothing of a programmer's ability.

Every computer science major should build a portfolio.

A portfolio could be as simple as a personal blog, with a post for each project or accomplishment. A better portfolio would include per-project pages, and publicly browsable code (hosted perhaps on github or Google code).

Contributions to open source should be linked and documented.

A code portfolio allows employers to directly judge ability.

GPAs and resumes do not.

Professors should design course projects to impress on portfolios, and students, at the conclusion of each course, should take time to update them.

Examples

Technical communication

Lone wolves in computer science are an endangered species.

Modern computer scientists must practice persuasively and clearly communicating their ideas to non-programmers.

In smaller companies, whether or not a programmer can communicate her ideas to management may make the difference between the company's success and failure.

Unfortunately, this is not something fixed with the addition of a single class (although a solid course in technical communication doesn't hurt).

More classes need to provide students the opportunity to present their work and defend their ideas with oral presentations.

Specific recommendations

I would recommend that students master a presentation tool like PowerPoint or (my favorite) Keynote. (Sorry, as much as I love them, LaTeX-based presentation tools are just too static.)

For producing beautiful mathematical documentation, LaTeX has no equal. All written assignments in technical courses should be submitted in LaTeX.

Recommended reading

An engineering core

Computer science is not quite engineering.

But, it's close enough.

Computer scientists will find themselves working with engineers.

Computer scientists and traditional engineers need to speak the same language--a language rooted in real analysis, linear algebra, probability and physics.

Computer scientists ought to take physics through electromagnetism. But, to do that, they'll need to take math up through multivariate calculus (and differential equations for good measure).

In constructing sound simulations, a command of probability and (oftentimes) linear algebra is invaluable. In interpreting results, there is no substitute for a solid understanding of statistics.

Recommended reading

The Unix philosophy

Computer scientists should be comfortable with and practiced in the Unix philosophy of computing.

The Unix philosophy (as opposed to Unix itself) is one that emphasizes linguistic abstraction and composition in order to effect computation.

In practice, this means becoming comfortable with the notion of command-line computing, text-file configuration and IDE-less software development.

Specific recommendations

Given the prevalence of Unix systems, computer scientists today should be fluent in basic Unix, including the ability to:

  • navigate and manipulate the filesystem;
  • compose processes with pipes;
  • comfortably edit a file with emacs and vim;
  • create, modify and execute a Makefile for a software project;
  • write simple shell scripts.

Students will reject the Unix philosophy unless they understand its power. Thus, it's best to challenge students to complete useful tasks for which Unix has a comparative advantage, such as:

  • Find the five folders in a given directory consuming the most space.
  • Report duplicate MP3s (by file contents, not file name) on a computer.
  • Take a list of names whose first and last names have been lower-cased, and properly recapitalize them.
  • Find all words in English that have x as their second letter, and n as their second-to-last.
  • Directly route your microphone input over the network to another computer's speaker.
  • Replace all spaces in a filename with underscore for a given directory.
  • Report the last ten errant accesses to the web server coming from a specific IP address.

Recommended reading

Systems administration

Some computer scientists sneer at systems administration as an 'IT' task.

The thinking is that a computer scientist can teach herself how to do anything a technician can do.

This is true. (In theory.)

Yet this attitude is misguided: computer scientists must be able to competently and securely administer their own systems and networks.

Many tasks in software development are most efficiently executed without passing through a systems administrator.

Specific recommendations

Every modern computer scientist should be able to:

  • Install and administer a Linux distribution.
  • Configure and compile the Linux kernel.
  • Troubleshoot a connection with dig, ping and traceroute.
  • Compile and configure a web server like apache.
  • Compile and configure a DNS daemon like bind.
  • Maintain a web site with a text editor.
  • Cut and crimp a network cable.

Recommended reading

Programming languages

Programming languages rise and fall with the solar cycle.

A programmer's career should not.

While it is important to teach languages relevant to employers, it is equally important that students learn how to teach themselves new languages.

The best way to learn how to learn programming languages is to learn multiple programming languages and programming paradigms.

The difficulty of learning the nth language is half the difficulty of the (n-1)th.

Yet, to truly understand programming languages, one must implement one. Ideally, every computer science major would take a compilers class. At a minimum, every computer science major should implement an interpreter.

Specific languages

The following languages provide a reasonable mixture of paradigms and practical applications:

  • Racket;
  • C;
  • JavaScript;
  • Squeak;
  • Java;
  • Standard ML;
  • Prolog;
  • Scala;
  • Haskell;
  • C++; and
  • Assembly.

Racket

Racket, as a full-featured dialect of Lisp, has an aggressively simple syntax.

For a small fraction of students, this syntax is an impediment.

To be blunt, if these students have a fundamental mental barrier to accepting an alien syntactic regime even temporarily, they lack the mental dexterity to survive a career in computer science.

Racket's powerful macro system and facilities for higher-order programming thoroughly erase the line between data and code.

If taught correctly, Lisp liberates.

Recommended reading

ANSI C

C is a terse and unforgiving abstraction of silicon.

C remains without rival in programming embedded systems.

Learning C imparts a deep understanding of the dominant von Neumann architecture in a way that no other language can.

Given the intimate role poor C programming plays in the prevalence of buffer overflow security vulnerabilities, it is critical that programmers learn how to program C properly.

Recommended reading
  • ANSI C by Kernighan and Ritchie.

JavaScript

JavaScript is a good representative of the semantic model popular in dynamic, higher-order languages such as Python, Ruby and Perl.

As the native language of the web, its pragmatic advantages are unique.

Recommended reading

Squeak

Squeak is a modern dialect of Smalltalk, purest of object-oriented languages.

It imparts the essence of 'object-oriented.'

Recommended reading

Java

Java will remain popular for too long to ignore it.

Recommended reading

Standard ML

Standard ML is a clean embodiment of the Hindley-Milner system.

The Hindley-Milner type system is one of the greatest (yet least-known) achievements in modern computing.

Though exponential in complexity, type inference in Hindley-Milner is always fast for programs of human interest.

The type system is rich enough to allow the expression of complex structural invariants. It is so rich, in fact, that well-typed programs are often bug-free.

Recommended reading

Prolog

Though niche in application, logic programming is an alternate paradigm for computational thinking.

It's worth understanding logic programming for those instances where a programmer may need to emulate it within another paradigm.

Another logic language worth learning is miniKanren. miniKanren stresses pure (cut not allowed) logic programming. This constraint has evolved an alternate style of logic programming called relational programming, and it grants properties not typically enjoyed by Prolog programs.

Recommended reading

Scala

Scala is a well-designed fusion of functional and object-oriented programming languages. Scala is what Java should have been.

Built atop the Java Virtual Machine, it is compatible with existing Java codebases, and as such, it stands out as the most likely successor to Java.

Recommended reading

Haskell

Haskell is the crown jewel of the Hindley-Milner family of languages.

Fully exploiting laziness, Haskell comes closest to programming in pure mathematics of any major programming language.

Recommended reading

ISO C++

C++ is a necessary evil.

But, since it must be taught, it must be taught in full.

In particular, computer science majors should leave with a grasp of even template meta-programming.

Recommended reading

Assembly

Any assembly language will do.

Since x86 is popular, it might as well be that.

Learning compilers is the best way to learn assembly, since it gives the computer scientist an intuitive sense of how high-level code will be transformed.

Specific recommendations

Computer scientists should understand generative programming (macros); lexical (and dynamic) scope; closures; continuations; higher-order functions; dynamic dispatch; subtyping; modules and functors; and monads as semantic concepts distinct from any specific syntax.

Recommended reading

Discrete mathematics

Computer scientists must have a solid grasp of formal logic and of proof. Proof by algebraic manipulation and by natural deduction engages the reasoning common to routine programming tasks. Proof by induction engages the reasoning used in the construction of recursive functions.

Computer scientists must be fluent in formal mathematical notation, and in reasoning rigorously about the basic discrete structures: sets, tuples, sequences, functions and power sets.

Specific recommendations

For computer scientists, it's important to cover reasoning about:

  • trees;
  • graphs;
  • formal languages; and
  • automata.

Students should learn enough number theory to study and implement common cryptographic protocols.

Recommended reading

Data structures and algorithms

Students should certainly see the common (or rare yet unreasonably effective) data structures and algorithms.

But, more important than knowing a specific algorithm or data structure (which is usually easy enough to look up), computer scientists must understand how to design algorithms (e.g., greedy, dynamic strategies) and how to span the gap between an algorithm in the ideal and the nitty-gritty of its implementation.

Specific recommendations

At a minimum, computer scientists seeking stable long-run employment should know all of the following:

  • hash tables;
  • linked lists;
  • trees;
  • binary search trees; and
  • directed and undirected graphs.

Computer scientists should be ready to implement or extend an algorithm that operates on these data structures, including the ability to search for an element, to add an element and to remove an element.

For completeness, computer scientists should know both the imperative and functional versions of each algorithm.
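As a small illustration of the last two points, here is a sketch (my own, not from the article) of a binary search tree with an imperative insert/search and a persistent insert in the functional style, which returns a new tree and shares untouched subtrees:

  class Node:
      def __init__(self, key, left=None, right=None):
          self.key, self.left, self.right = key, left, right

  def insert(root, key):                 # imperative: mutates and returns the root
      if root is None:
          return Node(key)
      cur = root
      while True:
          if key < cur.key:
              if cur.left is None:
                  cur.left = Node(key); break
              cur = cur.left
          elif key > cur.key:
              if cur.right is None:
                  cur.right = Node(key); break
              cur = cur.right
          else:
              break                      # key already present
      return root

  def search(root, key):
      while root is not None and root.key != key:
          root = root.left if key < root.key else root.right
      return root is not None

  def insert_persistent(root, key):      # functional: original tree is untouched
      if root is None:
          return Node(key)
      if key < root.key:
          return Node(root.key, insert_persistent(root.left, key), root.right)
      if key > root.key:
          return Node(root.key, root.left, insert_persistent(root.right, key))
      return root

  t = None
  for k in [5, 2, 8, 1]:
      t = insert(t, k)
  print(search(t, 8), search(t, 3))      # True False
  t2 = insert_persistent(t, 3)
  print(search(t, 3), search(t2, 3))     # False True: the original tree is unchanged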

Recommended reading

Theory

A grasp of theory is a prerequisite to research in graduate school.

Theory is invaluable when it provides hard boundaries on a problem (or when it provides a means of circumventing what initially appear to be hard boundaries).

Computational complexity can legitimately claim to be one of the few truly predictive theories in all of computer 'science.'

A computer scientist must know where the boundaries of tractability and computability lie. To ignore these limits invites frustration in the best case, and failure in the worst.

Specific recommendations

At the undergraduate level, theory should cover at least models of computation and computational complexity.

Models of computation should cover finite-state automata, regular languages (and regular expressions), pushdown automata, context-free languages, formal grammars, Turing machines, the lambda calculus, and undecidability.

At the undergraduate level, students should learn at least enough complexity to understand the difference between P, NP, NP-Hard and NP-Complete.

To avoid leaving the wrong impression, students should solve a few large problems in NP by reduction to SAT and the use of modern SAT solvers.
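To make the "reduction to SAT" suggestion concrete, here is a small sketch that encodes 3-colouring of a made-up graph as CNF clauses. A toy brute-force check stands in for the solver; in a course setting the same clauses would go to a real SAT solver (MiniSat, Glucose, and so on):

  from itertools import product

  EDGES = [(0, 1), (1, 2), (2, 0), (2, 3)]   # a made-up 4-node graph
  N, COLORS = 4, 3

  def var(v, c):                 # variable "vertex v has colour c", numbered from 1
      return v * COLORS + c + 1

  clauses = []
  for v in range(N):
      clauses.append([var(v, c) for c in range(COLORS)])          # at least one colour
      for c1 in range(COLORS):
          for c2 in range(c1 + 1, COLORS):
              clauses.append([-var(v, c1), -var(v, c2)])          # at most one colour
  for (u, v) in EDGES:
      for c in range(COLORS):
          clauses.append([-var(u, c), -var(v, c)])                # endpoints differ

  def brute_force_sat(clauses, nvars):
      for bits in product([False, True], repeat=nvars):
          assign = {i + 1: bits[i] for i in range(nvars)}
          if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
              return assign
      return None

  model = brute_force_sat(clauses, N * COLORS)
  print([c for v in range(N) for c in range(COLORS) if model[var(v, c)]])  # one colour per vertex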

Recommended reading

Architecture

There is no substitute for a solid understanding of computer architecture.

Computer scientists should understand a computer from the transistors up.

The understanding of architecture should encompass the standard levels of abstraction: transistors, gates, adders, muxes, flip flops, ALUs, control units, caches and RAM.

An understanding of the GPU model of high-performance computing will be important for the foreseeable future.

Specific recommendations

A good understanding of caches, buses and hardware memory management is essential to achieving good performance on modern systems.

To get a good grasp of machine architecture, students should design and simulate a small CPU.

Recommended reading

Operating systems

Any sufficiently large program eventually becomes an operating system.

As such, computer scientists should be aware of how kernels handle system calls, paging, scheduling, context-switching, filesystems and internal resource management.

A good understanding of operating systems is secondary only to an understanding of compilers and architecture for achieving performance.

Understanding operating systems (which I would interpret liberally to include runtime systems) becomes especially important when programming an embedded system without one.

Specific recommendations

It's important for students to get their hands dirty on a real operating system. With Linux and virtualization, this is easier than ever before.

To get a better understanding of the kernel, students could:

  • print 'hello world' during the boot process;
  • design their own scheduler;
  • modify the page-handling policy; and
  • create their own filesystem.

Recommended reading

Networking

Given the ubiquity of networks, computer scientists should have a firm understanding of the network stack and routing protocols within a network.

The mechanics of building an efficient, reliable transmission protocol (like TCP) on top of an unreliable transmission protocol (like IP) should not be magic to a computer scientist. It should be core knowledge.

Computer scientists must understand the trade-offs involved in protocol design--for example, when to choose TCP and when to choose UDP. (Programmers need to understand the larger social implications for congestion should they use UDP at large scales as well.)

Specific recommendations

Given the frequency with which the modern programmer encounters network programming, it's helpful to know the protocols for existing standards, such as:

  • 802.3 and 802.11;
  • IPv4 and IPv6; and
  • DNS, SMTP and HTTP.

Computer scientists should understand exponential back off in packet collision resolution and the additive-increase multiplicative-decrease mechanism involved in congestion control.
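To make the AIMD mechanism concrete, here is a tiny simulation sketch; the loss pattern and constants are made-up illustrations, not a model of any real network:

  def aimd(rounds, loss_every=10, increase=1.0, decrease=0.5, start=1.0):
      cwnd = start                    # congestion window, in segments
      history = []
      for i in range(1, rounds + 1):
          if i % loss_every == 0:     # pretend a loss was detected this round
              cwnd = max(1.0, cwnd * decrease)   # multiplicative decrease
          else:
              cwnd += increase                   # additive increase per RTT
          history.append(round(cwnd, 1))
      return history

  print(aimd(30))   # shows the familiar AIMD "sawtooth"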

Every computer scientist should implement the following:

  • an HTTP client and daemon;
  • a DNS resolver and server; and
  • a command-line SMTP mailer.
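As an illustration of the first item in that list, here is a minimal sketch of an HTTP client written directly against sockets (no urllib or requests), so the protocol itself stays visible; example.com is just a placeholder host:

  import socket

  def http_get(host: str, path: str = "/", port: int = 80) -> bytes:
      request = (f"GET {path} HTTP/1.1\r\n"
                 f"Host: {host}\r\n"
                 "Connection: close\r\n\r\n").encode()
      with socket.create_connection((host, port), timeout=10) as sock:
          sock.sendall(request)
          chunks = []
          while True:
              data = sock.recv(4096)
              if not data:
                  break
              chunks.append(data)
      return b"".join(chunks)

  print(http_get("example.com")[:200])   # status line plus the first headers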

No student should ever pass an intro networking class without sniffing their instructor's Google query off the wire with Wireshark.

It's probably going too far to require all students to implement a reliable transmission protocol from scratch atop IP, but I can say that it was a personally transformative experience for me as a student.

Recommended reading

Security

The sad truth of security is that the majority of security vulnerabilities come from sloppy programming. The sadder truth is that many schools do a poor job of training programmers to secure their code.

Computer scientists must be aware of the means by which a program can be compromised.

They need to develop a sense of defensive programming--a mind for thinking about how their own code might be attacked.

Security is the kind of training that is best distributed throughout the entire curriculum: each discipline should warn students of its native vulnerabilities.

Specific recommendations

At a minimum, every computer scientist needs to understand:

  • social engineering;
  • buffer overflows;
  • integer overflow;
  • code injection vulnerabilities;
  • race conditions; and
  • privilege confusion.

A few readers have pointed out that computer scientists also need to be aware of basic IT security measures, such as how to choose legitimately good passwords and how to properly configure a firewall with iptables.

Recommended reading

Cryptography

Cryptography is what makes much of our digital lives possible.

Computer scientists should understand and be able to implement the following concepts, as well as the common pitfalls in doing so:

  • symmetric-key cryptosystems;
  • public-key cryptosystems;
  • secure hash functions;
  • challenge-response authentication;
  • digital signature algorithms; and
  • threshold cryptosystems.

Since it's a common fault in implementations of cryptosystems, every computer scientist should know how to acquire a sufficiently random number for the task at hand.

At the very least, as nearly every data breach has shown, computer scientists need to know how to salt and hash passwords for storage.
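As a concrete illustration of salting and hashing, here is a minimal sketch using only the Python standard library; the PBKDF2 parameters are illustrative assumptions, and production code should follow current guidance or use a dedicated library such as argon2 or bcrypt:

  import hashlib, hmac, os

  def hash_password(password: str, iterations: int = 200_000):
      salt = os.urandom(16)                       # unique random salt per password
      digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
      return salt, iterations, digest             # store all three

  def verify_password(password: str, salt: bytes, iterations: int, digest: bytes) -> bool:
      candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
      return hmac.compare_digest(candidate, digest)   # constant-time comparison

  salt, iters, digest = hash_password("correct horse battery staple")
  print(verify_password("correct horse battery staple", salt, iters, digest))  # True
  print(verify_password("hunter2", salt, iters, digest))                       # False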

Specific recommendations

Every computer scientist should have the pleasure of breaking ciphertext using pre-modern cryptosystems with hand-rolled statistical tools.

RSA is easy enough to implement that everyone should do it.
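In that spirit, here is a toy textbook-RSA sketch with tiny primes and no padding; it is deliberately insecure and only illustrates the arithmetic (real code should use a vetted library):

  # Toy RSA: key generation, encryption, decryption (Python 3.8+ for pow(e, -1, phi))
  p, q = 61, 53
  n = p * q                      # public modulus
  phi = (p - 1) * (q - 1)
  e = 17                         # public exponent, coprime with phi
  d = pow(e, -1, phi)            # private exponent: modular inverse of e mod phi

  def encrypt(m): return pow(m, e, n)
  def decrypt(c): return pow(c, d, n)

  m = 42
  c = encrypt(m)
  assert decrypt(c) == m
  print(n, e, d, c)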

Every student should create their own digital certificate and set up https in apache. (It's surprisingly arduous to do this.)

Student should also write a console web client that connects over SSL.

As strictly practical matters, computer scientists should know how to use GPG; how to use public-key authentication for ssh; and how to encrypt a directory or a hard disk.

Recommended reading

Software testing

Software testing must be distributed throughout the entire curriculum.

A course on software engineering can cover the basic styles of testing, but there's no substitute for practicing the art.

Students should be graded on the test cases they turn in.

I use test cases turned in by students against all other students.

Students don't seem to care much about developing defensive test cases, but they unleash hell when it comes to sandbagging their classmates.

User experience design

Programmers too often write software for other programmers, or worse, for themselves.

User interface design (or more broadly, user experience design) might be the most underappreciated aspect of computer science.

There's a misconception, even among professors, that user experience is a 'soft' skill that can't be taught.

In reality, modern user experience design is anchored in empirically-wrought principles from human factors engineering and industrial design.

If nothing else, computer scientists should know that interfaces need to make the ease of executing any task proportional to the frequency of the task multiplied by its importance.

As a practicality, every programmer should be comfortable with designing usable web interfaces in HTML, CSS and JavaScript.

Recommended reading

Visualization

Good visualization is about rendering data in such a fashion that humans perceive it as information. This is not an easy thing to do.

The modern world is a sea of data, and exploiting the local maxima of human perception is key to making sense of it.

Recommended reading

Parallelism

Parallelism is back, and uglier than ever.

The unfortunate truth is that harnessing parallelism requires deep knowledge of architecture: multicore, caches, buses, GPUs, etc.

And, practice. Lots of practice.

Specific recommendations

It is not at all clear what the 'final' answer on parallel programming is, but a few domain-specific solutions have emerged.

For now, students should learn CUDA and OpenCL.

Threads are a flimsy abstraction for parallelism, particularly when caches and cache coherency are involved. But, threads are popular and tricky, so worth learning. Pthreads is a reasonably portable threads library to learn.

For anyone interested in large-scale parallelism, MPI is a prerequisite.

On the principles side, it does seem that map-reduce is enduring.
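As a small illustration of the map-reduce principle, here is a sketch that uses the Python standard library's process pool as a stand-in "cluster"; the documents are made-up sample data:

  from collections import Counter
  from functools import reduce
  from multiprocessing import Pool

  DOCS = ["the quick brown fox", "the lazy dog", "the fox jumps over the dog"]

  def map_phase(doc):                      # map: document -> partial word counts
      return Counter(doc.split())

  def reduce_phase(a, b):                  # reduce: merge two partial counts
      a.update(b)
      return a

  if __name__ == "__main__":
      with Pool() as pool:
          partials = pool.map(map_phase, DOCS)
      print(reduce(reduce_phase, partials, Counter()))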

Software engineering

The principles in software engineering change about as fast as the programming languages do.

A good, hands-on course in the practice of team software construction provides a working knowledge of the pitfalls inherent in the endeavor.

It's been recommended by several readers that students break up into teams of three, with the role of leader rotating through three different projects.

Learning how to attack and maneuver through a large existing codebase is a skill most programmers will have to master, and it's one best learned in school instead of on the job.

Specific recommendations

All students need to understand centralized version control systems like svn and distributed version control systems like git.

A working knowledge of debugging tools like gdb and valgrind goes a long way when they finally become necessary.

Recommended reading

Formal methods

As the demands on secure, reliable software increase, formal methods may one day end up as the only means for delivering it.

At present, formal modeling and verification of software remains challenging, but progress in the field is steady: it gets easier every year.

There may even come a day within the lifetime of today's computer science majors where formal software construction is an expected skill.

Every computer scientist should be at least moderately comfortable using one theorem prover. (I don't think it matters which one.)

Learning to use a theorem prover immediately impacts coding style.

For example, one feels instinctively allergic to writing a match or switch statement that doesn't cover all possibilities.

And, when writing recursive functions, users of theorem provers have a strong urge to eliminate ill-foundedness.

Recommended reading

Graphics and simulation

There is no discipline more dominated by 'clever' than graphics.

The field is driven toward, even defined by, the 'good enough.'

As such, there is no better way to teach clever programming or a solid appreciation of optimizing effort than graphics and simulation.

Over half of the coding hacks I've learned came from my study of graphics.

Specific recommendations

Simple ray tracers can be constructed in under 100 lines of code.

It's good mental hygiene to work out the transformations necessary to perform a perspective 3D projection in a wireframe 3D engine.

Data structures like BSP trees and algorithms like z-buffer rendering are great examples of clever design.

In graphics and simulation, there are many more.

Recommended reading

Robotics

Robotics may be one of the most engaging ways to teach introductory programming.

Moreover, as the cost of robotics continues to fall, thresholds are being passed which will enable a personal robotics revolution.

For those that can program, unimaginable degrees of personal physical automation are on the horizon.

Related posts

Artificial intelligence

If for no other reason than its outsized impact on the early history of computing, computer scientists should study artificial intelligence.

While the original dream of intelligent machines seems far off, artificial intelligence spurred a number of practical fields, such as machine learning, data mining and natural language processing.

Recommended reading

Machine learning

Aside from its outstanding technical merits, the sheer number of job openings for 'relevance engineer' indicates that every computer scientist should grasp the fundamentals of machine learning.

Machine learning doubly emphasizes the need for an understanding of probability and statistics.

Specific recommendations

At the undergraduate level, core concepts should include Bayesian networks, clustering and decision-tree learning.
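
As one concrete instance of these core concepts, here is a toy k-means clustering routine for one-dimensional data; it is my own minimal sketch, and real courses and libraries cover initialization, convergence tests, and higher dimensions.

    # A toy k-means for one-dimensional data: alternate between assigning each
    # point to its nearest centroid and recomputing centroids as cluster means.
    def kmeans(points, k, iterations=10):
        centroids = list(points[:k])               # naive initialization
        clusters = [[] for _ in range(k)]
        for _ in range(iterations):
            clusters = [[] for _ in range(k)]
            for p in points:
                nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
                clusters[nearest].append(p)
            centroids = [sum(c) / len(c) if c else centroids[i]
                         for i, c in enumerate(clusters)]
        return centroids, clusters

    centroids, clusters = kmeans([1.0, 1.2, 0.8, 9.8, 10.1, 10.4], k=2)
    print(centroids)   # roughly [1.0, 10.1]: two well-separated clusters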

Recommended reading

Databases

Databases are too common and too useful to ignore.

It's useful to understand the fundamental data structures and algorithms that power a database engine, since programmers often enough reimplement a database system within a larger software system.

Relational algebra and relational calculus stand out as exceptional success stories in sub-Turing models of computation.
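
To see how the relational-algebra operators map onto practice, here is a small example using Python's built-in sqlite3 module; the schema, data, and query are invented for illustration, with selection, projection, and a join all visible in the SQL.

    # Selection, projection, and join expressed in SQL via the standard-library
    # sqlite3 module; the schema and rows are made up for illustration.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE grades   (student_id INTEGER, course TEXT, grade TEXT);
        INSERT INTO students VALUES (1, 'Ada'), (2, 'Alan');
        INSERT INTO grades   VALUES (1, 'Databases', 'A'), (2, 'Databases', 'B');
    """)

    # Projection onto (name, grade), a join on the key, a selection on course:
    rows = conn.execute("""
        SELECT s.name, g.grade
        FROM students AS s JOIN grades AS g ON s.id = g.student_id
        WHERE g.course = 'Databases'
    """).fetchall()
    print(rows)   # [('Ada', 'A'), ('Alan', 'B')]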

Unlike UML modeling, ER modeling seems to be a reasonable mechanism for visually encoding the design of and constraints upon a software artifact.

Specific recommendations

A computer scientist that can set up and operate a LAMP stack is one good idea and a lot of hard work away from running their own company.

Recommended reading

Non-specific reading recommendations

What else?

My suggestions are limited by blind spots in my own knowledge.

What have I not listed here that should be included?

Related posts





All Comments: [-] | anchor

WalterBright(4074) 5 days ago [-]

It's a good list, and I always recommend these:

1. Public Speaking. If you can't talk coherently in front of a group of people and sell your ideas, your career will be stunted.

2. Accounting. Accounting is the language of business. If you don't understand double entry bookkeeping, you can't be a manager. You can't run a startup. You can't talk to investors. If you misuse terms like 'gross margin' you'll be overlooked as someone not worthy of doing business with. And worst of all, your feathers will get plucked by people who do, and you may never even realize it.

Neither of these are particularly hard to do, but like flossing one's teeth, avoiding them will be costly to your future.

lol768(4095) 5 days ago [-]

> 2. Accounting.

Any resources you would recommend that are worth looking at to help with this? I think I understand the basics, but particularly for larger companies things get much more complex.

Dextro(10000) 5 days ago [-]

And I've spent far too much of my time in my last project explaining double entry bookkeeping to coworkers to little effect. I had to watch as they re-invented the wheel.

Sadly I'm not that good at point 1 (actually terrible at it).

kowdermeister(2503) 4 days ago [-]

9 to 5 engineers don't need to deal with accounting. Really, who the hell cares about double entry bookkeeping? It's frustrating stuff that gets in the way of engineering.

> And worst of all, your feathers will get plucked by people who do, and you may never even realize it.

As someone who has little clue about accounting, what does this even mean? You mean I can be easily tricked by a shady partner?

By the way you are right, my side projects are stuck at the part when I have to ask for money :)

trilila(10000) 5 days ago [-]

Point 2 is a must-have skill for consultants as well, a career path common for software developers.

vages(10000) 5 days ago [-]

Is there anyone who ever knew all of this when they graduated?

I think maybe the top student through my five years at university might have known all of this. I sure as hell did not check all the boxes when I graduated two years ago.

If you're nervous after reading this, just know that the 'should' probably means 'in an ideal world'.

mcguire(3126) 5 days ago [-]

Technically, 'should' means 'if you don't have some familiarity with this, you have a weak spot that you might want to work on.'

analog31(10000) 5 days ago [-]

I'd be frightened by a similar list for my own field, should someone write one. If nothing else, a list like this helps in the search for gaps that one might enjoy filling in.

glangdale(3889) 4 days ago [-]

If you covered all this in a 4-5 year program, you either worked 90 hours a week and didn't take any other courses ('major' doesn't mean full time) or each of these courses was covered in an 'edited highlights' fashion. It's a preposterous list.

throwawayjava(10000) 5 days ago [-]

Students at good programs will check all the technical boxes just by attending required classes. Those schools tend to also have enough project based courses and career prep coaching that the portfolio box is also checked.

The intersection of that and communication skills is rarer, but it happens often enough, especially for people who pick up a second major in the humanities or did a lot of public speaking prior to/during college. Again, not the average case, but not terribly uncommon.

sytringy05(4110) 5 days ago [-]

I've been working in industry for 19 years and I don't think I know all of this stuff right now. What I do know is that in these topics there are a myriad of things I don't _really_ know.

lagadu(10000) 5 days ago [-]

Well, I'd say I knew the vast majority of it when I graduated (in software engineering). Most of the topics listed there had dedicated classes to them which I had to pass.

After 1 year of working about 90% of that was already gone forever.

acbart(10000) 5 days ago [-]

> Racket, as a full-featured dialect of Lisp, has an aggressively simple syntax. For a small fraction of students, this syntax is an impediment. To be blunt, if these students have a fundamental mental barrier to accepting an alien syntactic regime even temporarily, they lack the mental dexterity to survive a career in computer science. Racket's powerful macro system and facilities for higher-order programming thoroughly erase the line between data and code. If taught correctly, Lisp liberates.

What a rude paragraph. I think this viewpoint is very unfortunate. I love functional programming, and I think everyone benefits from learning it, but this garbage attitude is toxic to students. The author cannot imagine a successful computer scientist who doesn't love the same PL paradigms as him. I would prefer to keep them away from my learners.

Gene_Parmesan(10000) 5 days ago [-]

> who doesn't love the same PL paradigms as him.

I don't see that at all. It's not about enjoying it, it's about having the ability to adjust to the Lisp syntax, full stop. Given that Lisp is just about as close to programming directly in an AST as you can get, I'm not sure that claim of his is too far from the truth. Remember, we're talking about computer science, not programmers generally.

ori_b(3769) 5 days ago [-]

I didn't see anything in that quote about loving it -- just about being able to handle it temporarily.

I deal daily with technology that I have don't enjoy, but have to accept temporarily. I wouldn't be fit for a career in computer science if I couldn't adapt to that.

archagon(2947) 5 days ago [-]

This is tangential, but HOW, HOW, HOW does someone have the time to do all this stuff? http://matt.might.net/materials/matthew-might-cv.pdf

I'd love to see a timeline of how some of these super-productive academics are able to fit so many activities into their lives. I doubt I've done 1/10th of what Matt has done in the same amount of time. (And all this while researching, publicizing, and treating his son's rare genetic disorder, in an entirely separate field from the one he got his PhD in: http://matt.might.net/articles/my-sons-killer/)

randomsearch(4152) 5 days ago [-]

What stuff? If you mean publish a lot of papers, quantity doesn't mean much.

felipap(10000) 5 days ago [-]

Related: should I also be including my Twitter and Google+ followers in my CV?

pinacarlos90(10000) 5 days ago [-]

I come from a CS background and learned a lot from studying CS; it definitely gave me a strong foundation and changed the way I view and understand computing. I learned most of the topics mentioned in this document while in school, and although they are all valid, they are not enough for real-world needs.

~90% of CS undergraduates will end up working as or with engineers, and in 2019 here are the skills that are indispensable:

1) understand popular protocols used in WWW (http, ssh, ftp, etc)

2) version control (Git). Understand pull-requests and the process of collaboration in a team

3) problem solving - how to break down problems, and how to overcome them when you reach a wall (use known algorithms when possible)

4) design patterns (learn as many as you can)

5) frameworks: MVC, angular, dependency injection, etc

6) communication skills

7) how to organize work, how to break down big tasks into small, easier-to-accomplish pieces

8) understand deadlines

9) write tests (unit-test, integration-test, etc)

10) understand that "good enough" sometimes is all you need (still try to fix it later :) )

greggyb(4133) 5 days ago [-]

Most of these are in the article. The author mentions:

1) both http and ssh

2) source control and team work (and suggests courses built around these)

3) multiple sections on problem solving and algorithms (incl. proof techniques, formal methods, knowledge of data structures and algorithms)

4) nothing in particular

5) nothing in particular

6) covers communication skills

7) emphasizes each student taking a lead role on a team in suggested team-based classes

8) not explicit

9) suggests writing and being graded on tests throughout educational career

10) section on graphics and simulation emphasizes this.

anonymous5133(10000) 5 days ago [-]

11) Learn proper research skills so you can continually be finding better solutions to problems encountered in your work/projects.

stock_toaster(2304) 4 days ago [-]

> 4) design patterns (learn as many as you can)

Sure, learn them, but make sure you don't apply them blindly. Let the problem/solution lead you to familiar designs, not the other way around. I have seen some truly hideous code done in the name of cargo-culting 'design patterns'.

aprdm(3979) 5 days ago [-]

100% agreed! Communication is 80% of the job usually. So sad to see a lot of technical people who cannot communicate well!

People seem to forget we're all humans.

agumonkey(955) 5 days ago [-]

I have a peculiar question: have you ever gotten stuck on a task and been unable to solve it? What do you do? If it's an optional feature, I guess you can always write it down, but what if it's critical?

bmilleare(4108) 4 days ago [-]

> 10) understand that "good enough" sometimes is all you need (still try to fix it later :) )

This is something most CS grads don't really grasp immediately, but in the real world, business and time constraints normally trump perfection.

Technical debt is fine as long as it's fully understood and managed properly.

darkcha0s(10000) 2 days ago [-]

Thank you. I've made it to moderate success as a 'software engineer' with your mentioned/trimmed list of topics, even without taking physics all the way up to electromagnetism.

burlesona(3867) 5 days ago [-]

This is a great list. I would add estimating, although it's implied in a few of your points.

As an engineering manager I find that a consistent difference between good engineers and great engineers is that great engineers can tell me how long something will take even when they haven't done something just like it before. That doesn't mean they can perfectly forecast how the hours will be spent -- no one could do that -- but they know how to figure things out, know how to build in some buffer, and know how to go heads down and crank when absolutely necessary, and as a result they can consistently hit deadlines.

SkyMarshal(2383) 4 days ago [-]

> 3) problem solving - how to breakdown problems, and how to overcome them when you reach a wall (use known algorithms when possible)

> 6) communication skills

I would argue these are the only two things in your list that should be included in a Computer Science degree.

There's a limited amount of time to cram everything into a CS degree, and it should focus on the science, not the practice. The science is harder to learn than the practice. All of the practice items are picked up on your first job out of college.

glangdale(3889) 4 days ago [-]

These kinds of lists are always hilariously overstuffed, like a big comfortable rhetorical couch.

It's not all that hard to make up a giant list of 8-10 years worth of material, which if anyone took seriously would also lead to a monoculture i.e. we didn't have any time to do anything aside from computer science and this giant-ass list of requirements like 'physics up to electromagnetism'.

I'd say I was close to hitting, for about the year 1997 or so, these requirements or their equivalents as they existed then, after (a) a pretty decent prep at high school level (Australian high schools allowed a heavy math/science concentration), (b) 4 years of undergrad with - nominally CS as 1 subject in 4 in first year, 1 in 3 in second year, 1 in 2 in third year and full-time in honours (but actually way more focus on CS vs other subjects), and (c) completing the 8 graduate courses required to move on to the rest of the PhD at CMU.

Even then I'd have a long list of gaps in his definition. And that's off 7-8 years of prep, not 4, and an Australian honours CS degree allowed a lot more specialization than a lot of 4-year degrees in the US at the time.

It would be a more interesting approach to try to define a minimal set. Anyone can spam out 8-10 years of study and everyone will agree that, sure, someone who knew all that would be Pretty Good (better have 11 programming languages under your belt, hey). But a far more interesting task is - what do you need?

And, perhaps, in a class of 100 people, why should everyone come out with the same laundry list? Maybe it's good to have 20 really good statisticians there, as well as 20 people who could design the processor they are programming on, etc etc (obviously an overlapping set).

toper-centage(10000) 4 days ago [-]

This list was surprisingly sane and almost exactly the programme of my 5-year degree.

Nokinside(3819) 4 days ago [-]

> And, perhaps, in a class of 100 people, why should everyone come out with the same laundry list?

I fully agree with this point. People here argue that the minimum for a scientist should be cut down to something that fits into a strictly vocational degree that scores them a well-paying job.

Every scientist I know in any STEM field knows 'the minimum' that is larger than what fits into a master's degree in their field. Someone with a PhD is basically still a scientist in training, a junior. 8-10 years of basics, then very specialized knowledge above that, sounds about right.

My background is in EE, so I think that being an engineer has prestige but it's not the same as being a scientist. A good EE engineer knows different things than a research scientist in the field. You need to know an enormous amount of theory and math to design modern circuits and radio interfaces, but designing them is engineering.

dlkf(10000) 4 days ago [-]

> It would be a more interesting approach to try to define a minimal set.

Agreed. One way to represent this would be a collection of clusters of the technical skills listed in the article. If you're interested in statistics and machine learning, you should get good at SQL and visualization. By contrast, graphics and networking are much less important. If you want to work in operating systems, ANSI C will probably be a lot more helpful than Javascript.

I think every CS student should read the sections on resumes and communication. They're so powerful because, unlike the rest of the article, they're not strictly prescriptive. They describe the goals rather than the implementation details, and suggest a few ways you might get there.

winternett(4018) 4 days ago [-]

Agreed.

Advice about career paths should always be personalized to each individual. I'm someone who has not had a traditional CS education; I am totally self-taught and I don't have many certs at all. Actually, I had taken a few courses and not taken the tests, and for all of those courses, the only reason I had taken them is that my employers paid for them.

I've had over 13 different jobs in the past 20 years and that's helped me to really figure out how the IT world works. I primarily work as a Gov contractor on the East coast, and rarely had a job I could not grow into provided I had basic knowledge of development, customer service, and project management. I am not a genius by any standard, but I have accomplished a lot over my career that has made me worth the money they pay me. People used to tell me to not 'hop around' and that there would be unemployment eventually after doing that, but IT changed the traditional rules of employment. I am lucky to have been where I was in history because now there is a serious lack of supply for people who do what I do (Full Stack Dev, Dev Management, Cloud Architecture, Technical Project Management, Application architecture). I have also greatly benefited salary-wise within the past 4 years as I move into more management roles, and the only time I get substantial raises is when I switch jobs, not when I stay with an employer for an annual bonus. Actually, my salary growth year over year has been about +$20k on average for the past 5 years as the East coast begins to climb to West Coast standards, so I don't need to move to Cali and deal with Earth Quakes :P.

I started out learning the Internet and simple HTML in college (around 1995) when the Internet was just going public... That positioning probably helped me a lot. My first CS job out of school was making CD-Roms based on early JS, HTML, and graphics (pre-CSS) for a software development company that was making a tool which would later be killed off by message board software like PHP BB. I worked around real developers and learned their habits, and it made development in C less intimidating to me; back then, development seemed like rocket science. The Dot Com boom (even citing its collapse) convinced me that IT was a new industry I could stick with... It was really an amazing time. I interviewed with MTV, Microsoft, The Motley Fool, and turned them all down because they were offering lower salaries than small IT companies nearer to my home.

I graduated through many different companies as a web developer as sites got more complex. I embraced Google-Fu, which helped me to solve some really complex challenges. Learning concepts like Waterfall, Scrum, RUP, and Agile helped me to work my way into management.

I will agree though, maintaining personal (portfolio) web sites was what got me in the door ahead of my competition most times where no one knew me. It's shocking how many people that I interview for jobs, even now, don't have any portfolio links in their resumes, or even a profile on LinkedIn. If you're a developer, it's an easy way to bypass a code test in an interview to just be able to explain how you set up your personal portfolio. I used to have to back out of interviews that sent me code tests because they were never related to what actual company jobs were like anyway, and the tests were often geared towards people who were CS degree holders.

There is no substantial personalized advice someone can give you online that will be properly suited to your needs in a general format. The advice that works better is based on the process you follow rather than the programming language or specific decisions you should make career-wise. I've seen big companies and ideas grow and fail, like IBM and Adobe Flash... That is always guaranteed.

For people embarking on their career journey, I suggest that they take notice of the successful people around them and invite someone they really respect out to lunch and pay the bill (to compensate them for their time and so that you aren't remembered as a 'mental burden' by them). Talk with that person intently, and tell them your ideas and goals (you should be able to trust them at that level of course, so choose wisely). Ask them for their take and listen intently without shooting their ideas, advice, or methods down, and then take all you learn from those conversations and design your own path in your mind, and follow it. If a path doesn't work, be sure to pivot to a new path before you're invested too far/deep... If a path that you're on does work, follow it for as long as it works, but also diversify your efforts, time IS money, never put all of your eggs in one basket. Eventually you'll succeed in time if you take note of what works, mirror that in your actions, and learn to negotiate money and your career properly. That's what has worked for me thus far.

irundebian(10000) 5 days ago [-]

I don't think it's that valuable to improve IDE-less software development skills. Just because some Unix nerds find it cool to play with text files, avoid modern software development tools, or program in dated languages like C, it shouldn't be desirable for a computer science major. Computer science should focus on long-lived principles; the Unix philosophy may have a lot of principles worth keeping, but nothing in the software engineering space should suggest we've already approximated perfect solutions and that nothing could be improved.

odyssey7(10000) 5 days ago [-]

Working with an IDE is important experience. However, speaking as someone who tutors CS students, there's a lot that people can miss about what they're doing when they have only ever worked in Eclipse.

There's something enlightening and empowering about tangibly having a specific file that you wrote, applying a compiler to it to produce an executable, and then invoking it yourself. It gives a better sense of ownership of the process that people don't always feel when they load a project into an IDE and fill in lines of code until the play button stops telling them what they did is wrong.

JustSomeNobody(3931) 5 days ago [-]

Your comment shows a lot of naivety. Text is near universal, C still makes the world go 'round and modern doesn't mean graphical.

tracer4201(10000) 5 days ago [-]

My day job is engineering at one of the big tech companies whose search services most of the people I know use almost every day.

In the 100+ interviews I've done for engineers, I've never cared for their portfolio. It's hard for me to know if it really is their portfolio or if they actually wrote the code, etc.

I'm going to ask you design and coding questions.

KUcxrAVrtI(10000) 4 days ago [-]

>I'm going to ask you design and coding questions.

My day job is engineering in systems that kill people or cost billions of dollars if they go wrong.

I'm never going to ask you to code something on the fly other than hello world, a loop to print 0..9 and maybe a recursive function to print 0..9. You can't think deeply enough about a problem in one hour, or one day. I will ask you a lot of questions about what you will do, how you will do it, and what the possible problems and mitigations are.

A portfolio that you've worked on is great because I can find the worst code in it and ask why it's there, what the trade offs of putting it there were and how you would fix it if you had infinite time.

mieseratte(10000) 5 days ago [-]

If someone has a portfolio I will certainly ask about those projects. Gives an opportunity to demonstrate technical acumen and communication skills at the same time.

quickthrower2(1327) 5 days ago [-]

In my experience no one cares about the portfolio. They want your CV, some references, and for you to pass some coding tests. Now to be fair I am not going for the highest paid jobs or jobs requiring deep computer science knowledge, but for the ordinary job I have found that recruiters and employers alike look at the CV. They might look at GitHub if you are lucky.

EnderMB(4168) 5 days ago [-]

I couldn't agree more, and this list pretty much lost me right from the start.

If you're fresh out of university, GitHub is useful because it demonstrates relevant non-academic code you've written outside of your studies.

After you've got your first job, I don't care about your GitHub profile unless you've got some impressive stuff on there. As an employer, I care about your ability to work on non-trivial problems in a team, and unless you're contributing to OSS (which few CS grads do) you'll never have this on your 'portfolio'.

In my experience, your GitHub profile is only noteworthy if you have:

* Breadth of work. Loads of projects in a load of languages that demonstrate you're a tinkerer.

* Depth of work. At least one project with 5+ stars that solves a non-trivial problem.

Otherwise, that profile is usually some boilerplate code for a MOOC course or to try a language for the first time, and when hiring that's not an indication of anything.

bharam(10000) 5 days ago [-]

Here's how I review resumes. I'll scan the document for evidence of some kind of selectivity in terms of education and/or past employers. I ignore action phrases, GitHub accounts etc. If I see one that interests me I'll setup a phone call. It's a common misconception that we look for reasons to hire someone when in fact the opposite is true: we are looking for reasons to say 'no'.

kasey_junk(4068) 5 days ago [-]

If you aren't using your network (you have a network, right?), you've already failed.

No one in tech knows how to hire, so it's all network-based. This isn't a good thing, but it's where we are.

hermitdev(10000) 5 days ago [-]

One thing that surprises me is that Python wasn't mentioned as a language of interest. It's used in a lot of scientific computing, finance, and other spaces. I've been using it in finance for 15 years. 15 years ago, there was little traction, but now the quants I work with only want to use it. The quants aren't really programmers, but numpy, scipy and pandas make them productive without a lot of dev experience.

analog31(10000) 5 days ago [-]

I'm a huge Python fan, and introduced it to my workplace, where it is now heavily used by about a half dozen scientists and engineers. But... I think a person who has mastered the other languages and general concepts of computer science can pick up Python in a jiffy. And they will probably pick it up by osmosis, if it's used in their vicinity.

I'm saying this as a physicist who has not studied CS formally, but worked with a lot of CS'ists.

supernova87a(10000) 4 days ago [-]

As a manager, I'll tell you what is more important than almost any of these technical skills (although they are important):

1)

The ability to communicate what needs to be done, and how it is to be done, _before_ diving into the work. I work with tons of people who, sure, can get the work done, but I have little to no visibility about what they're going to do and when they think it will be done. I'm never going to be in the code as deep as you -- the purpose of my management is to help make sure the thing you do fits into the rest of the plan and doesn't waste time and resources. What good does it do our team if I don't find out how long it was going to take until you finished?

Inability to do this is generally an indication that they've not done the task before, can't tell you how they're going to solve it before they actually do, or are generally unable to plan their work at a higher level that is useful for a manager. And I don't even mean that the answer has to be a rock-solid time estimate -- if they can raise and communicate the uncertainties, that already is an important piece of information and next level thinking -- what do you know you don't know.

You can tell a more experienced person by whether they can give you plans and estimates for how long and what approach they will take, before they immediately start the work. The work isn't just a coding puzzle to be solved -- they understand it's a project to be managed properly and are working at your level to plan it. Versus you having to painfully extract that info from them.

2)

The ability to self-assess whether their approach is the right/best approach, or what compromises or missing elements their approach adopts. Rarely do I find that people who dive right in because they 'know how it needs to be done' are doing it because that is really the right way among all possible solutions. More like, it's the first thing that popped into their mind.

You can tell an experienced person by whether they take some time at the beginning to debate / discuss with you what approach optimizes for what outcome (whether time, resources, maintainability, scalability, data integrity, etc).

The person who does even these 2 things (or similar indicators of self-reflection and considering the problem) is -- almost unfortunately -- the standout these days.

crispyambulance(3835) 4 days ago [-]

I think those two abilities which you cite are developed ONLY with experience in solving real-world work problems.

These are not skills which can be taught in a school. They have to be learned through work, through mentorship and through the experience of failures and successes. Internships certainly help, but it takes years.

It's OK if hires fresh out of school aren't able to do these things. These need to be cultivated by you, the manager.

bigred100(10000) 5 days ago [-]

To me, this sounds more like what a good CS Ph.D who spent his free time filling in every general CS knowledge gap he could come up with would know.

I don't know why he says learn real analysis and linear algebra to talk to engineers when I doubt 90% of engineers took one analysis course unless they're Ph.Ds.

_Nat_(10000) 5 days ago [-]

The article says:

> Computer scientists and traditional engineers need to speak the same language--a language rooted in real analysis, linear algebra, probability and physics.

This is saying that the ontological common-ground between Computer Science and Engineering is rooted in, among other things, Real Analysis -- not that either Computer Science or Engineering majors need to take a course called 'Real Analysis'.

It does recommend specific math classes, but a class on Real Analysis isn't in those recommendations.

I think that the US's major Engineering-accreditation agency, ABET, requires most Engineering disciplines to have both Linear Algebra and Multivariate Calculus (also called 'Calculus III') as core courses. Many students opt to take additional math beyond the basic core classes.

jstewartmobile(3675) 5 days ago [-]

Judging by the Wikipedia article, 'Real analysis' is freshman-level math for most branches of engineering.

Pretty shocking how little math most CS programs require in comparison.

llamaz(10000) 5 days ago [-]

Differential equations and electromagnetism?

Maybe 30 years ago, but these days electrical engineers know how to code

Chathamization(4144) 5 days ago [-]

There seems to be a tendency to exaggerate the breadth of things that people should study. When you ask for specific instances where something would be useful, you usually get a hand-wavy answer that since X field is connected to what you want to study, it is therefore worthwhile to study X field ('Of course computer scientists should study advanced chemistry, we wouldn't even have computers without chemistry! And what if you needed to write a program for chemists!').

In my experience people retain very little of something if they don't keep using it. Many people take things like calculus in highschool or college, and completely forget it within a few years. I often checked with classmates about how much they retained from courses we had just taken the previous semester, and most of the stuff that wasn't directly connected to what they were currently studying seemed to be forgotten (if they ever had a real understanding of it to begin with).

chrisseaton(3025) 5 days ago [-]

Isn't modern ML based on differential equations?

phkahler(4027) 5 days ago [-]

Who is going to write FEA software? Or rigid body dynamics simulation? Or the physically based rendering software? It's not going to be a typical electrical engineer.

commandlinefan(10000) 5 days ago [-]

> must practice persuasively and clearly communicating their ideas to non-programmers

If you're a CS major panicking over this sort of "technical skills aren't enough, you have to be a persuasive public speaker and effectively do management's job for them" advice: people have been saying this since at least when I started coding 30 years ago, and I don't see any evidence that it's any more true now than it was back then. Focus on technical ability; that's a hell of a lot harder to come by than "powerpoint skills".

rossdavidh(4099) 5 days ago [-]

I would say there's plenty wrong with this list, but this one is, ironically, mostly true but poorly phrased. A better way to put it would be: 'don't be content with being so bad at communication that it gets in the way of your ability to make your (technical) point'. If you know something about how _not_ to communicate (badly), you can allow your technical expertise to be more influential when technical decisions are made (by people higher ranking than you).

It's not becoming a salesman. It's avoiding the situation of being so bad at communicating that your technical expertise is ignored. Non-technical people usually _want_ to hear your point with regard to a technical decision. If you are unable to help them understand what you're trying to say, you'll wonder afterwards why they hired a technical person and then ignored their technical advice. They're not ignoring your advice; in many cases they simply cannot perceive clearly what you are trying to say.

I hope that I have made my point understandable here. But, you know, I may not have. :)

dlphn___xyz(10000) 5 days ago [-]

they're equally important and require time and practice

diminoten(10000) 5 days ago [-]

Technical ability isn't relevant in ~90% of 'CS' jobs. Everyone can do the basics, and you don't need more than the basics to do 90% of 'CS' jobs.

You will be left behind by coworkers who can communicate, and you will be confused about why, if you can't communicate well. Specifically, if you can't take an idea you have and convince others it's the right idea.

ergothus(4167) 5 days ago [-]

> focus on technical ability, that's a hell of a lot harder to come by than "powerpoint skills".

I'll agree and disagree:

Yes, the target audience here SHOULD focus on technical ability. Not because it's 'harder', but because that will be the primary decider in getting hired.

That said, I'd say my communication skills have been a central part of my success. Not 'powerpoint', so much as being able to distill down ideas, translate abstractions, and of late, learning how to examine other people's ideas without coming across as if I'm attacking them. My personal impression and the reaction my managers have given me is that these skills are both rare and valued in the field. But you do of course need technical ability or they are worthless.

I definitely think I've achieved more success than my technical ability alone would grant me. It could be Imposter Syndrome talking, but I'm not actually all that great at this. Just good enough, and my communication skills ensure that the 'good enough' is applied where it's needed and to the degree it is needed.

Ultimately, coding is communication, and maintainable/extendable/flexible code is a lot of communication to other devs (including your future self). Even outside of the role of managers, and even outside of planning and architecting with your peers, communication is a valuable skill for a programmer to have.

vkou(10000) 5 days ago [-]

PowerPoint is one of the worst ways of persuasively and clearly communicating ideas, be it to programmers or non-programmers.

asark(10000) 5 days ago [-]

Powerpoint skills are where the money is, in large businesses that aren't FAANG. And probably even there, somewhat. I've come to this conclusion after seeing a lot of not-especially-smart-or-effective people-person folks with Powerpoint skills making serious money.

[EDIT] it also seems to me that good presentation and communication skills are how you achieve 'you don't apply for jobs, jobs apply for you' status without top 1% tech skills.

Ar-Curunir(10000) 5 days ago [-]

You're falsely equating the ability to communicate one's ideas clearly with 'PowerPoint skills'.

Technical ability is useless if you're not able to convince others why your project is useful or interesting. If you can't distill what you're doing into terms understandable by a non-expert, then you don't understand your own work well enough in the first place.

grimjack00(10000) 5 days ago [-]

I don't think that point is saying 'you have to be a persuasive public speaker and effectively do management's job for them'.

I really dislike speaking to groups of more than 2 or 3, and I want to stay as far away from any management-like duties as I can.

That said, I'm on a smallish dev team in a very large company, and I regularly find myself having to explain technical things about our products and components to people without a current technical background. If I can't clearly communicate, things can go badly to varying degrees.

NikolaNovak(4164) 5 days ago [-]

There's probably a few qualifiers:

1. What environment you work in: a large corporation vs small technical startup

2. What your ambitions are:

* Bluntly, if you want to be a very, very good coder who gets left alone and is largely handled/insulated by management, gets client requirements translated to and from them, that is certainly achievable by focusing on strictly technical skills. You can advance and be respected within that particular niche.

* If you want to be a solid team lead or manager who actually solves their team members' organizational problems / abstracts the organizational challenges away from them; understand and truly aid your client; interact with various parts of your and your client's organizations to make a larger change or lead a larger project; then a communication skillset is of course crucial.

It is almost tautological that to effectively communicate to various groups (programming peers, other technical personnel, management, business, clients, users) you have to be good at communicating to various groups :-/

--- As to what's difficult to acquire, again, it can be argued both ways. My wife is a tremendous communicator but technical abilities would be hard for her to acquire. At the same time, quite certainly a large amount of my technical colleagues and co-workers would and do clearly struggle to acquire communication skills. Or put another way, I'm personally in an environment with technical skillset out the wazoo (which is important and respected and appreciated), but communication skills are rare and prized.

jcranmer(10000) 5 days ago [-]

> focus on technical ability, that's a hell of a lot harder to come by than "powerpoint skills".

I don't know, actual presentation skills seem to be rather hard to come by. Just because someone can crank out presentations does not mean they actually have any skill in it.

_pmf_(10000) 4 days ago [-]

Politics and stupid games make up the majority of challenges by a vast margin. Most of us are actually happy when we face a technical challenge, because it's so rare compared to the arbitrary shit thrown in your way as a pissing contest between nominal 'stakeholders' who don't care about the product, the user, or the future of the company.

krageon(10000) 5 days ago [-]

I've actually found the opposite. Even when I was consistently evaluated to have above-average technical skills (for the company I work at), what kept my career growth from advancing was a lack of personal skills. What's more, ever since I started improving those my personal life started seeing a big improvement as well.

I recognise that for most, being at least moderately skilled at people comes as a given. For those of us where this is not the case it really helps to consciously work at those limitations.

lph(10000) 5 days ago [-]

Strongly disagree. Effective technical leadership is a rarer skill than pure technical chops, especially as one becomes more senior in an organization. And it's much more than "PowerPoint skills": it's driving consensus and making good decisions in highly ambiguous situations.

That said, I agree that it's not something CS majors should worry about much in school. It's something you learn as your career progresses.

reificator(10000) 5 days ago [-]

I have noticed a distinct improvement in my day to day life, not even just interviews, from really focusing on soft skills.

When I was a kid I'd start every sentence with 'actually' and use jargon without considering whether the other person was following what I was saying. I got told to shut the fuck up a lot as a result.

Now I've had at least a half-dozen people earnestly tell me I should be a teacher. When I think back to my childhood, that blows my mind.

The biggest benefits to my life and mental health came from learning:

* How to better squeeze requirements out of non-technical users, and explaining why the edge cases shouldn't be left to the last minute.

* How to talk to the boss(es) about prioritization, hard requirements vs nice-to-haves, and estimated timeframes. Understanding why you're being asked to do a task and varying your approach on that information will instantly improve your relationship with your boss.

* How to respectfully delegate and offer assistance.

* How to productively critique and receive criticism.

And the best thing? I don't feel at all like these came at the expense of technical skills. I need to be in a different mindset to learn each of them, so I structure my efforts around that.

ebg13(10000) 5 days ago [-]

> I don't see any evidence that it's actually true any more than it was back then - focus on technical ability

Learning how to communicate to non-programmers improves your technical ability.

In my experience, programmers who can't communicate to non-programmers don't understand that all scenarios are specific instances of increasingly more abstract scenarios. Programmers who don't internalize that build inelegant and fragile systems which work for specific arbitrarily chosen test data and then instantly fail in the real world.

If you can't explain a program further than jargon, then you haven't thought enough about what the program is actually supposed to accomplish.

mcguire(3126) 5 days ago [-]

It may not be any more true today than it was back then, but it was hella true back then. (Crap! Almost 30 years?! Geeze.) Not being able to convince other people that a) you know what you're doing and b) they should do what you suggest will hamstring your career. Just like it did back then.





Historical Discussions: Notre-Dame came closer to collapsing than people knew (July 17, 2019: 493 points)

(493) Notre-Dame came closer to collapsing than people knew

493 points 4 days ago by Luc in 1513th position

www.nytimes.com | Estimated reading time – 21 minutes | comments | anchor

PARIS — The security employee monitoring the smoke alarm panel at Notre-Dame cathedral was just three days on the job when the red warning light flashed on the evening of April 15: "Feu." Fire.

It was 6:18 on a Monday, the week before Easter. The Rev. Jean-Pierre Caveau was celebrating Mass before hundreds of worshipers and visitors, and the employee radioed a church guard who was standing just a few feet from the altar.

Go check for fire, the guard was told. He did and found nothing.

It took nearly 30 minutes before they realized their mistake: The guard had gone to the wrong building. The fire was in the attic of the cathedral, the famed latticework of ancient timbers known as "the forest."

The guard went to the attic of a small adjacent building, the sacristy.

Instead of calling the fire department, the security employee called his boss but didn't reach him.

The manager called back and eventually deciphered the mistake. He called the guard: Leave the sacristy and run to the main attic.

But by the time the guard climbed 300 narrow steps to the attic, the fire was burning out of control, putting firefighters in a near impossible position.

The miscommunication, uncovered in interviews with church officials and managers of the fire security company, Elytis, has set off a bitter round of finger-pointing over who was responsible for allowing the fire to rage unchecked for so long. Who is to blame and how the fire started have not yet been determined and are at the heart of an investigation by the French authorities that will continue for months.

But the damage is done. What happened that night changed Paris. The cathedral — a soaring medieval structure that has captured the hearts of believer and nonbeliever alike for 850 years — was ravaged.

Today three jagged openings mar Notre-Dame's vaulted ceiling, the stone of the structure is precarious, and the roof is gone. Some 150 workers remain busy recovering the stones, shoring up the building, and protecting it from the elements with two giant tarps.

Some of what went wrong that night has been reported in the French news media, including Le Monde and Le Canard Enchaîné. Now, The New York Times conducted scores of interviews and reviewed hundreds of documents to reconstruct the missteps — and the battle that saved Notre-Dame in the first four critical hours after the blaze began.

What became clear is just how close the cathedral came to collapsing.

The first hour was defined by that initial, critical mistake: the failure to identify the location of the fire, and by the delay that followed.

The second hour was dominated by a sense of helplessness. As people raced to the building, waves of shock and mourning for one of the world's most beloved and recognizable buildings, amplified over social media, rippled in real time across the globe.

That Notre-Dame still stands is due solely to the enormous risks taken by firefighters in those third and fourth hours.

Disadvantaged by their late start, firefighters would rush up the 300 steps to the burning attic and then be forced to retreat. Finally, a small group of firefighters was sent directly into the flames, as a last, desperate effort to save the cathedral.

"There was a feeling that there was something bigger than life at stake," said Ariel Weil, the mayor of the city's Fourth Arrondissement, home to the cathedral, "and that Notre-Dame could be lost."

Paris has endured so much in recent years, from terrorist attacks to the recent violent demonstrations by Yellow Vest protesters. But to many Parisians, the sight of Notre-Dame in flames was unendurable.

"For Parisians, Notre-Dame is Notre-Dame," said its rector, Msgr. Patrick Chauvet, who watched in tears that night as firefighters struggled to tame the blaze. "They couldn't think for one second that this could happen."

The fire warning system at Notre-Dame took dozens of experts six years to put together, and in the end involved thousands of pages of diagrams, maps, spreadsheets and contracts, according to archival documents found in a suburban Paris library by The Times.

The result was a system so arcane that when it was called upon to do the one thing that mattered — warn "fire!" and say where — it produced instead a nearly indecipherable message.

It made a calamity almost inevitable, fire experts consulted by The Times said.

"The only thing that surprised me is that this disaster didn't happen sooner," said Albert Simeoni, an expert born and trained in France, but now head of fire protection engineering at Worcester Polytechnic Institute in Massachusetts.

The ponderous response plan, for example, underestimated the speed at which a fire would spread in Notre-Dame's attic, where, to preserve the architecture, no sprinklers or fire walls had been added.

The plan's flaws may have been compounded by the inexperience of the security employee, who had been working at Notre-Dame just three days when the fire broke out.

Stationed since 7 a.m. within the pale-green walls of the tiny presbytery room, he was supposed to have been relieved after working an eight-hour shift. His replacement was absent, so he was on the second leg of a double shift.

The control panel he monitored was connected to an elaborate system consisting of tubes with tiny holes that ran throughout the cathedral complex. At one end of each tube was what is called an "aspirating" detector, a highly sensitive device that draws in air to detect any smoke.

The message that scrolled across the monitor was far more complicated than the mere word "Feu."

First it gave a shorthand description of a zone — the cathedral complex was divided into four — that read "Attic Nave Sacristy."

That was followed by a long string of letters and numbers: ZDA-110-3-15-1. That was code for one specific smoke detector among the more than 160 detectors and manual alarms in the complex.

Finally came the important part: "aspirating framework," which indicated an aspirating detector in the cathedral's attic, which was also known as the framework.

It remains unclear just how much of that entire alert the employee understood or conveyed to the guard — and whether the critical part of it was relayed at all, though Elytis insists it was.

By the time it was sorted out, the flames were already running wild, too high to be controlled by a fire extinguisher.

Finally, the guard radioed the fire security employee to call the fire department. It was 6:48, 30 minutes after the first red signal lit up the word "Feu."

All the sensitive technology at the heart of the system had been undone by a cascade of oversights and erroneous assumptions built into the overall design, said Glenn Corbett, an associate professor of fire science at John Jay College of Criminal Justice in New York.

"You have a system that is known for its ability to detect very small quantities of smoke," Mr. Corbett said. "Yet the whole outcome of it is this clumsy human response. You can spend a lot to detect a fire, but it all goes down the drain when you don't move on it."

If it took more than half an hour to call the fire department, it took just minutes once smoke appeared for the images to begin circulating around the world on social media.

"I think Notre-Dame is burning," someone posted on a Twitter video at 6:52 p.m. Within just a few minutes, the smoke, blowing west with the wind, was so thick it started to obscure the towers.

Minutes earlier, at 6:44, Elaine Leavenworth, a tourist from Chicago, had taken a picture of the facade against a clear blue sky. By the time she took a short walk across Pont Saint-Michel, she smelled smoke. She looked back to see the towers engulfed in smoke, and took another shot.

"Frightening how quickly it changed," Ms. Leavenworth posted on Twitter at 6:55 p.m., along with the two photos.

Monsignor Chauvet, the rector, had been chatting just a couple of hundred yards from the cathedral with shopkeepers, when suddenly one of them pointed up and exclaimed: "Look, there is smoke coming out!"

A sinking feeling took hold. "I said to myself: 'It's the forest that's caught fire,'" Monsignor Chauvet recalled, referring to the cathedral's attic.

He pulled out his cellphone and warned his staff. They said the fire department had been called but had yet to arrive.

"I was incapable of doing anything," Monsignor Chauvet said. "I couldn't say anything. I watched the cathedral burn."

Mr. Weil, the mayor of the Fourth Arrondissement, was just leaving a long meeting at the nearby Hôtel de Ville, the city hall, when he saw the smoke and ran toward Notre-Dame.

He called the mayor of Paris, Anne Hidalgo, and she rushed to meet him. When they reached the plaza, tears were streaming down Monsignor Chauvet's face. Ashes and fiery flakes drifted through the air.

"It was like an end-of-the-world atmosphere," Mr. Weil recalled.

On the plaza, the gathering crowd was stunned, immobilized.

"I cried because you are helpless to do anything," Monsignor Chauvet said. "You wait for the firefighters."

By the time Master Cpl. Myriam Chudzinski arrived, a few minutes before 7 p.m., Notre-Dame was surrounded by hundreds of horrified bystanders. The fire already glowed through the roof.

Corporal Chudzinski, 27, had wanted to be a firefighter since she was a little girl. Now she was staring speechless at a kind of blaze she had never encountered.

Her truck stopped on the Rue du Cloître Notre-Dame, a narrow street that runs on one side of the cathedral. The building was so gigantic, she couldn't see where the fire was spreading anymore.

"We were so small that it was hard to get a proper idea from the bottom of the cathedral," she said. "But it might have been better like that."

Better not to know the danger she was walking into.

Corporal Chudzinski's team was one of the first to arrive, and headed to the attic. The firefighters immediately plugged their hoses into the cathedral's dry risers, empty vertical pipes that would allow them to pump water up to the flames.

Bearing 55 pounds of gear and a breathing pipe on her shoulder, Corporal Chudzinski climbed the dark staircase in the transept on the cathedral's north side.

She knew the structure well, having drilled at Notre-Dame last fall. As she climbed, she recalled that the attic had no firewalls to prevent the spread of a blaze — they had been rejected to preserve the web of historic wooden beams.

With such intense flames, she realized, the attic would be a tinderbox.

Aside from the drill, Corporal Chudzinski had visited the cathedral once a few years ago, marveling at its vastness. "It was so peaceful, so quiet," she said. "But that night, it was more like hell."

Once at the top, Corporal Chudzinski and her team stopped on a cornice outside the attic as she took the lead dousing the flames, about 15 feet away.

Her colleague holding the hose behind her could see that the flames were being pushed by a brisk wind toward the cathedral's northern tower. The fire was starting to surround them, threatening to trap them outside, in the middle of the inferno. They retreated inside, toward the attic.

There was no wind there. But the air was so hot, so barely breathable, that for the first time that night, Corporal Chudzinski plugged in her breathing apparatus. Her thirst was terrible.

In the attic, the flames advanced as an unstoppable wall. They covered countless beams already and nibbled up the floor. Pieces of wood frayed and fell from the timbers one by one.

About 7:50, almost an hour into the fight, a deafening blast engulfed her. It was, she said, like "a giant bulldozer dropping dozens of stones into a dumpster."

The 750-ton spire of the cathedral, wrought of heavy oak and lead, had collapsed. The blast was so powerful it slammed all the doors of the cathedral shut. The showering debris broke several stone vaults of the nave.

Corporal Chudzinski and other firefighters happened to be behind a wall when a fireball hurtled through the attic. It probably saved them. "I felt useless, ridiculously small," she said. "I was just powerless."

The generals overseeing the operation called everyone back. Some 50 firefighters, including Corporal Chudzinski and her team, were ordered to come down.

They battled the fire from the ground, drawing water from the Seine. But it wasn't working.

Before the blast, Corporal Chudzinski and her colleagues had made a critical observation: The flames were endangering the northern tower. The realization would change the course of the fight.

Inside that tower, eight giant bells hung precariously on wooden beams that were threatening to burn. If the beams collapsed, firefighters feared, the falling bells could act like wrecking balls and destroy the tower.

If the northern tower fell, firefighters believed, it could bring down the south tower, and the cathedral with it.

Near 8:30 p.m., President Emmanuel Macron arrived to survey the damage, along with Prime Minister Édouard Philippe and other top officials.

A group of about 20 officials, including Mayor Hidalgo, Mayor Weil and Monsignor Chauvet, convened across the plaza at Police Headquarters for a briefing by Gen. Jean-Claude Gallet, the head of the Paris fire brigade.

Clad in firefighting gear, dripping with water, General Gallet, 54, had served in Afghanistan and specialized in crisis management. He entered the conference room and gave them the bad news.

The attic could no longer be saved; he had decided to let it go. He would have his brigades throw all their energy into saving the towers, focusing on the northern one, already on fire.

"He came in and told us, 'In 20 minutes, I'll know if we've lost it,'" Mr. Weil recalled. "The air was so thick. But we knew what he meant: He meant Notre-Dame could collapse."

"At that point," Mr. Weil added, "it was clear that some firefighters were going to go into the cathedral without knowing if they would come back out."

Monsignor Chauvet wept. The prime minister circled his thumbs nervously.

President Macron remained silent, but appeared to give tacit approval to General Gallet's decision.

Out on the square, a temporary command post had been set up. There, General Gallet's deputy, Gen. Jean-Marie Gontier, was managing the firefighters on the front lines.

He gathered them around him to prepare for the second stage of the battle. A slippery carpet of ashes covered the stones underfoot in black and gray.

The situation looked grim. Whiteboards displayed sketches of the fire's progress. Police drone footage showed the cathedral's roof as a fiery cross illuminating the night sky. At the center was a gaping hole where the spire had stood for more than 160 years.

Thick smoke was billowing from the wooden frame of the northern tower. Thumb-size embers flew like glittering hornets and pierced some hoses. One of the dry risers needed to get water to the top of the cathedral was leaking, lowering the water pressure.

Now, all that time lost because the firefighters had been called late was exacting its toll. General Gontier compared it to a footrace. "It's like starting a 400 meters, several dozen meters behind," he said.

Gabriel Plus, the Paris fire brigade spokesman, said, "We needed to make decisions, quickly."

At the command post, Master Sgt. Rémi Lemaire, 39, had an idea.

What if they went up the stairs in the southern tower, where he had been earlier in the fight. That way, they could carry up two additional hoses that could be plugged directly into a fire truck. It would give the team more water pressure than the leaking riser could muster.

And then from there, the firefighters could enter the blazing northern tower.

It was a high-risk strategy. But General Gontier agreed.

Sergeant Lemaire had already seen the perils that the northern tower held earlier that evening. In the time it took to decide on the new plan, things had gotten only worse.

"We were at first reluctant to go because we weren't sure we'd have an escape route," he recalled.

A group of firefighters from a neighboring suburb refused to go, but another team said it would do it.

They moved forward with their plan to save the northern tower, which was already aflame.

Sergeant Lemaire led them up the southern tower, and they set up on a platform between the two towers.

He and his team dropped hoses over the side to connect to a fire truck on the ground, hoping to get more pressure than the leaking standpipe could muster.

Another dozen or so firefighters doused the flames that threatened to collapse the floor beneath them. Others held back flames on the roof.

The gigantic bells above their heads could tumble down at any moment. They needed to work quickly.

The firefighters moved higher, the structure ever more precarious as they went.

But they kept going, up another floor, closer to another set of bells.

It took them 15 decisive minutes, but by 9:45 p.m. the flames were tamed.

General Gontier went up on the balcony of Notre-Dame to inspect the situation.

"She is saved," he declared as he descended.

By 11, General Gallet told officials they were confident that the fire in the towers would be brought under control. Around 11:30, President Macron addressed the nation live on television from in front of the cathedral.

"The worst was avoided, even though the battle is not yet over," he said. Then he made a pledge: "We will rebuild this cathedral together."

Over the past three months, investigators have conducted some 100 interviews and sifted the rubble, looking for clues to what started the fire.

They have focused on the possibility of a short-circuit in the electrified bells of the spire, or in the elevators that had been set up on the scaffolding to help workers carry out renovations. They are also considering cigarette butts, which were found on the scaffolding, apparently left by workers.

"We're not ruling out any scenario, we just know it wasn't criminal," said a Paris police official, speaking on the condition of anonymity because the investigation is still underway.

The miscommunication that allowed the fire to rage unchecked for so long is now the source of a bitter dispute over who is responsible.

Church officials say that the employee for Elytis, the fire security company, never mentioned the framework of the cathedral's roof. "Several of them had a walkie-talkie and all heard 'Attic Nave Sacristy,'" said André Finot, a spokesman for Notre-Dame. "That's all."

Monsignor Chauvet, Notre-Dame's rector, has refused to make the employees available independently for interviews, citing the investigation. "Some might lose their jobs," he said. "I asked them not to talk."

Arnaud Demaret, the chief executive officer of Elytis, said his employee was still in shock. The company received two death threats over the telephone in the days after the fire, he said.

But he insisted that his employee had communicated the fire's location properly.

"There is only one wooden framework," Mr. Demaret said in an interview. "It's in the attic."

"Had the church employee gone up to the attic right after my employee alerted him," he said, "he would have seen the smoke."

After the fire was brought under control, Sergeant Lemaire and his colleagues stayed up on the roof to put out the flames there and protect the southern tower, where three small fires had started.

Corporal Chudzinski spent the rest of the night helping make space for other fire trucks and securing the area. Then she went back to her station. The city was quiet.

She remembered her retreat, and the drone footage showing the cathedral from above as a flaming cross. Only then, when she was no longer absorbed in fighting the fire, did she fully comprehend the scope of the response.

"I didn't know how huge the teamwork had been," she said.

Miraculously, no one was killed.

Three days later, she and Sergeant Lemaire were among the hundreds of firefighters and police officers honored by President Macron at the Élysée Palace.

Countless Parisians stopped by the city's fire stations to donate food and small gifts and express their thanks. Notes came from around the world.

"These people were heroes," Mayor Weil said.

Still, more than a few wondered why, at a time when citizens were taking to the streets protesting inequality and economic hardship, when so many were dying in distant wars and on migrant boats sailing for Europe, Notre-Dame should matter.

But Notre-Dame was more than a building. It rests on Île de la Cité, the island in the middle of the Seine River where Paris was born. Made and remade over the centuries, it remains a focal point of French culture that has responded to the demands of each age it has passed through.

And in the present moment, it represented an unbreakable link with what, for many French, is the essence of their increasingly fragile nationhood.

"Notre-Dame is good and old: Perhaps we'll even see her bury Paris, whose birth she witnessed," the poet Gérard de Nerval once said.

That was back in the 19th century.

That sense of the cathedral as a living, wounded entity has only intensified since the fire.

"First off, this is all about our fragility," Monsignor Chauvet, the rector, said on reflection. "We are as nothing. The fragility of man, in respect to God. We are nothing but — creatures."

Aurélien Breeden and Constant Méheut contributed reporting from Paris. Produced by Mona Boshnaq, Allison McCann, Andrew Rossback, Gaia Tripoli and Jeremy White. Additional work by Michael Beswetherick and Bedel Saget.




All Comments: [-] | anchor

rahkiin(4116) 4 days ago [-]

> "You have a system that is known for its ability to detect very small quantities of smoke," Mr. Corbett said. "Yet the whole outcome of it is this clumsy human response. You can spend a lot to detect a fire, but it all goes down the drain when you don't move on it."

It seems to me that the biggest issue with the fire system was the lack of a graphical display of the alarm... with all the security cameras and screens likely to already be in place, why not add one with maps of the cathedral, showing up in red when an alarm triggers? The need to understand shortcodes is waiting for disaster...

BryantD(2874) 4 days ago [-]

I do technical operations for a living. If I had a buck for every time I've seen an alert with unclear or non-actionable information, I would have retired ten years ago. This article describes a tragic situation, but it's not terribly surprising.

Other things I saw:

- people scared to talk about what happened because of blame issues

- putting new employees in critical situations with only three days of training

userbinator(819) 4 days ago [-]

Yes, it's really surprising that a system which 'took dozens of experts six years to put together, and in the end involved thousands of pages of diagrams, maps, spreadsheets and contracts' didn't contain something that I've seen in nearly all the reasonably large buildings I've been in, some of which have probably been there for many decades:

https://static1.squarespace.com/static/53ca9e5ce4b0bcaba3c71...

droithomme(10000) 4 days ago [-]

> It seems to me that the biggest issue with the fire system was the lack of a graphical display of the alarm

The man monitoring the alarm, even though he had just started the job, says he did understand the code and where the alarm was and communicated that information to the guard. He does not claim that the problem was he didn't understand the code or there was anything wrong with the design of the alarm system. The guard on the other hand says the man monitoring the alarm told him the wrong location to go to.

Neither of these issues though are related to the subsequent decision to call his supervisor as the number of alarms going off increased, and then sit around and do nothing while waiting for a call back. Nor does this have anything to do with the design of the alarm display.

zkms(4076) 4 days ago [-]

> It seems to me that the biggest issue with the fire system was the lack of a graphical display of the alarm

i agree that the lack of a graphic/synoptic display for the smoke detection system was a horrid flaw, especially when rapid response is needed.

However, i feel that the lack of an automatic extinguishing system was pretty grave too; given the excellent record of automatic fire sprinklers in containing -- if not fully extinguishing -- fires, well before the fire brigade arrives.

Of course, accidental activation is a serious concern, but this can be addressed, whether through the use of a double-interlock preaction scheme (where the piping is kept water-free with a valve that is opened only when the heat/smoke detectors signal a fire; and even once the pipes are filled with water, sprinkler heads need to be individually activated with heat) or using a gaseous clean-agent instead of water.

blauditore(10000) 4 days ago [-]

I would guess no prioritization of usability ('it's just an internal tool, we can train people, it's cheaper anyway'). This seems like a common pattern on software projects, especially company-internal ones.

kweks(4162) 4 days ago [-]

Small aside, there are / were no security cameras anywhere in the cathedral.

pintxo(10000) 4 days ago [-]

Yes and no.

There seem to be multiple points of failure on the critical path. Yes the interface might be sub-perfect, but there is also a human on the critical path, who started work at 7am, meaning he's been working for 12 hours now. There is a reason regulation usually mandates limited hours for safety critical jobs.

Why not automatically inform the fire department, ensuring that there is not one single critical path but an alternate option? Relying on the presence and correct reaction of a single security guard sounds like a less than perfect idea.

WalterBright(4074) 4 days ago [-]

These all seem obvious now in hindsight, but that's always the way of things. People, even educated, intelligent, and earnest people are very bad at user interface design. Good UI design comes from experience, not foresight.

For example, the aviation industry has standard terms for things. 'Takeoff Power' means full power. Can you guess what happened? A pilot needed to abort a landing and yelled 'takeoff power' to the copilot, who promptly chopped the power, and an accident ensued. This phrase was then changed to (I think) 'Full Power'.

The Air Force didn't learn from that civil aviation incident until it happened to them, too, then they fixed it.

It seems obvious, doesn't it? Everybody missed it.

gpvos(1625) 4 days ago [-]

It doesn't even have to be graphical (although I guess that would be best), as long as the location is clear, or a map is on hand.

mshook(10000) 4 days ago [-]

Ok so the UI or procedures sucked but what's really aggravating as a tax payer, at least to me, is the lack of foresight and proper funding where it matters. Though to be honest it's not different from most work environments...

Fire is not something new for historical monuments.

I heard on the radio right after the collapse an architect saying at least half of fires starting on historical monuments happen during renovations. And quickly searching shows an insurance company saying it could be close to 3/4 of them (https://www.cahiers-techniques-batiment.fr/article/notre-dam...).

Sure that wouldn't have fixed everything, but now, instead of adding a couple of dozen or a hundred million euros to fund the fire warning system, we're most likely looking at a cost of multiple billions...

WalterBright(4074) 4 days ago [-]

> Fire is not something new for historical monuments.

I worry about the great libraries being consumed by fire. I do not understand why there isn't a larger effort to digitize them.

hn_throwaway_99(4055) 4 days ago [-]

> Sure that wouldn't have fixed everything but now instead of adding a couple of dozens or hundred millions to fund the fire warning system, we're now most likely looking at a cost of multiple billions...

But was lack of funds really the root cause of the problem? If anything, the overcomplicated nature of the fire warning system seemed to have all the hallmarks of having too much money thrown at it and too many 'cooks in the kitchen' designing it.

rdl(235) 4 days ago [-]

Saved thanks to 1) God 2) brave individual front-line firefighters.

Almost lost due to: 1) Bad UX 2) Bureaucracy 3) Insufficient drills and training for staff

Sounds like France overall.

MH15(4156) 4 days ago [-]

I believe you meant: 1) brave individual front-line firefighters 2) not much else

theon144(10000) 4 days ago [-]

Can I just take a moment to praise the article's presentation?

This comes up every time a NYTimes interactive comes out but wow, the narrative flow on this one really is incredible. The animations smoothly transitioning to full text, the collage of social media posts... It feels much more like watching a documentary than reading an article.

I hope this catches on, it's what I've been promised with this whole hypermedia shebang!

js2(571) 4 days ago [-]

'Snow Fall' was, I think, the NYT's first serious effort at a multimedia article:

https://en.wikipedia.org/wiki/Snow_Fall

Similar comment at the time:

> The NYT just kinda blew my mind. A newspaper article just blew my mind. This is, by far, the best multimedia storytelling I think I've ever seen. Kudos to the team involved in putting this together, you've shown me the future of media and the internet.

https://news.ycombinator.com/item?id=4951041

I don't really understand how the same paper does these incredible articles and then totally whiffs on other attempts such as:

https://www.nytimes.com/interactive/2019/06/07/arts/dance-da...

zajio1am(10000) 4 days ago [-]

> Can I just take a moment to praise the article's presentation?

I would disagree. It feels irritating to me when some parts of the article content are scrolling and some parts are still (unless the still parts are just irrelevant borders).

Also the CPU load is high, which causes my computer fan to run at full speed during scrolling.

chrisbro(10000) 4 days ago [-]

Absolutely, this article is stunning. And I'd never seen that picture at dusk of the fire with the Eiffel tower in the background. Amazing all around.

TN1ck(4046) 4 days ago [-]

I worked at a startup where the goal was to build a tool to create stories similar to NYT without needing a coder. We got quite far, and the articles done with it are already quite impressive. The task is really difficult though - performance is hard and responsiveness is super tricky. I'm proud that we didn't end up with scrolljacking and it feels quite web native. https://info.graphics is the current soft launch and has some cool stories if anyone is interested.

contravariant(10000) 4 days ago [-]

They also deserve praise for achieving this without using scroll hijacking (at least not the egregious kind).

jmalkin(10000) 4 days ago [-]

Yeah this is gorgeous

I wonder how much it costs them compared to a regular article

jolmg(4149) 4 days ago [-]

It's also pretty cool that despite all that, the whole article is still readable on terminal web browsers like elinks and w3m. Even the captions in the animations are present.

doctorRetro(10000) 4 days ago [-]

Seconded. I've complained before about websites that try to get too magazine-like with their articles and how it ruins the narrative. But this? This is done really, really well.

geniium(10000) 4 days ago [-]

They did a great job. I was telling myself the same while scrolling through the article. This is really well done. Congrats NYT!

glerk(10000) 4 days ago [-]

Yeah, props to the NYT! They must be the only ones so far who managed to make scrolljacking useful to the reader and not annoying.

james_pm(3651) 4 days ago [-]

Interesting that during the fire, the world was piling on the fire fighting efforts as being insufficient. As the true picture started coming out over the next few days and now with this article, it's clear that the fire fighters had a plan and executed it well, likely saving the structure.

ls612(4109) 4 days ago [-]

I'll admit I was one of those people who was very pessimistic about the firefighting effort. To be honest, seeing the fire at Notre Dame gave me flashbacks to 9/11 and seeing the towers burning. I thought in light of the enormous loss of life that the new policy might be to not send in a ton of people to a large structure in danger of collapse.

SmellyGeekBoy(4164) 4 days ago [-]

The world? IIRC it was pretty much just one person...

johnchristopher(3695) 4 days ago [-]

I suffer from FOMO for news like that. I wish I could find a monthly reading for important events and an alert system for things that really should be looked at when they happen.

rags2riches(10000) 4 days ago [-]

Even presidents were giving advice, as I recall...

segmondy(10000) 4 days ago [-]

Amazing read, goes to show that fighting fire is not just about pointing a hose at the fire. There are a lot of strategic decisions that have to be made. When you have multiple fires, which one do you point the hose at, where do you position yourself, in what order do you fight them? You can't just look at the fire, but everything around it. Timing matters, I suppose SREs can relate. But I believe developers can also learn from this by doing a lot of introspection.

liberte82(10000) 4 days ago [-]

You can't just send helicopters and dump water on it?

JshWright(3571) 4 days ago [-]

Relatedly... I have a foot in both worlds (part-time firefighter/paramedic, full-time dev/ops/SRE/etc). For the better part of a year now I've been kicking around the idea of a conference talk about the transferable lessons between the two.

syxun(10000) 4 days ago [-]

How's the investigation going? Who set the church on fire?

selimthegrim(2095) 4 days ago [-]

Conspiracy theory much? This is HN, not Breitbart.

JshWright(3571) 4 days ago [-]

The article understates the risk the team that went to the north tower undertook. It mentions that they might not have an escape path, but that really doesn't communicate how exposed they were.

It's not uncommon for a fire crew to advance into an exposed position to make an attack or search for victims. As was the case here, there is usually a hoseline or two set up in a defensive position, covering their escape path. The difference is, this is usually done in a comparatively small space (i.e. a crew holding the staircase of a house while a second crew searches the upstairs). If the defensive position starts to look at risk, the exposed crew can make it back within a minute or two.

The crew in the northern tower of Notre-Dame would have taken far longer than that to make it back through, and it's very likely that if the defensive position had been lost, it wouldn't have been recognized until it was too late, trapping the crew in the tower.

karambahh(10000) 4 days ago [-]

The operations commander purposely chose personnel with no children or spouse to go on the northern tower

The call was made knowing that there may be casualties among them

chaosbolt(10000) 4 days ago [-]

>Paris has endured so much in recent years, from terrorist attacks to the recent violent demonstrations by Yellow Vest protesters.

Seriously? Putting the terrorist attacks in the same context as the Yellow Vest protests? One is an attack on civilians by extremist foreign groups, the other is those citizens protesting against a government decision that hurts them, which is a constitutional right in both France and the US... And this right to protest is what arguably made France a republic in the first place and led many other countries to follow on its steps. And labeling the protest as violent is dishonest at best, the police were violent and started throwing tear gas grenades at people, which made many protesters lose a hand or an eye (only the ones I saw, I'm sure there are many others who weren't recorded and put on Youtube).

bengalister(10000) 4 days ago [-]

Even if you think the protests were justified and the French will benefit from them, in the short term they hurt Paris (much destruction of street furniture, monuments, banks, restaurants, shops) and negatively affected the life of Parisians (burnt cars, closed shops, tourism industry affected, etc.)

That is only what is meant here, of course these are 2 different things.

They could also have added the Algerian African Cup victories 'celebrations' that turned into riots and looting (we'll see what happens next Friday).

emptyfile(10000) 4 days ago [-]

>Seriously? Putting the terrorist attacks in the same context as the Yellow Vest protests?

Um, yes? People died in protests, and many terrorists were from France or Belgium.

fermigier(3297) 4 days ago [-]

At least 11 persons have died in the context of the yellow vest protests, which puts them in league (in terms of casualties) with a major terror attack:

Sources: https://www.liberation.fr/checknews/2019/01/30/qui-sont-les-... https://www.lci.fr/social/11-morts-depuis-le-debut-de-la-cri... https://fr.wikinews.org/wiki/Gilets_jaunes_en_France_:_stati...

saiya-jin(10000) 4 days ago [-]

Yeah, seriously. Yellow vest protests were a fine example of 'I want more without doing more, without losing anything I already have (and french citizens have plenty), and I will take everybody else in the country including visitors as hostage and eventually even burn the place down just to get it, and screw the rest of the population/world'.

They do NOT represent common French people, they are in one way or another extremists, doing what extremists usually do - wreak havoc left and right, and don't give a nanofraction of a fk about others and consequences. How do I know - I have tons of French colleagues, actually roughly 50% of the workforce, and every single one of them was annoyed as hell with them. Initial sympathies for protests against the additional 10% diesel tax waned very quickly as annoyances mounted. Utter disregard for any environmental issues was obvious - the reason for the tax, constant burning of crap in barrels at yellow vest posts, day-long revving of engines in the centers of towns, etc.

Maybe I missed it, but where were the protests about true French issues - bureaucracy that is a far cry from a properly functioning western democracy and is a proper drag on the economy (it took a colleague 1.5 years to change his driving license, and the story is beyond ridiculous), the messed-up 'me-first' mentality so common there, or rampant corruption at all levels of society? Where was something about making government slim and effective? Those actions could bring lower taxes, some real benefits to everybody. No, these people wanted to keep everything as it is, comfy stable jobs with 10 weeks of paid vacation and tons of other perks, and just somehow magically have more money.

dang(173) 4 days ago [-]

Please don't react to the most provocative thing in an article by bringing it in here to have an off topic flamewar. That just starts a different kind of fire.

https://news.ycombinator.com/newsguidelines.html

We detached this subthread from https://news.ycombinator.com/item?id=20457706.

jamesfe(10000) 4 days ago [-]

Then it must have been the police that broke into the Dior and luxury stores on the Champs-Élysées and looted them... or that set fire to buildings. Or it was the police that spray-painted 'the gilets jaunes will win' on the Arc de Triomphe.

This behavior forced the city to endure blockades every Saturday and holiday for months. Small businesses near the riot areas were closed and had their windows shattered or spray-painted by protestors. Ordinary citizens and their kids couldn't do errands, and in some cases couldn't leave their apartments on one of their two days off a week, for months on end.

The protests were violent. Who started it you can debate, but against tons of news articles, photos and videos of people fighting hand to hand, getting gassed or throwing cobblestones, lighting buildings on fire...you don't really have much of a factual foundation to say Paris didn't suffer months of violent protests before the ND fire.

jayess(2515) 4 days ago [-]

By attacking, looting, and setting fire to shops that have nothing to do with the government?

rb666(10000) 4 days ago [-]

The French have an overly large fondness for protesting. If they would save it for things that actually matter, the rest of the world wouldn't find it so laughable. The Yellow Vesters are even protesting AGAINST climate action (higher fuel taxes). I wouldn't call it terrorism, but it's certainly not helping anyone.

bsaul(4167) 4 days ago [-]

I don't think they meant the yellow vests, as in the people in yellow vests doing all the violence. Rather, the whole series of protests, violence, shop destruction, streets in flames, police throwing tear gas, etc. happening for months and months, every single Saturday.

vnchr(3524) 4 days ago [-]

Have they figured out how the fire started?

mr_crankypants(10000) 4 days ago [-]

According to TFA, no, but they are nearly certain it was accidental. Top candidates include faulty electrical wiring and cigarette butts.

swebs(3886) 4 days ago [-]

Weird that the article doesn't mention the string of arson and other attacks on Catholic churches in France in the weeks leading up to the Notre Dame fire.

https://www.newsweek.com/spate-attacks-catholic-churches-fra...

lfam(10000) 4 days ago [-]

It's not weird that this article, which translates the technical firefighting and structural aspects of the fire for laypeople, does not mention a xenophobic conspiracy theory that has been rejected by the French authorities.

qazpot(10000) 4 days ago [-]

Those are not fires, that's just cultural enrichment.

acqq(1232) 4 days ago [-]

The part interesting for us:

"The fire warning system at Notre-Dame took dozens of experts six years to put together, and in the end involved thousands of pages of diagrams, maps, spreadsheets and contracts, according to archival documents found in a suburban Paris library by The Times.

The result was a system so arcane that when it was called upon to do the one thing that mattered — warn "fire!" and say where — it produced instead a nearly indecipherable message."

The message (assembled):

"Attic Nave Sacristy." "ZDA-110-3-15-1" "aspirating framework"

mschuster91(3232) 4 days ago [-]

The thing is: usually there's a fire plan - firefighters arrive at the building, read the code and look it up in the plan where exactly the trigger (or inlet for the aspiration tube) is. They do this insanely fast as they have been trained for this - while that security guard and the overtired technician were not.

This is a lack of training and a procedural failure - the usual configuration/process is that any alarm either automatically calls firefighters/police or via procedure the guard manually does this - immediately, not after 30min.

oftenwrong(415) 4 days ago [-]

The article makes it clear that the fire alarm system did warn 'fire':

>The employee monitoring the smoke alarm panel at Notre-Dame cathedral was just three days on the job when the red warning light flashed on the evening of April 15: "Feu." Fire.

and

>Finally, the guard radioed the fire security employee to call the fire department. It was 6:48, 30 minutes after the first red signal lit up the word "Feu."

The message specifying the location of the fire could have been more clear, but the fact that the fire alarm was going off was clear. When you see 'Feu' flashing red on the alarm panel, the correct course of action is obvious. The greatest failure was that the guards tried to go find the fire, wasting 30 minutes, instead of just calling the fire brigade immediately.

If a tsunami warning is being sounded, would you go down to the beach to try to spot it first, or would you run for the high ground?

If a warning of a bombing came, would you go outside with binoculars and try to spot the planes first, or would you run down into the air raid shelter?

9dev(10000) 4 days ago [-]

That's really a textbook example of software projects going horribly wrong. It's often a good idea to take a step back and analyze whether you're overcomplicating things. Often when working on a concept, you're too deep in to see the simple solutions anymore.

dwighttk(3038) 4 days ago [-]

Attic nave sacristy is a location

kweks(4162) 4 days ago [-]

It's a very, very vague error, and totally understandable that the new guard sent his colleague to the wrong place.

We'd expect a phrase to provide granularity from general to specific (like YYYY-MM-DD), and as such, you'd read it exactly as: fire in the attic of the sacristy. The 'nave' text is a bit weird, as the sacristy doesn't have a nave, but at the same time, the nave doesn't have a sacristy.

lqet(4159) 4 days ago [-]

Makes you wonder if the millions of euros and years (decades) of construction delay caused by the fire alarm system at the BER airport will, in the end, be worth the effort [0].

[0] https://en.wikipedia.org/wiki/Berlin_Brandenburg_Airport#Con...

seanhandley(3449) 4 days ago [-]

As someone who has written software that interprets signals from fire alarm panels, I can tell you that it's decipherable if you're trained on the panel model.

I suspect the employee, having been on duty only a few days, was not.

baggy_trough(10000) 4 days ago [-]

Wonder why the system didn't call the fire department itself. Why bother with the human element of the guard at all? This is Notre Dame we're talking about.





Historical Discussions: The PGP Problem (July 17, 2019: 479 points)

(479) The PGP Problem

479 points 5 days ago by bellinom in 2142nd position

latacora.micro.blog | Estimated reading time – 17 minutes | comments | anchor

Cryptography engineers have been tearing their hair out over PGP's deficiencies for (literally) decades. When other kinds of engineers get wind of this, they're shocked. PGP is bad? Why do people keep telling me to use PGP? The answer is that they shouldn't be telling you that, because PGP is bad and needs to go away.

There are, as you're about to see, lots of problems with PGP. Fortunately, if you're not morbidly curious, there's a simple meta-problem with it: it was designed in the 1990s, before serious modern cryptography. No competent crypto engineer would design a system that looked like PGP today, nor tolerate most of its defects in any other design. Serious cryptographers have largely given up on PGP and don't spend much time publishing on it anymore (with a notable exception). Well-understood problems in PGP have gone unaddressed for over a decade because of this.

Two quick notes: first, we wrote this for engineers, not lawyers and activists. Second: "PGP" can mean a bunch of things, from the OpenPGP standard to its reference implementation in GnuPG. We use the term "PGP" to cover all of these things.

The Problems

Absurd Complexity

For reasons none of us here in the future understand, PGP has a packet-based structure. A PGP message (in a ".asc" file) is an archive of typed packets. There are at least 8 different ways of encoding the length of a packet, depending on whether you're using "new" or "old" format packets. The "new format" packets have variable-length lengths, like BER (try to write a PGP implementation and you may wish for the sweet release of ASN.1). Packets can have subpackets. There are overlapping variants of some packets. The most recent keyserver attack happened because GnuPG accidentally went quadratic in parsing keys, which also follow this deranged format.
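
To get a feel for just one corner of that encoding, here is a minimal Python sketch of the "new format" length rules, based on a reading of RFC 4880. Treat it as an illustration, not a vetted parser: old-format packets and reassembly of partial bodies are omitted.

    # Illustration only: decoding the "new format" length octets (RFC 4880 4.2.2).
    def read_new_format_length(data: bytes, offset: int = 0):
        """Return (length, octets_consumed, is_partial) for a new-format packet."""
        first = data[offset]
        if first < 192:                                   # one-octet length
            return first, 1, False
        if first <= 223:                                  # two-octet length
            return ((first - 192) << 8) + data[offset + 1] + 192, 2, False
        if first == 255:                                  # five-octet length
            return int.from_bytes(data[offset + 1:offset + 5], "big"), 5, False
        # 224..254: a "partial body length", a power of two; more chunks follow
        return 1 << (first & 0x1F), 1, True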

That's just the encoding. The actual system doesn't get simpler. There are keys and subkeys. Key IDs and key servers and key signatures. Sign-only and encrypt-only. Multiple "key rings". Revocation certificates. Three different compression formats. This is all before we get to smartcard support.

Swiss Army Knife Design

If you're stranded in the woods and, I don't know, need to repair your jean cuffs, it's handy if your utility knife has a pair of scissors. But nobody who does serious work uses their multitool scissors regularly.

A Swiss Army knife does a bunch of things, all of them poorly. PGP does a mediocre job of signing things, a relatively poor job of encrypting them with passwords, and a pretty bad job of encrypting them with public keys. PGP is not an especially good way to securely transfer a file. It's a clunky way to sign packages. It's not great at protecting backups. It's a downright dangerous way to converse in secure messages.

Back in the MC Hammer era from which PGP originates, "encryption" was its own special thing; there was one tool to send a file, or to back up a directory, and another tool to encrypt and sign a file. Modern cryptography doesn't work like this; it's purpose built. Secure messaging wants crypto that is different from secure backups or package signing.

Mired In Backwards Compatibility

PGP predates modern cryptography; there are Hanson albums that have aged better. If you're lucky, your local GnuPG defaults to 2048-bit RSA, the 64-bit-block CAST5 cipher in CFB, and the OpenPGP MDC checksum (about which more later). If you encrypt with a password rather than with a public key, the OpenPGP protocol specifies PGP's S2K password KDF. These are, to put it gently, not the primitives a cryptography engineer would select for a modern system.

We've learned a lot since Steve Urkel graced the airwaves during ABC's TGIF: that you should authenticate your ciphertexts (and avoid CFB mode) would be an obvious example, but also that 64-bit block ciphers are bad, that we can do much better than RSA, that mixing compression and encryption is dangerous, and that KDFs should be both time- and memory-hard.
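
For contrast, here is roughly what a modern, memory-hard password KDF call looks like, sketched with PyNaCl's Argon2id binding (assuming PyNaCl is installed; the parameters are the library's "moderate" presets, not a tuning recommendation for any particular deployment):

    from nacl import pwhash, utils

    salt = utils.random(pwhash.argon2id.SALTBYTES)
    key = pwhash.argon2id.kdf(
        32, b"correct horse battery staple", salt,
        opslimit=pwhash.argon2id.OPSLIMIT_MODERATE,   # time-hard
        memlimit=pwhash.argon2id.MEMLIMIT_MODERATE,   # memory-hard
    )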

Whatever the OpenPGP RFCs may say, you're probably not doing any of these things if you're using PGP, nor can you predict when you will. Take AEAD ciphers: the Rust-language Sequoia PGP defaulted to the AES-EAX AEAD mode, which is great, and nobody can read those messages because most PGP installs don't know what EAX mode is, which is not great. Every well-known bad cryptosystem eventually sprouts an RFC extension that supports curves or AEAD, so that its proponents can claim on message boards that they support modern cryptography. RFC's don't matter: only the installed base does. We've understood authenticated encryption for 2 decades, and PGP is old enough to buy me drinks; enough excuses.

You can have backwards compatibility with the 1990s or you can have sound cryptography; you can't have both.

Obnoxious UX

We can't say this any better than Ted Unangst:

There was a PGP usability study conducted a few years ago where a group of technical people were placed in a room with a computer and asked to set up PGP. Two hours later, they were never seen or heard from again.

If you'd like empirical data of your own to back this up, here's an experiment you can run: find an immigration lawyer and talk them through the process of getting Signal working on their phone. You probably don't suddenly smell burning toast. Now try doing that with PGP.

Long-Term Secrets

PGP begs users to keep a practically-forever root key tied to their identity. It does this by making keys annoying to generate and exchange, by encouraging "key signing parties", and by creating a "web of trust" where keys depend on other keys.

Long term keys are almost never what you want. If you keep using a key, it eventually gets exposed. You want the blast radius of a compromise to be as small as possible, and, just as importantly, you don't want users to hesitate even for a moment at the thought of rolling a new key if there's any concern at all about the safety of their current key.

The PGP cheering section will immediately reply "that's why you keep keys on a Yubikey". To a decent first approximation, nobody in the whole world uses the expensive Yubikeys that do this, and you can't imagine a future in which that changes (we can barely get U2F rolled out, and those keys are disposable). We can't accept bad cryptosystems just to make Unix nerds feel better about their toys.

Broken Authentication

More on PGP's archaic primitives: way back in 2000, the OpenPGP working group realized they needed to authenticate ciphertext, and that PGP's signatures weren't accomplishing that. So OpenPGP invented the MDC system: PGP messages with MDCs attach a SHA-1 of the plaintext to the plaintext, which is then encrypted (as normal) in CFB mode.

If you're wondering how PGP gets away with this when modern systems use relatively complex AEAD modes (why can't everyone just tack a SHA-1 to their plaintext), you're not alone. Where to start with this Rube Goldberg contraption? The PGP MDC can be stripped off messages –– it was encoded in such a way that you can simply chop off the last 22 bytes of the ciphertext to do that. To retain backwards compatibility with insecure older messages, PGP introduced a new packet type to signal that the MDC needs to be validated; if you use the wrong type, the MDC doesn't get checked. Even if you do, the new SEIP packet format is close enough to the insecure SE format that you can potentially trick readers into downgrading; Trevor Perrin worked the SEIP out to 16 whole bits of security.
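
A conceptual Python sketch of the construction described above (illustrative only, not an interoperable OpenPGP implementation; the real hash also covers a random prefix):

    import hashlib

    def attach_mdc(plaintext: bytes) -> bytes:
        # Append a SHA-1 over the plaintext plus the 2-byte MDC packet header;
        # the whole thing is then encrypted in CFB mode.
        mdc_header = b"\xd3\x14"                      # packet tag 19, length 20
        return plaintext + mdc_header + hashlib.sha1(plaintext + mdc_header).digest()

    def strip_mdc(decrypted: bytes) -> bytes:
        # The stripping problem described above: just drop the trailing 22 bytes.
        return decrypted[:-22]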

And, finally, even if everything goes right, the reference PGP implementation will (wait for it) release unauthenticated plaintext to callers, even if the MDC doesn't match.

Incoherent Identity

PGP is an application. It's a set of integrations with other applications. It's a file format. It's also a social network, and a subculture.

PGP pushes the notion of a cryptographic identity. You generate a key, save it in your keyring, print its fingerprint on your business card, and publish it to a keyserver. You sign other people's keys. They in turn may or may not rely on your signatures to verify other keys. Some people go out of their way to meet other PGP users in person to exchange keys and more securely attach themselves to this "web of trust". Other people organize "key signing parties". The image you're conjuring in your head of that accurately explains how hard it is for PGP's devotees to switch to newer stuff.

None of this identity goop works. Not the key signing web of trust, not the keyservers, not the parties. Ordinary people will trust anything that looks like a PGP key no matter where it came from – how could they not, when even an expert would have a hard time articulating how to evaluate a key? Experts don't trust keys they haven't exchanged personally. Everyone else relies on centralized authorities to distribute keys. PGP's key distribution mechanisms are theater.

Forget the email debacle for a second (we'll get to that later). PGP by itself leaks metadata. Messages are (in normal usage) linked directly to key identifiers, which are, throughout PGP's cobweb of trust, linked to user identity. Further, a rather large fraction of PGP users make use of keyservers, which can themselves leak to the network the identities of which PGP users are communicating with each other.

No Forward Secrecy

A good example of that last problem: secure messaging crypto demands forward secrecy. Forward secrecy means that if you lose your key to an attacker today, they still can't go back and read yesterday's messages; they had to be there with the key yesterday to read them. In modern cryptography engineering, we assume our adversary is recording everything, into infinite storage. PGP's claimed adversaries include world governments, many of whom are certainly doing exactly that. Against serious adversaries and without forward secrecy, breaches are a question of "when", not "if".

To get forward secrecy in practice, you typically keep two secret keys: a short term session key and a longer-term trusted key. The session key is ephemeral (usually the product of a DH exchange) and the trusted key signs it, so that a man-in-the-middle can't swap their own key in. It's theoretically possible to achieve a facsimile of forward secrecy using the tools PGP provides. Of course, pretty much nobody does this.
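
The session-key pattern, sketched with PyNaCl (an illustration under the assumption that PyNaCl is installed; real protocols such as Signal's double ratchet do considerably more than this):

    from nacl.public import Box, PrivateKey
    from nacl.signing import SigningKey

    identity = SigningKey.generate()              # long-term trusted key
    session = PrivateKey.generate()               # short-lived session key

    # Sign the ephemeral public key so a man-in-the-middle can't swap in their own.
    signed_session_pub = identity.sign(bytes(session.public_key))

    # Both sides derive the shared secret from the *ephemeral* keys; deleting
    # `session` afterwards is what buys forward secrecy.
    peer = PrivateKey.generate()                  # stand-in for the other party
    ciphertext = Box(session, peer.public_key).encrypt(b"a message")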

Clumsy Keys

An OpenBSD signify(1) public key is a Base64 string short enough to fit in the middle of a sentence in an email; the private key, which isn't an interchange format, is just a line or so longer. A PGP public key is a whole giant Base64 document; if you've used them often, you're probably already in the habit of attaching them rather than pasting them into messages so they don't get corrupted. Signify's key is a state-of-the-art Ed25519 key; PGP's is a weaker RSA key.
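
A rough size check, using PyNaCl for the Ed25519 side (signify's actual file format adds a comment line and a small header, so this is approximate):

    import base64
    from nacl.signing import SigningKey

    ed25519_pub = SigningKey.generate().verify_key.encode()
    print(len(base64.b64encode(ed25519_pub)))     # 44 characters: fits in a sentence
    # An armored RSA PGP public key block, with its packets and self-signatures,
    # typically runs to well over a thousand Base64 characters.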

You might think this stuff doesn't matter, but it matters a lot; orders of magnitude more people use SSH and manage SSH keys than use PGP. SSH keys are trivial to handle; PGP's are not.

Negotiation

PGP supports ElGamal. PGP supports RSA. PGP supports the NIST P-Curves. PGP supports Brainpool. PGP supports Curve25519. PGP supports SHA-1. PGP supports SHA-2. PGP supports RIPEMD160. PGP supports IDEA. PGP supports 3DES. PGP supports CAST5. PGP supports AES. There is no way this is a complete list of what PGP supports.

If we've learned 3 important things about cryptography design in the last 20 years, at least 2 of them are that negotiation and compatibility are evil. The flaws in cryptosystems tend to appear in the joinery, not the lumber, and expansive crypto compatibility increases the amount of joinery. Modern protocols like TLS 1.3 are jettisoning backwards compatibility with things like RSA, not adding it. New systems support just a single suite of primitives, and a simple version number. If one of those primitives fails, you bump the version and chuck the old protocol all at once.

If we're unlucky, and people are still using PGP 20 years from now, PGP will be the only reason any code anywhere includes CAST5. We can't say this more clearly or often enough: you can have backwards compatibility with the 1990s or you can have sound cryptography; you can't have both.

Janky Code

The de facto standard implementation of PGP is GnuPG. GnuPG is not carefully built. It's a sprawling C-language codebase with duplicative functionality (write-ups of the most recent SKS key parsing denial of service noted that it has multiple key parsers, for instance) with a long track record of CVEs ranging from memory corruption to cryptographic side channels. It has at times been possible to strip authenticators off messages without GnuPG noticing. It's been possible to feed it keys that don't fingerprint properly without it noticing. The 2018 Efail vulnerability was a result of it releasing unauthenticated plaintext to callers. GnuPG is not good.

GnuPG is also effectively the reference implementation for PGP, and also the basis for most other tools that integrate PGP cryptography. It isn't going anywhere. To rely on PGP is to rely on GPG.

The Answers

One of the rhetorical challenges of persuading people to stop using PGP is that there's no one thing you can replace it with, nor should there be. What you should use instead depends on what you're doing.

Talking To People

Use Signal. Or Wire, or WhatsApp, or some other Signal-protocol-based secure messenger.

Modern secure messengers are purpose-built around messaging. They use privacy-preserving authentication handshakes, repudiable messages, cryptographic ratchets that rekey on every message exchange, and, of course, modern encryption primitives. Messengers are trivially easy to use and there's no fussing over keys and subkeys. If you use Signal, you get even more than that: you get a system so paranoid about keeping private metadata off servers that it tunnels Giphy searches to avoid traffic analysis attacks, and until relatively recently didn't even support user profiles.

Encrypting Email

Don't.

Email is insecure. Even with PGP, it's default-plaintext, which means that even if you do everything right, some totally reasonable person you mail, doing totally reasonable things, will invariably CC the quoted plaintext of your encrypted message to someone else (we don't know a PGP email user who hasn't seen this happen). PGP email is forward-insecure. Email metadata, including the subject (which is literally message content), are always plaintext.

If you needed another reason, read the Efail paper. The GnuPG community, which mishandled the Efail disclosure, talks this research down a lot, but it was accepted at Usenix Security (one of the top academic software security venues) and at Black Hat USA (the top industry software security venue), was one of the best cryptographic attacks of the last 5 years, and is a pretty devastating indictment of the PGP ecosystem. As you'll see from the paper, S/MIME isn't better.

This isn't going to get fixed. To make actually-secure email, you'd have to tunnel another protocol over email (you'd still be conceding traffic analysis attacks). At that point, why bother pretending?

Encrypting email is asking for a calamity. Recommending email encryption to at-risk users is malpractice. Anyone who tells you it's secure to communicate over PGP-encrypted email is putting their weird preferences ahead of your safety.

Sending Files

Use Magic Wormhole. Wormhole clients use a one-time password-authenticated key exchange (PAKE) to encrypt files to recipients. It's easy (for nerds, at least), secure, and fun: we haven't introduced wormhole to anyone who didn't start gleefully wormholing things immediately just like we did.

Someone stick a Windows installer on a Go or Rust implementation of Magic Wormhole right away; it's too great for everyone not to have.

If you're working with lawyers and not with technologists, Signal does a perfectly cromulent job of securing file transfers. Put a Signal number on your security page to receive bug bounty reports, not a PGP key.

Encrypting Backups

Use Tarsnap. Colin can tell you all about how Tarsnap is optimized to protect backups. Or really, use any other encrypted backup tool that lots of other people use; they won't be as good as Tarsnap but they'll all do a better job than PGP will.

Need offline backups? Use encrypted disk images; they're built into modern Windows, Linux, and macOS. Full disk encryption isn't great, but it works fine for this use case, and it's easier and safer than PGP.

Signing Packages

Use Signify/Minisign. Ted Unangst will tell you all about it. It's what OpenBSD uses to sign packages. It's extremely simple and uses modern signing. Minisign, from Frank Denis, the libsodium guy, brings the same design to Windows and macOS; it has bindings for Go, Rust, Python, Javascript, and .NET; it's even compatible with Signify.
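
The underlying operation is a detached Ed25519 signature. Here's a sketch with PyNaCl (illustrative only; it does not produce signify- or minisign-compatible files):

    from nacl.signing import SigningKey

    sk = SigningKey.generate()
    pk = sk.verify_key                            # distribute this alongside packages

    payload = b"contents of package.tar.gz"
    signature = sk.sign(payload).signature        # 64-byte detached signature
    pk.verify(payload, signature)                 # raises BadSignatureError if tampered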

Encrypting Application Data

Use libsodium. It builds everywhere, has an interface that's designed to be hard to misuse, and you won't have to shell out to a binary to use it.
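
A minimal example via PyNaCl, a Python binding to libsodium (assuming pip install pynacl; the C API has the same shape):

    from nacl.secret import SecretBox
    from nacl.utils import random

    key = random(SecretBox.KEY_SIZE)              # 32-byte symmetric key
    box = SecretBox(key)

    ciphertext = box.encrypt(b"application data") # nonce generated and prepended
    assert box.decrypt(ciphertext) == b"application data"   # raises on tampering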

Encrypting Files

This really is a problem. If you're /not/ making a backup, and you're /not/ archiving something offline for long-term storage, and you're /not/ encrypting in order to securely send the file to someone else, and you're /not/ encrypting virtual drives that you mount/unmount as needed to get work done, then there's no one good tool that does this now. Filippo Valsorda is working on "age" for these use cases, and I'm super optimistic about it, but it's not there yet.

Hopefully it's clear that this is a pretty narrow use case. We work in software security and handle sensitive data, including bug bounty reports (another super common "we need PGP!" use case), and we almost never have to touch PGP.




All Comments: [-] | anchor

mongol(10000) 5 days ago [-]

The pass password manager uses PGP; what would be a better design?

tptacek(79) 4 days ago [-]

Pass is an interesting case, because of how it's implemented. The easy, obvious answer would be 'a version of pass that uses libsodium (or Jason's own crypto library) instead of pgp', but there's no command line for those tools, and the only other command line widely available for cryptography is OpenSSL's, which is horrible.

eikenberry(10000) 4 days ago [-]

Nothing at present, but the article did talk about work being done on a project that will be able to work as a direct replacement for pgp in this use case. Near the end, look for the mention of 'age'.

lvh(3176) 5 days ago [-]

I did some 'OpenPGP Best Practices' work for a client recently. They don't have a choice, because a third party requires it. The goal was to make sure it was as safe as possible. One thing that struck me is that I have a simplified mental model for the PGP crypto, and reality is way weirder than that. The blog post says it's CFB, and in a sense that's right, but it's the weirdest bizarro variant of CFB you've ever seen.

In CFB mode, for the first block, you take an IV, encrypt it, XOR it with plaintext. Second block: you encrypt the first ciphertext block, encrypt that, XOR with second plaintext block, and so on. It feels sorta halfway between CBC and CTR.
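
In code, that textbook CFB looks roughly like this (a sketch assuming the Python `cryptography` package and AES with full 16-byte blocks):

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    def cfb_encrypt(key: bytes, iv: bytes, plaintext: bytes) -> bytes:
        # Encrypt the previous ciphertext block (or the IV), XOR with the next
        # plaintext block. No zero IV, no prefixed random data, no resync step.
        ecb = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        register, out = iv, b""
        for i in range(0, len(plaintext), 16):
            keystream = ecb.update(register)
            block = plaintext[i:i + 16]
            ct_block = bytes(p ^ k for p, k in zip(block, keystream))
            out += ct_block
            register = ct_block                    # ciphertext feeds back in
        return out

    ct = cfb_encrypt(os.urandom(16), os.urandom(16), b"sixteen byte blk" * 2)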

Here's the process in OpenPGP, straight from the spec because I can't repeat this without being convinced I'm having a stroke:

   1.   The feedback register (FR) is set to the IV, which is all zeros.
   2.   FR is encrypted to produce FRE (FR Encrypted).  This is the
        encryption of an all-zero value.
   3.   FRE is xored with the first BS octets of random data prefixed to
        the plaintext to produce C[1] through C[BS], the first BS octets
        of ciphertext.
   4.   FR is loaded with C[1] through C[BS].
   5.   FR is encrypted to produce FRE, the encryption of the first BS
        octets of ciphertext.
   6.   The left two octets of FRE get xored with the next two octets of
        data that were prefixed to the plaintext.  This produces C[BS+1]
        and C[BS+2], the next two octets of ciphertext.
   7.   (The resynchronization step) FR is loaded with C[3] through
        C[BS+2].
   8.   FRE is xored with the first BS octets of the given plaintext,
        now that we have finished encrypting the BS+2 octets of prefixed
        data.  This produces C[BS+3] through C[BS+(BS+2)], the next BS
        octets of ciphertext.
   9.   FR is encrypted to produce FRE.
   10.  FR is loaded with C[BS+3] to C[BS + (BS+2)] (which is C11-C18
        for an 8-octet block).
   11.  FR is encrypted to produce FRE.
   12.  FRE is xored with the next BS octets of plaintext, to produce
        the next BS octets of ciphertext.  These are loaded into FR, and
        the process is repeated until the plaintext is used up.
Yeah so CFB except your IV isn't your IV and randomly do something with two bytes as... an... authenticator? And then everything after that is off by two? This isn't the only case where OpenPGP isn't just old, it's old and bizarre. I don't have a high opinion of PGP to begin with, but even my mental model is too charitable.

(Disclaimer: I'm a Latacora partner, didn't write this blog post but did contribute indirectly to it.)

matthewdgreen(10000) 4 days ago [-]

I think this is called Plumb CFB. It was invented by Colin Plumb back in the day. I first saw it in their FDE products which didn't have a good IV generation process (kind of acting like a replacement for XTS or a wide block cipher) and no, I don't know what it's for.

belorn(4127) 5 days ago [-]

Telling people to treat email as insecure and thus not use it for anything serious is terribly bad advice.

I am reminded of BGP (Border Gateway Protocol). Anyone who has even glanced at the RFC of BGP could write an essay of the horrible mess of compatibility, extensions, non-standard design of BGP. It also lack any security consideration. The problem is that it is the core infrastructure of the Internet.

Defining something as insecure, with the implied statement that we should therefore stop using it, is unhelpful advice in regard to critical infrastructure. People are going to use it, continue to use it for the foreseeable future, and continue to treat it as secure. Imperfect security tools will be applied on top, imperfectly, but it will see continued use as long as it is the best tool we have in the circumstances. Email and BGP and a lot of other core infrastructure that is hopelessly insecure will continue to be used with the assumption that they can be made secure, until an actual replacement is made and people start to transition over (like how IPv6 is replacing IPv4, and we are going to deprecate IPv4 if you take a very long-term view of it).

tptacek(79) 4 days ago [-]

People that use email to convey sensitive messages will be putting themselves and others at risk, whether or not they use PGP, for the indefinite future. That's a simple statement of fact. I understand that you don't like that fact --- nobody does! --- but it remains true no matter how angry it makes you.

trabant00(10000) 5 days ago [-]

The problem with the alternatives is they are product specific and baked into that product. I need a tool that is a separate layer that I can pipe into whatever product I want, be it files, email, chat, etc. Managing one set of identities is hard enough thank you very much and I also want to be able to switch the communication medium as needed.

I use gnupg a lot and I'm certainly not very happy with it, but I guess it's the same as with democracy: the worst system except for all the others.

Natanael_L(10000) 4 days ago [-]

The problem with this is that a tool that is too generic is itself dangerous, because it creates cross protocol attacks and confusion attacks like in https://efail.de for PGP email.

I think that a better approach is to bind identities from multiple purpose built cryptographic protocols.

w8rbt(3774) 4 days ago [-]

Stop parroting these criticisms of OpenPGP without offering concrete, working replacements that are widely adopted and have open standards.

Some kid recently tried to have OpenPGP support deprecated from Golang's crypto-x package. And that's fine, but do not pull stunts like this without offering a concrete, working and widely adopted replacement. Otherwise, they are just that, publicity stunts with a lot of sound and fury but no solution. That's not helpful to anyone.

A more mature thing to do would be to suggest deprecation in 3 to 5 years and offer a plan of how to get there with other specific tools (some of which do not exist today).

lvh(3176) 4 days ago [-]

I was going to say something about how the article does mention several alternatives, some of which are far more widely deployed than OpenPGP.

But first: hold up. 'Some kid'? You mean Filippo Valsorda? Google's Go Crypto person? The same person who is writing the replacement for that one use case?

bennofs(10000) 4 days ago [-]

I understand that there are better tools for encryption, but is there anything that replaces the identity management of PGP? Having a standard format for sharing identities is necessary in my opinion. If I have a friend (with whom I already exchanged keys) refer me to some third friend, it would be nice if he could just send me the identity. Sending me the Signal fingerprint isn't a solution for two reasons:

- I don't want to be manually comparing hashes in 2019

- it locks me into Signal; I won't be able to verify a git commit from that person, for example

Is there a system that solves this? Keybase is trying but also builds on PGP; we could use S/MIME, which relies on CAs, but that is not better than PGP. Anything else?

mike_hearn(3834) 4 days ago [-]

The CA system is strictly better than PGP for identity management in every respect.

People often think it must be the opposite, but this is essentially emotional reasoning: the Web of Trust feels decentralised, social, 'webby', 'un-corporate', free, etc. All things that appeal to hobbyist geeks with socialist or libertarian leanings, who see encryption primarily through the activist lens of fighting governments / existing social power structures.

But there's nothing secure about the WoT. As the post points out, the entire thing is theatre. Effectively the WoT converts every PGP user into a certificate authority, but they can't hope to even begin to match the competence of even not very competent WebTrust audited CAs. Basic things all CAs are required to do, like use hardware security modules, don't apply in the WoT, where users routinely do unsafe things like use their private key from laptops that run all kinds of random software pulled from the net, or carry their private keys through airports, or accept an email 'From' header as a proof of identity.

I wrote about this a long time ago here:

https://blog.plan99.net/why-you-think-the-pki-sucks-b64cf591...

Natanael_L(10000) 4 days ago [-]

I think that currently Keybase.io is the only thing trying to be universal, with their transparency log plus links to external profiles along with signed attestations for them.

But even that's still not quite what I'm looking for. There's no straightforward way to link arbitrary protocol accounts / identities to it, outside of linking plain URLs.

We need something a bit smarter than keybase that would actually allow you to maintain a single personal identifier across multiple protocols.

louis-paul(2701) 4 days ago [-]

Keybase builds a lot on top of saltpack, which works like a saner PGP: https://saltpack.org

The underlying cryptography is NaCl, which is referenced in the original post.
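
To give a feel for what saltpack looks like in day-to-day use, here is a rough sketch with the keybase CLI (the recipient name and file names are made up, and the flags are from memory, so check `keybase encrypt --help` before relying on this):

  # encrypt a file for another Keybase user; output is saltpack armor
  keybase encrypt -i notes.txt -o notes.txt.saltpack some_recipient

  # the recipient decrypts with whatever device key they have provisioned
  keybase decrypt -i notes.txt.saltpack -o notes.txt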

tptacek(79) 5 days ago [-]

Really, all I did here was combine posts from Matthew Green, Filippo Valsorda, and George Tankersley into one post, and then talk to my partner LVH about it. So blame them.

(also 'pvg, who said i should write this, and it's been nagging at me ever since)

DuskStar(3461) 5 days ago [-]

What are your thoughts on Keybase as a secure Slack replacement?

bellinom(2142) 5 days ago [-]

I've followed and enjoyed your commentary on PGP and cryptography in general, so I thought I'd post it.

Any idea when Filippo's `age` will be done, or how to follow its development, other than the Google doc?

pgeorgi(4157) 5 days ago [-]

The elephant in the room is 'what to do about email', and a significant part of the issues are related to the 'encrypt email' use case: part of the metadata leakage, no forward secrecy, ...

The closest advice to this in the article would be 'use Signal', which has various issues of its own, unrelated to crypto: it has the Signal Foundation as a SPOF, and its ID mechanism is outright wonky, as phone numbers are IDs that are location-bound, hard to have several of per person, hard to share between several persons, and hard to roll over.

To me that seems to be a much bigger issue than 'encrypting files for purposes that aren't {all regular purposes}'.

book-mind(10000) 4 days ago [-]

Is it wrong to use openssl to encrypt files?

0. (Only once) generate key pair id_rsa.pub.pem, id_rsa.pem

1. Generate random key

  openssl rand -base64 32 > key.bin

2. Encrypt key

  openssl rsautl -encrypt -inkey id_rsa.pub.pem -pubin -in key.bin -out key.bin.enc

3. Encrypt file using key

  openssl enc -aes-256-cbc -salt -in SECRET_FILE -out SECRET_FILE.enc -pass file:./key.bin

-- other side --

4. Decrypt key

  openssl rsautl -decrypt -inkey id_rsa.pem -in key.bin.enc -out key.bin

5. Decrypt file

  openssl enc -d -aes-256-cbc -in SECRET_FILE.enc -out SECRET_FILE -pass file:./key.bin

pvg(4169) 4 days ago [-]

This is great, thanks for writing it!

Brings to mind the words of renowned Victorian lifehacker Jerome K. Jerome:

"I can't sit still and see another man slaving and working. I want to get up and superintend, and walk round with my hands in my pockets, and tell him what to do. It is my energetic nature. I can't help it."

srl(2717) 5 days ago [-]

First, thanks enormously for writing this -- and for all the other recent articles that have appeared here in the vein of 'PGP is as bad as it is unpleasant to use'. It's a point I didn't really appreciate (at least not as much as I (sh/c)ould have), and I'm sure I'm not alone.

It seems that the state of package distribution for many distributions is poor, security-wise. (OpenBSD, to nobody's surprise, is an exception.) For instance, archlinux (I'm loyal) signs packages with PGP[1] and, for source-built packages, encourages integrity checks with MD5. My recollection is that, about 5 years ago, MD5 was supposed to be replaced with SHAxxx. Am I misinterpreting this? Is this actually Perfectly Okay for what a distro is trying to accomplish with package distribution?

(I'm particularly suspicious of the source-built package system, which consists of a bunch of files saying 'download this tarball and compile it; the MD5 of the tarball should be xyz.' I'm pretty confident that's not okay.)

Okay, now moving from package distribution to messaging, and again looking at the state of my favorite system. How am I supposed to message securely? The best *nix messaging tools are all based around email. Even when I can get the PGP or S/MIME or whatever toolset to work (let's face it, that's at least 45 minutes down the drain), it's clear that I'm not in good shape security-wise.

I should use signal, apparently. Great. Just a few problems: (1) no archlinux signal package, (2) I'm guessing I can't use it from the terminal, and (3) most severely, it seems signal has incomplete desktop support. In particular, I need to first set up signal on my phone. Well, let's face facts: I have a cheap phone from a hard-to-trust hardware vendor, and I think there's a >5% chance it's running some sort of spyware. (The last phone I had genuinely did have malware: there were ads showing in the file manager, among other bizarre behaviors.) So in order to use signal on my desktop, I need to buy a new phone? That's even worse, usability-wise, than PGP.

Is... is it really this bad? I'm getting the sense that the desktop linux community has completely dropped the ball on this one. (And perhaps more generally desktop mac/windows... I wouldn't know.)

[1] Perhaps not so bad, since the keyring is distributed with the system -- but how was the original download verified? Options are: PGP, MD5, SHA1, with the choice left up to the user. That can't be right.

floatboth(10000) 4 days ago [-]

I don't know where you found MD5; most PKGBUILDs have `sha256sums` in them.
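
For readers who haven't looked at one, the checksum list in a PKGBUILD is just a bash array alongside the sources; a minimal, made-up excerpt (package name, URL and hash are placeholders) looks roughly like this, and makepkg refuses to build if the downloaded tarball doesn't match:

  # hypothetical PKGBUILD excerpt
  pkgname=example-tool
  pkgver=1.0
  source=("https://example.org/${pkgname}-${pkgver}.tar.gz")
  sha256sums=('0000000000000000000000000000000000000000000000000000000000000000')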

Package signing is (hot take!) overrated and can be somewhat theater. It helps if your package manager connects to third party mirrors, but otherwise, the only threat it protects against is 'the https server is compromised but the package build farm is not'. I don't know why anyone would worry so much about that.

cyphar(3681) 5 days ago [-]

Arch Linux's pacman is not a good example of a secure package distribution system (especially not the AUR, where you are downloading all the bits from the internet and building them yourself as your own user or even sometimes as root). They didn't do any package signing or verification at all until (shockingly) recently -- less than 10 years ago IIRC. I am a huge fan of Arch's philosophy but am definitely not a fan of the design of their package distribution system.

If you look at systems like Debian-derivatives or RPM-based distros (openSUSE/SLES, and Fedora/CentOS/RHEL) the cryptosystems are far better designed. In the particular case of openSUSE, the AUR-equivalent (home: projects on OBS) are all signed using per-user keys that are not managed by users -- eliminating almost all of the problems with that system. Yeah, you still have to trust the package maintainer to be sure what they're doing, but that should be expected. I believe the same is true for Fedora's COPR.

[Disclaimer: I work for SUSE and contribute to openSUSE.]

vetrom(10000) 5 days ago [-]

Signal does a great job of supporting activists. That's basically its intentional product focus. Everything an open source proponent engineer might want to promote is secondary. The focus on activism and trying to deal with large actors definitely looks like #1. Everything else about their product is secondary to that.

Signal's product focus has been at best unencouraging to those who want to use it for anything else. Federation, I'm looking at you; but you've also got to see it in light of the fact that every single vendor that embraced the only realistic messaging federation standard of the 21st century went on to embrace and extinguish it in less than a decade.

This speaks to a few problems: 1. Messaging is hard. 2. Security is hard. 3. Security is hard to reason about. 4. Historically, anyone paid enough to care about this space hasn't had any sort of public interest at heart.

Any application that combines any of the three falls squarely into what the people at Latacora and the like would call a high risk application. I might disagree with much of their analysis, but in the lens of risk control, they are perfectly correct.

If you're trying to figure out how we got here, you've also got to realize that there was an avalanche of government and commercial entities whose goals are not in alignment with, say, those who think the optimal solution is a home-rolled, provably trustless security system.

For myself and many engineers I'd bet, I'd say that's where we thought things should go in the 90s and early aughts. Some things are better now, but most are much worse.

Society and encryption's implications have, I would say, caught up with each other, and there's definitely something found wanting. There's definitely a market opportunity there, but there's also another big challenge that I came across recently while reviewing a discussion about package signing: 'Nobody in this space gets paid enough to care.'

That's what separates people like Signal, even if some of the engineering crowd doesn't like the way they delivered.

This is a bit of a ramble, so there's two afterwords:

1. Much of the morass about PGP is explicitly due to the environment, space, and time in which it was developed. This does not boil down merely to 'it wasn't known how to do it better.' There were decisions and compromises made. I think the writer at Latacora is not doing the history of the piece justice. That's OK, though, because that's not the crux of their argument. Still, it would be good, I think, if they gave that history more of an explanation than simply calling things like the byzantine packet format impossible to explain, even if that explanation were only a footnote and a reference. (Writing the history of how it got there is absolutely doable, but it would make for a dryly humorous history, at best.)

2. The open source and (Linux/others?) distro community has tried hard, more than once, to make this work. The development, design, and implementation burden, though, is gargantuan. The overarching goal was basically to be compatible with every commercial system and maybe do one or two things better. What the article casts as purely a liability was the only way to get practical encryption over the internet well into the early '00s.

Regardless of all that though, PGP is still a technical nightmare. If you dismiss it though, even when we have better components, I worry that we'd only repeat these mistakes. If you work in any sort of crypto/encryption dependent enterprise, please find and study the history. Don't just take the (well considered) indictment of PGP at face value. There's important lessons to be learned there.

mirimir(3375) 4 days ago [-]
tptacek(79) 4 days ago [-]

I don't understand how these represent 'mistakes', let alone 'serious mistakes'. But I'm glad he liked it.

lucb1e(2085) 4 days ago [-]

For backups, I would recommend Restic. The author mentions Tarsnap, and if you don't need to back up five terabytes that is probably great, but beyond a few gigabytes it's just not economical for private persons. If you're on Hacker News (i.e. the 'engineer' the author was talking about), odds are that a hard drive connected to a Raspberry Pi at your parents', with Restic as the client, is an extremely cheap way of backing up many terabytes securely.
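
As a concrete illustration of that setup, a minimal sketch using restic's sftp backend (host name, user, paths and password file are all invented; restic encrypts client-side, so the Pi only ever sees ciphertext):

  # one-time: create the repository on the remote disk
  restic -r sftp:pi@parents-house.example.org:/mnt/backup/repo --password-file ~/.restic-pass init

  # recurring: back up a directory; restic deduplicates across snapshots
  restic -r sftp:pi@parents-house.example.org:/mnt/backup/repo --password-file ~/.restic-pass backup ~/Documents

  # sanity check: list the snapshots stored so far
  restic -r sftp:pi@parents-house.example.org:/mnt/backup/repo --password-file ~/.restic-pass snapshots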

wbl(3887) 4 days ago [-]

I have a few gigabytes on tarsnap for pennies a day.

criddell(4110) 4 days ago [-]

For Windows users that were wondering, Restic doesn't back up files that are opened exclusively.

VSS support on Windows is an open issue:

https://github.com/restic/restic/issues/340

roamingnomadic(10000) 5 days ago [-]

Is this blog post satire? Or an ad of sorts for some vaporware?

The alternatives being talked up either use PGP or are a botnet (signal uses PGP with a CA type of thing, so much for 'stop using pgp'), and whatsapp is owned by facebook.

Tarsnap, as I understand it, requires that I lock into a service to secure my backups. F- that. Someone already talked about wormhole.

And how is essentially forking pgp with 'age' really going to solve things? Wow thanks another forked app! :^)

It would be easier to just make a wrapper for gnupg that sets the settings for everything the author is talking about (well, most of the things the author is talking about).

Wouldn't it be easier to just inform maintainers of the package to change the default standards of packages like gnupg? Has the author even attempted to change some of these things?

Don't get me wrong, I get where the author is coming from. Unix philosophy should be followed... but certain systems cannot be compartmentalized; they unfortunately have to interact with one another.

If you, for example, encrypt data without signing it, how would you know that someone isn't trying to poison the ciphertext to extract leaked data, or worse yet, that they have already found a way to decrypt your data and are manipulating sensitive data?

An encryption program by DESIGN should also have a method to sign data.

lvh(3176) 5 days ago [-]

> (signal uses PGP with a CA type of thing, so much for 'stop using pgp')

I legitimately have no idea what this means.

> And how is essentially forking pgp with 'age' really going to solve things? Wow thanks another forked app! :^)

A big part of the criticism we've gotten when we tell people 'PGP bad' is that we're not providing alternatives. age is one of those alternatives, for one of those use cases.

> It would be easier to just make a wrapper for gnupg that sets the settings for everything the author is talking about (well, most of the things the author is talking about).

> Wouldn't it be easier to just inform maintainers of the package to change the default standards of packages like gnupg? Has the author even attempted to change some of these things?

As we mentioned repeatedly in the blog post: no, the PGP format is fundamentally broken, it is not a matter of 'just fixing it'.

> If you for example encrypt data without signing it, how would you know that someone isn't trying to poison ciphertext to extract data leakage or worse yet, they already found a way to decrypt your data and manipulating sensitive data?

I think you're making an argument against unauthenticated encryption here. That's true! You should not have unauthenticated encryption, the way PGP makes it easy for you to have. age is not unauthenticated encryption, so the criticism does not apply.

viraptor(3074) 5 days ago [-]

Another use case I don't know a replacement for is offline team+deployment secrets. Requirements:

- you need team members to read/write the secrets

- you need the deployment service to read the secrets

Without using an online system managing the secrets via ACLs + auth, I don't know how to replace PGP here.

lvh(3176) 5 days ago [-]

what's wrong with an online system managing the secrets? KMS is great, and makes it easy to separate decrypt from encrypt permissions.

(KMS is not the only option! I'm just trying to tease out why you think that's valuable. For example, I think age, mentioned in the blog post, is a direct replacement?)
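
To make the encrypt/decrypt split concrete, here is a rough sketch with AWS KMS and the aws CLI (the key alias and file names are invented; the point is that team members only need kms:Encrypt on the key, while the deployment role only needs kms:Decrypt):

  # team member: seal a small secret under the key (requires kms:Encrypt)
  aws kms encrypt --key-id alias/deploy-secrets --plaintext fileb://db-password.txt \
      --query CiphertextBlob --output text | base64 -d > db-password.enc

  # deployment service: unseal at deploy time (requires kms:Decrypt)
  aws kms decrypt --ciphertext-blob fileb://db-password.enc \
      --query Plaintext --output text | base64 -d > db-password.txt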

tzs(3250) 5 days ago [-]

While waiting for 'age' to get written, can I use libsodium for file encryption?

jedisct1(3383) 4 days ago [-]

https://download.libsodium.org/doc/secret-key_cryptography/s...

Battle-tested by ransomware already (sigh).

danielrangel(4170) 4 days ago [-]

My minisign implementation in Rust, compatible with signify and minisign keys: https://crates.io/crates/rsign

jedisct1(3383) 3 days ago [-]

And based on your implementation: https://crates.io/search?q=rsign2

swalladge(3846) 4 days ago [-]

Something I'm curious about - if GPG uses such old/not-recommended encryption standards, is it still secure in the sense that if I gpg-encrypt something and post it online, a three-letter agency will still be unable to decrypt it?

lvh(3176) 4 days ago [-]

There are two ways that breaks:

- your key won't be private forever, and future compromise does mean past disclosure (no forward secrecy)

- on long enough timescales, said TLAs can mount offline attacks against it.

So, maybe? But it's definitely the safest way to do it; most of GPG's problems are unforced interaction errors.

taeric(2543) 5 days ago [-]

I don't understand why the use of Yubikeys for a non-exportable key isn't valid for folks that care about security. I mean, I get that not everyone will use it. The vast majority won't. However, the vast majority don't care about security at this level. So... what is the actual criticism? If you care about security, use the keys, right? That feels no different from 'use some other product.'

lvh(3176) 4 days ago [-]

Sure. We do that for eg SSH. I don't think it's a great idea for our standard audience (startups) to implement.

dchest(610) 4 days ago [-]

The only thing in this world more complicated than setting up GPG is setting up GPG with Yubikey.

The fact that I have a `fix-gpg` script somewhere in $PATH to restart gpg-agent, which I run when for some reason it can't find my YubiKey, tells me that it's not a viable solution for 99% of people.
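
(For the curious: such a script is usually just the commonly traded workaround below. This is a guess at what it looks like, not the author's actual script.)

  #!/bin/sh
  # kill the agent so it drops its stale smartcard state
  gpgconf --kill gpg-agent
  # poke the card so a freshly started agent re-reads it
  gpg --card-status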

PS. Actual command from GPG:

  > help
  ...
  sex     change card holder's sex
  ...
tialaramex(10000) 4 days ago [-]

There's a few places where this engages in goalpost shifting that seems less than helpful even though I end up agreeing with the general thrust. Let's focus on one:

> Put a Signal number on your security page to receive bug bounty reports, not a PGP key.

We can reasonably assume in 2019 that this 'security page' is from an HTTPS web site, so it's reasonably safe against tampering, but a 'Signal number' is just a phone number, something bad guys can definitely intercept if it's worth money to them, whereas a PGP key is just a public key and so you can't 'intercept' it at all.

Now, Signal doesn't pretend this can't happen. It isn't a vulnerability in Signal, it's just a mistaken use case, this is not what Signal is for, go ask Moxie, 'Hey Moxie, should I be giving out Signal numbers to secure tip-offs from random people so that nobody can intercept them?'.

[ Somebody might think 'Aha, they meant a _Safety number_ not a Signal number, that fixes everything right?'. Bzzt. Signal's Safety Numbers are per-conversation, you can upload one to a web page if you want, and I can even think of really marginal scenarios where that's useful, but it doesn't provide a way to replace PGP's public keys ]

Somebody _could_ build a tool like Signal that had a persistent global public identity you can publish like a PGP key, but that is not what Signal is today.

dunkelheit(4124) 4 days ago [-]

> > Put a Signal number on your security page to receive bug bounty reports, not a PGP key.

Does anyone actually do this? Even Signal developers themselves don't! (see https://support.signal.org/hc/en-us/articles/360007320791-Ho...). Instead there is a plain old email address where you are supposed to send your Signal number so that you can chat.

Leace(3781) 4 days ago [-]

> persistent global public identity

Certificate Transparency could be reused/abused to host it. If, for example, you issued a cert for the name <key>.contact.example.com and the tooling checked CT logs, this could be a very powerful directory of contacts. Using CT monitors, you could see if/when someone tampers with your domain's contact keys.

Mozilla is planning something similar for signing software: https://wiki.mozilla.org/Security/Binary_Transparency
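
As a back-of-the-envelope illustration of that idea, existing CT search frontends can already be queried for everything logged under such a namespace; for example, with crt.sh's JSON output (the domain is hypothetical, and crt.sh is just one frontend over the logs):

  # list certificates that CT logs have recorded for the contact namespace
  curl -s 'https://crt.sh/?q=%25.contact.example.com&output=json' \
    | jq -r '.[] | [.not_before, .issuer_name, .name_value] | @tsv'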

connorlanigan(10000) 4 days ago [-]

The safety number is only partly per-conversation. If you compare safety numbers of different conversations, you'll discover that one half of them is always the same (which half that is changes depending on the conversation). This part is the fingerprint of your personal key.

The Signal blog states that 'we designed the safety number format to be a sorted concatenation of two 30-digit individual numeric fingerprints.' [1]

The way I understand it, you could simply share your part of the number on your website, but Moxie recommends against it, since this fingerprint changes between reinstalls.

[1] https://signal.org/blog/safety-number-updates/

Natanael_L(10000) 4 days ago [-]

Keybase.io supports this use case.

teythoon(3962) 4 days ago [-]

We over at Sequoia-PGP, which gets an honorable mention by the OP, are not merely trying to create a new OpenPGP implementation, but to rethink the whole ecosystem.

For some of our thoughts on the recent certificate flooding problems, see https://sequoia-pgp.org/blog/2019/07/08/certificate-flooding...

lucb1e(2085) 4 days ago [-]

The homepage has some meaningless marketing bullet points about the greatness of pgp itself. Where would I find the ways in which you rethink the whole ecosystem? It seems like Sequoia is just a library, not even a client. I'm wondering how this could change pgp much, if at all.

verytrivial(10000) 4 days ago [-]

I read these advice columns like 'You should stop using hand saws because we now have electric saws which are better in every way that I care about.' Great! You use them then. I'll keep using crufty old hand saws wherever I want, you use and proselytize your electric saws and we can get on with our lives. I have my own networks and threat models and my own evaluation functions for these. Hand saws are pretty good.

tptacek(79) 4 days ago [-]

If your threat models have you using PGP, your threat models are badly engineered.

eudora(3901) 4 days ago [-]

Pretty shocking, thanks for such a detailed analysis

lucb1e(2085) 4 days ago [-]

It's a large number of minor points. Nothing shocking, no huge issues, and of course nothing we didn't already know. The crypto is good; if you manage it well, it stands up to the NSA as far as we know... Sure, it has flaws that make it hard to use in general, and hard to use securely (not having long-term keys, for example, would make me less paranoid about my private key), but it's still fine despite its huge backwards-compatibility baggage.

The author is overly dramatic about it in order to make a point, to hopefully get people looking for alternatives, so that a good one might take over from pgp in the future (and yet continues to suggest whatsapp and signal, like, really? That's your replacement for pgp?).

ekianjo(318) 5 days ago [-]

So the recommendation here is just to use Chat clients to communicate and forget about Email? Well that is hardly a good solution.

lvh(3176) 5 days ago [-]

Yes! E-mail is fundamentally terribly positioned to do secure messaging. You can use e-mail, or you can have cryptography that works and have people use it, but you can't do both.

floatingatoll(3889) 5 days ago [-]

Yes. Email is designed to be stored forever, and all attempts to do otherwise fail due to the design of email clients. Chat can be designed to self-destruct. Anyone can screenshot either, so there's no use worrying about that.

progx(10000) 4 days ago [-]

Messengers are not easier than email; they are only easier when two or more people communicate on one topic.

But with email you communicate with one or more people about many topics.

To achieve the same structure in a messenger, you must create several discussions. So messengers are not the holy grail of communication; that is why people still use email.

We need an email user interface with open messenger protocols under the hood for secure communication and usability. None of the current messengers offer that.

goffi(4057) 4 days ago [-]

As a proof of concept, a couple of years ago I made a 'gateway' which launched a local IMAP server in my XMPP client. This way you could use any MUA you like while taking advantage of the XMPP infrastructure (JIDs — XMPP identifiers — are similar to email addresses) and its encryption (OMEMO can be used with it, for instance). In other words, you could send messages from Thunderbird (or Gajim) to KMail using only the XMPP protocol (and thanks to IMAP push, you got notifications immediately).

jobigoud(10000) 4 days ago [-]

Good point, private channels are the new discussions. But they tend to never die so over the years we end up with a cluttered channel list.

jedisct1(3383) 4 days ago [-]

PGP was a game changer when it was introduced and the idea of a chain of trust was neat.

Unfortunately, its implementations are difficult to use, it was never directly supported by operating systems or major applications, and its crypto agility makes it impossible to write minimal implementations.

I wrote and use Piknik for small file transfers (especially with the Visual Studio extension), Encpipe for file encryption, and Minisign to sign software. What they have in common is that they are very simple to use, because they only do one thing, with very few parameters to choose from.

The main reason for having written these in the first place was that PGP is too complicated to use, not that it couldn't have done the job.
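
To illustrate how small that surface is, the entire Minisign signing workflow is roughly the following (file names are examples; see the Minisign docs for the exact defaults):

  # one-time: generate a key pair (public key lands in minisign.pub)
  minisign -G

  # sign a release artifact; writes release.tar.gz.minisig next to it
  minisign -Sm release.tar.gz

  # anyone can verify with just the public key
  minisign -Vm release.tar.gz -p minisign.pub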

philips(1010) 4 days ago [-]

Has there been any effort to integrate minisign into git?

maufl(3811) 5 days ago [-]

I use my PGP key (with a Yubikey) for two things, the pass password manager and as an SSH authentication key. Is there a replacement for those two use cases where I can store my private key on a hardware token?

mongol(10000) 5 days ago [-]

Given that you have set this up, and that both of these use cases are for your own use only, I wonder if there is any reason to change. Most criticism in the article is about the complexity on a larger scale.

tptacek(79) 4 days ago [-]

I use the GPG compatibility goo to get an SSH key out of my Yubikey 5 and that's fine, it's fine, go ahead and do that.
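
For anyone wanting to replicate that, the 'compatibility goo' usually boils down to something like the following (paths are the common defaults; adjust for your shell and OS):

  # in ~/.gnupg/gpg-agent.conf: let gpg-agent speak the ssh-agent protocol
  #   enable-ssh-support

  # in your shell profile: point SSH at gpg-agent's ssh socket
  export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
  gpgconf --launch gpg-agent

  # the authentication key on the YubiKey should now show up here
  ssh-add -L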

stubish(4156) 5 days ago [-]

The PGP problem isn't going away until there is a stable alternative. Under 'The Answer' there are several different, domain-specific tools, with their different features and UIs and limitations. And for the general case (encrypting files, or really 'encrypting data'), 'this really is a problem'. If I want to replace my use of GnuPG in production with the things on that list, I need to write my own encrypt/decrypt wrappers using libsodium and hope that future travellers can locate the tool or documentation so they can decrypt the data. So I stick with the only standard, GnuPG, despite acknowledging its problems.

tptacek(79) 5 days ago [-]

What specific problem are you trying to solve with PGP? If it's 'encrypting files', why are you encrypting those files? What's the end goal? I acknowledge that there are cases that boil down to 'encrypt a file', but believe they are a lot narrower than people assume they are.

tedunangst(4145) 5 days ago [-]

I am a combination of honored and terrified that signify is the leading example for how to sign packages. The original goals were a bit more modest, mostly focused only on our needs. But that's probably what's most appealing about it.

tptacek(79) 5 days ago [-]

I think you should get comfortable with that, because all the opinions I've collected seem to be converging on it as the answer; Frank Denis Frank-Denis-i-fying it seems to have cinched it. Until quantum computers break conventional cryptography, it would be downright weird to see someone design a new system with anything else.

luizfelberti(10000) 5 days ago [-]

The missing answer in there is how to avoid PGP/GnuPG for commit signing. I've asked about this in another similar thread[0] but didn't get a hopeful answer.

Every time I look at git's documentation, GPG seems very entrenched in there, to the point that for things that matter I'd use signify on the side.

Is there a better way?

[0] https://news.ycombinator.com/item?id=20379501

srl(2717) 5 days ago [-]

It seems pretty clear that, with the current tools available, there is no way to do this (at least with git). There's nothing in principle difficult about it, just that (say) git+signify hasn't been implemented.

I'm getting the strong sense (see also my toplevel comment, and maybe someone will correct me and/or put me in my place) that there's an enormous disconnect between the open source + unix + hobbyist + CLI development communities, and the crypto community. The former set has almost no idea what the state of the art in crypto is, and the latter (somewhat justifiably) has bigger fish to fry, like trying to make it so that non-command-line-using journalists have functional encryption that they can use.

I think this is a sociological problem, not a technical 'using command-line tools makes Doing Crypto Right impossible' one.

paulddraper(3978) 5 days ago [-]

Signing tags (or somewhat less usefully, commits) can be done the same way packages are signed. It might not be directly integrated with git, but it wouldn't be hard to make a good workflow.

The article mentions Signify/Minisign. [1]

[1] https://jedisct1.github.io/minisign/ as a PGP alternative.
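
A rough sketch of that workflow with OpenBSD signify (key and tag names are placeholders): sign a tarball produced from the tag rather than the tag object itself, and publish the .sig alongside it.

  # produce an artifact for the tag
  git archive --format=tar.gz --output=release-1.0.tar.gz v1.0

  # sign it (key pair created earlier with: signify -G -p release.pub -s release.sec)
  signify -S -s release.sec -m release-1.0.tar.gz

  # consumers verify against the published public key
  signify -V -p release.pub -m release-1.0.tar.gz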

Sniffnoy(10000) 5 days ago [-]

So what do I use for encrypted messaging that can, like, replace email, then? Nobody seems to have provided any sort of satisfactory answer to this question. To be clear, an answer to this has to not just be a secure way of sending messages; it also has to replicate the social affordances of email.

E.g., things distinguishing how email is used from how text-messaging is used:

1. Email is potentially long-form. I sit down and type it from my computer. Text-messaging is always short, although possibly it's a series of short messages. A series of short emails, by contrast, is an annoyance; it's something you try to avoid sending (even though you inevitably do when it turns out you got something wrong). Similarly, you don't typically hold rapid-fire conversations over email.

2. On that point, email says: you don't need to read this immediately. I expect a text message will probably be read in a few minutes, and probably replied to later that day (if there's no particular urgency). I expect an email will probably be read in a few hours, and probably replied to in a few days (if there's no particular urgency).

3. It's OK to cold-email people. To text someone you need their phone number; it's for people you know. By contrast, email addresses are things that people frequently make public specifically so that strangers can contact them.

So what am I supposed to do for secure messaging that replicates that? The best answer I've gotten for this so far -- other than PGP which is apparently bad -- is 'install Signal on your computer in addition to your phone and just use it as if it's email'. That's... not really a satisfactory answer. Like, I expect a multi-page Signal message talking about everything I've been up to for the past month to annoy its recipient, who is likely reading it on their phone, not their computer. And I can't send someone a Signal message about a paper they wrote that I have some comments on, not unless they're going to put their fricking phone number on their website.

So what do I do here? The secure email replacement just doesn't seem to be here yet.

JasonFruit(3720) 5 days ago [-]

Your point 1 especially speaks to me. Phone-based messaging in general isn't appropriate for the things I would most like to be kept private between me and a recipient, because those sorts of things can't be created on a phone. I've found PGP pretty good for making that happen, when I'm working with someone who a) uses PGP also and b) exercises some caution when using it. I haven't found an option that I can trust that will work for me.

NoGravitas(3878) 4 days ago [-]

I strongly agree with this; email is not instant messaging, and there is not yet any secure replacement for email.

We need a modern design for a successor protocol to email, and no one is working on it because they prefer instant messaging (or think other people do).

floatingatoll(3889) 5 days ago [-]

Point 3 doesn't hold for encrypted email: you cannot email someone who is unaware of it and hasn't consented, and expect them to willingly and ably participate in decrypting.

lvh(3176) 5 days ago [-]

Do you feel like PGP is a good way to cold email people in practice? (I'm not trying to put words in your mouth, but that sounds like what you're saying.)

umvi(10000) 5 days ago [-]

I don't get what's insecure about normal unencrypted email. It's sent over https, isn't it? It's not like I can read your emails unless I break into Google's servers, no? And even if I do, they probably aren't even stored in plaintext.

I just don't get the encrypted email obsession. It's impossible for an individual to withstand a targeted cyber attack, so it seems pointless to go above and beyond to ultra-encrypt every little thing.

Leace(3781) 4 days ago [-]

> there's a simple meta-problem with it: it was designed in the 1990s, before serious modern cryptography

SSL was designed in 1994, but it has been properly maintained, and today no one argues that TLS should be replaced by Noise/Strobe etc. OpenPGP's problem no. 1 is that there are no parties using it at a wider scale who are interested in improving it.

tptacek(79) 4 days