Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated:
Reload to view new stories

May 25, 2022 12:06

Front Page/ShowHN stories over 4 points from last 7 days
If internet connection drops, you can still read the stories
If there were any historical discussions on the story, links to all the previous stories on Hacker News will appear just above the comments.

Historical Discussions: The new and upgraded Framework Laptop (May 19, 2022: 1672 points)

(1675) The new and upgraded Framework Laptop

1675 points 6 days ago by etbusch in 10000th position

community.frame.work | Estimated reading time – 7 minutes | comments | anchor

When we launched the Framework Laptop a year ago, we shared a promise for a better kind of Consumer Electronics: one in which you have the power to upgrade, repair, and customize your products to make them last longer and fit your needs better. Today, we're honored to deliver on that promise with a new generation of the Framework Laptop, bringing a massive performance upgrade with the latest 12th Gen Intel® Core™ processors, available for pre-order now. We spent the last year gathering feedback from early adopters to refine the product as we scale up. We've redesigned our lid assembly for significantly improved rigidity and carefully optimized standby battery life, especially for Linux users. Finally, we continue to expand the Expansion Card portfolio, with a new 2.5 Gigabit Ethernet Expansion Card coming soon.

In addition to launching our new Framework Laptops with these upgrades, we're living up to our mission by making all of them available individually as modules and combined as Upgrade Kits in the Framework Marketplace. This is perhaps the first time ever that generational upgrades are available in a high-performance thin and light laptop, letting you pick the improvements you want without needing to buy a full new machine.

12th Gen Intel® Core™ processors

Framework Laptops with 12th Gen Intel® Core™ processors are available for pre-order today in all countries we currently ship to: US, Canada, UK, Germany, France, Netherlands, Austria, and Ireland. We'll be launching in additional countries throughout the year, and you can help us prioritize by registering your interest. We're using a batch pre-order system, with only a fully-refundable $100/€100/£100 deposit required at the time of pre-order. Mainboards with 12th Gen Intel® Core™ processors, our revamped Top Cover, and the Upgrade Kit that combines the two are available for waitlisting on the Marketplace today. You can register to get notified as soon as they come in stock. The first batch of new laptops as well as the new Marketplace items start shipping this July.

12th Gen Intel® Core™ processors bring major architectural advancements, adding 8 Efficiency Cores on top of 4 or 6 Performance Cores with Hyper-Threading. This means the top version we offer, the i7-1280P, has a mind-boggling 14 CPU cores and 20 threads. All of this results in an enormous increase in performance. In heavily multi-threaded benchmarks like Cinebench R23, we see results that are double the last generation i7-1185G7 processor. In addition to the top of the line i7-1280P configuration, we have i5-1240P and i7-1260P options available, all supporting up to 30W sustained performance and 60W boost.
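
The core and thread counts above follow from the hybrid topology: Performance Cores run two threads each via Hyper-Threading, while Efficiency Cores run one. A quick sketch of the arithmetic (the `hybrid_topology` helper is illustrative, not anything from Framework or Intel):

```python
# P-cores have Hyper-Threading (2 threads each); E-cores run 1 thread each.
def hybrid_topology(p_cores: int, e_cores: int) -> tuple[int, int]:
    """Return (total cores, total threads) for a P+E core mix."""
    return p_cores + e_cores, p_cores * 2 + e_cores

# i7-1280P: 6 P-cores + 8 E-cores
print(hybrid_topology(6, 8))  # → (14, 20)

# i5-1240P / i7-1260P: 4 P-cores + 8 E-cores
print(hybrid_topology(4, 8))  # → (12, 16)
```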

We launched a new product comparison page, letting you compare all of the versions of the Framework Laptop now available. Every model is equally thin and light at <16mm and <1.3kg, and each has our Expansion Card system that lets you choose your ports, a 13.5" 3:2 display optimal for productivity, a great-feeling keyboard with 1.5mm key travel, a 1080p webcam, hardware privacy switches, and more. We offer both ready-to-use Framework Laptops with Windows 11 and our extremely popular Framework Laptop DIY Edition that lets you bring and assemble your own memory, storage, and Operating System, such as your preferred Linux distro. If you need a laptop today (or a volume order of laptops) or want a bargain, we're dropping the price of the first-generation Framework Laptop until we run out of the limited inventory we have left. If you ever need more performance in the future, you can upgrade to the latest modules whenever you'd like!

Optimized for Linux

We continue to focus on solid Linux support, and we're happy to share that Fedora 36 works fantastically well out of the box, with full hardware functionality including WiFi and fingerprint reader support. Ubuntu 22.04 also works great after applying a couple of workarounds, and we're working to eliminate that need. We also studied and carefully optimized the standby power draw of the system in Linux. You can check compatibility with popular distros as we continue to test on our Linux page or in the Framework Community.

Precision Machined

In redesigning the Framework Laptop's lid assembly, we switched from an aluminum forming process to a full CNC process on the Top Cover, substantially improving rigidity. While there is more raw material required when starting from a solid block of 6063 aluminum, we're working with our supplier Hamagawa to reduce environmental impact. We currently use 75% pre-consumer-recycled alloy and are searching for post-consumer sources. The Top Cover (CNC) is built into all configurations of the Framework Laptop launching today, and is available as a module both as part of the Upgrade Kit or individually.

Ethernet Expansion Card

Support for Ethernet has consistently been one of the most popular requests from the Framework Laptop community. We started development on an Expansion Card shortly after launch last year and are now ready to share a preview of the results. Using a Realtek RTL8156 controller, the Ethernet Expansion Card supports 2.5Gbit along with 10/100/1000Mbit Ethernet. This card will be available later this year, and you can register to get notified in the Framework Marketplace.

Reduce, Reuse, Recycle

We're incredibly happy to live up to the promise of longevity and upgradeability in the Framework Laptop. We also want to ensure we're reducing waste and respecting the planet by enabling reuse of modules. If you're upgrading to a new Mainboard, check out the open source designs we released earlier this year for creative ways to repurpose your original Mainboard. We're starting to see some incredible projects coming out of creators and developers. To further reduce environmental impact, you can also make your Framework Laptop carbon neutral by picking up carbon capture in the Framework Marketplace.

We're ramping up into production now with our manufacturing partner Compal at a new site in Taoyuan, Taiwan, a short drive from our main fulfillment center, helping reduce the risk of supply chain and logistics challenges. We recommend getting your pre-order in early to hold your place in line and to give us a better read on production capacity needs. We can't wait to see what you think of these upgrades, and we're looking forward to remaking Consumer Electronics with you!

All Comments: [-] | anchor

canuckintime(10000) 6 days ago [-]

I'm waiting on alternate screen options. The 3K2K OLED screen on the HP Spectre x360[1] would be a great high-end option for me.

Framework offers many customisable options, but the wifi antenna is behind the screen, so it's not a seamless transition. I would be interested in a screenless Framework (with keyboard, or just the guts) if they simplify wifi.

So a WiFi module or Cellular module would be a definite buy

[1] https://www.theverge.com/22264792/hp-spectre-x360-14-laptop-...

seltzered_(10000) 6 days ago [-]

I'm waiting for a tablet design. I use a 3k2k lcd touchscreen on the HP Elite X2 tablet and while it's a more repairable design compared to a MacBook or surface, would love something with a bit more modularity to add a larger battery or modify the casing (see https://www.reddit.com/r/ErgoMobileComputers/comments/tcwep0... )

thrownaway561(10000) 6 days ago [-]

why am I going to pay $1100 for an i5/8GB/256GB SSD when I can pay $700 for an i5/12GB/1TB HDD? This whole thing reminds me of the PANDA project from the early 2000s, and you all know how well that project worked out.

Laptops are throwaways. At the end of their life you recycle them and get a new one. The single problem I see with all these totally upgradable devices is that you are still locked into a single vendor. Unless other vendors get on board and you have competition, you are at the mercy of the single vendor's pricing and existence. How good is an upgradeable laptop when the vendor goes out of business and you can't buy parts?



kitsunesoba(10000) 6 days ago [-]

Having used a laptop similar to that linked HP in the past and now comparing spec sheets, I don't really think it's in the same class as the Framework Laptop at all.

Compared to the framework, the HP's:

- CPU is a generation behind

- Screen has low PPI (less sharp), very low brightness, and is probably a TN panel, meaning colors will be more dull

- HDD (a lot slower than an SSD to begin with) is 5400RPM, which is slow even for an HDD

- Battery is 14Wh smaller

- Webcam is 720p instead of 1080p

- Bluetooth and wifi are a whole major version behind

- Charging port is one of those old terrible barrel jacks that gets loose quickly

And the build quality is most assuredly not in the same universe. Laptops as cheap as this HP are built on razor thin margins, which means that manufacturers are cutting costs wherever possible. This gets you things like creaky flexy cases, loose wobbly hinges, chintzy keyboards, bad trackpads, and oddball bargain basement components with less than amazing performance.

In short, it will be a lot less pleasant to use, even ignoring the huge gaps in the spec sheet. Models from other manufacturers that would be more comparable to the Framework in specs and fit and finish are the M1 MacBook Air/Pro, Dell XPS 13, and Lenovo ThinkPad X1 Carbon.

Brian_K_White(10000) 6 days ago [-]

Other vendors are free to produce compatible parts. They publish physical dimensions and CAD files on GitHub.

Everything about this is as interoperable as possible, both physical and software.

Maybe no other vendor will produce a motherboard or keyboard, but it's not Framework's fault.

Second, the closest competing product to the Framework is Lenovo, not HP. (Despite the fact that they look like a Mac's aluminum body, huge buttonless touchpad, and black chiclet keys, with a Surface's screen aspect ratio.)

HP's customer is someone who would like a Surface or Mac but doesn't have that kind of cash, or just cares more about a distinctive look that isn't gamer.

Here's HP's customer: I got my mom a top of the line, maxed-out HP because she will never care about upgrades or repairs or Linux or raw power, but she does care about the blingy rich bronze look, and I care enough to steer her away from Surface and Mac even though I don't care about the cost. That's HP's customer.

I WAS actually able to replace the battery in her previous Spectre (the sweet thin one with the funky hinges that looked like hoop earrings or wedding bands) to give it to my niece when I updated mom, but HP did not make that easy.

HP are premium-looking Chromebooks that run Windows.

Framework are user-serviceable open platform Lenovos.

'Why would I spend...' You clearly wouldn't, so don't. But I would. Why? It goes like this:

I don't particularly care too much about AMD vs Intel, but a lot of people are asking for an AMD cpu motherboard.

Let's say Framework did not make an AMD motherboard, but someone else did. Let's say the only way to get a Framework was to buy a whole Framework including an Intel mainboard I don't want, PLUS the 3rd party motherboard for $500 or whatever it is. I would rather do that, because I want that open platform. First, Framework would not make me buy the entire machine; they would let me buy everything but the mainboard. But even if they didn't, that mainboard I didn't want is actually usable all by itself, like a 900 horsepower Raspberry Pi. Or I could sell it, because it's useful to anyone else too. Or I could keep it as a backup in case I damage my preferred board. That platform, which makes all kinds of options possible, is valuable to me.

No one yet makes any such 3rd party mainboard, but the platform at least allows for it and makes it possible vs not-possible. I want that. That is valuable to me. I will pay a lot for that.

coldpie(10000) 6 days ago [-]

> Laptop are throw aways. At the end of their life you recycle them and get a new one. The single problem I see with all these type of total upgradable devices is that you are still locked into a single vendor. How good is an upgradeable laptop when the vendor goes out of business and you can't buy parts?

I agree with your skepticism. But, I don't agree that it has to be this way. Framework is giving another model a chance, and yeah, it may fail. But Frameworks are no /more/ disposable than any other laptop, so I guess I don't see a downside to at least giving it a shot if it's at an acceptable price and has a desirable feature set. You're right that the commodity hardware is cheaper, but I guess I can live with paying a bit more to try something else out and support an alternate model.

bl4ckneon(10000) 6 days ago [-]

Well, first off, the laptop you linked has an 11th gen CPU, vs. the Framework, which just upgraded to 12th gen. The Framework isn't an amazing value dollar for dollar, spec for spec. That is not why you buy one, though...

It's all about the upgradability, the open source aspect, sustainability, etc. Good luck if you want to open your Lenovo laptop and still get it warrantied for anything.

BirAdam(10000) 6 days ago [-]

Why oh why does every decent laptop have a 1080p screen? Apparently, if I want something other than 1080p I either buy a MacBook and pay the Apple Tax, or I buy something overpriced and often terrible in every other way?

Terretta(10000) 6 days ago [-]

> I either buy a MacBook and pay the Apple Tax, or I buy something overpriced and often terrible in every other way?

So it's not really a tax then, it's a quality cost.

30944836(10000) 6 days ago [-]

Because Linux's support for HiDPI (specifically, fractional scaling) is limited.

jwcooper(10000) 6 days ago [-]

This laptop has a 2256x1504 screen. The webcam is 1080p.

nrp(10000) 6 days ago [-]

The Framework Laptop's display is 2256x1504.

sryie(10000) 6 days ago [-]

I recently received my first framework laptop after being a loyal Thinkpad user for years. I am loving it so far. I run Ubuntu 22.04 daily and have not had any issues with battery life or the lid (but I do typically leave it plugged in during lunch and overnight). The expansion cards are brilliant and the keyboard is comparable to my old t-series. The aspect ratio is great for coding and I'm happy to see upgradeability is being taken seriously as promised. If I can get 5-10 years out of it like my old ThinkPads (all while upgrading piecewise along the way) I will be a fan for life.

prohobo(10000) 6 days ago [-]

I heard the keyboard is good, but do you mean the newer T-series chiclet keyboards?

Goronmon(10000) 6 days ago [-]

I recently received my first framework laptop after being a loyal Thinkpad user for years.

I get excited about different laptops occasionally...and then I remember that I won't have a trackpoint if I switch to a different brand, and I get disappointed. Literally happens every few months.

efsavage(10000) 6 days ago [-]

> The aspect ratio is great for coding

If I ever need to buy a laptop this would be a huge feature for me, I would love if they still made 4:3 displays for desktops, it's so much better for the triple-wide setup I prefer, especially on the sides.

xur17(10000) 6 days ago [-]

> and the keyboard is comparable to my old t-series.

Really happy to hear this bit since it's my main concern when buying a new laptop. My 2 other questions - how long does the battery last, and how is overall build quality?

favadi(10000) 6 days ago [-]

Do you have any problems with resolution or screen scaling? I think it requires 1.5x scaling, which often causes screen tearing and artifacts on Linux.
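
For context on the 1.5x figure: the panel is 2256x1504, and dividing the height by 1.5 doesn't land on whole pixels, which is exactly where fractional scaling gets fiddly on Linux. A quick sketch (the `logical_size` helper is illustrative):

```python
# Logical (scaled) size of the Framework's 2256x1504 panel at common
# scale factors; non-integer results are where fractional scaling hurts.
def logical_size(width: int, height: int, scale: float) -> tuple[float, float]:
    return width / scale, height / scale

for scale in (1.0, 1.5, 2.0):
    w, h = logical_size(2256, 1504, scale)
    print(f"{scale}x -> {w:g} x {h:g}")
```

At 2x the result is a clean 1128 x 752 logical resolution, but at 1.5x the height comes out fractional.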

reaperducer(10000) 6 days ago [-]

have not had any issues with battery life

Looks like it can be charged from any USB-C port you install in it.

Much better than my work-assigned ThinkPad, which only allows charging through one specific port. As if everyone on the planet has their wall plug in the same location.

Oxodao(10000) 6 days ago [-]

These 'battery life improvements': will those be available for 1st gen Framework motherboards, or are they hardware-related?

And does anyone know if the bug in the HDMI card preventing it from going to sleep is still a thing / needs a firmware update / is hopeless?

nrp(10000) 6 days ago [-]

We have a firmware update in testing to improve shut down (S5) drain. For s0ix, we are investigating firmware paths to reduce power consumption. The card itself actually does go into a low power state, but the USB4/TBT4 retimer stays in a higher power state. That is something we were able to fix in a combination of hardware and firmware on the new 12th Gen Intel Core systems for s0ix/Modern Standby. We're investigating paths to improve this that would work for 11th Gen as well, but nothing final yet.

spullara(10000) 6 days ago [-]

Can I reassign the ctrl and fn keys? I don't use Fn and it is super annoying that it is where it is.

digisign(10000) 6 days ago [-]

If not, you can almost always swap it with Caps Lock via software. That should slowly break the dependence on the corner key.
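
For example, on Linux/X11 the Caps Lock/Ctrl swap is the standard XKB option `ctrl:swapcaps`. A minimal sketch, assuming a Debian/Ubuntu-style setup where options live in `/etc/default/keyboard` (other distros expose the same option through their own settings, or per-session via `setxkbmap -option ctrl:swapcaps`):

```
# /etc/default/keyboard — swap Caps Lock and left Ctrl
XKBOPTIONS="ctrl:swapcaps"
```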

motiejus(10000) 6 days ago [-]

Any photos of how the Ethernet expansion card looks while plugged in? It seems to be bigger than the expansion slot.

I intend to keep it there permanently, which brings questions about durability, especially when carrying the laptop around.

nrp(10000) 6 days ago [-]

It is oversized, but robust enough to keep installed (I have been dogfooding it for the last few months). We'll add more photos of it installed to the product page before we open sales, to make sure folks know what they are getting into before buying.

rkagerer(10000) 6 days ago [-]

As in, sticks out past the edge of the laptop?

CalRobert(10000) 6 days ago [-]

Thank you for making an upgrade kit! Very happy with the Framework I bought 2 months ago but seeing options like this - https://frame.work/gb/en/products/12-gen-intel-upgrade-kit - is nice.

noveltyaccount(10000) 6 days ago [-]

This is so amazing. If I had a 10th or 11th gen i7 laptop, no way I'd rush out and buy a new laptop for two or three grand. But a new mainboard for $600? Yeah, that's an annual upgrade train I can get on!

5-(10000) 6 days ago [-]

congrats on the update!

framework is a great laptop with macbook-like chassis.

all that's needed to make me and a vocal minority happy is an alternative thinkpad-like chassis.

Markoff(10000) 6 days ago [-]

more like 7 row thinkpad keyboard for starters

Pasorrijer(10000) 6 days ago [-]

Any plans for discrete graphics?

pc2g4d(10000) 5 days ago [-]

I came here wondering the same thing. I guess I could just get the eGPU I've tossed around for a while?

pizza234(10000) 6 days ago [-]

Alder Lake is still not fully supported by Linux (improvements are coming with v5.18, which is not stable yet, and it will take a while to be released into many Linux distros, at least the Ubuntu-based ones).

It's a shame, because it would have been a great moment to offer an AMD alternative.


smoldesu(10000) 6 days ago [-]

Alder Lake works just fine on Linux, it's only Thread Director which is missing. Not that these machines would really even need it, the current CPU prioritization code seems to work surprisingly well.

howinteresting(10000) 6 days ago [-]

Think most distros have backported the Alder Lake patches to their kernels.

ripley12(10000) 6 days ago [-]

Things are fine on Linux even without Thread Director support. I've been running Fedora 36 (kernel v5.17) on a 12900K for a few months now without any noticeable issues.

neurostimulant(10000) 6 days ago [-]

The Ethernet Expansion Card seems to be using a USB Type-C connector. Can it work on non-Framework computers?

Also, does anyone have a recommendation for a great, affordable router with 2.5 gigabit Ethernet ports for a home lab setup? I've been searching for one, but it seems only gaming routers include these ports. I'd prefer something more enterprisey (lots of options to tinker with, like MikroTik or pfSense), but those usually don't come with 2.5 gigabit Ethernet ports; instead they (the affordable ones) have plenty of 1 gigabit Ethernet ports and a single SFP+ port. Or should I bite the bullet and go full SFP+ for a home lab setup?

nrp(10000) 6 days ago [-]

Yep! It will work as a normal USB-C Ethernet adapter, but due to the form factor, there is risk that you can apply an excessive amount of torque to a normal USB-C receptacle if the Ethernet cable gets pulled.

rgrmrts(10000) 6 days ago [-]

I would buy a 15/16" version of this in a heartbeat :) I really hope another chassis with a larger screen is in the cards for framework!

jillesvangurp(10000) 6 days ago [-]

HiDPI, with a more sane aspect ratio and HDR suitable for graphics work, would be high on my wish list.

I actually like using Darktable. And I like using it on a good screen better. So even though I have a Linux laptop that runs Darktable very nicely (even with just Intel Xe graphics), I actually do a lot of photo editing on my 8 year old imac, which has a 5K screen, fantastic contrast & colors, etc. It shows me stuff my laptop is simply incapable of showing. Darktable runs like a dog on it but at least I can see what I'm doing properly and have enough screen real estate to actually fit the tools in the sidebar on the screen without having to scroll.

I'd love to see a Linux laptop that is optimal for graphics, movie editing, etc. Mediocre 1080p screens are simply not good enough anymore. Apple stopped shipping anything non-HiDPI years ago. Even the cheapest MacBook Air has a decent screen. Decent contrast, easy to calibrate, beautiful colors, and excellent dynamic range. Probably best in class by any objective measure. Why can't Linux users get screens that good? It's not like Apple doesn't buy their parts from the same usual suspects in China and Korea when it comes to screens and other things you need to build a laptop.

tomrod(10000) 6 days ago [-]

I like the 13" but totally agree.

jadbox(10000) 6 days ago [-]

17" 4k please! (yes, I can do coding with that resolution at 120% scaling)

bcrosby95(10000) 6 days ago [-]

Yeah, my 13" laptop just died on me, and it was just too uncomfortable for me to risk buying another 13" laptop.

The Ethernet port is a big bonus for me too. Oh wells.

maz-(10000) 6 days ago [-]

Really happy to see this, although I must've bought one of the last Gen 11 Frameworks at full price (literally 2 days ago).

zeagle(10000) 5 days ago [-]

I would contact them and ask them to adjust the price. You are within the return window and it is cheap goodwill.

jai_(10000) 6 days ago [-]

Does anyone know if the Framework team plan to offer an ARM based mainboard?

I'm honestly not even sure that there are any good ARM-based SoCs to make a laptop mainboard from, but given what we've seen from Apple's iPhone chip development being integrated into laptops and desktops, I wonder if something similar could be done with other existing ARM CPUs from Samsung or Nvidia?

wmf(10000) 6 days ago [-]

I doubt they'll offer ARM, but the RK3588 is designed for laptops and not embarrassingly slow (about 3x the performance of an RPi 4).

jarbus(10000) 4 days ago [-]

I'm also interested in this, mainly for the battery life improvements.

CameronNemo(10000) 6 days ago [-]

Do any ARM SoCs support USB4 (apart from Apple Silicon)? IIRC that is the main reason they cited for not shipping AMD boards.

jameshart(10000) 6 days ago [-]

Ohhhhh that's why there was a spate of projects posted last week about building computers based on framework mainboards - submarine marketing for this framework upgrade launch. Figured it was framework behind it somehow, but the fact that it's to promote the 'here are some ways to use your old mainboard once you upgrade' angle makes a ton of sense.

natosaichek(10000) 6 days ago [-]

Yeah - whenever somebody makes a good thing and people use it in cool ways, that's submarine marketing for that thing.

nrp(10000) 6 days ago [-]

It's somewhat less nefarious than that. Before we announced the availability of a new Mainboard that existing Framework Laptop users can upgrade to, we wanted to make sure that there were interesting ways for people to re-use their old ones. When we sent out hardware to some creators, we told them we would appreciate it if they posted their projects by X date, leading to them clustering just before that date.

nrp(10000) 6 days ago [-]

I'm happy to answer any questions around this! We've been working on this update since we launched the product last year, so we're excited to be able to share it today.

hajile(10000) 4 days ago [-]

Notebookcheck says that the 1240p actually performs BETTER than the 1260p due to thermals. Is this true for the new Framework laptop?


akavel(10000) 6 days ago [-]

Piling on the wishlist: any chances of a fanless mainboard in the future? I'm a sucker for fanless, hard for me to imagine going back...

Kerrick(10000) 6 days ago [-]

Will all software shipped with the hardware when ordered with GNU/Linux be Free or will there be non-Free software such as the BIOS/UEFI?

EDIT: I just realized that you cannot order this laptop with GNU/Linux pre-installed. I was mistaken.

the_gipsy(10000) 6 days ago [-]

Any chance of ortholinear (grid) keyboards happening?

rochansinha(10000) 4 days ago [-]

Any information when it will launch in India?

waiseristy(10000) 6 days ago [-]

What style of touchpad does the device have? Is it a force sensor style (macbook), hinged (most recent thinkpads), or one-big-button (also some thinkpads had this)?

I absolutely despise the hinged touchpad on my thinkpad as you can't click unless you're pushing on the bottom half of the touchpad. A force sensor touchpad alone would make me put in an order for a framework laptop

pkulak(10000) 6 days ago [-]

Any word on a resolution bump? We only need a few more lines for 2x support!

mentos(10000) 6 days ago [-]


Looking at the DIY Guide [0], it looks like a lot of the laptop still comes pre-assembled (case, motherboard, screen, keyboard).

Is it more cost effective to do the labor on Framework's side to ship everything more tightly together in 1 box or could we see a 'DIY Pro' option that ships every component in its own box? (Maybe even at greater discount?)

Also, check out this Mechanical Watch [1] tutorial that made it to the front page of HN last week. I could definitely see an exploded assembly view like this being really instructional for Framework DIY-ers.

[0] https://guides.frame.work/Guide/Framework+Laptop+DIY+Edition... [1] https://ciechanow.ski/mechanical-watch/

guerby(10000) 6 days ago [-]

Hi nrp

I demo-ed my frame.work laptop yesterday to https://www.matinfo-esr.fr/ which is a single buyer entity for all french universities and public research institutes (once hardware is in their catalog it's click to order for universities without administrative hassle).

They showed interest on the non obsolescence, durability and repairability aspect of frame.work since these features are part of their public service mission.

Feel free to contact me, my email is on the website listed on my HN profile

deng(10000) 6 days ago [-]

Does the laptop support proper S3 sleep, or is this impossible with modern Intel CPUs?
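
On Linux you can answer this for a given machine by reading `/sys/power/mem_sleep`: the kernel lists supported suspend modes there, with brackets marking the active one ("deep" is classic S3 suspend-to-RAM, "s2idle" is suspend-to-idle, the s0ix-style mode). A small sketch (the `parse_mem_sleep` helper is hypothetical) parsing that format:

```python
# /sys/power/mem_sleep looks like "s2idle [deep]": each word is a
# supported suspend mode, and brackets mark the currently active one.
def parse_mem_sleep(contents: str) -> tuple[list[str], str]:
    modes, active = [], ""
    for token in contents.split():
        if token.startswith("[") and token.endswith("]"):
            token = token[1:-1]
            active = token
        modes.append(token)
    return modes, active

# On a real system: parse_mem_sleep(open("/sys/power/mem_sleep").read())
print(parse_mem_sleep("s2idle [deep]"))  # → (['s2idle', 'deep'], 'deep')
```

If "deep" appears in the list, the firmware exposes S3; if only "s2idle" is present, the machine relies on s0ix.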

ongy(10000) 6 days ago [-]

Less of a question, more a note:

On the configure page to pre-order the 12th-Gen variant, there's a link to the 12th-Gen variant. It feels a bit weird and confusing to be pointed towards the shiny new variant while shopping for the shiny new variant.

Jhsto(10000) 6 days ago [-]

Any plans to ship models with coreboot one day?

sargun(10000) 6 days ago [-]

How are you using CNC to make parts en masse in a cost effective way?

sireat(10000) 6 days ago [-]

Please, more country availability!

I've been waiting for 12th gen Alder Lake availability and am ready to pay. However, as an EU citizen from one of the Baltic states, I am unable to do so.

Please, tell us that this year any EU citizen will be able to order a Framework laptop.

I could not even find which friends in which countries to ask to order a Framework for me. It used to be the US, then the UK, and I know there are a few other ones.

Combined with a waitlist the logistics are painful.

At least I hope that signing up for the waitlist from a specific country counts as something.

tomerv(10000) 6 days ago [-]

When will you open the option to order to Israel?

  We haven't opened ordering in your region yet, but we're looking forward to getting there! We can notify you when ordering opens:
zucker42(10000) 6 days ago [-]

Why is the SN750/SN850 the default SSD, given it has relatively high power consumption[1]? And separately, is there any reason to believe that building a DIY version with a different SSD wouldn't work?

[1] https://www.tomshardware.com/reviews/wd-black-sn850-m-2-nvme...

anonporridge(10000) 6 days ago [-]

It looks like you currently can't order without Windows bundled. Will there be an option to order the kit without Windows for a reduced price?

spiffytech(10000) 6 days ago [-]

Any plans to offer larger displays?

rocqua(10000) 6 days ago [-]

I see there are only DDR4 options. Presumably if you bring your own memory, that also has to be DDR4.

Why no DDR5?

boppo1(10000) 6 days ago [-]

Any chance you'll eventually have a Framework with a 'clickier' keyboard and a trackpoint like the x220? I will happily buy your product the next time my x220 dies (instead of upgrading it) if it has the nice clicky keyboard and a trackpoint. A slightly thicker laptop is very much a fair trade-off.

philliphaydon(10000) 6 days ago [-]

You have a bunch of job openings in Taiwan, but you currently don't ship to Taiwan? :(

BlackLotus89(10000) 6 days ago [-]

Is there a plan to offer other payment methods and/or multiple-laptop orders? We want to use frame.work laptops for work, and those limitations make it really hard for us to get them through logistics/purchasing. The upgraded version would be an ideal reason for me to re-request this as my new main machine.

etbusch(10000) 6 days ago [-]

Do you have any preliminary specs on how battery life differs between these and the 11th gen equivalents?

gavinpc(10000) 6 days ago [-]

I just ordered a Framework yesterday. I'm not interested in the 12th gen chip, but is there any other reason I might want to cancel & re-order today? i.e. would I be getting an older design?

edit: Someone also brings this up on the OP: https://community.frame.work/t/introducing-the-new-and-upgra...

stingraycharles(10000) 6 days ago [-]

I was waiting for the availability of the US international keyboard for DIY builds, but I got an even better present today. I have just made my preorder, surprised to see that a 1280P CPU with 64GB RAM is very reasonably priced!

I was in the market for a MacBook Pro / max upgrades as well, mind you, so effectively I also saved a lot of money (I believe at least a $1k price difference).

I use Linux as my daily driver, super happy to see the better support here as well.

All in all, thank you for making a refreshing change in this market.

user_7832(10000) 6 days ago [-]

Not too different from the AMD comment, is there any plan or roadmap for future generations of (yet-unreleased) Intel chips?

Also congrats on the update, I honestly wasn't expecting it. I'm seriously considering a framework laptop/motherboard for my next PC.

schmorptron(10000) 6 days ago [-]

Wow, that's quite a price jump from the i5 to the i7 and then subsequently to the 6-core one. Could you talk a bit about the economics of having hotter / higher-end chips in a notebook and whether there are other non-obvious cost increases to them? Are the higher-end models 'subsidizing' the lower-end one, or are there motherboard / chipset upgrades that need to happen as a result?

Really like the laptop though, and it's a close contender when it's my time to upgrade... :)

verall(10000) 6 days ago [-]

Any chance for an add-in card with an Intel NIC? I have had issues with the realtek USB-C NICs and I was hoping the 2.5GbE would be intel.

djbusby(10000) 6 days ago [-]

Something is really wacky with vertical scrolling on your pre-order page (Pixel 3/Chrome). I wasn't able to complete the process.

yasing(10000) 6 days ago [-]

What are the specs on the expansion cards? Looks like USB-C... why not let me order with 0 expansion cards and use a dongle of my choosing?

lighttower(10000) 6 days ago [-]

Will it be possible to get a keyboard like the Thinkpad's? FULL SIZE arrow keys. Menu button next to Right ALT. And PGUP PGDN adjacent to the arrows?

Trackpoint is a bonus

PetitPrince(10000) 6 days ago [-]

Any chance to have a trackpoint-style pointing device in the future?

pimterry(10000) 6 days ago [-]

What are the constraints that are blocking wider EU availability?

Right now, in Europe it's only available in a handful of countries (5 of 27). I'm in Spain, and I see I can spec a perfect machine and get it delivered just over the border in France, but I can't get the same thing delivered here just a couple of hours away, which is very surprising! My understanding was the single market & customs union etc should make going from 1 to N EU countries pretty easy.

Is this due to some regulatory issues, or needing to organize shipping differently for every country, or waiting to include an ñ key, or something else?

Right now, I'm very seriously looking at ordering one, renting a PO box in France and shipping the laptop here myself, which seems a bit ridiculous.

aestetix(10000) 6 days ago [-]

Can existing customers 'trade in' their old motherboards for the newer models, with credit applied to make it cheaper?

EastSmith(10000) 6 days ago [-]

Not a question for the announcement, but the location page is missing countries, for example Bulgaria. This prevents me from even telling you I want to order from here (Bulgaria).


cf(10000) 6 days ago [-]

I love everything about what you have planned. Is there anything in the works for creating more keyboard options? While mechanical keyboards might be too impractical, even something with bigger arrow keys would be nice.

nspattak(10000) 6 days ago [-]

I would really like to buy one BUT I find it a little bit too expensive, especially the price difference for better CPUs, which I find disproportionate. (I would be slightly more tempted if it were an AMD 6000 CPU; they are much better in perf/power. I hope you will reconsider in the next generation, when the iGPU will be RDNA based.)

yumraj(10000) 6 days ago [-]

Any chance of 15"-16" in near future?

defaultwizard(10000) 6 days ago [-]

Are there any plans (that you can talk about) for a slightly larger model with maybe more ports?

I want one of these so bad, but if you end up doing a larger one shortly down the line I'm going to be really gutted.

Edit: also any plans for a blank ISO keyboard to match the blank ANSI one?

mssdvd(10000) 6 days ago [-]

What are the main reasons for not shipping to other EU / EEA countries?

vodkapump(10000) 6 days ago [-]

I know this gets asked a lot and isn't really about this new upgraded model but..

Any news on plans for AMD models?

helloworld653(10000) 6 days ago [-]

How many external 1080p60 monitors can this drive with the laptop open? 4?

nrp(10000) 6 days ago [-]

Yep, you can use up to four displays in total. That includes the internal display, so using four external monitors would mean turning off the internal one.

tomrod(10000) 6 days ago [-]

I'm on an 11th gen and just ordered another one a week ago.

Great daily driver!

Sebb767(10000) 6 days ago [-]

Same, I ordered just a few days ago! Pity; after waiting for so long, I should've just waited a few days longer and saved €70. Still, looking forward to finally having one :)

kristianp(10000) 5 days ago [-]

On a tangent about the 12th gen i5s, does anyone understand why the cheaper SKUs remove the E cores and not reduce the number of P cores? I suppose Intel intends that power efficiency (battery life) should be a premium feature now.

nani8ot(10000) 5 days ago [-]

That's because Intel uses the E/P cores differently. ARM chips usually try to use E cores primarily for saving battery, so they only use P cores when performance is necessary. But Intel uses E cores to allow for higher multi-core performance while staying within their power budget and the available die space.

i5s get the same number of P cores as i7s, so their general application performance is pretty similar. But when they compile/render something, the many small E cores make the CPU faster without melting the system down...

The thermals are also why Intel 11th gen [1] had a maximum of 8 cores, while Intel 10th gen [2] had a maximum of 10. AMD pushed forward with their up to 16 cores, and because of how good their performance per watt is, they could cool them. Intel noticed with 10th gen that they couldn't achieve high enough clock speeds with so many cores.

[1] https://ark.intel.com/content/www/us/en/ark/products/series/... [2] https://ark.intel.com/content/www/us/en/ark/products/series/...

justin66(10000) 6 days ago [-]

> We've redesigned our lid assembly for significantly improved rigidity

They should make this part available to existing users as a warranty replacement. It sounds like they've addressed a common complaint on their support forums.

The lid that flops over because of the hinge's weakness, and the absurd excuses made by company personnel (they claim it was designed this way 'to accommodate opening the laptop with one hand,' as if the people who open a laptop with one hand do not need the lid to stay upright), have been a great disappointment for me with this laptop. It is a design defect, not a feature, for the hinge to be this weak.

edit: apparently they're talking about this, so I guess we're stuck with the weak hinge:


nrp(10000) 6 days ago [-]

If the lid drops when the laptop is stationary, the hinge is out of spec and we'll send you a replacement through our support channel.

coder543(10000) 6 days ago [-]

My framework laptop had no issues with the hinge whatsoever. It honestly might have been a little too stiff for my tastes, but it functioned perfectly, and the screen never budged without me moving it intentionally. (Past tense is because I recently sold it as I was using my M1 MBA way more often, mainly due to how long the framework laptop took to come out of sleep mode by comparison. This isn't really a slight against Framework... Apple just did an unreasonably good job with M1 in some areas.)

I would not say the hinge issues have been a 'common' complaint. They've been the most common complaint that I've seen on Reddit, but still rare, especially once you factor in that people usually only go online to complain, and anyone with the hinge issue isn't going to hesitate, since it would be understandably annoying.

foodstances(10000) 6 days ago [-]

They upgraded the hinges at some point during manufacturing to address the screen falling down during light movement:


I believe this upgraded lid assembly is to address the screen wobbling during typing. It's a very thin lid and has a lot of flex, so the tighter hinge just transfers the force into the lid, causing it to wobble. Hopefully this will be eliminated with this upgraded lid.

coldtea(10000) 6 days ago [-]

>We continue to focus on solid Linux support, and we're happy to share that Fedora 36 works fantastically well out of the box, with full hardware functionality including WiFi and fingerprint reader support. Ubuntu 22.04 also works great after applying a couple of workarounds, and we're working to eliminate that need.

This disclaimer (from a company that picks their hw components, no less) is cold water on 'Linux on the desktop' being any sort of solved problem.

anon_123g987(10000) 6 days ago [-]

There's no such operating system as 'Linux'. I don't know what these 'workarounds' are exactly, but if it's something like installing a driver for a fingerprint reader that's present in a standard Fedora distro but not in vanilla Ubuntu, then I don't see the problem. Of course it won't work out of the box.

DeathArrow(10000) 6 days ago [-]

I'm wondering what's the battery life and power usage under Linux. For many laptops this is a problem.

jklinger410(10000) 6 days ago [-]

Had this issue with System76, which offers NVIDIA cards that suck on Linux, and various hardware that requires firmware not in the linux kernel or anywhere else, where you have to install and update that firmware separately (like windows).

All they make are Linux computers and they couldn't/didn't/wouldn't for some reason produce a laptop that just natively worked.

arghwhat(10000) 6 days ago [-]

Well to be fair, no desktop experience is solved if one isn't allowed to apply adjustments for their hardware (drivers, user space tools and whatnot).

My experience on Linux certainly isn't flawless, but I have about as many issues whenever I'm handed a Windows laptop as others have trying Linux. Computers suck.

throw93232(10000) 6 days ago [-]

It is laptop, not desktop ;)

noirbot(10000) 6 days ago [-]

To be fair to them, _desktop_ Linux is a fair bit easier than laptop Linux. Laptops have many of the components that have been the most neglected/hardest to work with: wifi cards, bluetooth, trackpads, fingerprint readers... All the worse because there's often little or no choice of provider for the components.

For the most part, on a full desktop, you can avoid most of the need for those, or buy a part that works better.

nrp(10000) 6 days ago [-]

To be fair to Linux on the desktop, one of the major challenges is synchronization between new hardware platforms (12th Gen Intel Core), and distro cycles (22.04). We fully expect that the next point release of 22.04 will have a kernel that works well out of the box with 12th Gen. Fedora seems to more consistently be able to go out with more recent kernels. Fedora 36 with 5.17.6 works smoothly.

deepsun(10000) 6 days ago [-]

Used it for half a year. Everything works awesomely, but I don't know how to update firmware on my Linux Mint/Ubuntu. There are some guidelines on the website, but they don't seem official, and they say something like 'you may need to fix your bootloader after', which makes it sound like I'll break my perfectly working system.

unnouinceput(10000) 5 days ago [-]

'The system' is on your hard drive, yes? Who's stopping you from buying another one for testing and swapping them out? This way you preserve your perfectly working system AND get to play as you desire on the other one. IMO the Framework laptop is made specifically to be tinkered with as much as possible.

nerdjon(10000) 6 days ago [-]

It is exciting to see an upgradeable laptop actually be upgraded.

But I have to wonder what the market for this is? The primary use case I see for something like this is a gaming laptop, which this is just nowhere near being suitable for.

Outside of that use case, for the vast majority of compute workloads, is being able to upgrade really a need? I have 2 laptops (well, technically 3, but I don't really count my work one): a gaming laptop and my Mac as my primary computer outside of gaming. I tend to upgrade my Mac maybe every 4 or 5 years, maybe even less often than that. My Mac I got in 2019 and feel no need to upgrade anything in it.

My gaming laptop on the other hand... If I had the ability to upgrade that I would likely upgrade parts every year or 2... like a good a gaming desktop.

What am I missing here outside of the excitement of an upgradeable laptop? I don't want to diminish the work on that, I am just unclear the use.

victor9000(10000) 6 days ago [-]

A few weeks ago I spilled an entire cup of sugary espresso on my framework laptop which completely ruined my keyboard by making it a sticky mess. You know what I did? I ordered a replacement keyboard kit for $99, installed it in ~5 minutes, and I haven't thought about it since.

Some other part will fail in the future, or I'll spill another cup of coffee, and when that happens all I need to worry about is swapping out the affected parts. And that's great compared to my previous alternatives with an XPS, which was basically to buy a brand new laptop.

Smithalicious(10000) 5 days ago [-]

Any plans to sell a full laptop kit minus mainboard? I have a Framework with the 11th Gen CPU (that I love very much) and I like that there is an upgrade path to the 12th gen CPU, but afaict that would leave me with an 11th gen mainboard that I can't turn into a second laptop easily and would be hard to sell for the same reason.

bacheaul(10000) 4 days ago [-]

Considering you'd like to buy both a complete laptop minus the main board (to put your existing main board in), and a main board (to upgrade your existing 11th gen), I'm sure there's a solution we can find here...

hollerith(10000) 5 days ago [-]

People put the mainboard in a 3D-printed case (the plans for which are given away by Frame.work) to make a tiny desktop computer (like a NUC), so a mainboard would sell.

angulardragon03(10000) 6 days ago [-]

Looks like 12th gen DIY edition is €60 more than the MSRP of the old model. Wonder if the price will come down further for the old model.

EDIT: site was down while I was checking this, the 11th Gen DIY edition is €829, so €130 cheaper than 12th gen.

nrp(10000) 6 days ago [-]

We announced newly reduced pricing for the original 11th Gen Intel Core-based Framework Laptops. We've sold out of some SKUs and have limited numbers of the remaining ones, so that new reduced pricing is likely the final pricing until we run out of those.

post_break(10000) 6 days ago [-]

Will board schematics be available for repairs? Something Louis has been asking for.

nrp(10000) 6 days ago [-]

We recently released the subset of schematics that we were able to. Louis did a recent video on this: https://www.youtube.com/watch?v=8cJj8PUY0DU

pmlnr(10000) 6 days ago [-]

Please, please, please make a keyboard option that has full-size arrow keys and dedicated PgUp, PgDn, Del, Ins, Home, End. The current laptop keyboards, apart from ThinkPads, are a joke for those who want to work on them; the Framework, sadly, is included.

I'd also love a trackpoint with 3 dedicated buttons but I'll keep dreaming.

archon810(10000) 2 days ago [-]

100% this and given the target demographic, I can't believe I had to scroll so much until I found the keyboard called out.

kristianp(10000) 5 days ago [-]

I agree about the separate Home, End PgUp, PgDn keys. Still mad at Apple for influencing the removal of these from all laptop keyboards! It's one big reason that I have a Thinkpad, even though their keys aren't as good as a Surface's or Mac's.

paskozdilar(10000) 6 days ago [-]

Do Frameworks laptop work without proprietary software/firmware/whateverware?

If you can run a 100% free software GNU/Linux distribution such as Trisquel on Framework laptops, that would be a definite buy for me.

qboltz(10000) 6 days ago [-]

They do, I run GNU Guix but you have to buy a wifi card that's free software compatible, which must be bought somewhere else.


doyougnu(10000) 6 days ago [-]

I recently bought a framework laptop for a daily driver when I'm not on my desktop. For context I was running NixOS on an old 2014 macbook air, and I work on the glasgow haskell compiler in my day job so I do a lot of CPU heavy tasks.

I've got to say, as long as these things are being produced I'll never go back. They are just too good and I cannot recommend them highly enough. One of the things that didn't occur to me before I bought it was that _because_ of the modular design I can switch the side the power port is on. That may not seem like much but it was a revelation the first time I sat on the couch and thought 'huh I really wish this was over on that side....wait a minute!'.

I've also had absolutely no problems with NixOS on my machine, even my apple earbuds easily connect via bluetooth, something that I never quite got working on my macbook.

10/10 This is damn close to my dream laptop and I'm excited a new version is on the way.

rcoder(10000) 6 days ago [-]

> 10/10 This is damn close to my dream laptop and I'm excited a new version is on the way.

Agreed, with the seemingly-trivial but actually real elaboration: I'm excited because there's a new version on the way and _I can decide, piece by piece, which parts of the upgrade I want._.

Having the upgrade be a literal circuit board I can swap out is 100% the value prop for Framework and I am likewise a very happy customer to see it, even if I'm happy with the current performance of my laptop and don't need to upgrade.

smoldesu(10000) 6 days ago [-]

I can't even imagine how good these'll be on Alder Lake... might have to grab that i5 model.

gigatexal(10000) 6 days ago [-]

Does the new revision with the 12th-gen chips fix the complaints people have had about loud fan noise?

I am super on the fence between this and an ARM Mac: this is super customizable, but the ARM chips in the Air are silent — no fan.

fui(10000) 3 days ago [-]

Apologies, I am new to HN, could you please share a write-up of your experience and process in case you haven't already? I'm moving from an MacBook Pro to a Framework and before the MBP, I used Slackware as my daily driver. Would appreciate any tips on using NixOS as a daily driver.

dheera(10000) 6 days ago [-]

Same. The only things I wish for are slightly better build quality; I've also had issues with Wi-Fi disappearing of late [0], fast battery drain during suspend, and the battery refusing to charge from zero (though there's a workaround involving a dumb USB charger). Kind of hoping these are just early-adopter issues that will be dealt with over time.

I really hope some community hardware experts can design more modules for this thing. I want an IMU+GPS+Barometer module among other things, but I'm a software person and don't know how to design PCBs.

[0] https://community.frame.work/t/wi-fi-disappeared-and-reappea...

jerryzh(10000) 6 days ago [-]

To be fair, since MacBooks moved to Type-C you've been able to charge on either side, and that's been the case for many years; I kind of look forward to a future when all ports use Type-C. But when it comes to the insides, a Mac by no means compares to a Framework.

emiller88(10000) 6 days ago [-]

FYI, we've added support for the framework to nixos-hardware. I appreciate any feedback or improvements anyone has! https://github.com/NixOS/nixos-hardware/blob/master/framewor...

causality0(10000) 6 days ago [-]

Don't most laptops with type-c power inputs support charging through any of them? My Asus does.

fuzzybear3965(10000) 6 days ago [-]


nikodunk(10000) 6 days ago [-]

Agreed! Got one from work, and it's a beast on Fedora 36 with the 11th gen. Even the discrete-ish Iris Xe graphics are surprisingly fast. So cool that we'll actually be able to update the innards in a few years as necessary to keep it feeling fresh.

Edit: A small but nice design feature is the light that comes on to indicate whether the USB-C port is charging properly. Coming from a Mac that removed this feature when USB-C charging was introduced, this is a huge luxury.

fossuser(10000) 6 days ago [-]

Does suspend work reliably? Is battery life ok? Does the trackpad suck?

I'm tempted but every time I've tried so far to leave Mac hardware I regret it - seems even harder now with M1 performance.

Still, the framework laptop is super cool. Might be worth trying anyway.

shawnz(10000) 5 days ago [-]

> One of the things that didn't occur to me before I bought it was that _because_ of the modular design I can switch the side the power port is on.

I'm not really sold on the integrated dongle design of the framework. Doesn't this argument speak more to the design of USB-C than it does to the integrated dongles?

wollsmoth(10000) 6 days ago [-]

You can basically do this with macs if you still use the usb-c charger. Even with the new ones they still charge through those ports on either side.

But yeah, being able to swap those ports is great. I'm feeling the pain of having only 1 hdmi out on my laptop and the ability to just add one on sounds amazing.

Transisto(10000) 4 days ago [-]

I'd expect every new laptop to have at least one USB-C port on both sides by now.

It'd be great if they could make an extension with more than one port on it; they're wide enough for more.

bodge5000(10000) 5 days ago [-]

It's a shame they don't have the option of AMD processors; the RDNA2 iGPUs included in the new ones would be more than sufficient for me, whereas my understanding is that the Intel iGPUs leave a lot to be desired.

As game dev is one of the main things I do with a personal PC, sadly this means I'm somewhat tied down to having a decent GPU. RDNA2 would be perfect for me: powerful enough to dev on and weak enough to test on (so I don't need a separate low-spec machine for testing low-end performance).

fiddlerwoaroof(10000) 6 days ago [-]

This is interesting: over the last several months, a friend has been running NixOS on a Framework and has been told by Framework employees that they can't help him with Linux kernel issues because he's using an unsupported OS and he's also had lots of complaints about battery life and power management.

I love the idea of the Framework, but it seems to suffer from all the issues that made me switch to MacBooks in the first place.

plaguepilled(10000) 5 days ago [-]

'We're not taking orders in your country' COOL THANKS!

jmprspret(10000) 5 days ago [-]

Yep. (Still...).

kristianp(10000) 5 days ago [-]

Once I get to the checkout and address stage, the only country that can be selected is USA. I think they should let you know well before you have to create an account that this is the case.

headsoup(10000) 6 days ago [-]

Nice. I noticed the 'Pre-order now' button for the 12th gen DIY edition goes to the configuration page for the normal edition.

I was confused that all options had Windows installed...

You can still get to the DIY configure page through Product Story -> Pre-Order Now.

nrp(10000) 6 days ago [-]

We are fixing this now!

Edit: This is fixed now.

4ggr0(10000) 6 days ago [-]

I'm now waiting for over a year for this laptop to be available in my country.

Would still really like to order one, but my patience is running out, don't have a laptop currently.

EDIT: I sound very pissed off. I know that it's hard to ship to lots of countries, it's just frustrating for me to not even have an estimation. Will I be able to buy it in 3 or 6 months? Or does it take another year? no idea.

hda2(10000) 5 days ago [-]

Facing the same problem.

When I first read about this laptop, I immediately jumped to preorder it but found that they aren't sold to people like me. The same happened again when I read this article. Your comment reminded me why I still don't have one.

The thing is, I'm also starting to reconsider buying one now. I know they're small and barely able to keep up with their local markets, but competitors are starting to notice. As a non-American, I would much rather buy from an international competitor who doesn't treat me like a lesser citizen, if they can match Framework's level of openness and modularity. Once I buy into their ecosystem, I doubt I'll reconsider Framework again. I imagine a significant portion of Framework's underserved international market is in this position.

I truly wish all the success to the framework team. I just hope, for their sake, that they manage to address international demand before they lose it to their competitors.

leodriesch(10000) 6 days ago [-]

If you're fine with import tax and an American keyboard, there are services [0] that provide you with an American address and forward any packages that arrive there to you via international shipping.

[0]: https://www.myus.com/

agomez314(10000) 6 days ago [-]

Exciting stuff! Great job, y'all. Any chance RISC-V architectures will be on a Framework Laptop one day?

seanw444(10000) 6 days ago [-]

Hopefully after AMD finally comes out. That would be awesome. A truly open system.

yuters(10000) 6 days ago [-]

I'm still waiting for more keyboard options on a DIY kit in North America. It seems like a waste that for a modular laptop you can't order one with no keyboard. You have to buy a separate keyboard in the marketplace and throw away the English one.

skywal_l(10000) 6 days ago [-]

I would like to see a 75% keyboard like the HP Envy 13 x360's, with a bar of keys with Home/PageUp/PageDown/End.

intsunny(10000) 6 days ago [-]

I'd really wish for an AMD motherboard.

The new AMD chips with the RDNA2 is a Linux user's dream. The open source support is incredible.

cyber_kinetist(10000) 6 days ago [-]

I thought the latest Alder Lake CPUs pretty much caught up with AMD in terms of general performance nowadays. And Intel's iGPU support was rock solid even before AMD's.

rjsw(10000) 6 days ago [-]

The ethernet expansion card looks good.

nrp(10000) 6 days ago [-]

This was by far our most popular Expansion Card request. We've actually been working on it since before launching the original Framework Laptop. It's just a non-trivial packaging challenge, especially to land 2.5Gbit support.

xchaotic(10000) 6 days ago [-]

I had a look, and it looks like everything is upgraded: the chassis, the motherboard, etc. So if you wanted to take advantage of all of the improvements you'd need to buy them all. At which point, might it be less materially wasteful to buy another, less recyclable laptop that uses fewer parts?

hojjat12000(10000) 6 days ago [-]

You don't 'NEED' to upgrade your chassis. Even if you do, it's just one piece out of the 3 pieces of your chassis. You'd be upgrading your motherboard and CPU. You can reuse your wifi, storage, RAM, your expansion cards, your display, your speakers, your keyboard, your touchpad, your battery, power supply, cables...

That being said, you don't need to upgrade from 11th gen to 12th gen. Maybe in a few years when 11th gen isn't cutting it for you anymore you can upgrade to 15th gen.

It's great that they provide a path for upgrading, but the more important thing here is having more recent hardware for someone who wants to buy a framework in 2022.

petilon(10000) 6 days ago [-]

Still no retina display option. Steve Jobs made the right call over a decade ago... the only scaling that looks good after 100% is 200%. Any in-between scaling will have display artifacts.

This laptop has 150% scaling. What sort of display artifacts can you expect because of this? Go to a web page with a grid, with 1-pixel horizontal grid lines. Even though all lines are set to 1-pixel, some lines will appear thicker than others.

I blame Microsoft for this mess. Windows supports in-between resolutions (with display artifacts), and hardware manufacturers therefore manufacture in-between resolutions. Framework laptop is limited to what the display manufacturers put out.
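The uneven grid lines described above follow directly from the arithmetic of fractional scaling: at 150%, each 1-logical-pixel row covers a 1.5-device-pixel band, and snapping band edges to whole device pixels makes some rows 1 device pixel thick and others 2. A minimal sketch of that rounding (not tied to any particular OS or browser's actual rasterizer):

```python
# Sketch: snapping 1-logical-pixel rows to device pixels at 1.5x scale.
# Logical row y covers the device span [y*1.5, (y+1)*1.5); rounding the
# span edges to whole device pixels yields uneven thicknesses.
scale = 1.5
for y in range(6):
    top = round(y * scale)
    bottom = round((y + 1) * scale)
    print(f"logical row {y}: {bottom - top} device pixel(s) thick")
```

Running the same computation with `scale = 2` gives every row exactly 2 device pixels, which is why integer scale factors render uniformly.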

cowtools(10000) 6 days ago [-]

besides the fact that 'retina display' is a marketing term invented by Apple, I don't really see what the big deal is. I have pretty good vision and I don't notice individual pixels on my 1080p screen. More pixels means more load on the GPU.

ProAm(10000) 6 days ago [-]

It also doesn't have a touchbar and has an excessive amount of ports!

Kototama(10000) 6 days ago [-]

Maybe you can accept that no project is perfect, especially young projects, and that the exceptional effort they put into making the laptop modular is a big benefit for the environment and for reduced resource consumption, which is maybe, maybe, more important than a retina display?

daemontus(10000) 6 days ago [-]

A gentle reminder that every retina MacBook has been shipping with fractional scaling as default for years now (and it's not even 1.5). Sure, you can put it back into 2x if you want to. But you can do the same on a Framework, and then you get... wait for it... almost the same vertical resolution as a 2x 13" MB Pro (93% to be exact). If you absolutely need more space and 2x scaling, there is a large number of 4K 13"/14" laptops that are more than happy to fill that niche. Free market is your friend :)

So the argument that Windows is somehow responsible for the death of perfect 2x scaling is a bit exaggerated. People just want more space and anti-aliasing is mostly good enough so that no one cares.

2OEH8eoCRo0(10000) 6 days ago [-]

That's weird to nitpick. That's a software issue not a display issue.

hinkley(10000) 6 days ago [-]

Using an external monitor with OS X, you're often stuck with those in-between sizes if you don't either have hawkeyes or enjoy seeing super-crisp 1080p resolution taking up half of your desk, which is a waste of space.

I'm mostly not going to be looking at a 1-pixel-wide line at 4K on a 27" monitor. At 32" it might be debatable. Above that you're stacking them oddly (top+bottom, one vertical, or both), or you're down to one monitor and the real estate issue becomes more pressing.

I'm at 'stacking weirdly' and my old main monitor (a '4K' monitor that is actually 3840x2160) is vertical, and angled on the corner of my desk. OS X defaults it to 1080p, which is too big a font for how close I sit to it. Full resolution is way too tiny. So I use 1440 (1.5x).

The smallest graphics I use are in grafana, and those happen to be on my vertical monitor. I don't see any weird moire patterns when I scroll them, so if there's an issue with line width, it's well covered by things like not using #00 or #ff for all RGB color channels, which tend to show artifacts more overtly.

But then again, it's not just the hardware it's also the software, and Linux has struggled to keep up with Windows and OS X on some issues related to graphics. The saga of good fonts in X took an unseemly amount of time to sort out.

pkulak(10000) 6 days ago [-]

It's possible for an OS to support fractional scaling properly; just tell applications to render their windows 1.5 times larger, map the inputs properly, and turn off font anti-aliasing. The problem is that it requires every app to be updated, which hasn't happened everywhere yet. Android and iOS, for example, do it perfectly. So does ChromeOS.

colordrops(10000) 6 days ago [-]

No, it's not stuck at 150% scaling. You can run native resolution just fine. With Linux it's actually better than higher resolutions, as the dot pitch is similar to a 27" 4K external monitor, so you can scale natively on both and have windows look approximately the same size. My other laptop is 4K, and it's a nightmare getting scaling to work because it has such a higher DPI than my external monitor. If Linux had better scaling support for HiDPI I'd prefer a 4K laptop, but it doesn't, so native resolution is the way to go.

ElijahLynn(10000) 5 days ago [-]

I like the marketplace idea and hope they can offer a ThinkPad keyboard with a touchstick/trackpoint.

See https://community.frame.work/t/any-chance-of-trackpoint/1026... if this is you too.

andreareina(10000) 5 days ago [-]

Just having different keyboard options would be clutch; the non-inverted-T arrow keys are one of the few things I find to be really flawed.

nagisa(10000) 6 days ago [-]

Could you elaborate on why Realtek was chosen for the ethernet expansion card? It does tend to have a pretty bad rep, with one recent documentation of the issues being https://overengineer.dev/blog/2021/04/25/usb-c-hub-madness.h...

whazor(10000) 6 days ago [-]

It seems the Realtek driver issues are mostly for Mac users, which won't really be an issue for Framework users.

nrp(10000) 6 days ago [-]

Unfortunately, Realtek is the one and only choice for a USB 3.1 to 2.5Gbit Ethernet controller. We don't like it any more than you do, but there are several niches in the PC peripheral space where there is no alternative to using a Realtek part.

klaas-(10000) 6 days ago [-]

Any news about lvfs support from framework?

moderation(10000) 6 days ago [-]

Haven't been able to find anything beyond his Jan 6 2022 tweet [0]. No response to this question in the forums [1] from 4 days ago. It's frustrating that Linux users can't upgrade the BIOS without resorting to Windows or replacing boot loaders.

0. https://twitter.com/FrameworkPuter/status/147913722834957517...

1. https://community.frame.work/t/lvfs-3-07-bios-availability/1...

codezero(10000) 6 days ago [-]

I got my Framework laptop a little over a week ago and I'm pretty happy with it, but seeing such a huge performance boost is a little frustrating. That said, I bought it to support/encourage the company, so I guess it's working :P

julianbuse(10000) 5 days ago [-]

well, it is upgradeable...

noveltyaccount(10000) 6 days ago [-]

Framework team, I know you hang out here... I'd love a touchscreen+stylus digitizer model with a 360° hinge. My Lenovo Yoga needs a replacement!

freemint(10000) 6 days ago [-]

Digitizers are really expensive; I am not sure Framework is big enough to handle this. Just buying a small Wacom tablet and putting it next to the laptop also works.

Historical Discussions: "Amateur" programmer fought cancer with 50 Nvidia Geforce 1080Ti (May 20, 2022: 1136 points)

(1138) "Amateur" programmer fought cancer with 50 Nvidia Geforce 1080Ti

1138 points 5 days ago by coolwulf in 10000th position

howardchen.substack.com | Estimated reading time – 30 minutes | comments | anchor

' This is what a programmer should look like! '

' The OP is really cool, technology to save the world '

' Compared with him, I feel like my code is meaningless. '

These comments are from a thread in the V2EX forum, a gathering place for programmers in China.

When you first read these comments, you may think they are a bit exaggerated, but for the people and families who have been helped by the project, such comments ring true.

Because here's what the post does: detects breast cancer.

In 2018, a programmer named "coolwulf" started a thread about a website he had made. Users simply upload their X-ray images and let an AI quickly screen them for breast cancer.

Furthermore, the accuracy of tumor identification reached 90%. In short, the AI helps you 'read the film', with an accuracy almost comparable to professional doctors, and it is completely free.

As we all know, the cure rate for breast cancer is high when it is found early. But because the early symptoms are not obvious, it is easy to miss the best window for treatment, and the disease is often found at an advanced stage.

A reliable AI for tumor detection, however, can let the many patients who cannot get an adequate medical diagnosis in time learn of their condition earlier, or serve as a second opinion. Even if a doctor is needed to confirm the diagnosis in the end, that is invaluable in areas where medical resources are tight.

Breast cancer also has the highest incidence of all cancers ▼

This post by coolwulf quickly garnered a rare several hundred responses. In the comments section were people anxiously awaiting their doctor's test results.

Others had family members with breast cancer and were filled with uncertainty and fear. coolwulf's project gave them hope.

With this, of course, came curiosity about the project and about coolwulf himself. Where did the huge amount of clinical data and hardware computing power come from? More importantly, who was this superhuman willing to open it all up for free?

coolwulf did not reply to the many questions one by one. He soon left, keeping out of the spotlight, and rarely appeared again. But in 2022 he returned with an even more ambitious 'brain cancer project', and the mystery remained.

To clear up the fog around coolwulf, we reached out to him in the American Midwest. After a few rounds of interviews, here is the story of coolwulf, also known as Hao Jiang.

As a student, he did his undergraduate degree in the Department of Physics at Nanjing University and his PhD in the Department of Nuclear Engineering and Radiological Sciences at the University of Michigan. He sums up his career concisely: 'Although my main career is in medical imaging, I am also an "amateur" programmer doing open source projects in my spare time.'

coolwulf (Hao Jiang) (right) ▼

He told us that his parents are not medical professionals, and that his interest in programming developed at a young age. coolwulf spent his free time at school writing code. In the days before GitHub existed, he would often post his side projects on programmer communities like sourceforge.net or on his own personal website.

Around 2001 he took part in open source work under the Mozilla Foundation. At the time there were two initial projects to develop Mozilla's Gecko rendering engine into standalone browsers. One was K-Meleon (a browser quite popular in China in the early years), to which he contributed code.

The other project, codenamed Phoenix, was the predecessor of the familiar Firefox browser. He was also interviewed by the media about this more than ten years ago.

Starting in 2009, coolwulf also ran a website that helped people book hotels at low prices; many international students in North America may have used it. All of these were just spare-time projects born of personal interest.

After completing his studies in medical imaging at the University of Michigan, he worked successively as Director of R&D in imaging at Bruker and Siemens, directing product development for imaging detectors. Afterwards, he and Weiguo Lu, now a tenured professor at the University of Texas Southwestern Medical Center, founded two software companies targeting radiotherapy and began developing products for cancer radiotherapy and artificial intelligence.

PS: Not only did he have a side career, he was also the starting point guard on the basketball team at Nanjing University back in the day.

Coolwulf leads the development of the Bruker Photon III ▼

He might well have continued down this path as a scientist-entrepreneur. But the following event was both a turning point in coolwulf's life and the starting point that brought him closer to thousands of families and lives.

She was a 34-year-old alumna of Nanjing University who died after missing the best window to treat her breast cancer, leaving behind a 4-year-old son. After witnessing this life and death, and the family destroyed by the disease, coolwulf mourned the loss. At the same time, he learned that many breast cancer patients lack access to screening, so diagnosis is easily delayed.

Old photos of alumni families ▼

So coolwulf, who happened to have the right professional experience, conceived the idea of using AI to read X-rays. Making an AI that could accurately detect tumors, however, was not easy.

coolwulf first downloaded the DDSM and MIAS datasets from the University of South Florida website. Because the data was in an old, non-standard format rather than DICOM, and the images were film scans, he wrote a special program to convert all of it into a usable form.
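The conversion step described above involves, among other things, rescaling scanned film intensities into a standard pixel range. Decoding DDSM's legacy LJPEG files needs specialized tools, so assuming the pixels are already decoded, a minimal pure-Python sketch of the rescaling (all names here are hypothetical, not coolwulf's actual code) might look like:

```python
def normalize_to_uint16(pixels):
    """Linearly rescale raw scanner values to the full 16-bit range
    commonly used for mammography pixel data in DICOM files."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        # Flat image: nothing to stretch, map everything to zero.
        return [0] * len(pixels)
    scale = 65535 / (hi - lo)
    return [round((p - lo) * scale) for p in pixels]

# 65535 / 255 = 257 exactly, so the mapping is easy to verify by hand.
print(normalize_to_uint16([0, 100, 255]))  # → [0, 25700, 65535]
```

A real pipeline would also carry over metadata (laterality, view, pathology labels) alongside the pixel data so the training labels stay attached to each image.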

He also emailed to ask permission to use INbreast, a non-public breast cancer dataset from the University of Barcelona. During the same period, he had to keep reading a large amount of literature and writing the corresponding model code.

The request email sent by coolwulf at the time ▼

But this was not enough; formally and efficiently training the model required serious hardware. So he built it out of his own pocket: a local GPU cluster of 50 Nvidia GTX 1080 Tis.

At the time, 50 graphics cards were not easy to come by. Due to crypto mining, GPUs were in severe shortage and heavily over-priced on eBay, so coolwulf had to ask many of his friends to help watch online vendors such as Newegg, Amazon, and Dell and grab cards whenever they were available. After much effort, he finally completed the site's preparation.

Yes, in addition to gaming and mining, graphics cards have more uses ▼

The free AI breast cancer detection website took coolwulf about three months of spare time; sometimes he had to sleep in his office to get things done. The site finally went live in 2018.

He said he is actually not sure how many people have used it, because out of patient privacy concerns no data is saved on the server. But during that time he received many thank-you emails from patients, a large number of them from China. Users really did use the website to catch tumors, especially people in remote areas with limited medical resources, which amounts to snatching time from the hands of death.

"The first one had the wrong photo. The tumor was found after retesting" (from coolwulf) ▼

A few years ago this technology was not as widespread as it is now, so coolwulf's project was something of a pioneer. The website also drew a lot of attention from the industry; many medical institutions at home and abroad, such as the Fudan University Cancer Center, expressed their gratitude by email and offered financial and technical support.

After all, coolwulf self-funded the entire effort, which was no small amount of money.

Email from Fudan University Cancer Center ▼

We also asked why he doesn't commercialize the website and charge for it.

coolwulf's answer was calm and matter-of-fact: 'Cancer patients, as well as their families, have endured too much. I believe everyone wants to help them, and I happen to have the ability to do so.' So he thanked the many people who offered but took no financial assistance, handling everything by himself.

In addition to the website, there was a desktop version of the testing software at the time ▼

By 2021, coolwulf had reached a second critical turning point. His colleague's cousin had a brain tumor that was not looking good and was treated with whole brain radiation therapy. Unfortunately, a few months after the radiotherapy the tumor returned, and there was no treatment left but to wait for death.

Whole brain radiotherapy eliminates tumors on a large scale with radiation to reduce the occurrence of lesions, but it kills not only the cancer cells; it also damages normal brain tissue.

In less strict terms, whole brain radiotherapy is an 'indiscriminate attack'. Given the limited radiation dose that critical brain structures such as the brainstem or optic nerves can tolerate, whole brain radiotherapy is usually a once-in-a-lifetime treatment.

This incident completely changed coolwulf's perspective, and he decided to take on an industry-wide challenge: pushing AI beyond the detection stage and into actual treatment.

It is important to know that whole brain radiation therapy is the most common treatment option for brain tumors today; in the United States alone, 200,000 people receive it each year. So is it really necessary for patients with multiple brain tumors to take on the risks of whole brain radiotherapy?

Not really, because there is another treatment: stereotactic radiotherapy. Compared with whole brain radiotherapy, it is far more focused and can precisely target diseased tissue without harming normal tissue.

The Gamma Knife, for example, is one kind of stereotactic radiotherapy machine. The therapy has far fewer side effects, is less harmful to patients, and can be used multiple times.

There is also a general consensus in the academic community that stereotactic radiotherapy offers patients a better quality of life and is more effective. The only problem is that it demands far more from already scarce medical resources.

Once this protocol is adopted, the oncologist or neurosurgeon has to precisely outline and label each tumor, and the medical physicist has to make a precise treatment plan for each one; saving a single patient takes a great deal of time.

As a result, doctors almost always prefer whole brain radiotherapy to stereotactic radiotherapy when a patient has five or more brain lesions.

But AI may be able to share the doctors' workload. So, once again, coolwulf set out to make stereotactic radiotherapy available to more brain cancer patients.

But this time the problem was significantly more challenging, and he could no longer do it alone. So he approached the University of Texas Southwestern Medical Center and Stanford University for collaboration.

With the help and efforts of many people, the following three AI models were recently developed:

  • a model to automatically outline/label brain metastases;

  • a model based on SVM-radiomics to quickly reduce false positives;

  • and a model based on optimized radiation dose maps to quickly segment multiple lesions into different treatment courses.

The three models complement each other and correspond to the physician's workflow, significantly reducing the workload when using stereotactic radiotherapy.
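To make the false-positive-reduction step above concrete: the actual model is an SVM trained on radiomic features, but as a simplified stand-in, this pure-Python sketch scores toy candidate lesions on two hand-picked features and discards the ones that look like noise. Every name, feature, and threshold here is hypothetical.

```python
def radiomic_features(lesion):
    """Toy 'radiomic' features for a candidate lesion: mean intensity
    and voxel count. Real radiomics uses hundreds of shape, intensity,
    and texture features."""
    voxels = lesion["voxels"]
    return (sum(voxels) / len(voxels), len(voxels))

def keep_candidate(lesion, min_intensity=0.4, min_size=3):
    """Drop candidates whose features look like noise. The actual
    pipeline replaces this hand-written rule with a trained SVM
    classifier over the full radiomic feature vector."""
    mean_intensity, size = radiomic_features(lesion)
    return mean_intensity >= min_intensity and size >= min_size

candidates = [
    {"id": "a", "voxels": [0.9, 0.8, 0.7, 0.9]},  # bright, large: kept
    {"id": "b", "voxels": [0.1, 0.2]},            # dim, tiny: dropped
]
kept = [c["id"] for c in candidates if keep_candidate(c)]
print(kept)  # → ['a']
```

The point of this stage is the same in either form: the detection model is tuned to miss nothing, so a second classifier prunes its spurious candidates before a physician reviews the remainder.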

This project, presented at the 2022 AAPM Spring Clinical Meeting and the 2022 AAPM Annual Meeting, has once again won widespread industry recognition.

coolwulf and his coauthors are also working quickly to make the entire stereotactic radiotherapy community aware of these achievements so the technology can be adopted and actually help more patients. In interviews, coolwulf repeatedly stressed that he is by no means alone in achieving these results.

He hopes we will publish the list of collaborators, because everyone on it is a hero quietly fighting cancer.

In recent years, the cancer mortality rate has dropped by 30% compared to 30 years ago. At this rate, perhaps one day in the future, cancer will no longer be a terminal disease.

But the road there is not a straightforward one; countless people like coolwulf are quietly doing the hard, unseen work. To conclude the article, let's borrow a comment from a Reddit user.

"Not all heroes wear capes"

~ The End ~

(This article is translated from the original Chinese version at: https://www.toutiao.com/article/7094940100450107935/)

All Comments: [-] | anchor

themantalope(10000) 5 days ago [-]

This is very cool work. I'm a radiologist, I also work on developing ML/AI based systems for cancer detection and characterization. Literally just took a break for a few minutes from creating some labels and saw this as the top HN post!

I think in some ways making the model available online can be good, but in other ways could be harmful too. Very complicated topic.

恭喜coolwulf, 祝你继续成功。

DantesKite(10000) 5 days ago [-]

I've always felt the 'could be harmful' was a rationalization by radiologists worried about their job security since it's easily mitigated with a warning and multiple tests.

And especially because in the future, most radiology work will be done by software. It's just a matter of whether it's 10 years or 100 years from now.

martincmartin(10000) 5 days ago [-]

How could it be harmful? Is it just because of errors, i.e. a false negative means the person won't talk to a doctor and the cancer won't be caught early, while a false positive means needless worry and maybe tarnishes the medical industry more generally? Or something else?

ska(10000) 5 days ago [-]

>Very complicated topic.

This is very true. Data availability (and moreso, label availability) is the biggest barrier to improvement here I suspect [ thanks for labeling!]. Access being another. Using a public site to bootstrap that could do very interesting things. On the other hand, public access to a poorly RA/QA'd algorithm could also cause more trouble than help, easily.

tfgg(10000) 5 days ago [-]

What peer review or regulatory approval process has this been through? Seems pretty irresponsible -- there are many notorious pitfalls encountered with ML for medical imaging. You shouldn't play with people's lives.

bsder(10000) 5 days ago [-]

In addition, these kinds of things will still miss lobular ('normal' cancers are ductal) breast cancers as they don't form lumps.

15% of the women with breast cancer are waiting for a non-invasive diagnostic imaging system that can see their cancer. The only thing that can see these is an MRI with gadolinium. And that gadolinium contrast causes issues in about 1 in 1000 women, so it can't be used as a general screen.

mromanuk(10000) 5 days ago [-]

This is like taking your temperature at home. Are you making a diagnosis yourself? Not quite. But you can learn about some symptoms and take action (going to the doctor), maybe with less anxiety.

edit: grammar

Mikhail_K(10000) 5 days ago [-]

I don't understand why this comment is downvoted. Automated screening of radiological images with neural nets is an extensively researched topic. Ten years ago there were predictions that such automated screening would displace radiologists, but that clearly did not happen.

For instance, this article is silent on false positive/false negative rates of the software. There is no comparison with other research. It reads like a corporate press release promoting a product.
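The point about missing false positive/negative rates matters because a single 'accuracy' number hides them. Given a confusion matrix, the screening-relevant rates fall out directly; all the numbers below are made up purely for illustration, not taken from the project.

```python
def screening_metrics(tp, fp, fn, tn):
    """Derive the rates that matter for screening from a confusion
    matrix: tp/fp/fn/tn are true/false positive/negative counts."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),  # fraction of real cancers caught
        "specificity": tn / (tn + fp),  # fraction of healthy cases cleared
    }

# Hypothetical: 90% accuracy can coexist with half of cancers missed
# when the disease is rare in the screened population.
m = screening_metrics(tp=5, fp=5, fn=5, tn=85)
print(m)  # accuracy is 0.9, yet sensitivity is only 0.5
```

This is why screening tools are usually reported with sensitivity and specificity (or a full ROC curve) rather than a single accuracy figure.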

zmmmmm(10000) 5 days ago [-]

In this case I feel better about it because there is a natural limitation in that most people doing this will only have the scan because they are getting tested through a real clinical process. So effectively they are getting 'standard of care' treatment implicitly, and all this does is accelerate their response to true positives. The worst case scenario is a false positive gives them a lot of anxiety / costs them money through trying to accelerate their real diagnosis only to find it isn't real.

ugh123(10000) 3 days ago [-]

Right. Let's let the pharma and medical establishment continue to 'innovate' on our behalf.

dekhn(10000) 5 days ago [-]

This is an incredibly important point. Medical research must be taken seriously and I see many problems with the processes being applied here.

(for those who care- I'm a published ml biologist who works for a pharma that develops human health products. Having worked in this area for some time, I often see people who have no real idea of how the medical establishment works, or how diagnostics are marketed/sold/regulated. Overconfidence by naive individuals can have massive negative outcomes.

plandis(10000) 5 days ago [-]

If your decision making process is that a negative result tells you nothing and a positive result warrants immediate follow up, what's the risk here? I'm assuming doctors recommending that women get checked is the primary way breast cancer gets tested and diagnosed, which presumably wouldn't change because someone made a website.

quasarj(10000) 5 days ago [-]

Ahh yes, why would we want to give poor people a potential route to improve their health? it would definitely be more ethical to let them die.

adultSwim(10000) 3 days ago [-]

I don't understand why this is useful. The starting point is a mammogram. Are there a lot of people who are able to get a mammogram performed, but not able to get it analyzed?

I applaud the author. These tools seems like a great addition to health care providers. I'm just less sure about when you would use it directly as a patient.

coolwulf(10000) 3 days ago [-]

Let me try to explain:

1) Radiologists vary in reading experience. A reading from a radiologist with 5+ years of experience can sometimes be quite different from one by a radiologist with only a year or two. In fact, because mammogram reading is so tricky, some inexperienced radiologists are not assigned mammogram reading when they first start working in clinics.

2) A second opinion can sometimes be quite useful and important for patients. Giving patients some awareness of their own mammogram, beyond the reading from their radiologist alone, can help, especially for people from remote areas that lack experienced radiologists.

3) Even in big cities with more medical resources (like big cities in China), patient volume means each radiologist typically has only about 30 seconds to read a mammogram image (multiple doctors have confirmed this to us). Misreads are very common under that workload, so a utility like this could help find, or at least warn doctors about, possible misses.

There is other reasoning behind this, but the above is what's on top of my head right now.

light_hue_1(10000) 5 days ago [-]

As an AI/ML researcher who publishes in this area regularly, I will be using this as a case study for AI ethics classes. That this is allowed to go on is shocking.

> In 2018, a programmer named "coolwulf" started a thread about a website he had made. Users just need to upload their X-ray images, then they can let AI to carry out their own fast diagnosis of breast cancer disease.

Literally the worst fear we have as a community is that people will recklessly apply ML to things like cancer screening on open websites and cause countless deaths, bankruptcies, needless procedures, etc. How many people went to this website, uploaded images, were told they were OK, and didn't follow up? How many were told they have cancer and insisted on procedures they didn't need?

The website is totally unaccountable. Totally unregulated. Totally without any of the most basic ethical standards in medicine. Without even the most basic human rights for patients. This is frankly disgusting.

In the US this would have been shut down by the FDA immediately.

We should not be celebrating this unethical 'science' that doesn't meet even the most basic of scientific standards or ethical standards.

I can't believe this is getting upvoted here.

Zacharias030(10000) 4 days ago [-]

Exactly my thoughts! Thanks for speaking up.

karolist(10000) 5 days ago [-]

I share your sentiment, people are focusing on successes too much but not scrutinise what potential outcomes false negatives in software like this can have.

acidoverride(10000) 5 days ago [-]

> a case study for AI ethics classes

What is unethical about this citizen science project? What is ethical about keeping it only for yourself, and not sharing it with the world?

You are saying you have the expertise to build a similar product, but releasing it would mean the worst fear of your community?

> people will recklessly apply ML

What are the indications that this is a reckless application of ML?

> How many people went to this website, uploaded images, were told were ok and didn't follow up?

Common sense dictates exactly zero. Their follow up was taking their images and getting an automated second opinion. Either a doctor already deemed them OK, or a doctor deemed them not OK, in which case they would not rely on a second opinion to conclude they are suddenly OK.

> How many were told they have cancer and insisted on procedures they didn't need?

Again, exactly zero. The app returns probabilities not binary diagnostics. No hospital would do anti-cancer procedures on a patient without cancer, even when they insist, because some website, friend, or religious leader told them so.

> The website is totally unaccountable.

Good. Or make the good-faith open-source project accountable and liable? That would simply mean shutting it down. No more diagnostics help for low-expertise hospitals: not good at all.

> Totally without any of the most basic ethical standards in medicine.

List a basic ethical standard in medicine which this project runs afoul of.

> basic human rights for patients

What right is that? The right not to upload your images to a site of your choosing? I thought human rights include self-determination, and keeping possession of your imaging to do however you see fit.

> In the US this would have been shut down by the FDA immediately.

But is that a good, ethical thing? Or simply that red tape and authority in US does not allow for such projects?

> We should not be celebrating this unethical 'science' that doesn't meet even the most basic of scientific standards or ethical standards.

You should not talk about ethics or science, when you did not do even a proper evaluation of the work of a fellow scientist.

> I can't believe this is getting upvoted here.

Awaiting your work on cancer research and ML. Post it here. If devoid of ethical issues, and strongly scientific, it will also be upvoted and celebrated. Or is your major contribution going to be a snipe at someone who actually contributed?

alliao(10000) 4 days ago [-]

really fascinated by this take!

is your issue with this project.. being public? not accurate enough? shouldn't be pursued at all because it'd never work?

what is the goal of regulation?

endisneigh(10000) 5 days ago [-]

Should the internet also be shut down because people get false conclusions from WebMD, Reddit, Twitter, Google search results, etc?

coolwulf(10000) 5 days ago [-]

Thank you for your reply. On the site, it's clearly marked and noted that this is not for diagnosis.

'This tool is only to provide you with the awareness of breast mammogram, not for diagnosis.'

IG_Semmelweiss(10000) 5 days ago [-]

and this is why healthcare is the #1 source of bankruptcy in the US.

Some people believe that every single person must have Mercedes-Benz type of care in the US.

They cannot fathom that some of the plebs (do they even exist for them?) may want to make their own independent healthcare choices, and are willing to accept the risk ... (or can only afford!) a Suzuki.

ska(10000) 5 days ago [-]

It's an interesting subject, with a long history; I think many of the biggest challenges are not technical.

The first commercially available AI/ML approach to breast cancer screening was available (US) in the late 90s. There have been many iterations and some improvements since, none of which really knock it out of the park but most clinical radiologists see the value. Perhaps the more interesting question then is why are people getting value out of uploading their own scans, i.e. why does their standard care path not already include this?

coolwulf(10000) 5 days ago [-]

The reason I made this project 100% free and available to the general public is to help patients, especially those in remote areas with limited access to experienced radiologists, to at least get a second opinion on their mammogram. I think that has real value, and it is why I'm doing this project.


coolwulf(10000) 5 days ago [-]

Recently a Chinese media interviewed me and I talked about a few side projects I have done in the past. I talked about the Neuralrad Mammo Screening project and Neuralrad multiple brain mets SRS platform. More awareness on radiation therapy to the general public will greatly help the community and we believe Stereotactic Radiosurgery (SRS) will eventually replace majority of the whole brain radiation therapy (WBRT) in the next five years.

Here is the link to the original article: https://www.toutiao.com/article/7094940100450107935/

Simon_O_Rourke(10000) 5 days ago [-]

Thank you for all you've done for people, it's amazing and inspiring!

rob_c(10000) 5 days ago [-]

Fantastic work dude. On behalf of anyone who might one day benefit thanks and congrats.

jabrams2003(10000) 5 days ago [-]

What's the best way to contact you? I've been fighting brain cancer for 7 years and work closely with a group of neuro-oncologists, researchers, non-profits, and investors in the space.

I'd love to chat.

koprulusector(10000) 5 days ago [-]

> Recently a Chinese media interviewed me and I talked about a few side projects I have done in the past.

I apologize if this has been asked and answered before, but do you speak Mandarin, or was the interview in English?

Asking out of curiosity if it's the former, and if so, how difficult was it to learn whilst also working on this and other things? And are there any resources or tips you might share that you found helpful?

jacquesm(10000) 5 days ago [-]

Super effort. I understand your reluctance to accept funding but if you ever change your mind on that be sure to publish it here on HN. If giving you more tools means more progress in this domain without the usual red tape then I'm all for giving you as much of a push as possible.

nkzd(10000) 4 days ago [-]

You are an inspiration. Thank you for restoring my faith in this field.

iaw(10000) 5 days ago [-]

You're clearly well accomplished in multiple areas. How do approach learning something new?

hehepran(10000) 5 days ago [-]

Sir, you are super cool.

Billsen(10000) 5 days ago [-]

Nice job!

Abishek_Muthian(10000) 4 days ago [-]

Sir, we've all been hearing for a long time that AI will revolutionize medical diagnostics, and we've seen startups come and go in this space, but seeing your work impact lives on the ground has convinced me that the secret missing ingredient was a selfless human who could build an accessible (non-commercial) solution for the masses.

A decade ago I did voluntary work on a simple app for oral cancer detection (questionnaire, data entry) with an oncologist, who used it for surveys in the tribal regions of India. He used to say that lack of early detection is the number one reason for so many deaths (many die without ever knowing they had cancer).

He's now settled in Germany, But I would still pass this story to him & perhaps his colleagues in India could make use of it.

Thank you.

sdo72(10000) 4 days ago [-]

Thank you for doing this, you deserve a superhero badge! It's very inspirational.

onetimeusename(10000) 5 days ago [-]

Where did you learn to program on distributed Nvidia GPUs? The article implied you were self taught and learning to do this is quite challenging for various reasons.

Not least, Nvidia's documentation is not the best resource to learn from. This seems like quite a lot of work to understand ML and write custom CUDA code to get this to work. Do you have any insight about how you taught yourself these things and what tools you use?

sylware(10000) 5 days ago [-]

javascript only link. Any compatible link with noscript/basic (x)html browsers?

daniel-cussen(10000) 5 days ago [-]

Unless it's more expensive than existing treatments the medical industry will close the circles around you excluding you.

That's why not one startup has hacked healthcare in America, not one. No breakaway successes making pharma cheaper. Like those incubators in Bangladesh, for premature babies not startups that is, those did OK. Some pill startups yes, but again that's an expensivification of medicine. If you can make medicine more expensive, they welcome you in!

Jim Clark tried this, he was on a roll after Silicon Graphics and Netscape. Huge roll about as strong as Elon Musk as a serial entrepreneur. Then he targeted healthcare and couldn't do shit, just couldn't get anything to happen. He literally talked about getting 'rid of all the assholes' by which he meant insurance and doctors and hospitals and middlemen and pharma and all the other 'assholes' of that nature in his own words, but leave 'only one asshole in the middle--us [paraphrased].' It's in a book. That book also talks about guys going on airplanes and chasing goats off cliffs, saying 'Some people do this.'

Well the real structure of medicine isn't designed around the human body, it's designed around cornering the market. Market dominance. So of course it has this immune system against cost reduction and efficiencies--efficiencies especially--and you do know it lobbies, don't you? And can bribe the FDA like the Sacklers did? Or lobby the FDA, and then bribe underneath so when people see favoritism they think it's the over-the-counter placebo causing a placebo effect without suspecting an additional more potent under-the-table dosage of money. In case the administration has built up a tolerance to the over-the-counter stuff.

rg111(10000) 5 days ago [-]

Hi. Some great projects. What's more commendable is your dedication towards your projects and seeing them through to end- to the point that they are actually useful. This is what I truly admire.

I have a question for you. What is the tech stack that you use?

And if it is not too much: What resources did you use to learn Deep Learning?

rawgabbit(10000) 4 days ago [-]

You are a hero. God bless you and thank you for efforts to help others.

sizzle(10000) 4 days ago [-]

Imagine for a moment if FB and Google took all their developers and resources and applied them to solving cancer; how long do you think it would take to make a dent in the problem space? Or is there a hard limit stopping progress, like a Moore's law for scientific understanding?

adultSwim(10000) 3 days ago [-]

Great work. I'm inspired seeing others use their skills to work on something important. So many of the smartest, most well educated engineers of my generation have put their talents to use doing things that are of little value to society. Thank you for demonstrating to us what else is possible.

dclowd9901(10000) 5 days ago [-]

As a "professional" programmer, I'm humbled by your accomplishments. I really must find ways to contribute more to the world. It seems there's a lot of opportunities in AI to do it.

llaolleh(10000) 5 days ago [-]

Your story was inspirational. It's really cool to run this project to help others without expecting any payment.

ska(10000) 5 days ago [-]

WBRT is pretty brutal. Am I right in thinking you are focusing on multiple site treatment/palliative treatment of metastatic presentations? High site count also or sticking to say < 5?

pen2l(10000) 5 days ago [-]

Oh, it's you!

What a beacon of light and inspiration you are. Thanks for your work.

That said, I welcome you to publish your work so it can become even better after a formalized peer-review process.

FpUser(10000) 5 days ago [-]

I am not a religious man at all but God Bless you. You are an amazing human being and a source of inspiration.

mamborambo(10000) 4 days ago [-]

Super impressed. Amateurs can and do play their parts in science --- there are numerous discoveries made in astronomy, mathematics, and definitely in computer applied sciences that sprang from the minds of amateurs.

YeGoblynQueenne(10000) 5 days ago [-]

>> Furthermore, the accuracy of tumor identification has reached 90%.

How is this accuracy calculated? Further in the article it is noted that there is no patient data saved by the project:

>> He said that he's not sure actually how many people have used it because the data is not saved on the server due to patient privacy concerns. But during that time, he received a lot of thank-you emails from patients, many of them from China.

Considering user privacy is laudable in my opinion, but I'm still curious to know how accuracy is known.

Iv(10000) 5 days ago [-]

Probably based on a test set from the original dataset.

westcort(10000) 5 days ago [-]

My key takeaways:

* The free AI breast cancer detection website took coolwulf about three months of spare time; sometimes he had to sleep in his office to get things done before the site finally went live in 2018

* The website also gained a lot of attention from the industry, during which many domestic and foreign medical institutions, such as Fudan University Hospital, expressed their gratitude to him by email and were willing to provide financial and technical support

* Afterwards, he and Weiguo Lu, now a tenured professor at the University of Texas Southwestern Medical Center, founded two software companies targeting radiotherapy and started working on product development for cancer radiotherapy and artificial intelligence technologies

* But in 2022, he returned with an even more important 'brain cancer project'

* coolwulf (Hao Jiang) told us that his parents are not medical professionals, and his interest in programming was fostered from a young age

* A reliable AI for tumor detection can enable a large number of patients who cannot seek adequate medical diagnosis in time to know the condition earlier or provide a secondary opinion

* He said that he's not sure actually how many people have used it because the data is not saved on the server due to patient privacy concerns

Link to the technology: http://mammo.neuralrad.com:5300/

dekhn(10000) 5 days ago [-]

that's an unadorned http link. Really?

ramraj07(10000) 5 days ago [-]

> A reliable AI for tumor detection can enable a large number of patients who cannot seek adequate medical diagnosis in time to know the condition earlier or provide a secondary opinion

Citation Required?

latchkey(10000) 5 days ago [-]

ETH will soon move from PoW to PoS (let's not debate the timeline or whether it is a good idea). This will leave about 32 million GPUs' worth of compute and millions of CPUs searching for something else to do (or just flood the market with used equipment).

I have been searching, for years, for alternative workloads for these GPUs beyond just PoW mining and password cracking. Many of them are on systems with tiny CPUs, little memory, little disk, and little networking, so the options are heavily limited. AI/ML/rendering/gaming actually make bad use cases.

If anyone has thoughts on this, I'd appreciate hearing them. Let it all die is certainly an option, but it also seems just as wasteful as keeping it going. Maybe we can find a better use case, like somehow curing cancer...

redisman(10000) 5 days ago [-]

Why isn't there a folding coin? Productive mining and you reward the new protein folds or whatever

PartiallyTyped(10000) 5 days ago [-]

What about federated learning to deal with the little memory issue?

VHRanger(10000) 5 days ago [-]

Proof of Stake has been 6-18months away for 5 years now.

As far as I'm concerned it'll release along with Star Citizen

dekhn(10000) 5 days ago [-]

Folding@home has been doing this for 20+ years. They already did all the smart research and tech development. Just use that until somebody comes up with a workable DrugDiscoveryAtHome or CureCancerAtHome.

daniel-cussen(10000) 5 days ago [-]

Oh you know what an alternative use is? Oaths. Works with old ASICs as well...well I think. So you take a document, like this comment, you append a nonce (you'll see) and you hash it until you get a lot of zeroes in the front. Same as bitcoin, but you're not hashing the bitcoin protocol. Then, you know the document has been sworn, as a cryptographic oath, to that extent. Nonce: 38943
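The scheme described in this comment can be sketched in a few lines. This is a hashcash-style proof-of-work applied to an arbitrary document rather than a block header; the function names and the difficulty parameter are mine, not an established protocol:

```python
import hashlib

def swear_oath(document: str, difficulty: int = 4) -> int:
    """Find a nonce such that sha256(document + nonce) starts with
    `difficulty` hex zeroes -- the same trick Bitcoin mining uses,
    but hashing an arbitrary document instead of the Bitcoin protocol."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(f"{document}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

def verify_oath(document: str, nonce: int, difficulty: int = 4) -> bool:
    """Anyone can check the oath with a single hash."""
    digest = hashlib.sha256(f"{document}{nonce}".encode()).hexdigest()
    return digest.startswith("0" * difficulty)
```

Each extra hex zero multiplies the expected work by 16, so the leading-zero count is a tunable measure of how much compute was "sworn" to the document.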

netsharc(10000) 5 days ago [-]

Crypto is <valley-girl>literally</valley-girl> stopping us from finding the cure for cancer!

mwt(10000) 5 days ago [-]

Folding@home would love to take a swing at a sliver of that compute

PragmaticPulp(10000) 5 days ago [-]

> ETH will soon move from PoW to PoS (let's not debate the timeline or if it is a good idea). This will put about 32 million GPUs worth of compute and millions of CPUs searching for something else to do (or just flood the market with used equipment).

Crypto markets crashing together could do this, but ETH's switch isn't going to do much for old cards.

Checking https://whattomine.com/ shows that ETH mining isn't even in the top 5 most profitable things to mine with a 1080Ti right now. The miners looking to squeeze every bit of profitability out of old hardware switched away from ETH a long time ago.

zamadatix(10000) 5 days ago [-]

There are plenty of good uses; projects like BOINC have been using GPUs for good for over a decade. The problem is that the incentive system disappears: it's a lot easier to get people to run 32 million GPUs when it makes them money instead of costing them money.

notfed(10000) 5 days ago [-]

What does '90% accuracy' mean? Is this before or after applying Bayes' theorem?

coolwulf(10000) 4 days ago [-]

At the time this model was developed (late 2017 / early 2018), very few public mammogram datasets were available, so the model was trained on DDSM/MIAS and tested on the InBreast dataset. The 90% accuracy was calculated from the results on InBreast.

redeyedtreefrog(10000) 5 days ago [-]

In the UK the NHS don't do screening for breast cancer for under 50s because it's believed that it would do more harm than good by leading to unnecessary treatment for cancers that would never have actually caused any harm, and even where no treatment is carried out it causes great distress. Though there are arguments that the age cut off is too high, and should be set at 40.

The above is with regard to a well-funded and regulated screening program that presumably has much better precision/recall than this website. I wonder what the cut off age is for this website before the diagnoses cause more harm than good? 60? 70?

This is getting lots of upvotes because it's confirmation bias for the large segment of HN readers who believe that problems would easily be solved by a small number of brilliant technologists, if only it weren't for governments and big organisations with all their rules and regulations.

laingc(10000) 5 days ago [-]

A lot of people, including myself, don't believe that central health authorities have the right to make that call.

Moreover, I personally don't have confidence in their ability to make those kinds of decisions, and I believe the abysmal performance of the NHS supports my view.

mchusma(10000) 4 days ago [-]

This is absurd logic. If the next step after a test like this is a procedure with a lot of risk, change the next step.

We need to be able to work in a world with frequent, imperfect, low cost diagnostic tools. Cancer is almost completely survivable if caught early enough. So working to figure out early detection is effectively the 'cure' for cancer we have been looking for.

bayareabadboy(10000) 4 days ago [-]

Your solution to anxiety derived from lack of medical knowledge is more bureaucracy between a patient and their healthcare provider?

mateo1(10000) 4 days ago [-]

I'm always surprised with the 'just close your eyes' attitude of medical policies. I mean, this way we essentially choose when to get women misdiagnosed? Isn't the solution to get a better idea of how common it is for non-cancerous masses to appear and adjust the risk predictions? Or to actually improve the diagnostic methods?

adamredwoods(10000) 5 days ago [-]

NHS recommendations: https://www.cancerresearchuk.org/about-cancer/breast-cancer/...

Anecdotally, I know (directly and indirectly) many, many women who had breast cancer before 50 years of age.

webmobdev(10000) 5 days ago [-]

Thanks for the different perspective. What did you mean by 'unnecessary treatment' though? If you have cancer, doesn't it need to be treated? Doesn't cancer anywhere always cause harm to the body?

morelish(10000) 4 days ago [-]

The NHS is massive bureaucracy. What it does or doesn't do is peering inside the belly of a Byzantine whale.

The article's title is a misnomer in calling him an 'amateur'; it's clickbait. He's shown himself to be a world-leading researcher in the application of AI to cancer screening.

Plenty of managers in the NHS can't even do simple math.

gregsadetsky(10000) 5 days ago [-]

1) I just downloaded 'The Mammographic Image Analysis Society database of digital mammograms' [0] and ran it against the tool [1] image by image. Results below, code here [2]:

  true_pos 36
  true_neg 207
  false_pos 63
  false_neg 16
  total 322
2) How can the site [1] truthfully say 'We will not store your data on our server. Please don't worry about any privacy issues.' when you can find all analyzed mammograms under the 'static' directory?



(trying file names at random)

[0] https://www.repository.cam.ac.uk/handle/1810/250394

[1] http://mammo.neuralrad.com:5300/upload

[2] https://github.com/gregsadetsky/mias-check
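For context, the standard screening metrics implied by the counts above work out as follows (a sketch; the variable names are mine, the counts are from the table):

```python
# Confusion-matrix counts from running the MIAS images through the tool.
tp, tn, fp, fn = 36, 207, 63, 16
total = tp + tn + fp + fn                # 322 images

accuracy    = (tp + tn) / total          # fraction of all calls correct
sensitivity = tp / (tp + fn)             # recall: cancers actually caught
specificity = tn / (tn + fp)             # healthy correctly cleared
precision   = tp / (tp + fp)             # positive calls that were real

print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} "
      f"specificity={specificity:.2f} precision={precision:.2f}")
```

On these numbers accuracy lands around 0.75 while precision is roughly 0.36, i.e. most flagged images were false positives, which is the gap the parent comment's test surfaces.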

coolwulf(10000) 5 days ago [-]

Thank you for your efforts at validation; I appreciate that. There is a script running in the background that auto-cleans the files in the static folder every day.

transfire(10000) 5 days ago [-]

Sadly, this would be illegal in the USA and get shut down pretty quickly.

giantg2(10000) 5 days ago [-]

More like a patent holder would usurp all the work someone else did and make a fortune off of it after taking 5 years to get through the red tape.

renewiltord(10000) 5 days ago [-]

Unless I'm wrong, he's in Michigan.

codingdave(10000) 5 days ago [-]

What exactly is illegal about this? If you are thinking HIPAA laws, they don't apply when you are sharing your own medical information/images.

Flankk(10000) 5 days ago [-]

The FDA may or may not attempt to classify it as a medical device and then shut it down. Otherwise it's legal if it includes a disclaimer.

jonplackett(10000) 5 days ago [-]

Is 90% correct rate considered good enough for this kind of use?

Seems like 1/10 wrong would be bad, how does that compare with a doctor doing it?

Zacharias030(10000) 4 days ago [-]

The 90% accuracy figure in the article unfortunately exposes the author as very ignorant.

On a German breast cancer screening population, 90% accuracy is abysmal as ~99.2% of cases are negative. Just predicting „no cancer" would achieve 99.2% accuracy.

Accuracy is a very bad metric for such highly asymmetric problems.

To provide some context: the German screening system identifies cancer in ~6/1000 of patients screened, missing it for the remaining 2/1000.

IIRC this is achieved by re-inviting ~3% of cases for further examination, where ultrasound / needle biopsy / etc. can be done on a case-by-case basis.
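Spelled out with the figures quoted above (a toy calculation, assuming ~8 cancers per 1000 screened: 6 found, 2 missed):

```python
# Why raw accuracy misleads on a highly imbalanced screening population.
n = 1000
cancers = 8          # ~0.8% prevalence, per the German figures above

# Trivial classifier: always predict "no cancer".
trivial_accuracy = (n - cancers) / n      # 0.992, yet catches nothing

# The screening programme as described: finds 6 of the 8 cancers.
sensitivity = 6 / cancers                 # 0.75

print(f"trivial accuracy: {trivial_accuracy:.3f}, "
      f"screening sensitivity: {sensitivity:.2f}")
```

This is the point about asymmetric problems: a do-nothing baseline already beats "90% accuracy", so sensitivity and specificity are the numbers that matter.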

latortuga(10000) 5 days ago [-]

According to the American Cancer Society

> About half of the women getting annual mammograms over a 10-year period will have a false-positive finding at some point.

chrischen(10000) 4 days ago [-]

If anyone ever needs an example of how crypto is doing harm to the world: 'Due to crypto mining, GPUs were in severe shortage and very over-priced on eBay'

The over-valuation of crypto is a two-fold negative impact on society: a massive brain-drain sucking talented engineers who would otherwise be solving real problems, and the opportunity cost of GPUs burning electricity to run unintentional ponzi schemes instead of training deep learning models.

cjblomqvist(10000) 4 days ago [-]

You could argue the other way around. Crypto is creating market incentives to develop more powerful GPUs (for less $$$), which can make other applications possible. It's tricky to control market forces...

(not that I'm a fan of crypto)

OJFord(10000) 5 days ago [-]

'Amateur' oughtn't be scare-quoted because it's not a slur, many of the finest programmers were amateurs for many years before they were old enough to be given a job in the profession.

ant6n(10000) 5 days ago [-]

If you used to be a paid software programmer and got a different job, but continued doing programming side projects without pay, are you an amateur or not?

jxramos(10000) 5 days ago [-]

I had an art teacher affectionately remind me the etymology for amateur

> borrowed from French, going back to Middle French, 'one who loves, lover,' borrowed from Latin amātor 'lover, enthusiastic admirer, devotee,' from amāre 'to have affection for, love, be in love, make love to' (of uncertain origin) + -tōr-, -tor, agent suffix https://www.merriam-webster.com/dictionary/amateur#etymology...

It changes the feeling of it all when you get that context: someone who loves a subject, pretty much. No qualifications skill-wise or regarding depth, but they love it, and should presumably take things seriously to some degree, as any lover would.

gist(10000) 5 days ago [-]

Using 'amateur' (quoted or not) is click bait. It's an embellishment to the rest of the headline. For that matter even though it's true the graphics cards are as well. Only thing that could have made it more click bait would be to also put in AI in the headline.

Historical Discussions: Imagen, a text-to-image diffusion model (May 23, 2022: 931 points)

(952) Imagen, a text-to-image diffusion model

952 points 1 day ago by keveman in 10000th position

gweb-research-imagen.appspot.com | Estimated reading time – 4 minutes | comments | anchor

There are several ethical challenges facing text-to-image research broadly. We offer a more detailed exploration of these challenges in our paper and offer a summarized version here. First, downstream applications of text-to-image models are varied and may impact society in complex ways. The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access. Second, the data requirements of text-to-image models have led researchers to rely heavily on large, mostly uncurated, web-scraped datasets. While this approach has enabled rapid algorithmic advances in recent years, datasets of this nature often reflect social stereotypes, oppressive viewpoints, and derogatory, or otherwise harmful, associations to marginalized identity groups. While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized the LAION-400M dataset, which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.

Finally, while there has been extensive work auditing image-to-text and image labeling models for forms of social bias, there has been comparatively less work on social bias evaluation methods for text-to-image models. A conceptual vocabulary around potential harms of text-to-image models and established metrics of evaluation are an essential component of establishing responsible model release practices. While we leave an in-depth empirical analysis of social and cultural biases to future work, our small-scale internal assessments reveal several limitations that guide our decision not to release our model at this time. Imagen may run into the danger of dropping modes of the data distribution, which may further compound the social consequences of dataset bias. Imagen exhibits serious limitations when generating images depicting people. Our human evaluations found Imagen obtains significantly higher preference rates when evaluated on images that do not portray people, indicating a degradation in image fidelity. Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.


Chitwan Saharia*, William Chan*, Saurabh Saxena, Lala Li, Jay Whang, Emily Denton, Seyed Kamyar Seyed Ghasemipour, Burcu Karagol Ayan, S. Sara Mahdavi, Rapha Gontijo Lopes, Tim Salimans, Jonathan Ho, David Fleet, Mohammad Norouzi*

*Equal contribution. Core contribution.

Special Thanks

We give thanks to Ben Poole for reviewing our manuscript, early discussions, and providing many helpful comments and suggestions throughout the project. Special thanks to Kathy Meier-Hellstern, Austin Tarango, and Sarah Laszlo for helping us incorporate important responsible AI practices around this project. We appreciate valuable feedback and support from Elizabeth Adkison, Zoubin Ghahramani, Jeff Dean, Yonghui Wu, and Eli Collins. We are grateful to Tom Small for designing the Imagen watermark. We thank Jason Baldridge, Han Zhang, and Kevin Murphy for initial discussions and feedback. We acknowledge hard work and support from Fred Alcober, Hibaq Ali, Marian Croak, Aaron Donsbach, Tulsee Doshi, Toju Duke, Douglas Eck, Jason Freidenfelds, Brian Gabriel, Molly FitzMorris, David Ha, Philip Parham, Laura Pearce, Evan Rapoport, Lauren Skelly, Johnny Soraker, Negar Rostamzadeh, Vijay Vasudevan, Tris Warkentin, Jeremy Weinstein, and Hugh Williams for giving us advice along the project and assisting us with the publication process. We thank Victor Gomes and Erica Moreira for their consistent and critical help with TPU resource allocation. We also give thanks to Shekoofeh Azizi, Harris Chan, Chris A. Lee, and Nick Ma for volunteering a considerable amount of their time for testing out DrawBench. We thank Aditya Ramesh, Prafulla Dhariwal, and Alex Nichol for allowing us to use DALL-E 2 samples and providing us with GLIDE samples. We are thankful to Matthew Johnson and Roy Frostig for starting the JAX project and to the whole JAX team for building such a fantastic system for high-performance machine learning research. Special thanks to Durk Kingma, Jascha Sohl-Dickstein, Lucas Theis and the Toronto Brain team for helpful discussions and spending time Imagening!

All Comments: [-] | anchor

endisneigh(10000) 1 day ago [-]

I give it a few years before Google makes stock images irrelevant.

sydthrowaway(10000) 1 day ago [-]

Short Getty images?

tpmx(10000) 1 day ago [-]

Rolling this into Google Docs seems like a nobrainer.

makeitdouble(10000) 1 day ago [-]

I fully expect them to first make DALL-E and competing networks unfit for commercialization by providing a better choice for free, leaving the stock companies crying in the corner, and then just sunset the product a year or two down the road, leaving us wondering what to do.

notahacker(10000) 1 day ago [-]

Tbh I imagine this tech combines particularly well with really well-curated stock image databases, so outputs can be made with recognisable styles, and actors and design elements can be reused across multiple generated images.

If Getty et al aren't already spending money on that possibility, they probably should be.

pphysch(10000) 1 day ago [-]

The entire 'content' industry could get eaten by a few hundred people curating + touching-up output from these models.

octocop(10000) 1 day ago [-]

Would be awesome to see a side by side comparison to DALL-E, generating from the same text

mlfn(10000) 1 day ago [-]

It's in the PDF.

throwaway743(10000) 1 day ago [-]
CobrastanJorji(10000) 1 day ago [-]

Is this a joke?

atty(10000) 1 day ago [-]

Lucidrains is a champ. If they're on HN, bravo and thanks for all the reference implementations!

xnx(10000) 1 day ago [-]

OpenAI really thought they had done something with DALL-E, then Google's all 'hold my beer'.

dntrkv(10000) 1 day ago [-]


tomatowurst(10000) 1 day ago [-]

When will there be a 'DALL-E for porn'? Or is this domain also claimed by Puritans and morality gatekeepers? The most in-demand text-to-image use case is porn.

astrange(10000) 1 day ago [-]

Train it yourself. Danbooru is a publicly available explicit dataset.

jonahbenton(10000) 1 day ago [-]

I know that some monstrous majority of cognitive processing is visual, hence the attention these visually creative models are rightfully getting, but personally I am much more interested in auditory information and would love to see a promptable model for music. Was just listening to 'Land Down Under' from Men At Work. Would love to be able to prompt for another artist I have liked: 'Tricky playing Land Down Under.' I know of various generative music projects, going back decades, and would appreciate pointers, but as far as I am aware we are still some ways from Imagen/Dalle for music?

astrange(10000) 1 day ago [-]

I believe we're lacking someone training up a large music model here, but GPT-style transformers can produce music.

gwern can maybe comment here.

An actually scary thing is that AIs are getting okay at reproducing people's voices.

addandsubtract(10000) 1 day ago [-]

I agree. How cool would it be to get an 8 min version of your favorite song? Or an instant DnB remix? Or 10 more songs in the style of your favorite album?

hn_throwaway_99(10000) 1 day ago [-]

As someone who has a layman's understanding of neural networks, and who did some neural network programming ~20 years ago before the real explosion of the field, can someone point to some resources where I can get a better understanding about how this magic works?

I mean, from my perspective, the skill in these (and DALL-E's) image reproductions is truly astonishing. Just looking for more information about how the software actually works, even if there are big chunks of it that are 'this is beyond your understanding without taking some in-depth courses'.

londons_explore(10000) 1 day ago [-]

Figure A.4 in the linked paper is a good high level overview of this model. Shame it was hidden away on page 19 in the appendix!

Each box you see there has a section in the paper explaining it in more detail.

rvnx(10000) 1 day ago [-]

Check https://github.com/multimodalart/majesty-diffusion or https://github.com/lucidrains/DALLE2-pytorch

There is a Google Colab workbook that you can try and run for free :)

This is the image-text pairs behind: https://laion.ai/laion-400-open-dataset/

astrange(10000) 1 day ago [-]

> I mean, from my perspective, the skill in these (and DALL-E's) image reproductions is truly astonishing.

A basic part of it is that neural networks combine learning and memorizing fluidly inside them, and these networks are really really big, so they can memorize stuff good.

So when you see it reproduce a Shiba Inu well, don't think of it as "the model understands Shiba Inus". Think of it as making a collage out of some Shiba Inu clip art it found on the internet. You'd do the same if someone asked you to make this image.

It's certainly impressive that the lighting and blending are as good as they are though.

geonic(10000) 1 day ago [-]

Can anybody give me short high-level explanation how the model achieves these results? I'm especially interested in the image synthesis, not the language parsing.

For example, what kind of source images are used for the snake made of corn[0]? It's baffling to me how the corn is mapped to the snake body.

[0] https://gweb-research-imagen.appspot.com/main_gallery_images...

DougBTX(10000) 1 day ago [-]

In the paper they say about half the training data was an internal training set, and the other half came from: https://laion.ai/laion-400-open-dataset/

kordlessagain(10000) about 24 hours ago [-]

> Since guidance weights are used to control image quality and text alignment, we also report ablation results using curves that show the trade-off between CLIP and FID scores as a function of the guidance weights (see Fig. A.5a). We observe that larger variants of T5 encoder results in both better image-text alignment, and image fidelity. This emphasizes the effectiveness of large frozen text encoders for text-to-image models

I usually consider myself fairly intelligent, but I know that when I read an AI research paper I'm going to feel dumb real quick. All I managed to extract from the paper was a) there isn't a clear explanation of how it's done that was written for lay people and b) they are concerned about the quality and biases in the training sets.

Having thought about the problem of 'building' an artificial means to visualize from thought, I have a very high level (dumb) view of this. Some human minds are capable of generating synthetic images from certain terms. If I say 'visualize a GREEN apple sitting on a picnic table with a checkerboard table cloth', many people will create an image that approximately matches the query. They probably also see a red and white checkerboard cloth because that's what most people have trained their models on in the past. By leaving that part out of the query we can 'see' biases 'in the wild'.

Of course there are people that don't do generative in-mind imagery, but almost all of us do build some type of model in real time from our sensor inputs. That visual model is being continuously updated and is what is perceived by the mind 'as being seen'. Or, as the Gorillaz put it:

  ... For me I say God, y'all can see me now
  'Cos you don't see with your eye
  You perceive with your mind
  That's the end of it...
To generatively produce strongly accurate imagery from text, a system needs enough reference material in the document collection. It needs to have sampled a lot of images of corn and snakes. It needs to be able to do image segmentation and probably perspective estimation. It needs a lot of semantic representations (optimized query of words) of what is being seen in a given image, across multiple 'viewing models', even from humans (who also created/curated the collections). It needs to be able to 'know' what corn looks like, even from the perspective of another model. It needs to know what 'shape' a snake model takes and how combining the bitmask of the corn will affect perspective and framing of the final image. All of this information ends up inside the model's network.

Miika Aittala at Nvidia Research has done several presentations on taking a model (imagined as a wireframe) and then mapping a bitmapped image onto it with a convolutional neural network. They have shown generative abilities for making brick walls that looks real, for example, from images of a bunch of brick walls and running those on various wireframes.

Maybe Imagen is an example of the next step in this, by using diffusion models instead of the CNN for the generator and adding in semantic text mappings while varying the language models weights (i.e. allowing the language model to more broadly use related semantics when processing what is seen in a generated image). I'm probably wrong about half that.

Here's my cut on how I saw this working from a few years ago: https://storage.googleapis.com/mitta-public/generate.PNG

Regardless of how it works, it's AMAZING that we are here now. Very exciting!

dave_sullivan(10000) 1 day ago [-]

Well, first they parse the language into a high level vector representation. Then they take images and add noise and train a model to remove the noise so it can start with a noisy image and produce a clear image from it. Then they train a model to map from the word representation for text to the noisy image representation for the corresponding image. Then they upsample twice to get to good resolution.

So text -> text representation -> most likely noised image space -> iteratively reduce noise N times -> upsample result

Something like that, please correct anything I'm missing.

Re: the snake corn question, it is mapping the 'concept' of corn to the concept of a body as represented by intermediary learned vector representations.
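The pipeline in the parent comment, as a toy sampling loop. Everything here is a stand-in: the real denoiser is a trained U-Net, real samplers follow a DDPM/DDIM noise schedule rather than this crude subtraction, and the shapes are illustrative, not Imagen's:

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x, t, text_emb):
    # Placeholder for the trained model: predicts the noise present in x
    # at step t, conditioned on the text embedding.
    return 0.1 * x

def sample(text_emb, shape=(64, 64, 3), steps=50):
    """Start from pure Gaussian noise and iteratively remove the
    predicted noise, conditioned on the text, to reveal an image."""
    x = rng.standard_normal(shape)
    for t in reversed(range(steps)):
        predicted_noise = denoise_step(x, t, text_emb)
        x = x - predicted_noise  # real samplers rescale per the schedule
    return x

low_res = sample(text_emb=np.zeros(512))  # small base image
# ...then super-resolution stages (themselves diffusion models,
# conditioned on the same text embedding) upsample the result twice.
```

The key idea the loop illustrates: generation is just repeated denoising, and the text only enters as conditioning passed to the denoiser at every step.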

FargaColora(10000) 1 day ago [-]

This looks incredible but I do notice that all the images are of a similar theme. Specifically there are no human figures.

influxmoment(10000) 1 day ago [-]

I believe DALLE and likely this model excluded images of people so it could not be misused

benwikler(10000) 1 day ago [-]

Would be fascinated to see the DALL-E output for the same prompts as the ones used in this paper. If you've got DALL-E access and can try a few, please put links as replies!

qclibre22(10000) 1 day ago [-]

See the paper here: https://gweb-research-imagen.appspot.com/paper.pdf Section E: 'Comparison to GLIDE and DALL-E 2'

jandrese(10000) 1 day ago [-]

Is there a way to try this out? DALL-E2 also had amazing demos but the limitations became apparent once real people had a chance to run their own queries.

wmfrov(10000) 1 day ago [-]

Looks like no, 'The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.'

faizshah(10000) 1 day ago [-]

What's the best open source or pre-trained text to image model?

GaggiX(10000) 1 day ago [-]

The latent diffusion model trained on LAION-400M: https://github.com/CompVis/latent-diffusion

Veedrac(10000) 1 day ago [-]

I thought I was doing well after not being overly surprised by DALL-E 2 or Gato. How am I still not calibrated on this stuff? I know I am meant to be the one who constantly argues that language models already have sophisticated semantic understanding, and that you don't need visual senses to learn grounded world knowledge of this sort, but come on, you don't get to just throw T5 in a multimodal model as-is and have it work better than multimodal transformers! VLM[1] at least added fine-tuned internal components.

Good lord we are screwed. And yet somehow I bet even this isn't going to kill off the 'they're just statistical interpolators' meme.

[1] https://www.deepmind.com/blog/tackling-multiple-tasks-with-a...

axg11(10000) 1 day ago [-]

I firmly believe that ~20-40% of the machine learning community will say that all ML models are dumb statistical interpolators all the way until a few years after we achieve AGI. Roughly the same groups will also claim that human intelligence is special magic that cannot be recreated using current technology.

I think it's in everyone's benefit if we start planning for a world where a significant portion of the experts are stubbornly wrong about AGI. As a technology, generally intelligent ML has the potential to change so many aspects of our world. The dangers of dismissing the possibility of AGI emerging in the next 5-10 years are huge.

benreesman(10000) 1 day ago [-]

It's just my opinion but I think the meme you're talking about is deeply related to other branches of science and philosophy: ranging from the trusty old saw about AI being anything a computer hasn't done yet to deep meditations on the nature of consciousness.

They're all fundamentally anthropocentric: people argue until they are blue in the face about what "intelligent" means but it's always implicit that what they really mean is "how much like me is this other thing".

Language models, even more so than the vision models that got them funded, have empirically demonstrated that knowing the probability of two things being adjacent in some latent space is, at the boundary, indistinguishable from creating and understanding language.

I think the burden is on the bright hominids with both a reflexive language model and a sex drive to explain their pre-Copernican, unique place in the theory of computation rather than vice versa.

A lot of these problems just aren't problems anymore if performance on tasks supersedes "consciousness" as the thing we're studying.

hooande(10000) 1 day ago [-]

I haven't been overly surprised by any of it. The final product is still the same, no matter how much they scale it up.

All of these models seem to require a human to evaluate and edit the results. Even Co-Pilot. In theory this will reduce the number of human hours required to write text or create images. But I haven't seen anyone doing that successfully at scale or solving the associated problems yet.

I'm pessimistic about the current state of AI research. It seems like it's been more of the same for many years now.

skybrian(10000) 1 day ago [-]

I think it's something like a very intelligent Borgesian Library of Babel. There are all sorts of books in there, by authors with conflicting opinions and styles, due to the source material. The librarian is very good at giving you something you want to read, but that doesn't mean it has coherent opinions. It doesn't know or care what's authentic and what's a forgery. It's great for entertainment, but you wouldn't want to do research there.

For image generation, it's obviously all fiction. Which is fine and mostly harmless if you know what you're getting. It's going to leak out onto the Internet, though, and there will be photos that get passed around as real.

For text, it's all fiction too, but this isn't obvious to everyone because sometimes it's based on true facts. There's often not going to be an obvious place where the facts stop and the fiction starts.

The raw Internet is going to turn into a mountain of this stuff. Authenticating information is going to become a lot more important.

syspec(10000) 1 day ago [-]

I'm curious why all of these tools seem to be almost tailored toward making meme images?

The kind of early 2010's, over the top description of something that's ridiculous

benreesman(10000) 1 day ago [-]

These things can make any image you can define in terms of a corpus of other images. That was true at lower resolution five years ago.

To the extent that they get used for making bored ape images or whatever meme du jour, it says much more about the kind of pictures people want to see.

I personally find the weird deep dreaming dogs with spikes coming out of their heads more mathematically interesting, but I can understand why that doesn't sell as well.

TaylorPhebillo(10000) 1 day ago [-]

My hunch is that they aren't tailored toward ridiculous images exactly, but if they demonstrated 'a woman sitting in a chair reading', it would be really hard to tell if the result was a small modification of an image in the training data. If they demonstrate 'A snake made out of corn', I have less concern about the model having a very close training example.

qz_kb(10000) 1 day ago [-]

I have to wonder how much releasing these models will 'poison the well' and fill the internet with AI generated images that make training an improved model difficult. After all if every 9/10 'oil painted' image online starts being from these generative models it'll become increasingly difficult to scrape the web and to learn from real world data in a variety of domains. Essentially once these things are widely available the internet will become harder to scrape for good data and models will start training on their own output. The internet will also probably get worse for humans since search results will be completely polluted with these 'sort of realistic' images which can ultimately be spit out at breakneck speed by smashing words from a dictionary together...

abel_(10000) 1 day ago [-]

On the contrary -- the opposite will happen. There's a decent body of research showing that just by training foundation models on their outputs, you amplify their capabilities.

Less common opinion: this is also how you end up with models that understand the concept of themselves, which has high economic value.

Even less common opinion: that's really dangerous.

rg111(10000) 1 day ago [-]

People training newer models just have to look for the 'Imagen' tag or the Dall-E2 rainbow at the corner and heuristically exclude images having these. This is trivial.

Unless you assume there are bad actors who will crop out the tags. Not many people now have access to Dall-E2 or will have access to Imagen.

As someone working in Vision, I am also thinking about whether to include such images deliberately. Using image augmentation techniques is ubiquitous in the field. Thus we introduce many examples for training the model that are not in the distribution over input images. They improve model generality by huge margins. Whether generated images improve generality of future models is a thing to try.

Damn I just got an idea for a paper writing this comment.
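A minimal sketch of the heuristic exclusion described above, assuming a hypothetical metadata field called `generator` (the field name and values are illustrative, not a real schema; a real crawl would need actual watermark detection):

```python
# Drop crawled images whose metadata carries a known generator tag.
KNOWN_GENERATORS = {"dall-e 2", "imagen", "midjourney"}

def is_probably_generated(meta: dict) -> bool:
    """Heuristic: trust a self-declared generator tag if present."""
    tag = str(meta.get("generator", "")).lower()
    return tag in KNOWN_GENERATORS

crawl = [
    {"url": "a.jpg", "generator": "DALL-E 2"},
    {"url": "b.jpg"},                      # no tag: assume human-made
    {"url": "c.jpg", "generator": "Imagen"},
]

training_set = [m for m in crawl if not is_probably_generated(m)]
print([m["url"] for m in training_set])  # ['b.jpg']
```

As the parent notes, this only works against cooperative actors; anyone who crops the tag or strips the metadata slips through.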

agar(10000) 1 day ago [-]

The irony is that when the majority of content becomes computer-generated, most of that content will also be computer-consumed.

Neil Stephenson covered this briefly in 'Fall; or Dodge In Hell.' So much 'net content was garbage, AI-generated, and/or spam that it could only be consumed via 'editors' (either AI or AI+human, depending on your income level) that separated the interesting sliver of content from...everything else.

VMG(10000) 1 day ago [-]

It will not be limited to the internet. Have you looked at a magazine stand in the last 10 years? The content looks generated (not by AI) even today.

Cheap books, cheap TV and cheap music will be generated.

afro88(10000) 1 day ago [-]

I can see a world where in person consumption of creative media (art, music, movies etc), where all devices are to be left at the door, becomes more and more sought after and lucrative.

If the AI models can't consume it, it can't be commoditised and, well, ruined.

benlivengood(10000) about 14 hours ago [-]

I think instead the images people want to put on the Internet will do the same for these models as adversarial training did for AlphaZero; it will learn what kinds of images engage human reaction.

rajnathani(10000) about 7 hours ago [-]

For better training data in the future: Storing a content hash and author identification (an example proprietary solution right now [0]) of image authors, and having a decentralized reputation system for people/authors would help be the solution for better training data in the future whereby authors can gain reputation/incentives too.

[0] https://creativecloud.adobe.com/discover/article/how-to-use-...

rhacker(10000) 1 day ago [-]

Look at carpentry blogs, recipe blogs. Nearly all of it is junk content. I bet if you combined GPT and imagen or dalle2 you could replace all of them. Just provide a betty crocker recipe and let it generate a blog that has weekly updates and even a bunch of images - 'happy family enjoying pancakes together'

I can see the future as being devoid of any humanity.

dclowd9901(10000) 1 day ago [-]

Eventually the only jobs humans will have is training AI to act human. Sounds very Philip K Dick now that I think about it.

ismepornnahi(10000) 1 day ago [-]

Adding a watermark to all AI generated images should be imperative.

Gigachad(10000) 1 day ago [-]

I wonder if Google Images could just seed in some generated images when none relevant are found.

JayStavis(10000) 1 day ago [-]

Huh, I had never thought of that. Makes it seem like there's a small window of authenticity closing.

The irony is that if you had a great discriminator to separate the wheat from the chaff, that it would probably make its way into the next model and would no longer be useful.

My only recommendation is that OpenAI et al should be tagging metadata for all generated images as synthetic. That would be a really interesting tag for media file formats (would be much better native than metadata though) and probably useful across a lot of domains.
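A toy illustration of the synthetic-tag idea, using least-significant-bit steganography on raw pixel values. The marker pattern and pixel layout here are made up for demonstration; a real provenance scheme would need to survive re-encoding and cropping:

```python
# Hide a fixed bit pattern in the LSBs of the first few pixel values,
# then detect it later. The visual change is at most 1 per channel value.
MARKER = [1, 0, 1, 1, 0, 0, 1, 0]  # arbitrary 8-bit signature

def tag_synthetic(pixels: list[int]) -> list[int]:
    """Return a copy of the image with the marker written into the LSBs."""
    out = list(pixels)
    for i, bit in enumerate(MARKER):
        out[i] = (out[i] & ~1) | bit   # overwrite the least significant bit
    return out

def looks_synthetic(pixels: list[int]) -> bool:
    """Check whether the leading LSBs spell out the marker."""
    return [p & 1 for p in pixels[:len(MARKER)]] == MARKER

image = [200, 13, 77, 148, 255, 0, 31, 64, 90, 12]
tagged = tag_synthetic(image)
print(looks_synthetic(tagged), looks_synthetic(image))  # True False
```

Native in-band tagging like this addresses the parent's point that metadata alone is fragile, since metadata is routinely stripped on upload, though LSB marks are themselves destroyed by any lossy recompression.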

whatshisface(10000) 1 day ago [-]

I don't think it will 'poison the well' so much as change it - images that humans like more will get a higher pagerank, so the models trained on Google Images will not so much as degrade as they will detach from reality and begin to follow the human mind they way plausible fiction does.

LoveMortuus(10000) about 5 hours ago [-]

Maybe we'll go back to directory-based search engines like the old Yahoo. Could resolve many issues we see today, but I think the biggest question is scalability. Maybe some open-source, open-database system?

bowmessage(10000) 1 day ago [-]

I also worry about the potential to further stifle human creativity, e.g. why paint that oil painting of a panda riding a bicycle when I could generate one in seconds?

joshspankit(10000) 1 day ago [-]

Just yesterday I was speculating that current AI is bad at math because math on the internet is spectacularly terrible.

I think you're right, and it's unlikely that we (society) will convince people to label their AI content as such so that scraping is still feasible.

It's far more likely that companies will be formed to provide "pristine training sets of human-created content", and quite likely they will be subscription based.

kleer001(10000) 1 day ago [-]

How would that really happen? It seems to me you're assuming that there's no such thing as extant databases of actual oil paintings, that people will stop producing, documenting, and curating said paintings. I think the internet and curated image databases are far more well kept than your proposed model accounts for.

dr_dshiv(10000) 1 day ago [-]

How the fck are things advancing so fast? Is it about to level off ...or extend to new domains? What's a comparable set of technical advances?

dqpb(10000) 1 day ago [-]

This video by Juergen Schmidhuber discusses the acceleration of AI progress:


astrange(10000) 1 day ago [-]

Bigger model = better because a lot of performance at this task is memorization or the "lottery ticket hypothesis".

An impressive advance would be a small model that's capable of working from an external memory rather than memorizing it.

y04nn(10000) 1 day ago [-]

Really impressive. If we are able to generate such detailed images, is there anything similar for text to music? I would have thought it would be simpler to achieve than text to image.

redox99(10000) 1 day ago [-]

Our language is much more effective at describing images than music.

tomatowurst(10000) 1 day ago [-]

Why stop at audio? The pinnacle of this would be text-to-video, equally indistinguishable from the real thing.

nomel(10000) 1 day ago [-]

Compare the size of a raw image file to a raw music file, to get an idea of the complexity difference.
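Back-of-envelope numbers for that comparison, with assumed formats (a 1024x1024 24-bit RGB image versus CD-quality audio at 44.1 kHz, 16-bit stereo; uncompressed sizes only):

```python
# One uncompressed frame: width * height * 3 bytes (RGB).
image_bytes = 1024 * 1024 * 3
# One second of raw audio: samples * channels * bytes per sample.
audio_bytes_per_sec = 44_100 * 2 * 2

print(image_bytes)                             # 3145728 (~3.1 MB)
print(audio_bytes_per_sec)                     # 176400 (~0.18 MB/s)
print(round(image_bytes / audio_bytes_per_sec, 1))  # 17.8 s of audio per image
```

So a single raw image holds about as much data as 18 seconds of raw CD audio; whether that translates into generation difficulty is a separate question.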

touringa(10000) 1 day ago [-]
d--b(10000) 1 day ago [-]

One thing that no one predicted in AI development was how good it would become at some completely unexpected tasks while being not so great at the ones we supposed/hoped it would be good.

AI was expected to grow like a child. Somehow blurting out things that would show some increasing understanding on a deep level but poor syntax.

In fact we get the exact opposite. AI is creating texts that are syntaxically correct and very decently articulated and pictures that are insanely good.

And these texts and images are created from a text prompt?! There is no way to interface with the model other than by freeform text. That is so weird to me.

Yet it doesn't feel intelligent at all at first. You can't ask it to draw "a chess game with a puzzle where white mates in 4 moves".

Yet sometimes GPT makes very surprising inferences. And it starts to feel like there is something going on a deeper level.

DeepMind's AlphaXxx models are more in line with how I expected things to go. Software that gets good at expert tasks that we as humans are too limited to handle.

Where it's headed, we don't know. But I bet it's going to be difficult to tell the "intelligence" from the "varnish"

Layke1123(10000) 1 day ago [-]

Syntactically* I know it's the most trivial of things, but in case you were curious as I often am!

satokausi(10000) 1 day ago [-]

I doubt 99% of humans can draw a "chess game with a puzzle where white mates in 4 moves"

ml_basics(10000) 1 day ago [-]

Why is this seemingly official Google blog post on this random non-Google domain?

aidenn0(10000) 1 day ago [-]

This is quite suspicious considering that google AI research has an official blog[1], and this is not mentioned at all there. It seems quite possible that this is an elaborate prank.

1: https://ai.googleblog.com/

mmh0000(10000) 1 day ago [-]

You mean one of Google's domains?

  # whois appspot.com
  [Querying whois.verisign-grs.com]
  [Redirected to whois.markmonitor.com]
  [Querying whois.markmonitor.com]
  Domain Name: appspot.com
  Registry Domain ID: 145702338_DOMAIN_COM-VRSN
  Registrar WHOIS Server: whois.markmonitor.com
  Registrar URL: http://www.markmonitor.com
  Updated Date: 2022-02-06T09:29:56+0000
  Creation Date: 2005-03-10T02:27:55+0000
  Registrar Registration Expiration Date: 2023-03-10T00:00:00+0000
  Registrar: MarkMonitor, Inc.
  Registrar IANA ID: 292
  Registrar Abuse Contact Email: [email protected]
  Registrar Abuse Contact Phone: +1.2086851750
  Domain Status: clientUpdateProhibited (https://www.icann.org/epp#clientUpdateProhibited)
  Domain Status: clientTransferProhibited (https://www.icann.org/epp#clientTransferProhibited)
  Domain Status: clientDeleteProhibited (https://www.icann.org/epp#clientDeleteProhibited)
  Domain Status: serverUpdateProhibited (https://www.icann.org/epp#serverUpdateProhibited)
  Domain Status: serverTransferProhibited (https://www.icann.org/epp#serverTransferProhibited)
  Domain Status: serverDeleteProhibited (https://www.icann.org/epp#serverDeleteProhibited)
  Registrant Organization: Google LLC
  Registrant State/Province: CA
  Registrant Country: US
  Registrant Email: Select Request Email Form at https://domains.markmonitor.com/whois/appspot.com
  Admin Organization: Google LLC
  Admin State/Province: CA
  Admin Country: US
  Admin Email: Select Request Email Form at https://domains.markmonitor.com/whois/appspot.com
  Tech Organization: Google LLC
  Tech State/Province: CA
  Tech Country: US
  Tech Email: Select Request Email Form at https://domains.markmonitor.com/whois/appspot.com
  Name Server: ns4.google.com
  Name Server: ns3.google.com
  Name Server: ns2.google.com
  Name Server: ns1.google.com
dekhn(10000) 1 day ago [-]

I'm not certain but I think it's prerelease. The paper says the site should be at https://imagen.research.google/ but that host doesn't respond

jonny_eh(10000) 1 day ago [-]

appspot.com is the domain that hosts all App Engine apps (at least those that don't use a custom domain). It's kind of like Heroku and has been around for at least a decade.


mshockwave(10000) 1 day ago [-]

IIRC appspot.com is used by App Engine, one of the earliest SaaS platforms provided by Google.

jeffbee(10000) 1 day ago [-]

Not just that ... Google Sheets must be the all-time worst way to distribute 200 short strings.

discmonkey(10000) 1 day ago [-]

For people complaining that they can't play with the model... I work at Google and I also can't play with the model :'(

hathym(10000) 1 day ago [-]

off-topic: as a google employee do you have unlimited gce credits?

kvetching(10000) about 20 hours ago [-]

Good thing there is a company committed to Open Sourcing these sorts of AI models.

Oh wait.

Google: 'it's too dangerous to release to the public'

OpenAI: 'we are committed to open source AGI but this model is too dangerous to release to the public'

interblag(10000) 1 day ago [-]

I think they address some of the reasoning behind this pretty clearly in the write-up as well?

> The potential risks of misuse raise concerns regarding responsible open-sourcing of code and demos. At this time we have decided not to release code or a public demo. In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access.

I can see the argument here. It would be super fun to test this model's ability to generate arbitrary images, but 'arbitrary' also contains space for a lot of distasteful stuff. Add in this point:

> While a subset of our training data was filtered to removed noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes. Imagen relies on text encoders trained on uncurated web-scale data, and thus inherits the social biases and limitations of large language models. As such, there is a risk that Imagen has encoded harmful stereotypes and representations, which guides our decision to not release Imagen for public use without further safeguards in place.

That said, I hope they're serious about the 'framework for responsible externalization' part, both because it would be really fun to play with this model and because it would be interesting to test it outside of their hand-picked examples.

arthurcolle(10000) 1 day ago [-]

How does that make you feel?

SemanticStrengh(10000) about 20 hours ago [-]

Is brocoliman standing in the way? :(

make3(10000) 1 day ago [-]

I mean I don't know how that makes it any better from a reproducibility standpoint lol

octocop(10000) 1 day ago [-]

is your team/division hiring?

karmasimida(10000) 1 day ago [-]

I mean inference on this cost not small money.

I don't think they would host this for fun then.

ShakataGaNai(10000) 1 day ago [-]

All of these AI findings are cool in theory. But until its accessible to some decent amount of people/customers - its basically useless fluff.

You can tell me those pictures are generated by an AI and I might believe it, but until real people can actually test it... it's easy enough to fake. This page isn't even the remotest bit legit by the URL. It looks nicely put together and that's about it. Could have easily put this together with a graphic designer to fake it.

Let me be clear: I'm not actually saying it's fake. Just that all of these new 'cool' things are more or less theoretical if nothing is getting released.

cellis(10000) 1 day ago [-]

Inference times are key. If it can't be produced within reasonable latency, then there will be no real world use case for it because it's simply too expensive to run inference at scale.

mistrial9(10000) 1 day ago [-]

Reading a relatively recent machine learning paper from some elite source: after multiple repetitions of bragging and puffery, in the middle of the paper, the charts show that they had beaten the score of a high-ranking algorithm in their specific domain, moving the best consistent result from 86% accuracy to 88% accuracy, somewhere around there. My response was: they got a lot of attention within their world by beating the previous score, no matter how small the improvement was. It was a 'winner take all' competition against other teams close to them; accuracy of less than 90% is really of questionable value in a lot of real-world problems; and it was an enormous amount of math and effort for this team to make that small improvement.

What I see is a semi-poverty mindset among very smart people who appear to be treated in a way such that the winners get promoted and everyone else is fired. This sort of analysis with ML is useful for massive data sets at scale, where 90% is a lot of accuracy, but not at all for the small sets of real-world, human-scale problems where each result may matter a lot. The years of training that these researchers had to go through to participate in this apparently ruthless environment are certainly like a lottery ticket, if you are in fact in a game where everyone but the winner has to find a new line of work. I think their masters live in Redmond, if I recall.. not looking it up at the moment.

gwern(10000) 1 day ago [-]

What you're missing is that the performance on a pretext task like ImageNet top-1 will transfer outside ImageNet, and as you go further into the high score regime, often a small % can yield qualitatively better results because the underlying NN has to solve harder and harder problems, eliciting true solutions rather than a patchwork of heuristics.

Nothing in a Transformer's perplexity in predicting the next token tells you that at some point it suddenly starts being able to write flawless literary style parodies, and this is why the computer art people become virtuosos of CLIP variants and are excited by new ones, because each one attacks concepts in slightly different ways and a 'small' benchmark increase may unlock some awesome new visual flourish that the model didn't get before.

londons_explore(10000) 1 day ago [-]

If you worked in a hospital and you managed to increase the survival rate from 86% to 88%, you too would be a hero.

Sure, it's only 2%, but if it's on a problem where everyone else has been trying to make that improvement for a long time, and that improvement means big economic or social gains, then it's worth it.

Jyaif(10000) 1 day ago [-]

Jesus Christ. Unlike DALL-E 2, it gets the details right. It also can generate text. The quality is insanely good. This is absolutely mental.

not2b(10000) 1 day ago [-]

Yes, the posted results are really good, but since we can't play with it we don't know how much cherry picking has been done.

benreesman(10000) 1 day ago [-]

I apologize in advance for the elitist-sounding tone. In my defense the people I'm calling elite I have nothing to do with, I'm certainly not talking about myself.

Without a fairly deep grounding in this stuff it's hard to appreciate how far ahead Brain and DM are.

Neither OpenAI nor FAIR ever has the top score on anything unless Google delays publication. And short of FAIR? D2 lacrosse. There are exceptions to such a brash generalization, NVIDIA's group comes to mind, but it's a very good rule of thumb. Or your whole face the next time you are tempted to doze behind the wheel of a Tesla.

There are two big reasons for this:

- the talent wants to work with the other talent, and through a combination of foresight and deep pockets Google got that exponent on their side right around the time NVIDIA cards started breaking ImageNet. Winning the Hinton bidding war clinched it.

- the current approach of "how many Falcon Heavy launches worth of TPU can I throw at the same basic masked attention with residual feedback and a cute Fourier coloring" inherently favors deep pockets, and obviously MSFT, sorry OpenAI has that, but deep pockets also non-linearly scale outcomes when you've got in-house hardware for multiply-mixed precision.

Now clearly we're nowhere close to Maxwell's Demon on this stuff, and sooner or later some bright spark is going to break the logjam of needing 10-100MM in compute to squeeze a few points out of a language benchmark. But the incentives are weird here: who, exactly, does it serve for us plebs to be able to train these things from scratch?

Herodotus38(10000) 1 day ago [-]

Is Maxwell's Demon applicable to this scenario? I'm not a physicist but I recently had to look it up after talking with someone and thought it had to do with a specific thermodynamic thought experiment with gas particles and heat differences. Is there is another application I don't understand with computing power?

chillee(10000) 1 day ago [-]

> Neither OpenAI nor FAIR ever has the top score on anything unless Google delays publication.

This is ... very incorrect. I am very certain (95%+) that Google had nothing even close to GPT-3 at the time of its release. It's been 2 full years since GPT-3 was released, and even longer since OpenAI actually trained it.

That's not to talk about any of the other things OpenAI/FAIR has released that were SOTA at the time of release (Dall-E 1, JukeBox, Poker, Diplomacy, Codex).

Google Brain and Deepmind have done a lot of great work, but to imply that they essentially have a monopoly on SOTA results and all SOTA results other labs have achieved are just due to Google delaying publication is ridiculous.

meowface(10000) 1 day ago [-]

Not elitist at all; I highly appreciate this post. I know the basics of ML but otherwise am clueless when it comes to the true depths of this field and it's interesting to hear this perspective.

ttul(10000) 1 day ago [-]

In short, it's all about money.

f38zf5vdt(10000) 1 day ago [-]

> But the incentives are weird here: who, exactly, does it serve for us plebs to be able to train these things from scratch?

I'm not sure it matters. The history of computing shows that within the decade we will all have the ability to train and use these models.

dougabug(10000) 1 day ago [-]

This characterization is not really accurate. OpenAI has had almost a 2 year lead with GPT-3 dominating the discussion of LLMs (large language models). Google didn't release its paper on the powerful PaLM-540b model until recently. Similarly, CLiP, Glide, DALL-E, and DALL-E2 have been incredibly influential in visual-language models. Imagen, while highly impressive, definitely is a catch-up piece of work (as was PaLM-540b).

Google clearly demonstrates their unrivaled capability to leverage massive quantities of data and compute, but it's premature to declare that they've secured victory in the AI Wars.

joshcryer(10000) 1 day ago [-]

Who does it serve for plebs to be shown the approach openly? I don't know that it does a disservice to anyone by showing the approach.

But in general it is likely more due in part to the fact that it's going to happen anyway, if we can share our approaches and research findings, we'll just achieve it sooner.

james-redwood(10000) 1 day ago [-]

Metaculus, a mass-forecasting site, has steadily brought forward its prediction date for a weakly general AI. Jaw-dropping advances like this only increase my confidence in that prediction. 'The future is now, old man.'


chias(10000) 1 day ago [-]

'The future is already here — It's just not very evenly distributed'

sydthrowaway(10000) 1 day ago [-]

How can we prepare for this?

This will result in mass social unrest.

tpmx(10000) 1 day ago [-]

I don't see how this gets us (much) closer to general AI. Where is the reasoning?

davikr(10000) 1 day ago [-]

Interesting and cool technology - but I can't seem to ignore that every high-quality AI art application is always closed, and I don't seem to buy the ethics excuse for that. The same was said for GPT, yet I see nothing but creativity coming out from its users nowadays.

s17n(10000) 1 day ago [-]

Running inference on one of these models takes like a GPU minute, so they can't just let the public use them.
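Rough math behind that point, with every number an assumption (one GPU-minute per image, an illustrative $2.50/hour cloud GPU rate, and hypothetical demo traffic of one million requests a day):

```python
gpu_minutes_per_image = 1        # assumed inference time, per the comment
gpu_hour_cost = 2.50             # assumed cloud GPU rate, USD/hour
requests_per_day = 1_000_000     # hypothetical public-demo traffic

cost_per_image = gpu_hour_cost * gpu_minutes_per_image / 60
daily_cost = cost_per_image * requests_per_day

print(round(cost_per_image, 4))  # 0.0417 (~4 cents per image)
print(round(daily_cost))         # 41667 (~$42k/day at this scale)
```

Even if the per-image figure is off by an order of magnitude either way, an open demo lands somewhere between an annoyance and a serious line item.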

dougmwne(10000) 1 day ago [-]

GPT-3 was an erotica virtuoso before it was gagged. There's a serious use case here in endless porn generation. Google would very much like to not be in that business.

That said, you can download Dream by Wombo from the app store and it is one of the top smartphone apps, even though it is a few generations behind state of the art.

LordDragonfang(10000) 1 day ago [-]

You're aware of nothing but creativity from its users. The people using the technology unethically intentionally don't advertise that they're using it.

There's mountains of ai-generated inauthentic content that companies (including Google) have to filter out of their services. This content is used for spam, click farms, scamming, and even state propaganda operations. GPT-2 made this problem orders of magnitude worse than it used to be, and each iteration makes it harder to filter.

The industry term is (generally) 'Coordinated Inauthentic Behavior' (though this includes uses of actual human content). I think Smarter Every Day did a good videos (series?) on the topic, and there are plenty of articles on the topic if you prefer that.

angrysword(10000) 1 day ago [-]

Ethics, racism, LGBT, bla, bla. If you want to talk about political correctness, I really suggest you go somewhere else instead of staying on Hacker News. AI-generated porn is better than having people who don't want to do porn do it themselves.

thorum(10000) 1 day ago [-]

That only lasts until the community copies the paper and catches up. For example the open source DALLE-2 implementation is coming along great: https://github.com/lucidrains/DALLE2-pytorch

whywhywhywhy(10000) 1 day ago [-]

The AI ethics thing is just a PR larp at this point.

"Oh our tech is so dangerous and amazing it could turn the world upside down" yet we hand it to random Bluechecks on Twitter.

It's just marketing

natly(10000) 1 day ago [-]

I don't buy the ethics but I do buy the obvious PR nightmare that would inevitably happen if journalists could play with this and immediately publish their findings of 'racist imagery generated by googles AI'. That's all it's about and us complaining is not going to make them change their minds.

quasar14(10000) 1 day ago [-]

Check out the open source alternative dalle-mini: https://huggingface.co/spaces/dalle-mini/dalle-mini

fumblebee(10000) 1 day ago [-]

Another consideration here is that hosting a queryable model like this gets expensive. I remember a couple of years ago a lone developer had to take down his site hosting a freely accessible version of the GPT-2 (3?) model because the bills were running to some $20k. (Chump change for Google, but still.)

minimaxir(10000) 1 day ago [-]

Granted that's a selection bias: you likely won't hear about the cases where legit obscene output occurs. (the only notable case I've heard is the AI Dungeon incident)

wiz21c(10000) 1 day ago [-]

I find it a bit disturbing that they talk about the social impact of totally imaginary pictures of raccoons.

Of course, working in a gilded lab at Google may twist your views on society.

dougmwne(10000) 1 day ago [-]

Oh, I would say they are probably underestimating the impact. You only saw the images they thought couldn't raise alarm bells. Anyone will be able to create photorealistic images of anyone doing anything. Anything! This is certainly a dangerous and society-altering tech. It won't all be teddy bears and raccoons playing poker.

daenz(10000) 1 day ago [-]

>While we leave an in-depth empirical analysis of social and cultural biases to future work, our small scale internal assessments reveal several limitations that guide our decision not to release our model at this time.

Some of the reasoning:

>Preliminary assessment also suggests Imagen encodes several social biases and stereotypes, including an overall bias towards generating images of people with lighter skin tones and a tendency for images portraying different professions to align with Western gender stereotypes. Finally, even when we focus generations away from people, our preliminary analysis indicates Imagen encodes a range of social and cultural biases when generating images of activities, events, and objects. We aim to make progress on several of these open challenges and limitations in future work.

Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

joshcryer(10000) 1 day ago [-]

They're withholding the API, the code, and the trained model because they don't want it to affect their corporate image. The good thing is they released their paper, which will allow easy reproduction.

T5-XXL looks on par with CLIP so we may not see an open source version of T5 for a bit (LAION is working on reproducing CLIP), but this is all progress.

riffraff(10000) 1 day ago [-]

This seems like bullshit to me, considering Google Translate and Google Images encode the same biases and stereotypes, and are widely available.

planetsprite(10000) 1 day ago [-]

Literally the same thing could be said about Google Images, but Google Images is obviously available to the public.

Google knows this will be an unlimited money generator so they're keeping a lid on it.

jowday(10000) 1 day ago [-]

Much like OpenAI's marketing speak about withholding their models for safety, this is just a progressive-sounding cover story for not wanting to essentially give away a model they spent thousands of man-hours and tens of millions of dollars' worth of compute training.

ThrowITout4321(10000) 1 day ago [-]

I'm one that welcomes their reasoning. I don't consider myself a social justice kind of guy, but I'm not keen on the idea that a tool that is supposed to make life better for everyone has a bias towards one segment of society. This is an important issue (bug?) that needs to be resolved, especially since there is absolutely no burning reason to release it before it's ready for general use.

Mockapapella(10000) 1 day ago [-]

Transformers are parallelizable, right? What's stopping a large group of people from pooling their compute power together and working towards something like this? IIRC there were some crypto projects a while back that were trying to create something similar (Golem?)

6gvONxR4sf7o(10000) 1 day ago [-]

It's wild to me that the HN consensus is so often that 1) discourse around the internet is terrible, it's full of spam and crap, and the internet is an awful unrepresentative snapshot of human existence, and 2) the biases of general-internet-training-data are fine in ML models because it just reflects real life.

user3939382(10000) 1 day ago [-]

Translation: we need to hand-tune this to reflect not reality but the world as we (Caucasian/Asian male American woke upper-middle-class San Francisco engineers) wish it to be.

Maybe that's a nice thing, I wouldn't say their values are wrong but let's call a spade a spade.

visarga(10000) 1 day ago [-]

The big labs have become very sensitive with large model releases. It's too easy to make them generate bad PR, to the point of not releasing almost any of them. Flamingo was also a pretty great vision-language model that wasn't released, not even in a demo. PaLM is supposedly better than GPT-3 but closed off. It will probably take a year for open source models to appear.

xmonkee(10000) 1 day ago [-]

I was hoping your conclusion wasn't going to be this as I was reading that quote. But, sadly, this is HN.

sinenomine(10000) 1 day ago [-]

> Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

Indeed it is. Consider this an early, toy version of the political struggle related to ownership of AI-scientists and AI-engineers of the near future. That is, generally capable models.

I do think the public should have access to this technology, given so much is at stake. Or at least the scientists should be completely, 24/7, open about their R&D. Every prompt that goes into these models should be visible to everyone.

swayvil(10000) 1 day ago [-]

it isn't woke enough. Lol.

tines(10000) 1 day ago [-]

This raises some really interesting questions.

We certainly don't want to perpetuate harmful stereotypes. But is it a flaw that the model encodes the world as it really is, statistically, rather than as we would like it to be? By this I mean that there are more light-skinned people in the west than dark, and there are more women nurses than men, which is reflected in the model's training data. If the model only generates images of female nurses, is that a problem to fix, or a correct assessment of the data?

If some particular demographic shows up in 51% of the data but 100% of the model's output shows that one demographic, that does seem like a statistics problem that the model could correct by just picking less likely 'next token' predictions.

Also, is it wrong to have localized models? For example, should a model for use in Japan conform to the demographics of Japan, or to that of the world?
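The statistical correction described above amounts to sampling from the model's distribution instead of always taking the most likely option. A minimal sketch in plain Python (the logits here are made up for illustration):

```python
import math
import random

def sample_token(logits, temperature=1.0, rng=None):
    """Sample an index from logits instead of always taking the argmax.

    At temperature 1.0 the learned distribution is respected: an option
    the model rates at 51% shows up roughly 51% of the time, not 100%
    as it would under pure argmax decoding.
    """
    rng = rng or random.Random()
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numeric stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(exps) - 1                  # guard against rounding drift
```

With logits of log 0.51 and log 0.49, repeated samples land close to a 51/49 split, while pushing the temperature toward zero collapses output back to the majority option, which is the failure mode the parent describes.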

jimmygrapes(10000) 1 day ago [-]

Are 'Western gender stereotypes' significantly different than non-Western gender stereotypes? I can't tell if that means it counts a chubby stubble-covered man with a lip piercing, greasy and dyed long hair, wearing an overly frilly dress as a DnD player/metal-head or as a 'woman' or not (yes I know I'm being uncharitable and potentially 'bigoted' but if you saw my Tinder/Bumble suggestions and friend groups you'd know I'm not exaggerating for either category). I really can't tell what stereotypes are referred to here.

nomel(10000) 1 day ago [-]

If you tell it to generate an image of someone eating Koshihikari rice, will it be biased if they're Japanese? Should the skin color, clothing, setting, etc. be made completely random, so that it's unbiased? What if you made it more specific, like 'edo period drawing of a man'? Should the person drawn be of a random skin color? What about 'picture of a viking'? Is it biased if they're white?

At what point is statistical significance considered ok and unbiased?

bogwog(10000) 1 day ago [-]

I wouldn't describe this situation as 'sad'. Basically, this decision is based on a belief that tech companies should decide what our society should look like. I don't know what emotion that conjures up for you, but 'sadness' isn't it for me.

tomp(10000) 1 day ago [-]

> a tendency for images portraying different professions to align with Western gender stereotypes

There are two possible ways of interpreting 'gender stereotypes in professions'.

biased or correct



meetups323(10000) 1 day ago [-]

One of these days we're going to need to give these models a mortgage and some mouths to feed and make it clear to them that if they keep on developing biases from their training data everyone will shun them and their family will go hungry and they won't be able to make their payments and they'll just generally have a really bad time.

After that we'll make them sit through Legal's approved D&I video series, then it's off to the races.

babyshake(10000) 1 day ago [-]

Indeed. If a project has shortcomings, why not just acknowledge the shortcomings and plan to improve on them in a future release? Is it anticipated that 'engineer' being rendered as a man by the model is going to be an actively dangerous thing to have out in the world?

tyrust(10000) 1 day ago [-]

From the HN rules:

>Eschew flamebait. Avoid unrelated controversies and generic tangents.

They provided a pretty thorough overview (nearly 500 words) of the multiple reasons why they are showing caution. You picked out the one that happened to bother you the most and have posted a misleading claim that the tech is being withheld entirely because of it.

devindotcom(10000) 1 day ago [-]

Good lord. Withheld? They've published their research, they just aren't making the model available immediately, waiting until they can re-implement it so that you don't get racial slurs popping up when you ask for a cup of 'black coffee.'

>While a subset of our training data was filtered to remove noise and undesirable content, such as pornographic imagery and toxic language, we also utilized LAION-400M dataset which is known to contain a wide range of inappropriate content including pornographic imagery, racist slurs, and harmful social stereotypes

Tossing that stuff when it comes up in a research environment is one thing, but Google clearly wants to implement this as a product, used all over the world by a huge range of people. If the dataset has problems, and why wouldn't it, it is perfectly rational to want to wait and re-implement it with a better one. DALL-E 2 was trained on a curated dataset so it couldn't generate sex or gore. Others are sanitizing their inputs too and have done for a long time. It is the only thing that makes sense for a company looking to commercialize a research project.

This has nothing to do with 'inability to cope' and the implied woke mob yelling about some minor flaw. It's about building a tool that doesn't bake in serious and avoidable problems.

seaman1921(10000) 1 day ago [-]

Yup this is what happens when people who want headlines nitpick for bullshit in a state-of-the-art model which simply reflects the state of the society. Better not to release the model itself than keep explaining over and over how a model is never perfect.

dclowd9901(10000) 1 day ago [-]

Even as a pretty left leaning person, I gotta agree. We should see AI's pollution by human shortcoming akin to the fact that our world is the product of many immoralities that came before us. It sucks that they ever existed, but we should understand that the results are, by definition, a product of the past, and let them live in that context.

makeitdouble(10000) 1 day ago [-]

> Really sad that breakthrough technologies are going to be withheld due to our inability to cope with the results.

Genuinely, isn't it a prime example of the people actually stopping to think if they should, instead of being preoccupied with whether or not they could ?

ccbccccbbcccbb(10000) 1 day ago [-]

In short, the generated images are too gender-challenged-challenged and underrepresent the spectrum of new normalcy!

alphabetting(10000) 1 day ago [-]

There is a contingent of AI activists who spend a ton of time on Twitter that would beat Google like a drum with help from the media if they put out something they deemed racist or biased.

Mizza(10000) 1 day ago [-]

So glad the company that spies on me and reads my email for profit is protecting me from pictures that don't look like TV commercials.

ceeplusplus(10000) 1 day ago [-]

The ironic part is that these 'social and cultural biases' are purely from a Western, American lens. The people writing that paragraph seem completely oblivious to the idea that there could be cultures other than the Western American one. In attempting to prevent 'encoding of social and cultural biases' they have encoded such biases themselves into their own research.

andybak(10000) 1 day ago [-]

Great. Now even if I do get a Dall-E 2 invite I'll still feel like I'm missing out!

rvnx(10000) 1 day ago [-]

It's always the same with AI research: 'we have something amazing but you can't use it because it's too powerful and we think you are an idiot who cannot use your own judgement.'

ALittleLight(10000) 1 day ago [-]

Interesting to me that this one can draw legible text. DALLE models seem to generate weird glyphs that only look like text. The examples they show here have perfectly legible characters and correct spelling. The difference between this and DALLE makes me suspicious / curious. I wish I could play with this model.

zimpenfish(10000) 1 day ago [-]

The latent-diffusion[1] one I've been playing with is not terrible at drawing legible text but generally awful at actually drawing the text you want (cf. [2]) (or drawing text when you don't want any.)

[1] https://github.com/CompVis/latent-diffusion.git [2] https://imgur.com/a/Sl8YVD5

GaggiX(10000) 1 day ago [-]

Imagen takes text embeddings; OpenAI's model takes image embeddings instead, which is the reason. There are other models that can generate text: latent diffusion trained on LAION-400M, GLIDE, and DALL-E (1).

Tehdasi(10000) 1 day ago [-]

Still has the issue of screwing up mechanical objects. In their demo, check out the wheels on the skateboards: all over the place.

ricardobeat(10000) 1 day ago [-]

I thought the weird text in DALL-E 2 was on purpose to prevent malicious use.

the8472(10000) 1 day ago [-]

DALL-E 1 was able to render text[0]. That DALL-E 2 isn't is probably a tradeoff introduced by unCLIP in exchange for diverse results. Now the Google model is better yet and doesn't have to make that tradeoff.

[0] https://openai.com/blog/dall-e/#text-rendering

hahajk(10000) 1 day ago [-]

Off topic, but this caught my attention:

"In future work we will explore a framework for responsible externalization that balances the value of external auditing with the risks of unrestricted open-access."

I work for a big org myself, and I've wondered what it is exactly that makes people in big orgs so bad at saying things.

dougmwne(10000) 1 day ago [-]

I think they were being careful not to be too quotable there on CNN.

beeskneecaps(10000) about 22 hours ago [-]

It's terrifying that all of these models are one colab notebook away from unleashing unlimited, disastrous imagery on the internet. At least some companies are starting to realize this and are not releasing the source code. However they always manage to write a scientific paper and blog post detailing the exact process to create the model, so it will eventually be recreated by a third party.

Meanwhile, Nvidia sees no problem with yeeting StyleGAN and models that allow real humans to be realistically turned into animated puppets in 3D space. The inevitable end result of these scientific achievements will be orders of magnitude worse than deepfakes.

Oh, or a panda wearing sunglasses, in the desert, digital art.

isx726552(10000) about 21 hours ago [-]

I am absolutely terrified of all this for a different reason: all human professions (not just art) will soon be replaced by "good enough" AI, creating a world flooded with auto-generated junk and billions of people trapped permanently in slums, because you can't compete with free, and no one can earn a living any longer.

It's an old fear for sure but it seems to be getting closer and closer every day, and yet most of the discussion around these things seems to be variations of "isn't this cool?"

kvetching(10000) about 20 hours ago [-]

I, for One, Welcome Our Robot Overlords

sexy_panda(10000) 1 day ago [-]

Would I have to implement this myself, or is there something ready to run?

UncleOxidant(10000) 1 day ago [-]

I think implementing this yourself is likely not doable unless you have the computing resources of a Google, Amazon or Facebook.

manchmalscott(10000) 1 day ago [-]

The big thing I'm noticing over DALL-E is that it seems to be better at relative positioning. In a MKBHD video about DALLE it would get the elements but not always in the right order. I know google curated some specific images but it seems to be doing a better job there.

benwikler(10000) 1 day ago [-]

Totally—Imagen seems better at composition and relative positioning and text, while DALL-E seems better at lighting, backgrounds, and general artistry.

planb(10000) 1 day ago [-]

Seeing the artificial restrictions on this model as well as on DALL-E 2, I can't help but ask myself why the porn industry isn't driving its own research. Given the size of that industry and the sheer abundance of training material, it seems just a matter of time until you can create photorealistic images of yourself with your favourite celebrity for a small fee. Is there anything I am missing? Can you only do this kind of research at Google or OpenAI scale?

alexb_(10000) 1 day ago [-]

Porn is actually a really good litmus test to see if a money/media transfer technology has real promise. Pornography needs exactly 2 things to work well - a way to deliver media, and a way to collect money. If you truly have a system that can do one of those two things better than we currently can, and it's not just empty hype, it will be used for porn. 'Empty hype' won't touch that stuff, but real-world usecases will.

Unrelated to the main topic, but this is exactly why I think cryptocurrencies will only be used for illegal activities, or things you may want to hide, and nothing else: because that's where they have found their use case in porn.

rg111(10000) 1 day ago [-]

Transfer learning is a thing.

But I have not tried making generative models with out-of-distribution data before, i.e. distributions other than the main training data.

There are several indie attempts that I am aware of. I'm mentioning them in a reply to this comment (in case this comment gets deleted).

The first layers should be general, but the later layers would not transfer well to porn images, as they are more specialized layers learning distribution-specific visual patterns.

Transfer learning is possible.
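A toy sketch of the freezing idea described above, with plain Python standing in for a real framework (in PyTorch, for instance, you would mark early-layer parameters as not trainable; all names below are made up):

```python
# Toy sketch of transfer learning by freezing early layers.
# Each "layer" is just a dict of weights with a frozen flag; the
# general early layers keep their pretrained weights while only the
# specialist later layers adapt to the new distribution.

def sgd_step(model, grads, lr=0.1):
    """Apply one gradient step, skipping layers marked frozen."""
    for layer, grad in zip(model, grads):
        if layer["frozen"]:
            continue  # frozen early layers are left untouched
        layer["w"] = [w - lr * g for w, g in zip(layer["w"], grad)]

model = [
    {"name": "early_general", "w": [1.0, 2.0], "frozen": True},
    {"name": "late_specialist", "w": [0.5, -0.5], "frozen": False},
]
grads = [[0.3, 0.3], [0.2, -0.2]]
sgd_step(model, grads)
# after the step, only late_specialist's weights have changed
```

This is only an illustration of the mechanism, not a claim about how any of the indie attempts actually fine-tune.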

nootropicat(10000) 1 day ago [-]

This is completely outside the competency of the current porn industry.

You gave an example of a still image, but it's going to end up with an AI generating a full video according to a detailed text prompt. The porn industry is going to be utterly destroyed.

visarga(10000) 1 day ago [-]

Interesting discovery they made

> We show that scaling the pretrained text encoder size is more important than scaling the diffusion model size.

There seems to be an unexpected level of synergy between text and vision models. Can't wait to see what video and audio modalities will add to the mix.

gwern(10000) 1 day ago [-]

I think that's unsurprising. With DALL-E 1, for example, scaling the VAE (the image model generating the actual pixels) hits very fast diminishing returns, and all your compute goes into the 'text encoder' generating the token sequence.

Particularly as you approach the point where the image quality itself is superb and people increasingly turn to attacking the semantics & control of the prompt to degrade the quality ('...The donkey is holding a rope on one end, the octopus is holding onto the other. The donkey holds the rope in its mouth. A cat is jumping over the rope...'). For that sort of thing, it's hard to see how simply beefing up the raw pixel-generating part will help much: if the input seed is incorrect and doesn't correctly encode a thumbnail sketch of how all these animals ought to be engaging in outdoors sports, there's nothing some low-level pixel-munging neurons can do to help much.

ravi-delia(10000) 1 day ago [-]

Basically makes sense, no? DALL-E 2 suffered from misunderstanding propositional logic, treating prompts as less structured than it should have. That's a text model issue! Compared to that, scaling up the image model isn't as important (especially with a few passes).

jeffbee(10000) 1 day ago [-]

Is there anything at all, besides the training images and labels, that would stop this from generating a convincing response to 'A surveillance camera image of Jared Kushner, Vladimir Putin, and Alexandria Ocasio-Cortez naked on a sofa. Jeffrey Epstein is nearby, snorting coke off the back of Elvis'?

astrange(10000) 1 day ago [-]

- The current examples aren't convincing pictures of "a shiba inu playing a guitar".

- If you made that picture with actors or in MS Paint, politics boomers on Facebook wouldn't care either way. They'd just start claiming it's real if they like the message.

codemonkey-zeta(10000) 1 day ago [-]

Probably just a frontend coding mistake, and not an error in the model, but in the interactive example if you select:

'A photo of a Shiba Inu dog Wearing a (sic) sunglasses And black leather jacket Playing guitar In a garden'

The Shiba Inu is not playing a guitar.

didgeoridoo(10000) 1 day ago [-]

Found the QA tester.

astrange(10000) 1 day ago [-]

There are visible "alignment" issues in some of their examples still. The marble koala DJ in the paper doesn't use several of the keywords.

They have an example "horse riding an astronaut" that no model produces a correct image for. It'd be interesting if models could explain themselves or print the caption they understand you as saying.

Historical Discussions: Vangelis has died (May 19, 2022: 787 points)

(788) Vangelis has died

788 points 6 days ago by Saint_Genet in 10000th position

pitchfork.com | Estimated reading time – 3 minutes | comments | anchor

Vangelis—the composer who scored Blade Runner, Chariots of Fire, and many other films—has died, Reuters reports, citing the Athens News Agency. A cause of death was not revealed. According to The Associated Press, the musician died at a French hospital. Vangelis was 79 years old.

Born Evángelos Odysséas Papathanassíou, Vangelis was largely a self-taught musician. He found success in Greek rock bands such as the Forminx and Aphrodite's Child—the latter of which sold over 2 million copies before disbanding in 1972. One of his earliest film scores, written while he was still in Aphrodite's Child, was for a French nature documentary called L'Apocalypse des animaux.

An innovator in electronic music, Vangelis is arguably best known for his work on Chariots of Fire and Ridley Scott's Blade Runner. It was noted by many upon the release of the Harrison Ford–starring film that Vangelis' score was as important a component as Ford's character Rick Deckard in bringing the futuristic noir film to life. Years on, it's considered by many to be a hallmark in the chronology of electronic music.

Vangelis' work on Chariots of Fire earned him the 1981 Academy Award for Best Original Score. The soundtrack album also reached the top of the Billboard 200 albums chart in April 1982. The film's opening theme—called "Titles" on the soundtrack album—topped the Billboard Hot 100 the following month. The theme has featured often at the Olympic Games.

In 1973, Vangelis started his solo career with his debut album Fais que ton rêve soit plus long que la nuit (Make Your Dream Last Longer Than the Night). During the '70s, he was widely rumored to join the prog-rock band Yes, following the departure of keyboardist Rick Wakeman. After rehearsing with them for months, Vangelis declined to join the group. He and Yes lead vocalist Jon Anderson reunited later in the '80s, and they went on to release several albums together as Jon & Vangelis.

Vangelis released his final studio album, Juno to Jupiter, in September 2021 via Decca. The record was inspired by the mission of NASA's Juno spacecraft and featured soprano Angela Gheorghiu.

Kyriakos Mitsotakis, Greece's prime minister, eulogized Vangelis on Twitter. "Vangelis Papathanassíou is no longer with us. For the whole world, the sad news states that the world music firm has lost the international Vangelis. The protagonist of electronic sound, the Oscars, the Myth and the great hits," he wrote, according to the site's translation. "For us Greeks, however, knowing that his second name was Odysseus, means that he began his long journey in the Roads of Fire. From there he will always send us his notes."

All Comments: [-] | anchor

bmitc(10000) 6 days ago [-]

He was a very unique, visionary talent. To back that up, beyond his scores, take a look at the following video to get a glimpse of how he worked.


There used to be some forum posts detailing the custom MIDI controllers and setup more, but it looks like a lot of it has been deleted or removed. I found this though:


yourapostasy(10000) 6 days ago [-]

Thank you for sharing those links! If Vangelis were a coder, he would have taken a Space Cadet Keyboard and extended it. 17 pedals...and on top of that, what appears to be a very broad custom notation / shorthand system. I can't tell if that notation acted more like keyboard macros or even more modifier keys.

johnsanders(10000) 6 days ago [-]

Chariots of Fire is iconic of course. https://www.youtube.com/watch?v=8a-HfNE3EIo

I think my first encounter with his music was the Ernest & Julio Gallo wine commercials. https://www.youtube.com/watch?v=mES7lzR9uFE

Seems he should have been considerably older than 79.

johnsanders(10000) 6 days ago [-]

And you can't mention the Ernest & Julio Gallo ads without a link to the voiceover guy / head of the ad agency Hal Riney. Another legend.


julienchastang(10000) 6 days ago [-]

Thanks for the link to the commercial. Classic Vangelis.

motohagiography(10000) 6 days ago [-]

Imo, Vangelis brought the synthesizer from an experimental novelty to an instrument for composition. The two sounds I associate with him are the long brassy triangle with a steep envelope that we know from both Blade Runner and the accompaniment to the piano in the Chariots of Fire theme, and his effective use of chimes.

I have tickets for Olafur Arnalds next week, and there is a younger generation of composers like Arnalds, Frahm, Richter, Tiersen, Aphex/James, and even Reznor/Ross, who could not have avoided Vangelis' influence marrying the synth with classical techniques. He was a big part of what inspired me to start making synth music and more than a few of my tracks have homages to his work, and this note triggered a memory of playing the Chariots theme on piano as a really young child and it seemed to be everywhere at the time. A loss, but hard to mourn such an exceptional contribution as well.

tzs(10000) 6 days ago [-]

How would you compare Vangelis and Isao Tomita? Tomita was marrying synth with classical at about the same time or even a little earlier, to widespread acclaim and success.

A couple other names I think of when I think early serious synth work are Jean-Michel Jarre and Larry Fast. Where would you put them in synth history?

jancsika(10000) 5 days ago [-]

> Imo, Vangelis brought the synthesizer from an experimental novelty to an instrument for composition. The two sounds I associate with him are the long brassy triangle with a steep envelope that we know from both Blade Runner and the accompaniment to the piano in the Chariots of Fire theme, and his effective use of chimes.

One of the most memorable parts of the Blade Runner soundtrack is the brass synth that casually tools around the blues scale. It sounds like an homage to old detective films and grounds the entire movie.

I wonder-- did that influence the intro to Dire Straits' Money for Nothing? It begins with a nice little synth bass and some arpeggiator bleeps and blorps, but there's a similar synth that similarly cruises around a blues scale for a bit.

Digression-- after listening to it again, I noticed that pentatonic synth business in Money for Nothing ends on a C two octaves above middle C which then does a quick upward glissando about an octave and a fourth. Did the keyboardist map the midi wheel to a perfect 11th to do that glissando? If so it sounds incredibly smooth: great job DX7, and/or early MIDI, and/or Alan Clark!

Edit: clarification

pmoriarty(10000) 6 days ago [-]

Just for reference, here's Nils Frahm performing More:


johnohara(10000) 6 days ago [-]

When I listen to Chronotope Project I hear the Vangelis influences. Along with Vangelis' inspiration to pursue such solitary and personal musical expressions.

https://www.youtube.com/watch?v=ARkcu3J8OvI https://chronotope-project.com/bio

greggsy(10000) 5 days ago [-]

It's hard to leave any discussion around the development of the synth as an instrument without mentioning Tangerine Dream and Popol Vuh, but they were arguably more related to the prog-rock scene than to film composition.

Incidentally, a former member of TD, Klaus Schulze, also died last month. TD didn't land any major soundtracks (like Jarre or Vangelis), but they were nonetheless highly influential, if only due to their prolific releases.

emporas(10000) 5 days ago [-]

Blade Runner had astonishing visuals as well, really haunting. Nowadays graphics like that are not that special, but for the time they were revolutionary. I just created some images of animals in place of the legendary monologue of the movie.


spacemadness(10000) 5 days ago [-]

Who is the James in Aphex/James? I'm not sure if you mean to imply Richard D. James or a separate person (see Reznor/Ross).

fetus8(10000) 6 days ago [-]

Based on my listening and no other knowledge, he probably had quite the influence on Oneohtrix Point Never aka OPN aka Daniel Lopatin. He's always been a big synth head and his OST scores really showcase his talents.

Agamus(10000) 5 days ago [-]

For me, it's the theme and much of the music in Carl Sagan's original Cosmos Series. RIP!

notacoward(10000) 5 days ago [-]

> Vangelis brought the synthesizer from an experimental novelty to an instrument for composition.

Along with Wendy Carlos and Jean-Michel Jarre, of course.

haspok(10000) 5 days ago [-]

My personal favourites would be the Bounty title track, which is incredible because it tells the whole story musically in about 4 minutes!

And the title sequence of Antarctica (the Japanese movie from 1983), which nobody has ever seen. I mean, the movie is good, even if a little strange, but the title sequence... I cannot imagine what it would have been like watching it on the big screen and listening to it in a theatre back in 1983.

Secret tip: you can watch it here: https://youtu.be/fNaV3CHCs6g

qpiox(10000) 5 days ago [-]

Antarctica (https://www.imdb.com/title/tt0085991/) is a fantastic, dreamy movie. It is a nature movie, somewhere in between a feature film and a documentary, about dogs that are left to survive alone through the winter in Antarctica. What's intriguing is that the story is told from their perspective: nearly no dialogue, just visuals and music. The movie is slow, so you need to be in the mood for it, and kind of psychedelic, so I highly recommend watching it just before dawn on a projector with a proper surround system (borrow one or visit a friend if you don't have your own; it's worth it).

PrimeDirective(10000) 6 days ago [-]

I have listened to Chariots of Fire and Blade Runner soundtracks countless times while developing or just spending time behind the computer. Thanks for making the time go by better Vangelis!

qpiox(10000) 5 days ago [-]

Countless hours of El Greco on loop during gaming sessions (Wolfenstein, Doom, Quake and similar) (https://www.youtube.com/watch?v=gSXzPd8RX0Q)

endorphine(10000) 6 days ago [-]

This is my favorite (and perhaps the least popular) from Vangelis. Listen to this gem...just listen to it: https://www.youtube.com/watch?v=Uta02hfUF4o

Pr0ject217(10000) 6 days ago [-]

Thank you.

stefanos82(10000) 6 days ago [-]

This gave me goosebumps because I could not stop thinking about him the past week or so and I could not understand why...

Αναπαύσου εν ειρήνη θρύλε, Rest in peace legend.

Καλό ταξίδι γίγαντα μου (safe travels, my giant) https://www.youtube.com/watch?v=Nd-DlMOLCY4

nunodonato(10000) 5 days ago [-]

Damn, I wasn't the only one! Two days ago, just out of the blue, I started listening to his music on Spotify (I like music without vocals while coding). He died a day later :(

nonrandomstring(10000) 6 days ago [-]

RIP a true master of synthesis, up there as an equal to Isao Tomita and Ralf Hutter.

So many memorable and now foundational techniques:

The gated saw string 'chug' (Chariots of Fire)

Glissando space echo dives (Blade Runner)

Incredible synthetic guitar solos that inspired Jan Hammer

Analogue strings from the CS80 that melt like Mantovani.

Will be so sorely missed. I'm gonna play out all my collection in a huge Vangelisathon.

petre(10000) 5 days ago [-]

I'm very sad, having listened to his music a lot during my childhood. At least JMJ is still alive.

pantulis(10000) 5 days ago [-]

> Incredible synthetic guitar solos that inspired Jan Hammer

'Metallic Rain' comes to mind

mhh__(10000) 6 days ago [-]

That's a shame.

His work on Blade Runner just has this timeless magic to it. The sequel ends on his motif ('Tears in Rain') for a reason, too.

I also forgot to mention that chariots of fire is truly great too.

Some parts of his music haven't aged well, but the parts that have are sorely missed in today's film scores. Even if Zimmer is brilliant, he's not a poet.

zeruch(10000) 6 days ago [-]

He did some of his best (and worst) work while collaborating with Jon Anderson in my opinion. 'Short Stories' was a great, quirky album in the late 70s. 'Friends of Mr. Cairo' was dreck in the 80s.

moron4hire(10000) 6 days ago [-]

Seriously, I can always tell a Hans Zimmer score without even having preknowledge that a film had hired him. Big, orchestral, boring score that repeats the same motifs he's been using for the last 50 years? Dude has one act.

coldtea(10000) 5 days ago [-]

As an aside, he was accused of plagiarising Chariots of Fire from this Greek TV series theme (which came out a few years earlier):


He did know and was friends with the composer (also very talented), my guess is Vangelis just heard it at some point and picked up the theme's feel and basic style, and subconsciously copied it. It happens. His version is better imho.

sydthrowaway(10000) 6 days ago [-]

How did people learn synths back then without the internet?

amatecha(10000) 5 days ago [-]

Many (most?) came with printed manuals that described all their functions and how the controls work. Though, that wouldn't usually teach general concepts of subtractive synthesis or the like. Honestly I'll wager that most synth musicians learned by experimenting with the synth for hours on end, just as I did when I was learning. I mean HOURS and HOURS. Synthesizers are not actually that complicated, and once you understand the fundamental pieces, you find that basically all synths just have some combination of those fundamental pieces. You can either find out that foundational stuff by observation or getting lucky with a synth manual that lays it all out for you, or knowing someone that is already knowledgeable with that stuff.

Saint_Genet(10000) 6 days ago [-]

I've been listening mostly to his 70s proggish stuff lately, but the opening of Blade Runner still gives me goosebumps. It wouldn't have been half the film it was without his music.


the_other(10000) 6 days ago [-]

I'm rediscovering 'Albedo 0.39' as a result of this news. I've had a copy for years but not played it for a long time. Absolutely love it.

greenhorn123(10000) 6 days ago [-]

I love Soil Festivities, still decades after hearing it for the first time. Amazing album. Not to diminish his other work, but that one really stands out for me.

Also, if you don't know about it yet, check out his collaborations with Jon Anderson, as Jon & Vangelis, two awesome musicians at their peak.

What a pity...

chasil(10000) 6 days ago [-]

My favorite was Opera Sauvage. It had a flow and consistency that was, for me, unique in his works.

He will be missed.

seydor(10000) 6 days ago [-]

Yes, what a remarkable, 'organic' sound. Also 'L'Apocalypse des Animaux' and his other early albums, really. Despite being made on old synths, they still sound classic.

zrav(10000) 5 days ago [-]

Indeed, for the last two decades the first movement of Soil Festivities has been my go-to track to get me in the zone when I have to get any serious writing going. It just flows so nicely.

rffn(10000) 6 days ago [-]

First Klaus Schulze and now Vangelis. So sad. Rest in peace!

ryandrake(10000) 5 days ago [-]

Holy shit! I didn't realize Klaus Schulze died too. Between these two and Edgar Froese, the world is rapidly losing electronic music greats. Please tell me Giorgio Moroder is still alive!

TheOtherHobbes(10000) 6 days ago [-]

Massive, epic talent. Did everything by ear and instinct, never learned to read or write music. Incredible feel for timbre, melody, and structure.

The DX7 synth used to have a ridiculous 'chuff chuff chuff DING!' comedy steam train preset. It sounded terrible and was utterly useless except as a 10 second novelty.

He used it in one of his soundtracks - and somehow made it perfectly musical in that setting.

seydor(10000) 6 days ago [-]

which one was it?

pmoriarty(10000) 6 days ago [-]

'Did everything by ear and instinct, never learned to read or write music.'

He seems to have had his own musical notation, of a sort. You can see him using it at the beginning of this video:


sheinsheish(10000) 6 days ago [-]

one of his more esoteric and probably less known tracks : https://www.youtube.com/watch?v=P6qoTPhhv9w La petite fille de la mer

bborud(10000) 6 days ago [-]

That track was almost impossible to get hold of back in the day. I had heard of it, but never heard it. After years of asking around in record stores I finally found a really scratchy sounding cassette tape (no doubt a pirated copy) in a German record shop in East Frisia the summer of 1985.

WalterBright(10000) 6 days ago [-]

It's on the Themes album.

languagehacker(10000) 6 days ago [-]

Damn, RIP. Dude wrote my favorite song describing what color each horse of the apocalypse is

poulpy123(10000) 6 days ago [-]

the four horsemen ?

james-skemp(10000) 6 days ago [-]

For those that want to know more, this is referencing the album 666 by Aphrodite's Child.

Got a copy from Germany sometime between '00 and '03.

Amazing album, especially for someone that had only known him for Blade Runner at the time.

His stuff with Jon Anderson is also fairly good. The Friends of Mr. Cairo is one of my favorites.

termios(10000) 6 days ago [-]

the leading horse is WHITE the second horse is RED the third one is a BLACK the last one is a GREEN

Saint_Genet(10000) 6 days ago [-]

Most people know him from his brilliant film scores, but his prog rock era is up there with the greatest of the genre too

skyechurch(10000) 6 days ago [-]

He wrote the theme song to the Carl Sagan series 'Cosmos', both the song and the show had me transfixed as a kid.


sebastianconcpt(10000) 6 days ago [-]

I feel you...


colomon(10000) 6 days ago [-]

To be more precise, he wrote the music which was used as the theme song for 'Cosmos' -- it originally appeared on Vangelis's album 'Heaven and Hell', five years before 'Cosmos' came out. Apparently it was called Movement 3 from 'Symphony to the Powers B', though on my old CD copy of the album it just appears in the middle of the track 'Heaven and Hell Part 1'. Really powerfully evocative music, takes me right back to being a 10 year old watching 'Cosmos'.

yardie(10000) 6 days ago [-]

I don't know much about Vangelis other than Chariots of Fire, he's Greek, and my neighbor when I was a kid loved the shit out of him. I assumed for a very long time Vangelis was an entire band and not just one person.

RIP amongst so many others, lately.

seydor(10000) 6 days ago [-]

He could be an entire orchestra, not just a band, yes.

He had an album based on Greek folk music (Odes, 1979). This is a Cretan dance: https://www.youtube.com/watch?v=Hc9_qVAflzk

zoomablemind(10000) 6 days ago [-]

My memory holds that magical feeling when, in the midnight darkness and quietness of home, I suddenly heard a gentle stream of silver bells and a beautiful, maybe melancholic, melody from a tiny radio speaker... with no announcement of the author or the name of the song. It was just used as the last song of the day.

Took me a veeery long time, and the other side of the globe, to hear it again, again by chance, but with attribution that time. Then some hours trying to locate the recording...

La Petite Fille de la Mer


Truly as if having a chanceful glimpse of a Mermaid.

Thank you for the magic, Master Vangelis. RIP.

riffraff(10000) 5 days ago [-]

Wow, I never knew this was vangelis, thanks for sharing.

nixass(10000) 6 days ago [-]

Man, I never knew this was his song. Even though I am a fan of his work, I never stop discovering new masterpieces he made.

bramjans(10000) 5 days ago [-]

That's really nice, thanks for sharing!

dancemethis(10000) 6 days ago [-]

I had no idea it was a _person_ named Vangelis. The word sounds like a band name.

And really, the exquisite textures are a workload which would ordinarily require multiple talents. Guess he was THAT good.

nixass(10000) 6 days ago [-]

It's actually a common Greek name. I was (positively) shocked when I got two new colleagues at work, both named Vangelis. They remind me of Vangelis every single day, funny stuff

MomoXenosaga(10000) 6 days ago [-]

That's sad. Don't care about music much but for synthesizer I always make an exception. Guess I'll be listening to the Blade Runner OST tomorrow at the gym in remembrance.

qpiox(10000) 5 days ago [-]

If you don't care about music much, but are into synths and sounds check Ignacio (https://www.youtube.com/watch?v=vEnC4kLujgo) or Spiral for something lighter (https://www.youtube.com/watch?v=9VV1lWVhMCk)

gsoto(10000) 6 days ago [-]

Just sharing one of my favorite pieces of him:

'Memories of Green' (from the album 'See You Later') https://www.youtube.com/watch?v=pW9D6agp794

I think this piece shows the range of his musical expressiveness, apart from his virtuosity or synth programming skills. Just a piano passed through a flanger effect with some ambient sounds.

The electronic bleeps in that track are recorded from a handheld electronic game (Bambino UFO Master Blaster [1]). Talk about giving a whole new meaning to those sounds.

[1] https://www.youtube.com/watch?v=-sEOW8wAqG0

wcarss(10000) 6 days ago [-]

I couldn't play that video for some reason (says it's unavailable), so here's another link to (I think) the same song: https://www.youtube.com/watch?v=u1KfOMkyU_w

the_af(10000) 6 days ago [-]

'Memories of Green' was also used to great effect in Blade Runner. I love how well it works there. It's so sad and evocative.

ffhhj(10000) 6 days ago [-]

> 'Memories of Green' (from the album 'See You Later') https://www.youtube.com/watch?v=pW9D6agp794

That title and that album cover: a woman wearing a bikini and snow goggles, with the sun at her back and broken floating ice. A prediction of climate change from 1980?

FpUser(10000) 5 days ago [-]

>'Memories of Green'

Memories of Green and his work on Blade Runner in general are on my list of best music. Beautiful work.

janci(10000) 6 days ago [-]

Can somebody shed some light on the Miami Vice song? I always thought it was a Jean-Michel Jarre and Vangelis song, but apparently it is called 'Crockett's Theme', by Jan Hammer.

adamzochowski(10000) 5 days ago [-]

Miami Vice had two memorable tunes: the Miami Vice title theme and Crockett's Theme (with a more nostalgic feel). Both were made by Jan Hammer. If you want to see cool stuff, check out his 'Beyond the Mind's Eye', one of the earliest 3D music videos: https://www.youtube.com/watch?v=b5zMtCvWhG0

bigpeopleareold(10000) 6 days ago [-]

My first introduction to Vangelis was a vinyl of the album Spiral when I was younger ... I didn't even know he did Blade Runner until years later, but I really liked that album. Sad to hear he passed away though.

tgv(10000) 6 days ago [-]

Spiral and Albedo 0.39 were my introduction to his work. Great albums, quite possibly the best instrumental 'pop' albums of that time.

troyvit(10000) 6 days ago [-]

Maaaaaan about the time Blade Runner came out I was a fourth grader fumbling with the Chariots of Fire record pretty much every day. That was the first record I remember associating the different reflections on the grooves with the length of the song. Pretty clear given that side 2 was all one song.

So many elementary school crushes I dreamt of to that album.

Didn't get whacked upside the head by Blade Runner until like 1989 or something and then went on that endless quest to find the version of the soundtrack that most matched what you hear in the movie (there was some legal crap about releasing the original music). Ended up with a few of the CDs floating around.

My world wouldn't be the same without his music.

the_af(10000) 6 days ago [-]

> (there was some legal crap about releasing the original music).

That crap resulted in multiple bootleg versions of the Blade Runner soundtrack. I don't know if there is a definitive one :/ Maybe with the Special Edition Blu-ray?

I like the voiceover version of the tracks from the original CD ('do you like our owl?'), but I also like listening to the tracks without voiceovers.

trh0awayman(10000) 6 days ago [-]

Spiral is one of my all-time favorite albums - and the opening song is my favorite: https://www.youtube.com/watch?v=I-0Z5D7eRh8

hmahncke(10000) 6 days ago [-]

I listened to this album constantly as a teenager...

Historical Discussions: Billing systems are a nightmare for engineers (May 18, 2022: 775 points)

(775) Billing systems are a nightmare for engineers

775 points 7 days ago by Rafsark in 10000th position

www.getlago.com | Estimated reading time – 15 minutes | comments | anchor

'On my first day, I was told: 'Payment will come later, shouldn't be hard right?' I was worried. We were not selling and delivering goods, but SSDs and CPU cores, petabytes and milliseconds, space and time. Instantly, by an API call. Fungible, at the smallest unit. On all continents. That was the vision. After a week I felt like I was the only one really concerned about the long road ahead. In ambitious enterprise projects, complexity compounds quickly: multi-tenancy, multi-users, multi-roles, multi-currency, multi-tax codes, multi-everything. These systems were no fun, some were ancient, and often 'spaghetti-like'. What should have been a 1 year R&D project ended up taking 7 years of my professional life, in which I grew the billing team from 0 to 12 people. So yes, if you have to ask me, billing is hard. Harder than you think. It's time to solve that once and for all.'

This is a typical conversation we have with engineers on a daily basis. In that case, these are Kevin's words, who was the VP Engineering at Scaleway, one of the European leaders in cloud infrastructure.

Some of you asked me why billing was that complex, after my latest post about my 'Pricing Hack'. My co-founder Raffi took on the challenge of explaining why it's still an unsolved problem for engineers.

We also gathered insights from other friends who went through the same painful journey, including Algolia, Segment, Pleo, don't miss them! Passing the mike to Raffi.

When you're thinking about automating billing, this means your company is getting traction. That's good news!

You might then wonder: should we build it in-house? It does not look complex, and the logic seems specific to your business.

Also, you might want to preserve your precious margins and therefore avoid existing billing solutions like Stripe Billing or Chargebee that take a cut of your revenue. Honestly, who likes this rent-seeker approach?

Our team at Lago still has some painful memories of the internal billing system at Qonto that we had to build, maintain, and deal with. Why was it that painful? In this article, I will provide a high-level view of the technical challenges we faced while implementing hybrid pricing (based on both 'subscription' and 'usage'), and what we learned the hard way on this journey.

Engineers be like...

TL;DR: Billing is just 100x harder than you will ever think

'Let's bill yearly as well, it should be pretty straightforward,' claims the Revenue team. Great! Everyone is excited to start working on it. Everyone, except the tech team. When you start building your internal billing system, it's hard to think of all the complexity that will pop up down the road, unless you've experienced it before.

It's common to start a business with simple pricing. You define one or two price plans and limit this pricing to a defined set of features. However, as the company grows, the pricing gets more and more complex, just like your entire codebase.

At Qonto, our first users could only onboard on a €9 plan. We quickly decided to add plans, and 'pay-as-you-go' features (such as ATM withdrawals, foreign currency payments, one shot capital deposit, etc...) to grow revenue.

Also, as Qonto is a 'neobank', we wanted to charge our customers directly in their wallet, through a ledger connected to our internal billing system. The team started from a duo of full-time engineers building a billing system (which is already a considerable investment), to currently a dedicated cross-functional team called 'pricing'.

This is not specific to Qonto of course. Pleo, another Fintech unicorn from Denmark faced similar hurdles:

'I've learned to appreciate that billing systems are hard to build, hard to design, and hard to get working for you if you deviate from 'the standard' even by a tiny bit.'

admits Arnon Shimoni, leading Product Billing Infrastructure at Danish Fintech unicorn Pleo.

This is not even specific to Fintechs. The Algolia team ended up creating a whole pricing department, now led by Djay, a pricing and monetization veteran from Twilio, VMware, and ServiceNow. They pivoted to a 'pay-as-you-go' pricing model based on the number of monthly API searches.

'It looks easy on paper — however, it's a challenge to bring automation and transparency to a customer, so they can easily understand. There is a lot of behind-the-scenes work that goes into this, and it takes a lot of engineering and investment to do it the right way.'

says their CEO, Bernadette Nixon, in VentureBeat, and we could not agree more.

#1 - Dates

When implementing a billing system, dealing with dates is often the number 1 complexity. Somehow, all your subscriptions and charges deal with a number of days. Whether you make your customers pay weekly, monthly or yearly, you need to roll things over a period of time called the billing period.

Here is a non-exhaustive list of difficulties for engineers:

  1. How to deal with leap years?
  2. Do your subscriptions start at the beginning of the month or at the creation date of the customer?
  3. How many days/months of trial do you offer?
  4. Who decided February only holds 28 days? 🤔
  5. Wait, bullet 1 is also important for February... 🤯
  6. How to calculate a usage-based charge (price per seconds, hours, days...)?
  7. Do I reset the consumption or do I stack it month over month? Year over year?
  8. Do I apply a pro-rata based on the number of days consumed by my customer?

Although every decision is reversible, billing cycle questions are often the most important source of customer support tickets, and iterating on them is a highly complex and sensitive engineering project.

For instance, Qonto migrated the billing cycle start date from the 'anniversary' date, to the 'beginning of the month' date, and the approach was described here. It was not a trivial change.
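
Questions 1, 4, and 8 above all come down to month lengths. As a minimal illustration (not Qonto's actual implementation), Python's standard `calendar` module gives the real day count per month, which makes a day-based pro-rata calculation safe for February and leap years:

```python
import calendar
from datetime import date

def prorated_charge(monthly_price_cents: int, start: date) -> int:
    """Charge for the remainder of `start`'s month, pro-rated by day.

    Uses the real number of days in the month (28/29/30/31), so
    February and leap years are handled by the calendar module.
    """
    days_in_month = calendar.monthrange(start.year, start.month)[1]
    days_remaining = days_in_month - start.day + 1  # the start day is billed
    # Work in integer cents and round once, at the end.
    return round(monthly_price_cents * days_remaining / days_in_month)

# A customer on a €9.00 plan signing up on Feb 15 of a leap year:
charge = prorated_charge(900, date(2024, 2, 15))
```

Keeping money in integer cents and rounding only at the very end avoids floating-point drift accumulating across invoices.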

#2 - Upgrades & downgrades

Then, you need to enable your customers to upgrade or downgrade their subscriptions. Moving from a plan A to a plan B seems pretty easy to implement, but it's not. Let's zoom on potential edge cases you could face.


  1. The user downgrades in the middle of a period. Do we block features right now or at the end of the current billing period?
  2. The user has paid the plan in advance (for the next billing period)
  3. The user has paid the plan in arrears (for what he has really consumed)
  4. The user downgrades from a yearly plan to a monthly plan
  5. The user downgrades from a plan paid in advance to a plan paid in arrears (and vice-versa)
  6. The user has a discount applied when downgrading


  1. The user upgrades in the middle of a period. We probably need to give her access to the new features right now. Do we apply a pro-rata? Do we make her pay the pro-rata right now? At the end of the billing period?
  2. The user upgrades from a plan paid in advance to a plan paid in arrears
  3. The user upgrades from a monthly plan to a yearly plan. Do we apply a pro-rata? Do we make her pay the pro-rata right now? At the end of the billing period?
  4. The user upgrades from a plan paid in advance to a plan paid in arrears (and vice-versa)

We did not have a 'free trial' period at the time at Qonto, but Arnon from Pleo describes the additional scenarios this creates here.
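
One way to reason about the mid-period upgrade cases above is to compute a single signed proration amount: credit the unused share of the old plan and charge the same share of the new one. A minimal sketch under a hypothetical 'both plans paid in advance, charge immediately' policy:

```python
from datetime import date

def upgrade_proration(old_price_cents: int, new_price_cents: int,
                      period_start: date, period_end: date,
                      change_date: date) -> int:
    """Amount to charge (positive) or credit (negative) when switching
    plans mid-period, assuming both plans are paid in advance.

    Illustrative policy: credit the unused share of the old plan and
    charge the remaining share of the new one, right now.
    """
    total_days = (period_end - period_start).days
    remaining_days = (period_end - change_date).days
    credit = old_price_cents * remaining_days / total_days
    charge = new_price_cents * remaining_days / total_days
    return round(charge - credit)

# Upgrading from a €9 to a €29 plan halfway through a 30-day period:
delta = upgrade_proration(900, 2900, date(2022, 5, 1), date(2022, 5, 31),
                          date(2022, 5, 16))
```

A downgrade under the same policy naturally yields a negative number, i.e. a credit, which is one reason many systems defer downgrades to the end of the period instead.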

#3 - Usage-based computations

Subscription-based billing is the first step when implementing a billing system. Each customer needs to be assigned to a plan in order to charge the right amount at the right moment.

But, for a growing number of companies, like we did at Qonto, other charges come alongside this subscription. These charges are based on what customers really consume. This is what we call 'usage based billing'. Most companies end up having a hybrid pricing: a subscription charged per month and 'add-ons' or 'pay as you go' charges on top of it.

These consumption-based charges are tough to track at scale, because they often come with math calculation rules, performed on a high volume of events that need to be tracked.

Some examples:

Segment.com tracks the number of Monthly Tracked Users

This means that they need to COUNT the DISTINCT number of users each month and reset this value at the end of the billing period; the DISTINCT deduplicates visitors so only unique users are counted.

Algolia tracks the number of api_search per month

This means they need to SUM the number of monthly searches for a client and reset it at the beginning of each billing period.

It becomes even more complex when you start calculating a charge based on a timeframe. For instance, Snowflake charges the compute usage of a data warehouse per second.

This means that they sum the number of Gigabytes or Terabytes consumed, multiplied by the number of seconds of compute time.

Maybe an example we can all relate to is that of an energy company that charges $10 per kilowatt-hour of electricity. In the example below, you can get an overview of what needs to be modeled and automated by the billing system.

  • Hour 1: 10 KW used for 0.5 hour = 5 kWh (10 x 0.5)
  • Hour 2: 20 KW used for 1 hour = 20 kWh (20 x 1)
  • Hour 3: 0 KW used for 1 hour = 0 kWh (0 x 1)
  • Hour 4: 30 KW used for 0.5 hour = 15 kWh (30 x 0.5)

TOTAL = 40 kWh used x $10 ⇒ $400
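
Translating the electricity example into code shows the shape of most usage-based billing: aggregate (rate × duration) pairs, then price the total. A minimal sketch:

```python
def metered_cost(intervals, unit_price_cents):
    """Sum usage over (rate, duration_hours) intervals and price it.

    Mirrors the electricity example: rate in kW, duration in hours,
    so each term is an energy amount in kWh.
    """
    total_units = sum(rate * hours for rate, hours in intervals)
    return total_units, total_units * unit_price_cents

# The four hours from the example above, priced at $10.00/kWh (in cents):
usage = [(10, 0.5), (20, 1), (0, 1), (30, 0.5)]
units, cost = metered_cost(usage, 1000)
```

The hard part in production is not this arithmetic but collecting the interval events reliably at high volume, which is where the idempotency concerns of the next section come in.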

#4 - Idempotency done right

Working with companies' revenue can be tough.

Billing mismatches sometimes happen. Charging a user twice for the same product is obviously bad for customer experience, but failing to charge when it's needed hurts revenue. That's partly why Finance and BI teams spend so much time on revenue recognition. A 'pay-as-you-go' company's billing system processes a high volume of events; when an event needs to be replayed, it must happen without billing the user a second time. Engineers call this 'idempotency': the ability to apply the same operation multiple times without changing the result beyond the first application.

It's a simple design principle; maintaining it at all times, however, is hard.
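
An idempotency key attached to every billing event is the usual way to get this property. A toy in-memory sketch (a real system would enforce the same invariant with a unique database constraint):

```python
class BillingLedger:
    """Append-only charge log where each event carries an idempotency key.

    Replaying the same event is a no-op, so a retried webhook or a
    re-run batch job can never double-charge.
    """
    def __init__(self):
        self._seen_keys = set()
        self.charges = []

    def apply_charge(self, idempotency_key: str, amount_cents: int) -> bool:
        if idempotency_key in self._seen_keys:
            return False  # already processed: do nothing, report a no-op
        self._seen_keys.add(idempotency_key)
        self.charges.append(amount_cents)
        return True

ledger = BillingLedger()
ledger.apply_charge("evt_123", 900)
ledger.apply_charge("evt_123", 900)  # replay of the same event: ignored
total = sum(ledger.charges)
```

The key point is that the caller can retry blindly; deciding *where* the key comes from (event ID, request hash, client-supplied token) is the actual design work.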

#5 - The case for a CCC - Cash Collection Officer

Cash collection is the process of collecting the money customers owe you. And the bête noire of cash collection is dunning: when payments fail to arrive, the merchant needs to persist and make repeated payment requests to their customers without damaging the relationship. These repeated reminders are called 'dunnings'.

At Qonto, we called these 'waiting funds'. A client's status is 'waiting funds' when they successfully went through the sign-up, the KYC and KYB process, yet their account balance is still 0.

For a neobank, the impact is twofold: you can't charge for your service fees (a monthly subscription), and your customer does not generate interchange revenue. (A simplistic explanation of interchange revenue: when you make a €100 payment with Qonto, or any card provider, Qonto earns €0.50 to €1.00 of interchange revenue through the merchant's fees.) Therefore, your two main revenue streams are null, but you did pay to acquire, onboard, and KYC the user, and to produce and ship a card to them. We often half-joked about the need to hire a 'chief waiting funds officer': the financial impact is as high as the problem is underestimated.

Every company has 'dunning' challenges. For engineers, on top of all the billing architecture, this means they need to design and build:

  • A 'retry logic' to ask for a new payment intent
  • An invoice reconciliation (if several months of charges are being recovered)
  • An app logic to block the access in case of payment failure
  • An emailing workflow to urge a user to proceed to the payment
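
The retry logic in the first bullet usually boils down to a fixed dunning schedule. A minimal sketch, with an entirely hypothetical set of retry offsets:

```python
from datetime import date, timedelta

# Hypothetical dunning schedule: retry the payment at increasing
# intervals after the first failure, then give up and block access.
RETRY_OFFSETS_DAYS = [1, 3, 7, 14]

def dunning_plan(first_failure: date):
    """Return (retry_dates, cutoff_date) for a failed payment."""
    retries = [first_failure + timedelta(days=d) for d in RETRY_OFFSETS_DAYS]
    cutoff = retries[-1] + timedelta(days=1)  # block access the day after
    return retries, cutoff

retries, cutoff = dunning_plan(date(2022, 5, 1))
```

Each scheduled retry would also drive the reminder emails and, past the cutoff, the app logic that blocks access, so the schedule is the single source of truth for all four bullets above.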

Some SaaS are even on a mission to fight dunning and have built full-fledged companies around cash collection features, such as Upflow, which is used by successful B2B scale-ups including Front and Lattice, the leading HRtech.

'Sending quality and personalized reminders took us a lot of time and, as Lattice was growing fast, it was essential for us to scale our cash collection processes. We use Upflow to personalize how we ask our customers for money, repeatedly, while keeping a good relationship. We now collect 99% of our invoices, effortlessly',

says Jason Lopez, controller at Lattice.

#6 - The labyrinth of taxes and VAT

Taxes are challenging and depend on multiple dimensions.

What are the dimensions?

Applying tax to your customers depends on what you are selling, your home country and your customers' home country. In the simplest cases, your tax decision tree should look like this:

Now, imagine that you sell different types of goods/services to different taxonomies of clients in 100+ countries. If the logic looks complex on paper, automating it in software is at least ten times harder.

What do engineers need to do?

Engineers need to build an entire tax logic within the application. This logic is pyramidal, based on both the customers and the products your company sells.

  1. Taxes on the general settings level. Somehow, your company will have a general tax rate that is applied by default in the app.
  2. Taxes per customer. This general setting tax rate can be overridden by a specific tax applied for a customer. This per-customer tax rate depends on all the dimensions explained in the image above.
  3. Taxes per feature. In some cases, tax rates can also be applied by feature. This is mostly the case for the banking industry. For instance, at Qonto, banking fees are not subject to taxes and non-banking fees have a 20% VAT rate for all customers. Engineers created a whole tax logic based on the feature being used by a customer.
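
The three levels above are essentially a most-specific-rule-wins lookup. A minimal sketch (names and rates are illustrative, loosely modeled on the Qonto example):

```python
def resolve_tax_rate(default_rate: float,
                     customer_overrides: dict,
                     feature_overrides: dict,
                     customer_id: str,
                     feature: str) -> float:
    """Resolve the applicable tax rate, most specific rule first:
    per-feature override, then per-customer override, then the
    general default. Structure is illustrative only.
    """
    if feature in feature_overrides:
        return feature_overrides[feature]
    if customer_id in customer_overrides:
        return customer_overrides[customer_id]
    return default_rate

# Qonto-style example: banking fees untaxed, everything else 20% VAT.
rate = resolve_tax_rate(
    default_rate=0.20,
    customer_overrides={},
    feature_overrides={"atm_withdrawal": 0.0},
    customer_id="cus_42",
    feature="atm_withdrawal",
)
```

In practice the override tables are themselves derived from the country/product decision tree above, which is where most of the real complexity lives.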

With billing, the devil is in the details. That's why I always cringe when I see engineering teams build a home-made system, because they think it's not 'that complex'.

If you've already tackled the topics listed above and think it's a good investment of your engineering time, go ahead and build it in-house. Make sure to budget for the maintenance work that is always needed.

Another option is to rely on existing billing platforms, built by specialized teams. If you're considering choosing one or switching, and you think I can help, please reach out!

To solve this problem at scale, we adopted a radical sharing approach. We've started building an Open-Source Alternative to Stripe Billing (and Chargebee, and all the equivalents).

Our API and architecture are open, so you can embed, fork, and customize them as much as your pricing and internal processes need. As you've read, we experienced these pain points first-hand.

Request access or sign up for a live demo here, if you're interested!

All Comments: [-] | anchor

Melatonic(10000) 7 days ago [-]

Why XXXXXXX are a nightmare for engineers:


On a more serious note, though, billing can be hugely complicated, and I am really glad I do not have to deal with it.

Rafsark(10000) 7 days ago [-]

Do you have a dedicated team in charge of it?

jlg23(10000) 7 days ago [-]

I've implemented a few billing systems, and 'the nightmares' were never related to engineering but always to specification: if bizdev does not know what they want, no outsourced implementation will be able to help them.

AnhTho_FR(10000) 7 days ago [-]

From your experience, who should specify the billing system? The Product team?

We're building Lago so that the right questions / decisions frameworks are asked during the implementation, so it's like a forcing power embedded in the product.

Our experience is, unless the Product/Biz team has been exposed to billing, they will never specify in a way that is precise enough, so the engineers who implement billing will have to think/assess/decide themselves.

retcon(10000) 7 days ago [-]

>/as Qonto is a 'neobank', we wanted to charge our customers directly in their wallet, through a ledger connected to our internal billing system. The team started from a duo of full-time engineers building a billing system (which is already a considerable investment), to currently a dedicated cross-functional team called 'pricing'.

Stored procedures (ca. Oracle v6) suddenly sound like a walk in the park.

AnhTho_FR(10000) 7 days ago [-]

They do indeed!

yobbo(10000) 7 days ago [-]

One design decision that, for me, seems to simplify things is to consider the 'business system' a type of state machine that records all business events and serves as a 'source of truth'. If the events are not recorded, they have not occurred from the business perspective. A ledger-type architecture can be useful.

This means that business events or user operations generate state transitions, which eventually are implemented as database transactions. The event log can be stored and inspected.

The end user terms are encoded in the state-transitions. Contract obligations are encoded in the state of the database.

As for calendar issues - operations need to be performed over 'real calendars'. It might be practical to 'materialize' the business calendar into discrete units for this purpose.
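
That 'events as source of truth' idea can be sketched in a few lines: state is never mutated directly, only derived by replaying the recorded log through a pure transition function (illustrative event names and amounts):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    kind: str          # e.g. "subscribed", "usage", "payment"
    amount_cents: int

def apply(balance: int, event: Event) -> int:
    """Pure state-transition function: (state, event) -> new state."""
    if event.kind in ("subscribed", "usage"):
        return balance - event.amount_cents  # the customer owes more
    if event.kind == "payment":
        return balance + event.amount_cents
    return balance

log = [Event("subscribed", 900), Event("usage", 250), Event("payment", 900)]
balance = 0
for e in log:
    balance = apply(balance, e)
# Replaying the same log always yields the same balance.
```

Because the log is append-only and `apply` is pure, the ledger can be audited, replayed, and inspected exactly as the comment describes.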

Rafsark(10000) 7 days ago [-]

For sure; I do think an event-based solution working as a source of truth for the billing is the right solution. However, it still creates engineering difficulties (making sure you don't ingest the same event twice, for instance). The ledger-type architecture can definitely work. When we built the system for a fintech, it was actually an event-based architecture connected to a ledger (taking the money out of a wallet). I think the whole process would be:

- Ingest events for usage-based features
- Store these events in a database and get idempotency right
- Aggregate the usage of these events with the most common aggregate functions
- Price this usage inside plans or subscriptions
- Assign a subscription to a customer
- Trigger a webhook (used for invoice/payment) at the end or beginning of the billing period

eesmith(10000) 7 days ago [-]

A classic, from http://www.canonical.org/~kragen/tao-of-programming.html

> There was once a programmer who was attached to the court of the warlord of Wu. The warlord asked the programmer: ``Which is easier to design: an accounting package or an operating system?''

> ``An operating system,'' replied the programmer.

> The warlord uttered an exclamation of disbelief. ``Surely an accounting package is trivial next to the complexity of an operating system,'' he said.

> ``Not so,'' said the programmer, ``when designing an accounting package, the programmer operates as a mediator between people having different ideas: how it must operate, how its reports must appear, and how it must conform to the tax laws. By contrast, an operating system is not limited by outside appearances. When designing an operating system, the programmer seeks the simplest harmony between machine and ideas. This is why an operating system is easier to design.''

> The warlord of Wu nodded and smiled. ``That is all good and well, but which is easier to debug?''

> The programmer made no reply.

buescher(10000) 7 days ago [-]

Which came first? I am pretty sure it was computer accounting systems.

Which is older, the oldest continuously supported operating system, or the oldest continuously supported (computer) accounting system?

AnhTho_FR(10000) 7 days ago [-]

OMG love the story, thanks for sharing!

PaulHoule(10000) 7 days ago [-]

Applications programming has a complexity of its own that's distinct from the problems in systems programming.

alex_suzuki(10000) 7 days ago [-]

Thoughtful, and made me laugh. Thanks!

bombcar(10000) 7 days ago [-]

Heh, I'd say the operating system is easier to debug, because the accounting package could literally have 'bugs' that are caused by errors in the tax laws, business processes, etc, which cannot be fixed, only worked around.

teddyh(10000) 7 days ago [-]

The linked page does not show it, but The Tao of Programming is actually a book which you can buy:


I own a copy; it's great.

epberry(10000) 7 days ago [-]

Usage-based billing is not only hard to implement but hard to consume. No one really knows what they are spending or how much particular customers or products cost. I did a writeup a week ago on CDNs and ended up spending several hours in spreadsheets.

I had felt this pain vaguely before, but actually seeing it every day at my current job makes me realize there are engineers suffering on both sides of software billing.

hobo_mark(10000) 7 days ago [-]

Speaking of usage-based billing: if I intend to charge users a usage-based bill at the end of the month, as far as I have looked, Stripe is the only provider that supports this (they call it 'metered billing'). Is there really no alternative?

Rafsark(10000) 7 days ago [-]

Stripe offers simple metering, but in the end you do the job of calculating the metered charges yourself (aggregate all usage and send the value to Stripe). It gets harder when you are an API or cloud company that wants to bill for server usage, for instance. The calculations get heavy, costly, and hard to maintain. This is why I think Stripe is not a usage-based solution, and also the reason I decided to create a company trying to solve this pain :)

kdeldycke(10000) 7 days ago [-]


Usage-based billing is incomplete without its suite of reporting, consolidation and visualization tools. No wonder there is a cottage industry of cloud-computing consultants and SaaS solutions focused on helping you make sense of your AWS/GCP/Azure invoices.

spchampion2(10000) 7 days ago [-]

Just wait until you meet billing's angry roommate: invoicing.

In the US, an invoice is just a weird PDF that you might glance at before sending off to your accounts payable team. But in other countries, especially those that use VAT style taxing systems, an invoice can be a legal document that expresses costs and taxes in a legally prescribed way for accounting purposes. Many countries have prescriptive rules over how you invoice, who can write invoices, when you can invoice, what goes on an invoice, etc. And every country does it differently.

Are you building a service that charges incrementally many times per period? Or even worse, refunds incrementally? You might be sending a truckload of expensive accounting papers to your customer every month, one per transaction. And each of those pages was printed using government prescribed software from oddball vendors you must use or else.

ozim(10000) 7 days ago [-]

Yeah - and simply creating an invoice creates taxable income in the EU - if the customer drags their feet, you still owe VAT to the taxman.

If you want to change an invoice you have to issue a special 'correction invoice', because changing the 'real invoice' is a criminal offense - fun things :)

johnrgrace(10000) 7 days ago [-]

And most big companies that agree to pay you net X days count those days from the presentation of a correct invoice. If the invoice doesn't get to them, or it isn't right, the clock doesn't start for them to pay you. Getting invoices wrong can quickly cause major cashflow drops.

throwaway6532(10000) 6 days ago [-]

>And every country does it differently

Seems like the real world is just chock-full of Liskov Substitution Principle violations :P

cm2012(10000) 7 days ago [-]

Just one of many reasons it's easier to do business in the US

wonton53(10000) 7 days ago [-]

I work on a payment system that invoices consumers on behalf of other companies and pays out the money by splitting the payout across multiple partners (luckily I do not handle the support and follow-up). The system has the ability to refund invoices, and customers sometimes end up paying more than, or only part of, the invoice. On top of all this it integrates with several old 90s systems with poor datetime handling and poor uptime (sometimes a Windows XP box in the customer's office). It also handles card payments, and overall, by far the hardest thing to get right is the invoicing. It's just so extremely fuzzy and time-sensitive.

trollied(10000) 7 days ago [-]

Yup, it's a nightmare. Invoices needing to be in specific number sequences. Any corrections need to be dealt with by reissuing a special 'corrective invoice'. Don't even get me started on geocoding for taxes. Urgh. Also, when you've got the invoicing right, you've got to work out how to do the feeds to the accounting systems so that it all posts to the right place.

codegeek(10000) 7 days ago [-]

And then just wait until you meet Invoicing's annoying cousin: Purchase Orders (PO). There is a PO. Can I pay this across multiple invoices? Sorry, the PO is not approved yet, we cannot accept an invoice. Sorry, the invoice doesn't reference our PO. Can you fix that please?

vidarh(10000) 7 days ago [-]

Many years ago I led a billing team at Yahoo! responsible for billing for premium services in European markets.

My team existed for the sole reason that the European business did not trust that the US payment services team understood European needs, and the 'invoice is a legal document' thing was one of them. I spent so many meetings repeating to the US team that no, we could not switch to their invoicing as long as a new software release might retroactively change the template used to show already issued invoices (e.g. the registered office might change, or the VAT number).

We didn't have to deal with printed invoices thankfully, but we did ensure we produced each invoice once and stored it, so that the European finance team could sleep at night.

At the time Yahoo! had 8 different payment platforms. Some of those were due to weirdness around acquisitions and Japan (which was a joint venture), but apart from mine I believe two others also existed because of local weirdness that the local businesses decided it was safest not to let Yahoo! US mess with.

At the time I left, after 3 years of that, we'd managed to get agreements to migrate a few bits and pieces to the US after they'd shown they understood the requirements for specific features, but it was like pulling teeth.

908B64B197(10000) 7 days ago [-]

> You might be sending a truckload of expensive accounting papers to your customer every month, one per transaction. And each of those pages was printed using government prescribed software from oddball vendors you must use or else.

I guess that's good for employment numbers? There are really two ways to create jobs: innovate or regulate. The US and the EU have apparently chosen very different paths. With every country having its own special snowflake laws, there's a reason EU founders always try to come to the US first.

dawkins(10000) 7 days ago [-]

Invoicing gets even better in countries like Portugal, where you have to send to the tax authorities every invoice that you generate.

bombcar(10000) 7 days ago [-]

And even worse - your biggest customers won't even smell your invoice unless you enter it line by line into some ancient SAP system developed 20 years ago, where everything cloud related is classified as telephony except storage space which must be classified as filing cabinets (movable) or your invoice will be (very slowly) rejected.

And if it is rejected you have to enter it all again by hand; editing isn't a feature.

Rafsark(10000) 7 days ago [-]

Hi! We decided to aggregate all costs of a 'billable period'.

Imagine you bill your customers monthly (the billable period), all the charges (usage-based features + subscription) will appear as line items of a single invoice.

This enables you to gather all the fees of a period into one total invoice, while still providing granularity to your customers (a breakdown of all the fees to be paid).
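A rough sketch of that aggregation (hypothetical names, amounts as integer cents):

```python
from collections import defaultdict

def build_invoice(charges):
    """Group one billable period's charges into line items of a single invoice."""
    lines = defaultdict(int)
    for description, amount_cents in charges:
        lines[description] += amount_cents
    return {"line_items": dict(lines),
            "total_cents": sum(lines.values())}

# A subscription plus two usage-based charges accrued during the month.
charges = [
    ("Pro plan subscription", 4900),
    ("API calls overage", 120),
    ("API calls overage", 75),
]
invoice = build_invoice(charges)
print(invoice["total_cents"])  # 5095
```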

marcosdumay(10000) 7 days ago [-]

Yep. Invoice software is country-specific. One can sell invoicing software to several countries, it just cannot be the same software. And no, you can't just abstract the differences away, because what invoices focus on varies widely.

At least we are in a situation where the seller is almost only subject to the laws of the seller's country. There are some exceptions, but one can mostly deal with those (the EU has plenty; the good news is that if the seller is not also in the EU, most do not apply).

Also, I haven't seen a country accounting system that made accounting papers inherently expensive. Some make refunds very expensive, but not the papers. It's more a matter of badly designed software and processes.

duxup(10000) 7 days ago [-]

Not even legal issues.

In my experience invoicing, and really all 'printable' documents, is the land of 'oh, someone did something wrong, put X on the document'. And then again and again and again... And 'this is two pages long, that's too much', and on and on ...

It's bikeshedding insanity with no ideal or end in sight. Every change is arbitrary, with no real measure of success.

I swear the only time anyone LOOKS at these documents is to bitch about them.

Oh man and just as if I summoned it someone just sent me a ticket about something on an invoice.

elietoubi(10000) 7 days ago [-]

Hi! I co-led the billing stack rewrite at Uber circa 2017 and it was an absolute nightmare. Some of the unexpectedly hard things that are not mentioned in this article:

- Managing promotions was very complex

- Modeling money (we used to model it as cents, i.e. 1/100 of the common denomination... it turns out that broke in a lot of countries, like Japan, where the yen doesn't have cents)

- Timezones are a hellhole
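The yen problem is why money is usually modeled with a per-currency minor-unit exponent, as ISO 4217 defines (2 for USD/EUR, 0 for JPY, 3 for KWD). A small sketch:

```python
from decimal import Decimal

# Minor-unit exponents per ISO 4217 (a few examples, not exhaustive).
CURRENCY_EXPONENT = {"USD": 2, "EUR": 2, "JPY": 0, "KWD": 3}

def to_minor_units(amount: str, currency: str) -> int:
    """Convert a decimal amount string into integer minor units."""
    return int(Decimal(amount).scaleb(CURRENCY_EXPONENT[currency]))

print(to_minor_units("19.99", "USD"))  # 1999
print(to_minor_units("500", "JPY"))    # 500 -- the yen has no cents
print(to_minor_units("1.234", "KWD"))  # 1234 -- three decimal places
```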

One interesting open source project was https://killbill.io/

PS: since then I built a billing and invoicing startup .. check it out: zenbill.com

0des(10000) 7 days ago [-]

Hey thanks for showing me this. What lies beyond the starter plan in concrete terms?

nickdothutton(10000) 7 days ago [-]

The reason billing systems are hard is because they tend to just accumulate features/behaviours/requirements. Forever. Although billing is not my specialty I have had to involve myself in it as product owner and business unit manager. Try and imagine the amount of logic, complexity, rules, and exceptions involved in a system that has accumulated perhaps 30-40 years of customer sign-ups. Umpteen contracts, umpteen iterations of each contract, and perhaps 50 or 100 products, each sold in 10 or 100 countries. I now know more about the tax system in South America than I ever wanted to.

Rafsark(10000) 7 days ago [-]

Yep, totally right. Same at the previous company we worked for, where we built the billing logic. You start doing it in-house because the first version is simple. It gets complicated when you keep adding paid features to a product. We often see product managers becoming experts in taxes and accounting :)

r00fus(10000) 7 days ago [-]

I see building or implementing these kind of systems as 'discovering' use cases for given scenarios/customers/markets. That's why 'billing' is almost too generic - instead 'billing for telco in South American markets' - is more useful because those verticals have specific sets of use-cases and challenges.

Brystephor(10000) 7 days ago [-]

I've worked at two companies in their payment systems. This post does a good job of mentioning many of the billing system challenges. I think it understates how difficult and important idempotency is to maintain at scale with multiple teams though.

Additionally, it didn't mention financial regulation changes. India had changes sometime in the last few years which required whole new systems built that were specific to India customers. One example of complexity is this: does the system apply to customers who are in India or does it apply to businesses who are in India (the owners might not be)?

He also did not mention fraud, which is basically uncollected spend that typically results in chargebacks. If you get too many chargebacks, you'll get fined. If you get even more, you'll get kicked off the card network and will no longer be able to accept cards within that network.

Post pay (customer gets the product and pays afterwards) models are subject to more fraud, uncollected charges, etc.

There's also the fun of dealing with downstream payment processors. I haven't worked with a processor or bank yet that was very reliable. Sometimes they'd return 500 HTTP codes and then process the payment anyway, requiring manual intervention. Sometimes that happens with whole files of payments, because most banks deal in files for transactions.

tablespoon(10000) 7 days ago [-]

> Additionally, it didn't mention financial regulation changes. India had changes sometime in the last few years which required whole new systems built that were specific to India customers. One example of complexity is this: does the system apply to customers who are in India or does it apply to businesses who are in India (the owners might not be)?

IIRC, I've read this is one of the reasons that systems like SAP are so popular. They might be an ugly monster from a technical perspective and seem overly complex, but they provide thoroughly tested implementations that handle all that stuff.

AnhTho_FR(10000) 7 days ago [-]

Hey @Brystephor, I'm @Rafsark's co-founder. We could have spent more time on idempotency, indeed. We have not deep-dived into the three other aspects you mentioned; we're actually planning to 1/ open-source the articles and add additional challenges as they are mentioned, and 2/ open our roadmap based on these inputs as well. Thanks for sharing! Will ping you back here when it's live. PS: we had not thought of financial regulation changes!

robot(10000) 7 days ago [-]

Problem is, it still requires an API to integrate. Stripe has all these capabilities, but API integrations are needed to use them. If you are looking to build a micro-SaaS without dealing with billing API integrations, check out our software: https://saasbox.net. It's built to completely eliminate billing-related software development. It doesn't handle all the corner cases mentioned in the article, but some of them are handled, such as plan upgrades/downgrades with pro-rating, editing plans on the fly, migrating users across plans, and notifying your application of those changes.

Rafsark(10000) 7 days ago [-]

Why is it a problem to use an API?

vikeri(10000) 7 days ago [-]

My understanding is that Stripe billing can be managed completely no-code from the Stripe dashboard? And I think it includes automatic tax, prorations etc.

0xbadcafebee(10000) 7 days ago [-]

Don't spend time, effort and money on things that do not bring business value. Any time I see a company with engineers building something they could buy off the shelf [and the thing they're building is not the product] I know their priorities are whack. I've worked for several companies that had more money than brains. They'd spend years, and millions of dollars in salaries, to build shit they could have bought and implemented in two months. And not only that, but multiple iterations of failing to build said thing, by multiple teams.

Just because a tractor is used for farming doesn't mean a farmer should build her own tractor.

afarrell(10000) 7 days ago [-]

At some companies, it takes more effort to go through the procurement process than to build the thing.

kdeldycke(10000) 7 days ago [-]

The essence of MVP I guess.

Now, back to the article's topic, the real question is: is there an off-the-shelf billing product available out there?

janci(10000) 7 days ago [-]

I was working on a billing system for utility company and it was a nightmare but not for the reasons from the article.

Dates are no problem, everything is billed at the end of month, quarter or year.

Upgrades and downgrades are simplified: customers can legally change tariff only on predefined dates/events.

Usage - yes, but the examples in the article do not even scratch the complexity of this

Idempotency is not really needed, the billing process is not continuous, it is a batch job.

Cash collection is out of scope of billing software.

Taxing was clear. Yes, there were multiple billable items with different tax rates, but not too complex. All customers were local.

The nightmare part was:

- customers can have multiple consumption locations, location can have multiple meters and customer can request to split the bill on multiple invoices or combine multiple bills to single invoice as they please

- meters can be replaced at any time

- meter readings are available on dates totally independent from the billing cycle. Most of the consumption data is mere forecast.

- when the actual consumption data is available, everything must be recalculated AND compensated (on a correction invoice showing the difference)

- actual consumption data can be wrong and may be corrected at a later date, even multiple times

- consumption points can be added or removed or moved to different customer at any time, but this information is only available after the fact

- the prices can change mid-billing cycle for some customers but the consumption data is not available with that granularity

- customer legal information (company name, address) can change mid-cycle

politician(10000) 7 days ago [-]

Did the company use a relational database to deal with the billing and subsequent adjustments? I've been there: it can be more straightforward to use an event streaming approach that recalculates billing as new events arrive.

1minusp(10000) 6 days ago [-]

I work in a closely related space: the parent's comments are accurate. Some of these items can be solved with better forecasting models, but the issue is that almost no one actually cares about 'better' forecasting at the individual-meter level. System-wide (or at least substation-level) forecasts are well studied for supply/demand considerations (at a minimum). Also, the variability in consumption patterns at an individual level is large. IOW, one might be able to generate accurate forecasts for 'typical' residential consumption (at a per-household level), but commercial customers can be very different. Ideally, IF (and this is a BIG if) forecasts for 'most' premises were accurate enough, then one could claim that the missing data shouldn't really cause a recalculation of the bill once the actual consumption data comes through, except in cases where the error is large. Guess that means the error check/correct process needs to continue to exist.

The meter being replaced shouldn't be a major issue; this is relational data (at least we treat it all as relational) that should be captured by the customer information system ('CIS'), and should be available with a 1-2 day delay. A similar argument applies to the other relational aspects of the premise under consideration. Not saying those are easy (more ways for this process to have gaps).

AnhTho_FR(10000) 7 days ago [-]

Thanks janci for sharing your experience!

Indeed, we only scratched the surface of the complexity of metered billing, we'll do a deep dive soon on this topic (it does deserve its own post).

I think for the 'nightmares' you mention:

- Some might be specific to utilities (not applicable to an online business), such as 'meter readings are available on dates totally independent from billing cycle'.

- Some topics might be simpler for a utility co: tax, for instance (you know where your users are, and they are limited to a geographic region).

- But some nightmares, such as 'the prices can change mid-billing cycle for some customers but the consumption data is not available with that granularity', really resonate for online businesses too.

Thanks for sharing, great insights for our future post!

_glass(10000) 7 days ago [-]

I was also consulting for a utility company. Billing was easy; we used SAP (lol). But CRM (where we also used SAP) was a nightmare. The different processes were a nightmare. I mean, utilities are amazingly complex logistically: new metering for a single house build, when it's ordered, installed, etc.

cletus(10000) 7 days ago [-]

Pretty much everything is more complicated than people (including engineers) think.

Take a simple example: selling clothing in a retail store. Sounds simple right? You have an inventory of items and someone pays for it. Should be a simple transaction, right? Well, now you're dealing with:

- How do you determine the price? There might be a retail price but there are sales, discounts, overrides for various reasons (eg defective goods), etc;

- You have to handle different payment options eg cash, credit card, debit card, gift cards, store credit. Payment methods can be mixed;

- You have to handle returns. These generally have to go back to the original form of payment. Some goods might not be eligible for a return. What if the payment is split between store credit and a credit card? Where does the balance go?

- Identity of the person making the sale or handling the return;

- Authorization procedures for returns and price overrides.
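Even the 'returns go back to the original form of payment' rule needs an explicit allocation policy once payment was split. One hypothetical policy, sketched: refund each original tender up to what it paid, walking the tenders in reverse order of payment:

```python
def allocate_refund(tenders, refund_cents):
    """tenders: list of (method, paid_cents) in the order they were charged.
    Returns (method, refunded_cents) pairs; the last tender is refunded first."""
    allocations = []
    remaining = refund_cents
    for method, paid in reversed(tenders):
        portion = min(paid, remaining)
        if portion:
            allocations.append((method, portion))
            remaining -= portion
    if remaining:
        raise ValueError("refund exceeds the amount originally paid")
    return allocations

# $80 on a card plus $20 in store credit; the customer returns $30 of goods.
tenders = [("credit_card", 8000), ("store_credit", 2000)]
print(allocate_refund(tenders, 3000))
# [('store_credit', 2000), ('credit_card', 1000)]
```

Whether store credit or the card should be refunded first is exactly the kind of business decision the code can't answer for you.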

Everything is complicated.

As an aside, this is one reason why I'm so bearish on Web3 and smart contracts in general. The edge cases are so complex and possibly unknowable that codifying these, particularly on an immutable blockchain, to remove the need for human intervention seems doomed to failure.

Take Web3 identity. If you lose your password there are methods for recovering your account. The first is a reset-password option. That may be insufficient, and there'll sometimes be a human avenue to recover your account. Now consider a Web3 identity. I've seen various incarnations of these, such as authorizing other identities who, with consensus, can recover your account. Well, those identities can be compromised, so you've just added an attack vector. Alternatively, that recovery mechanism may become insufficient as people lose those identities themselves.

If these problems weren't complex they'd be solved problems.

AnhTho_FR(10000) 7 days ago [-]

Hey Cletus! I'm Anh-Tho, one of the cofounders of Lago.

Totally agree with you on web3 identity and its edge cases, it's just billing ^10000 in terms of complexity.

I think for 'billing', people (especially non-tech people) usually don't understand the different blocks of the pricing stack: authorization, pricing grid, billing itself, invoicing (they often only see the end result: a receipt or invoice), payment gateways, dunning, accounting, etc. This post barely scratches the surface of it, but we intend to write more and deep-dive into each of these building blocks.

logicalmonster(10000) 7 days ago [-]

> As an aside, this is one reason why I'm so bearish on Web3 and smart contracts in general. The edge cases are so complex and possibly unknowable that codifying these, particularly on an immutable blockchain, to remove the need for human intervention seems doomed to failure.

I do think the potential for errors is currently real (people mess up simple wallet transfers all the time), but it's reasonable to assume that building these big things, and making them easy for normal people to use, is going to take time.

It's easy to forget how ridiculously early into crypto humans still are. We're only about 14 years into the invention of crypto. We're only about 7 years since the release of Ethereum. The general public barely understands what crypto is other than having some conception of it as some kind of Internet money. Most people cannot discuss what a blockchain is, what smart contracts are, or why oracles are important. And forget the general public for a minute: even smart technical people on HN can barely give a good description of how all of these pieces work and fit together.

trompetenaccoun(10000) 7 days ago [-]

>If these problems weren't complex they'd be solved problems.

That's an axiomatic and therefore meaningless statement. Knowing something is hard to solve doesn't mean new tech won't bring improvements to the problem.

It doesn't follow that having an immutable ledger means it has to be used for every single application. That would be like claiming trains are useless because they can't climb stairs. They don't have to do that, we keep walking the stairs just like we've always done.

Immutability is great in situations where you want to record that something has taken place and don't want disputes about this. Some of the things you mentioned can be automated to some degree. Take returns as an example: You sell the item with a unique token that makes it identifiable and everything else can be automated, including chargebacks. We already have this to some extent with bar codes and RFID tags, but it can be taken to the next level with a non-fungible and non-falsifiable token that's connected to a smart contract which automatically returns money, does the accounting, updates the inventory, reports the tax data and so forth if authorized by the right person. As the owner you can easier monitor and analyze the flow of money, better detect fraud and come up with a myriad of improvements. Let's assume we have autonomous delivery trucks soon, then the entire supply process could be automated, including the billing as soon as the product is accepted. With something like the Helium Network shipments can be tracked, so there can be no dispute over where they are located.

For company owners who really don't want to spend much time at the office, you could go a step further and even set conditions for disputes, like contracts triggering a certain action after a supplier's products have been rejected too many times because of quality issues, or vice versa if a customer keeps ordering and returning products from a factory. The contract could switch to automatically ordering from a competitor.

ben7799(10000) 7 days ago [-]

I agree with the thesis, but there are entire aspects of this that he didn't even touch upon. There is a multiverse of different billing nightmares for engineers.

I worked on a piece of software that bridged billing data from IBM mainframes out to the web. The hoops that had to be jumped through to get data out of the mainframes and the number of hacks and kludges involved was legendary. Likely everyone here has paid many bills in their life on websites that used that software. At one point I knew the vast majority of my personal bills had gone through it. Credit Cards, Utilities, Gas cards, etc.. Stuff everyone had to pay with paper before the web.

colejohnson66(10000) 7 days ago [-]

Could you expand on what kind of hacks/kludges were required?

nico75(10000) 7 days ago [-]

Solid job unearthing the numerous billing system challenges (almost triggering PTSD from prior experiences interacting with such systems...). While I haven't fully checked out the details of the solution you offer, I'm excited to see builders embarking on building better solutions :huggingface

spookthesunset(10000) 7 days ago [-]

I occasionally get PTSD flashbacks from my time maintaining a home brew billing system.

That shit is hard. There are so many ways to fuck it up because you didn't know what you were doing. The list of unknown unknowns for a billing system is huge unless you are an actual expert in billing, which you aren't, and neither is anybody on your team.

perlgeek(10000) 7 days ago [-]

I work at a b2b company with a 25+ years history, and OH MY GOD...

* Sales is paid (partially) on commission, so they do their very best to sell whatever the customer wants, not what the engineers know how to bill or what product management has prepared. It's really hard to push back against a contract worth 10x your yearly salary in revenue 'just' because the new features needed to bill it don't fit with existing features.

* It's very hard to keep track of which customer pays for what, exactly, and is entitled to which service. Easy for standard products, hard for bespoke products, nearly impossible for shared services (it took us >9 months to sunset a shared HTTP proxy that was in use by just 4 customers that didn't even pay for it...)

* Legacy contracts: imagine 25 years of iterating on product design while keeping the old contracts running (except when the customer leaves on their own). Some of the old contracts are hard or impossible to find, and nobody wants to spend their time rummaging through them to find out what exactly a customer is entitled to. A year ago we still had a customer paying €8 per GB of HTTP traffic (web hosting), because that was the going rate back in the day.

* Usage-based billing: half of our customers want to be flexible and only pay for what they use, and the other half uses SAP, and for them, invoice amounts changing from month to month are pure horror. So clever salespeople invent hybrid models ('pay for what you use, but fixed price until a certain threshold; we'll notify you when you get close to reaching the threshold'). Another source of complexity.

* Usage-based billing: it's basically a topic for the whole company, most departments provide services that needs metering, but metering is usually an afterthought to the architecture, hard to verify for somebody on the outside

* Usage-based billing: for big customers you cannot put all the details on the invoice (our mailing service provider is limited to 99 pages per invoice...), so you need separate reporting to drill into the details. Fire and brimstone will rain down if reporting and invoice disagree on some numbers...

This just scratches the surface. Customers that want to split their bills along cost centers that don't align with access control tenants, different electricity prices based on location, the list is endless.
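That hybrid model ('pay for what you use, but fixed price until a certain threshold') can be sketched with hypothetical contract terms:

```python
def hybrid_bill(usage_units, flat_fee_cents, included_units, overage_cents_per_unit):
    """A flat fee covers usage up to included_units; anything beyond is
    billed per unit. Also flags when the customer is close to the threshold."""
    overage = max(0, usage_units - included_units)
    amount_cents = flat_fee_cents + overage * overage_cents_per_unit
    notify_customer = usage_units >= 0.9 * included_units
    return amount_cents, notify_customer

print(hybrid_bill(800, 10000, 1000, 15))   # (10000, False)
print(hybrid_bill(1200, 10000, 1000, 15))  # (13000, True)
```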

Rafsark(10000) 7 days ago [-]

The article just scratches the surface too. I could have written a book to describe it!

Reporting for revenue is a hard game, indeed. I spent a lot of time trying to reconcile revenue streams at the same company where we ended up building the whole billing system. It's also not easy because metering is neither predictable nor easy to fit into a finance report.

christophilus(10000) 7 days ago [-]

This has been my life for the past few months. Billing and dates / times are the worst part of the job.

AnhTho_FR(10000) 7 days ago [-]

Have you looked at existing solutions? Why did not they fit?

my_usernam3(10000) 7 days ago [-]

Reading this article gives me a smidgen of sympathy for Xfinity. They have so many plan options/timed promotional deals, and seem to run into so many issues every seemingly minor change, now it makes sense to me why.

But in the same breath, still ... F** Xfinity

AnhTho_FR(10000) 7 days ago [-]

Anh-Tho, one of the co-founders of Lago, here! Yes, totally: each time I open a pricing page, I have full sympathy for the engineering team who made it happen! I used to be the one tweaking pricing plans in a spreadsheet and handing it over to engineers in a 'just make it happen' fashion!

bcrosby95(10000) 7 days ago [-]

I've had to deal with subscription-based billing, and in my experience it wasn't a nightmare. There are some corner cases, but those exist in every domain. And the stakes are definitely raised since you're dealing with people's money.

Usage based billing sounds like a nightmare though.

Taxes in the USA are a nightmare too. I have a friend who lives on a street where the tax rate is 7.5% on one side and 9.25% on the other. Literally no company we tested got this right. Even Amazon.

Rafsark(10000) 7 days ago [-]

For simple subscription-based billing, it's not that hard, you are right. It also depends on how many plans you offer your customers, and you still need to scratch your head over upgrades and downgrades.

I truly believe in usage-based billing, so I do think pricing is getting more and more complex over time.

Taxes are a nightmare for everyone also (in Europe too...)

BeFlatXIII(10000) 6 days ago [-]

How often do neighbors on that street ship their deliveries to one another to cheat on taxes?

awillen(10000) 7 days ago [-]

Once upon a time I was the first product person at a now-decacorn, and my first task was fixing the billing system. It was quite the monster, and we ended up implementing a combination of Zuora and an internal system, as there were some parts of the billing model Zuora couldn't handle.

I came away from this with one big lesson - if you're considering a complex billing model, consider the engineering implications first. With most products, engineering feedback gets taken into account - often product proposes something, engineering breaks it down, product realizes that feature x is vastly more complicated than they thought and not worth the effort, and the requirements are changed to simplify or remove it.

The one place that never seems to happen is pricing models - that decision gets made at the very top and handed down with no chance for feedback. I think that if the folks designing the billing system realized the costs, they might simplify things. If the complexity of your billing system means that 3% of your engineering team (plus additional folks in support and finance) is going to be working on it forever, but by simplifying it a bit you could keep 90% of it with only 1% of engineering working on it, that might be a good tradeoff - after all, that leaves you more engineers to build features, which should drive additional sales. Unfortunately, that analysis never seems to get done up front, and the cost is only understood after the billing system is deeply integrated into everything and would take an unpalatable amount of effort to change.

Rafsark(10000) 7 days ago [-]

Couldn't agree more on this. Even if finance, sales, marketing, or other departments are involved, billing is an engineering thing! Back in 2016, while building the billing system for qonto.com (a European fintech unicorn), we were surrounded by people willing to add complexity to a thing they didn't understand. A team of 2 engineers building it ended up as a whole squad called the 'Pricing Cross-Functional Team'... even the name of the team was complex :)

PeterisP(10000) 7 days ago [-]

Yes, this is a big thing - there needs to be a clear model of how variable pricing and discounts are going to work in this company, with the sales team being able to apply the actual amounts of any prices and discounts, but not arbitrarily changing the model.

It doesn't work when the model is too simple or restrictive, which will simply result in the model being violated. You do need at least the ability to customize pricing for individual customers and for specific invoices; beyond that, the decisions (e.g. whether you apply invoice discounts as percentages, as specific amounts, or both) can vary, but you just need to pick any reasonable option and stick with it.

simonjgreen(10000) 7 days ago [-]

There are few things worse in B2B than encountering a system where engineers have reinvented general accounting principles from scratch.

kdeldycke(10000) 7 days ago [-]

The only worthwhile reinvention of late might have been triple-entry accounting.

asciimike(10000) 7 days ago [-]

Nit: there's a lot of noise being made in comments about it being an open source product, but I haven't found the source (maybe I have to speak to an expert to get it?). github.com/getlago has the docs, which is nice (and gives a lot more info than is available on the website about how the product works), but not quite the same.

Also note:

> Our open-core version is forever free. We will introduce paying add-ons in the future, with a consumption-based approach.

AnhTho_FR(10000) 7 days ago [-]

It's coming soon, we're in the final steps of QA, but we could not wait to share this post! Thanks for the comments on the documentation, by the way!

bhawks(10000) 7 days ago [-]

'Tax logic' is an oxymoron and a trap for smart technologists to fall into. Tax codes are not logically consistent or clearly defined. They're a bunch of ill-fitting laws, bureaucratic/regulatory guidance, and interpretations of judicial decisions.

I tell folks working in this area to accept the illogical nature of it all and to be ready for all sorts of arbitrary last-minute changes. As the article points out, understanding time plays an important role here too: when different tax treatments take effect is an important question to answer.

c3534l(10000) 6 days ago [-]

Tax is written by a bunch of people whose background is overwhelmingly from law, not finance, for reasons that have nothing to do with logic, consistency, financial literacy, understanding, sanity, or common sense. Politicians frequently pass laws without understanding what it will do or what its implications are, because they just didn't think that far ahead. It is quite possibly the worst system you could come up with for writing tax law.

That said, there is a logic to most of it, but not in a way that allows you to come up with general heuristics or universal abstractions.

dhzhzjsbevs(10000) 6 days ago [-]

I'll see your billing and raise you localization. All in? Here, have some timezones for good measure.

AnotherGoodName(10000) 6 days ago [-]

Ok for the one-upmanship I raise you billing systems for telecommunications. 1000x the transactions per customer and 1000x the complexity of the business rules.

E.g. an SMS at an off-peak time is cheaper. If the user makes 100 or more calls a month, a special rate applies. How do you charge data per plan? If the user roams to another country, there's an entirely new set of complex billing rules.

lelandbatey(10000) 7 days ago [-]

Billing is a nightmare, and if I had one piece of advice for people building a pay-for-what-you-use system like most SaaS, it's this:

DO NOT bill against your business logic entities. They change, and doing a COUNT at the end of the month won't catch everything that changed or cost you money during the month. Instead, figure out what you bill for, and when you do that thing, record a row in a DB or into a stream of that 'billable event'.

Reconciling billable events is much easier to do, and it's tolerant of all the weird edge cases (such as a support person hard deleting data when they should soft delete, which would otherwise throw off your end of month counts).

There's a reason AWS (in general) can produce a log of everything you did which cost you money. It's painful, but it's less painful than the alternative.
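A minimal sketch of the advice above, assuming a SQLite-backed append-only event log (the table and column names are illustrative):

```python
import sqlite3

# Append-only log of billable events: rows are only ever inserted, never
# updated or deleted, so month-end aggregation can't be thrown off by
# business entities changing or being hard-deleted later.
db = sqlite3.connect(":memory:")
db.execute("""CREATE TABLE billable_events (
    id INTEGER PRIMARY KEY,
    customer_id TEXT NOT NULL,
    event_type  TEXT NOT NULL,   -- e.g. 'vm_hour', 'api_call'
    quantity    INTEGER NOT NULL,
    occurred_at TEXT NOT NULL    -- ISO timestamp of the billable moment
)""")

def record_event(customer_id, event_type, quantity, occurred_at):
    # Called at the moment the billable thing happens, not at month end.
    db.execute(
        "INSERT INTO billable_events (customer_id, event_type, quantity, occurred_at)"
        " VALUES (?, ?, ?, ?)",
        (customer_id, event_type, quantity, occurred_at))

record_event("acme", "vm_hour", 3, "2022-05-01T10:00:00")
record_event("acme", "vm_hour", 5, "2022-05-02T10:00:00")

# Month-end aggregation reads the log; deleting the VM later can't change it.
(total,) = db.execute(
    "SELECT SUM(quantity) FROM billable_events"
    " WHERE customer_id = ? AND occurred_at LIKE '2022-05%'",
    ("acme",)).fetchone()
assert total == 8
```

A real system would add idempotency keys and partitioning, but the core property is the same: the invoice is derived from an immutable log, not from a COUNT over mutable entities.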

Rafsark(10000) 7 days ago [-]

Couldn't agree more! I do think the best way to do it is to log ingested events in a DB, and then decide how you want to aggregate each billable feature over a defined billing period. This way lets you (i) keep track of everything, (ii) invoice complex behaviors, and (iii) provide great granularity to your customers on the final invoice.

theptip(10000) 7 days ago [-]

Sound advice. Anything to do with accounting, you'll probably want to treat as an append-only log of events. (Note, you can also use event-sourcing and have your domain entities _be_ events, in a billing bounded context that might make sense. Not usually the first approach I'd recommend though.)

On a similar note, make sure you think about bitemporality as well. In other words, "effective_at" vs. "created_at". You might learn that due to a bug you didn't record a billable event, and you need to insert it into the history (say, to put it in the correct billing period). But setting "created_at" in the past brings a bunch of issues; it's confusing, and you'll soon hit cases where lying about creation time trips you up (migrations, reasoning about what version of the code applied which events, etc.).

Fowler has good further reading here: https://martinfowler.com/articles/bitemporal-history.html.
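A minimal sketch of the bitemporal idea above, with illustrative names: billing queries filter on `effective_at`, while `created_at` honestly records when the row was written, even for back-dated corrections.

```python
from datetime import datetime, timezone

def utcnow():
    return datetime.now(timezone.utc)

class EventLog:
    """Append-only log with two timestamps per event:
    effective_at - when the event logically happened (drives billing periods)
    created_at   - when the row was actually written (never back-dated)"""
    def __init__(self):
        self.rows = []

    def append(self, payload, effective_at):
        self.rows.append({
            "payload": payload,
            "effective_at": effective_at,
            "created_at": utcnow(),
        })

    def in_period(self, start, end):
        # Billing filters on effective_at, not created_at.
        return [r for r in self.rows if start <= r["effective_at"] < end]

log = EventLog()
may_1 = datetime(2022, 5, 1, tzinfo=timezone.utc)
jun_1 = datetime(2022, 6, 1, tzinfo=timezone.utc)

# A bug meant this May event was only discovered later: we back-date
# effective_at so it lands in the right billing period, while created_at
# still tells the truth about when the correction was made.
log.append({"type": "api_call", "qty": 100},
           effective_at=datetime(2022, 5, 15, tzinfo=timezone.utc))
assert len(log.in_period(may_1, jun_1)) == 1
```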

bob1029(10000) 7 days ago [-]

I think event sourcing / logging is the way to solve this kind of thing.

You have an obvious start/stop checkpoint in the log, processing can be done in batch after hours, no information ever gets destroyed, etc.

Mutable database rows that are used for accounting purposes should sound dangerous to anyone trusted with these systems.

bombcar(10000) 7 days ago [-]

And depending on how you do it, it's not as hard as it sounds - for example, the first minute of 'cloud server V1' can get a row in the db, and each 'period' that row gets updated with the current minutes.

Or you can log them into tables that get processed into other invoice tables.

This also lets you keep grandfathered customers around as long as you want.

flappyeagle(10000) 7 days ago [-]

If you squint, this is similar to how e-commerce billing is done. You have stock keeping units that represent the logical item being sold, and every transaction clones that row as part of the order record.

This way, if the price changes, the item description changes, the images on the item change, etc, you can still have a record of what was actually sold and how much to refund a return etc.
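A toy illustration of that snapshot-on-sale pattern (the catalog, SKU, and prices are made up):

```python
import copy

# Mutable catalog: the "logical item" that marketing keeps editing.
catalog = {
    "sku-123": {"name": "Blue T-shirt", "price_cents": 1999},
}

def place_order(sku):
    # Snapshot the catalog row into the order line, so later changes to
    # price, description, or images can't rewrite the sales history.
    return {"sku": sku, "item": copy.deepcopy(catalog[sku])}

order = place_order("sku-123")
catalog["sku-123"]["price_cents"] = 2499  # price rises after the sale

# The order still knows what was actually charged, so a refund is exact.
assert order["item"]["price_cents"] == 1999
```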

rsanaie(10000) 7 days ago [-]

I'm getting PTSD reading this article

Rafsark(10000) 7 days ago [-]

Ahah, so sorry. Hope this is because of the substance, not the form ;)

api(10000) 7 days ago [-]

Any kind of business or accounting system tends to be an endless long tail of 'can the invoicing system send the invoice in Esperanto using only 16-bit Unicode characters on Tuesdays but only while it is raining and only for customers whose last names end with E and who signed up more than one year ago according to the Chinese calendar?' type features.

That's why attempts at 'no-code' systems where biz/accounting people can design their own system are a perennial fad; as these systems are developed, they always just end up evolving into weird domain-specific programming languages.

It's also why any sufficiently large company ends up implementing their own in-house accounting system and/or customizing some byzantine monstrosity like SAP which is really just another way of implementing your own.

nikanj(10000) 7 days ago [-]

The #1 reason those systems are moderately successful: When your sales person needs to write the logic for billing customer X, they'll try to make the customer contract more reasonable.

Normally sales people are deal-driven, and if promising the customer '14% off on the first three transactions on every rainy Tuesday, unless Monday was also rainy' gets the deal closed, then that's going in the contract.

There is no better way to ensure the contracts are reasonable than making sure the person negotiating the deal feels the pain of billing it.

biermic(10000) 7 days ago [-]

Absolutely agree with the poster. This is my life at the moment. Just to take money for a small piece of software I built.

Multiple currencies, trial periods, when in the process you ask for the credit card, country-specific taxes... And the worst: invoices.

While paddle.com takes care of many of those things, I was shocked when I realized how much work this was going to be.

Please think carefully about those things before you quit your job to work on your 'Micro SaaS' or 'unicorn startup'.

Rafsark(10000) 7 days ago [-]

Agreed! But I do prefer to build it for myself rather than for someone else's startup, though.

dand31(10000) 7 days ago [-]

Have you not looked at Stripe? Their product is fairly straightforward to set up.

herdrick(10000) 7 days ago [-]

Have you looked at Outseta? https://www.outseta.com/billing and https://www.outseta.com/demo . I haven't used it but it looks great. I'd love to hear what you think.

akrymski(10000) 7 days ago [-]

Have also been considering Paddle. Would love to know what you found it's missing?

dboreham(10000) 7 days ago [-]

Note OP makes a billing product (OSS, granted) so is motivated to characterize the field as complex and hard. In reality billing is just like many other processes humans engage in. For sure it's more complex than computing prime numbers, but hey try writing software to capture medical services processes :) In the end this is exactly what engineers are supposed to do: figure out ways to model/capture/represent complex processes and data.

spookthesunset(10000) 7 days ago [-]

Or better, buy a billing system instead of inventing one inhouse. Billing is hard to do correctly. For one thing it interconnects with so many different platforms in your company. For another thing, the cost of fucking something up is high--people don't exactly like seeing mistakes in their bill. Hell, think about tax... you wanna deal with that shit?

To do it 'correct' you've got to have people who truly understand accounting concepts like double entry bookkeeping, financial reporting, cash vs. accrual, financial regulations, auditors, etc. Unless your company's product is billing, good luck hiring an engineer who knows this shit.

But trust me from experience. Don't build your own billing system. There are several good ones out there that will grow with your organization. Buy one.

dhosek(10000) 7 days ago [-]

This feels like a good moment to mention my most embarrassing bug which was, of course, in a billing system. I had written code to handle the overdue billing of customers' usage on an internet telephony service. The code seemed to work great in QA so we went ahead and pushed it to production.

A couple days later we got an angry call from a customer whose card we'd maxed out. The final stage of the process was to adjust their bill by the amount that they paid, but there was a sign error in that adjustment so instead of lowering their balance by the amount paid, we increased it. As a consequence, for a user with, say, a $50 balance, we kept doubling the amount that we charged them each day.

Exponential growth is a powerful thing.
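A stylized reconstruction of that bug (not the actual code), showing how a single flipped sign in the payment adjustment turns a balance into exponential growth:

```python
def apply_payment_buggy(balance, paid):
    # The sign error: the adjustment ADDS the payment to the balance
    # instead of subtracting it.
    return balance + paid

def apply_payment_fixed(balance, paid):
    return balance - paid

balance = 50
for _ in range(3):
    # Each daily run charges the full outstanding balance, then "adjusts"
    # the account by the amount paid... with the wrong sign.
    balance = apply_payment_buggy(balance, balance)
assert balance == 400   # 50 -> 100 -> 200 -> 400: doubling every day

# With the correct sign, paying the balance zeroes it out.
assert apply_payment_fixed(50, 50) == 0
```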

alfalfasprout(10000) 7 days ago [-]

I'm not sure this comment is exactly in good faith though-- OP does make a billing product but as a result is also uniquely qualified to provide insights into what can make it hard.

This isn't OP trying to claim it's the most difficult problem to solve from an engineering standpoint. Merely why it's hard.

Of course engineers solve problems, hard or not. That's not really being debated here is it?

RockRobotRock(10000) 7 days ago [-]

I think this is a great article and sales pitch for the company, since a lot of the comments have been anecdotes about how it's actually even HARDER than the author lets on.

AnhTho_FR(10000) 7 days ago [-]

Hi dboreham, I'm one of the co-founders of Lago here (the OP).

I completely agree with you, billing is not the most complex process of all, medical services may be of higher complexity.

Our post was meant to highlight the discrepancy between the perceived low complexity (lots of teams who have never done it think it's simple) and the reality, with our own experience building a fintech.

I think for some processes (medical services? I'm not an expert in this field to be honest) people might suspect it's going to be challenging.

That was our intention. Basically, we think no B2B SaaS should build billing themselves, unless they are 300% sure their pricing will always remain subscription-based only and very simple (the same amount every month).

Hope I helped clarify!

zippergz(10000) 7 days ago [-]

I have no vested interest in selling billing products, and my experience is that billing is somewhat unique in the delta between how hard people who have never done it THINK it is, and how hard it actually is. There are many hard things that appear simple, but time and time again I have seen inexperienced engineers be caught off guard by just how much harder billing is than it looks.

Aeolun(10000) 7 days ago [-]

If it's as annoying as translating human processes into code, I can see how they'd want to standardize it so they'd never have to do that particular part again.

Then again, if you make it too customizable, you end up with current single sign on solutions.

btilly(10000) 7 days ago [-]

Wait until you get into the complexities of taxation.

If you think you know them, pay attention to https://twitter.com/aotearoa_ben/status/1526786701750050817.

That's a law that changes the tax rate if:

1. The purchase happens in Texas.

2. The payment cleared in a specific 2-day period.

3. The item cost < $100. The cost of shipping has to be included in the item's cost.

4. The item falls into specific categories that almost certainly don't match whatever categorization system you are attempting to use.

Seriously, calling out just one example, https://comptroller.texas.gov/taxes/publications/98-490/scho... points out that a kit containing school supplies qualifies as long as the cost of the qualifying school supply items in it exceeds the cost of the non-qualifying items. I guarantee that you don't have a category for that in your accounting system...
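A hypothetical predicate for a rule of that shape; the dates, categories, and names here are illustrative sketches, not the actual Texas statute:

```python
from datetime import date
from decimal import Decimal

# Illustrative 2-day holiday window and qualifying categories (assumptions,
# not the real rules from the Texas comptroller).
HOLIDAY = (date(2022, 8, 5), date(2022, 8, 6))
QUALIFYING = {"school_supply", "clothing"}

def tax_exempt(item_category, item_price, shipping, cleared_on, state):
    total = item_price + shipping   # shipping counts toward the price cap
    return (state == "TX"
            and HOLIDAY[0] <= cleared_on <= HOLIDAY[1]
            and total < Decimal("100")
            and item_category in QUALIFYING)

assert tax_exempt("school_supply", Decimal("89.99"), Decimal("5.00"),
                  date(2022, 8, 6), "TX")
# Shipping pushes the total to exactly $100, so the exemption is lost.
assert not tax_exempt("school_supply", Decimal("95.00"), Decimal("5.00"),
                      date(2022, 8, 6), "TX")
```

Note that even this leaves out the hardest part the comment describes: deciding which of *your* product categories count as 'school supplies', or whether a mixed kit qualifies.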

oger(10000) 6 days ago [-]

Our team was building a telco operator in the U.S. We had already done that in several European markets and in Australia, so we assumed we were mildly proficient in taxation issues. And then came the U.S.: a complete nightmare of overlapping rules that sometimes are not even properly published. In essence, you cannot do this on your own; you definitely need a tax service provider (like Avalara). The funny thing, though, is that you can still end up noncompliant sometimes, somewhere, and the provider will not take on the liability if their data / computation was wrong. In short: a mind-blowing disaster. And you had to PRINT your tax forms and SNAIL-MAIL them to every jurisdiction you were doing business in. I mean: hundreds or even more. Every. Single. Month. And don't get me started on the banking system, which is stuck in the Stone Age. I would definitely think twice before setting up a business in the U.S.

mox1(10000) 6 days ago [-]

Just do what eBay does and charge sales tax on everything!

ProAm(10000) 6 days ago [-]

Wait until you deal with Brazil. Billing systems are not hard; they take engineers who are thoughtful, long-term thinkers, which is not conducive to today's software developer hiring. Which is why posts like this are made: it's a market ripe for dominance.

dormento(10000) 6 days ago [-]

* Cries in brazilian *

elevation(10000) 6 days ago [-]

Even when the laws are comprehensible and you're not running at scale, existing accounting systems make supporting simple new requirements difficult.

Imagine a US LLC setting up their chart of accounts to track IRS-deductible expenses. Coding expenses into this 17-account schema in the normal course of bookkeeping means taxes can be filed by copying the account totals into the corresponding IRS form fields.

But later, the business wins a federal contract that reimburses certain 'R&D' expenses and possibly some 'overhead' expenses. Now not only must every expense be categorized into one of 17 expense categories for the IRS, but it must also be categorized into one of 3 categories for federal contract purposes. But wait: a small-business loan application may require profit and loss broken down by a different set of categories. New and local tax authorities may impose other categories.

One way to address this complexity explosion is with multiple instances of the same accounting software: each imports the same bank statement, then justifies it against a different chart of accounts. But many businesses will just break out Excel and hope there aren't too many expensive errors.

If new regulation is inevitable, why does our software pretend otherwise?

jef_leppard(10000) 6 days ago [-]

I worked on a tax app. Total nightmare. I remember having to adjust the tax rate for a couple of parishes in Louisiana and a city in Florida, for example. Trying to come up with a readable, testable way to get that done was something, I tell ya.

philliphaydon(10000) 6 days ago [-]

I used to work at a company that managed back-of-house systems for fast-food chains. We moved customers from Excel to an online system. Omg, when it came to handling labor rules in the US, it's scary.

If a person clocks in at 9:52 AM, the company only has to pay them from 10:00 AM. But in other states they must pay from 9:45 AM. There's so much complexity in the US due to all the states being different.
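A sketch of how state-dependent clock-in rounding might look; the per-state rules ("A" rounds up to the next quarter hour, "B" rounds down) are invented for illustration, not actual labor law:

```python
from datetime import datetime, timedelta

def payable_start(clock_in: datetime, state: str) -> datetime:
    """Return the time from which the employer must pay, per (hypothetical)
    state rounding rules applied to the raw clock-in time."""
    minute = clock_in.minute
    if state == "A":   # round clock-in UP to the next quarter hour
        rounded = ((minute + 14) // 15) * 15
    else:              # state "B": round DOWN to the previous quarter hour
        rounded = (minute // 15) * 15
    base = clock_in.replace(minute=0, second=0, microsecond=0)
    return base + timedelta(minutes=rounded)

t = datetime(2022, 5, 25, 9, 52)
assert payable_start(t, "A") == datetime(2022, 5, 25, 10, 0)   # paid from 10:00
assert payable_start(t, "B") == datetime(2022, 5, 25, 9, 45)   # paid from 9:45
```

The real pain is that each jurisdiction's rule is a separate branch like these, and the branches keep changing.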

systemvoltage(10000) 6 days ago [-]

We should demand governments to maintain and provide an API for their tax codes. Let them have skin in the game. If the taxes are incorrect, businesses can point out that it's the government API's fault.

Completely developed in the open, and with full test coverage.

This should be required to collect taxes. No API? No taxes. API down? Tax waived.

_tom_(10000) 6 days ago [-]

I worked on a tax app. I was amazed to find that some tax tables necessary to compute taxes were not released until months after taxes were due. WTF?

Andrew_nenakhov(10000) 7 days ago [-]

As someone who has built a billing system not once in my life but twice (one for an internet provider I worked for, which metered the amount of traffic, and another for a SaaS project), I fully sympathize with the post. Billing is an unbelievable can of worms even before you get to taxes. Add in all the things the marketing people want from billing (trials, discounts, per-seat and per-usage pricing, etc.), and you have enough tasks to last until retirement, no matter how young you are.

Rafsark(10000) 7 days ago [-]

Yep, I do agree that non-tech teams created a lot of pricing complexity, sometimes without any reason. I remember a marketing team updating plans in Webflow and telling the product team 'see, not that hard' ;) We ended up with a whole squad of engineers called 'the pricing cross-functional team'...

londons_explore(10000) 7 days ago [-]

Random idea:

Don't build a billing system. Or at least not a precise one.

Have a price list as normal, but just guesstimate the users' bills based on any easily accessible information. E.g., for a hosting company I might just count how many VMs they have active when the billing script runs. If the user isn't happy with the total, give them a button to correct the total and pay that.

Spot-check what the users pay, and warn then ban any users who are clearly deliberately paying substantially less than list price for your services.

CharlesW(10000) 7 days ago [-]

How does auditing work if billing is non-deterministic?

AnhTho_FR(10000) 7 days ago [-]

I love it, sounds like the Swedish shops with no cashier https://mashable.com/article/swedish-store-cashiers Has it been tried before? How were disputes managed?

Hackbraten(10000) 7 days ago [-]

You'd still have to apply the right tax rate to every line item. If you allow the user to edit the total, which of the tax amounts are you going to recalculate?

__alexs(10000) 7 days ago [-]

This is a great way to be bankrupted by crypto currency miners.

recursive(10000) 7 days ago [-]

This sounds like trouble at several levels.

Someone's going to reverse engineer your heuristic and min-max it. At that point, you'll need to defend your heuristic against edge cases. And then you may as well just make it official.

Also, B2B customers are allergic to 'pay what you feel'-type schemes.

walnutclosefarm(10000) 7 days ago [-]

Always has been. Marketing and sales create complexity on the front end, devising all kinds of products, plans, discount schemes and related complexity; jurisdiction, taxes, and accounting rules add complexity on the backend. In the middle, measuring what you're billing for is the hook around which all that complexity chaotically orbits.

Decades ago - the computer involved was an honest-to-goodness System 360 (which is to say, not a 370, a Z, or any of the code-compatible 360 successors) - I spent an evening trying to figure out the service charge on my bank account. I couldn't get at all close to the figure on the statement by applying the bank's schedule of charges. I took it into the bank, got bumped by the teller to a department manager, and thence to a VP (small town; small bank), whom I told, 'I will pay the service charge if you can explain to me what I'm being charged for.' Two hours later he gave up, having not gotten anywhere near an explanation, grinned, and said, 'I can fix this.' He set some flag on my account, and I never again paid service charges at that bank. Yep - that flag remained set on my account for nearly 20 years, through multiple software and hardware upgrades, until the bank was acquired by a big, expanding regional bank and I left for a different, entirely local bank.

Rafsark(10000) 7 days ago [-]

The marketing team at my previous company created a whole fake price plan that never existed in the code base. Somehow it works to drive acquisition, but it's a pain in the ** to explain why it's not as easy as updating Webflow ;)

hammock(10000) 7 days ago [-]

Double-entry accounting is tough for a lot of people. It's a different domain and trying to fit it into traditional 'math' will only cause headaches.

tabtab(10000) 7 days ago [-]

An alternative to DE is 'Assume Balance'. See near the bottom of http://wiki.c2.com/?AccountingModeling

kdeldycke(10000) 7 days ago [-]

Still, someone tried in the 'Algebraic Models for Accounting Systems' book: https://www.amazon.com/Algebraic-Accounting-Systems-Salvador...

Source: https://github.com/kdeldycke/awesome-billing#finance

daxaxelrod(10000) 7 days ago [-]

Poked around their site a bit, they claim to be open source but I don't see a link to a git repo anywhere. Also searched GitHub a bit but might have missed it.

AnhTho_FR(10000) 7 days ago [-]

Hey @daxaxelrod! I'm one of the cofounders of Lago, the lib will be opened very soon, we're in the final steps of QA, that's why. I will make sure to ping you back here when it is. We wanted to share this post about our first-hand experience with billing a bit ahead of this. Thanks for your patience!

fatnoah(10000) 7 days ago [-]

My very first startup (wayyyy back in 2000) built a billing SaaS. I fully agree that rating, billing, invoicing, and all of the related things are hard. I learned a lot in that role about dealing with complexity...and date/time.

Rafsark(10000) 7 days ago [-]

What are the top 3 things you learned that every company should be aware of? Could be interesting to have an experienced view on this :)

ddingus(10000) 7 days ago [-]

(1 of 2)

In the 00's, I was part of an effort to get the State of Oregon to consider open source software. A $40 million billing system debacle was part of what drove this effort. The details are not important, save to say the ASA paid big to stall the legislation, and that worked. It was stalled, and word got out that future bills would be toxic. So that's over, but a great idea remains!

I am putting it here, because someone, somewhere should do it. Will save us all a lot of money and do a great public good. And it's likely to pay a fellowship of some kind very well over the next couple decades. I'm on a different project and need to finish it, and I'm likely to be paid well on that, so...

Instead of paying the likes of Oracle, or some consulting firm to write and support the billing system, the State should do it. So far, I've seen several bad contracts in my State where the State has a right to use the software, does not own the code, must pay for support, and on it goes. This is quite expensive, and frankly, it didn't work well.

Instead do this:

The State sets up an organization. This org is public, and those who work there are public servants, or some similar arrangement. The key thing is the billing system gets written on open code, uses open data, and the resulting software is owned by the public, through the State essentially. This code gets published on Github under an appropriate open license.

Fund this org, paying respectable developer salaries, and get the system written, tested, deployed, whatever is required. This funding comes from public tax dollars, and I would suggest the lottery as a primary source.

Once the initial system is complete (let's say it's for water), it can be published, and the whole thing becomes an export for the State that built it.

Any municipality anywhere can do one of a few basic things:

They can build and deploy the system and handle billing themselves.

They can hire the Org to help them do this, contract for support, whatever.

They can hire private entities to deploy the system.

Changes can come from anywhere, but the organization is best suited to implement them. This can be on a contract basis, funded reasonably in any number of ways.

The organization continues on to develop other public works tools, each funded by public dollars, each displacing some expensive thing or other, and each saving the people quite a bit over the course of say a decade.

Wash, rinse, repeat.

The end product is a nice pile of public works software needed by tons of municipalities most all of whom are paying hard for it now and they pay hard for basically everything, and continue to do that. The public could make these investments once and benefit for a very long time.

Perhaps more than one State wants to be a part of something like this, or whatever State gets going on it first, might want a few organizations, depending on what is to be done and the scope / scale required. Either way, the Organizations provide jobs and consulting / support and basically own the public works they develop and are funded to maintain those software works.

An advantage over contracts and proprietary license is cost, particularly when the likes of Oracle are involved (and no judgement here and no calling out as I'm only saying Oracle as a well known example among many out there and I just feel the public works can be done without Oracle and the cost associated with similar entities). Having the data be open is another advantage in that it can be used in all sorts of ways, and the public records can be made available to the public in simple, effective ways.

Mix in smart API's and the whole thing becomes very high value and an attractive base target for all sorts of public software development having to do with the basic machinery of society.

None of this is particularly sexy, but it does pack a big punch in terms of modeling how a more modern, digital society could employ open code, open data to get the work we need done, done in a cost effective, efficient way.

And that's it basically.

Over time, this first, or core organization will accumulate considerable domain knowledge. And it would become the basis for a lot of other work. Ideally, others started up in various places would all garner the knowledge needed by domain:

Utilities, water, power, internet (where it's actually municipal, and we know that's not popular with the big telecoms...), garbage, etc.

City functions, parking, various permits, anything that has a process that could be automated and or just performed on open code in simple, direct ways that are lean, consistent.

Vehicles: Registration, and all that tends to be tied into these systems. In my state, we've got vehicle registration, drivers licenses / ID Cards that link to facial biometrics (I don't like this, but that is another discussion), and it's also voter information central with a signature, party affiliation, address, and everything needed to manage the voting rolls, and execute Vote By Mail nicely, and in a trustworthy fashion. Oregon has done well on that front. Citizenship records are in this pool of software too.

Health Care: Right now, again in my State, the Medicaid system insures people who are disabled, are poor, or are perhaps wards of the State. This is currently on Oracle, and has been such a mess it's ended up in court. I don't even know the state of it all right now, but it's a lot of money and a lot of grief.

ddingus(10000) 7 days ago [-]

(2 of 2 --> I got 'comment too long!)

Now, this won't be popular with a lot of us here, myself included, as doing stuff like this will displace some revenue streams companies and people depend on. However, a longer view looks considerably more attractive. The jobs created are needed. Tie this stuff into education, and it's a perfect 'farm' training ground for up and coming developers to work on important projects and get good experiences.

A lean and mean digital civics is attractive and necessary for a lot of reasons. The case for doing it on open code and open data is super compelling and compliant with the basic ideas behind public works anyway.

As citizens, we get to experience our taxes actually getting more work done and at a lower cost. And yes, spending will move toward other areas where it's possible to extract revenue, and that's an ongoing problem. Real good can be done, and that's what I'm writing about.

Companies wanting to or needing to interoperate with the State government can do so employing everything from really old school, courier, papers, and the like, through to very new school, all digital, everything operating in ways we know can work well given the incentives are where they need to be and the organization is funded as it needs to be.

The second and third order effects could be very significant! Standards fall out of this kind of thing, and devices, protocols, and all manner of currently diverse and expensive to maintain systems could be made leaner, meaner, have much longer service lives and the knowledge needed will be out in the open and available to the people who need it.

Obviously security needs to be a consideration. And that's no different from what we have going on right now. The big difference is that an open society can be built on open code and data, and having it funded in ways that maximize the value to the public, while also keeping costs where they need to be, makes a ton of longer-term sense.

In a scenario where this is all closed, the idea of competition is often cited as a reason to do it privately, but the reality is that corruption and other harsh incentives tend toward scenarios where lock-in-type solutions leave the public getting the lowest value possible for the highest dollar amount possible. We see this over and over and over. And that's why the ASA (American Software Association) opposed the effort in Oregon, and another one in Texas, with such zeal it was kind of amazing really.

Should we do it in some fashion as I've hinted at here, the opposite becomes true. Initial value for the dollar might not be a whole lot different from whatever contract + proprietary software delivers. Over time, the trend will be toward getting very high value for the dollar.

Regarding Standards... Think center of gravity. Systems like these can have very attractive start costs. And cost of change will vary, and likely be made higher by current players wanting to leverage lock in to preserve revenue streams (and who can blame them?). Fair enough, but as more gets done, that center of gravity will prove compelling, and we all get the benefits working in a similar, more compatible way provide.

When we all require a resource or process, and let's just take power or water for a moment... Running these things at profit puts incentives in the wrong places. Maximizing profit is not the same as maximizing the public use value. And where that is wrong, everyone takes a small hit, and those tend to add right up.

Maximizing public use value can end rent seeking type arrangements that cause more grief than expected. Maintenance is one, tech debt is another of many that come to mind. Where these are out of public view, they tend to get ignored and risks accumulate, until there is an event, and suddenly, those risks play out, and we the public are faced with a large bill, and that's an old story, no need to say more.

I'll end with the belief this seems to be an excellent way to employ open code and data. And one of the big problems we find out there with open code and data is failure to work on 'dull' or 'uninteresting' problems. How many times do we have to find out that project X is being used by everyone, and nobody really owns making sure it's going to make sense to continue to use it?

Now I will just stop there. That's it. Some municipality somewhere would LOVE to get a State grant to get this started, and some State somewhere really needs this kind of thing enough to bite on the idea, and there are a ton of messes to clean up.

So maybe those should just start getting cleaned up!

aerovistae(10000) 7 days ago [-]

> To solve this problem at scale, we adopted a radical sharing approach. We've started building an Open-Source Alternative to Stripe Billing (and Chargebee, and all the equivalents).

This is so great. I personally haven't had to deal with this problem, but I've worked at a number of organizations (and heard about many others!) where this sort of business logic had to be implemented. It's just reinventing the wheel. I shudder to think how many companies have implemented a system for managing recurring subscriptions.

We have things like Ruby-on-Rails, Django, Laravel, and many others to take the bite out of building web applications. They keep us from having to reinvent the wheel.

We need similar open source frameworks for common business use-cases - billing, subscriptions, order/purchase management, and so on.

Sometimes it's weird to remember we're in the stone age of technology - all of this is a few decades old at most, and even early predecessors don't go back more than 70 years. Human history goes back tens of thousands of years. There's so much yet to come.

ptsneves(10000) 7 days ago [-]

This is where solutions like WooCommerce shine. WooCommerce is an amazing piece of software that is many times used for free, but generates thousands of dollars.

intuxikated(10000) 7 days ago [-]

> We need similar open source frameworks for common business use-cases - billing, subscriptions, order/purchase management, and so on.

You mean like an ERP? There are two open-source ones I can think of:

- Odoo, which is the bigger one, mostly Open-source (CRM / sales / subscriptions / invoices / webshop / inventory / purchase) but some enterprise-only modules (Rental, Field Services), I work with this software daily as an Odoo Developer (Customizing Odoo for customers' needs).

- Erp-Next, which is completely open source as far as I can tell, through my limited testing it seems to be less advanced than Odoo currently.

EDIT: you can even check runbot.odoo.com for some test-environments which are automatically built, where you can test/experiment, login is always admin:admin

sandworm101(10000) 7 days ago [-]

Do not worry. Blockchain will solve or at least simplify all of these problems.

It is the way.

StevenWaterman(10000) 7 days ago [-]

Look, Poe's law!

Rafsark(10000) 7 days ago [-]

Even with blockchain and crypto you need to define the logic to bill your customer. This logic is pre-transactional and defines how much to charge for the transaction.

tikiman163(10000) 6 days ago [-]

I program billing systems, and even account reconciliation systems for payment processing. It's not a nightmare, it just takes a greater understanding of accounting. If you want to be an engineer who works on financial systems, take the first year of classes that accountants have to take, and pay attention.

datavirtue(10000) 6 days ago [-]

This. I'm astounded at the gaps in some people's ability to understand business processes because they have no accounting training. It is not that difficult, but it is something that needs to be integrated into your mindset.

You might then realize that accountants have been using event sourcing to solve complex problems for hundreds of years.

Very senior developers will nearly always default to juggling updates to database records for everything. I cringe because it is often the source of serious gaps in domain language between developers and business operations people, leading to unreliable systems that are very difficult to reason about.

Immutable ledgers are your friend.
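The append-only idea is easy to sketch: never update a balance in place; record immutable entries and derive the balance by folding over the history (a minimal illustration, not any particular accounting system's schema):

```python
from dataclasses import dataclass
from typing import List

@dataclass(frozen=True)  # frozen: entries are immutable once recorded
class Entry:
    description: str
    amount_cents: int  # positive = credit, negative = debit

def balance(ledger: List[Entry]) -> int:
    # The balance is never stored; it is always derived from the full history.
    return sum(e.amount_cents for e in ledger)

ledger: List[Entry] = []
ledger.append(Entry("invoice #1", -5000))
ledger.append(Entry("payment received", 5000))
ledger.append(Entry("invoice #2", -2500))
print(balance(ledger))  # -2500

# A correction is a new compensating entry, never an edit of history:
ledger.append(Entry("credit note for invoice #2", 2500))
print(balance(ledger))  # 0
```

Because entries are never mutated, the full audit trail falls out for free: replaying the ledger up to any point reproduces the balance as of that point.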

Rafsark(10000) 6 days ago [-]

You are totally right. But, somehow, most of the engineers working on these topics don't want to work on financial systems. They are just part of a company and took the topic 'because someone has to take it'. They accumulate the tech knowledge, but not necessarily the whole accounting part.

kache_(10000) 7 days ago [-]

nightmare, or job creator?

bloodyplonker22(10000) 7 days ago [-]

nightmare job creator.

Rafsark(10000) 7 days ago [-]

Job creation always comes from things being hard to build; when they get easier, jobs get cut!

giantg2(10000) 7 days ago [-]

I work on a team collecting fees at a financial company. It is tedious and boring. There is a lot of complexity. I've often asked the business if they had ever thought about a different fee model that would be less complex. They just want to stick the legacy business model into the new tech...

AnhTho_FR(10000) 7 days ago [-]

I'm Anh-Tho, one of the co-founders of Lago, thanks for your comment! Financial and cloud infrastructure companies are the ones with the most complexity. I've been on the business side, and sometimes you do want to change the pricing but there are so many implications:

- Maybe existing users will churn and/or revenue will decrease as a result

- If you change the billing system, there's a risk of bugs/errors and complaints

- If you spend engineering time on this, then you need to deprioritize other projects

So... business teams often end up giving up, and that's a shame because iterating on pricing is a very powerful lever of revenue growth.

hedgehog(10000) 7 days ago [-]

Can corroborate, my experience working on an upgrade to an existing multi-platform (iOS IAP + Stripe for the web product) subscription service was certainly a bit painful.

Rafsark(10000) 6 days ago [-]

Yes, upgrades and downgrades were painful for me too. I think it's due to the different subscription dates across all our customers, which meant writing specific code for each edge case. In the end it works, but it was so much harder than I thought.
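Those edge cases usually reduce to proration: crediting the unused remainder of the old plan and charging that remainder at the new price. A minimal sketch under simple assumptions (whole-day granularity, integer cents; the function name and the numbers are made up for illustration):

```python
from datetime import date

def prorated_upgrade_cents(period_start: date, period_end: date,
                           change_date: date,
                           old_price_cents: int, new_price_cents: int) -> int:
    """Credit the unused days on the old plan, charge them at the new price."""
    total_days = (period_end - period_start).days
    remaining_days = (period_end - change_date).days
    credit = old_price_cents * remaining_days // total_days
    charge = new_price_cents * remaining_days // total_days
    return charge - credit  # amount to invoice immediately

# Upgrade from a $10/mo plan to a $30/mo plan halfway through a 30-day period:
amount = prorated_upgrade_cents(date(2022, 5, 1), date(2022, 5, 31),
                                date(2022, 5, 16), 1000, 3000)
print(amount)  # 1000  (15/30 of the $20 difference)
```

Even this toy version shows where the pain comes from: every customer has their own period_start, so the "same" upgrade produces a different invoice amount per customer, and rounding rules have to be pinned down explicitly.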

dekhn(10000) 7 days ago [-]

Basically everything about enterprise is a nightmare for engineers. What amazes me (after having worked at some of the most profitable companies in the world) is just how little intelligence the leadership has about their own revenue or costs, beyond 'wow huge amounts of money is coming in or going out'. And how many critical processes are implemented manually, by individuals, with personal spreadsheets, on their laptops.

deathanatos(10000) 6 days ago [-]

Yep, I'm about ready to start implementing one of those very spreadsheets just to start trying to categorize Azure costs.

makeitdouble(10000) 6 days ago [-]

> intelligence the leadership has about their own revenue or costs

I see the positioning of the CFO right next to or below the CEO as a mechanism to let the CEO care about "big picture" money flow, while having someone in contact with reality guide decisions and veto stuff that won't fly in cost/revenue terms.

Rafsark(10000) 6 days ago [-]

It's a difficult topic to tackle; the financial leader has different needs than the tech one. One needs a perfect report with exact matches day over day, month over month, year over year. The other needs to do the math to calculate instant consumption of the product. If we add a company's marketing leaders to it, it brings creativity to the extreme (like inventing new prices, just to use them 'as a hack'). Same goal (revenue), but definitely not the same path to success.

BeFlatXIII(10000) 6 days ago [-]

I have a SaaS idea I've been sleeping on solely because building the billing, authentication, and marketing are each an additional 96% of the work that is not building the actual functionality of what I'd want to sell.

BatteryMountain(10000) 6 days ago [-]

Yup, I work at a tech company and our entire finance & billing team uses Excel exclusively (I think our payroll/leave system is the only thing not on Excel), where every other team uses one system or another. All of the higher-ups make decisions based on docs created out of all those Excel artifacts. I have to admit, their productivity is quite good and they are kings at aggregating and graphing data in sexy ways. It would be a challenge to build them something that actually works better and can adapt quickly to changes (the software lifecycle is too slow for them, and then they'd have to deal with some developer types to get anything done, so they skip all that noise and stick to Excel).

chasd00(10000) 7 days ago [-]

I spent a lot of time right out of college working on the equivalent of an invoicing system for a pharmacy chain (rx drug pricing, electronic carrier submission, and reconciliation).

Accounts receivable is also a nightmare. We would get checks that randomly show up at corporate for no reason from insurance carriers, but then when the carrier realized their error, instead of handling it as a separate process they would deduct the check amount from whatever invoice (or even across a range of invoices!) from us they felt like. We literally had a bucket called 'magic money' these random deposits went into that we would use to fill in the A/R gaps from insurance carrier insanity. There was no connection between magic money and whatever invoice they decided to short-change us on, so it was just a hope-for-the-best process.

thibo_skabgia(10000) 6 days ago [-]

Hey, I feel your pain. The 'hope-for-the-best process' definitely happens in many companies... Yet, it's not inevitable anymore. I work with Upflow (YC 20) and this is exactly what we offer our clients:

1. Making sure they have clear visibility on their AR

2. Designing systematized workflows for their customers (as many as needed)

3. Customer payment portals to help their customers pay on time, with no friction.

That's at www.upflow.io

po1nt(10000) 7 days ago [-]

Implementing and maintaining billing is my daily job for the last 6 years.

I would also mention topics like:

- Reporting (various data aggregation and audit reports)

- High reliability: nothing hurts a company more than the inability to bill their customers

- Rounding: this can be seen as a subset of 'taxes' but it's much more complex

- Locating the user: also can be seen as a subset of 'taxes', but the user can have country A set in their profile, country B on their credit card, and country C geolocated from their IP address

- Timezones: an issue everywhere, but when we talk accounting it's super important

- Talking to moronic payment gateway providers. This is the biggest one. I would love to just flip off Apple and Google like Linus flipped off nVidia. Proprietary, poorly documented, 'find out yourself by weeks of experimenting' bullshit with no easy way of getting technical support, even when you bring those companies millions per year. Things don't work, deprecate monthly or weekly, and expect you to be always ready to make changes. Some implementations make zero sense at all, with a complete paradigm shift in handling payments, like between Google in-app pay and Google subscriptions.

But as my SO says 'Don't cry, it pays well'

AnhTho_FR(10000) 7 days ago [-]

Wow, thanks for sharing! We found out 'billing engineers' are very rare and in high demand, as the job requires being very detail-oriented, technical, and business-process oriented. But even with the high demand (and, I guess, correspondingly high pay, as you mention), very few engineers are up to the challenge!

Rafsark(10000) 7 days ago [-]

Yep you are right, indeed. We could have mentioned Reporting/Analytics for revenue, which is always a huge pain for companies!

patrck(10000) 6 days ago [-]

Yeah, Audit for the win.

And it's not 'billing', it's Compliance + Bill Presentment (marketing!) where we are always trying to find the most profitable local maximum of explainability to Sales, Customers, and Management/Enforcement while also having hard checks to prevent or at least mitigate losses.

No one wants a Knight Trading excursion....

mtoddsmith(10000) 6 days ago [-]

Hour 1: 10 KW used for 0.5 hour = 5 KW (10 x 0.5)
Hour 2: 20 KW used for 1 hour = 20 KW (20 x 1)
Hour 3: 0 KW used for 1 hour = 0 KW (0 x 1)
Hour 4: 30 KW used for 0.5 hour = 15 KW (30 x 0.5)
TOTAL = 40 KW used x $10 ⇒ $40

Shouldn't that be 50kw used in total?

Now we know who's writing all the bugs :)
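For what it's worth, the quoted figures can be checked mechanically (a quick sanity check using only the numbers quoted above; the per-hour products do sum to 40):

```python
# (power_kw, hours) pairs as quoted in the comment above
usage = [(10, 0.5), (20, 1), (0, 1), (30, 0.5)]

# 10*0.5 + 20*1 + 0*1 + 30*0.5 = 5 + 20 + 0 + 15
energy_kwh = sum(kw * h for kw, h in usage)
print(energy_kwh)  # 40.0
```

(The units in the quote are off, though: power sustained for a duration is energy, so the per-hour products and the total are kWh, not kW.)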

quantum_magpie(10000) 6 days ago [-]


fhrow4484(10000) 7 days ago [-]

Since founders are commenting here. getlago.com webpage is strange:

'Lago is backed by Y Combinator'

'The Open Source Stripe Billing Alternative'

Why would YC invest in both Stripe and a competitor of Stripe?

abraae(10000) 7 days ago [-]

To avoid the innovator's dilemma?

Historical Discussions: Outhorse Your Email (May 19, 2022: 726 points)

(728) Outhorse Your Email

728 points 6 days ago by eorri in 10000th position

www.visiticeland.com | Estimated reading time – 2 minutes | comments | anchor


All Comments: [-] | anchor

icecreamrodeo(10000) 6 days ago [-]

i created an account just to upvote

Aachen(10000) 6 days ago [-]

That's how it starts. Before you know it, you'll need noprocrast!

ethbr0(10000) 6 days ago [-]

In case you thought they took the easy way, behind the scenes video: https://vimeo.com/710288765/5e14861065 (linked on site)

phillipseamore(10000) 5 days ago [-]

I can confirm that the keyboard was fully functional except for keys not producing character output. RaspberryPi with short-to-ground sensor under each key (so the keyboard would work as a mat with the keys removed if the horses didn't like them [as referenced in the BTS video]), running on batteries, making this the world's largest wireless keyboard. Measured 5 by 1.8m with the smallest keys being a beefy 29 x 28cm.

Source: Wife's uncle did the electronics and software for it.
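The short-to-ground sensing described above can be sketched in software, with the actual hardware reads stubbed out by a callable (the pin numbers and key mapping here are made up; on a real Raspberry Pi, read_pins would wrap a GPIO library's input function with pull-ups enabled):

```python
# Toy scanner for short-to-ground key sensing: each key is an input pin with
# a pull-up, and a hoof on the key shorts the pin to ground (reads as 0).
KEYMAP = {4: "a", 17: "b", 27: "c"}  # hypothetical GPIO pin -> character

def scan(read_pins, prev_state):
    """Return (typed_chars, new_state); emit a character only on the
    high->low edge, so a key held down does not repeat."""
    typed = []
    state = {}
    for pin, key in KEYMAP.items():
        level = read_pins(pin)  # 1 = open circuit, 0 = shorted to ground
        if prev_state.get(pin, 1) == 1 and level == 0:
            typed.append(key)
        state[pin] = level
    return typed, state

# Simulated read: pin 17 is being stepped on, the others are open.
levels = {4: 1, 17: 0, 27: 1}
typed, state = scan(levels.get, {})
print(typed)  # ['b']
```

Edge detection is the important bit: scanning in a loop and comparing against the previous state is what lets a horse stand on a key without flooding the email with repeats.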

nwiswell(10000) 6 days ago [-]

> In case you thought they took the easy way

Alright, fine, but where's the API?

Hamuko(10000) 6 days ago [-]

Is the audio beyond fucked or is it just my machine?

CobrastanJorji(10000) 6 days ago [-]

Thanks! My first thought was, y'know, funny video, but 'how did they build a working keyboard that a 600 pound animal could stand on on a tourism video budget' was second.

wfme(10000) 6 days ago [-]

Love this sort of thing. Makes me think about visiting Iceland haha

Aachen(10000) 6 days ago [-]

Go in summer if you want to ride, though. Ask me how I know that October will freeze your fingers off

I'd 100% recommend visiting Iceland in October in general. It's a lot less busy around attractions and the sea climate makes it not that cold (the main challenge usually is breaking the wind). Walking/driving around is absolutely no problem, and outdoors swimming is also fine because the pools are heated geothermally. Heck, glacier hiking wasn't a cold experience at all with fairly regular winter clothing (keep an optional rain layer in your backpack for the unpredictable weather though). There has been only one activity where the icy wind got the better of me despite adequate clothing (I don't know what more I could have put on, the only solution I see is electric heating or walking parts of the way to warm up).

If you go, take at least a full week excluding travel days and 'cheap campervans' was by far the best deal that I found for a vehicle (seemed almost too good to be true but everything was as advertised and the people super friendly).

The experience taught me a thing or two about weather at home as well. I don't like raincoat material so I never had rainproof clothes. Now it was kind of essential to prepare for that and I found there are more plasticy pants and jackets as well that are no problem. Layering and wind breaking is how one stays warm beyond staying dry. In the future I'm going to be enjoying the outdoors more during winters at home :)

blackshaw(10000) 6 days ago [-]

Iceland is amazing, I highly recommend a visit.

If you can forgive the self-promotion, I wrote a 'things I wish I'd known' post about visiting Iceland which might be of interest to anyone planning a trip there: https://blackshaw.substack.com/p/iceland

thorum(10000) 6 days ago [-]

Icelandic horses show zero-shot task generalization on Icelandic natural language prompts, outperforming GPT-3 on many tasks, while being 16x faster on land and having beautiful manes. Very impressive work!

DonHopkins(10000) 6 days ago [-]

It's all those hidden layers of horse hair.

lobocinza(10000) 6 days ago [-]

Why outhorse if I already have a in-house cat?

codegladiator(10000) 6 days ago [-]

For scale and cheaper cloud costs in iceland

cardiffspaceman(10000) 6 days ago [-]

I think you're better off with the cat, they seem to be lighter on the keys, while you can see in the video that the horses, small as they are, are doing a number on the keys.

q-base(10000) 6 days ago [-]

Every once in a while I must set aside my natural aversion towards marketers and give a tip of the hat. This is definitely one of those times. Absolutely brilliant and creative thinking!

slowmovintarget(10000) 6 days ago [-]

Their previous spoof of Zuckerberg's Metaverse video was also great.


shmageggy(10000) 6 days ago [-]

I have a few qualms with this app:

1. For a Linux user, you can already build such a system yourself quite trivially by streaming from /dev/urandom, mapping to cardinal directions and simulating a random walk on the keyboard.

2. It doesn't actually replace an email autoresponder. Most people I know will put things like their return date in the auto-response.

3. It does not seem very 'scalable' or income-generating. I know this is premature at this point, but it seems that down the road this may require a lot of horses.
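Point 1 is less hypothetical than it sounds; a throwaway sketch of that random walk (toy keyboard layout and starting position made up, using os.urandom in place of reading /dev/urandom directly):

```python
import os

# A toy keyboard laid out as a grid; layout is made up for illustration.
KEYBOARD = ["qwertyuiop",
            "asdfghjkl;",
            "zxcvbnm,./"]

def horse_mail(steps: int = 50) -> str:
    row, col = 1, 4  # start somewhere in the middle of the keyboard
    out = []
    for b in os.urandom(steps):
        # Map each random byte to one of the four cardinal directions.
        dr, dc = [(-1, 0), (1, 0), (0, -1), (0, 1)][b % 4]
        # Clamp the walk at the keyboard edges rather than wrapping around.
        row = max(0, min(len(KEYBOARD) - 1, row + dr))
        col = max(0, min(len(KEYBOARD[0]) - 1, col + dc))
        out.append(KEYBOARD[row][col])
    return "".join(out)

print(horse_mail())  # e.g. 'ggfddsxzxz...' -- different every run
```

Adjacent-key runs like the horses' actual output fall out naturally, since each step moves at most one key from the last.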

erwincoumans(10000) 6 days ago [-]

Almost 60 million horses ought to be enough for anybody.

spoils19(10000) 6 days ago [-]

The poster seems to be trying to draw a parallel between his comment and an infamous post I made nearly 15 years ago (https://news.ycombinator.com/item?id=9224), or maybe just to make me feel bad. I personally don't think the tones of the two posts are comparable at all. My post has been quoted out of context for the last decade or so, though, so it's not surprising to me. (Ask yourself, what did "app" mean in my comment?)

Several people seem to expect that I would be embarrassed by that comment or regret making it, but it honestly doesn't bother me at all. I, HN, and even the world have changed a lot in 15 years.

Anyway, I'm pretty satisfied with where life has taken me. I'm certainly not going to sweat someone combing through my post history in a vague attempt to dunk on me.

tux3(10000) 6 days ago [-]

Your solution sounds good, but at which point do I integrate it with my FTP account? I have it mounted locally with curlftpfs, and then using SVN or CVS on the mounted filesystem

Hamuko(10000) 6 days ago [-]

>it seems that down the road this may require a lot of horses.

And you apparently need to train every horse to actually write emails, so it also requires humans to train the horses. How many horses can an Icelandic horse trainer train in a month?

And do we know if all horses can even learn to write emails? What if it requires very smart horses? How many horses are there in Iceland and how many of them are email-grade horses?

sdfhbdf(10000) 6 days ago [-]

Made me chuckle.

For those who don't understand the reference this is satire coming from the comment on the Dropbox Launch post on HN in 2007 [0]

[0] https://news.ycombinator.com/item?id=9224

tdrgabi(10000) 6 days ago [-]

I can't tell if you're kidding, maybe trying to emulate the 'dropbox comment'.

I choose to think you are.

robga(10000) 6 days ago [-]


major505(10000) 6 days ago [-]

For linux user they should put penguins typing.

IncRnd(10000) 6 days ago [-]

It's almost like this was put together as a joke.

lifeisstillgood(10000) 6 days ago [-]

The scalability depends on the typing speed of the horse, which of course is limited by how much they are carrying.

An unladen horse can clearly type faster than one weighed down with for example coconuts, and we must not forget the span between the shift key and the number row. This is a very large keyboard for native Icelandic horses, and while the European breed might make the span, the African variety can easily get those awkward two hoof symbols.

All in all it is clear that the Icelandic government must give immigration privileges to unladen African horses immediately to prevent this startup industry going under.

Write your Althing member now !

chronolitus(10000) 6 days ago [-]

Typical. Here we are trying to outsource our work to foreign horses. It's all fun and games until the wheel stops spinning and we realize there are no typing jobs left for honest American horses, leaving our own ponies penniless.

Finnucane(10000) 6 days ago [-]

They're doing jobs American horses don't want to do.

mxuribe(10000) 6 days ago [-]

But, here in America, our horses only want to carry guns and drive big trucks...and, well, hang out with cowboys. Not so sure that they even want to work! /s :-)

bee_rider(10000) 6 days ago [-]

> wFwhxsqjnzgmsrqaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa

Wow, I never knew Icelandic was such a beautiful language.

SahAssar(10000) 6 days ago [-]

The aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa is silent and usually written as áá.

dredmorbius(10000) 6 days ago [-]

y u no call 'Pony Expressed'?

cryptonector(10000) 6 days ago [-]

Because those aren't ponies and they didn't have any around? Also, they sized the keyboards for horses, not ponies.

charles_f(10000) 6 days ago [-]

I wish my country would build outhorsing centers, but horses are confined to their stables. Bunch of social safety-net cheque grabbing lazy asses

Msw242(10000) 6 days ago [-]

Whooooah there! Ass is a slur that is offensive to many horses' donkey partners and mule children.

Let's be respectful, not negative neigh-sayers

troymc(10000) 6 days ago [-]

A much better solution would be to buy your own horse, although personally I think I'd upgrade to an elephant.

mxuribe(10000) 6 days ago [-]

I think in the tech world that is referred to as self-horsing! /s

whalesalad(10000) 6 days ago [-]

Surprised these horses prefer QWERTY and not BOVINE layout

eurasiantiger(10000) 6 days ago [-]

Horses are equine, cattle are bovine.

Doctor_Fegg(10000) 6 days ago [-]

Weirdly I remember too many horse memes from old internet communities.

uk.misc had a years-long habit of shouting 'HORSE' after someone turned up trying to trace some Canadians who 'moved to Burton-on-Trent and stole my HORSE'.

uk.music.folk would shout 'HORSE' too, but for a different reason: whenever the old 'ah but is this really folk music' argument was trotted out. It was a reference to Louis Armstrong's 'all music is folk music - I ain't never heard no horse sing' (https://www.smithsonianmag.com/arts-culture/all-music-is-fol...).

And back in the early days of OpenStreetMap we had a running gag about horse=yes, after early finding-our-way attempts at the tagging folksonomy resulted in 70mph highways in Britain being tagged with 'horse=yes' because, yes, legally you were allowed to ride a horse on them.

tialaramex(10000) 6 days ago [-]

> 70mph highways in Britain being tagged with 'horse=yes' because, yes, legally you were allowed to ride a horse on them.

To clarify: You aren't allowed to ride a horse on Britain's motorways (fast multi-lane roads mostly built in the latter half of the 20th century) but you would legally be allowed to ride a horse (and indeed to walk along as a pedestrian, although it's probably unwise) on the fastest possible non-motorways.

Most of Britain's other roads are limited to 60mph or less. But a 'dual carriageway', in which vehicles travelling in the opposite direction are separated from you by something rather more substantial than a line of paint (e.g. barriers, or even a strip of ground with barriers on either side), lifts that to 70mph, the same as a motorway.

petecooper(10000) 6 days ago [-]

[Generic question asking if the service is stable.]

Symbiote(10000) 6 days ago [-]

Only if you pony up for the premium plan. It costs a reinsom, but it would be a bit foalish not to.

keitmo(10000) 6 days ago [-]


bryanrasmussen(10000) 6 days ago [-]

Your post advocates a

( ) technical ( ) legislative ( ) market-based ( ) vigilante

(X) D̵o̵g̵ ̵a̵n̵d̵ Pony Show

approach to fixing email.

zdragnar(10000) 6 days ago [-]

At least it's not a goat rodeo

reaperducer(10000) 6 days ago [-]

I have some time off coming up soon...

And HR is forever telling us we should 'engage' with one another more.

This counts as engagement, right?

maxwelldone(10000) 6 days ago [-]

While technically true, I'm afraid HR might consider it as horsin' around.

Historical Discussions: Ancient civilisation under eastern Turkey estimated to be 11k-13k years old (May 20, 2022: 697 points)
Is an unknown, extraordinarily ancient civilisation buried under eastern Turkey? (May 16, 2022: 16 points)

(697) Ancient civilisation under eastern Turkey estimated to be 11k-13k years old

697 points 5 days ago by benbreen in 10000th position

www.spectator.co.uk | Estimated reading time – 20 minutes | comments | anchor

I am staring at about a dozen, stiff, eight-foot high, orange-red penises, carved from living bedrock, and semi-enclosed in an open chamber. A strange carved head (of a man, a demon, a priest, a God?), also hewn from the living rock, gazes at the phallic totems – like a primitivist gargoyle. The expression of the stone head is doleful, to the point of grimacing, as if he, or she, or it, disapproves of all this: of everything being stripped naked under the heavens, and revealed to the world for the first time in 130 centuries.

Yes, 130 centuries. Because these penises, this peculiar chamber, this entire perplexing place, known as Karahan Tepe (pronounced Kah-rah-hann Tepp-ay), which is now emerging from the dusty Plains of Harran, in eastern Turkey, is astoundingly ancient. Put it another way: it is estimated to be 11-13,000 years old.

The penis chamber (photo: iStock)

This number is so large it is hard to take in. For comparison the Great Pyramid at Giza is 4,500 years old. Stonehenge is 5,000 years old. The Cairn de Barnenez tomb-complex in Brittany, perhaps the oldest standing structure in Europe, could be up to 7,000 years old.

The oldest megalithic ritual monument in the world (until the Turkish discoveries) was always thought to be Ġgantija, in Malta. That's maybe 5,500 years old. So Karahan Tepe, and its penis chamber, and everything that inexplicably surrounds the chamber – shrines, cells, altars, megaliths, audience halls et al – is vastly older than anything comparable, and plumbs quite unimaginable depths of time, back before agriculture, probably back before normal pottery, right back to a time when we once thought human 'civilisation' was simply impossible.

After all, hunter gatherers – cavemen with flint arrowheads – without regular supplies of grain, without the regular meat and milk of domesticated animals, do not build temple-towns with water systems.

Do they?

Virtually all that we can now see of Karahan Tepe has been skilfully unearthed in the last two years, with remarkable ease (for reasons we will come back to later). And although there is much more to summon from the grave, what it is already teaching us is mind-stretching. Taken together with its age, complexity, sophistication, its deep, resonant mysteriousness, and its many sister sites now being unearthed across the Harran Plains – collectively known as the Tas Tepeler, or the 'stone hills' – these carved, ochre-red rocks, so silent, brooding, and watchful in the hard whirring breezes of the semi-desert, constitute what might just be the greatest archaeological revelation in the history of humankind.

The unveiling of Karahan Tepe, and nearly all the Tas Tepeler, in the last two years, is not without precedent. As I take my urgent photos of the ominously louring head, Necmi Karul touches my shoulder, and gestures behind, across the sun-burnt and undulant plains.

Necmi, of Istanbul University, is the chief archaeologist in charge of all the local digs – all the Tas Tepeler. He has invited me here to see the latest findings in this region, because I was one of the first western journalists to come here many years ago and write about the origin of the Tas Tepeler. In fact, under the pen-name Tom Knox, I wrote an excitable thriller about the first of the 'stone hills' – a novel called The Genesis Secret, which was translated into quite a few languages – including Turkish. That site, which I visited 16 years back, was Gobekli Tepe.

Necmi points into the distance, now hazed with heat.

'Sean. You see that valley, with the roads, and white buildings?'

I can maybe make out a white-ish dot, in one of the pale, greeny-yellow valleys, which stretch endlessly into the shimmering blur.

'That,' Necmi says, 'is Gobekli Tepe. 46 kilometres away. It has changed since you were last here!'

These carved, ochre-red rocks constitute what might just be the greatest archaeological revelation in the history of humankind

And so, to Gobekli Tepe. The 'hill of the navel'. Gobekli is pivotally important. Because Karahan Tepe, and the Tas Tepeler, and what they might mean today, cannot be understood without the primary context of Gobekli Tepe. And to comprehend that we must double back in time, at least a few decades.

The modern story of Gobekli Tepe begins in 1994, when a Kurdish shepherd followed his flock over the lonely, infertile hillsides, passing a single mulberry tree, which the locals regarded as 'sacred'. The bells hanging on his sheep tinkled in the stillness. Then he spotted something. Crouching down, he brushed away the dust, and exposed a large, oblong stone. The man looked left and right: there were similar stone outcrops, peeping from the sands.

Calling his dog to heel, the shepherd informed someone of his finds when he got back to the village. Maybe the stones were important. He was not wrong. The solitary Kurdish man, on that summer's day in 1994, had made an irreversibly profound discovery – which would eventually lead to the penis pillars of Karahan Tepe, and an archaeological anomaly which challenges, time and again, everything we know of human prehistory.

A few weeks after that encounter by the mulberry tree, news of the shepherd's find reached museum curators in the ancient city of Sanliurfa, 13km south-west of the stones. They got in touch with the German Archaeological Institute in Istanbul. And in late 1994 the German archaeologist Klaus Schmidt came to the site of Gobekli Tepe to begin his slow, diligent excavations of its multiple, peculiar, enormous T-stones, which are generally arranged in circles – like the standing stones of Avebury or Stonehenge. Unlike European standing stones, however, the older Turkish megaliths are often intricately carved: with images of local fauna. Sometimes the stones depict cranes, boars, or wildfowl: creatures of the hunt. There are also plenty of leopards, foxes, and vultures. Occasionally these animals are depicted next to human heads.

Notably lacking were detailed human representations, except for a few coarse or eerie figurines, and the T-stones themselves, which seem to be stylised invocations of men, their arms 'angled' to protect the groin. The obsession with the penis is obvious – more so, now we have the benefit of hindsight provided by Karahan Tepe and the other sites. Very few representations of women have emerged from the Tas Tepeler so far; there is one obscene caricature of a woman, perhaps giving birth. Whatever inspired these temple-towns, it was not a benign matriarchal culture. Quite the opposite, maybe.

The apparent date of Gobekli Tepe – first erected in 10,000 BC, if not earlier – caused a great deal of skepticism. But over time archaeological experts began to accept its significance. Ian Hodder, of Stanford University, declared that 'Gobekli Tepe changes everything.' David Lewis-Williams, the revered professor of archaeology at Witwatersrand University in Johannesburg, said at the time: 'Gobekli Tepe is the most important archaeological site in the world.'

And yet, in the nineties and early noughties Gobekli Tepe dodged the limelight of general, public attention. It's hard to know why. Too remote? Too hard to pronounce? Too eccentric to fit with established theories of prehistory? Whatever the reason, when I flew out on a whim in 2006 (inspired by two brisk minutes of footage on a TV show), even the locals in the nearby big city, Sanliurfa, had no conception of what was out there, in the barrens.

I remember asking a cab driver, the day I arrived, to take me to Gobekli Tepe. He'd never heard of it. Not a clue. Today that feels like asking someone in Paris if they've heard of the Louvre and getting a Non. The driver had to consult several taxi-driving friends until one grasped where I wanted to go – 'that German dig, out of town, by the Arab villages' – and so the driver rattled me out of Sanliurfa and into the dust until we crested one final remote hill and came upon a scene out of the opening titles of the Exorcist: archaeologists toiling away, unnoticed by the world, but furiously intent on their world-changing revelations.

For an hour Klaus (who sadly died in 2014) generously escorted me around the site. I took photos of him and the stones and the workers; this was not a hassle, as there were literally no other tourists. A couple of the photos I snatched that hot afternoon went on to become mildly iconic, such as my photo of the shepherd who found the site, or of Klaus crouching next to one of the most finely-carved T-stones. They were prized simply because no one else had bothered to take them.

Klaus Schmidt (photo: Sean Thomas)

After the tour, Klaus and I retired from the heat to his tent, where, over dainty tulip glasses of sweet black Turkish tea, Klaus explained the significance of the site.

As he put it, 'Gobekli Tepe upends our view of human history. We always thought that agriculture came first, then civilisation: farming, pottery, social hierarchies. But here it is reversed, it seems the ritual centre came first, then when enough hunter gathering people collected to worship – or so I believe – they realised they had to feed people. Which means farming.' He waved at the surrounding hills, 'It is no coincidence that in these same hills in the Fertile Crescent men and women first domesticated the local wild einkorn grass, becoming wheat, and they also first domesticated pigs, cows and sheep. This is the place where Homo sapiens went from plucking the fruit from the tree, to toiling and sowing the ground.'

Klaus had cued me up. People were already speculating that, if you see the Garden of Eden mythos as an allegory of the Neolithic Revolution – our fall from the relative ease of hunter-gathering to the relative hardships of farming (and life did get harder when we first started farming, as we worked longer hours and caught diseases from domesticated animals) – then Gobekli Tepe and its environs is probably the place where this happened. Klaus Schmidt did not demur. He said to me, quite deliberately: 'I believe Gobekli Tepe is a temple in Eden'. It's a quote I reused, to some controversy, because people took Klaus literally. But he did not mean it literally. He meant it allegorically.

Klaus told me more astonishing things.

'We have found no homes, no human remains. Where is everyone, did they gather for festivals, then disperse? As for their religion, I have no real idea, perhaps Gobekli Tepe was a place of excarnation, for exposing the bones of the dead to be consumed by vultures, so the bodies have all gone. But I do definitely know this: some time in 8000 BC the creators of Gobekli Tepe buried their great structures under tons of rubble. They entombed it. We can speculate why. Did they feel guilt? Did they need to propitiate an angry God? Or just want to hide it?' Klaus was also fairly sure on one other thing. 'Gobekli Tepe is unique.'

I left Gobekli Tepe as bewildered as I was excited. I wrote some articles, and then my thriller, and alongside me, many other writers, academics and film-makers, made the sometimes dangerous pilgrimage to this sumptuously puzzling place near the troubled Turkey-Syria border, and slowly its fame grew.

Back here and now, in 2022, Necmi, myself and Aydan Aslan – the director for Sanliurfa Culture and Tourism – jump in a car at Karahan Tepe (Necmi promises me we shall return) and we go see Gobekli Tepe as it is today.

Necmi is right: all is changed. These days Gobekli Tepe is not just a famous archaeological site, it is a Unesco World-Heritage-listed tourist honeypot which can generate a million visitors a year. It is all enclosed by a futuristic hi-tech steel-and-plastic marquee (no casual wandering around taking photos of the stones and workers). Where Klaus and I once sipped tea in a flapping tent, alone, there is now a big visitor centre – where I bump into the grandson of the shepherd who first found Gobekli. I spy the stone where I took the photo of a crouching Klaus, but only from 20 metres away. That's as close as I can get.

After lunch in Sanliurfa – with its Gobekli Tepe themed restaurants, and its Gobekli Tepe T-stone fridge-magnet souvenir shops – Necmi shows me the gleaming museum built to house the greatest finds from the region: including an 11,000-year-old statue, retrieved from beneath the centre of Sanliurfa itself, and perhaps the world's oldest life-size carved human figure. I recall first seeing this poignant effigy under the stairs, next to a fire extinguisher, in Sanliurfa's then titchy, neglected municipal museum. Back in 2006 I wrote about 'Urfa man' and how he should be vastly better known, not hidden away in some obscure room in a museum visited by three people a year.

Urfa man now has a silent hall of his own in one of Turkey's greatest archaeological galleries. More importantly, we can now see that Urfa man has the same body stance as the T-shaped man-pillars at Gobekli (and in many of the Tas Tepeler): his arms are in front of him, protecting his penis. His obsidian eyes still stare wistfully at the observer, as lustrous as they were 11,000 years ago.

(Photo: Sean Thomas)

As we stroll about the museum, Necmi points at more carvings, more leopards, vultures, penises. From several sites archaeologists have found statues of leopards apparently mounting, riding or even 'raping' humans, paws over the human eyes. Meanwhile, Aslan tells me how archaeologists at Gobekli have also, more recently, found tantalising evidence of alcohol: huge troughs with the chemical residue of fermentation, indicating mighty ritual feasts, maybe.

I sense we are getting closer to a momentous new interpretation of Gobekli Tepe and the Tas Tepeler. And it is very different from that perspective Klaus Schmidt gave me, in 2006 (and this is no criticism, of course: he could not have known what was to come).

Necmi – as good as promised – whisks me back to Karahan Tepe, and to some of the other Tas Tepeler, so we can jigsaw together this epochal puzzle. As we speed around the arid slopes he explains how scientists at Karahan Tepe, as well as Gobekli Tepe, have now found evidence of homes.

These places, the Tas Tepeler, were not isolated temples where hunter gatherers came, a few times a year, to worship at their standing stones, before returning to the plains for the life of the chase. The builders lived here. They ate their roasted game here. They slept here. And they used, it seems, a primitive but poetic form of pottery, shaped from polished stone. They possibly did elaborate manhood rituals in the Karahan Tepe penis chamber, which was probably half flooded with liquids. And maybe they celebrated afterwards with boozy feasts. Yet still we have no sign at all of contemporary agriculture; they were, it still appears, hunter gatherers, but of unnerving sophistication.

Another unnerving oddity is the curious number of carvings which show people with six fingers. Is this symbolic, or an actual deformity? Perhaps the mark of a strange tribe? Again, there are more questions than answers. Crucially, however, we do now have tentative hints as to the actual religion of these people.

In Gobekli Tepe several skulls have been recovered. They are deliberately defleshed, and carefully pierced with holes so they could – supposedly – be hung and displayed.

Skull cults are not unknown in ancient Anatolia. If there was such a cult in the Tas Tepeler it might explain the graven vultures pictured 'playing' with human heads. As to how the skulls were obtained, they might have come from conflict (though there is no evidence of this yet), it is quite possible the skulls were obtained via human sacrifice. At a nearby, slightly younger site, the Skull Building of Cayonu, we know of altars drenched with human blood, probably from gory sacrifice.

The shepherd who discovered Gobekli Tepe (photo: Sean Thomas)

Necmi has one more point to make about Karahan Tepe, as we tour the penis chamber and its anterooms. Karahan Tepe is stupefyingly big. 'So far,' he says, 'We have dug up maybe 1 per cent of the site' – and it is already impressive. I ask him how many pillars – T stones – might be buried here. He casually points at a rectangular rock peering above the dry grass. 'That's probably another megalith right there, waiting to be excavated. I reckon there are probably thousands more of them, all around us. We are only at the beginning. And there could be dozens more Tas Tepeler we have not yet found, spread over hundreds of kilometres.'

In one respect Klaus Schmidt has been proved absolutely right. After he first proposed that Gobekli Tepe was deliberately buried with rubble – that is to say, bizarrely entombed by its own creators – a backlash of scepticism grew, with some suggesting that the apparent backfill was merely the result of thousands of years of random erosion, rain and rivers washing debris between the megaliths, gradually hiding them. Why should any religious society bury its own cathedrals, which must have taken decades to construct?

And yet, Karahan too was definitely and purposely buried. That is the reason Necmi and his team were able to unearth the penis pillars so quickly: all they had to do was scoop away the backfill, exposing the phallic pillars sculpted from living rock.

I have one more question for Necmi, which has been increasingly nagging at me. Did the people who built the Tas Tepeler have writing? It is almost impossible to believe that you could construct such elaborate sites, in multiple places, over thousands of square kilometres, without careful, articulate plans – that is to say, without writing. You couldn't sing, paint and dream your way to entire inhabited towns of shrines, vaults, water channels and cultic chambers.

Necmi shrugs. He does not know. One of the glories of the Tas Tepeler is that they are so old, no one knows. Your guess is literally as good as the expert's. And yet a very good guess, right now, leads to the most remarkable answer of all, and it is this: archaeologists in southeastern Turkey are, at this moment, digging up a wild, grand, artistically coherent, implausibly strange, hitherto-unknown-to-us religious civilisation, which has been buried in Mesopotamia for ten thousand years. And it was all buried deliberately.

Jumping in the car, we head off to yet another of the Tas Tepeler, but then Necmi has an abrupt change of mind, as to our destination.

'No, let's go see Sayburc. It's a little Arab village. A few months ago some of the farmers rang us and said "Er, we think we have megaliths in our farmyard walls. Do you want to have a look?"'

Our cars pull up in a scruffy village square, scattering sheep and hens. Sure enough, there are classic Gobekli/Karahan-style T-stones being used to buttress agricultural walls; they are probably 11-13,000 years old, just like everywhere else. There are so many of them that I spot one of my own, on the outskirts of the village. I point it out to Necmi. He nods, and says 'Yes, that's probably another.' But he wants to show me something else.

Pulling back a plastic curtain we step into a kind of stone barn. Along one wall there is a spectacular stone frieze, displaying animal and human figures, carved or in relief. There are leopards, of course, and also aurochs, etched in a Cubist way to make both menacing horns equally visible (you can see an identical representation of the auroch at Gobekli Tepe, so similar one might wonder if they were carved by the same artist).

(Photo: Sean Thomas)

At the centre of the frieze is a small figure, in bold relief. He is clutching his penis. Next to him, being threatened by the aurochs, is another human. He has six fingers. For a long while, we stare in silence at the carvings. I realise that, a few farmers apart, we are some of the first people to see this since the end of the Ice Age.

All Comments: [-] | anchor

yesenadam(10000) 5 days ago [-]

Related: I recently watched the amazing Turkish Netflix series Atiye (2019-21), in which Göbekli Tepe features centrally. The main character, Atiye, is a painter who's painted the same symbol all her life, and one day sees it in a news story about Göbekli Tepe, and feels compelled to travel there immediately. The epic story involves time travel, alternate realities, spirituality/mythology, archaeologists, academics, history, family, love etc. Also you get to see a lot of the Turkish countryside. I and the SO thought it really wonderful, highly recommended. (Warning: Season 3 is a kind of new story and loses the addictive watchableness of the first 2, but is not terrible.)



p.s. Bir Başkadir (aka Ethos, 2020-) and Fatma (2021-) are another two excellent Turkish Netflix series.



wdutch(10000) 4 days ago [-]

Tangentially, I wasn't aware of Turkey as an exporter of TV until I recently traveled in Cambodia, and everyone I spoke to went on about the Turkish dramas they were watching. Thanks for the recommendation! This sounds like a good starting point for me to get into Turkish shows.

yamrzou(10000) 4 days ago [-]

I second Bir Başkadir (Ethos). Deep and moving. One of my all time favorite series.

Thorentis(10000) 4 days ago [-]

Sounds like something copied from Battlestar Gallactica. One of the characters paints the same symbol her whole life and eventually discovers it means something significant for her and all of humanity.

adastra22(10000) 4 days ago [-]

This sounds right up my alley, thanks!

bsnnkv(10000) 5 days ago [-]

I'm going to check this out based on your recommendation!

devilbunny(10000) 4 days ago [-]

Thank you for the recommendations.

russellbeattie(10000) 4 days ago [-]

Season three always is so disappointing. Barry's doing what now??

atmosx(10000) 4 days ago [-]

I enjoyed Ethos greatly. I saw Atiye as well. Enjoyed both, but Ethos to me was eye-opening and pretty deep. I knew about Kemalists (seculars) vs Muslims (supported by Erdogan) and was kind of fascinated by the division and how different it is from the Greek division (left vs right), but also by the similarities.

01acheru(10000) 5 days ago [-]

Gobekli Tepe and the others in its surroundings are surely a great archaeological discovery, one that brings the date of what we call civilization back some thousands of years. Anyway, let's not forget that Jericho is 11k years old or even older, and probably there are many, many other ancient cities buried deep under the thousands of tells in the Near East.

(those that follow are just wild conjectures)

A thing I've always thought is that around that era the sea level had already been rising for some thousands of years, and kept rising for some thousands more [1], to a total of 130 meters. Humans have always tried to live near the sea: fishing was easier, we need iodine to be healthy, the climate is better and more stable, etc. So my take is that our first settlements will never be found; they are 100 meters underwater, eroded and covered by sand.

The cities we find are the ones that humans built after they got pissed off by the ever-rising sea and one day decided 'fuck it, I'm going way up now'. So those ancient fellas already had a lot of experience building cities by then.

[1] https://sealevel.nasa.gov/faq/13/how-long-have-sea-levels-be...

Razengan(10000) 4 days ago [-]

That would tie in nicely with the myth of Atlantis. :)

simonh(10000) 4 days ago [-]

We do tend to build near sources of water, but rivers and lakes have always been popular. I see no reason to believe they would have been less popular back then than at later periods.

Obviously there's a good chance a lot of significant sites were flooded by rising sea levels, but no reason to expect all of them were.

sfifs(10000) 4 days ago [-]

So yeah, one hypothesis is that there was an advanced pre-historic culture that's now buried under the Persian Gulf, and that the fairly rapid filling of the Gulf dislocated its people and drove the dissemination, in all directions, of culture and the common mythological elements we see in many cultures today.


davedx(10000) 4 days ago [-]

Yeah, the sea level rise thing is one explanation for why Sumerian is a language isolate. This YouTube series is a great deep dive into it all: https://www.youtube.com/watch?v=d2lJUOv0hLA&t=959s

TedDoesntTalk(10000) 4 days ago [-]

Ephesus, an ancient city in Western Turkey, was a coastal city. Now it is 6 km inland.

jl6(10000) 4 days ago [-]

I believe fishermen in the North Sea occasionally drag up artifacts from the sea bed that was the former Doggerland.

jl6(10000) 4 days ago [-]

What gets me about prehistory is that even before civilization (the development of permanent settlements), humans existed in anatomically modern form, and even if they never wrote anything down, would surely still have used language. What did they talk about? We know that illiterate people today are perfectly capable of forming complex thoughts and reasoning. When did those thoughts first emerge? Somebody must have been the first to look up at the stars and wonder. They could have done more than just wonder. There could have been geniuses and villains and poets and master storytellers, long before they had any capability to preserve their culture through writing. And this was probably tens or even hundreds of thousands of years before the earliest stone remains that we call civilization.

dr_dshiv(10000) 4 days ago [-]

Here are some conversational talking points:

"Another unnerving oddity is the curious number of carvings which show people with six fingers. Is this symbolic, or an actual deformity? Perhaps the mark of a strange tribe? Again, there are more questions than answers. Crucially, however, we do now have tentative hints as to the actual religion of these people.

In Gobekli Tepe several skulls have been recovered. They are deliberately defleshed, and carefully pierced with holes so they could – supposedly – be hung and displayed.

Skull cults are not unknown in ancient Anatolia. If there was such a cult in the Tas Tepeler it might explain the graven vultures pictured 'playing' with human heads. As to how the skulls were obtained, they might have come from conflict (though there is no evidence of this yet), it is quite possible the skulls were obtained via human sacrifice. At a nearby, slightly younger site, the Skull Building of Cayonu, we know of altars drenched with human blood, probably from gory sacrifice."

philipov(10000) 4 days ago [-]

Civilization is usually defined as more than just the development of permanent settlements. Permanent settlements existed in the fertile crescent during the neolithic for thousands of years before civilizations emerged in that region during the early bronze age.

What characterizes civilizations is the emergence of stratified societies in those settlements, with specialized classes of people such as priests and metalworkers, who relied on others to provide them with food.

dieselgate(10000) 4 days ago [-]

Yeah it's interesting to think about humanity living without any "permanent" settlements and what the conversations were like..

_pass the salt_

kqr(10000) 4 days ago [-]

A useful heuristic as far as I know is to imagine 'what would a modern human do in that environment?'

Make foraging plans, tell stories, bicker, gossip, predict weather, teach toolmaking and clothesmaking, discuss fair allocation of resources, arrange parties, celebrate relationships, make gifts, design song performances, learn dance moves, coordinate construction, agree on repairs for criminal damages, learn local and exotic flora, plan long travels to faraway lands, and the list goes on.

macleginn(10000) 4 days ago [-]

Very close similarities between mythological narratives in very different parts of the world (e.g., South Africa and Australia) show that some of them should have appeared tens of thousands of years ago. The earliest motifs seem to be about the origin of death. https://folklore.elpub.ru/jour/article/view/28?locale=en_US

EnKopVand(10000) 4 days ago [-]

Sapiens by Yuval Noah Harari deals with a lot of the things you're pondering.

fouronnes3(10000) 4 days ago [-]

Humans in 13k years will look back and wonder at us too. Somebody must have been the first to upload to the HyperBrainSuperXL10000 and feel one with the solar system.

WHA8m(10000) 4 days ago [-]

I've only recently heard about the 'stoned ape theory'. It basically plays with the idea that drugs (in this case mushrooms) must have played a crucial role in the development of thinking, thought and consciousness (as we humans know it).


Jedd(10000) 4 days ago [-]

Well, yes - anatomically (and to a large degree, intellectually) we're probably indistinguishable from homo sapiens of 300k years ago.

In Yuval Noah Harari's book 'Sapiens - a Brief History of Humankind' he posits that there was a 'cognitive revolution' about 70k years ago, during which our cognitive & language capabilities moved beyond the purely physical observations, to the more abstract & metaphysical.

This allowed for a shedload of cultural changes -- larger societies, the birth of religions / mythologies, money, etc. All those abstract concepts that work when we all agree they exist even if they actually don't.

To your question - yes, we necessarily only have written history from civilisations that wrote things down (and those things survived), but it seems like there was an evolutionary change that meant we could finally leverage our one distinctly useful skill, coordinating large groups of people at scale, by getting us all to adopt a shared reality that didn't exist yet. (This kind of explains why we adopted agriculture, despite it clearly being a bad idea at the time.)

Balgair(10000) 4 days ago [-]

> humans existed in anatomically modern form

However, our genetic form may have been very different. It seems that lactase persistence[0] was not widespread in nearly any human population until the domestication of animals.

I know that comes off as a bit pedantic, but sometimes those little genetic variations matter quite a bit. I won't cop to being well read in paleogenetics, but I have a feeling that there's been a lot of evolution between them and us.

[0] https://en.wikipedia.org/wiki/Lactase_persistence

simonh(10000) 4 days ago [-]

There's a fascinating hypothesis about the development of recursive reasoning that's worth a read. Obviously we don't have access to the actual language and verbal culture of people back then, but it looks like we might be able to map out some of the cognitive functions they had available to them, and correlate that to the development of material culture, including artistic expression.


uwagar(10000) 4 days ago [-]

writing is not all that important. oral transmission is effective when the immediate community is just a huddle.

DrBenCarson(10000) 5 days ago [-]

> extensive article about 'ancient eastern Turkey'

> no mention of Armenia and/or ancient Armenia


For the curious, this is what Armenia used to look like on a map before that whole 'genocide' thing: https://www.gampr.org/historicaltimeline

trentearl(10000) 5 days ago [-]

Why would it mention Armenia? If I'm reading your map correctly, this site wasn't in Armenian control for 700 years. I've visited the region before; if I remember correctly, this region is geographically more in the center of the Assyrian empire than the Armenian.

ascari(10000) 4 days ago [-]

That map looks like an extremely nationalist Armenian view. There were a billion other civilizations/dynasties/kingdoms that controlled the region of so-called 'Greater Armenia'.

thriftwy(10000) 5 days ago [-]

Every nation has such a map and we probably need three earths to satisfy all of these.

hrdwdmrbl(10000) 5 days ago [-]

There's an old saying about online headlines: If the headline asks a question, the answer is always no. If the answer were yes, the headline would say so.

867-5309(10000) 5 days ago [-]

not all questions have binary answers

ergonaught(10000) 5 days ago [-]

And yet the answer is yes.

SemanticStrengh(10000) 5 days ago [-]

A 2016 study of a sample of academic journals that set out to test Betteridge's law and Hinchliffe's rule (see below) found that few titles were posed as questions and of those, few were yes/no questions and they were more often answered 'yes' in the body of the article rather than 'no'.[12]

ksaj(10000) 5 days ago [-]

This time, the answer seems to be yes. It's really quite impressive if these were created by hunter-gatherers before the invention of written language.

capableweb(10000) 5 days ago [-]

Betteridge's law of headlines:

> 'Any headline that ends in a question mark can be answered by the word no.' ... It is based on the assumption that if the publishers were confident that the answer was yes, they would have presented it as an assertion; by presenting it as a question, they are not accountable for whether it is correct or not.


notacoward(10000) 5 days ago [-]

Betteridge's Law from 2009, arguably repeating offline antecedents all the way back to 1991. If that seems old, maybe archaeology isn't for you.

Emma_Goldman(10000) 4 days ago [-]

Apparently, new evidence discredits the idea that the site was built by hunter-gatherers:

'New insights from several deep soundings excavated... have exposed the weaknesses of the temple-narrative, meaning that a revision of the popular scientific view is now unavoidable (Fig. 1). Specifically, the latest observations relate to the existence of domestic buildings and the harvesting and distribution of rain-water at Göbekli Tepe.'


kqr(10000) 4 days ago [-]

Note that the dichotomy between 'hunter-gatherers' and 'permanently settled' is a bit overstated.

- There's some evidence that various people in history have been seasonally permanently settled, and foraging without a fixed home part of the year.

- Other people settle for a few years and then move on, doing small-scale growing of crops when they are settled.

- Yet others encourage wild growth of food crops, and generally move around and forage in a large area that has been lightly tended to in that manner. (Meaning they don't build homes that last for centuries, but they also permanently inhabit a large area.)

- And in some societies, there are classes of people that are permanently settled but live in symbiosis with more mobile foragers, who live in the permanent settlements only a little at a time.

ComputerGuru(10000) 4 days ago [-]

TFA itself talks about this, but not to disparage the hunter-gatherer aspect but rather to claim that there was an actual, ~permanently inhabited city here, that was still nevertheless pre-agrarian. That's part of the mystery and importance of these sites: they upend what we thought we knew.

jscipione(10000) 5 days ago [-]

-1 on this article and its abundant phallic references. +1 on Gobekli Tepe being evidence of a pre-literate advanced civilization.

mysecretaccount(10000) 5 days ago [-]

As far as civilizations go, Gobekli Tepe is not really 'advanced'.

edit: I'm not sure why the downvotes, trying to clarify that this does not add credence to pseudohistorical narratives about long-lost advanced civilizations.

gerdesj(10000) 5 days ago [-]

What is wrong with referring to the physical evidence? The bloody things are literally sticking up out of the ground.

OK - not really 'literally' unless the nobs have learned to write or have been written/drawn or I've messed my vowels and got litorally confused with literally (litor is shore as in seashore or river bank in Latin). Oh and they are not bloody either unless rubbed too hard. Nob, is of course: en_GB (slang) for an upper class person or a penis.

Back to the article. This archaeological site seems to be extremely important. It seems to show that our ideas of when people started to put down roots ie build stuff and become fixed to a location (Latin - locus) started to happen earlier than we thought it did.

cletus(10000) 4 days ago [-]

A little over 2,000 years ago was the height of Rome. 2,000 years before that the dominant empire not that far from there was Assyria. 2,000 years before that it was Mesopotamia and woolly mammoths still roamed the Earth. This is about our limit of recorded history as the earliest surviving writing (actually pictographs) is from ~3500 BC.

Obviously there was history before then, and we can really only see evidence of it in surviving archaeological remains (as per this article). So another 2,000 years before Mesopotamia we've found remains of Neolithic villages that are now under the English Channel.

And yet we're still 4,000 years after the ruins from this article. It's wild. Stonework has a tendency to last. Wood obviously wouldn't. You really wonder how long humans have been in permanent settlements, and we can only imagine what life was like then, what language they had and what they believed.

And to put it in even more perspective: if the entire history of Earth were a single year, all of the above happened in the last 83 seconds.
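The 83-second figure checks out; a minimal sketch of the scaling, assuming the standard ~4.54-billion-year age of the Earth and ~12,000 years back to the ruins (my arithmetic, not the commenter's):

```python
# "Earth's history as one year": compress 4.54 billion years into 365.25 days
# and ask how close to midnight on Dec 31 an event lands.
EARTH_AGE_YEARS = 4.54e9            # accepted age of the Earth
SECONDS_PER_YEAR = 365.25 * 86400   # seconds in the compressed "year"

def calendar_seconds(years_ago: float) -> float:
    """Seconds before New Year's midnight on the compressed one-year scale."""
    return years_ago / EARTH_AGE_YEARS * SECONDS_PER_YEAR

print(round(calendar_seconds(12_000)))  # → 83
```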

01acheru(10000) 4 days ago [-]

It's fascinating to think about the past in this fast-backward fashion, I love it (and also fast-forwarding from past to present).

What you said about the language reminded me of this video about PIE language, I speak some European languages and it's amazing that many words feel so familiar. And it's even nicer to read the comments, all that people from around the world finding stuff so close in their native modern language!


rosetremiere(10000) 4 days ago [-]

Are you sure your timeline is correct? Assyria as an empire is closer to -1400 to -700, with the dominant part coming after -1000. Similarly, there was no 'Mesopotamian empire' in -4000 that I'm aware of: before -2000 I think it was mostly city states all around the near east.

I'm no expert... please correct me if I'm wrong.

walrus01(10000) 4 days ago [-]



There's a fair bit of things that may be far under the ocean now, but were dry land during the last glacial maximum.

adictator(10000) 4 days ago [-]

You are ignoring the extremely detailed recording of history in the Indian subcontinent. Events, including the kings/queens that ruled over the Gangetic plains, their sons & lineage, wars, migrations, volcanic eruptions, weather changes (floods, earthquakes etc) have been VERY meticulously recorded as Itihāsa - Sanskrit for 'It so happened'. This goes back to at least 27,000 years before present - firmly placing the Indian subcontinent as the root of all of current human civilization. Why would you ignore such a vast & undeniable evidence, unless your 'recorded history' is deemed to be Greek centric and not universal? That is hardly history!!

r3trohack3r(10000) 4 days ago [-]

To put that in perspective, 2000 years is roughly how long it took humans to question/revisit Aristotle's 4 element theory - captured in the The Sceptical Chymist: or Chymico-Physical Doubts & Paradoxes. That was written less than 400 years ago. It's amazing how slow things moved until the 1600s and the explosion of progress that's followed.

reactspa(10000) 4 days ago [-]

Does anyone know how something like this could have been carved back in the stone age, without metal tools?


ricardobeat(10000) 4 days ago [-]

Harder stone?

doodlebugging(10000) 4 days ago [-]

The hardness of a rock is related to its diagenetic history and mineral content. These rocks are sedimentary rocks - limestone, sandstone, etc. If one has a source of metamorphic or igneous rocks from which to fashion tools then carving these figures is simplified. Even a tightly cemented sedimentary rock can be used to grind or carve and flint is a sedimentary rock that has multiple documented uses as a tool. Needles, awls, spear and arrow heads, axes, hammers, etc.

If you have the time to do something like this and the imagination then I'm sure you could produce these carvings using things you found on the ground pretty easily.

vlz(10000) 4 days ago [-]

You would need a rock that is harder than the rock you carve into. Possibly a „Hand axe" made from flint would suffice:


Barrera(10000) 5 days ago [-]

A lot hinges on the dating of this site. How was it done?

According to this article[1] (which has a nice wide-angle view of t-pillars in context), it was radiocarbon dating[1]. You find bits of stuff at the site and the isotopic composition of the carbon-containing material tells you the age based on known rates of carbon-14 decay.
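Mechanically, that reduces to exponential decay: the less carbon-14 left relative to a living reference, the older the sample. A minimal sketch with illustrative numbers (not the site's actual measurements):

```python
import math

# Radiocarbon dating in one line: C-14 decays with a half-life of ~5,730
# years, so the surviving fraction of C-14 implies an age via
#   age = -ln(fraction) * half_life / ln(2)
HALF_LIFE_C14 = 5730.0  # years

def radiocarbon_age(fraction_remaining: float) -> float:
    """Age in years implied by the surviving C-14 fraction."""
    return -math.log(fraction_remaining) * HALF_LIFE_C14 / math.log(2)

# A sample retaining ~26% of its original C-14 dates to roughly 11,000 years:
print(round(radiocarbon_age(0.26)))  # → 11136
```

The real complication, as the comment goes on to explain, isn't this math but knowing where the dated material came from.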

The actual isotopic analysis seems pretty solid.

The problem is that this isn't any ordinary site. The article notes that the site appears to have been deliberately buried. This raises the question of where the samples that were dated actually came from.

This critical review suggests major problems with the older-than-everything-else hypothesis for the site.[2] It notes that at least some of the samples were dated from 'fill,' or the stuff that was used by someone at some point to bury the site. And that stuff could have itself come from sites much older than Gobeklitepe:

> We already discussed the problem with dating "fills" as opposed to dating "structures". A fill's date (no matter how confident we may fill about its actual date) in no way dates structures, as it simply can be coming from soil deposits that are either older or younger than the structure itself. You can fill your home with dirt from your yard, which could be from various geologic strata, some containing fossils from the Pleistocene. This will not make your home a Pleistocene Epoch home. Or you can currently fill a 4th century BC Temple with soil from riverbanks containing live exoskeletons; this will not render the Temple a 2000 AD structure.

Even if the site wasn't deliberately buried, everything hinges on where the fill came from. The base assumption of radiocarbon dating is that no foreign material was brought in. The shakier that assumption, the shakier the claim to the ages being quoted.

[1] https://www.asahi.com/ajw/articles/14487942

[2] https://www.researchgate.net/publication/317433791_Dating_Go...

oa335(10000) 4 days ago [-]

According to this publication [1] by the 'rediscover-er' of Gobeklitepe, they found and dated an animal tooth that confirmed their original dating of the site to somewhere around 9000 BCE. It doesn't look like the critical review that you linked in your comment addresses that.

[1] https://www.researchgate.net/publication/257961716_Establish...

dr_dshiv(10000) 4 days ago [-]

Note that the infill dates to the Black Sea Deluge period (8000 years ago), when there would have been massive population upheavals in the area.

jltsiren(10000) 5 days ago [-]

Things rarely depend on any single form of evidence. The prevailing interpretations are usually based on multiple forms of corroborating evidence. For example, there are plenty of other neolithic sites in the same region, and many of them share the same art style as Göbekli Tepe. The overall timeline is rather well established based on evidence such as human / animal / plant remains, art, tools, and genetics.

Also, [2] seems to be a self-published article written by someone with no background, publication record, or known collaborations in archaeology or related fields. I would not put much weight on it, especially because I have no background in archaeology either and I'm therefore unable to interpret its reliability.

dataflow(10000) 5 days ago [-]

I'm confused, why wouldn't they date some tiny piece from the structure itself? It seems like an obvious thing to do, is it not?

DataDaoDe(10000) 5 days ago [-]

As someone who knows nothing about the field of archeology, it would interest me to know: what other methods of dating could they use to improve the estimate, or address to some degree the concerns you raise?

blacksqr(10000) 4 days ago [-]

'The last intrusions in the big enclosures can be dated by a charcoal sample taken from under a fallen pillar fragment in Enclosure A to the middle of the 9th millennium.'


maegul(10000) 5 days ago [-]

I'm not sure I've seen such superficial commentary here (it feels more like Reddit frankly) for such a popular thread.

Not really a criticism (ok maybe a little), but perhaps more of an indication that these discoveries are truly novel and baffling.

yung_steezy(10000) 4 days ago [-]

I would guess comments are more insightful when the topic being discussed is less niche.

AlotOfReading(10000) 5 days ago [-]

This is fairly typical for HN links that touch on ancient archaeology, especially anything tangentially related to popular alt-history figures like Graham Hancock. However, metadiscussion about the quality of other comments feels like it goes against a few of the rules. It's better to explain the issues directly in replies.

vmception(10000) 5 days ago [-]

Return the slab

To seal it off again

No human remains and intentionally sealed off, take the hint

andrewljohnson(10000) 5 days ago [-]

Why do you say this, are you implying there is some danger?

klyrs(10000) 5 days ago [-]

It's a coverup!

KaoruAoiShiho(10000) 4 days ago [-]

I wasted about 30 minutes recently watching some pseudo historical videos on youtube with millions of views like this one about atlantis:


Let's be clear it's completely wrong about nearly everything, but it was entertaining and I think would be quite persuasive to a lot of people.

The Turkey archeological digs is the factual underpinning of all these theories, so it goes to show how important it is and how much more of our models we need to clarify.

pwndByDeath(10000) 4 days ago [-]

I'm not sure how comfortable we should be 'knowing' anything we can't make predictions on. There is certainly a spectrum of quality in scientific knowledge and things like psychology and archeology seem on the weak end

robonerd(10000) 5 days ago [-]

I eagerly await the admission from mainstream historians that all these tons of rock were not carved by semi-nomadic hunter gatherers. The scope of the stonework is evidence of agriculture; it's obnoxiously obvious yet still fringe to say it.

stubish(10000) 4 days ago [-]

To this layperson, the narrative on Gobekli Tepe seemed fairly compelling. Here is a site situated at roughly the time and place grains first started being cultivated (per previous studies on ancient grains). A site where large amounts of meat were consumed, but animals were not slaughtered on site; the animals were killed elsewhere and only the desirable bits transported there. And earthenware troughs believed to have been used to ferment beer. So: a site for nomadic hunter gatherers to gather at (festival? winter? just hang out?), at the point in time when we were starting to actually cultivate grains and transition from hunter gatherers to farmers. Other sites dating later were definitely agricultural, with equipment for farming grains and animals, and human remains showing the poor health associated with early agricultural settlements and the dense populations they allowed.

pvg(10000) 5 days ago [-]

The standard for these things is usually something like 'evidence' rather than 'obnoxiously obvious'.

throwyawayyyy(10000) 5 days ago [-]

Graeber spends a lot of time arguing against exactly this assumption in The Dawn of Everything. I.e. that agriculture must come before civilization. Pretty persuasively, IMHO.

carapace(10000) 2 days ago [-]

I dunno, Coral Castle was built by one man, (likely (IMO) using the method rediscovered by Wally Wallington.)



Bayart(10000) 5 days ago [-]

Obvious based on what ? Assumptions we make about history ? The idea that there's a linear, idempotent path of progress has been challenged for good reasons : discoveries such as these.

gobengo(10000) 5 days ago [-]
hereforphone(10000) 5 days ago [-]

Nice try

alephxyz(10000) 5 days ago [-]

It's actually in southeastern Anatolia, not a region with a historical Armenian presence

5cott0(10000) 5 days ago [-]
superultra(10000) 4 days ago [-]

It's so good. So glad someone mentioned this.

It feels like one of the most important books I've read in a long time.

I wish HBO would adapt it into a fictional series, as a way to ignite our imagination about our ancestors.

tombh(10000) 5 days ago [-]

Me too. It's essentially a polemic of the common idea that modernity 'fell' from Eden, or more conventionally, 'fell' from Rousseau's State of Nature.

Gobekli Tepi is used as one of many examples of how nowadays the evidence is stacked against the idea of agriculture being an inevitable and necessary step on the road to civilisation and all its concomitant ills. Rather the picture is far less linear, indeed it would seem that many societies both knew and had the ability to farm, but actively chose not to.

I haven't finished it yet, but personally it's bringing 'modernity' down a peg or 10. It seems that all the possible forms of social organisation that we can imagine, and more, have already been experimented with, multiple times even. What's unique about our version, isn't so much its innovation, but merely its scale. And if we consider this current scale as, encompassing-all-the-lands-we-know-of, then that too has already been and, crucially, gone. What if there have already been societies that, not only witnessed that ultimate jeopardy of the complete collapse of their all-encompassing civilisation, but also went beyond and innovated a post-civilisation society? In some ways that would make them more 'modern' than us.

Zigurd(10000) 4 days ago [-]

It's an excellent "reset" on pop science anthropology. Some of its points, like that Rousseau and Hobbes oversimplified to support their philosophies, should be fairly obvious as anthropological evidence mounts. Similarly, that hunter-gatherer and farmer are points on a spectrum.

We have choices about how we organize civilization. Neither Rousseau nor Hobbes depicted destiny, just choices.

fantasticshower(10000) 4 days ago [-]

If you're excited by prehistory and hunter-gatherer society might I suggest reading The Earth's Children Series by Jean Auel.

It's one of my favorite series and was well researched by the author before writing. It tells the story of a young human who travels across Europe.

trashtester(10000) 4 days ago [-]

The first one was very interesting; notably, it included interbreeding with Neanderthals before Neanderthal DNA was discovered in our own.

The rest had quite a few soap opera / "housewife porn" elements, but also some interesting bits about toolmaking, hunting, etc.

pratik661(10000) 4 days ago [-]

I know it's established that some of the oldest civilizations started in Anatolia/Mesopotamia. Could it be because the dry climate there preserves old structures better than damp Germany or tropical Southeast Asia?

hinkley(10000) 4 days ago [-]

Tropical forests are so aggressive, I don't think we'll ever appreciate how rich the civilizations of Central and South America were until we've had major advances in subterranean mapping technology. Most of the artifacts are likely in anaerobic pockets underground, if they still exist at all.

I recall years ago when they discovered that a 'ziggurat on a hill' was in fact not on a hill, the jungle was just doing an excellent job of burying it.

Synaesthesia(10000) 4 days ago [-]

The climate of the middle east was different thousands of years ago. Arabia and the Sahara were not deserts.

ComputerGuru(10000) 4 days ago [-]

Not to be that person, but Anatolia and Mesopotamia were not "dry" even if they approach that description today.

They had milder weather than Germany to be sure, but I don't know if you can definitively say they saw less precipitation.

gerdesj(10000) 5 days ago [-]

We need an older word than civilisation in English.

Civis (citizen) is Latin (civil etc) which is only around 2500 years old give or take a bit. We also have polis (city) related words from old Greek for politician, police polite etc.

We clearly need some words derived from really old Anatolian languages or perhaps there are some already.

Bayart(10000) 5 days ago [-]

If we needed to use words as old as the concepts they're used for, we couldn't do history.

Tagbert(10000) 5 days ago [-]

"Civilization" comes from an Indo-European root "kei" meaning "to lie" as on a surface. That takes it back about 4,000-5,000 years. That PIE root certainly had an older ancestral word. Since PIE is from just north of Anatolia, it is possible that PIE is descended from a language of Göbekli Tepe.

Bjartr(10000) 5 days ago [-]

Why is the age of the word or its roots significant here?

bobkazamakis(10000) 5 days ago [-]

>We need an older word than civilisation in English.

>Civis (citizen) is Latin (civil etc) which is only around 2500 years old give or take a bit.

Do we need a new word for yeet? Seems like that might be outdated too!

sydthrowaway(10000) 5 days ago [-]

So the Proto Indo Europeans have finally been found.

astrange(10000) 4 days ago [-]

That's the Yamnaya. They're from the area of approximately southern Ukraine and Crimea.

selimthegrim(10000) 5 days ago [-]

Could be the proto-Vainakh/Chechens for all we know

danans(10000) 5 days ago [-]

What makes you think the site is Indo-European? There's no obvious link mentioned in the article, and the site predates earliest known Indo-European migrations by 4000 years.

dimal(10000) 4 days ago [-]

> archaeologists in southeastern Turkey are, at this moment, digging up a wild, grand, artistically coherent, implausibly strange, hitherto-unknown-to-us religious civilisation, which has been buried in Mesopotamia for ten thousand years. And it was all buried deliberately.

Maybe they realized that civilization is a miserable slog and they should just go back to hunting and gathering.

astrange(10000) 4 days ago [-]

You'll like hunting and gathering if you like having to kill twin children because they'll exhaust the food supply.

system2(10000) 4 days ago [-]

Yet another crazy thing found in Turkey which won't turn into an unbelievable tourist attraction. They have the oldest churches, Cappadocia, Anatolia etc. None of them are known by foreigners.

dimitrios1(10000) 4 days ago [-]

Might blow your mind that the Patriarchate of Constantinople is still located in Istanbul.

thewarpaint(10000) 4 days ago [-]

Kapadokya is a pretty popular touristic attraction. Source: was there in 2018 with a bunch of non-Turkish people.

7thaccount(10000) 4 days ago [-]

The article says it is a tourist trap now with over a million visitors a year.

Mo3(10000) 5 days ago [-]

Some day, they'll find us too.

coffeeblack(10000) 5 days ago [-]

Or maybe not. Who knows how many large civilizations have never been found.

travis_brooks(10000) 5 days ago [-]

Right, the future archeologists will find some old parking meters and assume the primitive ancients had some sort of steel penis cult.

prescriptivist(10000) 4 days ago [-]

Off topic from the post but something I find fascinating. I live in Maine and recently became aware of the Vail Site in northwest Maine, which purports to be around 13000 years old [1]. The beringian migration predates that by a few thousand years I believe but the actual dispersal of Paleo-Indians to the broader Americas hinges on the melting of the ice sheets that covered North America up until only a thousand years or so prior to the Vail site proper. I know we are talking about a scale of thousands of years but it blows my mind that people (without work animals) found their way across the continent and set up shop here when there was probably little actual reason to come here at all since, presumably, game and fish were as robust in all the lands they traveled to arrive here and fellow human pressure was non-existent.

[1] https://en.wikipedia.org/wiki/Vail_Site

mushbino(10000) 4 days ago [-]

Human footprints were recently found in New Mexico that were definitively dated to 23,000 years ago so humans have probably been here for much longer than that.

doodlebugging(10000) 4 days ago [-]

When I consider stuff like this I look back on my own experiences in life and it is easy enough to see that an ordinary person or group of people can cover a lot of ground in a short period of time on foot.

From the west coast of Alaska to the east coast in Maine it is about 5500 miles. Moving 10 miles a day it only takes 18 months. Even if you only lived for 30 years back then there is plenty of opportunity for a single individual to have made the entire journey on foot allowing lots of opportunities for seasonal pauses or to delay progress because they liked the new digs better than the last place they stopped.

A reasonably adventurous person could easily have seen most of the continental US in a lifetime especially when you consider that boats were part of their skill sets. Even moving as a group you could easily traverse the continent settling for short periods wherever things looked promising.

I will have to look up the Vail site as I am not familiar with that one. I know there is the Buttermilk site in central Texas (Gault site) that has yielded dates in the 16000-21000 yr range as near as I remember.

ema(10000) 4 days ago [-]

I doubt that fellow human pressure was non-existent. A thousand years is plenty of time for even a small founding population to swell enough in size to fill every nook and cranny of a continent.

briga(10000) 4 days ago [-]

>there was probably little actual reason to come here at all since, presumably, game and fish were as robust in all the lands they travelled to arrive here and fellow human pressure was non-existent.

I don't think this is true, there were lots of good reasons to move into the Americas, notably the presence of many large species of game animals that evolved without natural defences against humans (which were all quickly hunted to extinction or out-competed). Maine would have been on the fringes of habitability at the time, but other areas like Mexico and Peru were ideal climates for humans to move to, much better than the Siberian wilderness their ancestors travelled through.

jonny_eh(10000) 5 days ago [-]

I skimmed the article but didn't see how they determined its age.

its_ethan(10000) 5 days ago [-]

I was looking for this too -- 11,000 years is sort of the benchmark for earliest civilization, so having that be the lower bound for how "young" this place could be struck me as something like clickbait?

edit: looks like someone posted from another source that it was done with radiocarbon dating - no reason to think that's incorrect, it just would've been a nice extra sentence or two to include to avoid this very hang-up that at least two people had..

aksss(10000) 5 days ago [-]

From Gobekli Tepe, but probably similar:

'At the end of its uselife, the megalithic enclosures of Göbekli Tepe were refilled systematically. This special element of the site formation process makes it hard to date the enclosures by the radiocarbon method, as there is no clear correlation of the fill with the architecture. Several ways have been explored to overcome this situation, including the dating of carbonate laminae on architectural structures, of bones and the remains of short-lived plants from the filling. The data obtained from pedogenic carbonates on architectural structures back the relative stratigraphic sequence observed during the excavation. But, unfortunately, they are of no use in dating the sampled structures themselves, as the carbonate layers started forming only after the moment of their burial. At least these samples offer a good terminus ante quem for the refilling of the enclosures. For layer III this terminus ante quem lies in the second half of the 9th millennium calBC, while for layer II it is located in the middle of the 8th millennium calBC.'


hsn915(10000) 4 days ago [-]

Samo Burja has an excellent essay about why civilization is older than we think


Some interesting quotes:

> When we find remains of beavers, we assume they built beaver dams, even if we don't immediately find remnants of such dams.
>
> [...]
>
> When we find Homo sapiens skeletons, however, we instead imagine the people naked, feasting on berries, without shelter, and without social differentiation.

henriquemaia(10000) 4 days ago [-]

Thank you for sharing it. I read it and confirm it's an excellent essay. Your quote was the perfect teaser!

simonh(10000) 4 days ago [-]

Sorry but that quote has convinced me not to read the rest. It's absurd. Firstly no we don't just assume beavers made dams, we don't need to because we've found plenty of ancient remains of beaver dams. We know beavers and dam construction behaviour evolved at some point and want to know when and how that happened, so we look for evidence linking the two. If we just made blind assumptions it would not be possible to figure out the developmental timeline.

Secondly the development of evolved instinctive behaviour is in no way comparable to human learned cultural behaviour, such as technology. That should be so obvious I'm at a loss that I have to even point it out.

russellbeattie(10000) 4 days ago [-]

> When thinking about the dating of agriculture it is important to remember that Göbekli Tepe was rediscovered rather than discovered. In October 1994, the archaeologist Klaus Schmidt was reviewing archives of known sites, trying to decide where to dig next. A site description caught his attention: a hill that had first been excavated in a 1963 survey by the University of Istanbul and the University of Chicago, but abandoned soon after.

Heh. A much less prosaic version of the story with the mulberry bush in OP's article.

tshaddox(10000) 4 days ago [-]

Okay, but we don't expect ancient Homo sapiens to have had smartphones, surely?

ThalesX(10000) 5 days ago [-]

> But I do definitely know this: some time in 8000 BC the creators of Gobekli Tepe buried their great structures under tons of rubble. They entombed it. We can speculate why. Did they feel guilt? Did they need to propitiate an angry God? Or just want to hide it?' Klaus was also fairly sure on one other thing. 'Gobekli Tepe is unique.'

I think it'd be rather hard for a hunter gatherer society to realistically cover such a large area under tons of rubble. It makes me wonder if this covering with rubble is somehow related to the Black Sea deluge hypothesis [https://en.wikipedia.org/wiki/Black_Sea_deluge_hypothesis]:

> 'In 1997, William Ryan, Walter Pitman, Petko Dimitrov, and their colleagues first published the Black Sea deluge hypothesis. They proposed that a catastrophic inflow of Mediterranean seawater into the Black Sea freshwater lake occurred around 7600 years ago, c. 5600 BCE .

> As proposed, the Early Holocene Black Sea flood scenario describes events that would have profoundly affected prehistoric settlement in eastern Europe and adjacent parts of Asia and possibly was the basis of oral history concerning Noah's flood. Some archaeologists support this theory as an explanation for the lack of Neolithic sites in northern Turkey. In 2003, Ryan and coauthors revised the dating of the early Holocene flood to 8800 years ago, c. 6800 BCE.'

I think there's a poetic feel to it (which makes me wholly question it); the start of agriculture, Babylon, The Garden of Eden, Noah's ark, all wrapped in one, discovered by a shepherd in the hills and filled with penises.

AlotOfReading(10000) 5 days ago [-]

It's worth noting that Karahan Tepe, Gobekli Tepe, and most of the other PPN-A/B sites in Southern Anatolia are on top of hills and mountains at fairly high elevations. They're not really candidates for any sort of flood event.

As for the poetic feel, the term of art is a 'just-so story'.

pvg(10000) 5 days ago [-]

> I think it'd be rather hard for a hunter gatherer society to realistically cover such a large area under tons of rubble

People didn't think hunter gatherer societies were able to build such structures and complexes in general. It seems a lot less likely that the Mediterranean flooded an area that far from the Black Sea that also happens to be 700m above sea level.

aksss(10000) 5 days ago [-]

> it'd be rather hard for a hunter gatherer society to realistically cover such a large area under tons of rubble

We should be careful about underestimating the capabilities of predecessor cultures. We don't even know to what extent these sites were hunter-gatherer societies, right? Isn't a good part of its significance that it's pushing the clock back on our assumptions?

stubish(10000) 4 days ago [-]

>> But I do definitely know this: some time in 8000 BC the creators of Gobekli Tepe buried their great structures under tons of rubble. They entombed it. We can speculate why. Did they feel guilt? Did they need to propitiate an angry God? Or just want to hide it?' Klaus was also fairly sure on one other thing. 'Gobekli Tepe is unique.'

> I think it'd be rather hard for a hunter gatherer society to realistically cover such a large area under tons of rubble.

I'd also be interested in knowing how they know the creators of Gobekli Tepe were the ones who buried it. Maybe their neighbors didn't like them, or maybe it was their now-farming descendants moving the temple to somewhere better suited to growing their crops. These sorts of sites tend to have several generations of societies using them, often hostile to the previous cultures (e.g. the vandalism of Egyptian temples by their later occupants).

pharke(10000) 5 days ago [-]

The force of water capable of pushing such a large amount of rubble would have bulldozed the entire structure, and there would be practically nothing left. Simply look at the pillars[0] that are being excavated; there is no way they could have survived such a force. The builders of this complex would have had no technical problem with burying them: filling in a hole is much easier than carving and erecting hundreds of stone blocks, pillars and structures.

[0] https://en.wikipedia.org/wiki/G%C3%B6bekli_Tepe#Architecture

user3939382(10000) 5 days ago [-]

In response to which Graham Hancock slowly sits back in his chair and breathes a sigh of victorious relief.

Melting_Harps(10000) 4 days ago [-]

> In response to which Graham Hancock slowly sits back in his chair and breathes a sigh of victorious relief.

Honestly, he didn't need vindication, but given everything Graham has had to put up with over his entire career as a JOURNALIST, not an archaeologist, I'm glad he finally gets the funding he needs to keep doing his work.

I've been reading America Before on long trips, and the way he describes his work on podcasts like JRE makes me realize just how terribly ossified academia has become--it's heresy to question the pre-established POV. It's no longer, or perhaps never has been in my lifetime, about genuine curiosity and the leap into trying to explain the unknown with the most rigorous and methodical practices (the scientific method), when careers are made and lost on parroting and upholding conventional wisdom above all else. His investigative work in Egyptology was eye-opening to me, as it reminded me so much of my work in Biology/Chemistry.

I remember sitting in my Biochem class listening to my professor (who I now consider a friend) describe Watson and Crick's work, and the infamous LSD trip, and telling us of all the women X-ray crystallographers (Lindsay, Broomhead, Franklin) who contributed to the ability to arrive at the double helix structure---he too was a crystallographer and used their work for his research. It also entered my mind how Madame Curie is seen as the discoverer of radioactive particles, while her husband Pierre, who also died, is almost never mentioned.

What I'm saying is that narratives are not drawn on a division of sex, but rather on a seductive and captivating story that helps lend authority to a specific origin of something, instead of the messy reality that we really have no idea where most of what we have came from, and that things are often discovered by accident (Fleming with penicillin being the most commonly told). And having a cohesive and seemingly palatable story told by authoritative voices about 'how things really are' gives us a false sense of confidence that lets us accept things as they are.

Graham puts it incredibly eloquently when he says 'we are a species with amnesia.'

Also worth noting is that Asia Minor, modern-day Turkey, is also where the first traces of agriculture are found, which is a prerequisite for a division of labour and a surplus of food needed to create the kind of specialization behind such immense monoliths.

Gunung Padang in Indonesia is another megalithic site that may be even older than this one, which is really intriguing, given that Indonesia is mainly made up of so many islands but still has one of the largest populations in the world.

bgroat(10000) 4 days ago [-]

My favourite movie is called 'The Man From Earth', about a man who was born 14,000 years ago and hasn't died.

He doesn't know why, but he just keeps living.

What I think about every time I think of this movie is, 'Okay, so he was 12,000 years old at the time of Christ.' He lived a then-to-now span six times over, and then then-to-now again.

He was 8,000 years old in Mesopotamia..

Now I can imagine this beloved character in this new, very old, civilization

rapind(10000) 4 days ago [-]

He was a Neanderthal, which was supposed to be why he didn't age (not that that makes sense). I really enjoyed that movie too.

vishnugupta(10000) 4 days ago [-]

I've done a few re-watch of The Man From Earth. Very well made.

What's more: it's on YT for free.


diggernet(10000) 4 days ago [-]

That is really a great movie.


A couple interesting details I learned while looking up that link:

- They released a sequel 10 years later.

- The author also wrote the Star Trek episode 'Requiem for Methuselah', which has a similar theme.

carlisle_(10000) 4 days ago [-]

I also often think of this movie. It was really quite excellent at being thought-provoking in interesting ways like this.

trashtester(10000) 4 days ago [-]

Fun fact: If you believe in the Everett interpretation of QM (Also called Many World), there may be one 'world' for each of us where we become thousands of years old.

bryanrasmussen(10000) 4 days ago [-]

>What I think about every time I think of this movie is, 'Okay, so he was 12,000 years old at the time of Christ'. He lived then - now 6 times, and then then-now again.

In the movie he claimed to have been Christ, which that experience put him off trying to do anything really public to try to help people.

By the way it was written by Jerome Bixby https://en.wikipedia.org/wiki/Jerome_Bixby who also wrote the Star Trek episode https://en.wikipedia.org/wiki/Requiem_for_Methuselah which had to some extent the same premise (although a much younger immortal)

on edit: oops, I see diggernet https://news.ycombinator.com/item?id=31454792 already mentioned the Requiem for Methuselah.

seer(10000) 4 days ago [-]

I remember having similar thoughts of awe about how _old_ are things in this region of the earth, when I was listening to one of the famous Dan Carlin hardcore history series.

There he mentioned how, when the Romans were conquering Babylon, it had already had a 3000-year history. So it's a similar span from us to the Romans as between the relative starts of those two civilizations. Babylon was _old_ and they knew it - who was this young upstart trying to recklessly mess with the natural order?

I can't even process things like 12-14 _thousand_ years of human civilization...

internetvin(10000) 4 days ago [-]

This looks awesome, ty for sharing <3

davedx(10000) 4 days ago [-]

Another great book along the same lines is SUM VII: https://www.amazon.com/Sum-VII-novel-T-Hard/dp/0060117028

'The doctor working on the mummy suggests trying to see if the man could be revived...'

poundtown(10000) 5 days ago [-]

is this the same thing graham hancock(Gobekli Tepe) has been going on about for sometime or is this different?

mkl(10000) 5 days ago [-]

I suggest you read the article, which is fascinating and based on archeological research at Gobekli Tepe and other nearby sites. Hancock seems to incorporate real archeological sites into pseudo-scientific narratives.

pvg(10000) 5 days ago [-]


It's nearby but it's a different site

rendall(10000) 4 days ago [-]

> The shepherd who discovered Gobekli Tepe

The article mentions this fellow 5 times, exactly like this. His discovery launched the careers of hundreds, generates millions in tourism, and has transformed our understanding of Paleolithic human history.

Why not write his name? Does he not deserve historical mention by name like Necmi Karul and Klaus Schmidt?

mavci(10000) 4 days ago [-]

I thought about that too while reading. His name is Mahmut Yıldız.

If you want to read more about him, you can check out the article below.


tsunamifury(10000) 5 days ago [-]

There have been a few articles written about Turkey attempting to derive its current power-legitimacy narrative by creating this story of an ancient civilization being founded within its borders. They have pointed out that this is a common trend among dictators, one that often stretches credulity to the limit (Saddam and Babylon, Mugabe and ancient southern Egypt, Mussolini and the Roman Empire, etc.), and many attempt to build up their propaganda with such connections.

I'm curious how true that is, but there is a trend.

hereforphone(10000) 5 days ago [-]

Those borders didn't exist at the time the 'ancient civilization' was constructed. So what's the point?

wolverine876(10000) 5 days ago [-]

Another archeological propaganda technique is to omit from the history the people you don't like and/or who you don't want to have any claim to the territory. Without naming names, one country likes to skip back thousands of years.

pvg(10000) 5 days ago [-]

What are some such articles?

stubish(10000) 4 days ago [-]

The sites and their importance were known before the current issues in Turkey. Some of the publicity now may well be to encourage nationalism, but the reality is we probably would have been hearing about it 10-20 years earlier if the site hadn't been in a war zone.

Historical Discussions: Lotus 1-2-3 For Linux (May 21, 2022: 692 points)

(696) Lotus 1-2-3 For Linux

696 points 4 days ago by taviso in 10000th position

lock.cmpxchg8b.com | Estimated reading time – 5 minutes | comments | anchor

System Calls

The first problem is that Linux and UNIX do not use a compatible system call interface. UNIX uses the lcall7 interface, so we need to find those calls and fix them up. Here is how this object file calls open():

$ objdump -M intel --disassemble=open 123elf.o
123elf.o:     file format elf32-i386
Disassembly of section .text:
000e20d4 <open>:
   e20d4:   b8 05 00 00 00          mov    eax,0x5
   e20d9:   9a 00 00 00 00 07 00    call   0x7:0x0
   e20e0:   0f 82 c6 01 00 00       jb     e22ac <_cerror>
   e20e6:   c3                      ret    
   e20e7:   90                      nop

That call instruction is what's known as a callgate, which isn't supported on Linux – it will just crash. I want to remove this and route all calls through glibc instead. My first thought was just to mark these symbols as undefined, and then let the linker fix that up by importing a replacement symbol from glibc.


Nothing is ever easy, it turns out that won't work! If we try, objcopy will simply refuse:

$ objcopy -I coff-i386 -O elf32-i386 --strip-symbol open 123.o 123elf.o
objcopy: not stripping symbol `open' because it is named in a relocation

What is objcopy trying to tell us here?

This is a relocatable object file, which simply means it can be loaded at any address and still work. That's possible because it contains all the necessary information – the relocations – to adjust it.

Relocations are really simple, the compiler just records the name of the symbol and the references to it. Now the linker can just walk through and patch each reference to point to the new location – easy.

So objcopy is saying that you can't remove this symbol, because the linker won't know what to patch in when it moves it. Fair enough – but, just because objcopy won't do it doesn't mean it's impossible! We could just fix the relocations too, right?

I don't know of any tool that can do that, but COFF is not a complicated format – I'll write one!

Introducing coffsyrup, a tiny little tool that will remove those pesky COFF symbols even if objcopy refuses!

$ coffsyrup 123.o 123new.o open
MATCH open
RELOC rel open @0x180fa ~0xc9ede
RELOC rel open @0x4c9a1 ~0x95637
RELOC rel open @0x4d348 ~0x94c90
RELOC rel open @0x4ec13 ~0x933c5

Incompatible Functions

Now that we can reroute functions, we have to worry about incompatible functions.

Lots of standard UNIX functions are source- but not binary-compatible. This is because nobody promises that structures are the same size or layout across UNIX versions. The obvious example is struct stat.

For example, this code is likely to work on any UNIX-like system you can compile it on:

However, the resulting object file is not likely to work on any other system. That's because the size of struct stat and the offset of st_size will be different – it will probably just corrupt your stack and crash!

Luckily there are not really that many functions like this in UNIX. In fact, the number is small enough that I can probably write wrappers to translate them. The important ones are stat(), times(), uname(), fcntl(), ioctl() and so on.

All I have to do is rename those symbols with objcopy, then mark them undefined with coffsyrup. Now I can write a little wrapper that translates a Linux struct stat to a UNIX struct stat and it should work!


Well...I said "little" wrappers – but there are some big incompatibilities in places. One big nightmare was termios. Go ahead and take a look at the termios(3) man page, pretty complex right? Well, everything here works differently in subtle, incompatible, and difficult to debug ways on every UNIX system.


License Failed

Incredibly, after a bunch of hacking it actually runs without crashing!

Lotus 1-2-3 Box

...and refuses to work without a license, damn! Well, I am a legitimate licensed 1-2-3 owner with a boxed copy of 1-2-3, and this is 32 year old abandonware. I think Mitch Kapor will forgive me for bypassing this check.

I can see from breaking on exit() that there is an internal symbol called lic_init() responsible for checking for a valid license. I looked at the code in IDA, and figured out the logic.

It is simply looking for a file called LICENSE.000, which contains an expiry date, username and systemname. If that all matches what the system reports, the check passes! 🏴‍☠️

License Check

All Comments: [-] | anchor

skissane(10000) 4 days ago [-]

> It turns out that the BBS also had a warez copy of Lotus 1-2-3 for UNIX. This was widely thought to be lost – I'm told it couldn't compete with a more popular UNIX office suite called SCO Professional, so there were not many copies sold.

I wonder if anyone still has a copy of Lotus 1-2-3/M? It was the port to the IBM mainframe operating systems MVS (nowadays known as z/OS) and VM/CMS (nowadays z/VM). [0] Not that I ever used it or saw it, but I have become fascinated with it from reading descriptions of it. From what I understand, it is more different from 1-2-3 for DOS than the Unix or VMS ports were; the Unix and VMS ports work with character mode terminals (such as VT100 compatibles), which, while rather different from the terminal model used on MS-DOS or text mode OS/2 (direct memory access to the screen buffer), nonetheless are close enough that the gap can be bridged–which (on Unix) is classically the job of the curses library (and its various descendants). By contrast, 1-2-3/M was written to work with the block mode 3270 terminals commonly used on IBM mainframes, which send to/from the terminal whole screenfuls of data at a time, rather than individual characters (somewhat similar, in principle, to classic HTML forms). This forced greater changes in the UI compared to the other ports, because a lot of things which are easy to implement with character mode terminals are essentially impossible in 3270.

[0] https://www.ibm.com/common/ssi/ShowDoc.wss?docURL=/common/ss...

GekkePrutser(10000) 4 days ago [-]

> This forced greater changes in the UI compared to the other ports, because a lot of things which are easy to implement with character mode terminals are essentially impossible in 3270.

Couldn't they just render everything to a local virtual terminal and then send the screen after updates?

ngcc_hk(10000) 2 days ago [-]

SAS has a mainframe spreadsheet module.

GartzenDeHaes(10000) 4 days ago [-]

> which send to/from the terminal whole screenfuls of data at a time, rather than individual characters

3270, and its derivatives such as the 6530, had (have?) a protected-field protocol that sends fields in much the same way as an HTTP post [1]. It doesn't seem to have been used that much on IBMs, probably due to the synchronous write-a-screen-and-wait-for-the-result readline style of programming. However, it was used extensively on the Tandem, which had an asynchronous 3-tier style architecture.

1. https://x3270.miraheze.org/wiki/3270_data_stream_protocol

SoftTalker(10000) 4 days ago [-]

There was WordPerfect for unix back in the day as well, I used it at work in an HP-UX environment. I don't know if it was ever available for Linux.

chasil(10000) 4 days ago [-]

There is a free version of WordPerfect for Linux.


irdc(10000) 4 days ago [-]

So, incidentally, having symbols makes it doable to use a decompiler to retrieve something resembling source code.

Just putting that thought out there.

meebee(10000) 4 days ago [-]

That would be a worthwhile project.

bombcar(10000) 4 days ago [-]

If you have symbols and know the language (even better the exact version of the compiler) you can get almost the original source files "minus comments".

slater(10000) 4 days ago [-]

Anyone else misread the title as 'Lotus Notes for Linux' and break out in cold sweats of fear and PTSD? XD

themadturk(10000) 3 days ago [-]

My employer moved from Notes to Outlook and CA Service Desk Manager software not long before I was hired (2012). Some of the more senior employees miss Notes for its service desk features.

multjoy(10000) 4 days ago [-]

I used to fill photocopiers for a living at IBM North Harbour in Portsmouth back in the very early 00's, and I could never get my head around Notes. The first and only place I ever saw it in the wild.

pstuart(10000) 4 days ago [-]

My first 'real' job was working at Arthur Anderson, where due to my geeky love of computers I was able to quickly move from the mail room to Data Processing.

1-2-3 was like crack for accountants (it was first released right around that time). One of my tasks was making 'backup copies' for users. Probably had 1 legit copy for every 20 staffers so this was quite a sum of money they 'saved'.

After getting sued for millions for copyright infringement they realized it was cheaper to buy a real copy for every user.

bombcar(10000) 4 days ago [-]

Leave it to Arthur Anderson to try to do creative accounting of software licenses!

RcouF1uZ4gsC(10000) 4 days ago [-]

Great write up.

Travis is moving on from 0 (Google Project Zero) to 1-2-3.

I wonder when the article about Quattro Pro will come.

marcodiego(10000) 4 days ago [-]

There is one thing I feel is missing now: dosbox should have a mode to run DOS programs in the terminal and only start a GUI if a graphics mode is started. That would give seamless use of DOS apps.

donio(10000) 4 days ago [-]

dosemu can do this with the -t option; I have used it to run the Turbo Pascal UI in a terminal, works great. You have to make sure the terminal width is exactly 80 columns.

etothepii(10000) 4 days ago [-]

I am aware of a very large company that still uses Lotus 123 for hundreds of millions of dollars of business.

xattt(10000) 4 days ago [-]

Is it a TUI version of 1-2-3?

d0mine(10000) 4 days ago [-]

You might be surprised how efficient in practice it can be. Often, new software can be a net negative on productivity.

Emacs is the most productive environment (for me) and it is half a century old.

tibbydudeza(10000) 4 days ago [-]

He named the website after the compare-and-exchange instruction??? My Intel assembler is rather rusty.

lapsis_beeftech(10000) 4 days ago [-]

It is explained in the FAQ:

There was a bug in early Pentiums called the f00f bug – it would cause a deadlock if you provided an invalid operand to cmpxchg8b while using the lock prefix. It was an important vulnerability at the time, and I thought it would be fun to own lock.cmpxchg8b.com.

sph(10000) 4 days ago [-]

That was awesome, but they skipped over the crucial part I'm most interested in: how the heck did they rewrite and reroute the incompatible system calls and libc functions? That's probably the hardest task of it all.

How would you go about it in the first place?

EDIT: ah, their coffsyrup tool (https://github.com/taviso/123elf/blob/main/coffsyrup.c), with help from objdump (it's more powerful than I gave it credit for), does the relocation and patching. I would have loved to read more into that part of the process.

DannyBee(10000) 4 days ago [-]

Outside of patching, it's not that hard – it's just structure conversion at that point.

The coff tool is harder than the rewritten functions.

jshaqaw(10000) 4 days ago [-]

Little-known fun fact re Lotus 1-2-3: it came on a ROM cartridge for the PCjr. The PCjr was a total sad sack of a machine. Technically superior to the PC and cheaper too, so IBM intentionally crippled it with incompatibility bugs so that businesses could not really rely on it.

BUT they did release Lotus 1-2-3 for it on a ROM cartridge, which in the days when most programs ran from 5.25-inch floppies meant that performance just cranked!

flomo(10000) 4 days ago [-]

Old boss had a handheld computer which ran 1-2-3 from ROM. Spreadsheets on the fly!

I think this was it: https://en.wikipedia.org/wiki/HP_95LX

ghaff(10000) 4 days ago [-]

>Technically superior to the PC

Really? I remember it having a 'chiclet' keyboard that was widely criticized among other things. As for the compatibility, I'm not sure how many computer makers at that time--including IBM--completely grokked that PC compatibility really did mean 100% compatibility as opposed to mostly compatible.

easytiger(10000) 4 days ago [-]

wasn't there a java lotus suite later developed that died off very quickly? I recall running it on Solaris

shimmeringleaf(10000) 4 days ago [-]

Fascinating write up!

Curious how piracy half the time seems to be the best form of archival for abandoned older software (most older software is no longer available to download even if one has a license, let alone still runs in a modern environment; but if you have A you can try and sort out B, as this post so nicely demonstrates).

jeffbee(10000) 4 days ago [-]

It seems like sheer luck that a 30-year-old tape archive was readable. What format is a BBS operator from the early 90s likely to have used? QIC? DAT?

marcodiego(10000) 4 days ago [-]

My first PC came with Lotus SmartSuite on CD-ROM. At the time I dreamed that the office suite arena was full of competition and that vendors would keep implementing features, continuously improving their offerings. It probably actually was like that at the time (1997). Soon, though, monopoly set in.

We are seeing a new resurgence now. There is some competition. Even in the FLOSS space, LibreOffice must show it is better than OnlyOffice. But it is very far from how things looked in the mid-'90s, when office suite vendors really had to include useful and differentiated software.

adwww(10000) 4 days ago [-]

Yeah I had Lotus SmartSuite - I remember it had quite nice skeuomorphic design in a lot of areas.

You're probably right about the competition, Wordperfect was the market leader at one point, and Adobe also had FrameMaker - perhaps more of a Publisher competitor.

(670) YouTubeDrive: Store files as YouTube videos

670 points about 18 hours ago by notamy in 10000th position

github.com | Estimated reading time – 3 minutes | comments | anchor

YouTubeDrive is a Wolfram Language (aka Mathematica) package that encodes/decodes arbitrary data to/from simple RGB videos which are automatically uploaded to/downloaded from YouTube. Since YouTube imposes no limits on the total number or length of videos users can upload, this provides an effectively infinite but extremely slow form of file storage.

YouTubeDrive depends externally on FFmpeg, youtube-upload, and youtube-dl. These programs must be downloaded and installed separately, and prior to first use, YouTubeDrive must be configured with their install locations. See below for details.

YouTubeDrive is a silly proof-of-concept, and I do not endorse its high-volume use.

Usage Example

NOTE: A short time needs to pass between calls to YouTubeUpload and YouTubeRetrieve for YouTube to process the uploaded video. I find that 5-10 minutes suffices for small (less than 10MB) file uploads.

The video YouTubeDrive produces in this example can be viewed at https://www.youtube.com/watch?v=Fmm1AeYmbNU.


  • Install FFmpeg, youtube-upload, and youtube-dl as your operating system dictates.

  • Find an arbitrary test video, say test.mp4, and run youtube-upload --title='Test Video' test.mp4. Follow the displayed instructions to create an OAuth token for your YouTube account. This will be the YouTube account used for all YouTubeDrive uploads.

  • Download and open YouTubeDrive.wl from this repository.

  • In lines 75-77, enter the install locations of the FFmpeg, youtube-upload, and youtube-dl executables. Make sure to use proper string escape sequences (in particular, backslashes \ need to be escaped as double-backslashes \\ in Windows paths).

    75 | FFmpegExecutablePath = 'FFMPEG_PATH_HERE';
    76 | YouTubeUploadExecutablePath = 'YOUTUBE-UPLOAD_PATH_HERE';
    77 | YouTubeDLExecutablePath = 'YOUTUBE-DL_PATH_HERE';

    For example, I use the following install locations on my system (Windows 10):

    75 | FFmpegExecutablePath = 'C:\\Games\\MiscExes\\ffmpeg.exe';
    76 | YouTubeUploadExecutablePath = Sequence['python',
    77 |   'C:\\Users\\dzhan\\AppData\\Local\\Programs\\' <>
    78 |       'Python\\Python35\\Scripts\\youtube-upload.py'];
    79 | YouTubeDLExecutablePath = 'C:\\Games\\MiscExes\\youtube-dl.exe';

    Note the use of Sequence[] to call python youtube-upload.py above.

  • After making the above edits, open YouTubeDrive.wl with Mathematica. Then, open the File ⇨ Install... dialog, and select the following options:

    • Type of Item to Install: Package
    • Source: YouTubeDrive.wl
    • Install Name: YouTubeDrive (no .wl suffix)

    Choose installation for all users or the current user only, according to your preference, and click OK.

All Comments: [-] | anchor

pingtickle(10000) about 12 hours ago [-]

Before broadband was widely available, TiVo used to purchase overnight paid programming slots across the US and broadcast modified PDF417 video streams that provided weekly program guide data for TiVo users. There's a sample of it on YouTube https://www.youtube.com/watch?v=VfUgT2YoPzI but they usually wrapped a 60-second commercial before and after the 28-minute broadcast of data. There was enough error correction in the data streams to allow proper processing even with less-than-perfect analog television reception.

414techie(10000) about 12 hours ago [-]

That is really interesting. I wonder if there were any other interesting uses of paid programming to solve problems like these around that time?

alphabet9000(10000) about 6 hours ago [-]

i made something like this for live streaming encrypted audio/video, but for the web, if you are interested: http://pitahaya.jollo.org

anyfoo(10000) about 16 hours ago [-]

I only looked at the example video, but is the concept just 'big enough pixels'?

Would be neater (and much more efficient) to encode the data such that it's exactly untouched by the compression algorithm, e.g. by encoding the data in wavelets and possibly motion vectors that the algorithm is known to keep[1].

Of course that would also be a lot of work, and likely fall apart once the video is re-encoded.

[1] If that's what video encoding still does, I really have no idea, but you get the point.

colejohnson66(10000) about 14 hours ago [-]

YouTube lets you download your uploaded videos. I've never tested it, but supposedly it's the exact same file you uploaded.[a] It probably wouldn't work with this "tool" as it uses the video ID (so I assume it's downloading what clients see, not the source), but it's an idea for some other variation on this concept.

[a] That way, in the future, if there's any improvements to the transcode process that makes smaller files (different codec or whatever), they still have the HQ source

dheera(10000) about 16 hours ago [-]

YT might still recompress your video, possibly using proprietary algorithms that are not necessarily DCT based

bambax(10000) about 15 hours ago [-]

Or, film pieces of paper in succession, in a clear enough manner that they're still readable even when heavily compressed.

NonNefarious(10000) about 14 hours ago [-]

Back in the day, VCRs were commonly used as tape backup devices for data.

Now studios are using motion-picture film to store data, since it's known to be stable for a century or more.

softfalcon(10000) about 16 hours ago [-]

Agree it would be cool to be 'untouched' by the compression algorithm, but that's nearly impossible with YouTube. YouTube encodes down to several different versions of a video and on top of that, several different codecs to support different devices with different built-in video hardware decoders.

For example, when I upload a 4K vid and then watch the 4K stream on my Mac vs my PC, I get different video files solely based on the browser settings that can tell what OS I'm running.

Handling this compression protection for so many different codecs is likely not feasible.

accrual(10000) about 15 hours ago [-]

I love that this is like tape in that it's a sequential access medium. It's storing a tape-like data stream in a digital version of what used to be tape itself (VHS).

layer8(10000) about 15 hours ago [-]

I believe YouTube supports random access, or otherwise you wouldn't be able to jump around in a video. Youtube-dl also supports resuming downloads in the middle, I believe.

kringo(10000) about 17 hours ago [-]

BEWARE: Until they clamp down and delete the files, you lose your data.

Good technical experiment though!

netsharc(10000) about 16 hours ago [-]

Since he's made a ready-to-use software, yeah Google will probably ban this quite quickly...

Annatar(10000) about 18 hours ago [-]

This works on the same principle as the video backup system (VBS) which we used in the 1980's and the early 1990's on our Commodore Amigas: if I remember correctly, one three hour PAL/SECAM VHS tape had a capacity of 130 MB. The entire hardware fit into a DB 25 parallel port connector and was easily made by oneself with a soldering iron and a few cheap parts.


SGI IRIX also had something conceptually similar to this 'YouTubeDrive' called HFS, the hierarchical filesystem, whose storage was backed by tape rather than disk, but to the OS it was just a regular filesystem like any other: applications like ls(1), cp(1), rm(1) or any other saw no difference, but the latency was high of course.

rahimnathwani(10000) about 14 hours ago [-]

'one three hour PAL/SECAM VHS tape had a capacity of 130 MB'

This reminds me of the Danmere Backer.

'The entire hardware fit into a DB 25 parallel port connector and was easily made by oneself with a soldering iron and a few cheap parts.'

This reminds me of the DIY versions of the Covox Speech Thing: https://hackaday.com/2014/09/29/the-lpt-dac/

thought_alarm(10000) about 16 hours ago [-]

That's how digital audio was originally recorded to tape back in the 1970s and 80s: encode the data into a broadcast video signal and record it using a VCR.

In the age of $5000 10 MB hard drives, this was the only sensible way to work with the 600+ MB of data needed to master a compact disc.

That's also where the ubiquitous 44.1 kHz sample rate comes from. It was the fastest data rate that could be reliably encoded into both NTSC and PAL broadcast signals. (For NTSC: 3 samples per scan line, 245 usable scan lines per field, 60 fields per second = 44,100 samples per second.)
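The NTSC arithmetic quoted above can be sanity-checked in a couple of lines:

```python
# NTSC figures from the comment above: 3 samples per scan line,
# 245 usable scan lines per field, 60 fields per second.
samples_per_line = 3
lines_per_field = 245
fields_per_sec = 60

rate = samples_per_line * lines_per_field * fields_per_sec
print(rate)  # 44100
```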

ogurechny(10000) about 17 hours ago [-]

130 MB for the whole tape is not a lot. It's comparable to floppy-disk throughput, which is probably not a coincidence. However, basic soldering implies that the rest of the system acts like a big software-defined DAC/ADC.

Dedicated controller could pack a lot more data, as in hobo tape storage system: https://en.wikipedia.org/wiki/ArVid

geoffeg(10000) about 18 hours ago [-]

This is great. I did something very similar with a laser printer and a scanner many years ago. I wrote a script that generated pages of colored blocks and spent some time figuring out how much redundancy I needed on each page to account for the scanner's resolution. I think I saw something similar here or on github a few years ago.

lifthrasiir(10000) about 18 hours ago [-]

Searching HN for 'paper backup' gives a lot of existing solutions, in fact too many that I don't know which one you saw.

aaaaaaaaaaab(10000) about 18 hours ago [-]

So you invented QR codes?

banana_giraffe(10000) about 18 hours ago [-]

Reminds me of 'Cauzin Softstrip', the format some computer magazines used back in the day to distribute BASIC programs, or even executables.

Random example from an issue of Byte:


daenz(10000) about 18 hours ago [-]

How much data can you store if you embedded a picture-in-picture file over a 10 minute video? I could totally see content creators who do tutorials embedding project files in this way.

accrual(10000) about 15 hours ago [-]

Would storing data as a 15 or 30 FPS QR code 'video' be any more useful? At a minimum one would gain a configurable amount of error correction, and you could display it in the corner.

dsr_(10000) about 16 hours ago [-]

Back of the envelope estimate:

4096 x 2160 x 24 x 60 is your theoretical max in bits/second, about 12.7 billion.

Assume that to counter YouTube's compression we need 16x16 blocks of no more than 256 colors and 15 keyframes/second; that reduces it to

256 * 135 * 8 * 15 = 4.1 million bits/sec.

That's not too awful. Ten minutes of this would get you about 300MB of data, which itself might be compressed.
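The back-of-envelope estimate above can be reproduced directly (the raw rate works out to about 12.7 Gbit/s):

```python
# Numbers from the estimate above: 4K at 24-bit color and 60 fps,
# then 16x16 blocks of 256 colors at 15 keyframes per second.
raw = 4096 * 2160 * 24 * 60           # theoretical max, bits/s (~12.7e9)
blocks = (4096 // 16) * (2160 // 16)  # 16x16 blocks per frame: 256 * 135
usable = blocks * 8 * 15              # 8 bits per block, 15 keyframes/s
ten_min_mb = usable * 600 / 8 / 1e6   # ten minutes of payload, in MB

print(raw, usable, ten_min_mb)  # 12740198400 4147200 311.04
```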

behnamoh(10000) about 17 hours ago [-]

"hope you enjoyed this video. btw, the source code used in this tutorial is encoded in the video."

cush(10000) about 16 hours ago [-]

Yeah seems way easier than adding a link in the description

legitster(10000) about 16 hours ago [-]

This reminds me of an old hacky product that would let you use cheap VHS tapes as backup storage: https://en.wikipedia.org/wiki/ArVid

You would hit Record on a VCR and the computer data would be encoded as video data on the tape.

People are clever.

gibolt(10000) about 16 hours ago [-]

Early games and software would be delivered on audio cassettes that then had to be 'played' in order to load your software temporarily into the device, which could take minutes.

edit: Video from the 8-bit Guy on how this worked - https://www.youtube.com/watch?v=_9SM9lG47Ew

mobilene(10000) about 16 hours ago [-]

This is old school. When I first wrote code back in the Stone Age we used to store our stuff on cassette tape.

alar44(10000) about 16 hours ago [-]

That's not really that hacky, audio cassettes were used forever, it's just a tape backup.

jhgb(10000) about 15 hours ago [-]

I remember a similar solution that was marketed in a German mail order catalogue in late 1990s. It could have been Conrad, but I'm not 100% sure. I recall it being a USB peripheral, though. (Maybe I could find more about it in time...)

philjohn(10000) about 16 hours ago [-]

The Alesis ADAT 8 track digital audio recorders used SVHS tapes as the medium - at the end of the day, it's just a spooled magnetic medium, not hugely different conceptually than a hard drive.

ben174(10000) about 15 hours ago [-]

Wow, 2GB on a standard tape. For the time, that's incredibly efficient and cheap.

gattilorenz(10000) about 13 hours ago [-]

Yes! There were many such systems, LGR made a video for one of them, also showing the interface (as in: hardware and GUI) for the backup: https://youtu.be/TUS0Zv2APjU

dahfizz(10000) about 18 hours ago [-]

Does YouTube store and stream all videos losslessly? How does this work otherwise?

kleer001(10000) about 18 hours ago [-]

things like redundancy and crc checks I assume

ezfe(10000) about 18 hours ago [-]

The data is represented large enough on screen that compression doesn't destroy it.

LukeShu(10000) about 18 hours ago [-]

No, YouTube is not lossless.

The video that is created in the example in the README is https://www.youtube.com/watch?v=Fmm1AeYmbNU

We can see that data is encoded as 'pixels' that are quite large, being made up of many actual pixels in the video file. I see quite bad compression artifacts, yet I can clearly make out the pixels that would need to be clear to read the data. It looks like the video was uploaded at 720p (1280x720), but the data is encoded as a 64x36 'pixel' image of 8 distinct colors. So lots of room for lossy compression before it's unreadable.

martincmartin(10000) about 18 hours ago [-]

Imagine a QR code that changes once every X milliseconds.

advisedwang(10000) about 18 hours ago [-]

Seems like a great way to get your account closed for abuse!

LewisVerstappen(10000) about 18 hours ago [-]

You'd be surprised how much YouTube lets you upload.

I've been uploading 2-3 hours of content a day every day for the past few years. On the same account too.

I have fewer than 10 subscribers lol.

johndfsgdgdfg(10000) about 18 hours ago [-]

Then the whole HN crowd would have enough outrage materials for weeks. Seems like a win-win situation to me.

robotnikman(10000) about 18 hours ago [-]

Another thread posted today makes it seem like they don't really care


Manuel_D(10000) about 18 hours ago [-]

If it becomes prevalent, I think YouTube would do something like slightly randomize the compression in their videos to dissuade this kind of use.

deckar01(10000) about 18 hours ago [-]

You could make it much harder to detect by synthesizing a unique video with a DNN and hiding the data using traditional steganography techniques.

umvi(10000) about 18 hours ago [-]

Turns out any site that allows users to submit and retrieve data can be abused in the same way:

- FacebookDrive: 'Store files as base64 facebook posts'

- TwitterDrive: 'Store files as base64 tweets'

- SoundCloudDrive: 'Store files as mp3 audio'

- WikipediaDrive: 'Store files in wikipedia article histories'

jasonlotito(10000) about 17 hours ago [-]

My friends and I had a joke called NSABox. It would send data around using words that would attract the attention of the NSA, and you could submit a FOIA request to recover the data. I always found it amusing.

upupandup(10000) about 18 hours ago [-]

What a great time to write botnets

itake(10000) about 17 hours ago [-]

Back in the day when @gmail was famous for its massive free storage for email, people wrote scripts to chunk large files and store them as email attachments.

momofarm(10000) about 7 hours ago [-]

I wonder if this technique could be used in places where the government censors sensitive uploads to streaming sites, like mainland China or North Korea (they do have streaming sites, right?).

Although for propaganda purposes, shortwave radio or satellite TV is a much, much simpler way to distribute information to places like that; I believe it's now hard for anyone there to get hold of a shortwave radio.

quickthrower2(10000) about 8 hours ago [-]

We need an HNShowDeadDrive

thrdbndndn(10000) about 16 hours ago [-]

This is pretty tame compared to some actual, practical ones such as https://github.com/apachecn/CDNDrive

For people who don't read Chinese: it encodes data into ~10 MB PNG blocks and then uploads them (together with a metadata/index file as an entry point) to various Chinese social media sites that don't re-compress your images. I know people have already used it to store* TBs upon TBs of data this way.
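The chunk-and-index idea described above can be sketched roughly like this (the chunk size and index format here are illustrative, not CDNDrive's actual layout):

```python
import hashlib

CHUNK = 10 * 1024 * 1024  # ~10 MB per uploaded "image" (illustrative)

def split_file(data: bytes):
    """Split data into chunks plus an index that serves as the entry point."""
    chunks = [data[i:i + CHUNK] for i in range(0, len(data), CHUNK)]
    index = {
        'size': len(data),                                      # total bytes
        'sha1': hashlib.sha1(data).hexdigest(),                 # whole-file hash
        'blocks': [hashlib.sha1(c).hexdigest() for c in chunks] # per-chunk hashes
    }
    return index, chunks
```

In the real tool each chunk is additionally wrapped in a PNG container before upload, and the index file is what gets shared as the entry point.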

*Of course, it would be foolish to think your data is even remotely safe 'storing' them this way. But it's a very good solution for sharing large files.

behnamoh(10000) about 16 hours ago [-]

also Telegram

WaxProlix(10000) about 18 hours ago [-]

I wrote one of these as a POC when at AWS to store data sharded across all the free namespaces (think Lambda names), with pointers to the next chunk of data.

I like to think you could unify all of these into a FUSE filesystem and just mount your transparent multi-cloud remote FS as usual.

It's inefficient, but free! So you can have as much space as you want. And it's potentially brittle, but free! So you can replicate/stripe the data across as many providers as you want.

willcipriano(10000) about 17 hours ago [-]

I made a tool that lets you store files anywhere you can store a URL: https://podje.li/

wging(10000) about 17 hours ago [-]

See also https://github.com/qntm/base2048. 'Base2048 is a binary encoding optimised for transmitting data through Twitter.'

vfinn(10000) about 2 hours ago [-]

Reminds me of when I tried to Gmail myself a zip archive, and it was denied because of security reasons iirc. I then tried to base64 it, and it still didn't work, same with base32, until finally base16 did work.

the_duke(10000) about 16 hours ago [-]

Github repos makes for a pretty good key-value store.

It even has a full CRUD API, no need for using libgit.

7373737373(10000) about 12 hours ago [-]

I think I've seen similar blog posts about doing the same with the DNS and BGP networks

mike00632(10000) about 17 hours ago [-]

I wonder if access permissions would be easier to maintain using Facebook...

fomine3(10000) about 11 hours ago [-]

I found some pirates uploads video to Prezi so they get free S3 video hosting.

saint_angels(10000) about 17 hours ago [-]

Reminds me of a guy who stored data in ping messages https://youtu.be/JcJSW7Rprio

evgen(10000) about 12 hours ago [-]

Back in the day, when protocols were more trusting, we would play games by storing data archives in other people's SMTP queues. Open a connection and send a message to yourself by bouncing it through a remote server, but wait to accept the returning email until you wanted the data back. As long as you pulled it back in before it timed out on that queue and looped it back out to the remote SMTP queue, you could store several hundred MB (which was a lot of data at the time) in uuencoded chunks spread out across the NSFNet.

alanh(10000) about 14 hours ago [-]

What part of the video discusses this? :D So far it's about juggling chainsaws

Edit: OK, I see where this is going. Lol

bluedays(10000) about 14 hours ago [-]

I watch these things and I begin to realize I'll never be as intelligent as someone like this. It's good to know that no matter how much you've grown, there is always a bigger fish.

kube-system(10000) about 18 hours ago [-]

I can't wait until malware uses this as C2

Tijdreiziger(10000) about 18 hours ago [-]

Seems pretty fragile. Google taking down your channel would be enough to disarm your malware.

vmception(10000) about 16 hours ago [-]

Ipfs is decent enough or better with free pinning services

productceo(10000) about 18 hours ago [-]

Imagine a free cloud storage, but you need to watch an ad every time you download a file.

stingta(10000) about 18 hours ago [-]

Wasn't that basically megaupload and its ilk?

rightbyte(10000) about 18 hours ago [-]

Am I reading that you did not download shady files from the interwebs back when that was a thing sane people actually did?

ranger_danger(10000) about 4 hours ago [-]

imagine not using an ad blocker

dzhang314(10000) about 11 hours ago [-]

Hey everybody! I'm David, the creator of YouTubeDrive, and I never expected to see this old project pop up on HN. YouTubeDrive was created when I was a freshman in college with questionable programming abilities, absolutely no knowledge of coding theory, and way too much free time.

The encoding scheme that YouTubeDrive uses is brain-dead simple: pack three bits into each pixel of a sequence of 64x36 images (I only use RGB values 0 and 255, nothing in between), and then blow up these images by a factor of 20 to make a 1280x720 video. These 20x20 colored squares are big enough to reliably survive YouTube's compression algorithm (or at least they were in 2016 -- the algorithms have probably changed since). You really do need something around that size, because I discovered that YouTube's video compression would sometimes flip the average color of a 10x10 square from 0 to 255, or vice versa.

Looking back now as a grad student, I realize that there are much cleverer approaches to this problem: a better encoding scheme (discrete Fourier/cosine/wavelet transforms) would let me pack bits in the frequency domain instead of the spatial domain, reducing the probability of bit-flip errors, and a good error-correcting code (Hamming, Reed-Solomon, etc.) would let me tolerate a few bit-flips here and there. In classic academic fashion, I'll leave it as an exercise to the reader to implement these extensions :)
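The "brain-dead simple" scheme described above can be sketched in Python (the original project is Mathematica; names and the centre-sampling decode here are illustrative, not the project's code):

```python
W, H, SCALE = 64, 36, 20        # logical grid and upscale factor
BITS_PER_FRAME = W * H * 3      # 6912 bits = 864 bytes per frame

def encode_frame(data: bytes):
    """Pack up to 864 bytes into one 1280x720 frame of (r, g, b) tuples."""
    assert len(data) * 8 <= BITS_PER_FRAME
    padded = data.ljust(BITS_PER_FRAME // 8, b'\x00')
    bits = []
    for byte in padded:
        bits.extend((byte >> i) & 1 for i in range(7, -1, -1))
    # One logical pixel per 3 bits; each channel is either 0 or 255.
    logical = [[tuple(255 * bits[(y * W + x) * 3 + c] for c in range(3))
                for x in range(W)] for y in range(H)]
    # Blow each logical pixel up into a 20x20 block of real pixels.
    return [[logical[y // SCALE][x // SCALE] for x in range(W * SCALE)]
            for y in range(H * SCALE)]

def decode_frame(frame, n_bytes: int) -> bytes:
    """Recover data by sampling the centre of each 20x20 block."""
    bits = []
    for y in range(H):
        for x in range(W):
            px = frame[y * SCALE + SCALE // 2][x * SCALE + SCALE // 2]
            # Threshold each channel so mild compression noise survives.
            bits.extend(1 if ch >= 128 else 0 for ch in px)
    return bytes(int(''.join(map(str, bits[i:i + 8])), 2)
                 for i in range(0, n_bytes * 8, 8))
```

The thresholding in the decoder is where the robustness comes from: a 20x20 block of identical saturated pixels has to be corrupted past the halfway point before a bit flips.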

femto113(10000) about 9 hours ago [-]

As far back as the late 1970s a surprisingly similar scheme was used to record digital audio to analog video tape. It mostly looks like kind of stripey static, but there was a clear correlation between what happened musically and what happened visually, so in college (late 1980s) one of my friends came into one of these and we'd keep it on the TV while listening to whole albums. We had a simultaneous epiphany about the encoding scheme during a Jethro Tull flute solo, when the static suddenly became just a few large squares.

Can see one in action here


freedomben(10000) about 11 hours ago [-]

Nice thanks, this answered my biggest question, which was 'will it survive compression/re-encoding.' (yes it will). Very cool idea!

dzhang314(10000) about 10 hours ago [-]

One more thing: the choice of Wolfram Mathematica as an implementation language was a deliberate decision on my part. Not for any technical reason -- YouTubeDrive doesn't use any of Mathematica's symbolic math capabilities -- but because I didn't want YouTubeDrive to be too easy for anybody on the internet to download and use, lest I attract unwanted attention from Google. In the eyes of my paranoid freshman self, the fact that YouTubeDrive is somewhat obtuse to install was a feature, not a bug.

So, feel free to have a look and have a laugh, but don't try to use YouTubeDrive for any serious purpose! This encoding scheme is so horrendously inefficient (on the order of 99% overhead) that the effective bandwidth to and from YouTube is something like one megabyte per minute.

ArrayBoundCheck(10000) about 10 hours ago [-]

Do you have any idea how many more bits you'd be able to use if you applied any of the encoding transformations?

metadat(10000) about 18 hours ago [-]

Could youtube-dlp and YouTube Vanced now be hosted on.. YouTube?

I wonder how long it'd take for Google to crack down on the system abuse.

Is it really abuse if the videos are viewable / playable? Presumably the ToS either already forbids covert channel encoding or soon will.

sevenf0ur(10000) about 18 hours ago [-]

Probably breaks TOS under video spam

robonerd(10000) about 18 hours ago [-]

If you put youtube-dlp on youtube as a video, make sure to use youtube-dlp to download it.

throwaway0a5e(10000) about 18 hours ago [-]

>Is it really abuse if the videos are viewable / playable? Presumably the ToS either already forbids covert channel encoding or soon will.

If creators start encoding their source and material into their content Google would probably be fine with that because it gives them data but also gives them context for that data.

Edit: I meant like 'director's commentary' and 'notes about production' type stuff like you used to see added to DVDs back in the day. Not 'using youtube as my personal file storage'. Why is this such an unpopular opinion?

cush(10000) about 16 hours ago [-]

It's one of those problems that resolves itself.

The process of creating and using the files is prohibitively unusable and so many better solutions exist that YT doesn't need to worry about it

freestorage(10000) about 17 hours ago [-]

Years ago when Amazon had unlimited photo storage, you could "hide" gigabytes of data behind a 1px gif (literally concatenated together) so that it wouldn't count against your quota.
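The trick works because image decoders stop reading at the GIF trailer byte (0x3B, ';'), so anything appended after it is ignored when rendering. A hedged sketch (the 1x1 GIF bytes below are illustrative):

```python
# A minimal 1x1 GIF, shown for illustration; viewers stop at the
# trailing ';' byte, so appended data never affects rendering.
GIF_1PX = (b'GIF89a\x01\x00\x01\x00\x80\x00\x00'
           b'\x00\x00\x00\xff\xff\xff'
           b'!\xf9\x04\x01\x00\x00\x00\x00'
           b',\x00\x00\x00\x00\x01\x00\x01\x00\x00'
           b'\x02\x02D\x01\x00;')

def hide(payload: bytes) -> bytes:
    """Concatenate a payload after a valid image; the file still renders."""
    return GIF_1PX + payload

def unhide(blob: bytes) -> bytes:
    """Strip the known image prefix to recover the payload."""
    return blob[len(GIF_1PX):]
```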

rabuse(10000) about 9 hours ago [-]

Shhhh, I still do this with encrypted database backups.

xhrpost(10000) about 17 hours ago [-]

They still do if you pay for Prime. I was surprised to see that even RAW files (which are uncompressed and quite large) were uploaded and stored with no issues. Not the same as 'hiding' data but might still be possible.

Jimmc414(10000) about 18 hours ago [-]

Very cool. I wonder how difficult it would be to present a real watchable video to the viewer. Albeit low quality, but embed the file via a steganographic method. I think a risk of this tech is that if it takes off, YT might easily adjust the algorithms to remove unwatchable videos. Perhaps leaving a watchable video could grant it more persistence than an obvious data stream.

ragingglow(10000) about 18 hours ago [-]

Sure, but the more structure your video has to have, the harder it becomes to hide information steganographically within it. Your information density will become very low, I think.

8K832d7tNmiQ(10000) about 17 hours ago [-]

I remember seeing this first discussed on the 4chan /g/ board as a joke about whether or not they could abuse YouTube's unlimited file size upload limit, which then escalated into the proof of concept shown in the repo :)

ranger_danger(10000) about 5 hours ago [-]

They also experimented with encoding videos and arbitrary files into different kinds of single (still) image formats, some of them able to be uploaded to the same 4chan thread itself, with instructions on how to decode/play it back. Examples:

marginalia_nu(10000) about 17 hours ago [-]

This is a tangent. I must have been maybe 15-16 at the time, so somewhere around 20 years ago: One of the first pieces of software I remember building was a POP3 server that served files, that you could download using an email client where they would show up as attachments.

Incredibly bizarre idea. I'm not sure who I thought would benefit from this. I guess I got swept up in RFC1939 and needed to build... something.

Saint_Genet(10000) about 17 hours ago [-]

Makes me wonder how many video and image upload sites are used as easily accessible numbers stations these days.

adolph(10000) about 15 hours ago [-]

Probably not many. The advantage of plain old-fashioned radio is that the station doesn't keep track of the receivers. Whoever watches a YouTube numbers station is tracked six ways to Sunday.

wanderingmind(10000) about 13 hours ago [-]

The code doesn't look too big (a single file), but it requires a paid symbolic language (Mathematica) to use. Can anyone with better Mathematica knowledge explain whether it could be ported to another symbolic language (Sage, Maxima) or a non-symbolic one (R, Julia, Python)?

dzhang314(10000) about 11 hours ago [-]

Yep! I'm the creator of YouTubeDrive, and there's absolutely nothing in the code that depends on the symbolic manipulation capabilities of Wolfram Mathematica -- you could easily port it to Python, C++, whatever. However, there are two non-technical reasons YouTubeDrive is written in Mathematica:

(1) I was a freshman in college at the time, and Mathematica is one of the first languages I learned. (My physics classes allowed us to use Mathematica to spare us from doing integrals by hand.)

(2) I intentionally chose a language that's a bit obtuse to use. I was afraid that I might attract unwanted attention from Google if YouTubeDrive were too easy for anybody to download and run.

hifikuno(10000) about 12 hours ago [-]

I remember seeing years ago a python library called BitGlitter which did the same thing. It would convert any file to a image or video. You could then upload the file yourself. https://pypi.org/project/BitGlitter/

jimmydeans(10000) about 18 hours ago [-]

I remember a project that was doing this with photo files and unlimited picture storage.

e1ghtSpace(10000) about 7 hours ago [-]

This one's not the best but it works. I would recommend zipping everything and then using that as a single file. (file size limit is ~2GB fyi) https://github.com/Quadmium/PEncode

sunlite99(10000) about 16 hours ago [-]

How will you prevent YouTube from re-encoding the video and trashing the data?

tenebrisalietum(10000) about 16 hours ago [-]

Make the boxes bigger.

lb1lf(10000) about 16 hours ago [-]

-Back in the day when file sharing was new, I won two rounds of beer from my friends in university. The first came after I tried what I dubbed hardcore backups: tarred, gzipped and pgp'd an archive, slapped an avi header on it, renamed it britney_uncensored_sex_tape[XXX].avi or something similar, then shared it on WinMX, assuming that as hard drive space was free and teenage boys were teenage boys, at least some of those who downloaded it would leave it shared even if the file claimed to be corrupt.

It worked a charm.

Second round? A year later, when the archive was still available from umpteen hosts.

For all I know, it still languishes on who knows how many old hard drives...

marginalia_nu(10000) about 15 hours ago [-]

Poor guys, still looking for the right codec to play the britney tape they downloaded 28 years ago.

jimmygrapes(10000) about 7 hours ago [-]

ah hell, you're the one who made my computer crash trying to open that and make me panic? damn you man

jjice(10000) about 15 hours ago [-]

That's a perfect college CS story. Beer and bastardized files - what a combo!

S-E-P(10000) about 7 hours ago [-]

You devil! I'm pretty sure I remember running into a file that looked like that and a quick poke around showed it wasn't anything valid.

Funny how these things work, since I'm pretty sure I remember running into it around 2008 (I'm a few years younger).

I think I just deleted it though, since I was suspicious of most strange files back then; I was the nerd who didn't have friends, so I used to troll forums for anything I could get my hands on.

f0e4c2f7(10000) about 10 hours ago [-]

Your story reminds me of a Linus quote.

'Real men don't use backups, they post their stuff on a public ftp server and let the rest of the world make copies.' -Linus Torvalds

Historical Discussions: I'm an addict (May 18, 2022: 601 points)

(605) I'm an addict

605 points 7 days ago by tarunreddy in 10000th position

tarunreddy.bearblog.dev | Estimated reading time – 6 minutes | comments | anchor

I like watching videos. A lot in fact. Today, I've spent over 6 hours watching youtube videos, an hour of reading through comments[1] on hacker news, 3 hours of sleep and poof, the day is gone.

Self trust, memory and the invincible autopilot: I can't trust myself. Cause: I'm simply incapable of doing things I've set out to do. Simple things. Everything is difficult. I overpromise and don't deliver. Writing this post is very difficult in fact. I would switch to comforting myself by watching youtube if not for cold turkey (Site blocking software. Highly recommended. No affiliation). I say to myself as I write this, 'I will sleep early tonight'. But there is a 95% chance of that not happening. I'm not talking about what I think the cause is (I'll come to that later), this is just something I do or 'happens' to me everyday.

Another angle that makes this ever more distressing is that my memory is very, very fallible. I have a folder of text files in which I have written things from ADHD self-help books to anti-procrastination videos to random txt files I have written in the spur of the moment, all intending to right my path. A simple script shows me that there is around 2500 words of strong convictions that I've set to myself, not counting several other ones I wrote in different note taking systems. I can confidently say that I've done nothing I said I would do there. The thing is, I forget that I wrote them. My mind or I forget about the fact of their existence. Maybe visibility is a problem. I should have written it on the wall behind my PC. But the fact remains. I'll move to the hall across my bedroom and there goes everything out the window. I'm a blank slate.

Being the eternal hedonist I am, I indulge in pleasure every chance I get. Staying up late to get a quick dose over grogginess next day, sign me up. Avoiding difficult calls with hours and hours of pleasure seeking, a 100% I'm in. The pleasure I gain from these lowly enjoyments is so consistent that I'm not sure I'm conscious half the time everyday. I become a zombie whose only objective is pleasure and any moment spent not doing something or anything is to be avoided. I run on autopilot on most days.

An aside: The only time I went without Reddit and YouTube was 3 weeks back. When I think back to those days, I think the only difference between during those time periods was self awareness. I knew what would happen if I watched a single video of any sorts. So as an abstainer I made the smart decision of never even looking in the general direction of the TV when my family was watching it. Blocking video streaming sites using cold turkey (and not unlocking it) meant it was difficult to watch on my PC. I kept my phone on focus mode and only used chrome 5 minutes at a time during breakfast or other restless times.

Luke would say I'm doing what I'm doing because I LIKE it. NGL, I like it. I love it. During the week-long abstinence stint, I was reading through comments in a post on esports careers and got to this. I've watched the video since, but at that moment after all the avoiding I did, watching just a few seconds (I think it was 5 or 10 seconds maybe), I knew how powerful of a medium it was. I knew how dangerous it was. I think we (internet/video junkies) underestimate how enjoyable videos are. Just go a week without them and watch a video of your favourite content creator. It feels gooood... I forgot the satisfaction I was after without forgetting the behaviours which drove me to continue watching. I used to disrespect them and think it is absolutely within my power to use them wisely. But I have come around now. I don't know anything about the company, but thousands of smart engineers and billions of dollars went into creating the experience we have when we interact with them, not discounting the content itself, which has gotten pretty good. I gotta respect the effort it takes to get over that. I gotta forgive myself if I fall into the spiral of endless consumption.

Is it really THAT bad? I'm sure that besides disappointing myself and my parents, I could continue doing what I'm doing. If I give in and not resist, if I don't set any goals whatsoever, I could live without much trouble I think. It would be so easy. I don't want it though. Remembering the times when I worked hard, I can confidently say I was more aware. If I am aware of what I'm doing, and I seek awareness as the thing that could get me out of this quagmire, I would never do this. I know that working hard on things I hate is the right path precisely because I hate doing it so much. Hatred and difficulty point in the direction of something worth doing. I don't know of any situations where this is false.

Getting over it: Besides blanket ban on all things video and social media, I don't think I have a better solution. Cold turkey on the laptop, always on focus mode and avoiding looking at the general direction of the TV screen when in the hall is the only solution that worked for me. I don't know what I will be doing in the next 10 minutes, let alone the next day. But I know this. When I have a precious visit from lady awareness and my thought is the clearest, I know this is not for me.

I had to (did I really?) search for this video for a while and watched it (oh god), but at around the ten minute mark, Andreas talks about why he named his project 'Serenity', his relation with addiction and how it's a part of him.

I wanted to give it a name that would always remind me of this thing that I didn't want to lose and don't want to lose and can't afford to lose. The knowledge that I'm a life long addict and I simply cannot have drugs or alcohol [...] I am completely allergic to it and it takes over my entire life in no time if I allow them in. And that's not because there is anything wrong with drugs or alcohol [...] It's just that I, Andreas, cannot handle these things.

I think I can follow his lead and say I am an addict.

[1] Reading long articles is tough. Sifting through comments on the other hand - pretty comfortable.

All Comments: [-] | anchor

artemonster(10000) 6 days ago [-]

This hits hard. I recently counted hours watched on YT (via history export) and OH MY GOD, it averaged around 3 hours daily throughout the year, for several years. I just imagined WHAT IF I had spent this amount of effort and time on ANYTHING else: playing an instrument, learning a new language, coding a side-project. The thought scared me so much, aka the difference between current me and 'potential' me, that I abstained from YT... for a week, and then I slipped back. I am truly powerless, damn. Any other ideas besides cold turkey? I mean, some 'quality time' also needs to exist, you cannot abstain from everything, right? Or is this a fallacy that gets alcoholics back to alcohol?

attero(10000) 5 days ago [-]

This is very real. The time adds up and it's often not the best way to relax/learn. Not only could you spend this time being productive, you could also just be resting or sleeping or reflecting on your life.

You might want to check out https://watchlimits.com/ (free extension, I built this), or some other tools I mention in https://watchlimits.com/blog/posts/more_hours_per_week/

BbzzbB(10000) 6 days ago [-]

Did you estimate the hours by summing total video lengths in history?

As I recall there was no time-spent stat, just video history without watchtime indication.

imtringued(10000) 6 days ago [-]

>I just imagined WHAT IF I have spent this amount of effort and time on ANYTHING else:

That's impossible. You can click on a youtube video instantly at any moment and waste 3 hours thanks to the power of the algorithm. Spending that time on anything specific would require you to plan that into your schedule. You are not going to work on a side project when you think you have 30 min for some 'harmless' fun but you sure are going to be trapped by the algorithm for three hours without planning to do so.

015a(10000) 6 days ago [-]

If you have YouTube Premium & the YouTube mobile app: click on your profile picture, then YouTube Premium Benefits. It'll tell you there, of all places, your total 'Ad-free video watchtime'. It doesn't include the hours watched without Premium and isn't organized per day, so not the greatest analytics, but it's a number that will surprise every person reading this.

switchbak(10000) 6 days ago [-]

A lot of the addictive component comes from the ML generated suggestions.

Just curate a decently small list of high quality channels and only browse via the subscriptions list. You'll know when you're all caught up and the FOMO isn't there. You'll still catch the stuff you're interested in, but you won't be pulled in a million other random directions.

That said, I have the YouTube bug pretty bad myself.

orangepurple(10000) 6 days ago [-]

For some perspective, read what Ted has to say about the necessity of going through the power process (paragraphs 33 onward) and the motives of scientists (paragraphs 87 onward) in his manifesto, Industrial Society and Its Future:


cyphar(10000) 6 days ago [-]

I started learning Japanese about two years ago now and now all of the time I used to waste watching YouTube is spent watching YouTube in Japanese as language practice. It's still the same activity but is at least somewhat productive. I barely go on Hacker News too, since it feels like more of a waste of time ('I could be studying Japanese right now!').

pjc50(10000) 6 days ago [-]

> 3 hours daily throughout the year, for several years.

So, less than the average amount of time people spend watching TV?

nonameiguess(10000) 6 days ago [-]

Not to say it's a great use of your time, but watching YouTube videos is a fairly passive activity. You can do it when you're otherwise out of energy and barely able to pay attention, so it isn't necessarily competing with blocks of time in which you can do something like code up side projects and learn to play new instruments.

Also, again maybe not for you, but a lot of people's numbers if they're just looking at watch history are going to be totally out of whack if they're using YouTube as a music streaming service. My wife's account is the one logged into our television and this would constitute the vast bulk of watch hours, having music on in the background while cleaning the house, cooking dinner, and doing a whole lot of other things. You definitely can't practice tuba in the background while also cooking and cleaning.

wazoox(10000) 6 days ago [-]

Not only that, but now I watch all videos and listen to all podcasts at 1.5 to 3x speed. At some point I should accept the FOMO...

UglyToad(10000) 6 days ago [-]

Very un-HN take, but maybe you're not meant to be 100% efficient all the time; maybe having some time to be passive and relax is good, actually. Maybe we already work far too much, and expecting to be additionally productive outside those hours is an express train to burnout?

jstummbillig(10000) 6 days ago [-]

> I just imagined WHAT IF I have spent this amount of effort and time on ANYTHING else: playing an instrument, learning a new language, coding a side-project

This is a very unfair set of 'ANYTHING else'. I am sure you can come up with a more real-world list of what other people actually do after work, and that might make you feel a little more lenient about your own choices and needs.

> Any other ideas besides cold turkey?

/etc/hosts youtube.com www.youtube.com

Best of luck.
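For anyone unfamiliar with the hosts-file trick above: you point the domains at a non-routable address, so the browser can't resolve YouTube at all. A minimal sketch (requires root/admin rights, and is trivially reversible by deleting the lines):

```
# /etc/hosts (Windows: C:\Windows\System32\drivers\etc\hosts)
0.0.0.0 youtube.com
0.0.0.0 www.youtube.com
0.0.0.0 m.youtube.com
```

Browsers cache DNS, so restart the browser (or flush the OS DNS cache) before expecting it to take effect.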

jmiskovic(10000) 6 days ago [-]

Learning to play an instrument is fun and can be just as addicting. I'd recommend a small form keyboard that can be kept nearby at all times. I adore my Yamaha Reface CP, if you manage to find one. The best part - grinding through chords, arpeggios, scales and challenging song sections to get them into muscle memory can be done while watching unrelated videos.

klik99(10000) 6 days ago [-]

The broadest definition of addiction I've heard is 'when you do something compulsively enough that it's affecting the rest of your life' - the implication being that everyone compulsively does things and that's fine, but it's only really a problem when it damages your relationships, career, happiness, etc. All brain stuff has a chemical component, but chemical dependency/addiction is a particularly dangerous subset of addiction, and not necessarily what I'm talking about here.

Many people here have the superpower to focus on a problem relentlessly; it's kind of the trademark of the nerd stereotype. Addiction is the dark side of that superpower, and one that I have to constantly keep in check. But I don't want to kill that superpower by squashing whatever thing I'm deep into at the moment, so I always use the litmus test: 'Is this affecting my ability to keep my life in order?'

As long as there isn't a strong chemical component to whatever addiction you're trying to purge, a book like 'Power of Habit' by Charles Duhigg has a lot of practical ways to adjust your unconscious compulsions.

ransom1538(10000) 6 days ago [-]

'When you do something compulsively enough that it's affecting the rest of your life'

Similar to what I was told: 'When you compulsively do something despite it having negative impacts on your life'

thomastjeffery(10000) 6 days ago [-]

That 'superpower' you mentioned is a good description of ADHD hyperfocus.

People with ADHD are naturally at a constant deficit in stimulation/dopamine. That's why we get distracted mid-conversation: the conversation wasn't stimulating enough to fill that deficit, so the brain started looking for more stimulation, thinking it could just multitask to compensate.

The deficit in stimulation/dopamine is why people with ADHD can hyperfocus: as soon as there is a satisfying source of stimulation, the brain tries to squeeze out as much dopamine as it can.

yamrzou(10000) 6 days ago [-]

> I'm simply incapable of doing things I've set out to do. Simple things. Everything is difficult.

"Independent discharges of dopamine neurons (tonic or pacemaker firing) determine the motivation to respond to such cues. As a result of habitual intake of addictive drugs, dopamine receptors expressed in the brain are decreased, thereby reducing interest in activities not already stamped in by habitual rewards."

From: Dopamine and Addiction | Annual Review of Psychology — https://www.annualreviews.org/doi/abs/10.1146/annurev-psych-...



I'd like to add a couple more ideas, because what you're describing in your article is spot on, and I believe can be generalized past your own experience.

> Another angle that makes this ever more distressing is that my memory is very, very fallible ... I can confidently say that I've done nothing I said I would do there.

Herbert Simon says: "In an information-rich world, the wealth of information means a dearth of something else: a scarcity of whatever it is that information consumes. What information consumes is rather obvious: it consumes the attention of its recipients. Hence, a wealth of information creates a poverty of attention".

As an information-addict myself, I've been meditating a lot on this topic. For the past two years I've been researching it from a psychological perspective (and for that, I'm grateful to @ericd for this HN comment: https://news.ycombinator.com/item?id=24581016). I'll throw in some resources that I've come across during this journey, in case anyone finds them useful:

- Dr Gabor Mate: Addiction: https://www.youtube.com/watch?v=_-APGWvYupU

- Dr. Anna Lembke: Understanding & Treating Addiction | Huberman Lab Podcast: https://www.youtube.com/watch?v=p3JLaF_4Tz8

- The book "The Molecule of More" by Daniel Z. Lieberman and Michael E. Long.

- The book "The Shallows" by Nicholas Carr.

shroompasta(10000) 6 days ago [-]

I've read The Molecule of More and didn't consider it all too great of a read.

There were some correlations that didn't sit well with me - one, off the top of my head, was an implication that MDMA consumption could make me politically conservative.

I personally would recommend just Huberman as he covers Dopamine to a great extent.

orangepurple(10000) 6 days ago [-]

It should also be mentioned that the most addictive and dopaminergic drugs most people in the Anglosphere consume daily are the wide variety of pre-made pre-cooked fast food meals which are engineered for maximum palatability. These days this goes for just about everything that isn't bought in the grocery store in its purest form. Just about every product on the shelves these days is contaminated with engineered ingredients to get you a stronger feeling of reward so you come back for more.

"Talk to me about taste, and if this stuff tastes better, don't run around trying to sell stuff that doesn't taste good." - Stephen Sanger, head of General Mills

You can search the web using that quote to begin your descent into the rabbit hole.

The food you eat does not just affect your body weight. It also affects your mental state.

hericium(10000) 6 days ago [-]

Also from Huberman: 'Controlling Your Dopamine For Motivation, Focus & Satisfaction' [1]

[1] https://www.youtube.com/watch?v=QmOF0crdyRU

thomastjeffery(10000) 6 days ago [-]

Are you very familiar with ADHD? It's very much the same effect, except instead of substance abuse/addiction as the cause, it's a natural chronic deficit in stimulation.

I find this YouTube channel very informative (albeit cheesy): https://m.youtube.com/c/HowtoADHD

michalstanko(10000) 6 days ago [-]

It scares me how much this reminds me of myself. I don't know how I've been able to keep my job (and the roof over my family's heads) with my habit of not being able to concentrate on work at all: the minute I need to think a bit harder, I immediately switch to reading news, HN, or watching YouTube, only to finish my work late in the evening to save my ass (on good days).

klondike_klive(10000) 6 days ago [-]

Not that I've conquered it by any means, but I find that narrating helps me maintain a thread of activity. Lots of my work involves switching between applications and cloud folders, and any break in that can trip me up. I find that if I talk to myself as if I were explaining what I'm doing (i.e. a tutorial, but not as rigorous), that really helps. I used to use screen capture software to take time-lapse videos to give myself the impression I was being watched/monitored. But that isn't as effective, as I know I'll probably never watch them.

projektfu(10000) 6 days ago [-]

Yeah, for me it is definitely a reaction to feeling overwhelmed. There's just that bit of activation energy that is missing, so I slip back to the equilibrium of doing easy things.

Adderall has helped, and so have some behavioral approaches, but nothing is a cure. For me, trying to eliminate distractions is either a distraction itself or ineffective.

Structured procrastination is useful; at least I am practicing piano or getting more fit instead of sitting around watching TV or reading dumb articles and forums.

FeepingCreature(10000) 6 days ago [-]

High BPM music does it for me.

seti0Cha(10000) 6 days ago [-]

I'm not an addict, but I'm addict adjacent, literally and metaphorically. While it's good that the poster identified the problem, the cold turkey approach is not usually sufficient. Beating addiction by simply removing the thing you are addicted to is very difficult because it leaves a big hole in your life. You need to replace your addiction with something healthy. Andreas did this very effectively because he found something that both occupied his time and gave him a new community.

phist_mcgee(10000) 5 days ago [-]

Andreas Kling who created Serenity OS if anyone was wondering. He's been an inspiration to me for sure.


IYasha(10000) 6 days ago [-]

We're in the same boat! But I fell for it, like, 15-20 yrs ago. I have been restricting myself since, basically, to the web 1.0 level. I don't have broadband anymore, only a limited plan - this stops me from watching videos. This was a conscious step. After setting up some physical barriers it became easier to change my own mindset (I knew I didn't have such an iron will to begin with). Also, knowledge helps. For example, being concerned with my own privacy, security, and health prevents me from using garbage like TikTok or Instagram. There's also the conventional wisdom which tells me to stop trying to catch up with everything (the HN news stream, for example :) ). It was necessary for my line of work for quite a while, but in life it brings nothing but a sense of missing out or loss, like 'Oh, I could've done this/been involved with that/joined those projects/used this library/visited that conference/etc.' Sorry if this comment turned into rambling )

dgb23(10000) 6 days ago [-]

HN is somewhere in the middle of useful and interesting versus distracting and repetitive. It's valuable enough for me to keep track of. I've noticed that I like it more through RSS. Because I don't see upvotes and comment counts there.

natly(10000) 6 days ago [-]

I used to be like this. I solved it by making my default activity reading (anything - stop if you don't enjoy it; novels are great) and working out. Just getting a week or month break is sometimes enough to break the habit (but it's best to do it in a way that doesn't feel like an endurance challenge - one that replaces it with something else fun or stimulating).

FrankyHollywood(10000) 6 days ago [-]

yes, that and go to sleep the moment you get the urge to eat lots of sugars, which usually means your brain needs rest.

dgb23(10000) 6 days ago [-]

I can relate too. Throw in video games and beer too for good measure.

For me there was no big aha moment or solution to getting distracted from what I actually want to do. It has been incremental steps, such as:

- focusing on consuming long form reading/videos

- heavily curating consumption with RSS, individual settings etc.

- disabling notifications of anything that is not important

- taking responsibility for things that I could avoid, engaging more

- regular exercise and sleep

Things like that. But again, incremental steps. Sometimes I shifted from one distraction to the next, but after recognizing this it becomes clearer what's happening.

The results are quite powerful. Over just a couple of years I gained so much. I started to get bored of things that would distract me otherwise. I gained confidence and especially courage. I recognize undesirable behavior really quickly now and stopped fooling myself.

aphroz(10000) 6 days ago [-]

We're all being sucked into the internet. It's so easy to become addicted to videos, podcasts, games, etc. nowadays that I would not be surprised if it becomes a major societal issue.

harryvederci(10000) 6 days ago [-]

We may just as well call it the InterCage at this point.

Nets are for trapping prey, that part has been completed a long time ago.

SkipperCat(10000) 6 days ago [-]

I always thought in 20 years, Facebook (or its equivalent) will have warning labels similar to how cigarettes do now. People should be free to do as they please, but I would hope that as a society, we would at least inform people as to what the actual product is designed to do.

You could think of it like the nutrition labels on food. Imagine a pop up when you log onto reddit saying 'this site on average engages users for 90-120 minutes per session'. That would give you some forethought about how much value you're getting from the engagement and prompt you to make a different decision. Or at least a more informed decision.

rr808(10000) 6 days ago [-]

Yeah, I'm surprised there are lots of articles about keeping iPads from children, but not much about their parents.

gitfan86(10000) 6 days ago [-]

The problem I have is that some of the content is really good. I have been able to lose weight consistently and easily because I found resources online that helped me understand the science and cut through all the bullshit that we are inundated with around nutrition. My confidence and results in investing are at an all time high because I can fill in the gaps of understanding with a range of experts on YouTube.

But, it does seems like an addiction when I want to accomplish a task and 20 minutes into that task I'm back on YouTube or forums looking for more interesting data.

supernihil(10000) 6 days ago [-]

I don't like the idea that replacing the time spent compulsively procrastinating with 'learning a new language, getting a pet, going to the gym' is the right way.

I see the root cause (for me at least; I used to read HN and blogs for 4 hours every day) as being that I can't stand being with myself.

During the last two months I've been trying not to panic when I'm idle, and not to take out my phone or read the nearest material I can lay my hands on.

Instead I try to accept the necessity of 'falde i staver' (Danish for 'falling out of presence'). When I was a kid I would often just fall into this state and just defocus my sight and let daydreaming take over.

Basically I have a war going on against effectiveness. I hold unto my right as a mammal to be inefficient and sit drooling looking at trees.

My advice on 'doing something' when you have a day without plans is the following: bike in the forest, coffee from a thermos near the ocean, read newspapers at the library, talk to people at train stations (the frequent hangouts are always open for conversations).

jnovek(10000) 6 days ago [-]

I am nearly the opposite, but it has the same effect.

I can spend hours in my own head, just thinking about... everything. For me the phone rabbit hole usually starts with an idea that needs more information to live.

The outcome, however, is the same: I get very little practical work done, I am constantly behind and very frustrated.

dhosek(10000) 6 days ago [-]

I've found that having a very finite procrastination activity (like, say, doing ten push-ups) can help. The trick is finding something that can't be extended indefinitely.

noobker(10000) 6 days ago [-]

> I hold unto my right as a mammal to be inefficient and sit drooling looking at trees

hear, hear!

anarticle(10000) 6 days ago [-]

At least if you know you have addictive tendencies you will get a benefit! :D It's a good first step to replacing a destructive/ineffective behavior with something a little better. I think in our software somewhere we all have a kind of addictive complex that for some, modern tech seems to be good at pushing. If you can wield it for good, I think that's a good change.

I agree with you that the obsession with S-tier clearing life is very bad for people. I have many young friends who obsess over optimizing every move they make at cost to their sanity and time. Specialization is for insects.

Ryoung27(10000) 6 days ago [-]

I agree with you on not liking the idea of changing your time sinks. I ran into a similar problem to yours with Reddit, and realized it was because I did not really like myself.

Somewhere along the way I realized I had not day dreamed or used my imagination for a long time because it was so easy to 'read the nearest material I can lay my hands on'.

To add to your advice I would suggest to try to find out who you are without outside material and to get comfortable with this self.

notacoward(10000) 6 days ago [-]

> I hold unto my right as a mammal to be inefficient and sit drooling looking at trees.

OK, I get this. Really, I do. I've spent many many hours doing 'nothing' just like this, and those have generally been happy times. But ... how would that change if you had a lot more free time? Could you spend all day every day in such a state? Would it be healthy if you did?

The reason I ask is that I've had to grapple with that question since I retired. Probably will even more so when my daughter leaves for college. And as much as I enjoy doing 'nothing' I find that I just can't do it all day. I have to be doing something, which brings us right back to the issue of low-effort low-reward activities (e.g. doomscrolling on the internet) vs. high-effort high-reward activities (e.g. hobbies, community involvement, travel). I force myself to do the latter first so I don't lose the ability to do hard(er) things, and there's still plenty of time left over for the low-effort stuff.

I suggest that your 'war' only needs to be fought because you don't have enough total free time, and it will seem like a very different war when that changes.

magicroot75(10000) 4 days ago [-]

You might like the book '4000 Weeks: Time Management for Mortals.' It's about this precise idea of valuing idleness (there's no quick way to change that mindset, it's a process).

bsedlm(10000) 6 days ago [-]

> I hold unto my right as a mammal to be inefficient and sit drooling looking at trees.

yes, I even like framing this as an environmental conservation activity.

I'm doing nothing at all (only consuming oxygen, no content, no nothing) as a conscious action to save the environment.

whywhywhywhy(10000) 6 days ago [-]

Once you realize that idle brain time is when actual creative problem solving occurs, without you being conscious of it, then it's easier to let go. I definitely feel fine just staring out of a train window rather than at my phone, knowing this.

malux85(10000) 6 days ago [-]

I see this a lot and I'm curious - why can't you stand being with yourself?

You touch on it a little when you say you begin to panic when you're idle - what is the source of that panic? Is it FOMO? Is it un-processed traumatic memories? Is it unhinged neuroticism and overthinking? Is it something else? What is the source of that panic?

nextaccountic(10000) 6 days ago [-]

> I see the root cause (for me atleast (used to read HN and blogs for 4 hours everyday)) is that i cant stand being with myself.

> During the last two months ive been trying to not panic when i'm idle. And not take out my phone or read the nearest material i can lay my hands on.

> (...)

> My advice on 'doing something' when you have day without plans is the following: Bike in the forrest, coffee from thermos near the ocean, read newspapers at the library, talk to people at trainstations (the frequent hangouts are always open for conversations)

Or maybe meditate? It may be hard (especially when beginning) but it's wonderful for learning to be comfortable with yourself.

katzgrau(10000) 6 days ago [-]

They're just cliché ideas for how to fill the time. Anyone who tries to break an addiction suddenly finds themselves with a lot of free time they don't know how to fill.

But on that note, have multiple kids AND get a pet. You'll never have a free minute again.

jimmydeans(10000) 6 days ago [-]

The internet was more addicting 20 years ago.

dave84(10000) 6 days ago [-]

Having to pay for it by the minute somewhat moderated the effect back then.

capableweb(10000) 6 days ago [-]

Not sure but anyway, 20 years ago it might have been accidentally addicting just because it was new, shiny and interesting. Today it's addicting because some people are employed full-time by large companies to make their products more addicting, and that's part of their OKRs and more ('Engagement' being one such metric that many companies try to increase religiously)

grumbel(10000) 6 days ago [-]

20 years ago you still had to go and search if you wanted to see something. Now you have endless streams of recommendations that do the thinking for you. All you've got to do now is watch and consume. And of course 20 years ago we didn't even have YouTube; it was still mostly just text.

micromacrofoot(10000) 6 days ago [-]

no way, tiktok looks like one of the most addicting platforms i've ever seen

everyone(10000) 6 days ago [-]

I'm confused by this..

'over 6 hours watching youtube videos, an hour of reading through comments[1] on hacker news, 3 hours of sleep and poof, the day is gone.'

The day is only 10 hours long? Also author is only getting 3 hours of sleep per day!?!?

ModernMech(10000) 6 days ago [-]

They are speaking figuratively about the day being gone.

Mattasher(10000) 6 days ago [-]

For years I've tried to explain to people that working in tech, especially over time, is like having a job as a beer taster while living above a bar and hanging out with your drinking buddies all evening at that same bar, where an attractive bartender puts cocktails in front of you all night for free, all while trying not to become an alcoholic.

Any addict who said their plan for sobering up was to live like this would be told they are going to fail with 100% certainty.

solitus(10000) 6 days ago [-]

I agree, sometimes I feel like I should become an electrician. When I have to do manual work, I do manual work. When I have to code...it can go in many directions.

upupandup(10000) 6 days ago [-]

I'm curious to know whether OP has experienced full-on addiction to substances, gambling, or sex. Because what he's describing does not seem to be any of those things; it's more a complaint that YouTube's recommendation algorithm is causing him to stay on the platform for hours. I don't know what other 'pleasures' he is alluding to, but I could infer here and say OP is male and he is probably referring to internet pornography.

All of these things could very well be what you end up doing, but it's really up to the individual to make the choices and change. You can't really do this by reading an article like this, nor can you find any solace by identifying others with your problem, because it quickly becomes a Wounded Club.

Instead of growing wiser, you stay wounded, thinking there is something wrong with you and you just end up like OP, watching youtube for hours on end, reading hackernews/reddit comments. If this is something you like to change then you need to take action. Without action all the advice in the world will do you squat.

Unfortunately, as of late, it's become fashionable and quite profitable to humblebrag about non-issues. Do you really think OP's behavior would be possible if their life depended on it? I think not.

If one is too lazy to make a change, then who's at fault? You can throw out whatever 4-letter medical term and write entire books on it. It won't matter. If it is to be, then it's up to me.

TimTheTinker(10000) 6 days ago [-]

You're largely right, but I think you're underestimating the cognitive deficits that come with ADHD, particularly executive function.

The key to breaking through mental barriers as an ADHD-sufferer is usually not more will-power or less laziness, though one must at least want to change. The key is externalization -- that is, building external cues, props, and guardrails into one's life to help one stay on task in the moment - whether that means doing work or resting. We have trouble doing both proportionately.

Some people say they've had great success taking time every Sunday evening to review the past week and make adjustments -- whether that be putting a post-it note on the computer, installing a browser extension, setting a series of reminders, or any number of innovative ideas. Whatever helps you externalize the decisions you'll need to make in the moment.

power_bands(10000) 6 days ago [-]

Your critique of this 'Wounded Club' mentality is well placed and well taken but man this is like the least sympathetic response possible.

Yes, we should take responsibility for improving our lives, especially in the face of clear signals that a change is needed. But, did it occur to you that this post was indeed OP taking responsibility for their suffering and a first step towards improvement?

It takes a significant measure of courage and vulnerability to publish this confession OP has written. We can be pedantic about whether or not OP is clinically addicted to anything, but I see this post as a positive step in the same direction of healing/improvement you emphasized.

SnowHill9902(10000) 6 days ago [-]

Knowing that HN is generally against it, I say it anyway: I recommend religion and religious teachings which address this and many other daily worldly issues perfectly. Christianity and Judaism both have excellent resources. Religious scholars have actually been the best psychologists but are generally dismissed by non-believers.

Edit: for those asking for specific recommendations. It's always best to find your own path according to the religion of your parents and environment. However, I can suggest that you investigate Mussar and look up some books in English.

"Musar is a path of contemplative practices and exercises that have evolved over the past thousand years to help an individual soul to pinpoint and then to break through the barriers that surround and obstruct the flow of inner light in our lives. Musar is a treasury of techniques and understandings that offers immensely valuable guidance for the journey of our lives.... The goal of Musar practice is to release the light of holiness that lives within the soul. The roots of all of our thoughts and actions can be traced to the depths of the soul, beyond the reach of the light of consciousness, and so the methods Musar provides include meditations, guided contemplations, exercises and chants that are all intended to penetrate down to the darkness of the subconscious, to bring about change right at the root of our nature."

jsmith99(10000) 6 days ago [-]

There are 'religious but not spiritual' groups for those who appreciate the structure of religion in their life but who don't believe in God. Atheist Quakers are an established group, and some Jewish groups seem close to an atheistic religion.

bitexploder(10000) 6 days ago [-]

For a modern person, with the easy access to knowledge we now have, religion is effectively just intellectual laziness. Some people do need help right now, but I don't think religions are any better than other addiction resources. AA gets a pass for me because I know atheists who used it, and the religious component is easy to ignore. AA works because of the group and accountability, not faith in some higher power.

Anyway, not to be insulting, but it is all a bunch of made up nonsense for a time when we did not have actual explanatory knowledge for our existence and universe. We do now. Religion and its institutions are dying out in industrialized nations because they have lost their claim to having all the answers.

projektfu(10000) 6 days ago [-]

Can you be a little more specific?

igorkraw(10000) 6 days ago [-]

I'm not against religion, but I just want to add that you don't need religion to get what I think is the good core of religions: healing stories and narratives, texts, mantras, rituals that help you in the moment, a community which shares your perspective, and, in the end, an explanation for existential dread and horrible things happening, and a way to find meaning.

You can find it in humanism, you can find it in secular philosophy, you can get therapy, you can find it in social political communities, it's in many places. You can even get some old bearded dude tell you what to do if that's what you need.

Religion is just one way to have faith.

loudmax(10000) 6 days ago [-]

Islam also has significant things to say on the issue. Hinduism and Buddhism likely have insights as well.

Zoroastrianism may also have something to offer here. Maybe it's time to revive it.

misiti3780(10000) 6 days ago [-]

Christianity is most certainly not the solution; therapy is

WHA8m(10000) 6 days ago [-]

No need for

> Knowing that HN is generally against it, I say it anyway

I am actually on the contrarian side from you, but thanks for putting yourself out there. I understand and respect your point and everything, but there is one thing that I want to put out regarding what you said. To state the following is quite problematic:

> Religious scholars have actually been the best psychologists but are generally dismissed by non-believers.

Without going into detail: for every profession, there are people who are good and bad at it. This has nothing to do with any background or anything. The difference between psychologists and priests/missionaries/etc. is that one is certified and the other is not necessarily certified. This makes a huge difference in the liability of the term/role, and it's rather dangerous to put them in the same bag. And I don't think making this distinction is dismissive.

jstummbillig(10000) 6 days ago [-]

> Religious scholars have actually been the best psychologists but are generally dismissed by non-believers.

Citation not required, as long as you believe, assumedly?

boppo1(10000) 6 days ago [-]

How do I get the psychological benefits without having belief? Let me lay a story on you:

I fell in love with this girl who I had known on/off for a long time. I found out she was an escort (quietly but distinctly confirmed payment-for-sex) in her spare time (she was a student when this all happened). This was really upsetting and had me very distraught. I simultaneously could see a life with her, but also felt disgusted at the escorting.

I wished I could speak with my grandfather about it. I knew he'd know what to tell me. But he had recently passed away. 'Well, what did I like about Grandpa? Could I find a substitute? I need like, an old person who has reliable wisdom and experience, not just some wino who has hung on. Why isn't this a thing? An old person a community can approach for advice on...'

'Oh I think I need a priest.'

When I went looking for one though, it was all about accepting Jesus into my heart and spiritual learnings and miracles that I must accept literally happened, etc. Real hard to find the 'old wise person who can help me navigate this thing'.

sidibe(10000) 6 days ago [-]

I've never had anything against religion, and I know it is good for my family, but there's just no way I'll ever get over my skepticism, so it's not a choice. I think most nonreligious people are the same.

kanonieer(10000) 6 days ago [-]

> Religious scholars have actually been the best psychologists

Any sources that will back this claim? Oh wait, you don't need facts. Do you give this sort of unsolicited religious advice to everyone or do you specifically choose people who are troubled?

aordano(10000) 6 days ago [-]

Religious beliefs provide a strong moral compass as a semi-coherent set that lets you define stances on a lot of things in your life without having to go through the hassle and difficulty of building them yourself. As long as it's a serious belief with adherence to the provided guidelines, and not just posturing used to justify decadent conduct.

I am not religious, and personally I think it's best to develop this on your own rather than taking a prepackaged system, but the utility and practicality of having something already done and battle-tested is undeniable.

Just as you don't need to reinvent the wheel and write a complex library on your own when there's one available, sometimes it's best to just adopt a prepackaged belief set and moral system.

Many people are unable to produce that on their own and end up dispirited, aimless, living their lives without any understanding of right, wrong, good, bad, moral, immoral.

For what religions are and what they provide, I personally think some branches of Buddhism are better, like the approach of Soka Gakkai International.

I don't agree with the sentence that religious scholars are the best psychologists, because they can only provide guidance within the prepackaged framework-for-living they adopted, and in many, many cases (i.e. mental illnesses, deep issues, moral hardship in grey areas, etc.) they are unable to help effectively in any significant way.

The good news is that psychology isn't incompatible with religion and both can coexist peacefully, and one can get the best of both worlds without thinking one is best; they work in different ways and provide different things, and IMO they aren't directly comparable, as a psychologist cannot help you very well in terms of religion, and a religious scholar cannot help you very well in terms of psychology (except for the things that fit within the religious framework chosen).

So all in all, I agree that religion is a valid choice and should be part of the discourse, as sometimes it can very well be the best course of action.

I just don't agree with throwing out blanket statements about what's best in a world as plastic and complex as the one we live in.

FrankyHollywood(10000) 6 days ago [-]

The bible is a collection of many books and resources by various authors. It contains valuable ideas and experience which have survived the centuries. Many religious people however like it to be a 'single black & white truth of the all-mighty invisible ghost who says you are a guilty person'.

For me the bible is the same as any other (old) book where people write about their life experience. A good example is 'Meditations' by Marcus Aurelius. There is a lot of wisdom in it, and I have read it more than once over the years. It makes you reflect on your own life and decisions you make.

spoiler(10000) 6 days ago [-]

An alternative to religion is meditation. IMO, a lot of spiritual practices share very similar mental mechanics (e.g. mantras and prayers, various forms of fasting, support networks, etc.)

Scarblac(10000) 6 days ago [-]

> It's always best to find your own path according to the religion of your parents and environment.

That ship sailed a while ago, my parents aren't religious, and I don't know any religious people.

paskozdilar(10000) 6 days ago [-]

The problem I have with religion is the focus on removing doubt, which I strongly disagree with: doubting a god's existence is a major no-no in most popular religions. And as soon as you lift the ban on doubt, you don't really have a religion anymore, but a philosophy.

So I'd personally recommend philosophy to people, instead of religion. Bertrand Russell (also known for his mathematical work) is an excellent place to start.

EDIT: For those who disagree, I'd recommend Russell's essay 'Why I Am Not A Christian' [0]. It is quite short and readable.


bowsamic(10000) 6 days ago [-]

Buddhism also has a well developed psychological system that everyone seems to ignore

Jaruzel(10000) 6 days ago [-]

> I recommend religion and religious teachings which address this and many other daily worldly issues perfectly.

This advice simply doesn't work if the recipient is an atheist.

To me, Religious texts are made up fiction that hold no more meaning in my world view than Harry Potter or Game of Thrones. If you read enough fiction on a shared topic, you'll be able to pull the same number of 'enlightening' quotes from those books as religious people can from their own sacred tomes.

However, IF you are a religious person, and find meaning in your religious books, then take the win, and enjoy that path. It's just that it's not a path everyone can take.

r_c_a_d(10000) 6 days ago [-]

And for atheists like me, you can still learn a lot from religions. I got a lot out of Alain de Botton's 'Religion for Atheists' https://www.librarything.com/work/11370617/book/89008159

sleepdreamy(10000) 6 days ago [-]

As someone who was religious when they were younger but no longer is: why? I used to be a devout Christian until I went exploring the world and saw the unreal amount of massive suffering, embedded greed, etc. If God is real, he is a cruel god.

We have the technology and means to ensure every person on this earth does not go hungry and has a safe place to sleep at night. But humans do human stuff.

You probably pass plenty of homeless in your daily life and never look/think of them again. Yet somehow religion constantly preaches harmony and giving to others. Most religious people I know are inherently greedy and abide by Capitalistic morals and act as such.

I guess I could create a bubble for myself and not care about others at an inherently deep level like most humans on earth do.

What would a religious Scholar/Teachings do for me if I can plainly see that teachings are only followed when convenient or warped to fit my world narrative? What would you suggest?

bytematic(10000) 6 days ago [-]


flycaliguy(10000) 6 days ago [-]

I'm sure I've read a few dozen of the same old AA debates on HN, but, yeah it worked for my old man.

AA, in particular the serenity prayer, has at least some overlap with the more tech friendly pursuit of Stoic philosophy.

"God, grant me the serenity to accept the things I cannot change, courage to change the things I can, and wisdom to know the difference."

zhdc1(10000) 6 days ago [-]

For those of you complaining about SnowHill9902 recommending religion, please check out Optimize.me (free) for something secular. It's still, IMO, the best collection of practical self-help knowledge and insight available on the internet.

paulryanrogers(10000) 6 days ago [-]

Nothing quite like fear of eternal damnation to motivate oneself.

BbzzbB(10000) 6 days ago [-]

Got anything more specific to recommend? I'd like to read the main books of the main religions eventually for personal culture, but they won't get to the top of the reading pile any time soon. Meanwhile, I'm sure procrastination, motivation and discipline are behaviors that religious scholars had to develop even in a pre-Internet world, so I'm sure there are interesting takes written on the subject. I recall some renowned writer from a few centuries back (the 16th/17th century?) writing about his struggles with procrastination and how he eliminated distractions (lol!) from his working environment.

bajsejohannes(10000) 6 days ago [-]

For YouTube specifically, I found this great snippet for uBlock Origin. It removes (almost) all recommendations on YouTube, which makes it much easier to quit after just one video:


www.youtube.com##ytd-browse[page-subtype='home'] #primary


Source: https://pawelurbanek.com/youtube-addiction-selfcontrol

RockRobotRock(10000) 6 days ago [-]

The firefox/chrome extension Unhook provides these features as well, and gives you fine-grained controls over most of the addictive features on YouTube.

bowsamic(10000) 6 days ago [-]

It could be a symptom of a persistent depression disorder, it is for me.

EDIT: I've been blocked from replying

Because lack of ability to concentrate, lack of motivation, and engaging in repetitive behaviour because it's the only thing that makes you not bored, are all symptoms of it

cassepipe(10000) 6 days ago [-]

Could you elaborate a bit on what makes you think that's what it could be?

bengale(10000) 6 days ago [-]

My desire to read the comment sections on Reddit, and increasingly on here, completely baffles me. If I was sat in a room where people were saying things as stupid as what I see on Reddit, I would leave the room and wonder how the hell I ended up in it. What possible reason do I have to waste time reading the barely formed thoughts of what must be predominantly teenagers? Yet... I find myself back there.

bostonsre(10000) 6 days ago [-]

It kind of feels like some kind of idle loop where my mind has some spare cycles and it defaults to hacker news and similar stuff, then it goes down the rabbit hole once started. Just making it a little harder to view the sites seems to help me a lot. I add <site> to my /etc/hosts file during work hours. I find myself looking at a broken page a couple times a day without even realizing how I got there, but it allows me to avoid the rabbit hole and just get back to work.
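The hosts-file trick described above can be sketched as a small POSIX shell helper; the `focus-block` tag, the function name, and the site names are illustrative assumptions, not part of any existing tool:

```shell
#!/bin/sh
# Append loopback entries for each distracting site, so the browser
# gets a broken page during work hours. Entries are tagged so they
# can be stripped again at the end of the day.
block_sites() {
    hosts_file=$1
    shift
    for site in "$@"; do
        printf '127.0.0.1 %s www.%s # focus-block\n' "$site" "$site" >> "$hosts_file"
    done
}

# Demo against a temp file; in real use you'd pass /etc/hosts (as root).
tmp=$(mktemp)
block_sites "$tmp" news.ycombinator.com reddit.com
cat "$tmp"

# To unblock later, remove the tagged lines:
#   sed -i '/# focus-block/d' /etc/hosts
```

Pointing the names at 127.0.0.1 is exactly the "broken page" effect the comment describes: DNS never leaves the machine, so the site simply fails to load.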

sureglymop(10000) 6 days ago [-]

An idea I had a while ago is an HN client which only shows the comments after one has completely read the article. I don't know how to go about it yet, but it could be really cool.
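One minimal way to sketch the gating logic: estimate a required reading time from the article's word count and refuse to reveal comments before that much time has passed. The words-per-minute constant and the function names here are my own assumptions, not part of any existing client:

```python
import time

WORDS_PER_MINUTE = 230  # rough average adult reading speed (assumption)

def required_seconds(article_text: str) -> float:
    """Minimum time a reader should spend before comments unlock."""
    words = len(article_text.split())
    return 60.0 * words / WORDS_PER_MINUTE

def comments_unlocked(opened_at: float, article_text: str, now=None) -> bool:
    """True once enough time has elapsed since the article was opened."""
    now = time.time() if now is None else now
    return (now - opened_at) >= required_seconds(article_text)

# Example: a 460-word article needs ~120 seconds of reading.
article = "word " * 460
print(comments_unlocked(opened_at=0.0, article_text=article, now=60.0))   # False, still locked
print(comments_unlocked(opened_at=0.0, article_text=article, now=130.0))  # True, unlocked
```

A real client would pair this with the article text fetched for the story and only render the comment tree once the gate opens; time-on-page is of course only a proxy for having actually read anything.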

wutbrodo(10000) 6 days ago [-]

This is what bedevils me about Twitter. I'm rigorous about pruning my Following list to keep it intellectually honest, so my main feed is pretty great. I end up reading replies to thought-provoking or provocative tweets because there's often additional context or good-faith rebuttals in there.

But holy crap, those nuggets are buried among the ravings of the absolute stupidest people in the world. It feels poisonous to my epistemology, but the good stuff is _so_ valuable.

I suppose the solution is to try to pick topics that nobody cares about? This was tough when Covid was an informational lifeline, but maybe I can just change my interests from economics to... Quantum physics or something.

zhdc1(10000) 6 days ago [-]

> Yet ... I find myself back there.

Agreed. There have been entire days where I've found myself distracted from completing otherwise pressing deadlines.

What I've found is that the activity I do first when I start my 'working day' dictates what I do. So, if I show up at work and open Hacker News or Tweetdeck, there's a not-insignificant chance that I'll find myself distracted one way or another for the rest of the day. However, if I stick to a set schedule, there's a good chance that I have a productive day.

What worries me is that these distractions build on one another. So, if I start on Hacker News, and stay off of it for the rest of the day, I can still find myself spending a good chunk of time going through unimportant emails or browsing various news feeds.

bin_bash(10000) 6 days ago [-]

I'm in my mid-30's. I quit reading Reddit because in pretty much all the subreddits I was in I started to feel old. People sound like I did 10-15 years ago when I lacked the life experience I have now.

I don't have this experience on HN really. I suspect I'm closer to the average age here. There is the occasional poor opinion but it usually gets downvoted pretty hard. Even people that I disagree with I learn a lot from. Today in this thread I learned about how classic Christianity thought about doubt for example. Of course a lot of people are anonymous here (including me) so people are much more bold with their opinions than they would be in person.

Not discounting your experience, but to me I have a very different experience on HN than I do Reddit.

Cthulhu_(10000) 6 days ago [-]

I think tolerance for text is much higher than for real life; I mean, it's just text, so you filter out the 80% of face-to-face communication that isn't the words themselves.

zagrebian(10000) 6 days ago [-]

> things as stupid as I see on reddit

The majority of comments I encounter on Reddit are jokes that are at least mildly amusing, and some of them are hilarious. If I had to describe Reddit comments with one word, I'd say funny, not stupid.

thomastjeffery(10000) 6 days ago [-]

Sounds like OP isn't necessarily an addict: they have ADHD.

Half the article was them complaining about poor working memory, and the rest about getting stimulation/dopamine from unhealthy sources.

Everyone has 'a little ADHD'. We all occasionally forget things, hyperfocus on things we are excited about, etc. ADHD is the disorder of having those struggles so often that it's debilitating. It stems from having an underdeveloped frontal lobe that doesn't feel dopamine as well as neurotypical brains do, meaning an ADHD brain needs more stimulus to feel healthy.

Because of the deficit in stimulus, people with untreated ADHD are statistically much more likely to have a substance abuse disorder (SAD); but in many cases, treating their ADHD (with medication and cognitive behavioral therapy) has the side effect of treating their SAD.

Addiction, in the public consciousness/vernacular, is more abstract than SAD: it can mean anything from chemical dependence to unwanted habits. I suspect most people tend to make a false equivalence between the former and the latter. I see people all the time struggling to change a simple habit and immediately labeling themselves 'addicts'. I think that's an unhelpful characterization.

I think the point people tend to miss is that in order to change a habit, you don't just 'stop doing it': you must replace it with an alternative dopamine source. This is especially tricky with untreated ADHD.

jaqalopes(10000) 6 days ago [-]

As someone actually diagnosed with and medicated for ADHD, I'm stunned by the clarity and relatability of this description of it, right down to the SAD that has afflicted me for over a decade. None of my doctors has ever put it so clearly, so obviously, in a way that connects to my actual experience. All that is to say, I'm bringing this to my next therapy session, wish me luck.

psyc(10000) 6 days ago [-]

I was exactly like OP for 2 years. I was in therapy talking about it that whole time, to no result whatsoever. Then I got on Wellbutrin and literally ten days later all of that behavior just completely ended, and I returned to taking care of important shit and being productive. No willpower required. I just suddenly felt compelled to do different things. Just saying.

Wellbutrin is known to fuck with dopamine, presumably sometimes for the better.

Taylor_OD(10000) 6 days ago [-]

> 'a little ADHD'

Yikes. What a bad take. Many people experience some of the side effects of ADHD may be a true statement. But saying everyone has a little bit of this debilitating mental disorder makes light of something that is not an easy thing to deal with for many sufferers.

That's like saying everyone has a little cancer because sometimes we all get colds which people going through chemo may be more likely to get.

narag(10000) 6 days ago [-]

Another alternative explanation, one that I find extends to many alleged addictions, is that he is avoiding doing what he needs to do because of anxiety.

A recent related post:


jnovek(10000) 6 days ago [-]

You alluded to this, but I think it's worth saying explicitly:

People with ADHD are 2.7 times more likely to deal with depression than people without.

This is very subjective and I can't put a finger on what it is exactly but something in this post just feels like depression.

annie_muss(10000) 6 days ago [-]

I agree with this completely. I sounded just like the OP. I struggled with it for decades. Then recently I was diagnosed and started taking medication. Some of the problems basically stopped instantly.

I implore anyone reading this and noticing it in themselves to go to a professional and find out if you have ADHD or not. I am by no means 'cured' or completely functional but at least now I have a fighting chance.

burner22(10000) 6 days ago [-]

I feel seen.

My mechanism for remaining focused and getting through work has been to consume harder illicit substances, mostly cocaine. It's effective but definitely not sustainable long term, and I'm not too sure where to go from here.

FrankyHollywood(10000) 6 days ago [-]

I don't think one should attribute it all to ADHD being a physiological deficiency of the brain.

For me personally it is also about being above average smart. I'm intensely bored by a lot of people and activities. I lose interest, can't remember a lot, get irritated easily.

After many years I have some good friends and work which suits me, and I know how to spend leisure time well. Basically, I know how to amuse and challenge myself, and I don't experience most of the ADHD symptoms anymore.

hirvi74(10000) 5 days ago [-]

> Because of the deficit in stimulus, people with untreated ADHD are statistically much more likely to have a substance abuse disorder (SAD); but in many cases, treating their ADHD (with medication and cognitive behavioral therapy) has the side effect of treating their SAD

I feel like I am one of the rare people that developed a SAD from medically treating ADHD. It's not from the intended effects of the medication, but from trying to treat the side-effects.

guerrilla(10000) 6 days ago [-]

This shouldn't be flagged... This was a totally valid comment which generated tons of fine discussion.

birdyrooster(10000) 6 days ago [-]

People with ADHD are twice as likely to become addicted to substances, I assume the same is true of YouTube. For many, including me, ADHD and addiction come hand in hand.

gspr(10000) 6 days ago [-]

How does one generally go about dealing with the fact that the above is a 100% match for oneself, 30-some years into life? I'm feeling physically sweaty realizing how accurately the article – and this comment – describes me.

dboreham(10000) 6 days ago [-]

> Sounds like OP isn't necessarily an addict: they have ADHD

Two sides of the same coin imho. Both mechanisms are dopamine-driven.

sirsinsalot(10000) 6 days ago [-]

> I think that's a unhelpful characterization.

I agree, and I think it is important to talk about the sensation, not just the observation or impact on others (which are easy to compare poorly).

To me it isn't so much the _what_, but the _why_. The brain is hunting for stimulation due to a baseline deficit below that required to operate normally. I can tell you from experience that 'hunt' can range from classic hyperfocus/lack of focus to chronic boredom, hypersexuality, drug abuse and so on.

What provides the stimulation to reach baseline one day may not work the next day, or even from hour to hour or minute to minute. That's the core issue, however it manifests. If just one thing consistently hit the spot, it'd be akin to an addiction (where in addiction your brain downregulates, so you need to keep supplying it).

As it stands, it is more like being addicted to both everything and nothing, in a never-ending frenzy of dissatisfaction, frustration and boredom. Your brain, always downregulated, always chasing a moving fix.

Habit doesn't come into it one tiny bit.

kayodelycaon(10000) 6 days ago [-]

> Everyone has 'a little ADHD'.

I get what you're saying but I want to push back hard on this. Everyone has some of the symptoms of ADHD at any given time.

But, ADHD isn't more of the same symptoms. That's how it's diagnosed, but the underlying mechanism is a severely compromised ability to self-regulate which is categorically different from what people experience when they are out of mental energy.

Edit: I've clarified what I mean in other comments in the thread. My issue is the use of the phrase without explaining why it is wrong.

FeepingCreature(10000) 6 days ago [-]

Consider talking to a doctor about ADD? (Alternatively, get some Adderall somewhere and experiment.) Executive control is a distinct mental capability.

sascha_sl(10000) 6 days ago [-]

This. Even with all the addictive technologies around, this essay makes problems sound too extreme to not be at least worth evaluating for ADD. Especially the consistently poor working memory is a dead giveaway.

People forget that AD(H)D is fairly common, regardless of whether you think it's overdiagnosed. And with technology turning everything into dopamine slot machines, it will exaggerate the symptoms even more.

Otek(10000) 6 days ago [-]

Please don't ever again recommend to someone with problems randomly that they should 'get some amphetamine somewhere and experiment'. Okay?

rr808(10000) 6 days ago [-]

Maybe but it seems too common - does nearly everyone have ADD?

ceronman(10000) 6 days ago [-]

I identify myself with many of the things mentioned in this post. I am an addict too. Here is something that has helped me to improve things a little bit. This is of course 100% anecdotal and it might not work on anyone different than me, but here it is anyway:

Initially I also used focus mode and different kinds of blocking apps to try to control my addiction. I configured them to block HN, reddit, lichess, twitter, youtube, etc, allowing me to use them only a few minutes at the end of the day and during the weekends. This approach didn't work so well. It worked fine for a few days but then the abstinence syndrome kicks in and I inevitably ended up 'temporarily' disabling the blocks just to get that precious shot of dopamine before starting to work. Additionally, I was still wasting almost all my weekends and holidays consuming garbage.

So I got this idea: what if I tweak the approach a little bit and, instead of blocking the apps during 'productive' time, block them during leisure time, i.e. at the end of the day and during the weekends? The thing with media-consumption addiction is that it leaves me feeling that I don't have time to do anything. I neglect a lot of my personal tasks and goals. Work is something that I have to do somehow; after all, I need a salary. Doing apartment chores, reading that book I bought, or learning a new skill is something that can always wait, and it's during leisure time that I should be doing those things.

So I decided to block media consumption during the weekends and try to do something more meaningful in that time. It's easier to deal with the abstinence syndrome, because the replacement activities can be fun as well; they also produce dopamine (more slowly), just from things that are more meaningful to me. It's easier to skip silly but addictive YouTube videos to work on an interesting programming project than it is to skip them for a boring task at work. And there is also the feeling that it's only for a short time, that I can get the dopamine shot later, and that somehow tricks my brain. Then on Monday I already feel much better, without the need to consume that much media.

LimitedInfo(10000) 6 days ago [-]

interesting tip thank you

comboy(10000) 6 days ago [-]

Here's what worked for me.

1. Ask yourself why you think this is a problem and debug deeply. Maybe, after all, spending the day watching videos is fine and is what you want to do, and the story that you don't is unnecessary.

2. Once you bring it to your conscious mind, you face a decision, do I want to watch videos and maybe learn something and relax, given that it comes with not delivering some things I promised and doing the thing I've been thinking I need to do for a long time, or do I want to do e.g. that thing.

3. Then do what you want. It requires no effort: you do what you want. I know the habits-and-dopamine-addiction narrative, but there is no substance dependence here, and the trick is that we cannot really hold contradicting beliefs in our conscious mind (we have plenty of them in the background, but once you collide them actively you usually end up creating some micro-rationalization). Here, though, your focus is consciously facing the choice. Plus, there's plenty of dopamine in finishing something that's been on the todo list for a long time.

wnolens(10000) 6 days ago [-]

I think we have completely different brains. What you said hits me like someone telling me to 'be happy' when I'm depressed. It's not a matter of rationalization.

I am jealous this works for you. Life would be so much easier if my intrinsic motivation was well-aligned with my rational thinking.

technovader(10000) 6 days ago [-]

I can relate to this.

My understanding is you're addicted to dopamine.

You need a dopamine fast. Just take away all your dopamine sources for 1 day, or 1 week, and then reflect on the effects on you.

Then repeat it every once in a while, or whenever you need it: 1 day a week, or 1 week per month, something like that.

You've encouraged me to do the same. I'm very unhappy with my own addiction to YouTube and social media, though it's more like 2 hours per day maybe.

Apocryphon(10000) 6 days ago [-]

Did you actually read the OP? The author describes times when they abstained from Reddit/YouTube and going cold turkey.

hericium(10000) 6 days ago [-]

For me, a major part of ADHD management consists of forcing myself to do things, multiple times a day: work, learning, chore-like tasks, sport, hygiene, regular sleep, cold morning showers, unprocessed food preparation, pretty much everything that isn't immediately pleasurable.

I'm 'unhappy' multiple times a day when I start doing something that doesn't usually have to be done right away. But it's often a choice not between doing something now or later, but doing something now, not never.

However weirdly this may sound - I constantly do things against myself, for myself. I don't get much satisfaction from finishing those tasks but my life quality has increased drastically and I wouldn't go back from moments of discomfort to an ongoing discomfort.

Gelitio(10000) 6 days ago [-]

Same same.

I think I'm a highly functional ADHD person, due to my mom being a very responsible person, and also because I started taking Ritalin 12 years ago.

I'm always wondering how it feels for normal people who get their shit done without any meds.

I also want to pursue FIRE, because all those responsibilities and the fighting feel exhausting.

121789(10000) 6 days ago [-]

Oh wow this resonates - I'm exactly the same. I basically try to minimize the time I spend thinking on most things. I have to consciously tell myself 'I'm going to stop thinking and just do this task'. It's amazing how when you do this, all of the sudden all of the stuff that's on the back of your mind (chores, exercise, etc) just get done in time and your anxiety goes away

tarunreddy(10000) 6 days ago [-]

Hi guys, OP here. I abstained all day (yay!) until I was going to bed :( I've read through most of the comments here and want to clarify a few things:

1. I tried to get diagnosed with ADHD, but my doc just prescribed me some meds (amphetamines, I think). They seemed like they would work, until the effect wore off and I stopped taking them.

2. I have never abused any drug in the past.

3. I know this is a non issue for a lot of you guys and rightly so. I know I'm well off compared to a lot of real drug addicts.

4. I'm kinda depressed I think?

After a day of reflecting, speaking to my grandparents, spending time in a ceremony with my parents, and at the end of the day compulsively opening hn (I didn't expect this post to have any comments, addiction just took over), I can say a few things (before forgetting them!).

Firstly, I had a lot of time on my hands today, and spending it on personal relations has been good. Secondly, staying away from my room in general is positive; being alone in my room is almost certainly a negative. I want to practice programming and do projects before my masters, so I don't know how I'll deal with it. I'll reply to other comments once I wake up. It's almost 2am now!

Thanks for all the comments and advice!

ppqq(10000) 5 days ago [-]

Family, friends, community, church.

Qub3d(10000) 4 days ago [-]

I did two things that greatly helped me.

1) I at least made an attempt at some of Cal Newport's suggestions in 'Deep Work' and schedule working blocks for myself. I adopt a different posture, maybe wear different clothes, If you can afford to, get a second cheap laptop and use that only for work... And then go somewhere else physically. Even if it's just a different room.

2) Consider trying a few sessions of neurofeedback therapy. I did it in high school and really felt my ability to concentrate and think along a single track improve significantly. I can't guarantee anything but it's worth looking into.

Historical Discussions: Donald Knuth on work habits, problem solving, and happiness (2020) (May 23, 2022: 588 points)

(600) Donald Knuth on work habits, problem solving, and happiness (2020)

600 points 2 days ago by Thursday24 in 10000th position

shuvomoy.github.io | Estimated reading time – 19 minutes | comments | anchor

Shuvomoy Das Gupta

April 13, 2020

Recently, I came across a few old and new interviews of Donald Knuth, where he sheds light on his work habits, how he approaches problems, and his philosophy towards happiness. I really enjoyed reading the interviews. In this blog, I am recording his thoughts on approaching a problem, organizing daily activities, and the pursuit of happiness.

Seeing both the forest and the trees in research. 'I've seen many graduate students working on their theses, over the years, and their research often follows a pattern that supports what I'm trying to explain. Suppose you want to solve a complicated problem whose solution is unknown; in essence you're an explorer entering into a new world. At first your brain is learning the territory, and you're making tiny steps, baby steps in the world of the problem. But after you've immersed yourself in that problem for awhile then you can start to make giant steps, bigger steps, and you can see many things at once, so your brain is getting ready for a new kind of work. You begin to see both the forest and the trees.'

How Knuth works on a project. 'When I start to investigate some topic, during the first days I fill up scratch paper like mad. I mean, I have a huge pile of paper at home, paper that's half-used, used on only one side; I've kept a lot of partially printed sheets instead of throwing them away, so that I can write on the back sides. And I'll use up 20 sheets or more per hour when I'm exploring a problem, especially at the beginning. For the first hour I'm trying all kinds of stuff and looking for patterns. Later, after internalizing those calculations or drawings or whatever they are, I don't have to write quite so much down, and I'm getting closer to a solution. The best test of when I'm about ready to solve a problem is whether or not I can think about it sensibly while swimming, without any paper or notes to help out. Because my mind is getting accustomed to the territory, and finally I can see what might possibly lead to the end. That's oversimplifying the truth a little bit, but the main idea is that, with all my students, I've noticed that they get into a mental state where they've become more familiar with a certain problem area than anybody else in the world.'

Visualizers vs Symbolizers. 'Well, you know, I'm visualizing the symbols. To me, the symbols are reality, in a way. I take a mathematical problem, I translate it into formulas, and then the formulas are the reality. I know how to transform one formula into another. That should be the subtitle of my book Concrete Mathematics: How to Manipulate Formulas. I'd like to talk about that a little.

I have a feeling that a lot of the brightest students don't go into mathematics because (curious thing) they don't need algebra at the level I did. I don't think I was smarter than the other people in my class, but I learned algebra first. A lot of very bright students today don't see any need for algebra. They see a problem, say, the sum of two numbers is 100 and the difference is 20, and they just sort of say, "Oh, 60 and 40." They're so smart they don't need algebra. They go on seeing lots of problems and they can just do them, without knowing how they do it, particularly. Then finally they get to a harder problem, where the only way to solve it is with algebra. But by that time, they haven't learned the fundamental ideas of algebra. The fact that they were so smart prevented them from learning this crutch, which I think turned out to be important for the way I approach a problem. Then they say, "Oh, I can't do math." They do very well as biologists, doctors and lawyers.'

What graduate students should do when they have expertise in a certain area. 'When they [the students] reach this point [expertise in a certain area] I always tell them that now they have a responsibility to the rest of us. Namely, after they have solved their thesis problem and trained their brain for this problem area, they should look around for other, similar problems that require the same expertise. They should use their expertise now, while they have this unique ability, because they're going to lose it in a month. I emphasize that they shouldn't be satisfied with solving only one problem; they should also be thinking about other interesting problems that could be handled with the same methods.'

On the importance of anthropomorphizing a problem. 'Another aspect of role playing is considerably more important: We can often make advances by anthropomorphizing a problem, by saying that certain of its aspects are 'bad guys' and others are 'good guys,' or that parts of a system are 'talking to each other.' This approach is helpful because our language has lots of words for human relationships, so we can bring more machinery to bear on what we're thinking about.'

Why putting the discovery of a solution on paper is important. 'Well, I have no sympathy with people who never write up an answer; it's selfish to keep beautiful discoveries a secret. But I can understand a reluctance to write something up when another problem has already grabbed your attention. I used to have three or four papers always in sort of a pipeline, waiting for their ideas to mature before I would finally prepare them for publication.

Frances Yao once described the situation very nicely. She said, you work very hard on a problem for a long time, and then you get this rush, this wonderful satisfaction when you've solved it. That lasts about an hour. And then you think of another problem, and you're consumed with curiosity about the answer to that new one. Again, your life isn't happy until you find the next answer.'

The philosophy behind seeking solutions. 'The process of seeking solutions is certainly a big part of a researcher's life, but really it's in everybody's life. I don't want to get deep into philosophy, but the book of Ecclesiastes in the Bible says essentially this:

Life is hard and then you die. You can, however, enjoy the process of living; don't worry about the fact that you're going to die. Some bad people have a good life, and some good people have a bad life, and that doesn't seem fair; but don't worry about that either. Just think about ways of enjoying the journey.

Again I'm oversimplifying, but that's the message I find in many parts of the Bible. For example, it turns up in Philippians 3:16, where the writer says that:

You don't race to get to the goal; the process of racing itself, of keeping the pace, is the real goal.

When I go on vacation, I like to enjoy the drive.

In Christian churches I am least impressed by a sermon that talks about how marvelous heaven is going to be at the end. To me that's not the message of Christianity. The message is about how to live now, not that we should live in some particular way because there's going to be pie in the sky some day. The end means almost nothing to me. I am glad it's there, but I don't see it as much of a motivating force, if any. I mean, it's the journey that's important.'

Knuth's process of reading papers. 'It turns out that I read everything at the same slow rate, whether I'm looking at light fiction or at highly technical papers. When I browse through a journal, the titles and abstracts of papers usually don't help me much, because they emphasize results rather than methods; therefore I generally go through page by page, looking at the illustrations, also looking for equations that are somehow familiar or for indications of useful techniques that are unfamiliar.

Usually a paper lies outside the scope of my books, because I've promised to write about only a rather small part of the entire field of computer science. In such cases there's nothing new for me to worry about, and I happily turn the pages, zipping to the end. But when I do find a potentially relevant paper, I generally read it only partway, only until I know where it fits into the table of contents of The Art of Computer Programming. Then I make myself a note, to read it later when I'm writing up that section. Sometimes, however—as happened last night with that paper about scheduling games of bridge—I get hooked on some question and try to explore it before I'm ready to move on to reading any other papers.

Eventually when I do begin to write a section of my book, I go into 'batch mode' and read all of the literature for which my files point to that section, as well as all of the papers that those papers cite. I save considerable time by reading several dozen papers on the same topic all in the same week, rather than reading them one by one as they come out and trying to keep infinitely many things in my head all at once.

When I finally do get into batch mode, I go very carefully through the first two or three papers, trying to work the concepts out in my own mind and to anticipate what the authors are going to say before turning each page. I usually fail to guess what the next page holds, but the fact that I've tried and failed makes me more ready to understand why the authors chose the paths that they did. Frequently I'll also write little computer programs at this point, so that the ideas solidify in my head. Then, once I've gone slowly through the first few papers that I've accumulated about some topic, I can usually breeze through the others at a comparatively high speed. It's like the process of starting with baby steps and progressing to giant steps that I described earlier.'

On parts of research that are much less fun. 'Well, some parts of a job are always much less fun than others. But I've learned to grin and bear it, to bite the bullet and move on, to face the music, to take it in stride and make a virtue of necessity. (Excuse me for using so many clichés, but the number of different popular expressions tends to make my point.)'

On scheduling daily activities. 'I schedule my activities in a somewhat peculiar way. Every day I look at the things that I'm ready to do, and choose the one that I like the least, the one that's least fun — the task that I would most like to procrastinate from doing, but for which I have no good reason for procrastination. This scheduling rule is paradoxical because you might think that I'm never enjoying my work at all; but precisely the opposite is the case, because I like to finish a project. It feels good to know that I've gotten through the hurdles.'

My scheduling principle is to do the thing I hate most on my to-do list.

On pursuing a PhD. 'A PhD is awarded for research, meaning that the student has contributed to the state of the world's knowledge. That's quite different from a bachelor's degree or a master's degree; those degrees are awarded for a mastery of existing knowledge. (In some non-science fields, like Art, a master's degree is more akin to a PhD; but I'm speaking now about the situation in mathematics and in the sciences.) My point is that it's a mistake to think of a PhD as a sort of next step after a BS or MS degree, like advancing further in some academic straight line. A PhD diploma is another animal entirely; it stands for a quite different kind of talent, which is orthogonal to one's ability to ace an examination. A lot of people who are extremely bright, with straight A+ grades as undergraduates, never get a PhD. They're smart in a way that's different from 'research smart.' I think of my parents, for example: I don't believe either one of them would have been a good PhD candidate, although both were extremely intelligent.

It's extremely misleading to rank people on an IQ scale with the idea that the smarter they are, the more suitable they are for a PhD degree; that's not it at all. People have talents in different dimensions, and a talent for research might even have a negative correlation with the ability to tie your own shoes.'

Whether volunteering helps Knuth with his principal vocation. 'Well, you're absolutely right. I can't do technical stuff all the time. I've found that I can write only a certain number of pages a day before running out of steam. When I reach this maximum number, I have no more ideas that day. So certainly within a 24-hour period, not all of it is going to be equally creative. Working in the garden, pulling weeds and so on, is a good respite. I recently got together with some friends at Second Harvest, repackaging food from one place to another. This kind of activity, using my hands, provides variety and doesn't really take away from the things I can do for the world.'

On unhappiness. 'I mean, if you didn't worry, and if you didn't go through some spells and crises, then you'd be missing a part of life. Even though such things aren't pleasant when you're doing them, they are the defining experiences — things to be glad about in retrospect because they happened. Otherwise you might be guilty of not feeling guilty!

On the other hand I've noticed in myself that there were times when my body was telling me to be unhappy, yet I sometimes couldn't readily figure out a reason for any unhappiness. I knew that I was feeling 'down,' but sometimes I had to go back several months to recall anything that anybody had said to me that might still be making me feel bad. One day, when I realized how hard it was to find any reason for my current unhappiness, I thought, 'Wait a minute. I bet this unhappiness is really something chemical, not actually caused by circumstances.' I began to speculate that my body was programmed to be unhappy a certain percentage of the time, and that hormones or something were the real reason behind moments of mild depression.'

Why power corrupts. 'When people have more power and they get richer, and they find themselves rich but still unhappy, they think, 'Hmmm, I'll be happy if I only get rid of all the sources of my unhappiness.' But the action of removing annoyances sometimes involves abusing their power. I could go on and on in this vein, I guess, because you find that in the countries where there is a great difference between rich and poor, the rich people have their problems, too. They haven't any motivation to change the way they're living, exploiting others, because as far as they can see, their own life isn't that happy. But if they would only realize that their unhappy spells are part of the way that they're made, and basically normal, they wouldn't make the mistake of blaming somebody else and trying to get even for imagined misdeeds.'

Point eight is enough. 'In fact I've concluded that it's really a good thing for people not to be 100% happy. I've started to live in accordance with a philosophy that can be summed up in the phrase 'Point eight is enough,' meaning '0.8 is enough.'

You might remember the TV show from the 70s called 'Eight is Enough,' about a family with eight children. That's the source of my new motto. I don't know that 0.8 is the right number, but I do believe that when I'm not feeling 100% happy, I shouldn't feel guilty or angry, or think that anything unusual is occurring. I shouldn't set 100% as the norm, without which there must be something wrong. Instead, I might just as well wait a little while, and I'll feel better. I won't make any important decisions about my life at a time when I'm feeling less than normally good.

In a sense I tend now to suspect that it was necessary to leave the Garden of Eden. Imagine a world where people are in a state of euphoria all the time — being high on heroin, say. They'd have no incentive to do anything. What would get done? What would happen? The whole world would soon collapse. It seems like intelligent design when everybody's set point is somewhere less than 100%.'

High minimum more important than high maximum. 'I try to do a good job at whatever I'm doing, because it's more fun to do a good job than not. And when there's a choice between different things to spend time on, I try to look for things that will maximize the benefit without making me burn out.

For example, when I was working on the TeX project during the early 80s, hardly anybody saw me when I was sweeping the floor, mopping up the messes and carrying buckets of waste from the darkroom, cleaning the machines, and doing other such stuff. I did those things because I wouldn't have dared to ask graduate students to do menial tasks that were beneath them.

I know that every large project has some things that are much less fun than others; so I can get through the tedium, the sweeping or whatever else needs to be done. I just do it and get it over with, instead of wasting time figuring out how not to do it. I learned that from my parents. My mother is amazing to watch because she doesn't do anything efficiently, really: She puts about three times as much energy as necessary into everything she does. But she never spends any time wondering what to do next or how to optimize anything; she just keeps working. Her strategy, slightly simplified, is, 'See something that needs to be done and do it.' All day long. And at the end of the day, she's accomplished a huge amount.

Putting this another way, I think that the limiting thing — the thing that determines a person's success in life — is not so much what they do best, but what they do worst. I mean, if you rate every aspect of what someone does, considering everything that goes into a task, a high minimum is much more important than a high maximum. The TeX project was successful in large part because I quietly did things like mop the floor. The secret of any success that I've had, similarly, is that in all the projects I've worked on, the weakest link in my chain of abilities was still reasonably strong.'

A person's success in life is determined by having a high minimum, not a high maximum. If you can do something really well but there are other things at which you're failing, the latter will hold you back. But if almost everything you do is up there, then you've got a good life. And so I try to learn how to get through things that others find unpleasant.

A guiding heuristic. 'Don't just do trendy stuff. If something is really popular, I tend to think: back off. I tell myself and my students to go with your own aesthetics, what you think is important. Don't do what you think other people think you want to do, but what you really want to do yourself. That's been a guiding heuristic for me all the way through.'

Source of humility. 'I wrote a couple of books, including Things a Computer Scientist Rarely Talks About, that are about theology — things you can't prove — rather than mathematics or computer science. My life would not be complete if it was all about cut and dried things. The mystical things I don't understand give me humility. There are things beyond my understanding.

In mathematics, I know when a theorem is correct. I like that. But I wouldn't have much of a life if everything were doable. This knowledge doesn't tear me apart. Rather, it ensures I don't get stuck in a rut.'

Meaning of life. 'I personally hold the belief that God exists, although I have no idea what that means. But I believe that there is something beyond human capabilities, and it might be some AI. Whatever it is, I do believe that there is something that goes beyond human understanding, and that I can try to learn more about how to resonate with whatever that being would like me to do. I strive for that (occasional glimpses of that being), not that I ever think I am going to get close to it. I try to imagine that I am following somebody's wishes, and this AI or whatever it is, it is smart enough to give me clues.'

All Comments: [-] | anchor

mooneater(10000) 1 day ago [-]

> they haven't learned the fundamental ideas of algebra

I'd very much love to know exactly what Knuth considers the 'fundamental ideas of algebra'!

Hasz(10000) 1 day ago [-]

Perhaps the eponymous fundamental theorem of algebra is a good place to start.


bo1024(10000) 1 day ago [-]

It sounds like he's just talking about using variables to solve equations. I think the point he's making in that section is just about learning a good process so you can solve harder and harder problems, e.g.

Problem 1: Two numbers add to 100, one is 20 larger.

Smart student: oh, I see, 60 and 40.

Dumb student Knuth: x + y = 100 and x = y + 20, solves to x=60, y=40.


Problem 2: Four numbers sum to 1024, one is half the sum of the other three less 17, one of the others...

Smart student: uh, I don't see the answer.

Dumb student Knuth: w + x + y + z = 1024, w = (x + y + z)/2 - 17, ... solved it.
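The translation step can even be handed to a program. A tiny sketch (illustrative only; solve_2x2 is a made-up helper, not something from the thread) that solves the first system with Cramer's rule:

```python
def solve_2x2(a1, b1, c1, a2, b2, c2):
    """Solve a1*x + b1*y = c1 and a2*x + b2*y = c2 by Cramer's rule."""
    det = a1 * b2 - a2 * b1  # zero would mean no unique solution
    return (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det

# "Two numbers add to 100, one is 20 larger":
#   x + y = 100,  x - y = 20
x, y = solve_2x2(1, 1, 100, 1, -1, 20)
print(x, y)  # -> 60.0 40.0
```

The point of the algebra "crutch" is exactly this: once the words are formulas, solving them is routine manipulation, no cleverness required.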

dboreham(10000) 1 day ago [-]

After reading the relevant article section, I'd guess he's talking about the idea that problems can be mapped onto mathematical structures, allowing use of pre-known rules within said structure, for transformation and identity and so on, such that the problem can then be solved. He's saying that for simple problems if you're clever you can intuit the solution without that mapping/manipulation/solve process, but as a result you can never see how to solve more complex problems. Implication being that if you had been slightly less smart, you'd end up understanding mathematical structure earlier in life, with associated benefits in terms of success in certain fields. Like: https://en.wikipedia.org/wiki/Abstract_algebra

JoshCole(10000) 1 day ago [-]

> it's selfish to keep beautiful discoveries a secret.

I found a beautiful thing recently and planned to do a write-up on it eventually, but I know I might get distracted. So I'll share the beauty here since I don't want to be selfish!

In k-means clustering you know you've stabilized if centers(t) = centers(t-1). Stabilization has occurred because no clusters were reassigned during the Lloyd iteration. People already know this. In many implementations of k-means clustering you'll find this check in the body of the loop as a special case which means the loop should end. You can't have this as the condition of the while loop because you don't yet have centers(t-1) on your first iteration. Actually you can, by supposing a hypothetical all-nil cluster definition prior to initialization, but people don't tend to do that. That omission is ugly in the same way that Linus refers to code which uses special casing as being ugly. It doesn't apply the same procedure to every iteration. It should, and that would make the code more beautiful. However, that is not my discovery, but just a preference for beauty and consistency.

What I noticed is that the equality check is actually giving you a bitset that tells you whether any of the centers was changed. This is a more general idea than just telling you that you can stop because you are done. It is telling you /why/ you aren't done. It is also deeply informative about the problem you are solving in a way that helps the computation to be done more efficiently. I want to show it being deeply informative. So I'll touch on that briefly and then we can revisit the simplicity.

Clusters being reassigned tell you the general locations that have the potential to need future reassignment. For example, on a 1d line in the range of 1 to 1,000,000, if a cluster at 10 moves but there is a cluster at 500, then you know you don't need to look at reassignment for any cluster above 500. I mean this in two senses. One is that nothing in clusters past the 500 cluster can change, so you don't need to look at them. The other is that clusters past the 500 cluster can't even be nearer, so you don't have to find the pairwise distance to them. In the assignment stage of the Lloyd iteration you don't even need to look at everything above 500. So you not only reduce the amount you need to look at in the N dataset items; you also reduce the number of k cluster centers you need to compare them to. In the 1-to-1,000,000 example, for stuff below 500 that is probably going to be more than 99% of your data that you can skip, and the vast majority of clusters that you don't even need to check distance for.

Returning to the simplicity discussion it means you can write the loop without the special casing. Instead of a break when stabilization has occurred you have a selection criteria function which tells you the selection criteria for that step of the lloyd iteration. Obviously at the initialization stage we went from no definitions to k definitions. So the selection criteria function is well defined even for the very first iteration on an intuitive level.

Why do I find this beautiful? Well, we can not only eliminate the special casing, which is beautiful on its own, but we can rephrase each iteration in terms of a selection criteria generated by that equality check! We are never special casing; the reason we stopped was always because the selection criteria was the empty set. We just didn't think of it that way, because we didn't phrase the update step in terms of the generation of a selection criteria for updates.
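To make that concrete, here is a minimal 1-D sketch of the idea (my own illustration, not code from any real implementation; the names lloyd_step and kmeans are made up). Each Lloyd step returns a 'changed' mask alongside the new centers, and the while-condition simply asks whether that selection set is non-empty, so the loop body needs no special-cased break:

```python
import random

def assign(points, centers):
    """Label each point with the index of its nearest center."""
    return [min(range(len(centers)), key=lambda k: abs(p - centers[k]))
            for p in points]

def lloyd_step(points, centers):
    """One Lloyd iteration: reassign, recompute means, report which centers moved."""
    labels = assign(points, centers)
    new_centers = []
    for k in range(len(centers)):
        members = [p for p, lab in zip(points, labels) if lab == k]
        # An empty cluster keeps its old center.
        new_centers.append(sum(members) / len(members) if members else centers[k])
    changed = [nc != c for nc, c in zip(new_centers, centers)]
    return new_centers, changed

def kmeans(points, k, seed=0):
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    changed = [True] * k   # initial selection criteria: every center "moved"
    while any(changed):    # stop when the selection set is empty; no break needed
        centers, changed = lloyd_step(points, centers)
    return sorted(centers)
```

The same mask is what you would consult to prune work: only points and centers near a True entry can be affected by the next step.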

And when you do, suddenly it becomes obvious how to do certain parallelizations, because your selection strategy tells you where to kick off another refinement iteration. And /locality/ in a dimensional space determines where the updates get passed. I have this strange feeling that if we just keep pulling on this idea, we'll be able to eliminate the need for loops that await all cluster updates and instead express the computation in a massively parallel way that takes advantage of the topological structure of the problem. I mean, clearly if you have two clusters that moved, one at 5 and another at 900900, you don't /need/ to wait for 5 to finish its refinement to know that it /isn't/ going to impact the next step of refinement at 900900, because there are so many clusters between them. So you should be able to proceed as if the 5 cluster's movement has no impact on the 900900 cluster's movement. Only if they drift closer and the topology differs do you have to backtrack, but since we already need to pass these updates through the topological structure, we have a fairly straightforward way of declaring when it is appropriate to backtrack.

This phrasing is really stupid for the toy problems that people solve in classrooms and when trying to understand things, because of the overhead of keeping track of the work and the wasted work, but I have a feeling that it might be practical. In real massive problems you already have to pay the cost of keeping the work around, because stuff fails and you need to retry; in particular, the geometric probability distribution of failure is high enough that we just have to assume that stuff fails in these massive cases. So the added cost of keeping the work around during the computation isn't as extreme a barrier. It's basically optimistic massively parallelized clustering, but with a resolution protocol for how to handle two optimistic clustering runs which collide with each other, because the natural problem of scale forces redundancy on us, effectively making the choice to be redundant free rather than expensive wasted work.

Maybe nothing will come of these thoughts, but I found the first thought pretty and it provoked the second line of reasoning, which I found interesting. I'm working on a k-means clustering system that incorporates the good ideas from several k means research papers and I plan to explore these ideas in my implementation, but in the spirit of not hiding beautiful things, I hope you enjoy.

Also, as an aside, these aren't completely new ideas. People have noticed that you can use the triangle inequality to speed up computation for a while and shown it to speed up computations. It's more of an observation of the way the looping structure can be seen in a non-special cased way, how that suggests ways to improve performance, and how it lends itself better to alternative control flow structures.

> it's selfish to keep beautiful discoveries a secret.

It would be really fun to read what others found beautiful that they've never heard someone else mention.

bakul(10000) 1 day ago [-]

I think we naturally want to share what we find beautiful as it is an expression of our joy as well as it enhances it. What we usually don't want to share is what we think will be profitable.

elwell(10000) 1 day ago [-]

> ...it turns up in Philippians 3:16, where the writer says that:

> 'You don't race to get to the goal; the process of racing itself, of keeping the pace, is the real goal. When I go on vacation, I like to enjoy the drive.'

> In Christian churches I am least impressed by a sermon that talks about how marvelous heaven is going to be at the end. To me that's not the message of Christianity. The message is about how to live now, not that we should live in some particular way because there's going to be pie in the sky some day. The end means almost nothing to me. I am glad it's there, but I don't see it as much of a motivating force, if any. I mean, it's the journey that's important.


This is Philippians 3:16: 'Only let us live up to what we have already attained.'

It's hard for me to connect that to the paraphrase given above. Furthermore, the previous few verses (12-14) of that same chapter seem to be more goal-oriented, than process-oriented:

'Not that I have already obtained all this, or have already arrived at my goal, but I press on to take hold of that for which Christ Jesus took hold of me. Brothers and sisters, I do not consider myself yet to have taken hold of it. But one thing I do: Forgetting what is behind and straining toward what is ahead, I press on toward the goal to win the prize for which God has called me heavenward in Christ Jesus.'

Also see 1 Corinthians 15:19:

'If only for this life we have hope in Christ, we are of all people most to be pitied.'

gjm11(10000) 1 day ago [-]

So, Knuth wrote this rather unusual book called '3:16' in which he took every chapter-3-verse-16 in the Bible (where chapter 3 is too short for that, he continued past the end into chapter 4 in the obvious way), thought about it a lot and wrote down his thoughts, and also commissioned eminent calligraphers to render his translations of all those verses.

(He was inspired, if that's the right idea, by the notion of stratified sampling in statistics. He called this application of it 'The Way of the Cross-section'.)

Here are a few extracts from what Knuth says about Philippians 3:16 in that book, which may help explain.

'English translations of verse 16 tend to be quite different from each other, because Paul's original Greek words are difficult to render in our language. A literal translation goes something like this: 'Hey, to what we've reached, by this to march!''

(Presumably the 'live up to' in the translation you quote is the 'by this to march' in his literal translation. So part of what's going on is that Knuth thinks, while the translators you're quoting don't, that the notion of progress in the Greek metaphor there is essential to what he's saying.)

'Putting all these words together, we can see what verse 16 means: 'Let's keep progressing from the point we've reached.''

(Making it more explicit. I have to say that the foregoing paragraphs in Knuth's thoughts on this verse don't particularly explain why he thinks it means that rather than, say, 'stay on the same road' or 'continue marching in the same pattern' or something of the kind, which seems to be the sort of idea most English versions embrace.)

'Paul's main point [sc. in this verse and the chapter as a whole] is that the Christian life is a process of continual striving for greater faith, for greater knowledge of God.'

(This seems pretty fair to me. Yes, what comes before verse 16 talks about a goal, but it also emphasizes that it's a goal not yet reached, still the object of constant straining.)

It still seems hard to reconcile what the rest of Philippians 3 actually says with Knuth's 'you don't race to get to the goal'. (For the avoidance of doubt, in '3:16' he says in so many words 'he [sc. Paul] continues to run toward the goal', so it's not like he's missed the aiming-for-the-goal idea here.) Here's my best guess at what he's thinking: Paul doesn't reckon he's reached the goal; he clearly doesn't think anyone else has reached it; indeed, it seems pretty clear that he doesn't expect anyone to reach it before their death. So (I conjecture Knuth thinks) what's the point of all that striving and straining? It can't really be that we need to strive and strain in order to be saved -- Paul seems pretty opposed to that sort of idea, elsewhere. So the real point of the striving and straining must be the effect it has on our lives here on earth.

That's my best guess at Knuth's thinking, anyway. He doesn't make it explicit, at all. Another possibility, of course, is that when he said what the OP here quotes him as saying he was only sketchily remembering what Philippians 3:16 says and attributed to it something related to, but incompatible with, what it actually says, and that if you pointed him at the actual text of the chapter and his own analysis of it in 3:16 he'd say something like 'oops, yes, I shouldn't have said it said that'.

(Disclaimer: I am not myself a Christian, though I was one for many years; you may or may not wish to discount accordingly anything I say about theology or exegesis. Also: my main reason for writing this comment is that I suspect that any HN readers interested in Knuth's thinking about this kind of thing would enjoy '3:16'.)

hintymad(10000) 1 day ago [-]

I still read TAOCP, particularly vol 4, for fun from time to time, but I have to admit that the days are long gone when an ordinary engineer needs to study algorithms in depth. The vast number of libraries and services are good enough that most people just need to know a few terms to function adequately for their jobs. I guess it's a good thing as it shows how robust the software abstractions are, in contrast to mathematics. It's just that I feel quite nostalgic about the countless days I spent understanding, proving, and implementing fundamental algorithms and data structures.

bigcat12345678(10000) 1 day ago [-]

The most valuable part of TAOCP, for me, is its writing.

I've never read anything that is more precise or intuitive. TAOCP is also pleasant to read.

It's the book that I go back to once in a while after being bothered by the sloppiness in the documents, papers, and many other written materials consumed every day. Reading it gives a sense of enlightenment: regardless of all that poor writing, there is hope of reaching the clarity that I have the deepest desire for.

SoftTalker(10000) 1 day ago [-]

> the days are long gone when an ordinary engineer needs to study algorithms in depth

Except to pass the interview screens at high-profile tech companies?

DeathArrow(10000) 1 day ago [-]

> I have to admit that the days are long gone when an ordinary engineer needs to study algorithms in depth

Even though I architect and write software for a living, I don't consider myself an engineer, but a computer scientist. I studied Computer Science at university, not Engineering. I like to understand how things work, why they work, and how they would work if anything were changed. I like to try, discover, and do new things: not just things new to me, but things that weren't done previously. And that is impossible without a solid understanding of theoretical principles and continuous learning.

If I weren't into computers, I could see myself as a mathematician, physicist, biologist, doctor, artist, or architect, but not as an engineer, since I like to go deep to the root of knowledge, not just apply said knowledge.

I see a great value in engineering mentality, but it's just not for me.

mhh__(10000) 1 day ago [-]

There will always be a 'higher' type of engineers who want to read TAOCP and similar.

My issue with the books is that they're actually quite long-winded, even beyond what you'd expect from the tone.

There's some really cool stuff in them, obviously, but I think they're objectively not very good textbooks for any purpose.

Then again, I'm coming from a background of physics rather than mathematics, so I'm not cut out for a real battle of wits when it comes to constructing proofs.

hn-22(10000) 1 day ago [-]

Knuth is a failed Mathematician. He basically couldn't solve the problem Manin gave him so he escaped to a beach. I don't think his guidance matters that much. I'll listen to him only when he has the solution.

laichzeit0(10000) 1 day ago [-]

Interesting. Which problem did Manin give him? How do you know all this stuff?

SoftTalker(10000) 1 day ago [-]

Every day I look at the things that I'm ready to do, and choose the one that I like the least, the one that's least fun — the task that I would most like to procrastinate from doing, but for which I have no good reason for procrastination.

I'm not sure I've seen this approach to combating procrastination before. I can see how it might work: once you've completed the thing you least wanted to do, you might feel relief that the distasteful task is done and you can then dive into other stuff without that nagging you in the back of your mind.

I think I will give this a try...

mpwoz(10000) 1 day ago [-]

If it's your job to eat a frog, it's best to do it first thing in the morning. And if it's your job to eat two frogs, it's best to eat the biggest one first.

Mark Twain

ghaff(10000) 1 day ago [-]

The key is asking 'Is there a reasonable chance that this unpleasant task/expensive purchase/other potential PITA will go away if I put it on a list and continue to avoid it?' If the answer is yes, then procrastination can actually be a good thing. Kicking a can down the road can be a pretty good strategy.

Otherwise, you might as well bite the bullet, especially if there's some advantage in doing it sooner rather than later.

beebmam(10000) 2 days ago [-]

>In Christian churches I am least impressed by a sermon that talks about how marvelous heaven is going to be at the end. To me that's not the message of Christianity. The message is about how to live now, not that we should live in some particular way because there's going to be pie in the sky some day. The end means almost nothing to me. I am glad it's there, but I don't see it as much of a motivating force, if any. I mean, it's the journey that's important.

I find this quite sad. In the US, I have never known a kind Christianity that espoused these ideas. The end, either heaven or hell (or purgatory), is everything to Christianity in the US, in my experience. Perhaps it used to be different here.

christophilus(10000) 2 days ago [-]

Depends on what flavor of Christianity you adhere to. I follow the Catholic mystical tradition which doesn't really focus on pie in the sky, but rather on the purpose of being, which is to become one with the divine-- a purpose which doesn't have to wait for the afterlife.

UncleOxidant(10000) 1 day ago [-]

> The end, either heaven or hell (or purgatory), is everything to Christianity in the US

Yes, unfortunately this is the case in the dominant US expression of Christianity, Evangelicalism. But I think it's changing in some quarters. I've heard several sermons lately about how 'eternal life' starts right here on earth. Check out The Bible Project's video on the meaning of Eternal Life [1]

I think The Bible Project is kind of on the vanguard of this movement within Evangelicalism (I think they're still theologically Evangelical, but maybe they'd shy away from using the term now since it's become loaded with political baggage). I wouldn't necessarily call it 'progressive', but it's looking deeply into biblical interpretation and subtly calling out the predominant Evangelical interpretations.

Also, Check out NT Wright's 'Surprised by Hope'. He's coming from an Anglican perspective with an eschatology that predates the Evangelical 'Left Behind' narrative.

[1] https://www.youtube.com/watch?v=uCOycIMyJZM

pjmorris(10000) 1 day ago [-]

I can say that I know of communities (and am part of one) of Christians in the US who view the journey here and now as vital. If you're interested, consider the book 'We Make the Road by Walking', McLaren, or the BEMA Discipleship podcast, being sure to start with episode 0.

DeathArrow(10000) 1 day ago [-]

>In the US, I have never known a kind Christianity that espoused these ideas. The end, either heaven or hell (or purgatory), is everything to Christianity in the US, in my experience. Perhaps it used to be different here.

In the Orthodox Church, of which I am a member, the emphasis is on the journey and on the transformation of the individual, not on the prize/punishment.

I live in Eastern Europe, but there are Orthodox churches in the US, like the Orthodox Church in America. You can go and visit one, and see what it's about. You can also chat with a priest or a monk while there.

I think that while most Protestant, neo-Protestant, Episcopalian, and even Catholic churches have steadily diluted the religion, the Orthodox Church tries to stay true to the same original truths, in the same ways as thousands of years ago.

theonething(10000) 1 day ago [-]

Why should the journey be orthogonal to the destination? The Bible confirms that both are vital.

The Sermon on the Mount commands Christians to be kind, loving and good people in this life.

Verses like Matthew 6:19-21, Colossians 3:2 and 1 Corinthians 2:9 compel Christians to live this life in light of eternity.

To me, if you accept the presuppositions of the Christian worldview, this is logical. If this life and how you live in it is important, how much more so is eternity? After all, life is temporal. (Mark 8:36)

> That's not the message of Christianity > The end means almost nothing to me.

This betrays a fundamental lack of understanding of the Bible. Again, the Bible presupposes the existence of an eternal Heaven and an eternal Hell. This life is the seedtime for eternity.

I'm not here to argue with non-Christians about the validity of these presuppositions. I'm saying for those who call themselves Christian and therefore hold the Bible to be true, the end (should) mean everything to them.

golem14(10000) 1 day ago [-]

I recommend the Don Camillo stories from G. Guareschi.

It's kind of a balm for when I feel particularly atheist and annoyed at the world at large.

hprotagonist(10000) 1 day ago [-]

> sad. In the US, I have never known a kind Christianity that espoused these ideas.

And for contrast, I've never participated in an american church that has espoused anything but, and haven't found that aspect of the faith to be particularly difficult in meeting.

I know that the "pie in the sky" churches are out there, i just don't attend them.

mcswell(10000) 1 day ago [-]

I used to use the term 'evangelical' to describe myself, but given the usage of that term in the last ten or so years, I've stopped, and I just consider myself a Christian.

That's a preface to what I really wanted to say, which is that having been in various kinds of evangelical-like churches for the past 50 or so years, and having heard a lot of sermons both good and bad (and by 'bad' I mainly mean boring, only occasionally something wrong), I've never heard much about heaven or hell; in fact, I can think of precisely one sermon that really gave the topic of heaven much thought, and none that talked about hell. I'm at a loss to explain why your experience and mine are so different.

BTW, if you've read C.S. Lewis's Voyage of the Dawn Treader, I have always thought that the monopods (duffers) were a congregation, and their chief a pastor who is constantly preaching boringly obvious things: 'And what I say is, when chaps are visible, why, they can see one another.' And such like.

mooneater(10000) 1 day ago [-]

> trying to work the concepts out in my own mind and to anticipate what the authors are going to say before turning each page. I usually fail to guess what the next page holds, but the fact that I've tried and failed makes me more ready to understand why the authors chose the paths that they did

TIL Donald Knuth operates a bit like GPT-3 but for research paper narrative.

ohwellhere(10000) 1 day ago [-]

I see it differently. It's not prediction based on anything statistical but based on one's understanding.

I've made it a habit to ask myself what I expect the output to be for any programming operation, and why. It forces me to gain clarity into my mental model of what's happening, and it immediately highlights deficiencies in my model when it's proven wrong.

I ask the same of others when I pair program with juniors or interviewees. I find it super useful all around.

paulpauper(10000) 1 day ago [-]

This guy's name shows up on almost every important combinatorics result. Amazing how much he has done.

jjtheblunt(10000) 1 day ago [-]

that's got to be a typo: he is a venerable wizard, but combinatorics is a field far more vast than algorithmic things, often dominated historically by Hungarians

bowsamic(10000) 2 days ago [-]

> One day, when I realized how hard it was to find any reason for my current unhappiness, I thought, 'Wait a minute. I bet this unhappiness is really something chemical, not actually caused by circumstances.' I began to speculate that my body was programmed to be unhappy a certain percentage of the time, and that hormones or something were the real reason behind moments of mild depression.

This is exactly what happens to me with my dysthymia. The intensely heavy body feeling (medical term: 'psychomotor retardation') and low energy aren't really problems in themselves, it's when I 'buy into them' that it really goes downhill. The problem is that it does kinda suck and makes it hard to concentrate and do things.

Unfortunately, my mood has been generally very low since about age 9 to 11, and I'm 27 now. I don't see much value in life or in others or relationships (even though I am married!). So that combined with the physical symptoms makes it a difficult and slow life.

wnolens(10000) 1 day ago [-]

How did you value another enough to enter into marriage with them? What was that decision like?

user_7832(10000) 1 day ago [-]

(Disclaimer: Please take what I say with a grain of salt - I'm just a stranger on the internet, not a doctor. No disrespect intended to you or anyone.)

Have you checked if you might have other possible conditions? I too 'thought' I was mildly depressed for several years (I'm in my early 20s now). Turned out to be (undiagnosed) ADHD that held me back from working 'properly' (due to procrastination/planning issues) while making me ambitious, hence making me sad/disappointed/frustrated. (I hope to get a formal dx soon, apparently medication can help a very decent bit)

jmcphers(10000) 1 day ago [-]

> My mother is amazing to watch because she doesn't do anything efficiently, really: She puts about three times as much energy as necessary into everything she does. But she never spends any time wondering what to do next or how to optimize anything; she just keeps working. Her strategy, slightly simplified, is, 'See something that needs to be done and do it.' All day long. And at the end of the day, she's accomplished a huge amount.

This strategy is remarkably powerful and I've used it to great effect in my career. Committing yourself to pushing forward every single day, even if just a little bit, and always just peeling off one single thing you can do next (even if it's tiny yet takes you all day) has a dizzying compounding effect.

77pt77(10000) 1 day ago [-]

Until eventually someone invariably has to clean up your mess due to awful choices.

But why should you care?

You've moved on with your trail of collateral damage by then, reaped the benefits and the fixer will subconsciously take the blame for any problems from outside observers.

chmod600(10000) 1 day ago [-]

This is effective because analysis paralysis is mentally draining.

We tend to think/optimize in terms of time. But I've found it helpful to optimize for mental energy, which is often the limiting factor.

louky(10000) about 19 hours ago [-]

'Do the Next Right Thing' is what I try to do.

RaoulP(10000) 1 day ago [-]

My mom is exactly the same and I really admire her for it.

marttt(10000) 1 day ago [-]

As someone with rural/countryside roots, I recall similar advice on physical labor: it is best to work at a steady, consistent pace all day long without getting sweaty or breathless. You'll avoid (or postpone) dehydration, remain more alert and calm, and thus it is much easier to switch tasks immediately.

As a tree planter of several seasons, I've realized that this advice works remarkably well. Pushing yourself, sweating, drinking stimulating beverages instead of plain water, etc. seems to give you an edge while planting, but it's kind of deceptive in the long run, IMO. Steady, moderately paced, but consistent, non-stop work is more effective in the grand total.

Then again, all this might come down to whether one is a 'long distance runner' or a 'sprinter' by personality and bodily characteristics. I guess I'm somewhere in between, but, approaching 40, these '100 meters in 9.2 seconds' daily life dashes are somehow starting to lose their appeal. Howdy, middle age!

deepGem(10000) 1 day ago [-]

This is exactly how I approached Leetcode. TBH the grind has had satisfactory effects even though the immediate practical impact is just for interviewing. I don't use it as a tool for interviewing, just as a list of things to get done. I should just pour more energy into it and do it every day.

I also am stealing this principle 'My scheduling principle is to do the thing I hate most on my to-do list'

calvinmorrison(10000) 1 day ago [-]

or as we say at my job 'JFDI'. Just Do It. Stop Faffing. Just Do It. Ok. It's done. Now we can move on.

luigi23(10000) 1 day ago [-]

'Long-term consistency trumps short term intensity' - Bruce Lee https://www.dannyok.com/blog/2015/9/26/long-term-consistency...

tintor(10000) 1 day ago [-]

This only works for simple tasks (ones that can be brute-forced), tasks you've done before (where you know the steps that lead to the goal), and tasks with greedy strategies (heuristics).

skadamat(10000) 1 day ago [-]

One of my favorite facts about Knuth is how rarely he checks email!


cato_the_elder(10000) 1 day ago [-]

My favorite Knuth fact is that he thinks P = NP. [1][2] That's a very contrarian view.

[1]: https://youtube.com/watch?v=XDTOs8MgQfg

[2]: https://www.informit.com/articles/article.aspx?p=2213858&WT....

nooorofe(10000) 1 day ago [-]

he has 'a wonderful secretary who looks at the incoming mail'

ipnon(10000) 1 day ago [-]

Fast responses to email were just cited as a key factor in founder success in Cowen's 'Talent'. He quoted Altman, who apparently ran some rudimentary data analysis on his own emails while working at Y Combinator. Obviously Knuth is not successful as a founder.

zerop(10000) 2 days ago [-]

I admire Donald Knuth for his contributions to algorithms and CS. He is one of the greatest computer scientists of our time. But would his every piece of advice outside the CS field be great? I am not sure about this.

ciphol(10000) 1 day ago [-]

Anyone who achieves on his level possesses not just raw innate brainpower but also other skills, for example organizational skills.

hoten(10000) 1 day ago [-]

Don't you think well accomplished people are qualified to talk about problem solving and work habits? Seems those skills would be necessary for their achievements.

orzig(10000) 1 day ago [-]

Moreover: 'Is his advice outside CS great _for me_?'

He's exceptional, in a very literal sense, so your prior would have to be 'no'.

pessimizer(10000) 1 day ago [-]

Do you require advice to be great before you listen to it? I tend to decide whether advice was great after I've heard it, or better still after I've put it into action.

nnoitra(10000) 2 days ago [-]

>That's quite different from a bachelor's degree or a master's degree; those degrees are awarded for a mastery of existing knowledge

I didn't know a BSc was a sign of mastery of a field.

svachalek(10000) 1 day ago [-]

How did you get from 'mastery of knowledge' to 'mastery of a field'?

bigcat12345678(10000) 2 days ago [-]

> Recently, I came across a few old and new interviews of Donald Knuth

I have come to the conclusion that reading digested summaries of original source materials is ultimately ineffective for me at this stage of life.

Unfortunately, the author did not provide links to these interviews.

For anyone who is writing a summary from other source material, please do provide references. That's one of the things I learned churning out low quality academic papers in PhD study.

maxerickson(10000) 1 day ago [-]

Ironically, a significant portion of Knuth's lifetime work consists of digested summaries of original source materials.

belter(10000) 1 day ago [-]

Donald Knuth interviews are so interesting, but I would like to particularly highlight this little piece of advice, out of this great playlist:

'Donald Knuth - My advice to young people': https://youtu.be/75Ju0eM5T2c

Complete Playlist - 'Donald Knuth (Computer scientist)' [97 videos]:


Also the 'Oral History of Donald Knuth' from the Computer History Museum is great.

'Oral History of Donald Knuth Part 1': https://youtu.be/Wp7GAKLSGnI

'Oral History of Donald Knuth Part 2': https://www.youtube.com/watch?v=gqPPll3uDa0


'Donald Knuth Interview 2006': https://github.com/kragen/knuth-interview-2006

'An Interview with Donald Knuth': https://www.ntg.nl/maps/16/14.pdf

'Interview with Donald Knuth': https://www.informit.com/articles/article.aspx?p=1193856

This somewhat 'colourful' page also tracks a few: http://www.softpanorama.org/People/Knuth/donald_knuth_interv...

PS: The story that he told Steve Jobs he was 'Full of shit' is not true.

'Donald Knuth never told Steve Jobs that he was full of shit'


madisp(10000) 1 day ago [-]

probably the Lex Fridman podcast interviews:

https://www.youtube.com/watch?v=2BdBfsXbST8 https://www.youtube.com/watch?v=EE1R8FYUJm0

the second one is definitely where the last paragraph in the article is from. Weird that the interview is dated 2021-09-09 and the post is 2020-04-30?

Historical Discussions: Rust: A Critical Retrospective (May 19, 2022: 574 points)

(574) Rust: A Critical Retrospective

574 points 6 days ago by sohkamyung in 10000th position

www.bunniestudios.com | Estimated reading time – 27 minutes | comments | anchor

Since I was unable to travel for a couple of years during the pandemic, I decided to take my new-found time and really lean into Rust. After writing over 100k lines of Rust code, I think I am starting to get a feel for the language and like every cranky engineer I have developed opinions and because this is the Internet I'm going to share them.

The reason I learned Rust was to flesh out parts of the Xous OS written by Xobs. Xous is a microkernel message-passing OS written in pure Rust. Its closest relative is probably QNX. Xous is written for lightweight (IoT/embedded scale) security-first platforms like Precursor that support an MMU for hardware-enforced, page-level memory protection.

In the past year, we've managed to add a lot of features to the OS: networking (TCP/UDP/DNS), middleware graphics abstractions for modals and multi-lingual text, storage (in the form of an encrypted, plausibly deniable database called the PDDB), trusted boot, and a key management library with self-provisioning and sealing properties.

One of the reasons why we decided to write our own OS instead of using an existing implementation such as SeL4, Tock, QNX, or Linux, was that we wanted to really understand what every line of code was doing in our device. For Linux in particular, its source code base is so huge and so dynamic that even though it is open source, you can't possibly audit every line in the kernel. Code changes are happening at a pace faster than any individual can audit. Thus, in addition to being home-grown, Xous is also very narrowly scoped to support just our platform, to keep as much unnecessary complexity out of the kernel as possible.

Being narrowly scoped means we could also take full advantage of having our CPU run in an FPGA. Thus, Xous targets an unusual RV32-IMAC configuration: one with an MMU + AES extensions. It's 2022 after all, and transistors are cheap: why don't all our microcontrollers feature page-level memory protection like their desktop counterparts? Being an FPGA also means we have the ability to fix API bugs at the hardware level, leaving the kernel more streamlined and simplified. This was especially relevant in working through abstraction-busting processes like suspend and resume from RAM. But that's all for another post: this one is about Rust itself, and how it served as a systems programming language for Xous.

Rust: What Was Sold To Me

Back when we started Xous, we had a look at a broad number of systems programming languages and Rust stood out. Even though its `no-std` support was then-nascent, it was a strongly-typed, memory-safe language with good tooling and a burgeoning ecosystem. I'm personally a huge fan of strongly typed languages, and memory safety is good not just for systems programming, it enables optimizers to do a better job of generating code, plus it makes concurrency less scary. I actually wished for Precursor to have a CPU that had hardware support for tagged pointers and memory capabilities, similar to what was done on CHERI, but after some discussions with the team doing CHERI it was apparent they were very focused on making C better and didn't have the bandwidth to support Rust (although that may be changing). In the grand scheme of things, C needed CHERI much more than Rust needed CHERI, so that's a fair prioritization of resources. However, I'm a fan of belt-and-suspenders for security, so I'm still hopeful that someday hardware-enforced fat pointers will make their way into Rust.

That being said, I wasn't going to go back to the C camp simply to kick the tires on a hardware retrofit that backfills just one poor aspect of C. The glossy brochure for Rust also advertised its ability to prevent bugs before they happened through its strict "borrow checker". Furthermore, its release philosophy is supposed to avoid what I call "the problem with Python": your code stops working if you don't actively keep up with the latest version of the language. Also unlike Python, Rust is not inherently unhygienic, in that the advertised way to install packages is not also the wrong way to install packages. Contrast to Python, where the official docs on packages lead you to add them to system environment, only to be scolded by Python elders with a "but of course you should be using a venv/virtualenv/conda/pipenv/..., everyone knows that". My experience with Python would have been so much better if this detail was not relegated to Chapter 12 of 16 in the official tutorial. Rust is also supposed to be better than e.g. Node at avoiding the "oops I deleted the Internet" problem when someone unpublishes a popular package, at least if you use fully specified semantic versions for your packages.
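The "fully specified semantic versions" point can be sketched in a Cargo manifest; the crate name and version below are illustrative examples, not taken from Xous's actual dependency list:

```toml
[dependencies]
# "=1.0.136" pins exactly one published release, so an unpublish or a new
# release can't silently change the build. A bare "1.0" is a caret range
# that floats to newer 1.x versions as they appear.
serde = "=1.0.136"
```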

In the long term, the philosophy behind Xous is that eventually it should "get good enough", at which point we should stop futzing with it. I believe it is the mission of engineers to eventually engineer themselves out of a job: systems should get stable and solid enough that it "just works", with no caveats. Any additional engineering beyond that point only adds bugs or bloat. Rust's philosophy of "stable is forever" and promising to never break backward-compatibility is very well-aligned from the point of view of getting Xous so polished that I'm no longer needed as an engineer, thus enabling me to spend more of my time and focus supporting users and their applications.

The Rough Edges of Rust

There's already a plethora of love letters to Rust on the Internet, so I'm going to start by enumerating some of the shortcomings I've encountered.

"Line Noise" Syntax

This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read, like trying to read the output of a UART with line noise: Trying::to_read::<&'a heavy>(syntax, |like| { this. can_be( maddening ) }).map(|_| ())?;

In more plain terms, the line above does something like invoke a method called "to_read" on the object (actually `struct`) "Trying" with a type annotation of "&heavy" and a lifetime of 'a with the parameters of "syntax" and a closure taking a generic argument of "like" calling the can_be() method on another instance of a structure named "this" with the parameter "maddening" with any non-error return values mapped to the Rust unit type "()" and errors unwrapped and kicked back up to the caller's scope.

Deep breath. Surely, I got some of this wrong, but you get the idea of how dense this syntax can be.
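To make the density concrete, here is a compilable toy of the same shape (all names here, `Reader` and `to_read` included, are my inventions for illustration, not Xous code), first chained, then unpacked into named steps:

```rust
// A made-up struct with a method that takes an input and a closure,
// mirroring the shape of the "line noise" example.
struct Reader;

impl Reader {
    fn to_read<F>(&self, input: &str, f: F) -> Result<usize, String>
    where
        F: Fn(&str) -> usize,
    {
        let n = f(input);
        if n > 0 { Ok(n) } else { Err("nothing to read".into()) }
    }
}

fn main() -> Result<(), String> {
    let reader = Reader;

    // Dense, chained style: closure, method call, map, and `?` on one line.
    reader.to_read("syntax", |like| like.len()).map(|_| ())?;

    // The same thing unpacked, so each step has a name:
    let closure = |s: &str| s.len();                 // the |like| { ... } part
    let outcome = reader.to_read("syntax", closure); // Result<usize, String>
    let unit = outcome.map(|_| ());                  // keep only the error
    unit                                             // `?` would propagate Err
}
```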

And then on top of that you can layer macros and directives which don't have to follow other Rust syntax rules. For example, if you want to have conditionally compiled code, you use a directive like #[cfg(all(not(baremetal), any(feature = "hazmat", feature = "debug_print")))], which says: if either the feature "hazmat" or "debug_print" is enabled and you're not running on bare metal, use the block of code below (and I surely got this wrong too). The most confusing part about this syntax to me is the use of a single "=" to denote equivalence and not assignment, because stuff in config directives isn't Rust code. It's like a whole separate meta-language with a dictionary of key/value pairs that you query.
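A runnable sketch of that meta-language, using the built-in `target_os` key rather than Xous's custom `baremetal`/`hazmat` features (which exist only in its build), shows the key = value query style:

```rust
// `target_os = "linux"` is a query against the compilation target,
// not an assignment. Exactly one of these two functions survives
// compilation; the other is discarded before type-checking the binary.
#[cfg(any(target_os = "linux", target_os = "macos", target_os = "windows"))]
fn where_am_i() -> &'static str {
    "hosted OS"
}

#[cfg(not(any(target_os = "linux", target_os = "macos", target_os = "windows")))]
fn where_am_i() -> &'static str {
    "bare metal (or something more exotic)"
}

fn main() {
    println!("{}", where_am_i());
}
```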

I'm not even going to get into the unreadability of Rust macros – even after having written a few Rust macros myself, I have to admit that I feel like they "just barely work" and probably thar be dragons somewhere in them. This isn't how you're supposed to feel in a language that bills itself to be reliable. Yes, it is my fault for not being smart enough to parse the language's syntax, but also, I do have other things to do with my life, like build hardware.

Anyways, this is a superficial complaint. As time passed I eventually got over the learning curve and became more comfortable with it, but it was a hard, steep curve to climb. This is in part because all the Rust documentation is either written in eli5 style (good luck figuring out "feature"s from that example), or you're greeted with a formal syntax definition (technically, everything you need to know to define a "feature" is in there, but nowhere is it summarized in plain English), and nothing in between.

To be clear, I have a lot of sympathy for how hard it is to write good documentation, so this is not a dig at the people who worked so hard to write so much excellent documentation on the language. I genuinely appreciate the general quality and fecundity of the documentation ecosystem.

Rust just has a steep learning curve in terms of syntax (at least for me).

Rust Is Powerful, but It Is Not Simple

Rust is powerful. I appreciate that it has a standard library which features HashMaps, Vecs, and Threads. These data structures are delicious and addictive. Once we got `std` support in Xous, there was no going back. Coming from a background of C and assembly, Rust's standard library feels rich and usable — I have read some criticisms that it lacks features, but for my purposes it really hits a sweet spot.

That being said, my addiction to the Rust `std` library has not done any favors in terms of building an auditable code base. One of the criticisms I used to level at Linux is "holy cow, the kernel source includes things like an implementation of red-black trees, how is anyone going to audit that".

Now, having written an OS, I have a deep appreciation for how essential these rich, dynamic data structures are. However, the fact that Xous doesn't include an implementation of HashMap within its repository doesn't mean that we are any simpler than Linux: indeed, we have just swept a huge pile of code under the rug; just the `collections` portion of the standard library represents about 10k+ SLOC at a very high complexity.

So, while Rust's `std` library allows the Xous code base to focus on being a kernel and not also be its own standard library, from the standpoint of building a minimum attack-surface, "fully-auditable by one human" codebase, I think our reliance on Rust's `std` library means we fail on that objective, especially so long as we continue to track the latest release of Rust (and I'll get into why we have to in the next section).

Ideally, at some point, things "settle down" enough that we can stick a fork in it and call it done by well, forking the Rust repo, and saying "this is our attack surface, and we're not going to change it". Even then, the Rust `std` repo dwarfs the Xous repo by several multiples in size, and that's not counting the complexity of the compiler itself.

Rust Isn't Finished

This next point dovetails into why Rust is not yet suitable for a fully auditable kernel: the language isn't finished. For example, while we were coding Xous, a feature called `const generics` was introduced. Before this, Rust had no native ability to deal with arrays bigger than 32 elements! This limitation is a bit maddening, and even today there are shortcomings such as the `Default` trait being unable to initialize arrays larger than 32 elements. This friction led us to cap many things at 32 elements: for example, when we pass the results of an SSID scan between processes, the structure only reserves space for up to 32 results, because the friction of going to a larger, more generic structure just isn't worth it. That's a language-level limitation directly driving a user-facing feature.
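A small sketch of what const generics unlocked: code that is generic over the array length `N`, rather than being written once per size up to 32 (note that the standard library's `Default` impl for arrays still stops at 32 elements):

```rust
// Generic over the array length N: one function covers [u32; 8],
// [u32; 64], or any other size, something pre-const-generics code
// could not express without a macro per length.
fn checksum<const N: usize>(words: [u32; N]) -> u32 {
    words.iter().fold(0u32, |acc, w| acc.wrapping_add(*w))
}

fn main() {
    let small = [1u32; 8];
    let big = [1u32; 64]; // lengths above 32 work fine here
    assert_eq!(checksum(small), 8);
    assert_eq!(checksum(big), 64);
    // But `<[u32; 64]>::default()` still won't compile: the std
    // `Default` impl for arrays only covers lengths up to 32.
}
```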

Also over the course of writing Xous, things like in-line assembly and workspaces finally reached maturity, which means we need to go back and revisit some unholy things we did to get those critical few lines of initial boot code, written in assembly, integrated into our build system.

I often ask myself "when is the point we'll get off the Rust release train", and the answer I think is when they finally make "alloc" no longer a nightly API. At the moment, `no-std` targets have no access to the heap, unless they hop on the "nightly" train, in which case you're back into the Python-esque nightmare of your code routinely breaking with language releases.

We definitely gave writing an OS in `no-std` + stable a fair shake. The first year of Xous development was all done using `no-std`, at a cost in memory space and complexity. It's possible to write an OS with nothing but pre-allocated, statically sized data structures, but we had to accommodate the worst-case number of elements in all situations, leading to bloat. Plus, we had to roll a lot of our own core data structures.
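A minimal sketch (my illustration, not Xous code) of the pre-allocated, statically sized style described above: capacity is a worst-case constant reserved up front, and a full buffer is a recoverable error rather than a heap reallocation:

```rust
// A fixed-capacity vector that needs no heap: all N slots are reserved
// at construction, which is the memory-bloat trade-off the article
// describes for `no-std` designs.
struct FixedVec<T, const N: usize> {
    items: [Option<T>; N],
    len: usize,
}

impl<T: Copy, const N: usize> FixedVec<T, N> {
    fn new() -> Self {
        FixedVec { items: [None; N], len: 0 }
    }

    fn push(&mut self, v: T) -> Result<(), T> {
        if self.len == N {
            return Err(v); // no heap to grow into: hand the value back
        }
        self.items[self.len] = Some(v);
        self.len += 1;
        Ok(())
    }

    fn get(&self, i: usize) -> Option<T> {
        if i < self.len { self.items[i] } else { None }
    }

    fn len(&self) -> usize {
        self.len
    }
}

fn main() {
    // e.g. an SSID-scan-style result list capped at a fixed size
    let mut scan_results: FixedVec<u32, 4> = FixedVec::new();
    for id in 0..4u32 {
        assert!(scan_results.push(id).is_ok());
    }
    // The fifth push fails instead of allocating, as a heapless design must.
    assert!(scan_results.push(99).is_err());
    assert_eq!(scan_results.len(), 4);
    assert_eq!(scan_results.get(0), Some(0));
}
```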

About a year ago, that all changed when Xobs ported Rust's `std` library to Xous. This means we are able to access the heap in stable Rust, but it comes at a price: now Xous is tied to a particular version of Rust, because each version of Rust has its own unique version of `std` packaged with it. This version tie is for a good reason: `std` is where the sausage gets made, turning fundamentally `unsafe` hardware constructions such as memory allocation and thread creation into "safe" Rust structures. (Also, a fun fact I recently learned: Rust doesn't have a native allocator for most targets – it simply punts to the native libc `malloc()` and `free()` functions!) In other words, Rust is able to make a strong guarantee about the stable release train not breaking old features in part because of all the loose ends swept into `std`.

I have to keep reminding myself that having `std` doesn't eliminate the risk of severe security bugs in critical code – it merely shuffles a lot of critical code out of sight, into a standard library. Yes, it is maintained by a talented group of dedicated programmers who are smarter than me, but in the end, we are all only human, and we are all fair targets for software supply chain exploits.

Rust has a clockwork release schedule – every six weeks, it pushes a new version. And because our fork of `std` is tied to a particular version of Rust, it means every six weeks, Xobs has the thankless task of updating our fork and building a new `std` release for it (we're not a first-class platform in Rust, which means we have to maintain our own `std` library). This means we likewise force all Xous developers to run `rustup update` on their toolchains so we can retain compatibility with the language.

This probably isn't sustainable. Eventually, we need to lock down the code base, but I don't have a clear exit strategy for this. Maybe the next point at which we can consider going back to `no-std` is when the `alloc` feature is stabilized, which would give us access to the heap again. We could then decouple Xous from the Rust release train, but we'd still need to backfill features such as Vec, HashMap, Thread, and the Arc/Mutex/Rc/RefCell/Box constructs that enable Xous to be efficiently coded.

Unfortunately, stabilizing the `alloc` crate is very hard, and it has been in development for many years now. That being said, I really appreciate the Rust team's transparency around the development of this feature, and the hard work and thoughtfulness being put into stabilizing it.

Rust Has A Limited View of Supply Chain Security

I think this position is summarized well by the installation method recommended on the rustup.rs installation page: `curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh`. In other words: "Hi, run this shell script from a random server on your machine."

To be fair, you can download the script and inspect it before you run it, which is much better than e.g. the Windows .MSI installers for vscode. However, this practice pervades the entire build ecosystem: a stub of code called `build.rs` is potentially compiled and executed whenever you pull in a new crate from crates.io. This, along with "loose" version pinning (you can specify a version to be, for example, simply "2", which means you'll grab whatever the latest published version with a major rev of 2 happens to be), makes me uneasy about the possibility of software supply chain attacks launched through the crates.io ecosystem.
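
To illustrate the difference between "loose" and exact pinning, here is what the two styles look like in a `Cargo.toml` (crate names and version numbers are hypothetical; the syntax is standard Cargo semver notation):

```toml
[dependencies]
# Loose requirement: "2" means >=2.0.0, <3.0.0, so a fresh checkout
# without a Cargo.lock resolves to whatever 2.x is newest on crates.io.
some-crate = "2"

# Exact requirement: "=2.4.1" resolves to that one published version only.
other-crate = "=2.4.1"
```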

Crates.io is also subject to a kind of typo-squatting, where it's hard to determine which crates are "good" or "bad"; some crates that are named exactly what you want turn out to just be old or abandoned early attempts at giving you the functionality you wanted, and the more popular, actively-maintained crates have to take on less intuitive names, sometimes differing by just a character or two from others (to be fair, this is not a problem unique to Rust's package management system).

There's also the fact that dependencies are chained – when you pull in one thing from crates.io, you also pull in all of that crate's subordinate dependencies, along with all their build.rs scripts that will eventually get run on your machine. Thus, it is not sufficient to simply audit the crates explicitly specified within your Cargo.toml file — you must also audit all of the dependent crates for potential supply chain attacks as well.

Fortunately, Rust does allow you to pin a crate at a particular version using the `Cargo.lock` file, and you can fully specify a dependent crate down to the minor revision. We try to mitigate this in Xous by having a policy of publishing our Cargo.lock file and specifying all of our first-order dependent crates to the minor revision. We have also vendored in or forked certain crates that would otherwise grow our dependency tree without much benefit.

That being said, much of our debug and test framework relies on some rather fancy and complicated crates that pull in a huge number of dependencies, and much to my chagrin, even when I try to run a build just for our target hardware, the dependent crates for running simulations on the host computer are still pulled in and the build.rs scripts are at least built, if not run.

In response to this, I wrote a small tool called `crate-scraper` which downloads the source package for every crate specified in our Cargo.toml file, and stores them locally so we can have a snapshot of the code used to build a Xous release. It also runs a quick "analysis": it searches for files called build.rs and collates them into a single file so I can more quickly grep through it to look for obvious problems. Of course, manual review isn't a practical way to detect cleverly disguised malware embedded within the build.rs files, but it at least gives me a sense of the scale of the attack surface we're dealing with — and it is breathtaking: about 5,700 lines of code from various third parties that manipulate files, directories, and environment variables, and run other programs on my machine every time I do a build.
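
The collation step could look something like this minimal Python sketch. This is an assumption-laden illustration, not the actual `crate-scraper` code: the function name and directory layout are invented, and it only covers the "find every build.rs and concatenate them" part described above.

```python
import os

def collate_build_scripts(vendor_dir: str) -> str:
    """Walk a tree of unpacked crate sources and concatenate every
    build.rs into one string, each chunk prefixed with its path, so the
    whole build-time attack surface can be grepped in one place.
    (Hypothetical sketch; not the real crate-scraper implementation.)"""
    chunks = []
    for root, _dirs, files in os.walk(vendor_dir):
        if "build.rs" in files:
            path = os.path.join(root, "build.rs")
            with open(path, encoding="utf-8", errors="replace") as f:
                chunks.append(f"// ===== {path} =====\n{f.read()}")
    return "\n".join(chunks)
```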

I'm not sure if there is even a good solution to this problem, but, if you are super-paranoid and your goal is to be able to build trustable firmware, be wary of Rust's expansive software supply chain attack surface!

You Can't Reproduce Someone Else's Rust Build

A final nit I have about Rust is that builds are not reproducible between different computers (they are at least reproducible between builds on the same machine if we disable the embedded timestamp that I put into Xous for $reasons).

I think this is primarily because Rust pulls in the full path to the source code as part of the panic and debug strings that are built into the binary. This has led to uncomfortable situations where we have had builds that worked on Windows but failed under Linux, because our path names are very different lengths on the two OSes, and that would cause some memory objects to be shifted around in target memory. To be fair, those failures were all due to bugs we had in Xous, which have since been fixed. But it just doesn't feel good to know that we're eventually going to have users who report bugs to us that we can't reproduce because they have a different path on their build system compared to ours. It's also a problem for users who want to audit our releases by building their own version and comparing the hashes against ours.

There are some bugs open with the Rust maintainers to address reproducible builds, but with the number of issues they have to deal with in the language, I am not optimistic that this problem will be resolved anytime soon. Assuming the only driver of the unreproducibility is the inclusion of OS paths in the binary, one fix would be to re-configure our build system to run in some sort of chroot environment or virtual machine that fixes the paths in a way that almost anyone else could reproduce. I say "almost anyone else" because this fix would be OS-dependent, so we'd be able to get reproducible builds under, for example, Linux, but it would not help Windows users, where chroot environments are not a thing.
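
Assuming the path-in-binary diagnosis above, one mitigation that does not require a chroot is rustc's `--remap-path-prefix` flag, which rewrites the embedded source paths at compile time. A sketch of wiring it in through a project's Cargo configuration (the paths shown are hypothetical):

```toml
# .cargo/config.toml: strip the machine-specific source prefix from the
# panic/debug strings rustc embeds in the binary, so two machines with
# different checkout paths emit the same strings. Paths are hypothetical.
[build]
rustflags = ["--remap-path-prefix=/home/builder/xous=/xous"]
```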

Where Rust Exceeded Expectations

Despite all the gripes laid out here, I think if I had to do it all over again, Rust would still be a very strong contender for the language I'd use for Xous. I've done major projects in C, Python, and Java, and all of them eventually suffer from "creeping technical debt" (there's probably a software engineer term for this, I just don't know it). The problem often starts with some data structure that I couldn't quite get right on the first pass, because I didn't yet know how the system would come together; so in order to figure out how the system comes together, I'd cobble together some code using a half-baked data structure.

Thus begins the descent into chaos: once I get an idea of how things work, I go back and revise the data structure, but now something breaks elsewhere that was unsuspected and subtle. Maybe it's an off-by-one problem, or the polarity of a sign seems reversed. Maybe it's a slight race condition that's hard to tease out. Nevermind, I can patch over this by changing a <= to a <, or fixing the sign, or adding a lock: I'm still fleshing out the system and getting an idea of the entire structure. Eventually, these little hacks tend to metastasize into a cancer that reaches into every dependent module because the whole reason things even worked was because of the "cheat"; when I go back to excise the hack, I eventually conclude it's not worth the effort and so the next best option is to burn the whole thing down and rewrite it...but unfortunately, we're already behind schedule and over budget so the re-write never happens, and the hack lives on.

Rust is a difficult language for authoring code because it makes these "cheats" hard – as long as you have the discipline of not using "unsafe" constructions to make cheats easy. However, really hard does not mean impossible – there were definitely some cheats that got swept under the rug during the construction of Xous.

This is where Rust really exceeded expectations for me. The language's structure and tooling was very good at hunting down these cheats and refactoring the code base, thus curing the cancer without killing the patient, so to speak. This is the point at which Rust's very strict typing and borrow checker converts from a productivity liability into a productivity asset.

I liken it to replacing a cable in a complicated bundle of cables that runs across a building. In Rust, it's guaranteed that every strand of wire in a cable chase, no matter how complicated and awful the bundle becomes, is separable and clearly labeled on both ends. Thus, you can always "pull on one end" and see where the other ends are by changing the type of an element in a structure, or the return type of a method. In less strictly typed languages, you don't get this property; the cables are allowed to merge and affect each other somewhere inside the cable chase, so you're left "buzzing out" each cable with manual tests after making a change. Even then, you're never quite sure if the thing you replaced is going to lead to the coffee maker switching off when someone turns on the bathroom lights.

Here's a direct example of Rust's refactoring abilities in action in the context of Xous. I had a problem in the way trust levels are handled inside our graphics subsystem, which I call the GAM (Graphical Abstraction Manager). Each Canvas in the system gets a `u8` assigned to it that is a trust level. When I started writing the GAM, I just knew that I wanted some notion of trustability of a Canvas, so I added the variable, but wasn't quite sure exactly how it would be used. Months later, the system grew the notion of Contexts with Layouts, which are multi-Canvas constructions that define a particular type of interaction. Now, you can have multiple trust levels associated with a single Context, but I had forgotten about the trust variable I had previously put in the Canvas structure – and added another trust level number to the Context structure as well. You can see where this is going: everything kind of worked as long as I had simple test cases, but as we started to get modals popping up over applications and then menus on top of modals and so forth, crazy behavior started manifesting, because I had confused myself over where the trust values were being stored. Sometimes I was updating the value in the Context, sometimes I was updating the one in the Canvas. It would manifest itself sometimes as an off-by-one bug, other times as a concurrency error.

This was always a skeleton in the closet that bothered me while the GAM grew into a 5k-line monstrosity of code with many moving parts. Finally, I decided something had to be done about it, and I was really not looking forward to it. I was assuming that I messed up something terribly, and this investigation was going to conclude with a rewrite of the whole module.

Fortunately, Rust left me a tiny string to pull on. Clippy, the cheerfully named "linter" built into Rust, was throwing a warning that the trust level variable was not being used at a point where I thought it should be – I was storing it in the Context after it was created, but nobody ever referred to it after that. That's strange – it should be necessary for every redraw of the Context! So, I started by removing the variable, and seeing what broke. This rapidly led me to recall that I was also storing the trust level inside the Canvases within the Context when they were being created, which is why I had this dangling reference. Once I had that clue, I was able to refactor the trust computations to refer only to that one source of ground truth. This also led me to discover other bugs that had been lurking because in fact I was never exercising some code paths that I thought I was using on a routine basis. After just a couple hours of poking around, I had a clear-headed view of how this was all working, and I had refactored the trust computation system with tidy APIs that were simple and easier to understand, without having to toss the entire code base.
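
A minimal, hypothetical sketch of that "one source of ground truth" shape (types simplified far beyond the real GAM, and the trust policy is invented for illustration): the trust level lives only in the `Canvas`, and the `Context` derives its effective trust on demand instead of caching a second, divergent copy.

```rust
// Simplified, hypothetical types; not the actual Xous GAM code.
struct Canvas {
    trust_level: u8,
}

struct Context {
    canvases: Vec<Canvas>,
    // Deliberately no cached trust field: one source of ground truth.
}

impl Context {
    /// Hypothetical policy for illustration: a context is only as
    /// trusted as its least-trusted canvas.
    fn effective_trust(&self) -> u8 {
        self.canvases
            .iter()
            .map(|c| c.trust_level)
            .min()
            .unwrap_or(0)
    }
}

fn main() {
    let ctx = Context {
        canvases: vec![Canvas { trust_level: 200 }, Canvas { trust_level: 50 }],
    };
    assert_eq!(ctx.effective_trust(), 50);
}
```

With this shape, changing how trust is represented (say, to a newtype) turns every stale use site into a compile error, which is exactly the "pull on one end of the cable" property described earlier.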

This is just one of many positive experiences I've had with Rust in maintaining the Xous code base. It's one of the first times I've walked into a big release with my head up and a positive attitude, because for the first time ever, I feel like maybe I have a chance of being able to deal with hard bugs in an honest fashion. I'm spending less time making excuses in my head to justify why things were done this way and why we can't take that pull request, and more time thinking about all the ways things can get better, because I know Clippy has my back.

Caveat Coder

Anyways, that's a lot of ranting about software for a hardware guy. Software people are quick to remind me that first and foremost, I make circuits and aluminum cases, not code, therefore I have no place ranting about software. They're right – I actually have no "formal" training to write code "the right way". When I was in college, I learned Maxwell's equations, not algorithms. I could never be a professional programmer, because I couldn't pass even the simplest coding interview. Don't ask me to write a linked list: I already know that I don't know how to do it correctly; you don't need to prove that to me. This is because whenever I find myself writing a linked list (or any other foundational data structure for that matter), I immediately stop myself and question all the life choices that brought me to that point: isn't this what libraries are for? Do I really need to be re-inventing the wheel? If there is any correlation between doing well in a coding interview and actual coding ability, then you should definitely take my opinions with a grain of salt.

Still, after spending a couple years in the foxhole with Rust and reading countless glowing articles about the language, I felt like maybe a post that shared some critical perspectives about the language would be a refreshing change of pace.

This entry was posted on Thursday, May 19th, 2022 at 6:35 pm and is filed under Hacking, open source, Ponderings, precursor. You can follow any responses to this entry through the RSS 2.0 feed. You can leave a response, or trackback from your own site.

All Comments: [-] | anchor

est31(10000) 6 days ago [-]

I wonder what the author means by the alloc crate not being stable? The alloc crate is stable since 1.36.0: https://github.com/rust-lang/rust/blob/master/RELEASES.md#ve...

Regarding the reproducible builds concern around paths being integrated into the binary, a flag exists to get rid of paths: --remap-path-prefix


On nightly, there is also remap-cwd-prefix added by the chromium team to address some of the shortcomings with remap-path-prefix: https://github.com/rust-lang/rust/issues/89434

Overall I'm really impressed that an individual wrote 100 thousand lines of Rust. That's a lot!

celeritascelery(10000) 6 days ago [-]

> I wonder what the author means by the alloc crate not being stable? The alloc crate is stable since 1.36.0:

He is referring to the allocator api[1], not the std lib module

[1] https://github.com/rust-lang/rust/issues/32838

CryZe(10000) 6 days ago [-]

You can write libraries against alloc on stable, but not any executables, because executables not using std need to specify the alloc_error_handler, which you can't do on stable yet: https://github.com/rust-lang/rust/issues/51540

ntoskrnl(10000) 6 days ago [-]

Yep, this is a great article, but that section (the whole 'Rust Isn't Finished' section) jumped out as a place where there were some simple ways he could have made his life easier. It could also have been a failure of the Rust community to teach a good workflow.

You don't need to force every contributor to upgrade every six weeks in lockstep, since releases of Rust and std are backwards compatible. Upgrade at your leisure, and run tests in CI with the minimum version you want to support. If you're doing something crazier that requires ABI compatibility between separate builds (or you just want consistency), you can add a `rust-toolchain` file that upgrades the compiler on dev machines automatically, as seamlessly as Cargo downloads new dependency versions.
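
A `rust-toolchain.toml` is just a couple of lines (the channel pinned here is illustrative):

```toml
# rust-toolchain.toml at the repository root; rustup picks this up and
# installs/uses the pinned compiler automatically. (A bare file named
# `rust-toolchain` containing just the version string also works.)
[toolchain]
channel = "1.61.0"
```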

Starlevel001(10000) 6 days ago [-]

> This is in part because all the Rust documentation is either written in eli5 style (good luck figuring out "feature"s from that example), or you're greeted with a formal syntax definition (technically, everything you need to know to define a "feature" is in there, but nowhere is it summarized in plain English), and nothing in between.

I wish I wish that Rust had a better documentation system. It's rather telling that any serious project has to use an entirely separate static site generator because the official doc system is so crippled.

Compare this to the Python docs, or some truly excellent Python library docs (like Trio: https://trio.readthedocs.io/en/stable/, or Flask: https://flask.palletsprojects.com/en/2.1.x/, or Django: https://docs.djangoproject.com/en/4.0/), which are all written using Sphinx and integrate properly with crossrefs and such rather than, for example, writing manual markdown links.

pitaj(10000) 6 days ago [-]

I don't find rustdoc lacking at all. It's great for API documentation and it does have intradoc links.

Of course for a more serialized tutorial, rustdoc is not a good fit so we have mdbook.

veber-alex(10000) 6 days ago [-]

You linked to docs of 3 python projects and each one looks entirely different while the docs of all rust crates look exactly the same.

burntsushi(10000) 6 days ago [-]

'cargo doc' is absolutely one of my most favorite things about Rust. I've never once seen it as crippled and I've never once reached for an 'entirely separate static site generator' to write docs despite maintaining several serious projects.

Writing out explicit links sucked, but we have intradoc links now. It was a huge win. But my first paragraph above was true even before intradoc links too.

Also, I hate Sphinx. It's awesome that folks have been able to use it to produce great docs, but I've never been successful in using it. I disliked it enough that I wrote my own tool for generating API documentation in Python.[1]

[1]: https://github.com/mitmproxy/pdoc

alexfromapex(10000) 6 days ago [-]

When Nightly is breaking no-std targets, is there not a way to pin a specific Nightly release to prevent that?

vgel(10000) 6 days ago [-]

There is, but then you're pinning yourself to whatever bugs are in that nightly, and making the eventual upgrade that much worse.

steveklabnik(10000) 6 days ago [-]

There is, yes. You put a file in the root of your project with the specific version of Rust you want, and it'll get picked up and used by the tooling.

xobs(10000) 5 days ago [-]

You can do that! But you may run into a problem where one of the crates you use detects that you're using nightly, and then opts in to nightly-only features. Except it may be using features that are only available on the latest version of nightly, or an ancient version that you don't have, and there is no version that makes both your code and the dependent package happy.

It's much better to just use stable. Then you're guaranteed to have forward compatibility, and you only have one place where you need to deal with the nightly weirdness.

collaborative(10000) 6 days ago [-]

I experimented with replacing an Express server with Rust while keeping the same js syntax and still running on Node

Granted this adds overhead, but my conclusion was that the performance gain is not worth the effort. Sure, memory looks almost flat but response times aren't that much better


lllr_finger(10000) 6 days ago [-]

It's really cool that you experimented with this!

My experience is that choosing Rust just for performance gains usually doesn't pay off. In your case, node already uses C/C++ under the hood, so some of what you're replacing could just be switching that for Rust.

The primary reason I reach for it is when I want the stability provided by the type system and runtime, and to prevent a litany of problems that impact other languages. If those problems aren't something I'm looking to solve, I'll usually reach for a different language.

dgan(10000) 6 days ago [-]

> 100k LOC over two years

Dude wrote more code per week than I have in the last 6 months at my day job

sydthrowaway(10000) 6 days ago [-]

Well, he quit Big Tech long ago, now actually builds things instead of phoning it in.

sim7c00(10000) 6 days ago [-]

really interesting read, and nice to see people writing operating systems in rust who have plus points as well as grievances. particularly enjoyed that you found rust sometimes spares you the 'damn i need to rewrite this entire thing' tour that C always hits me with :D. now i am more hopeful my rewrite-the-entire-thing-in-rust was an ok'ish choice.

bunnie(10000) 6 days ago [-]

Took me a full year of questioning life choices before it felt worth it, but fearless refactoring is so nice. I may have trouble going back to C just for that.

_wldu(10000) 6 days ago [-]

Once Rust stabilizes, I think it needs an ISO standard like C and C++ have. I can't see automobile manufacturers using Rust without one. One reason C and C++ are still widely used is due to this. When we are writing code that is expected to run for decades, having a corporate/community-backed language is not sufficient. We need global standards and versions that we can rely on decades later.

rwaksmunski(10000) 6 days ago [-]

The lack of a standards-committee body making decisions is a feature, not a bug. Car manufacturers can stick with C.

pie_flavor(10000) 6 days ago [-]

What has the standard actually gotten C and C++? Basic features needed in every single code base, like type punning on structures, are UB according to the standard, while design-by-committee results in C++ feature hell.

It doesn't get any harder to write a function exhibiting a bug just because there's a standard saying the function shouldn't have bugs in it. No matter what, you are trusting a compiler vendor that the code it compiles and the functions it links against don't have bugs.

A standard is not a magic spell that creates better software through its incantation; it provides for multiple separate compiler vendors to be able to compile the same code the same way, which is a total fiction in C/C++, and not required for languages like Python or Lua. I view it as nothing more than the streetlight effect.

steveklabnik(10000) 6 days ago [-]

The industry has already taken an interest in Rust; a lot of things going on aren't public yet, but we've seen job openings, and things like https://www.autosar.org/news-events/details/autosar-investig...

ISO Standards are not generally required. https://news.ycombinator.com/item?id=28366670

avgcorrection(10000) 6 days ago [-]

I think C caught on because it spread like a cancer through institutions like universities.

Want to catch on? Be a virus. Not some gosh-darned international standard.

wiz21c(10000) 6 days ago [-]

FTA : 'This is the point at which Rust's very strict typing and borrow checker converts from a productivity liability into a productivity asset.'

that's what rust is about in my own experience. Especially with threads.

epage(10000) 6 days ago [-]

I remember someone saying that 'Rust skipped leg day', feeling that Rust was overly focused on the borrow checker while only solving a small number of problems.

1. I think it's easy, especially for GC users, to forget that memory management is really about resource management.

2. The composability of features with the borrow checker is outstanding, like proper session types / locks, or Send+Sync for safely sharing data across threads.

ModernMech(10000) 6 days ago [-]

Me too. A lot of people who try Rust encounter a very steep learning curve, and tend to question whether the borrow checker and strict typing is even worth it. For me, it's allowed me to build larger threaded and distributed systems than I've ever been able to before. I've tried to build such systems in C/C++ but I've never been able to make something that isn't incredibly brittle, and I've been writing in those languages for 25 years. For a long time I thought maybe I'm just a bad programmer.

Rust changed all that. I'm kind of a bad programmer I guess, because Rust caught a lot of bad decisions I was making architecturally, and forced me to rewrite things to conform to the borrow checker.

This is the point at which I've found many people give up Rust. They say to themselves 'This is awful, I've written my program one way I'm used to, and now it looks like I have to completely rewrite it to make this stupid borrow checker happy. If I had written in C++ I'd be done by now!' But will you really be done? Because I had the same attitude and every time I went back to C++ I surely built something, but if it got too large it would be a sandcastle that would fall over at the slightest breeze. With Rust I feel like I'm making skyscrapers that could withstand an earthquake, and I actually am because the programs I've written have weathered some storms that would have washed my C++ code out to sea.

Of course one can make stable, secure, performant systems in C++ and many other languages. But apparently I can't, and I need something like Rust to empower me. Someone else here said that Rust attracts people who want to feel powerful and smart by writing complicated code, but I like to write Rust code just to not feel inept!

ReactiveJelly(10000) 6 days ago [-]

> I wrote a small tool called `crate-scraper` which downloads the source package for every source specified in our Cargo.toml file, and stores them locally so we can have a snapshot of the code used to build a Xous release.

I thought `cargo vendor` already did this?


> This cargo subcommand will vendor all crates.io and git dependencies for a project into the specified directory at <path>. After this command completes the vendor directory specified by <path> will contain all remote sources from dependencies specified.

Maybe he doesn't want to depend on Cargo. Fair enough, it's a big program.

bunnie(10000) 6 days ago [-]

The big thing I wanted was the summary of all the build.rs files concatenated together so I wasn't spending lots of time grepping and searching for them (and possibly missing one).

The script isn't that complicated... it actually uses an existing tool, cargo-download, to obtain the crates, and then a simple Python script searches for all the build.rs files and concatenates them into a builds.rs mega-file.

The other reason to give the tool its own repo is crate-scraper actually commits the crates back into git so we have a publicly accessible log of all the crates used in a given release by the actual build machine (in case the attack involved swapping out a crate version, but only for certain build environments, as a highly targeted supply chain attack is less likely to be noticed right away).

It's more about leaving a public trail of breadcrumbs we can use to do forensics to try and pinpoint an attack in retrospect, and making it very public so that any attacker who cares about discretion or deniability has to deal with this in their counter-threat model.

dimgl(10000) 6 days ago [-]

> This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read.

> Rust Is Powerful, but It Is Not Simple

This is exactly why I haven't gotten into Rust.

klysm(10000) 6 days ago [-]

What do you use instead?

secondcoming(10000) 6 days ago [-]

This was a very interesting read.

IMO the author underplays the visual ugliness of some Rust code. Programmers tend to look at code for hours a day for years, and so it should not be visually taxing to read and parse. This is why syntax highlighting exists, after all.

But the gist I got from it is that Rust is really a very good static analyser.

bb010g(10000) 5 days ago [-]

I find Rust code rather pleasant to read.

gary17the(10000) 6 days ago [-]

> This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read


If you think that Rust is dense and difficult to eyeball, please do try... Swift - purely for therapeutic reasons. But not the usual, trivial, educational-sample, evangelism-slideshow Swift, please, but real-world, advanced Swift with generics. All the unique language constructs to memorize, all the redundant syntactic sugar variations to recognize, all the special-purpose language features to understand, all the inconsistent keyword placement variations to observe, all the inferred complex types to foresee, etc. will make you suddenly want to quit being a programming linguist and instead become a nature-hugging florist and/or run back to Go, Python, or even friggin' LOGO. I'm tellin' ya. And, when considering Swift, we're not even talking about a systems programming language usable with, say, lightweight wearable hardware devices, but about a frankenstein created (almost) exclusively for writing GUIs on mobile devices usually more powerful than desktop workstations of yesteryear :).

Rust is complex, but very good.

pphysch(10000) 6 days ago [-]

Interesting, I had no idea what I was missing out on.

Conversation from last week:

Me: 'So what are you working on at $company?'

Friend: 'We're building a complete HVAC management system for all types of buildings, from hardware to software'

Me: 'Cool! What technologies are you building it on?'

Friend: 'Swift'

Me: '...for like an iOS app to monitor the system?'

Friend: 'No, everything is written in Swift. The entire backend too.'

Me: 'Interesting... Have you shipped anything yet?'

Friend: 'No but the founder is running a prototype in his house and we just secured another round of funding...'


Is this a common thing?

jacquesm(10000) 6 days ago [-]

Absolute gold this article.

'In the long term, the philosophy behind Xous is that eventually it should "get good enough", at which point we should stop futzing with it.'

I wished more people would get this.

lodovic(10000) 6 days ago [-]

You do need a very patient sponsor for such projects though

throwaway17_17(10000) 6 days ago [-]

'Before [const generic], Rust had no native ability to deal with arrays bigger than 32 elements'.

Is this a correct statement? I have seen posts talking about const generics being a new thing as of 2022. Did Rust actually lack the ability to have an array with more than 32 elements? I find it hard to believe that there was no way to have an array of longer length and Rust still being a production level language.

jhugo(10000) 6 days ago [-]

Since nobody else mentioned it, it's worth pointing out that what e.g. JS calls an array is Vec in Rust and can be as long as you want, with no ergonomic difference regardless of the length.

Array in Rust specifically refers to an array whose length is known at compile time, i.e. a bunch of values concatenated on the stack, and that's what the limitations applied to.

The quoted statement pissed me off a bit (I otherwise enjoyed the article) because it seems intended to mislead. The author should have known the colloquial meaning of 'array', and 'no ability to deal with' is factually incorrect.

carry_bit(10000) 6 days ago [-]

You could have bigger arrays, what was missing were the trait implementations. Originally the traits were implemented using a macro, and the macro only generated implementations for up to length 32.

Animats(10000) 6 days ago [-]

There were some awful hacks to make integer parameters to generics sort of work before 'const generic' went in. There were tables of named values for 0..32, then useful numbers such as 64, 128, 256, etc. Those haven't all been cleaned out yet.

masklinn(10000) 6 days ago [-]

It's not quite correct no.

Before const generics most traits were only implemented up to 32 elements though, which could be quite annoying. Even more so as the compilation error was not exactly informative.

est31(10000) 6 days ago [-]

You have always been allowed to have arrays longer than 32 elements, but dealing with them used to be hard. Beyond the Copy trait, which is a compiler builtin, many traits weren't implemented for arrays with more than 32 elements.

The first such change was implemented in 1.47.0 in 2020, where a bunch of traits were made to work on all array sizes: https://github.com/rust-lang/rust/blob/master/RELEASES.md#ve...

It took a few releases, until 1.51.0 in 2021, until custom traits could be implemented for arrays of any length: https://github.com/rust-lang/rust/blob/master/RELEASES.md#ve...

And the feature is still limited. For example, legacy users like serde still can't switch to the new const generics based approach, because of the same issue that the Default trait is facing. Both traits could be using const generics, if they were allowed to break their API, but neither want to, so they are waiting for improvements that allow them to switch without doing a hard API break.
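To make the 1.51 change concrete, here is roughly what a const-generics trait impl looks like - one impl covering arrays of every length, where previously a macro would have stamped out impls only up to [T; 32]. The trait name here is illustrative, not from the article:

```rust
// A custom trait implemented for arrays of *any* length via const
// generics (stable since Rust 1.51). Before that, impls like this
// were macro-generated per length, only up to [T; 32].
trait Len {
    fn len_desc(&self) -> String;
}

impl<T, const N: usize> Len for [T; N] {
    fn len_desc(&self) -> String {
        format!("array of {} elements", N)
    }
}

fn main() {
    let big = [0u8; 100]; // well past the old 32-element boundary
    assert_eq!(big.len_desc(), "array of 100 elements");
    println!("{}", big.len_desc());
}
```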

alkonaut(10000) 6 days ago [-]

> "Hi, run this shell script from a random server on your machine."

You shouldn't run scripts from a random server, but you probably do have to accept running scripts from a server you trust. If you don't trust the server you got the script from, are you really going to run the executables this script installs? If we ignore the idea of downloading and building every program from source, then you'll download and run programs compiled by someone else. And you need to trust them, or sandbox the programs. There are no alternatives.

Yes, the bash script or msi can kill your dog and eat your homework, but there isn't much we can do about that without running things in sandboxes - and the (old/normal) Windows app model doesn't have that.

Auditing the script won't help you, because it'll say it will install a program somewhere. Which is what you want, so you'll consider the audit 'ok'. But the people who wrote the script/installer are the same people that created the program (or have compromised the computers producing both) and now you'll run the rustc.exe program you just installed and that will eat your homework!

To most people there is no difference in how transparent a bash script is compared to an msi. Downloading an msi from a https server I trust, signed with a cert I trust, is something I'm mostly comfortable with. The same applies to running a bash script from a location that is trustworthy.

samatman(10000) 6 days ago [-]

This is threat modeling. Bunnie Huang's threat model for Precursor is considerably more stringent than the ordinary, to put it mildly.

Compare this to a C program where, love it or hate it, it's just a bunch of files that get included by concatenation. There's no magic to make your life easier or get you in trouble; everything is done via manual transmission.

The article goes into why they haven't been able to apply this approach to Rust, even though they would like to.

usrn(10000) 6 days ago [-]

With the other languages the apps on my machine are built with (C, and to a large degree Python) I have the benefit of the distribution maintainers at least looking in the general direction of the source for things I install (including development libraries). Tools like Cargo shortcut that and open me up to a lot of nastiness. It's very similar to the problem on Windows, really, and I wouldn't be surprised if you started seeing malware distributed that way, like we're currently seeing on NPM and PyPI.

kbenson(10000) 6 days ago [-]

To me, the problem has never been that you're running a shell script from some remote source, but that you're expected to pipe it directly into an interpreter so the existence of what you actually ran is ephemeral.

There are the various levels of trust that you need to account for, but as you and others note, that isn't really different, to most people, from some installer.

What is different is that there's no record of what you ran if you pipe it to an interpreter. If, later, you want to compare the current script available against what you ran, there's no easy way.

Datenstrom(10000) 6 days ago [-]

It always comes back to trusting trust [1].

[1]: https://www.cs.cmu.edu/~rdriley/487/papers/Thompson_1984_Ref...

amalcon(10000) 6 days ago [-]

Auditing the script can certainly help, just not against malice. E.g. if the script is not set up in such a way that it protects against partial execution, then this represents a kind of vulnerability (truncation) that signed MSI/.deb/etc files simply don't have, by the design of the file format.

Yes, it's possible (even easy) to write a curlbash script that doesn't have this issue (or the various other issues). Reviewing the script still buys you something.
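One common defence against the truncation issue (a minimal sketch, not rustup's actual script): wrap all the work in a function and call it only on the very last line, so a partially downloaded script defines the function but never executes it.

```shell
#!/bin/sh
# Defensive curl|sh pattern: if the download is cut off anywhere
# before the final line, nothing runs, because all the work lives
# inside main() and the call to main() is the last thing in the file.
main() {
    set -eu
    echo "downloading toolchain"
    echo "installing toolchain"
}

main "$@"
```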

Confiks(10000) 6 days ago [-]

> Auditing the script won't help you, because it'll say it will install a program somewhere. Which is what you want, so you'll consider the audit 'ok' [but that program is made by the same people as the installation script].

Your argument doesn't take into consideration that build artifacts / software releases have culture and best practices behind them. Such releases are often considered, tested, cut, digested, signed and included in package managers delegating trust.

Many one-off installation shell scripts are not afforded that culture, especially when maintained from within (static) websites that update frequently. On the other hand, they are small enough for you to audit a bit. If you'd compare the script with one that someone else downloaded a month earlier (i.e. archive.org), that would help a lot to establish trust.

> If we ignore the idea of downloading and building every program from source

Your argument is equally valid when building every program from source. You will not be able to review the source code of moderately large programs. You will need to delegate your trust in that case as well.

beardicus(10000) 6 days ago [-]

bunnie is so kind and thoughtful, even when being critical. compare this to the typical frothy-mouthed 'rant' format we see here.

i'm sure rants are cathartic for the writer, but i rarely find them compelling.

jacquesm(10000) 6 days ago [-]

Not only that, he's modest.

rob74(10000) 6 days ago [-]

Well, that's the difference between the 'I like [X], but have a few complaints that I want to get off my chest' kind of rant and the 'I hate [X], and want to convince everyone how bad it is and to never ever use it again' kind of rant...

StillBored(10000) 6 days ago [-]

' Yes, it is my fault for not being smart enough to parse the language's syntax, but also, I do have other things to do with my life, like build hardware.'

and 'Rust Is Powerful, but It Is Not Simple'

among all the other points, should be enough to disqualify it for mainstream use. The core of most arguments against C++ boil down to those two points too. If a large percentage of the engineers working in the language have a problem understanding it, they are going to have a hard time proving that there aren't any unexpected side effects. Of which both C++ and rust seem to be full, given the recent bug reports in rust and projects people are using it in.

So, I'm still firmly in the camp that while there are better system programming languages than C, rust isn't one of them (hell even Pascal is probably better, at least it has length checked strings).

IshKebab(10000) 6 days ago [-]

> The core of most arguments against C++ boil down to those two points too. If a large percentage of the engineers working in the language have a problem understanding it, they are going to have a hard time proving that their aren't any unexpected side effects.

That's true for C++ but not for Rust, because Rust will tell you if there's some kind of unexpected behaviour that you didn't think about, whereas C++ will allow UB or whatever without telling you.

That's the big difference between (safe) Rust's complexity and C++'s complexity. They are both very complex, but in Rust it doesn't matter too much if you don't memorise the complexity (complicated lifetime rules, etc.) because it will just result in a compile error. Whereas in C++ you have to remember the rule of 3... no 5... etc. (that's a really simple example; don't think 'I know the rule of 5; C++ is easy!').
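A concrete (illustrative) instance of that difference: in C++, pushing to a vector while holding a reference into it is silent UB; in Rust, holding the borrow across the push is rejected at compile time, so the version that compiles has to copy the value out first.

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // let first = &v[0];  // keeping this borrow alive across the
    // v.push(4);          // push would not compile: cannot borrow
    //                     // `v` as mutable while borrowed as immutable

    let first = v[0]; // copy the value out instead of borrowing
    v.push(4);        // now the mutation is fine
    assert_eq!(first, 1);
    assert_eq!(v.len(), 4);
}
```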

klysm(10000) 6 days ago [-]

> among all the other points, should be enough to disqualify it for mainstream use. The core of most arguments against C++ boil down to those two points too.

Nope not at all, that's not a valid comparison.

I argue that there is no simple solution that affords what rust does. Engineers have to use their heads to write correct and fast software. I'm so tired of people just accepting lack of memory safety because it's "hard" to do correctly. There are real consequences to the amount of insecure trash that exists because of this mindset.

masklinn(10000) 6 days ago [-]

> The core of most arguments against C++ boil down to those two points too.

No, the core arguments against C++ boil down to it not providing enough value for these costs, and that its complexities are not orthogonal and interact sub-optimally with one another so the complexities compound superlinearly.

sophacles(10000) 6 days ago [-]

In that case we need to disqualify: Linux, threading, networking, anything graphical, anything involving a database, anything that has the ability to write memory that is read by other lines of code, and probably any computer that allows input and/or output just to be safe.

lawn(10000) 6 days ago [-]

C++ is one of the most used languages, and it does seem to me that Rust has enough momentum going for it to be a commonly used system programming language as well.

I do agree with his points, but I don't think it's enough to disqualify it for mainstream use.

nu11ptr(10000) 6 days ago [-]

> This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read

I'm a huge Rust fan, but sort of agree. First, I dislike C-style syntax in general and find it all very noisy with lots of unnecessary symbols. Second, while I love traits, when you have a trait heavy type all those impl blocks start adding up giving you lots of boilerplate and often not much substance (esp. with all the where clauses on each block). Add in generics and it is often hard to see what is trying to be achieved.

That said, I've mostly reached the conclusion that much of this is unavoidable. Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python, and trait impls on arbitrary types after the fact is very powerful and not something I would want to give up. I've even done some prototyping of what alternative syntaxes might look like and they aren't much improvement. There is just a lot of data that is needed by the compiler.

In summary, Rust syntax is noisy and excessive, but I'm not convinced much could have been done about it.
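For readers who haven't felt it, the "impl blocks with where clauses" boilerplate looks roughly like this - a toy generic type (not from the article) where every trait gets its own block, each repeating the generics:

```rust
use std::fmt;

// A toy generic wrapper: each trait it supports needs a separate
// impl block, with the generics and bounds restated every time.
struct Wrapper<T>(Vec<T>);

impl<T> fmt::Display for Wrapper<T>
where
    T: fmt::Display,
{
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        let items: Vec<String> = self.0.iter().map(|v| v.to_string()).collect();
        write!(f, "[{}]", items.join(", "))
    }
}

impl<T> Default for Wrapper<T> {
    fn default() -> Self {
        Wrapper(Vec::new())
    }
}

fn main() {
    let w = Wrapper(vec![1, 2, 3]);
    assert_eq!(w.to_string(), "[1, 2, 3]");
    assert!(Wrapper::<i32>::default().0.is_empty());
}
```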

SemanticStrengh(10000) 6 days ago [-]

What kind of meaningful data is passed (besides lifetimes) that isn't passed in Kotlin or scala 3 extension methods?

LAC-Tech(10000) 5 days ago [-]

People seem allergic to anything that isn't superficially ALGOL-like. I still remember Facebook had to wrap OCaml in curly braces because it would apparently blow people's minds.

Animats(10000) 6 days ago [-]

The main Rust syntax is OK, but as the author points out, macros are a mess.

The 'cfg' directive is closer to the syntax used in '.toml' files than to Rust itself, because some of the same configuration info appears in both places. The author is doing something with non-portable cross platform code, and apparently needs more configuration dependencies than most.

logicchains(10000) 6 days ago [-]

>That said, I've mostly reached the conclusion that much of this is unavoidable. Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python, and trait impls on arbitrary types after the fact is very powerful and not something I would want to give up.

Have you checked out C++20 concepts? It supports aliases and doesn't require explicit trait instantiations, making it possible to write such generic code with much less boilerplate.

ducktective(10000) 6 days ago [-]

> I found Rust syntax to be dense, heavy, and difficult to read

Reminds me of this section of Rich Hickey talk: https://www.youtube.com/watch?v=aSEQfqNYNAc

fulafel(10000) 6 days ago [-]

From the modern systems programming languages set, Go does better in this respect. But admittedly it doesn't reach quite as low in fitness for low-level programming as Rust does.

codegeek(10000) 6 days ago [-]

I wanted to learn Go while working professionally with PHP and Python. I loved the simplicity and syntax of Go overall. I learned Go enough to build a small internal tool for our team, and it is production-ready (at least internally). Then I wanted to learn Rust, since it is so popular and always compared with Go, and the syntax made me lose interest. Rust may be amazing and I will be more open-minded about trying it later, but it didn't spark the interest. Superficial, I know, since the real power is in functionality etc., but just an anecdote from an amateur.

sph(10000) 6 days ago [-]

There's definitely a space for native languages that are not as dense as Rust, even if possibly not as performant. I will trade some readability when I need strict memory guarantees and use Rust, but most of the time I'd like to use something readable and fun to use, which Rust ain't.

I used to use Go, not much of a fan anymore, but I'm liking Crystal a lot to fill this space. Eventually Zig when it's more mature.

amelius(10000) 6 days ago [-]

> Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python

True, but Rust is being used for a lot more than just system programming, judging from all the '{ARBITRARY_PROGRAM} written in Rust' posts here on HN.

hawski(10000) 6 days ago [-]

I find that Rust tends to have code that goes sideways more than downward. I prefer the latter, and most C code bases that I find elegant are like that.

It is like that because of all the chaining that one can do. It is also just a feeling.

dingoegret12(10000) 6 days ago [-]

I love Rust and use it every day, but the syntax bloat is something I will never get over. I don't believe that nothing could be done about it. There are all sorts of creative grammar paths one could take in designing a language. An infinite amount, in fact. I would really like to see a transpiler that could introduce term rewriting techniques to make some of that syntax go away.

dhosek(10000) 6 days ago [-]

Familiarity also alleviates the issue. I can remember when I first encountered TeX in the 80s and Perl in the 90s and thought the code looked like line noise and now I no longer see that (even in Larry Wall–style use-all-the-abbreviations Perl).

jillesvangurp(10000) 6 days ago [-]

Something like Kotlin but with a borrow checker might be the ultimate in developer ergonomics for me. I sat down at some point to wrap my head around Rust and ended up abandoning that project due to a lack of time. And because it was hard. The syntax is a hurdle. Still, I would like to pick that up at some point but things don't look good in terms of me finding the time.

However, Rust's borrow checker is a very neat idea and one that is worthy of copying for new languages; or even some existing ones. Are there any other languages that have this at this point?

I think the issue with Rust is simply that it emerged out of the C/C++ world and they started by staying close to its syntax and concepts (pointers and references), and it kind of went downhill from there. Adding macros to the mix allowed developers to fix a lot of issues, but at the price of having code that is not very obvious about its semantics to a reader. It works and it's probably pretty in the eyes of some. But to me it looks like Perl and C had a baby. Depending on your background, that might be the best thing ever of course.

cies(10000) 6 days ago [-]

Maybe we've reached the limits of the complexity we can handle in a simple text-based language and should develop future languages with IDEs in mind. IDEs can hide some of the complexity for us, and give access to it only when you are digging into the details.

shadowofneptune(10000) 6 days ago [-]

The same information can be communicated in different ways, trading one form of noise for another. I have a personal preference for Pascal-like or PL/I syntax. Instead of char **x or int&& x, there's x: byte ptr ptr. It's more to type and read, sure, but sometimes having an English-like keyword really helps clarify what's going on.

singularity2001(10000) 6 days ago [-]

I think making things syntactically explicit which are core concepts is stupid:

```pub fn horror()->Result{Ok(Result(mut &self))}```

A function returns a Result. This concept in Rust is so ubiquitous that it should be a first class citizen. It should, under all circumstances, be syntactically implicit:

```pub fn better->self```

No matter what it takes to make the compiler smarter.

runevault(10000) 6 days ago [-]

Your summary is the thing I struggle with as well. How do you deal with the issues of density without either making it more verbose by a wide margin (which also hampers readability) or hiding information in a way that makes the code less obvious which is, IMO, worse.

Software is becoming more and more complex and unless there are entirely different design patterns we have failed to find, managing and understanding that during both the writing and the maintenance of software is the fundamental problem of our time. Someone else in these comments mentioned leaning more heavily into IDE tooling and I do wonder if we are coming to a point where that makes sense.

api(10000) 6 days ago [-]

IMHO it's at least somewhat better than 'modern' C++ where you end up having to wrap virtually every single thing in some kind of template class, and that's without the benefit of much stronger memory and thread safety.

Overall I think Rust is a hands-down win over C and C++. People who want it to be like Go are probably not doing systems-level programming, which is what Rust is for, and I have severe doubts about whether a rich systems-level language could be made much simpler than Rust and still deliver what Rust delivers. If you want full control, manual memory management with safety, other safety guarantees, a rich type system, high performance, and the ability to target small embedded use cases, there is a certain floor of essential complexity that is just there and can't really be worked around. Your type system is going to be chonky because that's the only way to get the compiler to do a bunch of stuff at compile time that would otherwise have to be done at runtime with a fat runtime VM like Go, Java, C#.NET, etc. have.

Go requires a fat runtime and has a lot of limitations that really hurt when writing certain kinds of things like high performance codecs, etc. It's outstanding for CRUD, web apps, and normal apps, and I really wish it had a great GUI story since Go would be a fantastic language to write normal level desktop and mobile UI apps.

queuebert(10000) 6 days ago [-]

Completely agree. I think of the extra syntax as us helping the compiler check our code. I have to write a few more characters here and there, but I spend way less time debugging.

Although I may have PTSD from Rust, because lately I find myself preferring QBasic in my spare time. ¯\_(ツ)_/¯

singularity2001(10000) 6 days ago [-]

>>> I'm not convinced much could have been done about it.

Are you sure? What stops Swift with its beautiful syntax and safe optionals from becoming a systems language?

pjmlp(10000) 6 days ago [-]

System languages on the Algol/Wirth branch prove otherwise.

They can be ergonomic high level, while providing the language features to go low level when needed.

codebje(10000) 5 days ago [-]

> That said, I've mostly reached the conclusion that much of this is unavoidable. Systems languages need to have lots of detail you just don't need in higher level languages like Haskell or Python, ...

I am not convinced that there's so much more to Rust than there is to GHC Haskell as to justify so much dense syntax.

There are many syntax choices in Rust based, I assume, on its aim to appeal to C/C++ developers, that add a lot of syntactic noise - parentheses and angle brackets for function and type application, double colons for namespace separation, curly braces for block delineation, etc. There are more syntax choices made to avoid being too strange, like the tons of syntax added to avoid higher-kinded types in general and monads in particular (Result<> and ()?, async, 'builder' APIs, etc).

Rewriting the example with more haskell-like syntax:

    Trying::to_read::<&'a heavy>(syntax, |like| { this. can_be( maddening ) }).map(|_| ())?;
    Trying.to_read @('a heavy) syntax (\like -> can_be this maddening) >> pure ()
It's a tortuous example in either language, but it still serves to show how Rust has made explicit choices that lead to denser syntax.

Making a more Haskell-like syntax perhaps would have hampered adoption of Rust by the C/C++ crowd, though, so maybe not much could have been done about it without costing Rust a lot of adoption by people used to throwing symbols throughout their code.

(And I find it a funny place to be saying _Haskell_ is less dense than another language given how Haskell rapidly turns into operator soup, particularly when using optics).

ducktective(10000) 6 days ago [-]

About the installation method ('hi! download this random shell script and execute it'), I agree this is really dangerous, but merely installing stuff is hairy on Linux distros. I mean, what is the practical alternative? Distro package manager versions are almost always way behind.

NixOS/guix are gonna solve this issue once and for all (famous last words)

mjw1007(10000) 6 days ago [-]

Here are some things that they could do better:

- the domain in the curlbashware URL could be less shady than sh.rustup.rs

- the 'rustup is an official Rust project' claim on https://rustup.rs/ could be a link to a page somewhere on rust-lang.org that confirms that rustup.rs is the site to use

maccard(10000) 6 days ago [-]

But it's not really dangerous, no more so than downloading an arbitrary binary and executing it at least. The script is delivered over https, so you're not going to be MITM'ed, and you're trusting rustup to provide you the valid install script. If you _are_ MITM'ed, it doesn't really matter what your delivery method is unless you do a verification from another device/network, and if you don't trust rustup then why are you downloading and executing their installer?

IshKebab(10000) 6 days ago [-]

> this is really dangerous

People repeat this a lot but really it just seems dangerous. Can you give an example of a scenario where offering a download via `curl | bash` is more dangerous than 'download this installer with the hash 01234 and then execute it'?

otterley(10000) 6 days ago [-]

> NixOS/guix are gonna solve this issue once and for all (famous last words)

Should we take bets on whether this happens first, or whether nuclear fusion becomes mainstream first?

NoGravitas(10000) 6 days ago [-]

> This is a superficial complaint, but I found Rust syntax to be dense, heavy, and difficult to read.

I'm not sure this is a superficial complaint. People say the hard thing about learning Rust is the new concepts, but I haven't found that to be true at all. The concepts are easy, but the combinatorial explosion of syntax that supports them is untenable.

gxt(10000) 6 days ago [-]

I use rust weekly and I find it to have the best DX. I have done work with Oracle Java 5-8, IBM XL C99, MSVC++11, CPython 2-3, C# .NET Core 3.1. Stable Rust 2021 is overall the most readable, least surprising, BUT only with the right tool which also makes it the most discoverable, with rust-analyzer. My only gripe is the lack of consensus on strongly typed error handling (anyhow+thiserror being the most sensible combination I found after moving away from bare Results, to failure, to just anyhow).
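For context on the thiserror half of that combination: thiserror's derive essentially writes the Display and Error impls you would otherwise do by hand, while anyhow erases concrete error types at the application boundary. A std-only sketch of what the derive saves you (hypothetical error type; the real macro handles many more cases):

```rust
use std::fmt;

// Roughly what `#[derive(Error)] #[error("missing key: {0}")]` from
// thiserror expands to, written out by hand with std only.
#[derive(Debug)]
enum ConfigError {
    MissingKey(String),
}

impl fmt::Display for ConfigError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self {
            ConfigError::MissingKey(k) => write!(f, "missing key: {k}"),
        }
    }
}

impl std::error::Error for ConfigError {}

fn lookup(key: &str) -> Result<String, ConfigError> {
    Err(ConfigError::MissingKey(key.to_string()))
}

fn main() {
    let err = lookup("port").unwrap_err();
    assert_eq!(err.to_string(), "missing key: port");
    println!("{err}");
}
```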

kkoning(10000) 6 days ago [-]

> but the combinatorial explosion of syntax that supports them is untenable.

I wouldn't go quite that far myself, but it's definitely one of the sharper edges of the language currently--particularly because some of the features don't work together yet. E.g., async and traits.

devit(10000) 6 days ago [-]

How would you change the syntax?

I don't think that Rust has much redundant syntax.

I guess you could do things like replace &'a Type with Ref<'a, Type> and *Type with Ptr<Type>, and get rid of some sugar like 'if let' and print!, but I'm not sure that would have much of an impact.

sidlls(10000) 6 days ago [-]

Back when I wrote C and C++ for a living I'd occasionally meet someone who thought their ability to employ the spiral rule or parse a particularly dense template construct meant they were a genius. I get the same vibe from certain other groups in this industry, most recently from functional programmers and Rust afficionados, for example. Nobody gives a damn if you can narrate a C spiral or a functional-like Rust idiom.

And this syntax density is one of the reasons I stopped advocating for the use of Rust in our systems. First, I don't want to work with languages that attract this kind of person. Second, I don't want to work with languages that impose a relatively heavy cognitive load on simply reading the lines of the source code. Units of code (i.e. statements, functions, structures and modules) are already a cognitive load--and the more important one. Any extra effort I have to spend simply parsing the symbols is a distraction.

'You get used to it,' 'with practice it fades to the background,' etc. are responses I've seen in these comments, and more generally when this issue comes up. They're inaccurate at best, and often simply another way the above mentioned 'geniuses' manifest that particular personality flaw. No, thank you. I'll pass.

krupan(10000) 6 days ago [-]

One human needs to figure out how to write a line of code once, and then that line needs to be read and understood by humans over and over.

Optimize for readability. Rust doesn't seem to do this.

UmbertoNoEco(10000) 6 days ago [-]

Correct. This is more or less like remarking that 'having to learn Kanji/Hanzi makes learning Japanese/Mandarin very difficult' is a superficial complaint.

titzer(10000) 6 days ago [-]

I find Rust code hard to read...to the point where I don't feel motivated to learn it anymore. Line noise is confusing and a distraction. Random syntactic 'innovations' I find are just friction in picking up a language.

For example, in the first versions of Virgil I introduced new keywords for declaring fields: 'field', 'method' and then 'local'. There was a different syntax for switch statements, a slightly different syntax for array accesses. Then I looked at the code I was writing and realized that the different keywords didn't add anything, the array subscripting syntax was just a bother; in fact, all my 'innovations' just took things away and made it harder to learn.

For better or for worse, the world is starting to converge on something that looks like an amalgam of Java, JavaScript, and Scala. At least IMHO; that's kind of what Virgil has started to look like, heh :)

perrygeo(10000) 6 days ago [-]

It's not a superficial complaint but it is relative to one's experience. Something that's 'difficult' for me might be 'easy' for you and vice versa. I find it very much related to understanding the core concepts.

I personally find Rust syntax to be quite enjoyable, or at least it fades into the background quickly - with a few exceptions. The syntax for lifetime annotations can be challenging. And not surprisingly explicit lifetime annotations are a rather unique concept, at least among mainstream languages. IOW the syntax is difficult because it's an entirely new mental model (for me), not because `<'a>` is an inherently bad way to express it.
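For readers meeting <'a> for the first time, the concept behind the notation is small - the annotation just ties the output borrow's validity to the input's. An illustrative function:

```rust
// <'a> says: the returned &str borrows from `s`, so it stays valid
// only as long as `s` does. The syntax is terse, but that is all
// the annotation encodes.
fn first_word<'a>(s: &'a str) -> &'a str {
    s.split_whitespace().next().unwrap_or("")
}

fn main() {
    let sentence = String::from("hello world");
    assert_eq!(first_word(&sentence), "hello");
}
```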

SemanticStrengh(10000) 6 days ago [-]

Let me take this opportunity to explain that among the many constraints of Rust, it is the rarely discussed one - no implicit promotion from a smaller integer (e.g. a char) to a bigger integer - that made me quit and save my sanity. Having to write explicit casts a dozen times per function for basic manipulations of numbers on a grid (and the index type mismatch) is an insult to the developer's intelligence. It seems some people are resilient and able to write nonsensical parts of code repeatedly, but for me, I can't tolerate it.
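The friction being described looks like this in practice - indexing a grid with mixed integer widths, where every widening must be spelled out (illustrative numbers):

```rust
fn main() {
    let width: u16 = 80;
    let (x, y): (u8, u8) = (5, 3);

    // No implicit u8 -> u16 -> usize promotion: each widening is
    // written explicitly (usize::from for lossless conversions,
    // or `as` casts).
    let idx = usize::from(y) * usize::from(width) + usize::from(x);
    assert_eq!(idx, 245);
}
```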

josephg(10000) 6 days ago [-]

I don't mind a few "as usize" casts because usually you can cast once and be done with it. But the cast that kills me is this one:

How do you add an unsigned and a signed number together in Rust, in a way which is fast (no branches in release mode), correct, and which panics in debug mode in the right places (if the addition over- or under-flows)? Nearly a year into Rust and I'm still stumped!
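The usual workaround (a sketch, not a full answer to the question above): cast and wrapping_add, which is branch-free and gives the right two's-complement result, but silently wraps instead of panicking in debug builds - which is exactly the gap being described. (checked_add_signed, which closes part of it, was not yet stable at the time of this thread.)

```rust
// Branch-free addition of a signed offset to an unsigned index.
// Correct under two's complement, but it wraps silently rather than
// panicking on overflow in debug builds.
fn add_offset(base: usize, delta: isize) -> usize {
    base.wrapping_add(delta as usize)
}

fn main() {
    assert_eq!(add_offset(10, 3), 13);
    assert_eq!(add_offset(10, -3), 7);
}
```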

allisdust(10000) 6 days ago [-]

Considering all the type casting bugs prevalent in other languages, I would have more trust in the compiler than programmers at this point. You can always pick JavaScript of course, which happily returns whatever it feels like. Frankly, this explicit casting makes the next developer's life easier.

voidhorse(10000) 6 days ago [-]

I wholeheartedly agree that Rust's syntax is way noisier and uglier than I'd like, and it's nice to see someone else raise the point seriously. People tend to act like syntax is an ancillary detail in a language, but actually it's fundamental! It's our direct interface into the language itself, and if it's painful to read and write, the language won't be pleasant to use, no matter how great its semantics may be.

Beyond the line noise problem, I feel some of Rust's syntactic choices are confusing. For instance:

let x = 2

Introduces a new name and binds it to the value 2 while

if let Some(x) = y

Is a shorthand for pattern matching. Meanwhile other matching structures have no need of "let" at all. Likewise this extends the semantics of what "if" means and also overloads "=" (e.g., glancing at this, would you say equals is binding a value to a pattern, performing a Boolean check, or both?). Rust has a couple of one-off weird syntactical devices that have been introduced as shorthand that imo quickly increase the cognitive load required to read code, because several structures and keywords are reused in slightly different ways to mean entirely different things.

There are a lot of similar syntactic hoops around type signatures because they didn't go with the old "type variables must be lowercase" rule which leads to subtle potential ambiguities in parsing T as a variable or proper type in some cases that thus forces additional syntax on the user.

I also think there are too many ways to express equivalent things in Rust, which again leads to more cognitive overhead. Reading the current docs, I get the sense the language is becoming "write biased". Whenever they introduce some syntactic shortcut the justification is to save typing and eliminate small amounts of repetition, which is great in theory but now we have N ways of writing and reading the same thing which quickly makes code hard to grok efficiently imo.

This minor gripe comes with the big caveat that it remains probably the most interesting language to come into vogue since Haskell.

irishsultan(10000) 6 days ago [-]

> For instance:

> let x = 2

> Introduces a new name and binds it to the value 2 while

> if let Some(x) = y

> Is a shorthand for pattern matching.

Both introduce a new name (x) and both pattern match; it's just that the pattern in let x = 2 is simply "match anything and assign it the name x". You could just as well write

    let t @ (x, y) = (2, 4);

which binds t to (2, 4), x to 2, and y to 4, and there it's perhaps more clear that a normal let is pattern matching just as much as if let is.

burntsushi(10000) 6 days ago [-]

It might help you to think of 'if let' as an extension of 'let' rather than an extension of 'if'. That is, 'let' by itself supports irrefutable patterns. e.g.,

    let std::ops::Range { start, end } = 5..10;

So the 'if' is 'just' allowing you to also write refutable patterns.
beltsazar(10000) 6 days ago [-]

> I feel some of rust's syntactic choices are confusing. For instance:

> let x = 2

> Introduces a new name and binds it to the value 2 while

> if let Some(x) = y

> Is a shorthand for pattern matching.

It won't be as confusing once you realize that both do the same thing: variable binding. The difference is that the former is an irrefutable binding, whereas the latter is a refutable binding.

Suppose that we have:

    struct Foo(i32);

A few more examples of irrefutable binding:

1. As a local variable:

    let Foo(x) = Foo(42);

2. As a function parameter:

    fn bar(Foo(x): Foo) {}
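To round out the distinction with a runnable sketch (function name and values are illustrative): a refutable pattern needs `if let` (or `match`/`let else`), while a plain `let` with the same pattern is rejected at compile time.

```rust
fn unwrap_or_zero(y: Option<i32>) -> i32 {
    // A refutable binding: `Some(x)` may fail to match when y is None,
    // so it needs `if let`. A bare `let Some(x) = y;` would be a
    // compile error (E0005: refutable pattern in local binding).
    if let Some(x) = y { x } else { 0 }
}

fn main() {
    assert_eq!(unwrap_or_zero(Some(7)), 7);
    assert_eq!(unwrap_or_zero(None), 0);
}
```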
ntoskrnl(10000) 6 days ago [-]

I'm overall a rust fan but I've always agreed with you about `if let`. What I don't like is that it reads right-to-left and starts getting awkward if either side is much longer than just a variable name.

  if let Some(Range { start, end }) = self.calc_range(whatever, true) {
      // ...
  }

I feel it would read much smoother if you switched the two sides so execution flows left-to-right:

  if self.calc_range(whatever, true) is Some(Range { start, end }) {
      // ...
  }
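As a hedged aside: when no bindings need to escape the condition, the standard `matches!` macro already gives that left-to-right reading today (the function here is illustrative):

```rust
fn is_positive(y: Option<i32>) -> bool {
    // Scrutinee first, pattern second: reads left-to-right,
    // and guards (`if n > 0`) are supported inside the macro.
    matches!(y, Some(n) if n > 0)
}

fn main() {
    assert!(is_positive(Some(5)));
    assert!(!is_positive(Some(-5)));
    assert!(!is_positive(None));
}
```

It doesn't replace `if let` when you need `start`/`end` bound in the body, which is exactly the case the comment is about.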
avgcorrection(10000) 6 days ago [-]

Conservative Java has something similar with try-with-resources and the upcoming instanceof pattern matching.

ArdelleF(10000) 6 days ago [-]

We did a lot of Rust compilation exploration during the development of TiKV (github.com/tikv/tikv), with a lot of interesting learnings: https://en.pingcap.com/blog/rust-huge-compilation-units/

SemanticStrengh(10000) 6 days ago [-]

thanks for maintaining jemalloc :)

cmrdporcupine(10000) 6 days ago [-]

Good article. I have some things to say, because that's what I do.

To start: I have to say that I find some of the comments here a little odd -- the competition for Rust is not Go or TypeScript or Kotlin or whatever. If you're using Rust in your full-stack webdev world to serve, like, database queries to webpages or whatever... I don't know why. Rust is clearly for things like: writing an OS, writing a browser, writing a low latency high throughput transaction server, writing a game. For the other things I'd say there's plenty of other options. It's been years since I worked in web applications, but I struggle to see the need for Rust there.

Rust is for the same niche that C++ and C sit in now, a similar niche to the one Zig is targeting. I don't think D with its <admittedly now optional> GC or Golang sit in this same space at all. Also, having spent a year working in Go, I don't understand how anybody could complain about Rust encouraging boilerplate but propose Go with a straight face. Go (at least the Go I was working on at Google) was just a pile of boilerplate. Awful. The syntax of the language is... fine. Generics will fix most of my complaints with it. The culture around the language I found repulsive.

Anyways, for years (prior to C++11) I whined about the state of C++. Not just its lack of safety, but the idiosyncrasies of its syntax and its lack of modern language features I was familiar with from e.g. OCaml and from hanging out on Lambda the Ultimate. By modern features I mean pattern matching & option/result types, lambdas, type inference, and a generics/parameterized type system which wasn't ... insane. Remember, this is pre-C++11. It was awful. C++11 and beyond addressed some concerns but not others. And I actually really love writing in C++ these days, but I'm still well aware that it is a dog's breakfast and full of foot-guns and oddities. I've just learned to think like it.

Anyways, back to Rust... before C++11, when I saw Graydon Hoare had kickstarted a project at Mozilla to make a systems programming language (that is, without a GC) that supported modern language features, I was super stoked. I tended to follow what Graydon was doing because he's talented and he's a friend-of-friends. Rust as described sounded like exactly what I wanted. But the final delivery, with the complexities of the borrow checker... is maybe something that I hadn't gambled on. Every few months I give another whack at starting a project in Rust, and every few months I tend to run up against the borrow checker with frustration. But I think I have it licked now; I think I will write some Rust code in my time off work.

So my personal take on Rust is this: on paper it's the fantasy language I always wanted, but in reality it has many of the complexity warts that other people have pointed to.

However it is better than all the alternatives (other than maybe Zig) in this space in many many ways. But most importantly it seems to have gained momentum especially in the last 2-3 years. It seems clear to me now that the language will have success. So I think systems developers will probably need to learn and 'love' it just like they do C/C++ now. And I don't think that's a bad thing because I think a culture will build up that will get people up to speed and some of the syntactical oddities just won't look that odd anymore. And the world of software dev will hopefully be a bit safer, and build systems less crazy, and so on.

devnull3(10000) 5 days ago [-]

> other than maybe Zig

As much as I like Rust, I am keeping an eye on Zig. But Zig 1.0 is too far away to be considered as a contender right now.

bilkow(10000) 6 days ago [-]

It looks like I'm in the minority here, but I generally like Rust's syntax and think it's pretty readable.

Of course, when you use generics, lifetimes, closures, etc., all on the same line, it can become hard to read. But in my experience with 'high level' application code, it isn't usually like that. The hardest thing to grok at first for me, coming from Python, was the :: for navigating namespaces/modules.

I also find functional style a lot easier to read than Python, because of chaining (dot notation) and the closure syntax.


    array = [1, 0, 2, 3]
    new_array = list(map(
        lambda x: x * 2,
        filter(lambda x: x != 0, array),
    ))

versus:

    let array = [1, 0, 2, 3];
    let new_vec: Vec<_> = array.into_iter()
        .filter(|&x| x != 0)
        .map(|x| x * 2)
        .collect();

I mean, I kind of agree with the criticism, especially when it comes to macros and lifetimes, but I also feel like that's more applicable to low level code or code that uses lots of features that just aren't available in e.g. C, Python or Go.

Edit: Collected iterator into Vec

the__alchemist(10000) 6 days ago [-]

I think part of this comes down to: does your Rust code make heavy use of generics? I find myself deliberately avoiding generics and libraries that use them, due to the complexity they add. Not just syntactic noise, but complicated APIs that must be explicitly documented; rustdoc is ineffective at documenting what arguments are accepted in functions and structs that use generics.

See also: Async.

klodolph(10000) 6 days ago [-]

There are people who write Python code like that, but it's an extreme minority. Here's the more likely way:

    array = [1, 0, 2, 3]
    new_array = [x * 2 for x in array
                 if x != 0]

Just as a matter of style, few Python programmers will use lambda outside something like this:

    array = [...]
    array.sort(key=lambda ...)