Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: September 20, 2019 07:05



Front Page/ShowHN stories over 4 points from the last 7 days
If your internet connection drops, you can still read the stories
If there were any historical discussions of a story, links to all the previous submissions on Hacker News will appear just above the comments.

Historical Discussions: The boring technology behind a one-person Internet company (September 16, 2019: 1977 points)
The boring technology behind Listen Notes (January 24, 2018: 1 point)

(1979) The boring technology behind a one-person Internet company

1979 points 4 days ago by mxschumacher in 1864th position

broadcast.listennotes.com | Estimated reading time – 13 minutes | comments | anchor

Listen Notes is a podcast search engine and database. The technology behind Listen Notes is actually very, very boring. No AI, no deep learning, no blockchain. "Any man who must say I am using AI is not using True AI" :)

After reading this post, you should be able to replicate what I built for Listen Notes or easily do something similar. You don't need to hire a lot of engineers. Remember, when Instagram raised $57.5M and got acquired by Facebook for $1B, they had only 13 employees — not all of them were engineers. The Instagram story happened in early 2012. It's 2019 now; it's more possible than ever to build something meaningful with a tiny engineering team — even one person.

If you haven't used Listen Notes yet, try it now.

Overview

Let's start with the requirements and features of the Listen Notes project.

Listen Notes provides two things to end users:

  • A website, ListenNotes.com, for podcast listeners. It provides a search engine, a podcast database, Listen Later playlists, Listen Clips that let you cut a segment of any podcast episode, and Listen Alerts that notify you when a specified keyword is mentioned in new podcasts on the Internet.
  • Podcast Search & Directory APIs for developers. We need to track API usage, collect money from paid users, do customer support, and more.

I run everything on AWS. There are 20 production servers (as of May 5, 2019):

The servers that run Listen Notes. This is a dashboard using Datadog.

You can easily guess what each server does from its hostname.

  • production-web serves web traffic for ListenNotes.com.
  • production-api serves API traffic. We run two versions of the API (as of May 4, 2019), hence v1api (the legacy version) and v2api (the new version); see the routing sketch just after this list.
  • production-db runs PostgreSQL (master & slave).
  • production-es runs an Elasticsearch cluster.
  • production-worker runs offline processing tasks to keep the podcast database always up-to-date and to provide some magical things (e.g., search result ranking, episode/podcast recommendations...).
  • production-lb is the load balancer. I also run Redis & RabbitMQ on this server, for convenience. I know this is not ideal. But I'm not a perfect person :)
  • production-pangu is a production-like server where I sometimes run one-off scripts and test changes. What's the meaning of "pangu"?
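
One common way to keep two API versions running side by side is versioned URL prefixes. A minimal Django sketch (illustrative only, not the author's actual routing; the module paths are made up):

  # urls.py -- hypothetical top-level routing for two coexisting API versions
  from django.urls import include, path

  urlpatterns = [
      path("api/v1/", include("api.v1.urls")),  # legacy version, kept alive for old clients
      path("api/v2/", include("api.v2.urls")),  # current version
  ]

Keeping both versions mounted lets old API clients keep working while new clients move to v2.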

Most of these servers can be horizontally scaled. That's why I name them production-something1, production-something2... It would be very easy to add production-something3 and production-something4 to the fleet.

Backend

The entire backend is written in Django / Python3. The operating system of choice is Ubuntu.

I use uWSGI to serve web traffic. I put NGINX in front of the uWSGI processes; NGINX also serves as the load balancer.

The main data store is PostgreSQL, with which I've gained a lot of development & operational experience over many years — battle-tested technology is good, so I can sleep well at night. Redis is used for various purposes (e.g., caching, stats...). It's not hard to guess that Elasticsearch is used somewhere. Yes, I use Elasticsearch to index podcasts & episodes and to serve search queries, just like most boring companies.
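
For a rough idea of what the search side looks like, here is a minimal sketch with the official Python Elasticsearch client (the index name and fields are hypothetical, not taken from the post):

  from elasticsearch import Elasticsearch

  es = Elasticsearch(["http://localhost:9200"])  # the production-es cluster in this setup

  # Index an episode document (hypothetical index name and fields).
  es.index(index="episodes", id="episode-123", body={
      "title": "The Boring Technology Behind a One-Person Internet Company",
      "description": "How Listen Notes is built and operated.",
      "podcast": "Listen Notes Blog",
  })

  # Full-text search across a few fields, the way a podcast search engine would.
  results = es.search(index="episodes", body={
      "query": {
          "multi_match": {
              "query": "boring technology",
              "fields": ["title^2", "description"],  # boost matches in the title
          }
      }
  })
  for hit in results["hits"]["hits"]:
      print(hit["_score"], hit["_source"]["title"])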

Celery is used for offline processing. And Celery Beat is for scheduling tasks, which is like cron jobs but a bit nicer. If in the future Listen Notes gains traction and Celery & Beat hit scaling issues, I'll probably switch to the two projects I did for my previous employer: ndkale and ndscheduler.
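
As a rough illustration of the Celery + Beat setup (a sketch under assumptions: the task and schedule are made up, and the broker URL points at the RabbitMQ instance mentioned earlier on production-lb):

  from celery import Celery
  from celery.schedules import crontab

  app = Celery("listennotes", broker="amqp://guest@localhost//")

  @app.task
  def fetch_new_episodes():
      # Poll podcast RSS feeds and refresh the database (placeholder body).
      pass

  # Celery Beat: like cron, but the schedule lives in Python next to the tasks.
  app.conf.beat_schedule = {
      "fetch-new-episodes-every-15-min": {
          "task": fetch_new_episodes.name,  # resolves to the registered task name
          "schedule": crontab(minute="*/15"),
      },
  }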

Supervisord is used for process management on every server.

Wait, how about Docker / Kubernetes / serverless? Nope. As you gain experience, you know when not to over-engineer. I actually did some early Docker work for my previous employer back in 2014, which was good for a mid-sized billion-dollar startup but may be overkill for a one-person tiny startup.

Frontend

The web frontend is primarily built with React + Redux + Webpack + ES. This is pretty standard nowadays. When deploying to production, the JS bundles are uploaded to Amazon S3 and served via CloudFront.

On ListenNotes.com, most web pages are half server-side rendered (Django templates) and half client-side rendered (React). The server-side rendered part provides the boilerplate of a web page, and the client-side rendered part is basically an interactive web app. A few web pages are rendered entirely server-side, though, because I was too lazy to make things perfect & for some potential SEO goodies.
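
A common pattern for this hybrid approach is for the Django view to render the page shell and embed the initial state that the React app picks up on mount. A hypothetical sketch (not the author's actual code):

  import json

  from django.shortcuts import render

  def podcast_page(request, podcast_id):
      # Normally fetched from PostgreSQL via the ORM; hardcoded here for brevity.
      podcast = {"id": podcast_id, "title": "Example Podcast"}
      # The template renders the boilerplate (head tags, nav, footer) server-side
      # and embeds this JSON in a script tag; the React app reads it on mount
      # and renders the interactive parts client-side.
      return render(request, "podcast.html", {
          "initial_state": json.dumps({"podcast": podcast}),
      })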

Audio player

I use a heavily modified version of react-media-player to build the audio player on ListenNotes.com, which is used in several places, including the Listen Notes website, the Twitter embedded player, and embedded players on 3rd-party websites:

Embedded player on 3rd party websites

Podcast API

We provide a simple and reliable podcast API to developers. Building the API is similar to building the website. I use the same Django/Python stack for the backend, and ReactJs for the frontend (e.g., API dashboard, documentation...).

Listen API dashboard
Listen API documentation

For the API, we need to track how many requests a user makes in the current billing cycle, and charge $$$ at the end of the billing cycle. It's not hard to imagine that Redis is heavily used here :)
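
One straightforward way to meter API usage with Redis (a minimal sketch, assuming a counter per user per billing cycle; not the actual implementation) is an atomic INCR on a cycle-scoped key:

  import datetime

  import redis

  r = redis.Redis()  # in this setup, Redis lives on the production-lb box

  def record_api_request(user_id):
      # One counter per user per billing cycle, e.g. "api_usage:42:2019-09".
      cycle = datetime.date.today().strftime("%Y-%m")
      key = "api_usage:%s:%s" % (user_id, cycle)
      count = r.incr(key)  # atomic, so it's safe across multiple api servers
      r.expire(key, 60 * 60 * 24 * 62)  # keep roughly two cycles around for billing
      return count

At the end of a cycle, an offline worker can read these counters and bill accordingly.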

DevOps

Machine provisioning & code deployment

I use Ansible for machine provisioning. Basically, I wrote a bunch of YAML files to specify which types of servers need which configuration files & software. I can spin up a server with all the correct configuration files & all the software installed with one button push. This is the directory structure of those Ansible YAML files:

I could've done a better job in naming things. But again, it's good enough for now.

I also use Ansible to deploy code to production. Basically, I have a wrapper script deploy.sh that runs on macOS:

./deploy.sh production HEAD web

The deploy.sh script takes three arguments:

  • Environment: production or staging.
  • Version of the listennotes repo: HEAD means "just deploy the latest version". If the SHA of a git commit is specified, it'll deploy that specific version of the code — this is particularly useful when I need to roll back from a bad deployment.
  • What kind of servers: web, worker, api, or all. I don't have to deploy to all servers at once. Sometimes I make changes to the JavaScript code, and then I just need to deploy to web, without touching api or worker.

The deployment process is mostly orchestrated by Ansible yaml files, and of course, it's dead simple:

  • On my MacBook Pro, if deploying to web servers, build the JavaScript bundles and upload them to S3.
  • On the target servers, git clone the listennotes repo into a timestamp-named folder, check out the specific version, and pip install new Python dependencies, if any.
  • On the target servers, switch symlink to the above timestamp-named folder and restart servers via supervisorctl.

As you can see, I don't use those fancy CI tools. Just dead simple things that actually work.
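
Put together, the server-side steps amount to roughly the following (a simplified sketch under assumptions: the real work is orchestrated by the Ansible YAML files, and the repo URL and paths here are hypothetical):

  import subprocess
  import time

  REPO = "git@github.com:example/listennotes.git"  # hypothetical repo URL
  RELEASES = "/srv/listennotes/releases"           # hypothetical paths
  CURRENT = "/srv/listennotes/current"

  def deploy(version="HEAD"):
      # git clone into a timestamp-named folder, check out the requested version.
      release = "%s/%s" % (RELEASES, time.strftime("%Y%m%d%H%M%S"))
      subprocess.run(["git", "clone", REPO, release], check=True)
      if version != "HEAD":
          subprocess.run(["git", "checkout", version], cwd=release, check=True)
      subprocess.run(["pip", "install", "-r", "requirements.txt"], cwd=release, check=True)
      # 'ln -sfn' swaps the symlink in one step, making the switch effectively atomic.
      subprocess.run(["ln", "-sfn", release, CURRENT], check=True)
      subprocess.run(["supervisorctl", "restart", "all"], check=True)

Rolling back is then just deploying an older git SHA: the symlink flips to a fresh checkout of the known-good version.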

Monitoring & alerting

I use Datadog for monitoring & alerting. I've got some high-level metrics in a simple dashboard. Whatever I do here is to boost my confidence when I am messing around with the production servers.

Datadog dashboard for Listen Notes, as of Dec 2017.

I connect Datadog to PagerDuty. If something goes wrong, PagerDuty will send me alerts via phone call & SMS.

I also use Rollbar to keep an eye on the health of the Django code; it catches unexpected exceptions and notifies me via email & Slack as well.

I use Slack a lot. Yes, this is a one-person company, so I don't use Slack for communicating with human beings. I use Slack to monitor interesting application-level events. In addition to integrating Datadog and Rollbar with Slack, I also use Slack incoming webhooks in the Listen Notes backend code to notify me whenever a user signs up or performs some interesting action (e.g., adding or deleting things). This is a very common practice in tech companies. When you read some books about Amazon or PayPal's early history, you'll learn that both companies had a similar notification mechanism: whenever a user signed up, there would be a "ding" sound to notify everyone in the office.
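
A Slack incoming webhook is just an HTTP POST with a JSON payload, so the notification code can be tiny. A minimal sketch (the webhook URL is a placeholder):

  import requests

  SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"  # placeholder

  def notify_slack(text):
      # Fire-and-forget: a failed notification must never break the signup flow.
      try:
          requests.post(SLACK_WEBHOOK_URL, json={"text": text}, timeout=2)
      except requests.RequestException:
          pass

  # e.g. called from the signup handler: notify_slack("New user signed up: #42")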

Since launching in early 2017, Listen Notes hasn't had any big outage (> 5 minutes), except for this one. I'm always very careful & practical about this operational stuff. The web servers are significantly over-provisioned, just in case there's some huge spike due to press events or whatever.

Development

I work in a WeWork coworking space in San Francisco. Some people may wonder why not just work from home or from some random coffee shop. Well, I value productivity a lot and I'm willing to invest money in productivity. I don't believe piling on hours helps software development (or any sort of knowledge/creative work). It's rare that I work over 8 hours in a day (sorry, 996 people). I want to make every minute count. Thus, a nice & relatively expensive private office is what I need :) Instead of optimizing for spending more time & saving money, I optimize for spending less time & making money :)

My office at WeWork

I'm using a MacBook Pro. I run an (almost) identical infrastructure inside Vagrant + VirtualBox, using the same set of Ansible YAML files as described above to provision the development environment.

I subscribe to the monolithic repo philosophy. So there's one and only one listennotes repo, containing DevOps scripts, frontend & backend code. This listennotes repo is hosted as a GitHub private repo. I do all development work on the master branch. I rarely use feature branches.

I write code and run the dev servers (Django runserver & webpack dev server) by using PyCharm. Yea, I know, it's boring. After all, it's not Visual Studio Code or Atom or whatever cool IDEs. But PyCharm works just fine for me. I'm old school.

My PyCharm

Miscellaneous

There are a bunch of useful tools & services that I use to build Listen Notes as a product and a company.

Keep calm and carry on...

As you can see, we are living in a wonderful age to start a company. There are so many off-the-shelf tools and services that save us time & money and increase our productivity. It's more possible than ever to build something useful to the world with a tiny team (or just one person), using simple & boring technology.

As time goes on, companies become smaller and smaller. You don't need to hire tons of full-time employees. You can hire services (SaaS) and on-demand contractors to get things done.

Most of the time, the biggest obstacle to building & shipping things is overthinking. What if this, what if that. Boy, you are not important at all. Everyone is busy with their own life. No one cares about you or the things you build, until you prove that you are worth other people's attention. Even if you screw up the initial product launch, few people will notice. Think big, start small, act fast. It's absolutely okay to use boring technology and start with something simple (even ugly), as long as you actually solve problems.

There are so many cargo-cult-type people now. Ignore the noise. Keep calm and carry on.




All Comments: [-] | anchor

OJFord(3422) 3 days ago [-]

> Wait, how about Docker / Kubernetes / serverless? Nope. As you gain experience, you know when not to over-engineer. I actually did some early Docker work for my previous employer back in 2014, which was good for a mid-sized billion-dollar startup but may be overkill for a one-person tiny startup.

I really think the key thing here is familiarity. K8s is a bit different, but certainly in OP's position I (personally!) would be more comfortable with an image for each component. Perhaps a machine image rather than docker, if each component is going to be on its own machine as described, but something at least semi-reproducible for sure.

When I'm working on something alone, and particularly if on and off and not for several hours every day I need to be able to come back to it in a sort of self-documented state that doesn't leave me scared to touch anything lest it crumble.

sieabahlpark(10000) 3 days ago [-]

There's nothing wrong with the concept of containerization and you can use it without much additional headache.

Of course it's not for everyone, but anyone outright saying it's a waste of time hasn't spent enough time with the ecosystem to know how easy it's become lately.

geggam(4219) 3 days ago [-]

Using containers without k8s is much much simpler if you need a container.

Learning system packaging and how to rebuild src packages is much much simpler than managing a container ecosystem

Think about this... if it was so easy to manage containers, why does Red Hat still ship its operating system in packages?

poidos(10000) 4 days ago [-]

As someone still in school, this type of article -- though inspiring -- kind of scares me, to be honest. I don't think I could do any one of the things mentioned in this article.

ashelmire(10000) 3 days ago [-]

As a senior dev: it will take years of experience and constant learning to get to this point, but it is attainable. You don't have to be a master of each part of the stack and each tech within it - you just need to be able to do enough, and solve the problems you face. You can build a professional SaaS product with a simpler stack as well.

You take a project like this one step at a time. Some bits of it are relatively easy - setting up a few postgres servers doesn't take much knowledge. ElasticSearch is a little more obscure, but for the most part, things like this are running a few commands, and setting up a few config files with the help of docs and google. Same for Redis, nginx... etc. Which isn't to diminish devops - you can dive deep into each of these configurations and develop pretty complex setups, but by the time you actually need to you hope to be making enough money to pay someone else to do it.

You won't get everything perfect all the time. You'll have to revisit parts of the stack and tweak them. But you can take it a day at a time and do what you need to do.

mLuby(4011) 4 days ago [-]

Try deploying a simple server (eg a dog API) on Heroku. Then back it with a database. Then add a web or mobile app. Then do it on AWS.

wil421(4074) 4 days ago [-]

From a mid-level dev: you'll learn, and I doubt you'll need to know and understand everything in this article for your first job.

Do you understand what the web, api, and DB servers are doing?

If you're interested in Python and want to start small, I can recommend Flask. Flask is smaller and could be more user-friendly than Django.

Here's a great tutorial. You'll build a blog with Bootstrap, Python, Flask and SQLAlchemy.[1]

[1]https://www.youtube.com/playlist?list=PL-osiE80TeTs4UjLw5MM6...

wenbin(3686) 4 days ago [-]

Hello poidos,

(I'm the author of this blog post)

I couldn't do all these things when I was in school :) I worked in companies for a few years and learned some engineering practices. Then I had basic skills to prototype my own side projects. Then after working on many silly side projects, I started Listen Notes.

And initially, Listen Notes was running on 3 tiny DigitalOcean servers ($5/month each?). I logged in to each server and ran git pull to 'deploy to production'. Then I added things little by little, day by day. It's a process. The key is to get started. People say that showing up is 80% (or whatever percentage) of success. I think this is very true. Just get started and you'll figure things out along the way.

localhost(4055) 3 days ago [-]

This looks like a great service; I just signed up for it and created my own listen later playlist. I spend 6-8h a week on my mountain bike and I use most of that time to listen to podcasts. The challenge with how I do it (using just an Apple Watch) is curating my feed. This is an excellent tool and a far better UI for curating a feed.

I also like the different and varied sources of income that you have, from the transcription service to ads to your API. Seems like you've built a great platform that you can use to experiment with different revenue models.

One additional question - your product is called Listen NOTES. Are you planning on adding note taking functionality to it at some point? One thing that I've always wanted to do was to jot down some set of notes to myself (typically during one of my bike rides). I always imagined that being some kind of voice activated thing, but I'd like that note sync'd to the spot within the podcast that I was listening to (and perhaps transcribed as well). Any thoughts about building something like this?

Thanks again for building this service!

agustif(10000) 3 days ago [-]

You could use Airpods + Siri + Notes.app for that.

csdreamer7(4212) 3 days ago [-]

> Basically, I have a wrapper script deploy.sh that is run on macOS:

I do the same thing on Linux.

build.sh, deploy.sh.

But, I also use a run.sh which runs tasks that I may not remember to run if I come back to software after a while on a certain machine.

Like 'git pull' or 'bundle install'. Then ./dev-setup.sh or whatever for starting up tasks.

I use multiple machines and having these two simple commands saves me so much frustration.

You sit down to begin work. You just run ./run.sh and it will set up a dev instance for you. No sudden alerts that your gem is out of date... or worse... no warnings at all and hard-to-diagnose sudden bugs that steal half a day from you.

anaphor(2897) 3 days ago [-]

I have a .vimrc like this in some of my projects

set makeprg=./build.sh

set autowrite

I've found trying to figure out complicated build tools to mostly be a waste of time unless you really need them. It also encourages you to not do things that would require very costly builds.

tyingq(4179) 4 days ago [-]

Was interesting to read all the sort of 'boring' back end pieces, then see that he went with half-server and half-client rendered React for the front end. Was expecting Angular or similar, given the pattern of the other picks.

eropple(2750) 3 days ago [-]

React is 'boring' now, too. (And good for it.)

karambir(4033) 3 days ago [-]

I thought I was reading my own blog post :) We use very similar tech stack at my current company:

- Ansible for provisioning

- Python/Django for website/api

- VueJS for frontend(where needed, some pages are simple Django templates)

- Celery for background work

- uWSGI and Nginx as servers with AWS Load balancer

- Elasticsearch for search

- Redis for caching

- Postgres with Postgis as main datastore

- Datadog for monitoring

- Cloudflare for DNS

Some differences as I am working with a team:

- We do use multiple branches and git tags for releases. Feature branches are also common, as multiple devs may be working on different features.

- We use GitLab CI a lot for testing and auto-deployment (the ansible script can be called from our machines as well)

- Terraform for infrastructure provisioning. We have stopped provisioning any AWS service by console. Once the service is provisioned by terraform, ansible takes over.

I have tinkered with Docker and HashiCorp Packer, but this setup has been dead simple to reason about and scales reasonably well.

bedros(3438) 3 days ago [-]

Did you do the entire stack yourself? I'm planning on using an almost identical stack, but need some help. Do you have a howto blog, or can you recommend one?

Consider creating a course on Udemy/YouTube on how to set up a production full stack: there are many basic setup videos and tutorials, but none on a production full stack with load balancing, replication, caching, etc.

welder(1550) 3 days ago [-]

What's your website? Mine's also on the same stack (https://wakatime.com). Would love to trade notes sometime.

mordechai9000(10000) 3 days ago [-]

I am curious, do you use Django Rest Framework on top of Django for your API?

khalilravanna(10000) 3 days ago [-]

How do you feel about using a dynamically typed language like Python for all your backend code? Whenever I had a codebase that grew past several thousand LOC it became pretty unwieldy pretty quickly for me personally. I'm curious if there's a conscious tradeoff for people using Python/Django to start because it's really fast to get up and running with (for existing or new devs).

vowelless(3861) 4 days ago [-]

Wenbin, amazing article.

One question: what all do you use contractors for? How has your experience been in managing them?

Thanks

wenbin(3686) 3 days ago [-]

Some examples that I used help from contractors:

1. Built some reusable ReactJs components.

2. Design / illustrations.

3. Proofread website copy / blog posts.

4. Built an experimental app like this one: https://itunes.apple.com/us/app/just-listen-simple-podcast-a...

(There were probably other random things... I'd have to look at the billing history on my Upwork :)

jstummbillig(10000) 3 days ago [-]

Ansible, AWS, SES, React and Cloudflare? Gusto, Notion and tens of different services and integrations? That's boring now?

I was expecting something more along the lines of PHP + a single MySQL machine, plus all the accounting is done on a tablet made of actual stone.

This is not that.

empath75(1877) 3 days ago [-]

I do this stuff for a living, and man, is that a lot of stuff for one person to be responsible for. He could probably be a CTO at a lot of big companies, making more than he's making now.

pikzel(10000) 3 days ago [-]

Agree. Learning all of this as a single developer is not an easy task either. These are real frontend, backend and devops languages, frameworks and tools. It takes time to learn.

mperham(3450) 4 days ago [-]

I love these stories. I'm a one-person company too: contribsys.com.

My production server stack is Apache + some Ruby CGI scripts, to serve static files and handle billing webhooks. I spend less than an hour per week on devops maintenance.

KISS is the #1 principle when scaling a solo operation.

dcsan(10000) 3 days ago [-]

sidekiq! Interesting that wenbin also wrote a task queue system... must be something about that type of engineer that leads to this pragmatic tech strategy.

ravedave5(10000) 3 days ago [-]

Hey, I thought I recognized that domain, yay sidekiq! Keep up the good work. By the way, we're seeing issues with the Contrast Security gem and Sidekiq, but we're pushing on Contrast, not you.

canardlaquay(10000) 2 days ago [-]

That should have been the real 'boring' stack we're talking about here instead of the 20+ technologies mentioned in the OP.

non-entity(3491) 4 days ago [-]

> some Ruby CGI scripts

Didn't Ruby remove its CGI libraries from the standard library somewhat recently? I believe they were mostly helper libs, but I am curious how that works.

wenbin(3686) 3 days ago [-]

Mike! Big fan of yours!

For the very few people who don't know mike or sidekiq, read this: https://www.indiehackers.com/interview/how-charging-money-fo...

tiborsaas(3886) 4 days ago [-]

This is where I'd put a 10x developer :) Complete competency across the whole stack with a solid understanding of a profitable operation.

buboard(3489) 1 day ago [-]

running many different systems is not development

john_moscow(4219) 4 days ago [-]

Yep, learning how to run your own business is the only way to convert 10x development aptitude to a 5-10x pay increase. Otherwise you get 1x pay, 10x the expectations, upset colleagues that feel inferior in comparison, and very little promotion chance because the company needs you right where you are.

ankit70(4195) 3 days ago [-]

Wonderful stuff! I've been trying to learn programming and to be able to code CRUD apps for almost 6-7 years.

I tried to learn Rails (ditched it because JS frameworks are all the rage). Tried learning Flask/Django because it was considered easy. Ditched that too because internet people said it's slow.

I tried learning Go, Phoenix and jumping between what's considered cool in last few years.

And here I am, with no confidence to build a basic simple app. It's been an interesting journey with no luck, because of the constant chasing of 'exciting' frameworks/tools.

aeorgnoieang(4165) 3 days ago [-]

Pick one framework – any is fine – and finish the (or a) minimal version of a basic simple app.

I'd suggest minimizing, or even ignoring, JavaScript. A really basic CRUD app doesn't need any.

ultrasounder(3829) 3 days ago [-]

I started along the same path as you. What I have been experiencing is analysis paralysis. Like the OP says, the trick is to commit to something that gets you from point A to point B the fastest. And that, for the time being at least, is RoR, as it is very opinionated and there are lots of humongous, well-maintained open-source libraries floating around. And last but not least, The Rails Tutorial actually gets you building something useful in no time. Good luck!

squeaky-clean(10000) 2 days ago [-]

I agree with the general consensus: pick one and stay with it for a few projects. If you can't pick one, just roll a die or anything. Maybe pick one a friend knows so you can collaborate in the future.

The differences between languages and frameworks won't matter very much unless you're joining a team with a preexisting stack or an edge case company/application that needs top-of-the-line everything.

Even if your app usage does scale beyond what your first version can deliver, you'll often get a bigger improvement by rewriting your app in the same stack using the new knowledge you've learned, than you would by rewriting the same logic in a newer stack.

ryanolsonx(10000) 3 days ago [-]

My advice would be to quit chasing exciting frameworks.

Pick something that has been around a while (Ruby on Rails, Python with Django, Java Spring) and go with it.

Focus on performance once it becomes a problem. In python, you can rewrite parts in C that need a speed improvement.

Otherwise, focus on making a great product that solves a problem for your users.

aantix(4121) 4 days ago [-]

Picking your stack/architecture based on team size is something most engineers miss.

Microservices, great for companies with many teams. Not so much when it's three people scrambling to create something meaningful. Monolith all the way.

eloff(3898) 4 days ago [-]

It's funny because microservices are explicitly targeted at solving problems for development with many teams, and lots of single team companies cargo cult them. Did they miss the first paragraph when they were reading up on what microservices are?

test2016(10000) 4 days ago [-]

Similarly, Satoshi Nakamoto was one person behind Bitcoin.

tim333(1252) 3 days ago [-]

Well, plus Hal Finney.

mullen(10000) 3 days ago [-]

Technically, you don't know that, since we don't know who Satoshi Nakamoto is. Satoshi Nakamoto could really be 10 people, and neither you nor I would know.

ademup(10000) 4 days ago [-]

This seems super complex to me. I am a single dev who runs a $30k/mo software-based company off of PHP+MariaDB+Bootstrap+jQuery+a few other plugins. Hosted on a managed HIPAA setup. The firewall+app+DB servers run me about $550/month and I have excellent support. I spend effectively 100% of my time on business logic/UI and zero time keeping up to date on infrastructure (and learning it). Which means my customers benefit from me fixing problems and adding features. Kudos to what works for you... super great... And, for me at least... it is even more 'boring' and awesome.

buboard(3489) 3 days ago [-]

Similarly, making half that and using dedicated servers. When you're a solo dev you don't really have time to risk learning unproven tech or tech that doesn't scale. And as a dev I like writing code that does new and interesting things, not learning tools and other people's APIs. The amount of services this guy has to manage is mind-boggling, and unfortunately I'm simple and stupid. I guess I'm a hermit dev, but thankfully I'll never need to work for others again.

dtien(3288) 4 days ago [-]

Do you mind sharing what the managed HIPAA setup is? What vendor do you go with for that?

I've always wondered about how easy it would be to set up a SaaS that adheres to HIPAA.

riku_iki(10000) 3 days ago [-]

> Kaiser Permanente for health insurance.

Super important part. Any details about how much it costs for a single-person company? :-)

chasd00(4221) 3 days ago [-]

yes, i'm very interested in how they did health insurance

wenbin(3686) 3 days ago [-]

Sure. It's around $700/month for me + my wife.

I wish I could've known more practical info about starting a company before I quit my day job, e.g., how much insurance costs, what company credit card to use, how to pay taxes, where to find lawyers... Online articles / advice are mostly focused on the big picture or very abstract concepts or fortune-cookie-type words :(

amdavidson(10000) 3 days ago [-]

It's pretty straightforward to get a quote, is it not?

https://individual-family.kaiserpermanente.org/healthinsuran...

cagenut(10000) 4 days ago [-]

key missing info though: does it make money? I checked the site and it looks like a mix of ads (that are blocked of course) and a patreon with 5 supporters.

wenbin(3686) 4 days ago [-]

I'm wenbin, the author of this blog post & I built Listen Notes. Thanks for bringing up this question :)

Yes, Listen Notes is making some money - not a lot, but enough to cover all the costs and bring in a bit of profit as of today.

The basic idea is that Listen Notes should be free to 99% of users, while making some money from 1% of super users.

We run ads (obviously) on the website and we provide an API: https://www.listennotes.com/api/pricing/ And I've been experimenting with some paid features that are needed by PR/marketing/journalists to do their jobs, e.g., https://www.listennotes.com/datasets/

And today is special, because on Sep 16, 2017 (Exactly 2 years ago today), I started to work on Listen Notes full-time!

iMage(10000) 4 days ago [-]

He says that he uses Stripe to get money from his users, specifically noting the API as a source of income.

Here is the pricing page: https://www.listennotes.com/api/pricing/

ttul(10000) 4 days ago [-]

And does it make enough money to cover the time when it wasn't making money? As in: was the time spent bootstrapping worthwhile? The concept of a one-person company is extremely appealing in its simplicity and apparent lack of risk; however, the devil is in the details.

huangc10(3240) 4 days ago [-]

I'm going to go out on a limb here and confidently say he's doing fine. As he mentioned, he rented an expensive one-person WeWork office in SF (when he doesn't even have to).

Beefin(4080) 4 days ago [-]

Same question I had. Startup = Money generator, so it's important to ask.

Looks like it's an ad-driven revenue model, and it looks like it gets ~ 1m views/month: https://www.similarweb.com/website/listennotes.com

Assuming $1 per 1,000 impressions, we get $1,000 / month.

That's purely the website. I can't speculate on the API side of his business though.

twox2(10000) 4 days ago [-]

Actually it's not the missing info... because it's none of your business. It's a technical post about his solo founder and operator DevOps practice.

ydnaclementine(4208) 4 days ago [-]

The only question I can think of that I didn't see an answer for: is he running one Postgres DB, shared between the web and api services? Multiple services with only one DB gets hairy fast, with questions like: who owns the migrations? Does the migration-owning service have to restart the other service so the other service gets the new DB column/table? Etc.

Unless the web also uses the API service to get its data. Really great article.

aprdm(4022) 3 days ago [-]

Likely the web app uses the API to get its data; the post hints at it when he says he uses the same APIs the customers do from his frontend.

czbond(4189) 4 days ago [-]

As a software SaaS CISO who also pentests and determines partner risk, I take the approach of not sharing such in-depth details. Articles like this are fantastic fingerprinting recon for those who look to compromise sites.

crispyporkbites(4165) 3 days ago [-]

Pretty sure a blackhat with nmap and a few hours would be a lot more effective than trying to glean something from this blogpost.

quickthrower2(1314) 3 days ago [-]

Security by obscurity?

reilly3000(4198) 3 days ago [-]

I've often worried about that; in fact, it's really kept me from blogging about our infrastructure at all. Am I too paranoid? I've often thought that if I were to do so, I'd set up a honeypot; but who has the time for such games?!

phalangion(10000) 3 days ago [-]

I'd love to know how you're running Django and React together. I've been trying to figure out how to make that combo work in a mono-repo, and I'm definitely missing something. Any advice?

yagodragon(2679) 3 days ago [-]

I'm also struggling a lot with django, react and the whole asset pipeline. On the other hand, frameworks like rails and laravel have it all figured out for you.

phy6(4215) 3 days ago [-]

I just watched 8 hours of Django videos yesterday, and 4 hours of that was a Django + React tutorial. It covers in great detail the CRU from a CRUD, but you'll need additional videos to cover deployment on elastic beanstalk or ec2+elb, etc. https://www.youtube.com/watch?v=AHhQRHE8IR8

mattmar96(10000) 4 days ago [-]

Just because I saw you (wenbin) in this thread I thought I'd be helpful:

On https://www.listennotes.com/api/pricing/

There's a typo: 'Instantly access to 771,769 podcasts'

Instantly -> Instant

Really enjoyed the post, thanks for sharing.

giarc(10000) 3 days ago [-]

Also on that page...'No need credit card' should read 'No credit card needed'

jbverschoor(3393) 3 days ago [-]

That's not a typo. They are two different sentences with slightly different meaning.

Access as a noun or access as a verb.

avip(4083) 4 days ago [-]

First, that's a great stack and very well written/presented.

One comment - he dismisses serverless as being overengineering. I think the correct POV, moreso for the single-man company, is that running a server to perform a task is the overengineered option.

One can see from the snapshot the servers are indeed severely overprovisioned and underutilized. Building an api with api-gateway + lambda is less work than running django in uwsgi behind self-managed nginx, and is guaranteed to be more cost-effective for unpredicted load.

Same logic applies to the db servers - why not hosted?

And last - the infra is a good reminder that prefixing your API routes with /v1, /v2 is always a good habit.

peterwwillis(2589) 3 days ago [-]

If serverless were easier, more people would be using it. But it's not simple or straightforward. You have to learn new systems and conventions, it has a bunch of weird considerations depending on the use case, and most people just use it when they don't want to figure out what instance to run some periodic, one-off job on.

It's a niche, just like all solutions that aren't a single Unix process on a single Unix box. Even CGI scripts are a niche. You pick the niche that you know.

SPBS(10000) 3 days ago [-]

I don't know if overprovisioning servers to counter traffic spikes is overengineering; the mental model is pretty simple. It may not be as cost-effective or infinitely scalable, but it's simpler to wrap your head around.

spookthesunset(10000) 3 days ago [-]

As a team of one, you have to be very judicious with your time. You often have to have a philosophy of 'if it ain't broke, don't fix it'. Yeah, they could burn a week or two switching over to Lambda and whatnot, but is it going to have a higher ROI than all the other things they could be working on?

Ask me how I know. When I was a solo dev doing my own thing, I'd spend way too much time working on things that really wouldn't affect the business but were 'good engineering things' to do. If I had spent more time working on things that would grow the business instead of wasting weeks writing fancy deployment scripts, maybe I'd still be doing my own thing now!

RX14(4049) 4 days ago [-]

Someone who has been doing this for years, since before serverless existed, can set up and manage this infrastructure trivially, using almost no time compared to developing the application and the business. It also provides you with security: you know the pitfalls and the problems with hosting these; serverless is another new technology which can go wrong.

lacampbell(4215) 3 days ago [-]

> I think the correct POV, moreso for the single-man company, is that running a server to perform a task is the overengineered option.

I was all gung-ho about serverless for a while. I wanted to release a demo for my product and thought I'd cut through all the hassles of managing my own server.

I found it bewildering. It was a whole new skillset with new benefits, but also new considerations and headaches. When push came to shove and the clock to release my demo started ticking down, I just went back to a linux server.

I use the same Linux distro at home and on the server, and there are about 3 technologies I need installed. In retrospect I think I made the right decision, but I'm happy to have my mind changed.

cuu508(3710) 3 days ago [-]

> Building an api with api-gateway + lambda (...) is guaranteed to be more cost-effective for unpredicted load.

Depends on the use case. I run a cron monitoring service on a similar nginx/uwsgi/django/postgres stack [1]. My service needs to handle lots of really small and simple HTTP requests, and almost every request needs to do a (small and quick) database write. I did napkin math – at the current usage levels, Lambda per-request fees alone would use up significant chunk of my current hosting budget.

[1] https://blog.healthchecks.io/2019/08/a-look-at-healthchecks-...

nwsm(3931) 3 days ago [-]

> lambda is less work than running django in uwsgi behind self-managed nginx

If you discount building an AWS-specific deployment process that includes 'pip install' from an AWS linux machine image, zipping the project, and putting it in S3.

wenbin(3686) 3 days ago [-]

Good point! For people who have tons of experience with serverless, serverless is probably a better choice than running servers for some use cases.

As a small business owner, there are two types of cost that I need to consider:

Time: the time I use to do A is the time I can't use to do B. Unfortunately, I haven't used serverless so far in my professional career -- in this sense, I'm not full-stack enough :) It takes time for me to learn it, understand it, operate it, and experience various outage scenarios to gain the true learnings. It's more costly for me (probably not for others) to use serverless than the things that I already understand. I'd rather spend more time on other, non-engineering things nowadays -- believe it or not, I spend 1/3 of my working hours replying to emails :)

Money: the money I spend on A is the money I can't spend on B. I decided not to use api-gateway + lambda & hosted db servers, primarily because of $$$. I actually did the pricing calculation a few times last year. In addition, api-gateway + lambda also require some time for me to learn, which I should use to talk to users, do marketing, build new product features, and think (yep, thinking also uses some time budget :)...

jbverschoor(3393) 3 days ago [-]

The lb, 2 web and 3 api frontends?

You'd also see that 7 of his 8 worker boxes are almost at 100%.

pier25(3433) 3 days ago [-]

Completely agree. Cloud functions are in many cases a better option than maintaining a server, especially for those background tasks that fire occasionally.

The only issue I've found about using serverless is the database. In most cases (Firebase, Fauna, Cosmos, DynamoDB) you have to couple your stack to the DBaaS provider which is not a great idea. AWS recently announced Amazon Aurora PostgreSQL Serverless but while it allows you to use regular Postgres tools/queries you are again tied to AWS.

seriocomic(10000) 3 days ago [-]

I take (very minor) issue with your first line and the point 'well written' (I do agree with well presented).

Since the author is reading/commenting here, and there was a large amount of space in the original article outlining tools/services he uses, can I humbly suggest they use a tool like 'Grammarly' or similar to help with the word-choice(s)?

Some distracting use of plurals for terms - e.g. 'traffics', 'stuffs', etc - may have been avoided and other spelling and grammar aspects could have helped make this easier to read. That all being said - the 'essence' of the article is to be commended.

zippergz(10000) 4 days ago [-]

Yes, and no. If you already have a lot of experience building apps that run on servers, there's a learning curve to switching to serverless. Is it huge? Not really. But there are certainly pitfalls and best practices to learn about. The costs can be harder to predict, especially when starting out. And the tooling is different (and much less mature). So now you have a bunch of stuff to learn about or consider, or you can just go do the same thing you already know how to do with minimal friction. It's possible that the cost savings of not overprovisioning servers is worth it, but I don't think it's that straightforward of an answer, and if your server costs aren't massive, you might be better off spending your time building a great product than learning a new way to build.

fovc(4138) 4 days ago [-]

Depends on what you're familiar with. If he already knows Ansible inside and out, why risk getting stuck with some undebuggable AWS failures?

As a solo entrepreneur I can say time risk is a crucial thing to be mindful of. I'll take 10h +/-1h vs 5h +/- 15h any day.

chillfox(10000) 3 days ago [-]

I think the far better consideration is picking the tech stack that requires learning the least amount of new skills.

aprdm(4022) 4 days ago [-]

That's awesome and it is a tech stack that I try to mirror and am confident running myself as well!

It's incredible the amount of knowledge required for a single person tho when you think about it eh? It's the full frontend (which I would have more trouble with) + databases + caches + search engine + metrics + deployment + source control + sysadmin all baked in a single person who is also trying to make it a business!

Kudos for the effort and making it happen, one day I might be joining the same journey with the same stack! Just gotta figure out what actually motivates me to build a business on top of =)

mtnGoat(10000) 4 days ago [-]

As someone who has only worked on small teams, I thought this was normal. My current position is leading a small team, but we interact with a number of teams inside FANG companies, and it's amazing how limited the knowledge and access of each position/person is. Most of my engineers can run circles around our partners.

I always thought every engineer should know how to deploy a server, install deps, understand caching, etc and setup an app... turns out, that is apparently not even remotely expected at most companies. I guess the bigger the company the more narrow the skillset required.

grepfru_it(10000) 3 days ago [-]

I finally created an account because your comment resonated with me. I have created 95% of my platform by myself, which itself was the manifestation of several business ideas. I started out buying servers and starting a dedicated, then VPS, then shared hosting farm, which requires all of the frontends you mention. I took a different approach: I went all-out open-source and spent time creating glue and flashy Bootstrap frontends to orchestrate everything reliably.

I currently run the remains of my companies as a lab that is spread out across a few datacenters and provides a UX where anyone can request a VM, launch a container, or drop a php/java war/RoR/django/etc onto a custom app server with varying security restrictions. You can request a service/VM/container by API, by chat, or through any other host of events via my half-baked event controller and change mgmt database. In a lot of cases, changes are a two-way street. You can modify e.g. a bind zone file and that will reflect upwards in a CMDB, or vice-versa, and watch the zone file update automagically. The original idea was to allow mixing sysadmin strengths and still maintain a reliable complex system.

So now I have a platform that spans multiple datacenters, uses infra-as-code as you would expect (supporting another cloud provider is simply adding glue to their APIs), has load balancing and SSO, and it's just literally sitting on the sidelines exhausting the remaining budget until I finally get tired and liquidate it all. The motivation to build a business on it is so tiny after years of failed attempts and seeing the shared cloud model completely destroy the ROI of holding hardware. I can and have built e.g. fleet tracking services. I have gobs of storage, so I run an object store for giggles. But I have no clue how to generate revenue from these ideas when the market is already saturated. My last-ditch idea is to create a learning ground for the public: training on how to build apps that scale, manage systems at scale, and a real-world environment for folks who may otherwise not work at an organization with more than 100 servers. *shrug* Until then I chop-chop away at my day job :)

WheelsAtLarge(3016) 3 days ago [-]

Well, I'm super impressed. I wish I could do half of what this guy can do.

But he needs to be careful not to overdo it. It's fun and exciting to get all this tech up and running. But at some point, it becomes a drudge. And burnout is right around the corner.

I would say that he should farm out as much as possible and focus on marketing and sales, which are the drivers of most companies' continued success. In a sense he has, by using the cloud, but what he's doing is way too much for one person.

I used to be a laid-back kind of guy and would get irritated when I was hurried and there was clearly no impending death. But now I understand that the limit lies within us. At some point, we all give up. There are those who take a long time and those who give up relatively fast, but giving up is part of the process. So we are in a hurry to get as much done as we can before we decide to stop what has not been successful. By putting so much burden upon yourself, you make it so much more likely that you will give up before you find success.

wuliwong(3817) 3 days ago [-]

> he's doing is way too much for one person

You have no clue what is 'too much' for the author of this post.

goodroot(3946) 4 days ago [-]

Whoa -- is that a boring stack nowadays? There are many great cutting edge tools in use. Humble fella.

rhizome31(4036) 3 days ago [-]

My thought exactly. I was expecting good ol' LAMP or something along those lines. I think here 'boring' means 'has been released for more than a couple of years'.

eruci(4187) 4 days ago [-]

I'm a one-person company too (geocode.xyz). My tech stack is even more boring than that (Nginx, MariaDB, Perl on AWS EC2 Linux instances). I don't have an office either.

mzkply(10000) 3 days ago [-]

Is your site down?

peterburkimsher(3328) 4 days ago [-]

This is completely off-topic, but thank you for running geocode.xyz! I'm a very satisfied user :)

jcroll(4030) 3 days ago [-]

I ran a small company like this quite successfully earlier in my career. If you think this is awesome, what goes unmentioned is how lonely it can get. Also, any issues you have (business or technology), you bear the brunt of alone. A coworking space doesn't really help either, imo; if you like working with others, you will miss having coworkers. Just something I think is worth mentioning if this is something you might want to pursue.

brailsafe(3693) 3 days ago [-]

I'm kind of experiencing this right now, as a remote worker who's working on a thing solo. Feel like I need to dip back into a team for a while.

jv22222(778) 3 days ago [-]

The part that I find the loneliest is making all the core technical and business decisions by myself. I really miss being able to bounce ideas off a partner.

Sure, you can do it with friends etc. But at some point you just become a menace, asking them to think about your stuff so much.

grepfru_it(10000) 3 days ago [-]

Are there any groups out there for the small technology business owner? I'm extremely hesitant to ask about this because it's a fine line between brainstorming technology problems with like minds and getting owners who want you to problem-solve for them.

cattlefarmer(3690) 3 days ago [-]

Yes, the worst thing is the huge amount of decision making, even over very minor details, that has to be done all the time. Coding and maintaining the servers isn't even that difficult. It's all the little stuff, like scheduling with clients/vendors/suppliers, wondering when is a good time to chase that invoice, which words to change in your proposal for this client, and so on. If you have employees: how to manage them, how to review their work, how to mentor them. It really is death by a thousand paper cuts.

jspdown(10000) 3 days ago [-]

Some people enjoy being alone, others don't. You don't know which group you're in until you experience it. Unlike you, I wasn't running my own business; I was just working remotely for a company whose HQ was on a +12h timezone. I felt the exact same way, even though I was working from a coworking space.

laurex(122) 3 days ago [-]

I know a few people who have entrepreneur Marco Polo support groups - asynchronous, but still seeing actual faces and human expressions.

buboard(3489) 1 day ago [-]

Artists and farmers have lived like that for centuries. I don't think it's so uncommon that we have to start warning people about it.

drusepth(4083) 3 days ago [-]

My company's Slack is just me and ~12 Slack bots all doing various things (reporting subscription changes, feedback, bug reports, error alerting, CI status, etc). It's amazing to look at and functions immaculately, but occasionally it serves as a grim reminder of just how alone you are being on your own.

Especially when you think 'this all works great, but would still work equally great with another human or two here.'

W-Stool(10000) 3 days ago [-]

Years ago Joel Spolsky on his site 'Joel on Software' had a 'Business of Software' forum where a lot of very small ISVs hung out and discussed items of mutual interest. Anyone know where those kinds of folks hang out now to discuss things?

bythckr(4219) 3 days ago [-]

'I ran a small company', what happened? Why did you stop or did it get stopped? What was the reason?

One man show / sole proprietorship is great, but its bad for family life. It's 24x7. Atleast thats how I saw my dad, dining room was the company meeting room and we all are unpaid company 'helps',assisting our Dad support the family. This is the main thing that holds me back from being an entrepreneur (sole proprietorship 2.0).

wenbin(3686) 3 days ago [-]

(author of this blog post here)

The hardest part of building this business is staying motivated for a relatively long period of time. I'm still early in this startup journey. This is only the 2nd year of me working full-time on Listen Notes.

I think it would be helpful to surround yourself with like-minded people (online or offline) -- we are social animals. Indie Hackers is pretty good: https://www.indiehackers.com/

I live in San Francisco and I used to work for companies, so at least I can often hang out with some friends/former coworkers who are doing startups or working in tiny startups.

In my coworking space, people from different companies rarely talk to each other...

jith(10000) 3 days ago [-]

The lonely aspect is incredibly difficult, especially if you're an extroverted person who draws their energy from collaborating with others. I also find the lack of accountability to others hard. I know there are services that cater to this type of founder/freelancer.

cik(3949) 3 days ago [-]

Mind expanding on the coworking bit? I always think about hitting up a coworking space just to have some human contact when I'm working on a project. I've yet to actually do it, however.

mrlala(10000) 3 days ago [-]

Agreed. I'm part of a two-person team, but the other person is mainly on the business development front, and I'm the main designer/coder.

It's so hard sometimes. No one to really bounce ideas off of; every little thing you have to do yourself. Staying inspired to work is sometimes difficult; because you are literally doing everything yourself, it just gets so overwhelming.

spiderfarmer(4204) 3 days ago [-]

This feels like bragging, but I also run a one-man company and the loneliness is what I enjoy most. Granted, I only work from 08:45 to 14:30 and some hours in the evening, because I bring my kids to school and pick them up in the afternoon, but in the hours when I'm alone, I am 100% productive. When the kids are at home I mostly do some kind of manual labor to get some sort of exercise. Right now I'm building a shed. Last year I built my home office/guest house.

To make up for the missing social interaction I play football with friends, do some consultancy jobs on the side and I help where I can with my son's football club.

This really is my dream job and I enjoy it thoroughly. I have no problems staying motivated and every day I have loads of inspiration.

The moment I lose motivation will be a sign for me to sell everything and start something new. I ran an online marketing company before this and that was really exhausting for an introvert like me. Selling that company was the best decision I ever made, apart from marrying my wife.

AimForTheBushes(10000) 4 days ago [-]

I really like the idea of monolithic repos but can see some downsides when there is more than one person working on a project. It would be cool if there was a simplified way to have an entire business operate under source control.

themacguffinman(10000) 3 days ago [-]

> It would be cool if there was a simplified way to have an entire business operate under source control.

A lot of projects can and do apply CI tooling to achieve this. Every commit to a branch triggers a set of declarative deployment pipelines, simple. IIRC buzzword is 'GitOps' if you want to find out more.

tobib(10000) 3 days ago [-]

This interests me. Can you elaborate on what downsides you see? Even in a small team we're often working within the same repo at the same time without any issues.

halfjoking(10000) 3 days ago [-]

That's not boring - it's a professional dev tech stack.

Boring tech would be a million dollar business running on Wordpress.

chillfox(10000) 3 days ago [-]

There are plenty of those; they just don't have anyone tech-savvy enough to be on Hacker News.

_august(4178) 3 days ago [-]

Also running a solo company - https://fitloop.co

I'm primarily a front-end dev, so I keep things pretty simple on the back-end side.

Stack:

  Meteor, managed hosting on Galaxy
  MongoDB, hosted on Compose
  React
  GraphQL / Apollo API
stanislavb(2597) 3 days ago [-]

Nice! I've bookmarked it on https://www.saashub.com. You should verify it there :)

paxys(10000) 3 days ago [-]

Are there no load balancer replicas? It looks like a single point of failure for the entire service (and it runs Redis and RabbitMQ - yikes).

Scarblac(10000) 3 days ago [-]

There's also one person who is the whole company; a single point of failure comes with the territory.

peterwwillis(2589) 3 days ago [-]

The author can just run an Ansible job and recreate it instantly. It's a stateless EC2 node. If they wanted to be super duper fancy, they could make an autoscaling group to destroy and recreate it if it went down. No need for a replica. This probably isn't a lose-$100K-every-5-minutes-of-downtime business.

aussieguy1234(4118) 3 days ago [-]

Here's my boring technology.

I've built Libr (https://librapp.com), a full social networking platform designed to fill the void Tumblr left behind.

The hosting is serverless for both the front-end assets and the back end: no servers to maintain, and infinitely scalable.

Libr has a front-end progressive web app built with VueJS, hosted on S3 with CloudFlare in front.

The back end API is hosted on Lambda. By using serverless framework things are portable. I could easily migrate to another serverless provider or even host my own.

Both the front and back end use the same programming language, TypeScript.

The database is Elasticsearch, just like WordPress.com (they don't use MySQL for most things).

Each month, I receive a nice little invoice from AWS for under $1 for the hosting. So far CloudFlare is free for me.

I built everything by myself, with no team. I have had input from Libr users on feature requests and UI improvements.

ultrasounder(3829) 3 days ago [-]

Is your site down? I am unable to access it right now.

tnr23(10000) 3 days ago [-]

Is this seriously considered as being a boring stack nowadays?

marknadal(2846) 3 days ago [-]

How big is your team? Mind disclosing what the SaaS is? I'd love to learn more.

urda(4202) 3 days ago [-]

What SaaS do you run?

faizshah(4086) 3 days ago [-]

Yea, I feel like an actual boring stack would be LAMP w/ jQuery & Bootstrap on some hosting provider (and the blog would be written on WordPress).

The stack described in the OP is a fairly modern stack...

meesterdude(3521) 3 days ago [-]

I was honestly expecting a stack along the lines of yours - the one in the article is actually fairly complex and not boring.

imetatroll(10000) 3 days ago [-]

These success stories always pain me a bit. I tried for some time to get my own project going based on personal interest, but it never went anywhere. I have people visit the site, but retention and use are a problem. https://imetatroll.com

- golang, gopherjs, bootstrap, DO, and kubernetes.

fuball63(10000) 3 days ago [-]

I think this looks cool, and I feel your pain on the retention/use problem, as I have also made a game that requires multiple people to create value.

I never found a way to promote it to groups of people instead of individuals. Even in a meetup/expo setting it's hard to get critical mass and momentum.

jedberg(2257) 3 days ago [-]

I'm impressed with the profit margins. It looks like one of the main sources of revenue is transcription. From what I can tell the price seems to be about $4/hr.

OP said they are on AWS. If they're using Amazon Transcribe, the cost for that is $1.44/hr.

A 150%+ profit margin is awesome! Well, for that one piece of the service. Obviously at least some of the cost of the website and database, etc. must be included, but still. Pretty amazing.
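
For what it's worth, the "150%+" figure reads as markup over cost; as a share of revenue the margin is about 64%. A quick check, assuming the $4/hr price and $1.44/hr Amazon Transcribe cost quoted above:

  revenue, cost = 4.00, 1.44                # $/hr figures from the comment above
  markup = (revenue - cost) / cost          # profit relative to cost
  margin = (revenue - cost) / revenue       # profit relative to revenue
  print(f"markup {markup:.0%}, margin {margin:.0%}")  # markup 178%, margin 64%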

tudorpavel(10000) 3 days ago [-]

The post mentions using Google speech-to-text for transcription which seems to be even cheaper?

I'm not sure where you got the $4/hr price, but indeed kudos to OP for building a business around reselling cloud services for profit.

rsp1984(3745) 3 days ago [-]

A super interesting and refreshing read, especially since I am not very knowledgeable about web technology and most of the talk these days is about the latest fancy framework or database.

I am curious though, since I am using Google Cloud (App Engine in particular) for most of my company's modest backend needs: Would Google AE be able to handle all these backend requirements as well (but obviously without all the configuration and setup required)? Or asked another way: When is the point when you should move away from something easy and low-hassle such as GAE to something more advanced that requires a bit more manual configuration like setting up your own AWS servers?

Not trying to be critical, just honestly trying to learn from folks that know better than I.

ericlavigne(4155) 3 days ago [-]

The main idea of this article is to use what you already know and just get it done, rather than experimenting with new tools. The author already knew how to use most of these tools from a previous job. He stuck with the technology that he already knew, so that he could focus more on business aspects rather than new technology.

In that spirit, you should use the App Engine that you already know for as long as it seems to be working well for you. When you run into a problem that can't be solved in App Engine, that is the time to ask for advice on how to solve your problem.

hacker-gene(10000) 3 days ago [-]

Well, on the frontend he uses React/Redux/Webpack, which is more of the trendy, latest-and-greatest variety. If he had used Django templates + jQuery, that would have been staid/boring tech. Great post though, aspirational for those of us who wish to found our own startups one day.

dangoor(3811) 3 days ago [-]

I disagree. React has been out for 6 years. It most definitely can prevent bugs that were common with the jQuery approach.

abiro(4155) 3 days ago [-]

Security concerns are always conspicuously lacking from this type of post and discussion. If you're a solo dev looking to start a project, please don't take OP's stack as a positive example. Use managed services whenever possible; they will at least keep you patched and simplify intrusion detection when configured correctly.

wolco(3683) 3 days ago [-]

Don't throw your money away for that reason. Keeping up to date is easier than ever.

whalesalad(351) 3 days ago [-]

Technology is usually just a means to an end. Unless IP is what you are selling, boring is great. I've seen SO many teams burn SO much energy on complicated stacks just to drink the kool-aid. It's mind-bogglingly frustrating, especially as a contractor. At the end of the day it's great for me: I get brought into shit shows to clean up the mess. But deep down, I want projects to succeed, and clean/sound systems architecture is how you do that. Doesn't matter if it's PHP, Python or Java.

It hurts to see people continue to make mistakes over and over, so I'm working on a new website and series of engineering posts to help share my approach to a lot of these problems.

Any product I start building usually begins in Rails. React is great. Vue is great. It's not necessary; good ol' request/response is just fine. You don't need a service mesh. You don't need Kafka. You add that stuff later when it's required... if it's required. Rails can't be beat for startups. I wouldn't waste any time on a single page app; it's a completely pointless endeavor unless you have proven traction, users, revenue etc... and can afford to do it correctly.

chrischattin(10000) 3 days ago [-]

Yes! So much this. Every startup I've worked with regretted choosing a trendy tech stack early on. Speed of development is BY FAR the most important thing, and the SPA/JavaScript-framework-du-jour trend of the past few years is antithetical to rapid iteration. As a consultant, I'll happily bill hours to untangle that mess, but it's very frustrating to see startups with lots of potential spinning their wheels with unnecessary bloat on the front end.

qaq(4209) 3 days ago [-]

I think Phoenix LiveView offers a very compelling value proposition here. You are basically writing a regular server-side app that will perform on par with an SPA for most use cases.

phlakaton(10000) 3 days ago [-]

However, the post indicates React + Redux for the front-end. Perhaps it's my bias as a back-end curmudgeon, but those strike me as _unboring_ choices in the front end, particularly if you're not targeting SPAs.

My point being: one person's 'boring' could be another person's anything from 'clunky and archaic' to 'faddy' to 'efficient and sensible.' Not even Rails, built upon Ruby's idiosyncratic blood magick, is an entirely uncontroversial choice.

theonething(10000) 3 days ago [-]

My first thought was 'I wish the startup I'm working for would read this'. My second, 'eh, wouldn't change anything.'

> so I'm working on a new website and series of engineering posts to help share my approach to a lot of these problems.

Do you have a mailing list or something to get on to get updated on this?

NohatCoder(10000) 3 days ago [-]

I wonder what is wrong with single page apps? As I see it you can largely write the same code; you just have the additional option of modifying the page on the fly.

UserIsUnused(10000) 3 days ago [-]

I would go with Spring Boot. Sure, Rails is a faster start, but when it grows it becomes a mess. Spring Boot gives you the fastest start in JVM-land, and if your product is successful and you need things to scale, or features are getting complex, you still have the Spring framework underneath that you can manage. Boot is almost just defaults.

But Spring Boot wouldn't exist if Rails hadn't changed the landscape of web development.

brillout(10000) 3 days ago [-]

What is it that Rails has that Node.js doesn't?

Node.js is missing a great ORM but other than that I don't see many things missing in Node.js anymore.

There is also Rails' admin panel but most projects don't need that.

dx034(3658) 3 days ago [-]

The only problem I see with Rails is that it's hard to find devs these days (at least in Europe). Even if they could learn Rails for that project, many I've spoken to wouldn't want to take a job where Rails is required. I just witnessed a project switching from Rails to React (for the frontend) because they just couldn't get enough good people willing to use Rails.

mattbillenstein(4213) 3 days ago [-]

If you do write something up, I'd like to read it - solid analysis re systems architecture et al.

rpedela(4217) 3 days ago [-]

I agree with your overall point, however I disagree with the 'just pick Rails and don't do SPA' part. For startups or any small team, the right approach is whatever boring technologies the team is comfortable with. If that is Rails, great! I personally don't know it so that would be a horrible choice for me. Likewise SPAs are comfortable and easy for me (aka boring), but a bad choice for someone else who isn't comfortable with them.

kabacha(10000) 3 days ago [-]

Why is HN so against people who _enjoy_ engineering? Why should I run some run-of-the-mill stack that has enormous legacy cruft and just isn't 'fun'? Not every business wants to be a Silicon Valley-optimized money farm; some people want to do some enjoyable work.

chopete(4218) 3 days ago [-]

>> It hurts to see people continue to make mistakes over and over,

It's because

https://vimeo.com/76499047

wuliwong(3817) 3 days ago [-]

Thanks for writing this article @wenbin. I'm about to launch a new startup myself and it is VERY boring tech. :p I'm running Rails with Sidekiq on Heroku, so I think I have you beat in the boring department. I do a lot of fancier stuff at work but I just don't need it for this product (at least not yet).

I am the only engineer but two non-technical friends (a designer and a lawyer[0]) make up the rest of the company.

[0] the original idea was the lawyer's. :)

ultrasounder(3829) 3 days ago [-]

Hi, I am currently getting started with the Rails stack too. What does your FE stack look like?

Thanks, Ananth

alexbecker(4044) 3 days ago [-]

I run a similar, maybe even more boring stack for my less-than-one-person company [PyDist](https://pydist.com):

- PostgreSQL database

- Nginx proxy in front of Django apps for UI and API servers (I use gunicorn instead of uWSGI though)

- Cron jobs which invoke django-admin commands to keep the PyPI mirror in sync

Perhaps the only place I'm any fancier than OP is that my deploy script is in Python, not shell, since any time I try to write a shell script with even slightly nontrivial logic it falls over and catches fire :)
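
In that spirit, here is a minimal sketch of the pull-migrate-restart pattern as a Python deploy script; every path and service name below is illustrative, not PyDist's actual setup:

  #!/usr/bin/env python3
  import subprocess
  import sys

  APP_DIR = "/srv/app"                  # hypothetical checkout on the server
  SERVICES = ["app-web", "app-worker"]  # hypothetical systemd units

  def run(*cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, cwd=APP_DIR, check=True)  # abort the deploy on any failure

  try:
      run("git", "pull", "--ff-only")
      run("pip", "install", "-r", "requirements.txt")
      run("python", "manage.py", "migrate", "--noinput")
      for svc in SERVICES:
          run("sudo", "systemctl", "restart", svc)
  except subprocess.CalledProcessError as e:
      sys.exit(f"deploy failed: {e}")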

welder(1550) 3 days ago [-]

What's your experience with gunicorn instead of uWSGI? I'm using haproxy + nginx + uWSGI, but I'm wondering if gunicorn can handle more concurrent connections than uWSGI. My bottleneck isn't CPU; it's the number of open connections uWSGI can handle at once.

Here's a trimmed down version of my web configs: https://wakatime.com/blog/23-how-to-scale-ssl-with-haproxy-a...

dcsan(10000) 3 days ago [-]

It seems wenbin is also the lead developer on ndkale, https://github.com/Nextdoor/ndkale/graphs/contributors so he would certainly be capable of using a more esoteric stack.

I wonder if developers working for themselves / at a very early stage automatically become more conservative. If someone else is paying for your time, it's nice to experiment and grow personally. When it's your own buck, the focus is on pragmatic shipping and getting revenue coming in.

wolco(3683) 3 days ago [-]

When you do it for yourself, you choose a quick prototyping language like PHP because you want to create a product.

When you work for someone and they pay you to learn something hot, why not take the opportunity, as it will help with the resume.

As a small business no one pays you to learn.

baalimago(4184) 3 days ago [-]

PostgreSQL, Redis, RabbitMQ, Elasticsearch, Django/Python3, uWSGI, Celery, Celery Beat, Supervisord, Amazon S3, CloudFront, React, Ansible, Datadog, Rollbar, Slack, Vagrant, VirtualBox, PyCharm, iTerm2, Notion, G Suite, MailChimp, Amazon SES, Gusto, Upwork, Google Ads Manager, Carbon Ads, BuySellAds, Cloudflare, Zapier, Trello, GoDaddy, Namecheap, Stripe, Google speech-to-text, Kaiser Permanente, Stripe Atlas, Clerky, QuickBooks, 1Password, Brex.

Alright. Just make a 'boring' website now, it's 'easy'.

If there's one thing I really dislike within both the scientific and the technological sphere, it's this arrogance disguised as common knowledge. Because it's not. Articles like this are nothing but bragging. The author, whoever it is, has clearly spent a very long time working in the field acquiring this knowledge. Be humble.

Kovah(4179) 3 days ago [-]

To be fair, half of the services used do not have anything to do with the website itself but with the whole business around it. Also, most of the tools do not require a PhD in computer science; neither Rollbar nor Cloudflare is actually hard to set up. Still, you have a valid point. The setup described here is not boring at all and not easy; I expected a one-root-server-for-everything that runs some PHP and MySQL.





Historical Discussions: Colorado Town Offers 1 Gbps for $60 After Years of Battling Comcast (September 18, 2019: 1522 points)
Colorado Town Offers 1 Gbps for $60 After Years of Battling Comcast (September 17, 2019: 7 points)

(1577) Colorado Town Offers 1 Gbps for $60 After Years of Battling Comcast

1577 points 2 days ago by CrankyBear in 277th position

www.techdirt.com | Estimated reading time – 4 minutes | comments | anchor

Colorado Town Offers 1 Gbps For $60 After Years Of Battling Comcast

from the build-it-and-they-will-come dept

A new community broadband network went live in Fort Collins, Colorado recently offering locals there gigabit fiber speeds for $60 a month with no caps, restrictions, or hidden fees. The network launch comes years after telecom giants like Comcast worked tirelessly to crush the effort. Voters approved the effort as part of a November 2017 ballot initiative, despite the telecom industry spending nearly $1 million on misleading ads to try and derail the effort. A study (pdf) by the Institute for Local Self-Reliance estimated that actual competition in the town was likely to cost Comcast between $5.4 million and $22.8 million each year.

Unlike private operations, the Fort Collins Connexion network pledges to adhere to net neutrality. The folks behind the network told Ars Technica the goal is to offer faster broadband to the lion's share of the city within the next few years:

'The initial number of homes we're targeting this week is 20-30. We will notify new homes weekly, slowly ramping up in volume,' Connexion spokesperson Erin Shanley told Ars. While Connexion's fiber lines currently pass just a small percentage of the city's homes and businesses, Shanley said the city's plan is to build out to the city limits within two or three years.

'Ideally we will capture more than 50% of the market share, similar to Longmont,' another Colorado city that built its own network, Shanley said. Beta testers at seven homes are already using the Fort Collins service, and the plan is to start notifying potential customers about service availability today.

The telecom sector simply loves trying to insist that community-run broadband is an inevitable taxpayer boondoggle. But such efforts are just like any other proposal and depend greatly on the quality of the business plan. And the industry likes to ignore the fact that such efforts would not be happening in the first place if American consumers weren't outraged by the high prices, slow speeds, and terrible customer service the industry is known for. All symptoms of the limited competition industry apologists are usually very quick to pretend aren't real problems (because when quarterly returns are all that matter to you, they aren't).

For years we've noted how large ISPs like Comcast quite literally write and buy protectionist state laws preventing towns and cities from building their own broadband networks (or striking public/private partnerships). These ISPs don't want to spend money to improve or expand service into lower ROI areas, but they don't want towns and cities to either -- since many of these networks operate on an open access model encouraging a little something known as competition. As such it's much cheaper to buy a state law and a lawmaker who'll support it -- than to actually try and give a damn.

And while roughly nineteen states have passed such laws, Colorado's SB 152, co-crafted by Comcast and CenturyLink in 2005, was unique in that it let local towns and cities hold referendums on whether they'd like to ignore it. And over the last few years, an overwhelming number of Colorado towns and cities have voted to do so, preferring to decide local infrastructure issues for themselves instead of having lobbyists for Comcast dictate what they can or can't do in their own communities, with their own tax dollars.

There's probably not a day that goes by without these companies regretting letting that caveat make it into the final bill.

Filed Under: colorado, community broadband, competition, fort collins, muni broadband, municipal broadband Companies: connexion




All Comments: [-] | anchor

rb808(2997) 1 day ago [-]

The first year it seems great. The real issue is 10 years down the road when people are using 5G and giving up their internet connections, who is left paying for the infrastructure and the union jobs?

knd775(10000) 1 day ago [-]

It's fiber. Fiber is going to remain relevant until we can somehow break the speed of light.

move-on-by(10000) 1 day ago [-]

I don't think you understand how limited 5G is. The range is basically the same as your home WiFi router. Without a fiber infrastructure to back it, 5G is nothing. If the city is no longer able to sell direct internet, then it will just sell/rent the infrastructure to whatever 5G provider there is.

toast0(10000) 1 day ago [-]

If 5G lives up to the promises, it needs to have a microcell on nearly every light pole. You need at least a bunch of those to have a wired backhaul connection; that can certainly be municipal fiber, at least for the wireless carriers that aren't related to a local telecom.

dfsegoat(4010) 2 days ago [-]

Nitpicking title: 'Town' vs. 'City', bugs me as a former CO resident.

Ft. Collins is huge (~170k people compared to most CO towns which are 400-10k ppl). The title sort of leads you to believe it was a folksy, rural community effort. In reality Intel, AMD, Broadcom and a major State University are located there and that probably helped the effort substantially.

https://en.wikipedia.org/wiki/Fort_Collins,_Colorado#Major_i...

randomcarbloke(10000) 1 day ago [-]

anything less than 6-7,000 people is a village.

barkerja(10000) 1 day ago [-]

My little village (around ~1200 people, I believe) in Central New York is currently exploring its own municipally owned internet. They sent out surveys to all the residents a few months back to see if there's enough interest.

randomdata(4220) 1 day ago [-]

Where I'm from in a "folksy" rural area you can get gigabit internet service to your farm for $75 a month. But it is not a particularly interesting story.

moojd(10000) 1 day ago [-]

For contrast: My rural hometown of around 15,000 (not tiny but still 10 times smaller than Ft. Collins) built its own fiber network in the mid-90s. It still runs today and has expanded to include many of our smaller neighboring communities of 400-10k people. Now that the city has grown to about 20k we have larger ISPs moving in. They lease the city's infrastructure so we have a municipal fiber network with multiple competing private options. You get fancier hardware and more channels with the private companies but I find the local ISP more reliable and I have one bill for internet, power, gas, water, and waste.

Way better setup than when I was in ATL and forced to use Comcast. Plus there aren't any data caps.

smt88(4211) 2 days ago [-]

I looked into this, and Ft. Collins is a 'Home Rule Municipality' under state law, meaning it can name itself either a town or city. It seems to have chosen to be a city, so you're right.

Around the world, it seems that there's one consistent criterion for something being a city: it must have self-rule and a certain amount of government structure. Some places (notably not the US) have population requirements, but those tend to be really low (100-1,000 people)[1].

I personally appreciate the headline saying 'town' in a colloquial sense, because I don't think of 170,000 as 'huge' and it's significant that it's not a large city doing this.

For context, Ft. Collins is not in the top 150 most populous US cities, and to even reach the top 50, it would have to more than double in size.

1. https://www.thoughtco.com/difference-between-a-city-and-a-to...

munk-a(4119) 1 day ago [-]

Honestly I'm glad they didn't go with 'Colorado City', since Colorado City is a town... in Arizona.

situational87(3798) 1 day ago [-]

Yeah but everyone in Ft. Collins is a cow herder so this town label is still appropriate.

(sorry this is a bad intra-Colorado rivalry joke, please ignore)

maitredusoi(10000) 1 day ago [-]

For $60 you get a Freebox Delta in France (sorry, only in French: https://www.free.fr/freebox/freebox-delta/). It includes 10 Gbps (via fiber) + a sound system (from Devialet, an equivalent to a Bose system) + Alexa on the remote control + a 500 GB HDD for video recording or music playback + 200 (international) TV channels + Netflix SD, all inclusive?!

As an aside, French people said $60 for all of this Freebox Delta was sooooo expensive. After reading the Colorado town news, I remain amused.

wuliwong(3817) 1 day ago [-]

Prices vary from country to country on lots of things. How much does gas cost in France vs the USA?

mmanfrin(3917) 1 day ago [-]

Today, in one of the four cities that birthed the very first connections made to the proto-internet (Berkeley), I am finally getting a second broadband option other than Comcast.

I'm an hour in to a 4-hour window waiting for my Sonic install. I am so happy to finally have a choice.

Obi_Juan_Kenobi(10000) 1 day ago [-]

Sonic fiber is great. It's been over a year, and zero service interruptions or degradations since the day it was installed.

Any network issues I now automatically assume are due to my home network or the server I'm trying to access.

rudolph9(10000) 1 day ago [-]

I don't understand why the upload speed is so slow for all the Comcast packages. I reside in Portland, OR and Comcast seems to be the only decent option. However, the fastest upload speed I could get without springing for $300/month fiber was 35 Mbps. I wound up paying $70/month for a package that provides 1000 Mbps down and 35 Mbps up, despite only having a modem that supports up to 283 Mbps down, just so I could get the faster upload speed.

rudolph9(10000) 1 day ago [-]

Ok, just got done with the Comcast technician at my home, and unfortunately Comcast is not able to provide my router with a boot config to support higher upload :( I just get the default, which is still giving me 5 Mbps upload; somewhat ironically, I now get 300 Mbps download (previously 70 Mbps).

Has anyone put a custom boot config file on a home cable modem? Any resources you would recommend? Is this a fool's errand?

jalgos_eminator(10000) 1 day ago [-]

We have fiber through Frontier in certain parts of the metro as well. A few years ago I lived in a place in Beaverton with a 30/30 fiber connection for $35/month. My parents have fiber as well. They were original Verizon FiOS customers before Frontier took over.

bojo(3130) 2 days ago [-]

We pay $165-175/mo for unlimited 1gig plans here in Alaska. I have no idea why :(

metalliqaz(4187) 1 day ago [-]

> Alaska

srbby(10000) 2 days ago [-]

En el cielo las estrellas, en el campo los arados y en el medio de tu culo mi chorizo Colorado.

martin_a(4161) 2 days ago [-]

Speak/Write English on here please, so anybody can understand you. Thanks.

polpo(10000) 1 day ago [-]

Longmont, just south of Fort Collins, was the first city in Colorado to build out a municipal ISP, and it's inspired surrounding cities to do the same. It's where I live and it was a not-insignificant reason why I chose to move here. So far it has been a resounding success for the city. The buildout completed on time and adoption rates are higher than initially expected (the city planned for 37% but the last number I saw was around 54%). In my experience, the service has been so good as to be totally invisible. And I know my $50/mo rate will never rise. I wish Fort Collins the same experience. The fact that both cities have municipal electric service will help this significantly.

rhmw2b(10000) 1 day ago [-]

I live in Longmont as well and can confirm that internet here is awesome. I've never once had to think about it and always get at least 800 Mbps.

Longmont as a city is great as well. People view it as a cheaper place to live than Boulder, but I'd much rather live here.

jiveturkey(4177) 1 day ago [-]

> municipal

> my $50/mo rate will never rise

Came here to dispute that. Your rate may never rise but deficits will come out of taxes.

Being a public service the budget is publicly available.

For 2018, I count $11,930,874 in expenses and $12,420,323 in revenue (literally 99.9% from $50/mo service charge). That's a 3.9% margin. At that margin, I don't see how rates will never rise. The expenses will go up with inflation. If the revenue remains constant ...
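
The quoted margin checks out against those figures:

  revenue, expenses = 12_420_323, 11_930_874      # 2018 figures cited above
  print(f"{(revenue - expenses) / revenue:.1%}")  # 3.9%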

That said, it wasn't readily obvious so I didn't dig deeply to see if some 2018 expenses were unusual one-time charges, or anything like that.

michilumin(10000) 1 day ago [-]

In Colorado here as well, but unfortunately unincorporated Jeffco. Really hope this begins to spread through Colorado, since right now our only option is Comcast.

If it does well for Fort Collins and Longmont, maybe other areas will try it as well.

Ididntdothis(4192) 1 day ago [-]

"And I know my $50/mo rate will never rise."

Let's hope they tie it to inflation or similar. Otherwise the system will die slowly due to underfunding.

longcommonname(4218) 1 day ago [-]

There are still locations here that can't get fiber. The local hospital is owned by a church, and they own apartments adjacent to the hospital. These apartments are surrounded by others that get fiber, but the map still shows they aren't planned.

hanklazard(4074) 2 days ago [-]

After many years of using Comcast because it was my only option, I recently had my breakup call with them. We have Verizon Fios as an option and at their lowest tier speeds, we get 100 Mbps symmetric (and every time I've checked the speed over the past week, it has been as advertised). Our overpriced Comcast plan was giving us 'up to 60 Mbps' but we were getting, wait for it, 1 Mbps in the last few weeks of our plan. We could barely load modern web pages in a reasonable time and streaming anything was a mess.

Glad to see that Colorado cities and towns are at least being given the option of building their own high-speed infrastructure. If successful, it should provide a great example for the rest of the country.

robohoe(10000) 1 day ago [-]

Speaking of poor cable speeds, were your downstream/upstream signals poor? Did you have an upstream filter installed on your line by any chance?

I too had a similar issue but a tech visit to disconnect a noise filter fixed it.

howard941(91) 1 day ago [-]

Verizon offloaded their Floriduh operation to Frontier, but fortunately the FiOS is already built and I too see the 100 Mbps symmetric. Unfortunately, like Comcast (spit), Frontier believes in abusing its loyal customers, so I'm going to revert to Comcast (spit) and play the bounce between the two every year, because $230/mo is way too much to pay for cable & internet & an unused landline.

dub4u(10000) 2 days ago [-]

I'm in the Philippines and I pay $200 for 10Mbps :-(

wsc981(3789) 1 day ago [-]

I live in rural Thailand, near the border with Myanmar and we pay about 18 EUR for 60 Mbps download, 25 Mbps upload speed.

atupis(10000) 2 days ago [-]

Finland paying 9€ for 100Mbps.

achow(4205) 2 days ago [-]

In India..

~$20 for 100Mbps (data capped at 400GB)

~$85 for 1Gbps (data capped at 2.5TB)

divbyzer0(4157) 1 day ago [-]

Ireland: €59 for 240 Mbps down, 24 up. 500 Mbps down is available for an extra €10, but 240 is plenty.

bawabawa(10000) 2 days ago [-]

I'm in the PH as well and I pay 55 USD (2,899 PHP) per month for 100 Mbps. Ok, most of the time I am happy with 10 Mbps, but occasionally I get a good surprise.

misterdoubt(10000) 2 days ago [-]

I'm in the wealthiest county in my state in the U.S. and the best I can get is $50/mo for spotty ~12mbit hotspot service with 95% uptime that I have to physically restart every other day when it craps out entirely...

It theoretically has a data cap at 50 GB but it isn't enforced.

akouri(4214) 1 day ago [-]

I find it so hard to grasp that much of SF / Silicon Valley (supposedly the 'innovation hub of the world') is subject to a near-monopoly by Comcast. AT&T's offerings are 40 Mbps/5 Mbps in 2019. So that's obviously not an option. Even Comcast only has 250 Mbps/10 Mbps. In _silicon valley_. In 2019.

I had WebPass in one building I was in in 2014, and had symmetric gigabit for $40/mo. Nothing like that has existed in any of the other buildings I've lived in.

It's gotten to the point where I'm seriously considering relocating away from the Bay Area due to the lousy internet here.

firefwing24(10000) 1 day ago [-]

This also confuses me as a resident of South Bay Area...

I've been researching ISPs every once in a while to see if new ISPs are able to provide decent deals where I live... As of right now, these seem to be my options...

- AT&T used to charge a maximum of 5 Mbps/512 kbps (for something ridiculous like $60/month), until Sonic came around and started using their infrastructure. Now Sonic & AT&T both provide something like 75 or 50 Mbps/5 Mbps internet (for, again, $60/month)

- Unfortunately, Sonic only reaches me through AT&T, so it's basically the same.

- Comcast is what I use for 150 Mbps/5 Mbps (around $70 a month). Recently (and currently) getting episodes of massive uncorrectables. A technician came and found that a coax connector outside was fried, and replaced it... 5 days later, I'm still getting uncorrectables, so clearly that wasn't the only problem. Fortunately, I'm getting a free $20 every time I complain, so it's sort of worth it? lol...

- I can technically request Comcast's 2 Gbps, but I'm not down to spend a massive amount of money on the installation costs w/ the overpriced monthly price tag.

- Common Networks shows the most promise, but they can't reach me (despite my city being listed as one of the primary areas).

Even just having symmetric 100 Mbps/100 Mbps would be a godsend in this shit ISP situation.

Edit: formatting

fiter(10000) 1 day ago [-]

Sonic is doing a fair amount of fiber in the east bay. In fact, I just got my fiber connection installed today in South Berkeley. I know AT&T is also doing fiber in the peninsula and in the east bay. I just checked today and the place I previously lived in Oakland now has fiber availability. Large areas of the Richmond and Sunset districts in SF have had fiber installed. It's changing.

inimino(4175) 1 day ago [-]

This is because the tech workers who primarily fill those buildings tend to lack political awareness and hence the wishes of the people get stepped all over by entrenched powers.

Cerium(10000) 1 day ago [-]

I'm in San Jose and have gigabit symmetric service from AT&T. If internet is a serious issue for you, there are various areas that have good service.

fossuser(3926) 1 day ago [-]

In the Bay Area (Palo Alto) you can get 1 Gbps Comcast service and it works well, but getting it is hard because everything else about Comcast is bad.

- It's hard to order the service because the price online, if it's listed at all, is stupidly high and no one should pay it.

- When you call, two-thirds of the sales reps don't know what gigabit is and will tell you it's not in your area (even though it is) rather than finding out. I've had them tell me it requires a $300 installation fee (it doesn't) or that they'll email me when it's available (it was already available and they wouldn't have). You have to call until you get someone who can do their job.

- The upload is capped at 35 Mbps

- I was able to get 1 Gbps for $89/month, but it required a two-year contract. It's also not listed on the bill as this, but as 'performance plus' with a gigabit add-on, which makes things confusing.

- There's a 1 terabyte/month data cap and it's $10 per 50 GB after that, capped at a max cost of $200. You get two 'courtesy months' of this, but then you get charged. You can pay $50 to have 'unlimited'. After months of using 700 GB or so per month, suddenly my usage spiked to 2 TB, so now I have to pay this - it's very hard to determine why (nothing on my end appears to have changed and my Ubiquiti software shows no massive increase in usage).

- Comcast also has a gigabit pro plan which is real fiber to the house (no modem/coax cable) - it's 2 Gbps up and down, but requires a $1000 installation since they're actually bringing fiber to your house and special hardware to make use of it (also good luck finding a rep who will let you actually buy it - none of them know about it; Twitter is the best option to find a good support person).

While the service once set up is good (except for upload), everything else about interacting with them is terrible. After the two years run out they'll drop my speed to 25 Mbps and double the price, and I'll have to call and negotiate the plans with a rep all over again. Any interaction with the public website or billing is also pretty bad.

All I want is to pay a reasonable price for faster service. I'd even pay more (potentially a lot) if I didn't have to deal with all of this.

Even in a competitive market Comcast has problems.

kstrauser(4198) 1 day ago [-]

> There's a 1 terabyte/month data cap

In other words, you're allowed to use your service for about three hours per month.

I worked for an ISP and I completely understand the idea behind overprovisioning, because you build your network for average traffic and not the maximum theoretical capacity needed to serve 100% of customers using 100% of their connection 100% of the time. But here Comcast is overprovisioning by a factor of 240:1 -- that is, you can only use your connection at 1/240th of your theoretical capability. That's insane, and demonstrably unnecessary.
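
The arithmetic behind that, assuming a 1 Gbps line, a 1 TB (decimal) cap, and a ~730-hour month; rounding the full-rate time up to ~3 hours for real-world overhead is what yields the ~240:1 figure:

  cap_bits = 1e12 * 8      # 1 TB cap, in bits
  line_rate = 1e9          # 1 Gbps
  full_rate_hours = cap_bits / line_rate / 3600
  print(f"{full_rate_hours:.1f} h at full rate")            # ~2.2 h
  print(f"{730 / full_rate_hours:.0f}:1 oversubscription")  # ~328:1; calling it ~3 h gives the ~240:1 above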

I use Sonic.net, which is a relatively tiny ISP but one that offers uncapped gigabit for much less than Comcast's capped joke of an offering. Therefore, people who really want to use their connection heavily are much more likely to go to Sonic.net than Comcast, meaning that Sonic.net probably has a much higher average per-customer usage than Comcast does. And yet, they're still cheaper, still uncapped, and still growing. If Sonic.net can do it, Comcast is just being greedy.

microcolonel(4207) 2 days ago [-]

As somebody who really dislikes the idea of being forced to subsidize a municipal network, I like that they have some success. The cost of competing with Comcast should not be so great that people find this necessary.

someguydave(10000) 2 days ago [-]

https://pagetwo.completecolorado.com/2017/10/23/fort-collins...

Indeed, this is taxpayer-subsidized "competition"

rtkwe(10000) 1 day ago [-]

Democracy: if you don't like it, vote for people against it, and if it still wins, that's the breaks sometimes. Some things work better with collective action than private enterprise, especially things that require large build-outs of infrastructure that you don't want duplicated. E.g. roads: you're paying to maintain a lot of roads you never drive on because their existence benefits everyone.

kbumsik(1575) 1 day ago [-]

I pay <$20/mo (22,000 KRW) for 2.5 Gbps with a free modem that has a 10 Gbps Ethernet port in South Korea, with a 4-year contract.

I know it is very unfair to compare it with such a small country, but $60 still sounds like too much.

metalliqaz(4187) 1 day ago [-]

it's pretty damn good in the USA

breatheoften(4219) 1 day ago [-]

Denver internet is remarkably bad: very expensive for 10-20 Mbps service with not-great reliability.

Very jealous of Ft Collins ...

bproven(10000) 1 day ago [-]

Denver is very much in bed with Comcast. It is after all Comcast's major headquarters. I would bet DEN is the last domino to fall once all other CO cities implement municipal internet.

godelmachine(324) 2 days ago [-]

I remember watching on Patriot Act with Hasan Minhaj how greedy monopolistic ISP corporations like Comcast are trying hard to outlaw town-based internet offerings.

Must watch -

https://m.youtube.com/watch?v=xw87-zP2VNA

thoughtpalette(3990) 1 day ago [-]

Also recommend this episode. They go into detail on several different CO cities/towns that are implementing it, as well as the lobbying aspect from Comcast, etc.

pulse7(3772) 1 day ago [-]

Such business attempts should be punished by the law!

djzidon(10000) 1 day ago [-]

video not available

post_break(10000) 2 days ago [-]

AT&T brought 1-gig fiber to my neighborhood (probably because I'm in NASA's backyard). I have gig symmetrical unlimited for $70 a month. Comcast of course tries to offer the same, but with cable, and you and I both know it won't be symmetrical, or anywhere close to the speed you pay for. It's funny how quickly Comcast will bend when there is a competitor in the area. Just down the street, where there is no fiber, they will gladly charge you $100 a month for 50 meg down.

toast0(10000) 1 day ago [-]

Were you in Google's abandoned list of chosen city/metro areas? About 6 months after that list came out, AT&T's fiber list came out and there was a lot of overlap, and then Google was like hmm, maybe wireless?

tehlike(3990) 1 day ago [-]

A few months back, I switched to AT&T fiber for $70 and called Comcast to cancel. They said they would match the price, and that my AT&T installation had a 30-day free cancellation period that I should really take. Give it to me for $35, then I would consider.

proverbialbunny(10000) 1 day ago [-]

>ATT brought 1gig fiber to my neighborhood (probably because I'm in NASA's backyard).

Could be because Sonic is in the area and is a better ISP.

zer0faith(10000) 1 day ago [-]

North Alabama (Huntsville)? Rumor had it Google Fiber was coming that way.

linsomniac(4062) 1 day ago [-]

CenturyLink in particular, but also Comcast, have brought this upon themselves...

They are doing everything they can to protect their existing infrastructure and do as little as possible to upgrade services.

I'm thinking of back in '98 when QWest was rolling out DSL; they would only connect you to a central office. They refused to put DSLAMs in the pedestals, because if they did that they had to allow CLECs to do the same, and CLECs could start 'cherry picking' service locations for DSL. One very rural community ('Ruby Ridge?') had to sue to get the right to put in DSLAMs. DSL was dependent on how many wire feet you were from one of the two central offices in town.

Compared to my in-laws up in Saskatoon, Canada: similar-size city, but they deployed DSLAMs in the remote terminals, ran fiber to them, used the copper to the house for the last mile, and covered the whole city.

Remember, the telcos got a $2 billion rate hike to enable them to deploy fiber to the home by the year 2000. Except for a few small trials (I used to work at QWest), they just pocketed the money; no large-scale fiber buildout was done.

Until Fort Collins passed the 'we are going to build out our own fiber network' measure, Comcast was really dragging their feet on upgrades. You can get gigabit now for around $100/mo, if you commit to a year. I'm just sitting on 120 Mbps for $90/mo, because people the next neighborhood over are getting door hangers saying the city fiber is coming soon.

nostromo(3167) 1 day ago [-]

This isn't true for CenturyLink.

Lots of people in Seattle, and a growing number of other cities, have CenturyLink gigabit fiber to their residence. I pay less than $70 a month for this.

Comcast is losing marketshare fast in Seattle because of this.

https://www.centurylink.com/fiber/img/Gigabit-Launch-Map-res...

alexis_fr(10000) 1 day ago [-]

I'm in a major city in France and I use gigabit fiber for my 4 employees, and it's €19.90 per month. And at home I have ADSL, and 3G is faster (€12 for 100 GB per month, 8 GB in Europe). For the first time, France has one thing working correctly ;)

wmeredith(2874) 1 day ago [-]

> They are doing everything they can to protect their existing infrastructure and do as little as possible to upgrade services.

Well, that's how rent-seeking works.

metalliqaz(4187) 1 day ago [-]

Fiber is slowly making its way through my town and I'm looking forward to the speed, but more than that I'm looking forward to telling Comcast that I don't have to put up with them anymore. I picture it like that episode of South Park, except with the roles reversed.

chocolatebunny(10000) 1 day ago [-]

Saskatchewan has a government-run telco called SaskTel; that's why your in-laws have decent service. Everywhere else in Canada has worse service than Comcast.

jyrkesh(10000) 1 day ago [-]

I'm in a combo CenturyLink / Comcast market, and it's AMAZING what competition between even just those two firms does to my offerings. Switched from Comcast's $90/mo gigabit with a 1 TB data cap and only 90 Mb/s upload to CenturyLink's $60/mo symmetric gigabit with no data cap. I have a contract-less 'lifetime price guarantee', they did free install (which included running a fiber line from the pole to my house with a cherrypicker), and threw in a free fiber endpoint box (I don't know what it's really called) and a modem/router that's actually pretty good.

I get 980 Mb/s up/down with a 1ms ping in speed tests. Games are amazing, downloads are amazing, hosting files to myself is amazing...all it takes is a little competition.

bonestamp2(10000) 1 day ago [-]

That's what can happen when the provincial government runs the telco -- it's not a guarantee of course, but they act in the best interest of the stakeholders (the people).

My brother lives in Saskatoon as well and I remember going there in the early 2000s and his 'cable' TV was IPTV (the cable box for his TV was connected with an Ethernet cable -- it didn't even have a coax connection). That blew my mind at the time but it made so much sense from a network health/bandwidth standpoint and allowed them to offer great options for movie rentals, etc.

thedaemon(10000) 1 day ago [-]

I have CenturyLink 1Gbps in Colorado Springs. Not sure what you mean about not upgrading their infrastructure. The price is $75 fixed forever as long as I always pay my bill on time.

nicholasjarnold(10000) 1 day ago [-]

CenturyLink has been acting in what appears to be good faith within Denver city limits in the past few (~4) years or so. They've been offering gigabit fiber service in select parts of the city for around $60-70/month for at least that amount of time.

While I agree with the general sentiment about large providers needing to do more for less and faster, it seems that CL is being a good corporate citizen in Denver at least.

Personal Anecdote: I signed up for gigabit fiber at a previous residence and experienced a consistent 850+ Mbps symmetrically, often exceeding 900 Mbps. Then upon purchasing a home in another part of the city I cancelled this newly-installed (< 1 year) fiber service with no charges.

Fast-forward to last week when I see CL contractors installing fiber in my alley. I'm now signed up for an install of this same gigabit fiber service for $70/month with no data caps, blocked ports, or other random charges (excluding tax) or restrictions. It's quite a good deal compared to other offerings here. A friend in another neighborhood is also set to receive his service soon, so it appears they're investing quite heavily in their fiber-to-the-home rollouts.

diminoten(10000) 1 day ago [-]

Cox is now offering universal 1 Gbps service, so I'm not sure the 'do as little as possible to upgrade services' is accurate any longer.

Thorentis(10000) 1 day ago [-]

> fibre to the home by the year 2000

weeps in fibre to the home axed for Australia in 2015

jorblumesea(10000) 1 day ago [-]

Can't speak for Comcast, but as for CenturyLink, they've been rolling out fiber like crazy. In Seattle I have a fiber connection for $70/month. That was maybe 3-4 years ago and I've never had an issue with it. It doesn't appear to be in bad faith. Now, who knows if they'll continue to roll it out.

bproven(10000) 1 day ago [-]

>You can get gigabit now for around $100/mo, if you commit to a year. I'm just sitting on 120mbps for $90/mo, because people the next neighborhood over are getting door hangers saying the city fiber is coming soon.

I'm in FoCo and I think that gigabit is now $70 because of the city announcement, but that is only an 'intro' price (love that about Comcast). It is also AFAIK not bi-directional gig. Uploads are still capped and you have a data transfer limit as well.

orblivion(4189) 2 days ago [-]

After a long period of my life not having the fastest in speeds at home, I ended up going all in and telling Comcast 'give me the fastest Internet you have'. So I'm paying for 400Mbps.

...what do I need all this speed for? What could someone possibly do with 1 Gbps? It's not as though web pages will load faster; they have their own throttling. I don't think I need 400 Mbps to stream 4K video. Torrents don't seem to take advantage of the full bandwidth.

I could see the benefit if I were doing 10x as much stuff _at the same time_. So is this for large families or businesses? I'm considering going down to about 100 and saving a few bucks.

Strom(3878) 2 days ago [-]

Faster speeds help with downloads. Yeah, you won't notice the difference when downloading a selfie, but when you buy Gears 5 on Steam and want to start playing, you will notice the difference between waiting ~1h45m @ 100 Mbps and ~11m @ 1 Gbps to get that 80 GB game.
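
Those waits follow directly from the size of the game (80 GB is 640 gigabits):

  game_bits = 80e9 * 8  # 80 GB game, in bits
  for mbps in (100, 1000):
      minutes = game_bits / (mbps * 1e6) / 60
      print(f"{mbps} Mbps: {minutes:.0f} min")
  # 100 Mbps: 107 min (~1h47m); 1000 Mbps: 11 min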

throw0101a(3223) 1 day ago [-]

I'm at work, which is about four hundred people, and looking at our LibreNMS graphs, we barely hit 200 Mb/s.

I'm not against having higher speeds as an option because we don't know what the next Killer App will be and what it may need, but there is a point of diminishing returns for most people.

r1ch(3899) 2 days ago [-]

For me the benefit isn't sustained peak use; it's about saving time when there is something big to download. I want to play a game I haven't played in a while but it needs a 4 GB patch? With 1 Gbps I don't mind sitting there waiting a bit; at 100 Mbps I'm likely to just quit and find something else to play.

cameronbrown(3747) 2 days ago [-]

Throughput - it's not all about speed. Downloading a game on Steam should not cripple my ability to watch TV, as an example. Plus most machines auto-update on their own nowadays; I shouldn't have to suffer because my laptop/phone/TV/toaster decided to patch itself when I don't have any control over that.

JelteF(3753) 2 days ago [-]

Downloading Steam games is one thing that actually takes advantage of my 750Mbps.

xuki(1249) 2 days ago [-]

To stream 4K content you only need 20-30 Mbps. But for any kind of big download (game updates, OS updates, etc.), being able to finish in less than 5 minutes is fantastic.

dzhiurgis(4126) 2 days ago [-]

My understanding is that you get a lot more bandwidth to servers abroad, which becomes quite important when you don't live near the US or Europe.

bschwindHN(10000) 2 days ago [-]

It's the ability to do multiple heavy transfers at the same time, youtube videos loading instantly, game updates finishing within a minute, apps downloading in a few seconds, and never having your connection be the bottleneck to accessing a website.

It's even more important when multiple people are sharing the connection.

gok(671) 1 day ago [-]

The business plan is interesting: https://www.fcgov.com/broadband/files/broadband-business-pla...

Key points: they're expecting this effort to cost $130-$150m, not to pay for itself until 2033 or so, and only to get around 28% market share. If Comcast/CenturyLink drops their prices slightly, uptake might be substantially worse.

a_wild_dandan(10000) 1 day ago [-]

Comcast now (surprise, surprise) offers 1 Gbps lines for $90. I jumped on it. But when the city's fiber rolls out to my location, I'm sprinting toward Connexion. Cancellation fees be damned.

thorwasdfasdf(10000) 1 day ago [-]

We really need to start voting against politicians who are in bed with Comcast and AT&T. Is there some kind of list or online database where we can see which of our local politicians are supporting Comcast and AT&T with these horrible regulations that prevent open competition?

sbarre(2482) 2 days ago [-]

I pay $75 CDN for 1 Gbps with no cap in Toronto, and I used to think we had it bad in Canada..

'Fighting' with a local cable company (a monopoly, I assume?) to offer a 1 Gbps service in 2019 is depressing.

deepspace(10000) 1 day ago [-]

I pay $30 CDN for 1 Gbps symmetrical with no cap because our municipality had the foresight to install municipal fiber, so we are not beholden to Bell/Telus/Shaw/Rogers for access to fiber infrastructure.

HeadsUpHigh(10000) 1 day ago [-]

I'm sorry, but how exactly is that bad? I have 6 Mbps down / 0.5 up and this is the best speed I've ever had. I've never heard of anything going beyond 1 Gbps in my country. What options exist above that at the consumer level, and why would you want it? Like, what's there to do with more than 1 Gbps right now?

microcolonel(4207) 2 days ago [-]

That's because of Beanfield. Outside of the core of Toronto that is not the case. If you go all the way to Burlington, it starts costing about $300/mo, if they are even willing to install it.

I'm hoping Beanfield continues to expand their area of competition.

robert_foss(3622) 2 days ago [-]

You must be one of the lucky ones in Toronto. I remember having only pricier, capped & slower FTTH options.

Zenbit_UX(10000) 1 day ago [-]

$60 CDN/month for 1 Gbps symmetric in Montreal; had to fight Bell tooth and nail, but very happy with it.

abledon(3862) 2 days ago [-]

Bell?

vincnetas(10000) 2 days ago [-]

One more data point: Vilnius, Lithuania, 1 Gbps for €19.90 ($22.00)/month.

https://www.telia.lt/privatiems/kur-veikia-internetas-ir-tv/...

Donald(10000) 2 days ago [-]

As a FoCo resident, Comcast sent me an email saying that they're 'upgrading our network' and 'increasing your Internet download speed'. Not really clear what that means until I see an updated rate sheet.

I'm sure this is entirely a coincidence and has nothing to do with the fact that they suddenly have some competition now.

turk73(10000) 2 days ago [-]

Haha, I got the same email and I don't even live in Colorado!

I own 216 shares of Comcast but I wish they would fuck right off.

mentat(4091) 1 day ago [-]

Same thing happened when AT&T fiber came into my neighborhood. $70 for unlimited gig put some pressure on them.

justwalt(10000) 1 day ago [-]

Same thing for me. Went from 150 down to 175 down, upload unchanged. I'm only paying $40, though, so I'm not sure I'd upgrade to $60 fiber if I had the option. Might be worth doing on principle, considering how spotty the connection can be sometimes.

not_a_cop75(10000) 2 days ago [-]

You have to know the slang:

'upgrading our network' - In this case network refers to financial network, but should really be 'net worth'.

'increasing your Internet download speed' - download speed for them is actually slang for the amount of money they can charge you for standard access.

whalesalad(351) 2 days ago [-]

Seems that most cable providers do this when a competitor moves into town. It's happened to me in two separate metro areas and just happened to my buddy in Phoenix.

gshubert17(4159) 1 day ago [-]

I lived in Fort Collins and had Comcast from 2005 through 2016. Over that time, Comcast improved its download speeds from 3 to 25 Mbit/sec, for the same tier of service, 'Performance Internet'.

Since then I've lived on a farm outside Boulder, Colorado, with internet from Comcast. Here, the download speed of their 'Performance Pro' tier has increased from 60 to 150 Mbit/sec, and will increase again to 175 soon. Also, Comcast has been good about fixing service problems, replacing the coax from their utility poles to the premises when I experienced loss of signal issues.

> I'm sure this is entirely a coincidence and has nothing to do with the fact that they suddenly have some competition now.

Yes, between CenturyLink putting in gigabit fiber and Boulder's moves toward their own municipal internet, Comcast is likely feeling enough competition to keep making technical improvements.

jkilpatr(4205) 2 days ago [-]

> 'upgrading our network'

They actually were not doing this before because it's probably a bad idea.

Modern cable networks look like this. https://en.wikipedia.org/wiki/Hybrid_fiber-coaxial#/media/Fi...

Fiber goes to copper which then splits out to a large number of homes. The copper line near your home can do multiple gigabit but that needs to be shared with everyone else on the tree.

So selling higher speeds is easy, but reaching them for everyone requires splitting trees with new fiber drops. Providers usually do the first and only some or none of the second in order to sell bigger numbers and compete with Fiber.
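
A back-of-the-envelope model of why splitting the tree matters: per-home throughput at peak is roughly node capacity divided by concurrently active homes. The numbers below are illustrative, not from this comment:

  node_capacity_mbps = 5000  # hypothetical multi-gigabit coax segment
  homes_on_node = 200        # hypothetical homes sharing the tree
  active_share = 0.10        # fraction of homes pulling traffic at peak

  active = homes_on_node * active_share
  print(f"{node_capacity_mbps / active:.0f} Mbps per active home")        # 250 Mbps
  # A new fiber drop that splits the tree in two doubles the peak figure:
  print(f"{node_capacity_mbps / (active / 2):.0f} Mbps per active home")  # 500 Mbps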

A similar thing happened to DSL trying to compete with cable; let's look at where that is now.

https://www.fcc.gov/reports-research/reports/measuring-broad...

See chart 15.3, DSL across the ISP industry only provides advertised download speeds 40% of the time. In any other industry this would be considered blatant fraud.

marviel(4207) 1 day ago [-]

Another datapoint is Chattanooga, TN, with their municipal 1Gbps / ~$65 plan. https://epb.com/home-store/internet

They had plans to expand the service to the surrounding area, but Comcast lobbied to prevent it... https://www.techspot.com/news/68941-residents-rural-chattano...

Scottopherson(10000) 1 day ago [-]

$67.99 exactly. My bill has always been the exact price that they list on their website; no extra fees or taxes randomly popping up on the billing statement.

driverdan(1482) 1 day ago [-]

They've expanded it to cover anyone who gets their electric from EPB which is a surprisingly large area.

zer0faith(10000) 1 day ago [-]

ya know... it's past time that they declare internet access a utility (like water, power, electricity) and tell garbage companies like Comcast to shove it.

AnIdiotOnTheNet(3914) 1 day ago [-]

Will never happen as long as 'Government bad!' is the predominant thinking in our culture.

iamsb(10000) 1 day ago [-]

It is incredible how similarly the delivery of electricity as a utility and internet as a utility is playing out. For the first 30-40 years, governments and private enterprises competed quite a bit over who could have the monopoly in providing electricity. I recently read a book which deals with the impact of electricity from a social, political, and industrial perspective. https://www.amazon.com/Age-Edison-Electric-Invention-America...

Teknoman117(10000) 1 day ago [-]

Then they all started freaking out when people could generate their own power.

jrochkind1(1999) 1 day ago [-]

In Baltimore City, your only choice for broadband is Comcast.

They charge $75/month for 25Mbps down.

I believe that's about double what they charge for the same speed in places where they have competition.

paco3346(10000) 1 day ago [-]

I sure hope you mean Mbps otherwise I'm moving there!

Etheryte(4098) 2 days ago [-]

Could someone please explain the background to someone not from the US? Why is competition so scarce in this field in the US? Is it just the cost of infrastructure over large distances, or is something else at play here?

IfOnlyYouKnew(3669) 2 days ago [-]

In addition to the cynical everyone-is-corrupt answers you've been getting, I'll add:

ISPs, like mail delivery or electricity, are classic examples of 'natural monopolies', where the fixed cost of installing infrastructure is high while the marginal cost of each additional customer is low.

Such environments are tricky to manage: you want some market forces, because otherwise there's one lazy government-owned provider with bad service at high prices, since they have no pressure to be better. At the other end of the spectrum, you have a private company with bad service at high prices that makes a ton of profit because competitors cannot afford the initial investment.

Balancing these competing forces is tricky, and some countries manage better than others.

reallydontask(4091) 2 days ago [-]

> Is it just the cost of infrastructure over large distances

In the US there are over 50 metro areas with more than 1 million inhabitants, and over 100 with 500k. Granted, they will be more sparsely populated than European metro areas, but I'm just not sure that large distances are an issue here at all.

close04(3072) 2 days ago [-]

Big ISPs (and any big company, really) can afford to lobby (legal bribery) to get laws written exclusively in their favor. This basically guarantees that the bar for any new player is ridiculously high, if it's even possible to clear, and that customers have no real choice.

clinta(10000) 2 days ago [-]

Permits are required for burying conduit, and are limited. Sometimes the city owns the poles that run along streets, sometimes the incumbent telco provider does, and space on those poles is limited. Sometimes local governments will promise monopoly access to a provider in exchange for free services to the city and school districts.

cptskippy(10000) 2 days ago [-]

It's that, and incumbents also get laws made to preserve their monopolies or to make it more expensive for competitors to start up.

Jnr(10000) 2 days ago [-]

I live in EU. I pay 15 EUR per month for 1 Gbps.

reallydontask(4091) 2 days ago [-]

I'm also in the EU, if only for a few weeks more: £29 per month for 100 Mbit down / 10 Mbit up.

I think our bundle also includes phone and TV, but we use neither; it works out cheaper that way ...

georgiecasey(4179) 2 days ago [-]

Romania?

pmjordan(2214) 2 days ago [-]

The EU is a big(ish) and diverse place. We pay about twice that for 40Mbps VDSL and about 2.5x that for 50Mbps LTE. To be fair, we could get something slightly (~20%) cheaper as we've got a 'business' deal to get static IP addresses on each connection. The fastest consumer package we could get at this location (cable) is 500Mbit/s down, 40Mbit/s up for €80/month. (Their business level contracts top out at 300/40 Mbps and €200/mo)

unsined(10000) 1 day ago [-]

I'm taking this thread as an opportunity to pay Comcast a review I owe them as I recently became a first time customer with them:

Comcast may have the best deal in most areas by virtue of monopoly, but I can guarantee you that the average customer ends up paying more. I agreed with a rep on $60 per month for just internet, nothing else. The bill comes and it's $83 with basic TV and a service charge. How they did this was they sent me a text message with a link to an agreement and asked if 'everything looks right'. Being on my cell phone and on my lunch break I just wanted to get it over with and said yeah.

They also assume you want somebody to come over and plug in your modem and rent you a router. I explicitly said no to this, but they signed me up for installation anyway.

I'm convinced Comcast institutionally trains reps to steer sign ups in this way.

I was refunded eventually, but it still cost me time to dispute billing.

draw_down(10000) 1 day ago [-]

> I'm taking this thread as an opportunity to pay Comcast a review

No thank you, we're good without that!

cwkoss(4217) 1 day ago [-]

File complaints with any state and local govt agencies that will listen. SOS, ATG, UTC, etc.

The more complaints they have, the more regulators are empowered to take action.

mac01021(3957) 2 days ago [-]

I have no sense of network infrastructure or what it costs, but, FWIW, I would much rather have 100 Mbps for something like $10 per month.

cptskippy(10000) 2 days ago [-]

The cost difference between infrastructure for delivering a 100 or 1000 Mbps capable connection is negligible. It's delivering it sustained to multiple people that's hard.

There are arguments about whether it's better to divide 1000 Mbps of bandwidth equally between 10 people or to give them all the ability to consume the full 1000 Mbps. It really depends on the traffic load; most people will never keep a 1000 Mbps connection saturated, but as soon as someone finds a way, it will negatively impact everyone else sharing the bandwidth.

The reality is most ISPs don't split 1000 Mbps between 10 people to give them each 100 Mbps; they'll split it between 20 or 30 people and give them 100 Mbps max, because those provisioned customers are all unlikely to demand their full 100 Mbps simultaneously.
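
A rough sketch of that provisioning math in Python (the 25-subscriber figure is illustrative, within the 20-30 range mentioned above):

    # Oversubscription: advertised capacity sold vs. capacity available.
    uplink_mbps = 1000      # shared capacity behind the node
    subscribers = 25        # customers provisioned on that capacity
    plan_mbps = 100         # advertised per-customer speed

    ratio = subscribers * plan_mbps / uplink_mbps
    print(ratio)                       # 2.5x oversubscribed
    print(uplink_mbps / subscribers)   # 40 Mbps each if all max out at once

At a 2.5x ratio everyone sees their full 100 Mbps most of the time; the worst case only appears when many customers saturate their links simultaneously.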

jmspring(3818) 1 day ago [-]

The small town where I live in northeastern California gets its internet from a local co-op that provides both electrical and internet services to the community. Part of why I chose to purchase where I did when I moved up here was getting fiber. It's been rock solid, but it's about $110/mo for (recently updated) 100/20 fiber. Previously, people were stuck with line-of-sight wifi (problematic with a lot of trees) or satellite.

Their roll-out has been interesting. In some areas, they have taken advantage of legacy cable feeds (the cable companies pulled out a while ago). Some of those worked, some needed repair, and some just flat out didn't work.

The service got kick started with grants for rural internet service.

webo(4201) 1 day ago [-]

Similar story here in Fayetteville, Arkansas. I've been enjoying truly gigabit internet for $80/mo since 2017. AT&T and Cox still don't have anything remotely close to gigabit speed in residential areas.

https://www.ozarksgo.net/internet

linsomniac(4062) 1 day ago [-]

Aside: I can't figure out what I'd even do with 10 gig symmetric at home for $300/mo. I'd be tempted to run a distro mirror for the city; the last time I ran a mirror I had 30 Mbps I could give to it. I'd push for 10 gig at the office when it comes, but we'd have to upgrade everything except a few dev servers...

lonelappde(10000) 1 day ago [-]

How is running a distro mirror from home better than running it from a colo or cloud service?





Historical Discussions: Get started making music (May 09, 2017: 2106 points)
Get Started Making Music (September 13, 2019: 1080 points)
Get started making music (June 24, 2017: 8 points)
Get started making music (May 08, 2017: 7 points)
Get started making music (May 08, 2017: 4 points)
Get started making music (December 28, 2018: 2 points)

(1080) Get Started Making Music

1080 points 6 days ago by capableweb in 3467th position

learningmusic.ableton.com | comments | anchor

Get started making music

In these lessons, you'll learn the basics of music making. No prior experience or equipment is required; you'll do everything right here in your browser.

To get started, check out the boxes below. Each one contains a small piece of music. Click a box to turn it on or off.

After playing with these boxes for a while, you'll discover certain combinations that you like. Many types of music are created in exactly this way — by mixing and matching small musical ideas to make interesting combinations, and then changing those combinations over time.

Now you've combined pre-made musical patterns. Next, you'll make some patterns of your own.




All Comments: [-] | anchor

milesward(10000) 4 days ago [-]

Ableton works how my brain works for music. It's lovely :)

milesward(10000) 4 days ago [-]

If you wanna hear some outputs: https://soundcloud.com/funkitekture

redmaverick(4191) 4 days ago [-]

I want to make music that sounds like the band 'Chinese Man'.

https://www.youtube.com/watch?v=kqjeNSNuNPM https://www.youtube.com/watch?v=A9QU5-9DFC4

Can we do it using Ableton? How does one even approach trying to do something like this?

gagege(4088) 4 days ago [-]

Yes, you could definitely make this in Ableton. Basically, all you need is samples (recordings of instrument sounds: either individual notes that you arrange in whatever order you like, or recordings of someone playing a whole musical phrase, preferably something that loops well) and a sequencer of some kind, which is the main function of Ableton. There are also effects, mixing, and mastering techniques that can drastically change the sound of music, but all you need to get started are some samples and a sequencer (see the sketch below).
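
As a minimal illustration of the samples-plus-sequencer idea (this is not how Ableton works internally, just the concept using only Python's standard library; the two "samples" are synthesized here rather than recorded):

    import math, random, struct, wave

    RATE = 44100

    def tone(freq, secs, decay=8.0):
        # A decaying sine burst, standing in for a recorded drum sample.
        n = int(RATE * secs)
        return [math.sin(2 * math.pi * freq * i / RATE) * math.exp(-decay * i / n)
                for i in range(n)]

    def noise(secs, decay=20.0):
        # A decaying noise burst, standing in for a hi-hat sample.
        n = int(RATE * secs)
        return [random.uniform(-1, 1) * math.exp(-decay * i / n) for i in range(n)]

    samples = {'kick': tone(60, 0.25), 'hat': noise(0.1)}
    pattern = {'kick': 'x...x...x...x...',   # 16 steps, 'x' = trigger
               'hat':  'x.x.x.x.x.x.x.x.'}

    step = int(RATE * 0.125)                 # 16th notes at 120 BPM
    buf = [0.0] * (step * 16)
    for name, steps in pattern.items():      # the 'sequencer': place each
        for i, ch in enumerate(steps):       # sample at its trigger steps
            if ch == 'x':
                for j, s in enumerate(samples[name]):
                    if i * step + j < len(buf):
                        buf[i * step + j] += 0.5 * s

    with wave.open('loop.wav', 'wb') as w:   # write one bar as a WAV loop
        w.setnchannels(1); w.setsampwidth(2); w.setframerate(RATE)
        w.writeframes(b''.join(struct.pack('<h', int(max(-1.0, min(1.0, s)) * 32767))
                               for s in buf))

Swap in real recordings and a richer pattern format and you have the skeleton of every step sequencer.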

Heliosmaster(1720) 4 days ago [-]

Talking about synths, I can definitely recommend VCV Rack [0], an open-source virtual modular synth!

What I mostly love is that through plugins you can find virtual versions of existing hardware modules!

[0]: https://vcvrack.com/

ptah(10000) 4 days ago [-]

So many toys, so little time. I played around with this for a bit but need to find time to get really stuck in.

Tepix(3968) 4 days ago [-]

Looks like many of the modules need a purchase.

caffeinewriter(1997) 4 days ago [-]

If you want to see the power of modular synths, I definitely recommend checking out some modular streamers on Twitch. Some use VCV, some use actual racks, but there's a growing community on there.

https://www.twitch.tv/dronehands

https://www.twitch.tv/nitewurx

https://www.twitch.tv/earthvomit

https://www.twitch.tv/joobiedoobiedoo

afroisalreadyin(3665) 4 days ago [-]

It's really awesome that Ableton is going in this direction. Live is a beast, but it's a quite intimidating beast if you don't have experience with DAWs and music software in general. I worked there for three years, and couldn't bring myself to learn the basics. Tutorials like these will definitely make it easier to pick up the basics and start off with Live with more confidence.

josmall(10000) 4 days ago [-]

How was working there? I did a code challenge there last year but didn't make it to the next round. Still kind of bummed about it.

mister_hn(10000) 4 days ago [-]

Can someone here say more about Ableton as employer?

jamesb93(10000) 4 days ago [-]

I have friends who work there in a number of areas and colleagues who have freelanced for them (convolution reverb, for example). It is pretty nice from what I hear, and the focus is very much on enabling creativity before profit.

adamnemecek(17) 4 days ago [-]

I've been working on an IDE for music composition. I'll launch soon http://ngrid.io.

WhitneyLand(3595) 4 days ago [-]

Is there a way to try it or learn more before signup/email list sub?

I started composing about a year ago using general tools like Cubase and GarageBand, and wow, it's tedious, even when the initial sketch is worked out beforehand on a keyboard.

There are just a ton of ideas that come to mind on how it could be an order of magnitude more efficient. Maybe there are apps that don't focus so much on production and do a better job solely on the music-writing part?

w56rjrtyu6ru(10000) 4 days ago [-]

Awesome! Looking forward to it!

chevas(4176) 4 days ago [-]

There's a delay when trying to hit stop on any of the boxes. It's really quite frustrating.

I really like this idea. This is something I want to explore as a non-music person.

coldtea(1239) 4 days ago [-]

>There's a delay when trying to hit stop on any of the boxes.

That's by design. They don't stop arbitrarily, but at bar boundaries, so that they stay on rhythm. In other words, they're not sample-player pads, they're clip-launch pads...

badfrog(10000) 4 days ago [-]

> There's a delay when trying to hit stop on any of the boxes. It's really quite frustrating.

They all start and stop at the beginning of a measure so that it sounds natural/intentional no matter what you click on and when.
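
The underlying quantization is just arithmetic; a tiny sketch (not Ableton's actual code, and the tempo and meter are illustrative):

    import math

    def next_bar(t_seconds, bpm=120, beats_per_bar=4):
        # A stop/start request at time t takes effect at the next bar line.
        bar_len = beats_per_bar * 60.0 / bpm   # 2.0 s per bar at 120 BPM, 4/4
        return math.ceil(t_seconds / bar_len) * bar_len

    print(next_bar(3.1))  # 4.0 -> the clip keeps playing until the bar ends

That deliberate latency is what keeps everything you launch locked to the grid.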

yellow_postit(3957) 4 days ago [-]

Check out Pocket Operators from Teenage Engineering if you haven't seen them already. It's a lovely (to me) line of inexpensive physical synths (< $30 USD) that, as a non-music person, really got me started down a path of wanting to learn more.

TheOtherHobbes(4214) 4 days ago [-]

As a dev, I find Ableton incredibly frustrating. The Live Object Model (LOM) allows custom automation and software control. But it's half-closed, and half-open.

Access is through Max for Live, which is a dataflow language 'programmed' by joining little object blocks to other object blocks - like Scratch. There's unofficial Python support, but it's poorly documented.

Many things are possible, but many other things aren't possible - even though they're available on Push, so obviously the hooks are there.

It would make me unbelievably happy if Ableton opened up the LOM and included a properly documented hook for absolutely every important feature - preferably one that could be used from any language, maybe via OSC, rather than M4L.

To be fair Live at least has a LOM, while other sequencers/DAWs don't. So that's a plus. But it's still a shame it isn't more complete - because that would make all kinds of cool things possible.
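
For the OSC route, here is a hedged sketch of what remote control could look like from Python, using the python-osc package. It assumes some OSC-to-LOM bridge (e.g., a Max for Live device or remote script) is listening on localhost:9000, and the /live/... addresses are hypothetical, defined entirely by that bridge:

    # pip install python-osc
    from pythonosc.udp_client import SimpleUDPClient

    client = SimpleUDPClient('127.0.0.1', 9000)       # assumed bridge port
    client.send_message('/live/tempo', 122.0)         # hypothetical address
    client.send_message('/live/clip/fire', [0, 0])    # e.g., track 0, scene 0

The appeal is exactly what the parent describes: any language that can send a UDP packet could then drive the LOM, with no Max patching required.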

jcelerier(3906) 3 days ago [-]

If you use OSC you might be interested in the DAW I develop: https://ossia.io ; part of its object model is accessible through it, but I haven't had the time to expose everything yet.

gosub(3894) 4 days ago [-]

You should take a look at Reaper. It has macros, total reconfigurability and deep scripting. It's the emacs of the DAW world.

eric_trackjs(10000) 4 days ago [-]

I agree, their object model and developer experience leave a lot to be desired. I wanted to control Live via OSC, so I made a little library to do it via Node:

https://github.com/BrandesEric/AbletonJS

It's a work in progress that, sadly, I have left dormant for a while, but you can basically use Node/JS to control Live (even over a network). It does rely on the internal JS implementation, and you have to add the Max plugin to a track before you can start coding against it, so it's still not anything to write home about.

johnhess(3681) 4 days ago [-]

Ironically for the name, Ableton turns out not to be accessible to the blind.

big_chungus(3932) 4 days ago [-]

It's not exactly easy to make an 'accessible' DAW, especially when you consider that there probably isn't a huge market of blind music producers relative to the amount of work it would take to, say, make a somewhat-sane screen reader interface for a program as complex as Ableton. The interface is very complex as is, and would almost certainly require some re-jiggering before it would work well. I doubt enough people would buy it to ever make it a worthwhile investment.

jeegsy(10000) 4 days ago [-]

There it is, thread complete

PascLeRasc(1937) 4 days ago [-]

This is incredible, I love it. I've recently switched to using Ableton exclusively for making music, it feels like more of an instrument than a computer program to me. It's so expressive and lets me make the sounds I want to hear as well as things I can't even conceive of. I can't really articulate what it is about Ableton, but I really love it and I'm so thankful that it's around.

If anyone wants more, Ableton also has a synthesizer playground site at https://learningsynths.ableton.com/.

vonseel(4189) 4 days ago [-]

I love the learning synths site. I envy the guys who get to make these websites for a living.

tiborsaas(3886) 4 days ago [-]

Did you edit your comment? I don't get the downvotes :/

edit: I don't get the downvotes on my comment about not getting other people's downvotes :) Ok, I guess I asked for it :)

codesternews(2661) 4 days ago [-]

Are you a programmer? Could you please share your creations? Why do you use these tools?

I wanted to learn but I do not know where to start. Just asking out of curiosity. Thanks

flavor8(4134) 4 days ago [-]

I got into programming as a kid through a desire to make music (my first program was a 'song' written in Pascal, playing a series of beep tones at different frequencies and durations). I got into trackers (Fast, Impulse, Buzz) in my teens, and then synths.

These days I have a strong preference towards 'hardware'* only music making -- I spend most of my waking life staring at a screen, so I find it satisfying to step away and be hands-on when creating music. The brains of my studio is a Synthstrom Deluge, an amazingly intuitive little gizmo: it has a built-in synth and drum machine, a looper, a sampler, and a MIDI sequencer that lets you drive all the other synths in the studio. I also have an Arturia Keystep, which has a great live MIDI sequencer. Another fun gadget is the Roland RC 505, which gives you 5 independent and dubbable loops; I drive one of the two fx sends from my mixer through it, letting me build loops live from any of the other synths. Aside from those I've collected a handful of synths, both FM and analog.

(* quotes because pretty much all available synths, analog included, run on software. Most come with USB ports allowing you to connect and change settings, update firmware, etc.)

npmaile(10000) 4 days ago [-]

I'd recommend you check out the Teenage Engineering OP-Z. It's a neat little thing that can do a bunch of the stuff you mentioned, but in a crazy small form factor.

mattmar96(10000) 4 days ago [-]

Curious, I was looking at a Keystep today. Can it be used as a regular MIDI keyboard as well? (No arp/sequence)

WarDores(10000) 4 days ago [-]

I'm in the same boat. However, I love the flexibility of a DAW. My 'hybrid' solution is the Ableton Push (small LCD screen, but not a monitor) and Komplete Kontrol S88 keyboard. Still get my soft synths, but I can do a ton without even looking at my monitor. I have both facing my window rather than my screen, and it's surprisingly easy to get in the flow.

discohead(10000) 4 days ago [-]

My story is similar to yours. I've just recently gone from 100% hardware to a more hybrid setup. I found that while I was having more fun with the hardware setup I wasn't finishing songs. Personally, I still need a computer to do editing and arranging. My new setup is a small eurorack skiff, Bitwig 3 and NI Maschine Mk3/Jam/S49 Mk2 w/ Komplete Ultimate. The integration between NI hardware and Maschine/Komplete Kontrol is amazing, feels like the best of both worlds. I'm definitely considering the Push 2 for Bitwig, although the Maschine Jam does a pretty good job as Bitwig controller.

munificent(1794) 4 days ago [-]

I'm getting back into making electronic music now and I've agonized over whether to go the software/computer/DAW route or hardware/groovebox/sequencer. Right now, I'm doing it all on a computer using Reason and a very nice MIDI controller (Arturia KeyLab 61 mkii <3 <3 <3). But I follow the r/synthesizers subreddit, and all of the pretty blinky lights and buttons look so fun, and the sounds can be amazing.

The main things I like about doing it all on a computer are:

* Great screen and interface. It's easy to drag and drop and see the composition visually. Boxes like the Digitakt look like a lot of fun, but then I watch a YouTube video of one and it's like 80% knob-scrolling through menus on a tiny LCD screen, and that doesn't look like fun.

* Easy file and data management. It's all just files on a hard drive. It's trivial to switch between projects, back up, restore state, etc. Managing that when the data lives on flash cards across a handful of sequencers seems really stressful to me. I'd be so worried about accidentally losing a patch or something.

* It's cheaper. If I want two separate delays with different settings, I can just add a second delay. I don't have to go on Sweetwater and drop another $200. Sure, Eurorack stuff is 'modular', but each module requires shelling out cash. In Reason, I can wire up huge racks of crazy stuff without spending a dime.

But..., man, the hardware stuff looks like a lot of fun. I also feel like it can be a real struggle to get something that sounds rich and full out of Reason. I can get there, but it takes effort. Its default sound tends to be kind of brittle and dry, which is to be expected from software but can be uninspiring. (I should maybe check out a different DAW, but I know Reason well and exploring different software is a whole other can of worms.) With a lot of hardware gear — at least judging by videos online — you power it up and it sounds fat immediately.

I think what really matters the most to me is finding a path that gets me finishing music I like quickly. I don't want to just noodle, but I also want something fun and immediate enough to stay in the moment. I'm still not sure if software or hardware (or a mixture of both?) is the right path for that.

Any thoughts on how to dip my toes in the water with hardware to see if that's a better fit?

TomMarius(10000) 4 days ago [-]

That's one of the most interesting hello worlds (like a real first hello world) I've heard of.

zupreme(4083) 4 days ago [-]

You should try Sonic Pi. It's up your alley.

bauerd(10000) 4 days ago [-]

On a side note, this was written (partly) in Elm: https://twitter.com/abletondev/status/861580662620508160

jamil7(4193) 4 days ago [-]

I'm fairly sure Elm was ripped out and replaced with TS later on.

cubano(4048) 4 days ago [-]

Oh my...this is hardly 'making music'.

Is paint-by-numbers making art? I sure don't think so, so neither is this.

What would have been way way better is a brief introduction to music keys, major and minor, then basic chords, then progressions, then melody, with each section having a few examples of each so that the person going through the article could have some understanding of how music is really made, and how musical concepts actually relate to each other.

Maybe I will shut up and, instead of bitching, actually do something useful and create such a tutorial, using my own beats, progressions, and melodies.

airbreather(10000) 4 days ago [-]

And if you only ever do what is directly taught, you will likely at best be proficient but never great, and often not even entertaining.

I used to have a Fijian flatmate at uni who played guitar, totally self taught.

He played ukulele from age 3 with his sisters in hotel acts back in Fiji. Just one thing (of many, many things) he could do musically was play along to most songs the first time he heard them, anticipating chords/keys, with no formal training.

He did say they had so much spare time on their hands as kids it was all he did, but he could play so many songs and lead breaks, bass riffs etc etc note for note, perfect timing.

While we were at uni he cleaned up because he could just roll up to any house band for the night and fill in on almost any instrument, pocket a hundred bucks or more, free drinks and at least one girl.

tomcam(589) 4 days ago [-]

Did you, ah, actually follow the tutorial?

readhn(4195) 4 days ago [-]

> Is paint-by-numbers making art?

haha is this art? Do you really need to go to school for a decade /study art to be able to paint this?

https://shorturl.at/krYZ4

coldtea(1239) 4 days ago [-]

>Oh my...this is hardly 'making music'.

This is a tutorial. Learning scales on the piano is hardly 'making music' either.

>Is paint-by-numbers making art? I sure don't think so, so neither is this.

Painting by numbers surely is one, albeit limited, form of expression, and thus art.

And if you could re-arrange the numbers and change colors, and add your own parts (like you can here) it would be 100% art.

>What would have been way way better is a brief introduction to music keys, major and minor, then basic chords, then progressions, then melody, with each section having a few examples of each so that the person going through the article could have some understanding of how music is really made, and how musical concepts actually relate to each other.

That would be better for making, e.g., a classical or jazz musician.

(I was trained when young in those things).

This is for another style of music, one that could not give fewer fucks about those 'musical concepts' (you can use some of them, and some are covered in the tutorial, but you can apply them instinctively or bypass them altogether; great techno tracks using just one chord, or of indeterminate tonality, are very common), but which still enables all kinds of expression, to the point of creating works that can move listeners to tears and heightened emotional states.

AngryData(10000) 4 days ago [-]

You don't need to know anything about music theory to make music. You don't even need to know what a note is to make music; music predates every concept you just mentioned, and it likely predates language. It seems pretty elitist to assume you need all or any of that to make music. Does it help? Sure, in the same way art classes help you make art. But you don't need to take an art class to paint a beautiful picture with time and dedication.

gnulynnux(10000) 4 days ago [-]

I don't know anything about making music and haven't really engaged much with other such tutorials.

Those still exist and have their place, but I like this; it lets me explore the concepts. This is fun and I like it.

uhoh-itsmaciek(10000) 4 days ago [-]

There are a lot of different ways to make music, and just as many ways to learn how to do so.

fenwick67(10000) 4 days ago [-]

Did you go past the first page? It does eventually start talking about scales etc.

fao_(3934) 4 days ago [-]

'Better' is relative. People will learn that by looking at what other artists are doing and figuring it out themselves, and they will remember it better for that reason.

Everyone I know who is into making music (which includes a large mix of professional and non-professional musicians) started out in electronic music by putting interesting sounds together, and then by imitating what their favourite artists have done.

What you describe is like starting out an art course by teaching students to mix colour -- describing the colour spectrum, which colours lead to what, etc. But really the students will learn all of that and much, much more by just experimenting with combining random colours and seeing what they get out of it, and sharing and comparing with the rest of the class.

I think most people have a propensity to teach theory first; indeed, this is how it is generally done in academic settings. But people tend to learn better by practicing something first and then seeing how the theory generalizes what they have learned. After all, you can't really learn something properly until you have experience with it.

I know this because both of my parents are teachers -- one taught an adult class for decades and is now teaching children; the other is currently a long-distance tutor for a university course and has taught (and still teaches) art classes. But here, have a quote:

    According to studies, students who practise what they're 
    learning first-hand are three and a half times more 
    likely to retain that knowledge than when they're sitting 
    in a lecture room, hand-scribing notes.
(https://www.studyinternational.com/news/bridging-the-gap-bet...)
TheSpiceIsLife(4130) 4 days ago [-]

No true Scotsman, eh?

> Is paint-by-numbers making art?

Sure, why not? I have a paint-by-numbers that a mate coloured in hanging on my wall.

Is there anything wrong with an 'intro to making and using loops'?

Maybe it'll get a few people interested enough to go through your tutorial.

seph-reed(4206) 4 days ago [-]

Just want you to know someone else agrees.

haunter(3998) 4 days ago [-]

I think you are looking at it the wrong way. Imagine there is a language you don't speak, but you get a phrasebook with 100 useful sentences: general expressions, what to say, how to ask for things, etc. Now if you visit a country where they speak this language, it will be very useful. You don't know the grammar; you don't even know the proper pronunciation. Yet you already have a base to build upon and express yourself. This site is like that. You don't _have to_ know music theory and such to start making music. And it's not just music; I'd say the same applies to writing as well.

whiddershins(2572) 4 days ago [-]

I agree with you but I would put it differently.

The notion that music is at all a grid is just ... not true. It's true of a tiny subset of music.

I use Ableton (and grids) all the time, but this way of thinking about "music" is just reinforcing a weird picture of what it is.

Even scales and theory and even music notation enforce a confusing deconstruction of what music is.

But it's really hard to articulate this.

mntmoss(10000) 4 days ago [-]

It takes a whole chapter to explain rhythm because I have witnessed people spend literally over an hour stuck on 'counting out a measure' with varying note lengths. This series is starting where a from-scratch musical education must start.

An education in theory does end up focused on note interval structures, and there's an endless amount to talk about there; but then, you don't need a great deal of traditional theory for simple songwriting, since major and minor triads will suffice. This tutorial covers those, but in my skimming it seemed to stop short of explaining circle-of-fifths usage, which is probably the one most essential thing to take composition in the large from 'I have to guess and check' to 'I have a guideline to work from that lets me calculate ahead.'
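
The circle of fifths is mechanical enough to fit in a few lines, which is part of why it works as a "calculate ahead" guideline: each step is just +7 semitones mod 12 (a quick sketch, sharps only for simplicity):

    NOTES = ['C', 'C#', 'D', 'D#', 'E', 'F', 'F#', 'G', 'G#', 'A', 'A#', 'B']

    def circle_of_fifths(start='C'):
        # Each successive entry is a perfect fifth (7 semitones) higher.
        i = NOTES.index(start)
        return [NOTES[(i + 7 * k) % 12] for k in range(12)]

    print(circle_of_fifths())
    # ['C', 'G', 'D', 'A', 'E', 'B', 'F#', 'C#', 'G#', 'D#', 'A#', 'F']

Neighbors on this list share all but one scale tone, which is why moving one step is a "safe" key change or chord choice.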

aylmao(3837) 4 days ago [-]

Hey pal, do follow the tutorial. Music keys, scales, chords and progressions, basslines, melodies, etc. are all covered with examples and/or exercises. It's not as deep as a full music theory course, but I don't think anyone is claiming this will get you a music major ¯\_(ツ)_/¯.

It even has an advanced topics section with an intro to pentatonic and octatonic scales, triads and inversions; check it out in the menu.

JacKTrocinskI(4167) 4 days ago [-]

Ableton is top notch, but for me nothing beats coming up with melodies in FL Studio's 'Piano Roll'; a lot of fun times spent in that program.

taffronaut(10000) 4 days ago [-]

FL Studio has a great community, and Image-Line deserves kudos for their 'lifetime' license model. I used it extensively for a few years and found it solid and productive. It's maybe closer to Ableton than to Cubase.

ubermonkey(10000) 4 days ago [-]

No discussion of Ableton should omit high profile users like Imogen Heap or Zoe Keating.

brokenmachine(4222) 3 days ago [-]

Zillions of electronic artists use Ableton.

Although I like Imogen Heap, I don't see why she's so deserving of a special mention.

marapuru(10000) 4 days ago [-]

Very cool and interesting website.

I recently bought a Maschine MK2 to get into the music making thing. It surprised me how easy it is to use.

My wife is a professional acoustic musician, and although she was at first a bit hesitant about this 'instrument', she quickly came around, and we started making music together.

It's perfect for me since it is very well arranged; the buttons make sense in my head.

I play a bit of guitar and always have trouble making sense of the notes in my head.

marpstar(10000) 4 days ago [-]

I'm a Maschine user as well, but I've got a background in guitar, having played in rock bands from my early teens through my mid-20s.

I've always been intrigued by MIDI, but always felt 'limited' by piano-style keyboard MIDI controllers. Maschine really helped me break through my writer's block and made playing music fun again.

I can put Maschine in front of my 5 year old and he can figure it out. It makes more sense to him than my 61-key MIDI controller does.

S_A_P(4190) 4 days ago [-]

I really tried to get into Ableton. I found a Push 2 at a pawn shop, and while I can say that the Push 2 controller is pretty damn amazing and turns Ableton into a pretty cool device, I can't get the arrangement workflow to feel natural to me. I am a long-term linear sequencer user, and Logic is pretty much muscle memory to me now. However, I've used FL Studio since version 1.x, and that also feels much easier to use than Ableton. I can create patterns super easily in Ableton, but turning that into a song is just clunky to me.

amatecha(10000) 4 days ago [-]

In the same boat... I used trackers, then FL Studio, Cubase for a while, then Logic for years, and I've always found Ableton Live really hard to get into.

fenwick67(10000) 4 days ago [-]

Wait, were you arranging in the session view and not in the arrangement view?

vonseel(4189) 4 days ago [-]

I use both Logic and Ableton. What about the arrangement is so difficult for you? My complaints about Ableton are much more editing-centric. I also greatly miss take folders when I work in Ableton, and I prefer mixing with busses to grouping tracks in Ableton.

rock_hard(3710) 4 days ago [-]

I came from Cubase and it took me a couple of attempts to get used to Ableton.

Frankly, thinking of it as a sequencer is a limiting mental model.

Think of it as an instrument to jam with.

When I used Cubase, I spent about 20% of the time jamming together a basic idea and then 80% arranging.

Ableton flipped that for me... now I spend 80% of the time jamming (and loving it), and only at the very end do I quickly create the arrangement, once I am already super familiar with all the parts I created while jamming.

It's had a really positive effect on my creative output!

bartproost(4216) 4 days ago [-]

I love seeing the big brands picking up web audio. I refuse to work on anything else these days, and it's easier than I thought when I started. I built 5 web games using tone.js for Red Bull Mind Gamers last year and just launched a site that auto-generates unlimited royalty-free MP3s using web audio for a dollar [1]. [1] https://strikefreemusic.com

jwyatt1995(10000) 4 days ago [-]

Do you have any open source work? I've been playing around with web audio applications myself and would love to poke around.

fractalf(10000) 4 days ago [-]

Ableton is great and paved the way for a more creative and intuitive workflow! I switched from Cubase very early on and never looked back. That is, until I found Bitwig (https://www.bitwig.com), which supports Linux! They also deserve a shout-out for taking it even further!

kofejnik(4069) 4 days ago [-]

I was totally floored seeing deadmau5's masterclass; he composes in Ableton on Windows, using mouse and keyboard shortcuts, no controllers!

haywirez(3239) 4 days ago [-]

Ableton is a different ballgame if you have the Push interface. Other than that, Reaper needs a mention: it is the best DAW [1] in terms of functionality and power. Truly for power users, though, and not necessarily a musical idea starter.

[1] http://reaper.fm/

jonathanstrange(10000) 4 days ago [-]

I use Reaper for everything.

baldfat(3004) 4 days ago [-]

I also use Bitwig (and Tracktion's Waveform 10). They both work on Linux and are great.

Bitwig uses a subscription program for updates: if you want the latest version, you buy a subscription that lasts for a year. After the subscription is over, you keep whatever version you are on for life, but get no more updates.

0x70dd(10000) 4 days ago [-]

Recently I switched to Ardour for recording guitars on Linux - it has great VST support, allows syncing music with videos, and has automation built-in. Even Amplitube works through LinVst.

I was also blown away by their pricing - you can pay as little as $1 for the full version, which is what I did, but after seeing how well it works, I did a donation to match the recommended price of $45.

uxcolumbo(10000) 4 days ago [-]

Can you give more details about why you switched to Bitwig?

I'm a beginner, so don't know much about Ableton or Bitwig.

kabacha(10000) 4 days ago [-]

€379 - wow, that's an absurd amount of money for personal software. Combine that with every tutorial being priced too, and having to buy synths and samples, etc., and this turns into one expensive hobby.

codesternews(2661) 4 days ago [-]

To everyone commenting here: are you hobbyists, programmers, or professionals? Why do you use these tools?

Just asking out of curiosity. Thanks

alok-g(2731) 4 days ago [-]

I have heard good reviews about Bitwig elsewhere too. A few questions, as a newbie to DAWs:

1. What would one miss if using Bitwig over others, if anything?

2. Same question as the above for Reaper?

3. How would the two compare with each other?

Note: I have already read the comments replying to yours.

Thanks.

Shinchy(10000) 4 days ago [-]

I've been through them all: started on Audition, then Cubase, then Pro Tools, and finally settled on Ableton. There's just nothing like Ableton for composing, especially with the right hardware (a controller or Push). It allows you to get completely lost in a way no other DAW I've used can match. Although I do now use Pro Tools for mixing, since I find it to be far superior in that area.

kristiandupont(1832) 4 days ago [-]

Bitwig looks really promising. But how is the VST landscape? Is it even supported?

EDIT: sorry, my question was about whether Linux supports VST. I would assume that Bitwig did at v0.1 :-)

vectorEQ(10000) 4 days ago [-]

I prefer Renoise; it's cheap and super easy to work with :D. I never actually liked the workflow of Ableton (personal taste, I suppose). Its audio engine is fairly decent, but IMHO, at the ~500 euros it costs to get the suite, it's a bit expensive for what you get.

Cubase has a much superior audio and processing engine, like Logic Pro for Mac users, and is in a similar pricing range. It has fewer quality built-in DSP effects, but most DAWs are lacking there.

Renoise, at around 50 euros, is super cheap in comparison, and most of the DSP effects it has offer awesome performance and quality.

That being said, Ableton does offer a better 'live' environment for live/performance-based things.

On workflows, Bitwig is really the innovator, as it combines workflows from different DAWs like Cubase and Ableton and lets users choose their own workflow instead of forcing one upon them.

enqk(3729) 4 days ago [-]

Funnily enough, the founder/creator of Renoise worked for Ableton in Renoise's early years.

PavlovsCat(4194) 4 days ago [-]

Pro/Fasttracker in the browser: https://www.stef.be/bassoontracker/

And it can load modules directly from modarchive and modules.pl, too! E.g.:

https://www.stef.be/bassoontracker/?file=https%3A%2F%2Fapi.m...

FWIW, that's how I learned to make music: just learning the commands and making things I liked by ear. Though being a kid with no internet probably helped.

dysoco(3683) 4 days ago [-]

I might be missing something here, but I click record, play some notes with the A-Z keys, stop the recording, and when I click play nothing happens (I hear no audio); with the demos, however, I do hear the audio.

Any idea what I'm missing?

EDIT: I was supposed to add a sample (in the Edit Samples tab), in case anyone else was confused like me.





Historical Discussions: Estimates that mineral levels in vegetables have dropped by up to 90% since 1914 (September 13, 2019: 742 points)

(742) Estimates that mineral levels in vegetables have dropped by up to 90% since 1914

742 points 6 days ago by hispanic in 2010th position

www.ncbi.nlm.nih.gov | Estimated reading time – 175 minutes | comments | anchor

1. Introduction

Magnesium is a critical mineral in the human body governing the activity of hundreds of enzymes encompassing ~80% of known metabolic functions [1,2,3,4]. Despite the importance of magnesium, it remains one of the least understood and appreciated elements in human health and nutrition. It is currently estimated that 45% of Americans are magnesium deficient and 60% of adults do not reach the average dietary intake (ADI) [5,6,7,8]. A daily intake (DI) of 3.6 mg/kg is necessary to maintain magnesium balance in humans under typical physiological conditions, with the ADI for adults estimated at between 320 and 420 mg/day (13–17 mmol/day) [9,10].
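
As a worked example of that balance figure (the 70 kg body weight below is illustrative, not a figure from the text):

    $3.6\ \text{mg/kg/day} \times 70\ \text{kg} \approx 252\ \text{mg/day}$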

The high rate of magnesium deficiency now postulated [5,6,7,8] can be attributed in part to a steady decline in general magnesium content in cultivated fruits and vegetables, a reflection of the observed depletion of magnesium in soil over the past 100 years [11,12,13]. A report to Congress was already sounding the alarm as far back as the 1930s, pointing out the paucity of magnesium, and other minerals, in certain produce [14].

This loss of mineral content across "healthy" food choices has been compounded by a historical rise in the consumption of processed food, which has been shown to impede magnesium absorption and contribute to the current state of magnesium deficiency (defined by serum blood levels, "normal" being considered as 0.7–1 mmol/L and hypomagnesaemia as <0.7 mmol/L) [15,16,17,18,19]. Given the role of magnesium in calcium and potassium transport, cell signaling, energy metabolism, genome stability, DNA repair and replication, it is not surprising that hypomagnesaemia is now associated with many diseases including hypertension, coronary heart disease, diabetes, osteoporosis, and several neurological disorders [1,2,4,20,21,22,23].

Despite its importance to human health, magnesium remains one of the least investigated macrominerals, and while it is getting more attention, this still pales in comparison to the level of investigation into other macronutrients such as calcium or iron. The root cause of this oversight likely lies in the fact that iron and calcium deficiency can be diagnosed through a variety of clinically well recognized associated signs and symptoms, readily supported by the commonly used, clinically validated diagnostic assays available for verification [24,25,26]. This is not the case, however, for magnesium, where deficiency does not present with unique and identifiable clinical manifestations. Furthermore, even if clinical signs and symptoms are present, they are overshadowed by or taken to be the result of common co-morbidities such as diabetes and cardiovascular disease. The lack of a standardized laboratory test that accurately describes magnesium status [27] remains one of the most vexing challenges in the magnesium field, and contributes to the relative anonymity of magnesium compared to other macronutrients, which in turn further contributes to magnesium deficiency and its sequelae.

Number of basic and clinical research papers published (Y-axis) as screened using Web of Science [v.5.28.1] under the search terms "magnesium deficiency" (yellow), "calcium deficiency" (green) or "iron deficiency" (blue) (performed 4 May 2018) over the past 25 years (X-axis; 2017–1992). (Inset) Trend lines show the relatively flat research output on magnesium deficiency relative to calcium and iron.

Moving forward, it is clear that there will be an important role for magnesium supplementation across, and within, certain populations. The key to unlocking the benefits of magnesium will be to understand the factors contributing to inadequate dietary intake, including the complexity of absorption, secretion, and reabsorption, and to address the challenges of representative compartment analytics. These factors make most human clinical magnesium supplementation studies difficult to extrapolate and interpret accurately, leading to magnesium research being described as, "Far from complete and the conclusions that have been drawn are far from clear." [28].

Causes of Magnesium Deficiency

Despite the importance of magnesium to human health and wellness, 60% of people do not meet the recommended DI of 320 mg/day for women and 420 mg/day for men, with 19% not obtaining even half of the recommended amount [5,6,29]. Magnesium dietary deficiency can be attributed not just to poor mineral intake due to modern diets; historical farming practices may play a significant role as well. The highest food sources of magnesium are leafy greens (78 mg/serving), nuts (80 mg/serving), and whole grains (46 mg/serving), none of which individually provides a high percentage of the recommended dietary allowance (RDA) of magnesium or is eaten consistently or sufficiently for adequate magnesium intake [10,15,30]. Increasing demand for food has caused modern farming techniques to impair the soil's ability to restore natural minerals such as magnesium. In addition, the use of phosphate-based fertilizers has resulted in the production of aqueously insoluble magnesium phosphate complexes, further depriving the soil of both components [31].

Many fruits and vegetables have lost large amounts of minerals and nutrients in the past 100 years, with estimates that magnesium levels in vegetables have dropped by 80–90% in the U.S. and the UK [11,12,13,32,33]. It is important to note that the USDA mineral content of vegetables and fruits has not been updated since 2000, and perhaps even longer, given that the data for 1992 could not be definitively confirmed for this review. The veracity of the mineral content supporting the claim of demineralization of our food sources should be verified, particularly since farming methods and nutrient fertilization have undoubtedly advanced in the last 50 years. Hence, there is a clear need for a new initiative to study the current mineral content of vegetables and fruits grown in selective markets to get a current and validated assessment of the mineral and nutrient value of commonly consumed fruit and vegetable staples.

The average mineral content of calcium, magnesium, and iron in cabbage, lettuce, tomatoes, and spinach has dropped 80–90% between 1914 and 2018 [30,34,35,36,37]. Asterisks indicate numbers could not be independently verified.

Modern dietary practices are now estimated to consist of up to 60% processed foods [38]. Processing techniques, such as grain bleaching and vegetable cooking, can cause a loss of up to 80% of magnesium content [39]. Beverages, such as soft drinks, which contain high phosphoric acid, along with a low protein diet (<30 mg/day), and foods containing phytates, polyphenols and oxalic acid, such as rice and nuts, all contribute to magnesium deficiency due to their ability to bind magnesium into insoluble precipitates, negatively impacting magnesium availability and absorption [40,41,42,43]. Magnesium in drinking water contributes about 10% of the ADI [44]; however, increased use of softened/purified tap water can contribute to magnesium deficiency due to filtering or complexation of the metal [45]. In addition, fluoride, found in 74% of the American population's drinking water, with ~50% of drinking water having a concentration of 0.7 mg/L, prevents magnesium absorption through binding and production of insoluble complexes [46,47,48]. Ingestion of caffeine and alcohol increases renal excretion of magnesium, causing an increase in the body's demand [49,50]. Common medications can also have a deleterious effect on magnesium absorption, such as antacids (e.g., omeprazole), due to the increase in gastrointestinal (GI) tract pH (see Section 2.5) [51,52]; antibiotics (e.g., ciprofloxacin) [53]; oral contraceptives, due to complexation [54,55]; and diuretics (e.g., furosemide and bumetanide), due to an increase in renal excretion (see Section 2.6) [56,57].

2. Magnesium Absorption

2.1. Anatomic Considerations

Unlike other minerals, magnesium can be absorbed along the entire length of the gastrointestinal tract. However, different segments contribute unequally to the overall absorption of dietary magnesium, and due to the complex nature of magnesium absorption their contributions can vary; under normal physiologic conditions, the general guide is that the duodenum absorbs 11%, the jejunum 22%, the ileum 56%, and the colon 11% [3,58].

Percentage of magnesium absorption in the GI tract. The majority of magnesium is absorbed in the distal portion of the small intestine. The ileum absorbs 56%, the jejunum 22%, the duodenum 11%, and colon 11% [3,58].

2.2. Absorption

Two transport systems, one passive and one active, are known to be responsible for magnesium uptake. At lower intestinal magnesium concentrations, a transcellular and saturable transport mechanism predominates, relying on active transporters [20,59,60], the Transient Receptor Potential Channel Melastatin members (TRPM6 and TRPM7), which possess unusual properties designed to strip away the hydration shell of magnesium (see Section 2.3) [61,62,63,64,65,66,67]. This active transport occurs predominantly in the distal small intestine and colon and, due to saturability, is responsible for only 10–20% of total magnesium absorbed [68]. Additionally, active transport can raise magnesium absorption from the typical 30–50% [69] of ingested magnesium up to 80% during periods of lower luminal concentration [70,71]. TRPM6 and TRPM7 have high sensitivity to intracellular magnesium levels, causing inhibition and saturation of transcellular transport at higher magnesium concentrations, so that magnesium absorption becomes dominated by paracellular transport [64,67].

Magnesium absorption in the intestine. Magnesium is absorbed through either a saturable transcellular pathway (left), in which TRPM6 and TRPM7 actively transport magnesium into the GI epithelial cells, from which it is effluxed through a Na+/Mg2+ exchanger, and/or a paracellular pathway (right), in which magnesium traverses the tight junctions of the intestinal epithelium, assisted by magnesium-associated claudin proteins.

Passive paracellular diffusion occurs in the small intestine and, because it is non-saturable, is responsible for 80–90% of overall magnesium absorption [20,58,60,72]. The driving force behind this passive transport is a high luminal concentration, ranging between 1 and 5 mmol/L, which contributes to an electrochemical gradient and solvent drag of magnesium through the tight junctions between intestinal enterocytes [59,73].

The distal jejunum and ileum have relatively low expression of certain tightening claudin proteins (1, 3, 4 and 5) [74,75], the integral membrane proteins of tight junctions, which allows higher permeability and, hence, higher magnesium transport [75,76,77]. Claudins are also known to form paracellular channels, in monomeric or heteromeric combinations, which can selectively transport ions such as calcium and magnesium. Studies have shown that when expression of claudins 2, 7 and 12 (all highly expressed in the small intestine) [75,76] was decreased, magnesium paracellular transport was also decreased, indicating that certain claudins play a significant role in passive magnesium transport and absorption [78,79]. Claudins 16 and 19 have been shown to be involved in magnesium reabsorption in the kidney but are not expressed in the GI tract [80]. A series of in vitro experiments, designed to explore the involvement of active (solvent drag, voltage dependent or transcellular transport) and passive paracellular transport mechanisms of magnesium absorption, showed that the paracellular passive pathway was mainly mediated by claudin proteins at the tight junctions, attributed to their ability to remove the hydration shell of magnesium [78,79,81].

Hydration shells of both magnesium and calcium. The hydrated radius of magnesium is >400 times larger than its dehydrated radius, a far more pronounced difference than calcium's (~25-fold) [83,84]. This increase in radius, unlike calcium's, prevents magnesium from passing through narrow ion channels.

Claudins are a large family of proteins and the identification, localization, and function of these integral membrane proteins is part of an emerging science, which at present only allows a glimpse into their role in mineral transport in general, and magnesium transport in particular [75,76,77,82].

2.3. Hydration Shell

Magnesium is a divalent cation, a property which plays a critical role in how the mineral is absorbed [83,84]. Magnesium is the most densely charged of all the biological cations, due to a high charge-to-radius ratio, resulting in high hydration energy for the Mg2+ cation [83]. This hydration energy results in tight coordination with a double layer of water molecules, increasing the hydrodynamic radius to 400 times that of the dehydrated radius [83,85] and resulting in an aquated cation that is too large to traverse typical ion channels [86]. The removal of the hydration shell around magnesium is a precondition for absorption and can be accomplished both by TRPM6 and TRPM7 and by the magnesium-associated paracellular claudins [64,67,87,88].

2.4. Distribution in the Human Body

Once magnesium is absorbed, it is distributed throughout the body for use and storage. Only 0.8% of magnesium is found in blood, with 0.3% in serum and 0.5% in erythrocytes, and a typical total serum magnesium concentration between 0.65–1.0 mmol/L [89,90]. The rest is distributed in soft tissue (19%), muscle (27%), and bone (53%) [89,90,91]. Up to one-third of the magnesium stored in bone is exchangeable [92], and while the total amount of magnesium stored in bone can change with age, bone remains the most significant store of exchangeable magnesium.

Magnesium homeostasis. Dietary magnesium can be absorbed along the entire length of the GI tract and into the blood but can also be excreted in feces (between 20% and 70% of the ingested amount) [69]. Once in the blood, magnesium is quickly taken up into tissues with muscle containing 27%, bone 53%, and other tissues holding 19% [3,58,93]. Blood and tissue magnesium are in a constant state of exchange and the kidney, which can filter up to 2400 mg of magnesium per day [94] (or 10% of average magnesium content in an adult [95]) can excrete between 5% and 70% of that magnesium depending on multiple variables.

2.5. Factors That Influence Magnesium Absorption

Magnesium concentration within the GI tract is a key driver of how and which of the two transport systems become engaged in magnesium absorption. Active transport in the colon dominates absorption at lower magnesium concentrations but becomes saturated when luminal amounts are between 125 and 250 mg [70,72]. When luminal amounts reach ≥250 mg the absorption mechanism changes and is governed by passive transport in the distal small bowel [70,72].

That being said, the solubility of the magnesium form (inorganic salt, organic salt, chelate, etc.) is an important factor, with increased solubility correlating with increased absorption. The pH of the GI tract affects how soluble the magnesium form is, with a lower pH increasing magnesium solubility [96,97]. This can make magnesium absorption increasingly difficult as it travels down the small intestine, with pH steadily increasing to 7.4 in the ileum. In 2005, Coudray et al. showed that magnesium absorption is significantly affected by GI tract pH in rats [97]. The study showed that as pH gradually increased, the solubility of ten magnesium salts (organic and inorganic) gradually decreased, from 85% in the proximal intestine to 50% in the distal intestine. Other studies showed that a commonly used proton pump inhibitor, omeprazole, affected passive transport in vitro [78,79]. They showed that omeprazole suppressed passive magnesium absorption by causing luminal pH to rise above the range (pH 5.5–6.5) in which claudin 7 and 12 expression is optimized, magnesium hydration shell stripping is most effective, and electrostatic coupling between magnesium and the transporter takes place [79].

Magnesium absorption is enhanced by factors that contribute to water flow across the intestinal mucosal membrane, such as simple sugars and urea [59,98]. Therefore, meals containing carbohydrates and medium-chain fatty acids will increase magnesium uptake, but will also increase demand, since magnesium is critical to glucose breakdown and insulin release [99]. Solid meals, by prolonging GI transit time, can also enhance magnesium absorption [100]. Increased dietary fiber intake (e.g., cellulose, pectin, and inulin) does not appear to affect magnesium status but can increase magnesium excretion in feces [101,102,103].

2.6. Factors That Affect Magnesium Status

Renal function is a key player in magnesium homeostasis, filtering approximately 2400 mg/day [94], of which anywhere between 5% and 70% may actually be excreted in the urine [89,103,104]. This wide range depends on ever-changing variables such as dietary intake, existing magnesium status, mobilization from bone and muscle, and the influence of a variety of hormones (e.g., parathyroid hormone, calcitonin, glucagon) [105,106,107] and medications (e.g., diuretics and certain chemotherapies that can cause abnormally high magnesium excretion) [56,90,104,108]. Renal magnesium wasting can occur in patients who are on long-term diuretic management, as well as those with diabetes. The resultant magnesium deficiency leads to higher nutritional requirements and an inevitable increase in magnesium absorption to re-establish homeostasis [109].
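
Plugging in the figures above, daily urinary magnesium losses can span more than an order of magnitude:

    $0.05 \times 2400\ \text{mg/day} = 120\ \text{mg/day}; \qquad 0.70 \times 2400\ \text{mg/day} = 1680\ \text{mg/day}$

which is part of why urine measurements alone are such a noisy proxy for magnesium status.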

Gender also contributes to magnesium status, as estrogen enhances magnesium utilization, favoring its uptake by soft and hard tissues [110]. Young women have better magnesium retention than young men and, as a result, lower circulating magnesium levels [111,112], particularly at the time of ovulation or during oral contraceptive use [54,112,113,114,115], when estrogen levels are highest. Consequently, samples taken in a mixed-gender population, or at time points that do not take this into account, could further confound human magnesium studies.

Body mass index (BMI) may also affect magnesium status, particularly in women and children. Patients considered obese (BMI ≥ 30) have been shown to have lower magnesium consumption and reduced magnesium status compared to non-obese, age-matched controls [116,117,118,119].

3. Analytical Challenges in Establishing Magnesium Status

Whether an analytical test is useful rests on a fundamental principle: the concentration of an analyte in the compartment being measured (e.g., blood, urine, or epithelial samples) must reflect the status of that analyte in the body, or at least be relevant in the measured compartment. Because of the way magnesium is compartmentalized, the typical compartment analytics (blood and urine) may not provide an accurate proxy of magnesium status and can mislead the practitioner.

A literature search identified 54 randomized controlled magnesium supplementation studies (see Methods) and showed that the majority examined blood and urine, with only a few examining fecal material, other tissues such as muscle, or different cell specimens (Table 1).

Table 1

Magnesium clinical trial studies by year with method of determining magnesium status indicated. Expanded from Zhang et al. [130].

Study | Blood (Serum, Plasma) | Urine (24 h, NS) | Intracellular (RBC, WBC, SL, Other) | Fecal | Tissue (Muscle, Other) | Challenge Studies (Balance, Retention)
1. Zemel, 1990, USA [131] χ χ
2. Facchinetti, 1991, Italy [132] χ χ χ
3. Desbiens, 1992, USA [133] χ
4. Ferrara, 1992, Italy [134] χ χ
5. Bashir, 1993, USA [135] χ χ
6. Plum-Wirell, 1994, Sweden [136] χ χ χ
7. Witteman, 1994, Netherlands [137] χ χ
8. Eibl, 1995, Austria [138] χ χ
9. Eriksson, 1995, Finland [139] χ
10. Itoh, 1996, Japan [140] χ χ
11. Sanjuliani, 1996, Brazil [141] χ
12. Costello, 1997, USA [142] χ χ χ
13. Sacks, 1997, USA [143] χ
14. de Valk, 1998, Netherlands [144] χ χ χ
15. Lima, 1998, Brazil [145] χ χ χ
16. Walker, 1998, UK [146] χ
17. Weller, 1998, Germany [147] χ χ χ χ χ
18. Hagg, 1999, Sweden [148] χ χ
19. Wary, 1999, France [149] χ χ χ χ χ
20. Zorbas, 1999, Greece [150] χ χ χ χ
21. Schechter, 2000, USA [151] χ χ
22. Walker, 2002, UK [152] χ
23. Mooren, 2003, Germany [153] χ
24. Rodriguez-Moran, 2003, Mexico [154] χ
25. Walker, 2003, UK [155] χ χ χ
26. Závaczki, 2003, Hungary [156] χ
27. De Leeuw, 2004, Belgium [157] χ χ
28. Pokan, 2006, USA [158] χ
29. Rodríguez, 2008, Mexico [159] χ
30. Almoznino-Sarafian, 2009, Israel [160] χ χ
31. Lee, 2009, South Korea [161] χ χ
32. Romero, 2009, Mexico [162] χ
33. Aydın, 2010, Turkey [163] χ
34. Kazaks, 2010, USA [164] χ χ χ
35. Nielsen, 2010, USA [165] χ χ χ
36. Zorbas, 2010, Greece [166] χ χ χ χ
37. Chacko, 2011, USA [167] χ
38. Romero, 2011, Mexico [168] χ
39. Esfanjani, 2012, Iran [169] χ
40. Laecke, 2014, Belgium [170] χ χ
41. Cosaro, 2014, Italy [171] χ χ χ
42. Rodriguez, 2014, Mexico [172] χ
43. Setaro, 2014, Brazil [173] χ
44. Navarrete-Cortes, 2014, Mexico [174] χ χ
45. Guerrero-Romero, 2015, Mexico [175] χ
46. Park, 2015, USA [176] χ
47. Baker, 2015, USA [177] χ
48. Joris, 2016, Netherlands [178] χ χ
49. Terink, 2016, Netherlands [179] χ
50. Moradian, 2017, Iran [180] χ
51. Rajizadeh, 2017, Iran [181] χ
52. Cunha, 2017, Brazil [182] χ
53. Bressendorff, 2017, Denmark/Norway [183] χ χ
54. Bressendorff, 2017, Denmark [184] χ χ χ
55. Toprak, 2017, Turkey [185] χ
Total: Serum 35, Plasma 11, 24 h 16, NS 10, RBC 12, WBC 3, SL 3, Other (intracellular) 3, Fecal 2, Muscle 4, Other (tissue) 2, Balance 1, Retention 1

3.1. Blood Levels

The current "normal" reference interval for serum magnesium is 0.7–1 mmol/L and was established from serum magnesium levels gathered in a U.S. study of presumably healthy individuals aged 1–74 years, conducted between 1971 and 1974 [120]. Serum levels can be influenced by dietary magnesium intake and albumin levels, but also by short-term variability, day to day and hour to hour, in the amount of magnesium absorbed and excreted through the kidneys [121]. Blood levels have been shown to increase in response to magnesium supplementation, but this does not signal that complete equilibrium has been established between blood and the nearly 100-fold larger body reservoir of magnesium. In fact, the much larger exchangeable pool of magnesium is preferentially drawn upon to keep blood levels within a narrow range, which is a key reason why blood measurements can easily mask deficiency [122,123].

The tight control of serum magnesium levels, which represent only 0.8% of total body stores (see Section 2.4), therefore makes serum a poor proxy for the 99.2% of magnesium in other tissues that constitutes the body's true magnesium status. Furthermore, this narrow serum range feeds the common perception among clinicians that magnesium levels rarely fluctuate and are therefore not indicative of the condition for which blood tests are ordered. As a result, practitioners are apt to order blood tests for magnesium infrequently, if at all, and when a magnesium level does appear in the patient chart, it is more often as part of a blood test panel than as a test purposely ordered to determine magnesium status [89,124,125,126]. This contributes significantly to magnesium deficiency not being recognized as a modifiable nutritional intervention, and to magnesium remaining, in general, a neglected mineral.

Red blood cell (RBC; erythrocyte) magnesium levels are often cited as preferable to serum or plasma levels due to their higher magnesium content (0.5% vs. 0.3% of body stores, respectively). Some RBC studies report correlation with magnesium status, particularly when subjects are placed on long-term (~3 months) magnesium-replete or -deplete diets. However, most studies using RBC magnesium endpoints do not satisfy this long-term design, and the method has not been tested in nearly enough randomized clinical studies to be considered sufficiently robust or reliable (Table 1) [127,128,129]. In addition, the majority of RBC studies do not validate the method through inter-compartmental sampling (e.g., urine and muscle), challenging the claim that this test reliably represents the large magnesium pool.

3.2. Urine Levels

Due to the large amount of magnesium filtered and the variable degree of reabsorption and secretion (see Section 2.6), magnesium levels in the urine correlate with neither the amount of magnesium ingested nor the magnesium status of the body. Therefore, despite their frequent use in many published clinical studies (Table 1) [6,130], urine levels should be regarded critically in most clinical and research settings given the wide fluctuation of renal magnesium reabsorption and excretion.

An epidemiologic study linking magnesium status with risk of heart disease highlighted the poor correlation between urine and blood results and called out the inconsistent results of many previous studies [186,187], although 24 h urine analyses may still serve some useful function in population-based epidemiological studies. The biological variation of magnesium status in smaller cohorts, however, was highlighted in a study of 60 healthy males in which a within-subject variation of 36% and a between-subject variation of 26% were demonstrated when measuring 24 h urinary magnesium excretion [187]. The same can be said of fecal magnesium levels, which require 3–7 days of collection and are notoriously unpopular with researchers and subjects [69,188].

A more complicated method of determining magnesium status relies on intravenous magnesium loading followed by a 24 h urine collection, ostensibly to measure what percentage of the administered dose is retained, from which an assessment of magnesium status can be derived. This retention test relies heavily on the reliability and standardization of the 24 h urine measurement, which is not uniformly accepted [125,189,190,191,192,193]. Additionally, the test is costly, more suitable for research units, and impractical for most clinical settings.

3.3. Oral Sampling

Energy-dispersive X-ray analysis of magnesium in sublingual cells has shown a correlation between intracellular magnesium levels in sublingual cells and atrial cell biopsies from subjects undergoing open heart surgery in a small single cohort [194]. However, to our knowledge, this method has not been validated as a reliable indicator of magnesium status in any broader context beyond a single disease-state cohort study. So too, saliva levels have not been adequately correlated with other conventional measurements and, therefore, to date lack the requisite robustness to be considered an improvement over assays of blood or urine [155].

3.4. Magnesium Isotopes

In recognition of the meaningful exchange of endogenous magnesium between physiologic compartments, and the high degree of biological variability in typical analytic measurements, some researchers maintain that the only reliable way of measuring the disposition of exogenous magnesium is by using isotopic labels [97,195,196,197,198,199,200]. A radioisotope, ²⁸Mg, has been used previously in magnesium research, but it does not make an ideal nuclide because its half-life (t1/2 = 21 h) [201,202,203] does not match the long biological half-life of magnesium (~1000 h) [201]. Therefore, ²⁸Mg is not commonly used in current research [128].

Stable isotopes retain all chemical characteristics of an element while being distinguishable from the endogenous elements within the body. This allows for a means of tracking the fate of an exogenously administered "dose" of the element upon ingestion or injection into the body without the harmful emissions associated with radioisotopes. Stable isotopes can be useful tools, particularly in nutritional research, because of the ability to use them in most populations (including small children and pregnant women) and more than one isotope can be used in a study to follow uptake and distribution of different forms of a nutrient.

However, stable magnesium isotopes have proven difficult to use because a truly low-abundance stable magnesium isotope does not exist, and the endogenous isotopes therefore provide significant background noise in the assays. There are three stable magnesium isotopes: ²⁴Mg (79% abundance), ²⁵Mg (10%), and ²⁶Mg (11%). This means that these isotopes cannot be used in the customary small amounts and still provide an adequate isotope signal to indicate magnesium status [84]. Very large amounts of isotope, the use of more than one isotope, or significant enrichment are needed for these studies, dramatically limiting the available supply and adding significantly to the cost, ultimately leading researchers to use less sophisticated and unreliable methods.
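
A back-of-envelope sketch illustrates the background problem. It assumes a total body magnesium pool of roughly 24,000 mg (the text notes that the 2400 mg filtered per day is about 10% of average adult content), the 10% natural abundance of ²⁵Mg, a hypothetical 100 mg tracer dose of fully enriched ²⁵Mg, and instantaneous mixing into the whole pool, which is a deliberate oversimplification:

    # How much does a 100 mg dose of pure 25Mg shift the measured isotope
    # ratio? Pool size and dose are illustrative assumptions, not data.
    body_mg = 24_000      # total body magnesium, mg (derived from the text)
    natural_25mg = 0.10   # natural abundance of 25Mg
    dose = 100            # hypothetical tracer dose of pure 25Mg, mg

    before = natural_25mg
    after = (body_mg * natural_25mg + dose) / (body_mg + dose)
    print(f"25Mg fraction: {before:.2%} -> {after:.2%}")
    # -> 10.00% -> 10.37%: a tracer signal barely above natural background.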

4. Conclusions

An argument can be made for revisiting the accepted ranges of diagnostic tests to capture clinical or other biologic consequences that lie within the currently accepted ranges of normal. Although this has been suggested in a recent review [6], the approach is likely to be more impactful in large population studies (which have not been undertaken in the U.S. for more than 40 years) than to provide pinpoint guidance for diagnosing and managing magnesium deficiency in the individual. The multiple factors affecting magnesium status (e.g., dietary intake, luminal concentration, GI pH, weight, and gender), in conjunction with the high degree of inter- and intra-individual variability in intestinal, renal, and tissue handling, make an individual diagnosis extremely challenging for the clinician.

Until a commercially viable and unambiguous magnesium deficiency biomarker is identified and validated, it is worth exploring an alternative approach to diagnosing magnesium deficiency. A patient with dietary risk factors (e.g., high soda, coffee, and processed food ingestion); using medications known to affect magnesium (e.g., diuretics, antacids, oral contraceptives); with disease states (e.g., ischemic heart disease, diabetes, and osteoporosis); with clinical symptoms (e.g., leg cramps, sleep disorder, and chronic fatigue); or with Metabolic Syndrome (Table 2) should prompt the practitioner to measure serum and/or 24 h urine magnesium, bearing in mind that results from these laboratory tests may well read within the reference range (0.75–0.85 mmol/L in the case of serum magnesium) [204].

Table 2

Suggested illustrative criteria for assessment of magnesium deficiency.

Category | Risk Factor | Criterion
Disease | Diabetes [4], Heart disease [22] | Major
Disease | Osteoporosis [26] | Minor
Diet | Soda [41], Processed Foods [39] | Major
Diet | Coffee [50], Alcohol [49], Protein [42] | Minor
Medication | Diuretics [57], Antacids [51] | Major
Medication | Oral contraceptives [55], Antibiotics [53] | Minor
Clinical History | Leg Cramps [205] | Major
Clinical History | Sleep Disorder [206], Fibromyalgia [207], Chronic fatigue [208] | Minor
Metabolic Status | Metabolic Syndrome [209] | Major
Metabolic Status | BMI > 30 [117] | Minor

It has further been suggested that if serum magnesium is below 0.85 mmol/L and urinary excretion is below 80 mg/day, it is appropriate to weigh magnesium-related co-morbidities and risk factors when judging whether a state of magnesium deficiency exists [204]. This could warrant a medication change, dietary recommendations to increase intake of magnesium-rich raw vegetables and to reduce consumption of soda and processed foods (which contain little or no magnesium), and/or a recommendation of magnesium supplements.

This new approach may further sensitize the clinician to the limitations of diagnostic tests and the need to incorporate risk and clinical considerations into the treatment paradigm. By way of suggestion, certain conditions may be regarded as "major" diagnostic criteria (e.g., diuretic use, ischemic heart disease, and high processed food and/or soda intake) and others as "minor" criteria (e.g., sleep disorder or BMI) (Table 2).

A clinician might recognize a patient at risk for magnesium deficiency with one major criterion and two or more minor criteria, or with two major criteria and no minor criteria, and so forth. The parameters of such a system are beyond the scope and authority of this review, with Table 2 being merely illustrative, but this could be entertained as a new way to look at a serious essential-nutrient deficiency that is all but ignored because of the pitfalls of the analytic methods peculiar to this most important mineral.
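
As a purely illustrative sketch of what such a rule could look like (the review explicitly leaves the parameters open, and the factor lists below simply mirror Table 2):

    # Illustrative only: encodes the example rule above ("one major criterion
    # and two or more minor criteria, or two major criteria") against
    # Table 2-style risk factors.
    MAJOR = {"diabetes", "heart disease", "soda", "processed foods",
             "diuretics", "antacids", "leg cramps", "metabolic syndrome"}
    MINOR = {"osteoporosis", "coffee", "alcohol", "protein",
             "oral contraceptives", "antibiotics", "sleep disorder",
             "fibromyalgia", "chronic fatigue", "bmi > 30"}

    def at_risk(factors):
        """factors: a set of lowercase risk-factor names."""
        majors = len(factors & MAJOR)
        minors = len(factors & MINOR)
        return majors >= 2 or (majors == 1 and minors >= 2)

    print(at_risk({"diuretics", "sleep disorder", "coffee"}))  # True
    print(at_risk({"osteoporosis", "coffee"}))                 # False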




All Comments: [-] | anchor

elmar(1623) 6 days ago [-]

Well, there are some who say that minerals in vegetables are not bio-available to the human digestive system, so in the end it doesn't make a lot of difference.

dean177(10000) 6 days ago [-]

This is nonsense. Perhaps you mean something like "less bio-available compared to..."

tempguy9999(10000) 6 days ago [-]

> some that say

who?

scythe(4179) 6 days ago [-]

Minerals in meat come from bioaccumulation of minerals in feed, though, so the problem works its way up the food chain.

JJMcJ(10000) 6 days ago [-]

Soups and other slow cooking methods release minerals from veggies.

On the other hand they degrade some vitamins.

dr_dshiv(4167) 6 days ago [-]

Fascinating discussion, just noticed the article is about magnesium deficiency.

Magnesium pills! I love those things. The only supplement I take, because it gives such a direct brain-fog lifting effect and makes me poo good. Real good. A must before international travel.

pinkfoot(10000) 6 days ago [-]

Magnesium is also essential for peristalsis. :)

krumpet(10000) 6 days ago [-]

Read The Omnivore's Dilemma. Do it now.

mnorton(10000) 6 days ago [-]

not sure why this would get downvoted

JohnJamesRambo(4155) 6 days ago [-]

Growing food has turned into an engineering problem where people think you solve it by investing the least possible resources into it. Whatever the consumer will buy and you can produce as cheaply as possible wins the day. Our tasteless vegetables are like cheap Bose (no highs, no lows, must be Bose) speakers or pressed paper furniture at Walmart. On the surface they look like a vegetable should look, but the taste, what's inside, is completely deficient.

fucking_tragedy(10000) 6 days ago [-]

> Growing food has turned into an engineering problem where people think you solve it by investing the least possible resources into it.

Our economic system optimizes for exactly this.

fbonetti(10000) 6 days ago [-]

> Growing food has turned into an engineering problem where people think you solve it by investing the least possible resources into it.

This is not a bad thing. Food is so abundant that globally, more people are obese than underweight. This is pretty remarkable considering that for all of human history, up until recently, periods of mass starvation were the norm.

nategri(4123) 6 days ago [-]

Can we not have LinkedIn links on HN

late2part(10000) 6 days ago [-]

Why do you not want to have LinkedIn links on HN?

NickM(10000) 6 days ago [-]

Something that has puzzled me recently: how is anyone supposed to get the daily recommended amount of potassium? If you look at foods like bananas that are supposed to be good sources of it, you still need to eat something like eleven bananas a day to get enough (according to US recommended daily intakes, anyway). At least with the minerals mentioned in this posting, you can fall back on supplements if you need to...but if you try to buy potassium supplements, the max dosage you can get over the counter is 99mg, which is only about 3% of the daily recommended intake. WTF?
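
A quick sanity check of the arithmetic in the comment above; the figures assumed here (roughly 420 mg of potassium in a medium banana, and a U.S. adequate intake of about 4700 mg/day) are commonly cited values, not from the article:

    banana_mg = 420         # potassium per medium banana (assumed)
    daily_target_mg = 4700  # U.S. adequate intake for adults (assumed)
    otc_pill_mg = 99        # max over-the-counter supplement dose

    print(f"bananas per day: {daily_target_mg / banana_mg:.1f}")        # ~11.2
    print(f"one OTC pill covers: {otc_pill_mg / daily_target_mg:.1%}")  # ~2.1%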

zenon(10000) 6 days ago [-]

Tubers and root vegetables are about equivalent to bananas in potassium content. Some fruits have a bit less, and some a bit more. Leafy greens have about the same level of potassium per unit of weight, and much more per calorie.

11 bananas' worth is about 1000 calories. So if you get about half of your calories from varied non-grain plant foods, you should be good.

giardini(4075) 6 days ago [-]

> the max dosage you can get over the counter is 99mg

Most people get enough potassium from their diet. It's easily available in larger quantities but safety is the main concern:

http://www.straightdope.com/columns/read/1364/can-salt-subst...

Nu-Salt salt substitute is 100% potassium chloride in a 3-oz container. Morton Lite Salt is a 50-50% mix of potassium and sodium salt. Both are available in grocery stores. But use with caution and usually with the advice of a physician.

BruiseLee(10000) 6 days ago [-]

Eat durian.

hollerith(3328) 6 days ago [-]

>if you try to buy potassium supplements, the max dosage you can get over the counter is 99mg

I had no problem buying potassium (gluconate) powder by the pound off of Amazon for cheap. (I am in the US.) Be careful with it because if the ratio of potassium to sodium in your blood gets too high, it interferes with the mechanism that sequences the contractions of the chambers of the heart or such.

mobjack(10000) 6 days ago [-]

Lots of foods contain potassium, it just isn't included in the nutrition label.

I was once on a low-potassium diet and had to avoid half of the foods I normally ate. You are likely getting enough potassium in your diet without needing any effort.

everybodyknows(3280) 6 days ago [-]

You'll find higher doses of potassium in 'electrolyte replacement' drink powders.

tryitnow(4164) 6 days ago [-]

I've wondered about this too. I have no idea why people never discuss it.

I suspect it's an Illuminati secret.

lucb1e(2135) 6 days ago [-]

Through something like Soylent maybe? That is supposed to contain the recommended daily intake of everything.

I'm not seriously suggesting we all move to 100% cardboard-tasting goo, but if you are worried about one or more of your daily intakes, this could be an easy way for some people to get a good dose every now and then (for example for lunch on weekends, I notice that I'm often in the middle of something fun and will neglect to have lunch).

nokcha(4206) 6 days ago [-]

You can buy potassium citrate in bulk and measure out your own dose.

lr4444lr(4179) 6 days ago [-]

The limited amount in multivitamin/mineral supplements is actually a legal restriction, to boot.

rm_-rf_slash(4093) 6 days ago [-]

I eat potassium chloride (usually sold as a salt substitute for people on low-sodium diets) on occasion, like when fasting or working out. It has this awful, stinging metallic taste, but I've noticed I tend to have less fatigue and soreness from strength training afterwards. Might be placebo but YMMV.

shan224(10000) 6 days ago [-]

It's a misconception that bananas are high in potassium. E.g., potatoes have higher potassium content on average.

wiml(10000) 6 days ago [-]

There's a nice table here: https://health.gov/dietaryguidelines/2015/guidelines/appendi...

Quite a lot of common staples (eg: potatoes, greens, vegetables, pulses) have a fair amount of potassium, if you look at the 'potassium in a standard portion' column (and remember these are fairly small portions, the assumption being that a meal has a portion of several different things).

jws(3892) 6 days ago [-]

the max dosage you can get over the counter is 99mg

This is an FDA limit on potassium chloride supplements. Too much potassium can cause heart rhythm problems and cardiac arrest.

Some organizations propose 3500 to 4500 mg as a daily potassium intake, and 98% of people do not eat this much potassium; unless you are living on beans and beets, you probably aren't either. Yet there isn't a huge "get more potassium" movement. I wonder if those targets need reassessment.

scythe(4179) 6 days ago [-]

Isn't this at least easy to fix? Required amounts of dietary minerals are minuscule: 50 mg iron, 15 mg zinc and less than 2 mg of the other transition metals and P. Presumably fertilizer could include trace amounts of these elements without substantial cost increases? (Ca/K/Mg are already in fertilizer.)

cwkoss(4217) 6 days ago [-]

I think plants may require symbiotic microbes to efficiently process some of these dietary minerals, and modern agroindustrial farming has damaged soil biodiversity and carrying capacity.

buckthundaz(10000) 6 days ago [-]

Wait until everyone learns about the bioavailability of nutrients with respect to the type of food -- i.e., animal-based nutrition is more bioavailable than plant-based nutrients. [1]

[1] - https://academic.oup.com/ajcn/article/78/3/633S/4690005#1098...

jacobwilliamroy(4144) 6 days ago [-]

I think it's important to consider the inputs. Carnivorism is just herbivorism with more overhead. Perhaps I do need to eat ~3 times the non-heme iron to get the same level of absorption, but plant crops usually win out over cattle when it comes to scalability.

Is plant iron more bioavailable to cattle than to humans? That might tip things slightly in favor of carnivorism; however, cattle do a lot more than just convert non-heme iron to heme iron, and facilitating that excess resource usage may be more trouble than the heme iron is worth.

ip26(10000) 6 days ago [-]

The iron and zinc from vegetarian diets are generally less bioavailable than from nonvegetarian diets because of reduced meat intake as well as the tendency to consume more phytic acid and other plant-based inhibitors of iron and zinc absorption. However, in Western countries with varied and abundant food supplies, it is not clear that this reduced bioavailability has any functional consequences. Although vegetarians tend to have lower iron stores than omnivores, they appear to have no greater incidence of iron deficiency anemia.

-- Your Link

bwb(3575) 6 days ago [-]

Does anyone know how much we can absorb? Like, if we can only take in 1 out of the 50, then I don't care if it's down to 10. Anyone know the science on mineral absorption?

jacobwilliamroy(4144) 6 days ago [-]

Someone posted a study which specifically compared bioavailability of heme iron from animals to non-heme iron from plants:

https://academic.oup.com/ajcn/article/78/3/633S/4690005#1098...

Why would low bioavailability convince you to care less about the declining minerals? Surely declining mineral content would be exacerbated by low bioavailability? How does that make things better?

An 80% loss still means you have to quintuple consumption to maintain the same level of nutrients, and your agricultural resources have to scale beyond that to account for crop losses to pests and pathogens. Meat production becomes even more expensive and unsustainable. How does low bioavailability reduce the significance of this trend?

AngryData(10000) 6 days ago [-]

I don't think this is new or that unexpected, but it should be talked about. We used to breed plants for both taste and size, taste being altered significantly by mineral content. Currently, we mostly breed plants for weight; taste doesn't factor into it, and increasing sugars and starches increases weight far better than any mineral count. Heirloom plants are generally considered luxury products, and since they don't have as large a yield, they don't compete well against commercial varieties, despite the fact that most people agree heirloom plants taste better and usually have higher mineral counts.

Now, I don't know how much the growing medium affects this: our topsoil is getting thinner and is fed primarily on 'purified' artificial fertilizers, and heirloom plants are far more likely to be home-garden or 'organically' grown. But I'm willing to bet plant genetics play a much bigger role in mineral content than anyone wants to admit. It would be another hit to the idea that current farming practices are sustainable long term, and nobody wants to admit that.

Merrill(4201) 6 days ago [-]

Plants are also bred so that they can be picked before they are fully ripe and survive a multi-day trip to the grocery shelves of a supermarket.

This may be less important for vegetables that go from field to cannery or freezers, so canned or frozen might be a better choice than fresh vegetables.

lightedman(10000) 6 days ago [-]

'increasing sugars and starches increases weight far better than any mineral count'

No, increasing bioavailable silica content (like potassium silicate addition) increases weight far more than any starch or sugar could possibly do.

bt848(10000) 6 days ago [-]

These are normalized mass rates, i.e., grams per gram of plant mass. However, the mass yield per acre of cabbage has radically increased over that time period. Is it possible that a cabbage plant can only absorb a fixed absolute amount of minerals, and that this is simply being diluted by ever-larger cabbages?

carry_bit(10000) 6 days ago [-]

The explanation I've heard is that the total mineral content is about the same, but plants are producing more carbohydrates now due to an increase in atmospheric CO2, diluting the minerals.

Consequently, if humans eat according to the amount of minerals in the food, the increase in carbohydrates could also explain the increase in obesity.

ip26(10000) 6 days ago [-]

Tenfold larger?

wil421(4074) 6 days ago [-]

The actual paper linked here is much better than a sentence or two on LinkedIn. The paper's title just isn't as catchy, to put it nicely.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6163803/

From my link:

>It is important to note that the USDA mineral content of vegetables and fruits has not been updated since 2000, and perhaps even longer, given that the data for 1992 was not able to be definitively confirmed for this review.

I'd like to see if it's improved recently. Healthy foods and organics are more readily available now.

BasDirks(4093) 6 days ago [-]

How come HN so often spends the top comment correcting some popular outlet? Why don't we get it straight from the source?

opportune(10000) 6 days ago [-]

If organic food actually had a noticeably higher mineral count than nonorganic, I would start eating it. But I don't see why organic food would actually have more minerals. Organic doesn't mean healthier; to my knowledge, it just means that different kinds of pesticides are used.

philipkglass(3854) 6 days ago [-]

Thanks for linking to the original source. The original source appears quite dubious, for reasons detailed below.

The eye-catching graph reproduced on the LinkedIn post is Figure 2 in this paper you linked, 'Challenges in the Diagnosis of Magnesium Status.' It is labeled as 'The average mineral content of calcium, magnesium, and iron in cabbage, lettuce, tomatoes, and spinach.' Note that 3 of the 7 data points -- including the two highest points -- are marked with asterisks as 'numbers could not be independently verified.' Excluding the asterisked points, there is one measurement in 1948 at around 200 mg and then 3 measurements in 2000, 2004, and 2018 all of which look similarly low -- I eyeball them as around 25 mg.

The oldest non-asterisked value is from 1948, cited as

Firman Bear. Ash and Mineral Cation Content of Vegetables. Soil Sci. Soc. Am. Proc. 1948;13:380–384.

The actual paper is here:

https://dl.sciencesocieties.org/publications/sssaj/abstracts...

Sadly, it appears to be in a journal old/obscure enough that sci-hub does not properly retrieve the full text of the article.

The results appear to be from a report commonly cited as the 'Bear Report,' numbers reproduced here:

https://njaes.rutgers.edu/pubs/bear-report/phosphorus.php

https://njaes.rutgers.edu/pubs/bear-report/ash.php

(See also https://njaes.rutgers.edu/pubs/bear-report/ for a top level overview of the report.)

What is the value plotted in Figure 2? It does not correspond to the sum of iron, calcium, and magnesium across any of {cabbage, lettuce, tomatoes, spinach}. Taking a guess, it looks closest to the highest reported value for spinach in magnesium (203.9 mg in the Bear report). Note however that the lowest value for magnesium in spinach in the Bear report is 46.9 mg. Whatever is being plotted here, it's also not an average.

Now take a look at the citation for the modern numbers in spinach:

https://ndb.nal.usda.gov/ndb/foods/show/11457?fgcd=&manu=&fo...

It reports an average of 79 mg of magnesium, 99 mg of calcium, and 2.71 mg of iron. There is no single nutrient number or sum of numbers that makes sense as the value plotted in Figure 2 of 'Challenges in the Diagnosis of Magnesium Status.' Figure 2 shows a number that I eyeball as around 25 mg for the modern (2000 and later) measurements.

The eye-catching claim of 'estimates that the mineral content of vegetables has declined by as much as 80–90% in the last 100 years', buttressed by Figure 2 and apparently valid citations, seems to fall apart once you actually look at the citations.

Why would authors put so much effort into making such a confused, thinly supported claim?

2 out of 3 authors, including the first author, are employees of the Balchem Corporation. Their paper helpfully suggests that low serum magnesium levels 'could warrant a medication change or dietary recommendations to increase intake of raw vegetables with higher magnesium content and reducing soda and processed food consumption with low or no magnesium and/or recommending magnesium supplements.' (My emphasis.)

Balchem Corporation sells supplements to supply magnesium, calcium, and iron:

https://www.balchem.com/our-products/

What about the third author, Robert P. Doyle? He is an apparently legitimate professor at Syracuse University. Here's his faculty page:

http://thecollege.syr.edu/people/faculty/pages/chem/Doyle-Ro...

He links to this full list of his publications from his faculty page:

http://thecollege.syr.edu/people/faculty/pages/chem/Doyle-Ro...

This paper, 'Challenges in the Diagnosis of Magnesium Status,' is not listed among them. In fact, scanning over the titles of his listed papers, I don't see anything mentioning magnesium, nutrition, or soil in his publication history. Did the Balchem authors get some preliminary involvement from an actual professor and then add his name to the paper to confer legitimacy? Did the professor consider the final result so low-quality that he didn't want it listed even in his 'full' publication list?

This paper has many problems that you can see just by trying to reconcile its Figure 2 with its corresponding citations, and other circumstantially suspicious factors. I would not put much stock in it.

mshroyer(10000) 6 days ago [-]

> Healthy foods and organics are more readily available now.

There doesn't seem to be any evidence that 'organic' foods are more nutritious than conventionally farmed foods: https://www.ncbi.nlm.nih.gov/pubmed/19640946

Mathnerd314(3648) 6 days ago [-]

Did you miss the graph? It looks like \___. But then they have asterisks because the historical measurement methods are probably not accurate.

I like one of their references, suggesting it's due to modern farming practices: https://journals.ashs.org/hortsci/view/journals/hortsci/44/1...

jryan49(4198) 6 days ago [-]

Why not just add minerals to our fertilizers? Organics is such a regression after the green revolution (which has saved billions of lives).

chimi(4126) 6 days ago [-]

I used to run an organic farm experimenting with hydroponics and I did a ton of research on this.

What I found is that organic does not mean healthier. In fact, a larger percentage of food-related illnesses are reported for foods with the organic label than without it, and the label itself is controversial and misleading. This is largely due to the types of fertilizer used: animal waste is used to fertilize organic foods. Yes, that is a fact. Organic foods can also use BT pesticides, which are the same chemicals as in GMO crops where the BT is produced inside the plant.

The nutritional content also depends on many factors, including what types of fertilizers are used and whether those fertilizers are available to be absorbed by the plant. It also varies by variety. Some plants, especially strawberries, are better grown outside in the soil than in hydroponics, but that was a rare case. I also read many studies on the nutritional content of plants grown hydroponically vs. organically; some found certain varieties more nutritious when grown organically, others when grown hydroponically.

The reality is, the industry is so competitive and the studies are funded by folks with an agenda, so the data leaves me skeptical on both sides.

Only 17 elements are required to grow a plant, and these elements can be provided entirely through hydroponics. From my research, hydroponic vegetables are, for the most part, actually healthier for people to consume than organic vegetables according to the science, but there is a stigma that since hydroponics is produced by science, technology, and the industrial farming complex, it must be worse for our bodies.

The research indicates that because hydroponic plants can more easily absorb nutrients, they are healthier and better able to resist fungus and insects; so not only are they more nutritious, they also don't need fungicides, pesticides, or herbicides sprayed on them. Plus, plants can actually absorb salmonella and other bacteria from composted manure into the plant itself, which even when washed can still infect the human gut.

tastyfreeze(10000) 6 days ago [-]

This happened due to extermination of soil life through modern farming practices of heavy tilling, fertilizing and monoculture crops. Dead soil is more susceptible to erosion and requires ever increasing amounts of synthetic fertilizer to grow anything. More soil has been lost from the US than the amount of food that has been produced. Soil fertility can be returned to historical levels by changing farming practices to make living soil. The benefits of treating the soil as a living organism include increased fertility, water infiltration, moisture retention, and nutrient availability as well as decreasing or eliminating synthetic fertilizer requirements.

https://theweek.com/articles/554677/america-running-soil

This is a long video but it has completely changed my home gardening practices. https://www.youtube.com/watch?v=uUmIdq0D6-A

thaumasiotes(3713) 6 days ago [-]

> The benefits of treating the soil as a living organism include increased fertility

This is a pretty interesting (and correct) point, and there's a lot to say about it from a few different perspectives.

You get much higher all-inclusive yields from a unit of land by growing several different crops in the same space. They (can) complement each other, using different resources from the ground and providing resources to each other.

American agriculture is not set up to do this. Instead of optimizing yield as a function of land input, we optimize yield as a function of labor input, because we have tons and tons of fertile land relative to our small-for-the-size-of-the-country population. If you have one person growing food on a thousand acres of land (remember, one acre is originally 'the amount of land one man can plow in a day'), it makes more sense to give that one person just one thing to do. There's no way they're going to be able to care for 5 different crops simultaneously, or even adjust their practices to better suit the specifics of individual fields. We prefer to get a lot of land under low-labor cultivation, even though the cultivation isn't very effective, because our tiny population doesn't require high agricultural yields per unit land.

A more traditional model involves starving peasants intensively cultivating scarce land. With cheap and abundant labor inputs, you can get much more from one unit of land. But most of that increase goes to feeding the peasants who provide the labor. This is similar to how models of the effect of immigration on American GDP tend to show large increases in GDP -- the benefits of which, if you account for them, mostly accrue to the new immigrants.

I don't think there's an easy way to combine the ideas of 'infinite cheap peasant labor gives us higher agricultural yields' and 'a middle-class lifestyle should be in reach for everybody'. Americans are rich, from a historical and current-rest-of-the-world perspective, specifically because we use so few people to grow food. One guy with enough food to feed three million people is a huge benefit to the rest of the country, and he's personally stinking rich. Five million peasants in huts who collectively produce enough food to feed 5,100,000 people are all dirt poor, and they produce only a minor benefit to the rest of the country.

mbell(4052) 6 days ago [-]

> Soil fertility can be returned to historical levels by changing farming practices to make living soil.

Seems easier to just add the missing minerals/chemicals to the soil.

LifeLiverTransp(10000) 6 days ago [-]

Biological farming was the reason WW2 occurred.

dv_dt(4111) 6 days ago [-]

Healthier soil may also capture carbon. There are many reasons to look at optimization of farming and food production for a whole range of concerns beyond profits. The question is how to broadly incentivize and fund those concerns - reinvigorating agricultural dept programs with a wider focus at a federal and state level is one way to do that; others would likely come up under a Green New Deal type framework too.

https://e360.yale.edu/features/soil_as_carbon_storehouse_new...

feedbeef(10000) 6 days ago [-]

To learn more about soil health, watch Dr. Elaine Ingham's talks available on YouTube. Also see: https://permies.com/wiki/redhawk-soil

samstave(3779) 6 days ago [-]

Exactly.

Rock dust, nitrogen etc...

I work in cannabis tech, and there is a really interesting farm i work with.

They have been using / revitalizing the same soil for a decade.

Here is what is really interesting, they grow their cannabis on top of an 18" thick mycelium fungus bed. They mulch out their detritus and then keep growing their cannabis in that same soil - and their quality levels are top fucking notch.

Big ag is like the FB of agriculture: trying way too hard to scale, and quality is near zero.

schaefer(10000) 6 days ago [-]

Thanks for sharing, Tastyfreeze.

kraig911(10000) 6 days ago [-]

There is also evidence that this is due to higher concentrations of CO2.

https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6104417/

Essentially, there's around the same mineral content; it's just that the vegetables are on average larger in size and mass, yet yield the same total minerals.

atomi(10000) 6 days ago [-]

+1 on Living Web Farms. They have a lot of great videos about sustainable farming practices. I've learned a lot from the whole gang there - but especially from Mr. Pat Battle. If you have any interest in learning how to grow some of your own food, you should check them out.

01100011(10000) 6 days ago [-]

Not exactly disagreeing with you, but it seems like increasing yields and inadequate fertilization focused on macronutrients would also explain the drop. If I grow crops that preferentially absorb certain minerals and then take away most of the vegetable matter from the plot, the soil content of those minerals is going to drop without something to replenish it. Now, I'm not saying soil bacteria and mycorrhizal networks can't pull minerals from the underlying bedrock and transport them to the top layer, but even assuming they did, I would think high-yield farming would tax their ability to keep up with mineral outflows.

mythrwy(10000) 6 days ago [-]

I think it also happens because of synthetic fertilizers (and possibly modern genetics).

It's a race to grow as big a product in as short of time as possible which causes the mineral ratios to be different.

A tomato which takes months to grow and ripens on the vine has more time to assimilate minerals than a tomato that is 'pushed' with synthetic fertilizers, then snatched off the vine unripened and ripened with ethylene gas in transport.

testfoobar(10000) 6 days ago [-]

Soil depletion: https://www.scientificamerican.com/article/soil-depletion-an...

I have a friend who refers to vitamins as 'expensive pee'. He means that taking vitamin pills when you eat a good diet containing fruits and veg is unnecessary because your body will excrete the excess. I like to point out to him that fruits and veg are not what they used to be.

chrisco255(4167) 6 days ago [-]

Don't forget meat has absolutely tons of essential nutrients and vitamins, some of which can't be found in fruits and vegetables.

behringer(10000) 6 days ago [-]

It's not only expensive pee, but also potentially damaging to your kidneys.

https://www.hopkinsmedicine.org/health/wellness-and-preventi...

You shouldn't take supplements unless your doctor advises you to due to some (actual and not perceived) deficiency.

dmacut(10000) 6 days ago [-]

Pretty much. My dad's a doctor and has always told me Emergen-C is just fizzy Tang, if you remember that stuff.

igammarays(3965) 6 days ago [-]

I've been living in Ukraine for the past few months, trying to figure out why the food here, especially the vegetables, simply tastes so good. The very same dishes (e.g., steak and veggies) from popular mid-range restaurants are far more enjoyable to me here in Kiev than anything I ate in Canada. When I went back to Canada for a brief visit, my body was craving Ukrainian food. Now this study on vegetable mineral content may explain why. I'm very keen to see a comparison study of Eastern European agriculture and produce versus North American. Ukraine was once the "breadbasket" of the Soviet Union, so there must be an explanation. Ukrainians who migrate to other countries are often said to complain about a loss of food taste; previously I assumed that was just some form of homesickness, but this study lends some potential scientific grounding to their complaints.

krn(1834) 6 days ago [-]

> I've been living in Ukraine for the past few months, trying to figure out why the food here, especially the vegetables, simply taste so good.

I had exactly the same feeling when I was living in Lviv and buying vegetables from the local Farmers Market[1] to cook at home.

It seems that Ukrainians are still growing vegetables like their grandparents did in the 1930s, as it's one of the poorest and most isolated regions of Europe.

In my own post-soviet EU country, most people living outside the big cities still grow their own vegetables in their backyard[2], but more as a rewarding hobby.

[1] https://www.youtube.com/watch?v=yI0VnuZObjE

[2] http://imgur.com/a/3afScVR

nathan_f77(3313) 6 days ago [-]

Wow, I wonder if this is why I thought the food was so bad in San Francisco after I moved there from New Zealand. We went to a lot of great restaurants, but I remember thinking that the restaurants in New Zealand were 100x better. We have an amazing cafe/restaurant scene in NZ, and I never found anything in San Francisco that could come close to a nice brunch in Auckland. (I was trying almost every place with a good Yelp review over a period of ~2 years.) Maybe this was mostly a subconscious thing related to the mineral content of the ingredients.

RowanH(10000) 6 days ago [-]

Same when I moved back to New Zealand after years in Canada. The first trip to the fruit and vege store I went overboard...

newsbinator(10000) 6 days ago [-]

Same thing happened to me in the south of Bulgaria, just shopping at regular grocery stores. I couldn't figure out why the tomatoes tasted amazing.

It's like I'd been eating tomato-flavored potatoes all my life until that moment, and now trying a real tomato for the first time.

mixmastamyk(3510) 6 days ago [-]

I wonder what effect nuclear accidents have on the food as well.





Historical Discussions: 3D Ken Burns Effect from a Single Image (September 15, 2019: 724 points)
3D Ken Burns Effect from a Single Image (September 16, 2019: 3 points)

(726) 3D Ken Burns Effect from a Single Image

726 points 5 days ago by sniklaus in 3594th position

sniklaus.com | Estimated reading time – 2 minutes | comments | anchor

3D Ken Burns Effect from a Single Image Simon Niklaus, Long Mai, Jimei Yang and Feng Liu ACM Transactions on Graphics

The Ken Burns effect allows animating still images with a virtual camera scan and zoom. Adding parallax, which results in the 3D Ken Burns effect, enables significantly more compelling results. Creating such effects manually is time-consuming and demands sophisticated editing skills. Existing automatic methods, however, require multiple input images from varying viewpoints.

In this paper, we introduce a framework that synthesizes the 3D Ken Burns effect from a single image, supporting both a fully automatic mode and an interactive mode with the user controlling the camera. Our framework first leverages a depth prediction pipeline, which estimates scene depth that is suitable for view synthesis tasks. To address the limitations of existing depth estimation methods such as geometric distortions, semantic distortions, and inaccurate depth boundaries, we develop a semantic-aware neural network for depth prediction, couple its estimate with a segmentation-based depth adjustment process, and employ a refinement neural network that facilitates accurate depth predictions at object boundaries. According to this depth estimate, our framework then maps the input image to a point cloud and synthesizes the resulting video frames by rendering the point cloud from the corresponding camera positions. To address disocclusions while maintaining geometrically and temporally coherent synthesis results, we utilize context-aware color- and depth-inpainting to fill in the missing information in the extreme views of the camera path, thus extending the scene geometry of the point cloud.

Experiments with a wide variety of image content show that our method enables realistic synthesis results. Our study demonstrates that our system allows users to achieve better results while requiring little effort compared to existing solutions for the 3D Ken Burns effect creation.
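
For readers curious about the geometric core the abstract describes, here is a minimal sketch in Python of lifting an image into a point cloud with per-pixel depth and re-projecting it from a shifted virtual camera. This is not the authors' implementation: their depth estimation, segmentation-based adjustment, and inpainting networks are all omitted, and the pinhole focal length and camera offset below are made-up placeholders.

    # Lift an RGB image into a point cloud via per-pixel depth, then render
    # it from a translated virtual camera (pinhole model, no rotation).
    import numpy as np

    def render_shifted_view(image, depth, f=500.0, shift=(0.05, 0.0, 0.1)):
        """image: (h, w, 3) array; depth: (h, w) array of positive depths."""
        h, w = depth.shape
        cx, cy = w / 2.0, h / 2.0
        u, v = np.meshgrid(np.arange(w), np.arange(h))

        # Back-project every pixel to a 3D point.
        X = (u - cx) * depth / f
        Y = (v - cy) * depth / f
        Z = depth.astype(float)

        # Translate the virtual camera; assumes the shift is small enough
        # that all depths stay positive.
        tx, ty, tz = shift
        X, Y, Z = X - tx, Y - ty, Z - tz

        # Project back onto the image plane and splat colors far-to-near,
        # so nearer points overwrite farther ones.
        u2 = np.clip((f * X / Z + cx).astype(int), 0, w - 1)
        v2 = np.clip((f * Y / Z + cy).astype(int), 0, h - 1)
        order = np.argsort(-Z.ravel())
        out = np.zeros_like(image)
        out[v2.ravel()[order], u2.ravel()[order]] = \
            image.reshape(-1, image.shape[-1])[order]
        return out  # black holes are the disocclusions inpainting must fill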




All Comments: [-] | anchor

kd3(4220) 5 days ago [-]

Incredible. Amazing. Holy shit.

sniklaus(3594) 4 days ago [-]

This comment made me smile, thank you. :)

rolltiide(10000) 4 days ago [-]

Great! I was thinking about doing this for years

Every time I see an awesome landscape in person, I think about what's missing from capturing it for other people to experience, and I concluded it comes down to depth perception and how our eyes dart around to create a composite experience, resulting in ever-so-slight shifts of depth.

A 2D image can't capture that, but this seems a lot closer, and it's great that it can use a 2D image as a base.

sniklaus(3594) 4 days ago [-]

I wholeheartedly agree with you and believe that depth perception is a key aspect that is missing in the status quo of viewing still images!

JonathanFly(10000) 4 days ago [-]

Not bad, but monocular depth estimation has gotten pretty good all around. I made these similar images with basically no expertise and no manual mapping, just by trying random single-image depth projects from GitHub. I kind of just went with whatever I could get running quickly and didn't even evaluate the quality other than in cases where it obviously didn't work.

(Sorry for linking a million tweets but I didn't put these on a blog or anything, that's the only place they exist.)

Your version does look better though! But I was impressed I could just run random images through depth estimates and get anything cool like this.

https://twitter.com/jonathanfly/status/1156799136987013120

https://twitter.com/jonathanfly/status/1153383325974896646

https://twitter.com/jonathanfly/status/1154472832249860100

https://twitter.com/jonathanfly/status/1153120643040337925

I would love to see your method take on some Escher paintings, can you try it for me?

sniklaus(3594) 4 days ago [-]

They look nice, thank you for sharing! It seems like the input images that you used are paintings. I am not sure how well our depth estimation would work on those, definitely something to try out.

sprash(10000) 4 days ago [-]

Found something similar on Shadertoy [1] from 2013. Of course the geometry is handcoded in this case, but it would be a good starting point for implementing something like this via shaders in WebGL and using it for all the images on your webpage.

[1]: https://www.shadertoy.com/view/XdlGzH

sniklaus(3594) 4 days ago [-]

That looks awesome, thank you for sharing!

TazeTSchnitzel(2149) 4 days ago [-]

I've seen this effect done to a single still image before, but presumably it was done by hand somehow. Specifically, in the "Kony 2012" video (remember that?)

wilsmex(4221) 4 days ago [-]

I did a simple tutorial on YouTube that illustrates the basic methods for the hand done 2.5D parallax effect in Photoshop: https://www.youtube.com/watch?v=giE4SuC9qWc

sniklaus(3594) 4 days ago [-]

The effect itself is actually not too uncommon. It is just very tedious and time-consuming to do manually since it requires segmenting the elements in the scene, arranging them in a 3D space, filling in any holes and specifying a virtual camera trajectory.

whalesalad(351) 5 days ago [-]

I've always wondered if Apple was using technology like this for the movie covers in the Apple TV store, since they have a parallax effect. For those who don't know what I am talking about, the remote control has a touchpad on it, so as you wiggle your finger around (while a title is selected) the cover will move on a 3D axis w/ your finger until you use enough force to move to the next title. Figured there would be a relatively straightforward way to separate the layers w/ software.

But only certain titles have the effect ... so I would imagine the studios provide a layered asset that can get composed together.

bonestamp2(10000) 5 days ago [-]

Yes, I've submitted artwork to iTunes and they want (but don't require) a layered file for the artwork to create that parallax effect. You'll notice most just have the Title and (other text) as moveable layers, which is easy since those were probably created as separate layers when the artwork was created to begin with.

The background artwork is typically static, although I suspect this tool will be included in photoshop at some point at which time it will make it much easier to give this effect to backgrounds too.

Here are some specs if you're interested (there are various different docs depending on which assets you're submitting, this is just one example):

https://help.apple.com/itc/videoaudioassetguide/#/itc0c10422...

saagarjha(10000) 4 days ago [-]

Not completely related, but app icons for Apple TV have explicit layers.

covercash(3552) 5 days ago [-]

I've manually done that in the past using Photoshop and multiple layers in Final Cut Pro, it was very time consuming. I wonder if this method requires you to isolate the layers manually.

sniklaus(3594) 5 days ago [-]

I was watching a lot of tutorials like that when I started working on the project. The shown results were created fully automatically, one can optionally refine the camera path though. You can find more information on that in the video on my website.

anon1m0us(4213) 5 days ago [-]

You'll notice the effect isn't as good at about 7 seconds into the movie where the green grass between the bride and her maids moves with the bride.

Why does the algorithm think that is part of her, rather than the background?

pedrocx486(10000) 4 days ago [-]

The two examples involving humans look terrible to me; the others look really nice. Probably some bias in my perception.

sniklaus(3594) 5 days ago [-]

Estimating the scene geometry from a single image is highly challenging and remains an unsolved problem. As such, you can find subtle artifacts like this in most of the results. The artifact that you are referring to stems from a geometry adjustment step in which we make sure that salient objects like humans or animals are on a plane in the 3D space. This requires image segmentation which is also highly challenging and may lead to parts of the background being assigned to the segmented foreground object and vice versa.

amayne(3990) 5 days ago [-]

Cool. But that's not what we call the Ken Burns effect in the industry. This is a 2.5D parallax shift, as seen in the documentary The Kid Stays in the Picture.

Ken Burns Effect is panning and zooming to highlight features in a photo then fading into another.

RichardCA(10000) 4 days ago [-]

It's called, 'using a rostrum camera'.

https://en.wikipedia.org/wiki/Rostrum_camera

If you grew up watching PBS documentaries there was always a rostrum camera operator somewhere in the credits.

In the UK, Ken Morse was/is best known for this.

https://twitter.com/kenmorsecredits

greggman2(10000) 5 days ago [-]

Amazing but yea, not the Ken Burns effect.

MacOS has 'Ken Burns' screen saver if you are on Mac. There's also plenty of examples of 'Ken Burns' effect on youtube

https://www.youtube.com/watch?v=WjdWvmjgBa0

Also Wikipedia

https://en.wikipedia.org/wiki/Ken_Burns_effect

The authors might want to change the description on their amazing effects if they want to avoid lots of threads and comments about it not being the Ken Burns effect (or maybe that will generate more comments)

onemoresoop(2453) 5 days ago [-]

Pretty much what I thought as well. But cool nonetheless.

sgt(2304) 5 days ago [-]

Perhaps it would also be worth mentioning Ken Burns and his documentaries.

They're absolutely worth watching and give the BBC a run for its money.

Perhaps start with The West and then move on to Civil War. My personal favorite was perhaps the one on Jazz, made in 2001.

semi-extrinsic(4000) 5 days ago [-]

Looking at the website of OP, they agree with you in the classic definition of Ken Burns effect, and call this '3D Ken Burns'.

sniklaus(3594) 5 days ago [-]

The examples in the video were automatically generated without user feedback. It is actually also possible to manually specify the camera path in order to achieve the effect that you are expecting, for more information feel free to have a look at the video on the project website.

k__(3307) 5 days ago [-]

Does this work real-time?

sniklaus(3594) 4 days ago [-]

It takes two to three seconds to process the input image and can subsequently synthesize each frame in the output video in real time. This makes it possible for the user to adjust the camera path and see the result in real time. Feel free to have a look at the video on the website for an example of this.

davidmurdoch(4143) 5 days ago [-]

Reminds me of this gimbal-shot portrait-video parallax effect: https://youtu.be/Ryu4hp-HbwU (which was probably somewhat inspired by the Ken Burns effect)

bonestamp2(10000) 5 days ago [-]

That's awesome, great tutorial. Another fun one is dolly zoom, which is pretty easy. Some of the DJI drones can even do it automatically: https://www.youtube.com/watch?v=MC7hkMR0hBs

sniklaus(3594) 5 days ago [-]

Those look nice! The premise of the project was to only use a single image as an input in order to be applicable to existing footage. Without this constraint, more stunning effects like the portrait-video parallax become possible. I am sure we will see more exciting work like this in the future!

dperfect(4044) 4 days ago [-]

The most interesting thing about this (compared to results from similar research/projects) is that in all of the examples, camera movement is forward and down relative to the original perspective. The results are really good, and some of that may be due to a superior algorithm, but it's also aided in large part by the choice of movement. Since objects lower in the frame tend to be closer (foreground elements), the downward camera movement causes those objects to occlude parts of the background above (and behind) them, meaning that a relatively small portion of the background needs to be inpainted by the algorithm. If too much is interpolated, visual artifacts often ruin the illusion.

The forward movement also helps in this case as those foreground objects grow in size (relative to the background) with time, so there's even less need for interpolation. If the movement were primarily lateral (or reversed relative to the original image), I imagine the algorithm would have a much harder time producing good results [1].

EDIT: After skimming the paper, it appears that the algorithm is automatically choosing these best-case virtual camera movements:

> This system provides a fully automatic solution where the start- and end-view of the virtual camera path are automatically determined so as to minimize the amount of disocclusion.

That is pretty impressive. I had originally assumed the paths were cherry-picked by humans, so it's cool that the paths themselves are automatically chosen (and that the algorithm matches the intuitive best-case scenario in most cases). It's still slightly misleading in terms of results because they mention that user-defined movements can be used instead, but of course, the results are likely to suffer significantly if the movement doesn't match the optimal path chosen by the algorithm.

[1] The last example shown in the full results video illustrates the issue with too much background interpolation in lateral movement: http://sniklaus.com/papers/kenburns-results

sniklaus(3594) 4 days ago [-]

Thank you for sharing your thoughts! We designed the automatic camera path estimation to minimize the amount of disocclusion which indeed simplifies the problem. As you correctly pointed out, inpainting the background is an additional challenge and while we address it, the inpainted results sometimes lack texture.

nunodonato(3629) 4 days ago [-]

Cool. I have a Fiverr service to make these for whoever wants to transform a normal photo into a 2.5D one. But it's hand made, so I guess this can make the whole thing more accessible to anyone

sniklaus(3594) 4 days ago [-]

Our work focuses on physically correct depth. We have noticed that such effects created by professional artists emphasize the parallax to an extent that is not physically correct. As such, artists still seem to know best how to animate a catchy parallax effect.

Animats(2071) 4 days ago [-]

OK, so this is depth estimation from a single image, right? Like this classic paper.[1] Then that's used to turn the image into a set of layers. This is often done for video; that's how 2D movies are turned into 3D movies.

[1] http://www.cs.cornell.edu/~asaxena/learningdepth/ijcv_monocu...

sniklaus(3594) 4 days ago [-]

More or less. Estimating the depth from a single image is highly challenging and far from solved. We thus had to make sure that the depth estimate is suitable for synthesizing new views. And we are not explicitly modelling layers, we actually model the scene geometry as a point cloud. But you definitely got the gist of it. By the way, one advantage when estimating the scene geometry in a video is that it is possible to employ structure from motion.

kissickas(3809) 5 days ago [-]

Love your personal website.

sniklaus(3594) 4 days ago [-]

I just got the daily email from my server with some metrics. It has experienced over 500 gigabytes of traffic in the last few hours and the video is only 3 megabytes in size. I definitely did not expect such a HN-effect.

alphakappa(3630) 5 days ago [-]

The results look impressive. Is the code or a tool available that would enable others to try it out on their images?

nayuki(3072) 4 days ago [-]

This visual animation effect is almost the signature of the YouTuber 'Business Casual': https://www.youtube.com/channel/UC_E4px0RST-qFwXLJWBav8Q/vid...

sniklaus(3594) 4 days ago [-]

It seems like he is using them extensively but not too extreme. Looks great, thank you for sharing!

lostgame(4185) 4 days ago [-]

It's been said, but this is mistitled - this is not a 'Ken Burns' effect, but rather a parallax panning effect with depth. To be honest it's far more impressive than I expected.

JansjoFromIkea(4184) 4 days ago [-]

Yep, same here. It misrepresents and understates the actual thing to the point that I wouldn't have even clicked on it if the 3D part of the title hadn't confused me.

ChuckMcM(629) 4 days ago [-]

Can we replace that with http://sniklaus.com/papers/kenburns, the actual paper?

sniklaus(3594) 4 days ago [-]

I was thinking about what I should post and decided to make this little teaser since it demonstrates the gist of our work within a few seconds. I understand that the typical Hacker News audience may find the paper more worthwhile though.





Historical Discussions: Pure Bash Bible (June 24, 2018: 5 points)
Pure Bash Bible – A collection of pure bash alternatives to external processes (June 15, 2018: 4 points)
Pure bash alternatives to external processes (June 19, 2018: 4 points)
A collection of pure bash alternatives to external processes (June 17, 2018: 2 points)
Pure bash bible: collection of pure bash alternatives to external processes (June 16, 2018: 2 points)

(673) Pure Bash Bible

673 points 1 day ago by ausjke in 662nd position

github.com | Estimated reading time – 64 minutes | comments | anchor

pure bash bible

A collection of pure bash alternatives to external processes.

The goal of this book is to document commonly-known and lesser-known methods of doing various tasks using only built-in bash features. Using the snippets from this bible can help remove unneeded dependencies from scripts and in most cases make them faster. I came across these tips and discovered a few while developing neofetch, pxltrm and other smaller projects.

The snippets below are linted using shellcheck and tests have been written where applicable. Want to contribute? Read the CONTRIBUTING.md. It outlines how the unit tests work and what is required when adding snippets to the bible.

See something incorrectly described, buggy or outright wrong? Open an issue or send a pull request. If the bible is missing something, open an issue and a solution will be found.

This book is also available to purchase on leanpub. https://leanpub.com/bash

Or you can buy me a coffee.

Table of Contents

FOREWORD

A collection of pure bash alternatives to external processes and programs. The bash scripting language is more powerful than people realise and most tasks can be accomplished without depending on external programs.

Calling an external process in bash is expensive and excessive use will cause a noticeable slowdown. Scripts and programs written using built-in methods (where applicable) will be faster, require fewer dependencies and afford a better understanding of the language itself.

The contents of this book provide a reference for solving problems encountered when writing programs and scripts in bash. Examples are in function formats showcasing how to incorporate these solutions into code.

STRINGS

Trim leading and trailing white-space from string

This is an alternative to sed, awk, perl and other tools. The function below works by finding all leading and trailing white-space and removing it from the start and end of the string. The : built-in is used in place of a temporary variable.

Example Function:

trim_string() {
    # Usage: trim_string "   example   string    "
    : "${1#"${1%%[![:space:]]*}"}"
    : "${_%"${_##*[![:space:]]}"}"
    printf '%s\n' "$_"
}

Example Usage:

$ trim_string '    Hello,  World    '
Hello,  World
$ name='   John Black  '
$ trim_string "$name"
John Black

Trim all white-space from string and truncate spaces

This is an alternative to sed, awk, perl and other tools. The function below works by abusing word splitting to create a new string without leading/trailing white-space and with truncated spaces.

Example Function:

# shellcheck disable=SC2086,SC2048
trim_all() {
    # Usage: trim_all '   example   string    '
    set -f
    set -- $*
    printf '%s\n' "$*"
    set +f
}

Example Usage:

$ trim_all '    Hello,    World    '
Hello, World
$ name='   John   Black  is     my    name.    '
$ trim_all "$name"
John Black is my name.

Use regex on a string

The result of bash's regex matching can be used to replace sed for a large number of use-cases.

CAVEAT: This is one of the few platform dependent bash features. bash will use whatever regex engine is installed on the user's system. Stick to POSIX regex features if aiming for compatibility.

CAVEAT: This example only prints the first matching group. When using multiple capture groups some modification is needed.

Example Function:

regex() {
    # Usage: regex 'string' 'regex'
    [[ $1 =~ $2 ]] && printf '%s\n' "${BASH_REMATCH[1]}"
}

Example Usage:

$ # Trim leading white-space.
$ regex '    hello' '^\s*(.*)'
hello
$ # Validate a hex color.
$ regex '#FFFFFF' '^(#?([a-fA-F0-9]{6}|[a-fA-F0-9]{3}))$'
#FFFFFF
$ # Validate a hex color (invalid).
$ regex 'red' '^(#?([a-fA-F0-9]{6}|[a-fA-F0-9]{3}))$'
# no output (invalid)

Example Usage in script:

is_hex_color() {
    if [[ $1 =~ ^(#?([a-fA-F0-9]{6}|[a-fA-F0-9]{3}))$ ]]; then
        printf '%s\n' "${BASH_REMATCH[1]}"
    else
        printf '%s\n' "error: $1 is an invalid color."
        return 1
    fi
}
read -r color
is_hex_color "$color" || color="#FFFFFF"
# Do stuff.

Split a string on a delimiter

CAVEAT: Requires bash 4+

This is an alternative to cut, awk and other tools.

Example Function:

split() {
   # Usage: split "string" "delimiter"
   IFS=$'\n' read -d "" -ra arr <<< "${1//$2/$'\n'}"
   printf '%s\n' "${arr[@]}"
}

Example Usage:

$ split 'apples,oranges,pears,grapes' ','
apples
oranges
pears
grapes
$ split '1, 2, 3, 4, 5' ', '
1
2
3
4
5
# Multi char delimiters work too!
$ split 'hello---world---my---name---is---john' '---'
hello
world
my
name
is
john

Change a string to lowercase

CAVEAT: Requires bash 4+

Example Function:

lower() {
    # Usage: lower 'string'
    printf '%s\n' "${1,,}"
}

Example Usage:

$ lower 'HELLO'
hello
$ lower 'HeLlO'
hello
$ lower 'hello'
hello

Change a string to uppercase

CAVEAT: Requires bash 4+

Example Function:

upper() {
    # Usage: upper 'string'
    printf '%s\n' "${1^^}"
}

Example Usage:

$ upper 'hello'
HELLO
$ upper 'HeLlO'
HELLO
$ upper 'HELLO'
HELLO

Reverse a string case

CAVEAT: Requires bash 4+

Example Function:

reverse_case() {
    # Usage: reverse_case 'string'
    printf '%s\n' "${1~~}"
}

Example Usage:

$ reverse_case 'hello'
HELLO
$ reverse_case 'HeLlO'
hElLo
$ reverse_case 'HELLO'
hello

Trim quotes from a string

Example Function:

trim_quotes() {
    # Usage: trim_quotes "string"
    : "${1//\'}"
    printf '%s\n' "${_//\"}"
}

Example Usage:

$ var="'Hello', \"World\""
$ trim_quotes '$var'
Hello, World

Strip all instances of pattern from string

Example Function:

strip_all() {
    # Usage: strip_all 'string' 'pattern'
    printf '%s\n' "${1//$2}"
}

Example Usage:

$ strip_all 'The Quick Brown Fox' '[aeiou]'
Th Qck Brwn Fx
$ strip_all 'The Quick Brown Fox' '[[:space:]]'
TheQuickBrownFox
$ strip_all 'The Quick Brown Fox' 'Quick '
The Brown Fox

Strip first occurrence of pattern from string

Example Function:

strip() {
    # Usage: strip 'string' 'pattern'
    printf '%s\n' "${1/$2}"
}

Example Usage:

$ strip 'The Quick Brown Fox' '[aeiou]'
Th Quick Brown Fox
$ strip 'The Quick Brown Fox' '[[:space:]]'
TheQuick Brown Fox

Strip pattern from start of string

Example Function:

lstrip() {
    # Usage: lstrip 'string' 'pattern'
    printf '%s\n' "${1##$2}"
}

Example Usage:

$ lstrip 'The Quick Brown Fox' 'The '
Quick Brown Fox

Strip pattern from end of string

Example Function:

rstrip() {
    # Usage: rstrip 'string' 'pattern'
    printf '%s\n' "${1%%$2}"
}

Example Usage:

$ rstrip 'The Quick Brown Fox' ' Fox'
The Quick Brown

Percent-encode a string

Example Function:

urlencode() {
    # Usage: urlencode "string"
    local LC_ALL=C
    for (( i = 0; i < ${#1}; i++ )); do
        : "${1:i:1}"
        case "$_" in
            [a-zA-Z0-9.~_-])
                printf '%s' "$_"
            ;;
            *)
                printf '%%%02X' "'$_"
            ;;
        esac
    done
    printf '\n'
}

Example Usage:

$ urlencode 'https://github.com/dylanaraps/pure-bash-bible'
https%3A%2F%2Fgithub.com%2Fdylanaraps%2Fpure-bash-bible

Decode a percent-encoded string

Example Function:

urldecode() {
    # Usage: urldecode "string"
    : "${1//+/ }"
    printf '%b\n' "${_//%/\\x}"
}

Example Usage:

$ urldecode 'https%3A%2F%2Fgithub.com%2Fdylanaraps%2Fpure-bash-bible'
https://github.com/dylanaraps/pure-bash-bible

Check if string contains a sub-string

Using a test:

if [[ $var == *sub_string* ]]; then
    printf '%s\n' 'sub_string is in var.'
fi
# Inverse (substring not in string).
if [[ $var != *sub_string* ]]; then
    printf '%s\n' 'sub_string is not in var.'
fi
# This works for arrays too!
if [[ ${arr[*]} == *sub_string* ]]; then
    printf '%s\n' 'sub_string is in array.'
fi

Using a case statement:

case "$var" in
    *sub_string*)
        # Do stuff
    ;;
    *sub_string2*)
        # Do more stuff
    ;;
    *)
        # Else
    ;;
esac

Check if string starts with sub-string

if [[ $var == sub_string* ]]; then
    printf '%s\n' 'var starts with sub_string.'
fi
# Inverse (var does not start with sub_string).
if [[ $var != sub_string* ]]; then
    printf '%s\n' 'var does not start with sub_string.'
fi

Check if string ends with sub-string

if [[ $var == *sub_string ]]; then
    printf '%s\n' 'var ends with sub_string.'
fi
# Inverse (var does not end with sub_string).
if [[ $var != *sub_string ]]; then
    printf '%s\n' 'var does not end with sub_string.'
fi

ARRAYS

Reverse an array

Enabling extdebug allows access to the BASH_ARGV array which stores the current function's arguments in reverse.

Example Function:

reverse_array() {
    # Usage: reverse_array "array"
    shopt -s extdebug
    f()(printf '%s\n' "${BASH_ARGV[@]}"); f "$@"
    shopt -u extdebug
}

Example Usage:

$ reverse_array 1 2 3 4 5
5
4
3
2
1
$ arr=(red blue green)
$ reverse_array "${arr[@]}"
green
blue
red

Remove duplicate array elements

Create a temporary associative array. When setting associative array values and a duplicate assignment occurs, bash overwrites the key. This allows us to effectively remove array duplicates.

CAVEAT: Requires bash 4+

Example Function:

remove_array_dups() {
    # Usage: remove_array_dups "array"
    declare -A tmp_array
    for i in "$@"; do
        [[ $i ]] && IFS=" " tmp_array["${i:- }"]=1
    done
    printf '%s\n' "${!tmp_array[@]}"
}

Example Usage:

$ remove_array_dups 1 1 2 2 3 3 3 3 3 4 4 4 4 4 5 5 5 5 5 5
1
2
3
4
5
$ arr=(red red green blue blue)
$ remove_array_dups "${arr[@]}"
red
green
blue

Random array element

Example Function:

random_array_element() {
    # Usage: random_array_element "array"
    local arr=("$@")
    printf '%s\n' "${arr[RANDOM % $#]}"
}

Example Usage:

$ array=(red green blue yellow brown)
$ random_array_element "${array[@]}"
yellow
# Multiple arguments can also be passed.
$ random_array_element 1 2 3 4 5 6 7
3

Cycle through an array

Each time the printf is called, the next array element is printed. When the print hits the last array element it starts from the first element again.

arr=(a b c d)
cycle() {
    printf '%s ' "${arr[${i:=0}]}"
    ((i=i>=${#arr[@]}-1?0:++i))
}

Toggle between two values

This works the same as above, this is just a different use case.

arr=(true false)
cycle() {
    printf '%s ' "${arr[${i:=0}]}"
    ((i=i>=${#arr[@]}-1?0:++i))
}

LOOPS

Loop over a range of numbers

Alternative to seq.

# Loop from 0-100 (no variable support).
for i in {0..100}; do
    printf '%s\n' "$i"
done

Loop over a variable range of numbers

Alternative to seq.

# Loop from 0-VAR.
VAR=50
for ((i=0;i<=VAR;i++)); do
    printf '%s\n' "$i"
done

Loop over an array

arr=(apples oranges tomatoes)
# Just elements.
for element in "${arr[@]}"; do
    printf '%s\n' "$element"
done

Loop over an array with an index

arr=(apples oranges tomatoes)
# Elements and index.
for i in "${!arr[@]}"; do
    printf '%s\n' "${arr[i]}"
done
# Alternative method.
for ((i=0;i<${#arr[@]};i++)); do
    printf '%s\n' "${arr[i]}"
done

Loop over the contents of a file

while read -r line; do
    printf '%s\n' "$line"
done < "file"

Loop over files and directories

Don't use ls.

# Greedy example.
for file in *; do
    printf '%s\n' "$file"
done
# PNG files in dir.
for file in ~/Pictures/*.png; do
    printf '%s\n' "$file"
done
# Iterate over directories.
for dir in ~/Downloads/*/; do
    printf '%s\n' "$dir"
done
# Brace Expansion.
for file in /path/to/parentdir/{file1,file2,subdir/file3}; do
    printf '%s\n' "$file"
done
# Iterate recursively.
shopt -s globstar
for file in ~/Pictures/**/*; do
    printf '%s\n' "$file"
done
shopt -u globstar

FILE HANDLING

CAVEAT: bash does not handle binary data properly in versions < 4.4.

Read a file to a string

Alternative to the cat command.
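A minimal example, using bash's $(<file) form of command substitution (the snippet assumes a file literally named "file", matching the other examples):

# Read the entire file into one string.
file_data="$(<"file")"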

Read a file to an array (by line)

Alternative to the cat command.

# Bash <4
IFS=$'\n' read -d '' -ra file_data < 'file'
# Bash 4+
mapfile -t file_data < 'file'

Get the first N lines of a file

Alternative to the head command.

CAVEAT: Requires bash 4+

Example Function:

head() {
    # Usage: head "n" "file"
    mapfile -tn "$1" line < "$2"
    printf '%s\n' "${line[@]}"
}

Example Usage:

$ head 2 ~/.bashrc
# Prompt
PS1=''
$ head 1 ~/.bashrc
# Prompt

Get the last N lines of a file

Alternative to the tail command.

CAVEAT: Requires bash 4+

Example Function:

tail() {
    # Usage: tail "n" "file"
    mapfile -tn 0 line < "$2"
    printf '%s\n' "${line[@]: -$1}"
}

Example Usage:

$ tail 2 ~/.bashrc
# Enable tmux.
# [[ -z "$TMUX"  ]] && exec tmux
$ tail 1 ~/.bashrc
# [[ -z "$TMUX"  ]] && exec tmux

Get the number of lines in a file

Alternative to wc -l.

Example Function (bash 4):

lines() {
    # Usage: lines "file"
    mapfile -tn 0 lines < "$1"
    printf '%s\n' "${#lines[@]}"
}

Example Function (bash 3):

This method uses less memory than the mapfile method and works in bash 3 but it is slower for bigger files.

lines_loop() {
    # Usage: lines_loop "file"
    count=0
    while IFS= read -r _; do
        ((count++))
    done < "$1"
    printf '%s\n' "$count"
}

Example Usage:

$ lines ~/.bashrc
48
$ lines_loop ~/.bashrc
48

Count files or directories in directory

This works by passing the output of the glob to the function and then counting the number of arguments.

Example Function:

count() {
    # Usage: count /path/to/dir/*
    #        count /path/to/dir/*/
    printf '%s\n' "$#"
}

Example Usage:

# Count all files in dir.
$ count ~/Downloads/*
232
# Count all dirs in dir.
$ count ~/Downloads/*/
45
# Count all jpg files in dir.
$ count ~/Pictures/*.jpg
64

Create an empty file

Alternative to touch.

# Shortest.
>file
# Longer alternatives:
:>file
echo -n >file
printf '' >file

Extract lines between two markers

Example Function:

extract() {
    # Usage: extract file "opening marker" "closing marker"
    while IFS=$'\n' read -r line; do
        [[ $extract && $line != "$3" ]] &&
            printf '%s\n' "$line"
        [[ $line == "$2" ]] && extract=1
        [[ $line == "$3" ]] && extract=
    done < "$1"
}

Example Usage:

# Extract code blocks from MarkDown file.
$ extract ~/projects/pure-bash/README.md '```sh' '```'
# Output here...

FILE PATHS

Get the directory name of a file path

Alternative to the dirname command.

Example Function:

dirname() {
    # Usage: dirname 'path'
    printf '%s\n' "${1%/*}/"
}

Example Usage:

$ dirname ~/Pictures/Wallpapers/1.jpg
/home/black/Pictures/Wallpapers/
$ dirname ~/Pictures/Downloads/
/home/black/Pictures/

Get the base-name of a file path

Alternative to the basename command.

Example Function:

basename() {
    # Usage: basename "path"
    : "${1%/}"
    printf '%s\n' "${_##*/}"
}

Example Usage:

$ basename ~/Pictures/Wallpapers/1.jpg
1.jpg
$ basename ~/Pictures/Downloads/
Downloads

VARIABLES

Assign and access a variable using a variable

$ hello_world="value"
# Create the variable name.
$ var="world"
$ ref="hello_$var"
# Print the value of the variable name stored in "hello_$var".
$ printf '%s\n' "${!ref}"
value

Alternatively, on bash 4.3+:

$ hello_world="value"
$ var="world"
# Declare a nameref.
$ declare -n ref=hello_$var
$ printf '%s\n' "$ref"
value

Name a variable based on another variable

$ var="world"
$ declare "hello_$var=value"
$ printf '%s\n' "$hello_world"
value

ESCAPE SEQUENCES

Contrary to popular belief, there is no issue in utilizing raw escape sequences. Using tput abstracts the same ANSI sequences as if printed manually. Worse still, tput is not actually portable. There are a number of tput variants each with different commands and syntaxes (try tput setaf 3 on a FreeBSD system). Raw sequences are fine.

Text Colors

NOTE: Sequences requiring RGB values only work in True-Color Terminal Emulators.

Sequence What does it do? Value
\e[38;5;<NUM>m Set text foreground color. 0-255
\e[48;5;<NUM>m Set text background color. 0-255
\e[38;2;<R>;<G>;<B>m Set text foreground color to RGB color. R, G, B
\e[48;2;<R>;<G>;<B>m Set text background color to RGB color. R, G, B
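For example, this small illustrative snippet prints red text using the 256-color foreground sequence and then resets the formatting:

# 196 is a bright red in the 256-color palette.
printf '\e[38;5;196m%s\e[m\n' 'hello in red'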

Text Attributes

Sequence What does it do?
\e[m Reset text formatting and colors.
\e[1m Bold text.
\e[2m Faint text.
\e[3m Italic text.
\e[4m Underline text.
\e[5m Slow blink.
\e[7m Swap foreground and background colors.

Cursor Movement

Sequence What does it do? Value
\e[<LINE>;<COLUMN>H Move cursor to absolute position. line, column
\e[H Move cursor to home position (0,0).
\e[<NUM>A Move cursor up N lines. num
\e[<NUM>B Move cursor down N lines. num
\e[<NUM>C Move cursor right N columns. num
\e[<NUM>D Move cursor left N columns. num
\e[s Save cursor position.
\e[u Restore cursor position.

Erasing Text

Sequence What does it do?
\e[K Erase from cursor position to end of line.
\e[1K Erase from cursor position to start of line.
\e[2K Erase the entire current line.
\e[J Erase from the current line to the bottom of the screen.
\e[1J Erase from the current line to the top of the screen.
\e[2J Clear the screen.
\e[2J\e[H Clear the screen and move cursor to 0,0.

PARAMETER EXPANSION

Indirection

Parameter What does it do?
${!VAR} Access a variable based on the value of VAR.
${!VAR*} Expand to IFS separated list of variable names starting with VAR.
${!VAR@} Expand to IFS separated list of variable names starting with VAR. If double-quoted, each variable name expands to a separate word.

Replacement

Parameter What does it do?
${VAR#PATTERN} Remove shortest match of pattern from start of string.
${VAR##PATTERN} Remove longest match of pattern from start of string.
${VAR%PATTERN} Remove shortest match of pattern from end of string.
${VAR%%PATTERN} Remove longest match of pattern from end of string.
${VAR/PATTERN/REPLACE} Replace first match with string.
${VAR//PATTERN/REPLACE} Replace all matches with string.
${VAR/PATTERN} Remove first match.
${VAR//PATTERN} Remove all matches.
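A short illustration of shortest versus longest match (file is a hypothetical example value):

file="/path/to/archive.tar.gz"
printf '%s\n' "${file#*.}"   # tar.gz (shortest match removed from start)
printf '%s\n' "${file##*.}"  # gz (longest match removed from start)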

Length

Parameter What does it do?
${#VAR} Length of var in characters.
${#ARR[@]} Length of array in elements.

Expansion

Parameter What does it do?
${VAR:OFFSET} Remove first N chars from variable.
${VAR:OFFSET:LENGTH} Get substring from N character to N character. (${VAR:10:10}: Get sub-string from char 10 to char 20)
${VAR:: OFFSET} Get first N chars from variable.
${VAR:: -OFFSET} Remove last N chars from variable.
${VAR: -OFFSET} Get last N chars from variable.
${VAR:OFFSET:-OFFSET} Cut first N chars and last N chars.

Case Modification

Parameter What does it do? CAVEAT
${VAR^} Uppercase first character. bash 4+
${VAR^^} Uppercase all characters. bash 4+
${VAR,} Lowercase first character. bash 4+
${VAR,,} Lowercase all characters. bash 4+
${VAR~} Reverse case of first character. bash 4+
${VAR~~} Reverse case of all characters. bash 4+

Default Value

Parameter What does it do?
${VAR:-STRING} If VAR is empty or unset, use STRING as its value.
${VAR-STRING} If VAR is unset, use STRING as its value.
${VAR:=STRING} If VAR is empty or unset, set the value of VAR to STRING.
${VAR=STRING} If VAR is unset, set the value of VAR to STRING.
${VAR:+STRING} If VAR is not empty, use STRING as its value.
${VAR+STRING} If VAR is set, use STRING as its value.
${VAR:?STRING} Display an error if empty or unset.
${VAR?STRING} Display an error if unset.
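For example, falling back to a default when the first argument is empty or unset (greet is a hypothetical function):

greet() {
    # Usage: greet [name]
    # ${1:-World} substitutes "World" if $1 is empty or unset.
    printf 'Hello, %s\n' "${1:-World}"
}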

BRACE EXPANSION

Ranges

# Syntax: {<START>..<END>}
# Print numbers 1-100.
echo {1..100}
# Print range of floats.
echo 1.{1..9}
# Print chars a-z.
echo {a..z}
echo {A..Z}
# Nesting.
echo {A..Z}{0..9}
# Print zero-padded numbers.
# CAVEAT: bash 4+
echo {01..100}
# Change increment amount.
# Syntax: {<START>..<END>..<INCREMENT>}
# CAVEAT: bash 4+
echo {1..10..2} # Increment by 2.

String Lists

echo {apples,oranges,pears,grapes}
# Example Usage:
# Remove dirs Movies, Music and ISOS from ~/Downloads/.
rm -rf ~/Downloads/{Movies,Music,ISOS}

CONDITIONAL EXPRESSIONS

File Conditionals

Expression Value What does it do?
-a file If file exists.
-b file If file exists and is a block special file.
-c file If file exists and is a character special file.
-d file If file exists and is a directory.
-e file If file exists.
-f file If file exists and is a regular file.
-g file If file exists and its set-group-id bit is set.
-h file If file exists and is a symbolic link.
-k file If file exists and its sticky-bit is set
-p file If file exists and is a named pipe (FIFO).
-r file If file exists and is readable.
-s file If file exists and its size is greater than zero.
-t fd If file descriptor is open and refers to a terminal.
-u file If file exists and its set-user-id bit is set.
-w file If file exists and is writable.
-x file If file exists and is executable.
-G file If file exists and is owned by the effective group ID.
-L file If file exists and is a symbolic link.
-N file If file exists and has been modified since last read.
-O file If file exists and is owned by the effective user ID.
-S file If file exists and is a socket.

File Comparisons

Expression What does it do?
file -ef file2 If both files refer to the same inode and device numbers.
file -nt file2 If file is newer than file2 (uses modification time) or file exists and file2 does not.
file -ot file2 If file is older than file2 (uses modification time) or file2 exists and file does not.

Variable Conditionals

Expression Value What does it do?
-o opt If shell option is enabled.
-v var If variable has a value assigned.
-R var If variable is a name reference.
-z var If the length of string is zero.
-n var If the length of string is non-zero.

Variable Comparisons

Expression What does it do?
var = var2 Equal to.
var == var2 Equal to (synonym for =).
var != var2 Not equal to.
var < var2 Less than (in ASCII alphabetical order.)
var > var2 Greater than (in ASCII alphabetical order.)
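A small combined example (hypothetical snippet) using a variable conditional together with a file conditional:

# Act only when $1 is non-empty and names a readable file.
if [[ -n $1 && -r $1 ]]; then
    printf '%s\n' "$1 exists and is readable."
fi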

ARITHMETIC OPERATORS

Assignment

Operators What does it do?
= Initialize or change the value of a variable.

Arithmetic

Operators What does it do?
+ Addition
- Subtraction
* Multiplication
/ Division
** Exponentiation
% Modulo
+= Plus-Equal (Increment a variable.)
-= Minus-Equal (Decrement a variable.)
*= Times-Equal (Multiply a variable.)
/= Slash-Equal (Divide a variable.)
%= Mod-Equal (Remainder of dividing a variable.)

Bitwise

Operators What does it do?
<< Bitwise Left Shift
<<= Left-Shift-Equal
>> Bitwise Right Shift
>>= Right-Shift-Equal
& Bitwise AND
&= Bitwise AND-Equal
| Bitwise OR
|= Bitwise OR-Equal
~ Bitwise NOT
^ Bitwise XOR
^= Bitwise XOR-Equal

Logical

Operators What does it do?
! NOT
&& AND
|| OR

Miscellaneous

Operators What does it do? Example
, Comma Separator ((a=1,b=2,c=3))

ARITHMETIC

Simpler syntax to set variables

# Simple math
((var=1+2))
# Decrement/Increment variable
((var++))
((var--))
((var+=1))
((var-=1))
# Using variables
((var=var2*arr[2]))

Ternary Tests

# Set the value of var to var2 if var2 is greater than var.
# var: variable to set.
# var2>var: Condition to test.
# ?var2: If the test succeeds.
# :var: If the test fails.
((var=var2>var?var2:var))

TRAPS

Traps allow a script to execute code on various signals. In pxltrm (a pixel art editor written in bash) traps are used to redraw the user interface on window resize. Another use case is cleaning up temporary files on script exit.

Traps should be added near the start of scripts so any early errors are also caught.

NOTE: For a full list of signals, see trap -l.

Do something on script exit

# Clear screen on script exit.
trap 'printf \\e[2J\\e[H\\e[m' EXIT

Ignore terminal interrupt (CTRL+C, SIGINT)
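One way to do this is to register an empty handler for the signal:

# An empty command string causes the signal to be ignored.
trap '' INT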

React to window resize

# Call a function on window resize.
trap 'code_here' SIGWINCH

Do something before every command
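bash's DEBUG trap fires before every simple command, so, mirroring the snippets above:

# Call a function before every command.
trap 'code_here' DEBUG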

Do something when a shell function or a sourced file finishes executing
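The RETURN trap covers this case:

# Call a function when a shell function or sourced file returns.
trap 'code_here' RETURN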

PERFORMANCE

Disable Unicode

If unicode is not required, it can be disabled for a performance increase. Results may vary; however, there have been noticeable improvements in neofetch and other programs.

# Disable unicode.
LC_ALL=C
LANG=C

OBSOLETE SYNTAX

Shebang

Use #!/usr/bin/env bash instead of #!/bin/bash.

  • The former searches the user's PATH to find the bash binary.
  • The latter assumes it is always installed to /bin/ which can cause issues.
# Right:
    #!/usr/bin/env bash
# Wrong:
    #!/bin/bash

Command Substitution

Use $() instead of ` `.

# Right.
var="$(command)"
# Wrong.
var=`command`
# $() can easily be nested whereas `` cannot.
var="$(command "$(command)")"

Function Declaration

Do not use the function keyword, it reduces compatibility with older versions of bash.

# Right.
do_something() {
    # ...
}
# Wrong.
function do_something() {
    # ...
}

INTERNAL VARIABLES

Get the location to the bash binary
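bash stores this in the $BASH internal variable:

"$BASH"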

Get the version of the current running bash process

# As a string.
"$BASH_VERSION"
# As an array.
"${BASH_VERSINFO[@]}"

Open the user's preferred text editor

"$EDITOR" "$file"
# NOTE: This variable may be empty, set a fallback value.
"${EDITOR:-vi}" "$file"

Get the name of the current function

# Current function.
"${FUNCNAME[0]}"
# Parent function.
"${FUNCNAME[1]}"
# So on and so forth.
"${FUNCNAME[2]}"
"${FUNCNAME[3]}"
# All functions including parents.
"${FUNCNAME[@]}"

Get the host-name of the system

"$HOSTNAME"
# NOTE: This variable may be empty.
# Optionally set a fallback to the hostname command.
"${HOSTNAME:-$(hostname)}"

Get the architecture of the Operating System
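bash exposes this in the $HOSTTYPE internal variable:

"$HOSTTYPE"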

Get the name of the Operating System / Kernel

This can be used to add conditional support for different Operating Systems without needing to call uname.
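The $OSTYPE internal variable holds this value:

"$OSTYPE"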

Get the current working directory

This is an alternative to the pwd built-in.
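The $PWD internal variable holds the current working directory:

"$PWD"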

Get the number of seconds the script has been running
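The $SECONDS internal variable holds this count:

"$SECONDS"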

Get a pseudorandom integer

Each time $RANDOM is used, a different integer between 0 and 32767 is returned. This variable should not be used for anything related to security (this includes encryption keys etc).
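For example, a (slightly modulo-biased) six-sided die roll:

printf '%s\n' "$((RANDOM % 6 + 1))"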

INFORMATION ABOUT THE TERMINAL

Get the terminal size in lines and columns (from a script)

This is handy when writing scripts in pure bash and stty/tput can't be called.

Example Function:

get_term_size() {
    # Usage: get_term_size
    # (:;:) is a micro sleep to ensure the variables are
    # exported immediately.
    shopt -s checkwinsize; (:;:)
    printf '%s\n' "$LINES $COLUMNS"
}

Example Usage:

# Output: LINES COLUMNS
$ get_term_size
15 55

Get the terminal size in pixels

CAVEAT: This does not work in some terminal emulators.

Example Function:

get_window_size() {
    # Usage: get_window_size
    printf '%b' "${TMUX:+\\ePtmux;\\e}\\e[14t${TMUX:+\\e\\\\}"
    IFS=';t' read -d t -t 0.05 -sra term_size
    printf '%s\n' "${term_size[1]}x${term_size[2]}"
}

Example Usage:

# Output: WIDTHxHEIGHT
$ get_window_size
1200x800
# Output (fail):
$ get_window_size
x

Get the current cursor position

This is useful when creating a TUI in pure bash.

Example Function:

get_cursor_pos() {
    # Usage: get_cursor_pos
    IFS='[;' read -p $'\e[6n' -d R -rs _ y x _
    printf '%s\n' "$x $y"
}

Example Usage:

# Output: X Y
$ get_cursor_pos
1 8

CONVERSION

Convert a hex color to RGB

Example Function:

hex_to_rgb() {
    # Usage: hex_to_rgb "#FFFFFF"
    #        hex_to_rgb "000000"
    : "${1/\#}"
    ((r=16#${_:0:2},g=16#${_:2:2},b=16#${_:4:2}))
    printf '%s\n' "$r $g $b"
}

Example Usage:

$ hex_to_rgb '#FFFFFF'
255 255 255

Convert an RGB color to hex

Example Function:

rgb_to_hex() {
    # Usage: rgb_to_hex 'r' 'g' 'b'
    printf '#%02x%02x%02x\n' "$1" "$2" "$3"
}

Example Usage:

$ rgb_to_hex '255' '255' '255'
#FFFFFF

CODE GOLF

Shorter for loop syntax

# Tiny C Style.
for((;i++<10;)){ echo "$i";}
# Undocumented method.
for i in {1..10};{ echo "$i";}
# Expansion.
for i in {1..10}; do echo "$i"; done
# C Style.
for((i=0;i<=10;i++)); do echo "$i"; done

Shorter infinite loops

# Normal method
while :; do echo hi; done
# Shorter
for((;;)){ echo hi;}

Shorter function declaration

# Normal method
f(){ echo hi;}
# Using a subshell
f()(echo hi)
# Using arithmetic
# This can be used to assign integer values.
# Example: f a=1
#          f a++
f()(($1))
# Using tests, loops etc.
# NOTE: 'while', 'until', 'case', '(())', '[[]]' can also be used.
f()if true; then echo "$1"; fi
f()for i in "$@"; do echo "$i"; done

Shorter if syntax

# One line
# Note: The 3rd statement may run when the 1st is true
[[ $var == hello ]] && echo hi || echo bye
[[ $var == hello ]] && { echo hi; echo there; } || echo bye
# Multi line (no else, single statement)
# Note: The exit status may not be the same as with an if statement
[[ $var == hello ]] &&
    echo hi
# Multi line (no else)
[[ $var == hello ]] && {
    echo hi
    # ...
}

Simpler case statement to set variable

The : built-in can be used to avoid repeating variable= in a case statement. The $_ variable stores the last argument of the last command. : always succeeds so it can be used to store the variable value.

# Modified snippet from Neofetch.
case "$OSTYPE" in
    "darwin"*)
        : "MacOS"
    ;;
    "linux"*)
        : "Linux"
    ;;
    *"bsd"* | "dragonfly" | "bitrig")
        : "BSD"
    ;;
    "cygwin" | "msys" | "win32")
        : "Windows"
    ;;
    *)
        printf '%s\n' "Unknown OS detected, aborting..." >&2
        exit 1
    ;;
esac
# Finally, set the variable.
os="$_"

OTHER

Use read as an alternative to the sleep command

Surprisingly, sleep is an external command and not a bash built-in.

CAVEAT: Requires bash 4+

Example Function:

read_sleep() {
    # Usage: read_sleep 1
    #        read_sleep 0.2
    read -rt "$1" <> <(:) || :
}

Example Usage:

read_sleep 1
read_sleep 0.1
read_sleep 30

For performance-critical situations, where it is not economic to open and close an excessive number of file descriptors, the allocation of a file descriptor may be done only once for all invocations of read:

(See the generic original implementation at https://blog.dhampir.no/content/sleeping-without-a-subprocess-in-bash-and-how-to-sleep-forever)

exec {sleep_fd}<> <(:)
while some_quick_test; do
    # equivalent of sleep 0.001
    read -t 0.001 -u $sleep_fd
done

Check if a program is in the user's PATH

# There are 3 ways to do this and any one of them can be used.
type -p executable_name &>/dev/null
hash executable_name &>/dev/null
command -v executable_name &>/dev/null
# As a test.
if type -p executable_name &>/dev/null; then
    # Program is in PATH.
fi
# Inverse.
if ! type -p executable_name &>/dev/null; then
    # Program is not in PATH.
fi
# Example (Exit early if program is not installed).
if ! type -p convert &>/dev/null; then
    printf '%s\n' 'error: convert is not installed, exiting...'
    exit 1
fi

Get the current date using strftime

Bash's printf has a built-in method of getting the date which can be used in place of the date command.

CAVEAT: Requires bash 4+

Example Function:

date() {
    # Usage: date "format"
    # See: 'man strftime' for format.
    printf "%($1)T\\n" "-1"
}

Example Usage:

# Using above function.
$ date '%a %d %b  - %l:%M %p'
Fri 15 Jun  - 10:00 AM
# Using printf directly.
$ printf '%(%a %d %b  - %l:%M %p)T\n' '-1'
Fri 15 Jun  - 10:00 AM
# Assigning a variable using printf.
$ printf -v date '%(%a %d %b  - %l:%M %p)T\n' '-1'
$ printf '%s\n' "$date"
Fri 15 Jun  - 10:00 AM

Get the username of the current user

CAVEAT: Requires bash 4.4+

$ : \\u
# Expand the parameter as if it were a prompt string.
$ printf '%s\n' "${_@P}"
black

Generate a UUID V4

CAVEAT: The generated value is not cryptographically secure.

Example Function:

uuid() {
    # Usage: uuid
    C="89ab"
    for ((N=0;N<16;++N)); do
        B="$((RANDOM%256))"
        case "$N" in
            6)  printf '4%x' "$((B%16))" ;;
            8)  printf '%c%x' "${C:$RANDOM%${#C}:1}" "$((B%16))" ;;
            3|5|7|9)
                printf '%02x-' "$B"
            ;;
            *)
                printf '%02x' "$B"
            ;;
        esac
    done
    printf '\n'
}

Example Usage:

$ uuid
d5b6c731-1310-4c24-9fe3-55d556d44374

Progress bars

This is a simple way of drawing progress bars without needing a for loop in the function itself.

Example Function:

bar() {
    # Usage: bar 1 10
    #            ^----- Elapsed Percentage (0-100).
    #               ^-- Total length in chars.
    ((elapsed=$1*$2/100))
    # Create the bar with spaces.
    printf -v prog  "%${elapsed}s"
    printf -v total "%$(($2-elapsed))s"
    printf '%s\r' "[${prog// /-}${total}]"
}

Example Usage:

for ((i=0;i<=100;i++)); do
    # Pure bash micro sleeps (for the example).
    (:;:) && (:;:) && (:;:) && (:;:) && (:;:)
    # Print the bar.
    bar "$i" "10"
done
printf '\n'

Get the list of functions in a script

get_functions() {
    # Usage: get_functions
    IFS=$'\n' read -d '' -ra functions < <(declare -F)
    printf '%s\n' "${functions[@]//declare -f }"
}

Bypass shell aliases

# alias
ls
# command
# shellcheck disable=SC1001
\ls

Bypass shell functions

# function
ls
# command
command ls

Run a command in the background

This will run the given command and keep it running, even after the terminal or SSH connection is terminated. All output is ignored.

bkr() {
    (nohup "$@" &>/dev/null &)
}
bkr ./some_script.sh # some_script.sh is now running in the background

AFTERWORD

Thanks for reading! If this bible helped you in any way and you'd like to give back, consider donating. Donations give me the time to make this the best resource possible. Can't donate? That's OK, star the repo and share it with your friends!

Rock on.




All Comments: [-] | anchor

eatbitseveryday(3980) about 14 hours ago [-]

The title made me think you can query The Bible in bash...

deviantfero(10000) 1 day ago [-]

Great read dylan!

dyanaraps(10000) 1 day ago [-]

Thanks! Fancy seeing you here. :)

croo(4204) about 23 hours ago [-]

I write a bash script or two every month so I thought I'm okay. But then came along the very first example:

  trim_string() {
      # Usage: trim_string "   example   string    "
      : "${1#"${1%%[![:space:]]*}"}"
      : "${_%"${_##*[![:space:]]}"}"
      printf '%s\n' "$_"
  }
Ok, so the : is somehow a temporary variable... Then there is a variable starting at $ and you lost me :D Can someone break down that line for me? What the hell is going on here?

    : "${1#"${1%%[![:space:]]*}"}"
pxtail(10000) about 22 hours ago [-]

And that's the problem with bash scripting - very quickly it gets very cryptic, difficult to follow and understand without knowing various 'clever tricks' and gimmicks. This is all fine and dandy for personal usage, but god forbid other developers need to change something inside this kind of 'clever' code.

vghnb(10000) about 22 hours ago [-]

: is the null command. Kind of like /bin/true. So : followed by anything exists simply to perform expansion on something. If you fully understand what that means, then you will understand this: $_ is simply "whatever the last argument to the previous command expanded to". So : is the previous command and the ${1... expanded value is now ${_....

Because $_ is used in the expansion of itself, it is the same value, because the command has not yet completed, which would (re) set $_. So use of a thing ($_) in expanding the same thing ($_) is perfectly fine until after that (null) command (:) runs. You see that the final $_ is used standalone. Hope this helps.

lervag(4215) about 22 hours ago [-]

I think the idea is that the `:` builtin allows expansion of arguments without actually doing anything else. However, the temporary variable $_ is filled with the content of the expression. That is, after the first

    : "${1#"${1%%[![:space:]]*}"}"
The $_ temporary variable contains the result of removing the leading spaces. In the next line, the spaces at the end are removed from the temporary variable with the ${_%...} syntax.

You can test this in your own shell by e.g. doing:

    : $PATH
    echo $_
emmelaich(4028) about 22 hours ago [-]

I don't like this function very much but here's a few notes...

: is a 'do nothing' command -- but the line is still evaluated

%% removes the longest match of the pattern from the end of the string

## removes the longest match of the pattern from the start

I don't know why they're using $_; that's the variable containing the interpreter name, i.e. '/bin/bash' [edit - also the last argument of the previous command!]

I can't be bothered analyzing it any further :-)

Jenz(10000) about 21 hours ago [-]

Looking at the very first example:

  trim_string(){
      # Usage: trim_string "   example   string    "
      : "${1#"${1%%[![:space:]]*}"}"
      : "${_%"${_##*[![:space:]]}"}"
      printf '%s\n' "$_"
  }
This reads horrible. I see no reason to prefer this over programs like sed, bash is after all a shell, intended firstly for running external programs/commands.
collyw(4199) about 21 hours ago [-]

> I see no reason to prefer this over programs like sed, bash is after all a shell, intended firstly for running external commands.

Are there systems that don't come with sed installed? (Some docker containers I have logged into don't seem to have less).

Tepix(3968) about 21 hours ago [-]

> This reads horrible.

Just use the shell function by its descriptive name...

The reasons are explained in the foreword:

Calling an external process in bash is expensive and excessive use will cause a noticeable slowdown. Scripts and programs written using built-in methods (where applicable) will be faster, require fewer dependencies and afford a better understanding of the language itself.

taviso(3800) about 17 hours ago [-]

This seems like a good time to mention my (ridiculous) project, a ctypes module for bash.

https://github.com/taviso/ctypes.sh/wiki

There are some little demos here:

https://github.com/taviso/ctypes.sh/tree/master/test

I even ported the GTK+3 Hello World to bash as a demo:

https://github.com/taviso/ctypes.sh/blob/master/test/gtk.sh

bokchoi(2986) about 16 hours ago [-]

This is so awesome. I need to play with this.

singron(10000) about 14 hours ago [-]

I found ctypes.sh to be legitimately useful for managing resources in nix-shell.

E.g. flock a lock file and set CLOEXEC so that subprocesses don't hold the lock open after the shell exits.

E.g. Use memfd_create to create a temp file and write a key. Then pass /proc/$$/fd/$FD to programs that need the key as a file. When the shell exits, the file can no longer be opened.

You can do similar things with traps, but they aren't guaranteed to execute, whereas these OS primitives will always be cleaned up.

donpdonp(3891) about 15 hours ago [-]

'ctypes.sh is a bash plugin that provides a foreign function interface directly in your shell.'

NegativeLatency(4219) 1 day ago [-]

Kinda cool, I don't write or read much bash and tend to stick to sh compatible stuff.

There are some neat tricks in here but they don't seem very readable compared to perl/awk/sed.

hagreet(10000) 1 day ago [-]

Yeah, I also don't look at things like

``` trim_string() { # Usage: trim_string "   example   string    " : "${1#"${1%%[![:space:]]*}"}" : "${_%"${_##*[![:space:]]}"}" printf '%s\n' "$_" } ```

and think: 'I should use bash more'.

Bash is nice for making simple things simple but for complicated things it's just shitty. I used to think that this is due to the complicated quoting rules which make the simple things simple but tcl does a much better job at that.

In either case I prefer the clean rules of a Python or Perl for anything larger.

buraequete(10000) 1 day ago [-]

I expected a tool on bash where you can access the pure 'Bible'

hallelujah.sh

sodaplayer(10000) 1 day ago [-]

On that note, does anyone have a favorite cli Bible reading program?

tannhaeuser(2480) 1 day ago [-]

Didn't realize at first that 'pure' refers to features available in bash without calling out to external processes, when I would've thought purism in this context should refer to avoiding bashisms and writing portable (ksh, POSIX shell) scripts.

dyanaraps(10000) 1 day ago [-]

I understand your concerns about bash features and POSIX compatibility. The bash bible was written specifically to document the shell extensions bash implements.

My focus for the past few months has been writing a Linux distribution (and its package manager/tooling) in POSIX sh.

I've learned a lot of tricks and I'm very tempted to write a second 'bible' with snippets that are supported in all POSIX shells.

(I created the bash bible).

dyanaraps(10000) 1 day ago [-]

Hello, I'm the author of the Pure Bash Bible. Happy to answer any questions you may have.

Here's an example of what bash is capable of: https://github.com/dylanaraps/fff/ (a TUI file manager written in bash)!

davidrm(10000) about 22 hours ago [-]

completely unrelated, it's so funny that i saw this comment, looked at your name and thought 'hey, i know this guy!'. it turns out i forked your dotfiles ages ago when i first started playing with i3. also neofetch is pretty cool!

dredmorbius(199) about 9 hours ago [-]

Pointers to documentation of features used would be a major benefit to this reference.

The hacks (as a 30+ year sh / ksh / bash user) are indeed Very Cool.

tjpnz(10000) about 18 hours ago [-]

Any plans to sell the book in dead tree format?

tambourine_man(105) about 18 hours ago [-]

fff looks amazing. I've started building something like it many times, but never finished. Thank you

hnarn(10000) about 22 hours ago [-]

This might be an odd/off-topic question, but in Telegram this article has an auto-fetched thumbnail of a cat smoking a cigarette and a text similar to 'heavy metal music playing', I'm just curious where this picture is from, if you have any idea? I checked the README for the repo, pictures of the contributors etc. but I'm unable to figure out where it's coming from.

aydwi(4214) about 20 hours ago [-]

[OT] Hi Dylan! Just discovered your project - KISS. I respect you a lot for what you write, and you've been an inspiration at times.

Did you just decide one day that you have to write a distribution from scratch? What was the thought process, and how complicated is it actually? Also, I'd like to contribute if there's a chance.

CaptainZapp(2147) about 19 hours ago [-]

Just some feedback on the sale of the book (which I wanted and probably will buy).

Upon checkout you need to confirm the sale via an email sent to the email address you provided.

I may be the exception and granted - it's basically due to a very crappy phone on which email ceased to work - but I was not able to finalize the sale since I'm not able to access my private email remotely.

I sent myself the link and will probably give it another shot from home. But you may want to take it up with the seller that there are folks out there for which this is not really a convenient way to close a sale. Especially not after entering valid credit card information.

jmnicolas(4130) about 19 hours ago [-]

I have a personal dislike for regexes and non human readable code, it gives maintenance headache. It's why I avoid shell scripts as much as possible.

The first example is not human readable; if the name of the function is a lie, I have no idea what this piece of code does:

  trim_string() {
    : "${1#"${1%%[![:space:]]*}"}"
    : "${_%"${_##*[![:space:]]}"}"
    printf '%s\n' "$_"
  }
dredmorbius(199) about 10 hours ago [-]

My inner pedant compels me to reply that these are not in fact regexes, but Bash parameter expansions.

https://www.gnu.org/software/bash/manual/html_node/Shell-Par...

My inner pedant has no comment regarding readability of Bash parameter expansions.

peterwwillis(2589) about 17 hours ago [-]

Is the following a maintenance headache, or non-human readable?

  #!/bin/sh
  FOO=" some long string here "
  FOO="$( echo "$FOO" | sed -e 's/^[[:space:]]*//g; s/[[:space:]]*$//g' )"
This isn't 'pure bash', but most of what I write in shell scripts isn't 'pure bash'. It's shell scripting: dirty, slow, easy, effective.

Like any 'language', it takes on the complexity you put into it. English is really complicated, but you can also use a subset of it with only 850, 1200, or 2400 words, and suddenly it's very simple and clear.

serhart(10000) about 18 hours ago [-]

Just because you seem to be unfamiliar with bash doesn't mean it's unreadable. Almost any language will look cryptic if you don't know it. That function is mostly just parameter expansion and very common in most bash scripts. I bet if you read the manual you would easily be able to figure it out. You just have to learn the language.

kamaal(633) about 3 hours ago [-]

On the contrary regexes were invented because you shouldn't have to write 500 lines of code for that which can be done in two lines.

If we didn't have regexes we would have 100s of if/else's scattered all over program logic. That would be more hard to handle than the regex itself.

imihai1988(10000) about 13 hours ago [-]

those are not regexes though, are they ? apart from the [:space:] part

soheil(3439) about 15 hours ago [-]

Do you have a personal dislike for math too?

chapium(10000) about 17 hours ago [-]

Perhaps it shouldnt be defined as what the regex can do, but which unit tests its able to pass.

Koshkin(4163) about 16 hours ago [-]

It might be worth it to learn it though, even if just for fun.

inimino(4175) about 18 hours ago [-]

I have a personal dislike for 'non human readable' used as shorthand for 'not immediately and easily readable by me'.

seamyb88(10000) about 17 hours ago [-]

> It's why I avoid shell scripts as much as possible.

This is all from somebody who has just written a linux user space. Perhaps the author avoids shell as much as possible. It just isn't possible. I doubt it, though. Bash is awesome.

heuxbzjnz(10000) 1 day ago [-]

no mention of /dev/tcp?!

yes, it looks like a device node in /dev, but it's really a pure bashism for opening tcp connections to arbitrary hosts and ports

anaphor(2897) about 15 hours ago [-]

https://www.linuxjournal.com/content/more-using-bashs-built-... has a decent explanation.

I came here to say the exact same thing. This is my favourite thing most people don't know exists in Bash.

unixhero(3844) 1 day ago [-]

Whoa... News to me! Thanks.

dredmorbius(199) about 9 hours ago [-]

Keep in mind that major distros (Debian, possibly derivatives) disabled tcp/udp service names in 2000:

https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=65172

dyanaraps(10000) 1 day ago [-]

I've been meaning to get around to it. I hadn't messed around with /dev/tcp prior to writing this bible.

I've since implemented a very bare-bones and very featureless IRC client using /dev/tcp and bash.

https://github.com/dylanaraps/birch

I will get around to it eventually. The one hurdle I want to get over before writing a piece about it is the handling of binary data using bash.

This is something a little tricky to do with bash but it'd allow for a 'wget'/'curl' like program without the use of anything external to the shell (no HTTPS of course).

I want to really understand the feature before I write about it though in the meantime I could just write a reference to the syntax/basic usage. :)

mistrial9(10000) about 18 hours ago [-]

I have inherited about 4K LOC Bash, which mostly works as advertised. No one wants to touch it! suggestions ?

stjohnswarts(10000) about 18 hours ago [-]

Leave it alone if you don't need to modify it. If you start needing to modify it a lot start rewriting it in ruby, go, or python at that point. Use set -x at the top of the file to get an idea on what's going on.

tjpnz(10000) about 18 hours ago [-]

Assuming it's all in one file I would move related pieces of functionality into separate files and and then source them when necessary. Should make things more manageable for people only wanting to make small changes.

andrewl-hn(10000) about 18 hours ago [-]

Skim https://learnxinyminutes.com/docs/bash/ Then open the code and just go through it. Check everything you find unfamiliar with that cheat sheet. If it doesn't cover something in your code, write it down to research separately later.

faragon(4184) about 24 hours ago [-]

Related, pure Bash L77 data compression/decompression, crc32, hex enc/dec, base64 enc/dex, binary head/cut, in a single 13KB file:

https://github.com/faragon/lzb

dyanaraps(10000) about 24 hours ago [-]

This is really neat, thanks for sharing.

jigglesniggle(10000) 1 day ago [-]

While this is interesting I see doing anything but launching programs with simple text-substituted arguments as too much for bash or sh. Run shellcheck on some of your own code, or the code of even a simple project to see how hard it is to really use bash.

Why I think people gravitate towards it is because languages such as python add too much pomp to launching a shell process. A language like perl is usually easier to use but everyone hates it now.

dyanaraps(10000) 1 day ago [-]

The shell is my favorite language and I really don't know why.

It's definitely possible to write shell code which properly passes shellcheck's linter though it's an uphill battle to learn the ins and outs and _why_ X is wrong when Y is right.

I even managed to write a full TUI file manager in bash!

https://github.com/dylanaraps/fff

I fully understand that there are times when the shell should not be used and when other languages are a better way to solve a specific problem, however I love pushing the shell beyond its supposed limits! :)

davedx(2643) about 24 hours ago [-]

I use nodejs. Sometimes it's a case of 'go with what you know' rather than 'use the best tool for the job'. I have too many other things to do for me to bother learning yet another language just for command line scripting.

sambe(4222) about 24 hours ago [-]

The overhead of repeatedly launching subshells/processes to do simple operations can add up quickly, especially if it is happening in loops or in parallel. Yes, you shouldn't be using bash for performance, we all know that. But scripts often grow over time and suddenly are found to be slow/resource hogs. I have seen people demand that proper logging be added to a bash program and it then get minutes behind because of the overhead of all the processes called to do the logging. I was able to make that 10000x faster using pure bash.

As to why people like it: it often feels more natural when you are automating what you would type interactively. Unix pipes/coreutils/etc. also feel like a better fit when that automation is mostly about connecting other programs (it's the auxiliary stuff that you'd maybe want in pure bash). Reading the subprocess Python documentation does not exactly fill me with joy. I've heard libraries like Plumbum make it a bit neater - but then you have to ask why learn a bunch of new libraries when I already know bash? In the end it's about the best tool for the job. The danger with bash is going too far, especially if you don't actually know it very well.

harry8(4201) 1 day ago [-]

Yeah perl was nice when we were allowed to use that. Why doesn't python have something like named pipes?

    f = open('ls|', 'r')
    f.read()
    f.close()
letusnot(10000) about 19 hours ago [-]

I recently started working a more devops role at a company (I've been a ruby dev for most of my career) and one of the things that I've noticed is how _insane_ scripting by ops people is. I'll start poking around a build pipeline and there will be all sorts of tortured usages of sed and jq and bash and coreutils to get things done that would be _really_ simple in ruby. I routinely see people using wget and curl in the same script. I'll see people pipe sed text replacement into perl for more text replacement. I'm constantly aware of ruby three-liners that are fully portable, even to Windows, that could replace a dozen lines of potentially subtly buggy shell scripting.

And honestly I can't see any good argument for this patchwork approach to gluing things together. I guess some ops people might argue that you'd have to have ruby everywhere but the counter argument would be that we use docker images for everything and adding ruby as a dependency isn't any worse than all the insane dependency gymnastics it takes to get our node apps working.

And all of this applies equally for any language with a reasonable standard library (python, perl). I think people have weird feelings about using bash or make or whatever to accomplish things, like they are riding closer to the metal or that they are living some deeply pragmatic zen Unix philosophy, but mostly they are making an un-testable mess until it works once and then, if they are lucky, they don't have to touch it again.

masklinn(3065) 1 day ago [-]

> I see doing anything but launching programs with simple text-substituted arguments as too much for bash or sh

I would heartily agree. If your shell script grows beyond half a dozen commands or so, you're probably better off rewriting it in just about anything. Python, ruby, go (gorun), rust (cargo-script), or whatever else, doesn't really matter as long as it's not shell.

Crinus(10000) about 21 hours ago [-]

One reason I prefer Bash (and I mean Bash, not just any shell) is that it tends to stay stable - scripts written years ago work just fine today. These days I mainly use Bash on Windows (via MSYS2), and really it is available pretty much everywhere, either out of the box or via installation.

If anything, code that I used to write in Python in the past I write in Bash nowadays, exactly because Bash has a better record when it comes to not breaking stuff. Though it helps that my Python use was also mostly scripts meant to run from the shell.

danielecook(3945) 1 day ago [-]

I don't think it's pomp. Once I learned Unix pipes and the tools for manipulating data (sed awk cut etc), it just became much faster and easier than writing python scripts that do the same thing. You can literally connect the output of one process to the input of another with a single character. It's much more complex in python.

It also tends to be very portable.
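
For example, each '|' below wires one process's stdout straight into the next one's stdin (the log file and field number are made up):

    # top 5 most frequent values of field 2 in a tab-separated log
    cut -f2 access.log | sort | uniq -c | sort -rn | head -5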

al_form2000(4211) 1 day ago [-]

Yes - and no. I find the shell is a useful glue language for procedural tasks (do this, run P1, then P2, and if blah) which can/had better be logically simple but rather long and heavy on the system interaction side.

For these, being able to avoid the various $(echo... |sed) can be refreshing. Beside, the book makes for a nice repository of techniques.

kamaal(633) 1 day ago [-]

Inability to use tools like Perl and Emacs is the biggest reason why I often have to break the bad news to people that their week- to month-long projects (typically in Python and Java) could likely be done in a few minutes to a day or two, if they knew Emacs or Perl well.

Programmers take great pride in freeing accountants and warehouse workers from drudgery. But seldom do we look at our own work in the same way.

In the real world, most software work is done in a way very similar to mining coal with shovels: laborious, manual, hand-typing jobs.

autoexec(10000) about 24 hours ago [-]

I started out with bash scripts and .bat files and eventually found perl to be the ideal solution to 99% of what I needed as well. I'm still using it for log file parsing and day to day administrative tasks, but I'm starting to pick up python because that seems to be where everyone has agreed to move to.

daitangio(10000) about 24 hours ago [-]

I am slowly moving from bash to python for my utility scripts. I was unable to love Perl; I think its weird syntax is a very bad choice.

Anyway, bash is still faster to use, and a lot of Unix services are based on it. The book is well written and has great added value. Thank you for sharing!!

grewil2(10000) about 24 hours ago [-]

Yes, Python is the Cobol of script languages.

jbrnh(10000) about 22 hours ago [-]

I don't quite get the recommendation to always use env bash over #!/bin/bash. If I use the full path, it is to get just that - the system's Bash. If it is missing or overridden in $PATH, then I most likely don't want the script to run in the first place.

YawningAngel(10000) about 22 hours ago [-]

'System bash' isn't a universal or clear concept. For example, if you're using Modern OS X, you likely have bash via brew or some other userspace package manager but no system bash. Presumably you (or at least, most people) would still like your scripts to run in this case.
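
For illustration (the echo line is just a placeholder):

    #!/usr/bin/env bash
    # env searches $PATH for bash, so this finds a brew-installed bash on
    # macOS or Termux's bash on Android, where a hard-coded #!/bin/bash
    # could fail with 'bad interpreter'
    echo "running under bash $BASH_VERSION"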

dredmorbius(199) about 9 hours ago [-]

    $ uname -a
    Linux localhost 3.10.49-5975984 #1 SMP PREEMPT Thu Oct 8 17:25:20 KST 2015 armv7l Android
    $ which env
    /data/data/com.termux/files/usr/bin/env
https://xkcd.com/927/




Historical Discussions: Amazon Changed Search Algorithm in Ways That Boost Its Own Products (September 16, 2019: 641 points)

(641) Amazon Changed Search Algorithm in Ways That Boost Its Own Products

641 points 4 days ago by juokaz in 1160th position

www.wsj.com | Estimated reading time – 16 minutes | comments | anchor

Amazon.com Inc. has adjusted its product-search system to more prominently feature listings that are more profitable for the company, said people who worked on the project—a move, contested internally, that could favor Amazon's own brands.

Late last year, these people said, Amazon optimized the secret algorithm that ranks listings so that instead of showing customers mainly the most-relevant and best-selling listings when they search—as it had for more than a decade—the site also gives a boost to items that are more profitable for the company.

The adjustment, which the world's biggest online retailer hasn't publicized, followed a yearslong battle between executives who run Amazon's retail businesses in Seattle and the company's search team, dubbed A9, in Palo Alto, Calif., which opposed the move, the people said.

Any tweak to Amazon's search system has broad implications because the giant's rankings can make or break a product. The site's search bar is the most common way for U.S. shoppers to find items online, and most purchases stem from the first page of search results, according to marketing analytics firm Jumpshot.

[Chart: Top Billing. When people search for products on Amazon, nearly two-thirds of all product clicks come from the first page of results, so the proliferation of Amazon's private-label products on the first page makes it more likely people choose those items. Example searches ('men's button down shirts' and 'paper towels', conducted Aug. 28) highlight Amazon private-label products. Based on a 2018 study of anonymous consumer actions on mobile and desktop devices. Angela Calderon/THE WALL STREET JOURNAL]

The issue is particularly sensitive because the U.S. and the European Union are examining Amazon's dual role—as marketplace operator and seller of its own branded products. An algorithm skewed toward profitability could steer customers toward thousands of Amazon's in-house products that deliver higher profit margins than competing listings on the site.

Amazon's lawyers rejected an initial proposal for how to add profit directly into the algorithm, saying it represented a change that could create trouble with antitrust regulators, one of the people familiar with the project said.

The Amazon search team's view was that the profitability push violated the company's principle of doing what is best for the customer, the people familiar with the project said. "This was definitely not a popular project," said one. "The search engine should look for relevant items, not for more profitable items."

Amazon CEO Jeff Bezos has propounded a 'customer obsession' mantra. Photo: JIM WATSON/Agence France-Presse/Getty Images

Amazon said it has for many years considered long-term profitability and does look at the impact of it when deploying an algorithm. "We have not changed the criteria we use to rank search results to include profitability," said Amazon spokeswoman Angie Newman in an emailed statement.

Amazon declined to say why A9 engineers considered the profitability emphasis to be a significant change to the algorithm, and it declined to discuss the inner workings of its algorithm or the internal discussions involving the algorithm, including the qualms of the company's lawyers.

The change could also boost brand-name products or third-party listings on the site that might be more profitable than Amazon's products. And the algorithm still also stresses longstanding metrics such as unit sales. The people who worked on the project said they didn't know how much the change has helped Amazon's own brands.

Amazon's Ms. Newman said: "Amazon designs its shopping and discovery experience to feature the products customers will want, regardless of whether they are our own brands or products offered by our selling partners."

Antitrust regulators for decades have focused on whether companies use market power to squeeze out competition. Amazon avoided scrutiny partly because its competitive marketplace of merchants drives down prices.

[Chart: Profit Center. A majority of Amazon's sales come from retail (retail sales and commissions: 75% of total sales), but a majority of its operating profits come from its cloud-computing unit. The chart compares retail, subscriptions, advertising and services as a percentage of total sales and of operating income.]

Now, some lawmakers are calling for Washington to rethink antitrust law to account for big technology companies' clout. In Amazon's case, they say it can bend its dominant platform to favor its own products. Sen. Elizabeth Warren (D., Mass.) has argued Amazon stifles small businesses by unfairly promoting its private-label products and underpricing competitors. Amazon has disputed this claim.

During a House antitrust hearing in July, lawmakers pressed Amazon on whether it used data gleaned from other sellers to favor its own products. "The best purchase to you is an Amazon product," said Rep. David Cicilline (D., R.I.). "No that's not true," replied Nate Sutton, an Amazon associate general counsel, saying Amazon's "algorithms are optimized to predict what customers want to buy regardless of the seller." House Judiciary Committee leaders recently asked Amazon to provide executive communications related to product searches on the site as part of a probe on anticompetitive behavior at technology companies.

Amazon says it operates in fiercely competitive markets, it represents less than 1% of global retail and its private-label business represents about 1% of its retail sales.

Amazon executives have sought to boost profitability in its retail business after years of focusing on growth. A majority of its $12.4 billion in operating income last year came from its growing cloud business.

Pressure on engineers

An account of Amazon's search-system adjustment emerges from interviews with people familiar with the internal discussions, including some who worked on the project, as well as former executives familiar with Amazon's private-label business.


The A9 team—named for the "A" in "Algorithms" plus its nine other letters—controls the all-important search and ranking functions on Amazon's site. Like other technology giants, Amazon keeps its algorithm a closely guarded secret, even internally, for competitive reasons and to prevent sellers from gaming the system.

Customers often believe that search algorithms are neutral and objective, and that results from their queries are the most relevant listings.

Executives from Amazon's retail divisions have frequently pressured the engineers at A9 to surface their products higher in search results, people familiar with the discussions said. Amazon's retail teams not only oversee its own branded products but also its wholesale vendors and vast marketplace of third-party sellers.

Amazon's private-label team in particular had for several years asked A9 to juice sales of Amazon's in-house products, some of these people said. The company sells over 10,000 products under its own brands, according to research firm Marketplace Pulse, ranging from everyday goods such as AmazonBasics batteries and Presto paper towels, to clothing such as Lark & Ro dresses.

Inside an Amazon fulfillment center. Photo: Krisztian Bocsi/Bloomberg News

Amazon's private-label business, at about 1% of retail sales, would represent less than $2 billion in 2018. Investment firm SunTrust Robinson Humphrey estimates the private-label business will post $31 billion in sales by 2022, more than Macy's Inc. 's annual revenue last year.

The private-label executives argued Amazon should promote its own items in search results, these people said. They pointed to grocery-store chains and drugstores that showcase their private-label products alongside national brands and promote them in-store.

A9 executives pushed back and said such a change would conflict with Chief Executive Jeff Bezos' "customer obsession" mantra, these people said. The first of Amazon's longstanding list of 14 leadership principles requires managers to focus on earning and keeping customer trust above all. Amazon often repeats a line from that principle: "Leaders start with the customer and work backwards."

One former Amazon search executive said: "We fought tooth and nail with those guys, because of course they wanted preferential treatment in search."

For years, A9 had operated independently from the retail operations, reporting to its own CEO. But the search team, in Silicon Valley about a two-hour flight from Seattle, now reports to retail chief Doug Herrington and his boss Jeff Wilke—effectively leaving search to answer to retail.

After the Journal's inquiries, Amazon took down its A9 website, which had stood for about a decade and a half. The site included the statement: "One of A9's tenets is that relevance is in the eye of the customer and we strive to get the best results for our users."

Mr. Herrington's retail team lobbied for the adjustment to Amazon's search algorithm that led to emphasizing profitability, some of the people familiar with the discussions said.

When a customer enters a search query for a product on Amazon, the system scours all listings for such an item and considers more than 100 variables—some Amazon engineers call them "features." These variables might include shipping speed, how highly buyers have ranked product listings and recent sales volumes of specific listings. The algorithm weighs those variables while calculating which listings to present the customer and in which order.

Nate Sutton, an Amazon associate general counsel, at a House Judiciary Subcommittee hearing on antitrust in July. Photo: Andrew Harrer/Bloomberg News

The algorithm had long placed a priority on variables such as unit sales—a proxy for popularity—and search-term relevance, because they tend to predict customer satisfaction. A listing's profitability to Amazon has never been one of these variables.

Profit metric

Amazon retail executives, especially those in its private-label business, wanted to add a new variable for what the company calls "contribution profit," considered a better measure of a product's profitability because it factors in non-fixed expenses such as shipping and advertising, leaving the amount left over to cover Amazon's fixed costs, said people familiar with the discussion.

Amazon's private-label products are designed to be more profitable than competing items, said people familiar with the business, because the company controls the manufacturing and distribution and cuts out intermediaries and marketing costs.

Amazon's lawyers rejected the overt addition of contribution profit into the algorithm, pointing to a €2.42 billion fine ($2.7 billion at the time) that Alphabet Inc.'s Google received in 2017 from European regulators who found it used its search engine to stack the deck in favor of its comparison-shopping service, said one of the people familiar with the discussions. Google has appealed the fine and has made changes to Google Shopping in response to the European Commission's order.

To assuage the lawyers' concerns, Amazon executives looked at ways to account for profitability without adding it directly to the algorithm. They turned to the metrics Amazon uses to test the algorithm's success in reaching certain business objectives, said the people who worked on the project.

When engineers test new variables in the algorithm, Amazon gauges the results against a handful of metrics. Among these metrics: unit sales of listings and the dollar value of orders for listings. Positive results for the metrics correlated with high customer satisfaction and helped determine the ranking of listings a search presented to the customer.

Now, engineers would need to consider another metric—improving profitability—said the people who worked on the project. Variables added to the algorithm would essentially become what one of these people called "proxies" for profit: The variables would correlate with improved profitability for Amazon, but an outside observer might not be able to tell that. The variables could also inherently be good for the customer.

[Chart: Money Flow. Amazon commands more than one-third of U.S. retail dollars spent online (share of 2018 online retail sales).]

For the algorithm to understand what was most profitable for Amazon, the engineers had to import data on contribution profit for all items sold, these people said. The laborious process meant extracting shipping information from Amazon warehouses to calculate contribution profit.

In an internal system called Weblab, A9 engineers tested proposed variables for the algorithm for weeks on a subset of Amazon shoppers and compared the impact on contribution profit, unit sales and a few other metrics against a control group, these people said. When comparing the results of the groups, profitability now appeared alongside other metrics on a display called the "dashboard."

Amazon's A9 team has since added new variables that have resulted in search results that scored higher on the profitability metric during testing, said a person involved in the effort, who declined to say what those new variables were. New variables would also have to improve Amazon's other metrics, such as unit sales.

A review committee that approves all additions to the algorithm has sent engineers back if their proposed variable produces search results with a lower score on the profitability metric, this person said. "You are making an incentive system for engineers to build features that directly or indirectly improve profitability," the person said. "And that's not a good thing."

An Amazon warehouse in Mexico in July. Photo: Carlos Jasso/Reuters

Amazon said it doesn't automatically shelve improvements that aren't profitable. It said, as an example, that it recently improved the discoverability of items that could be delivered the same day even though it hurt profitability.

Amazon's Ms. Newman said: "When we test any new features, including search features, we look at a number of metrics, including long term profitability, to see how these new features impact the customer experience and our business as any rational store would, but we do not make decisions based on that one metric."

In some ways, Amazon's broader shift from showing relevant search results is noticeable on the site. Last summer, it changed the default sorting option—without publicizing the move—to "featured" after ranking the search results for years by "relevance," according to a Journal analysis for this article of screenshots and postings by users online. Relevance is no longer an option in the small "sort by" drop-down button on the top right of the page.

Write to Dana Mattioli at [email protected]

Copyright ©2019 Dow Jones & Company, Inc. All Rights Reserved.




All Comments: [-] | anchor

simplecomplex(4206) 4 days ago [-]

Of course. Good for them. It's their fucking store!

> Amazon has adjusted its product-search system to more prominently feature listings that are more profitable for the company

Unthinkable. A store trying to make a profit selling things. How crazy.

bsamuels(4213) 4 days ago [-]

You might want to add some supporting information about why you feel that way instead of just sharing your opinion and dropping the mic.

berdon(4174) 4 days ago [-]

Is this only being downvoted because of tone?

I am curious if there actually are any laws relevant to this situation. As far as I can tell, this is exactly like Walmart or BigBoxCo pushing their own brands. It's not illegal and, generally, it's not even a bad thing for the consumer (in the short term).

It really feels like most of the people taking issue with this are doing so because of their personal sentiments rather than legality.

UserIsUnused(10000) 4 days ago [-]

It depends on whether it operates as a store or as a marketplace; the laws are different.

josefresco(3986) 4 days ago [-]

Side rant: Where is my Amazon branded phone case?!? This weekend I placed a half dozen searches for 'iphone 11 case' and was only presented with super shady, low quality vendors.

Floveme? Tendam? Donse? Spigen? and my favorite: Vapesoon.

Is it just too early for the legit brands to have a case ready? Or am I doing it wrong?

dehrmann(10000) 4 days ago [-]

This reminds me of CES. There were more phone case and screen protector manufacturers there than I knew existed.

ChrisLTD(3799) 4 days ago [-]

Spigen is a legit brand

johnqpub(10000) 4 days ago [-]

Spigen is a completely legit brand.

ptruesdell(10000) 4 days ago [-]

Can't speak for the others, but Spigen is a well known and generally well-trusted brand.

_coveredInBees(10000) 4 days ago [-]

I sympathize with your rant but wanted to point out that Spigen is in fact a pretty well known and recommended brand for phone cases. I suspect part of the reason for this is that a lot of Chinese manufacturers are able to get their hands on leaked prototypes / information regarding new iPhone models due to them being manufactured in China. This gives them a valuable headstart in getting phone cases to market so they can maximize sales in the first few weeks while a lot of the more reputable companies are busy building out their cases.

chihuahua(10000) 4 days ago [-]

I agree the brand names sound ridiculous, but I bought a Spigen phone case from Amazon for a Moto X4 and I'm very happy with it. Fits perfectly, looks fine, good access to buttons etc.

cptaj(10000) 3 days ago [-]

We definitely need platform legislation for the web.

After a certain size, they should be regulated as public utilities. So many companies completely depend on Amazon's platform for their business and they're all easy prey for the behemoth.

I did some work for a company that sells batteries on Amazon. A simple dispute got them suspended across the board. A decision made by a third-rate employee from god knows where, probably without even a cursory reading of the case, made in a split second, brought down a company with 10 years in the market and hundreds of employees.

No due process, no rights, nothing. You can't defend yourself, they just shut you down and then you have to beg for weeks to be let back in. After weeks and hundreds of thousands in losses, you have no legal recourse against a wrongful suspension.

You can make the argument that they could just set up their own online store for the batteries, and you're right, they can. But Amazon and eBay are so big that it's practically impossible to sell these things at scale without them. It's not a fair game, and this is but one of the issues.

They definitely need to be treated as public utilities after a certain number of users.

bongobongo(10000) 3 days ago [-]

You're describing a monopoly and the proper tool to deal with it is anti-trust, not entrenching it by pretending it's a public utility.

CriticalCathed(10000) 4 days ago [-]

>Amazon's lawyers rejected an initial proposal for how to add profit directly into the algorithm, saying it represented a change that could create trouble with antitrust regulators, one of the people familiar with the project said.

I wonder in what ways they tweaked it so that the lawyers approved this change. And I wonder if regulators will care that it skirts the letter of the law when the intention was to sneak past oversight by the FTC and the DOJ.

overkill28(10000) 4 days ago [-]

Later on in the article they explain that the initial approach was to include the profitability of an item directly in its metadata so that the algorithm could use that in its ranking criteria.

They changed their approach to instead measure how much profit Amazon makes on a given search. This allows them to analyze whether various changes to the search algorithm increase or decrease profit, and optimize for those that do.

It's a subtle distinction but it means that instead of explicitly promoting their products in search, they are instead making changes based on other product attributes that naturally boost the ranking of their products (and thus increase their profits).

It's kind of like a blind study: they're telling the engineers, 'we won't tell you which products are ours, but we'll see if you can figure it out based on these 100 other traits.'

deanCommie(10000) 4 days ago [-]

Did you finish reading the article before commenting? They explain pretty clearly.

stefek99(4148) 3 days ago [-]

Excuse me, what's the news here?

A company wants to maximize its profit.

If it wasn't that way, they would be acting illegally.

(publicly trading company must seek shareholders profit)

So what is the news once again?

diffeomorphism(10000) 3 days ago [-]

> If it wasn't that way, they would be acting illegally.

Common misconception.

> Corporate directors are not required to maximize shareholder value.

https://www.lawschool.cornell.edu/academics/clarke_business_...

tachyonbeam(10000) 4 days ago [-]

It's not surprising that, given this kind of power, they would use it. What's a little unsettling is how little concern they have for people selling on their platform. If they treat their vendors like shit, and compete with them directly, that will incentivise said vendors to look for alternative platforms. You might say 'who cares, there is no real competition for Amazon', but competition will come. What is Amazon going to do, are they planning to make their own version of every single product out there? I don't think that can scale.

imglorp(3850) 4 days ago [-]

The competition is not zero. Walmart and alibaba to name two, not aws scale but aspiring.

Also some alliances like Target + Shipt are nipping at their heels with same day delivery. https://www.target.com/c/shipt/-/N-t4bob

pacala(4193) 4 days ago [-]

They are betting that when competition comes, it'll be subject to the same ~greed~ fiduciary-duty forces. The market only has room for a handful of megacorps. They're all highly likely to implement skewed algorithms to promote their own wares. If you don't like it, go build your own [TM].

monitron(10000) 4 days ago [-]

I would compare this to a supermarket allocating shelf space to maximize its profits, either by placing its own brands front and center or by featuring brands that have made a deal for prime shelf placement. I don't see this as scandalous or even new.

Coincidentally, Amazon brand products are usually pretty good in my experience, so this might not be a terrible outcome for users who have already decided to come shop at Amazon's store.

dclusin(4069) 4 days ago [-]

It's also understood by all who pass through store aisles that the store is giving its own products prominence. Tweaking an algorithm that historically featured products based on users' purchasing behavior to instead favor its own products, without disclosing that it is doing so, is misleading.

hummusoiler(10000) 4 days ago [-]

That's peculiar. Here, the market's own products are placed at the bottom and top, whilst third-party brands get all the immediate-view space.

justinmchase(10000) 4 days ago [-]

It should be scandalous in those cases too. Having a company be both the distributor and producer and have a monopoly is problematic. This is sort of the problem with cable TV also, where companies like Comcast try to be a producer and distributor and have a monopoly. In the past we had unions fight against this sort of thing, but unions have been busted to hell in this country over the last few decades, to the point where even smart people don't understand why it's bad for a company like Amazon to do this; we don't remember why.

m_ke(3394) 4 days ago [-]

Supermarkets purchase the inventory from suppliers so they have an incentive to move everything that's in their stores. It costs amazon nothing to host 3rd party inventory so they get to prioritize whatever maximizes their profits, including screwing over their top sellers.

A better analogy would be a Walmart in a small town hosting a bunch of small businesses to showcase their products, then kicking out the top sellers and replacing them with a store-brand version. You could argue that those producers should just open their own store, but most people won't go out of their way to a different store to buy paper towels.

learc83(2750) 4 days ago [-]

Grocery stores don't operate as a 'marketplace' though. Grocery stores curate their offerings, vet vendors, deal with returns etc...

Amazon is more like shopping mall that decides to start operating their own stores and competing with their tenants.

bantunes(4080) 4 days ago [-]

> Coincidentally, Amazon brand products are usually pretty good in my experience, so this might not be a terrible outcome for users who have already decided to come shop at Amazon's store.

So it's fine they're shafting other vendors on their platform because their products are good?

bufferoverflow(3611) 3 days ago [-]

Or an OS manufacturer to preinstall their brand of the browser. Oh wait.

ryanmcbride(10000) 4 days ago [-]

EDIT: I didn't explain this well at all because I couldn't find the episode and haven't listened since it came out, I highly suggest someone better than me find the episode because it makes much more sense and has WAY more information than me.

There was a planet money episode about this exact comparison the other day. Basically it comes down to:

When a grocery store puts something on its shelves, the store itself bought it from the manufacturer. They have already made their money. On Amazon, manufacturers don't make money when the item is listed; they make it only if and when a customer buys the product.

When a grocery store makes its own novel product and puts it on the shelf, it's taking a risk. If they make those products, but they don't sell, they just lost a bunch of money. Or, if their product is a competing one with an existing product, they know that products like that can sell, but now they have to focus on outselling the original product. When Amazon makes its own product, most of the risk has been removed. They pretty much already know it sells well on their platform, and they have enough money that knocking 5 dollars off the price, making it prime recommended, and putting it at the top of the list will cause it to beat the competition 9 times out of 10.

I'm not saying any of this is fair or unfair, because I'm not an economist so I don't believe I have a full understanding of the situation.

I tried to find the exact episode but it seems I can never find them when I need them.

yalogin(3937) 3 days ago [-]

They do that with their content also. There is no way to see only Prime videos. They constantly push their paid content onto the main feed. On top of that, there is no way to restrict purchases or gate them with a password on smart TVs, so it's a bad experience. Of course Amazon doesn't much care about user experience.

janesvilleseo(4221) 3 days ago [-]

I have to enter a pin every time on my smart TVs. I thought every smart tv had that feature.

Pxtl(10000) 4 days ago [-]

Considering how shopping at Amazon now feels like shopping at an electronics bazaar in Singapore with giant bins of random knock-off products of suspicious quality, tweaking the algo to push name-brand options (even if it's their own name-brand) would be a welcome move to me as a buyer.

Obviously it's grossly unfair to their vendors, but from a strictly user-centric view it's an improvement.

Otherwise, Amazon feels like AliExpress with faster shipping and better English.

Certhas(10000) 4 days ago [-]

Still, they are the monopoly marketplace online. They should be doing _everything_ to avoid looking like they are abusing their monopoly position to muscle into other fields. The fact that they don't seem too bothered just seems to indicate to me that regulators are not prepared to act on them.

Really, the rule should simply be that if you operate a marketplace you can't sell on it. Full stop. Then we could simply be happy that Amazon is finally improving its search and/or the quality of its listings.

helpPeople(10000) 4 days ago [-]

I've seen Amazon prices go up while Walmart Online has been cheaper for household items. On speciality electronics, I don't think anyone would be surprised to know Amazon is more expensive.

It's an anecdote, sure, but back in 2012 we shopped only at Amazon. Now they are no longer king.

Scoundreller(4218) 3 days ago [-]

AliExpress: a massive company that somehow never bothered to observe a new user using their software.

Why the heck does the ordering window have 2 or 3 "Confirm" boxes that I have to press?

And if I haven't, why can't it just tell me what's not confirmed when I hit "Submit Order"?

jhallenworld(3625) 4 days ago [-]

They do care about the quality of their own products. I posted a negative review on an Amazon product, and they called me(!) so that I could further elaborate. It was very odd because they called within an hour of the review. At some point I must have given them my phone number.

This was a review for their TV Cube. I was trying to buy something with good voice control to allow a paralyzed person to browse YouTube, and TV Cube could not do it (maybe it will be better in the future..).

I suspect the only reason I got the call is that TV Cube is in active development.

29_29(4117) 4 days ago [-]

> Otherwise, Amazon feels like AliExpress with faster shipping and better English.

I canceled prime some time ago, because I realized I was getting significantly lower quality products than Walmart. Its become a junk store.

password1(10000) 4 days ago [-]

> Otherwise, Amazon feels like AliExpress with faster shipping and better English.

It literally is. All these products are cheap stuff that marketers find on AliExpress. They buy them in bulk, stamp a logo on them, import them into the US/EU, stock them in logistics centers and then sell them on Amazon. It's the same stuff, but it comes from an English-speaking seller and a warehouse in the US/EU.

It's exactly the same thing that Amazon does with its own product lines though (like Amazon Basics), so I'm not really rooting for their name-brands either.

seattlebarley(10000) 4 days ago [-]

Eh, I just don't agree with this at all. It's pretty easy to use filters in your search to find what you want.

tylerl(10000) 3 days ago [-]

Amazon: fast shipping, easy returns. That's literally the beginning and end of their entire value proposition. If any other company can fill that niche, they can have my money with no regrets.

paulcarroty(3488) 3 days ago [-]

Well, products sold by Amazon itself, not by third-party sellers, are mostly trustworthy.

AliExpress - that has always been Russian roulette.

mattmcknight(3392) 4 days ago [-]

I'm done with buying stuff from third party sellers on Amazon. If the seller is not amazon.com, I am not buying from Amazon. Fulfilled by Amazon is not good enough. Unfortunately, the only way to get a seller option is to pick an item department. I wonder why they would bury such a useful feature. It should be a box you can tick for the whole site.

momokoko(10000) 4 days ago [-]

At this point, anecdotally, it feels as though the top comment on almost every article I see on HN about Amazon is some variation of this complaint.

Has anyone done any analysis on the HN posting dataset to see if that happens to be the case?

And if that is the case, does anyone have any insight into why that might be?

snarf21(10000) 4 days ago [-]

I think the big change needed on Amazon is the 'Seller'. With their commingling of all products with the same SKU, 'seller' is an irrelevant term. If people could review actual sellers, it could put a dent in the knock-offs. I'm not sure if there are just enough people who can't tell the difference, or who don't want the trouble of returning items, to make this a profitable strategy for everyone.

TheSpiceIsLife(4130) 4 days ago [-]

Haha.

Here in Australia AliExpress feels like Amazon with faster shipping and more honesty.

Amazon now has an Australian entity, though I'm not sure how popular it is, maybe as (non-) popular as Amazon...

dannyr(946) 3 days ago [-]

In the long term, it's not user-centric at all.

Amazon pushing its own products drives 3rd-party resellers out of business and eventually leaves consumers with no other choices.

dmix(1368) 4 days ago [-]

Good thing people love Singaporean electronics bazaars.

gambiting(10000) 4 days ago [-]

(I can only guess you are American.) American experiences with Amazon are so different to mine over here in the UK it's incredible. I buy tonnes of different stuff from Amazon (multiple hundred orders last year) and I have never ever gotten anything counterfeit or had any issues. In fact, every time their 1-day delivery arrives late they just extend my Prime by a month, no questions asked. And yet every time I go on HN it's 100% negative, with stories about disreputable sellers selling fake stuff - this is just not happening outside the US for some reason.

MisterTea(10000) 4 days ago [-]

Amazon's reputation is pretty damaged in my eyes. I rarely use them for electronics anymore as you can't trust anything. Even at work we used Prime to buy certain items, and I stopped that.

Example: Last week two Symbol barcode scanners used in the ERP system died. I looked up the same model on Newegg and saw that, sold and shipped by Newegg, it was about $120. Amazon had multiple listings in the $60 range, and checking reviews I saw things like 'died in two weeks', 'FAKE', and 'died and returned to Symbol only for them to say it was a fake and refuse service or replacement'. How do you build trust with shenanigans like that? No thanks, Newegg got my money.

Amazon seems to be playing oblivious to all this mainly because people are STILL using them. So as long as they are making money they can afford to lose a few 'picky' customers here and there. The rest are happy to buy trash because they save a buck. Capitalism at its finest.

sokoloff(3938) 4 days ago [-]

It's odd to me to think that something which is user-centric could be "grossly unfair" to vendors.

If you view vendors on the platform as existing to serve the users (as I do), the contradiction evaporates. If you view users as existing to serve the vendors, then it's possible.

feketegy(4199) 4 days ago [-]

Shocking.

If you go into a candy store where the owner also makes their own brand of candy, what do you think will be the most prominent item on the shelf?

panopticon(10000) 3 days ago [-]

That's not really comparable. The candy store owner is still paying for the other brand of candy to collect dust.

Amazon isn't exposed to the same risks in a lot of product listings.

Animats(2071) 4 days ago [-]

It's too bad that Sears never really got into online. They were once the top catalog retailer, and known for consistent quality, good warranties, and boring products. Somehow they missed the Internet, retiring from catalog operations just as Internet shopping got going.

pessimizer(2099) 4 days ago [-]

Because Sears was a company in the process of being looted and trashed for at least the last couple of decades. I've had experiences in Sears of spending 15 minutes just looking for an employee. It had become a company so actively uninterested in revenue that it actually became difficult to buy things there. At least they kept the stores clean; it seems like after they bought K-Mart, K-Mart literally stopped mopping the floors.

falcrist(10000) 4 days ago [-]

I cannot for the life of me understand why Sears didn't become a major online retailer. They not only had a physical catalog system that could have been fairly easily converted into an online catalog, they had physical stores that could have been used as pickup points and warehouses for customers who don't want to wait for things to be mailed.

In the UK there's a company called Argos that worked ENTIRELY as a catalog store. Their physical locations were basically a warehouse with a lobby that had the catalogs and computers where you could look items up and order them. THEY should be a major competitor to Amazon... but according to my UK friends, they're not.

otakucode(10000) 4 days ago [-]

According to the book 'The Everything Store,' Amazon's search engine is a matter of internal contention. Years ago, when people noticed that Amazon's search engine was terrible (as it is), someone from another group in the company developed a new search system based on Elasticsearch and more modern technologies. He presented the new search to Bezos. But there was an existing team whose primary responsibility was the search functionality. And the man who led that team was one of Bezos' personal friends. That man was, apparently, petty and status-seeking, so pushed back against adopting the new search engine. Bezos proposed that there would be a contest between the old and new search engines. Judged by his friend, head of the current bad search engine team who didn't want the new search because it threatened his status. Predictably, the new search 'lost.'

senderista(4197) 4 days ago [-]

They even hired Udi fucking Manber to improve product search, with no apparent results. Like their hire of Jef Raskin: pearls before swine.

TheRealDunkirk(4219) 3 days ago [-]

This sort of thing makes me sick, and I'm familiar with the feeling because I've seen this several times in my career. But the older I get, the more a tiny kernel of respect grows in me for the kind of person who can parlay a modicum of technical understanding into an unassailable political position in a large company, and succeed -- at least in terms of money and influence -- despite the opportunity costs I've witnessed. You just kind of have to hand it to people like this. I guess. Maybe that's the only way I stay sane at this point.

goatinaboat(10000) 4 days ago [-]

Umm, good? I want genuine products with an assured supply chain, not any random counterfeit that gets commingled.

kenforthewin(3694) 4 days ago [-]

So the solution to counterfeits on Amazon is to only buy Amazon brands? I'd prefer they address their counterfeit problem directly.

Analemma_(2878) 4 days ago [-]

That creates a horribly perverse incentive to never fix the counterfeit problem, and should be illegal on that basis alone without even considering the anticompetitive aspects.

benologist(989) 4 days ago [-]

If you want genuine products lobby your government for criminal consequences for willfully ignoring counterfeit products. Then lobby your government for antitrust action so vendors don't have to worry about Amazon themselves making the duplicates when they get jealous of the profit margins on counterfeits. Genuine products will be much easier to find without Amazon clones and Amazon-supported counterfeits and an Amazon-empowered criminal black market intermingling with the real one.

jjohansson(4128) 4 days ago [-]

If you're a manufacturer without a strong brand, it's incredibly risky selling through Amazon. They will take your sales data to evaluate the ROI of building it themselves, and then undercut you.

Similar if you're a retailer.

This is why Shopify's new model is better for D2C and retail (but very difficult for them to pull off).

TazeTSchnitzel(2149) 4 days ago [-]

Also, a factory can produce a counterfeit of your product and sell it on Amazon on the same listing as your genuine product, with no way the customer can tell which they're buying.

grumpy8(10000) 4 days ago [-]

'Amazon Changed Search Algorithm in Ways That Boost Its Own Products', and the sun is hot

charlesism(3825) 4 days ago [-]

Read between the lines: 'Amazon Changed Search Algorithm in Ways That Lower Customer Satisfaction.' That's less obvious because Amazon traditionally has been a customer-focused company, and this is a step away from that.

PascLeRasc(1937) 3 days ago [-]

Could we take a break with these articles for a bit? I'm all for redistributing Jeff Bezos's money, and I'd be happy to have mine redistributed as well if that happened, but this and the Apple App Store algorithm news are so insignificant in the world. We're arguing about if online stores can organize their shelves how they want to while people are dying because they can't afford insulin or drink their tapwater. Yeah, you can care about two issues at once, but this isn't really an issue. I don't care to read the same comment about Microsoft with Internet Explorer every day. Maybe it's unfair to them but I really do not care, they clearly survived just fine.

CGamesPlay(4055) 3 days ago [-]

Honestly and without sarcasm, maybe just take a break from this site? If those are the kinds of issues that you want to find more about, I don't think this is the right venue. Yes, HN does dabble in politics sometimes but for the most part, this isn't a place to talk about the "big issues".

spike021(4200) 4 days ago [-]

Honestly, what's the big deal?

Google prioritizes certain websites/results when I enter queries too, it doesn't mean I'm required to click on them.

Whatever happened to people spending a few extra minutes looking for the result they want? If the first few aren't right for you, then move on. Unless you're spending an extra 30+ minutes because the search doesn't work at all, there's no issue here.

calibas(10000) 3 days ago [-]

If you have a product that's negatively affected by this, then this is kind of a big deal. If you look at the marketing data, most people don't like spending a dozen extra seconds, much less a few extra minutes, finding what they want.

bduerst(10000) 3 days ago [-]

The issue is that Amazon is prioritizing their own products/labels, not that prioritization just happens (which is basic search query functionality).

gundmc(10000) 3 days ago [-]

Google was fined several billion dollars by the EU for prioritizing their own products in search.

lleolin(10000) 3 days ago [-]

>Google prioritizes certain websites/results when I enter queries too, it doesn't mean I'm required to click on them.

I would like to call this 'an appeal to free will'. It's true that individuals can choose what they click, but there is also going to be a statistical reality of what people 'choose' to click more/most often.

Because of this, IMO the big deal is absolutely gargantuan. Google, for example, handles several billion searches per day. To your point about continuing to hunt for the right results, just adding a few seconds per search increases the aggregate time spent searching on a scale of centuries per day. I would argue it's a similar case with Amazon; the service is so large that minute changes can have tremendous impacts.

Insanity(3735) 3 days ago [-]

It's within their rights to do so and frankly a move most of us would make, no?

pb7(10000) 3 days ago [-]

Google got fined $3B for this exact thing on Google Shopping in the EU. 'Within their rights' is highly dependent on location.

pixelpoet(10000) 4 days ago [-]

ANTITRUST!

If Microsoft can get hammered for packaging Internet Explorer with Windows (and probably rightly so, given the market conditions at the time), there is a direct analogy to be made here.

Pick what you want to be: if you want to be the greatest goods index and shipment company in the world, fantastic. But you have to accept that you cannot sneak your own other products to the top using that power. In this case, spin off your Kindle etc. companies or have them broken up.

agumonkey(877) 4 days ago [-]

It's actually way worse than the MS monopoly. I don't think MS's impact on society was as hurtful as Amazon's is. I may be wrong, but I'm really not sure.

toasterlovin(10000) 4 days ago [-]

No, there is not a direct analogy because Amazon is not a monopoly. They aren't even the largest retailer.

emiliobumachar(4210) 4 days ago [-]

Tangential, but MS got hammered for a package of practices, including worse stuff (IMO, IIRC), such as exclusivity deals, i.e. hiking Windows prices or outright refusing to sell Windows to PC manufacturers unless they ditched Linux from all product lines.

For some reason, the browser bundling is what stuck to the public consciousness.

ReverseCold(4184) 3 days ago [-]

The only reason I still use Amazon is because it's so easy to buy things using Bitcoin/Ethereum on it. I can get giftcards from literally hundreds of vendors (some even at a discount), or use a service like Moon which makes the gift card buying fully transparent (I just click checkout and scan the qr code with my phone), and I can buy almost anything on Amazon.

Everywhere else requires me to convert coin into USD and then buy things, which sucks.

soVeryTired(10000) 3 days ago [-]

Aren't you effectively converting to USD by buying gift cards?

noego(3668) 4 days ago [-]

The article is disappointingly misleading and buries one of the key details:

'Amazon's lawyers rejected the overt addition of contribution profit into the algorithm...

They turned to the metrics Amazon uses to test the algorithm's success in reaching certain business objectives, said the people who worked on the project.

When engineers test new variables in the algorithm, Amazon gauges the results against a handful of metrics. Among these metrics: unit sales of listings and the dollar value of orders for listings. Positive results for the metrics correlated with high customer satisfaction and helped determine the ranking of listings a search presented to the customer.

Now, engineers would need to consider another metric—improving profitability—said the people who worked on the project. Variables added to the algorithm would essentially become what one of these people called "proxies" for profit: The variables would correlate with improved profitability for Amazon, but an outside observer might not be able to tell that. The variables could also inherently be good for the customer.'

TL;DR: some people in Amazon wanted to give a boost to Amazon products. The search team fought them vociferously and refused to budge. The lawyers came out against it as well. Amazon's internal A/B testing framework measures a number of metrics, including both revenue and profit, when determining whether a specific feature/change should be deployed.

The fact that the A/B testing framework measures the profit impact of any change, is hardly earth shattering. This is one of the core features that any A/B testing framework attempts to accomplish.

This also doesn't tell you anything about what the search algorithm is actually doing. An A/B testing framework can only help you evaluate the relative effectiveness of different algorithms. It doesn't actually create/influence the algorithm in any way. As WSJ themselves reported, Amazon's search algorithm does not take profitability as an input.

WSJ has done a fantastic job in unearthing this very interesting internal-debate that's happening in Amazon. But they have reported it in a way that is very misleading and gives laypeople the impression that they have a smoking gun. In reality, the only thing they have produced is the fact that Amazon's A/B testing framework is profit-aware.

throwawaysea(10000) 4 days ago [-]

Thank you. I was struggling to understand what the article was trying to tell me or why it was so lengthy. It feels like news has evolved to buzzword/sensationalist headline + enough content/data (even if irrelevant) to provide a sense of comprehensiveness/substance.

hbosch(3919) 4 days ago [-]

There was a post on HN a while back about Apple boosting its own apps in their App Store search. Likewise, I am sure that Google prefers to show results from its own companies (e.g. if I google the word 'spreadsheet', my first result is for Google Sheets). This is all the same thing, no?

Edit: I'll add that I'm not saying it isn't an anti-competitive practice, I'm sure it is. But I am saying that it's a bit silly to insinuate that stores don't already advertise for their own goods.

cglong(10000) 3 days ago [-]

It's worth noting Microsoft's own email offering is third on a Bing search for 'email': https://www.bing.com/search?q=email




(618) Stripe's new funding round values company at $35B

618 points about 14 hours ago by tempsy in 2663rd position

www.wsj.com | Estimated reading time – 3 minutes | comments | anchor

Stripe Inc. climbed closer to the top ranks of the highest-priced U.S. startups after a new fundraising round valued the financial-technology company at $35 billion.

Venture-capital firms Sequoia Capital, General Catalyst and Andreessen Horowitz were among the investors behind the $250 million investment, the company said Thursday. The $35 billion valuation, up about 50% from an early 2019 funding round, puts Stripe above Silicon Valley darlings Airbnb Inc. and Palantir Technologies Inc.

Stripe's technology allows internet companies and online marketplaces to accept credit cards for their goods and services and pay out money to the people and firms that sell on their platforms. It processes hundreds of billions of dollars in payments annually for millions of users, including consumer apps and websites such as Airbnb and The RealReal Inc. and makers of business software such as GitHub Inc. and Twilio Inc.

Investors view payments companies like Stripe as a way to get exposure to a basket of fast-growing public and private tech companies, since Stripe's revenues are tied to its customers' growth. The market for payments services is also expanding as more commerce moves away from physical stores and toward digital storefronts.

"Stripe is more than ever a bet on the internet as an economic engine," said Will Gaybrick, Stripe's chief product officer.

Founded in 2010, Stripe is middle-aged by Silicon Valley standards, but Mr. Gaybrick and Stripe president John Collison said it had no plans to go public. It has raised around $1.2 billion over the past nine years.

Still, a raft of younger startups, such as Checkout.com, are raising hundreds of millions of dollars in venture capital to challenge Stripe. Traditional payments processors, meanwhile, are selling themselves to larger financial institutions in a bid to bulk up their digital-payments offerings.

Some of those companies have had success picking off business from Stripe's customers. Dutch payments company Adyen NV said over the summer that it started processing some payments for delivery company Postmates Inc., a longtime Stripe user. Lyft Inc., one of Stripe's largest customers, disclosed in its IPO prospectus that it added an additional payments processor last year and may create its own payment products in an attempt to lower its costs.

Mr. Gaybrick said that the vast majority of users rely on Stripe to handle all of their payments, and it is adding more countries to its network to help businesses grow internationally. At a conference last week, Stripe announced it was available in eight new European markets.

Stripe also is using the data it collects from the payments it processes to build out its financial services. Earlier this month, it announced it would start issuing corporate credit cards with cash-back rewards and lending money to businesses that process payments through Stripe, using signals such as the percentage of sales coming from repeat customers to determine creditworthiness.

Write to Peter Rudegeair at [email protected]

Copyright ©2019 Dow Jones & Company, Inc. All Rights Reserved.




All Comments: [-] | anchor

emdowling(10000) about 12 hours ago [-]

You know you have a killer product when your customers will literally fly halfway around the world just to use it.

When I started my company (since acquired) in 2012, Stripe was so game changing I flew from Australia to the US to open a bank account so we could use it. While we could incorporate in the US online, we had to turn up in person at a bank to open an account.

The only alternatives in Australia at the time were PayPal, which had outrageous fees and a terrible experience for recurring SaaS billing, or a merchant account with a bank which was going to require a $30k deposit.

It cost me $750 to incorporate a US entity and $1250 for airfares and hotel (oh, to remember when SF was that affordable!). I stayed at Hotel Whitcomb, which conveniently has a Chase branch right next door, and one day later we had a US bank account and could turn on our Stripe integration. We didn't look back.

I love Stripe Atlas, because I intimately relate to the problem, and wish we had it in 2012.

edwinwee(2874) about 12 hours ago [-]

Ha! Glad you were able to find a way. We still think it could be easier, even with Atlas and Australia support now. Would you be up to chat more? [email protected]

thinkingkong(4081) about 14 hours ago [-]

Before we start piling on about whether or not it's really worth $35B, just keep in mind how private companies are valued. Someone paid a lot of money for a chunk of Stripe, so that's what it's "worth".

Personally I think Stripe is close to being worth - as another commenter put it - half of Goldman Sachs. The recent launches of loans, credit card issuing, etc. are all bets that put Stripe in an incredible position to build platforms for financial capabilities.

pmart123(4201) about 14 hours ago [-]

What about Stripe versus Adyen or Square?

rasz(10000) about 13 hours ago [-]

>Someone paid a lot of money for a chunk of Stripe so that's what it's "worth"

No, someone paid a lot of money in order to make even more using greater fool theory.

warent(2750) about 14 hours ago [-]

Stripe has made transactions for my SaaS business so painless. I recently learned that I have customers with different currencies and it's not even something I had to consider or account for. Yeah their API is a little bit... bloated for simple needs since they're covering for many different use-cases, but their superb documentation, clean dashboard UX, and relatively low fees really make the value offering huge.

Overall I'm really not surprised at the valuation. Stripe is one of the best products/services out there.

bluedevilzn(10000) about 13 hours ago [-]

>their API is a little bit... bloated for simple needs

I used Stripe about 7 years ago. I used it because their APIs were simple and PayPal etc. was complex and bloated.

It seems that this cycle repeats throughout b2b startups. Is there now an opportunity to create yet another simple lightweight API but one that doesn't cover all use-cases?

Despegar(2363) about 14 hours ago [-]

Why is it better than Adyen?

thaumasiotes(3713) about 13 hours ago [-]

> I recently learned that I have customers with different currencies and it's not even something I had to consider or account for.

If your customers are at all price-sensitive, you might want to give this a little more consideration. Banks and payment processors always offer 'We handle currency conversion for you! How convenient!', and they pretty much always do it by charging several times the normal conversion fee (in the form of converting at an extraordinarily unfavorable exchange rate).

I hate merchants who use those services; I have a credit card that will pay you in any currency and only charge me a 1% conversion fee. Please charge me in your currency, not my currency.
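To put rough numbers on the markup (all rates below are made up for illustration):

```python
# Illustrative only: merchant-side conversion at a padded rate versus paying
# in the merchant's currency and letting a 1%-fee card do the conversion.
mid_market_usd_per_eur = 1.10   # assumed mid-market rate
padded_usd_per_eur     = 1.14   # assumed rate a merchant-side converter applies

price_eur = 100.0
merchant_side = price_eur * padded_usd_per_eur             # charged in my currency
card_side     = price_eur * mid_market_usd_per_eur * 1.01  # charged in yours

print(f"merchant-side conversion: ${merchant_side:.2f}")   # $114.00
print(f"card-side conversion:     ${card_side:.2f}")       # $111.10
```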

edwinwee(2874) about 13 hours ago [-]

We do want to optimize for those simple needs. Which parts do you think are bloated? Would love to hear how we can de-bloat. (Feel free to email me at [email protected] too.)

pc(572) about 13 hours ago [-]

> Yeah their API is a little bit... bloated for simple needs since they're covering for many different use-cases

You're right. We're fixing this. Stay tuned.

munk-a(4119) about 13 hours ago [-]

Yea, I'm pretty cool with this IPO, as the company has grown into quite a good niche while also putting pressure on MasterCard/Visa to clean up their monopoly-softened act. I hope they find good success in the long term!

Edit: Oh please ignore my blindness, I didn't realize this was just a pure valuation announcement.

spencerwgreene(4217) about 13 hours ago [-]

> Founded in 2010, Stripe is middle-aged by Silicon Valley standards, but Mr. Gaybrick and Stripe president John Collison said it had no plans to go public.

Why doesn't Stripe want to become a public company?

soamv(3844) about 13 hours ago [-]

It's a carefully worded answer. No plans != doesn't want to. And also, I'd read 'no plans' as 'no plans that we're gonna tell you about right this minute'.

seansmccullough(10000) about 13 hours ago [-]

The general consensus is that a recession is going to happen in the next year or two, and the next good time to IPO won't be for several years after that.

tempsy(2663) about 13 hours ago [-]

Incentivizes short-term thinking?

My main question then is how do you create liquidity for employees? Unclear if these rounds include secondary offerings for employees.

servercobra(10000) about 13 hours ago [-]

Why would you want to become a public company? Being public means more complex accounting, transparency, extra legal fees, focus on quarter-to-quarter profits, etc. Other than liquidity (especially for founders/VCs), I don't see much of a reason to and a lot of reasons not to.

erikpukinskis(3077) about 13 hours ago [-]

The point of an IPO is to A) create liquidity, and B) divest some of the risk to a bigger pool of investors. If you are cash flow positive, and have only long term investors, there's no real monetary benefit. You're giving away your dividends to other people.

It's a question of whether you want to discount your own risk or hold it. IPOing allows you to price your risk and sell some of it off. But if you're a long term investor there's a good chance you value your risk higher than the market, which means you'll lose money in the long term by selling it.

It also depends on your other investment opportunities. If you have other places to invest, then your money has a high time value and you're paying more "interest", so-to-speak, on the risk. If you don't have anywhere better to park your money then the time value is very low and holding that risk is cheap.

There are other factors as well, like government limits on buying other assets, that could make a private investor want to hold.

hkarthik(3323) about 11 hours ago [-]

Stripe is trying to raise capital for a capital-intensive endeavor: Stripe Capital.

An IPO would have been one way to fund it, but likely would have involved a lot of financial scrutiny along the way, which would probably distract them from the main point of raising the funds.

A bold move which will hopefully work out for them and not overvalue the company before they attempt to go public later on.

puranjay(3888) about 12 hours ago [-]

Anybody who doubts whether Stripe is worth $35B just needs to read this comments section

I don't think I've seen a unicorn get this much love from developers

hobofan(4221) 33 minutes ago [-]

Developer love isn't everything. There are some projects on here that get just as much dev love but are not even able to fund a single person. There are also quite a few products I've used that have an amazing API but an otherwise awful product.

Yes, dev love is a big part of what allowed Stripe to become Stripe, but it's probably not a good indicator of whether Stripe is worth $1B or $10B or $35B.

GeneralTspoon(4209) about 12 hours ago [-]

I quite like Stripe - but them forcing the recent switch to Checkout V2 has me a bit miffed.

Previously it was super simple to add dynamic card payments to a site - just drop the JS in, make a call to your server and then call Stripe from your server. Done. And it's all done in a popup on the same page (so no context switching UX for the user).

Now it seems they've killed the simplicity of that in favour of a more PayPal-like experience - where the user gets pushed out to the Stripe website to checkout. And it becomes more complex to implement too - due to everything being async and webhook based. At this point I think Paypal might actually be easier to implement for a certain class of simple one-off payments (although their sandbox servers are cripplingly slow).
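For concreteness, the new server-driven flow looks roughly like this - a minimal sketch against Stripe's Python library as it stood in 2019, with placeholder keys and URLs:

```python
# Minimal sketch of the Checkout V2 flow described above (placeholder values).
import stripe

stripe.api_key = "sk_test_..."  # placeholder secret key

session = stripe.checkout.Session.create(
    payment_method_types=["card"],
    line_items=[{
        "name": "One-off payment",
        "amount": 2000,       # $20.00, in cents
        "currency": "usd",
        "quantity": 1,
    }],
    success_url="https://example.com/success",
    cancel_url="https://example.com/cancel",
)

# The browser is then redirected to Stripe-hosted Checkout with session.id
# (stripe.redirectToCheckout on the client), and fulfillment happens later,
# when the checkout.session.completed webhook arrives -- the extra moving
# parts lamented above.
print(session.id)
```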

I'm a little bit skeptical of this change because I think it's partially a strategic play by Stripe (with the smokescreen of SCA-compliance + Apple Pay support) to become more of a user-facing online 'wallet', rather than just a behind-the-scenes payment processor.

edwinwee(2874) about 11 hours ago [-]

A wallet isn't the intent. It's almost the opposite!

If you remember the previous version, your customers used their saved card within Checkout. The new version of Checkout lets your customers use whatever payment method is easiest for them, and wherever it is. That's why we're big fans of things like Apple Pay (some have seen a conversion increase of 250%!), local bank payments in Europe, or even saved cards in Chrome (rolling out now).

From its inception, Checkout was designed to be the fastest way to accept payments, and I think it's now faster (and even faster for customers). You can just drop in a single line of code—and that code doesn't have to be server-side. :)

mathattack(464) about 14 hours ago [-]

Great company but it's too hard to infer value from such a small investment.

amelius(879) about 12 hours ago [-]

Also I see little true (high-tech) innovation with the 'SV Elite'.

It seems that people are more interested in cornering markets than in creating exciting new things. The valuation doesn't surprise me because that's how you corner markets, by first building a huge pile of money.

danesparza(4184) about 13 hours ago [-]

Not really. Some people infer value without any investment -- only based on P&L and year-over-year growth. (Stripe is consistently making money, after all)

thorwasdfasdf(10000) about 12 hours ago [-]

Before Stripe, I used PayPal and it was sooo painful! There are so many terrible APIs out there that needlessly waste so much time.

Then I found Stripe. What a treat. Everything was documented, and the documentation itself was both readable and didn't go in endless circles like the PayPal ones. The error messages were helpful. And every API endpoint had an example, which is hugely helpful: HUGELY! All the other API developers out there can learn from this: you can't overstate the value of having a working example. Not only that, but testing is much easier in Stripe, and there's no need for a bunch of separate accounts and sandboxes that don't make sense.

Oh, and did I mention that the identifiers describe themselves: the first 3 or 4 letters of an ID say what it is (subscription, etc.). Genius! It's little things like this which make Stripe stand out orders of magnitude above the rest of the APIs in the world.
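A tiny illustration of why those prefixes help (the prefixes shown are real Stripe conventions such as 'sub_' and 'cus_'; the lookup code itself is just a sketch):

```python
# Stripe-style IDs carry their object type in the prefix, so a stray ID in a
# log line is self-describing. This prefix table is a small, non-exhaustive sample.
PREFIXES = {
    "cus_": "customer",
    "sub_": "subscription",
    "ch_": "charge",
    "in_": "invoice",
}

def object_type(object_id):
    for prefix, kind in PREFIXES.items():
        if object_id.startswith(prefix):
            return kind
    return "unknown"

print(object_type("sub_FkT2abc"))  # -> subscription
```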

I hope you guys rule the marketplace for payment processing and wish you all the success you deserve.

Stripe's success in payment processing sends a clear signal to the PayPals and API developers of the world: if you're producing an API that sucks, it creates an opportunity for someone else to come eat your lunch. :)

adventured(104) about 12 hours ago [-]

> Before Stripe, I used PayPal and it was sooo painful!

Almost no matter what Stripe does from here, the fact that they disrupted the PayPal monster (and the rest) will always reserve them a place in my developer heart. It really was an atrocious experience; there's pre-Stripe and post-Stripe.

enahs-sf(4199) about 11 hours ago [-]

I definitely consider stripe to be the canonical example when it comes to both API and documentation design. Really an excellent benchmark and point of reference for anyone who's making a published API.

aloknnikhil(4195) about 10 hours ago [-]

I know Stripe is in beta in India, but I don't see any other alternative better than PayPal. Or is there something?

theturtletalks(4217) about 11 hours ago [-]

PayPal also came out and said they will no longer refund seller fees when a customer is refunded. Before, they used to keep just the $0.30, but now they will keep the 2.9% as well. All the more reason to switch to a new processor like Stripe.

Teichopsia(4220) about 11 hours ago [-]

Stripe is one of many services I would like to use but can't because they simply don't work in my country. I understand it can get somewhat complicated due to legalities I probably don't understand (or maybe the market is too small to even bother). But if your Navy can go through the country I live in, could you folks (in general and not directed at anyone) please try a little harder?

edwinwee(2874) about 10 hours ago [-]

Where do you live? Our roadmap—literally—is quickly expanding to many more countries (we just launched in 8 new countries last week). And this new funding round will be put to use for that. :)

tempsy(2663) about 13 hours ago [-]

I wonder how Square missed the boat so hard with a developer payment API. I know they have one, but of course never hear about anyone using it.

icelancer(4072) about 8 hours ago [-]

I have no idea. I kept hammering them on this for years until Stripe came and ate all their lunch. And only THEN did Square release an API.

the-trev-dev(10000) about 1 hour ago [-]

We implemented Stripe as an add-on a little more than 6 years ago and left it mostly untouched. Now, after a little TLC, we facilitate $100 million a year through Stripe. Our next hurdle is trying to integrate Stripe Terminal for thousands of our customers, but $300 a pop is a tough sell.

edwinwee(2874) 42 minutes ago [-]

For huge Terminal device orders like that, we can help with a volume discount. Just email in to [email protected] with how many you need (feel free to CC me at [email protected]).

(BTW, if you didn't see, we have a $59 device too. We're also working on supporting more.)

toptal(4222) about 10 hours ago [-]

Adyen seems to have higher revenue, a faster growth rate, and stronger overall financials. By nearly double.

They're at $25b.

How is $35b justified?

oli5679(2516) about 10 hours ago [-]

The culture and product are exceptional. In the long run I think they will outcompete rivals.

ganitarashid(10000) about 13 hours ago [-]

No way that's worth as much as Airbnb

BenoitEssiambre(3957) about 13 hours ago [-]

What do you mean? They look like they're on a fast path to disrupt like all of banking! The banking industry has to be bigger than the bed and breakfast advertising industry.

yangcheng(10000) about 10 hours ago [-]

agree. Airbnb is overvalued.

pc(572) about 13 hours ago [-]

[Stripe cofounder]

Thanks to everyone here who took a chance on us in the beginning and shared helpful feedback over the years! How to serve startups/developers more effectively at scale is still the main thrust of our product focus. We've fixed and improved a lot of things since we launched here in 2011, but we also still have a lot of work to do. (Both 'obvious things we want to fix' and 'new functionality we want to build'.) I always appreciate hearing from HN users (even if I don't always have time to respond): [email protected]

For anyone thinking about what they should work on: I started building developer tools when I was 15 and 'tools for creation' is still, IMO, one of the most interesting areas to work in.

samstave(3779) about 2 hours ago [-]

I'm going to ask a favor on behalf of a multi-billion-dollar industry: please figure out how to service cannabis companies.

I build cannabis tech.

It's really stressful when you're doing over $1 million in sales in a month and it's literally all freaking cash, and we have to spend hours, even with a counting machine, counting $20 bills.

Please tell me what your plan is to support legal cannabis.

Thanks

rglover(2779) about 13 hours ago [-]

Well deserved. Stripe is and has been best in class and best in industry for years.

czbond(4189) about 12 hours ago [-]

What was your MVP, such that you were charging cards within a few weeks of starting?

hbbio(3494) about 11 hours ago [-]

Thanks @pc for the last words!

I built opalang.org many years ago - introducing a mix of functional programming and strong static typing for the web, fixing the impedance mismatch between client/server/db, etc. - back in 2009. Although we were not able to make it a big success, I'm still spending all my time in this area and am convinced that there is a lot to fix.

andygcook(1322) about 13 hours ago [-]

Wanted to hop in and say thank you for starting Stripe. At my first startup, we had to integrate with an old merchant processor for payments and it was a nightmare. With my current startup, we were able to set up Stripe in the first week of operations and take payments seamlessly without too much hassle. Freeing us from thinking about how to take payments allows us to focus on our product and building out our company. I'd imagine this is a similar story for thousands of other startups too. Hats off to you, your brother, and all the Stripes.

alanmeaney(4092) about 11 hours ago [-]

Patrick, how do you approach reconciliation at Stripe?

zelly(10000) about 13 hours ago [-]

one of the few that actually deserves it. thank you.

ravenstine(10000) about 13 hours ago [-]

Besides your product, I thank you all at Stripe for setting an example of what real API/SDK documentation is.

pitchups(3801) about 12 hours ago [-]

Kudos to Stripe on an amazing journey! Interesting to revisit the original HN submission of Stripe's launch:

https://news.ycombinator.com/item?id=3053883

josh33(4206) about 12 hours ago [-]

Just want to say your Connect product is INCREDIBLE. Our company couldn't function (at least not well and at scale) without it!

nailer(414) about 7 hours ago [-]

Hi Pat. Do you have archives of the original /dev/payments around anywhere? The internet doesn't seem to have any and it would be great to see what Stripe version 0 looked like.

markh(4210) about 12 hours ago [-]

Patrick, the way Stripe continues to engage with Hacker News and remains true to its roots is an inspiration. Congratulations!

hartator(3671) about 13 hours ago [-]

Congrats on Stripe. Producing and marketing for developers first is indeed a fascinating topic.

WDimi(10000) about 9 hours ago [-]

Never had the chance to use Stripe, however back in the day I played the Stripe CTF. The best and most fulfilling 8 hours without a single break.

Thank you for the t-shirt and the most enjoyable ctf I had a chance to play!

dmarlow(4181) about 13 hours ago [-]

Thanks for all that you, John and Stripe have done for startups. Question for you, how will you continue to remain focused on helping startups if/when you go public and have shareholders to appease?

EpicEng(3973) about 7 hours ago [-]

I've always heard Stripe is a great place to work as well. Have been considering applying for one of your remote positions. Congrats and keep it up!

mattmar96(10000) about 13 hours ago [-]

Just listened to the How I Built This with you and your brother. Thanks for sharing your story there. Big fan of Stripe!

colemorrison(3909) about 13 hours ago [-]

Congratulations! Stripe has consistently made my life as a developer and entrepreneur significantly easier. I still remember that feeling of relief some 5-6 years ago thinking, 'Yes! I don't have to use PayPal's API!'

rblion(488) about 12 hours ago [-]

I respect the scale of your ambition and the suite of services/products you guys offer to realize it.

I love the copy on Stripe's website. Who wrote it?

Godspeed.

antihero(4038) about 10 hours ago [-]

Please can you make your update-PIN flow a bit different to your view-PIN flow, ta? If you are developing an app, it's nice to be able to confirm that someone has got their OTC correct before they move on to setting their PIN - so perhaps give us an ephemeral key for updating the PIN that we can use to do this, as opposed to it being a one-step process.

jiveturkey(4177) about 13 hours ago [-]

> [Chief Product Officer] Mr. Gaybrick and Stripe president John Collison said it had no plans to go public.

How's that possible? At this valuation they aren't looking at acquisition, right? VC aren't investing to the tune of $250mm (in this round alone) without an exit strategy, right?

What exit strategy if not IPO?

ceejayoz(2116) about 12 hours ago [-]

They're not ruling out an IPO in the future. They just don't have plans to do it right now.

I have no plans to buy ice cream, but that doesn't mean I'll never buy ice cream. In fact, it's highly likely I will buy ice cream at some point.

kchoudhu(4217) about 14 hours ago [-]

Stripe is cool -- I use it every day -- but is it really worth, oh, half a Goldman Sachs?

I dunno man.

mylons(10000) about 13 hours ago [-]

How often do you use Goldman Sachs?

Stevvo(10000) about 13 hours ago [-]

PayPal is a $100 billion company. Stripe has roughly 1/3 of PayPal's market share.

Stripe is the better product.

tempsy(2663) about 14 hours ago [-]

Square is $25b and was trading at above $40b at some point within the last year.

mathattack(464) about 14 hours ago [-]

A couple thoughts on why this isn't a fair comparison:

- Goldman is profitable but the employees extract most of it via compensation.

- Goldman has to re-win their business every year. Very little recurring revenue, and they are one recession away from imploding.

- You can't really compute valuation on a very small investment. It's not like the entire float is trading, and the small investment (relative to valuation) can have all kinds of liquidity preferences and special rights.

rolltiide(10000) about 12 hours ago [-]

Privately traded companies and publicly traded companies use different terminology for the same things.

In the private markets, 'valuation' is a term that just means 'someone bought a couple of shares at this arbitrary price; here is the value of the other millions of shares at that same arbitrary price'. Although the effects on all shareholders' balance sheets are real and useful, it is mostly broadcast for marketing and hype.

In the public markets, 'market cap' is the same term.

And finally, older public companies have low price-to-earnings ratios because the hype has fizzled and they are predictable. It doesn't mean anything. It's not enough input alone for you to decide what the rest of the market will do.

adamqureshi(3389) about 13 hours ago [-]

I use Stripe as my payment processor for my one-man shop. They are great. Stupid simple. They even offered me a cash advance. All I need now is for Stripe to be my bank, so I can keep all my money in there and pay my business bills from it. A one-stop shop to run my business would be great! What would be even more useful to me than lending me money is if they helped me build my v2 and took a slice of my business. Finding engineers is hard and very expensive for a small online business like mine.

buildawesome(10000) about 13 hours ago [-]

If Stripe gave you capital against your future receivables, what do you feel you'd need in order to go hire more engineers?

Xixi(3754) about 13 hours ago [-]

I remember when we launched our Japanese tea service [1]: we wanted to switch to Stripe so badly that I was emailing them at the very least once a month, until we made it into their beta program. I'm not sure the youngest remember what the payment landscape was like before Stripe, but it wasn't great. Stripe lifted the whole industry up from mediocrity.

Around 10 years ago I had the pleasure of setting up payments with a French bank: instead of a web API we had to use an opaque binary installed on our server. In terms of setup it took many weeks and several in person meetings to get approved. There was a (big) setup fee and a (not so small) monthly subscription. And then of course transaction fees.

In Japan, for startups targeting customers abroad, it was even worse: only PayPal supported payments in anything other than JPY! [2] Interestingly, it seems that with Stripe Japan everything coming from outside of Japan goes through USD. I have many customers being charged in EUR, and on our end we of course get JPY. So I would expect an EUR/JPY transaction to occur in the middle, yet oftentimes the USD/EUR exchange rate ends up leaking onto our customer receipts... I wonder how it works behind the scenes...

[1] https://tomotcha.com for those interested.

[2] Maybe banks were offering this service directly, but for what was only an experiment at the time, I was expecting the setup cost to be prohibitive (at least in time, if not in money).

eps(3977) about 9 hours ago [-]

> I have many customers being charged in EUR, and on our end we of course get JPY.

We charge in USD, we pay for services in USD, but with Stripe we can get payouts only in our local currency. This leads to us losing twice on pointless currency conversions, to the tune of 10-15% of the volume. It's been an ongoing headache for several years now, and the only response from Stripe to all our requests to fix this has been that 'it will be passed to the respective team.'

aidos(3750) about 12 hours ago [-]

"Wasn't great" is an understatement! I dealt with it as a developer both in New Zealand and the Uk and it was outright terrible. Oh, you didn't apply for a merchant account 6 weeks ago...yeah, you're not gonna go live this month. You just go right ahead and book in a meeting with you bank manager and get down on your knees to plea for an account.

edwinwee(2874) about 12 hours ago [-]

The rates themselves show on your receipts? Are you using some sort of plug-in? Could you forward me a receipt at [email protected]?

For the conversion itself, we automatically convert from whatever currency you've accepted the payment in, then send it to your JPY bank account. The conversion rate is the average of whatever people are buying and selling the currency at that hour, but some financial institutions add a small markup on top of that along the way.

huac(2748) about 12 hours ago [-]

My guess is that Stripe has a service internally which predicts 'how much of each currency am I going to have, and how much of each currency do I need to pay out at the end of the day?' Then, they balance whatever currencies they can internally, and for the rest, convert to USD, before paying out to customers.

The reason is that Stripe wants to mitigate exposure to non-USD price fluctuations, e.g. if the EUR dives against the dollar. Many companies do (or should do) some version of this rebalancing; e.g. FB cited on an earnings call last year that they lost a large amount due to forex fluctuations.
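Sketching that guess out (entirely hypothetical; nothing here is based on Stripe internals):

```python
# Hypothetical end-of-day netting: offset charge inflows against merchant
# payouts per currency, and only convert the residual through USD.
inflows = {"EUR": 1_000_000, "JPY": 80_000_000}   # collected from charges
payouts = {"EUR": 900_000, "JPY": 95_000_000}     # owed to merchants

for currency in inflows:
    residual = inflows[currency] - payouts[currency]
    if residual > 0:
        print(f"sell {residual} {currency} for USD")   # surplus: convert out
    elif residual < 0:
        print(f"buy {-residual} {currency} with USD")  # shortfall: convert in
```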

harryh(2191) about 14 hours ago [-]

Why is Stripe still private?

neom(1895) about 14 hours ago [-]

It would probably be safest to stay private and ride out the weather, the public markets are not exactly fun right now.

29_29(4117) about 14 hours ago [-]

So VCs can capture every bit of value before its put on the public market.

DeonPenny(10000) about 14 hours ago [-]

Because it's easier to be private. Look at what Wall Street is doing to Tesla and ask whether they wish they were still private or not.

babl-yc(10000) about 14 hours ago [-]

There are two main incentives to go public: raising money and liquidity.

They don't need to tap public markets to raise money. And investors are likely OK holding off on liquidity if the valuation continues to rise.

Going public typically requires a significant amount of effort across the company, which can reduce focus on other initiatives key to growing the business.

danesparza(4184) about 14 hours ago [-]

Because they are wildly profitable already without outside capital investment. Also: much less paperwork to do.

dheelus(10000) about 14 hours ago [-]

Why should they be public?

jedberg(2257) about 11 hours ago [-]

I'm curious as to why they chose to raise $250M via VC instead of raising in the public markets? Of course there are trade offs and advantages to each, I'm just curious as to their calculus.

somebodythere(10000) about 8 hours ago [-]

Can get a higher multiple from VCs probably.

guanzo(10000) about 11 hours ago [-]

Please allow international, independent charges/transfers. This is a huge pain point in our Stripe experience.

edwinwee(2874) about 10 hours ago [-]

We announced a beta for this last week! You can send money to 45 countries. (We're turning this on for businesses in the US first, and we're working on more countries soon.) https://stripe.com/connect/payouts#request-invite





Historical Discussions: Inkscape 1.0 Beta 1 (September 18, 2019: 587 points)

(596) Inkscape 1.0 Beta 1

596 points 2 days ago by nkoren in 1762nd position

inkscape.org | Estimated reading time – 3 minutes | comments | anchor

Inkscape 1.0 beta1 available for testing

Sept. 8, 2019, 9:59 p.m.

The Inkscape project has released a first beta version of the upcoming and much-awaited Inkscape 1.0!

After releasing two less visible alpha versions this year, in mid-January and mid-June (and one short-lived beta version), Inkscape is now ready for extensive testing and subsequent bug-fixing. The most notable changes to watch out for are:

The new Fillet/Chamfer Live Path Effect
Changing themes in Inkscape 1.0beta1
Non-destructive Boolean operations with new Live Path Effect

Read the draft release notes for Inkscape 1.0, which list more than 100 major and minor improvements and bug fixes since 0.92.4, in the Inkscape Wiki. Before the final release, the project plans to create at least one other Beta version. Translations and documentation still need to be updated. A list of known issues to be worked on before 1.0 can be found here.

Please test away and report your findings at https://inkscape.org/report!

We are especially interested in:

  • problems with texts in Inkscape files from older Inkscape versions
  • unknown crashes
  • extensions not doing what they should be doing
  • things that worked in 0.92.4, but are no longer working in the beta version

If you are using Inkscape on the command line, please test the new functionality and let us know if any issues come up for you.

If you maintain a custom extension for Inkscape, please test it, and update it in time to be compatible with 1.0, so your users will be able to update their Inkscape installation together with your extension.

And lastly, if you are fluent in graphics lingo, in both English and another language, please consider helping your favorite vector graphics editor by updating translations for your language.

Download Inkscape 1.0 beta1 from the Inkscape website for your operating system (Linux, Windows, macOS).




All Comments: [-] | anchor

amelius(879) 2 days ago [-]

One thing I still miss in Inkscape is calligraphic strokes, which are very useful for creating a professional-looking cartoon style.

EDIT: Calligraphy is available, but only hand-drawn. What I meant is calligraphy applied to a path, and the ability to transform the resulting stroke back into a path. These are powerful operations that are available in commercial offerings but not yet in Inkscape, afaik.

yarrel(10000) 1 day ago [-]

Are they patented?

pbhjpbhj(4000) 2 days ago [-]

Slight aside, I don't see anything in the release notes about OCAL (Open ClipArt Library): is it still integrated?

Reason I'm asking is because there are links between the projects and OCAL has been offline since April [1].

OCAL doesn't seem to be coming back; their official line is that they are handling a DDoS... if it's no longer included, I'd conclude that OCAL has probably expired.

There was a death of an associated dev, I believe.

1 - https://alicious.com/openclipart-ddos-offline/ my blog post on the issue.

jarek-foksa(2878) 2 days ago [-]

It looks like there is currently only one person in charge of OCAL and he works on it for free, which would explain why it takes so long to bring it back.

It's also sad to see that Pixabay has basically copied all assets from OpenClipart and republished them under a more restrictive license rather than try to cooperate and support them.

jmiskovic(10000) 2 days ago [-]

Non-destructive boolean operations! This makes it so much easier to reuse shapes. Great for obscured features, surface details, shadows, reflections... Too bad they are hidden away in live effects; I will still (ab)use them to no end.

jononor(10000) 2 days ago [-]

Yay! I'm always missing these, since my main tool is parametric CAD (FreeCAD). Using Inkscape is nice when the output is more visual in nature.

hughes(4189) 2 days ago [-]

Oh wow it's finally no longer using XQuartz! This is great!

m-p-3(10000) 1 day ago [-]

Ok I gotta upgrade now :o

cpach(2829) 2 days ago [-]

That's awesome!

For those wondering why this is significant, this means that it will run as a native application on macOS. Really neat!

ris(3593) 1 day ago [-]

This is probably the biggest news for Inkscape in a while. I can't count the number of times I've not been able to recommend it to Mac-using colleagues who don't have an Adobe license.

yuchi(4064) 2 days ago [-]

I'm overly excited for this fact!! I've been an avid user of Inkscape for some time, but abandoned it because the UI was unbearable. So so so happy

vkaku(4174) 2 days ago [-]

I don't know if they'll be able to get GTK 3 into 1.0, but that would be awesome. The fact that the new builds don't need an X server kind of makes it awesome already!

floatboth(10000) 1 day ago [-]

1.0 beta is GTK 3, and doesn't need X because of that :)

romwell(10000) 2 days ago [-]

My go-to graphics packages are still Gimp for raster / Inkscape for vector graphics.

Started using Inkscape in 2007 to illustrate a math paper, and been using it ever since whenever I needed to TeX something up.

acidburnNSA(3573) 2 days ago [-]

Me too. I only recently came across TikZ, which is an even more epic way to TeX up fancy diagrams (you 'program' them). Still use Inkscape for most stuff though.

hevsuit(10000) 1 day ago [-]

The LaTeX plugins really make things interesting. The LaTeXText package [1] allows formula rendering and editing from within the Inkscape canvas.

For tidy circuit diagrams, the CircuitSymbols plugin [2] produces exquisite results by hooking into LaTeX's circuitikz package and dumping the rendered result on the canvas. Typically I generate a bunch of circuit primitives and then connect them afterwards using snaps and the line tool.

[1] https://inkscape.org/~seebk/%E2%98%85latextext-render-latex-...

[2] https://inkscape.org/pt-br/~fsmMLK/%E2%98%85circuitsymbols

edit: wording

stupidcar(4193) 2 days ago [-]

Why does Inkscape's UI feel so... sloppy? I mean, Blender is a cross-platform, open-source design tool, and it manages to have a tight, professional looking UI. So how does Inkscape's UI still contrive to look like the first Java AWT application I wrote in the 90s?

I know it's just a surface thing, and there's a lot of great functionality there. But it's still not exactly a sight that helps inspire you to create beautiful content using it.

abcpassword(10000) 2 days ago [-]

Like most open-source end-user software, the open-source contributions are strictly in development. Good software needs a consistent design vision, for which I've found exactly zero open-source examples.

This is exactly the opposite of "a good FLOSS project." It's comically bad compared to even the discount competitors (Sketch, Affinity).

ris(3593) 1 day ago [-]

It's fun how fashions change - for over a decade I listened to people decry how 'unintuitive' and 'cryptic' Blender's interface and approach were (and I do realize that Blender's interface has changed over the years, but not that much) and praise tools like Inkscape for following the 'user friendly' approach and straightforward paradigms.

(I have spent quite a long time using both and appreciate each's merits)

mjfisher(10000) 2 days ago [-]

If you're just looking at the surface of the UI, the v1 release appears to include theming. There's a lot more to UI than that, but it might help it feel a bit fresher

gnud(10000) 2 days ago [-]

I think this in general looks fine: https://inkscape.org/~doctormo/%E2%98%85ferrari

I hate programs that re-invent toolbars and menus just to look 'good' on the surface.

My main problem, as a non-expert user, is that the different 'groups' on the toolbars are not obvious to me, and so I don't understand all of the icons. They should have had a couple more labels in there, or something.

mkl(4126) 2 days ago [-]

Maybe it's your GTK theme? I would certainly call its UI tight and professional-looking in Kubuntu (even a bit too 'nice-looking', as there's not much colour with which to identify icons at a glance; Blender looks like it has the same problem). The default theme on Windows does look kind of clunky, but it's never bothered me.

nineteen999(4220) 2 days ago [-]

> Blender is a cross-platform, open-source design tool, and it manages to have a tight, professional looking UI

Well let's remember that there was HEAVY criticism of Blender's UI up until the 2.5/2.6 releases, with some people even criticizing it now that 2.8 is released. There was an extreme amount of work done since 2.5 to clean up the UI.

It also has a lot more functionality than Inkscape, and so I think there is incentive as more people start using it to make sure that all those functions are filed away neatly and accessible without cluttering the interface too much.

Also, just going on users subscribed to their respective subreddits, Inkscape has around 6000 and Blender 151,000. So the size of the user base might have a direct impact.

Personally I don't find the Inkscape UI to be all that bad, although I agree it could do with some tightening up. I still prefer to use Adobe Illustrator if I have it to hand, but Inkscape will always work in a pinch.

Being able to use it to create SVG that can be directly imported into Blender as curves is something I've used it a lot for, for that kind of task they complement each other very well.

cies(3833) 2 days ago [-]

Inkscape is my go-to example of why FLOSS is great:

* Learn how to use it once
  * Keep using it for a lifetime
  * No complete interface overhaul every major release
* Keeps getting better (some commercial software seems to bloat beyond repair)
* Available on many platforms
* All of the above for FREE

Now I don't need to do vector graphics every week, not even every month. But for those couple of times a year, it's good to have a tool in your box. Inkscape is there for me: no need to buy a (subscription) license, no need to boot into another OS, no need to relearn it. More than a complete feature set for my needs.

Congrats to the team!

twelvechairs(10000) 2 days ago [-]

Big ones for me are Firefox, Inkscape, QGIS, Blender

SquishyPanda23(10000) 1 day ago [-]

Xfig is open source and has barely changed since 1991, but I don't think that's a feature.

Learning to use updates to a UI is relatively cheap. Having a bunch of tools that are outdated is expensive.

Inkscape is great for what it is, but it could definitely use a well thought out interface refresh.

Erlich_Bachman(10000) 2 days ago [-]

> * Keeps getting better (some commercial software seems to bloat beyond repair)

The latest versions of Adobe's video editing suite are actually well known to be buggy, with regular crashes and hangs. It took me many months to realize that the problem was not an inferior CPU, or drivers, or other software, or virtual machines, or too little memory, but that their software is straight-up buggy. That's a company that charges hundreds of dollars for its suite, had a perfectly good product before, and basically owned the market. Now it is buggy and crashes. What other software even crashes these days? I have to think back several years to remember office suites or other complicated software crashing. It is really a disgrace on Adobe's part. This is not a single user's experience either; there is a general consensus now among video editors that this is the case with Premiere and After Effects.

TheSpiceIsLife(4130) 2 days ago [-]

And built-in DXF opening and saving!

Being able to rapidly and precisely build an item in 2D CAD, then do the graphic design component in Inkscape, is amazing.

More recently I'm moving to doing the graphic design part in Affinity Designer, due to a couple of irritating bugs in Inkscape... but it doesn't have export to DXF. But Inkscape is what taught me vector graphics, and I'm still using it as my default vector graphics app in my day job as a laser cutter operator extraordinaire.

Freak_NL(4197) 2 days ago [-]

Inkscape is just there for us when we need it; it's one of those staple FOSS applications that just is. I've been using it since the Sodipodi days; that's more than fifteen years ago!

Inkscape is pretty extensible as well. It's not too hard to write plugins for it (in Python). I've done it myself, and love how a handful of people seem to find it of use [1].

1: https://inkscape.org/~jdhoek/%E2%98%85isometric-projection-c...
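For a taste of how small such a plugin can be, here is a rough sketch against the inkex module that ships with Inkscape; the API differs between 0.92 and 1.0, so treat names like EffectExtension and get_current_layer as approximate:

```python
# Rough sketch of an Inkscape 1.0-style extension (inkex APIs vary by version).
import inkex

class AddGreetingText(inkex.EffectExtension):
    """Drop a text element onto the current layer."""

    def effect(self):
        text = inkex.TextElement()
        text.text = "Hello from an extension"
        text.set("x", "10")
        text.set("y", "20")
        self.svg.get_current_layer().append(text)

if __name__ == "__main__":
    AddGreetingText().run()
```

A real extension also needs a small .inx descriptor file so Inkscape can list it in the Extensions menu.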

jayalpha(4209) 1 day ago [-]

'* Learn how to use it once * Keep using it for a life time'

Yeah, right. Like Xara Xtreme LX

https://en.wikipedia.org/wiki/Xara_Xtreme_LX

misterdoubt(10000) 1 day ago [-]

I want to mess with vector graphics about twice a year, and I do essentially zero visual work beyond that.

Every time I'm pleased at how easy Inkscape makes it. And it's completely smooth on both Linux and Windows.

Andrew_nenakhov(10000) 2 days ago [-]

Unfortunately, this 'not completely changing interface' has a side effect: it fails to evolve and become better.

I am a long-time user of Inkscape (since 2007 at least), and it is sad how poor it looks now next to Sketch/Figma. Their vector tools are better, their layer tools are better. It's a million little things that make drawing a similar shape in Sketch 50x faster than in Inkscape.

jarek-foksa(2878) 2 days ago [-]

Just in case you are looking for a commercial alternative to Inkscape with a more modern UI and Chromium-based rendering engine, check out my project Boxy SVG: https://boxy-svg.com

The next version (to be released in 2-3 weeks) is going to introduce full support for filters and color swatches. In the upcoming 12 months I expect to reach full feature parity with Inkscape.

solarkraft(3791) 2 days ago [-]

'Unsupported browser. Please view this page in Internet Explorer 6.0'

new2628(10000) 2 days ago [-]

Why would someone look for a commercial alternative?

mixmastamyk(3510) 1 day ago [-]

> a more modern UI...

This is typically code for 'doesn't respect my OS theme.' I didn't like skinz in the past and that hasn't changed over the years, though the name has.

Good luck on an otherwise cool app however.

terragon(10000) 2 days ago [-]

I'm sad to say that I judged it purely on the fact that it wasn't open source.

Then I went ahead and tried it out... mind blown. It's that good. It feels like a native app in its UI quality and speed. And $9/month is a very good price point, especially for those who regularly create vector art.

I'm amazed at the quality of your app. It'll be especially incredible once you're at Inkscape parity. How large is the team you've got working full time?

Crinus(10000) 2 days ago [-]

'Unsupported browser' what exactly does Firefox not support?

marble-drink(10000) 2 days ago [-]

Thanks but I'll stick with free software. Why don't you give me an option to pay you to contribute to Inkscape instead?

emptysongglass(4102) 2 days ago [-]

That isn't open source? Get out of here.

zawerf(3315) 2 days ago [-]

It's sad that you're getting downvoted.

I was trying this out on my phone and although I had to switch to landscape to make the UI fit, it was buttery smooth!

I was really impressed with the sheer amount of features included, many of which I have never seen implemented in any other web based editor.

chrismorgan(3499) 2 days ago [-]

Full feature parity with Inkscape in a year is a bold claim, and I'm rather sceptical of it. You have built an impressive app (and I'm sure that being just one person helped with that), but there is a lot of advanced functionality in Inkscape that you don't have at present. A few examples that spring to mind: full pressure-sensitive tablet support (and sure, this is the beauty of using the web as your platform—it's all there in the PointerEvent; well, until you need more fine control, then you're completely stranded), live path effects, extensions, tracing. I'd also like a much stronger keyboard interface. I'd be interested to see your take on some of Inkscape's extensions particularly, like JessyInk and font editing.

CamperBob2(10000) 2 days ago [-]

$9 a month? I'm with the other guy, get out of here.

jordache(10000) 2 days ago [-]

Free is not free when said product results in reduction of productivity.

I've tried to make Inkscape work, after ending my Adobe subscription. Conclusion - Inkscape is not worth it.

Just pony up $50 for Affinity Designer. Great app. Facilitates tremendous productivity.

misterdoubt(10000) 1 day ago [-]

$50 is not $50 when it also requires Windows or Mac to even function.

bcholmes(10000) 1 day ago [-]

Huh. My experience differs. I have both Illustrator and Inkscape, and for certain tasks (e.g. stuff I want to be native SVG or making icons), I always go back to Inkscape because I find it faster/easier.

Finnucane(10000) 1 day ago [-]

It's too bad Affinity is probably never going to port to Linux. Affinity Publisher especially would be great. There's no good layout application for Linux.

input_sh(10000) 2 days ago [-]

> Better HiDPI screen support

Oh finally! Haven't used it for the last two years simply because the interface was too tiny on my laptop.

jcelerier(3906) 2 days ago [-]

Huh, what OS are you on? I've had HiDPI screens since 2014 and Inkscape has always honored Xft.dpi.

pugio(10000) 2 days ago [-]

Inkscape is my go-to image editor. The UI is reminiscent of the old Macromedia Fireworks, with a paradigm that feels much nicer (to me) than Photoshop et al.

I downloaded the beta pessimistically thinking that they still wouldn't have native OSX Menu support (I've been using the non-updated 0.91+devel+osmenu fork/branch forever), but was pleasantly surprised to see full OSX integration in this beta. Great job guys!

killjoywashere(2811) 2 days ago [-]

Thanks for this comment. I will probably download it for that reason alone. That said, my go-to vector editor is InkPad on an iPad Pro. It's everything I need and nothing I don't.

Andrew_nenakhov(10000) 2 days ago [-]

If you liked Fireworks, try Sketch. It looks like Fireworks, but with those little annoying bits done right. I wish Inkscape copied these interface tricks.

Theizestooke(10000) 2 days ago [-]

I thought Inkscape was an alternative to Illustrator and other vector drawing programs; I didn't know people were using it for photo editing.

tmikaeld(4088) 2 days ago [-]

I'm used to Affinity Designer; unfortunately Inkscape is very slow for even just ~10 layers of medium-complexity shapes on macOS :-/

In Affinity, meanwhile, I can have a hundred layers without noticeable performance issues.

dragonsh(3740) 2 days ago [-]

In our startup we do all our website and web app mock-ups in Inkscape. Even the artwork for our physical banners and posters we do using Inkscape and Krita [1], and then use Scribus [2] to generate print-quality PDFs.

We are overall very pleased with it.

We were eagerly waiting for 1.0 for HiDPI support and a native Mac application. Both are there, besides a lot of other features.

Kudos to the team for keeping it alive and continuously improving it. Even though Adobe XD, Sketch and Figma are the preferred tools for UI and UX design, we build our assets using Inkscape. We got this inspiration from the Taiga [3] project, which has an open source design repository in SVG and a complete single-page app using AngularJS based on those designs. It gave our team confidence we could do it.

The added advantage is that artwork developed in Inkscape can directly be used as SVG images in websites and single-page apps, and is responsive by default.

Once again, thanks to the Inkscape team for keeping it alive and improving continuously.

[1] https://krita.org/en/

[2] https://www.scribus.net/

[3] https://github.com/taigaio/taiga-design

cies(3833) 2 days ago [-]

If Inkscape had chosen Qt over GTK back then, combined with where Krita/Scribus are right now, it could have been the KDE creative suite!

plq(3916) 2 days ago [-]

> The added advantage is the artwork developed using inkscape can directly be used as svg images in website and single page app

We had a lot of success cleaning Inkscape's SVG output.

See here for solutions: https://news.ycombinator.com/item?id=20680559

I'm pretty happy with svgcleaner (https://github.com/RazrFalcon/svgcleaner)





Historical Discussions: Software Architecture Is Overrated, Clear and Simple Design Is Underrated (September 18, 2019: 567 points)
Software Architecture Is Overrated, Clear and Simple Design Is Underrated (September 17, 2019: 3 points)

(582) Software Architecture Is Overrated, Clear and Simple Design Is Underrated

582 points 2 days ago by signa11 in 21st position

blog.pragmaticengineer.com | Estimated reading time – 13 minutes | comments | anchor

I've had my fair share of designing and building large systems. I've taken part in rewriting Uber's distributed payment systems, designing and shipping Skype on Xbox One, and open-sourcing RIBs, Uber's mobile architecture framework. All of these systems had thorough designs, went through multiple iterations, and involved lots of whiteboarding and discussion. The designs then boiled down to a design document that was circulated for more feedback before we started building.

All of these systems were large at scale: hundreds of developers built them - or on top of them - and they power systems used by millions of people per day. They were also not just greenfield projects. The payments system rewrite had to replace two existing payments systems, used by tens of systems and dozens of teams, all without having any business impact. Rewriting the Uber app was a project that a few hundred engineers worked on simultaneously, porting existing functionality to a new architecture.

Let me start with a few things that might sound surprising. First, none of these designs used any of the standard software architecture planning tools. We did not use UML, nor the 4+1 model, nor ADR, nor C4, nor dependency diagrams. We created plenty of diagrams, but none of them followed any strict rules. Just plain old boxes and arrows, similar to this one describing information flow or this one outlining class structure and relationships between components. Two diagrams within the same design document often had different layouts, having been added and modified by different engineers.

Second, there were no architects on the teams that owned the design. No IT architects or enterprise architects. True, neither Uber nor Skype/Microsoft have hands-off software architect positions. Engineers at higher levels, like staff engineers, are expected to still regularly code. For all the projects, we did have experienced engineers involved. However, no one person owned the architecture or design. While these experienced developers drove the design process, even the most junior team members were involved, often challenging decisions and offering other alternatives to discuss.

Third, we had practically no references to the common architecture patterns and other jargon referenced in common software architecture literature, such as Martin Fowler's architecture guide. No mentions of microservices, serverless architecture, application boundaries, event-driven architecture, and the lot. Some of these did come up during brainstormings. However, there was no need to reference them in the design documents themselves.

Software design at tech companies and startups

So how did we get things done? And why did we not follow the approaches suggested by the well-known software architecture literature?

I've had this discussion with peer engineers working at other tech companies, FANG (Facebook, Amazon, Netflix, Google), as well as at smaller startups. Most teams and projects - however large or small - shared a similar approach to design and implementation:

  1. Start with the business problem. What are we trying to solve? What product are we trying to build and why? How can we measure success?
  2. Brainstorm the approach. Get together with the team and through multiple sessions, figure out what solution will work. Keep these brainstormings small. Start at a high level, going down to lower levels.
  3. Whiteboard your approach. Get the team together and have a person draw up the approach the team is converging to. You should be able to explain the architecture of your system/app on a whiteboard clearly, starting at the high-level, diving deeper as needed. If you have trouble with this explanation or it's not clear enough, there's more work required on the details.
  4. Write it up via simple documentation with simple diagrams based on what you explained on the whiteboard. Keep jargon to the minimum: you want even junior engineers to understand what it's about. Write it using clear and easy to follow language. At Uber, we use an RFC-like document with a basic template.
  5. Talk about tradeoffs and alternatives. Good software design and good architecture are all about making the right tradeoffs. No design choice is good or bad by itself: it all depends on the context and the goals. Is your architecture split into different services? Mention why you decided against going with one large service, that might have some other benefits, like more straightforward and quicker deployment. Did you choose to extend a service or module with new functionality? Weigh the option of building a separate service or module instead, and what the pros and cons of that approach would be.
  6. Circulate the design document within the team/organization and get feedback. At Uber, we used to send out all our software design documents to all engineers, until there were around 2,000 of us. Now that we're larger, we still distribute them very widely, but we've started balancing the signal/noise ratio more. Encourage people to ask questions and offer alternatives. Be pragmatic in setting sensible time limits to discuss the feedback and incorporate it where it's needed. Straightforward feedback can be quickly addressed on the spot, while more detailed feedback might be quicker to settle in person.

Why was our approach different from what is commonly referred to in software architecture literature? Actually, our approach is not that different in principle from most architecture guides. Almost all guides suggest starting with the business problem and outlining solutions and tradeoffs: which is also what we do. What we don't do is use many of the more complex tools that many architects or architecture books advocate for. We document the design as simply as we can, using the most straightforward tools: tools like Google Docs or Office365.

I assume that the main difference in our approach boils down to the engineering culture at these companies. High autonomy and little hierarchy are traits tech companies and startups share: something that is sometimes less true for more traditional companies. This is also why these places do a lot more 'common sense-based design' than process-driven design with stricter rules.

I know of banks and automotive companies where developers are actively discouraged from making any architecture decisions without going up the chain and getting signoff from architects a few levels up, who oversee several teams. This becomes a slower process, and the architects can get overwhelmed with requests. So these architects create more formal documents, in hopes of making the system clearer, using much more of the tooling the common literature describes. These documents also reinforce a top-down approach, as it is more intimidating for an engineer who is not an architect to question or challenge decisions that have already been documented with formal methods they are not well-versed in - so they usually don't. To be fair, these same companies often want to optimize for developers being exchangeable resources, allowing them to re-allocate people to a different project on short notice. It should be no surprise that different tools work better in different environments.

Simple, jargonless software design over architecture patterns

The goal of designing a system should be simplicity. The simpler the system, the simpler it is to understand, the simpler it is to find issues with it, and the simpler it is to implement. The clearer the language it is described in, the more accessible the design is. Avoid using jargon that is not understood by every member of the team: the least experienced person should be able to understand things equally clearly.

Clean design is similar to clean code: it's easy to read and easy to comprehend. There are many great ways to write clean code. However, you will rarely hear anyone suggest starting by applying the Gang of Four design patterns to your code. Clean code starts with things like single responsibility, clear naming, and easy-to-understand conventions. These principles apply equally to clear architecture.
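To make 'single responsibility' and 'clear naming' concrete, here is a tiny Python sketch (all names are invented for illustration, not taken from any real codebase):

    # Before: one vaguely named function doing two unrelated jobs.
    def proc(d):
        d["t"] = sum(i["p"] * i["q"] for i in d["items"])
        print(d["t"])

    # After: two small, clearly named functions, each with one job.
    def order_total_cents(order):
        return sum(item["price_cents"] * item["quantity"] for item in order["items"])

    def print_total(order):
        print(order_total_cents(order))

    print_total({"items": [{"price_cents": 250, "quantity": 2}]})  # prints 500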

So what is the role of architecture patterns? I see them as similar in usefulness to coding design patterns. They can give you ideas on how to improve your code or architecture. With coding patterns, I notice a singleton when I see one, and I raise my eyebrow and dig deeper when I see a class that acts as a facade, only doing call-throughs. But I've yet to think 'this calls for an abstract factory pattern'. In fact, it took me a long time to understand what that pattern does, and I only had my 'aha!' moment after working with a lot of dependency injection - one of the few areas where the pattern is actually pretty common and useful. I'll also admit that although I spent a lot of time reading and comprehending the Gang of Four design patterns, they've had far less impact on my becoming a better coder than the feedback I've gotten from other engineers on my code.
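As a minimal illustration of the 'facade that only does call-throughs' smell, here is a hypothetical Python sketch (BillingService and BillingFacade are invented names):

    class BillingService:
        def charge(self, user_id, cents):
            print(f"charging {user_id}: {cents} cents")

    # Every method just forwards to the wrapped service: a layer of
    # indirection that adds no behavior - the eyebrow-raising case.
    class BillingFacade:
        def __init__(self, service):
            self._service = service

        def charge(self, user_id, cents):
            self._service.charge(user_id, cents)  # pure call-through

    BillingFacade(BillingService()).charge("user-42", 999)

If such a facade never grows real responsibilities (validation, batching, translation between models), the extra layer is usually not earning its keep.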

Similarly, knowing about common architecture patterns is a good thing: it shortens discussions with people who understand them the same way you do. But architecture patterns are not the goal, and they won't substitute for simpler system designs. When designing a system, you might find you have accidentally applied a well-known pattern: this is a good thing, as it makes your approach easier to reference later. But the last thing you want to do is take one or more architecture patterns and use them as a hammer, looking for nails.

Architecture patterns were born after engineers observed how similar design choices were made in certain situations, and how those choices were implemented in similar ways. The choices were then named, written down, and extensively talked about. Architecture patterns are tools that came after the problems were solved, in hopes of making the lives of others easier. As an engineer, your goal should be solving problems and learning from them, rather than picking a shiny architecture pattern in hopes that it will solve your problem.

Getting better at designing systems

I've heard many people ask for tips on becoming better at architecting and designing systems. Experienced people often recommend reading up on architecture patterns and books on software architecture. While I definitely do recommend reading - especially books, as they provide a lot more depth than a short post - I have a few suggestions that are all more hands-on than just reading.

  • Pull a teammate aside and whiteboard your design approach. Draw up what you are working on and why you are doing things the way you are. Make sure they understand. And when they do, ask for their feedback.
  • Write up your design in a simple document and share it with your team, asking for feedback. No matter how simple or complex the thing you're working on - be it a small refactor or a large project - summarize it. Do it in a way that makes sense to you and that others can understand; for inspiration, here's how I've seen it done at Uber. Share it with your team in a format that allows commenting, like Google Docs, Office365, or others. Ask people to add their thoughts and questions.
  • Design it two different ways and contrast the two designs. When most people design an architecture, they go with one approach: the one that pops into their head first. However, architecture is not black-and-white. Come up with a second design that could also work. Contrast the two, explaining why one is better than the other. List the second design briefly as an alternative considered, arguing why it was decided against.
  • Be explicit about the tradeoffs you make, why you made them, and what you have optimized for. Be clear about the constraints that exist and that you've had to take into account.
  • Review others' designs. Do it better. Assuming you have a culture where people share their designs via whiteboarding sessions or documents, get more out of these reviews. During a review, most people only try to take things in, becoming one-way observers. Instead, ask clarifying questions about parts that are not clear. Ask about other alternatives they've considered. Ask what tradeoffs they've taken and what constraints they've assumed. Play devil's advocate and suggest another, possibly simpler alternative - even if it's not a better one - and ask for their thoughts on your suggestion. Even though you've not thought as much about the design as the person presenting it, you can still add a lot of value and learn a lot.

The best software design is simple and easy to understand. The next time you're starting a new project, instead of thinking, 'How will I architect this system, what battle-tested patterns should I use and what formal methodology should I document it with?', think 'How can I come up with the simplest possible design, in a way that's easy for anyone to understand?'.

Software architecture best practices, enterprise architecture patterns, and formalized ways to describe systems are all tools that are useful to know of and might come in handy one day. But when designing systems, start simple and stay as simple as you can. Try to avoid the complexity that more complex architecture and formal tools inherently introduce.

This post has received much discussion from other industry professionals, who shared their views and experiences on architecture and simplicity. Read these interesting discussions on Hacker News, on Lobste.rs and on Reddit.




All Comments: [-] | anchor

jaequery(2803) 2 days ago [-]

First 1-3 years of coding, I just coded to get sht done. I got a lot of sht done.

Next 4-8 years, I started getting cute with it and applied all kinds of design patterns: Factory, Abstractions, DI, Facade, Singleton, you name it. It looked cute and felt good when it all worked, but it was a juggling act. There were usually 2-3 files to touch just to do one thing - UserFactory, UserService, UserModel, User, you get the idea. It got to the point where coding felt like a burden, and I started getting an allergic reaction to any project that had more than 50 files.

Next 4-5 years, I made it a mission to only code in a pragmatic, minimalistic way. Best decision I ever made; this has been the most productive time of my career. I don't look back and am never going back. A simple require and requireAll with basic OOP is all I need in most cases. Most of my projects now have fewer than 10 "core" files, minus the standard views/routes/etc. I enjoy just working on code now, it makes me happy, and any dev who joins loves it too, as they get it right away. I code almost exclusively in Sinatra now, btw. Express is great too, but I think the ecosystem isn't there yet for developer happiness.

Keeping code simple is not easy. It takes a lot of trial and error to learn what works and what doesn't. I realize I code a lot slower now than in the past and that I write far fewer lines of code. It's both good and bad, because sometimes I'll even spend hours just trying to properly name a variable. But I believe this pays off in the end.

You just can't best simplicity.
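For readers who haven't lived through the middle stage, a hypothetical Python sketch of the contrast the commenter describes (all class names invented):

    # The 'cute' stage: one simple action spread across forwarding layers.
    class UserModel:
        def __init__(self, name):
            self.name = name

    class UserFactory:
        @staticmethod
        def create(name):
            return UserModel(name)

    class UserService:
        def rename(self, user, new_name):
            user.name = new_name

    # The pragmatic stage: one small class does the same job.
    class User:
        def __init__(self, name):
            self.name = name

        def rename(self, new_name):
            self.name = new_name

    User("Ada").rename("Ada Lovelace")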

wonderwonder(4043) 1 day ago [-]

I've always been happy with just OOP and dependency injection. Anything more and things start to get difficult to follow. I'm currently working on a legacy system that uses microservices, and it takes hours just to figure out where the code that needs to be changed lives and to trace how those changes will propagate through the system.

galaxyLogic(3575) 2 days ago [-]

>started getting cute with it and applied all kinds of design patterns

Even though there are books about design patterns, taking such a book and trying to 'apply' its patterns is a bit backwards, I think. The idea of patterns is that they describe commonly useful solutions, not designs you 'should' use.

Once you started to code in a 'pragmatic, minimalistic way', I assume you found you could apply the same solutions you had found earlier in new contexts. Those are your own design patterns. That is how design patterns work: some patterns of design 'emerge' because they are the optimal solutions.

A design pattern should be minimalistic: it should only do what is needed, nothing more. It should solve its problem in an optimal, minimal way. But if the problem it is solving is not your problem, you should not use it.

enriquto(10000) 2 days ago [-]

> Next 4-8 years, I started getting cute with it and applied all kinds of design patterns, Factory, Abstractions, DI, Facade, Singleton you name it.

What's cute about this madness? There's nothing uglier than that! Simple code is cute. The simpler, the cuter.

GordonS(567) 1 day ago [-]

I've been on a similar journey, and I've seen this pattern repeat itself again and again!

1. Hack any old shit together, but it works

2. When you actually have to maintain what you previously wrote, you realise (1) doesn't work so well. Then design patterns seem like an epiphany, and you cargo-cult the shit out of them, using them everywhere. You dogmatically eliminate all code duplication, use mocks with wild abandon, and are not happy unless you have 100% test coverage. For bonus points, you also overuse abstraction. A lot.

3. When you actually have to maintain anything you previously wrote, you realise what a tangled mess of abstraction you have made - you can't simply open a file and ascertain what it's doing! You also realise that the tests you wrote to achieve 100% coverage are crap and don't really prove anything works. You finally reach a zen-like state, realising that simplicity is key. You shun all forms of dogma, and use patterns and abstraction, but only just enough.
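A hypothetical Python sketch of the stage-2 testing style: the test below gives send_welcome_email (an invented function) 100% coverage, yet it only proves the code calls the mock the way the code was written:

    from unittest.mock import MagicMock

    def send_welcome_email(mailer, user):
        mailer.send(to=user["email"], subject="Welcome!")

    def test_send_welcome_email():
        mailer = MagicMock()
        send_welcome_email(mailer, {"email": "a@example.com"})
        # Passes even if a real mailer would reject every message.
        mailer.send.assert_called_once_with(to="a@example.com", subject="Welcome!")

    test_send_welcome_email()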

paxys(10000) 2 days ago [-]

This describes my career perfectly. And at every stage I inevitably get annoyed at other engineers not in the same stage as me.

taneq(10000) 2 days ago [-]

> I realize I code a lot slower now than in the past and that I write much fewer lines of code. It's both good and bad because sometimes I'd even spend hours just trying to properly name a variable.

I think the important thing happening here is more than just naming. You're taking the time to fully consider what you're doing with the new variable in order to name it. That's time very rarely wasted.

downtide(10000) 2 days ago [-]

When I couldn't program, I almost achieved more! I saved time by picking stuff up and gluing it together. Then I spent ages learning specific software, plugins, and their wiring, only for them to fall out of favour. Later, frameworks, etc.

A web outfit I worked at should have concentrated on a few small plugins/components that would have handled most of their sites. Instead, other behemoths emerged that added pain and complexity to what should have been very simple sites. Only the author understood the ins and outs of a half-finished product that ended up bastardised for each project, resulting in multiple-version hell. But hey, this was before good 'Git'ing. Oh, for hindsight.

croh(3871) 2 days ago [-]

Well said. Similar to you, I now code almost exclusively in Flask. I don't want to spend days and nights learning (and remembering for interviews) unnecessary abstractions and APIs. Instead I prefer to spend more time on CS fundamentals, if I have to. The sad part of this story is broken hiring: your resume doesn't get short-listed unless it comes with the new hyped shiny toy. But this can encourage you to put more effort into finding a good employer.

Apart from the juggling act you mentioned, there is another caveat: many devs don't understand the exact use cases of these design patterns and use them in the wrong context.

As a footnote, if you don't have a good team of engineers like the OP does, the best way to craft your art is:

- pick up a good library in your subject

- start copying it line by line

- when copying, try to understand everything

- this will teach you lot about designing softwares

Even though this sounds stupid and time-consuming, it is not. Believe me. You don't even have to reach 100%; just try to reach 33%. You will learn a lot in a short period.

winrid(10000) 2 days ago [-]

You can break things up without overdesigning, right?

I think you should do what is easiest most of the time. However, that is hard to measure. Easiest now, or easiest when you need to finish this and move on to the next thing without spending two more sprints fixing bugs?

I prefer small/reasonably sized components because I can easily cover them in unit tests and sleep easier at night. I built a survey builder at one company (think mix of survey monkey/qualtrics) and that is probably over 100 files. But the codebase is straightforward and simple (no complicated inheritance, one tree data structure for pathing, lots of code reuse)...

GuiA(429) 2 days ago [-]

Well, yes. It all sounds so easy when you put it like that.

The problem is that, in my experience at least, you can't just teach junior engineers how to go straight to phase 3. You have to go through phase 1 and 2 to really develop a sense for what makes a solid, streamlined design.

Some never get there - either because they become set in their ways early, or because they work in organizations where the wrong kind of thing is encouraged. Some get there faster - because they've worked with mentors or in codebases that accelerated their learning.

But like with any craft, you have to put in the hours and the mistakes.

(Yes, there are John Carmacks in the world who go through all those steps within 18 months when they are 12, but they are 0.0001% of the programming population)

james_s_tayler(10000) 2 days ago [-]

Out of curiosity what kind of projects are that small?

I guess I have hobby projects that are that small, but all my professional work is on large enterprise systems that wouldn't fit in 10 files if they tried.

Makes sense when things are so small to only use what you need. Sounds like you made a reasonable decision for the kinds of things you work on. But when you get past a certain size actual architecture becomes very beneficial.

Of course it's also possible to have a massive enterprise system without any architecture. Believe me it's not very fun.

yodsanklai(4066) 1 day ago [-]

You can't beat simplicity, but software systems aren't planned entities. They evolve from the collaboration of multiple people with a variety of skills and personalities, working together to meet deadlines.

afpx(4111) 1 day ago [-]

The only problem is that it takes at least 10 years to get to that point. No one has found a shortcut, yet.

feketegy(4199) 2 days ago [-]

Most devs prepare for abstraction nirvana. I see a lot of fellow devs creating complicated code because 'we might need to switch out the database down the road' or 'what if we want to run the web app from the CLI'.

In 20 years of programming, I have maybe seen a large application switch database engines once or twice, and I've never seen a client want to run their web application from the CLI...

The art in programming is to decide whether you need that abstraction or not.
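A hypothetical Python sketch of the 'in case we switch databases' abstraction, using sqlite3 as a stand-in (all names invented):

    import sqlite3
    from abc import ABC, abstractmethod

    # The speculative version: an interface with exactly one implementation.
    class UserRepository(ABC):
        @abstractmethod
        def find_name(self, user_id):
            ...

    class SqliteUserRepository(UserRepository):
        def __init__(self, conn):
            self.conn = conn

        def find_name(self, user_id):
            row = self.conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
            return row[0]

    # The direct version: one function, no ceremony. The interface can be
    # introduced on the day a second database actually appears.
    def find_name(conn, user_id):
        row = conn.execute("SELECT name FROM users WHERE id = ?", (user_id,)).fetchone()
        return row[0]

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO users VALUES (1, 'Ada')")
    print(find_name(conn, 1))  # Ada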

JMTQp8lwXL(10000) 2 days ago [-]

Over my career, I've worked with engineers that like to over-engineer and under-engineer.

The over-engineered code looked like Russian dolls: it had many layers, and some of the abstractions offered no value. That can make onboarding onto such code unnecessarily complex.

On the other hand, the under-engineered code made very little use of even simple data structures or algorithms. I like to call it 'chicken scratch' code. I find it tends to be brittle, and it fails the longevity test: you end up frequently having to touch everything and be aware of the entire state of the system at once, due to a lack of functional style. There are few enforced boundaries between subsystems in this type of code.

Like most things, moderation is key. I only introduce abstractions when there is a meaningful value-add in doing so. This is somewhat subjective, but there is a right level of application of design patterns: not too much, nor too little.

amelius(879) 2 days ago [-]

It looks like you were applying design patterns 'just because'. Obviously this is not a good thing.

A better approach is to take some time and think about all the requirements of your project, and to take into account what requirements might be added later. With that in mind, you can choose the abstractions that you need, and from there start coding. That way, your design patterns start working for you instead of against you.

iask(10000) 2 days ago [-]

Thank you for sharing. Same experience here; I felt I was the only one going down this path. Many projects I look at have too many unnecessary layers, files, etc.

I joined a company recently that has a simple app for end users to take orders over the phone and perform lookups and refunds. Something that could be built in a few days, seriously. When I looked at the code - WTF!!! The previous dev over-architected this thing. Unnecessary layers, interfaces, etc.; one simple change can take hours.

I think developers need a little bit of management experience to understand the impact of this complexity. At the end of the day, companies just want something usable to stay in the game... a Honda and not a Rolls.

tarsinge(10000) 2 days ago [-]

This mirrors my experience too.

I think at its core the issue is that code duplication is irrationally seen as a bad thing. But from my experience of the last few years with an ultra-minimalist approach, making a change to non-abstracted code is so much faster. Yes, it's boring and feels unsophisticated, but when you only have flat functions vs. an architecture tightly coupled to a business process, it's a matter of hours vs. days/weeks.

In short, I would add to the title "because reusability is overrated" - especially when the trade-off is complexity.

dgellow(607) 2 days ago [-]

What about tests? In my experience, simple code without abstractions often becomes a pain to write tests for. That's one of the main reasons I see to use some form of dependency injection and other indirections, even if in practice you have only two implementations of each dependency (one in your tests, one real).
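A minimal Python sketch of that point (names invented): injecting the dependency as a parameter keeps the production code simple while still letting a test swap in a fake, without a second class hierarchy:

    import time

    def make_receipt(order_total, clock=time.time):
        # 'clock' is injected so tests don't depend on the real time.
        return {"total": order_total, "issued_at": clock()}

    def test_make_receipt():
        receipt = make_receipt(9.99, clock=lambda: 1000000.0)
        assert receipt == {"total": 9.99, "issued_at": 1000000.0}

    test_make_receipt()

In a language with first-class functions, a defaulted parameter often buys the testability that heavier dependency-injection machinery provides.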

StreamBright(2711) 2 days ago [-]

Exactly. I never got into OOP design patterns, and my co-workers could not convince me they were a good idea. I thought for a while that I was crazy, but then I got to know Erlang and Clojure. Joe and Rich set me straight on software design.

>> Keeping code simple is not easy. It takes a lot of trials and errors to know what works and what doesn't.

Refactoring helps. I usually achieve a 20-40% reduction with the first refactoring.

obstacle1(3948) 2 days ago [-]

A big problem IME is that people tend to define 'simple' as 'written in a style I prefer'. For example, you can extract a series of 10 obviously related methods from some 2000-line God class into their own class, but have others who are used to a more procedural coding style complain that the indirection is 'hard to read' because they need to open a new file. This despite the fact that others find the God class 'harder to read' because it contains 2000 lines of code doing everything under the sun, and that the class is objectively harder to maintain and change for everyone, because there are no logical boundaries between code functions, so nobody can tell what needs changing without reading everything.

Cue endless bikeshedding in the name of 'simplicity', which nobody is using an objective metric to define.

stinos(3992) 2 days ago [-]

> 2000-line God class ... 'hard to read' because they need to open a new file

Might be me, but I've always found this a rather strange argument: either they aren't using 'go to definition', which means that to read the other code they have to scroll through the file manually, leaving where they are, and then go back - that's not really convenient. Or they are using 'go to / peek definition', and then it doesn't really matter that it's in another file.

growlist(10000) 2 days ago [-]

In my experience, the further away from fierce commercial factors, the greater the tendency towards cargo-cultism. Hiring for roles in government-related work in the UK is awash with acronyms and buzzwords, as if with enough methodology and certifications we could regulate failure away. The problem is: things still seem to go wrong in all the same old ways, despite all the latest and greatest fancy techniques. But hey, all our developers are TOGAF certified these days, so that's something!

james_s_tayler(10000) 2 days ago [-]

For some reason I read 'methodology' as 'mythodology' and I thought 'That's genius! That's the perfect portmanteau to describe the phenomenon of people trying to learn and adhere to 'methodology' but then really just adhering to the lore and the myth! I'm stealing that!'

Then I read it again and it didn't say that. But I think that should become a new word. Mythodology.

andreyk(2636) 2 days ago [-]

Boils down to this: 'So what is the role of architecture patterns? I see them similarly in usefulness as coding design patterns. They can give you ideas on how to improve your code or architecture.'

The whole idea of patterns is to identify often useful, and possibly non-obvious, ideas to be aware of when designing the solution. It's great to start simple, but tricky to make things both simple and robust/powerful - and that's what patterns are supposed to help with. This ends with:

'Software architecture best practices, enterprise architecture patterns, and formalized ways to describe systems are all tools that are useful to know of and might come in handy one day. But when designing systems, start simple and stay as simple as you can. Try to avoid the complexity that more complex architecture and formal tools inherently introduce.'

What this misses is that if you start simple and stay as simple as you can, you may undershoot and be stuck refactoring code down the line. A fine balance is needed, and patterns are definitely part of the toolset a good engineer should be aware of when trying to nail that balance.

james_s_tayler(10000) 2 days ago [-]

I really agree about undershooting. I like to try to overshoot by about 15%.

It's definitely a big mistake to overshoot by, say, 50 or 100 or 200%. But overshooting by just a little more often leaves me feeling 'thank God I did that' than 'hmm, I guess I really didn't need that'.

Balance is absolutely key.

uber99953(10000) 2 days ago [-]

Services at Uber are pretty much all stateless Go or Java executables, running on a central shared Mesos cluster per zone, exposing and consuming Thrift interfaces. There is one service mesh, one IDL registry, one way to do routing. There is one managed Kafka infrastructure with opinionated client libraries. There are a handful of managed storage solutions. There is one big Hive where all the Kafka topics and datastores are archived, one big Airflow (fork) operating the many thousands of pipelines computing derived tables. Almost all Java services now live in a monorepo with a unified build system. Go services are on their way into one. Stdout and stderr go to a single log aggregation system.

At the business/application level, it's definitely a bazaar rather than a cathedral, and the full graph of RPC and messaging interactions is certainly too big and chaotic for any one person to understand. But services are not that different from each other, and they run on pretty homogeneous infrastructure. It takes pretty strong justification to take a nonstandard dependency, like operating your own RDBMS instance or directly using an AWS service, although it does happen when the standard in-house stuff is insufficient. Even within most services you will find a pretty consistent set of layers: handlers, controllers, gateways, repositories.
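Purely as illustration - this is not Uber's code, and every name below is invented - a Python sketch of the consistent handler -> controller -> repository layering the commenter describes (the gateway layer is omitted for brevity):

    class RideRepository:
        def save(self, ride):
            print("persisting", ride)

    class RideController:
        def __init__(self, repo):
            self.repo = repo

        def request_ride(self, rider_id):
            ride = {"rider": rider_id, "status": "requested"}
            self.repo.save(ride)
            return ride

    # Transport layer: decode the request, delegate, encode the reply.
    class RideHandler:
        def __init__(self, controller):
            self.controller = controller

        def handle(self, request):
            return self.controller.request_ride(request["rider_id"])

    print(RideHandler(RideController(RideRepository())).handle({"rider_id": "r1"}))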

barrkel(3135) 2 days ago [-]

What you describe is an architecture, of course, and it didn't happen by accident.

cat199(10000) 1 day ago [-]

Umm:

    + all stateless Go or Java executables
    + running on a central shared Mesos cluster per zone
    + one service mesh
    + one IDL registry, 
    + one way to do routing
    + one managed Kafka infrastructure
    - handful of managed storage solutions
    + one big Hive where all the Kafka topics and datastores are archived, 
    + one big Airflow (fork) operating the many thousands of pipelines computing derived tables. 
    + Almost all Java services now live in a monorepo with a
    + unified build system. 
    + Go services are on their way into one. 
    + Stdout and stderr go to a single log aggregation system.
    = +11 singular/unified things, forming a single, larger system.
'It takes pretty strong justification to take a nonstandard dependency ... Even within most services you will find a pretty consistent set of layers ...'

Maybe I'm misunderstanding, but how in the world do you get 'bazaar' out of this?

cryptica(10000) 2 days ago [-]

Design is part of architecture, so it doesn't make sense to compare them.

The best architectures are usually the simplest ones which get the job done.

To design the simplest architecture possible, you need to know exactly what 'get the job done' means. Many software architects have no vision when it comes to seeing all the possible use cases for the product, so they don't know what the minimal structural requirements are. So they either underengineer or overengineer, based on what is more profitable for them as employees of the company.

Underengineering is also bad, but it's rare nowadays because it usually doesn't align with employee salary incentives, so we've forgotten how painful it is to maintain a code base that copy-pastes code everywhere.

SamuelAdams(3875) 2 days ago [-]

Right, there's the concept of JBGE ('just barely good enough'):

http://agilemodeling.com/essays/barelyGoodEnough.html

I think too many people want to apply a 'silver bullet' to all projects: IoC, Docker containers, auto-scaling, etc. But sometimes I'm just tossing data from an API into a database somewhere. I don't need all that complexity.

Other times, I'm building an enterprise product with three fully-staffed agile teams, spending a million dollars annually for five years. Architecture that enables those teams to work in a cohesive way becomes very important, so an IoC pattern might save us a lot of time down the road.

Great architects know when to underengineer and when to overengineer.

Hokusai(10000) 2 days ago [-]

> However, no one person owned the architecture or design. The experienced developers did drive this.

The lack of formality does not mean a lack of the role. If the 'experienced developers' are the ones doing the design, they are de facto architects.

> No mentions of microservices, serverless architecture, application boundaries, event-driven architecture, and the lot. Some of these did come up during brainstormings. However, there was no need to reference them in the design documents themselves.

So the teams were thinking in patterns to discuss and express themselves, but then decided to hide that and not show the reasoning in the documentation, for reasons. That makes the job of understanding the conclusions of the discussion harder for people outside their group.

I am all for transparency. If your company has architects but calls them 'experienced engineers', and if you use patterns and then remove them from your documentation, your company is going to lack the transparency that allows everybody to participate.

Everybody has seen this with online rants. People rise and fall by politics. When they are one of the 'expert engineers', they talk about the cool company and meritocracy. When politics makes them fall, then there comes a rant about how the company 'is not what it used to be'.

I like to spend my time doing software engineering instead of politics - gaining upper management's favor or fighting peers. Clear responsibilities help with that when a company is big enough. Like any system, a quantitative change - the number of employees - may lead to a qualitative change that needs a change of approach. Trying to run a 1000-person company like a 50-person startup is like designing a critical server handling thousands of queries per second the same way as a small server handling a few queries per minute.

To each problem its own solution.

closeparen(4112) 2 days ago [-]

Central, top-down architecture is extremely political. You have to fight with bigwigs who don't know your problem domain and don't live in your codebase to make it reasonable, or even possible, to solve the business problems on your plate when they inevitably don't fit the 10,000 foot 5-year plan.

Pushing architecture responsibilities down into the hands of senior engineers with specific problems to solve and features to build eliminates that form of politics. They are not disguised architects, because designing the architecture is only one phase of the project. They also have to live with the architecture. This is a great thing.

ryanjshaw(10000) 2 days ago [-]

Uber is barely 10 years old. They can get away with this. Wait until it's 2 or 3 times that age, and its (present or future) regulators sign new laws into place that require massive changes or reporting feeds across multiple systems engineered and documented in this unprincipled fashion. Probably after a couple more privacy breaches or self-murdering car incidents. Nobody will be able to figure out how it all fits together, and the compliance team and auditors are going to throw a fit.

That's when all those architecture processes, repository tools, and languages suddenly make a lot more sense. Uber deals with extremely sensitive personal information, and the move towards self-driving cars means they deal with highly safety-sensitive systems. The dismissive attitude towards these tools in what should be a highly disciplined engineering organisation disturbs me, but I come from a highly regulated environment, so perhaps I was just raised that way.

niceworkbuddy(10000) 2 days ago [-]

I think there is a distinction between documentation produced during development and documentation written afterwards; IMHO the article is about the former. After something is done, you can write thorough product documentation with UML diagrams and whatnot.

_pmf_(10000) 2 days ago [-]

> the compliance team and auditors are going to throw a fit.

I've never seen an auditor give a shit either way. They're just box ticking robots.

cryptica(10000) 2 days ago [-]

This article has some contradictions:

>> Third, we had practically no references to the common architecture patterns and other jargon referenced in common software architecture literature, such as Martin Fowler's architecture guide. No mentions of microservices, serverless architecture

Then a few paragraphs later:

>> Is your architecture split into different microservices? Mention why you decided against going with a monolith, that might have some other benefits

Another contradiction (which mostly contradicts the general premise of the article):

>> We created plenty of diagrams, but none of them followed any strict rules. Just plain old boxes and arrows, similar [this one describing information flow] or [this one outlining class structure and relationships between components]

In the last link ([this one outlining class structure and relationships between components]), the article says:

>> If you have previously worked with the [VIPER] architecture, then the class breakdown of a RIB will look familiar to you. RIBs are usually composed of the following elements, with every element implemented in its own class:

... and then it shows some kind of class diagram which looks vaguely like UML, in which the classes have highly architected names like 'Interactor', 'Presenter', 'View', 'Builder'... nothing to do with the underlying business domain. It doesn't look like simple design to me. The recommended approach looks more like the complex VIPER architecture.

kdmccormick(10000) 1 day ago [-]

Good observations. I think a more accurate portrayal of the author's experience would be 'Clear and simple Architecture is underrated'.

jameslk(1502) 2 days ago [-]

Sounds great for a tech company with highly skilled engineers. They can afford the type of talent who will think things through, and they give them the time to do so. Startups seem to attract similar talent, and when they don't, they don't always have the same problems anyway.

But what about the companies that can't afford the best engineers and don't have a bottom-up culture? What about the companies that hire overseas IT agencies who do exactly what they're told and no more (it's safer that way when communication and timezones are working against you)?

I've worked at both kinds of companies, and I've seen the top-down 'architect' role work better at the latter.

The author even seems to admit this, although briefly:

> To be fair, these same companies often want to optimize for developers to be more as exchangeable resources, allowing them to re-allocate people to work on a different project, on short notice. It should be no surprise that different tools work better in different environments.

This best summarizes it. Different approaches work better in different scenarios. That's really what the title and article should be about.

rumanator(10000) 2 days ago [-]

> Sounds great for a tech company with highly skilled engineers. They can afford the type of talent who will be thinking thoughtfully and have the time to do so.

The blog post says nothing of the sort. It focuses on two aspects of software architecture which are entirely orthogonal to the design process: using a common language and tools to communicate and describe ideas (UML, documentation), and leveraging knowledge and experience to arrive at a system architecture that meets the project's requirements.

Deciding to invent their own personal language (their 'boxes and arrows') and take a naive tabula rasa approach to software architecture does not change the nature of the task.

A rose by any other name...

james_s_tayler(10000) 2 days ago [-]

I couldn't agree more. A lot of the time people espouse a particular worldview without thinking through the 2nd- and 3rd-order effects in different contexts.

If anything one of the fundamental things to get right is to pick an approach suited to the context.

cousin_it(3339) 2 days ago [-]

A couple decades ago we had a world that mostly standardized on the LAMP stack. It was an architecture that solved everyone's webapp needs, switching projects was easy, life was good. Then SOA happened on the server side, JS monoliths happened on the client side, and here we are, worse off than when we started.

cryptozeus(3590) 2 days ago [-]

Good article, but some parts are outdated - who uses UML these days? And saying you did not create diagrams using any architecture tools is rather obvious, no?

SamuelAdams(3875) 2 days ago [-]

I just started grad school this fall (September 2019). My 'systems analysis and design' course spends three weeks on UML, Data flow diagrams, and CASE tools.

This would have been a great course, 20 years ago. No sane business uses these tools today. The military might, but that's about it.

mrpickels(10000) 2 days ago [-]

I read the comments and see there are two types of engineers: conservatives and liberals. The conservatives work for big corporations, draw UML diagrams with factories, bridges, and facades, and argue that because of some regulation or privacy policy your architecture may have to change and you need to be prepared for it. Those guys are right.

More liberal engineers say that keeping code simple is key, and that to keep it simple you need to be smart and creative. Those guys are right as well.

Now back to reality: the conservative developers will always work on code that was written by liberal developers, because the latter deliver sh*t on time that works and carry the business on their shoulders, while the former make it work at another scale.

Conclusion: there are different types of engineers, and our responsibility is to accept that humans are not machines; some like to CREATE vision and value, others like to manage huge systems that are complex.

Gibbon1(10000) 2 days ago [-]

30 years ago? I talked to a project manager who designed and built factories. He said there were three kinds of engineers and techs he hired: design, construction/shakedown, and maintenance. Design requires being creative and methodical. Construction and shakedown require the ability to gleefully beat chaos into submission. And maintenance is the methodical following of rules and procedures. He hired three different groups for these tasks because they would go insane or be overwhelmed if they were doing the wrong job for their temperament and skills.

rgoulter(10000) 2 days ago [-]

I like the interpretation of 'conservative'/'liberal' as applied to engineering practices which Steve Yegge wrote (in a lost Google+ post): 'acceptability of breaking code in production'.

'Conservative' developers really want to 'conserve' what's there. I feel that description suits both the 'draw UML diagrams with enterprise patterns' and the 'loves dependent types' kinds of people.

In this sense a liberal 'keep code simple' is more about things like 'You Ain't Gonna Need It', and focussing on writing what code is needed now. (Since it doesn't matter if it needs to be broken later as requirements change).

rumanator(10000) 2 days ago [-]

I have a hard time understanding the author's point of not using UML while somehow boasting that they used 'plain old boxes and arrows' to create 'plenty of diagrams'.

UML is nothing more than a bunch of plain old boxes and arrows, but ones which have a concrete, objective, specified meaning, and which thus help share ideas as objectively as possible. UML is a language, and languages are tools to communicate and share information.

Using ad hoc box-and-arrow diagrams invented as they go along means they decided to invent their own language, which may or may not be half-baked, and which is not shared by anyone other than the people directly involved in the ad hoc diagram creation process.

If the goal is to use diagrams to share information and help think things through, that sounds like a colossal shot in the foot. I mean, with UML there are documents describing the meaning of each box and each arrow, which helps any third party clear up any misconception. What about their pet ad hoc language?

In the end the whole thing looks like a massive naive approach to software architecture, where all wheels have been thoroughly reinvented.

hdfbdtbcdg(10000) 2 days ago [-]

This reminds me of the frameworks vs. libraries argument, or ORM vs. raw SQL. Yes, frameworks and ORMs can be constraining and limit clever solutions. But when you need to add complex features to a complex project, you are always glad that every other programmer who came before you was constrained and that things use a familiar pattern.

sebcat(4119) 2 days ago [-]

Data Flow Diagrams, as described in 'Structured Analysis and System Specification' (Tom DeMarco), are lightweight and provide a common way to describe a system with a focus on the flow of data.

The book also goes into detail on how to apply them, e.g., the value of having simple diagrams and of having separate diagrams to break down the more complex processes in detail.

It's not rocket science, but I have found it helpful in the past to communicate ideas or existing designs.

verall(10000) 2 days ago [-]

Because UML is generally about defining processes, it is easy to accidentally try to poorly 'code' parts of the system in UML - processes that might be easier to represent in code. If there is a distinct process that is complex or important enough to be architected, by all means use UML.

Normally, at the high level where people are architecting, what matters more is the flow of information and the containment of responsibilities. UML is not really designed for describing these, and trying to wedge this type of information into a UML diagram can get confusing and can encourage architects to focus on the wrong things.

When people say 'box and arrow diagrams', I take that to mean boxes = information + responsibilities, arrows = information flow.

Double_a_92(4096) 2 days ago [-]

The difference is that you can't really use proper UML to quickly explain something on a whiteboard unless you're fluent in it. I personally get mental inhibitions when I have to quickly decide whether the arrowhead needs to be hollow or filled, whether the arrow itself needs to be solid or dotted, or whether the box needs rounded corners - especially when it doesn't matter for the idea I'm trying to explain (maybe even just to myself).

jacquesm(43) 2 days ago [-]

Clear and simple design is optimal software architecture. Oversimplification and architecture madness are sub-optimal.

vemv(4161) 2 days ago [-]

> Oversimplification and architecture madness are sub-optimal.

Let me point out, you are essentially saying 'bad things are bad' here.

_pmf_(10000) 2 days ago [-]

If I had more time, I would create a cleaner and simpler design.

kmote00(4215) about 12 hours ago [-]

'I apologize for the length of my letter. I'm afraid I did not have the time to write a shorter one.' - Blaise Pascal (et al. [1])

[1] https://quoteinvestigator.com/2012/04/28/shorter-letter/

drawkbox(3196) 2 days ago [-]

The job of a programmer, engineer, creative, or product developer is to create simplicity from complexity.

The job is to tame complexity via simplicity, not make a complex beast that needs more taming.

Sometimes engineers take something simple and make it more complex, which works against simplifying - whether due to bad abstractions, proprietary reasons, obfuscation for job security, or ego flexing. Anyone can make something more complex; making something simple and standard takes lots of work and iterations.

Nature is built with simple iterative parts that look complex in whole, systems that work well mimic that. Math is built with simple parts that leads to amazingly complex and beautiful things. Art is built with simple marks to create something complex and beautiful. Humans as well, and they desire simplicity so they can do more and build on that.

mikekchar(10000) 2 days ago [-]

I'd add a slight caveat to this: our job is to make something that is as close as possible to the complexity of the problem. You don't want to make it more complex, for obvious reasons. However, you also don't want to make it less complex, because then you are removing fidelity. Let me aim a slightly playful jab at the GNOME people for 'simplifying' by removing features that I actually need. Only slightly playful, as it's the reason I had to give up GNOME. ;-)

d--b(4090) 2 days ago [-]

Let's see how the OP's system looks in 20 years. Then we'll see how clear and simple it has remained.

The OP is railing against a culture that never existed. Bank software architects are not sitting in their offices smoking cigars and making UML diagrams that they send to coders, only to realize later that they made the wrong tradeoff.

What happens is:

You design a system for what it's supposed to do. You do it the way the OP says: nice ad hoc charts, talk to a lot of people, everybody agrees, all is swell.

Then time goes by: new integrations are made, newer tech comes out, customer needs change, the business orientation changes. And what used to be neat starts to become ugly. People cut corners by putting things where they don't belong to save some time. And then it's all downhill from there.

There is a toilet analogy that works well. If you go to a bar and the toilet seat is very clean, then you'll make sure to clean it before leaving. But if the toilet is already dirty, you're not going to be the one cleaning other people's pee, so you just add your own and leave.

The same is true in software architecture: once it doesn't look neat, everybody just piles their crap on in whatever way demands the least effort. "The system is shit anyway."

I find it a little easy to say: "ha look at those guys, they spend hours trying to sort out system architecture, while all you really need is common sense".

snapetom(10000) 1 day ago [-]

> Banks software architects are not in their offices smoking cigars and making UML diagrams that they send to coders,

You'd be surprised at how common this is, especially in large companies that play 'let's pretend to do technology'. I'm leaving a large hospital where I've spent half my time butting heads with our 'architect', whose skills have been frozen since 2005. Leadership is eager to chase modern buzzwords like 'machine learning' and 'AI', but this guy is advocating for outdated crap.

edelans(4193) 2 days ago [-]

It reminds me of the broken window theory https://en.wikipedia.org/wiki/Broken_windows_theory : same as the toilet analogy, but more classy

peteradio(10000) 1 day ago [-]

And then there's the only available bathroom that is filled 3 ft deep with tp and shit and you must cut paths through to make brown. That is where the magic thinking happens.

gerbilly(945) 1 day ago [-]

> But if the toilet is already dirty, you're not going to be the one cleaning other people's pee, so you just add your own and leave.

Good analogy, but in code it's more than just disgusting to clean up after others. Changing code that was poorly written by someone else may cause bugs, bugs that now become your problem.

The goal of every programmer faced with such a codebase—as in the dirty bathroom analogy—is to get in, do his business as quickly as possible, and get out. Iterate this over time and the problem just keeps getting worse.

It's like the tragedy of the commons, where each programmer pollutes a common resource because the incentives are set up to reward that kind of behaviour.

This leads the codebase to become a 'Big Ball of Mud', the most popular architectural pattern in the world: http://laputan.org/mud/

bigbluedots(10000) 2 days ago [-]

It's pretty rare these days that systems are maintained for that long. More than likely there'll be a rewrite every few years anyway to keep up to date with $EXCITING_NEW_TECH.

perlgeek(2666) 2 days ago [-]

Despite the provocative title, the author argues for software architecture, just doing it in a manner that suits the organizational culture.

He somewhat decries traditional software architecture material, which I find off-putting. IMHO the best approach is to be aware of the techniques, patterns, and reference architectures, and use just the parts that make sense.

rumanator(10000) 2 days ago [-]

> Despite the provocative title, the author argues for software architecture, just doing it in a manner that suits the organizational culture.

The problems demonstrated in the blog post go deeper than (and are not explained by) organizational culture. They convey the idea that the task of designing the architecture of a software system was assigned to inexperienced and ignorant developers who, in turn, decided that they didn't need to learn the basics of the task, and that winging it as they went along would somehow result in a better outcome than anything the whole software industry has ever produced.

There is a saying in science/academia along the lines of 'a month in the lab saves you hours in the library' - a snarky remark on how people waste valuable time developing half-baked versions of concepts that were already known, thought through, and readily available if they had only put in the time to do a quick bibliographical review. This blog post reads an awful lot like that.

bamboozled(4189) 2 days ago [-]

In my career thus far, I can honestly say I've never, ever, ever seen an 'Architect' who actually provided valuable inputs.

Not trying to say they don't exist, but I've just never witnessed someone with that title actually have a positive impact on any project I've worked on.

The only semi-positive value I've seen an architect provide is when they counter another architect, allowing the engineers to get on with their work without interference.

Maybe the issue with the job comes from the connotation that an architect is someone with supreme insight? Whereas usually they just oversimplify things and expect engineers to deal with the implementation details (the hard part).

bradenb(10000) 1 day ago [-]

I feel like an 'Architect' should not be a standalone role. The architect for a project should be an engineer working on the project that can make decisions about the underlying architecture when a decision is needed.

noobiemcfoob(4219) 1 day ago [-]

Much of an architect's role won't be visible to developers beneath them and -- like a manager -- involves coordinating with other projects or other business units. That a specific project exists at all to work on or is otherwise a discussion topic is often the result of an architect's work.

TeMPOraL(2647) 2 days ago [-]

I feel that being an 'Architect' is trying to do half of a job that's atomic and inseparable, because the 'architecture' half informs and is informed by the other half, 'writing and running code', and both work best in a tight feedback loop. An architect who doesn't write code has to rely on the engineers on their team to communicate all the insights gained by implementing the architecture - which is a really bad idea, because it's already hard to precisely articulate your feelings about the code to yourself, and now you have to explain them to another person and hope they understand.

petjuh(10000) 2 days ago [-]

We have a great architect right now, but he's really just an engineer designated as the 'architect'. He also codes sometimes.

Timberwolf(10000) 1 day ago [-]

In my experience, architects who are valuable to their teams tend to be the ones who rarely do any 'architecture' themselves; instead they work their arse off trying to smash apart every last blocker to the engineers in a team being able to own architectural responsibilities themselves. (This may include asking smart questions to help a team who don't really do systems thinking start engaging with it). This inevitably ends up off in the EA realm grappling with Conway-type questions: not so much 'how should we structure our software to make it good?' as 'how should we structure our organisation so it naturally produces good software?'

Sadly these people are also rare as it requires a combination of sufficient technical skill and the ability to effectively navigate the people side of the equation.

The 'white paper' style of architect is very frustrating in comparison, not least because they are too removed from the context and impact of their decisions. This results in a situation where a team views their architect as merely a source of additional work, much of which is frustrating and pointless if not outright damaging to the system being built.

tootie(10000) 1 day ago [-]

I was an enterprise architect for about a year, and it was the dullest, most soul-sucking job I ever had. In a sense, it was incredibly cushy: I had zero responsibility. I could easily just drop technical decisions on teams and not have to deal with the repercussions. But it really just drove me nuts. And I hated the other architects, because they had set this system up and seemed perfectly content.

My role before and after as a director was always to give my tech leads a really long leash. I try to never force decisions on them, but rather let them work their own way and my job is just to make sure they've considered the project goals correctly and their solution is going to fit.

pvorb(3861) 1 day ago [-]

In my company, every software engineer also has the software architect role. This way everybody is aware that they are welcome to think about the architecture of software. There are no dedicated architects. This works quite well in my experience.

moksly(10000) 2 days ago [-]

The value of Enterprise Architecture doesn't come into play until you're an actual Enterprise.

We operate more than 300 IT systems, from a myriad of different and switching (courtesy of procurement) suppliers. These systems are operated by 5000-7000 employees, and range from making sure employees get paid and patients get the right medicine to simple time booking apps. Most of these systems need to work together, and almost all of them need access to things like employee data.

Before we had a national strategy for enterprise architecture, which defines a standard model for organisation data, all of those 300 IT systems did it their own way - and most of them actually thought we liked that, so they came with no APIs. Imagine having to manually handle 5000-7000 users in 300 different IT systems...

That's the world without Enterprise Architecture, and we're still paying billions in taxpayer money to try to amend it. Because you don't move 300 IT systems, some of them running COBOL on mainframes, overnight. And that's just our municipality; there are 97 others with the exact same problems.

Don't get me wrong, I get the sentiment of the article and I actually agree with most of it. The thing is, though, developers have very different opinions about what "simple design" is. I know, because I've built a lot of the gaffa tape that integrates our 300 IT systems, and not a single system has had remotely similar APIs.

blub(4143) 2 days ago [-]

I've seen you mention your organization and the challenges you're facing a few times, and I'm curious what kind of architecture books or principles you'd vouch for based on your experience.

ajuc(3874) 1 day ago [-]

That's not architecture, just standardization.

corodra(4209) 1 day ago [-]

>The value of Enterprise Architecture doesn't come in to play until you're an actual Enterprise.

Probably the smartest thing ever said when it comes to design patterns.

To put it in non-tech terms, a lot of design patterns equate to learning how to build a suspension bridge when building a back patio for a house. There's value, sure, maybe. But don't kid yourself: 80% of projects don't survive more than 3 years at best, most never really get 'updated' after a year or two, and few ever see teams of more than half a dozen people.





Historical Discussions: Modern C, Second Edition (September 18, 2019: 566 points)

(575) Modern C, Second Edition

575 points 1 day ago by matt_d in 186th position

gustedt.wordpress.com | | comments | anchor

A new edition of the book Modern C is now available under a CC license via the following page

https://modernc.gforge.inria.fr/

This edition is the result of a collaboration with Manning and improves a lot over the previous edition; material has been rewritten and reordered, and a lot of graphics have been added. Manning is in the process of producing nicely formatted print and eBook versions; you can also find links and information on how to purchase them through the central link above.




All Comments: [-] | anchor

aportnoy(2431) 1 day ago [-]

Is this a good book? Can anyone comment?

aidenn0(4040) 1 day ago [-]

[edit]

This was originally a comment about a book I thought was the same but merely had a similar title.

ndesaulniers(1185) 1 day ago [-]

I'm perusing the Takeaways highlighted in yellow; there are multiple per page. Seems like some hard-learned advice that I mostly agree with.

> Takeaway 1.4.5.2 Don't use the , operator.

Yep: https://news.ycombinator.com/item?id=20773742
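
To see what that takeaway guards against, here is a minimal sketch (my example, not the book's) of how the comma operator invites misreading:

    #include <stdio.h>

    int main(void) {
        /* The comma operator evaluates its left operand, discards the
           result, and yields the right operand. */
        int a = (1, 2);        /* a == 2; the 1 is evaluated and thrown away */

        /* Inside a subscript, a comma is the operator, not a separator,
           so this reads row[2], not a two-dimensional element: */
        int row[3] = {10, 20, 30};
        int x = row[1, 2];     /* same as row[2], i.e. 30 */

        printf("%d %d\n", a, x);   /* prints: 2 30 */
        return 0;
    }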

torstenvl(4189) 1 day ago [-]

I'm not a fan so far. Lots of deliberately bad C code. Using uninitialized variables, etc.

Explanation of jargon seems to take priority over explanation of the language - the term 'string literal' is explained before the concept of a function.

abainbridge(10000) 1 day ago [-]

Looks reasonable to me. I've been programming C daily for 25 years. It's cosmetic, but it is nice to see 'int const foo;' being preferred to 'const int foo'. And 'We define variables as close to their first use as possible'. Less cosmetically, section 5.6 on named constants is great. That's a fiddly area of C from which to pull out reliable advice.
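
For anyone curious what those two style points look like in practice, a small sketch (mine, not from the book):

    #include <stdio.h>

    int main(void) {
        /* 'int const' reads right to left: foo is a const int.
           The convention pays off once pointers are involved: */
        int const foo = 42;      /* constant int */
        int const *p = &foo;     /* pointer to constant int */

        /* Defining variables as close to their first use as possible
           keeps their scope tight: */
        for (int i = 0; i < 3; ++i) {
            int const doubled = 2 * i;   /* lives only in the loop body */
            printf("%d %d\n", doubled, *p);
        }
        return 0;
    }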

sosodev(10000) 1 day ago [-]

I really like the first edition. Old C diehards hate it because it breaks a lot of tradition. Personally I think it's really refreshing, and it made me appreciate C a lot more.

SAI_Peregrinus(10000) 1 day ago [-]

The first edition has been my go-to for introducing some of the newer (1999 and later) features of C (and some of the subtle footguns) to people. I'd definitely recommend it as a first introduction to C.

k_sze(4199) 1 day ago [-]

How does this book compare with '21st Century C'?

I know that, unlike 'Modern C', '21st Century C' can't possibly cover C17 because that book was released in 2014.

But otherwise, what would be notable differences? In terms of style, correctness, idiom, depth, and breadth?

fermigier(3224) about 19 hours ago [-]

'Modern C' is a beginners book.

'21st Century C' is for people who have learned C in the past but need to brush up their knowledge with modern practices.

As someone who learned C at the time of the first edition of the K&R, '21st Century C' would be more useful if I ever had to code in C again (which hopefully I won't). A refresh, 5 years later, would be useful, though.

AlexeyBrin(407) about 20 hours ago [-]

C17/C18 is a minimal 'bugfix release' to C11, mostly standard clarifications.

https://en.wikipedia.org/wiki/C18_(C_standard_revision)

big_chungus(3932) 1 day ago [-]

As long as he's publishing under a CC license, it might be nice to release the LaTeX source as well. PDF is usually good enough, but it sometimes needs to be converted to other formats, and you usually get better results going latex->epub than pdf->epub. On some devices it's a lot easier to be able to change attributes to better fit the form factor.
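
(For what it's worth, assuming the source is reasonably plain LaTeX, a tool like pandoc can do that conversion directly, e.g. 'pandoc book.tex -o book.epub' - hypothetical file names.)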

compressedgas(10000) 1 day ago [-]

If the source was available, we could fix all the formatting errors.

skocznymroczny(10000) about 20 hours ago [-]

I'd love a language that feels like a more convenient C: C as it is, with some niceties like built-in containers and some nice syntactic sugar. Maybe someday Zig will get there.

olah_1(10000) about 17 hours ago [-]

You may also want to look at Odin: https://odin-lang.org/

I found it by hyping up Jai's 'using' keyword. Then someone told me that Odin has the same thing: https://odin-lang.org/docs/overview/#using-statement

coldtea(1239) about 18 hours ago [-]

And preferably part of Clang and GCC collection...

BubRoss(10000) about 5 hours ago [-]

A language called Clay was basically this. With better marketing and follow-through it could have taken off. It had templates and move semantics and was interchangeable with C. The author was using it to write parts of a program that was already written in C.

ktkization(10000) about 18 hours ago [-]

Have you looked at the Crystal language? Its slogan is 'fast as C, slick as Ruby'.

gnode(10000) about 20 hours ago [-]

Why would conservative use of C++ not meet this mark? It's highly compatible with C, has container types and ranged for-loops, and its operator overloading makes use of container types more sugary.

giancarlostoro(3177) about 18 hours ago [-]

D is pretty darn nice, and you can write code as if it were a C project; you don't have to use classes if you don't need them. I also view Go as very C-like, honestly. People complain about things missing from Go that are technically not in C either, except Go has a lot more batteries-included stuff out of the box.

mrspeaker(391) 1 day ago [-]

Damn it, two giant gross 'toenail fungus' ads taking up most of the page - shocked me a bit and I closed the page really quickly (and probably now associate this book with fungal diseases - choose your ad providers carefully!)... Is there a non-fungus-y link to this book somewhere?

boring_twenties(10000) 1 day ago [-]

I was theoretically aware that there are people out there who don't use ad blockers, but I sure as heck didn't expect to find one on Hacker News.

I'd recommend uBlock Origin.

ncmncm(4143) 1 day ago [-]

Contradiction in terms.

mumblemumble(10000) 1 day ago [-]

C itself is an old language that lacks a lot of features that are near-universal in newer languages, but the language is still evolving, and there is still a valuable distinction to be made between how people preferred to write C decades ago and what's considered good style today.

saagarjha(10000) 1 day ago [-]

There's a lot of new things in C since the K&R book came out.

Santosh83(2075) 1 day ago [-]

On a related note, do we have any fully compliant C11 compiler for MS Windows, or is using gcc under WSL (or a VM) the best option? I take it there is no C17-compliant compiler yet?

AlexeyBrin(407) 1 day ago [-]

Clang and GCC both support C17, and there are binary versions for Windows; see MSYS2. I think Pelles C also supports C11.

rwmj(735) 1 day ago [-]

GCC 8 and above support C17, and there is a version available for Windows. See also: https://gcc.gnu.org/onlinedocs/gcc/Standards.html

jeremyjh(4019) 1 day ago [-]

You don't need WSL to run gcc on Windows. You can use the MinGW distribution to get a GNU toolchain that compiles a standard Windows EXE.

sigjuice(4077) 1 day ago [-]

IMHO, the phrase "C is a compiled programming language" is super confusing.

Edit: there are many languages that have both compilers and interpreters. There are several C interpreters as well. The classification of languages as "interpreted" or "compiled" does not appear to be a sound concept, IMHO.

KevinEldon(4205) 1 day ago [-]

What about the phrase do you find to be super confusing? It is a short summary (even labeled as takeaway 0.1.2.1 in the text) of the first paragraph of section 1.2. That paragraph explains how C source code is just text and that it is turned into an executable program with a compiler. In context the phrase seems more than clear.
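
In case a concrete example helps: assuming a Unix-like system with GCC or Clang installed, the source file is plain text until a compiler turns it into an executable.

    /* hello.c - just text until compiled */
    #include <stdio.h>

    int main(void) {
        puts("hello");    /* runs only once a compiler has produced an executable */
        return 0;
    }

    $ cc hello.c -o hello
    $ ./hello
    hello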

pmikesell(10000) 1 day ago [-]

Why? I guess it's because some interpreted languages are also compiled?

'Compiled' traditionally means 'not interpreted', or rather 'compiled to machine code'.

fxleach(10000) 1 day ago [-]

I guess there's no pre-ordering; the link is broken: https://www.manning.com/books/modern-c

montalbano(2149) 1 day ago [-]

Depends where you are. Waterstones (UK) has it available for pre-order (it's where I ordered mine from):

https://www.waterstones.com/book/modern-c/jens-gustedt//9781...

Also Book Depository, which I think delivers worldwide:

https://www.bookdepository.com/Modern-C-Jens-Gustedt/9781617...

syphilis2(4218) 1 day ago [-]

Is there a place to report errors? I've just skimmed through, and the PDF has some possible formatting bugs: tables inside code blocks, page breaks splitting the tops of code blocks, small things like that.

infiniteseeker(4221) 1 day ago [-]

From the author's page:

> I find such a C library project aiming to be standard .... with a really humane discussion culture, a proof that people (even men!) can constructively work together to achieve great things without denigrating each other.

Is this casual sexism helpful to anyone, men or women?

profitnot(10000) 1 day ago [-]

I can only beg that you lighten up a bit - after all, it's men that have caused and fought most wars, etc. The author is a man poking lightly at the tragedy of man's ego. Do you really need a list of examples to verify this?

Anyway... lighten up.

- a Man

reificator(10000) 1 day ago [-]

Whether it's helpful or not, I hope it's self aware. Jumping from denigrating men to celebrating a culture without denigration in a single sentence is impressive.

maxymajzr(10000) 1 day ago [-]

Why is it always that SJWs like you need to find that one joke, that one excerpt, and turn it into some kind of personal war/problem/argument/discussion/hate? Instead of focusing on the VALUE brought by the author, you managed to dig up one benign joke and went on to discredit the entire thing. What's worse, you actually put EFFORT into finding something you could use for this purpose.

Why aren't you ashamed of yourself?

dleslie(10000) 1 day ago [-]

> Is this casual sexism helpful to anyone, men or women?

It communicates the author's perspective to the reader, and that may be useful to them if they believe (correctly?) that such a statement will bring them favour.

Personally, I dislike it and hold a dim view of it; but I can't speak for others.

throwaway-hn123(3305) 1 day ago [-]

You pathetic, pompous millenial cunt. Shut. the. fuck. up.

tasogare(10000) 1 day ago [-]

That sounds like irony. Given that the author is German and has lived in France for years, it could totally be sarcasm denouncing the current situation. It's hard to tell just from text, though.

sys_64738(3735) 1 day ago [-]

Do people still use K&R these days?

earenndil(3273) 1 day ago [-]

Yes, it's a pretty good book. Not sure what the siblings are on about.

dlp211(4045) 1 day ago [-]

Yes, but that doesn't make it a good idea.

codemonkeymike(4214) 1 day ago [-]

Used it in college about 8 years ago. Can't say I'd do the same today, but back at that time it made some sense as college programming classes were teaching more than just practical programming knowledge.

wyldfire(600) 1 day ago [-]

I have it on the bookshelf but haven't consulted it for many years.

Zed Shaw is sometimes a bit controversial, but he wrote 'Learn C the Hard Way' (or something similarly titled) that aims to obsolete K&R.

mistrial9(10000) 1 day ago [-]

I would liken that to 'Middle English' or similar interesting but archaic forms. https://en.wikipedia.org/wiki/Middle_English

codesushi42(10000) 1 day ago [-]

K&R is a classic. A true exemplar of technical writing.

big_chungus(3932) 1 day ago [-]

That's what I used first. I did a lot more study, but I think that's actually the only book I used. What did it for me was trial-and-error and practice, but the most important part was getting the basics down so I could start reading through other people's good code.

arawde(10000) 1 day ago [-]

I can't speak for everyone, but I read K&R 4 years ago to supplement some of my classes in university.

saagarjha(10000) 1 day ago [-]

I did. It's short, clear, well written and I found it an excellent introduction to C. I'd recommend it if you read it alongside some more modern supplements.

Gibbon1(10000) 1 day ago [-]

If someone tells you to use K&R take that as license to ignore their advice from that point on.

baby(2335) 1 day ago [-]

Isn't modern C pretty much Rust?

correct_horse(10000) 1 day ago [-]

I like Rust, but this blog post treats the topic of C replacement well: https://drewdevault.com/2019/03/25/Rust-is-not-a-good-C-repl... Rust is a C++ replacement.

steveklabnik(47) 1 day ago [-]

Could we not do this, please?

mumblemumble(10000) 1 day ago [-]

In terms of overall language design and semantics, Rust owes a lot more to ML. I'm not sure there's really much of anything that it owes to C.

Agreed with the commenter elsewhere in this thread that Zig seems like a more reasonable contender for a modernized descendant of C.

lone_haxx0r(10000) 1 day ago [-]

Call me when Rust becomes the interface to the most used kernel in servers and supercomputers.

coldtea(1239) 1 day ago [-]

No, for the foreseeable future (20-30 years) C will still be 10x larger than Rust in code and deployments...

nullbyte(4206) 1 day ago [-]

A little off topic, but that is some terrible art on the book cover.

Etheryte(4098) 1 day ago [-]

To be fair, I don't think I've ever seen a serious, hardcore programming book that looked nice. They're either neutral, or just downright weird [1].

[1] https://en.wikipedia.org/wiki/Structure_and_Interpretation_o...

giorgioz(10000) about 24 hours ago [-]

egghead.io has some awesome illustrations for its courses: https://egghead.io/browse/languages/javascript Technical book publishers should just stop hiring ink illustrators who like medieval drawings and get graphic illustrators who like Adobe Illustrator and SVGs.

reificator(10000) 1 day ago [-]

> A little off topic, but that is some terrible art on the book cover.

Relative to other books in the series, or do you object to the 'historical fashion' style book covers that publisher tends to use?





Historical Discussions: Too Many Video Streaming Choices May Drive Users Back To Piracy (September 18, 2019: 554 points)
Too Many Video Streaming Choices May Drive Users Back to Piracy (April 07, 2019: 2 points)

(567) Too Many Video Streaming Choices May Drive Users Back To Piracy

567 points 2 days ago by kamiYcombi in 3857th position

www.techdirt.com | Estimated reading time – 5 minutes | comments | anchor

Ironically, Too Many Video Streaming Choices May Drive Users Back To Piracy

from the adapt-or-perish dept

To be very clear the rise in streaming video competitors is a very good thing. It's providing users with more choice, lower prices, and better customer service than consumers traditionally received from entrenched vanilla cable TV companies. It's the perfect example of how disruption and innovation are supposed to work. And given the abysmal customer satisfaction ratings of most big cable TV providers, this was an industry that's been absolutely begging for a disruptive kick in the ass since the 1980s.

But we've also noted that, ironically, the glut of video choices--more specifically the glut of streaming exclusivity silos--risks driving users back to piracy. Studies predict that every broadcaster and their uncle will have launched their own direct-to-consumer streaming platform by 2022. Most of these companies are understandably keen on locking their own content behind exclusivity paywalls, whether that's HBO Now's Game of Thrones, or CBS All Access's Star Trek: Discovery.

But as consumers are forced to pay for more and more subscriptions to get all of the content they're looking for, they're not only getting frustrated by the growing costs (defeating the whole point of cutting the cord), they're frustrated by the experience of having to hunt and peck through an endlessly shifting sea of exclusivity arrangements and licensing deals that make it difficult to track where your favorite show or film resides this month.

In response, there's some early anecdotal data to suggest this is already happening. But because these companies are fixated on building market share, and this will likely be an industry-wide issue, most aren't seeing the problem yet.

Others are. The 13th edition of Deloitte's annual Digital Media Trends survey makes it clear that too many options and shifting exclusivity arrangements are increasingly annoying paying customers:

But the plethora of options has a downside: Nearly half (47%) of U.S. consumers say they're frustrated by the growing number of subscriptions and services required to watch what they want, according to the 13th edition of Deloitte's annual Digital Media Trends survey. An even bigger pet peeve: 57% said they're frustrated when content vanishes because rights to their favorite TV shows or movies have expired.

"Consumers want choice — but only up to a point," said Kevin Westcott, Deloitte vice chairman and U.S. telecom and media and entertainment leader, who oversees the study. "We may be entering a time of 'subscription fatigue.'"

As it turns out, people don't like Comcast, but they do ironically want a little more centralization than they're seeing in the streaming space. What that looks like isn't clear yet, but it's something that will slowly get built as some of the 300 options (and growing) currently available fail to gain traction in the space:

All told, there are more than 300 over-the-top video options in the U.S. With that fragmentation, there's a clear opportunity for larger platforms to reaggregate these services in a way that can provide access across all sources and make recommendations based on all of someone's interests, Westcott said. "Consumers are looking for less friction in the consumption process," he said.

Variety's otherwise excellent report doesn't mention this, but a lot of these customers are going to revert to piracy. It's not clear why this isn't mentioned, but it's kind of standard practice for larger outlets to avoid mentioning piracy in the odd belief that acknowledging it somehow condones it. But if you don't mention it, you don't learn from it. You don't understand that piracy is best seen as just another competitor, and a useful tool to gain insight into what customers (studies repeatedly show pirates buy more content than most anybody else) really want.

It's easy to dismiss this as privileged whining ('poor baby is upset because they have too many choices'), and that's certainly what a big segment of the market is going to do.

But it would be a mistake to ignore consumer frustration and the obviously annoying rise of endless exclusivity silos, given the effort it took to migrate users away from piracy and toward legitimate services in the first place. The primary lesson learned during that experience is you need to compete with piracy. It's not really a choice. It's real, it's impossible to stop, and the best way to mitigate it is to listen to your customers. Building more walled gardens, raising rates, and ignoring what subscribers want is the precise opposite of that.

Filed Under: piracy, streaming




All Comments: [-] | anchor

xfitm3(10000) 2 days ago [-]

I will always pirate shows. I cut the cord 11 years ago due to my objection to cable company billing practices. The entire industry is greedy. I was a Netflix subscriber for a while, but I burned through all the good stuff and left.

robertoandred(10000) 1 day ago [-]

YOU are greedy. You're not entitled to free things. Do you skip out on restaurant bills, too?

commandlinefan(10000) 1 day ago [-]

I don't even know if I'm "pirating" or not. There's pretty much nothing on Netflix anymore, and I can't even figure out how to sign up for HBO streaming (I think I have to get a code from cable provider first?) if I even cared enough to do so. I have found a few ad-supported/free streaming sites like Tubi and Popcornflix, plus some public domain streaming sites like publicdomainflix and archive.org. Are they legitimate? I don't know - I have no way of even finding out.

cryptofits(4040) 2 days ago [-]

Good point

When there are too many options, it becomes too much of a hassle to actually choose something.

Just picture yourself walking into a restaurant with a massive menu; choosing a dish will be hard.

On the other hand, a pop-up restaurant with 3 dishes will make the dinner much more fun for everyone (that's from my experience, at least).

utf985(10000) 2 days ago [-]

What if you don't really want any of the 3 dishes?

eitland(3419) 2 days ago [-]

No surprise. Even we (HN) predicted this the moment the first content company withdrew their content from Netflix.

Two reasons:

1. People only have so much money.

2. When people compare it to what they used to pay, it seems unfair. People complain when gas prices increase by a few percent; no wonder they complain when they suddenly have to pay twice or even three times for the same content they used to get for one low monthly fee.

Oh, and they probably also compare to Spotify.

I see three ways this market could work that immediately feel fair to me:

- all you can eat/listen/watch (Spotify model)

- pay per view but a lot lower than today. (Can work even with multiple streaming sites.)

- a hybrid approach where you pay for a number of monthly credits (should be somewhat cheaper than the Spotify model)

jfim(10000) 2 days ago [-]

Pay per view has the drawback of requiring too many decisions, leading to decision fatigue.

Subscribing to a service for $20/mo is a one-time decision, whereas figuring out if the show you're about to watch is worth $0.30 gets tiring really quickly.

Monthly credits work a bit better, but the all-you-can-eat model completely avoids this.

macspoofing(10000) 1 day ago [-]

>No surprise. Even we (HN) predicted this the moment the first content company withdrew their content from Netflix.

Did you need to predict anything? Why would you think that Netflix would get to be a monopoly for all content delivery?

teeray(3282) 1 day ago [-]

> hybrid approach where you pay for a number of monthly credits

We can call this the Audible model if you like

akhilcacharya(4196) 2 days ago [-]

I remember the days (circa 2009/2010) when the main complaint about streaming Netflix was that there was nothing on it compared to the disc collection. When exactly did that change?

There must have been a brief "monopoly period" when Netflix had all of the content, but I don't remember it.

MisterTea(10000) 1 day ago [-]

Perfect example. I have Netflix, Prime and Hulu. That's already too many, as most have a bunch of crap and a few shows I like, and some have duplicate content.

I watched One Punch Man on Netflix and can also watch it on Hulu. Season two was released on Hulu. Well, that works for me, but what about people who had Netflix, like OPM, and now want to watch season two?

I've been binging Star Trek. I'd also like to watch the new series on CBS. Of course, CBS had the bright idea of starting their own streaming service. The cost is $5.99/mo with commercials or $9.99/mo without commercials. No.

So all the idiots taking their toys home and starting their own streaming businesses are driving piracy. I used to download TV shows all the time, but streaming made that unnecessary, as I could pay a little money to have legal access to plenty of things I want to watch. No one wants to juggle multiple streaming services just to watch a few shows. People don't want to go back to spending $200 on TV and internet. I'd rather spend no more than $25/mo on TV and $50 for >=100mbit internet. That's an ideal number considering I spend less on electricity each month.

fucking_tragedy(10000) 1 day ago [-]

> I'd rather spend no more than $25/mo on TV and $50 for >=100mbit internet. That's an ideal number considering I spend less on electricity each month.

Agreed. Other countries are able to do it, but cable companies and ISPs have monopolies in different regions of the US.

AnIdiotOnTheNet(3914) 1 day ago [-]

No shit. Valve worked out the key to combating piracy in the early 2000s. To quote Gaben himself:

'We think there is a fundamental misconception about piracy. Piracy is almost always a service problem and not a pricing problem. If a pirate offers a product anywhere in the world, 24 x 7, purchasable from the convenience of your personal computer, and the legal provider says the product is region-locked, will come to your country 3 months after the US release, and can only be purchased at a brick and mortar store, then the pirate's service is more valuable.'

People pirate media more for convenience than to save money.

tombert(4137) 1 day ago [-]

I've said this for years.

Piracy, in many cases, offers a superior product to streaming. If I go to ThePirateBay, I'm virtually guaranteed to find whatever movie I'm looking for, in 1080p (or even 4K nowadays) quality, without any DRM, which can then be easily streamed from Plex or Emby on any of my devices with no regard to regions, often with more options for subtitles and audio tracks (if you're watching anime).

I have Netflix, Hulu, and Amazon Prime, and frankly I already hate having to search all three for a movie, just to find that they don't have it anyway; I already pay enough for these services, I really have no interest in signing up for yet another just to make my search for a movie even longer.

Spotify (and nowadays Apple Music for me) has made it so that I've completely stopped torrenting music, since they offer a superior product to ThePirateBay. It has nearly any song I'm looking for, in decent quality, and there aren't banner ads for 'Hot MILFs in my area!' all over the place in a convenient app that works well on my phone. They made the legal option superior, and as a result I don't mind paying the ten bucks a month for it.

jdc0589(10000) 1 day ago [-]

there's a cliff though. I pay for prime and Netflix, and once or twice a month I'll rent a movie from Google Play if it's something I really want to see.

however, in my mind paying any more than that approximate monthly total is unreasonable, and not an option.

And to the article's credit, I think about booting my Plex setup back up pretty frequently now.

draugadrotten(2991) 1 day ago [-]

Valve is right. I haven't pirated a single game since I started using Steam, and I have spent dollars on games I will never ever have time to play. There is no way I will subscribe to 10 different movie streaming services. I want it all in one client. If they want to charge me 50 dollars for the latest movies, so be it. I just want the convenience of one single source available 24x7 wherever I am. Until then, /r/usenet.

Apple TV, I know this is where you are going. Netflix, I know you tried and failed.

redwall_hp(10000) 1 day ago [-]

This article's title/premise is being very generous to the media companies. It's not 'too many choices' driving people to piracy, it's the bloody parasites balkanizing everything and forcing you to pay more subscriptions to more services just to watch a handful of shows.

People are happy to pay for Netflix and/or Amazon Prime, with a model of 'you pay a streaming service provider and get shows from many media companies.' But the new model is 'media companies all want their own subscription service.'

Nition(3710) 2 days ago [-]

The last remaining video rental stores are closing down just as streaming services become so fragmented that making a short trip to rent any major film or TV series you want for a few dollars is starting to look okay again.

scarface74(3825) 2 days ago [-]

You've never been able to get many of the most popular movies from a streaming service without paying for them individually. Netflix's streaming catalog of popular movies has never been that great.

onychomys(3532) 1 day ago [-]

According to wiki, as of 2017 there were about 40k Redbox locations in the US. They recently put one in the exit of my grocery store, and it gets a lot of use. I think it's a pretty good business model, since so many people only want to see relatively new big name movies.

40acres(3652) 1 day ago [-]

I don't think this is as big a threat as these comments make it seem. For the tech savvy, setting up a Plex server or whatever can be done in an afternoon's time. The majority of consumers won't go through the 'hassle' of setting up their own infrastructure and will make judgement calls on what platforms align with their interests.

I'd bet on Netflix (incumbency), Prime (bundling), Apple (integration with iPhone) and Disney (best back catalog) being the leaders in this space for the next 5 years. Despite major networks getting into the game, it's hard to see them going all in and cannibalizing their TV business just yet.

laumars(3426) 1 day ago [-]

You don't need a dedicated Plex server to pirate. Any BitTorrent client will do, and people might just be happy using a built-in media client. Plus, people are often happy to buy systems pre-built (remember the 'Kodi box' craze?). Piracy is a service problem, not a price problem, so people are willing to pay for systems like this if it's easier than managing subscriptions to multiple services (and the fact that it usually works out cheaper to pirate is an added bonus).

jpetrucc(4198) 1 day ago [-]

Is this the final state of attempting to maximize profits? It feels like it's converging back towards the cable model [0]. Especially now with ad-supported streaming plans on Hulu [1] and increasingly more and more service exclusives [2] (same site as OP).

Streaming services are slowly but surely becoming cable 'Channels'/'Packages' now, and of course that's going to increase piracy - we've seen it happen before. If you make something people want, but make it overly expensive or inconvenient to consume, piracy is an attractive (albeit morally grey) alternative for many.

Sure you can buy the disk set for whatever shows you like and skip out on the streaming services, but that in itself comes with a level of inconvenience (and cost) - why use physical disks when you can just download the content, share on a NAS/Plex server, and have it on any device? Once you've made it more inconvenient than people are used to, you've again made piracy an option.

I wonder when we'll start seeing bundles of streaming services for a single price!

[0]: https://www.itproportal.com/features/is-streaming-in-danger-...

[1]: https://help.hulu.com/s/article/how-much-does-hulu-cost?lang...

[2]: https://www.techdirt.com/articles/20190627/11202942488/strea...

pmiller2(3092) 1 day ago [-]

Considering the average streaming service cost is a $12-15 subscription, once you start subscribing to 2-4+ of those, the cost becomes comparable to basic cable. Welcome to a la carte cable.

More precisely, I would say it's the balkanization of streaming services that's the real problem. If I could sign up for one service that offered everything I wanted to watch, on demand, for $50, I'd consider that. Now, Disney stuff is on the Disney service, some shows are on Netflix, others on Hulu, and a few on Amazon. On top of that, you have to consider that all the major TV streaming services also produce their own content, which is exclusive to their services and not licensed elsewhere. Even keeping track of what's on what service is a chore.

My personal compromise has been to use Prime Video and Netflix, because I already subscribe to Amazon Prime, anyway, and Netflix is a good incremental add. But, I don't want 5 subscription services and to have to keep track of what's on what service. It's bad enough with 2.

cjslep(4193) 2 days ago [-]

This is a tragedy of the commons scenario.

Instead of (all numbers illustrative):

Group A: 100%

group B gets jealous and does:

Group A: 25%, Group B: 25%, Piracy: 50%

As long as B's shareholders are happy (0->25%) then 'who cares'.

TeMPOraL(2647) 2 days ago [-]

It's worse than that. Netflix does pay for the IP it streams, and there's more players involved. The end result is, each of them will get even less, and some pirates can even make money off this - all they need to do is to collect and curate links to pirate media sources, and charge a tiny amount for ad-free streaming. And they are already doing it.

coldtea(1239) 2 days ago [-]

Why don't major services (Netflix, Disney, Hulu, etc.) come together to build a best-of-breed shared service that operates vendor-neutrally, where each gets to keep all the profits from people watching their stuff?

Each could still have its own promotions, bundles, etc (e.g. pay X more to watch new movies from Disney), but at least the base proposition can have a single subscription price and some pay as you consume more scheme, single analytics, etc.

In my ideal autocratic regime, they'd be forced to work under such a scheme, like power companies are forced to ultimately connect to the same grid (at least in some countries).

larntz(10000) 2 days ago [-]

They could create a new company to keep track of who is watching what so each content creator gets paid accordingly. And it would be one company to pay for the per view option you describe. I could see this growing to 100s of different content providers and available to nearly every home!

It's interesting that we've come full circle. This seems like an opportunity for the TV streaming services to add streaming content from different providers, and eventually we end up with $150/mo for streaming TV with premium channels like HBO, NETFLIX, DISNEY, ESPN, etc.

TeMPOraL(2647) 2 days ago [-]

Because no one forced them to :). I think they'll get around to working on that later on, but first they'll ruin the whole streaming space in an attempt to capture all the value they can from their IP directly.

idlemind(4165) 2 days ago [-]

I'm guessing they don't do this voluntarily because they've seen how the music industry had its revenues eroded by Spotify/iTunes (where mostly the user can pick one and get pretty much everything). They're probably ignoring the fact that music piracy is presumably lower than ever because of this.

tyingq(4179) 2 days ago [-]

Hasn't driven me to piracy. It has, however, driven me to subscribe/unsubscribe as needed. Paid for a month of HBO, for example, mostly for the Chernobyl series, watched that and a few movies, unsubscribed.

I think that pattern might hurt these services more than piracy.

kedean(10000) 1 day ago [-]

There's also the option of refusing the new services. When I think about what I'll do as content keeps fragmenting across services, I keep coming back to one answer: watch less video content. The time before easy streaming wasn't that long ago, and if the content makers and distributors aren't going to play ball, then I can entertain myself in plenty of other ways.

The bigger losers are going to be parents who have a harder time convincing their kids that they don't need, e.g., Disney+.

commandlinefan(10000) 1 day ago [-]

> subscribe/unsubscribe as needed

... if you remember to unsubscribe. I wonder how many of these monthly streaming services rely on the gym model where they hope that it's more trouble to unsubscribe than it is to just pay for the service month after month.

haywirez(3239) 2 days ago [-]

The only ethical pay-per-view system I can think of would be to stream directly from the production source (production house, artist), some sort of a paid RSS-like feed.

0-_-0(10000) 1 day ago [-]

Alternatively, pay a fixed monthly fee which gets distributed among production sources depending on what you watched. Cut out the middleman of Netflix, Amazon etc.

Could work with the rest of the internet too. Instead of relying on advertisements for revenue, you would pay a monthly fee which would get distributed among the websites you used. Aaaand we reinvented the Brave browser model...
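
As a back-of-the-envelope sketch of that payout model (hypothetical numbers and names; written in C purely for concreteness):

    #include <stdio.h>

    /* Split a flat monthly fee among producers in proportion to the
       hours a subscriber spent watching each one. */
    int main(void) {
        double const fee = 15.00;                       /* monthly fee */
        char const *producer[] = {"StudioA", "StudioB", "StudioC"};
        double const hours[]   = {10.0, 4.0, 1.0};      /* watch time */
        int const n = 3;

        double total = 0.0;
        for (int i = 0; i < n; ++i)
            total += hours[i];

        for (int i = 0; i < n; ++i)
            printf("%s gets $%.2f\n", producer[i], fee * hours[i] / total);
        return 0;
    }

With these numbers the split is $10.00 / $4.00 / $1.00; the hard part in practice is attribution, not the arithmetic.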

apexalpha(10000) 2 days ago [-]

People forget you don't need to pay for all of them yourself. About 5 people share the Netflix I use. I suppose someone else will get Disney and give me access.

TeMPOraL(2647) 2 days ago [-]

That's true. But streaming services are also heavily underutilized. My Netflix subscription is used by 5 to 6 people - that's why I actually pay for the highest tier, to minimize the chance that somebody will be locked out because others wanted to watch something else at the same time. Even with 5-6 people, I doubt my Netflix account is used for more than 10-15 hours a week in total.

ocdtrekkie(2741) 2 days ago [-]

Ugh, I am so sick of the 'if all of the content I ever want isn't available for less than a single $10 a month sub, I'll pirate it' nonsense. Netflix was never going to be the be-all, end-all of content, and it's time our generation grew up a little bit.

For the amount of content you could get in a traditional cable package, you should expect to pay about as much as a traditional cable package.

ShteiLoups(10000) 2 days ago [-]

Woah, free markets balancing themselves...

Crazy

lunchables(10000) 2 days ago [-]

But we removed the entire cost of the physical cable network from the equation. Unless you mean the cost of your internet service PLUS the cost of the streaming services should be about equal to your 'traditional cable package'? If so, then you're probably not far off. Except that the cable providers don't also get some margin off delivering the programming anymore.

Freak_NL(4197) 1 day ago [-]

Why?

A subscription to Netflix already grants me access to way more than I can reasonably watch. If my tastes were perfectly aligned with what Netflix offers (or Disney, or Amazon Prime, et cetera), I could live this exact scenario of everything I could want for less than $10 a month (and some people do).

But if my interests are even slightly more eclectic, I get to pay for each service separately, which is silly if we assume that the production costs are equal for the sake of argument. And even then I would miss out on lots of content because it is being kept in a vault until copyright expires, or because it is geoblocked because the content owner is fishing for exclusive deals, or some-such nonsense.

So economically, $10 a month seems perfectly feasible.

Also, keep in mind that the outrageous costs of the US traditional cable package are very US-specific. In many countries cable television never got that expensive and broad. For those consumers the notion of paying more than $30 a month just for television (no internet included) is absurd.

qaq(4209) 2 days ago [-]

That's the funny thing: I have HBO, Hulu, Netflix, and Amazon Prime, but sometimes it's easier to watch something on a pirate site than to search for which service has what.

mhb(82) 2 days ago [-]

I realize this is funny, but doesn't Roku do that for you?

raydev(10000) 1 day ago [-]

My cable provider had a crazy discount on their HBO+OnDemand offering, so I bought it, and I had complete access to nearly every HBO show and more for 2 years. I ended up torrenting either entire seasons or last night's episode because:

- The interface was painful to use. If I watched 2 eps of a show one night, and returned the next night, the box didn't remember where I was, so I would have to search up the show using the remote without a keyboard, scroll through an extremely slow UI of giant show posters that looked exactly the same and didn't show the episode number until I selected it. And that list would lead to another list, and every time I selected something, it took approximately 5-10 seconds to load the next page.

- It was 'HD' meaning it was outputting 1080p to my TV, but there was a ton of color banding, dark scenes had really obvious compression squares, etc.

- Newly aired shows would usually not appear until the next day, and sometimes they would appear, then disappear for a few hours, then reappear. The stated goal was to have the shows available right away, but something about the infra prevented them from reliably doing it.

- Once in a while I would be going through an old season, and some random episode, like the second episode of the season, would simply disappear from the list for a day or two.

- Maybe this is a Canada-only/CTV problem, but there were several CTV shows I tried to watch that had rewind/fastforward COMPLETELY DISABLED, presumably to prevent commercial skipping (because they had commercials in their OnDemand shows). If you watched a show halfway, and decided to come back and watch from that point, CTV said 'f* you, watch the whole thing again.'

In the cable company's defense, the interface to all this was very pretty. They were iteratively updating it too, very slowly making the OnDemand menus a little easier to use, so it wasn't some old garbage that never changed.

But the sheer number of actions I had to take to watch a single episode made me join a private TV tracker, where I reliably got extremely high quality video immediately after airings, and entire seasons neatly packaged in folders.

kalleboo(3960) 2 days ago [-]

It's even worse when a show is split by season among services

CivilianZero(10000) 1 day ago [-]

I really have to ask: is everyone in the comments part of an organized troll? Because I feel like I've lost my mind reading most of these comments.

It seems like everyone here thinks torrenting is as easy and simple as using streaming services actually is, and that streaming is as complicated as torrenting actually is.

I used to torrent stuff all the time. Constantly. I have several HDs laying around filled with torrented movies and tv shows. Why? Because I was in college and couldn't afford them.

The day I started making enough money to just sign up to services and buy copies of the movies I wanted I stopped torrenting forever. It's easy and I'm not stealing.

zzzcpan(3630) 1 day ago [-]

> It seems like everyone here thinks torrenting is as easy and simple as using streaming services actually is, and that streaming is as complicated as torrenting actually is.

Would be weird if people here didn't find torrenting easy and simple. And depending on where you get your torrents from, it could be significantly easier than streaming.

neuronic(10000) 2 days ago [-]

Germans/Europeans might feel with me: I had the pleasure of using Sky to watch Game of Thrones and a few movies and it was the most abysmal experience I have ever had.

Your account is an email combined with a 4-digit NUMERIC-ONLY (!) 'password' and once you are hyper-securely logged in the amount of usability pain a single piece of software can cause will immediately become apparent.

How anything paid for by a big corporation can be anywhere near as bad as the Sky apps is completely beyond me.

Their iOS app is somewhat usable, and I was happy I could just cast GoT using Google Chrome until I noticed that casting can only be done in German. The English original version wasn't available for casting, but was perfectly accessible in the PS4 app, Mac app and web app.

denvrede(10000) 1 day ago [-]

Thanks for the reminder to cancel the Sky Ticket subscription. I feel you. The most frustrating things are their SmartTV and AppleTV apps. If you've watched an episode or a movie, you can't start it again: it will start, jump to the end, and either recommend starting the next episode (in the case of a show) or do nothing (in the case of a movie).

You have to 'reset' the content in your iOS app and can then watch it on your other devices. Sky is cancer.

thirdsun(4019) about 17 hours ago [-]

Sky used to have a web client. Now you have to use their apps, even on the desktop, which come with a suite of monitoring services (Cisco VideoGuard) to prevent people from streaming their content. On my Mac Mini the video player easily takes up 80-100% of the 6-core/12-thread i7.

It's almost unwatchable. Unfortunately, it's a requirement if you're into German soccer.

Qwertystop(4213) 1 day ago [-]

> "Consumers want choice — but only up to a point," said Kevin Westcott, Deloitte vice chairman and U.S. telecom and media and entertainment leader, who oversees the study. "We may be entering a time of 'subscription fatigue.'"

Missing the point hard here. To the degree that 'consumers want choice', I would expect the choice they want is the choice of media, not the choice of subscription services. People don't subscribe to Netflix for its own sake, they subscribe because it lets them watch something they want to see.

pmiller2(3092) 1 day ago [-]

Bingo.

I don't want to keep track of what's on what platform, or pay $12-15/month for 5 different services. I would much rather pay ~$50/month for one service that had everything I wanted to watch on it, alongside some kind of good mechanism for content discovery (both recommendations and search).

vmchale(10000) 1 day ago [-]

> We may be entering a time of 'subscription fatigue.'"

Also known as... not wanting to spend three times as much? lol.

guitarbill(3948) 1 day ago [-]

Right. It isn't a choice if I can only watch/play/listen to/consume <x> in one place.

_pmf_(10000) 2 days ago [-]

The children at my children's preschool seem to be split into Netflix master race and Prime peasants.

andrewl-hn(10000) 2 days ago [-]

In my surroundings there are a lot of kids and teens who only watch YouTube (and Twitch), while streaming services like Netflix are seen as what 'boring older people' would watch. It's very similar to how people in their 30s and 40s perceive cable these days.

e_carra(10000) 2 days ago [-]

That sounds terrifying.

aquova(4218) 2 days ago [-]

I was discussing with a friend that in some bizarre alternative universe where things had played out differently, cable would seem like the great alternative to all the streaming services we have now. You only have to pay one monthly fee, and you have thousands of choices available to you! Sure, it's not on demand, but with so many options, and many things on a regular schedule, finding things to watch is easier than ever.

TeMPOraL(2647) 2 days ago [-]

Cable suffered because of two things: aggressive market segmentation with channel packs, and ads. Without those two, cable TV would be like Twitter for TV - a streaming service with a weird but perhaps adorable limitation that the streaming schedule is set for you.

flamtap(10000) 2 days ago [-]

There's something to be said about how much the cord-cutting software ecosystem that supports more-than-casual piracy has improved. If you have the hardware, you can host your own media using Plex as a server, and Sonarr to manage your library and automate acquisition of new TV episodes.

Plex is one of the best pieces of software I've ever used. It Just Works™. You need a decent internet pipeline to support remote streaming, but LAN is a breeze to set up. It's so feature-rich I don't know where to begin. I honestly prefer it to any other streaming service. There are apps on just about every platform you can think of (they had yet to release apps for Nintendo consoles last I checked).

I have my whole family set up on my server, and it doesn't cost them a dime. The primary drawbacks from their perspective are that they need to request shows and movies to be added, and I naturally don't have Netflix-level availability, but hey. Shit's free.

So when a cord-cutter with enough savvy to pirate video can actually replicate the services rendered by Netflix et al. on their own, why would they pay five times for something centralized that they can get for free[1]?

[1] Plex has $7/month premium features, with the option for a lifetime pass equal to about 2.5 years of subscription.

Cyph0n(4171) 1 day ago [-]

> The primary drawbacks from their perspective are that they need to request shows and movies to be added

Have you heard of Ombi? I've never tried it, but sounds like it could help with this issue.

https://ombi.io/

robertoandred(10000) 1 day ago [-]

Everything's free when you steal. Lame argument.

jasonkester(2236) 1 day ago [-]

Another factor in the mix is how poor a job a lot of the streaming services do at the mechanics of running their business.

I, for instance, pay for streaming packages from both the NFL and Formula 1, but more often than not the best way to watch the actual events is to download a torrent that some random guy recorded, edited, and posted. The video quality is just plain better. It's not missing the pre-event buildup coverage. And it doesn't randomly cut out and drop down to 160p all the time.

Now, I say I pay for these services, but actually, as of a couple of months ago, I can't even convince Formula 1 to take my money anymore. I live in France and first signed up using a US credit card. Now they won't let me put my card back in to renew, because it isn't tied to a French address. And they won't let me buy the US or UK version of the service (as I have cards and mailing addresses in both), because my history shows I previously bought the French one.

But then really, it's not that much of a pain, since even for the couple years I paid for it I don't think I actually streamed more than a couple races. The product itself was just that awful, but still it'd be nice to be able to give them some money...

rolltiide(10000) 1 day ago [-]

That's because there is more competition amongst pirates than amongst rights holders.

Turns out complaining about free things is actually an effective strategy for improving quality

raslah(10000) 1 day ago [-]

I was never patient enough for piracy as it takes diligence to find reliable sources. One thing that does irritate me is this general antipiracy attitude on the web today. It's like the entire population works for Big Tech. It used to be if you did enough searching, you could find pretty much anything you wanted, but now everyone acts like the piracy police and gets all offended if you even insinuate piracy. Probably sounds rather antisocial, but I think that attitude is dangerous as it gives corporations way too much influence, beyond what we've yielded them already. Maybe it's just the places I frequent.

izzydata(10000) 1 day ago [-]

I suspect they are a tad annoyed that you are potentially getting for free something that they paid for.

rezeroed(10000) 2 days ago [-]

My ISP (UK, Sky) has recently started blocking torrent/magnet sites. So I'm not convinced.

0-_-0(10000) 1 day ago [-]

Have you checked The Pirate Proxy's Twitter feed for the latest unblocked address?

bArray(10000) 2 days ago [-]

Finding the sites themselves is easy with a VPN, reverse SSH tunnel, or proxy. Make sure all of your torrents are encrypted and it'll be more difficult for them to actively block packets (which is the way it will eventually go).

SmellyGeekBoy(4216) 2 days ago [-]

Speaking as someone in the exact same situation, just go to Google and type in '[torrent site] mirror' and away you go.

swebs(3936) 2 days ago [-]

ISPs have been doing this since the dawn of time. A VPN will fix it.

eitland(3419) 2 days ago [-]

Cheap VPNs might not provide much serious security but they might help a lot with availability.

cameronbrown(3747) 2 days ago [-]

Sibling posters are correct. VPNs went mainstream a long time ago.

acuozzo(4208) 1 day ago [-]

Usenet

ozim(4167) 1 day ago [-]

Just fix your FOMO and you don't have to pay for all the crap you will never use.

I did not watch any episode of GoT and I don't feel like I missed something in my life. Occasionally I don't get some reference. I also don't run into the 'omg, you really haven't watched any of it? It's such a great series' people - and even if I did, I couldn't care less anyway.

slothtrop(10000) 1 day ago [-]

There's something to this. Of course it's a convenience to have better pickings for shows we want to watch, but seeing it all means spending an inordinate amount of time watching TV. Most of it is dull filler. People think too much of it.

A lot of people seemed to enjoy GoT in the earlier years, and the barriers to watching it are ridiculous. It's cheaper to shell out $100+ for the DVD set, even for a single watch. There's a reason it's the most pirated show of all time.

magashna(10000) 1 day ago [-]

It seems like everyone hated the last season so I feel justified in not watching now

jadams5(10000) 2 days ago [-]

10'ish years ago I remember people being mad at the cable companies for bundling all the channels together into all-or-nothing options. Why did I have to pay for ESPN when I just want the SciFi channel? Here we are today with lots of options where you only pay for what you want and now people want them all bundled back together again!

TeMPOraL(2647) 2 days ago [-]

People are still complaining about the same thing. Instead of 'why do I have to pay for ESPN when I just want the SciFi channel?', it's now 'why do I have to pay for the whole catalogue of yet another streaming service, and deal with its idiosyncratically broken UI, when I just want to watch that one show that one time?'.

People still want the same thing: to have all they want to watch available in a single place, through a single interface, and for a flat price. Without ads, market segmentation, or other exploitative trickery.

DHPersonal(10000) 1 day ago [-]

That's why I don't really believe people who claim that having to pick a streaming service is a burden too great to bear and can only be resolved through piracy. I don't think the issue is too little choice, too much choice, too much cost, or too much mental overhead - I think they just want to pirate and find a reason to use for the moment.

kevstev(10000) 1 day ago [-]

Technology has advanced, but the product has changed little. I don't want to pay a monthly subscription for all of the content on 300 channels, 280+ of which I will never watch. Despite paying $200 a month, I still don't even have the option to watch Spaceballs (substitute any particular movie here) on demand any time I want, even if I am willing to pay for that privilege.

We have the technology to make the content of those 300+ channels available to me in a piecemeal a la carte fashion, but cable companies say if I want to watch ESPN I have to pay for 100 other channels. If I only want to watch Game of Thrones, I have to pay a bit under $20 a month to get a whole bunch of other stuff from HBO I will never watch.

I want to pay for only what I watch, but the cable and streaming services want me to pay for what's available to me. That is the big disconnect, IMHO. There is no technical reason we can't have one service (like Netflix) that has the world's digital content on it, where I can search for and pick what I want to watch, pay for it, and have the content owners get a cut. Netflix was kinda close for a while, but then, due to their licensing deals, started pushing and pulling titles at random, and now different studios are trying to put up Chinese walls to ensure that they have exclusive rights to content. I feel everyone loses here, and I can't imagine ever paying for a separate service just to watch Star Wars or Disney movies. There seems to be a real network effect in just making everything available.

lm28469(10000) 2 days ago [-]

I watch movies once a week with a few friends. We have Netflix, Prime and 3 other smaller providers I can't recall the name of. We still have to look for 50% of the movies we want to watch 'somewhere else'.

It's even worse because most providers geo-lock their content. So a movie might be available on Netflix US but not on Netflix Spain, or it's available on prime but only in _foreign language_. When the movie is available, the service is slow, you get intermittent disconnections, the subtitles aren't available, &c. It's a running joke for us now; we have a hassle-free movie night once every three months, I'd say.

It's basically cable TV 2.0, it's such a pain in the ass.

blattimwind(10000) 1 day ago [-]

> or it's available on prime but only in _foreign language_

or it's only available in a dub.

sbarre(2482) 2 days ago [-]

'May' drive them back?

This has already happened in my circles...

People who had cut the cord and all but walked away from casual piracy, signing up for Netflix and HBO (and probably Prime Video, because they had Amazon Prime already), are openly talking about the fact that they're definitely not signing up for more streaming services, and are seeking out torrenting again for the stuff that's being pulled from Netflix (especially Disney shows).

I feel like there's a business opportunity for a meta-service that automatically manages monthly subscriptions for you on different services. You queue up a catalog of shows you want to watch, and it creates an optimized schedule for you and helps sign up for the individual streaming service on a given month in order to watch the show you want.

As long as you don't care about the zeitgeist, this would probably be fine for a lot of people.
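
A toy sketch of how such a meta-service might schedule subscriptions (entirely hypothetical show/service data; a greedy one-service-per-month heuristic, not anyone's actual product):

    #include <stdio.h>
    #include <string.h>

    struct show { char const *title; char const *service; int watched; };

    int main(void) {
        struct show queue[] = {
            {"Show A", "Netflix", 0}, {"Show B", "Netflix", 0},
            {"Show C", "HBO", 0},     {"Show D", "Disney+", 0},
            {"Show E", "Disney+", 0}, {"Show F", "HBO", 0},
        };
        int const n = sizeof queue / sizeof queue[0];

        /* Each month, subscribe to the service with the most unwatched
           queued shows, then mark those shows watched. */
        for (int month = 1, left = n; left > 0; ++month) {
            char const *best = NULL;
            int best_count = 0;
            for (int i = 0; i < n; ++i) {
                if (queue[i].watched) continue;
                int count = 0;
                for (int j = 0; j < n; ++j)
                    if (!queue[j].watched &&
                        strcmp(queue[j].service, queue[i].service) == 0)
                        ++count;
                if (count > best_count) { best_count = count; best = queue[i].service; }
            }
            printf("Month %d: subscribe to %s\n", month, best);
            for (int i = 0; i < n; ++i)
                if (!queue[i].watched && strcmp(queue[i].service, best) == 0) {
                    queue[i].watched = 1;
                    --left;
                }
        }
        return 0;
    }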

raxxorrax(10000) 1 day ago [-]

Happened in my circle too. But most are spontaneous consumers. They don't plan ahead for shows they might watch at some point. They like to watch a movie or series from time to time, but I don't think anyone would put in the effort for a 'streaming service VCR'. You would need to know about shows before watching them.

Still, might work for some, but you would also get back to a point where a lack of accessibility increases piracy.

on_and_off(10000) 1 day ago [-]

It has already happened to me as well.

Look, I want a convenient way to watch movies. I would even be ready to pay more than I do right now to Netflix to have a library of ALL the movies I want to watch.

1 library, not 10, especially if they may or may not contain the movie I want to see.

My honeymoon period with Netflix and co is over.

- I feel that their original content has dried up: there is even more of it, but it rarely interests me these days and I feel like the quality has dropped in favor of quantity.

- Let's say I want to watch Blade Runner. I want the 'final cut' (IMO the best version; it is the director's cut, unlike the 'director's cut', which isn't one in this case). Well, I have just checked, this movie is not on Netflix. If it was, I would not have the choice of the cut. I would not be able to e.g. listen to the team commentary either. That's the first movie that sprang to my mind, but that's a good example of my experience with Netflix.

A rolling library of content like what Netflix and co have is not what I want. At all. I want an ever growing one.

I like to be able to get my content 'legitimately' (quotes because if you watch old movies, chances are you are not paying the artists but the mega corporation who bought the rights to it, but that's OT), and I am not looking for solutions to 'cheat the system'. However, right now torrenting sites are becoming again the most efficient (and in some cases only) solution to access the content I would like to watch.

eyegor(4012) 2 days ago [-]

This is why in my circle we set up the town pump Plex server. Originally it was for streaming older movies (pre-1970) which you can't stream from any major service, but the new era of fragmentation has drastically expanded it. We bought a retired datacenter rack and everyone contributes a few bucks a month to cover running costs. Then we have scripts scraping the top x movies/shows from torrent sites once a week and a little web UI for adding individual magnet links or uploading files directly. It's so much easier to find things when they aren't changing availability every 2 months (Netflix et al.).

scarface74(3825) 2 days ago [-]

"Your circle" unless it includes families with small kids, isn't Disney's primary audience. Most normal middle class people aren't going to go through the trouble of pirating movies and setting up a Plex server and then worry about setting up their router to get a good connection while they are away from home and even then deal with the abysmally slow upload speed of the typical home network instead of spending $8 a month to just have the convenience of Disney+ to be a glorified babysitter.

In 2003, people said that the iTunes music store was going to be a disaster. Who would pay to download music when they could get it for free? That worked out pretty well.

enos_feedler(3968) 1 day ago [-]

I agree with the product opportunity around smart subscription management, but feel it will come from the big players who are already in the space of managing TV package subs (Amazon, Apple, Google, etc.).

ssully(3856) 1 day ago [-]

In my family it basically led to everyone subscribing to the services that interest them the most and sharing accounts. I subscribe to YouTube TV, Netflix and sometimes Shudder. I use HBO and Hulu from my family members.

not_a_cop75(10000) 1 day ago [-]

We have a quality inversion at the moment. The amateurs on YouTube and other services are often 1000x better than the 'original' (is it really original?) series by Netflix. Media companies are churning out pure shit and we, the public, are supposed to open our mouths like babies ready to take the next 'airplane' full.

ImprovedSilence(4197) 1 day ago [-]

I agree with you here, to a degree. It seems the model of 'one place' for all movies would be (and has been) a huge boon for viewers. But eventually I think that 'one source' would become monopolistic, and we would see consumer abuse and degraded service. If you doubt it, who would have thought anyone would abuse their monopoly on search? Or social? Or online retail? I guess I don't know the best long-term solution...

LanceH(10000) 1 day ago [-]

If someone asks for payment and I don't want to watch it enough to pay them, I don't watch it.

What this has driven me to do instead is to rotate subscription services and binge seasons in a month or less. So HBO is now a 1-2 month a year subscription. Showtime, Starz, etc... one month each.

hunta2097(10000) 2 days ago [-]

I think there is a model for rotating streaming providers. You can easily watch a year's worth of a provider's [good] content in 3 months.

Just change provider every 3 months to a different one, then binge it all.

Maybe a service that automatically deactivates your account at the best point in the billing cycle? If you could do it without too much overlap, you could actually do it on a monthly basis:

Jan - Netflix

Feb - Disney+

...

doctorpangloss(4080) 1 day ago [-]

> People who had cut the cord... As long as you don't care about the zeitgeist...

That's not really what it's about. You have no idea how utterly janky Plex and torrenting is. People on Hacker News find IT stimulating, so their experience is basically moot. It's like trying to predict trends based on a nerd's hot take of the iPod.

It has always been about enforcement. Seen in this lens, the hassle of pirating movies is itself a form of enforcement of copyright protections, even if the IT is not a legal instrument.

Enforcement has definitely weakened. Why though? It wasn't limited to bad PR, although again the Hacker News crowd is wildly incomparable to the population at large when it comes to issues like this.

What has changed is that TV makers are also now Internet providers. They are recouping the costs of piracy by raising your Internet prices. They don't make content you consume in the rest of the world, so naturally in the rest of the world fast Internet is dirt cheap.

Calling this a consequence of consolidation is accurate! It's just not about consolidation among ISPs, but between producers and ISPs.

And on top of that, all the people who held onto the cord were subsidizing your television. Now that they finally cut the cord, who's going to pay for it?

slothtrop(10000) 1 day ago [-]

> I feel like there's a business opportunity for a meta-service that automatically manages monthly subscriptions for you on different services. You queue up a catalog of shows you want to watch, and it creates an optimized schedule for you and helps sign up for the individual streaming service on a given month in order to watch the show you want.

Yeah, I think this is what people mean when they say they want 'one service'. An actual single service would mean a monopoly and extravagant prices. That being said, the other streaming options are being offered up by large telecoms that dwarf Netflix in what they control.

My PS4 menu already lazily sort of implements this in its video section, with a highlight menu of available shows on various platforms.

dreamcompiler(4034) 1 day ago [-]

I have those three services and I won't be adding any more monthly charges. Anything not on those three might as well not exist because it's not worth the risk of pirating (and my ISP blocks bittorrent anyway). So good riddance Disney, CBS, and anybody else. Your content is all dead to me.

js2(590) 1 day ago [-]

I have Netflix and Prime, and use iTunes for movie rentals, but I have no qualms about backfilling with Plex for TV shows and for movies not available via iTunes. Especially for shows that were originally available to me OTA or that were broadcast on a service that I had a subscription to at the time they were broadcast. I'm just time-shifting something I could have recorded at the time, right?

mc32(4157) 2 days ago [-]

I agree. There is definitely a market for a streaming bundling service where you pay for a bundle and get subscribed to multiple services for some monthly fee which would be at a great discount to subscribing separately to each service.

UncleMeat(10000) 1 day ago [-]

We used to have that. It was called cable. And for years people said 'ugh, why do I need to pay for ESPN, why can't I just subscribe to the parts I want'. And now we are closer to that world and everybody is saying 'ugh, why do I need to subscribe to different services, why can't I just pay more for a service with everything'.

8ytecoder(4156) 1 day ago [-]

I think there's also a major glut in TV shows right now. I'm not considering subscribing to anything new or pirating either. Just watching what's available already in my existing ones.

Another thing I'm planning on doing for movies is rotating subscriptions. Use Apple TV channels to subscribe to a different one every month or so and watch the movies that are there.

robertoandred(10000) 1 day ago [-]

Lowlifes who steal tv shows don't really need a push to keep stealing.

epanchin(10000) 1 day ago [-]

I subscribe to a few services, but I travel for work and half of them don't work abroad. Haven't yet gone back to torrenting, but I do sympathise with those that do.

MrMember(3889) 1 day ago [-]

Between the fragmentation of services, hostile UX, and price increases, I had no issue going back to piracy. I don't have a problem paying for content, but when it's significantly easier and more user-friendly to just download what I want, and I can watch it on any device I want, that's what I'm going to do.

I used to pirate music because that was the easiest and most straightforward way to get what I wanted. Now I buy it from Amazon because with one click I can pay for it and get a direct download link to the DRM-free mp3s.

ekianjo(323) 1 day ago [-]

> I feel like there's a business opportunity for a meta-service that automatically manages monthly subscriptions for you on different services

The better model would be to have all video providers make streaming available through a common standard API, and let other companies compete on front-end/store and features instead of having every single provider re-inventing the wheel behind walled gardens over and over again.
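As a rough illustration, every provider could implement one small shared interface and front-ends would then compete on top of it. The interface below is entirely hypothetical (no such standard exists); the class and method names are invented:

    # Hypothetical common interface that each provider would implement.
    from abc import ABC, abstractmethod
    from dataclasses import dataclass
    from typing import List

    @dataclass
    class Title:
        id: str
        name: str
        regions: List[str]  # where playback is licensed

    class StreamingProvider(ABC):
        @abstractmethod
        def search(self, query: str) -> List[Title]:
            """Return matching titles from this provider's catalog."""

        @abstractmethod
        def stream_url(self, title_id: str, region: str) -> str:
            """Return a playable URL, or raise if not licensed in that region."""

    # A third-party front-end could then federate search across providers:
    def federated_search(providers: List[StreamingProvider], query: str) -> List[Title]:
        return [t for p in providers for t in p.search(query)]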

nesadi(10000) 1 day ago [-]

Seems to me like everybody here is out of touch. It's not the 2000s anymore. Torrenting isn't the only option, or even the best option anymore. There's a crazy amount of great non-legit streaming sites where you have access to probably every movie or series you'd ever want to watch. No need to subscribe to 5 different services for 50€, I can go to one site and watch everything for free in a convenient, simple way. Everybody I know does it, especially now that Netflix is a husk of what it used to be content-wise and how terrible it's become usability-wise.

nradov(886) 1 day ago [-]

The streaming services will never provide the APIs necessary for such a meta-service to operate efficiently.

zeruch(10000) 1 day ago [-]

> I feel like there's a business opportunity for a meta-service that automatically manages monthly subscriptions for you on different services. You queue up a catalog of shows you want to watch, and it creates an optimized schedule for you and helps sign up for the individual streaming service on a given month in order to watch the show you want.

Basically a premium Chromecast/Roku/etc. service would help seal that up, I suspect... but I suspect the licensing wars are only starting to ramp up.

onlyrealcuzzo(4130) 1 day ago [-]

I pirate based on principle. Most of these companies are actively trying to fuck me over as a user as much as corporately possible. I don't want to give them money. I can't help encourage this user-hostile behavior.

oliwarner(4095) 1 day ago [-]

'Choice' here is just a euphemism for 'yet another $10.99/month subscription'. The more services there are, the less content each has that I want. The more services/apps/etc. I have to engage with, the less I want to engage with any of them. That is to say, it's not just as expensive as corded packages, it's harder to actually use. It's the opposite of what it should be.

Content needs F/RAND licensing, possibly with a temporary block on creators self-distributing to try and right the market and get some competition going.

If you think it's getting bad, just wait until Disney starts flexing.

durnygbur(10000) 1 day ago [-]

> just wait until Disney starts flexing.

They will savage or at least sue everyone alive until no other service remains on the internet in the Western world, all the while hired actresses making cute faces throw pink glitter at everyone involved.

aranw(3630) 1 day ago [-]

Need to change 'May' to 'Will'.

lostgame(4185) 1 day ago [-]

Or 'has driven'.

toxik(10000) 2 days ago [-]

Streaming 'choices'? It's fragmentation and rent-seeking behavior. It is simply a case of the supply not meeting the demand at a reasonable price point, when there is an alternative.

scarface74(3825) 2 days ago [-]

Everyone wanted to get rid of the cable bundle and have channels a la carte. Be careful what you ask for.

mratzloff(3881) 2 days ago [-]

Yes, rent seeking by creating original content. A nefarious twist on the concept of rent seeking.





Historical Discussions: The Internet Relies on People Working for Free (September 17, 2019: 563 points)

(565) The Internet Relies on People Working for Free

565 points 3 days ago by gilad in 1063rd position

onezero.medium.com | Estimated reading time – 7 minutes | comments | anchor

The Internet Relies on People Working for Free

Who should be responsible for maintaining and troubleshooting open-source projects?

Credit: dhe haivan on Unsplash

When you buy a product like Philips Hue's smart lights or an iPhone, you probably assume the people who wrote their code are being paid. While that's true for those who directly author a product's software, virtually every tech company also relies on thousands of bits of free code, made available through "open-source" projects on sites like GitHub and GitLab.

Often these developers are happy to work for free. Writing open-source software allows them to sharpen their skills, gain perspectives from the community, or simply help the industry by making innovations available at no cost. According to Google, which maintains hundreds of open-source projects, open source "enables and encourages collaboration and the development of technology, solving real-world problems."

But when software used by millions of people is maintained by a community of people, or a single person, all on a volunteer basis, sometimes things can go horribly wrong. The catastrophic Heartbleed bug of 2014, which compromised the security of hundreds of millions of sites, was caused by a problem in an open-source library called OpenSSL, which relied on a single full-time developer not making a mistake as they updated and changed that code, used by millions. Other times, developers grow bored and abandon their projects, which can be breached while they aren't paying attention.

It's hard to demand that programmers who are working for free troubleshoot problems or continue to maintain software that they've lost interest in for whatever reason — though some companies certainly try. Not adequately maintaining these projects, on the other hand, makes the entire tech ecosystem weaker. So some open-source programmers are asking companies to pay, not for their code, but for their support services.

Daniel Stenberg is one of those programmers. He created cURL, one of the world's most popular open-source projects.

Developers use cURL to transfer information between two systems, generally in an "API" where a service needs to ask for or send data from another system. According to Stenberg, cURL is included in billions of smartphones, "several hundred million" TVs, and at least 100 million smart cars, every iPhone ever produced, and almost every other modern connected device and service you touch every day. The scale of its use is staggering considering that Stenberg does the lion's share of the work maintaining it, with assistance from a community of volunteers. Yet few of the companies that rely on his code even realize it's his code.

Stenberg, who lives near Stockholm, Sweden, invented cURL in 1998 and still maintains the project for free, though he did recently take a job at a company called wolfSSL, which now pays him to work on it "as full-time as possible." Companies that rely on a specific piece of open-source software occasionally hire those projects' creators to build upon the projects — in this case, wolfSSL has tasked Stenberg with not only maintaining cURL but building service contracts for providing personal support of cURL.

Stenberg never expected cURL to gain so much popularity. In fact, it took many years to learn that it was even being widely used. Because the code is available to use for free, without any commercial licensing, there's no reason that companies need to tell him that they're using it. He only realized the software he'd invented was becoming popular because people started telling him that they saw his name in the "about" window of software, or buried in documentation. "The temperature raised so slowly we never saw it coming," he said.

"I think I get annoyed when it feels like people try to take advantage of us instead of contributing their share to the project when they are getting so much out of it."

During the first 20 years of cURL's existence, Stenberg says he worked on it in his spare time and earned his keep at his "real job" doing other software development. Maintaining the project took a lot of work: he's spent thousands of hours improving cURL, fixing bugs, and refining his code. Of the 25,000 "commits," or updates, made to the GitHub repository for cURL, Stenberg created 14,000 of them. No other developer contributing to the software has made more than 2,586 commits.
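Per-author commit counts like these are easy to reproduce from a clone of the repository. A minimal sketch, assuming git is installed and the script runs inside a checkout of cURL (the top-five cutoff is arbitrary):

    # Count commits per author; `git shortlog -s -n` prints lines like
    # "  14000\tDaniel Stenberg", sorted by commit count.
    import subprocess

    out = subprocess.run(
        ["git", "shortlog", "-s", "-n", "HEAD"],
        capture_output=True, text=True, check=True,
    )
    for line in out.stdout.splitlines()[:5]:
        count, author = line.strip().split("\t", 1)
        print(author, count)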

Survival of cURL is thanks to a set of sponsors who fund the project's hosting and other costs — though Stenberg says no major company pitches in — and contributors like Stenberg that give their time away for free. Stenberg says he believes that it's important that open source exists and that he has never regretted making cURL open source. What frustrates him is when companies demand his help when things go wrong.

Last year, a company overseas contacted him in a panic after they paused a firmware upgrade rollout to several million devices due to a cURL problem. "I had to explain that I couldn't travel to them in another country on short notice to help them fix this [...] because I work on cURL in my spare time and I have a full-time job," Stenberg says.

Because he cares about the project, he scrambled to find a friend to help. His friend flew out and helped solve the problem.

To compensate open-source programmers for this kind of service, Stenberg believes that large companies should pay for support contracts from the developers of a library, which would compensate them for their time and help ensure a project is actually maintained for the long haul. With his work at wolfSSL, he hopes to convince companies like Apple to pay up in exchange for dedicated support, but the effort is still in an early stage.

Support contracts don't come cheap, often ranging in the thousands of dollars in exchange for dedicated help for using projects and support when something goes wrong. However, the type of companies that need such a service are typically well-funded, or have broad reach, especially in the case of cURL.

It's still not clear that companies would be interested in such a contract. When Stenberg asked the company that needed him to fly to a different country to troubleshoot their problem to pay for one, they refused.

This frustrates him. "I think I get annoyed when it feels like people try to take advantage of us instead of contributing their share to the project when they are getting so much out of it," he says. But he still sees support contracts as a long-term solution to maintaining open source: "Money needs to trickle down to the authors and not just get sucked up by the curators or the huge open-source projects/companies that tend to get most open-source money today."

Many in the open-source community are opposed to the idea that they should be paid in any way, which remains a controversial topic. Some in the open-source community believe that monetization defeats the purpose of "free" — but the reality is that the people working for free need to eat and feed families, just like everyone else.

Today, when a developer or company emails Stenberg for help as fast as possible, he says that "my attitude has shifted more toward 'well maybe, just maybe, this could be a case where you could consider paying for a support contract.'"

When I ask Stenberg whether he'll continue to maintain cURL forever — it's already been 20 years — he says he has no plans to abandon the project that has become a major part of his life.

"Of course, assuming I also manage to get paid," he adds.

Update: This article has been edited to reflect that after Stenberg, the most commits a single developer has made to cURL is 2,586.




All Comments: [-] | anchor

user_50123890(10000) 3 days ago [-]

Not just open-source maintainers but also moderators.

A bad thing about moderators is that nowadays they tend to be immature teens; it's one of the reasons Reddit content quality has been going down in the past few years.

It also creates some moral questions, as companies with giant revenues are outsourcing their internet janitor duties to naive, unpaid child labourers.

moron4hire(3414) 3 days ago [-]

When I first got on the internet 20 years ago, moderators were all teens, too. But I didn't notice because I was also a teenager. Then I grew up, and those mods did too, and we all went on to different things. Going back to the forums, it's just a different set of teens. It was always immature. What changed was that I matured.

helpPeople(10000) 3 days ago [-]

Or worse, a corporation secretly pays mods to let spam pass and delete dissent.

drei109(10000) 3 days ago [-]

In my experience, moderators on Reddit are power-driven adults who still act like teens.

volkk(10000) 3 days ago [-]

I see your point, but it also brings about something where certain people see quality degrading and go off to make their own subreddit/community, like /r/truereddit or /r/truetruereddit, etc.

It's an interesting phenomenon, and I am curious as to whether it's better this way or actually making things worse.

jen_h(3647) 3 days ago [-]

I think (mainstream) Reddit content is something completely different; more like big clumps of PR firms employed by corporations and lobbying groups bitbotting around with nation states, all fighting and agreeing with each other and trying to build false consensus. Source IP analysis there is probably pretty interesting.

mopsi(10000) 3 days ago [-]

I believe the redesign is the single largest contributor to Reddit's decline. It went from a text-based site (like HN is at the moment) to an image-based site.

nitwit005(10000) 3 days ago [-]

I assumed from the headline that this would be about moderators. Given how major sites depend on them, and how many forums there are strewn about the internet, the total man years of free labor must be enormous.

cheez(10000) 3 days ago [-]

Edit: very interesting that this post is getting downvoted. I didn't even say anything controversial, just that banning people is bad. Exactly the danger we face online, which will eventually get into broader society.

I got banned on a platform for saying a public figure was physically attractive to me. Prior to that, I had engaged in a perfectly factual discussion about certain statistics that were in favor of a particular political position. There is no doubt in my mind that I was flagged for having a political position (to clarify: I didn't actually have the position, I was just posting statistics that supported that position).

I like HN's approach to moderation. You don't ban the person unless they are really unreasonable. You flag the comment, disappear it from the thread and tell them to behave. I also like Gab's approach which is: if it's not illegal and you don't like it, then just block them from your feed.

The wholesale banning of people for single statements that are generally taken out of context is emblematic of cancel culture and it isn't surprising to me that 'hacker news' is more nuanced and understands that people are not one statement or one act.

Y_Y(3773) 3 days ago [-]

A rational actor never works for 'free', they derive some perceived benefit (not necessarily money) from their action, like reputation or self-satisfaction or warm fuzzies. I think it's brilliant that we can use these carrots to get good work out of people that can be shared on the internet.

All the same that's no excuse to abuse people who are acting far from rationally.

Avamander(10000) 3 days ago [-]

Yes, but once they get bored they move on, to the dismay of the community. If there were money involved, however, that would give the community more stability.

wslh(93) 3 days ago [-]

> A rational actor never works for 'free'

A rational actor with emotions can work for free if that work makes him/her feel good. Some people even call this a hobby.

Scoundreller(4218) 3 days ago [-]

I know I do. The amount I've railed against companies I don't like... oyyy, they should have put me on their payroll to sit on a beach and drink.

And a few others should have put me on their marketing payroll.

chaostheory(224) 3 days ago [-]

One thing not mentioned in the article is that working on open source, especially when your project gets popular, helps a lot with the resume and may let you skip the interview quiz fest since your future employers are already familiar with your work and its quality.

That said, it would be nice if corporations would give their teams donation money for the smaller projects that they rely on. With easy options like OpenCollective and GitHub having a donation option, it would be a good time to start.

circlefavshape(10000) 3 days ago [-]

Gah! A 'rational actor' is not a person, and 'free' has a commonly-understood meaning that you're redefining in order to ... what? Claim the headline is false? Why?

cycloptic(10000) 3 days ago [-]

I think you overestimate the amount that some people think their decisions through.

irrational(10000) 3 days ago [-]

The vast majority of people I meet, including myself, are highly irrational. Ironically, that is not where my user name comes from ;-)

ptah(10000) 3 days ago [-]

humans are not 'rational actors'

cesarb(3170) 3 days ago [-]

> they derive some perceived benefit (not necessarily money) from their action, like reputation or self-satisfaction or warm fuzzies

You forgot what is perhaps the most important one: using the resulting software. For instance, when Linus wrote git, it was because he wanted a source control system with certain characteristics for himself to use.

6gvONxR4sf7o(10000) 2 days ago [-]

It's a miracle we get what we do. If we found a better way to properly compensate these people for altruistic acts (making the acts more 'rational') we'd probably get a ton more.

wvenable(4154) 3 days ago [-]

I feel like there is an article a week about how open source developers are being used by corporations for free labor. I think there is a fundamental misunderstanding that these journalists aren't grasping.

Nobody is doing anything they don't want to do. Nobody is forced to build open source software. And, most importantly, most of these contributions aren't worth enough individually to charge for. It's only collectively that these contributions have value and we all collectively benefit from it. And for-profit companies are part of that collective benefit but that doesn't mean money needs to be involved.

I'm working on an open source project right now -- I've put a lot of hours into it -- and it's cool but there is no way to build a profitable business from it. It's an end-user product, the small number of users will like it, and I just enjoyed building it. But I also don't want to make it business. I already have a job.

svavs(10000) 3 days ago [-]

> Nobody is doing anything they don't want to do. Nobody is forced to build open source software.

Many job postings list OSS contributions as a requirement / desirable for employment. So, I'm not sure if your statement holds up.

I personally know developers that contribute to OSS because of this. And many that have burnt out because of the constant need to contribute.

Most OSS contributors do it for fun - but there is a section that do it because it's becoming part of the interview / job hunting process.

samirillian(10000) 3 days ago [-]

> Nobody is doing anything they don't want to do.

This is such a red herring. There are plenty of ways we can look at this situation that don't involve unreserved consent.

helpPeople(10000) 3 days ago [-]

I'm in a similar situation.

Started a website for fun; maybe I'll sell books. Then my website got popular and people want more content faster.

I have a day job, so I can't work full time unless I'm paid similarly.

I'm unsure what to do next. My free website can be monetized with the goal of quitting my job.

Or users can get my hobby for free.

It seems no one is happy except me. The only benefit: my popularity will likely keep me employed forever.

banannaise(10000) 1 day ago [-]

> Nobody is doing anything they don't want to do.

And yet, one party is making all the profit, and a different party is putting in all of the work. That seems like a non-ideal situation?

commandlinefan(10000) 2 days ago [-]

> aren't worth enough individually to charge for.

The downside I see is that this leads to a lot of "just good enough" solutions out there. There are things like the Spring "framework" that kind of sort of almost do something useful but add conceptual overhead and learning time and subtract troubleshoot-ability in exchange for not much, but enough that the people in charge push you to use it since it's free anyway. On the one hand, if not for open source, Unix, vi, grep, awk, sed, lex, yacc and bash would all be long dead and buried now - there was a time before I came across Linux that I was starting to try to implement all of these myself in DOS because they were so useful and there was no available equivalent. On the other hand, if not for open source, I wonder if we ever would have gotten PHP or Java or a lot of the monstrosities that hang off of them like Zend and Hibernate.

agumonkey(877) 3 days ago [-]

There's an issue though. You spend time as you please and it's fine, but ultimately current society revolves around exchanging services in a semi-quantifiable manner (transactions at a price). You cannot live on free contributions. I find this hybrid situation too paradoxical.

Nasrudith(10000) 2 days ago [-]

It seems to me this is permission culture speaking: they just don't get any culture other than monetizing every little snippet by prior agreement and getting lawyers involved. It is very much media culture, with its excesses, and it shows.

When really, open source software is about making it easier to work across jobs, so that everyone doesn't need to tediously reinvent the wheel ad nauseam or jump through hoops.

rhizome(4185) 3 days ago [-]

Nobody's saying it has to be, uh...what's the name of a profitable internet company that isn't Facebook? But making demands on people who are working for free is bad manners. I don't care if you think the internet changes everything, it's still possible to be an asshole.

My off-the-cuff solution is for project owners to add a status flag to their issue trackers: PAID. Anybody can submit a bug as PAID, but it costs $50,000-100,000 to do so, per bug. And no private fixes: there is one version and everybody gets the benefit. No badgering on ETAs either.

If Ford thinks it has value, then the developer should get some of that value, and $100K for a suitable patchlevel of cURL might even be a low estimate.
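A toy sketch of the data model such a tracker might use; the class, the fee constant, and the method names are all invented for illustration:

    # Toy model of a PAID bug status: anyone may file one, but it only
    # becomes PAID once the fee clears, and the fix still ships to everyone.
    from dataclasses import dataclass, field
    from typing import List

    PAID_BUG_FEE = 50_000  # dollars; the floor suggested above

    @dataclass
    class Issue:
        title: str
        status: str = "OPEN"
        payments: List[int] = field(default_factory=list)

        def mark_paid(self, amount: int) -> None:
            if amount < PAID_BUG_FEE:
                raise ValueError(f"paid bugs cost at least ${PAID_BUG_FEE:,}")
            self.payments.append(amount)
            self.status = "PAID"  # prioritized, but no private fixes

    bug = Issue("cURL regression blocking a firmware rollout")
    bug.mark_paid(100_000)
    print(bug.status)  # -> PAID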

davnicwil(3916) 3 days ago [-]

> most of these contributions aren't worth enough individually to charge for

I agree with your general thrust but on a tangent, this isn't true and isn't the right way to think about software contributions, even 'trivial' ones.

A trivial change seems low value to you because you wouldn't have to spend much time at all doing the same thing yourself.

To see its market value, you have to view it from the perspective of the average person - how long would it take them to make that change? Answer: a very, very long time. They'd have to learn the fundamentals of programming first.

The value of something isn't how long it takes to do it in isolation. It also factors in how long it took to get to the position where you could do it at all.

turk73(10000) 3 days ago [-]

I think that without the Euro social safety net system, open source would totally collapse.

TazeTSchnitzel(2149) 3 days ago [-]

> most of these contributions aren't worth enough individually to charge for

Then why does my employer pay me even for the small bugfixes I make to their code?

benologist(989) 3 days ago [-]

What corporations rely on is thousands of parallel endeavors surfacing the best ideas and educating and training programmers from which they will take, use, hire, profit, save, all for free.

The stuff they don't use, and the years of learning how to invent successful stuff, are every bit as important as the successes. The successes can only be built by salvaging good ideas and abandoning bad ideas from the failures and obsolete successes. You pretty much have to demonstrate how you continue learning on GitHub just to get an interview.

Vital projects are not ok financially and we're starting to find out when they get repurposed as malware, or go unsupported despite being a dependency for a double-digit percentage of the internet. Most of the world is default-excluded from creating brilliant programmers because their parents have zero hundreds of dollars, let alone thousands, to subsidize a Stallman, Gates or Torvalds for the rest of us.

We are subsidizing corporations and they are racing to see who can hoard the most hundreds of billions of dollars.

marknadal(2846) 2 days ago [-]

If you don't want companies using your tech, just use (A)GPL or anything from Richard Stallman.

But there are some of us who believe in Open Source[1] and giving value to the world, including companies. So please stop bullying us into being a victim culture; we're not.

Besides, scientific research over the decades has shown that Open Source produces valuable work specifically because it is unpaid creative work, and when pay is introduced, quality drops.

Here's a good summary video of the studies: https://youtu.be/u6XAPnuFjJc

[1] Disclosure: I work on Open Source full time now because 8M people use my tech ( https://github.com/amark/gun ), but I do not charge for it and previously had to do it in my spare time (it is only its success that has led to me being able to work on it full time - not the other way around).

tome(3990) 2 days ago [-]

That's amazing. How do you fund it without charging for it?

saagarjha(10000) 3 days ago [-]

> Because the code is available to use for free, without any commercial licensing, there's no reason that companies need to tell him that they're using it.

Well, that depends on the license.

jxramos(4193) 3 days ago [-]

I was actually wondering if there's licensing that has a notification clause or what an attribution clause would look like.

joshlemer(4140) 3 days ago [-]

Some other commenters in this thread have pointed out that nobody is forcing open source developers to work on the contributions that they make, and that is strictly speaking true. But in the culture of software development, there does seem to be an ambient message often repeated or hinted at, that it is good to 'give back' to open source by contributing and that it is virtuous to do so.

I now am starting to rethink this sentiment because the vast majority of the benefit of open source contributions on GitHub, be they to languages, runtimes, application frameworks, databases, etc., goes towards increasing the bottom line of for-profit companies, not to developers. And the vast majority of the beneficiaries of open source will never support the project, even when they are Fortune 50 companies saving millions of dollars by using the work of one volunteer.

There is also the idea that contributing will be great for your career development. I have found that not to be the case at all, I think that no potential or current employer has ever given a rat's ass about open source contributions, and do not consider that work as valuable when making hiring decisions. The work you are paid to do is the only thing anyone cares about. I'm not saying that that shouldn't be the case, but just that that is the case.

Given then, that there's very little upside to doing open source, and most/all the benefits go to profit-making corporations, it is puzzling why do we even push for greater involvement in open source at all? It seems we shouldn't be, we should be warning people who want to contribute to Open Source that they should probably spend their time doing their own studying and personal/skill development which will allow them to succeed in the roles that they have with their current role or a role they'd like to obtain one day, for money.

concordDance(10000) 2 days ago [-]

> It seems we shouldn't be, we should be warning people who want to contribute to Open Source that they should probably spend their time doing their own studying and personal/skill development which will allow them to succeed in the roles that they have with their current role or a role they'd like to obtain one day, for money.

Money is only a fraction of motivation for many people. Many do open source to make the world a better or more efficient place and reduce barriers to entry.

njharman(4141) 2 days ago [-]

> Given then, that there's very little upside to doing open source, and most/all the benefits go to profit-making corporations

Such a wild claim would require some evidence. But it's so patently false, I'll give you a pass.

Linus has gotten extreme benefit from Linux and Git. And developers have massively, directly benefited and the world indirectly from Linux and Git.

Wikipedia, directly benefiting everyone, would not exist without open source, neither its infrastructure nor the code it runs on. The Internet would not exist without open source.

andreilys(10000) 2 days ago [-]

> There is also the idea that contributing will be great for your career development. I have found that not to be the case at all, I think that no potential or current employer has ever given a rat's ass about open source contributions, and do not consider that work as valuable when making hiring decisions

I don't think you can make a sweeping generalization like that, because 'employer' is not some omnipotent figure sitting in an ivory tower. It's other developers, and managers who are (typically) technical and can appreciate individuals who contribute to open source.

If I see someone is an active contributor on Scikit-learn, numpy, etc., that is certainly a strong signal for me. Yes, I'm not making the hiring decision, but I can assure you the HM is going to be influenced by those who interviewed the candidate.

SI_Rob(10000) 2 days ago [-]

Though I now cringe at the angsty teen outrage dripping from every word of this post, it's hard to believe that I wrote this on AskSlashdot 19 years ago:

https://yro.slashdot.org/story/00/01/22/1843258/open-defensi...

and still do not know whether any formal method exists (apart from the choice of license model) which could inoculate 'Open Source' from being tragically impressed into corporate rent-seeking service. Or at least, as much of the Open Source ethic as is encapsulated in the public works of ethically motivated open source contributors; people who are essentially investing their time, skills and resources in the improvement of a lightly defended public common that continues to be parceled out to private interests.

* rereading the above I don't think I've made much progress in my posting style. Maybe it's something about the apparent structural inevitability of this problem, which seems more a recapitulation of deeper conflicts in human dynamics than something specific to 'open source'.

jacobolus(3877) 2 days ago [-]

> vast majority of the benefit of open source contributions on github, be they to languages, runtimes, application frameworks, databases, etc, go towards increasing the bottom line of for-profit companies

This is nowhere close to true, unless you define "benefit" as "cash profits". Even then, it's a tough argument.

There have been extraordinary benefits to most people in the world from open-source contributions. Without a wide range of open source code projects there would be no Wikipedia, no SciHub, no web forums, no search engines, no web maps, no movie/book/restaurant review sites, no craigslist, no online dating, no news websites, no web education platforms (MOOCs, Open Courseware, Khan Academy, ...), no technical Q&A sites, no video sharing sites, no social media, no e-commerce, no online banking, no ...

The great majority of the benefits of all of these software products and projects accrue to their many users, not to their authors or corporate hosts.

(Of course technology is not all positive, and many people have also been harmed by technology. Open source also enables surveillance, stalking, bullying, new kinds of fraud, new venues for propaganda, new legal risks, new avenues for social and political control by self-interested and unaccountable people and institutions, etc.)

TuringNYC(3724) 2 days ago [-]

Do you remember being a developer in the 90s? It was horrible and you were stuck in the confines of truly for-profit companies and for-profit platforms and paradigms.

One benefit of OSS is to lead the world to a place where you want the world to go.

mbrumlow(10000) 2 days ago [-]

>> the vast majority of the benefit of open source contributions on github, be they to languages, runtimes, application frameworks, databases, etc, go towards increasing the bottom line of for-profit companies, not to developers.

Which in turn creates jobs for developers.

Let's face it: as a developer, my day-to-day would be tough and slow without many of the free tools developers created for free.

So maybe the pressure for me to contribute is just.

nikdaheratik(10000) 2 days ago [-]

What helped me understand this better was thinking about writing software as a subset of all types of writing. People do write for free because they enjoy doing so or because they feel like they can advance knowledge in a particular area. This is true for both prose and also for the software medium. In both software and prose, you can gain valuable contacts and social credit by sharing your writing with others even if you don't make as much direct income from it.

The publishing industry is much better than software at getting the money to flow towards the creator, but software jobs overall pay better than most writers because of the skill sets involved. Both publishing and software, however, do have multiple ways of going from amateur enthusiast to highly paid professional if you have the talent and connections. And open software, like contributing to small literary magazines, is one way that you can make that transition.

na85(3879) 2 days ago [-]

As much as I dislike licensing zealotry, this is where I think the BSD guys have it dead wrong. The BSD license strictly enables corporate parasitism by dis-incentivising upstream contributions and source-sharing. If the pride in suspecting that one's work is widely used is payment enough for a person, then so be it. But I've never understood why subsidizing corporations is a point of pride for those folks.

WalterBright(4126) 2 days ago [-]

> I think that no potential or current employer has ever given a rat's ass about open source contributions, and do not consider that work as valuable when making hiring decisions.

That is not the case with the D programming language effort. Quite a number of strong D open-source contributors have been recruited into very well paying positions directly because of their contributions.

I suspect the key is contributing to a higher profile open source project. One that is high enough that one can make major contributions, and not so high that one has a hard time standing out amongst the other contributors.

_the_inflator(10000) 3 days ago [-]

The problem I see is that companies make money by using OSS, but are often very hesitant to commit code and instead bug the OSS developers to fix these problems. In some rare cases only a lead developer of an OSS project can do that without a fork.

OSS matured; contribution in 2019 does not necessarily mean you commit code. Donating some money is fair, in my point of view, and is simply a sign of respect.

cjohansson(10000) 2 days ago [-]

> Some other commenters in this thread have pointed out that nobody is forcing open source developers to work on the contributions that they make, and that is strictly speaking true. But in the culture of software development, there does seem to be an ambient message often repeated or hinted at, that it is good to 'give back' to open source by contributing and that it is virtuous to do so.

I can't see anything wrong with that, it's like people doing favors every day for other people without asking for money

> I now am starting to rethink this sentiment because the vast majority of the benefit of open source contributions on github, be they to languages, runtimes, application frameworks, databases, etc, go towards increasing the bottom line of for-profit companies, not to developers. And the vast majority of the beneficiaries of open source will never support the project even when they are fortune 50 companies saving millions of dollars by using the work of one volunteer.

Yes, open source improves computing as a whole, for companies as well as individuals.

> There is also the idea that contributing will be great for your career development. I have found that not to be the case at all, I think that no potential or current employer has ever given a rat's ass about open source contributions, and do not consider that work as valuable when making hiring decisions. The work you are paid to do is the only thing anyone cares about. I'm not saying that that shouldn't be the case, but just that that is the case.

Not my experience; I have never stumbled upon a recruiter that doesn't value OSS contributions.

> Given then, that there's very little upside to doing open source, and most/all the benefits go to profit-making corporations, it is puzzling why do we even push for greater involvement in open source at all? It seems we shouldn't be, we should be warning people who want to contribute to Open Source that they should probably spend their time doing their own studying and personal/skill development which will allow them to succeed in the roles that they have with their current role or a role they'd like to obtain one day, for money.

Since all the previous assumptions were wrong, the conclusion doesn't follow. But I agree that people should not do Open Source for money.

tathougies(10000) 3 days ago [-]

I have found the exact opposite actually.

Firstly, the reason I open-source most of my libraries is that, when I was a kid and learning to code, I couldn't afford or convince my parents to buy the expensive proprietary software I wanted to play with. Open source let me have access to this technology (and to inspect how it worked) without having to buy anything.

This is simple stuff like having GCC to compile C programs, rather than buying the intel or MS compiler.

Those early experiences got me where I am today. I contribute open-source software so that other, more junior developers have the same opportunities I did.

> There is also the idea that contributing will be great for your career development. I have found that not to be the case at all, I think that no potential or current employer has ever given a rat's ass about open source contributions, and do not consider that work as valuable when making hiring decisions. The work you are paid to do is the only thing anyone cares about. I'm not saying that that shouldn't be the case, but just that that is the case.

Again, I've never experienced this. I got my first programming job right out of high school, and my resume was filled only with various (small) open-source contributions. Without those I'm not sure what inspectable experience I would have had.

> Given then, that there's very little upside to doing open source, and most/all the benefits go to profit-making corporations, it is puzzling why do we even push for greater involvement in open source at all? It seems we shouldn't be, we should be warning people who want to contribute to Open Source that they should probably spend their time doing their own studying and personal/skill development which will allow them to succeed in the roles that they have with their current role or a role they'd like to obtain one day, for money.

Again... the exact opposite experience. I released a moderately popular library, built a community around it, and this has opened up career opportunities for me. In the niche I work in, several interviewers had actually heard of me before I even interviewed. The scalability of open source in distributing a positive first impression to potential employers should not be underestimated.

I do agree with you here:

> And the vast majority of the beneficiaries of open source will never support the project even when they are fortune 50 companies saving millions of dollars by using the work of one volunteer.

Lastly, I think the reason most people find open source to not live up to its promises is that they do not take on leadership roles in open source organizations. Frankly, as a more senior developer now, simply being a casual contributor to a project would not add to my resume. However, being a project lead, or component lead, would. You need to target your open-source contributions to things your employers would find useful. For example, I work on mainly backend, low-level things in my current career. As much as I would like to contribute to GNOME or GTK or whatever (and I have the skills to do it), there is not enough reward to doing so, so I don't. Instead I contribute to projects that I know will ingratiate me with future employers. Perhaps that's why our experiences are so different.

seph-reed(4206) 3 days ago [-]

Open source or not, the big companies will always win. The only real difference is that as a whole society would slow down and be worse off without open source. Either way, it'll be equally unfair.

nitwit005(10000) 3 days ago [-]

Back when people were making a big deal about the apparent lack of women/minorities/etc in open source projects, I objected to trying to get them to give away their labor for free.

The response to that was that these contributions lead to jobs. But, that idea seems to have largely disappeared. There was a period where people were claiming your github account is your resume, but I haven't heard that sentiment in several years.

hyperpallium(2718) 2 days ago [-]

It benefits other developers by making their lives easier: no management purchase-decision hassle, you can inspect the source, fix bugs, customize, etc.: all the standard open-source stuff.

It's just not financially beneficial to developers.

brlewis(1511) 3 days ago [-]

There's a problem with this perspective. I have an insight that would help you see the picture more accurately. I would share it, but here on HN people from profit-making corporations could also see it and benefit, so just ask me in person next time you see me.

noonespecial(2628) 3 days ago [-]

>we should be warning people who want to contribute to Open Source that they should probably spend their time doing their own studying and personal/skill ...

It's Open Source that is chiefly responsible for every bit of that personal skill I have.

In this game, the 'giant' corporations are the irrelevant little players that hardly matter. They don't contribute, so they may as well not exist. They can be safely ignored except for the rare occasion that they 'fart in our elevator' with their patent nonsense.

hartator(3671) 2 days ago [-]

> I think that no potential or current employer has ever given a rat's ass about open source contributions, and do not consider that work as valuable when making hiring decisions.

We do care. We're hiring for backend engineers. https://news.ycombinator.com/item?id=20871478

js2(590) 2 days ago [-]

> In the culture of software development, there does seem to be an ambient message often repeated or hinted at, that it is good to 'give back' to open source by contributing and that it is virtuous to do so.

I can't think of any civic institution I belong to that isn't constantly asking for volunteers, and riding the volunteers they have to do more. I'm not sure it's any different.

Society needs volunteers, and money isn't everything.

killjoywashere(2811) 2 days ago [-]

As someone making hiring decisions, I very much do visit the GitHub accounts of my applicants. I consider the quality of their code and look favorably on them for making code available. I look even more favorably on those who bother to make licensing decisions for their code. Do you get an extra $5k for pushing 1k lines of GNU-licensed code? No. Did you get a callback on your job application? You bet. Do I take pride in the OSS work of my people? Yes. Does that factor into their performance review? Yes. Does that factor into their annual raise and prospects for promotion? Yes.

I have also hired people who don't have GitHub accounts. In those cases, they have some other proof of work. Papers, thesis, references (which are a drag, because then I have to also figure out how to vet the reference) and, particularly valuable, referrals from people I already know personally. This last one goes both ways: not only do I have high confidence in the good intentions of the person making the referral, I can then retrospectively assess this contact's ability to assess.

Wowfunhappy(10000) 2 days ago [-]

> I now am starting to rethink this sentiment because the vast majority of the benefit of open source contributions on github, be they to languages, runtimes, application frameworks, databases, etc, go towards increasing the bottom line of for-profit companies, not to developers.

Isn't this kind of the problem the GPL is supposed to solve?

Sure, a company can build off an open source project to increase their bottom line, but they'll have to contribute back any changes they make. Which means their work is now benefitting other developers as well.

I feel like the GPL has somewhat fallen out of favor in recent years. To an extent I understand why, and yet...

Liquix(4217) 3 days ago [-]

Traditionally, OSS contributions are selfless labors of love and curiosity. These people are flexing their intellect and creativity doing something they thoroughly enjoy: solving problems in the most elegant way possible. Code is shared with an air of positivity, camaraderie, and we're-all-building-this-together. At the risk of sounding kitschy, it's this sense of collaboration that enables the open source community to produce robust software proudly capable of competing with proprietary alternatives.

If this culture is stifled in the name of depriving corporations of 'free labor' through OSS, the creative hacker community isn't elevated - everyone is brought down. IMHO anyone seeking to build financial value for developers or pad their resume is better off working on a side project/startup.

guntars(10000) 2 days ago [-]

Citations needed. This is just pessimistic speculation based on your experience, but my experience is the exact opposite. So who's right? We should leave this kind of stuff at the door.

Is there research comparing the current situation with a hypothetical world where OSS doesn't exist? How many less developers would have a job? How much less would the remaining be paid? I'm imagining it and that's not a world I want to be in.

To add a bit more to the 'factualness' of the conversation: according to the Bureau of Labor Statistics, software engineers in the U.S. took home $144 billion, which for comparison is more than the revenue of Alphabet or Microsoft. How much of that do you think can be attributed to OSS?
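That figure is easy to sanity-check with rough numbers; the developer count and mean wage below are approximations of BLS data, not exact inputs:

    # Rough sanity check of the $144B total compensation figure.
    developers = 1_400_000  # approximate U.S. software developer headcount
    mean_wage = 105_000     # approximate mean annual wage, USD
    print(f"${developers * mean_wage / 1e9:.0f}B")  # -> $147B, same ballpark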

miloignis(10000) 3 days ago [-]

Perhaps I'm in my own echo chamber, but I definitely view open source contributions on a resume as a huge positive when I'm reviewing resumes (as a software dev my boss trusts to evaluate applicants), and I had thought that if that wasn't at least the norm that it would be pretty common among tech companies / companies where developers are part of the evaluation process.

edit: Also, all of the open source I do or that I know of my peers doing is 100% for the love of programming and open source culture, without a bunch of hidden motives. Open source was definitely key to me becoming a programmer when I was a kid - without it I would never have gone as deep as I did into coding (linux, gcc, irrlicht), 3D modeling (blender), etc.

jquery(4074) 3 days ago [-]

A better way to look at OSS is like charity. Giving to charity is not something you do to enrich yourself, although you will likely enrich yourself in certain ways. You don't give to charity so you can get a better job in the future. And giving to a charity might result in profits for corporations: suppose the charity makes a breakthrough in medicine and a large medical research company uses that to leapfrog their R&D. That's a feature, not a bug. We shouldn't discourage people from giving to charity because of these aspects.

It's worth noting that many of these companies give back to the OSS they leveraged, and are often the genesis of some of the best OSS software themselves.

ex_amazon_sde(4063) 2 days ago [-]

> the vast majority of the benefit ... go towards increasing the bottom line of for-profit companies, not to developers

The problem is in the licensing.

Large companies have become the primary user rather than the real end users and other developers... because the licenses make it possible.

m463(10000) 2 days ago [-]

A couple of things I've been thinking of (speaking of GPL software):

- free software (GPL) can be used for any purpose, by the user. Yes, this means a corporation can use the software without paying. However, if it redistributes the software, it has the obligation to distribute the source code and any improvements made to it.

- Large corporations are made of people. It is also of value that a person inside a large corporation can learn to use GPL tools, then leave the corporation and download and use the same tools at home or at another job, continuing to use the skills learned.

- GPL software is also great for education - students can not only use tools, but can also pull them apart and see how they work

- GPL software can be a legacy, if you're into that. You can write and distribute software under the GPL, and it can survive your current job or corporation, or live on past your lifetime. GPL software has outlived most of the places I've worked. Meanwhile, the non-free software those places created during my tenures has largely disappeared.

- GPL software is a great foil against our losing war with privacy. I can foresee a future where running software you can view and modify might be the only way to ensure you know you aren't being taken advantage of.

ken(3749) 2 days ago [-]

It's part of being a professional. We even have a word for it: pro bono. Volunteering your skills doesn't make you a sucker.

Companies may not care about volunteer work when making hiring decisions, but that should not be why you do it.

You say most open source software is used by companies to better their bottom line, but if you're making six figures through skilled work and not giving anything back to your community, I fail to see how that's any better.

_ZeD_(3921) 3 days ago [-]

this is the core reason for free software

antpls(4145) 2 days ago [-]

> Given then, that there's very little upside to doing open source, and most/all the benefits go to profit-making corporations

I partially disagree. Yes, the benefit will not be fairly shared. However, volunteers' work accelerates enterprises' work, which can then provide new services and new products faster.

Imagine some ML algorithm could find the cure to aging. You have two options: either wait for some company to develop it, though you might be dead by then, or accelerate the process by open sourcing your code and hopefully get the companies closer to this goal. Now, that doesn't mean you will be able to access that cure if a company finally finds it and decides to sell it, but you increase the odds that something might be discovered.

It's like giving $1000 to charity, but with the scale of software, your code contribution can have far more impact on the world than $1000.

zelphirkalt(10000) 2 days ago [-]

And that's why we have things like the GPL and free software, which ensure that contributions are brought back to the community. If you want contributions brought back to the community rather than resting inside for-profit organizations, then choose a license that enforces that. This is what many open source people don't get: many 'don't care, it's open source' licenses (the MIT license, for example) lack the baked-in ethics of the free software licenses, which in contrast were _designed_ to force modifications to be shared with the community. When someone chooses an open source rather than a free software license, they can't complain afterwards that modifications aren't brought back as contributions to the community. That's just being uninformed.

volkk(10000) 3 days ago [-]

all good points. i also think what really pushes a lot of developers to do open source, and bear with me for the unpopular opinion--is ego. they love having followers, and a voice on twitter with some open source badge on their profile page. it's in a way a status symbol of 'i did this important thing that a lot of people rely on, im pretty important.' i highly doubt that even 5% of contributors are doing it out of the kindness of their hearts or the community.

you can sort of see evidence of this within repositories filled with smug comments within the issues or pull requests

wvenable(4154) 2 days ago [-]

> I now am starting to rethink this sentiment because the vast majority of the benefit of open source contributions on github, be they to languages, runtimes, application frameworks, databases, etc, go towards increasing the bottom line of for-profit companies, not to developers.

I don't think that's true. The software itself doesn't benefit their bottom line because it's free for everyone. You can't charge a premium because your business runs on Linux.

All open source software does is provide a baseline of technology that all companies and all individuals can use. You can't directly or even indirectly profit off it -- you can only profit on the value you provide on top of it.

As technologists, that's what we want! We want companies and individuals to stand on the shoulders of giants and peek a little bit higher. We don't need anyone to re-invent the wheel over and over.

bjornjaja(10000) 3 days ago [-]

Honestly, not everything is business-centric: if I'm working on something for the pure love of it, that's going to produce the best quality I can muster. Those big companies run by business people are usually just thinking about dollar signs. Just another shitty side effect of capitalism.

dr01d(10000) 2 days ago [-]

The upside is the satisfaction of solving sometimes interesting problems. Some people do crossword puzzles, some play video games. A rare few write operating systems used by perhaps billions of people.

nautilus12(4178) 3 days ago [-]

I agree with what you are saying here. The underlying problem is that companies (being emotionless optimization machines) aren't willing to shell out for the benefit they are getting from open source, because people are willing to give it out for free. The only way this will get resolved is if open source contributors as a whole stop contributing, leaving companies to either support open source in some way or go back to developing in-house. Most likely the latter, as it has the benefit of retaining IP and competitive edge. What this says is that open source will most likely start to go away in the coming years if corporate entities don't start sponsoring it more. Maybe some type of subscription model like academic journals, but we all know how much people like that.

Another angle worth considering: open source projects like Spark, maintained by a company (Databricks) for the benefit of being used as a sales tool. This whole class of open source exists and pulls in unwitting contributors who don't realize that they are benefiting multiple layers of corporate entities not willing to pay for it. And they will not see any benefit from it (unless they use it as a personal sales tool to get a job at those companies).

decoyworker(10000) 3 days ago [-]

So since a corporation might get value from your work then nobody should get value from your work?

sharadov(4191) 2 days ago [-]

A lot of the FAANG and tech companies contribute to open source projects.

arandr0x(10000) 3 days ago [-]

How is software unique in this? Tons of museums around the world are visited by the kids of people who make 100k a year even though the average worker there is an unpaid intern. Do you read any news? Listen to NPR podcasts? I think NPR does pay interns, but they don't pay them millions of listeners * 50min a week of listener time * $60 an hour of average listener earnings.

What about science? If you ever benefitted from a genetic screening for any disease, your doctor made anywhere between $100 and $400 an hour, and the graduate student who discovered the function of that gene $10 an hour if you're accounting for his tuition credit. Who tells these students to stop spending 60 hours a week at the lab and start using their skills doing something people will pay for, like develop a no-palm-oil no-aspartame but-still-addictive Mars bar for 25c unit cost?

People will accept less money in exchange for doing something meaningful, like sitting in their basement painting stuff rich people will speculate on for millions of dollars after their death, or campaigning for a rich lawyer who already makes 200k and will get a good bonus post-election because he has the right party colors.

If you could literally sell meaning to people for less friction than those alternatives they'd eat it up. And they'd spend their time making money to spend it on your meaning-making money-sucking machine.

Because that's all money is for: to be exchanged for things that make your life not suck. And this exchange tends to bear transaction costs.

wobbleblob(10000) 3 days ago [-]

But they are not working for free; they're doing this work during work hours. The individual volunteer maintainers this article is about are a minority. The majority of contributions to the open source projects that 'the internet runs on' come from businesses that use the software and maintain or improve it.

I'm not talking about your small hobby project with a few dozen users - as good as it may be, the internet doesn't run on it. cURL is really an exception, not the rule.

chrisweekly(3993) 2 days ago [-]

Isn't there a disconnect here?

> 'the benefit of open source contributions on github... go towards increasing the bottom line of for-profit companies, not to developers'

Don't those for-profit companies employ people -- including developers?

>'[developers] should... spend their time doing... skill development... to succeed in... current role or a role they'd like... for money'

Are those roles you mention positions in for-profit companies? In 'for money', whose money?

maximente(4116) 2 days ago [-]

programmers love - and i do mean love - to program, so it's not surprising that they're willing to give away their labors of love for free.

hell, most programmers would probably take a pay cut to work in a 'cool' language. i'm willing to wager that C/C++/COBOL/java mean salaries are a standard deviation higher than haskell/elixir/clojure salaries. why? many reasons but i bet that companies know that they can use programmers' passion for mastery against them. leave your boring C++ slog job to 'hack in Haskell!'

i'm guessing many of these guys are just clueless when it comes to how potentially valuable they are as individuals and what that's worth, while they put up a $10/mo patreon that 6 people use. i'm guessing most engineers are severely underpaid when it comes to value added, they consistently fall for the okie-doke's of 'we get to type stuff into a computer and learn and oh wow, i ought to be so grateful for my six figure salary', yet they're putting entire swathes of the economy out of work for their corporate overlords.

it's tough to convince management you're worth a lot of money if you're willing to give away your precious time and resources freely.

eliashaddad(10000) 1 day ago [-]

But then, to avoid that, you can embrace the concept of copyleft (free software) instead of open source.

So, you can make your software free as in 'free speech', and obligate other companies to keep the products they build on top of yours free as well.

Better than making them free as in 'free beer', where companies can just consume your work and not give back to the overall movement.

ViViDboarder(10000) 3 days ago [-]

"The catastrophic Heartbleed bug of 2014, which compromised the security of hundreds of millions of sites, was caused by a problem in an open-source library called OpenSSL, which relied on a single full-time developer not making a mistake as they updated and changed that code, used by millions."

And the catastrophic Spectre and Meltdown bugs show that private code doesn't prevent this sort of thing either...

BrandonMarc(3378) 3 days ago [-]

Yes, the OpenSSL dustup led to this amusing article: The Internet Is Being Protected By Two Guys Named Steve [1] ... the more serious point being, at the time, massive pieces of the internet owed their security to two unpaid/underpaid developers on two different continents.

[1] https://news.ycombinator.com/item?id=7657571

LomaxJunior(10000) 3 days ago [-]

Global standards can be included in that list; here is one example, Signalling System 7: https://en.wikipedia.org/wiki/Signalling_System_No._7#Protoc...

Caller ID uses a protocol from dial-up modems, which means the telecoms hardware supporting it is, in effect, a dial-up modem. That makes life easier for security services: once they have updated the firmware on the telecoms hardware, they can remotely access systems through a persistent backdoor.

EGreg(1773) 3 days ago [-]

The best situation is many independent people vetting code and new versions before it is used somewhere. I wrote this 7 years ago:

http://magarshak.com/blog/?p=114

gitgud(3137) 2 days ago [-]

Yes, I thought that was a weird jab at the maintainer of OpenSSL...

muglug(4118) 3 days ago [-]

The overarching problem with expecting companies to pay for open-source is that they don't have to.

It's often as vital to them as the publicly-provided infrastructure their employees use to get to work, but because they never see the costs, they assume there aren't any.

In an ideal world business-critical FOSS projects would be funded by the government the same way that other public infrastructure is.

Until that happens I think lone developers of those popular projects should start to adopt licenses that allow them to charge businesses. It's the only way to avoid creating martyrs of them.

jackcosgrove(10000) 3 days ago [-]

I understand the sentiment. However: 1. Money corrupts everything; be careful what you wish for. 2. Lone developers will probably do a better job than any sort of bureaucracy, public or private. Small-scale FOSS projects are the perfect platform for a lone developer. They can of course demand payment for their efforts, with all that it might entail regarding point 1.

pdonis(3976) 3 days ago [-]

> The overarching problem with expecting companies to pay for open-source is that they don't have to.

It might also be because, even when they have to, they refuse to. As in this case described in the article, for example:

'Last year, a company overseas contacted him in a panic after they paused a firmware upgrade rollout to several million devices due to a cURL problem.

'When Stenberg asked the company that needed him to fly to a different country to troubleshoot their problem to pay for [a support contract], they refused.'

SamuelAdams(3875) 2 days ago [-]

>The overarching problem with expecting companies to pay for open-source is that they don't have to.

Exactly this. Take the cURL program, as mentioned in the article. The license [1] states:

>Permission to use, copy, modify, and distribute this software for any purpose with or without fee is hereby granted, provided that the above copyright notice and this permission notice appear in all copies.

So, if an organization is free to use something, and you're upset that you don't get paid for it, why not just adopt a different license? Build a model where they have to pay for it?

[1]: https://github.com/curl/curl/blob/master/COPYING

benj111(4101) 3 days ago [-]

Isn't this one of Microsoft's arguments in its FUD phase? Open source being less secure.

petepete(2641) 3 days ago [-]

It definitely used to be. I think it tailed off when it turned out they were attempting to throw stones from inside a glass house.

jjohansson(4128) 3 days ago [-]

Perhaps most egregious are companies who profiteer from open source, without contributing back. Amazon comes to mind.

ensignavenger(4134) 2 days ago [-]

Amazon does contribute to open source, though- https://aws.amazon.com/opensource/

It may be argued that they don't contribute enough, or that they aren't contributing to what you think they should be, but they certainly do contribute.

soperj(4152) 3 days ago [-]

So does Apple.

human20190310(10000) 3 days ago [-]

There is basically no reason for them to contribute back. Using all available tools while restricting their own efforts for their own use creates a competitive edge.

The only time it makes sense to release something to the public is as a complement to one of their paid services.

partialrecall(10000) 3 days ago [-]

Copyleft licenses were created for a reason. If you want companies to either contribute back or fuck off, use copyleft licenses.

carapace(2884) 3 days ago [-]

I'm a Bucky Fuller fanboy. He calculated that we would have enough technology to make our current economic system obsolete by sometime in the 1970's. That has already happened, but we have, in general, failed to notice.

The advancement of automation should lead to a post-historical quasi-utopia, and our biggest problem will be sorting out our personal baggage. (Like Star Trek but without FTL and teleporters.)

From my POV, charging money for copies of software is regressive and foolish.

navigatesol(10000) 2 days ago [-]

>From my POV, charging money for copies of software is regressive and foolish.

Curious if you have any literature expounding on this idea; it's hard for me to wrap my head around.

Is the idea to use software as a tool to provide the product, not as the product itself?

djsumdog(1073) 3 days ago [-]

There aren't a lot of totally free, open-source, end-user products. In the 90s, FOSS devs thought one day GIMP would surpass Photoshop, we'd see Linux on at least 8% ~ 10% of laptops at coffee shops, Inkscape would be better than Illustrator, etc.

Today, a lot of FOSS is corporate sponsored. But it's not end-product; it's all middleware. HipHop/Hack, Lightbend libraries like Slick, React, etc. It's all there to help you build things that interact with the big commercial players.

Yes, there is still a lot of FOSS that's small, volunteer-run, with 1 to 5 maintainers. But a considerable amount is corporate-backed. We're a far cry from the FOSS hopes of the 90s. I wrote about this a few years back:

https://penguindreams.org/blog/the-philosophy-of-open-source...

ip26(10000) 3 days ago [-]

My perception was always that the FOSS hopes of GiMP, Inkscape, etc flopped mainly because nobody in FOSS put work into UI/UX. So, it kind of makes sense that FOSS found a niche somewhere that UI/UX didn't matter.

aexl(10000) 2 days ago [-]

Inkscape didn't exist in the 90s....

Tinfoilhat666(10000) 3 days ago [-]

It's great that corporations pay developers to write open source software! I'm much happier about open source today than in the 90s, when no one was getting paid to write it.

arandr0x(10000) 3 days ago [-]

In the 90s we thought open-source was countercultural. It was something you did to be part of a community, to be elite, to show those big dummies at Microsoft you didn't need them.

Today it's something you do to show Microsoft they need you, because today Bill Gates is the world's nicest old man, big companies are safer than the state, and nobody needs any individual person anymore.

It's happened with many other things. For example, early in the last century, journalism was something you did because you didn't want the powers that be to be the only ones heard, and today, well. In the 70s, popular culture was something you engaged with because you didn't want money to be the end-all of your life, and today, well.

It doesn't make open source (or writing or talking about music you like) a bad thing. It's just eventually adults who have money adopt things you like and adulthood and money change people. And when that happens it's not necessarily a loss. Counterculture becomes mainstream after it's _won_. In a lot of ways Linux won, and it _is_ on most people's daily computers today, even though that computer doesn't have a keyboard.

I wish I knew a kid today like I was then, though. A kid that doesn't want mainstream stuff, a kid who thinks we've all got it wrong, a kid who could tell me I don't get it. Because I'm sure I don't 'get it' the way adults didn't get it in the 90s: I just don't know what 'it' is anymore.

zip1234(10000) 2 days ago [-]

I think it may be in part because UI is very transient. Trends come and go very quickly. Well-made libraries can last a very, very long time, though, as programs that work well are always in fashion.

qudat(4221) 3 days ago [-]

'[...] people working for free.' OSS is 'free' in the sense that anyone can consume the content. It's not free labor though. Most of the time, the people building OSS are gaining clout and career progression as a result of their 'free work.' Companies end up paying for it.

eropple(2750) 3 days ago [-]

I can assure you that the overwhelming majority of people building open source software, even very popular software, are not being paid for it and it offers little value for their careers.

Like most things, the top couple of percent are being used to project an upside that does not apply to everyone else.

chiefalchemist(4045) 3 days ago [-]

> 'It's still not clear that companies would be interested in such a contract. When Stenberg asked the company that needed him to fly to a different country to troubleshoot their problem to pay for one, they refused.'

The irony is, in not paying directly for support they are paying indirectly with risk. Risk to product. Risk to brand. And perhaps in some cases risk to life.

It seems to me a new license is in order. One that distinguishes between small fries and big fish. And big fish that refuse to be good citizens of a given (product) community should be outed and shamed for their lack of cooperation.

Extreme? Sure. But sometimes fairness and justice require such guidance.

rabidrat(3959) 3 days ago [-]

'Shame' is not a reasonable means of enforcing such a thing with big companies. People may feel shame; companies don't.

TurboHaskal(4219) 3 days ago [-]

I always liked this post by Erik Naggum:

    The whole idea that anything can be so 'shared' as to have no value in itself is
    not a problem if the rest of the world ensures that nobody _is_ starving or
    needing money. For young people who have parents who pay for them or student
    grants or loans and basically have yet to figure out that it costs a hell of a
    lot of money to live in a highly advanced society, this is not such a bad idea.
    Grow up, graduate, marry, start a family, buy a house, have an accident, get
    seriously ill for a while, or a number of other very expensive things people
    actually do all the time, and the value of your work starts to get very real and
    concrete to you, at which point giving away things to be 'nice' to some
    'community' which turns out not to be 'nice' _enough_ in return that you will
    actually stay alive, is no longer an option.
(continues) https://www.xach.com/naggum/articles/[email protected]
dleslie(10000) 3 days ago [-]

This is incredibly insightful; particularly the final dig regarding an unwillingness to compensate others for their efforts.

It strikes me that the open source bounty model never really took off, and neither did the open source crowd funding model.

It's hard to convince users to contribute funding when many of them were drawn to your software _because_ it is free as in beer. Free2Play video games have struggled with this for years, and look at the state of monetization in that market: predatory gambling mechanics, digital storefronts tuned to maximize anxiety in purchasers, and so many hats.

Perhaps the future of open source funding is in selling personalization customizations, somehow.

baron_harkonnen(10000) 3 days ago [-]

Erik Naggum is such a legend and it's truly a tragedy that he is gone, and also that we really no longer have any technologists of his calibre.

I saw a conversation on twitter the other day about how great XML is and why we should use it more and realized that even if I posted Erik's classic take on this, it would only mean reactionary outbursts. I wonder how many people would even understand his arguments about SGML today?

pitaj(4200) 2 days ago [-]

Please don't quote with code blocks. For mobile users:

> The whole idea that anything can be so 'shared' as to have no value in itself is not a problem if the rest of the world ensures that nobody _is_ starving or needing money. For young people who have parents who pay for them or student grants or loans and basically have yet to figure out that it costs a hell of a lot of money to live in a highly advanced society, this is not such a bad idea. Grow up, graduate, marry, start a family, buy a house, have an accident, get seriously ill for a while, or a number of other very expensive things people actually do all the time, and the value of your work starts to get very real and concrete to you, at which point giving away things to be 'nice' to some 'community' which turns out not to be 'nice' _enough_ in return that you will actually stay alive, is no longer an option.

bsmitty5000(10000) 3 days ago [-]

I'd never heard of Erik Naggum before, but his Wikipedia entry makes him sound interesting as hell:

https://en.wikipedia.org/wiki/Erik_Naggum

lucb1e(2135) 3 days ago [-]

This whole argument is built on the premise that when you fall ill for a while or have an accident, nobody will help you and you'll die instead (I guess they didn't mean it so literally, but that's what the text says). As far as I know, this is actually false in every part of the world that can afford it except for the USA. You may not earn a lot while ill, government support isn't everything, but roughly a thousand bucks a month plus medical expenses paid isn't peanuts.

seph-reed(4206) 3 days ago [-]

> Grow up, graduate, marry, start a family, buy a house

There's your problem. Don't shackle yourself to a system you already know doesn't care about you, then get upset when it's abusive.

- 'Grown up' is a children's game (which somehow doesn't involve taking responsibility for the world)
- you don't need college to learn or hone a skill
- you can love a person and not marry them
- you can't simultaneously dislike the world and justify bringing children into it
- and if you can't afford your house, you can always sell it and live in a van

Open Source Software is one of the few decent things people still do, even if the world they're doing it for is not.

alephnan(10000) 3 days ago [-]

As Elon Musk described in the Joe Rogan interview: cybernetic collectives

navigatesol(10000) 2 days ago [-]

Elon Musk, the CEO of Tesla, who doesn't comply with GPL but spends $50k trying to dig up dirt on a person he accused of being a pedophile to avoid lawsuits? That guy?

Who cares what he says.

lysium(4052) 3 days ago [-]

It's a well written article. I did not get the point where the internet relies on people working for free. Yes, free software such as curl is in use, but where does the internet rely on it? If there were no curl, there would be something else, wouldn't it?

totaldude87(3991) 3 days ago [-]

>>there would be something else, wouldn't it?

ummm, the gist of it is, someone else would have to write it for free; otherwise, every time you curl you'd be paying $$ to Oracle or someone :|

judge2020(4202) 3 days ago [-]

libcurl is widely used and, along with the curl cli, is bundled with most linux/unix installations; it's installed on nearly every consumer device in the wild, including things like cars on the road. https://daniel.haxx.se/blog/2018/09/17/

The article's point is that the reason curl is so widely used by almost everything is that it's free and has an open source license. If it required some small amount of money per commercial install, or even if it had license terms that weren't appealing to legal teams at these companies, it would likely not have the widespread adoption it currently has. If every piece of curl-like software in the space were pay-per-install, these companies probably would go and use one of them, but if just one of the offerings is free for all use, it's going to top the paid offerings most of the time.

Gpetrium(4154) 3 days ago [-]

What would the world look like if all volunteering stopped? Would society have reached the point it is at today? Volunteering encompasses all kinds of areas, including software and research. It can be argued that some segments would be covered by for-profit organizations, but decisions in this area are often driven by return on investment (ROI). This means that some solutions would likely be paid or would not pass the ROI test. If a solution became paid, it would likely create a barrier to entry for a lot of people, leading to slower integration and growth of the platform, software, etc.

dnh44(10000) 3 days ago [-]

And if that replacement were not free then the costs would have to be passed on.

There is a tremendous amount of free software that runs the world's infrastructure.

Programming languages, compilers, libraries, frameworks, text editors, databases, operating systems, utilities, servers, encryption, and even curl all lower the price of entry for everyone to contribute and innovate, even when they're using free tools for commercial purposes.

ken(3749) 2 days ago [-]

Isn't that true of most industries?

The shirt on your back cost only $10 because it was made by a kid in Bangladesh for pennies. The tunnel under the city cost millions more than planned, and somebody will have to eat that cost. Your theatre program has a page of 6-digit donors because ticket prices don't cover half the cost of producing a show.

Nobody likes to admit the true cost of anything.

gitgud(3137) 2 days ago [-]

In my opinion, software is slightly different, it's much easier to waste time on because the end product can be duplicated infinitely for free.

I guess this makes software, on average, much more costly...





Historical Discussions: Apple downloads ~45 TB of models per day from our S3 bucket (September 16, 2019: 545 points)

(545) Apple downloads ~45 TB of models per day from our S3 bucket

545 points 4 days ago by julien_c in 2492nd position

twitter.com | Estimated reading time – 1 minute | comments | anchor





All Comments: [-] | anchor

cpach(2829) 3 days ago [-]

Isn't this a use case where BitTorrent would shine?

delfinom(10000) 3 days ago [-]

Not magically: torrents need seeders. If you are the only seeder, you will still get the full bill from AWS all the same.

michaelt(4046) 3 days ago [-]

The problem here is 'Someone's CI pipelines redownload the same models on every build'

I'd say there's only a 10% chance Apple's firewall would let BitTorrent through, and only a 3% chance the CI servers would maintain a positive seed ratio.

Possibly it might solve the problem because users would cache the resources themselves to avoid the hassle of getting BitTorrent into their CI pipeline...
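
One low-tech fix the parent alludes to, sketched in Python: cache the artifacts on the CI host so only the first build hits S3. This is just an illustration; the cache directory and helper name are hypothetical, not anything Apple or the bucket owner actually uses:

    import hashlib
    import os
    import urllib.request

    CACHE_DIR = os.path.expanduser('~/.model_cache')  # hypothetical shared cache on the CI host

    def fetch_cached(url):
        '''Download url on first use; later builds reuse the local copy.'''
        os.makedirs(CACHE_DIR, exist_ok=True)
        name = hashlib.sha256(url.encode()).hexdigest()  # cache key derived from the URL
        path = os.path.join(CACHE_DIR, name)
        if not os.path.exists(path):
            urllib.request.urlretrieve(url, path)  # only hits the origin on a cache miss
        return path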

CobrastanJorji(10000) 3 days ago [-]

If you host large, publicly available data in a cloud blob service, but you don't have a budget for it, one option is to use the 'Requester Pays' feature that Amazon and Google provide. This makes the data available to anyone to download, but they need to pay the download cost themselves.

The tradeoff is that your data becomes significantly more irritating to access: it's no longer just plugging a URL into a program, and everyone who wants your dataset needs to set up a billing account with Amazon or Google.
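
For concreteness, a rough sketch of what Requester Pays looks like with boto3 (the bucket and key names here are invented for illustration):

    import boto3

    s3 = boto3.client('s3')

    # Bucket owner: turn on Requester Pays once for the bucket.
    s3.put_bucket_request_payment(
        Bucket='my-public-models',
        RequestPaymentConfiguration={'Payer': 'Requester'},
    )

    # Downloader: must opt in explicitly and is billed for the transfer.
    s3.download_file(
        'my-public-models',
        'models/bert-base.tar.gz',
        'bert-base.tar.gz',
        ExtraArgs={'RequestPayer': 'requester'},
    )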

baroffoos(10000) 3 days ago [-]

Or just post a magnet link.

dx034(3658) 3 days ago [-]

Or use Cloudflare in front of it, especially with mostly static data.

nurettin(3967) 3 days ago [-]

This is probably Apple's continuous integration tests, lazily written to download the whole thing every time someone merges a commit.

mister_hn(10000) 3 days ago [-]

That's really stupid. I mean, I would have set up a caching repository (Sonatype Nexus, maybe?), downloaded everything there, and used that. In the tweets the author says they've blocked downloads from Apple IPs, so now their pipeline is broken.

z3t4(3937) 3 days ago [-]

Apple are probably doing 'continuous integration' where all assets are re-downloaded from the Internet in each iteration. Tip: put your stuff on Github :P

coleca(4208) 3 days ago [-]

With models that large you would be paying for GitHub's LFS bandwidth credits, and those aren't cheap as I recall. Napkin math: at $5 per 50 GB of bandwidth per month, serving 45 TB per day would cost them about $135k/mo on GitHub. That's over 2x what the S3 egress charges would be.
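
The arithmetic behind that napkin estimate, spelled out (the $5 per 50 GB rate is the commenter's figure):

    # GitHub LFS bandwidth packs: ~$5 per 50 GB per month
    tb_per_day = 45
    gb_per_month = tb_per_day * 30 * 1000   # ~1,350,000 GB/month
    packs = gb_per_month / 50               # 27,000 bandwidth packs
    cost = packs * 5                        # $135,000/month
    print(cost)                             # 135000.0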

lacker(1694) 3 days ago [-]

Well, you could contact them and make a very-likely-to-succeed case that they should pay you some money, or you could complain about it on Twitter.

slenk(10000) 3 days ago [-]

Twitter will probably be a faster response than automated email inboxes at Apple

cs702(922) 3 days ago [-]

'Almost everyone' working on NLP uses one of huggingface's pretrained models at one point or another, sooner or later: https://github.com/huggingface/pytorch-transformers

It's so damn convenient, and so nicely done.

And they keep doing neat things like this one: https://github.com/huggingface/swift-coreml-transformers

Kudos to Julien Chaumond et al for their work!

BlueGh0st(3798) 3 days ago [-]

For anyone else initially confused, NLP in this context is 'Natural Language Processing.'

megaremote(10000) 3 days ago [-]

> Swift Core ML

Why does he call them Swift Core ML? They are Core ML models, usable from Swift and Objective-C.

dharmon(10000) 3 days ago [-]

I don't have high hopes for his business prospects if this is how he handles one of the richest companies in the world clearly having a high need for something his company offers.

Maybe spend less time on Twitter and more on your business model?

tln(10000) 3 days ago [-]

That tweet will get attention! Finding the right person is everything...

httpz(10000) 3 days ago [-]

They're basically bragging that they have something Apple really wants. Now they have a bunch of people at least interested in what they've got. I'd say that's not bad PR.

viraptor(1897) 3 days ago [-]

You don't know that this was the only action he took. This was not a great criticism considering how limited our information is right now.

bryanrasmussen(311) 3 days ago [-]

if one of the richest companies in the world is hammering your server without paying for it I would hope they don't get their feelings hurt when the server blocks them until they pay for it.

I mean, I've worked at some of the richest companies in the world, and I think the conversation would have gone like this:

Me: hey project manager our access to server X where we get the really needed X1 resource has been blo