Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: March 18, 2019 23:06
Reload to view new stories



Front Page/ShowHN stories over 4 points from last 7 days
If internet connection drops, you can still read the stories
If there have been any historical discussions of a story, links to all its previous submissions on Hacker News will appear just above the comments.

Historical Discussions: Show HN: A retro video game console I've been working on in my free time (March 14, 2019: 2648 points)

(2677) Show HN: A retro video game console I've been working on in my free time

2677 points 4 days ago by pkiller in 10000th position

internalregister.github.io | Estimated reading time – 29 minutes

This post serves as an introduction to a "homebrew" video game console made from scratch, using a lot of inspiration from retro consoles and modern projects but with a unique architecture. Some friends of mine have told me again and again not to keep this project to myself and to put this information online, so here it goes.

How it got started

My name is Sérgio Vieira and I'm a Portuguese guy who grew up in the 80s and 90s. I've always been nostalgic about retro gaming, specifically the third and fourth console generations. A few years ago I decided to learn more about electronics and try to build my own video game console. Professionally I work as a software engineer and had no experience with electronics other than occasionally building and upgrading my desktop computer (which doesn't really count). Even though I had no experience, I said to myself "why not?", bought a few books and a few electronics kits, and started to learn what I felt I needed to learn.

I wanted to build a console similar to those I'm nostalgic about, something between an NES and a Super Nintendo, or between a Sega Master System and a Mega Drive. These consoles had a CPU, a custom video chip (in those days it wasn't called a GPU) and an audio chip, either integrated or separate. Games were distributed on cartridges, which were basically hardware extensions with a ROM chip and sometimes other components as well.

The initial plan was to build a console with the following characteristics:

  • No emulation: the games/programs had to run on real hardware, not necessarily hardware of the time, but hardware just fast enough for the job
  • With a dedicated "retro" CPU chip
  • With TV output (analog signal)
  • Ability to produce sound
  • With support for 2 controllers
  • Scrolling background and moving sprites
  • Ability to support Mario-style platform games (and of course other types of games as well)
  • Games/Programs available through an SD Card

The reason I wanted SD card support instead of cartridge support is mainly practicality: it's a lot easier to copy files from a PC to an SD card. Cartridges would have meant building even more hardware, and new hardware for each program.

Building it

Video signal

The first thing I worked on was the video signal generation. Each video game console of the era I was aiming for had a different proprietary graphics chip, which made them all have different characteristics. For this reason I didn't want to use any pre-made graphics chip; I wanted my console to have unique graphical capabilities. Because it was impossible for me to make my own chip, and I didn't know how to use an FPGA, I opted for a software-based graphics chip using a 20MHz 8-bit microcontroller. It's not overkill and has just enough performance to generate the kind of graphics I want.

So, I started by using an Atmega644 microcontroller running at 20MHz to send a PAL video signal to a TV (because the microcontroller doesn't support this protocol natively, I had to bit-bang the PAL video signal protocol):

The microcontroller produces 8-bit color (RGB332: 3 bits for red, 3 bits for green and 2 bits for blue) and a passive DAC is used to convert this to analog RGB. Luckily, in Portugal one common way to connect an external device to a TV is through a SCART connector, and most TVs accept RGB input through SCART.

A proper graphics system

Because I wanted a microcontroller dedicated solely to driving the TV signal (I call it the VPU, Video Processing Unit), I decided to use a double-buffering technique.

I had a second microcontroller (the PPU, Picture Processing Unit, an Atmega1284 also at 20MHz) generate an image into one RAM chip (VRAM1) while the first one dumped the contents of another RAM chip (VRAM2) to the TV. After one frame (two PAL fields, or 1/25th of a second), the VPU switches the RAMs: it sends the image just generated in VRAM1 to the TV while the PPU generates a new image into VRAM2. The video board turned out quite complex, as I had to use some external hardware to allow the two microcontrollers to access the same RAM chips, and also to speed up the RAM access, which likewise had to be bit-banged, so I added some 74-series chips such as counters, line selectors, transceivers, etc.
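
In pseudo-C, the swap works roughly like the sketch below (send_vram_to_tv and grant_vram_to_ppu are hypothetical names; the real firmware is bit-banged assembly locked to the PAL timing):

/* Sketch of the VPU main loop and buffer swap (hypothetical names). */
#include <stdint.h>

extern void send_vram_to_tv(uint8_t vram_index);   /* bit-bangs one PAL frame */
extern void grant_vram_to_ppu(uint8_t vram_index); /* flips the RAM bus selectors */

void vpu_main(void)
{
    uint8_t front = 0;                /* VRAM currently being shown on the TV */
    for (;;) {
        grant_vram_to_ppu(front ^ 1); /* the PPU renders the next frame here... */
        send_vram_to_tv(front);       /* ...while this one is sent to the TV */
        front ^= 1;                   /* swap the two VRAMs every 1/25th of a second */
    }
}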

The firmware for the VPU, and especially the PPU, also became quite complex, as I had to write extremely performant code to support all the graphical capabilities I wanted. Originally it was all done in assembly; later I rewrote some of it in C.

I ended up having the PPU generate a 224x192 pixel image that is then sent to the TV by the VPU. This resolution might seem low, but it is in fact only a bit lower than that of the consoles mentioned above, which usually had resolutions of 256x224 pixels. The lower resolution allowed me to cram more graphical features into the time available to draw each frame.
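
For a sense of the budget: at 25 fps, a 224x192 frame is 43008 pixels, so 224 x 192 x 25 = 1,075,200 pixels per second; at 20MHz that leaves the PPU roughly 18 cycles per pixel for everything it does, before accounting for the bit-banged RAM accesses.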

Just like in the old days, the PPU has "fixed" capabilities that can be configured. The background is composed of 8x8-pixel characters (sometimes called tiles), which means a screen background has a size of 28x24 tiles. In order to have per-pixel scrolling and the ability to update the background seamlessly, there are 4 virtual screens, each 28x24 tiles, contiguous and wrapping around one another.
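
To make the wraparound concrete, here is a sketch in C (the array layout is an assumption for illustration, not the PPU's actual data structures) of which tile ends up at a given screen pixel for given scroll values:

/* Sketch: tile lookup in the 2x2 virtual background space (448x384 pixels). */
#include <stdint.h>

uint8_t background_tile(const uint8_t nametables[4][24][28],
                        uint16_t scroll_x, uint16_t scroll_y,
                        uint8_t screen_x, uint8_t screen_y)
{
    uint16_t x = (scroll_x + screen_x) % 448;            /* wrap horizontally */
    uint16_t y = (scroll_y + screen_y) % 384;            /* wrap vertically */
    uint8_t nt = (uint8_t)((x / 224) + 2 * (y / 192));   /* one of the 4 virtual screens */
    return nametables[nt][(y % 192) / 8][(x % 224) / 8]; /* 8x8-pixel tiles */
}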

Above the background, the PPU can render 64 sprites that can have a width and height of either 8 or 16 pixels (1, 2 or 4 characters) and can be flipped horizontally, vertically or on both axes. Also above the background, an "overlay" can be rendered: a patch composed of 28x6 tiles. This is useful for games that need a HUD while the background is scrolling and the sprites are being used for purposes other than showing information.

Other "advanced" feature is the ability to scroll the background in different directions in separate lines, this enables games to have effetcs such as a limited parallax scrolling or split-screen.

And there's also the attribute table, which makes it possible to give each tile a value from 0 to 3, and then set all the tiles with a given attribute to a certain tile page or increment their character number. This is useful when certain parts of the background change constantly: the CPU doesn't need to update each tile individually, it only needs to say something like "all tiles with attribute 1 will increment their character number by 2" (implemented with different techniques, this effect can be seen, for example, in the block tiles with a moving question mark in Mario games, or in waterfall tiles that seem to change constantly in other games).
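
Conceptually, the effect is something like the sketch below (the data structures are assumptions; the point is that a single CPU command touches every tile with a matching attribute):

/* Sketch of the attribute-table effect on a 28x24-tile screen. */
#include <stdint.h>

void attribute_increment(uint8_t tiles[24][28], const uint8_t attrs[24][28],
                         uint8_t attribute, uint8_t increment)
{
    uint8_t row, col;
    for (row = 0; row < 24; row++)
        for (col = 0; col < 28; col++)
            if (attrs[row][col] == attribute)
                tiles[row][col] += increment; /* e.g. animate all '?' blocks at once */
}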

CPU

After having a functional video board, I started working with the CPU I chose for the console, the Zilog Z80. One of the reasons I chose the Z80 (other than it just being a cool retro CPU) is that it has access to a 16-bit memory space and a separate 16-bit IO space, something that other similar 8-bit CPUs, such as the famous 6502, do not have. The 6502, for example, only has a 16-bit memory space, which means the whole space was not reserved just for memory but had to be shared between memory access and external device access, such as video, audio, inputs, etc. By having an IO space alongside the memory space, I could reserve the whole 16-bit memory space for memory (64KB of code and data) and use the IO space for communication with external devices.
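
The difference shows up directly in code: a memory-space access compiles to a plain load or store, while an IO-space access uses the Z80's IN/OUT instructions. A sketch in C, assuming SDCC's __sfr __banked extension for 16-bit IO addresses:

/* Sketch: memory space vs IO space on the Z80 (SDCC syntax assumed). */
#include <stdint.h>

__sfr __banked __at (0x1000) ppu_ram_first; /* IO space: accessed with IN/OUT */

void example(void)
{
    uint8_t *ram = (uint8_t *)0x8000; /* memory space: somewhere in the 56KB of RAM */
    *ram = 42;                        /* a plain LD through the 16-bit memory bus */
    ppu_ram_first = 1;                /* an OUT on the separate 16-bit IO bus */
}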

I started by connecting the CPU to an EEPROM with some test code, and also connecting it, via the IO space, to a microcontroller I had set up to communicate with a PC over RS232, in order to check that the CPU and all the connections I was making were functioning properly. This microcontroller (an Atmega324 operating at 20MHz) was to become the IO MCU (input/output microcontroller unit), responsible for managing access to the game controllers, SD card, PS/2 keyboard and the RS232 link.

The CPU was then connected to a 128KB RAM chip, of which 56KB is accessible (this seems like a waste, but I could only get either 128KB or 32KB RAM chips). This way the CPU's memory space is composed of 8KB of ROM and 56KB of RAM.

After this I updated the IO MCU's firmware with the help of this library and added SD card support. The CPU was now able to navigate through directories, browse their contents, and open and read files, all by reading and writing to specific IO space addresses.

Connecting the CPU and the PPU

The next thing I implemented was the interaction between the CPU and the PPU. For this I found "an easy solution": dual-port RAM (a RAM chip that can be simultaneously connected to two different buses). It saves me from having to place more ICs like line selectors and such, and it makes the accesses to the RAM from both chips virtually simultaneous. The PPU also communicates with the CPU directly by activating its NMI (non-maskable interrupt) every frame. This means the CPU gets an interrupt every frame, which makes it valuable for timing and knowing when to update graphics.

Each frame, the interaction between the CPU, PPU and VPU goes as follows:

  • The PPU copies the information in the PPU-RAM to internal RAM.
  • The PPU sends an NMI signal to the CPU.
  • At the same time:
    • the CPU jumps to the NMI interrupt routine and starts updating the PPU-RAM with the next frame's graphical state (the program should return from the interrupt before the start of the next frame);
    • the PPU renders the image into one of the VRAMs, based on the information it previously copied;
    • the VPU sends the image in the other VRAM to the TV.

Around this time I also added support for game controllers. I originally wanted to use Super Nintendo controllers, but the socket for that type of controller is proprietary and hard to come by, so I went with Mega Drive/Genesis-compatible 6-button controllers instead; they use standard DB-9 sockets that are widely available.

Time for the first real game

At this point I had a CPU with game controller support that could control the PPU and load programs from an SD card, so... time to make a game in Z80 assembly, of course. It took me a couple of days of my free time to make this (source code):

Adding custom graphics

This was awesome. I now had a working video game console, but... it still wasn't enough: there was no way for a game to have custom graphics. A game had to use the graphics stored in the PPU firmware, which would only change when the firmware was updated. So I tried to figure out a way of adding a RAM chip with graphics (Character RAM), loading it with information coming from the CPU, and making it accessible to the PPU, all with as few components as I could, because the console was getting really big and complex.

So I came up with a way: only the PPU has access to this new RAM; the CPU can load information into it through the PPU, and while this transfer is happening the RAM isn't used for graphics, only the internal graphics are.

The CPU can then switch from internal graphics to Character RAM (CHR-RAM) mode and the PPU will use these custom graphics. It's possibly not the ideal solution, but it works. In the end the new RAM has 128KB and can store 1024 8x8-pixel characters for the background and another 1024 characters of the same size for sprites.

And finally sound

Sound was the last thing to be implemented. Originally I intended to give it capabilities similar to those of the Uzebox, basically having a microcontroller generate 4 channels of PWM sound. However, I found out I could get my hands on vintage chips relatively easily, and I ordered a few YM3438 FM synthesis chips; these are fully compatible with the YM2612 found in the Mega Drive/Genesis. By integrating this chip, I could have Mega Drive-quality music along with sound effects produced by a microcontroller. The CPU controls the SPU (Sound Processing Unit, the name I gave to the microcontroller that controls the YM3438 and produces sound on its own) again through a dual-port RAM, this time only 2KB in size.

Similarly to the graphics module, the sound module has 128KB for storing sound patches and PCM samples, the CPU can load information to this memory through the SPU. This way the CPU can either tell the SPU to play commands stored in this RAM or update commands to the SPU every frame.

The CPU controls the 4 PWM channels through 4 circular buffers present in the SPU-RAM; the SPU goes through these buffers and executes the commands present in them. Likewise, there is another circular buffer in the SPU-RAM for the FM synthesis chip.
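
As a sketch of the idea (the ring size and command format here are assumptions, not the console's actual layout), the CPU side of one of these circular buffers could look like this:

/* Sketch: CPU pushing a command into a circular buffer in the dual-port SPU-RAM. */
#include <stdint.h>

#define BUF_SIZE 32u /* hypothetical ring size (power of two) */

struct ring {
    uint8_t head;             /* written by the CPU */
    uint8_t tail;             /* advanced by the SPU as it executes commands */
    uint8_t cmds[BUF_SIZE];
};

uint8_t push_cmd(struct ring *r, uint8_t cmd)
{
    uint8_t next = (uint8_t)((r->head + 1) & (BUF_SIZE - 1));
    if (next == r->tail)
        return 0;             /* buffer full: the SPU hasn't caught up yet */
    r->cmds[r->head] = cmd;
    r->head = next;           /* publish; the SPU drains up to head */
    return 1;
}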

So, similar to how it works with graphics, the interaction between CPU and SPU works like this:

  • The SPU copies the information in the SPU-RAM to internal RAM.
  • The SPU waits for the NMI signal sent by the PPU (for synchronization purposes).
  • At the same time:
    • the CPU updates the buffers for the PWM channels and for the FM synthesis chip;
    • the SPU executes the commands in the buffers, using the information in its internal memory.
  • Continuously, while all of the above happens, the SPU updates the PWM sound at a frequency of 16kHz.

The end result

After all the modules were developed, some were moved onto protoboards. As for the CPU module, I managed to design and order a custom PCB; I don't know if I'll do the same for the other modules, as I think I was pretty lucky to get a working PCB on the first try. Only the sound module remains on a breadboard (for now).

This is the video game console now (at time of writing):

Architecture

This diagram helps illustrate which components are in each module and how they interact with one another. (The only things missing are the NMI signal the PPU sends directly to the CPU every frame, and the same signal being sent to the SPU as well.)

  • CPU: Zilog Z80 operating at 10MHz
  • CPU-ROM: 8KB EEPROM, holds the bootloader code
  • CPU-RAM: 128KB RAM (56KB usable), holds the code and data of the programs/games
  • IO MCU: Atmega324, serves as an interface between the CPU and the RS232, PS/2 Keyboard, Controllers and SD Card filesystem
  • PPU-RAM: 4KB Dual-port RAM, it's the interface RAM between the CPU and the PPU
  • CHRRAM: 128KB RAM, holds the custom background tiles and sprites graphics (in 8x8 pixel characters).
  • VRAM1, VRAM2: 128KB RAM (43008 bytes used), they are used to store the framebuffer and are written to by the PPU and read by the VPU.
  • PPU (Picture Processing Unit): Atmega1284, draws the frame to the framebuffers.
  • VPU (Video Processing Unit): Atmega324, reads the framebuffers and generates an RGB and PAL Sync signal.
  • SPU-RAM: 2KB Dual-port RAM, serves as an interface between the CPU and the SPU.
  • SNDRAM: 128KB RAM, holds PWM patches, PCM samples and FM synthesis instruction blocks.
  • YM3438: YM3438, FM Synthesis chip.
  • SPU (Sound Processing Unit): Atmega644, generates PWM-based sound and controls the YM3438.

The final specs

CPU:

  • 8-bit CPU Zilog Z80 operating at 10MHz.
  • 8KB of ROM for bootloader.
  • 56KB of RAM.

IO:

  • Reading data from FAT16/FAT32 SD Card.
  • Reading/writing to RS232 port.
  • 2 MegaDrive/Genesis-compatible controllers.
  • PS/2 keyboard.

Video:

  • 224x192 pixel resolution.
  • 25 fps (half the 50Hz PAL field rate).
  • 256 Colors (RGB332).
  • 2x2 virtual background space (448x384 pixels), with bi-directional per-pixel scrolling, described using 4 name tables.
  • 64 sprites with width and height 8 or 16 pixels with possibility of being flipped in X or Y axis.
  • Background and sprites composed of 8x8 pixels characters.
  • Character RAM with 1024 background characters and 1024 sprite characters.
  • 64 independent horizontal background scrolls on custom lines.
  • 8 independent vertical background scrolls on custom lines.
  • Overlay plane with 224x48 pixels with or without colorkey transparency.
  • Background attribute table.
  • RGB and Composite PAL output through SCART socket.

Sound:

  • PWM generated 8-bit 4 channel sound, with pre-defined waveforms (square, sine, sawtooth, noise, etc.).
  • 8-bit 8kHz PCM samples on one of the PWM channels.
  • YM3438 FM synthesis chip updated with instructions at 50Hz.

Developing for the Console

One piece of software written for the console is the bootloader. The bootloader is stored in the CPU-ROM, can occupy up to 8KB, and uses the first 256 bytes of the CPU-RAM. It's the first software run by the CPU, and its purpose is to show the programs available on the SD card. These programs are files that contain the compiled code and may also contain custom graphics data and sound data. After being selected, a program is loaded into the CPU-RAM, CHR-RAM and SPU-RAM, and then executed. The code of a program loaded into the console can take up the 56KB of RAM, except the first 256 bytes, and of course has to take the stack into account and leave space for data. Both the bootloader and programs for this console are developed in a similar fashion; here's a brief explanation of how these programs are made.

Memory/IO Mapping

One thing to note when developing for the console is how the CPU accesses the other modules, which makes the memory and IO space mappings crucial.

The CPU accesses its bootloader ROM and RAM through the memory space. CPU memory space mapping:

It accesses the PPU-RAM, SPU-RAM and the IO MCU through IO space. CPU IO space mapping:

Inside IO space mapping, the IO MCU, PPU and SPU have specific mappings.

Controlling the PPU

We can control the PPU by writing to the PPU-RAM, and we know from the information above that the PPU-RAM is accessible through the IO space from address 1000h to 1FFFh. This is what that address range looks like in more detail:

The PPU status can take the following values:

  • 0 - Internal graphics mode
  • 1 - Custom graphics mode (CHR-RAM)
  • 2 - Write to CHR-RAM mode
  • 3 - Write complete, waiting for CPU to acknowledge
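
Based on these states, a CHR-RAM upload plausibly follows a handshake along the lines of this sketch (the helper names and the data-port mechanism are assumptions, not the console's actual interface):

/* Sketch of the CHR-RAM upload handshake implied by the status values above. */
#include <stdint.h>

extern void    ppu_set_status(uint8_t status); /* hypothetical helpers that */
extern uint8_t ppu_get_status(void);           /* read/write PPU-RAM fields */
extern void    ppu_write_data(uint8_t value);  /* through the CPU's IO space */

void load_chr_ram(const uint8_t *gfx, uint16_t len)
{
    uint16_t i;
    ppu_set_status(2);            /* 2: enter write-to-CHR-RAM mode */
    for (i = 0; i < len; i++)
        ppu_write_data(gfx[i]);   /* stream the character data */
    while (ppu_get_status() != 3) /* 3: the PPU reports the write complete */
        ;
    ppu_set_status(1);            /* acknowledge and switch to custom graphics */
}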

As an example, this is how we can work with sprites. The console can render 64 simultaneous sprites, whose information is accessible through the CPU's IO mapping from address 1004h to 1143h (320 bytes); each sprite has 5 bytes of information (5 x 64 = 320 bytes):

  1. Miscellaneous byte (each of its bits is a flag: Active, Flipped_X, Flipped_Y, PageBit0, PageBit1, AboveOverlay, Width16 and Height16)
  2. Character byte (which character is the sprite in the page described by the corresponding flags above)
  3. Color key byte (which color is to be transparent)
  4. X position byte
  5. Y position byte

So, to make a sprite visible, we must set the Active flag to 1 and put the sprite at coordinates where it is visible (coordinates x=32 and y=32 put the sprite at the top left of the screen; less than that and it's off-screen or partially visible). Then we can also set its character and its transparent color.

For example, to make the 10th sprite visible we would set IO address 4145 (1004h + (5 x 9)) to 1, and then set its coordinates to, say, x=100 and y=120 by setting address 4148 to 100 and address 4149 to 120.
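
In C, the same example looks like the sketch below, where io_write is a hypothetical helper that performs an OUT to a 16-bit IO-space address:

/* Sketch: enabling and positioning the 10th sprite through the IO space. */
#include <stdint.h>

extern void io_write(uint16_t addr, uint8_t value); /* hypothetical OUT helper */

#define PPU_SPRITES 0x1004u /* first sprite record; 5 bytes per sprite */

void show_sprite_10(void)
{
    uint16_t base = PPU_SPRITES + 5u * 9u; /* 10th sprite -> 0x1031 (4145) */
    io_write(base + 0u, 1);   /* miscellaneous byte: set the Active flag */
    io_write(base + 3u, 100); /* X position (address 4148) */
    io_write(base + 4u, 120); /* Y position (address 4149) */
}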

Using Assembly to code

One of the ways to code a program for the console is using assembly language.

Below is sample code that makes the first sprite move and bounce off the corners of the screen:

ORG 2100h
PPU_SPRITES: EQU $1004
SPRITE_CHR: EQU 72
SPRITE_COLORKEY: EQU $1F
SPRITE_INIT_POS_X: EQU 140
SPRITE_INIT_POS_Y: EQU 124
jp main
DS $2166-$ ; pad so nmi starts at $2166 (presumably program base $2100 plus the Z80 NMI vector offset $0066)
nmi:
    ld bc, PPU_SPRITES + 3
    ld a, (sprite_dir)
    and a, 1
    jr z, subX
    in a, (c) ; increment X
    inc a
    out (c), a
    cp 248
    jr nz, updateY
    ld a, (sprite_dir)
    xor a, 1
    ld (sprite_dir), a
    jp updateY
subX:
    in a, (c) ; decrement X
    dec a
    out (c), a
    cp 32
    jr nz, updateY    
    ld a, (sprite_dir)
    xor a, 1
    ld (sprite_dir), a
updateY:
    inc bc
    ld a, (sprite_dir)
    and a, 2
    jr z, subY
    in a, (c) ; increment Y
    inc a
    out (c), a
    cp 216
    jr nz, moveEnd
    ld a, (sprite_dir)
    xor a, 2
    ld (sprite_dir), a
    jp moveEnd
subY:
    in a, (c) ; decrement Y
    dec a
    out (c), a
    cp 32
    jr nz, moveEnd
    ld a, (sprite_dir)
    xor a, 2
    ld (sprite_dir), a
moveEnd:
    ret
main:
    ld bc, PPU_SPRITES
    ld a, 1
    out (c), a  ; Set Sprite 0 as active
    inc bc
    ld a, SPRITE_CHR
    out (c), a  ; Set Sprite 0 character
    inc bc
    ld a, SPRITE_COLORKEY
    out (c), a  ; Set Sprite 0 colorkey
    inc bc
    ld a, SPRITE_INIT_POS_X
    out (c), a  ; Set Sprite 0 position X
    inc bc
    ld a, SPRITE_INIT_POS_Y
    out (c), a  ; Set Sprite 0 position Y
mainLoop:    
    jp mainLoop
sprite_dir:     DB 0

It's also possible to develop programs in C, using the SDCC compiler and some custom tools. This makes development quicker, although it can lead to less performant code.

Here is sample code with a result equivalent to the assembly above; this time I'm using a library to help with the calls to the PPU:

#include <console.h>
#include <stdint.h> /* for uint8_t */

#define SPRITE_CHR 72
#define SPRITE_COLORKEY 0x1F
#define SPRITE_INIT_POS_X 140
#define SPRITE_INIT_POS_Y 124

struct s_sprite sprite = { 1, SPRITE_CHR, SPRITE_COLORKEY, SPRITE_INIT_POS_X, SPRITE_INIT_POS_Y };
uint8_t sprite_dir = 0;
void nmi() {
    if (sprite_dir & 1)
    {
        sprite.x++;
        if (sprite.x == 248)
        {
            sprite_dir ^= 1;
        }
    }
    else
    {
        sprite.x--;
        if (sprite.x == 32)
        {
            sprite_dir ^= 1;
        }
    }
    if (sprite_dir & 2)
    {
        sprite.y++;
        if (sprite.y == 216)
        {
            sprite_dir ^= 2;
        }
    }
    else
    {
        sprite.y--;
        if (sprite.y == 32)
        {
            sprite_dir ^= 2;
        }
    }
    set_sprite(0, sprite);
}
void main() {
    while (1) {
    }
}

Custom Graphics

The console has predefined read-only graphics stored in the PPU firmware (one page of background tiles and another page of sprite graphics), but it is also possible for a program to use custom graphics.

The objective is to get all the necessary graphics into the binary form that the console's bootloader can then load into the CHR-RAM. To do this I start with several images already at the right size, in this case to be used as backgrounds in several situations:

Custom graphics are composed of 4 pages of 256 8x8 characters for the background and 4 pages of 256 8x8 characters for sprites, so I convert the images above into a PNG file for each page using a custom tool (eliminating duplicate 8x8 characters in the process):

Then I use another custom tool to convert each page to an RGB332 binary file of 8x8-pixel characters.

The result is a set of binary files composed of 8x8-pixel characters that are contiguous in memory (each one occupying 64 bytes, one byte per pixel).
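
The pixel packing itself is simple: a conversion tool reduces each 24-bit pixel to RGB332 by keeping only the highest bits of each channel, as in this sketch:

/* Sketch: packing 24-bit RGB into RGB332 (3 bits red, 3 bits green, 2 bits blue). */
#include <stdint.h>

uint8_t rgb332(uint8_t r, uint8_t g, uint8_t b)
{
    return (uint8_t)((r & 0xE0) | ((g & 0xE0) >> 3) | (b >> 6));
}

An 8x8 character is then just 64 of these bytes in a row.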

Sound

Wave samples are converted to 8-bit 8kHz PCM. Patches for PWM SFX/music can be composed using pre-defined instructions. As for the Yamaha YM3438 FM synthesis chip, I found that an application called DefleMask can be used to produce PAL-clocked music targeting the Genesis sound chip, the YM2612, which is compatible with the YM3438.

DefleMask can export the music to VGM, and another custom tool then converts the VGM into a homebrew sound binary.

All the binaries from all 3 types of sound are combined into a single binary file that can then be loaded to the SNDRAM by the bootloader.

Putting it all together

The program's binary, the graphics and the sound are combined into a PRG file. A PRG file has a header indicating whether the program has custom graphics and/or sound and the size of each, followed by all the corresponding binary data.

This file can then be put on an SD card, and the console's bootloader will read it, load it into all the specific RAMs and run the program as described above.
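
The exact layout of the PRG header isn't documented here, but based on the description it could look something like this sketch (field names and widths are assumptions):

/* Sketch of a possible PRG header; the real field layout may differ. */
#include <stdint.h>

struct prg_header {
    uint8_t  has_graphics;  /* non-zero if a CHR-RAM image follows the code */
    uint8_t  has_sound;     /* non-zero if a SNDRAM image follows as well */
    uint16_t code_size;     /* bytes to load into CPU-RAM (up to ~56KB) */
    uint32_t graphics_size; /* bytes to load into CHR-RAM (up to 128KB) */
    uint32_t sound_size;    /* bytes to load into SNDRAM (up to 128KB) */
};

The bootloader would read the header first and then stream each section into the corresponding RAM before jumping to the program.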

Using the emulator

To help with the development of software for the console, I've developed an emulator in C++ using wxWidgets. To emulate the CPU I used the libz80 library.

I've added some debugging features to the emulator: I can stop at a given breakpoint and step through the assembly instructions, and there's also some source mapping available if the game was compiled from C code. As for graphics, I can check what's stored in the tile pages/name tables (the background map that's the size of 4 screens) and what's stored in the CHR-RAM.

Here's an example of running a program using the emulator and then using some of the debugging tools.

Program Showcase

(The following videos show the console's video output to a CRT TV, captured with a cellphone camera; I'm sorry the quality isn't the very best.)

A BASIC implementation running on the console, using the PS/2 keyboard. In this video, after the first program, I write directly into the PPU-RAM through the IO space to enable and configure a sprite, and finally move it:

A graphics demo: this program bounces 64 16x16 sprites over a background with custom scrolling, with the overlay plane enabled and moving up and down, above or behind the sprites:

A sound demo showing the YM3438's capabilities as well as PCM sample playback; the FM music plus the PCM samples in this demo take up almost all of the 128KB of SNDRAM:

Tetris, using almost exclusively background tiles for graphics; the music uses the YM3438 and the sound effects use PWM sound patches:

In conclusion

This project was truly a dream come true. I have been working on it for some years now, on and off in my free time, and I never thought I would get this far into building my own retro-style video game console. It certainly isn't perfect; I'm still by no means an expert in electronics design, the console has way too many components, and it undoubtedly could be made better and more efficient (probably someone reading this is thinking exactly that). However, while building this project I've learned a lot about electronics, game console and computer design, assembly language and other interesting topics, and above all it gives me great satisfaction to play a game I've made on hardware I've designed and built myself.

I have plans to build other consoles/computers. In fact, I have another video game console in the making, almost complete: a simplified retro-style console based on a cheap FPGA board and a few extra components (not nearly as many as in this project, obviously), designed to be a lot cheaper and easier to replicate.

Even though I've written a lot about this project, there would be a lot more to talk about: I barely mentioned how the sound engine works and how the CPU interacts with it, and there's also a lot more that could be said about the graphics engine, the other IO available and the console itself. Depending on the feedback, I might write other articles focusing on updates, more in-depth information on the different modules of the console, or other projects.

Projects, websites and YouTube channels that gave me inspiration and technical knowledge:

These websites/channels not only gave me inspiration but also helped me with solutions to some of the difficulties I encountered in the making of this project.

If you've read this far, thank you. :)

And if you have any feedback to give or any questions, please comment below.




All Comments:

mito88(10000) 2 days ago [-]

beautiful project.

How does one obtain a Z80 if it's not manufactured anymore?

pkiller(10000) 1 day ago [-]

Thanks :)

Actually, they are still manufactured to this day; one thing I didn't mention in the blog post is that you can get a brand new Zilog Z80 today, and the 6502 (the CMOS version, the 65C02) is also available. You can probably get them from all the major electronic component stores (Digikey, Mouser, etc).

iheartpotatoes(10000) 3 days ago [-]

Oh, to build 20MHz systems again! ... no picosecond timing skews to stress over, no ground plane worries, no trace-related design rule violations, no harmonic noise issues, no dynamic bus inversions to prevent victim/attacker degradation, no thermal issues... I spend so much time debugging GHz multi-layer circuit boards that I forget how 'easy' 20MHz digital circuits can be. This guy's project is truly inspiring!

pkiller(10000) 3 days ago [-]

Thank you :)

I would love to learn more and be able to work with faster circuits like you. Yes, it's way easier to work with 'slow' digital circuits; I had very few issues using breadboards and really long wires that probably look cringy to you and others.

cushychicken(3768) 3 days ago [-]

No pesky compliance issues eating into your precious timing margins...

Rooster61(4022) 3 days ago [-]

This is a wonderful project. Well done. You might just inspire this software engineer to take his first crack at hardware. :)

One somewhat personal question, if you don't mind. You say you are Portuguese. Is English a second language for you? I don't see the telltale signs of a Portuguese -> English speaker (I have a lot of experience interacting with Portuguese speakers, and their English mistakes are pretty uniform due to the specific differences between the languages, esp regarding prepositions and tense). Your article, as many have noted, is beautifully written even for a native English speaker.

pkiller(10000) 3 days ago [-]

Thank you, if it's something you like, you should try a hardware project, for sure :)

And yes, I am Portuguese, born and raised, and I do know what you mean: there are mistakes that are common for Portuguese people speaking English, and I do make an effort not to make them. However, I know my English is far from perfect, and it's always easier in written form. I think the fact that I've always spent too much time watching American/British movies and TV shows made me pay more attention to how English should be spoken :).

But anyway, thank you for saying I speak perfect English; it's actually a great compliment. :)

bantunes(3889) 3 days ago [-]

It's almost like Portuguese people are not a uniform block, right?

nonamenoslogan(10000) 4 days ago [-]

THIS kind of project is why I've replaced ./ with HN in my favorites bar. Thanks for sharing, this is an incredibly cool project!

cristoperb(3541) 3 days ago [-]

In case you don't know about it, see also: https://hackaday.com/

stevenjohns(4029) 4 days ago [-]

This is something I wish I'd be able to do one day, but every time I look into getting into electronics I get overwhelmed.

Can you recommend the materials you used when learning? Books or resources etc.

This was a really interesting read, thanks for sharing.

deniska(10000) 4 days ago [-]

I quite enjoyed Ben Eater's playlist on building a breadboard computer; it made a lot of electronic concepts click for me.

https://www.youtube.com/playlist?list=PLowKtXNTBypGqImE405J2...

tdy721(10000) 1 day ago [-]

Start small. Here's a shift register: http://www.ti.com/product/SN74LV8153

It turns a serial signal into parallel outputs.

They are often found in kits like this: https://www.amazon.com/Smraza-Breadboard-Resistors-Mega2560-... (research a better one than this)

Get a Raspberry Pi (any controller really, get what you want)

Build something useless like this: https://www.youtube.com/watch?v=Lgnopk1qmkk

There are lots of learning kits available on Amazon. That's how I 'taught myself' hobby electronics. I don't know much, but this stuff is magic! So it's impressive to most people just to implement a simple shift register! Blinkenlights for the win!

snazz(3727) 4 days ago [-]

I'm no expert whatsoever, but I'd recommend getting an Arduino (the cheap knockoffs aren't horrible, but if you get one, be sure to donate a couple of dollars to the legit Arduino people), a couple of big breadboards, a jumper wiring kit (preferably with all of the different inch fractions so your wiring stays neat), and some basic components (resistors, capacitors, LEDs, buttons, simple ICs, etc).

Once you have all that, get the Arduino to turn on an LED when you press a button on the breadboard. Incorporate some kind of sensor, if you want. After that experience, you should be ready to tackle a bigger project. Now that you have a test bed, you can incrementally build circuits from books and the Internet and actually understand what's going on.

abhishekjha(3853) 4 days ago [-]

+1 from me. I am very interested in getting into hands-on electronics, but most of the time I fall a little short, as there is no support around where I live and I never really got a start. Would definitely love some input from people who are good at this.

pkiller(10000) 4 days ago [-]

Thanks :)

Well, nowadays there's a lot of material online. I did find one book interesting, though: 'Make: Electronics'; it teaches the basics in a very nice way.

Aside from that, the key, I guess, is to start small and incrementally go for more and more complex stuff, moving towards things you like (video games, robotics, etc). If you consistently achieve small victories, I think it's enough to stay motivated. Every time I managed to overcome something, I took it as a victory (generating the video signal, getting a program to run on the CPU, etc), and this helped me get to the next problem to solve.

I started with discrete logic and an Arduino, went on to programming microcontrollers similar to the one on the Arduino, and moved from there to CPUs, RAMs, FPGAs, etc.

Also Youtube channels like EEVblog and GreatScott! are great learning resources.

PyroLagus(10000) 4 days ago [-]

The first half of nand2tetris[1] (projects 1-6), which corresponds to their first Coursera course[2], is a great introduction to the digital side of electronics. The second half is more about general-purpose computers and OS specifics, but probably still pretty useful. I've only finished up to chapter 5 as of right now, so I can't say anything definite.

[1] https://www.nand2tetris.org/

[2] https://www.coursera.org/learn/build-a-computer

elliottcarlson(1140) 4 days ago [-]

There are the XGameStation boards that you put together yourself, which give you a helping hand in the initial steps of building your own retro console: http://www.ic0nstrux.com/products/gaming-systems/game-consol...

vvanders(2963) 4 days ago [-]

It's dense, but The Art of Electronics is the one book I keep coming back to. It's better as a reference than a tutorial; once you've played around with an Arduino or the like, I found it great for opening up the 'what next?' part of EE.

noonespecial(2441) 4 days ago [-]

People invariably ask a form of this question every time an impressive project like this is posted.

For what it's worth(1), I think that 'learn a bunch of stuff -> go build something cool on the first try' isn't quite the right approach. I think you have to set yourself a series of tasks and then fight like hell to figure out how to accomplish them. Start with a single blinking LED. Just google 'how to blink an led' and try to do it a few different ways.

In short the only 'material' you might use to get started is google, by typing in 'how do I...' while chasing modest goals that look like they might be in roughly the right direction. If there is a magic book out there that will 'teach you electronics', I haven't found it yet. It sounds a lot like the sort of book that might 'teach you astronauting'.

'Electronics' is broad enough that it might be more like learning a language than learning a skill.

(1)My own meandering opinion

larrydag(3906) 4 days ago [-]

Start with an Arduino. Really simple circuits, and you get to learn on a microcontroller. It's fun too. For instance, learn how to turn an LED on and off, then go from there.

https://store.arduino.cc/usa/arduino/most-popular

Yhippa(2514) 4 days ago [-]

I know this has been said before, but this is one of the coolest things I've seen posted here on HN. My undergrad is in computer engineering, and this post brought back a flood of memories of writing VHDL and doing design in Mentor Graphics.

I'm going to read this a few more times. It's like reading a good book about my hopes and dreams.

pkiller(10000) 4 days ago [-]

Thanks :) (I don't mind people saying it over and over again, believe me :P)

I actually started building a much smaller video game console with a cheap Cyclone II FPGA board (a friend of mine wanted one, and I tried to figure out how I could make something smaller and cheaper but still retro and cool) and I'm using VHDL. It's not exactly easy to learn, but it's pretty cool, though I find it really hard to find good resources to learn from.

If I was to start all over again, I would use an FPGA for the graphics, for sure, at the time I just didn't know how.

And thanks again for the comment :)

kotrunga(3794) 4 days ago [-]

If you wrote a book explaining how to do this, teaching the concepts, etc... I think a lot of people (or at least me) would be very interested!!!

Great job

iooi(4009) 4 days ago [-]

I can highly recommend nand2tetris. It doesn't go as far as this post, but it covers a lot and is a ton of fun to work through.

dejaime(10000) 4 days ago [-]

I would definitely buy one. And also buy the companion DIY kit.

TomAnthony(1374) 4 days ago [-]

Yeah, absolutely. Especially coming from a modern software developer's POV. I always think it would be fun to get into electronics, but it just seems like so much to take in.

pkiller(10000) 4 days ago [-]

Thanks :)

I've kept this project to myself and would only describe it occasionally to some coworkers and friends. I never thought that many people would find it cool, and I always thought that people with more knowledge than me would find a lot of flaws. So it's really awesome to read yours and all the other comments. :)

I don't think I could write a book, but I have thought about writing other posts, giving more detail on certain aspects of the console. There's really a lot I could say about it, and I only realised how much there was to say when I started writing this.

pjc50(1486) 3 days ago [-]

Hmm. What do people reckon the price point and market is like for this? Are people still willing to pay $30 for a book, or is it more of a $3 ebook?

bluedino(2207) 4 days ago [-]

Check out the XGameStation that André LaMothe created about 15 years ago. There's an eBook that goes along with it.

He wrote a handful of game programming books in the 90's as well.

ChuckMcM(733) 4 days ago [-]

It is sad that TAB Books is no longer a thing; they totally would have published the 'BUILD YOUR OWN VIDEO GAME' book. That said, I bet you could get No Starch Press to publish it.

onatm(3916) 4 days ago [-]

Great work! I wondered what resources you used when you started learning electronics. I'd really appreciate it if you shared them.

pkiller(10000) 4 days ago [-]

Like I wrote in another comment, there's a great book called 'Make: Electronics'. And you can get started with an Arduino kit; it comes with a book and several components. After that, when you feel comfortable, you can move on to more advanced stuff, depending on what you're interested in.

There are plenty of videos online for almost any topic in electronics, and projects online using all kinds of components and for all kinds of purposes.

Check out EEVblog (the forums are good, and there are some videos for beginners), and also the YouTube channel GreatScott!.

And thanks :)

wazoox(3578) 3 days ago [-]

You should talk to The 8-Bit Guy; he's in the process of creating his own dream 8-bit machine, and it looks like you've beaten him :)

Corrado(2090) 3 days ago [-]

I agree! I was just watching one of his videos the other day on creating a brand new 8-bit computer and I think he would be super interested in your work.

https://www.youtube.com/channel/UC8uT9cgJorJPWu7ITLGo9Ww

eldavojohn(10000) 3 days ago [-]

I never comment on here. This was cool enough for me to try to figure out my password and log in and say that you are the tinkerer/creator I wish I could be.

pkiller(10000) 3 days ago [-]

Thank you so much :)

I'm actually a bit overwhelmed with all the comments and I'm trying to answer at least a few of them.

Just wanted to say that, even without knowing you, I think you could absolutely be a tinkerer or creator, just choose something you would like to build, start small and go from there. :)

Accacin(10000) 4 days ago [-]

Very impressed and you come across as such a nice person too! All the best to you.

pkiller(10000) 4 days ago [-]

(I'm actually secretly a horrible person :P)

Thanks for the comment :)

pjmlp(363) 4 days ago [-]

Great achievement! Muito bom!

pkiller(10000) 4 days ago [-]

Obrigado :)

lewiscollard(10000) 3 days ago [-]

At the risk of being more 'me too' about this, I am in awe of the hard-earned multi-disciplinary skills that went into making this. You should be proud of yourself :) Great project, and a great read.

pkiller(10000) 3 days ago [-]

Thanks for the comment :)

I am proud, and honestly, reading all these comments from people like you saying awesome stuff and saying this inspires them makes me even prouder :)

indigo945(10000) 2 days ago [-]

This is very cool. Doing this all on your own, especially since you're not a computer engineering professional, is astonishing.

The one thing that irks me about the console design is the amount of RAM it needs. In the age when 8-bit consoles were actually being built, RAM was the most expensive part of the device by a wide margin. It is no accident that the Atari 2600, for example, only had 128 bytes of main memory, and even the very popular NES/Famicom ran on only 2KB of main memory. This 'retro' design, by contrast, employs a lot of RAM chips in a variety of places, even where they are not strictly needed: the CPU and PPU could be connected with different technologies (as the article hints at), and double buffering via a second VRAM chip is the kind of feature that classical home consoles would never use, for cost reasons.

Don't get me wrong though, it's still an amazing project!

pkiller(10000) 2 days ago [-]

Yeah, I did put in more memory than the old-school consoles had, absolutely. However, the RAM in those days was mostly only used for data, because they also had the cartridge ROM (or ROMs) to store the code. In my case I need to load the code into RAM, so I would always need a larger RAM.

But yeah, they also had fewer colors to work with, and palettes (which would have been costlier to implement in a software renderer). And this CPU is also faster than those consoles'. I did give myself a more comfortable system in terms of RAM and computational power.

But my aim was for a machine that would sit somewhere between the 3rd and 4th generation.

I understand the double-buffering point as well; if I had been working with an FPGA I would not have gone with frame buffers, for sure, but that was the solution I found at the time with the components I was using, and it worked.

The attention this has gotten, especially here on HN, is still a bit unbelievable to me (and amazing, of course). I don't want people to think this is an optimal console design or anything. This is by no means a perfect project or 'the way to go' in terms of homebrew video game consoles; it's just the way I found to make one while learning electronics along the way.

And thanks :)

martin1b(10000) 3 days ago [-]

Very impressive. My first thought is: why? Just use a Pi with RetroPie. However, you wanted to be down at the metal. That's quite a challenge, particularly when you work alone.

I'm always impressed when someone is skilled at both hardware and software to make a finished product. It's Woz-like.

Very impressive work.

rafaelvasco(10000) 3 days ago [-]

Because the journey is more important than the destination.

nkrisc(4021) 3 days ago [-]

The same reason people run a marathon instead of just driving the route.

korethr(3494) 3 days ago [-]

Okay, this is cool as hell.

I've been reading up on the architectures and programming of computers and consoles from the 80s and early 90s lately, and have been itching to do a similar project of my own, but have been kind of floundering on where to get started. The fact that you pulled this off inspires confidence that this sorta thing can be done.

Have you considered doing a series of blog posts going into more detail on each section of the console and your journey in getting each bit working, describing both failures and successes? I think that would be instructive to other people who want to do similar homebrew computer/console hacking.

I was kind of surprised that your PPU design is frame-buffered instead of line-buffered, but I suppose I shouldn't be. I imagine the PPU chips of old were line-buffered because RAM was expensive in the 80s, and a line buffer was a good enough interface to drive a scanline-based display. In my recent reading about the architecture of 3rd, 4th and 5th gen consoles, I noticed that the 5th gen systems became fully frame-buffered, as memory had become cheap and fast enough by the early-mid 90s. And a frame buffer certainly feels a bit simpler and more intuitive to think with than a scanline buffer.

pkiller(10000) 3 days ago [-]

Thank you :)

I am considering doing a series of blog posts. I don't know how often I could write them or whether I could keep them going, but I will try. I'm not big on social media and I had never written posts or anything like this before this one, which is weird, I'll admit, so all this is kind of new, but I think I'll give it a go.

Old systems used line buffers, like the Neo Geo, or no buffers at all, like the NES. So yeah, going with a frame-buffered approach was definitely easier, but that was not the only reason I chose this route. I had a lot of restrictions: I was learning a lot of stuff, I didn't know how to work with FPGAs, and I stuck to DIP-package ICs that I could put on breadboards and experiment with. That's why I picked the AVR microcontrollers, which are awesome but have their disadvantages. They have good performance (20MHz) but not many pins, and I had to bit-bang things like external RAM access (the PPU actually accesses 3 RAMs with only 32 IO pins available), which means it takes 'some time' to access external memory. That's why I chose 2 microcontrollers for video instead of just one: one of them can take 'all the time in the world' (one frame) to fetch information and write a frame to the frame buffer, while the other generates the video signal and dumps the previous frame to the TV. Connecting the two, I felt a double buffer was the better fit.

I definitely would have preferred doing a more 'traditional' non-buffered render system, but this was the solution I found with what I had to work with.

I hope this serves as a good explanation, and maybe I'll get to explain these details better in another post :).

newnewpdro(10000) 4 days ago [-]

Very cool project; I just wish the images on the website weren't many megabytes apiece. They can't even all load on my slow internet connection without timing out.

It's unfortunate how often people share files directly from their camera without first compressing and resizing them appropriately these days.

pkiller(10000) 4 days ago [-]

Thanks :)

And yeah you're right...my bad...

(EDIT) It should be a bit better now, thanks for the warning

JabavuAdams(1690) 3 days ago [-]

Shine on, dude, shine on. Inspiring! I love the fact that you've been working on this for a long time, yet 60 seconds ago I was completely unaware of it. What other amazing things are amazing people working on out there?

drinfinity(10000) 3 days ago [-]

Amazing how the world keeps going even when you're not paying attention huh?

ai_ia(4016) 3 days ago [-]

Wow. This is incredible. Bookmarked, and I'd be very interested if you could do a series of blog posts, man.

This is amazing.

pkiller(10000) 3 days ago [-]

Thanks :)

I'm gonna try and keep posting.

NikkiA(10000) 4 days ago [-]

I'm surprised you didn't implement a simple 8K bank-switching scheme to utilise the rest of the 128K chip; it's really just a handful of 74244 buffers and a 74138 decoder hung off an IO port.

pkiller(10000) 4 days ago [-]

I thought about it, I really did; however, I really felt I didn't need the extra RAM, especially for the kinds of programs and games I was aiming for.

And in this case 'a handful of 74244s' is a lot more complexity, and it got to a point where I really wanted to minimize the number of ICs I used.

Also, I felt bank switching added complexity when developing for it. This way I can write a C program that accesses data without having to worry about the code being split across more than one bank, or the data being in a different bank than the code, etc.

It's a nice catch though, thanks for the comment.

sehugg(1822) 4 days ago [-]

Nice job! Are you going to do more designs in the future? Maybe Verilog/VHDL based?

pkiller(10000) 4 days ago [-]

Thanks :)

Yeah, I already have another ongoing project based on a cheap FPGA board with a Cyclone II (quite old, but it does the job); the idea is to keep it cheaper and much less complex.

The FPGA has a Z80 implementation, video and sound, and handles the game controllers. An external Atmega328 is used to handle the SD card, and then there's just RAM, an IC for composite video and an op-amp.

I also have some ideas for a better console, maybe a generation ahead, and an 8-bit computer, but... these are just ideas; I doubt I'll actually get around to doing them.

ChicagoBoy11(4013) 3 days ago [-]

This is simply incredible. I teach high school and elementary kids, and the thing I'm always telling them and their parents is how the added complexity and modern design of software and hardware have made it so challenging for kids to take a 'peek' under the hood. Projects like this are such a wonderful way to really spark that curiosity about hardware and software. So inspirational!

osrec(3178) 3 days ago [-]

You see this with grown-up devs too. Many of them have only ever learnt to code with a full suite of build tools such as webpack, grunt and npm. They seem lost without these things, and only a few really know what's going on under the hood. The best devs I've worked with feel comfortable getting things done even when those tools are taken away.

slackfan(10000) 3 days ago [-]

It's not really 'retro' if you've been building it in $current-year.

/pedant

Retra(10000) 3 days ago [-]

Retro doesn't mean 'old.' It is a style.

kgwxd(3425) 4 days ago [-]

This is the coolest thing I've seen posted on HN in years, very cool work.

I've been playing with Atari 2600 programming on and off for the past few years and it is so fun programming directly against the specific hardware. I can't help but occasionally wonder if I could piece together a similar system, but I have 0 experience with electronics. I can only imagine how satisfying it must be to actually pull it off.

bredren(3996) 4 days ago [-]

Agreed, this is dank. The successful wiring alone is noteworthy.

pushpop(10000) 3 days ago [-]

There is something weirdly satisfying about doing something purely for the sake of doing it. So many people start projects hoping they can turn them into a business or gain stars on GitHub; you see fewer and fewer people hacking stuff together just for personal enjoyment.

I think we need more of that too. Everyone seems to be on a quest for perfection: perfect code, 3D-printed casing, everything intellectualised to the nth degree. There's a lot to be said for the 'rough and ready' approach of experimentation. However, to do that you really need to be in pursuit of personal gratification, because the internet is a harsh bitch for pointing out one's mistakes.

pkiller(10000) 4 days ago [-]

Thanks a lot :)

It really is super satisfying to get this far, I still turn it on sometimes just to check if it really works.

And I also had zero experience with electronics (and I still don't know as much as I should...), so yeah I think you could piece together something like this. :)

snazz(3727) 4 days ago [-]

As much as I enjoy quickly building circuits on a breadboard like in many of the photos, they're hell to debug, because there's so much that can go subtly wrong. It's much easier with digital than analog circuits, of course, but it can still be crazy hard to logic-probe every connection to get it all to work. I have spent far too much time fixing little bugs in breadboarded circuits. The toughest issue I encountered was when someone melted through part of a big, expensive breadboard with a soldering iron and it caused shorts on the other side of the board. I couldn't even trust that the holes that were supposed to be connected actually were, and I sure wasn't going to copy the project over onto a new breadboard.

However, I'm not sure there is an easier way to quickly prototype electronics. Opting for more integrated ICs and a lower BoM count helps, because there's less wiring to do in the first place.

janekm(10000) 4 days ago [-]

I would argue that with the extremely low cost of PCB manufacturing (and, if you have access to it, assembly) it's easier now to just lay out a PCB and get it made. The trick is to build up a library of circuit 'modules' (power supply section, battery charging, video output, etc.) that get copy-pasted together. Of course there's the delay of waiting for the PCBs, which can be frustrating for a hobby project (I was lucky enough to live in SZ for a while, where getting PCBs back 2 days after ordering is standard). But parallelising projects can be an effective way of dealing with that, and the variety of working on different things (and the joy of getting a PCB in the mail) helps mitigate it.

pkiller(10000) 4 days ago [-]

I totally agree. The reason I stuck to DIP ICs was that I could fit them into breadboards, and I thought that was easier than any alternative. I was incredibly lucky that all the breadboards were functional. There were a few times when I would accidentally disconnect a wire and spend the next 2 hours trying to figure out what was wrong, all without a logic analyser, and without even an oscilloscope until much later in the build (they are not that cheap). And yes, the project sure is complex, too complex, but I guess I was lucky, and I'm glad it's working :).

xobs(10000) 1 day ago [-]

Copper tape on cardboard.

Cardboard is a fantastic material. You can draw the schematic on it, and then run copper tape along the schematic. It's easy to annotate, since you just need a pen. It can handle much more current than a breadboard, since the copper tape has a lot more metal than thin wires. You can solder directly to it, and with tape (or some other insulating material) you can cross wires without shorting them.

You can also work with SMT ICs. Using a pair of scissors, you can make your own pads and solder to them. I connected an SO8 to read off the SPI data using copper tape. I built an amplifier using a spare Novena amplifier IC on cardboard with copper tape. And I have a level shifter done on cardboard with copper tape.

Bunnie has a good article on it here, including an example power regulator that's handling 2A on cardboard: https://www.bunniestudios.com/blog/?p=5259

abraae(4032) 4 days ago [-]

When I was a lad, wirewrap was the thing. You can prototype quickly, and make changes easily too.

(It seems like it's still a thing in fact http://www.nutsvolts.com/magazine/article/wire_wrap_is_alive...)

jason0597(10000) 2 days ago [-]

This is a brilliant project. I've toyed around with electronics and microcontrollers, and I once wanted to build a full 'desktop' PC with an STM32, with PS/2 keyboard input and VGA output. Unfortunately, school took over and I left it.

One thing I am curious about, though, is whether you could use a big project like yours for employment. E.g., if you had absolutely no formal qualifications and applied to a (relevant) company with such projects on your CV, would they consider you?

pkiller(10000) 2 days ago [-]

First of all, thanks :) And if you do find some time, that project seems really cool. I never really got to work with the STM32 micros, but they are really powerful and cool.

Answering your question: even though this is an electronics engineering project, I don't feel qualified to work with electronics professionally; there's a lot of knowledge I lack. I do work as a software engineer, though, and I've done a couple of interviews myself to hire people, and I do believe a project like this helps when applying for a job. I'm not saying it completely replaces qualifications, but it might at least give you an edge. I'm seeing this as someone who would want to hire someone skillful. A technical project is, at the very least, a nice thing to have in a CV. Of course this is just my opinion. And if you feel like applying for a job based on your projects, just give it a go.

One piece of advice I can give: if you do have projects of your own, big or small, don't do what I did and keep them to yourself for years :P. I really had no idea so many people would appreciate my pet project.

bookofjoe(532) 3 days ago [-]

I hope Woz sees this and says a word or three here.

pkiller(10000) 3 days ago [-]

Wozniak is a legend and this is nowhere near the quality of his work. The way he designed the Apple II was just amazing.

Thanks for the comment :)

filmgirlcw(3925) 3 days ago [-]

This is incredible! Thank you for posting and outlining all your work! As a fellow 80s/90s kid, this is extremely my shit.

filmgirlcw(3925) 3 days ago [-]

Follow-up — you've inspired a colleague and me to do something similar! I read this, iMessaged her the link and said 'we have to do this' (she has significant hardware experience. I do not.) and she's in total agreement!

So thank you again b/c this is legit inspiring and exciting!

kriro(3942) 1 day ago [-]

'''Even though I had no experience, I said to myself "why not?", bought a few books, a few electronics kits and started to learn what I felt I needed to learn. '''

Would you mind sharing what books (and kits) you decided to buy? I'd like to work on my electronics a bit as well :)

pkiller(10000) 1 day ago [-]

Like I've said in a few other comments (I've been trying to answer some of the comments, everyone has been really nice and really interested), one book that got me started was 'Make: Electronics', and I believe there's a second one now; I really like how the book explains the very basics in a practical way. Other than that, there's a lot of info online nowadays. EEVblog, aside from the forums, has a nice YouTube channel with videos on topics that range from beginner to advanced; GreatScott! is another one that explains things really well. Ben Eater also has a very interesting channel. Also check out other people's projects similar to the things you'd like to make. As for kits, I started with the Arduino starter kit, which comes with a project book and a few varied components; after that I started buying components locally or from the major international component shops, going towards the things that interested me most in electronics. I hope this helps :).

forinti(10000) 4 days ago [-]

It's interesting how this kind of project has come within the reach of an individual's pet project (a smart individual, sure, but not a company).

I guess that the availability of information and materials through the internet has helped a lot. And also more people have knowledge in electronics and programming.

Great job, Sérgio. É muito giro ('It's really cool').

krmboya(3478) 4 days ago [-]

Interesting perspective. I wonder what kinds of projects a motivated individual would be able to tackle say, 20-30 years from now.

pkiller(10000) 4 days ago [-]

I agree (and thanks for calling me a smart individual :) )

Nowadays it's so much easier to get information on how to build something apparently very complex. You don't need to go to a library or seek out and talk to experts; you have access to all of that through the internet in forums, articles, posts and videos.

Also it's easy nowadays to order all the things you need for a project online.

Thanks :)

zokier(3679) 4 days ago [-]

Interestingly enough, looking at the HW, there is not much there that wouldn't have been feasible for a hobbyist a decade ago. But of course the community and the available information have absolutely exploded during that period.

geowwy(10000) 4 days ago [-]

If you're interested in this you might be interested in 8-bit Guy's new computer: https://www.youtube.com/watch?v=ayh0qebfD2g

83457(3979) 3 days ago [-]

Yeah, his project was the first thing that came to mind when I saw this.





Historical Discussions: Firefox Send: Free encrypted file transfer service (March 12, 2019: 2029 points)

(2029) Firefox Send: Free encrypted file transfer service

2029 points 6 days ago by dnlserrano in 3892nd position

blog.mozilla.org | Estimated reading time – 3 minutes | comments | anchor

At Mozilla, we are always committed to people's security and privacy. It's part of our long-standing Mozilla Manifesto. We are continually looking for new ways to fulfill that promise, whether it's through the browser, apps or services. So, it felt natural to graduate one of our popular Test Pilot experiments, Firefox Send, send.firefox.com. Send is a free encrypted file transfer service that allows users to safely and simply share files from any browser. Send will also be available as an Android app in beta later this week. Now that it's a keeper, we've made it even better, offering higher upload limits and greater control over the files you share.

Here's how Firefox Send works:

Encryption & Controls at your fingertips

Imagine the last time you moved into a new apartment or purchased a home and had to share financial information like your credit report over the web. In situations like this, you may want to offer the recipient one-time or limited access to those files. With Send, you can feel safe that your personal information does not live somewhere in the cloud indefinitely.

Send uses end-to-end encryption to keep your data secure from the moment you share to the moment your file is opened. It also offers security controls that you can set. You can choose when your file link expires, the number of downloads, and whether to add an optional password for an extra layer of security.

Choose when your file link expires, the number of downloads and add an optional password

Share large files & navigate with ease

Send also makes it simple to share large files – perfect for sharing professional design files or collaborating on a presentation with co-workers. With Send you can quickly share files up to 1GB. To send files up to 2.5GB, sign up for a free Firefox account.

Send makes it easy for your recipient, too. No hoops to jump through. They simply receive a link to click and download the file. They don't need to have a Firefox account to access your file. Overall, this makes the sharing experience seamless for both parties, and as quick as sending an email.

Sharing large file sizes is simple and quick

We know there are several cloud sharing solutions out there, but as a continuation of our mission to bring you more private and safer choices, you can trust that your information is safe with Send. As with all Firefox apps and services, Send is Private By Design, meaning all of your files are protected and we stand by our mission to handle your data privately and securely.

Whether you're sharing important personal information, private documents or confidential work files you can start sending your files for free with Firefox Send.




All Comments: [-] | anchor

old-gregg(1344) 6 days ago [-]

If relevant Mozilla people are here: Send does not work if 'Delete cookies and site data when Firefox closes' checkbox in FF preferences is checked. Even the page doesn't load [1]. It surely is a bug, because I am not closing Firefox.

That checkbox is #1 reason I only use Firefox.

[1] Developer console log output: 'Failed to register/update a ServiceWorker for scope 'https://send.firefox.com/': Storage access is restricted in this context due to user settings or private browsing mode. main.js:38:10 SecurityError: The operation is insecure.'

_rlx_(10000) 6 days ago [-]

This is a current Firefox restriction: https://bugzilla.mozilla.org/show_bug.cgi?id=1413615

RJIb8RBYxzAMX9u(10000) 6 days ago [-]

You should be able to whitelist https://send.firefox.com/ with the 'Manage Permissions...' button right next to that option.

I block _all_ cookies except for a small list of sites (like HN...).

ihuman(2787) 6 days ago [-]

Does Firefox Send work on browsers besides Firefox for sending and receiving files? It's blocked at my office, so I can't test it.

pizzapill(4029) 6 days ago [-]

The page states that it'll be available on all browsers, and an Android app is going to be released later this week.

fzzzy(10000) 6 days ago [-]

Yes. Tested on Chrome, Safari, and Edge.

romantomjak(3949) 6 days ago [-]

I really don't understand why they didn't share a link to the repository in the article. For anyone who's interested - here it is: https://github.com/mozilla/send

Cyphase(4030) 6 days ago [-]

It's because this blog is for mainstream audiences who don't know what GitHub is and might be scared of all that code-y stuff if they accidentally clicked on it.

huhtenberg(996) 6 days ago [-]

Very clean and nice, but how is this financed?

That is, who's paying for the server storage and the bandwidth?

mzs(2144) 6 days ago [-]

We'll find out by the end of the year »

Secondary - In support of Revenue KPI

We believe that a privacy respecting service accessible beyond the reach of Firefox will provide a valuable platform to research, communicate with, and market to conscious choosers we have traditionally found hard to reach.

We will know this to be true when we can conduct six research tasks (surveys, A/B tests, fake doors, etc) in support of premium services KPIs in the first six months after launch.

https://github.com/mozilla/send/blob/master/docs/metrics.md

Vinnl(704) 6 days ago [-]

Presumably Mozilla, just like they do for the sync and Web Push servers.

kpcyrd(3237) 6 days ago [-]

mozilla

kgwxd(3425) 6 days ago [-]

If they can't keep up, at least we'll always have the code: https://github.com/mozilla/send

patrickxb(3834) 6 days ago [-]

I don't understand how they can afford the bandwidth...

If this were on AWS it would be around $0.09 per GB for downloads.

sirsuki(3740) 6 days ago [-]

First off, Mozilla believes in the service. Mozilla itself gets funding from donations and corporate backing (I think). The cost of bandwidth is small compared to other file share sites in that the files stored are temporary. The transient nature of the files means that the max storage space needed is relative to the concurrent number of users. Bandwidth also. That means sans a very clever DDoS their expenses should be manageable compared to say Google Drive, Dropbox, or MS One Drive.

I remember sending a signed PDF via Firefox Send and was at first horrified when I realized I couldn't get the file back after 24 hours but then relieved knowing that the recipient got it and then it disappeared from the internet. Very cool!

navaati(4032) 6 days ago [-]

I must say I am disappointed.

I thought this would be some cool realtime system to send from browser to browser, using WebRTC or something. Something that doesn't involve them paying for file servers, by the way.

I believed in Mozilla! But no, here we are, and I just don't see the difference between this and Mega.

EDIT: except for the auto-deletion trick that addresses the piracy problem. But still...

gsich(10000) 6 days ago [-]

But that would require more brains and effort. Since many users are usually behind a NAT, some NAT traversal is necessary. Combined with robust detection (for shitty networks) and fallback to 'normal' servers ... you get the idea.

cmurf(1591) 6 days ago [-]

Another neat feature actually built into Firefox is Take a Screenshot. To the right of the URL field, in the three dots menu. Option to save it locally, or save in the cloud with a URL with some expiration options. Sorta like a pastebin for screenshots.

It only takes screenshots within the confines of a Firefox window.

fzzzy(10000) 6 days ago [-]

Glad you like it (I worked on it). Just a side note, the cloud service will be going away in the future, but the ability to save it locally will remain.

intellent(4023) 6 days ago [-]

Is there a simple way to get the direct URL of the file (e.g. to use in wget CLI calls)?

ubercow13(10000) 6 days ago [-]

The file is decrypted in client-side JavaScript so presumably no

jasonjayr(3566) 6 days ago [-]

Is the source available for this? A self-hosted version of this would be nice...

(Update: Yep, just found it: https://github.com/mozilla/send, just before the comment below was posted :))

nickik(2715) 6 days ago [-]

Anybody have a nice docker version to run this at home?

techaddict009(2163) 6 days ago [-]

Looks more like wetransfer.

Oras(4027) 6 days ago [-]

True, without the email. Actually, I like WeTransfer for its emails and notifications when the recipient has downloaded the attachments.

Sammi(10000) 6 days ago [-]

Open source peer-to-peer solution in the browser using WebRTC: https://file.pizza/

krferriter(10000) 6 days ago [-]

Wow, that's really neat. The downside is it only works while the page stays open on the uploader's machine, while send.firefox.com uploads the file for a limited time to a central server, so you can close the tab before the recipient downloads it.

hprotagonist(2783) 6 days ago [-]

It doesn't exactly meet the needs of 'sending files to a non-technical person', but Magic Wormhole [0] has been truly great for flipping files around between me and anyone who is capable of being trusted to run `pip install --user pipe && pipe install magic-wormhole`. This is by no means everyone, but it's been very useful quite often.

[0] https://magic-wormhole.readthedocs.io/en/latest/

cherrypepsi(10000) 6 days ago [-]

I remember elementaryOS had a GUI for this in its app store. Never got around to trying it; Linux is not well known in the consumer world, let alone elementary.

asutekku(10000) 6 days ago [-]

I have no clue why you would suggest a tool that requires using a Linux command line right after saying Firefox Send doesn't meet the needs of a non-technical person.

dTal(3958) 6 days ago [-]

>pip install --user pipe && pipe install magic-wormhole

What am I looking at here? On PyPI 'pipe' is listed as a 'Module enablig a sh like infix syntax (using pipes)', and magic-wormhole's own docs just say to install with pip like anything else.

brundolf(3518) 6 days ago [-]

Ah man, I literally came up with (and prototyped) this exact thing in 2013. Minus the end to end encryption. I dropped it mostly because I wasn't sure how to prevent illegal use and didn't want to be liable.

Edit: mine was actually (partially) better because it assigned a short PIN instead of a full link, which meant you could just look at it and remember it for typing-in, instead of requiring a separate channel to 'send' the link.

scriptkiddy(4024) 6 days ago [-]

If you're still interested in this type of tool, I'm sure Mozilla would welcome your contribution: https://github.com/mozilla/send

hombre_fatal(10000) 6 days ago [-]

You came up with a web service that lets anyone upload something and then download it via /uploads/123?

That's basically a hello world project. As you found out, the hard part is everything else, like funding it.

tyingq(4004) 6 days ago [-]

The end to end encryption necessitates a hard to remember uri anyway, so I don't think you can have both 'secure' and 'memorable'.

brundolf(3518) 6 days ago [-]

Hah, found a record of the project (we did it at HackTX): http://techzette.com/2013-hacktx-winners-and-finalists/

It was called 'Catch'

TheShrug(10000) 6 days ago [-]

A short PIN seems nice for personal use (maybe on a self-hosted service) but wouldn't a short PIN allow people to potentially guess random PINs and download files that they shouldn't have access to?

Sammi(10000) 6 days ago [-]

Encrypting it with a random key that doesn't get sent to the server, but is in the URI that the sender gives the receiver, means that only the sender and receiver know what the file actually contains. This means neither you nor law enforcement can know what is being stored unless the URI is captured.

Causality1(10000) 6 days ago [-]

Why does it have upload limits at all? Your client encrypts it, the data is sent over your internet connection to someone else's, their client decrypts it. Why would the data pass through Mozilla's servers?

arduinomancer(3493) 6 days ago [-]

Wouldn't you need both clients to be online at the same time to do that?

kikikiki09i(10000) 6 days ago [-]

How do they pay for the storage costs?

stunt(4007) 6 days ago [-]

Storage is extremely cheap, especially for a service like Send which doesn't hold any data for a long period of time.

Otherwise, it might be that they have bigger plans for it. This might just be a product to learn about the market's potential.

Mozilla's manifesto is all about the Internet and Internet privacy. File sharing is one of the areas where the internet is losing privacy.

z3t4(3752) 6 days ago [-]

Why doesn't Firefox support P2P file sending!? What do they do with the files I upload!?

icebraining(3455) 6 days ago [-]

P2P means both machines must be able to talk to each other (occasionally difficult when both are behind NAT) and must be turned on at the same time. Using a reliable intermediary gives some flexibility.

nukeop(4026) 6 days ago [-]

I wish Mozilla focused on core Firefox functionalities instead of coming up with so many small side projects that don't target their typical audience. Since Chromium-based browsers are not an option, many of us are stuck with Firefox as the only remaining choice. But even Firefox has to be heavily customized before it's completely deGoogled and stops contacting various motherships.

As a side note Nightly build for Ubuntu has been broken since version 61 and there's no sign of any effort to fix it.

kvark(3714) 6 days ago [-]

Is there anything specific you are missing in Firefox today? Or is it purely the fact that it's broken since version 61? Did you submit a bugzilla issue, or know the existing number? I'd be happy to check it out.

kikikiki09i(10000) 6 days ago [-]

How do they pay for the storage costs? What's the upside for Mozilla?

chrisseaton(3065) 6 days ago [-]

> How do they pay for the storage costs?

Using their revenue from search, like everything else they pay for.

> What's the upside for Mozilla?

'Our mission is to ensure the Internet is a global public resource, open and accessible to all. An Internet that truly puts people first, where individuals can shape their own experience and are empowered, safe and independent.'

passthejoe(10000) 6 days ago [-]

Upside is that this is another reason to get a Mozilla account.

sam_lowry_(4027) 6 days ago [-]

Google Search, Yahoo Search.

skrebbel(3047) 6 days ago [-]

In the not so recent past, HN'ers loved to quote tptacek's legendary rant about how in-browser JavaScript crypto is fundamentally broken[0].

What changed? Is that rant finally outdated? Couldn't Mozilla at any time serve a corrupted JS bundle (with or without their knowledge) which would leak the key somewhere, silently replace the encryption by a noop, etc?

I ask out of interest, not skepticism. I much prefer an internet where we can trust web apps to do proper crypto than one where we have to depend on some app store to somewhat adequately protect us.

[0] https://www.nccgroup.trust/us/about-us/newsroom-and-events/b...

Rebelgecko(10000) 6 days ago [-]

Some of those points are relevant and some aren't. For logging in to a website, 'just use SSL/TLS instead' makes sense, but not for this use case. There's better options nowadays for doing crypto in the browser, but I wouldn't be surprised if they were at least theoretically vulnerable to side channel attacks from JS running in another tab.

The main thing is that unless you're paying really really close attention to the JS that you're executing, you can't trust this any more than you can trust Mozilla and the security of whatever computer is serving their pages. I wouldn't use this for sending data that you're trying to hide from a nation-state, but it looks like a great option if you want to send a video to your grandma without posting it publicly on the internet or teaching her how to use GPG.

fastball(3950) 6 days ago [-]

Not relevant to me as all of my sites are entirely secured with SSL.

the8472(3978) 6 days ago [-]

Fundamentally the situation has not changed much. You redownload the code every time, and servers could deliver tailored compromised versions if ordered to by some TLA. This means audits have limited value, since they can't attest that what they have seen is what anyone actually gets.

Compare with native tools which you only download once, can check its signatures and which strive for reproducible builds so that multiple parties can verify them independently.

lubesGordi(10000) 6 days ago [-]

Seems like Send would have to be a built in browser functionality or maybe a plugin.

mehrdadn(3544) 6 days ago [-]

There was also a time just a few years ago when evangelists claimed JS/CS/etc. were just as fast as native (some said faster) and blasted you for suggesting otherwise, even when it was clear as daylight this was blatantly false. This mantra also suddenly just faded away once native compilers for these gained popularity. I guess reality hits you after some time.

Now I see a similar issue with security experts preaching that merely possessing a single piece of software with a single thing they classify as a 'vulnerability' implies you will be murdered within the next 24 hours, and it seems they'll happily DoS your computer, get you fired from your job, take your second newborn, and blow up your computer in your face if that's what it will take to make you finally feel real danger. Not sure why it takes people so long to see that reality isn't black-and-white, but better late (hopefully) than never.

qrbLPHiKpiux(3990) 6 days ago [-]

As long as there is a possibility, I say yes - not 'if' but 'when.'

Humans are always the weakest link with the internet and someday, sometime, bad code (unknowingly) will be pushed and something will happen to someone.

cyphunk(3287) 6 days ago [-]

There are many use cases where compromise through code interdiction after a warrant is a perfectly acceptable risk. Also, considering what it replaces may further increase the weight of the privacy gain. Absolutism is definitely not the way to go, and looking at the state of the tech community (e.g. npm, apt, pip, pacman, check that sha256 sum), we left design-it-right-first behind a long time ago. A valid argument, though I wouldn't defend it to the death, is that we need to work slowly back toward more secure behaviors rather than chasing absolutely secure technologies. I think send.firefox is a step back from Dropbox for some.

serkanyersen(3925) 6 days ago [-]

Couldn't they do the same if the crypto code was on the server?

thinkloop(3389) 6 days ago [-]

That article primarily comes down to this:

> WHY CAN'T I USE TLS/SSL TO DELIVER THE JAVASCRIPT CRYPTO CODE? You can. It's harder than it sounds, but you can safely transmit Javascript crypto to a browser using SSL. The problem is, having established a secure channel with SSL, you no longer need Javascript cryptography; you have 'real' cryptography.

In our case we aren't doing crypto inception where the cryptography is meant to secure itself. The crypto is being served securely (by ssl) and then used to solve the separate unrelated crypto problem of encrypting random files.

fouadmatin(2856) 6 days ago [-]

SubtleCrypto is a new browser-adopted spec for performing crypto operations natively. For example, instead of using Math.random() for random number generation, you can use https://developer.mozilla.org/en-US/docs/Web/API/Crypto/getR... in combination with the SubtleCrypto functions to work with keys securely

Your points around a compromised JS bundle are still possible but that has more to do with a company's deployment/change management setup than JS itself imo
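For illustration, a minimal TypeScript sketch of the two APIs mentioned above (getRandomValues and SubtleCrypto), assuming a browser with WebCrypto; the AES-GCM parameters here are illustrative, not Send's actual configuration:

    // Fill a buffer from the browser's CSPRNG; unlike Math.random(),
    // this is suitable for nonces and key material.
    const iv = crypto.getRandomValues(new Uint8Array(12));

    async function makeKey(): Promise<CryptoKey> {
      // SubtleCrypto keeps key material opaque: a non-extractable key
      // can be used to encrypt/decrypt but never read back out by script.
      return crypto.subtle.generateKey(
        { name: "AES-GCM", length: 256 },
        false, // non-extractable
        ["encrypt", "decrypt"]
      );
    }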

tptacek(75) 6 days ago [-]

It's not outdated; it remains fundamentally true. But I'm uncomfortable with people calling it a 'legendary rant' because it was dashed off and I never promoted it as any kind of last word on the subject. There are better arguments against browser cryptography than mine.

In particular: you'd hope that WebCrypto would have changed things a bit, but, of course, it doesn't: it leaves all the cryptographic joinery up to content-controlled code. You're probably somewhat less likely to have side-channel flaws if you use it, but in reality timing side-channels are more talked about than seen. Who would bother, when they can just deliver a script that exfiltrates secrets directly?

quickthrower2(1803) 6 days ago [-]

SSL isn't the only crypto you'd ever want to do, though. What if you want to encrypt data so that it is encrypted all the way through the layers of the application to the database? That's a valid use case to use in tandem with SSL. Also, I have to mention cryptocurrencies.

josefresco(3811) 6 days ago [-]

In one of their videos, the URL is www.send.firefox.com - the others drop the www - is this intentional or a mistake? Why would someone use www before a subdomain like that?

fzzzy(10000) 6 days ago [-]

Looks like www.send.firefox.com was a mistake. It's not a valid way to access the service. The correct url is send.firefox.com.

Moru(3759) 6 days ago [-]

Because some people just don't recognize a Web address if it does not start with www. I see that all the time with our subdomains.

crusso(10000) 6 days ago [-]

[Deleting this post as much as able. Didn't realize that the fanboys would be so insulted that I disagreed with Mozilla's marketing policy.]

Quarrelsome(4004) 6 days ago [-]

of all of the people you could be mad at about privacy you choose Mozilla to be mad at?

waplot(10000) 6 days ago [-]

are you that stupid? just enter a bogus email.

gshulegaard(10000) 6 days ago [-]

So since you don't need to sign up for anything in order to use Send, I am assuming you mean creating a Firefox Account which increases the send limit from 1 GB to 2.5 GB as well as enables some nifty Firefox features like tab sync.

Since it had been awhile, I tried to create a new Firefox account with an alternate e-mail of mine...and found that there is an opt-in check mark for newsletters and IT'S UNCHECKED BY DEFAULT. Since I can't embed images here, have a Send link to a screenshot:

https://send.firefox.com/download/bd0ea1c123/#99POcyrXU3Y0jv...

So it is starting to seem like you are intentionally smearing Mozilla for a fake problem.

Not to mention you are conflating marketing spam with privacy issues.

Edit: I made the send link valid for 100 downloads or 7 days which is the max...but if you click the link and it doesn't work just know that it's because it has hit the limit, not that Send is unreliable.

eatbitseveryday(3789) 6 days ago [-]

Please don't use code formatting except for code. Long lines require significant horizontal scrolling.

aeturnum(10000) 6 days ago [-]

Marketing mail is garbage and a bad user experience but it is not a privacy concern!

A privacy violation would be having your information revealed to someone without your knowledge / permission. Mozilla has not violated your privacy by sending marketing emails to the address you gave them! It's bad, but it's not bad for privacy-related reasons.

cmurf(1591) 6 days ago [-]

You're confused. Send doesn't require sign up or even an email address. You supply a file, it uploads, you get a URL.

svnpenn(10000) 6 days ago [-]

thanks for the heads up

I won't be using the service now

dhimes(2841) 6 days ago [-]

Is that really a privacy issue?

AdmiralAsshat(1563) 6 days ago [-]

I've used Firefox Send for several months while it was still a test pilot program. It's been very useful for quickly sending files to family. The fact that the link expires as soon as the other party downloads it means I don't have to worry about clean up.

toomuchtodo(2467) 6 days ago [-]

Does the link expire after a successful transfer? Curious what happens if the transfer fails mid transfer and needs a retry.

kijin(3861) 6 days ago [-]

Do you ever run into a problem when an overzealous email service or virus scanner pre-fetches the link and invalidates it before an actual person clicks on it? This used to happen with all sorts of links in emails, though I haven't heard about it in a while.

timvisee(10000) 6 days ago [-]

I've been building a fully featured CLI tool for Firefox Send, supporting this new release.

For anyone that is interested: https://github.com/timvisee/ffsend

tintintin(10000) 6 days ago [-]

This is great, thanks :)

Mind if I port this to JS?

kevinherron(10000) 6 days ago [-]

This is neat, thanks.

shivkanthb(10000) 5 days ago [-]

That is so cool!

tomupom(10000) 5 days ago [-]

This is such a fantastic tool to have, thank you so much!

harshitaneja(10000) 6 days ago [-]

Thanks a lot. The first thought after seeing this was that I wish it had a CLI and I know I am lazy enough to never write one.

ycnews(2967) 6 days ago [-]

Python cli version at https://github.com/ehuggett/send-cli

disclaimer: I haven't used either cli version.

ausjke(975) 6 days ago [-]

do I need install firefox to use this tool? looks neat!

drewg123(2904) 6 days ago [-]

FWIW, I built and successfully ran it on FreeBSD-current. The only hiccup I ran into was that it puked building due to not having /usr/local/lib in its lib search path & not being able to find libxcb. I had to manually add -L/usr/local/lib to the cc args and manually link it. Not sure if that is a FreeBSD issue w/Rust, or something in your package.

At any rate, the tool works! Thanks so much.

nickpsecurity(2979) 6 days ago [-]

Love the demonstration on the Github page!

marcus_holmes(3875) 6 days ago [-]

I'm working on a file sharing product, for the niche use case of sharing documents between family and professional providers (lawyers, accountants, etc).

Documents are mostly emailed to recipients at the moment (unless they're too large, in which case... um....). The main problem we see is that you end up storing documents in email attachments on your email provider, and using email search tools to try and find documents.

Would this end up the same, only with all documents ending up in the Downloads folder?

Am I wasting my time creating a cloud storage sharing solution? Would I be better off working on a method of organising files on the drive that can also send them to other people?

77ko(10000) 6 days ago [-]

Why have a file transfer service for important docs when you can have a single authoritative source of truth for them, along with version history and a record of who changed what?

So why not just use Google Drive (or dropbox)?

I feel that with features like secure file sharing (though only with other people with Google accounts), reasonably good security[1] and the Inactive Account Manager[2], it should work for legal docs, especially considering Google is going to be around for a while.

I would rather use a Mozilla offering but they don't really have too many things for regular consumers outside of firefox and send.

[1]: https://myaccount.google.com/security [2]: https://support.google.com/accounts/answer/3036546?hl=en

mrdoops(10000) 6 days ago [-]

Generate a temporary link that, when clicked, sends an event to your system to deprecate the link and redirects the user to a presigned S3 download. In my case the file attachment was the product and it was important that the system know when someone had downloaded it, but a backend system that keeps temporary URLs and requests a temporary download link from the file provider is a useful pattern in general. The nice thing about signed links is that your server doesn't have to handle the file - it's between the client and the storage provider.
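
A rough sketch of that pattern in TypeScript, using Express and the AWS SDK; the route shape and linkStore are hypothetical, and only getSignedUrl is the real AWS SDK call:

    import express from "express";
    import AWS from "aws-sdk";

    const app = express();
    const s3 = new AWS.S3();

    // Hypothetical in-memory store of one-time download tokens.
    const linkStore = new Map<string, { bucket: string; key: string; used: boolean }>();

    app.get("/download/:token", (req, res) => {
      const link = linkStore.get(req.params.token);
      if (!link || link.used) {
        res.status(410).send("Link expired");
        return;
      }
      link.used = true; // deprecate the link; fire a download event here if needed

      // Pre-signed URL: the bytes flow directly from S3 to the client,
      // so this server never touches the file itself.
      const url = s3.getSignedUrl("getObject", {
        Bucket: link.bucket,
        Key: link.key,
        Expires: 60, // seconds of validity
      });
      res.redirect(url);
    });

    app.listen(3000);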

foxhop(3677) 6 days ago [-]

Wow, this is really awesome and really cool! First I've heard of it. Just tested it and it worked great.

Is it possible to audit the tech? Is Firefox send open source?

rkagerer(3972) 6 days ago [-]

If I've got this right, the file is encrypted using a secret key which is generated on the client and appended to the anchor in the link, like:

http://send.firefox.com/download/<fileid>/#<secret>

Anyone who obtains the link (e.g. via email interception) gains access to the file.

Since browsers don't transmit the anchor when requesting a resource [1], Firefox servers never see a copy of the key. Provided you trust their JavaScript.

[1] https://stackoverflow.com/questions/3067491/is-the-anchor-pa...
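
If that reading is right, the upload side could look roughly like this TypeScript sketch (browser WebCrypto; fileBytes and fileId are placeholders, and the algorithm and encoding are assumptions rather than Send's actual code):

    async function makeShareUrl(fileBytes: Uint8Array, fileId: string): Promise<string> {
      // Generate the secret key client-side; the server never sees it.
      const key = await crypto.subtle.generateKey(
        { name: "AES-GCM", length: 128 },
        true, // extractable, so it can go into the URL fragment
        ["encrypt", "decrypt"]
      );
      const iv = crypto.getRandomValues(new Uint8Array(12));
      const ciphertext = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, fileBytes);
      // ...upload iv + ciphertext to the server here...
      void ciphertext;

      // Base64url-encode the raw key and put it after '#'; browsers strip
      // the fragment from requests, so it never reaches server logs.
      const raw = new Uint8Array(await crypto.subtle.exportKey("raw", key));
      const secret = btoa(String.fromCharCode(...raw))
        .replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
      return `https://send.firefox.com/download/${fileId}/#${secret}`;
    }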

somebodythere(10000) 6 days ago [-]

> Anyone who obtains the link (e.g. via email interception) gains access to the file.

True, but, if a third party decides to use the intercepted link to download the file, and you have it set to a limit of 1 download, the file will self-destruct (if you trust Mozilla). This way, the recipient can know that someone has tampered with the communication, which is certainly an improvement over the status quo (email attachments).

kgwxd(3425) 6 days ago [-]

Why not 'Mozilla Send'? If Firefox the browser isn't a requirement, the name is confusing.

mrhappyunhappy(10000) 6 days ago [-]

I was confused too. When it worked on a non-Firefox browser it was a pleasant surprise. I'm guessing this is just to promote the Firefox browser. It wouldn't surprise me if they added a higher file limit as usage grows, and with it a paid tier :)

Cyphase(4030) 6 days ago [-]

The same reason it's Chromecast, not Googlecast.[1] Branding.

[1] The protocol is named Google Cast, but all the consumer branding is Chromecast.

agorabinary(3933) 6 days ago [-]

I'm quickly running out of excuses for still using Chrome...

Vinnl(704) 6 days ago [-]

While I'm not sure if this is a reason not to use Chrome (you can use it in Chrome as well), trying Firefox is really just a couple of minutes work, and you can easily go back...

Here, I'll type the download link for you: https://firefox.com

diegorbaquero(3763) 6 days ago [-]

I had the expectation that it would use WebRTC before opening the link, disappointed on that side. But really glad of the privacy minded offer. I appreciate Mozilla's work and effort towards a more private and encrypted internet!

JohnFen(10000) 6 days ago [-]

WebRTC and privacy don't exactly go together well.

JonathonW(10000) 6 days ago [-]

As I understand it, this 'guarantees' privacy by embedding the key in the link-- if that's generated client-side, it never gets sent to Mozilla's servers (assuming they don't go out of their way to grab it via JavaScript) and you can have end-to-end encryption.

But, if I'm logged in, it looks like Mozilla's storing that fragment on their servers: if I upload a file from one browser, then sign in on a different browser, I can see the link I generated (including the fragment) from the first browser in my list of uploads, and I can download the file.

Doesn't that negate their end-to-end encryption if Mozilla servers have access to the keys?

dcoates-moz(10000) 6 days ago [-]

The data that's synced when you log in is also encrypted, with a unique key derived from your Firefox Account called a scoped encryption key. Your key changes when you change your password. We, (Mozilla) don't know your key (and don't want to know it). Disclosure, I implemented the sync feature of Send.

ksec(2106) 6 days ago [-]

I keep seeing comments about Search Revenue and keeping this free. It would be useful if Mozilla is getting more Firefox users out of it, but it likely won't be in any significant number.

So what happens once this gets popular and starts being abused, just like Mega? Who is going to keep footing the bill?

cyphunk(3287) 6 days ago [-]

Most abuse is mitigated by their limits on the number of downloads allowed and on how many days a file can stay online. Currently that's 7 days max and 100 downloads. If they see abuse they could reduce this further.

About revenue: there are so many valuable directions this can go. It could undercut competitors in ways they cannot sufficiently respond to (Google responding in kind would leave it less reason not to add encrypted storage to Drive). By stabilizing this platform they can start to build new privacy-enhancing apps on top: calendar, contacts, etc. With more dependency on the platform, they will find areas where more storage and longer retention will generate income.

Privacy may be the only frontier that can displace Google, Apple and Microsoft.

qwerty456127(4008) 6 days ago [-]

How does it work? Is it P2P or what?

JohnFen(10000) 6 days ago [-]

The encrypted file is stored in the cloud. The recipient downloads it from there and decrypts it.

P2P would be much better, but this isn't that.

seveneightn9ne(10000) 6 days ago [-]

How is this using end-to-end encryption? It seems like the recipient just clicks a link to download. How can it have been encrypted for that person? End-to-end encryption normally means that there's no way for the intermediary to decrypt the data, but I can't see how that's possible in this case.

0xfeba(10000) 6 days ago [-]

IIRC the link contains an anchor `#abc123` which is the decryption key. Browsers do not send the anchor part of the URL to the server, so the decryption happens in the browser.

Hinges on the browsers never sending that key, though.
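
The receiving side of that idea, sketched in TypeScript; it assumes iv and ciphertext were fetched from the server and that the key was base64url-encoded into the fragment (an illustration, not Send's actual code):

    async function decryptFromFragment(iv: Uint8Array, ciphertext: ArrayBuffer): Promise<ArrayBuffer> {
      // location.hash stays in the browser: it is not part of the HTTP request.
      const secret = location.hash.slice(1); // drop the leading '#'
      const rawKey = Uint8Array.from(
        atob(secret.replace(/-/g, "+").replace(/_/g, "/")),
        (c) => c.charCodeAt(0)
      );
      const key = await crypto.subtle.importKey(
        "raw", rawKey, { name: "AES-GCM" }, false, ["decrypt"]
      );
      return crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, ciphertext);
    }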

weaksauce(3301) 6 days ago [-]

Client-side JavaScript encrypts the file locally before uploading and puts the encryption key in the URL you share with someone; the key never gets sent to Mozilla. Decryption is also client-side, on the recipient's machine. It's end to end.

EwanToo(1098) 6 days ago [-]

The url effectively contains the decryption key, so the web server could be set to capture the urls and decrypt files.

If you want, you can also set a passphrase on the file to share via another channel

bjt2n3904(4021) 6 days ago [-]

I don't understand the end-to-end encryption claim.

1. Bob uploads a file, but specifies no password.

2. ???

3. Sue downloads the file.

Best case, Bob's browser encrypts it (with javascript?) before uploading. Either Mozilla provides a key, or Bob sends the key he used. When Sue's browser downloads it, Mozilla sends the key and her browser decrypts it client side.

In either case, Mozilla has the password for decryption. This makes a mild barrier to mass scanning content that's uploaded, so at least that's something... but that's little more than a promise I have to trust.

Am I missing something? Where is the 'end-to-end' encryption? End-to-end means I don't have to trust you (as much). Please don't turn this into a meaningless buzzword...

EDIT: I did misunderstand something. Please see timvisee's comment below.

mimsee(3711) 6 days ago [-]

The key is randomly generated and placed in the URL hash data, which is not sent to the server. Hash data is the part of a URL after the hash sign.

timvisee(10000) 6 days ago [-]

The client encrypts the file that is uploaded, along with some metadata. The key is appended to the share URL provided by the service, in the fragment/hash, and is never sent to the remote server. Only people who have the URL, including the secret, will be able to download and decrypt your shared file. See https://github.com/mozilla/send/blob/master/docs/encryption....

lol768(3956) 6 days ago [-]

It seems vulnerable to an active MitM - if the attacker is in a position to serve malicious JS that exfiltrates the data from window.location.hash.

I think the scheme is fairly robust against passive interception though.

tantalor(2677) 6 days ago [-]
kenrick95(2714) 6 days ago [-]

There's also a http://xkcd949.com/

voidmain0001(3772) 6 days ago [-]

I'm onboard as a regular user of send.firefox.com. How does Mozilla have the money to offer this for free?

snazz(3727) 6 days ago [-]

Maybe they just don't need much storage, since the files expire quickly. This would be an interesting thing to graph, if they release the statistics.

Vinnl(704) 6 days ago [-]

Mozilla has quite a bit of money, most of it from their default search engine deals. I'd wager that most of it goes to wages.

Aissen(557) 6 days ago [-]

I wonder if they've fixed the issue where one can force reuse of a link by slowing down a download and sharing the URL, hence turning it into a cheap file hosting service:

https://news.ycombinator.com/item?id=15450524

I haven't been able to upload a file to try.

klohto(4014) 6 days ago [-]

I've tested this and the link seems to expire the moment the user starts a download.

laurent123456(1426) 6 days ago [-]

How does E2EE work if the recipient can download the file directly? I'd expect some key or password needs to be exchanged too?

Vinnl(704) 6 days ago [-]

The key is appended to the URL as a hash fragment, which cannot be read by the server.

lmedinas(2766) 6 days ago [-]

a bit off topic but here it goes...

This is how I think Mozilla can win users back to Firefox. Providing 'extra' services attached to the Mozilla and Firefox brand will make them a superior product to the end user. Sure, it's hard to compete with Chrome, but if you offer useful features and services integrated in your browser, I think Mozilla actually has a chance to compete with Google for the browser space.

One of the 'advantages' of Chrome over the competition, if you are a heavy Google user, is that everything is attached to your Google account: passwords, history, spellers, dictionaries, shortcuts, etc...

If Mozilla comes with Send, Notes and a Password Manager all integrated in Firefox, I see a good way to bring back some of the users who previously switched to Chrome.

scriptkiddy(4024) 6 days ago [-]

Along the same lines, a Gmail-esque Thunderbird web service would be amazing. I could finally de-google myself completely if that were the case.

Currently, I need to set up my own email hosting through a service like Fastmail and then configure a desktop client (like Thunderbird) to use it.

A Mozilla Gmail-esque service would remove a lot of the friction there and probably bring in a bunch of users who are tired of google running everything.

swebs(3756) 6 days ago [-]

Literally all they need to do is advertise tree style tabs. It's the reason half my office stopped using Chrome.

tantalor(2677) 5 days ago [-]

> By providing 'extra' services attached to the Mozilla and Firefox brand

How is that different from the complaints people make about Chrome tightly integrating with Google?

asdgiobiobiuo(10000) 6 days ago [-]

You may be right, but I hate it. There is no reason I can think of to have all these tools integrated into a web browser, and the idea of having the Internet broken into silos based on your choice of browsers scares me.

We don't need another AOL Chrome.

hotgeart(10000) 6 days ago [-]

> If Mozilla comes with Send, Notes, Password Manager all integrated in Firefox i see a good way to bring back some of the previous users that switched to Chrome

As a Chrome user I can confirm. But for me the main reason I use Chrome is the dev tools; I find them better than FF's.

Shorel(10000) 6 days ago [-]

This really feels like something the old Opera (not the Chromium version) would have done back in the day.

all_blue_chucks(10000) 6 days ago [-]

They did. But it didn't work with NAT so it died.

icemelt8(4021) 6 days ago [-]

I wonder which cloud service they are using to store the files.

sccxy(3992) 6 days ago [-]

Google Cloud Platform

swtrs(10000) 6 days ago [-]

A bit of poking around leads me to prod.send.prod.cloudops.mozgcp.net, so I'm assuming Google Cloud.

NedIsakoff(10000) 6 days ago [-]

How are they going to deal with bad content? Child porn? Pirated content? Illegal stuff?

mac01021(3938) 6 days ago [-]

Since it's encrypted end to end, presumably they will be oblivious to all that stuff?

tasty_freeze(10000) 6 days ago [-]

The same way backup services and email servers deal with encrypted data. They have no way of knowing.

benawad(4031) 6 days ago [-]

> Key Business Question to Answer: Is the value proposition of a large encrypted file transfer service enough to drive Firefox Account relationships for non-Firefox users.

The metrics section is interesting https://github.com/mozilla/send/blob/master/docs/metrics.md

medmunds(3803) 6 days ago [-]

Oh interesting. Their two hypotheses (which they will test) are that Send 'can drive Firefox Accounts beyond the Firefox Browser' and that it will 'will provide a valuable platform to research, communicate with, and market to conscious choosers...'

It sounds like they're investigating a premium service offering targeted at privacy conscious users. (The secondary hypothesis covers 'revenue' and will be tested by conducting 'research tasks ... in support of premium services KPIs.')

oblio(3158) 6 days ago [-]

I wonder if they're running malware scanners. And do they have to comply with DMCA takedowns? Based on what I see, the files are hosted on their servers, so they kind of have to, no?

mehrdadn(3544) 6 days ago [-]

There is end-to-end encryption, so unless they have homomorphic virus scanners I don't see how they would do this...

nvdk(10000) 6 days ago [-]

At maximum 200 downloads and an expiration of 7 days I don't think anyone will bother to be honest.

zyngaro(3159) 6 days ago [-]

What is the use case for such a tool? A real question.

ebg13(10000) 6 days ago [-]

I don't understand what you're asking. The use case is literally in the title ('file transfer').

F_r_k(10000) 6 days ago [-]

Swisstransfer.com is more or less the same, but with 25 GB and no sign-up.

hiq(3252) 5 days ago [-]

Regarding the differences: this website does not seem to encrypt the files on the server, and does not provide links directly, so you need to provide at least one valid email address, if only to have the link sent to you so you can then pass it on to the party you want to share the file(s) with. It's also not open-source AFAICT.

hlnas(10000) 6 days ago [-]

How 'private' is it? Do you store metadata? i.e. if I upload a file and it expires, do you also delete any trace of me, including my IP address?

mehrdadn(3544) 6 days ago [-]

https://send.firefox.com/legal

> We receive IP addresses of downloaders and uploaders as part of our standard server logs. These are retained for 90 days, and for that period, may be connected to activity of a file's download URL. Although we develop our services in ways that minimize identification, you should know that it may be possible to correlate the IP address of a Send user to the IP address of other Mozilla services with accounts; and if there is a match, this could identify the account email address.

mFixman(10000) 6 days ago [-]

I can't believe that there isn't a simple service to transfer data between my cellphone and my computer without going through the internet. iTunes is terribly bloated, MTP is a mess, and Bluetooth is slow and frustrating.

Back in my hacker days I used to have an SSH server open on my cellphone and use it to transfer files back and forth with my computer. Why isn't there a mainstream service like that?

kop316(10000) 6 days ago [-]

I seem to remember that there was an app on android that allowed you to access your files via a webserver you could turn off and on. I used that a lot before I got nextcloud.

dx87(10000) 6 days ago [-]

KDE connect works without internet access. I haven't used it on Windows, but it works fine for me on Ubuntu.

merpnderp(10000) 6 days ago [-]

In the Apple ecosystem, there's AirDrop which uses either Bluetooth or Wifi. You can quickly share files between any iOS and Mac devices very simply.

matt-snider(10000) 6 days ago [-]

What about https://syncthing.net/?

EDIT: I know you said without going through the internet. Syncthing can be configured to only transfer over specific networks (e.g. home LAN/WI-FI)

rsync(3659) 6 days ago [-]

'I can't believe that there isn't a simple service to transfer data between my cellphone and my computer without going through the internet.'

The correct way to do this is to configure your phone to emulate USB mass storage and then connect with a USB cable.

Your phone looks like a thumb drive. It's the easiest workflow in the world.

Unfortunately, this workflow is off limits because of some licensing requirement from MS for fat32 (or something) which is why neither android nor ios has this very basic, simple feature.

samcday(10000) 6 days ago [-]

Tangentially related - I've always thought it's dumb that I can't just plug my iPhone in to any PC and have it show up as a removable storage device.

I'm sure people who know more than me will give me a list of great reasons why it's not straightforward to implement...

But it doesn't change the fact that I have this incredible device (iPhone X) with 256gb of blindingly fast NAND flash storage, of which I am only utilizing 30gb, yet I still have to tote around a f*ing stupid little plastic USB dongle if I want to copy some files around.

hiccuphippo(10000) 6 days ago [-]

What I'd like to see is an app that runs a webserver on my phone to share a slideshow of pictures or videos to a browser on the lan. I haven't found this and I'm thinking about writing one.

opencl(10000) 6 days ago [-]

Syncthing is fantastic for this (and file transfers between computers over LAN and/or internet), unless you happen to have an iOS phone.

msravi(3182) 6 days ago [-]

termux + woof

woof -i <ip_address> -p <port> <filename>

termux: https://play.google.com/store/apps/details?id=com.termux&hl=....

woof: http://www.home.unix-ag.org/simon/woof.html

Edit:

1. Allows directory upload/download (tar/gzip/bzip2 compressed)

2. Local file server (doesn't go over the internet)

3. Allows upload form (-U option)

4. Allows file to be served <count> number of times (-c option)

ufo(10000) 6 days ago [-]

Does local wifi count? I use KDE Connect for sending files over wifi and a bunch of other things.

You may also want to check Syncthing, which others have also recommended.

dec0dedab0de(3982) 6 days ago [-]

Plugging in a USB cable still works fine. I think the problem is that if it's too easy, people will be copying files off of each other's phones without permission.

_petronius(3511) 6 days ago [-]

I recently switched back to iOS after years on Android, and on this point I've been very impressed with Airdrop. Dead simple UI, very quick transfer speeds, uses WiFi or Bluetooth as available. It's just a shame that it's limited to Apple devices.

dTal(3958) 6 days ago [-]

You might be interested in KDE Connect, which provides (among other things) essentially a thin wrapper around SSHFS. It's the most convenient method of computer<->phone transfer I've found.

sametmax(3736) 6 days ago [-]

There is. It's called dukto (http://www.msec.it/blog/?page_id=11), and works on mac, linux, windows and android. It will use zeroconf to automatically find all duktos on the local network, and let you send stuff to them in a blink.

Proprietary but free as in beer.

gurpreet-(10000) 6 days ago [-]

Resilio sync [1] is a great service I've used to transfer files using P2P technology. It still uses the Internet, but avoids any intervening parties.

If you're using Android, you could just use USB transfer using Android File Transfer [2]. Super easy, super fast.

[1] https://www.resilio.com/individuals/ [2] https://www.android.com/filetransfer/

kgwxd(3425) 6 days ago [-]

There are plenty of cross-platform local file transfer tools available but they all require manual setup and some knowledge of networking. If 'without going through the internet' is a requirement, I don't think an easier and secure tool could be made better than what's already available.

nobrains(10000) 6 days ago [-]

Airdroid

jwr(3610) 6 days ago [-]

If you use Apple devices, it's called AirDrop and works surprisingly well. I use it a lot, between computers, phones, and ipads, within the family and sometimes with other people, too.

gshulegaard(10000) 6 days ago [-]

There are a few around...I use File Explorer which can actually start an FTP server from my phone (iPhone) that my PC can connect to over LAN. It also can be a client to a remote FTP/file share.

MaxBarraclough(10000) 6 days ago [-]

You run a simple HTTP server on your computer, then download your files over Wi-Fi using your phone's browser. Works nicely.

    cd my/directory && python3 -m http.server 80
shmerl(3692) 6 days ago [-]

KDE connect?

buboard(3998) 6 days ago [-]

there is https://snapdrop.net/ but it didn't work for me

patr0nus(10000) 6 days ago [-]

What about Dukto [0]?

IMO if something doesn't require the internet connection, it is more likely to be called 'software', not a 'service'.

[0] http://www.msec.it/blog/?page_id=11

Benjamin_Dobell(3506) 6 days ago [-]

AirDroid is pretty handy on Android; file transfer, browsing your phone's files/images, sending SMS from your desktop browser (although Messages now does that native) etc. Much to my surprise, there's also an iOS version - https://itunes.apple.com/app/id1194539178

diegorbaquero(3763) 6 days ago [-]

You can try WebTorrent (P2P) based solutions, maybe https://btorrent.xyz could help.

m-p-3(10000) 6 days ago [-]

I'm on Android, and Syncthing is pretty seamless once configured. I just configure my cellphone, my laptop and my desktop to sync a specific directory in both directions.

It's decentralized, end-to-end encrypted, and does local discovery of devices on a LAN, so it also works offline.

As long as one device lives and is synced, I have a copy of the files.

AdmiralAsshat(1563) 6 days ago [-]

A number of Android File Managers these days (Amaze comes to mind) include a toggle option to turn your phone into an FTP Server. You would then just pull it up on your computer via ftp://192.168.X.X and put an optional user/pass over it. I've used that for many years if I need to quickly transfer some documents or songs between devices.

Not technically internet so much as intranet.

microcolonel(4016) 6 days ago [-]

I use Syncthing, personally. It usually works pretty great, the only real issues I've seen are with locked down internet connections (the sort which also seem to meter or block VPNs).

m52go(3203) 6 days ago [-]
spieglt(3985) 6 days ago [-]

Check out https://github.com/claudiodangelis/qr-filetransfer for computers and phones on the same LAN.

Works great, and I'm planning on integrating that functionality into my project which transfers files between laptops using only wireless cards, no LAN required. https://github.com/spieglt/flyingcarpet

xioxox(3850) 6 days ago [-]

SimpleSSHD is a simple ad-free open-source SSH server for your Android phone (available on Google Play). It's very useful. It only supports public-key based authentication, so you can't use a password, however.

chongli(10000) 6 days ago [-]

I just use iCloud Drive. Files on my desktop and in my documents folder get automatically synced to my phone and vice versa. It's extremely easy and painless. I often find myself on my phone, saving a file to my iCloud desktop, and finding the file on my desktop the next time I open the lid of my laptop.

ocdtrekkie(2602) 6 days ago [-]

SMB over local network would be my default, I recall an app for using SMB with Android back as far as the 2.x days.

We have tons of protocols for transferring files over networks, there's no reason for them to go to the public Internet, nor for them to be mobile phone specific.

dooglius(10000) 6 days ago [-]

I'm a bit confused, in what sense is accessing an SSH server not going through the internet? Was the phone connected to the same LAN as the desktop via Wifi?

pault(3964) 6 days ago [-]

You didn't say whether you are on iOS or Android, but if you are on iOS airdrop works very well.

pwg(278) 6 days ago [-]

To add to the options (for Android phones):

https://f-droid.org/en/packages/org.primftpd/

akerro(3830) 6 days ago [-]

>I can't believe that there isn't a simple service to transfer data between my cellphone and my computer without going through the internet.

KDE Connect, https://community.kde.org/KDEConnect#What_is_KDE_Connect.3F i've been using it for years

Steltek(10000) 6 days ago [-]

When pushing files from phone to computer, I setup my Pixel to use AndFTP. The ubiquitous 'Share' button offers AndFTP as an option and lists preconfigured destination SSH servers. I upload photos this way to a distinct account (which gets scooped up later by a more privileged script).

What I'm really looking for is a Share button enabled app that can POST arbitrary files to a customizable URL.
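
The receiving end is the easy half. A minimal sketch of the server such an app could target, in TypeScript on Node; the /upload path, port, and X-Filename header are conventions I made up:

    import { createServer } from "http";
    import { createWriteStream, mkdirSync } from "fs";
    import { basename, join } from "path";

    // Store any POST body sent to /upload as a file on disk.
    mkdirSync("uploads", { recursive: true });

    const server = createServer((req, res) => {
      if (req.method === "POST" && req.url === "/upload") {
        // Filename comes from a custom header; basename() blocks path traversal.
        const name = basename(String(req.headers["x-filename"] ?? "upload.bin"));
        const out = createWriteStream(join("uploads", name));
        req.pipe(out);
        out.on("finish", () => {
          res.writeHead(201);
          res.end("stored\n");
        });
      } else {
        res.writeHead(404);
        res.end();
      }
    });

    server.listen(8080);

Until a Share-button app exists, the same endpoint works from a terminal: curl --data-binary @photo.jpg -H 'X-Filename: photo.jpg' http://192.168.1.50:8080/upload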

deltaqueue(10000) 6 days ago [-]

Dozens of responses, and not one mention of Dropbox. Works perfectly for this exact purpose on Android.

SebiH(10000) 6 days ago [-]

How about https://snapdrop.net/ ?

est31(3924) 6 days ago [-]

I often try MTP first, and when it's acting up again I use adb pull / adb push instead. Once set up, adb turns on automatically and all you need to do is invoke the commands on the computer. If USB is unavailable, adb works over the network as well, provided you know the phone's IP address and it's reachable. The only real problem is figuring out the paths, but at least it works, and overall I waste less time than with MTP.
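
The commands themselves are short (paths and the IP are examples):

    # over USB
    adb pull /sdcard/DCIM/Camera ./photos
    adb push notes.txt /sdcard/Documents/

    # over Wi-Fi: switch adbd to TCP while still plugged in, then unplug
    adb tcpip 5555
    adb connect 192.168.1.50:5555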

yjftsjthsd-h(10000) 6 days ago [-]

You can literally do that; termux will run sshd quite happily. I'm pretty sure there are sftp servers in the app store as well, but I don't really trust them.

I run the reverse; my laptop runs sshd and then I ssh/scp/rsync from termux on my phone. But either way works.
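
For anyone curious, the phone-side setup is roughly this (from memory; Termux's sshd listens on port 8022 by default, and the username is mostly ignored; IP and paths are examples):

    pkg install openssh
    sshd

    # then, from the laptop:
    scp -P 8022 u0@192.168.1.50:/sdcard/DCIM/photo.jpg .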

smoser(3198) 6 days ago [-]

Besides AirDrop there is 'Copy and Paste across devices' https://support.apple.com/kb/ph25168?locale=en_US

omouse(3213) 6 days ago [-]

It's because the whole app ecosystem is proprietary and the open source packages aren't as polished. I remember when it was a huge pain to use Bonjour.

It seems like this should be a solved problem, but maybe it takes a Mozilla or some other large entity to push the marketing, customer support, and development needed to really solve the problem of transferring large files securely.

alanpearce(10000) 6 days ago [-]

With macOS and iOS, AirDrop, as others have stated. For other platform combinations, there is NitroShare[1], which works in almost the same way.

[1]: https://nitroshare.net

morpheuskafka(10000) 6 days ago [-]

On iOS, having a file-manager web server is a common workaround; some apps like VLC even have their own. The only issue is that the server stops if you switch apps. There's also iMazing, which uses the iTunes protocol and is pretty good, but is unfortunately paid.

TulliusCicero(10000) 6 days ago [-]

Neat!

How do they handle abuse though? Like, people using it to host, say, pirated TV shows? Maybe a max download limit that makes it impractical for that use case?

mont(10000) 6 days ago [-]

2.5GB file limit is a bit small for good quality TV shows (and especially movies).

Moter8(10000) 6 days ago [-]

The files are available until they have been downloaded (from 1 to 100 times) or until a certain timeframe has elapsed (from 5 minutes to 7 days). See the screenshot in the article.

emddudley(10000) 6 days ago [-]

I've used this before to send sensitive documents to my attorney, who would have otherwise just wanted email attachments. It worked great.

BrandonM(2924) 6 days ago [-]

Based on what I've read, the security model seems to be almost the same as email attachments?





Historical Discussions: Spotify to Apple: Time to Play Fair (March 13, 2019: 1893 points)

(1894) Spotify to Apple: Time to Play Fair

1894 points 6 days ago by dmitriid in 3844th position

www.timetoplayfair.com | comments | anchor

2010-2011

Apple starts changing its App Store Guidelines

When Apple introduced the Guidelines we thought, "Yep. Makes total sense to have rules for security, safety, privacy, and quality." But Apple has not only unilaterally changed the rules time and again, but also frequently decided to interpret (and re-interpret) them in ways that disadvantage rivals like us. So those totally legit things we did which were fully in compliance just a few months ago? Now apparently not so much.




All Comments: [-] | anchor

coldtea(1216) 5 days ago [-]

Knowing how crap the Spotify app is (an Electron monstrosity on the desktop), that's for the best.

xxpor(3886) 5 days ago [-]

Given that electron monstrosity allows them to have a Linux client, Spotify will always have my support over Apple Music

duski(10000) 5 days ago [-]

Except it's not Electron, and it doesn't hog half as much as iTunes.

dang(163) 5 days ago [-]

We detached this subthread from https://news.ycombinator.com/item?id=19377601 and marked it off-topic.

calcifer(2501) 5 days ago [-]

Spotify is not an Electron app; it uses CEF.

ChrisRR(10000) 6 days ago [-]

While I was ready to leave a comment about Spotify not playing fair with artists, I have seen Apple's unfair business practices first-hand when developing apps and Bluetooth devices.

Unfortunately they are abusing their power, and are being flat-out unfair to anyone who wants to develop a Bluetooth device or app. Vetting apps to ensure a safe environment is fine, but these practices go well beyond that.

dbuder(10000) 5 days ago [-]

What kind of issues did you run into developing BT devices, beyond them treating the spec like a rough draft of suggestions?

dmitriid(3844) 5 days ago [-]

> Spotify not playing fair with artists

Spotify is paying as much as the music industry extorts from it. Where does the money go? Ask the big labels; they are the ones paying artists.

berbec(3328) 5 days ago [-]

I hate to say this, but one of my main gripes against Apple - the unfair advantage they have in pricing by not having to 'pay themselves' 30% - has lost some strength in my mind.

Every grocery store takes a cut of every item sold in the store. The store owner determines the percentage. Just like Apple. Grocery stores make products similar, or nearly identical, to the third-party merchandise they sell. Just like Apple. Grocery stores can routinely undercut the pricing of third parties because they don't need to 'pay themselves'. Just like Apple.

Damn. I really wanted to hate EvilBigTech for this.

I have no issue with the rest of Spotify's points, but this one doesn't hold as much water for me anymore.

miloignis(10000) 5 days ago [-]

I think the fact that Apple prevented them from using payment methods that didn't go through Apple makes it significantly worse though - to carry the analogy (perhaps too far), it's as if the grocery store forbade the packaging of the products it carries from mentioning the website where you can order other products.

steve1977(10000) 5 days ago [-]

Spotify asking someone else to play fair, ha, that really made my day...

I mean, of all companies, Spotify. Their whole business model is built upon not playing fair with content creators.

parthdesai(10000) 5 days ago [-]

> Their whole business model is built upon not playing fair with content creators.

Who decides what's fair? Or are you forgetting that the go-to website to download songs before streaming was thepiratebay?

hbosch(3749) 5 days ago [-]

Which streaming service plays the most fair?

dmitriid(3844) 5 days ago [-]

They would gladly pay their fair share. They don't own the content, though, the labels do.

As much as 70% of a streaming service's revenue (any streaming service, not just Spotify) goes to rights holders.

Orphis(10000) 5 days ago [-]

It's a lot more nuanced than that.

The payout per play is basically different between free and premium users, as premium users add more to the revenue shared among all the artists.

If you look at revenue for premium users, then it's probably much higher than that, but when you add free users, it will lower the average.

A solution, you might think, would be to remove the free tier. But that would only cut an (admittedly smaller) revenue stream for artists, cut a promotional channel for artists, and leave fewer opportunities to upsell premium accounts to free users.

Someone who isn't even a free user will either not listen to music or pirate it. Is that preferable?

A lot of artists admit that Spotify's pay-out per stream is certainly lower, but it still dominates all the other streaming revenues, as long as you are actually an artist with the potential to make any. Some unknown artist selling one CD for $10 on Bandcamp, with no listens on streaming services, will probably think differently.

badatshipping(10000) 5 days ago [-]

Apple owns their platform. It cost a fortune to develop, and it's theirs.

Imagine it's the 1800s and you wanted to develop technology to let anyone listen to music anytime. You'd have to invent the computer, an operating system, and the internet (some means of distribution), then invent Spotify. (Or maybe not all those things in their entirety, but just enough to support Spotify's functionality.)

Since it's 2019, there are platforms that provide the operating system and means of distribution, and Spotify merely has to deliver an app that floats in high-level-land. They then complain that the platform, which did all the work Spotify didn't have to, isn't being 'fair.' What does that mean? What are they rightfully owed?

Why doesn't Spotify create their own OS and App Store, and develop/distribute Spotify there? Because it's inconceivably hard? Surely that's why the people who did it get to set the rules.

writepub(10000) 5 days ago [-]

Maybe Intel should only allow Apple to run Intel-approved apps on Intel Macs, because it's Intel's CPU running the show? And what about the modem chip vendors? Shouldn't Qualcomm put a firewall in firmware and filter all traffic on the iPhone that it hasn't approved? What about the ISP carrying those packets? Shouldn't they lay claim to packet ownership?

This platform-ownership piffle Apple has sold for eons is pure horse shit, and the EU regulators will rule as such. Simply put, a fridge manufacturer cannot control what items get stored and cooled in the fridge after the customer has paid in full for both the fridge and the item. And the fridge manufacturer certainly cannot restrict you to items sold through its own stores.

UpperBodyEimi(10000) 5 days ago [-]

What am I missing here? Why can Spotify not redirect users to subscribe on their own website?

I'm pretty sure I signed up on Spotify's website, then logged in on my iPhone. Apple definitely doesn't get 30% of that subscription fee.

thekyle(10000) 5 days ago [-]

According to the website, the debate is that Apple does not currently allow apps to do what you are describing.

A few quotes from the timeline:

> Apple now prohibits buttons or links to any other external ways to pay.

> While we haven't been able to include any buttons or external links to pages containing product info, discounts, promotions, etc. (even if they don't link directly to a payment system!) since Feb. 2011

coldacid(10000) 5 days ago [-]

Because Apple rejects their app whenever they try something like that, or even have phrasing suggesting that users do so without actually redirecting themselves.

alex_duf(3924) 5 days ago [-]

>Why can Spotify not redirect users to subscribe on their own website?

Because Apple forbids you from informing your users that it's cheaper on the website.

fb03(10000) 5 days ago [-]

Let's be fair then:

Phones now have the same level of processing power as portable computers. We shouldn't be obligated to run only applications that are siphoned through a third-party 'trusted clearinghouse', be it the smartphone vendor or the operating system vendor.

If I want more safety, or I'm not a power user, I can flick a switch on the device to allow that binding behavior. Heck, they are giving me the 'free service' of taking care of app security for me; I could even pay for that if I'm serious about having my apps checked.

Phones are computers. You own the hardware; you should be able to install whatever you want. Always.

Is the argument about protecting the masses of non-tech people? Great, make it an opt-out. And still: there ought to be someone keeping tabs on that big brother (government regulation? agencies?) or else shady behavior can ensue as well.

sneak(2871) 5 days ago [-]

> Phones now have the same level of processing power as portable computers. We shouldn't be obligated to only run applications that are siphoned thru a third party 'trusted clearinghouse', be it the smartphone or the operating system vendor.

I pay a lot of money for iPad Pros and Google Pixelbooks for precisely this functionality. When the user can run anything they want, the result is the unsafe landscape of malware we see on common desktop OSes.

bovermyer(2552) 5 days ago [-]

Unpopular opinion: I will never go back to the Apple ecosystem. I like my freedom way too much.

Doctor_Fegg(3185) 5 days ago [-]

That's not an 'unpopular opinion', that's a personal preference and you're welcome to it.

novaRom(4020) 5 days ago [-]

Why unpopular? It's a rational decision. Diversity vs centralization.

brootstrap(10000) 5 days ago [-]

Yeah, good for you; many of us never even got into it to begin with. I will say my job got me a MacBook Pro (after decades of Windows usage and hackage) and this bitch is still kicking ass 5 years later. Big ups for the laptop. iPhone? Who gives a shit, it's a phone. Some people are super into it; I'm like, dude, I don't fucking care about the screen, the camera, the unlocking your phone with your face. If you think all that jazz is worth $800+, be my guest.

Grustaf(4031) 5 days ago [-]

Not unpopular, just uninteresting.

digianarchist(3798) 5 days ago [-]

Pretty annoying as an Apple Watch user to find out only Apple Music is allowed to store music on the device itself.

saagarjha(10000) 5 days ago [-]

I don't see why other watchOS apps cannot store music on the watch itself. Don't apps get some local storage space? Why can't they use that?

wowandflutter84(10000) 5 days ago [-]

Pandora has an Apple Watch app with offline playback.

iambateman(3996) 5 days ago [-]

Just because Apple can do as they please doesn't mean they ought to be allowed to.

Antitrust is a useful tool for when players end up controlling monopolies and using them in anti-competitive ways.

briandear(1874) 5 days ago [-]

It's important to define exactly what is "anti-competitive." If Apple banned Spotify or Deezer, that could be considered anti-competitive. But they aren't. Netflix has original programming, is that anti-competitive? Don't Netflix originals compete against non-Netflix shows for streaming revenue? Are there alternatives to Netflix? Yes. Are there alternatives to the App Store? Yes: Spotify could, for example, be web-only if they wanted to. Spotify also could sell on macOS, Windows, Chrome, Android, and Linux. "Having an app" isn't a particular right, nor even a requirement for reaching users. Also, they don't have to sell their subscriptions via in-app purchase; that can be done via web without paying Apple's commission.

ppeetteerr(10000) 5 days ago [-]

Antitrust would not apply here as Apple's share of the smartphone market is less than monopolistic.

There is a stronger case against Amazon for selling Amazon-branded, fast-moving products like batteries and diapers, even though they don't stifle the other brands.

bunderbunder(3531) 5 days ago [-]

There's an interesting philosophical question there. Can a vendor who has 15% marketshare really be called a monopoly?

Sure, they have complete control over who publishes on their own platform, but that sort of thing happens all the time without anyone batting an eyelash at it: The major console vendors do this, as did all cell phone vendors in the pre-smartphone era. My digital camera does this.

That leaves me thinking there's no real ground for invoking antitrust laws on this issue. Though there's still plenty of room to say that these policies are consumer-hostile and not in the public interest, and therefore there ought to be a law against it.

willart4food(4028) 5 days ago [-]

Apple is the new Microsoft! #FTC #MONOPOLY

dang(163) 5 days ago [-]

Please don't do this here.

supermatt(3899) 5 days ago [-]

Just as Microsoft were stopped from shipping a browser with their OS, vendors should be prevented from shipping an App Store with their OS.

You can't move to another platform without losing access to all your 'purchases' - there is no free market. They have monopolies within ecosystems they created.

They should be FORCED to have an open platform, with users able to access multiple 3rd party storefronts on multiple platforms. They can market themselves as official/curated/whatever they want but the user should have the choice.

jhasse(3968) 5 days ago [-]

> Just as Microsoft were stopped from shipping a browser with their OS, vendors should be prevented from shipping an App Store with their OS.

Microsoft had a monopoly, Apple doesn't. It's as simple as that.

arendtio(10000) 5 days ago [-]

Vendors could solve the whole problem by letting users choose which default apps they want to install in the beginning. In order to make it simple, they could offer bundles. >95% of the users would choose the 'Apple Apps' bundle on iOS and the 'Google Apps' bundle on Android and the 'Microsoft Apps' bundle on Windows. And the few people choosing the 'Privacy-First' bundle with Firefox and Ad-Blocker are those people who would install an alternative browser anyway.

But instead, they are greedy and want 100% of the users of their platform to use their apps.

unstatusthequo(3972) 5 days ago [-]

Forced? That doesn't sound like freedom. Forced to enable your competitors to compete with you on your own platform? Forced to develop a platform upon which competitors can reduce your payoff on that investment?

Look at Google Play store. Much more 'open,' and a shitload of malware, scams, and shit software. Without curation of any kind, you end up with shit. It's the Tragedy of the Commons applied to apps. Build it, and they will ruin.

dwighttk(2958) 5 days ago [-]

That sounds like a great strategy to switch everything everywhere to crappy web apps.

ubermonkey(10000) 5 days ago [-]

>Just as Microsoft were stopped from shipping a browser with their OS, vendors should be prevented from shipping an App Store with their OS.

That comparison doesn't really work, because MSFT had an effective desktop monopoly. By contrast, Apple is a minority player in the mobile market.

>They should be FORCED to have an open platform,

Just because YOU want this doesn't mean the Apple users want it. I'm utterly content with a single, curated-by-Apple app store for iOS. I like the stability it affords the platform.

I'd never accept it on a general-purpose computing device, but for my phone it's perfect.

CivBase(10000) 5 days ago [-]

> vendors should be prevented from shipping an App Store with their OS

The problem here isn't the quality of the App Store. The problem is that Spotify has no alternative.

Apple is a hardware developer, a software developer, and a software retailer. The problem is how tightly coupled those three roles are on the iPhone.

You can't run iOS on other hardware. You can't run another OS on the iPhone. You can't install apps on iOS except from the App Store. You can't use the App Store on other OSes.

Those are the underlying problems.

Users can't use part of the ecosystem. They have to invest in the whole thing, giving Apple more control and reducing consumer mobility.

sandov(10000) 5 days ago [-]

Judging by the number of sales, people seem to either like these walled gardens or not care about them.

For those of us that care about freedom, we can just buy a phone that's compatible with LineageOS (or Replicant if you're willing to go full Stallman) and install it.

georgespencer(3871) 5 days ago [-]

Apple invests billions into R&D, design, manufacture, and UX design for its devices. Your logic would then see them dealing with picking up the pieces for bricked devices, hacked passwords, lost data, viruses, etc.

Using an Apple device is accepting a benevolent dictatorship, and it's a trade-off tonnes of users are happy to make.

ihuman(2787) 5 days ago [-]

> Just as Microsoft were stopped from shipping a browser with their OS

What do you mean? Every time I've installed Windows, it came with Internet Explorer or Edge (which I then used to install another browser).

acdha(3560) 5 days ago [-]

> vendors should be prevented from shipping an App Store with their OS.

Congratulations, you've just reinvented the desktop adware / malware market. We know that if this is possible users will be manipulated into making bad decisions which are hard to recover from, and in many cases those bad decisions will be made for them by their carrier.

What I would support is that the app store should be more open: no refusing apps for competing with the built-in apps, and removing the ban on purchases which don't go through the platform — let the store owner charge for payment processing but e.g. Amazon should be allowed to sell you a movie on iOS without paying Apple a cut simply by using their payment system instead.

iwasakabukiman(10000) 5 days ago [-]

> Vendors should be prevented from shipping an App Store with their OS.

So then how would a novice user get apps the first time they boot up their phone? They would have to know where to get apps from. That seems like an easy way for users to end up downloading a bunch of malware because they think it's the official Apple or Google app store when it's just a random website.

Having a built in app store has huge advantages in security and usability for the end user.

jdietrich(4020) 5 days ago [-]

>Just as Microsoft were stopped from shipping a browser with their OS, vendors should be prevented from shipping an App Store with their OS.

There's nothing wrong with providing an app store as long as that store offers fair and non-discriminatory terms to all vendors. Charging app developers for the costs of running a store and maintaining an ecosystem is perfectly reasonable; using those charges to stifle competition isn't.

Engineers tend to want neat, perfect solutions that require no human judgement, but they're very rare in the real world. There are obvious and major disadvantages to a free-for-all versus a walled garden, for both developers and users. Customers have a right to choose a tightly-controlled platform, just as they have a right to shop at a grocery store that refuses to sell poisonous food. Banning app store bundling is a scorched-earth approach that would do more harm than good for the average consumer.

coldtea(1216) 5 days ago [-]

>Just as Microsoft were stopped from shipping a browser with their OS

They were never stopped from that.

They were stopped for abusing their monopoly (e.g. threatening OEM PC vendors that unless they bundled this or that, they won't get Windows for a special price, etc).

And that was when they had a monopoly (close to 98% of the desktop AND business market) -- which in itself is not illegal.

numair(2508) 5 days ago [-]

Considering all of the dirty tricks and favoritism between Spotify and Facebook, I'm not sure they are the right messengers for this.

Oh, and one of the early execs involved with Spotify told me, point blank, "that man needs to die already" in reference to Steve Jobs before his passing. Yeah, I was just as shocked as you might be in reading that. I doubt I was the only one aware of such dark sentiments.

The executive team at Apple is probably looking at this website and thinking, "sorry but not sorry." Making life difficult for Spotify could be seen as a way to carry on Steve's legacy.

snaily(3973) 5 days ago [-]

This does not ring true to my experience.

I was at the Spotify office the day the news of Steve's untimely death broke. It was a solemn day, and the one senior executive I spoke to expressed true sorrow, as if a longtime friend had passed. Jobs was incredibly respected by the Spotify crew, as far as I'm concerned.

simias(3966) 5 days ago [-]

Your comment is effectively only gossip and broad accusations. Leaving your anecdote aside, what dirty tricks and favoritism do you have in mind exactly?

nchie(10000) 5 days ago [-]

Considering your other comment with 'billion dollar investors' in a company that was barely worth 1 billion dollars back then, this definitely reads like a made-up story.

ianai(4029) 5 days ago [-]

Doesn't really matter where the sentiment comes from if we're to have an efficient economy. There should be more choices available. Any idea what made the Spotify execs wish death on Jobs?

Gigablah(3298) 5 days ago [-]

You're not really painting Apple in a good light here, either.

alibarber(3520) 5 days ago [-]

'Making life difficult for Spotify could be seen as a way to carry on Steve's legacy' - how does this benefit me as an Apple customer?

celticninja(4012) 5 days ago [-]

I have no idea what kind of justification you are trying to make for Apple's behaviour here, but you seem to think it is justifiable. If this was Microsoft instead of Apple, you would probably be on Spotify's side. And it should not matter if it is Spotify or someone else: Spotify have the money and resources to try and fight Apple; so many other app makers don't, and Apple shits all over them.

maaaats(2796) 5 days ago [-]

What are the dirty tricks you're mentioning?

And I'm not shocked about your comment, as I don't really believe it. I don't think HN should be a place to spread gossip and possibly lies like this.

ayvdl(10000) 5 days ago [-]

To whoever is responsible for this page: in the SVG files, turn the text into paths, otherwise it looks like Arial with terrible kerning.

https://i.imgur.com/uLaVwnd.png
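
Inkscape can do that conversion from the command line, if memory serves (filenames are examples):

    inkscape --export-text-to-path --export-plain-svg=out.svg in.svg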

ukyrgf(10000) 5 days ago [-]

As someone who constantly curses their Android app design, seeing them make such a rookie move was a nice bit of schadenfreude.

have_faith(3979) 5 days ago [-]

Glad I searched before commenting; it's the first thing I noticed. Could be a covert tactic to trigger Apple though: more deadly than a computer virus, it will probably shut Infinite Loop down for a day.

crossman(10000) 5 days ago [-]

That was my first thought. The kerning is so bad that some of this is just unreadable

simongr3dal(10000) 5 days ago [-]

It's also very bad on the 'There's an app for that' image.

The other parts of the page are very nice though. The font contrast is fine and the font has a nice weight to it. When they want us to read something they actually still know how to make a proper webpage. And it works fine without JS.

yeldarb(3995) 5 days ago [-]

The timing of this is interesting in the context of Elizabeth Warren's recent proposal to forbid platform operators from also being participants on their own platforms.

It looks like they've filed a complaint with the EU Commission; I wonder if it will become a talking point in US politics as well.

atestu(2454) 5 days ago [-]

Didn't Elizabeth Warren say Apple shouldn't distribute its own apps in the App Store? Apple Music isn't in the App Store...

gurpreet-(10000) 5 days ago [-]

Personally, as an app developer, I think Spotify is taking the right stance here.

I believe that Apple should take a cut, but not as high as 30%. For some categories, like microtransaction-based apps, Apple should take maybe 5%. Plus, I think that if Apple has a competing service, then it should waive the tax altogether. It's only fair.

If the only function of the App Store (or Play Store) is to provide hosting and some quality control then I don't see why apps can't be hosted on a secure website from the vendor - as far as I'm aware, Apple requires you have some sort of website anyway.

At least Google allows installing apps without the Play Store; perhaps it's time Apple permitted something similar with its apps? This could solve this whole problem and have other positive side effects, such as people being less likely to jailbreak their iPhones.

veritas20(4030) 5 days ago [-]

Not sure that I completely follow your logic in regards to the 'tax' here. With any product, you have two main things: production of the product and distribution of the product. Spotify doesn't NEED to be on Apple devices (they started off on the web), but they WANT to be on Apple devices (and Android devices) because they are great distribution channels for its product.

That said, how much is distribution worth to Spotify? Imagine that Spotify was not software, but instead it was a hardware device. Would they expect Best Buy to carry it for free? Would they expect Walmart or Target not to offer a store branded competitor? I think not.

When you don't own your distribution channel, you pay for distribution one way or another.

duhi88(10000) 5 days ago [-]

Making micro-transactions more profitable would be an unfortunate course of action. There are enough of those apps in the store, and they are clearly profitable enough if they can buy Super Bowl airtime.

Subscriptions for apps should be different, but I see a challenge in drawing the line between Spotify/Netflix and a scam app like 'awesome culculator' that charges $5/mo to people who didn't realize it was a subscription (there was an article on HN this week about apps like that, targeting kids and the elderly, of course).

If Apple has a stipulation about requiring a website, then maybe subscription-based apps can get a discount on the 30% fee if they also have a web or desktop-based version of their application that provides comparable functionality and takes payments.

judge2020(4027) 5 days ago [-]

While the Apple Tax is a problem of its own, I would hate for Apple to allow downloading apps through websites. A large part of iOS security is that everything has to be co-signed by Apple unless it's an Enterprise distribution app [1], so even if there is an exploit that breaks out of the app sandbox, you don't have to worry about malicious websites drive-by downloading it.

1: they're likely refining the process of obtaining one of these certs after the recent news reports on business fraud

GeekyBear(10000) 5 days ago [-]

Spotify has already turned off the ability for new accounts to pay to upgrade to a premium account inside their iOS app last year.

You pay for your account on Spotify's own web site, which bypasses Apple getting any cut at all.

https://support.spotify.com/us/account_payment_help/subscrip...

Netflix has done the same thing.

https://www.billboard.com/articles/business/8471988/spotify-...

In my book, the problem is that you are not allowed to provide a link to your payment website inside your app.

latexr(3965) 5 days ago [-]

> I believe for some categories like microtransaction based apps Apple should take maybe 5%.

And then every PAID app will switch to FREE and charge for PRO, to circumvent the 30%. Oh, Apple complained it's not technically a microtransaction? Fine, just separate every feature into a new purchase.

> If the only function of the App Store (or Play Store) is to provide hosting and some quality control then I don't see why apps can't be hosted on a secure website from the vendor

If the vendor server is compromised or is down, a download from the App Store won't work for a single app, leaving customers confused and complaining to Apple, who can't fix the issue.

dalore(10000) 5 days ago [-]

What they need to do is split off the new competing services into a walled-off corporation (even a new entity) that plays fair by the same rules, so the 30% tax applies even to them.

cma(3301) 5 days ago [-]

'then it should waive the tax altogether. It's only fair.'

They've still got to at least cover bandwidth (minor) and payment processing.

supernova87a(10000) 5 days ago [-]

What is 'should'? Who determines should, aside from what the law currently says?

Maybe I'm uninformed, but it doesn't appear to me that access to an app store and the terms of such access (which by the way didn't even exist almost 10 years ago) is a public utility or good with an expectation of equal access or certain fair pricing.

Then, under what right does anyone claim that Apple (or any ecosystem platform) has to do anything beyond what is regulated in the payment and terms of operation? What makes your 30% price the right call? If you're an app developer, are you equally ok with someone else determining what you get to charge for your app when you're done with it? Isn't that the same (lack of) logic?

scarface74(3939) 4 days ago [-]

> If the only function of the App Store (or Play Store) is to provide hosting and some quality control then I don't see why apps can't be hosted on a secure website from the vendor - as far as I'm aware, Apple requires you have some sort of website anyway.

Because that worked so well for Windows with malware, viruses, and ransomware.

Oletros(3018) 5 days ago [-]

A lower cut?

Why not just allow developers to use their own payment systems?

3327(3832) 5 days ago [-]

DOJ - SOMEONE PLEASE TELL ME (SORRY FOR CAPS) HOW IS THIS DIFFERENT FROM USA vs. MICROSOFT (besides the obvious).

ksec(2106) 4 days ago [-]

I believe the problem is not the 30% cut itself. It is a problem of anti-competitive behaviour when you run a competing service that doesn't bear the cost of those cuts.

This is not the same as Amazon offering their own label in their store. Customers can shop at dozens of other online or local retailers, and Amazon does not charge other labels a 30% cut in stocking fees.

I believe Apple should charge a fair amount only when they have a competing product or service within their locked system. Had Apple charged Spotify 15% for the first year and 10% for all subsequent subscription years, it would have been much better. I don't believe 5% would work, though I have seen many say 5% should be enough: the costs of running microtransactions, processing, billing, legal, etc. just about break even at 5%, even at Apple's scale. I don't see how charging 10% would be unfair. (In the US at least; in the EU the processing fees are much, much lower.)

Or Apple should never have made Apple Music in the first place. I still don't see any value in Apple offering it. iTunes was required for the iPod, and it changed the music industry as a whole; along with iPod sales, it ultimately saved Apple. No one will buy an iPhone because of Apple Music, and Apple Music itself isn't even profitable.

hokumguru(10000) 5 days ago [-]

This opens up a wide host of negative side-effects including the extreme ease of malware. I'd say the #1 value proposition for the App Store, and why most iOS users prefer it, is the guarantee of virus-free programs.

soup10(10000) 5 days ago [-]

Apple should take no cut, and profit only from the sale of hardware. By taking a cut on software they are double-dipping. Good software makes the phone more valuable for users and drives phone sales.

gigatexal(3924) 5 days ago [-]

Try telling the IRS you think their tax rates are too high. Apple owns the sandbox with the most valuable customers and if you want to play in it you gotta pay. Personally I love letting Apple handle my subscriptions and subscribing through iTunes is one of the things it does well.

hellopat(10000) 5 days ago [-]

Is Netflix next? March 25th is right around the corner...

eicnix(3680) 5 days ago [-]

Netflix already removed iTunes payment[0] to avoid the 30% Apple tax ($256 million in 2018)

[0] https://techcrunch.com/2018/12/31/netflix-stops-paying-the-a...

intellix(10000) 5 days ago [-]

When I'm on Spotify on macOS and press play on my keyboard, it always opens iTunes instead. I'm sick of this second-party support for everything, especially when the Apple versions are so dire.

Ardon(10000) 5 days ago [-]

I had a similar frustration where the media keys would control videos in Safari instead of my music.

This little app lets you go back to the old behavior, or set a priority: http://milgra.com/mac-media-key-forwarder.html

rchaud(10000) 5 days ago [-]

It would be nice if the platform-independent music sites (Spotify, Tidal, Beatport, Bandcamp, etc) joined together to fund a web framework for music streaming that didn't require the same level of system access as a native app. That would allow them to escape the App Store tax, and the headaches related to Apple's review process, deployment issues, etc.

Right now, Apple owns the moat. The fairness Spotify is asking for requires people to leave money on the table. Because of that, a well-intentioned public awareness campaign is no substitute for legal action.

PS - I'd love it if more news websites were structured like the simple timeline on this site. I'd like to see how the situation developed over time, rather than seeing a stream of articles littered with links to past stories. Just give me one timeline I can scroll through, and I'll decide which of the linked stories I want to click on.

theandrewbailey(1927) 5 days ago [-]

I'm pretty sure that such a web framework exists. There's the <audio> element and the Web Audio JS API.

https://developer.mozilla.org/en-US/docs/Web/HTML/Element/au...

https://developer.mozilla.org/en-US/docs/Web/API/Web_Audio_A...
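
The playback core of such a framework is genuinely small. A sketch in TypeScript of what I mean; the stream URL and button selector are made up:

    // Progressive playback via the <audio> element, with a Web Audio
    // gain node for volume control: no native code, no app store involved.
    const audio = new Audio("https://example.com/streams/track.mp3");
    audio.preload = "auto";

    const ctx = new AudioContext();
    const source = ctx.createMediaElementSource(audio);
    const volume = ctx.createGain();
    volume.gain.value = 0.8;
    source.connect(volume);
    volume.connect(ctx.destination);

    // Browsers require a user gesture before audio may start.
    document.querySelector("#play")?.addEventListener("click", () => {
      ctx.resume();
      audio.play();
    });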

toasterlovin(10000) 5 days ago [-]

Whenever Apple's 30% take on the App Store comes up, people on this website get all up in arms about it. Let me provide some perspective, which I think is sorely lacking:

I sell physical goods on Amazon. They charge me 15% for this privilege. The only reason they don't charge more is because physical goods have lower margins than digital goods (my margins after it's all said and done are somewhere between 10-15%). And, you know what? I'm happy to pay Amazon their 15% because they're bringing me customers that I wouldn't have otherwise.

Perhaps thinking about it like this is instructive: does Amazon have an obligation to let me sell on their platform simply because lots of consumers choose to buy stuff from Amazon? Does Amazon have an obligation to change their fee structure so that I can more easily compete with Amazon Basics branded products?

Should Apple charge a smaller percentage to subscription services that can't afford to give up 30%? Probably. It would net Apple a percentage of something, rather than the 30% of $0 that they're getting now. And it would be a better experience for Apple's customers. But it is in no way unreasonable for Apple to expect to get paid for originating sales of software and subscriptions. And it is certainly not unreasonable for them to get paid significantly more than credit card processing fees.

leoh(3853) 5 days ago [-]

Amazon has an obligation to play fairly in the marketplace at large and they don't. They operate at a loss. They shouldn't have to let anyone on their platform. But they should also play fair. As it relates to Apple, the iPhone or Android are literally the only reasonable means of accessing online services on the go. They are a platform of a higher order than Amazon and should allow others to play in their sandbox. I don't know what the right number should be — probably it should be zero. 30% is insane.

bhl(10000) 5 days ago [-]

You're missing the point: Apple seems to be abusing its power as a platform to unfairly charge content providers a 30% tax while not applying it to themselves. On Amazon, it would be comparable to charging you 15% while not charging themselves the same on their Amazon Basics products.

LynxInLA(10000) 5 days ago [-]

The Amazon Basics point is interesting. They are able to capitalize on others' R&D and market research at almost no cost. If you want to sell a product, you have to do some amount of research to determine if it will be profitable. They are able to just look at sales statistics and launch their own version of popular items that are profitable to produce.

This feels like an abuse of their position as a marketplace platform, but it isn't that different from grocery stores creating white label/off brand products.

Maybe it only becomes an issue when they promote their own items and bump original products down the search results. This is closer to the behavior that Spotify alleges Apple is doing.

locust101(10000) 5 days ago [-]

The difference is that for a mobile app, it's either Apple's way or the highway (more like nothing). You can't very well be a significant app developer by ignoring iOS, and Apple does not allow users to sideload your app outside the store. So the Amazon analogy is disingenuous at best. You can still operate a website or use dozens of other e-commerce platforms to sell your goods, but without the App Store there is no way for you to reach iOS users. iOS users are almost 50% of mobile users and contribute a much higher percentage of revenue.

WA(3447) 5 days ago [-]

Thing is, unless your app is already somewhat popular, Apple won't do anything for you. Maybe you get more customers. But maybe an app like Spotify invested enough in marketing to reach new customers without Apple's help.

The problem is that it's really hard to measure how many customers Apple really brings and how many would find an app through other means.

jordansmithnz(2985) 5 days ago [-]

Something to note: competition-wise, Spotify doesn't have a 30% App Store profit handicap compared to Apple Music; it has a 60% one.

Spotify doesn't just forfeit 30% of their App Store revenue, they pay this directly to Apple, so the relative handicap is doubled: on a $10 subscription, Spotify nets $7 while its competitor nets $10 and pockets Spotify's $3 on top, a $6 swing per $10.

Grustaf(4031) 5 days ago [-]

No, Apple doesn't get all its money from Spotify. For Apple it's insignificant.

And of course it only applies to in-app purchases; if you browse to Spotify.com on your iPhone and pay there, Apple doesn't get a penny.

Most of these claims are dubious at best. Bunch of cry-babies.

judah(3341) 5 days ago [-]

This exemplifies some of the reasons proprietary app store lock-in is bad for consumers.

Progressive Web Apps -- web apps that are installable and available offline without any app store -- are a viable alternative, and ultimately a threat to Apple's App Store racket. It's likely why iOS Safari continues to drag its feet on PWA support.

fmo91(10000) 5 days ago [-]

I agree with that. However, what worries me about PWAs is discoverability. How can I find a catalog of available PWAs to download? Is there a way?

dwighttk(2958) 5 days ago [-]

The next progressive web app that I like to use will be my first.

flanbiscuit(10000) 5 days ago [-]

Would a PWA be a viable option for an app like Spotify? I'm definitely on board with them for simple CRUD type apps

madeofpalk(3683) 5 days ago [-]

> Progressive Web Apps [are] ultimately a threat to Apple's app store racket

Honestly, no, they're not. Even the Gold Standard of PWAs - the Twitter Lite app that Google (the patron saint of PWAs) helped them build - provides an inferior experience compared to native apps.

People bring this up often - Apple refuses to improve Safari to bolster their App Store - which completely ignores all the valuable improvements Safari has made to give web apps more native-like features, like backdrop-blur for blurred backgrounds and CSS Snap points for native JS-less carousels. Chrome doesn't support either of these.

GeekyBear(10000) 5 days ago [-]

They have some valid complaints, but 'if we use Apple's payment infrastructure, we have to pay them a cut for doing so' isn't one of them.

The thing I find to be anticompetitive is that Spotify cannot provide a web link to their own payment processing web page within their app.

Apple has long allowed content providers to opt out of using Apple's own payment system (and paying Apple a cut), but without being able to point users to your payment website, this rings hollow.

As far as Apple Watch goes, the API to allow for downloading content to local storage and playing it back in the background landed in last year's Watch OS update.

Pandora manages this task just fine.

AsyncAwait(4025) 5 days ago [-]

> They have some valid complaints, but 'if we use Apple's payment infrastructure, we have to pay them a cut for doing so' isn't one of them.

But Spotify doesn't want to use Apple's infrastructure and is compelled to do so regardless.

Spotify is saying that if using Apple's payment infrastructure were free, then not allowing other options would not be a problem; but since it's not free, not allowing anything else is a problem. It's not that Apple charging for the use of its payment infrastructure is a problem in itself.

Grustaf(4031) 5 days ago [-]

> The thing I find to be anticompetitive is that Spotify cannot provide a web link to their own payment processing web page within their app.

Well, you probably can't advertise your apartment on Airbnb and then link people to your own payment service that bypasses Airbnb's fee either. It sort of makes sense.

nubela(3435) 5 days ago [-]

Upvote the fuck out of this.

Google does exactly the same thing with Chrome Extensions or Android Apps.

elagost(10000) 5 days ago [-]

I'm no great fan of Google, but it's a little harder to point the finger at them.

F-droid works perfectly well on any Android device, and is a viable alternative to Google Play (for some users). Android .apk files can be installed on any Android device with one switch flipped - and the first time you attempt to install an .apk file, it points you to the setting. Chrome extensions can similarly be packaged and distributed from another source (e.g. GitHub) and are treated as first-class Chrome extensions alongside their store-installed counterparts. Apps installed on an iPhone via Xcode (only available for macOS, which is only available on Apple hardware) expire after 7 days and refuse to open unless they are re-deployed. They last for a year if the user pays Apple $99/year for a developer certificate.

Google's platforms don't have people spending money as much as iPhone users do, but they are not nearly as locked down or restrictive.

adontz(10000) 5 days ago [-]

Cannot post a link to https://www.timetoplayfair.com/ on Facebook.

It says:

  Oops
  Something went wrong. We're working on getting it fixed as soon as we can.

erikig(4011) 5 days ago [-]

Facebook was experiencing issues all day today.

https://www.cnn.com/2019/03/13/tech/facebook-instagram-down/...

localhoat(10000) 5 days ago [-]

That's actually the reason why I don't buy a HomePod.

Hamuko(10000) 5 days ago [-]

The HomePod is a horrible device unless you want Siri in a can. The source selection is just so locked down for whatever reason, so unless you're all-in with the Apple ecosystem, you're going to be missing out on something.

Zenbit_UX(10000) 5 days ago [-]

I guess now I know where Spotify's priorities have been over the last few years as their Android app regressed.

Your war with Apple seems to have distracted you from the one platform you are on good terms with. As a premium subscriber, it's very frustrating to no longer be able to pause music from the lock screen, or use the headphone controls, or understand why a blocked song keeps being played in Discover Weekly. Get your act together or you'll lose your Android customers to Apple too.

thekyle(10000) 5 days ago [-]

> As a premium subscriber it's very frustrating to not be able to pause music on the lockscreen anymore. Or using the headphone controls.

I just wanted to say that I am also a Spotify Premium subscriber using the Android (Pie) app and I don't have any issues with these two features. I cannot comment on the blocked songs, since I do not use that functionality. Have you checked the forums to see if other users are experiencing the same problems? Maybe there is a specific fix for your device.

mediocrejoker(4009) 5 days ago [-]

It's hard to have much sympathy when you know that this was written by the marketing department of a company with a $30bn market cap.

This looks to me like a blatant attempt to latch on to a perceived popular sentiment that Spotify thinks they can use to have the government give them a big advantage that they didn't have before.

Call me cynical, but it's hard to see this as anything but self-serving.

mcrae(10000) 5 days ago [-]

Sure, but if another company with a $1T market cap is abusing its market position to the detriment of a $30B company, wouldn't you want to hear about it?

After all, if they treat Spotify this way, how do you think they'll treat your app in the future if it happens to compete with them in some way?

alkibiades(10000) 5 days ago [-]

Sure, but it's $30bn vs. Apple's almost $1tn in market cap.

sjg007(10000) 5 days ago [-]

What if Spotify takes your phone number and sends you a text link to sign up to pay?

erikig(4011) 5 days ago [-]

That would probably be fine...until Apple created a rule to discourage and then eventually ban all apps that used that pattern without going through their API.

georgespencer(3871) 5 days ago [-]

Can someone help me stop playing the world's smallest violin here?

Spotify knowingly built a low margin business living in the pocket of the labels (who force Spotify towards razor thin margins) and Apple/Google (who have, since before Spotify launched, operated app stores for their platforms which are to some extent curated and which are not free market economies).

Spotify feels aggrieved that Apple does not allow it to develop software for certain of their hardware lines, such as Homepod or Apple Watch. Why do I have to allow you to develop software for my proprietary hardware, just because it's technically possible?

The crucial line in their argument is this:

> giving up 30% was too much for us to keep our prices low for our fans. Unfortunately, the end result is that you can no longer upgrade to Premium through the app.

How is this Apple's fault and not your fault? Every marketplace takes a cut from the vendor. Your business not being able to sustain the cost of doing business is nobody's fault except your own.

Apple should not be allowed to send push notifications about products or services they prohibit other apps from sending. They shouldn't arbitrarily restrict Spotify from updating the app (virtually no information is provided about what infractions Apple saw, and history suggests that companies are great at presenting one side of the argument and then we find that Apple has a legitimate grievance). They should not be able to charge Spotify more because of their competition.

But to suggest that Apple should be forced to allow Siri integration with Spotify? HomePod? Apple Watch? Ridiculous.

Monopolies where prices rise are bad news. But look at UK football coverage. On top of my free-to-air channels (£150 p.a. TV licence) I need a Sky Sports subscription (minimum £25 per month) and a BT Sport subscription (which necessitates one of BT Broadband - gross - or Sky) which I think is around £5-£10 per year. The top flight football is fragmented across all three providers. Is that better for me?

fauigerzigerk(3065) 5 days ago [-]

>Why do I have to allow you to develop software for my proprietary hardware, just because it's technically possible?

It's not that you have to, but if you do and you become one of only two or three platform oligopolists worldwide, then you better make sure it's a level playing field or you risk getting regulated as a utility.

https://www.cnbc.com/2019/03/11/sen-elizabeth-warren-wants-t...

alanfranz(10000) 5 days ago [-]

> Why do I have to allow you to develop software for my proprietary hardware, just because it's technically possible?

Because Apple+Android = de facto monopoly.

aklemm(10000) 5 days ago [-]

Bottom line: I can't queue and download Spotify content to my Apple Watch for offline use. Considering Apple has a competitor product, there is no reason for me to believe Apple is playing fair.

Secondarily, expecting Spotify to buck the labels AND still deliver the concept of 'all music for one price' is a non-starter, so I don't understand that criticism.

pedroaraujo(10000) 5 days ago [-]

A 30% cut is something that Spotify needs to pay to Apple for every user that subscribes to Spotify through Apple devices. This cost doesn't exist for PC users, for example.

They probably could live with it but I can understand why it feels like an artificial cost that Apple came up with. It would be an understandable cost if they were selling the Spotify App through the App Store and using the actual store infrastructure for supporting Spotify... but they are not.

The app is nothing more than a portal to the entire Spotify infrastructure; it costs Apple practically nothing.

And then we have the subject of the direct competition: Apple Music doesn't need to have its profits cut by 30%, because it is owned by Apple itself.

And it's even worse if they are using Siri, HomePod and Apple Watch to make Apple Music more appealing in comparison to Spotify.

Blackbeard_(10000) 5 days ago [-]

RE: Football

Monopolies can have negative effects without trying to extract monopoly rents.

The competition between BT and Sky massively increased the TV rights price for e.g. the premier league, so the clubs got much more money. In theory, they used this to buy better players etc and increase the quality of the league.

Although you now have to pay twice, the quality of the product has gone up. So it's not a zero-sum game. You can argue that you don't think it's worth it, but that's an opinion, it's not true that it's an inherently worse situation.

Until they entered into a rights sharing agreement.

5trokerac3(3906) 5 days ago [-]

The difference comes in when Apple introduced Apple Music, a direct competitor to Spotify. So now they're not only offering an essentially identical service, but also extracting a toll from their competition through their other holdings.

To put this in 19th-century antitrust terms, the manufacturing company owns the railways and charges the competition a toll to transport their goods. It's a clear-cut case for Spotify, from a legal perspective.

xrmagnum(10000) 5 days ago [-]

More than a decade ago, the EU forced Microsoft to let people choose their browser on a Windows machine with a fresh install. Not only that but the list of choices was randomly sorted so that IE would not be the first listed.

> Why do I have to allow you to develop software for my proprietary hardware, just because it's technically possible?

Of course if you were Apple, you would not want to do it. But that's what antitrust laws are for. As a consumer they are valuable: how long have people been waiting to use Spotify on their Apple Watch?

endorphone(3056) 5 days ago [-]

The complaint regarding the fee is not just that Apple takes such a large fee (which is reasonable for a one-off game where the exposure and back-end processing is beneficial, but is ludicrous for a recurring subscription from a large scale org), it's that you are restricted from offering any other payment options, or even alluding to possible other payment options. That is grossly anti-competitive and does absolutely nothing for consumers.

As a user of an iPhone/Mac/etc, but who loves Spotify, this whole thing just sours me on Apple a bit. Spotify works great with my webOS TV, and just about everything else for that matter. It supports everything. Its networking/streaming model is brilliant. The interface is much better than Apple music (with its bizarre integration with iTunes). That I can't use Siri to play a song is just obnoxious and turns me off of Siri, and whatever middle managers in Apple are pushing this are just doing themselves harm in the longer run.

dalbasal(10000) 5 days ago [-]

> Why do I have to allow you to develop software for my proprietary hardware, just because it's technically possible?

I don't think we have a clear ethic yet for these situations. There's obviously a ton of economic power in platforms. Since the msft-vs-netscape days, it's been controversial.

Ultimately... these are marketplaces, important ones and, considering their size, scope and influence... I think there is a strong case that 'my house, my rules' is not a reasonable way of doing things.

Apple/Google's app & content stores are huge bottlenecks and being locked out of them is on the same scale as being locked out of the financial/banking system for certain companies.

Does 'free market' mean anything useful, if the free market consists of a handful of unfree 'platforms?'

There's a similar question for FB and Twitter. They are such big media channels that being locked out of one could (for example) make it impossible to run for elected office.

Things have different implications at large scale.

m-p-3(10000) 5 days ago [-]

Apple does provide the infrastructure to distribute the app to customers and does deserve some compensation for it.

But on the other hand, Apple is stepping into a market (music streaming) which they control top to bottom, with an advantage no one else has (no IAP transaction fees; they basically pay themselves), which in a way is unfair.

If they want to remain fair to the competition, they should waive IAP costs or reduce them significantly in markets where they are a direct competitor.

jdietrich(4020) 5 days ago [-]

I think you're wrong - Apple's behaviour in this instance is clearly an abuse of market power and I fully expect the European Commission to rule in Spotify's favour.

Apple are directly competing with Spotify in the field of streaming music services via Apple Music. Apple's total control of the app store and their substantial share of the smartphone market means that they have a dominant market position within the meaning of Article 102 TFEU. Apple are using that dominant market position to advantage their own streaming service and disadvantage Spotify, for reasons set out at length in the original article. Apple are required under EU competition law to give Apple Music and Spotify an equal playing field, which they clearly aren't doing. Apple might have a partial defence if they allowed sideloading of apps, but they don't.

The obvious precedent is the European Commission's action against Google in 2018. Google were fined €4.34bn for using Android to unfairly advantage their search business. Android has a dominant market position within the mobile OS market - if you're a small mobile device manufacturer, you don't have many reasonable alternatives to using Android. Google didn't allow manufacturers to pre-install the Play Store app unless they also pre-installed Chrome and the Google Search app, which is an abuse of their dominant market position. They used their dominance of the mobile OS business to unfairly advantage their search business, which is blatantly illegal.

https://en.wikipedia.org/wiki/European_Union_competition_law...

http://europa.eu/rapid/press-release_IP-18-4581_en.htm

laumars(3241) 5 days ago [-]

> Spotify feels aggrieved that Apple does not allow it to develop software for certain of their hardware lines, such as Homepod or Apple Watch. Why do I have to allow you to develop software for my proprietary hardware, just because it's technically possible?

I see your point but you could flip that a 3rd way:

'Consumers pay for their hardware - they own the device - so why should manufacturers tell consumers what they can or cannot install on their hardware?'

I grew up in an era when hardware wasn't so tightly coupled with software. In fact you could go further than that and mod your hardware with custom chips and so on without violating anything more than your warranty. So I find this current era where consumers are expected to pay high prices for hardware and still not have any rights over that platform to be a massive con.

joeblau(3214) 5 days ago [-]

There was a post by Ben Thompson[1] that outlines some of the challenges with Apple's approach in the App Store. It's a good read if you have the time. I'm not trying to change your mind, but rather to give a different perspective on the conundrum that Apple is in.

[1] - https://stratechery.com/2018/antitrust-the-app-store-and-app...

hokumguru(10000) 5 days ago [-]

Something not mentioned anywhere in this thread either is the fact that, after 1 year, any subscription made through the app store goes from a 30% cut to 15% - what I would call much more manageable.

intellix(10000) 5 days ago [-]

As an avid user of Spotify from well before I switched from Android to iPhone, it annoys me to no end how everything over there used to work as if it were first-party.

On iOS, everything else is only partially supported. iTunes was a joke in comparison to Spotify when it was first released, and they're constantly shoving it down our throats.

Never really used Siri until I bought a pair of AirPods. How am I not able to tell it to play music via Spotify?

demuch(4026) 5 days ago [-]

I just don't understand why most people think Apple doesn't have a monopoly market position. Of course Apple has a monopoly on the app service market ([1] the App Store generated 93% more revenue than Google Play in Q3). We are talking about app services rather than the phone units sold.

[1] https://techcrunch.com/2018/10/11/app-store-generated-93-mor...

8f7tjdsk9o8(10000) 5 days ago [-]

It's one thing to compare app sales from one market to another, but the broader issue is that Apple maintains a monopoly on the store itself. There is no Google Play store on the iPhone.

And... listening to SCOTUS oral arguments recently, that sounds like it will be changing soon.

ringaroll(10000) 5 days ago [-]

I really hope Apple dies and burns. They have good PR but that's it. They lie to and deceive developers just like Google. Bait and switch. There should be government regulation of Apple, Facebook and Google. These corps are just too big and control our democracy.

Apple is starving innovation by deliberately not supporting many things in the iOS Safari browser, and illegally prevents competition by restricting 3rd-party browser engines.

Taking a 30% rent on purchases is blatant theft. More people need to speak out. #AppleRentSeeker

Because Apple is unable to increase revenues, it's now trying to do so by using anti-competitive tactics and illegal restriction of competition.

thanatos_dem(4011) 5 days ago [-]

What does iOS Safari not support? It uses WebKit, so more or less full technical/js support, it has ad blockers, tracking prevention, a built in password manager that uses the iOS keychain... not sure what else I'd really want for web browsing.

joshstrange(3536) 5 days ago [-]

Wow... Spotify had an opportunity to make a good case and then threw it all away because they got greedy.

A company not providing an API you want does not equate to them blocking you. FULL STOP.

Spotify had some decent arguments (re: 30%, IAP, payment) but it fell 100% flat when they started giving equal importance to things that were not targeted at them. The lack of music APIs on the HomePod, Watch, and iPhone is not some direct slight against Spotify, and to pretend it is only shows Spotify's inflated view of themselves. Put simply: NOT EVERYTHING IS ABOUT YOU.

Spotify is twisting the truth to its breaking point in this post, with just enough truth sprinkled around that you might not notice the bullshit.

writepub(10000) 5 days ago [-]

The complaint is that Apple is withholding certain APIs from public access even though nothing technical prevents it from opening them up, which raises anti-trust issues. There is certainly truth to that! When Apple is both an app publisher and an API publisher, these anti-trust issues are bound to pop up.

sirmike_(10000) 5 days ago [-]

It's been a super long time since I have looked into developing on the iOS side -- are there restrictions on Safari which would prevent Spotify from going to an all-PWA deployment for iOS and peacing out from the App Store? Which Safari API restrictions would they have to overcome? Thanks in advance.

pier25(3325) 5 days ago [-]

AFAIK Safari has great support for PWAs these days except for not being able to show a native banner that allows the user to install the PWA on the home screen.

owenwil(1214) 5 days ago [-]

The claims in here are pretty wild, particularly around how Apple has favored its own products:

- Apple blocked Spotify from working with Apple Watch

- It blocked Spotify from building apps for HomePod

- It blocked Spotify from building apps for Siri

- It blocks Spotify updates on a regular basis

- It blocked Spotify from using a podcasting API after it acquired 2x major podcasting companies

I genuinely hope Europe takes this seriously. The issue of the 30% cut alone is enough for further investigation, particularly as Apple now uses that as an advantage to undercut Spotify with Apple Music.

gideon_b(4022) 5 days ago [-]

It's pretty clear that Apple is using a dominant platform position to raise prices and block competition.

By raising prices and blocking access to competing services, Apple is acting with malice to consumer welfare.

tumetab1(10000) 5 days ago [-]

> The issue of the 30% cut alone is enough for further investigation,

Not really, it's 30% for everyone, not just Spotify.

cujo(4019) 5 days ago [-]

> Apple blocked Spotify from working with Apple Watch

I'm not sure this is 100% true. From browsing the Spotify support forums many moons ago, some guy had built a Spotify-playing app for the Apple Watch, but Spotify squashed it. Given that some random dev could do this, it doesn't seem like Apple prevented anything.

scarface74(3939) 4 days ago [-]

-Apple blocked Spotify from working with Apple Watch

The newest version of WatchOS does allow it. The first generation of watchOS really didn't allow any apps - just remote views of iOS apps (yeah I'm simplifying it).

- It blocked Spotify from building apps for HomePod

Apple doesn't have any apps on the HomePod. Now we are going to force all single purpose devices to ship with an SDK?

- It blocked Spotify from building apps for Siri

Apple also only just came out with third-party Siri integration a version or two ago. Again, do we want the government to dictate the timeline for when they build features for apps?

- It blocked Spotify from using a podcasting API after it acquired 2x major podcasting companies

Netflix also blocked its API from most third parties years ago. Can we sue Netflix?

rhinoceraptor(3615) 5 days ago [-]

I don't see how blocking a company actively trying to monopolize podcasts from using the Apple recommendation API is monopolistic. Apple podcasts (either intentionally or just from neglect) have always been backed by open RSS podcasting.

ihuman(2787) 5 days ago [-]

> It blocked Spotify from using a podcasting API after it acquired 2x major podcasting companies

Is this talking about the Apple Watch-Podcasting issue? Until recently, podcast apps on the watch had issues saving where you are in an episode because they couldn't constantly run in the background; you could still play audio in the background, but your app couldn't run to track how much of the episode had been played. Fitness tracking apps were allowed to run in the background forever, so some podcast apps told the API they were fitness apps to get around the restriction [0]. Apple later removed this loophole and created the APIs necessary for podcast apps to work on the watch [1].

[0] https://marco.org/2017/08/10/removed-send-to-watch#fn:pLK9h4...

[1] https://marco.org/2018/09/17/overcast5

joshstrange(3536) 5 days ago [-]

- Apple blocked Spotify from working with Apple Watch

How? By not providing APIs to do what they wanted? yawn Next?

- It blocked Spotify from building apps for HomePod

How? By not providing APIs to do what they wanted? yawn Next?

- It blocked Spotify from building apps for Siri

How? By not providing APIs to do what they wanted? yawn Next?

- It blocks Spotify updates on a regular basis

Wakes up Ok here we have the first real issue, BUT even with that said this is SUPER one-sided. We have only Spotify's word and given their liberal stretching of the truth (or outright breaking it in some cases) I'm not willing to give them the benefit of the doubt.

- It blocked Spotify from using a podcasting API after it acquired 2x major podcasting companies

I'm going to need more info on this because it is super vague.

jackson1372(10000) 5 days ago [-]

If you read between the lines, it's clear that Spotify was able to make a normal watch app. But they wanted to make one that had special functionality not yet allowed by Apple, for any app, not just Spotify.

matwood(10000) 5 days ago [-]

> - It blocked Spotify from using a podcasting API after it acquired 2x major podcasting companies

Interesting one since it is Spotify that is trying to close the currently open podcasting universe.

Let's not pretend that Spotify is either the white knight or the underdog here. This is two big companies negotiating over pricing.

zimpenfish(3961) 5 days ago [-]

> - Apple blocked Spotify from working with Apple Watch / HomePod

But they've blocked every streaming music thing, right? It wasn't a vindictive campaign targeted at Spotify as this timeline is suggesting - no-one got to build streaming music apps for the Watch or HomePod IIRC.

jmull(10000) 5 days ago [-]

I don't know about the others, but Spotify is BSing about being blocked on the Apple Watch.

Until WatchOS 5 the APIs to do something like Spotify didn't really exist. There were workarounds, like abusing the workout API, but unsurprisingly these had significant drawbacks, were unstable, and Apple cracked down when they found API abuses.

Apple 'blocked' Spotify only in that it hadn't (yet) released APIs that supported their use cases.

It makes me wonder how disingenuous their other claims are.

I do think services should have more options than Apple allows for accepting payment.

headmelted(2638) 5 days ago [-]

Utterly disagree.

The problem isn't the amount of the Apple tax, and it buries the lead to make it about that. The contention here is that applying rules like this arbitrarily in a way that at least appears to favour your own products over your rivals is an abuse of your position.

If the commission rules in Spotify's favour (which I would think is likely, given the dim view they've taken of such matters previously), then I'd be astonished if Spotify doesn't file lawsuits in the US under the Sherman Act.

In any case, having this fight happen, and in public, can only be good for indie developers if it forces Apple to apply its rules arbitrarily.

volandovengo(3326) 5 days ago [-]

For a long time I've been confused about the rules of monopolies. Microsoft got into a lot of trouble when they bundled IE into Windows, so much so that the US threatened to break up the company.

Fast forward a decade and Apple, Google and Amazon bundle a crazy amount of unrelated services into their platforms without the regulators raising an eyebrow...

headmelted(2638) 4 days ago [-]

* should read: forces Apple to apply its rules fairly. Was a bad edit, whoops!

sandov(10000) 5 days ago [-]

Asking the European commission to impose regulations is morally worse than taking advantage of your market position and consumers' ignorance.

JangoSteve(2847) 5 days ago [-]

For what it's worth, I don't think Spotify is asking the European commission to impose regulations, but rather asking them to enforce the regulations they already have.

ppeetteerr(10000) 5 days ago [-]

I agree that Spotify is taking the right stance. In their position, working on whatever team is responsible for fighting Apple, I would also do anything in my power to fight.

Having said this, Apple can do as they please. They control the hardware, the OS, the App Store, and the user accounts. The same was true of Twitter who effectively squeezed access to their API until one or two desktop clients remained.

The only two ways out of this are to legislate a lower rate (through campaigns such as these), or to create a competing platform that lowers costs for its users. Imagine if Spotify offered a lower rate for Android users... Wouldn't that send a very clear message to Apple?

blackflame7000(4005) 5 days ago [-]

'Imagine if Spotify offered a lower rate for Android users... Wouldn't that send a very clear message to Apple?' - That is exactly what Spotify should do. They need to use the fact that Apple is married to its platform against them. Charge Apple users 12.99 and Android users 9.99 and then beat them by offering superior content. If Apple users want to sign up online then they can get the 9.99 price.

pjc50(1486) 5 days ago [-]

> Imagine if Spotify offered a lower rate for Android users... Wouldn't that send a very clear message to Apple?

Apple shrugs and bans Spotify. Next move?

ReptileMan(10000) 5 days ago [-]

Legislating a consumer right to root and sideload is an even better solution.

Blaiz0r(10000) 5 days ago [-]

> Imagine if Spotify offered a lower rate for Android users...

They already did; the 12.99 premium price was only for purchases through the App Store.

gimmeThaBeet(10000) 5 days ago [-]

Yeah, this does seem like a thorny issue. It feels like a shopping mall; you have this big space with a lot of people, a lot of things going on. If you didn't know what it was you could easily mistake it for a public space, but it most certainly is not.

But then you have the complication in the analogy that afaik, the landlord doesn't usually operate stores?

Similarly, at a local level, it looks pretty bad: Apple has advantages in its environment that allow it to operate in sort of unassailable ways. But indeed, if you zoom out, the big driver Apple has is millions and millions of iPhone users.

Apple has users that want its products, Spotify wants market access to those users, and the link imo is 'do the users want Spotify more than Apple?'. The answer I would think is no, but that brings up the reasonable issue that there's a larger barrier in switching devices than there is in switching apps. Sorry for the ramble, the whole thing seems like a mess.

thomascgalvin(10000) 5 days ago [-]

> Imagine if Spotify offered a lower rate for Android users... Wouldn't that send a very clear message to Apple?

I believe this is also against Apple's Terms of Service. IIRC, they have a 'most favored nation' clause which prohibits you from offering a lower price on a competing platform.

aclimatt(3901) 5 days ago [-]

Well, that's not entirely true, Apple can't exactly do what it wants. As other comments point out regarding Microsoft, Microsoft were forced to allow IE to be debundled and other competing browsers installed, because having a monopoly on a platform and using that platform to enforce anti-competitive practices is illegal under anti-trust law.

So given Apple's marketshare (not a monopoly per se though pretty substantial), and given they both control the platform that people pay money to access, and promote preferential treatment of a first party service at the expense of any third party services, it sounds pretty ripe for an anti-trust lawsuit.

The same I believe has recently been applied to Google in the EU for using its monopoly to promote its own product search results above other online stores.

The only difference now is, the teeth of anti-trust regulators are a lot more dull than they were in the 90s, for various reasons.

freeopinion(10000) 5 days ago [-]

I think you have made a great point. Until Spotify gives a 30% price break to Android, this just looks like a contest of who gets to keep the gouge.

JustSomeNobody(3792) 5 days ago [-]

> Having said this, Apple can do as they please. They control the hardware, the OS, the App Store, and the user accounts.

I disagree. Apple created the App Store and invited 3rd parties to host their apps there, so they should have to play fair for whatever the legal system deems is a good definition of fair.

have_faith(3979) 5 days ago [-]

> Apple can do as they please

I hope they continue to do so, in so much as I would like Apple to experience some backlash for having inconsistent stances when it comes to the app ecosystem and their rules.

pier25(3325) 5 days ago [-]

As a user, I much prefer the Android model. For example I can buy and browse Kindle or Audible books directly in the app. I don't use Spotify but I imagine it's a similar experience.

Apple policies do not really benefit anyone. Kindle iOS users will simply open Safari to buy their books, making their experience worse.

Either Apple should really remain objective and not have horses in the App Store race, or follow the same rules on its own apps that they impose on others, or change the rules to benefit everyone (including the users).

Mindwipe(10000) 5 days ago [-]

In all seriousness, when people say 'iPads are better tablets because of software compared to Android tablets' I really do raise an eyebrow.

If your use case for a tablet is to read books or comics (which I suspect it is for many people! A lot!), then the iPad is terrible because of this specific limitation, and Android tablets are much easier to use. Shame Google seems hellbent on destroying them - the Pixel C really was a very, very nice comic reading tablet when it wasn't suffering hardware failures.

talkingtab(4032) 5 days ago [-]

I don't think this is about Spotify versus Apple - it's about Apple versus its customers. I want to be able to choose what music service I want, and I want the price to be competitive. And not just music.

You just have to wonder what's up with Apple.

mediocrejoker(4009) 5 days ago [-]

Why not switch to Android?

coldacid(10000) 5 days ago [-]

It's about Apple and anyone they consider a (potential or active) competitor. As far as customers come into it, Apple wants to be able to milk them dry and keep them from using anyone who might provide a similar or better service than their own, as well as take over any profitable channels they don't already have support for.

It's 1990s Microsoft on a mobile phone.

paulgb(1826) 5 days ago [-]

This is why I think the 'if you're not the customer you're the product' mantra is overly simplistic. In this case, you can still pay $1000 for a phone, and it only makes you a more valuable product for Apple to sell to its (developer) customers.

nevir(3933) 5 days ago [-]

Apple did the same thing to Kindle for iPhone back when it launched.

We submitted the original version to Apple with a fully functioning store built into it—and were then stuck in submission limbo. Two weeks later, Apple announces their intent to build in-app purchasing.

The kicker: Apple wanted a 30% cut of every book sold on the store ...and at the same time, had negotiated with book publishers that the publishers MUST sell all books at a 30% margin on ALL stores if they want to sell their books via Apple's own ebook store.

Aka we couldn't sell books at an increased cost, even if we wanted to. We would have had to take a loss on every purchase.

In the end, we had to remove all of the store functionality from the app, and weren't even allowed to link people directly to the web store for purchasing (or even instructions for purchasing).

Splendor(3369) 5 days ago [-]

I'll feel sorry for Kindle when their devices support EPUB files.

streblo(3782) 5 days ago [-]

Which is especially unfortunate, since most Kindle users want this feature and (probably) frequently request it, not knowing that Amazon's hand is forced.

baby(2163) 4 days ago [-]

As a user this is infuriating. I remember spending a good chunk of time trying to figure out how to get books onto the Kindle app :/

RandallBrown(3556) 5 days ago [-]

The Kindle iPhone app still doesn't even link you to the website so you can buy books, does it? It looks like I can only add a book to a 'list' that I assume I can view in the browser and then purchase.

NyxWulf(3971) 5 days ago [-]

I've used the Kindle app ever since it launched, and have followed this fiasco. The trouble for someone like me is I would rather purchase my content from Amazon since I can read it on any device, not just an apple device.

p.s. Do you know how I would go about recommending someone add the ability to have green or amber text on the black background?

elagost(10000) 5 days ago [-]

My dad has had iPads since the first one, and it is still jarring for him to not be able to just click a 'store' button in the Kindle app. (I believe it used to have a webview for the store built-in; I do not use the Kindle app)

He was very confused when it disappeared. I imagine most users feel this way. I had to coach my mom to sign up for Pandora on the website so she saved a couple bucks a month, and it was baffling to her why it cost more in the app.

baby(2163) 5 days ago [-]

I wasted so much time trying to figure out how to get books in the iOS Kindle app. When I finally figured out that I had to use my laptop, I was still confused as to why the iOS app did not say anything about it.

jarjoura(3933) 5 days ago [-]

It wasn't just Kindle. There were dozens of really high quality book reading apps that sold old or hard-to-find books, comics, or even manga. All of them evaporated overnight once Apple forced the 30% cut. I'm still very bitter about that move.

Apple even went as far as making some Steve Jobsian rule that you had to sell your book for the same price on the web as you did in-app. So you couldn't make up for the 30% cut. That was a pure bad-faith move and Apple only recently rescinded that requirement.

username3(2824) 5 days ago [-]

Apple doesn't always get 30%. You can buy iTunes gift cards 20% off. Amazon can take 90% by selling iTunes gift cards used to buy through App Store.

CraneWorm(1825) 5 days ago [-]

> Kindle for iPhone

You put a walled garden... in another walled garden.

On a more serious note: I get it; Apple users are among the most happily paying people you can find on this planet, and if you have a product you want to make it available to them. No wonder they think it's their right to take a massive 30% cut.

nevir(3933) 5 days ago [-]

Also: there were multiple ebook readers on the store at the time that were already selling books in-app. Several stopped development, others removed their in-app store.

I suspect we (Kindle) were either the trigger for the in-app purchasing store, or really accelerated plans around it.

gnicholas(1632) 5 days ago [-]

Even crazier, Apple in some cases forces apps that use webviews to block out links to the Kindle Store that would normally appear. My iOS app [1] has speed reading and accessibility features that we overlay on websites, including the Kindle Cloud Reader.

Apple wouldn't approve our app until we blocked the Kindle Store button from loading on the Kindle Cloud Reader website. It never occurred to me that Apple would try to exercise control over how third-party websites are rendered inside webviews, but it turns out they do.

1: https://itunes.apple.com/us/app/beeline-reader/id938026867?m...

scarface74(3939) 4 days ago [-]

Amazon played the victim here but book publishers preferred Apple's model to Amazon selling their books at a loss to keep competitors out and to boost sales of the Kindle.

su8898(10000) 5 days ago [-]

Slightly off-topic: I find it intriguing that Spotify keeps calling their users 'fans' instead of users or customers!

Dirlewanger(10000) 5 days ago [-]

Slimy and disingenuous corporate PR speak is everywhere. They probably have something equally repulsive for their employees.

bpyne(10000) 5 days ago [-]

I've been with Apple Music and Pandora for years. A few months ago my teammates at work convinced me to try Spotify: I love Spotify! Both the content and the UX work for me.

While this post represents Spotify's view without Apple's to contrast it with, it seems eerily similar to the 'Browser War' days leading up to Microsoft's admonishment. Apple seems to be the equivalent of a drunkard walking a fine line.

colmvp(2957) 5 days ago [-]

I would totally buy this fear if iOS was anywhere close to the marketshare that Windows was back in the 90s.

nerdwaller(10000) 5 days ago [-]

I've been waiting to see Spotify follow Netflix and create their own label, instead of just curated playlists. They'll otherwise always be at the whim of renegotiations. (Not entirely relevant to the post, but seems like it's an avenue for them to go to break a little from other players).

hesk(10000) 5 days ago [-]

I hope this never, ever happens. I love that I can basically listen to any music I like on Spotify. (I know there are exclusivity deals in music streaming but so far it hasn't affected me.) Compare that to the fragmented video streaming marketplace where services try to distinguish themselves with exclusive and/or original content. If you want to watch N shows, you have to subscribe to N different services. And none of the services have a comprehensive movie library (at least where I live).

yaseer(3725) 6 days ago [-]

Seeing this makes me want to get even further away from Apple's ecosystem.

4 years ago, I was all in: Mac, iPad, iPhone. In the last 4 years, I've been driven away by Apple's seeming contempt for professional users.

Their contempt for competition is an even graver cause for concern.

bgeeek(10000) 5 days ago [-]

Like others, this is why I could never buy a HomePod and exactly why I like Sonos. I prefer choice.

kalleboo(3769) 5 days ago [-]

Sometimes I think so too. And then I look at the alternative ecosystems. Google? Amazon? Microsoft? Not even tempted...





Historical Discussions: Thinkpad X210 (March 17, 2019: 926 points)

(965) Thinkpad X210

965 points 1 day ago by MYEUHD in 4031st position

geoff.greer.fm | Estimated reading time – 5 minutes | comments | anchor

A couple years ago, I used an old Thinkpad while my MacBook was being repaired. I enjoyed the experience so much that I ended up getting a Thinkpad X62 (an X61 chassis with modern internals). Last September, the maker of the X62 announced that a 3rd batch of X210s would be made. I ordered one and received it in January. China has a bunch of laws that make it hard to move money across borders, so payment involved wiring $1200 to someone's personal bank account in China, then e-mailing a qq.com address. It was a rather harrowing experience.

Like the X62, the X210 is made by 51NB, a group of enthusiasts in Shenzhen. The X210 is an X201 chassis with:

  • A Core i7 8550u (4 cores, turbo boost up to 4GHz)
  • 2× DDR4 SODIMM slots. I put 32 GB of RAM in.
  • 2× mini PCI Express slots. There's an 802.11/Bluetooth card in one. The other is empty but could be used for LTE or a second wireless card.
  • An M.2 NVMe slot. I put a 2TB SSD in it.
  • A 3.5" SATA bay. I left it empty, but it's possible to put a second SSD in.
  • An upgraded screen (12.6 inch, 2880×1920, 450 nits, wide gamut). The bezel is cut to make room for the 3:2 aspect ratio. There is no webcam.
  • Mini DisplayPort & VGA out.
  • 3× USB 3.1 ports (no USB-C).
  • SD card reader.
  • Gigabit ethernet.
  • Physical switch to toggle Wifi/Bluetooth.
  • Headphone & microphone jacks.
  • Internal microphone & speakers.

The X210 is sold as either a motherboard that you install into your own chassis or as a barebones laptop where you bring your own RAM, SSD, and battery. I got the barebones kit.

My Impressions

I slightly prefer the X62's more compact keyboard, but everything else is much better on the X210. The CPU is over twice as fast and runs cooler. The wifi is faster. The SSD is faster. The screen is gorgeous. It's a 12.6" screen with a higher resolution than the 15" MacBook Pro. Overall, it's just plain better.

Linux worked out of the box. I had to install non-free drivers for the Broadcom wireless card, then tweak a few module options to get better power saving. Battery life is a little over 4 hours with the flush battery (55Wh) and 6-7 hours with the extended battery (80Wh). I haven't finished tweaking all the power saving options, so the lowest idle state is PC3. Battery life would increase by 50% if I got PC6 or PC8 idle states. The fan only turns on if I'm doing something intensive like compiling go or scrolling in Slack.

Update (2019-03-17): I managed to get PC7 idle by upgrading my kernel to 4.18 and replacing the r8168 module with r8169. Battery life has increased significantly. I now get 6 hours with the flush battery and 10 hours with the extended battery.
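A minimal sketch of that module swap, assuming a Debian-style setup (exact commands may differ per distro):

  # Blacklist the out-of-tree r8168 driver so the in-kernel r8169 binds instead
  echo "blacklist r8168" | sudo tee /etc/modprobe.d/blacklist-r8168.conf
  sudo update-initramfs -u    # rebuild the initramfs so the blacklist applies at boot
  sudo reboot

  # After rebooting, confirm the swap and watch how deep the package idles
  lsmod | grep r816           # should list r8169, not r8168
  sudo powertop               # the "Idle stats" tab shows the package C-states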

Like most older Thinkpads, the X210 is easily repaired and upgraded. You can swap the battery in seconds without any tools. If the SSD fails, you can replace it. If the RAM fails, you can replace it. If the wifi card fails, you can replace it. If the screen fails, you can replace it. You can even replace the Trackpoint and the little rubber feet without much trouble. The laptop can be entirely disassembled with two Phillips screwdrivers (#0 and #1). At no point do you encounter tape, glue, or pentalobe screws.

Caveats

The X210 isn't perfect. It's made by a group of enthusiasts, not a big company. With that comes some disadvantages:

  • Like the X62, the mini DisplayPort is not dual-mode (DP++), so passive miniDP→HDMI dongles won't work.
  • I sometimes notice PWM flicker on the screen. This only happens at the lowest brightness in a dark room when displaying mostly black content. I can see afterimages when my eyes saccade, similar to some brake lights at night. It's about as noticeable as the PWM flicker on my iPhone X.
  • If the motherboard breaks, you can't walk into a store and get it replaced or repaired. Your only recourse is to e-mail the manufacturer and ship it back to China. I've only read about one case of this happening, and in that case the motherboard was DOA. The unfortunate user was shipped a new one within a few weeks.

Conclusion

I love this laptop. It addresses almost all of the issues I had with the X62: better screen, better performance, and better microphone quality. More than anything, the X210 demonstrates just how much potential is being squandered by laptop manufacturers. If a small group in Shenzhen can make this laptop, Lenovo or Apple should be able to build something far better. Instead they make laptops with integrated batteries, fewer ports, soldered RAM, sub-par keyboards, and touchbars. Many professionals want something better.

I hope 51NB continues to build new internals for old chassis, because I doubt the major laptop manufacturers will get their heads out of their asses any time soon.





All Comments: [-] | anchor

shasheene(4022) 1 day ago [-]

This is a good time to bring up the fact there was never an industry-wide standardization effort for laptops. A standard form-factor means components would be re-usable between upgrades: the laptop case, power supply, monitor, keyboard, touchpad could all be re-used without any additional effort. This improves repairability, is much better for the environment, and means higher-end components can be selected with the knowledge that the cost can be spread out over a longer period.

For desktop PCs, the ATX standard means that the entirety of a high-end gaming PC upgrade often consists of just a new motherboard, CPU, RAM and GPU.

A 2007 Lenovo ThinkPad X61 chassis is not that different to a 1997 IBM ThinkPad chassis (or a 1997 Dell Latitude XPi chassis). If the laptop industry standardized, manufacturers would produce a vast ecosystem of compatible components.

Instead we got decades of incompatible laptops using several different power supply voltages (and therefore ten slightly-differently shaped barrel power plugs), many incompatibly shaped removable lithium-ion batteries, and more expense and difficulty in sourcing parts if and when components break.

A little bit of forward thinking in the late 1990s would have saved a lot of eWaste.

twblalock(10000) 1 day ago [-]

Standardization limits innovation. If we had standardized on laptop form factors in the late 1990s all laptops would still be an inch and a half thick, and all screens would still be 4:3.

benbristow(3311) 1 day ago [-]

How are the laptop companies meant to force you to buy a new one every so often if you can just keep upgrading them?

Pamar(2246) 1 day ago [-]

It did not happen for phones either. Why should this be different?

Maybe laptops are now mature enough as a product that what you suggest could be feasible, but at this point it is too late for business reasons.

peterwwillis(2415) 1 day ago [-]

There was a trend for a while of making business/power laptops much more configurable (I have an old Dell with a hard drive cage that swaps out without removing any screws). But most laptops are more about form rather than function; their design requires reworking all the internals to prevent getting a big clunky heavy box that overheats.

For very low-power machines you might have tons of internal space free, but more powerful laptops need complex heat management in addition to reducing size and weight. It's only now that we have very advanced fabrication techniques and energy-saving designs that we no longer have to hyper-focus on heat exchange.

If size and heat and weight weren't a factor, you can bet that a standard would have arisen to manage interchanging parts. But soldered RAM is a good example of why that's just not necessary, and can be counter-productive for reducing cost and maximizing form factor.

saagarjha(10000) 1 day ago [-]

Nobody is going to do this, because good components are a competitive advantage. I can't see any good manufacturer wanting their good {trackpad, keyboard, case} either being put in a computer that undercuts them or being forced to dumb down their computer to fit the "lowest common denominator".

hopler(10000) 1 day ago [-]

The LG Gram teardown on iFixit was amazing. It's 'moderately difficult' to remove almost everything including the trackpad and parts I forgot existed.

https://www.ifixit.com/Guide/LG+Gram+15-Inch+Repairability+A...

ekianjo(304) 1 day ago [-]

> A standard form-factor means components would be re-usable between upgrades

We don't even have to go that far. Just ensuring that laptops can be serviced by their own users would go a long way to reduce e-waste. i.e. not soldering RAM chips to the motherboard, making it feasible to remove every single part (not gluing the keyboard to the MB for example), etc... instead of pursuing an ever thinner laptop design, which has practically no use.

reaperducer(3842) 1 day ago [-]

> For desktop PCs, the ATX standard means that the entirety of a high-end gaming PC upgrade often consists of just a new motherboard, CPU, RAM and GPU.

And that's great, if you're into generic beige boxes.

It's been years since I put together my own IBM compatible computers. But in the time since then, I haven't really seen any innovation in desktops.

Yes, for a while the processor numbers ticked up, but then plateaued. Graphics cards push the limits, but that has zero to do with the ATX standard, and more to do with using GPUs for non-graphics computation.

The laptop and mobile sectors seem to be what is driving SSD adoption, high DPI displays, power-conscious design, advanced cooling, smaller components, improved imaging input, reliable fingerprint reading, face recognition for security, smaller interchangeable ports, the move from spinning media to solid state or streaming, and probably other things that I can't remember off the top of my head.

Even if you think Apple's touchbar was a disaster, it's the kind of risk that wouldn't be taken in the Wintel desktop industry.

All we've gotten from the desktop side in the last 20 years is more elaborate giant plastic enclosures, LED lights inside the computer, and...? I'm not sure. Even liquid cooling was in laptops in the early part of this century.

Again, I haven't built a desktop in a long time, so if I'm off base I'd like to hear a list of desktop innovations enabled by the ATX standard. But my observation is that ATX is a pickup truck, and laptops are a Tesla.

alfonsodev(3754) 1 day ago [-]

At least for phones there was Phonebloks[1], which became part of Google's Project Ara.

Maybe it could evolve into a laptop experience if blocks get powerful enough and somebody develops compatible chassis.

Update: Project Ara was cancelled in 2016 [2].

[1] https://phonebloks.com

[2] https://www.theverge.com/2016/9/2/12775922/google-project-ar...

ako(4006) 1 day ago [-]

Because size and weight are important distinguishing features for laptops. Customers pay more for smaller, lighter laptops. Using standardized components and chassis would mean a big competitive disadvantage.

alkonaut(10000) 1 day ago [-]

Some parts such as batteries, storage, RAM etc. should at least be standardized.

Manufacturers probably don't want to standardize on the remaining motherboard/graphics/chassis/cooling because a laptop isn't like an atx computer where you get modularity at the expense of wasted space. A laptop is basically a 3D puzzle with thermal components. Few consumers would buy a laptop with even a little wasted volume or weight, even if it meant better serviceability and upgradeability. Same with phones. We aren't going to see modular phones beyond the concept stage either.

caycep(3962) 1 day ago [-]

My experience has been limited by the fact that components advance at the same rate, and to get everything to play nice(r) with each other, you have to upgrade everything. 'A new motherboard, CPU, RAM and GPU' is almost buying an entirely new computer. You save a few hundred bucks by keeping the PSU (or maybe change it too after 5 years) and casing, assuming the ports didn't change.

ssnistfajen(10000) 1 day ago [-]

Novel form factors are often how laptop manufacturers distinguish themselves from their competitors. There is enough space within a desktop PC case to formalize a standard. As laptops get thinner and thinner, however, many engineering/layout tweaks are used to fit stuff within a thin chassis. Standardizing across different device models would be asking OEMs to stop putting effort into competing with each other. And I say this as someone who has just had a catastrophic motherboard failure on their 8-month-old laptop and had to do a replacement that would've cost me a new laptop if outside warranty.

O_H_E(3404) 1 day ago [-]

Maybe we could try and write an open letter to companies and promise support even for less value at first. Chances are slim, but at least we would have done our part.

lapinot(10000) 1 day ago [-]

This would allow smaller players to step in and start grinding away some market share from the big players. It would also turn the laptop market from a high-margin market into a low-margin one. Standardization is just not in the interest of any big player, so it's probably never gonna happen. If you are a small player and want to go in that direction, you're probably gonna be bought out.

The only way I see would be to somehow get pervasive open standards and libre schematics implementing them, then cut out the big players and get several manufacturers to produce them. But that too is hardly gonna happen, because of geopolitical problems: most of these manufacturers are domiciled in China, and this move would cut too much income from western (and Korean, Japanese) countries. So for that to happen we would have to relocate some manufacturing industry and somehow not put it in the hands of any of our local big players.

The problem here is not some problematic business decisions by companies, it's how we organized our economy. It would take radical changes in economic/industrial policy to make that happen: much stronger anti-trust laws, which would keep companies smaller and force cooperation; public- instead of private-regulated prices, so that you don't lose to foreign companies' exports when you start doing that; etc. This would drive cooperation up in all of the economy, take power away from multinationals, reduce waste, and hinder 'useless innovation'. It's a long road ahead, but I think that's what we need and that's what's gonna happen at some point anyway: the capitalist class may still be floating currently, but at some point the systemic crisis (financial instability, natural disasters, political instability, scarcity of energy and other basic resources) is gonna hit them too. What we have to make sure is that they don't get more out-of-sync with that than they currently are.

GordonS(1058) 1 day ago [-]

Is anyone happy coding on such a small screen, even if it is high-res?

I have a 2016 13" MBP, and find the screen too small for coding. It's high-res, but that means everything is super-small unless you increase the scaling, which of course reduces the available screen real estate. The screen is annoyingly reflective too, but that's another problem entirely.

My daily driver is a 15" HP Zbook G3 with a 1080p display, which I also find too small. I'm thinking a 15", high-res display would probably be ideal for portable coding?

chronogram(10000) 1 day ago [-]

I use an external screen on my 12" laptop. Obviously not useful for coding on the train, but for those short periods the 12" is fine.

firmgently(3878) 1 day ago [-]

I moved to a 10.1" display as a winter (low power) necessity. There's more sun now so I have enough power for my larger laptop but am not using it; I prefer the ergonomics of my current setup.

My tablet is on an arm suspended at eye height about 15-20cm away from my face. At the same time I can have my ThinkPad Bluetooth keyboard+trackpoint in an ergonomically sound position (not possible to do both of these things solely with a laptop, due to the keyboard and display being tied together).

I had a 15.6" laptop display and a ruler within reach so I just measured... at 35-40cm away from my face the visible area of the 15.6" screen is occluded by the visible area of the tablet screen at 15-20cm away. The aspect ratios are different (my preferred 16:10/1920x1200 on the tablet vs 16:9/1920x1080 on the laptop LCD) but this is roughly correct. Admittedly 35-40cm is probably a little further away than most people have their laptop screen but it's in the ballpark.

I've had setups with multiple/larger monitors in the past. It's hard to compare properly as so much has changed for me. I move towards spending more and more of my time in the terminal and have learned to make good use of tmux for workspace management (and i3 workspaces when I'm using a WM). I don't miss the multiple/larger monitors (but am not suggesting anybody should be the same as me).

I can say that this is my favourite of my personally-owned setups ever, for its lightness, silence, low power usage and minimal space requirements. These requirements of mine are very specific of course, but you asked for a subjective measure. I am very happy (and on a 10.1" screen)!

Next time I upgrade I'll be looking for a nice rootable tablet... possibly something x86 which can run linux so I can get VMs to work. I think I'm done with laptops.

[ To repeat stuff I've mentioned here before but which might help make sense of the above:

+ the 'display' is an Android tablet running termux (as it's the fastest and nicest terminal I could find)

+ I just use termux for its terminal and work in Debian Stretch via Linux Deploy

+ Termux is very good on its own but in my experience the best armhf packages are on Debian. I'm comparing to termux and Arch which are all I have experience with - they are both great but I've found some packages to be either missing or had problems due to termux's clang vs gcc... or that Arch uses Armv7 binaries whereas my tablet seems happier with Debian's Armhf in some cases. I specifically had trouble getting a working binary for Chromium which is essential for me as I need the developer tools but achieved it on Debian.

+ I run Debian GUI apps via local XSDL server and/or VNC

+ So far the only thing I've been unable to achieve is VMWare emulation of X86 OSes but as I don't have an X86 CPU in here I can't be surprised about that ]

josteink(3411) 1 day ago [-]

My Carbon X1 has a smaller-than-14" screen, but due to the small bezel effectively has the same screen size as a bigger, actual 14" laptop.

And I use it for coding.

saagarjha(10000) about 22 hours ago [-]

I do 99+% of my programming on my MacBook Pro's 13" screen. It's certainly possible to do, and I generally seem to prefer it to connecting to an external display.

mikkelam(10000) 1 day ago [-]

I once tried a 15" and severely hated carrying such a huge device. I mostly dock my laptop so I don't mind the screen. I love the 13" size.

efficax(10000) 1 day ago [-]

yes

mcv(4025) about 10 hours ago [-]

'Like the X62, the X210 is made by 51NB, a group of enthusiasts in Shenzhen.'

Is that normal? That a group of enthusiasts designs Thinkpad models?

0815test(10000) about 10 hours ago [-]

It's not officially a Thinkpad model. It's a 'Frankenpad', made by combining parts from many Thinkpad models in a creative way. This is what the 'group of enthusiasts' in Shenzhen does.

anderspitman(3424) about 9 hours ago [-]

My 2011 X220 refuses to die. It's been tossed in bags, scuffed, cracked, and burned with a candle. The keyboard is perfect. Touchpad is fine. Wifi is excellent. Upgradeability unmatched. I'm running Linux with a tiling window manager, and performance is fine for pretty much everything I want to do, including compilation. It runs super hot (80C+ sometimes), but is always way quieter than my XPS15 from work. Only complaint is the low resolution of the screen (1366x768). 1080p would be perfect for the 12.5in screen.

dwhitney(10000) about 8 hours ago [-]

'burned with a candle' - nobody upvoted you for any reason other than wanting to know how this happened. Do tell, OP!

tbrock(1677) 1 day ago [-]

These are awesome. For a long time I've thought a 13.3" X-series thinkpad like the 2X0 series (a proper successor to the x300/x301) would be the best thing ever. You take the x2X0 chassis, renowned for portability, battery life, and repairability, and shove a 13.3 inch screen in. Just like how the 13 inch class chassis on the x1 has a 14 inch screen.

Well it's happened: the x390 now exists. Maybe even better would be a thick X1. Not larger or wider but thicker, so the battery life was insane and you could upgrade the ram to 32/64gb.

jpalomaki(2819) 1 day ago [-]

X390 looks very interesting indeed. Did not at first realize that they have actually put the larger screen to the small chassis.

Lack of swappable battery is annoying. Maybe I belong to the minority, but battery life is still one of my main problems with laptops. Just so annoying to have to constantly look for an outlet in meeting rooms or conference places. I guess external USB-C batteries are the way to go.

arebours(10000) 1 day ago [-]

> Maybe ever better would be a thick X1.

T480?

driverdan(1345) 1 day ago [-]

The x390 screen is only 1080p.

1986(10000) 1 day ago [-]

I've seen a few people on /r/thinkpad trying to do the former, actually - modifying the bezels on an x2X0 to accommodate a 13 inch screen. One example: https://www.reddit.com/r/thinkpad/comments/akm76s/another_x2...

bitL(10000) about 22 hours ago [-]

I can't comprehend why not a single company could do a notebook like this! Classical keyboard, hi-dpi 3:2 screen, centered trackpad (well, in this case not 100%), latest CPU/memory/SSD. Is it that difficult?

iNate2000(4016) about 21 hours ago [-]

Isn't that Surface Book?

untog(2290) about 22 hours ago [-]

To sell enough of them at a price people want to pay? Sadly, yes.

TheCowboy(4028) 1 day ago [-]

It's a shame the Thinkpad Retro / 25 Anniversary Edition was released as a one time thing. I think it likely hurt the excitement behind buying one as you could end up with a machine that would be hard to find spare parts for in the future.

I never got a chance to buy one because they sold out and didn't release more.

systemBuilder(10000) about 24 hours ago [-]

My friend bought TWO and it was totally meh. Yes, they did put a nice retro keyboard on that thing. It had the same horrible trackpad as other thinkpads. No thinklight. Basically it was just a t440 with a nicer keyboard. The CPU (i5-7200) was crap. NO GRAPHICS ACCELERATION AT ALL.

tormeh(3066) 1 day ago [-]

>Linux worked out of the box. I had to install non-free drivers for the Broadcom wireless card, then tweak a few module options to get better power saving.

This is not, for the record, 'working out of the box'.

sametmax(3736) 1 day ago [-]

Yeah but after that, he never had any problem with it, except sleep mode cutting wifi sometimes. Or bluetooth not pairing. Or the battery lasting half as expected.

Nothing that a few commands can't fix. Zero maintenance.

viach(3379) 1 day ago [-]

> then tweak a few module options to get better power saving

Wait, what are these options? Is there an answer to the mystery of 'how to get power management working on a Linux laptop'?

jokoon(4023) 1 day ago [-]

The Lenovo website lets you customize the hardware, and I chose a non-Broadcom card, so an Intel chip I think.

xaduha(3875) 1 day ago [-]

> This is not, for the record, 'working out of the box'.

I have no doubt that some other distros would work just fine without any changes, you're nitpicking.

0815test(10000) 1 day ago [-]

Meh, if the non-free blobs are included as part of the distro (as they are with the Debian 'non-free' channel, similar to Ubuntu) it's as good as we're ever going to get. Even Intel wifi cards rely on a non-free firmware blob, and will only work with 'non-free' Debian.

GuB-42(10000) 1 day ago [-]

What about the GPU?

I wanted a ThinkPad, but while they were overall good laptops, none of them offered a decent GPU except for the very expensive and massive 'workstation' models with Quadros on the level of a mid-range GTX.

I get that these aren't gaming laptops, but why not give us the option of doing at least some gaming? And GPUs aren't just for gaming; we did some projects at work that involved 3D rendering, plus there is all that GPGPU stuff.

In the end I got a light gaming laptop, and at work, they also got gaming laptops for these projects that involve 3D. With RGB and all that, something I find kind of fun in a 'serious business' environment.

Improvotter(10000) 1 day ago [-]

It'd be neat if this had thunderbolt (or wait for USB4) so you could use an external GPU. I've heard that it works for some people.

nl(1096) 1 day ago [-]

You can get the Surface Book 2 with a GTX 1060, and Razer has some nice looking, non-huge laptops with RTX-series cards.

I'm mostly interested in running ML on them, and Linux support is somewhat spotty though.

alipang(3426) 1 day ago [-]

Depending on what you want, a gaming box like https://www.gigabyte.com/Graphics-Card/GV-N1070IXEB-8GD#kf might do the trick. There's a lot of great laptops like the XPS13 that won't do gaming, but can do so with an external box.

disconcision(10000) 1 day ago [-]

The P1/X1e is 3.7 lbs and has a P2000/1050ti

saagarjha(10000) 1 day ago [-]

> Battery life is a little over 4 hours with the flush battery (55Wh) and 6-7 hours with the extended battery (80Wh).

Not going to lie, that's pretty horrible.

> Battery life would increase by 50% if I got PC6 or PC8 idle states. The fan only turns on if I'm doing something intensive like compiling go or scrolling in Slack.

lol. One of the things that really drives me nuts is my computer's fan turning on when I know it really shouldn't be. I have lived and worked with people for whom having their fan randomly turn on for no reason is completely normal, and I just can't understand how they can bear it. If this happens to me, you can bet I'm digging through Activity Monitor and killing the culprit before the fans can get fully ramped up.

mjg59(3845) 1 day ago [-]

>Not going to lie, that's pretty horrible.

The stock firmware is, uh, not good. I ported Coreboot to the second batch boards (https://github.com/mjg59/coreboot/tree/X210_good ) and things improved significantly - I wrote it up at https://mjg59.dreamwidth.org/50924.html

archon810(1154) 1 day ago [-]

Whatever you buy, don't buy the ThinkPad X1 Extreme. Its fans are on all. the. time, and the CPU goes into thermal throttling even when idle.

https://www.google.com/search?q=x1+extreme+fans

ggreer(1951) 1 day ago [-]

I don't lie about battery life numbers. People like to say 'I get 9 hours of battery life' when they mean that they get 9 hours if they do nothing but let their computer idle with the brightness at minimum. 4 hours with a 55Wh battery is an average consumption of about 13 watts. That's because my typical workflow involves running a VM containing cassandra & postgres (among other services) and recompiling go and javascript. My coworkers with 15" MacBook Pros tend to worry more about battery life than I do.

My fan comment was a joke about Slack's efficiency. Of course compiling a bunch of go code will make the fan turn on. That will use up 100% of your cores on any decent sized project.

bibyte(3893) 1 day ago [-]

I used Arch Linux on a Dell laptop and my fan is pretty predictable. I have never used Windows so I am curious: does the fan really turn on when you don't expect it?

cyberpunk(10000) 1 day ago [-]

Slightly off topic, but on fans: I recently got a totally fanless mini-PC (Zotac), loaded it up with 16GB of RAM and a 1TB SSD, and the utter silence in my office is amazing. Since switching off Apple (I used the fanless 'MacBook' models for a while) I'd normally work with headphones, but I've found the difference to be noticeable.

FWIW it runs OpenBSD and is currently at 52C. When doing a heavy compile or something it can get up to about 80. Mounted to the back of a monitor, can't even see it.

10/10 would recommend.

intrasight(3810) 1 day ago [-]

I (almost) always work where I can plugin - so the 'battery' I treat more like an internal UPS. 30 minutes would be fine :)

tartuffe78(4030) 1 day ago [-]

I used to use SMC Fan Control on my older MacBook Pro to just run the fan at medium at all times. I definitely miss it; you can't use it on the new Touch Bar MacBook Pros.

tbrock(1677) 1 day ago [-]

That's because it's a random Chinese board that attempts to provide up-to-date performance but isn't focused on up-to-date thermals/efficiency.

If a modern 13.3 screamer with battery life is what you are looking for however, check out the just released Thinkpad x390. It's even more modern and the battery life is a staggering 17-18 hours.

smcl(3734) 1 day ago [-]

Yep the fan spinning up when nothing special is happening is infuriating - for me 99% of the time it's Debian's 'unattended upgrades' which I now realise I should just turn off since I update/upgrade/dist-upgrade relatively frequently anyway.

canuckintime(10000) 1 day ago [-]

I only buy fanless laptops now.

Damogran6(10000) 1 day ago [-]

As a part of our enterprise security improvement project, they removed local admin from all people who don't necessarily need it. They'll replace it with creds that can be unlocked for a short period of time, but that's not here yet, and honestly, I don't need it bad enough on my laptop to raise a stink.

Now when my fans spin up, I just blame McAfee.

gioele(1976) about 8 hours ago [-]

> > Battery life is a little over 4 hours with the flush battery (55Wh) and 6-7 hours with the extended battery (80Wh).

> Not going to lie, that's pretty horrible.

On the next line he says that with a newish kernel he can get 6 hours with the flush battery. That is not bad at all.

> Update (2017-03-17): I managed to get PC7 idle by upgrading my kernel to 4.18 and replacing the r8168 module with r8169. Battery life has increased significantly. I now get 6 hours with the flush battery and 10 hours with the extended battery.

sharno(10000) about 8 hours ago [-]

With all the hype about the efficiency of ARM processors and Apple thinking of putting their custom designed processors in their laptops, I wonder if we'll ever get better battery life on our laptops.

I know nothing will be very significant, but I wish my laptop could at least last the whole day on battery.

https://blog.cloudflare.com/arm-takes-wing/

johnvanommen(10000) about 11 hours ago [-]

> lol. One of the things that really drives me nuts is my computer's fan turning on when I know it really shouldn't be. I have lived and worked with people for whom having their fan randomly turn on for no reason is completely normal, and I just can't understand how they can bear it. If this happens to me, you can bet I'm digging through Activity Monitor and killing the culprit before the fans can get fully ramped up.

At a place I used to work, laptops were routinely getting bricked by the anti-virus software. Basically it would run a scan, the laptop would overheat and die.

At some point, it seems like the 'cure' can be more dangerous than what it's designed to fix.

snazz(3727) 1 day ago [-]

If you tweak your power management daemon to force your CPU to stay in the lowest couple of SpeedStep levels, you can keep it much cooler at the expense of performance. This is effectively underclocking on the fly. When you do something fancy, your CPU clock frequency will increase, but not enough to heat it to the point where the fans turn on.

Most MacBooks don't turn their fans on until the aluminum bottom is tenderizing your lap (and/or melting the table). You could configure this on most non-Apple laptops too, but running the fans earlier keeps temps down and probably extends the hardware's lifetime.
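
A minimal sketch of that idea on Linux, assuming the standard cpufreq interface is exposed under /sys (paths and available files vary by driver; `cpupower frequency-set -u` achieves the same thing from a shell):

    #!/usr/bin/env python3
    # Rough sketch: cap scaling_max_freq so high P-states/turbo are never
    # used, keeping the CPU cool at the expense of performance. Run as root.
    from pathlib import Path

    def cap_max_freq(fraction: float = 0.5) -> None:
        for cpu in Path("/sys/devices/system/cpu").glob("cpu[0-9]*"):
            cpufreq = cpu / "cpufreq"
            if not cpufreq.is_dir():
                continue
            hw_min = int((cpufreq / "cpuinfo_min_freq").read_text())
            hw_max = int((cpufreq / "cpuinfo_max_freq").read_text())
            # Clamp the allowed ceiling partway between hardware min and max.
            cap = int(hw_min + (hw_max - hw_min) * fraction)
            (cpufreq / "scaling_max_freq").write_text(str(cap))
            print(f"{cpu.name}: capped at {cap // 1000} MHz")  # values are in kHz

    if __name__ == "__main__":
        cap_max_freq(0.5)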

systematical(3970) about 22 hours ago [-]

What I wouldn't do for 4 hour battery life.

mbell(3861) 1 day ago [-]

If your fan is spinning up when scrolling in Slack, it's likely an indication that Electron (Chrome) is refusing to use the GPU for rendering acceleration. This is likely due to either a driver issue or the driver/GPU being on Chrome's blacklist. I had this problem once on a hackintosh, and as I recall, starting Slack from a terminal with the `--ignore-gpu-blacklist` option fixed it.

coldtea(1216) 1 day ago [-]

>One of the things that really drives me nuts is my computer's fan turning on when I know really shouldn't be. I have lived and worked with people for whom having their fan randomly turn on for no reason is completely normal, and I just can't understand how they can bear it.

Yeah, but try standing in their shoes: they too are probably wondering how you can bear getting worked up over such small things as fans spinning up.

It admittedly sounds like a worse situation to be in (seeing that it means living one's whole life in constant irritation) than to have noisy fans.

zaroth(2761) 1 day ago [-]

You could drive about half a mile in a TM3 on that bigger battery (with heat and AC off).

You really should be able to cover a full day's work on 80Wh.

syntaxing(10000) 1 day ago [-]

You should see my X62; the battery is so bad. I even bought a 'brand new' battery and it lasts about 2 hours. We don't notice it, but battery technology has really moved forward over the last decade.

Ndymium(4005) 1 day ago [-]

You say that's horrible, but I have never gotten more than around 5 hours from my MacBook Pros (my own and my work's). Maybe I could if the only thing I ran was Safari and iCal, but that's not my use case. If their number is from normal use, I'd consider it normal. :shrug:

unethical_ban(3994) 1 day ago [-]

Sigh...

I got a Thinkpad E485 (I wanted to try AMD Ryzen mobile), and with Ubuntu installed I get about 2.5-3 hours on the battery, which is... I forget now. I think it's 48Wh.

systemBuilder(10000) 1 day ago [-]

You guys thinking ATX is a standard has me laughing my head off. On those motherboards we had 6 bus 'standards', 6 video card 'standards' (AGP, AGP 2X, AGP 4X, ...), and 6+ disk 'standards' (IDE, EIDE, PATA, SATA-1,2,3, M.2, PCI, and the list goes on). The biggest joke is that CPUs are not even 50% faster in the last 10 years, and yet we are still fooled into buying entirely new systems every two or three years by the Intel-Microsoft-Dell-Motherboard cabal.

tbolt(10000) about 24 hours ago [-]

Can you provide some evidence that "CPUs are not even 50% faster in the last 10 years?"

detaro(2115) about 23 hours ago [-]

> the biggest joke is that CPUs are not even 50% faster in the last 10 years

2009 high-end/enthusiast desktop CPU: Intel Core i7-975

2019 high-end/enthusiast desktop CPU: Intel Core i9-9900K

Single-thread performance has doubled. And the i9 has double the number of threads. And uses less energy.

> and yet we are still fooled into buying entirely new systems every two or three years by the Intel-Microsoft-Dell-Motherboard Cabal.

The point is that you do not need to buy an entirely new system every two or three years (and I personally know few people who do), but can comfortably upgrade parts. CPU upgrade compatibility has sometimes been annoyingly short, but most of the time you can easily extend the lifetime of a system by buying a former high-end CPU for cheap, and for other pieces compatibility lasts far longer. SATA drives from 10 years ago still work with modern PCs if they survived. Or your PC from 10 years ago, which you still use because 'CPUs haven't become faster', can be upgraded with a SATA or PCIe SSD; it can't fully utilize it, but it'll work and make a difference. PCIe cards have always been compatible between versions, you had the choice between AGP and PCIe for quite a while when that switch happened (like 15 years ago?), and technically you can still buy boards with PCI ports today (although that's a market niche).

shasheene(4022) about 23 hours ago [-]

Video card interface electrical standards may have changed, but GPU form factors have not.

Electrical interface changes just need to be paired with compatible motherboards.

Same with hard drives: the interface changed but the form factor has not. They are even compatible with their larger versions: a laptop 2.5' platter drive or SSD can fit in a 3.5' drive bay with a cheap bracket. The SATA versions were backwards-compatible speed bumps; performance was always the maximum supported by both drive and motherboard.

M.2 is just PCIe in a different form factor, so on desktops a $5 passive adapter allows NVMe SSDs to be used in any PCIe slot, even on relatively ancient PCs (though use as a boot drive depends on BIOS support).

Same with display interfaces. VGA is still supported on many systems, with DVI-I being backwards compatible via cheap passive adapters. DVI and HDMI are electrically identical (minus audio), so cheap passive adapters work.

The broader point is that large incompatible electrical changes are possible because they only mean the new motherboard needs to be paired with a compatible component. There's still market pressure for backwards compatibility unless the leap is large enough.

mark_l_watson(2554) 1 day ago [-]

I was interested in the hassle of making a payment and buying something from China. It runs counter to my view of how the world is supposed to work. The payment hassles are probably due to buying from a small group of enthusiasts, not a large company.

I haven't done business directly with anyone in China for a number of years (I helped a student on a project about 10 years ago, and a media company had me write a simple machine learning model for them), but them paying me was easy; they just used PayPal.

Buying non-rebranded Chinese electronics interests me because of the potentially lower cost, as long as it is westernized in things like the keyboard, etc., and has a year's warranty. I have specifically been looking at products like the GPD Pocket 2.

dbg31415(3817) 1 day ago [-]

https://news.ycombinator.com/item?id=14219918#14223933

I bought a NAS off Amazon a few years ago... one of their 3rd party sellers had the same product for $50 cheaper so I went with that. Anyway, what happened next was really amusing to me.

I get the product about 8 weeks later... I had tried to cancel because it was so slow, and Amazon wouldn't let me -- they said I had to receive the product and then send it back as a return once I got it in order to get a refund.

I was planning on just sending the box back the same day... but something caught my eye. The box had come from Shenzhen, China. Curious I cut open the outside box.

Immediately I saw that the NAS they sent inside had been opened, and had been re-tapped shut -- and poorly, there was a clear bulge on top of the box. My heart sank a bit... but I figured, 'Well, let's see -- maybe it was a return or something... can't hurt to open it again since it's already been opened.'

Inside the NAS box, all the manuals are in Chinese... and seem like they are just photocopies of the originals. The NAS was not in the original packaging, but rather elaborate bubble wrap. And there's a China to US power adapter to the cord.

I'm curious if it would even turn on, so I plug in the NAS... It boots! But not in English. I'm thinking, 'What did I just buy?!' But I can sort of read some of the messages, and it seems like instead of 4x2TB drives, it came with 4x4TB drives. Interesting.

It's late so I leave it initializing the drives (I think that's what it was doing anyway) and go to bed. Next morning I wake up, and it's still initializing the drives. Fuck it... time to call tech support and get my money back. The little light kept blinking yellow, but I couldn't read anything.

I email Amazon to initiate the return process, then go to work. I leave the NAS running -- honestly just sort of forgot about it. Got pulled into a business trip that day, so it was about 4 days until I got back to focus on the project again. When I got home the little yellow light on the NAS was still blinking, and I thought it was weird that I hadn't gotten an email from the seller with return instructions.

I email Amazon to tell them I hadn't heard anything back from the seller, and since I had to wait anyway, decided to call tech support. Cringe.

The thing was, the NAS was still doing something. The drives were still spinning... but after 4 days... I figured it wasn't doing anything good. But I didn't unplug it. I read the serial number to the guy in tech support. Pause... 'Can you read me the serial number again?' I do... longer pause. 'Can you read me the serial number one more time?' I do... Pause... 'Please hold, Sir.'

'Sir, where did you get this NAS?' I'd been transferred to someone up the food chain who told me that the device I had wasn't a valid serial number -- that the number I gave was for a model that hadn't been released yet. Super weird conversation, they took all my details, Amazon order number, and told me they would call me back.

Really late, like 2 AM that night, I got an email from the seller. It just said, 'What wrong?'

So I write back, and my phone signature had my cell phone number on it. I get a call. At like 3 AM. The guy is polite, but his English isn't great. He tells me to just unplug the NAS, and plug it back in again -- then walks me through how to install the English interface. We're chatting for like 2 hours. He's crazy knowledgeable. We get everything set up, but I have no idea what all I just put on the device... most of the links he had been emailing me throughout the process were just IP addresses and paths. But they seem legit... and there wasn't anything on the NAS yet so I didn't mind running strange updates on it.

He says the yellow light will blink for 4 hours 24 minutes. (I don't remember the exact number, but the point was it was an exact number.) He says to email him, not Amazon -- he's very clear about that -- if I need help after. He's a friendly guy, I liked how helpful he was.

I go back to sleep for a few hours, do some yard work when I wake up, forget about the NAS, but when I checked later... some point after 4 hours 24 minutes... it's all green and working fine. And it's got twice the space than I paid for. And it's all in English. Only thing was... when I hit the 'check for updates' option it just spun and nothing happened. But everything else seemed perfect.

So I'm on the fence... I have this clearly not authentic NAS from some random guy in China... that the manufacturer says it's not supposed to exist... but it's 100% bigger than what I thought I paid for...

That night I got an email from the seller, and you can tell he's sad. 'I tried to help you, why did you tell Amazon it was a fake? It not a fake.' I wrote back that I think that was from before he and I spoke, I had called the manufacturer's tech support to get the issue fixed and they told me the serial number wasn't real, but that it's working fine for me now.

He was writing back instantly, 'It not a fake, you want a refund? I give you refund, but please don't complain to Amazon about me, I sell good stuff.' Right away I see an email from Amazon telling me that the seller had given me a 100% refund. He even refunded shipping costs.

I felt awful. Here's this guy who spent 2 hours on the phone with me, probably spent a lot on an international call, and the manufacturer contacted Amazon about the order and came down on him for selling counterfeits. Probably scared him or told him he wouldn't be able to sell on Amazon any more if it was a fake... who knows.

I write back, 'Thanks for the refund, where do I send the NAS? I will pay for shipping. I didn't mean to get you in trouble.'

He writes back, 'I'm sorry. It's not a fake. Please don't be mad at me.'

The case was marked as resolved at Amazon, the guy told me I didn't have to return the NAS, and I never heard back from the manufacturer. Free NAS! But a little guilt because I didn't pay this guy for it, or for his time... and he stopped responding to my emails after that. I offered to send it back to him two more times.

About a year later I logged in to the interface and realized the auto-update feature was working fine. Updated the BIOS and firmware. I think that was 5-6 years ago now -- it's been running great.

I had raved about the quality of the NAS to a friend, who bought the same model about 6 months after I did... but his died a few months ago... when we took it apart to switch out the drives we realized it had totally different drives and cables, and even the logic board seemed different from the one that mine had. Sure hardware changes, but... his seemed more legit and polished inside than mine. Mine has a few spots inside that just look glue-gunned in place. (=

Anyway yeah, probably 7 years on... my little free (and probably semi-counterfeit) NAS is still running great.

That call at 3 AM was by far the most knowledgeable of any seller or tech support person I've ever spoken with.

lelf(41) 1 day ago [-]

> The fan only turns on if I'm doing something intensive like compiling go or scrolling in Slack.

That's intensive. And sad.

jasonvorhe(4030) 1 day ago [-]

I don't get why people don't just use Slack in Chrome. Everything people rant about with the Electron app doesn't happen in Chrome.

kzrdude(3351) 1 day ago [-]

so IRC is still better

jandrese(4030) 1 day ago [-]

I LOLed at the shade thrown at Slack. It's funny because it's true.

Causality1(10000) 1 day ago [-]

This reminds me of just how much I hate whoever came up with chiclet keyboards and ruined laptops forever.

stevenwoo(3542) 1 day ago [-]

Didn't the Timex Sinclair and IBM PCjr have keyboards like that back at the dawn of the PC era? The PCjr was especially egregious because normal IBM keyboards were pretty great, in my memory.

saagarjha(10000) 1 day ago [-]

Of course, with opinionated things like keyboards there will be people who love the exact thing you hate. Me, for example.

badsectoracula(10000) 1 day ago [-]

Probably the same person who came up with the idea of using widescreens on laptops. At least with keyboards, if you have enough money, you can buy a brand new laptop with a mechanical keyboard.

Nobody is even making 4:3 monitors anymore, laptops or not.

kensai(2518) 1 day ago [-]

I am happy to read this review a couple of days after I saw a video of this Unboxing guy evangelizing the use of other Thinkpads. Lenovo is definitely making nice machines.

https://www.youtube.com/watch?v=gZUSFda_W7k

'Unbox Therapy

After many years using MacBook variants I've made the switch to Windows. I've used every version of MacBook Pro and MacBook Air that have been released. My current laptop of choice is the Lenovo Thinkpad X1 Carbon / Lenovo Thinkpad X1 Extreme. Turns out switching from Mac to Windows isn't as painful as I expected.'

vxNsr(2788) 1 day ago [-]

Wow, how do people watch this format? Also, why does he keep looking to his right, off camera? I feel like I'm missing something that's happening.

theshrike79(10000) 1 day ago [-]

He's been salty at Apple for years since he made a big fuss about Bendgate (pretty much started it).

Hasn't been invited to an Apple event since. To the surprise of no one.

luxuryballs(4026) 1 day ago [-]

When I watched this video a few days ago, I couldn't get over how disingenuous he seemed, like he was just getting paid to make the video and didn't believe anything he was saying.

noir_lord(3939) 1 day ago [-]

I wonder why Lenovo doesn't lean into this.

Clearly there is a market for people wanting this kind of machine to the extent they are jumping through hoops for it.

I have a maxed-out T470P (stock) and it's a cracking little machine, and while the keyboard is excellent by modern standards, that's only because the bar has shifted; it's still not as nice as the one on my old R50 was.

efficax(10000) 1 day ago [-]

Lenovo knows about the love for the retro Thinkpads, and even tried to take advantage of it:

https://www.lenovo.com/us/en/laptops/thinkpad/thinkpad-t-ser...

It was a decent showing, but not what I wanted (an X60 form factor with a 4k 4:3 screen)

intopieces(4032) 1 day ago [-]

I suspect that their primary market is corporations, who have the resources to repair Lenovo's newest models or else have contracts for replacement.

chx(789) 1 day ago [-]

Hell if I know. On various forums we have raised the idea, but I guess it just doesn't get through: they should run a Kickstarter for the classic keyboard in, say, the successor of the T490 (the 10nm generation is going to be a huge change, so people will want to buy that more). If successful, Lenovo adds the option while the backers front the cost of creating it -- a new plastic mould for the palmrest, etc. -- and get a discount code they can enter when ordering the new machine with the classic keyboard. All this takes very little effort from Lenovo -- all their backend system needs is a batch entry of discount codes. They don't need to ship anything separately, and if the Kickstarter fails, they've spent very little money.

testacc432(10000) 1 day ago [-]

What laptops currently out in the market have a trackpad comparable to the Macbook Pro's?

shkkmo(10000) 1 day ago [-]

Most of the good ones. Manufacturers finally realized it was a stupid detail to fail at.

dotancohen(10000) about 10 hours ago [-]

> The fan only turns on if I'm doing something intensive like compiling go or scrolling in Slack.

The next time somebody asks why everybody has a problem with Electron, I'm referring them to this page.

Shacklz(10000) about 9 hours ago [-]

While I can understand to some extent that Electron receives a lot of heat, people really should stop arguing that 'Slack bad -> Electron bad'.

I can run Discord and vscode on my machine just fine, with plenty of editor tabs/server tabs open. While Electron surely has its issues, Slack being terrible really shouldn't be blamed solely on Electron.

cztomsik(3863) about 9 hours ago [-]

I scroll a lot in vscode and it's fine... but yeah, it could be lighter

ggreer(1951) about 8 hours ago [-]

It's a joke. That doesn't actually happen.

lucideer(3976) about 9 hours ago [-]

Electron is far from ideal, and it may be argued it's incredibly difficult to make a relatively efficient Electron app (vscode is often cited as one, but Microsoft has plenty of resources).

However, for all the efficiency challenges Electron brings, Slack is universally cited in comments like those above. This makes me strongly suspect that, beyond just being written in Electron, Slack is an extremely poorly written app, and simply the canonical example of overengineered inefficiency in general.

Slack Web, for example, also does similarly draconian things to my CPU.

Similarly, Riot.im—probably the most comparable app to Slack functionality-wise—is also built in Electron. It has some of the performance problems one would expect from any Electron app, but it is nowhere near as bad as Slack (and has a much much smaller development team with much tighter resources).

In short: Electron may not be ideal, but it seems to get an unfairly bad name from Slack; we should be laying the criticism with the Slack dev team rather than with the Electron one.

eggy(3811) 1 day ago [-]

I have a Lenovo T430u running Kali, and it is rock solid. I love the keyboard, and I use the TrackPoint for CAD work in FreeCAD. I never feel like it is going to slip from my hands when I pull it from my backpack. It is so easy to open up that I open it twice or more a year to clean out the fans, which are usually clean anyway; I like seeing the internals, like a car mechanic who likes to check under the hood ;)

I considered the Lenovo X1 Carbon, but it is pricey, doesn't have a number pad, and has the ultra-slim form factor of an MBP or similar notebook.

The Lenovo T580 has the num pad, but the graphics card is the NVIDIA MX150, a mobile version of the GeForce GT 1030. Not really an issue for me, but my son's Lenovo Yoga came with a 1050 two years ago.

Anyway, I've owned all sorts of notebooks, including MBPs, and have found the Lenovos to be my workhorses, getting out of my way so I can get things done. Yes, the battery is only 4 to 6 hours, but even traveling and living all over the world, it has never bitten me work-wise, only when playing.

evjim(10000) 1 day ago [-]

How did you learn FreeCAD? I really want to switch to it from Autodesk Inventor, but I have spent hours messing around and can't even make a cube. The YouTube videos I was watching assumed I knew too much.

linuxlizard(10000) 1 day ago [-]

Lenovo/Thinkpad + Kali sounds like a match made in heaven. May I ask what wifi chipset is on board? 'lspci | grep Network' should show it. The product page just says a/b/g/n which isn't terribly useful.

cyberpunk(10000) 1 day ago [-]

You use kali as a daily driver? Wow.

Do you use a non-root account?

zepearl(4012) 1 day ago [-]

I've likewise always been happy with Lenovos. I'm currently using an X1 Carbon (4th gen, FHD screen, ~6-7 hrs battery) and a P71 (4K screen, built-in nVidia disabled, ~4-6 hrs battery), and their fans run only when I'm stressing the CPU (e.g. compiling); even then I only hear the flow of air.

veryworried(10000) 1 day ago [-]

Why in God's name are you using Kali as a daily driver? Anyone who does this has no idea what they're doing. Kali is made exclusively for pentesting, with a modified and insecure kernel specifically for running certain pentesting apps better.

umvi(10000) 1 day ago [-]

I noticed the author names his computers after elements (iron). I do that too! I was taken aback because I have a box named 'iron' also (as well as cobalt, oxygen, and helium)

ggreer(1951) about 18 hours ago [-]

I name all of my computers after elements. I set up DHCP leases so that the last octet of each host's IP is their atomic number. eg: hydrogen is 192.168.1.1, lithium is 192.168.1.3, carbon is 192.168.1.6, iron is 192.168.1.26, etc.

I also tend to categorize machines by elemental classification. Servers are alkali metals, laptops are nonmetals, consoles are halogens, etc. I didn't have any neat nonmetals left, so I just called my new laptop 'iron'.

I describe the whole scheme in a blog post I wrote long ago: https://geoff.greer.fm/2009/06/17/hostnames/
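
(A toy sketch of that scheme in Python. The dnsmasq backend and the MAC addresses are my assumptions for illustration; the post doesn't say which DHCP server is actually used:)

    # Map element hostnames to IPs whose last octet is the atomic number.
    ELEMENTS = {"hydrogen": 1, "lithium": 3, "carbon": 6, "iron": 26}
    SUBNET = "192.168.1"

    def lease_line(host: str, z: int) -> str:
        mac = f"aa:bb:cc:00:00:{z:02x}"  # placeholder; use each machine's real MAC
        return f"dhcp-host={mac},{host},{SUBNET}.{z}"

    for host, z in sorted(ELEMENTS.items(), key=lambda kv: kv[1]):
        print(lease_line(host, z))
    # dhcp-host=aa:bb:cc:00:00:01,hydrogen,192.168.1.1
    # ...
    # dhcp-host=aa:bb:cc:00:00:1a,iron,192.168.1.26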

leiroigh(10000) 1 day ago [-]

I am a happy owner of an x62. This article almost convinced me to buy an x210 next.

Minor gripes: (1) the DisplayPort-to-HDMI thing; (2) the mainboard is split (sound + RHS USB are on a separate board) and the small part has a pretty flimsy attachment inside the case; (3) the CPU (an i5 Broadwell engineering sample) is pretty lackluster but perfect for development (no turbo, no throttling, no surprises); (4) finding an OK screen is hard; (5) nobody told me that I have to buy a separate wifi card -- there is a free mini-PCI slot, but 51nb could have been clearer in their online description; (6) the old X60 speakers suck.

Overall, it is an awesome machine (TrackPoint, IBM keyboard, beautiful rubberized metal case). And shockingly cheap: an X62 mainboard, plus an old X60 chassis from eBay and a battery from Amazon, plus a screen from Alibaba, plus SSD, wifi card and RAM, comes to ~$600 (I think?), cheap enough to buy as an experiment. I can definitely recommend it to anyone who is OK with a little tinkering and the small adventure of ordering this from China (best approach: have a Chinese colleague).

edit: Linux drivers work perfectly. Really, the RAM was the most expensive component of this build. To expand on the no-surprises advantage of the CPU: to me it is more important to effortlessly benchmark small code changes than to quickly compile large codebases. And the weird engineering-sample CPU that the 51nb guys sourced somewhere is absolutely perfect for that: timings reliably match up with Agner Fog's instruction tables / IACA, without anything getting in the way (once the kernel cpufreq governor is adjusted).

blattimwind(10000) 1 day ago [-]

> the mainboard is split (sound + rhs usb are on a separate board) and the small part has pretty flimsy attachement inside the case

That's the same as the stock design, however, and I've never had an issue with it. IIRC there are a couple of pins from the lower casting holding it in place, and then it's sandwiched with two or three screws between the lower casting and the palmrest. That's pretty solid.

ggreer(1951) 1 day ago [-]

I'm not sure what screen you got off Alibaba but if you want something better, you might want to try the daylight LED mod.[1] You won't get better color gamut, but you will get insane brightness and better battery life.

I did that mod to my X62 and it made the screen actually usable.

1. https://people.xiph.org/~xiphmont/thinkpad/led-kit.shtml

0w4u2a(10000) 1 day ago [-]

Please, allow me to hijack this thread.

What's the best Thinkpad I could get in Europe for ~200€ right now? (Off eBay of course). I don't care whether it's 13' or 15'.

Edit: thanks for all the replies.

mcjiggerlog(3752) 1 day ago [-]

What's 'the best' is obviously subjective, but you can get a decent X250 for that price.

carlesfe(2156) about 15 hours ago [-]

I've bought two machines from bluelink.nl; they're a small shop that refurbishes corporate laptops. Both machines worked great and are like new.

(Not affiliated with them, just a happy customer)

ioddly(4007) 1 day ago [-]

I'm not sure what the price differences are in Europe, but I snagged an X230 for about that amount in the US and am fairly happy. My only regret is the TN panel screen; this can be avoided by getting an X230T or going out of your way to find one with an IPS panel (a lot of resellers won't state what kind of panel it has, though). I got mine from a company that only refurbishes Thinkpads and was pretty happy about it, so I'd suggest looking beyond eBay as well and seeing if there are any companies like that where you are.

As the article suggests, you can also upgrade it along the way if your budget increases.

deng(1613) 1 day ago [-]

For 200 EUR you should be able to get a decent T440. Then invest another 15EUR and buy a T450 trackpad.

mxuribe(3836) 1 day ago [-]

I've had great luck with the Thinkpad T420. I bought 2 over the last 2 years for about $225 (USD) each from Amazon. On one of them I run Linux Mint without issue -- my daily driver. The other one I kept on Win7 (for my kid as their school machine). I did replace the battery on one of them with a physically smaller one, and it apparently boosted the duration/power... so the original battery must have been aged. I can't complain, because the replacement battery -- again off Amazon -- was around $50 (USD). Overall quite happy with the T420 and would purchase them again. (I have no experience with any other Thinkpad models.) I hope this helps!

TheCowboy(4028) 1 day ago [-]

The 520 series still has the classic keyboard. The T520 will come in under budget, but sometimes you can find deals for the W520 (my preference).

zhte415(2600) 1 day ago [-]

For what it's worth, I still use a circa 2011 T420 for my work PC (decent spec for the time with i7 and 8GB RAM) which will probably be around or under your price. Great ThinkPad keyboard (non-chiclet), and instead of following the company upgrade cycle I opted for an SSD and new battery.

A lot of corporates off-load perfectly good laptops (as 'company property' they've often been taken care of reasonably well, or just left on a desk, or kept in a cupboard as backups) as part of their procurement cycle. Find such a reseller and you'll likely not only have a cheap laptop, but one with years more life in it too.

blattimwind(10000) 1 day ago [-]

X220/X230/250

4ad(3249) 1 day ago [-]

Just don't get the ?40 series, it's the series without physical trackpad buttons. You can put the ?50 trackpad in it, but that's more hassle and doesn't work in Windows well without extra reflashing.

Thankfully, they reverted to physical buttons in ?50 series and later.

random878(10000) 1 day ago [-]

Best is different to everyone.

My top choice = best for Freedom; X200 or T400 for about £50 to 70. Debian or Trisquel. £3 or so for a cheap SOIC clip, beg/borrow a Raspberry Pi, and libreboot it.

If you want something more modern (but not as Free); X250, T450, T550. (13/14/15') Lots on eBay due to high turnover from business. I have an X250 running Debian (one blob for the WiFi). Perfect robust student laptop.

wiredfool(3821) 1 day ago [-]

I've got a T410 and a T430s. Keyboard on the 410 is way better, but I upgraded because I needed more memory and a stouter processor to do webpack compiles.

They're kind of a pain to disassemble, but it's doable.

snazz(3727) 1 day ago [-]

You could most likely get an X220 for that amount. It has the nice old non-chiclet keyboard (I like it even more than the X201 keyboard because it has big Escape and Delete keys) and is still plenty powerful.

keithpeter(3760) 1 day ago [-]

'The fan only turns on if I'm doing something intensive like compiling go or scrolling in Slack.'

Things like scrolling down a modern Web site are the main reason I'm not still using a vintage unmodified X200. I realise the quote above was probably tongue in cheek, but I do find that surfing the Web has become a processor-intensive activity!

0x03(10000) 1 day ago [-]

Only if you enable JS ;)

WebDanube(10000) 1 day ago [-]

Curious about the weight of the X210 -- couldn't find it mentioned anywhere. I'm guessing such a feature-packed device must greatly compromise on portability.

keithpeter(3760) 1 day ago [-]

Probably works out a tad lighter than the X200 with whichever battery the user decides to use. The unmodified X220 I'm writing this on weighs 1.6 kg with the 'stick out' battery and I recollect that my old X200 was about the same.

pixelmonkey(2550) 1 day ago [-]

If you can still find one on the used market, I'll put in a plug for the Lenovo X1C 4th Gen (2016 model) as an ideal Linux laptop. It's what I switched to after the X220, and I describe it here:

https://amontalenti.com/2017/09/01/lenovo-linux

Fan never turns on, matte display, awesome connectivity, great battery life, and everything on Linux just works.

jseliger(15) 1 day ago [-]

Have you tried the Dell XPS Developer Editions? They seem to get pretty good reviews as well.

sigil(3728) 1 day ago [-]

Seconded. I had an X1C2 and am currently on an X1C5. Everything works out of the box on linux and the performance is crazy for something so thin & light (2.8lbs).

kjaer(3913) 1 day ago [-]

I recently switched to a X1C6, and I wholly agree. It's an amazing laptop in terms of hardware and build quality, but it does have a bunch of Linux compatibility problems.

airstrike(3121) 1 day ago [-]

> The fan only turns on if I'm doing something intensive like compiling go or scrolling in Slack.

There's something terribly wrong with this statement

sgc(10000) 1 day ago [-]

It's a joke.

b3b0p(4024) 1 day ago [-]

I have a MacBook Pro (2017, 15' top model). I came from a Thinkpad running Red Hat, and later FreeBSD.

For the last couple of years I have been putting a Thinkpad I'm lusting over (P52, X1 Carbon/Extreme, or P1) in my cart, but not hitting the buy button.

After owning this MacBook Pro, my first question is: how replaceable are the keyboards? I want to have a spare in case of spills, crumbs, dirt, and so forth. It's peace of mind at this point.

From my quick and dirty research, they aren't as replaceable as they used to be, but the job can still be done by the user. The P52/72 seem to be the only easy options, like in the 'good old days.' The X1/P1/X series require the entire top palm rest to be replaced, which is costlier, but it's an option nonetheless.

I'm eyeing either an X1 Extreme, a P52, or maybe something slightly more portable like the X390. I feel this is the year I'm finally going to hit that buy button and at least test the waters of whether I can get myself out of the Apple ecosystem and survive without regrets using Linux. Either way, it will be a fun experiment.

systemBuilder(10000) 1 day ago [-]

MacBook Pro keyboard replacement is a nightmare: it's a 3-hour process. Search up the videos on YouTube; you have to undo about 50 screws. The sooner you realize that Apple only makes throwaway shit, the better off you'll be in life!

ww520(3054) 1 day ago [-]

I stocked up on thin silicone membrane keyboard covers; I put one over the keyboard and replace it every 6 months. It prevents spills and scratches. Keyboards can get really dirty, and replacing the cover once in a while is like getting a new keyboard.

ahstilde(3994) 1 day ago [-]

Can someone help me understand why Americans still buy Lenovo computers after Superfish and Lenovo Service Engine?

Why isn't Lenovo as much of a security risk as Huawei?

Barrin92(10000) about 24 hours ago [-]

Honestly, because it has no relevance to me. I own a Huawei phone and a Lenovo laptop. I don't work in any kind of government-security-related job, and I run Linux on my laptop, so I don't see what I'm risking from the standpoint of a personal consumer and developer.

ilrwbwrkhv(10000) 1 day ago [-]

Exactly, and Thinkpads especially. They are usually used in critical places.

elagost(10000) 1 day ago [-]

There's more or less a cult around using Linux on older Thinkpads. They're so widely available for so cheap, and since so many people use and test software on them, pretty much every Linux distro works out of the box. The older models are coreboot- and libreboot-friendly, the keyboards are amazing, and almost every single part is replaceable (the older 14' models even have socketed processors).

Many still enjoy the old IBM models. I'm typing this on an X220 model, and I do understand the risks associated, which is why it's running coreboot, a stripped-down Intel ME, and GNU/Linux (as opposed to original firmware and Windows).

The X220 that I'm using has run every Linux distro I can think of with zero modifications, and I even had macOS on it as a hackintosh for a time. I've replaced the wifi card, I have two hard drives in it and two batteries for it, and it still does everything I need a computer to do with zero fuss.

Even without Coreboot and Linux, many still find that the risks don't outweigh the rewards. Same reason people buy newer MacBooks that lack MagSafe, SD card slots, USB-A, Ethernet, a decent keyboard, an escape key, replaceable disks/RAM, etc. For Macs, it's form over function. For Thinkpads, specifically the older ones, it's the exact reverse.

VvR-Ox(10000) about 12 hours ago [-]

Because Huawei is a Chinese company, and the media still tells people they are 'the bad communists', even though they haven't really been communists since Mao.

Same goes for spying:

- The US spied on countries in the EU by hijacking (network) hardware deliveries, installing intelligence offices near DE-CIX, and many other actions we just don't know about officially.

- China is suspected of doing the same once in a while (I think there was some rumor about 500GB HDDs with pre-installed malware), and I'd be surprised if any country with the necessary resources did otherwise.

These arguments are made to distract us from the fact that there is no real hiding place, and to try to convince us that one of those parties is 'the good' and the other 'the evil', because they want power.

FrankDixon(10000) 1 day ago [-]

Second that. I don't get it

camgunz(10000) 1 day ago [-]

You don't buy the new ones. You buy at latest an X230 and put Linux or BSD on it. All the excitement around 51nb is that you can upgrade to modern hardware if you think the X220 is the last good notebook ever commercially produced. A lot of people (disclaimer: including me) think that, which is why you see them as targets for things like OpenBSD and Libreboot.

But you're right, I would never buy a 'modern' Lenovo. In fairness, though, I don't know that I'd buy any modern notebook at all.

eikenberry(10000) 1 day ago [-]

Those are both issues with running Windows on Thinkpads. The article's author is running Linux. Besides, there are very few options for decently built laptops with good keyboards.

dhd415(3035) about 8 hours ago [-]

Only a partial explanation, but Superfish was not installed on the business-grade laptops, i.e., the T, P, and X-series, and those are really the only ones that the HN crowd would use. LSE was definitely a major mistake though they at least relented and offered a removal tool for it.

Huawei has been the target of a pretty unprecedented effort by the US to eliminate their hardware from both the US and US-allied countries. I doubt many of us know the necessary facts to evaluate the appropriateness of that action against Huawei, but either way, Lenovo hasn't been singled out like that.





Historical Discussions: Rudder issue that plagued the Boeing 737 throughout the 1990s (March 14, 2019: 926 points)

(931) Rudder issue that plagued the Boeing 737 throughout the 1990s

931 points 4 days ago by IFR in 10000th position

imgur.com | Estimated reading time – 1 minute | comments | anchor

Perhaps the single most complex, insidious, and long-lasting mechanical problem in the history of commercial aviation was the mysterious rudder issue that plagued the Boeing 737 throughout the 1990s. Although it had long been rumoured to exist, the defect was suddenly thrust into the spotlight when United Airlines flight 585 crashed on approach to Colorado Springs on the third of March, 1991, killing all 25 people on board. The crash resulted in the longest investigation in NTSB history, years of arduous litigation, and a battle with Boeing over the safety of its most popular plane. Flight 585 proved to be hardly alone; over the subsequent years, more planes crashed due to the same rudder defect, including USAir flight 427, which killed 132 people when it suddenly rolled over and crashed on approach to Pittsburgh, Pennsylvania in 1994. As it turned out, these were but two of the most serious of hundreds of incidents involving the rudder on the Boeing 737. This is the story of the origin of the defect, its consequences, and Boeing's efforts to cover it up. Images sourced from The Seattle Times, the NTSB, Boeing, Tails Through Time, the Colorado Springs Gazette, The Times of India, Wikipedia, TribLIVE, The Flight 427 Air Disaster Support League, and Forbes. Video clips courtesy of Cineflix and the Weather Channel. Special thanks to the Seattle Times for its series of articles on the subject in 1996, which brought to light many of the details referenced here.




All Comments: [-] | anchor

gist(2218) 4 days ago [-]

Here we go. Someone uploads an image from January, and all of a sudden whatever it says must be interpreted as somehow factual and true. [1]

We don't know who the poster of this is or even if this is correct:

> Images sourced from The Seattle Times, the NTSB, Boeing, Tails Through Time, the Colorado Springs Gazette, The Times of India, Wikipedia, TribLIVE, The Flight 427 Air Disaster Support League, and Forbes. Video clips courtesy of Cineflix and the Weather Channel. Special thanks to the Seattle Times for its series of articles on the subject in 1996, which brought to light many of the details referenced here.

[1] This reminds me of emails from the mid-90s internet. Those were always a version of (at least) 'my brother is a Harvard-trained doctor and he sent me this!'

thatswrong0(10000) 4 days ago [-]

We do know who the poster of this is... http://www.reddit.com/user/Admiral_Cloudberg

fingerlocks(10000) 4 days ago [-]

Original Seattle Times post is here:

http://old.seattletimes.com/news/local/737/part01/

wintorez(10000) 4 days ago [-]

Is it just me, or are a lot of big names in various industries dropping the ball on QA? From exploding Nike shoes to crashing Boeings to faulty MacBooks.

gpderetta(3442) 4 days ago [-]

One of those things is not like the others.

rargramble(10000) 4 days ago [-]

Everyone is outsourcing their QA. QA is now seen as the least important part of the process in many, many industries.

maccio92(10000) 4 days ago [-]

Hmm.. all made in China

lukewrites(10000) 4 days ago [-]

I don't see much similarity between Nikes torquing apart and what Boeing has done/is doing. I'd say a better comparison is between Boeing and Tesla's self-driving deaths.

asdff(10000) 4 days ago [-]

Big difference between a poorly glued shoe, a crappy keyboard, and 150+ dead per crash.

Not_a_pizza(10000) 4 days ago [-]

I'm expecting Boeing to do everything in their power to spin the news from 'they were at fault' to 'poor little US company is being attacked by an incompetent gang of pilots'.

michaelcampbell(4027) 4 days ago [-]

I doubt anyone with the least bit of interest in this story would consider Boeing '[a] little US company'.

argd678(10000) 4 days ago [-]

This is why we have the NTSB. Prior to its existence, manufacturers were the primary investigators of their own planes' accidents, with predictable results like the above.

duxup(10000) 4 days ago [-]

They write such great reports. Technical, but also accessible. I'm not a pilot and I find them interesting to read.

rb808(2987) 4 days ago [-]

To me it's interesting that they wouldn't redesign the part, as that would be an admission of having caused previous crashes. Kinda scary that the lawyers rule the USA at the end of the day.

It's easy to blame Boeing for faults like this, but it's a miracle that these things fly so reliably with so many moving parts and so much human involvement.

Dahoon(10000) 4 days ago [-]

>Kinda scary that the lawyers rule USA at the end of the day.

I'm not sure you thought that one through. Are you really surprised? Who else should it be? I'm sure no one would say the politicians.

hopler(10000) 4 days ago [-]

Lawyers don't design planes. CEOs who intentionally lie about their planes' safety do.

Without lawyers, those CEOs could go on killing people and never have any accountability.

JackFr(3232) 4 days ago [-]

I imagine that Boeing engineers are decent people who want to be able to look at themselves in the mirror and be able to sleep at night. So I find it hard to believe in an active conspiracy, but the propensity for groupthink and self-delusion seems extraordinarily high.

_s(3788) 4 days ago [-]

If you've spent your career at a company that can fire you for not parroting the company line, in a field as small as aviation, chances are you'll not get hired elsewhere, and you'll do as you're indirectly expected to.

mikeash(3613) 4 days ago [-]

I always quote this, but only because it's always applicable:

"It is difficult to get a man to understand something, when his salary depends upon his not understanding it!" - Upton Sinclair

People tend to understand this as saying that people will fight you and feign ignorance. I think it should be taken more literally: a person's financial position affects their reasoning, and it is literally difficult for them to understand something that will harm them. Not impossible by any means, but when there's a financial incentive not to understand, it's far more likely that people just won't get it.

rootusrootus(10000) 4 days ago [-]

I imagine some of it has to do with not wanting to be wrong. And especially not wanting to be wrong when that means you have caused/contributed to someone's death. That is a powerful motivator to find alternate explanations.

Everyone here is probably intimately familiar with that attitude, since it permeates software engineering (and probably every other technical field).

howard941(1115) 4 days ago [-]

FWIW the rudder hardover described in this very interesting link doesn't appear to be related to the Max 8 issues around MCAS.

mzs(2144) 4 days ago [-]

'Boeing blamed the yaw damper...'

What if this MAX thing isn't pitot tube or MCAS software...

fixermark(3856) 4 days ago [-]

Correct. Not to play psychic, but I'm assuming the original poster chose to share this as a friendly reminder that as a general rule, Boeing does not have a track record of placing human lives above their continued corporate profitability.

'If Boeing knew about a problem with the MCAS, they'd have told the FAA and corrected it' is not a hypothesis in line with their past behavior, should anyone be holding that hypothesis in their minds.

gnud(10000) 4 days ago [-]

The article says that since the valve was redesigned, there have been no more accidents.

I think the point here was to show how Boeing has responded to issues in the past.

marcosdumay(4030) 4 days ago [-]

Aviation has a well-known acceptable risk level of around 10^-9 for each issue. That's the number that drives government intervention, pilot procedure design, aircraft design and everything else. It's expected to lead to a less than 10^-6 chance of accidents per flight. (Somebody calculated the 737 MAX odds at 4x10^-6 yesterday, which is a crazily high level.)

That number has been higher in the past, and is moving toward 10^-10 per issue with 10^-7 overall risk right now, with large airplanes in scheduled service very near that level.

DuskStar(4022) 4 days ago [-]

> Somebody calculated the 737 MAX odds at 4x10^-6 yesterday, which is a crazily high level.

I think that may have been me (1/250000), but that was based on a couple of generous assumptions - two crashes across 4 flights/day on 350 planes for an average of 365 days. Unfortunately I think a more reasonable flights/day number is 3 or lower - a lot of Max 8s are on longer routes - and the flight day average is almost certainly lower than 365 (which assumes linear deliveries for the past two years, with no days for maintenance).
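
(For concreteness, the arithmetic behind that 1/250,000 figure under the stated generous assumptions -- a sketch, not official data:)

    # Rough 737 MAX accident-rate estimate from the assumptions above.
    planes = 350           # aircraft in service (generous)
    flights_per_day = 4    # per aircraft (generous; 3 may be more realistic)
    days_in_service = 365  # average per aircraft (generous)
    crashes = 2

    total_flights = planes * flights_per_day * days_in_service  # 511,000
    rate = crashes / total_flights
    print(f"{rate:.2e} per flight, about 1 in {round(1 / rate):,}")
    # -> 3.91e-06 per flight, about 1 in 255,500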

hhmc(10000) 4 days ago [-]

Is that 10^-6 figure for any sort of accident, or just catastrophic ones?

fopen64(10000) 4 days ago [-]

Put all Boeing engineers on Max 8 planes flying nonstop around the globe until someone breaks down and speaks up :)

winslow(10000) 4 days ago [-]

Why only the engineers? Shouldn't management and the rest of executives also be on the planes?

jackschultz(3707) 4 days ago [-]

I know these comments are kind of frowned upon in threads about an issue like this, but this is a very interesting use of Imgur. It's effectively a blog post focused on images, which I feel is a very good way of thinking about posts. People love images rather than only words. Has Imgur been looking at this? Is it trying to push this as another big use for its platform?

JohnJamesRambo(4018) 4 days ago [-]

I felt the same reading it. Thinking wow this is the best use of imgur.

lutoma(10000) 4 days ago [-]

FYI, the post is by reddit user /u/Admiral_Cloudberg/, who has done a number of flight crash investigation posts on imgur like that: https://www.reddit.com/user/Admiral_Cloudberg/submitted/

Edit: Ooops, someone already mentioned this below. Oh well.

teej(2287) 4 days ago [-]

This format is a known use case and results in some of Imgur's most interesting content. Imgurian missfilipina feeds her village's kids for free every Sunday using donations from the Imgur community and shares her stories like so - https://imgur.com/gallery/nFoh5Bn

Human interest stories are the most common posts to use the format but educational stuff like this is definitely seen too.

frosted-flakes(10000) 4 days ago [-]

This is a common pattern for posts on r/DIY on Reddit, where people have 5-100 photos covering each step of a project, and each photo has a bit of text below it explaining the process.

I think it works great, and I much prefer it over videos because I can take my time, and videos tend to gloss over the details. My only issue with Imgur is that on mobile the images are very low-res, so zooming in to see details doesn't work.

scolby33(10000) 4 days ago [-]

Vepr157 [1] on Reddit has several similar Imgur albums on the design of submarines, such as 'American Second Generation SSNs' [2] and 'Soviet and Russian Submarine Propulsors' [3].

[1] https://www.reddit.com/user/Vepr157/

[2] https://imgur.com/a/9h3gD

[3] https://imgur.com/a/t6UjU

btmiller(10000) 4 days ago [-]

This makes me wonder how we'll view space travel generations from now. Will the descendants of companies like SpaceX and Blue Origin face outrage and calls for criminal charges like Boeing is receiving now?

Risk is inherent in fast modes of transportation, and I think it's very easy for us to ignore the underlying complexity of these feats. Great example: regular air travelers, I'm sure, are used to the preflight safety announcement run by the cabin crew, but when was the last time you (of the royal variety) actually stopped what you were doing, focused on the briefing, and made a mental note of the plane's safety features?

lutorm(3993) 4 days ago [-]

'when was the last time you (of the royal variety) actually stopped what you were doing, focused on the briefing, and made a mental note of the plane's safety features?'

Every. Single. Time. I fly.

I'm a pilot, and I know the chance of an accident is small, and the chance of a situation where my actions will make a difference to the outcome is even smaller. However, since I'm locked in a seat with nothing important to do, paying attention and noting where the life vest is, how you put it on, and where the exits are in relation to my location has an opportunity cost of zero.

outworlder(3442) 4 days ago [-]

I personally don't pay THAT much attention to the briefing because I'm an aviation nut. However, I will still pull the safety card and check exactly which aircraft I'm in and where the emergency exits are.

EDIT: I do pay a lot of attention during the takeoff and landing phases. That's when most of the issues happen, so headphones and sleep can wait.

atomicbeanie(3971) 4 days ago [-]

Manufacturers take the blame for this. And they take the blame for things like the lack of a global transponder in the loss of the 777 over the Indian Ocean. Unfortunately the FAA processes, while enlightened in some ways and firmly grounded in the science of safety, effectively act as a strong deterrent against a manufacturer changing anything in a design.

The result is that many aircraft operate for a very long time with very outdated systems. Replacing a design requires proving to the FAA, at prohibitive expense, that there will be no corresponding degradation of the system's performance and no new safety risk. Unfortunately such a process does not account for the cost of not replacing the system; no cost is attributed to keeping something that is old and lacking in capability.

The result is that aircraft systems are woefully behind what technology can offer. And this is not just the hardware or the software; it includes the procedures and the overall set of capabilities. Aircraft are being operated to the standards of the 50s, when in fact a much higher standard of crew and aircraft performance is possible. When I say performance I am also talking about safety performance: the ability to operate without harm-causing failure.

sspyder(10000) 4 days ago [-]

That's not what happened in this case. They used a single piston where commonly two or more pistons would be used to control the rudder.

spiznnx(3099) 4 days ago [-]

This news cycle is surreal... just a few months ago my opinion of US aviation safety couldn't have been higher, and now that view is totally shattered.

js2(730) 4 days ago [-]

Don't pay attention to the news. It reports statistically rare events that are unlikely to affect you. Pay attention to the statistics[0,1,2]. US aviation is still remarkably safe[3]. Just don't be this guy[4].

[0] http://ipa-world.org/society-resources/code/images/95b1494-L...

[1] https://www.cdc.gov/nchs/fastats/leading-causes-of-death.htm

[2] https://www.bls.gov/iif/oshcfoi1.htm

[3] https://en.wikipedia.org/wiki/Aviation_safety

[4] https://www.xkcd.com/795/

avar(3099) 4 days ago [-]

Aside from this specific issue, all of aviation works like this.

E.g. the reason the 747 was decommissioned from passenger flight in the US when it was, is that airlines flew it right up to the day the FAA mandated that they couldn't fly it anymore.

The reason was that they didn't have then-mandatory fuel tank inerting. For something like a decade there were a bunch of planes in the air carrying people that were known to be more likely to explode than some other planes.

Regulatory safety is always a messy combination of new requirement and timed phase-out of old systems.

Same with cars: you can buy a used car today, and even use it as a taxi to ferry passengers, without it having safety features that would make it illegal to sell as a newly manufactured vehicle.

rwc(3828) 4 days ago [-]

'E.g. the reason the 747 was decommissioned from passenger flight in the US when it was is because they flew it right up to the day that the FAA mandated that they couldn't fly it anymore.'

I'm sorry, what? Do you have a source?

rootusrootus(10000) 4 days ago [-]

Do you mean a specific 747 variant? Because 747s are definitely still flying passengers in the U.S. They are still making new ones.

rocketraman(10000) 4 days ago [-]

Leaving aside the (unconvincing) possibility that Boeing was actively covering up issues (which, if they were, is potentially a criminal issue due to fraud), there is a deeper philosophical point to be made about how people should view industry in the modern world, which includes aviation, but also all other production, ranging from farming and mining, to the manufacturing and use of products, and even to services.

It's easy to point to the various risks and lives at risk, due to the products of industry, such as aviation accidents as well as pollutants and even mundane things like typing on a computer (RSI anyone?).

However, what is often forgotten is all the amazing benefits of this industry -- from being able to fly anywhere in the world in less than a day at a cost affordable by almost anyone in a developed country, to having energy to light and heat our homes and run our medical devices, to the existence of this very forum. It is right and moral for both producers (in setting their own safety and emission standards) and the state (in setting limits on production in the name of 'protecting society') to consider these positives as well as the negatives. It is morally right even knowing that not setting these limits higher will result in lives and health lost, because the alternative is going back, bit by bit, to a pre-industrial society in which humans were lucky to live past 35. The way to raise these limits is in fact to become richer, such that we can afford the better controls. If the state attempts to control an industry too tightly before it can afford those same controls, it is essentially the same as destroying it, and keeping its benefits from the world forever.

Aviation is an example of this whole process working. It's exactly why aviation has become so incredibly safe, while at the same time becoming ever more economical. Companies like Boeing are to be, overall, praised. When fraud occurs, it needs to be investigated and punished, but that doesn't change the essentially good nature of Boeing.

jayrot(10000) 4 days ago [-]

Thanks for this. Perspective is refreshing sometimes.

syllable_studio(10000) 4 days ago [-]

Wow and this content is just posted on imgur with no link to sources? Is anyone already working on posting this through a legit source? If this content is real, it seems wild that it's not even published somewhere that is searchable online.

Admiral_C(10000) 4 days ago [-]

The source is here: https://www.reddit.com/r/CatastrophicFailure/comments/adl0jk...

I wrote this. I noticed it jumped 12,000 views and I got no username mention on Reddit, so I asked around and found it came from here. Made an account just to post the source. I'm pretty pissed that someone linked it completely without credit.

As for what it is, it's part of a series I write for reddit where I read as many sources as I can about a plane crash, write it up in a way that's understandable to laymen, and then post it for others to read. Taken away from me and my reputation on Reddit it has zero credibility because it's just a random album on Imgur.

0xdeadbeefbabe(3297) 4 days ago [-]

Can't you just stop buying airplanes from them?

Edit: Do the downvoters also want to throw the executives in jail for life?

FearNotDaniel(4006) 4 days ago [-]

As consumers, we do at least have the choice on some routes of whether to buy a ticket on a Boeing or not. While you can't guarantee 100% that the airline will use the equipment claimed at the time of purchasing the ticket, I've found at least for short haul within Europe and on one recent long haul holiday, I was able to make a reasonable choice between Airbus and Boeing machines without having to make significant tradeoffs around price, flight time, choice of airport etc. For instance, wherever Ryanair and Eurowings serve the same route, I have my pick of Boeing or Airbus respectively. For the record, when I have a choice, I tend to fly Airbus anyway.

sn41(10000) 4 days ago [-]

No, this is not the right way to look at it. How will you feel when a purportedly safe drug ends up killing or maiming many? Oh, they could've stopped buying thalidomide?

Should the management of a burger joint that caused deaths due to food poisoning escape punishment, just because there's a McDonald's nearby?

There is a breach of trust involved in these cases, where the potential fallout is death. That is very serious.

fixermark(3856) 4 days ago [-]

Not really. Aerospace is a very tight-knit oligarchy where the devil you know has, historically, all too often proven to be leagues better than the devil you don't. A company ceasing to be a Boeing customer over an issue like this runs the risk that the next company they work with is just as bad in different ways, but in ways the company's ground technicians are wholly unfamiliar with.

JustSomeNobody(3792) 4 days ago [-]

And.

And just stop buying airplanes from them.

diarmuidc(10000) 4 days ago [-]

Ah yes, the magical invisible hand of the market will solve this. Now, when going on holidays, I need to factor in the safety standards of the planes that will be servicing my route. Or, you know, get a government regulator to regulate airplane manufacturers like normal countries do.

x0x0(10000) 4 days ago [-]

It seems to be a duopoly: Boeing and Airbus. There was Embraer, but Boeing is acquiring (acquired?). There's also Bombardier, but Airbus has some partnership deal with them.

JoshuaRLi(10000) 4 days ago [-]

> A new car built by my company leaves somewhere traveling at 60 mph. The rear differential locks up. The car crashes and burns with everyone trapped inside. Now, should we initiate a recall? Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.

- Narrator, Fight Club
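
For illustration, the Narrator's rule reduces to a one-line expected-value comparison; a minimal Python sketch, with every figure hypothetical:

# All numbers are hypothetical, purely to illustrate the quote's arithmetic.
vehicles_in_field = 1_000_000       # A
probable_failure_rate = 0.00001     # B
avg_settlement = 2_000_000          # C, dollars

expected_payout = vehicles_in_field * probable_failure_rate * avg_settlement  # X = A*B*C
recall_cost = 50_000_000

# The (amoral) decision rule: recall only when it's cheaper than paying out.
print('recall' if recall_cost < expected_payout else 'no recall')  # -> no recall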

madez(3964) 4 days ago [-]

That first approximation is only valid for small total numbers of failures. If a failure repeats, the loss function is not simply A times B times C but greater, for example through loss of perceived product quality, or through criminal investigations against the company and individuals in it.

dabbledash(10000) 4 days ago [-]

What is the alternative to a cost benefit analysis though? We could spend an infinite amount on diminishing safety gains.

jfk13(3931) 4 days ago [-]

Wow. The quote from Capt Ray Miller is particularly telling:

"I have been told by my company . . . that the FAA and Boeing (were) aware of the problems with the spurious rudder inputs but considered them to be more of a nuisance problem than a flight safety issue. I was informed, that so far as everyone was concerned, the rudder hardovers were a problem but that the `industry' felt the losses would be in the acceptable range. I was being mollified into thinking the incident did not happen, and for the `greater good' it would be best not to pursue the matter. In other words I am expendable as are the passengers I am responsible for, because for liability reasons the FAA, Boeing et al cannot retroactively redesign the rudder mechanisms to improve their reliability.'

And this was after the fault had not just caused in-flight emergencies, but had already killed people...

gameswithgo(3912) 4 days ago [-]

I wonder what solutions there are to these liability/blame problems. I have seen a similar case in Australia, where a parking barrier was an extreme danger because it crossed over a biking path and was hard to see until the last moment. It caused a crash that took a man's leg, and the legal proceedings took years. During that time the barrier remained in place, still a danger, because removing it would have admitted fault.

azernik(3661) 4 days ago [-]

A side note: there is a well-defined value of 'acceptable range', i.e. regulators regularly make decisions based on whether the cost of a change would be more than the 'statistical value of a human life'. (https://www.theglobalist.com/the-cost-of-a-human-life-statis...)

The question is whether, through regulatory capture or negligence, monetary costs are being valued too highly.

kibwen(772) 4 days ago [-]

To add a human element to this story: I'm from north of Pittsburgh, and the crash of flight 427 is one of the events from my childhood that I can distinctly recall. One of my classmates--eight or nine years old--lost her father in that crash. Our class planted a tree outside our middle school with a plaque to memorialize him. I bother saying this only because, while air travel is impressively safe overall (hats off to the FAA and NTSB), it's natural to mentally dismiss a mere ('mere') 132 deaths in the grand scheme of things without pausing to consider the broader ripples such an event has on history.

If you'd like to experience a moment of somber horror, Wikipedia has a computer reconstruction of the final moments of the plane based on the recordings recovered from the black box: https://en.wikipedia.org/wiki/File:USAir_Flight_427_Chase.og...

eeeeeeeeeeeee(10000) 4 days ago [-]

My next door neighbor died in this crash (flight 427). I'm from Virginia. I remember this crash vividly and I was only 10 at the time.

zaroth(2761) 4 days ago [-]

I ended up finding my way to that after reading TFA and Googling 427.

Not just that, but the fact that they had audio all the way down, and the pilot's utter disbelief that the plane was responding to his inputs in the way that it was.

The idea that they were not actually supposed to pull up in the scenario but leave the stick level and just turn hard right is unbelievable. The amount of training to be able to react like that instinctively would be tremendous.

The plane hit 3 or 4 G, and apparently the copilot could be heard straining on the tape. They were analyzing the pilot's gasping to deduce how he was applying rudder, because the black box didn't record rudder inputs at the time.

All of this is inconceivable. The sheer speed of the event and how quickly that 30 second video is over is probably the most shocking.

This one showing the pilot inputs is even more terrifying:

https://upload.wikimedia.org/wikipedia/commons/9/99/USAir_Fl...

cdolan(10000) 4 days ago [-]

Also from Pittsburgh, and my wife lost her dad in this crash.

The top comment on this thread (re: 'acceptable loss'), is infuriating.

iooi(4009) 4 days ago [-]

Take the number of vehicles in the field, A, multiply by the probable rate of failure, B, multiply by the average out-of-court settlement, C. A times B times C equals X. If X is less than the cost of a recall, we don't do one.

zdw(55) 4 days ago [-]

Maybe _Unsafe at any Speed_ ?

rkangel(3929) 4 days ago [-]

Even assuming you leave morals out of the argument, there are more variables, like the effect on future sales due to reputation (which could affect the equation either way).

iooi(4009) 4 days ago [-]

I don't agree with this being detached from the parent. It's directly related to the quote: 'the `industry' felt the losses would be in the acceptable range'

101001001001(10000) 4 days ago [-]

This quote happens while they are sitting in an airplane...

Sohcahtoa82(10000) 4 days ago [-]

Are there a lot of these kinds of accidents?

reviseddamage(3815) 4 days ago [-]

Which movie is this quote from? :)

usaphp(1481) 4 days ago [-]

> Instead, Boeing tried to claim that flight 427 crashed because a pilot had a seizure and depressed the rudder. NTSB investigators dismissed this as ridiculous.

> Boeing had no choice but to carry out the changes, but the company never stopped trying to deflect blame. While the investigation was ongoing, it adopted a philosophy of trying to avoid paying out damages to families of crews because this could be legally interpreted as an admission of responsibility. It had tampered with the PCU from the Colorado Springs crash and repeatedly tried to misdirect the investigation with "alternative" theories.

Should not there be some criminal charges?

rkangel(3929) 4 days ago [-]

That would require more evidence than it sounds like there is. And that's assuming you subscribe to the motivations put forward in this article. I generally follow Hanlon's razor: 'Never ascribe to malice that which can adequately be explained by incompetence.' I have a hard time believing that a cover-up at Boeing was orchestrated over several years by a group of people, none of whom cared about loss of life. I can easily believe that they didn't take the problem seriously, and that they were biased towards conclusions in which it wasn't their fault.

mannykannot(3958) 4 days ago [-]

I do not wish to diminish how serious any tampering with the evidence would have been, or of Boeing's more general attitude of dismissing signs that there was a problem with the equipment, but it is not entirely clear from the article that someone from Boeing removed the Colorado Springs PCU spring and end cap. The article says the item had apparently been in the possession of United Airlines and valve designer/manufacturer Parker Bertea before their absence was noticed. The unit is also described as having been heavily damaged in the crash, to the point where several other parts had to be replaced before it was tested.

qrbLPHiKpiux(3990) 4 days ago [-]

> Should not there be some criminal charges?

With the size of the defense contract they have with the USG?

nabla9(705) 4 days ago [-]

Having theories and arguments that are convenient and self serving is not a crime. It's called public relations.

For crime to happen there has to be criminal activity.

matz1(10000) 4 days ago [-]

> Should not there be some criminal charges?

That depends on whether you can convince the court.

ams6110(3854) 4 days ago [-]

I had heard of the history of the 'rudder hardover' problems with the 737 but have never heard that Boeing was actively subverting the investigation. Assuming it's true, I'd agree that it's appalling behavior, but this post alone doesn't convince me. A lot of complex systems can fail in unlikely ways and it doesn't imply malfeasance that the company was wrong about the cause.

How many times have you investigated a weird, intermittent software or system problem and gone down the wrong path (or paths) because what turned out to be the actual cause seemed so unlikely, even when there were clues that in retrospect you should have given more weight?

itronitron(10000) 4 days ago [-]

They were removing evidence, and that implies malfeasance.

mrguyorama(3983) 4 days ago [-]

I have never hindered any investigative agency, internal or otherwise, attempting to discover flaws in systems I have built. Doing so would be morally abhorrent, and hopefully someday illegal

georgecmu(180) 4 days ago [-]

This reminded me of the Yak-42 jackscrew failure due to a design defect, which caused a crash killing 132 onboard in 1982 [1]. The entire fleet was grounded for more than two years until the full investigation was completed and the defect was fixed. Three design engineers were convicted.

[1] https://en.wikipedia.org/wiki/Aeroflot_Flight_8641

hopler(10000) 4 days ago [-]

Convicted of what? Smells like scapegoating.

'The investigation concluded that among the causes of the crash were poor maintenance, as well as the control system of the stabilizer not meeting basic aviation standards.'

masonic(2296) 4 days ago [-]

It made me think of the horizontal stabilizer jackscrew jam on Alaska Airlines 261[0], but I misremembered that as a 737 rather than a MD-83. SF radio personality Cynthia Oti was among the passengers.

[0] https://en.wikipedia.org/wiki/Alaska_Airlines_Flight_261

oldgradstudent(3917) 4 days ago [-]

Maybe the FAA should study how Soviet authorities responded to a crash of a new type of aircraft?

> Operation of the entire Yak-42 fleet was suspended until 1984, pending elucidation of the causes of the disaster and elimination of the identified deficiencies.

Admiral_C(10000) 4 days ago [-]

I wrote this, not happy it got picked up and posted here without credit. Here's the original Reddit thread: https://www.reddit.com/r/CatastrophicFailure/comments/adl0jk...

Jolter(10000) 4 days ago [-]

To a HN reader, posting the Reddit thread would have been less useful than the imgur album, because the reader would have to click through to the album. Maybe you should edit the album and link it back to the reddit thread?

dang(163) 4 days ago [-]

I've put that link at the top and mentioned your username. If there's something else we can do to credit you, let us know.

I briefly changed the submission to point to that URL, but Jolter has a good point too: https://news.ycombinator.com/item?id=19394058.

coryfklein(2576) 4 days ago [-]

I'm not justifying how this was linked here, but I'd recommend putting your own attribution on the original source of the content (in this case, Imgur) rather than relying on everyone to access it through the specific forum you originally posted it on.

Also, Hacker News doesn't have submitter-filled 'descriptions' for linked pages, so there is literally no way to add attribution metadata when posting a link.

andrewflnr(3580) 4 days ago [-]

HN likes to link as closely to the original source as possible, which in this case looks like the imgur album rather than the reddit thread linking to it. It's probably not personal, just a de-noising instinct gone awry.

smlacy(3740) 4 days ago [-]

You'd be better served hosting this content on a traditional blogging / content platform rather than imgur, IMHO.

'Picked up without credit' on a platform that provides no attribution whatsoever isn't really a hill to die on.

kibwen(772) 4 days ago [-]

To showcase the parent's impressive efforts, it looks like this is but one of a ~70 part series on airplane crashes: https://old.reddit.com/r/AdmiralCloudberg/comments/a4ckhv/pl...

gkmcd(10000) 4 days ago [-]

For what it's worth, I recognized this as your content immediately. Thanks for your work.

Chris_Chambers(10000) 4 days ago [-]

How cute, he thinks he "owns" some text he shared online in 2019.

amelius(848) 4 days ago [-]

If the extreme deflection of the rudder causes serious control problems, then shouldn't the extremes simply be set at lower deflections (using a physical barrier/limiter)?

eppp(10000) 4 days ago [-]

That assumes that you do not need high deflection for low speed maneuvering. I would think you would in fact need that much for taxiing and landing/takeoff in a crosswind.

notaharvardmba(10000) 4 days ago [-]

If that's not possible, then at the very least some type of feedback loop that senses the actual rudder position, compares it to the desired position (based on the pedal input), and feeds the result into the black box should be a no-brainer.
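
A minimal sketch of the kind of monitor described above, in Python; the tolerance, field names, and helper are hypothetical, purely illustrative:

def check_rudder(commanded_deg: float, actual_deg: float,
                 tolerance_deg: float = 2.0) -> dict:
    # Compare what the pedals commanded with what the surface actually did.
    error = actual_deg - commanded_deg
    return {
        'commanded_deg': commanded_deg,
        'actual_deg': actual_deg,
        'error_deg': error,
        # A large mismatch would flag a hardover/reversal for the recorder.
        'disagree': abs(error) > tolerance_deg,
    }

print(check_rudder(5.0, -20.0))  # commanded right, went hard left -> disagree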

g-erson(10000) 4 days ago [-]

Was that the final resolution, then - that Boeing was ordered to replace the part, and the crashes stopped happening? Was there ever a formal investigation into whether Boeing knew the true cause of the rudder hardover and chose to ignore it and blame other things?

ams6110(3854) 4 days ago [-]

Without doing any research I'd assume that either there was an investigation and insufficient evidence was found, or that there was not because there was not even probable cause to begin one.

To think otherwise is to believe in a conspiracy between Boeing, NTSB, FAA, and the FBI or whichever law enforcement agency would have jurisdiction.

The FAA has some conflict of interest in its mission, but the NTSB does not and is generally considered to be the premier accident safety investigation group in the world.

muricula(3601) 4 days ago [-]

From the post: 'The NTSB report recommended that the valve be redesigned, and the Federal Aviation Administration mandated that the changes be implemented by November 2002. Since then, no 737s have crashed due to rudder hardover or rudder reversal.'

southern_cross(10000) 4 days ago [-]

Boeing certainly hasn't been inspiring a lot of confidence lately, given that decisions like this:

https://www.seattletimes.com/business/boeing-aerospace/boein...

have apparently been leading to decisions like this:

https://www.seattletimes.com/business/boeing-aerospace/air-f...

lukewrites(10000) 4 days ago [-]

'Best' part of the first article:

> Aero Mechanic, the District 751 monthly newspaper, accused Boeing of "essentially masking defects," by pressuring inspectors to not record defects when found but instead to simply have them fixed, then afterward produce data to the FAA showing a big decrease in defects as a justification for cutting out inspections.

Someone1234(4022) 4 days ago [-]

That was a surprisingly interesting read. I have to admit I was skeptical just because it was hosted on imgur, but both the images/text paint an interesting picture worthy of discussion.

thatswrong0(10000) 4 days ago [-]

If you want to read more, this is from a series on the subreddit /r/catastrophicfailure, written by Admiral_Cloudberg: https://www.reddit.com/user/Admiral_Cloudberg

cryptonector(4006) 4 days ago [-]

The author has posted many of these, all on imgur.

rubicon33(3893) 4 days ago [-]

>It is widely suspected that Boeing knew about the problems with the PCU for decades but had done nothing, despite the hundreds of reported incidents. Because no one was collecting all the accounts of rudder deflections, it was likely that no one except Boeing realized how common they were. It was not until people started dying in crashes that enough scrutiny was placed on the 737 to uncover this history of ignoring the problem.

I can't help but read these stories, and all the accounts of various other crashes, and question the whole 'safest mode of transport' line we've been fed. 'Safest' doesn't really mean anything to me, I guess.

Is it really outside of the realm of possibility that flying is less safe than the number we've all been given? I've certainly never seen the raw data myself, but it's hard not to take this skeptical perspective when you dig deeper into the number of crashes that happen worldwide.

foldr(4027) 4 days ago [-]

Yep, it's pretty much outside the realm of possibility. You can't hide plane crashes in commercial aviation.

deusum(10000) 4 days ago [-]

For context, there are roughly 100,000 flights per day worldwide now.

The incidents, though certainly deadly in some cases, sound like a very small percentage.

swasheck(3896) 4 days ago [-]

Statistically fewer incidents. Greater scale of tragedy, emotionally.

Though there are entire branches of fields that use statistics to mitigate risk, probabilities are tricky things. I found this interesting read a few days ago https://aeon.co/ideas/the-concept-of-probability-is-not-as-s... and it seems to have some overlap here.

How are we calculating 'safety' when it comes to transportation? I'm not sure that air transportation is less safe than other forms, but I wanted to pass this along as support for some sort of skepticism.

the_arun(3649) 4 days ago [-]

Any whistleblowers at Boeing who could come out and share facts on how Boeing employees are reacting to these incidents?

rootusrootus(10000) 4 days ago [-]

Do we think they're all in conspiracy overdrive mode looking for ways to cover their asses and burning all the evidence?

Chances are better that (assuming they don't already know the answer) a bunch of them are working long hours trying to find the cause of and solution to this problem. I imagine every flight simulator at Boeing's disposal is being used to analyze this from every angle.

12345anon(10000) 4 days ago [-]

See what happened to John Liotine. https://en.wikipedia.org/wiki/Alaska_Airlines_Flight_261 Honestly, I'm more scared of airlines skimping on maintenance and fights with their mechanics (southwest) than this.

sschueller(2659) 4 days ago [-]

Some tried back in 2011; it didn't go too well: https://youtu.be/vWxxtzBTxGU

BoorishBears(4007) 4 days ago [-]

I worked under a CEO who had previously been at Boeing, and in a conversation about diminishing returns in quality, he said that while he worked there, there was at times a number that a life was worth when making decisions.

Now I'm not sure if he meant that literally and there's a number in the Boeing employee handbook, but he had a point. He said they could make planes cost twice as much and save a few lives that will be lost one day, but no one would be able to afford flying.

This case definitely seems like that mentality gone wrong, but it's interesting to realize that, yes, expense was spared in making your plane/car/boat/train as safe as possible.

jdsully(10000) 4 days ago [-]

The NHTSA has a similar number; I believe it's in the realm of $2 million per life. It's based off of medical costs and a few other 'organic' numbers to estimate what the population as a whole values a life at.

It's morbid, but it has to be this way or transportation would be unaffordable.

peteradio(10000) 4 days ago [-]

Sounds like something a CEO would say to cover up for shitty management and cost cutting R&D.

jfk13(3931) 4 days ago [-]

Sure. But the Ray Miller quote suggests that the issue was not that a modification to make the rudder mechanism more reliable was itself prohibitively expensive from an engineering point of view. Rather, the concern was that making such a modification could open up liability issues, as it would be an acknowledgement that the plane was faulty, and the FAA and Boeing were anxious to avoid being held liable.

That is what I think people find offensive.

arisAlexis(3514) 4 days ago [-]

This is a huge discussion, and seen with a cynical eye it would lead to troubling results. For example, telling someone to pay more for this flight so that a 1/100,000 chance of dying becomes 1/1,000,001... this is a field of psychology, even. Mix this with capitalism, profit, etc. Not taking sides; I just find this extremely complex.

smsm42(3748) 4 days ago [-]

> but it's interesting to realize yes, cost was spared in making your plane/car/boat/train as safe as possible

Isn't it always the case? Driving cars and flying planes are risky activities (the former much riskier, but still). There are ways to reduce the risks, but millions of people choose to buy cars without the most recent safety features. A lot of people choose to drive tired, intoxicated, distracted, in bad weather, or while using mobile devices, knowing it is risky. There could be more safety features in cars - stronger materials, more accident-preventing electronics, enforced speed limits, etc. - but nobody would buy a car that costs $200K and can only go 30 mph, even if it'd be super-safe for the driver. So yes, we know we trade some safety for reduced cost (either directly monetary or convenience). There's no surprise there, and no surprise that manufacturers participate in the balance too. Of course, the consumer can make a voluntary decision about accepting risk only if they are informed about the risks - if the risks are purposely concealed from consumers, then it's a problem.

docker_up(3536) 4 days ago [-]

Did Boeing executives go to jail for this disgusting cover up? I hope that the diesel cover-up of Volkswagen and the people that went to jail will motivate governments to pursue criminal charges against Boeing and/or Airbus if similar things happen.

Fighting blame by lying and deflection at the risk of death should be a criminal offense. It's mass murder.

gist(2218) 4 days ago [-]

Why are you assuming what you have just seen is even true? An image, no attribution and you honestly don't even know who cooked it up or why.

lgvln(10000) 4 days ago [-]

Interesting point. We could be looking at a huge compensation from Boeing to the major airlines for this. But personally, I think the damage to their reputation, and possibly FAA's, would far exceed that.

macspoofing(10000) 4 days ago [-]

>Did Boeing executives go to jail for this disgusting cover up?

Did you read this one imgur blog and assume that what is stated is an accurate representation of reality?

iforgotpassword(10000) 4 days ago [-]

Yes, putting it into perspective like this makes the diesel case quite ridiculous. But I guess since Boeing is an American company it gets quite some bonus points before anything will happen. Not to say other countries wouldn't behave similarly...

abruzzi(3812) 4 days ago [-]

The impression I got from the text is that Boeing has steadfastly denied a coverup, and there is probably not enough evidence to convict if they were taken to court. The earlier crashes, the ones that claimed lives, sound like they haven't been reclassified to include this as the cause. The only incident specifically ascribed to the control valve is the one where the pilot regained control and there was no actual loss of life. That allowed the FAA to determine the cause and force the changes, but it didn't allow them to retroactively change the cause of past crashes (something Boeing undoubtedly would have fought).

umvi(10000) 4 days ago [-]

That was harrowing... I feel like there should be quite a few Boeing executives in prison for life because of this.

imglorp(3662) 4 days ago [-]

Yeah, if things had gone just slightly differently. That, and also the company may well have ended and another gained US airline dominance.

I'd like to see a James Burke 'Connections'-style series on near-misses - what could have been. Another case I like to think about is Sears missing the boat on the Internet. They were catalog-based 80 years before Amazon and could well have decimated the industry if a few key decisions had been different.

js2(730) 4 days ago [-]

Somewhat related:

https://nsc.nasa.gov/resources/case-studies

This one is interesting:

In March 2010, a 29-year-old shift nurse left her job in Atlanta, Georgia and headed to her boyfriend's house. She was driving her 2005 Chevy Cobalt on a two-lane road as she approached a half-mile downhill straightaway. As the road leveled after the straightaway, she approached an area where some rainwater had accumulated. Shortly after encountering this section of roadway, she apparently lost control of her Cobalt as it hydroplaned across the center line. The rear passenger side of her car was struck by an oncoming Ford Focus, causing the Cobalt to spin off the road and fall 15 feet before landing in a large creek around 7:30 p.m. The impact of the crash broke the nurse's neck, an injury that led to her death shortly after she arrived at the hospital.

While this tragedy might sound like a typical crash scenario, it was particularly puzzling to the victim's parents. Why? According to Atlanta magazine, she always wore her seat belt and never had a speeding ticket. So how did she suddenly lose control of her car on that fateful evening? Sadly, this unsettling question remained unanswered until several years later—after many more drivers suffered similar fates.

...

The ignition switch did not meet the mechanical specifications for torque and required less force to turn the key than its designers originally ordered. If the driver's knee hit the key fob, the car would often turn off, causing stalling at highway speeds and disabling the airbags.

https://nsc.nasa.gov/features/detail/hidden-hazards

Edit: apparently NASA is checking referrer and you can't follow this link directly. It's the third case study down the page from the first link.

jlv2(10000) 4 days ago [-]

The 'hidden-hazards' link says 'This is a NASA-Only site'

humblebee(10000) 4 days ago [-]

While this tragedy might sound like a typical crash scenario, it was particularly puzzling to the victim's parents. Why? According to Atlanta magazine, she always wore her seat belt and never had a speeding ticket. So how did she suddenly lose control of her car on that fateful evening? Sadly, this unsettling question remained unanswered until several years later—after many more drivers suffered similar fates.

I don't understand why this paragraph was written this way. Driving highway speeds and hitting a puddle of water seems like a reasonable cause to lose control of a car and result in the crash. I don't understand why this would be puzzling. On the other hand, the lack of airbag deployment would be puzzling.

dsfyu404ed(10000) 4 days ago [-]

In my opinion, the GM ignition switch thing was overblown.

Getting worn out and sloppy and failing in one of several ways (e.g. turning the car off unexpectedly) is not an atypical failure mode of old ignition switches. The only reason it was a big deal was because wealth-ish people (i.e. not someone driving a 1993 Corolla) driving fairly new (at the time) vehicles died.

Their cover-up was somewhat scummy, but I don't think it was the kind of thing they should have had to cover up.

agumonkey(925) 4 days ago [-]

I forget why there are no parachutes on planes? Weight?

I'd pay a premium to have my own emergency wingsuit...

heptathorp(10000) 4 days ago [-]

Weight, and size, and the fact that most people don't know how to put one on properly or use one, and there's usually not enough time to prepare in an emergency, and it would be utter chaos inside the plane if everyone tried to put on their parachute and escape.

alwayseasy(10000) 4 days ago [-]

Practicality, training, false-alarms...

anderskev(10000) 4 days ago [-]

Assuming you were able to open the door and open a parachute at that speed, you'd most likely be sucked into the tail of the aircraft.

outworlder(3442) 4 days ago [-]

> I forgot why there are no parachutes on planes ? weight ?

There are parachutes on planes, as in whole airframe parachutes. Just not airliners. See Cirrus aircraft.

However, even in a Cirrus, you can only open the parachute within very specific parameters (altitude, airspeed and so on). Exceed these, and your parachute is worthless.

For obvious reasons, a whole-airframe parachute in a 737 is a crazy proposition. It would be ridiculously heavy. Also, the plane flies much higher and much faster, so even if you COULD fit one made of some form of unobtanium, it would likely be useful only under very specific scenarios.

The other option would be to provide individual parachutes. Much like life vests.

Ok great. Let's assume they are small and can be stowed under the seat. How long would it take for a non-trained individual to put one of these on properly? Do they even have enough space to do it? How would they exit the plane? Most airliners don't have cargo-bay style doors. Exiting through the side doors is a bad idea. Who would inspect and repack hundreds of parachutes per plane?

The plane would have to be under controlled flight and slow enough for this to even have a chance to save any passengers. If you are in a slow and controlled flight, what use is this? Just land somewhere.

For Ethiopian and Lion Air, it all happened so fast after takeoff that it is unlikely the pilots even had time to run their checklists. And we want to don parachutes on 100+ people and have them jump from an out-of-control plane?

It just doesn't make sense from any angle.

rsweeney21(3965) 4 days ago [-]

My previous company made software for DOTs. Over the years I learned that DOTs assign a dollar amount to human lives/deaths when calculating the cost-benefit ratio of implementing roadway safety improvements. It very much reminds me of the acceptable number of deaths mentioned in the article. DOTs don't like talking about this, of course.

There should be a law that grants companies safe harbor for reporting and fixing defects in their products without the risk of accepting liability - a sort of self-whistleblowing.

Erlich_Bachman(10000) 4 days ago [-]

Is there realistically any other way to do this, though? The value of a life, as estimated by a human in a generic situation, is presumably infinite. However, companies, governments, regulating bodies, etc., have to regulate actual physical measures, which cost money, time, etc. - all very finite things. These agents need to act in an actual physical environment, with, for example, limited budgets or a choice between two different safety measures. At some point the human life is going to have to enter that equation if we are going to be talking about safety issues. How do you calculate an equation with finite and infinite variables without assuming a finite value for the human life?

tuna-piano(3672) 4 days ago [-]

While we're bashing Boeing, let us not forget how they tried to swindle taxpayers into buying their tankers... a scandal which led to their CFO going to prison.

https://www.washingtonpost.com/archive/opinions/2003/10/06/t...

cc439(10000) 4 days ago [-]

Honestly, I can't understand how this was a bad thing given just how dated and stretched-thin the KC-135 fleet was at the time, and those problems have only grown worse. The Air Force has been trying to replace the KC-135 for decades, since the newest airframe was produced in 1965 and their entire in-flight refueling logistics rely on these aircraft. Yes, it is unseemly to slip something into a continuing resolution to fund a war effort, but when the bureaucratic roadblocks to purchasing something so critically important yet so unsexy as a flying gas station are this high, one has to wonder if the people involved were acting out of good-willed desperation to help avert a massive problem with critical defense infrastructure. While $16 billion for 100 aircraft may seem exorbitant, it has taken until this past year for the KC-135's replacement to enter service, at a cost of $179 million per unit. That still beats the inflation-adjusted cost of $160m per 767 tanker, but think about how much money has been wasted on upfit and restoration programs for the 60+ year old KC-135 airframes over the 15 years since that 'scandal'.

Source on KC-46 info and general issues with the aging KC-135 fleet: https://www.defensenews.com/opinion/commentary/2019/01/16/pe...

syn0byte(10000) 4 days ago [-]

And MrHands[0] was a Boeing engineer. Over extending rods indeed...

0) Nope.





Historical Discussions: Google has added DuckDuckGo as a search engine option for Chrome users (March 13, 2019: 928 points)

(928) Google has added DuckDuckGo as a search engine option for Chrome users

928 points 5 days ago by jmsflknr in 2644th position

techcrunch.com | Estimated reading time – 4 minutes | comments | anchor

In an update to the chromium engine, which underpins Google's popular Chrome browser, the search giant has quietly updated the lists of default search engines it offers per market — expanding the choice of search product users can pick from in markets around the world.

Most notably it has expanded search engine lists to include pro-privacy rivals in more than 60 markets globally.

The changes, which appear to have been pushed out with the Chromium 73 stable release yesterday, come at a time when Google is facing rising privacy and antitrust scrutiny and accusations of market distorting behavior at home and abroad.

Many governments are now actively questioning how competition policy needs to be updated to rein in platform power and help smaller technology innovators get out from under the tech giant shadow.

But in a note about the changes to chromium's default search engine lists on a GitHub instance, Google software engineer Orin Jaworski merely writes that the list of search engine references per country is being "completely replaced based on new usage statistics" from "recently collected data."

The per country search engine choices appear to loosely line up with top-four market share.

The greatest beneficiary of the update appears to be pro-privacy Google rival, DuckDuckGo, which is now being offered as an option in more than 60 markets, per the GitHub instance.

Previously DDG was not offered as an option at all.

Another pro-privacy search rival, French search engine Qwant, has also been added as a new option — though only in its home market, France.

DDG has been added in Argentina, Austria, Australia, Belgium, Brunei, Bolivia, Brazil, Belize, Canada, Chile, Colombia, Costa Rica, Croatia, Germany, Denmark, Dominican Republic, Ecuador, Faroe Islands, Finland, Greece, Guatemala, Honduras, Hungary, Indonesia, Ireland, India, Iceland, Italy, Jamaica, Kuwait, Lebanon, Liechtenstein, Luxembourg, Monaco, Moldova, Macedonia, Mexico, Nicaragua, Netherlands, Norway, New Zealand, Panama, Peru, Philippines, Poland, Puerto Rico, Portugal, Paraguay, Romania, Serbia, Sweden, Slovenia, Slovakia, El Salvador, Trinidad and Tobago, South Africa, Switzerland, U.K., Uruguay, U.S. and Venezuela.

"We're glad that Google has recognized the importance of offering consumers a private search option," DuckDuckGo founder Gabe Weinberg told us when approached for comment about the change.

DDG has been growing steadily for years, and has also recently taken outside investment to scale its efforts to capitalize on growing international appetite for pro-privacy products.

Interestingly, the chromium GitHub instance is dated December 2018 — which appears to be around the time when Google (finally) passed the Duck.com domain to DuckDuckGo, after holding onto the domain and pointing it to Google.com for years.

We asked Google for comment on the timing of its changes to search engine options in chromium. At the time of writing the search giant had not responded.

Reached for comment on being added as an option in its home market, Qwant co-founder Eric Leandri said "thank you" to Google for adding the search engine as an option in France, claiming "certainly it's because of the number of users of Qwant" in its home market.

But he added that Qwant still recommends to its users that they use Mozilla's Firefox browser or the pro-privacy Brave browser.

He also said it would have been nicer if Google had also added Qwant in Germany and Italy where he said the search engine also has a following.

Asked whether he believes expanded search engine options in Chrome will be enough to stave off further regulatory intervention related to Google's market dominance, Leandri said no — pointing out that Android OEMs still have to pay Google to install a non-Google search engine by default, following the European Commission's Android antitrust ruling last year, as we've reported previously.

"It's a joke," he added. "But thank you again for Chrome 73, I really and sincerely appreciate [it]. I still recommend Firefox and Brave."

This report was updated with comment from Qwant




All Comments: [-] | anchor

novaRom(4020) 5 days ago [-]

Fun facts:

* DDG in 2018 served a similar number of search queries as Google did in 2000.

* DDG growth rate is accelerating

* Google search growth rate is negative

* Google's share of global search is shrinking

DDG stats: https://duckduckgo.com/traffic

Google stats: http://www.internetlivestats.com/google-search-statistics/

gregknicholson(3968) 5 days ago [-]

How important to Google is their web search product?

I know it was their first product, but I would imagine they get much of their revenue from other avenues, such as Android's built-in totally-not-antitrust web search app, and YouTube and Gmail and web ads...

glenrivard(10000) 5 days ago [-]

Looks like Google search share is still increasing but just slightly. Now But f share has fallen over 20% the last couple of months.

Google is at 93% so not a ton more share to take.

http://gs.statcounter.com/search-engine-market-share

alpb(839) 5 days ago [-]

I think you're misinterpreting the chart you quoted.

Google search growth rate is always positive in that page. It just decelerated. Growth rate being negative means you're actively losing more users than you gain.

pcnix(10000) 5 days ago [-]

One argument to be made is that Google Search can only go downwards from here, as it is currently a clear market leader, and the remaining segments are not easy for them to break into. For example, Baidu has a stranglehold on search in China, and that's not likely to change drastically, with Google facing internal opposition to entering China.

_eht(3827) 5 days ago [-]

I just wish they would drop their featured Yelp results. You too, Bing.

aboutruby(10000) 5 days ago [-]

At some point they are capped by the number of people having access to and using the Internet; the same goes for Facebook or any world-scale tech company.

theBarleyMalt(10000) 5 days ago [-]

I would like it if DuckDuckGo had an easy way to Google a search just performed there. It's good, but it doesn't replace Google regularly enough yet.

brentadamson(3979) 5 days ago [-]

Something like what we do at Jive Search? Google/Bing/Amazon/YouTube are the defaults but these are customizable with the 'b' param....'&b=b,yf'

ebeip90(10000) 5 days ago [-]

Just put "!g" at the beginning of the query (or !s for startpage, which uses the Google search engine)
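
Concretely, a bang is just part of the query string; a minimal Python sketch (the ddg_url helper is hypothetical, while the bang redirect itself is DuckDuckGo's documented behavior):

from urllib.parse import quote_plus

def ddg_url(query: str) -> str:
    # DDG interprets a leading '!g' as 'redirect this search to Google',
    # and '!s' as 'redirect to Startpage'; plain queries stay on DDG.
    return 'https://duckduckgo.com/?q=' + quote_plus(query)

print(ddg_url('!g elasticsearch reference'))
# -> https://duckduckgo.com/?q=%21g+elasticsearch+reference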

gist(2218) 5 days ago [-]

For those that don't know, 'quietly' is newspeak for something that happened without a press release being issued. [1]

[1] Because in the world of the press everything should be announced so they can broadcast it and sell advertising by running stories. And not have to find it out by other more laborious methods.

stcredzero(3023) 5 days ago [-]

> For those that don't know, 'quietly' is newspeak for something that happened without a press release being issued.

This is a deliberate Orwell reference? Vernor Vinge speculated in Rainbows End that everything which couldn't be searched for in a search engine would effectively become invisible. In 2019, that manifests as, 'everything which can't be searched for in a search engine, which is backed up by crosslinked mainstream news sites and which isn't warded by words meant to scare casual readers away.'

TallGuyShort(2981) 5 days ago [-]

It is also often followed with a statement that the real authority on the issue didn't even bother responding with any comments, when in all likelihood the journalists also didn't try very hard to reach anyone. As it is in this case.

andrenotgiant(3971) 5 days ago [-]

It looks like in Jan 2019, Google Analytics finally started classifying DDG as an Organic Search engine instead of lumping it into 'Referrals' category.

Although the change has the awkward effect of splitting DDG reporting into two groups based on the date of the traffic.

eclat(10000) 5 days ago [-]

This might explain some GA reports I've recently looked at.

bootlooped(10000) 5 days ago [-]

Not on Chrome mobile though, where you still cannot add search engines manually.

craigc(2965) 5 days ago [-]

I have been using Duck Duck Go on Chrome Mobile since the end of January. Do a search on the site first then it will show up in the list.

katsura(10000) 5 days ago [-]

On iOS I just updated and I was able to set ddg.

graycrow(10000) 5 days ago [-]

I started to use Chrome only for Google services (Gmail, YouTube, Maps, etc.) and Firefox with DDG for everything else. With this setup Google can send home only the data they already know.

newscracker(4022) 5 days ago [-]

As someone who supports Firefox, I would say that it's important to signal to Google that there are Firefox users using its services. People have been reporting issues with some of Google's services on Firefox. Skype, from Microsoft, was recently discovered not to support Firefox. Every signal users send to these companies matters.

ppeetteerr(10000) 5 days ago [-]

Probably so that they may track which searches people are performing on DuckDuckGo

marpstar(10000) 5 days ago [-]

sad but true. I'd be interested to know if this data (i.e. non-Google search engine queries) is sent to Google. I'm assuming it is.

thekyle(10000) 5 days ago [-]

I don't see how doing this helps them track DuckDuckGo searches any better than they already could in Google Chrome.

EDIT: Added italicized text for clarity.

sidcool(216) 5 days ago [-]

Genuine question: if I configure DuckDuckGo as my default search engine, would my keystrokes still be sent to Google?

chvid(3711) 5 days ago [-]

Is the interface between browser and search engine not explicitly defined? Why can't I add an arbitrary web application as a search engine for Chrome?

(I know ... for business reasons ... but isn't Chrome open source? How is this in practice prevented?)

Ajedi32(1735) 5 days ago [-]

You can. In fact, Chrome automatically creates search engines for any site you search on. I can search Amazon by typing 'Am<tab><query><enter>' in the address bar for example, and Chrome learned how to do that automatically despite not having any knowledge of how Amazon's search system works when I first installed it.

I guess the only difference is that with this change, DDG is available as a search engine by default with a blank install, even before you've actually used it.

thekyle(10000) 5 days ago [-]

You can add an arbitrary web application as a search engine for Chrome, just like in any other modern web browser.

You go to Settings > Manage search engines > Add or pick one of the auto discovered ones. This also works in the mobile versions of Chrome.

You can find more instructions here: https://support.google.com/chrome/answer/95426
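
Under the hood, a custom engine boils down to a URL template in which %s marks the query. A minimal Python sketch of how such a template expands (the expand helper is illustrative, not Chrome's actual code):

from urllib.parse import quote_plus

# The kind of URL template Chrome's 'Add search engine' dialog expects,
# with %s standing in for the query.
TEMPLATE = 'https://duckduckgo.com/?q=%s'

def expand(template: str, query: str) -> str:
    # Roughly what the omnibox does when you invoke the engine's keyword.
    return template.replace('%s', quote_plus(query))

print(expand(TEMPLATE, 'rudder hardover'))  # https://duckduckgo.com/?q=rudder+hardover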

ghinshell(10000) 5 days ago [-]

Not sure if this is good or bad for DuckDuckGo, as others have pointed out, this can be used to track DuckDuckGo usage. Interesting though :D

Zhyl(10000) 5 days ago [-]

I would expect to see a small bump in the stats [1], which, given that this is DDG's main source of revenue, is absolutely a good thing.

'DuckDuckGo is one of our main rivals' is a bit of a self-fulfilling prophecy for Google. They need to amp up DDG's legitimacy to ward off accusations of antitrust. Credibility, legitimacy, and awareness are really the only things DDG needs to reach a wider audience and gain greater adoption.

[1] https://duckduckgo.com/traffic

40acres(3582) 5 days ago [-]

Is this the first of a wave of anti-antitrust moves by big tech? It's a play that I certainly would advise. It makes sense to trade marginal revenue for low-hanging-fruit gestures like these to take the air out of folks like Warren and the European Competition Committee.

ucaetano(2313) 5 days ago [-]

No, it is just an update to what they've been doing for years: showing the top 4 search engines per market as options.

Nothing new to see here.

wintorez(10000) 5 days ago [-]

I started using two browsers + search engines:

1 - I use Chrome + Google for work stuff

2 - I use Firefox + DuckDuckGo for personal stuff

I sync my passwords with Bitwarden.

Ingon(10000) 5 days ago [-]

Initially I was doing the same, but then switched to using Firefox profiles - 1 for work + google search; and 2 for personal with DDG and ublock origin.

Also slowly migrating to Bitwarden.

luckman212(4005) 5 days ago [-]

I started doing the same. Bitwarden is the best! I urge HN users who haven't checked it out to give it a try and please support the project so it lives on.

mrweasel(3923) 5 days ago [-]

I have Chrome for one purpose, and one purpose only: vSphere installations that have yet to be upgraded to include the HTML5 version. Chrome is the 'Flash browser'.

Over the last few years DuckDuckGo have become so good at handling my queries that I only occasionally use Google. That typically happens when DuckDuckGo doesn't find what I expect, but it always turns out that neither does Google.

stcredzero(3023) 5 days ago [-]

> I use Firefox + DuckDuckGo for personal stuff

> I sync my passwords with Bitwarden.

I have started doing this as well, except I'm using KeePassXC and using Dropbox to distribute the file everywhere. Would Bitwarden work behind a company firewall at a company that doesn't allow Dropbox?

newscracker(4022) 5 days ago [-]

Curious — why not Firefox and Startpage for work stuff or Firefox and DDG for work stuff? You can always resort to bang commands if DDG results aren't great for particular searches. You can use the Multi-Account Containers extension (and related container extensions) to have Firefox work for multiple "profiles" of usage.

Or you could even use Chrome and DDG or Chrome and Startpage for work.

Anything where Chrome and/or Google are avoided is a good thing, IMO.

brento(4025) 5 days ago [-]

@wintorez, I started using Brave browser[1] and DuckDuckGo for work and personal. It's based on Chrome but with privacy in mind.

Also, I currently use 1Password, but have been thinking about using Enpass[2] because you can sync with any cloud drive. I like the idea of syncing to a third party cloud drive in case my password service is compromised.

[1]: https://brave.com [2]: https://www.enpass.io/

wicket(3620) 5 days ago [-]

I don't think this really changes anything. It's more important to Google that DuckDuckGo users don't disable Chrome's prediction service, that way they can still collect search data on them. Adding DuckDuckGo as a search engine option whilst they leave the prediction service option intact means that this is nothing more than a publicity stunt. It's actually quite deceiving for many users who do not realise they are still sending data to Google.

linuxftw(10000) 5 days ago [-]

I had a feeling that google was getting a sense of the search traffic no matter which 'search engine' you configured.

I suspect people that actually care about privacy aren't using Chrome though.

_dark_matter_(3976) 5 days ago [-]

Hold up, are you saying that users who use DDG are still sending _all_ their searches to Google? I'm not disagreeing but I'd love to see a source for this. It seems to me that if you switch, Chrome should use the DDG autosuggest API [0].

[0] https://duck.co/help/features/autosuggest

pkasting(10000) 5 days ago [-]

Eh? 'Use a prediction service' is about whether you send data as you type _to your default search engine_, not to Google. If you change to DuckDuckGo as your default search engine, toggling 'use a prediction service' on and off will not send any more or less data to Google, because omnibox typing is never sent to Google in that case regardless.

Source: I am the former Chrome omnibox owner. You can find the relevant code for this starting at https://cs.chromium.org/chromium/src/components/omnibox/brow... ; look for how GetDefaultProviderURL() works and when that query is sent. You can also watch packets with your favorite network analyzer.

keiru(10000) 4 days ago [-]

>nothing more than a publicity stunt

It's the complete opposite to that, and you said it yourself. Their aim is to quietly retain/recapture users while keeping antitrust at bay, and they did well precisely in not publicizing it.

libso(4023) 5 days ago [-]

Purely from a search quality and end-user experience standpoint, I'd choose Google or Bing over DDG. I gave DDG a shot for over a couple of months, but I found myself using other search engines more often than not for lack of quality results.

hnruss(10000) 5 days ago [-]

Can you give some specific examples of search queries in which Google had better results?

novaRom(4020) 5 days ago [-]

My impression is that Google is helping DDG become a popular alternative to its own search engine. Why? Just a funny fact: in 2018, Google transferred ownership of the domain name Duck.com to DuckDuckGo.

SlowRobotAhead(10000) 5 days ago [-]

Antitrust and PR.

Look, you know that Google, in an act of benevolence, gave them duck.com last year; that's PR.

The antitrust angle is obvious. They want to appear not to be the only game in town, especially when you have people like Warren making (hollow) antitrust campaign noise.

lawrenceyan(2232) 5 days ago [-]

DuckDuckGo is pretty much just Bing with a duck taped over it though.

FabHK(3821) 5 days ago [-]

Except for all that privacy and functionality and stuff that Bing doesn't have (like bangs, or cursor down + enter to directly go to search result without using your mouse, etc. etc.)

sigacts(3840) 5 days ago [-]

Isn't DuckDuckGo just a white label of Bing?

freediver(3771) 5 days ago [-]

For the most part yes. They could be getting search results from other paid search engine APIs but you have to balance cost of providing results with ad/affiliate revenue.

untog(2290) 5 days ago [-]

No:

> DuckDuckGo gets its results from over four hundred sources. These include hundreds of vertical sources delivering niche Instant Answers, DuckDuckBot (our crawler) and crowd-sourced sites (like Wikipedia, stored in our answer indexes). We also of course have more traditional links in the search results, which we also source from a variety of partners, including Oath (formerly Yahoo) and Bing.

https://duck.co/help/results/sources

rkangel(3929) 4 days ago [-]

Even if they are, with DDG in between you and Bing you only have to trust DDG from a privacy point of view (and that's their whole selling point).

ocdtrekkie(2602) 5 days ago [-]

As noted in the article, apparently Google is presenting the top four search engines used in a given country. So presumably this means they're seeing a lot more DuckDuckGo searches in the data they're collecting from Chrome users.

It's also a solid choice for them to hedge against antitrust claims, if they can point to having just added them to their browser, regardless of the fact that Google is the default and they do not present a choice screen like Microsoft had to in the EU.

ratling(10000) 5 days ago [-]

Antitrust was the first thing I thought of when I saw this. Doing it automatically based on statistics would work toward that end as well.

stcredzero(3023) 5 days ago [-]

As noted in the article, apparently Google is presenting the top four search engines used in a given country

Good. 4 is a good number. It's on the low end of the number range people think of as 'enough choice.'

3xblah(10000) 5 days ago [-]

'... in the data they're collecting from Chrome users.'

What percentage of Chrome users consented to the data collection? (Is consent even required?)

Does the data represent all Chrome users or only those who have consented?

dddddaviddddd(4030) 5 days ago [-]

> So presumably this means they're seeing a lot more DuckDuckGo searches in the data they're collecting from Chrome users

Reminds me of the sort of advantage Facebook had from its VPN app to identify competitors early to kill/acquire them.

crossman(10000) 5 days ago [-]

> So presumably this means they're seeing a lot more DuckDuckGo searches in the data they're collecting from Chrome users

There's a lot to unpack in that statement... Is there any recent analysis on the usage stats that chrome is reporting back that someone could point to?

fader111(10000) 5 days ago [-]

rebos.

pkasting(10000) 4 days ago [-]

Regarding 'data they're collecting': The list here is based on popularity of search engines in different locales, determined using publicly available data.

auvi(2785) 5 days ago [-]

I am a bit curious how the name 'DuckDuckGo' was chosen. 'Google' comes from googol, i.e. 10^100.

Fishkins(10000) 5 days ago [-]

It's an abbreviation of Duck, Duck, Goose. They don't really explain why they chose to name it after that game.

https://duck.co/help/company/name

nullandvoid(10000) 5 days ago [-]

My guess would be from the child's game 'duck duck goose'.

Maybe the creator really enjoyed that game as a kid!

isostatic(3662) 5 days ago [-]

Wikipedia provides

Weinberg explained the beginnings of the name with respect to the children's game duck, duck, goose. He said of the origin of the name: 'Really it just popped in my head one day and I just liked it. It is certainly influenced/derived from duck duck goose, but other than that there is no relation, e.g., a metaphor.'

machiaweliczny(10000) 4 days ago [-]

They have the omnibar anyway; aren't they collecting data from it?

ngngngng(3702) 5 days ago [-]

I'm on my second attempt to use DDG instead of Google. As time goes on, the percentage of searches I use Google for ticks higher and higher. I'm starting to intuitively recognize when search results will be garbage with DDG. It's tough because I really want to take back my privacy, but it seems that for 50% of searches, DDG just doesn't get me anywhere near what I'm looking for.

The other day I searched for the website to check a restaurant gift card balance. All of DDG's results were obvious scam webpages. I often search for ElasticSearch documentation; DDG always returns very old versions of these docs, while Google returns the most recent version.

sstangl(10000) 5 days ago [-]

DDG has a 'retry search in Google' mode if you prefix !g to your search query.

I usually try in DDG first, and then in the small cases where it's not found, I just prefix '!g' and re-execute the query.

paul7986(4003) 5 days ago [-]

DDG is my primary engine, yet I bang Google probably 40 to 50 percent of the time to find what I'm looking for.

I look forward to a time when I won't have to bang Google so much and will be able to find...

- Distance info: how far a drive is from point X to point Y. DDG doesn't offer this capability yet, and it's something I do very frequently.

- nearest movie showtimes

- nearby concert listings for today, tomorrow, the weekend

- flight info and links to purchase flights

brundolf(3518) 5 days ago [-]

I've had a similar experience. I stick with DDG anyway for personal stuff, but at work I still use Google because it affects my productivity.

CaptainMarvel(10000) 5 days ago [-]

DDG is my default search engine, and I really want to use it for privacy reasons. However, I have developed a habit of querying with '!g' to switch the search over to Google.

This has happened because, firstly, I, too, can instantly recognise when results are garbage and so immediately type '!g'. Secondly, I know when certain types of searches will be garbage; usually anything related to programming is useless on DDG. So, for work, my default search engine is just Google.

Sometimes, I just query with '!g' without even thinking about it, and at one point I realised I hadn't even been using DDG for several weeks except as a redirect.

bhl(10000) 5 days ago [-]

Curious to know whether someone has made a website to compare DDG and Google search results side by side. Anyone on HN want to take up that challenge? This story is definitely not the first DDG against Google story in the last few months.

FabHK(3821) 5 days ago [-]

1. Hmm, I rarely switch back to Google, and the most recent time I did, it did not deliver better results. It might be that Google has so much information on you that it gives better results (while it, fortunately, doesn't have much information on me, so it has to compete with DDG on an equal footing).

2. I don't use ElasticSearch, but I can tell you that searching the python docs is quite simple in DDG, just throw a !py3 in there to directly search the latest Python 3 docs. Apparently, there's a comparable bang for ElasticSearch, !elastic. But I don't know how well it works (and it's a bit long, really).

taneq(10000) 5 days ago [-]

I've been using DDG for the past few years and I think I've lost my Google-fu. I used to be able to get the result I was after in a couple of searches with a few carefully chosen keywords. Now when I strike out on DDG and search Google, I get a bunch of popular stuff with similar words in it, rather than what I'm looking for. Whether that's my fault or Google's, I dunno.

brentadamson(3979) 5 days ago [-]

Have you tried Jive Search? I run it and it's 100% open source. Would love your feedback.

ComputerGuru(674) 5 days ago [-]

Disclaimer: @yegg, if you're reading this, I'm posting this rant with love.

I am so disappointed with DDG recently, it has adopted Google's strategy of returning searches that have nothing to do with your query if not enough results were found [0], and dialed it up to 11. If 'I' 'don't' 'put' 'each' 'word' 'in' 'quotes,' the results I get have nothing to do with my search... but if I do that (apart from the inconvenience of it all) it means (presumably?) that stemming isn't done on the search terms.

Maybe I'm old school, but I expect search results to match the search terms. Fuzzy matching (stemming, synonyms) is an added bonus, but silently dropping words which don't appear is decidedly not. Moreover, a search returning 'only' two results should be taken as a good thing by someone with confidence in their dataset (DDG naturally doesn't have that, because their coverage is far from 100% of the web): it means the search terms were extremely precise and the results are highly relevant, with irrelevant results filtered out. Decreasing the signal-to-noise ratio by willfully ignoring my search terms may increase the quantity of search results, but, and I don't know about you, I don't care about quantity; relevance is the more appropriate metric to benchmark against.

(All that said, I still use DDG as my main search engine even if I am turning to appending !g far more than I ever used to because I firmly prefer DDG's respect for my privacy and person over Google's treatment of the same. But I'm disgruntled and, frankly, very disappointed. Sorry, @yegg!)

[0]: https://neosmart.net/blog/2016/on-the-growing-intentional-us...

Edit: actually the situation is even worse. DDG doesn't seem to even always respect 'quoted' terms. Here's literally the first search I did after posting this [1]. The quoted term 'CFF2' doesn't even appear in the majority of the results DDG pulls in - not just not in the page summary displayed, but literally not on the result page at all. For comparison, here's the Google equivalent:

[1]: https://duckduckgo.com/?q=windows+10+%22cff2%22&t=ffab&ia=we... [2]: https://www.google.com/search?hl=en&q=windows%2010%20%22cff2...

larkeith(10000) 5 days ago [-]

Personally, I've moved to Searx due to similar issues with DDG results. Hopefully someday Chrome will allow you to use it on mobile.

Kiro(3602) 5 days ago [-]

Your rant is misdirected. This is a problem with Bing (the underlying search engine of DDG).

simias(3966) 5 days ago [-]

>Moreover, a search result returning 'only' two results should be taken as a good thing for someone with confidence in their dataset

I completely agree with you here but in my experience it's not anything new with DDG, that's always been a problem as far as I'm concerned.

As a hobby I sometimes have to reverse engineer electronic circuits, when I'm not sure what a chip does I try to search the inscriptions on the package to see if I can find a datasheet online. Sometimes you end up with very cryptic strings like 'xardc10-egh' or whatever. If you input this string on Google it gives you no results:

https://www.google.com/search?hl=en&q=xardc10%2Degh

If I do it on DDG I get pages of irrelevant results:

https://duckduckgo.com/?q=xardc10-egh&t=ffab&ia=web

That being said DDG improved slightly, when I did searches like those a couple of years ago I'd often end up with results containing completely broken encodings, binary dumps as ascii and other obviously erroneous content that got indexed by mistake. Here the results at least appear to link towards proper pages.

0003(10000) 5 days ago [-]

1 week since Elizabeth Warren published this: https://medium.com/@teamwarren/heres-how-we-can-break-up-big...

ocdtrekkie(2602) 5 days ago [-]

The change was committed in December, according to the article and the PR: https://github.com/chromium/chromium/commit/98b2af784450beb2...

bluetidepro(3414) 5 days ago [-]

This is new? I could have sworn I saw it in there as an option like 5+ years ago? Or was it taken out, and now they are re-adding it?

MagicPropmaker(2773) 5 days ago [-]

They've been using the top 4 search engines in any given country for some number of months. Sometimes it's DuckDuckGo.

samfisher83(2366) 5 days ago [-]

You could always add whatever search engine you wanted in the settings. Maybe it wasn't one that was already set up in the drop-down menu.





Historical Discussions: Cookie Warning Shenanigans Have Got to Stop (March 13, 2019: 882 points)

(882) Cookie Warning Shenanigans Have Got to Stop

882 points 5 days ago by weinzierl in 540th position

www.troyhunt.com | Estimated reading time – 7 minutes | comments | anchor

This will be short, ranty and to the point: these warnings are getting ridiculous:

I know, tell you something you don't know! The whole ugly issue reared its head again on the weekend courtesy of the story in this tweet:

I'm not sure if this makes it better or worse... "Cookie walls don't comply with GDPR, says Dutch DPA": https://t.co/p0koRdGrDB

— Troy Hunt (@troyhunt) March 8, 2019

The reason I don't know if it makes it better or worse is that on the one hand, it's ridiculous that in a part of the world that's more privacy-focused than most it essentially boils down to 'take this cookie or no access for you' whilst on the other hand, the Dutch DPA somehow thinks that this makes any sense to (almost) anyone:

And the Dutch DPA's guidance makes it clear internet visitors must be asked for permission in advance for any tracking software to be placed — such as third-party tracking cookies; tracking pixels; and browser fingerprinting tech — and that that permission must be freely obtained. Ergo, a free choice must be offered.

Is this really what we want? To continue chucking up cookie warnings to everyone and somehow expecting them to make an informed decision about the risks they present? 99% of people are going to click through them anyway (note: this is a purely fabricated figure based on the common-sense assumption that people will generally click through anything that gets in the way of performing the task they set out to complete in the first place). And honestly, how on earth is your average person going to make an informed decision on a message like this:

I'm sure its a good article though... It might have been nice to read it! pic.twitter.com/95bpDtmjDO

— Paul Court (@MrPCourt) March 8, 2019

Do you know how hard it is to explain OAuth to technical people, let alone the masses? Oh wait - it's not OAuth - it's Oath but even I didn't get that at first because nobody really reads these warnings anyway! And now that I have read it and I know it's Oath, what does that really mean? Oh look, a big blue button that will make it all go away and allow me to do what I came here for in the first place...

But say you are more privacy focused and you wanted to follow that link in the original tweet. Here's your fix:

And if you're smart enough to actually understand what cookies are and be able to make an informed decision when prompted with a warning like TechCrunch's, then you're smart enough to know how to right click on a link and open it incognito. Or run an ad blocker. Or something like a Pi-hole.

Or you move to Australia because apparently, we don't deserve the same levels of privacy down here. Or have I got that back to front and Europeans don't deserve the same slick UX experience as we get down here? You know, the one where you click on a link to read an article and you actually get to read the article!

So let's be European for a moment and see how that experience looks - let's VPN into Amsterdam and try to control my privacy on TechCrunch:

Are you fucking serious? This is what privacy looks like? That's 224 different ad networks that are considered 'IAB Partners' (that'd be the Interactive Advertising Bureau) and I can control which individual ones can set cookies. And that's in addition to the 10 Oath foundational partners:

You can't disable any of those either by the look of it so yeah, no privacy on that front. But at least you can go and read their privacy policy, right? Sure, Unruly's is 3,967 words, Facebook's is 4,498 words and Zentrick's is another 3,805 words. Oh - and remember that you need to accept cookies on each one of those sites too and you're going to want to read about how they and their partners track you...

And the ridiculous thing about it is that tracking isn't entirely dependent on cookies anyway (and yes, I know the Dutch situation touched on browser fingerprinting in general too). Want to see a perfect example? Have a go of Am I Unique and you'll almost certainly be told that 'Yes! You can be tracked!':

Over one million samples collected and yet somehow, I am a unique snowflake that can be identified across requests without a cookie in sight. How? Because even though I'm running the current version of Chrome on the current version of Windows, less than 0.1% of people have the same user agent string as me. Less than 0.1% of people also have their language settings the same as mine. Keep combining these unique attributes and you have a very unique fingerprint:

The list goes on well beyond that screen grab too - time zone, screen resolution and even the way the canvas element renders on the page. It's kinda cool in a kinda creepy way.
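
To make the mechanics concrete, here is a rough TypeScript sketch of how traits like these can be combined and hashed into a single identifier. The trait list, function name, and hashing choice are illustrative assumptions, not Am I Unique's actual implementation:

    // Illustrative only: combine a few high-entropy browser traits and hash them.
    async function fingerprint(): Promise<string> {
      const canvas = document.createElement("canvas");
      const ctx = canvas.getContext("2d");
      ctx?.fillText("fingerprint probe", 2, 2); // rendering varies by GPU/driver/fonts
      const traits = [
        navigator.userAgent,                                      // user agent string
        navigator.language,                                       // language settings
        Intl.DateTimeFormat().resolvedOptions().timeZone,         // time zone
        `${screen.width}x${screen.height}x${screen.colorDepth}`,  // screen resolution
        canvas.toDataURL(),                                       // canvas rendering quirks
      ].join("||");
      // Hash the concatenation so the identifier is compact and stable across visits.
      const digest = await crypto.subtle.digest(
        "SHA-256", new TextEncoder().encode(traits));
      return Array.from(new Uint8Array(digest))
        .map((b) => b.toString(16).padStart(2, "0")).join("");
    }

No cookie is ever set; for most visitors the same hash simply reappears on the next visit.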

And here's the bit that really bugs me (ok, it all bugs me but this is the worst): how do we expect your normal everyday person to differentiate between cookie warnings and warnings like these:

I know what these are and you probably do too by virtue of being on this blog, but do you really think most people who have been conditioned to click through the warning that's sitting between them and the content they wish to read understand the difference between this and a cookie warning? We literally have banks telling people just to ignore these warnings:

German bank @comdirect recommends to just ignore the warning about an insecure connection in their online banking app.

Unbelievable... @troyhunt https://t.co/ROOol70OyB

— der JayJay (@jayjay_92) November 26, 2018

So in summary, everyone clicks through cookie warnings anyway, if you read them you either can't understand what they're saying or the configuration of privacy settings is a nightmare, depending on where you are in the world you either don't get privacy or you don't get UX hell, if you understand the privacy risks then it's easy to open links incognito or use an ad blocker, you can still be tracked anyway and finally, the whole thing is just conditioning people to make bad security choices. That is all.

Privacy Cookies



All Comments: [-] | anchor

VectorLock(10000) 5 days ago [-]

It's too bad nobody invented a browser header to be sent with HTTP requests for Allow-Cookies: SURE_YES_WHATEVER_OMG_STOP_ASKING_PLZ

mherrmann(1905) 5 days ago [-]

The name for such a header should have been included in the EU directive about cookies.

HN, can we get a political movement going to make the EU adopt this?

dexen(3178) 5 days ago [-]

The next best thing is uBlock filter list. What if I told you... [1][2]

[1] http://prebake.eu/

[2] https://www.i-dont-care-about-cookies.eu/

tgsovlerkhgsel(10000) 5 days ago [-]

Why would you want such a header?

It would be much nicer if the (already existing!) Do Not Track header were interpreted to mean 'Allow-Cookies: HELL_NO_WHY_ARE_YOU_EVEN_ASKING_FUCK_OFF'.

Thanks to GDPR, the provider does NOT have to ask for consent for necessary cookies, only for the tracking stuff to which you have no incentive to agree. Every time a page pops up one of those 'we value your privacy' screens, they're lying in your face: if they did value it, they wouldn't have to ask.
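
As a sketch of what that interpretation could look like server-side: the DNT request header is real ("1" means the user opts out of tracking), and a site could simply omit its tracking scripts when it is set. Node with Express is my assumed stack here, and /analytics.js is a hypothetical tracker, neither is from the thread:

    import express from "express";

    const app = express();

    // DNT is a real request header; "1" means the user opted out of tracking.
    app.use((req, res, next) => {
      res.locals.trackingAllowed = req.headers["dnt"] !== "1";
      next();
    });

    app.get("/", (req, res) => {
      // Hypothetical tracker script, only injected when DNT is absent.
      const tracker = res.locals.trackingAllowed
        ? "<script src='/analytics.js'></script>"
        : "";
      res.send(`<html><body>the article${tracker}</body></html>`);
    });

    app.listen(3000);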

e1ven(376) 5 days ago [-]

We could further optimize by just assuming that people are OK with it if they didn't send the header, and then have them opt in to sending it.

Maybe we could call it something like 'DoNotTrack', to get the idea across.

DNT was mostly ignored, but if it had the weight of law behind it, it could still be great.

pxtail(10000) 5 days ago [-]

There are browser add-ons for removing these annoying notifications; the most popular is named 'I don't care about cookies'.

contravariant(10000) 5 days ago [-]

It's too bad nobody came up with the simple idea of forcing browsers to ask permission before sending personally identifying information everywhere.

foateaca(10000) 5 days ago [-]

Standardize it. Policy makers should provide a standard message, or a small set of messages, that websites can reuse and consumers can recognize, with links to an EU-run informational website that is easy for consumers to understand. I don't understand why it wasn't implemented this way; maybe to allow for more freedom of implementation, but we have seen that has hurt the policy effort. Just standardize it.

Incidentally, with all of the Organic / GMO-Free / et al. certifiers out there, would a privacy badge be the way to go, or is that backwards thinking in the way of early internet site badges?

asgrdz(10000) 4 days ago [-]

There's no standard copy-paste-ready set of phrases because it would be impossible for the lawmakers to craft one.

They couldn't possibly know what site A or B respectively does with a user's data. That differs from site to site.

It is therefore of course the obligation of each site that directly or indirectly works with user PII to explain to its users how that specific site is using the data.

pedro_hab(10000) 5 days ago [-]

What I don't understand is why websites hosted outside the EU, for non-EU users, have the cookie banners.

At least keep it in Europe: use the IP to geolocate and let the EU users deal with it.

Some companies have outright banned EU traffic, so only showing the banners for EU IPs seems OK.

ucarion(10000) 5 days ago [-]

The sibling comment is right -- and furthermore, thanks to holdovers from colonialism, the EU is a vast area that the sun never sets on. Reunion, French Guiana, and Curacao are just as European as Bruxelles is.

bunderbunder(3531) 5 days ago [-]

The law doesn't just apply to pages being served to the EU, it applies to pages being served to EU citizens, wherever they happen to be at the moment.

So geolocation is not a satisfactory option.

duxup(10000) 5 days ago [-]

I feel like GDPR and such had some good spirit to it... but the result isn't what they had in mind, and consumers just click through everything / have no more of a clue than before. Piling onto or mangling GDPR seems like it would just make the already unworkable situation more of a mess.

I like the 'ideas' behind GDPR, it's just this isn't the way to do it and really accomplish anything that really helps an individual.

Baeocystin(10000) 5 days ago [-]

It kind of feels like Prop 65 here in California. So many things have a 'this could cause cancer' tag that it is ignored 100% of the time, the exact opposite of its intent.

fixermark(3856) 5 days ago [-]

I wonder how often we'll repeat the error of treating user privacy as something the user cares deeply about (against all the observable evidence to the contrary) before we accept that the well-demonstrated-and-documented default is that users do not care (and if we want the behavior of websites to change, step 1 is educating users as to why they should care and what the risk models are).

dcbadacd(10000) 5 days ago [-]

The reason it hasn't been super successful is that there haven't been big punishments.

purple_ducks(3789) 5 days ago [-]

(European)

Most sites have a simple 'Reject All', which I definitely use all the time.

Oath and its sites are the devil incarnate when it comes to implementation, and they deserve to be slapped down.

eclat(10000) 5 days ago [-]

This is precisely it. It's not that the EU legislation is too restrictive, it's that it doesn't go far enough. Sites like these shouldn't be compliant, users should be prompted with a simple yes/no consent form without any dark patterns.

shaki-dora(3389) 5 days ago [-]

What people miss about the new GDPR notices, compared to cookie warnings of yore, is that they offer you the choice of opting out.

In my experience, the option is usually hidden (look for "options"). But a surprising number of sites do actually comply and make this not prohibitively obscure.

rcxdude(10000) 5 days ago [-]

Some of them make the process as slow and painful as possible (I've seen one where you need to deselect a huge number of pre-checked boxes, and then wait through an excruciatingly long and artificial 'applying preferences' progress bar before you are permitted to continue; the website then forgets this preference the next time you visit it). This is directly against GDPR, and I hope that companies engaging in these practices to make sure as few users as possible opt out get slapped as hard as they can be by the regulators.

duxup(10000) 5 days ago [-]

I think Troy's point, though, is that if you understand the situation, you already have lots of tools at your disposal to opt out. It's not easy, but you know about them and can, to some extent, use them.

Meanwhile everyone else doesn't understand these pop ups, doesn't know anything more post GDPR, and they just roll through them and get tracked just the same.

In effect we have big annoying pop ups and little seems to have changed. If we care about the ideas behind GDPR, I think we have to recognize that it may be failing miserably in practice.

mrweasel(3923) 5 days ago [-]

It was my clear understanding that 'pre-checked boxes' would be illegal, meaning that if you want to have 200 trackers on a page, then the user needs to click accept for all 200 trackers, so that if you just dismiss the 'cookie warning', you by default get no tracking. That pretty much solves the issue right there; most people would effectively have zero tracking.

brundolf(3518) 5 days ago [-]

It's not even clear to me how they actually work. Usually what I do when there's no 'Deny' button (which is most of the time) is just leave the dialog open in hopes that that qualifies as 'not accepting'. But then some of them say 'by continuing to use this site you agree'. But, what does that even mean? Do they wait for a scroll event before setting the cookie, or is it there already before I even click 'Agree', and the dialog does nothing whatsoever?

askvictor(4017) 5 days ago [-]

Spin up developer tools and see if the cookie has been set

tgsovlerkhgsel(10000) 5 days ago [-]

Legally, you haven't consented. They're of course tracking you anyway because they're too lazy/greedy to implement it properly, so you should use the usual security measures (ad blocker, Privacy Badger), but they're breaking the law.

It's just that there are too many people doing that for the overloaded DPAs to take care of them all.

peterwwillis(2415) 5 days ago [-]

Can anyone explain to me why the browser isn't the one asking the user? Since, y'know, the browser is the only thing that actually prevents a cookie from being placed or sent in the first place?

tgsovlerkhgsel(10000) 5 days ago [-]

The browser cannot distinguish between cookies that are necessary to support a feature you're using (e.g. a session cookie for a login or shopping cart) and a tracking cookie.

The former does not require consent. The latter does.

todd3834(3596) 5 days ago [-]

I am not a fan of the cookie banners at all. If anything I feel like browsers should implement it as it already does with other security settings (access to location, camera, etc...) and then people can decide to allow all websites. Blacklist, whitelist whatever. Why are we making every site implement a completely unique interface with different verbiage?

zanny(10000) 5 days ago [-]

Because it's done under legal compulsion rather than web-standards good faith. Cookies predated the more democratic process by which the web evolves today, and it was outside the EU's power to just ask the likes of the dying Netscape / Microsoft / etc. to all standardize on this feature at the spec level, especially when nobody was really following web standards at all in the early 2000s.

All those settings in browsers that let you control access to your camera, location, etc. aren't part of legal compliance; they are just there because Google, Mozilla, and sometimes Apple and Microsoft all agreed this is a good best behavior to avoid getting regulated again.

If the EU started mandating an opt-in system for camera access, you can bet most websites would start dumping pop-up banners about it, regardless of whether all the browsers supported the regulation already, just to avoid culpability.

Mirioron(10000) 5 days ago [-]

But you've had control over cookies for at least 15 years in your browser.

syrrim(10000) 5 days ago [-]

Cookie warnings don't show up for any old cookie usage. HN has no cookie warnings, despite having accounts and logins. Cookie warnings are shown when the 'evil bit' or the 'color' of the cookie is set; that is, there is no way for the browser to know when a cookie should require a warning and when it shouldn't.

colordrops(10000) 5 days ago [-]

Is it possible to write a browser extension that has the browser request access to store cookies just like it does for microphone or location access?

nvr219(2729) 5 days ago [-]

You can do this, and it's a huge pain in the butt for the user.

noxToken(4026) 5 days ago [-]

You don't need an extension. You set cookies to deny all. Then you look at the list of blocked cookies when you're on a website, and you can individually allow cookies on a per-domain basis.

timbit42(10000) 5 days ago [-]

Firefox has had that built in for years but you have to manually accept or deny cookies for each website and you would still get the popups.

I just use this Firefox add-on to hide most of the popups: https://addons.mozilla.org/en-US/firefox/addon/i-dont-care-a...

bo1024(10000) 5 days ago [-]

I think it must be. I use uMatrix which essentially does this, except it doesn't use popup requests - if you want cookies, just make a few clicks.

KorematsuFred(10000) 5 days ago [-]

It is common sense vs. the dumb bureaucrats of the EU. Who do you think wins in the long run?

expertentipp(10000) 5 days ago [-]

Media companies, publishers, and copyright hoarders, obviously.

kilburn(2553) 5 days ago [-]

I'm sure there's some good reason not to do it, so I'll ask if anyone here knows: why doesn't the law just require some technical implementation that can be automated? Why can't the law just specify something similar to the DNT header (finer grained) and require compliance with that?

wsy(10000) 5 days ago [-]

The law should lay out principles, not techniques. The GDPR works for any kind of data processing, it is not specific to browsing on the Web.

The core issue is that the ~150 companies of the Oath network will effectively go out of business when they comply with GDPR. So now they try to play some games, until the fines handed out to them become too large to sustain in the EU.

currysausage(3648) 5 days ago [-]

The GDPR offers a general privacy framework. Technological specifics may (or may not) become part of the upcoming, heavily embattled 'ePrivacy regulation', which was intended to come into effect simultaneously with the GDPR. Right now, we have a somewhat unfortunate limbo.

RivieraKid(3942) 5 days ago [-]

This shows utter incompetence and detachment from reality by European legislators. Maybe it seemed like a good idea in theory, but the only significant practical impact is that browsing the web has become more annoying.

Surely there are solutions that don't require a popup on every webpage you visit? For example, enforcing no tracking by default for advertising purposes?

tabs_masterrace(10000) 5 days ago [-]

The EU has generally been a really positive force when it comes to consumer rights, but I'm not a fan of this either. The question I have is: what did web companies do to deserve this kind of regulation? It is quite unusual to see governments enact regulations without the existence of measurable harm being caused, solely on the premise that the act of collecting data is 'unethical'. This is really not normal, and quite unfair; if you look at how regulations worked in the past for other industries, they have always been a response to very clear, quantifiable harm.

We have seen nothing of that. On the contrary, tech companies have improved our lives immensely, for free, and are, in my opinion, one of the biggest driving forces toward improving the future. Data is not just being collected for advertisement, tracking, and evil purposes; it is a very important asset in the development of products.

Furthermore, historically it was governments, not companies, that were abusing private data for nefarious purposes. Yet there seems to be no effort to stop it happening from that direction? Well of course not, it's way too useful, and you'd be a fool not to use it, but companies are 'bad' for trying to utilize it...

dmitriid(3844) 5 days ago [-]

"incompetence", "detachment" right.

I, for one, am happy that bullshit like "hey, we send your data to 244 trackers uncontrollably" has become visible and is being called out.

I mean, visible only in the EU.

Dark patterns and site-blocking are anti-GDPR, so I'm hoping for some heavy fines across the board. And, hopefully, if not the end then the curtailing of intrusive and tracking cookies, ads, etc.

raverbashing(3532) 5 days ago [-]

> Surely there are solutions that don't require a popup on every webpage you visit?

I don't get any popups or cookie notices on visiting HN or several other sites. It's not like it's a fundamental need to set hundreds of tracking cookies on a visitor's browser to show them a website.

ben174(3861) 5 days ago [-]

Back in the early days, browsers used to prompt you for every cookie:

https://i.imgur.com/FThIFHe.png

dageshi(10000) 5 days ago [-]

I installed a plugin in Firefox that just autoclicks 'yes' or closes them (no clue which); either way, it makes the web much less annoying.

sytelus(317) 5 days ago [-]

I have a rather radical thought: I don't think tracking itself is fundamentally a bad thing. I'd rather see useful, relevant ads than irrelevant ones. Yes, it may be creepy, but it's not a bad thing that people in the ad industry are working hard to figure out things that would be worth my time and interest. However, what is bad is: how else is the tracking data used? Who else has access to it, and for what purpose? GDPR should have created a law saying that tracking data may not be used for anything other than machine-generated recommendations by the same company; that would have been 1000x more beneficial.

Downvotes may begin now.

smileysteve(10000) 5 days ago [-]

How do you remember that a customer has responded to a popup if you don't give them a cookie? Even a cookie as a session identifier.

chriswarbo(3970) 5 days ago [-]

> This shows utter incompetence and detachment from reality by European legislators.

> Surely there are solutions that don't require a popup on every webpage you visit? For example enforcing no tracking by default for advertising purposes?

Wait, what? There are such solutions. GDPR, and the 'cookie law' before it, don't 'require' any popups.

They allow cookies, 1x1 pixel images, browser fingerprinting, Flash supercookies, browser local storage, etc. without any need for stupid popups... as long as that's required to implement the site's functionality. Consent for these things is implied by the user's use of the functionality (e.g. game scoreboards, saving word processor documents, keeping track of a user's shopping cart, etc.).

What these laws do require is that handling such personal data without such implied consent, should require explicit consent. This acts as a disincentive for sites who want to continue spying on their visitors, by forcing the UX to be more annoying and dissuade visitors from staying.

> the only practical significant impact is that browsing the web has become more annoying.

Sounds like the dissuasion is working. Hopefully that is causing spyware sites to receive fewer visitors (and perhaps revenue), and potentially rethink their decisions.

currysausage(3648) 5 days ago [-]

The omnipresent 'Please accept our privacy policy (or leave)' is worthless cargo cult GDPR pseudo-compliance. If it's neither freely given nor informed, it's not consent under GDPR.

See Art. 7: 'When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.'

See Recital 32: 'Consent should be given by a clear affirmative act establishing a freely given, specific, informed and unambiguous indication of the data subject's agreement to the processing of personal data ... This could include ticking a box when visiting an internet website ... Silence, pre-ticked boxes or inactivity should not therefore constitute consent.'

If you want to use external tracking and be GDPR-compliant, you must offer a clear choice ('yes/no') and you must not use pre-ticked boxes (i.e. an opt-out approach).

Please feel free to downvote this if you don't like it, but I'm merely telling you what the law says. If you disagree factually, I'd appreciate a comment though.

vonmoltke(2266) 5 days ago [-]

> If you want to use external tracking and be GDPR-compliant, you must offer a clear choice ('yes/no') and you must not use pre-ticked boxes (i.e. an opt-out approach).

You can use absolutely no external tracking and be GDPR-noncompliant. In fact, an Apache web server running the default test page is technically noncompliant. Everyone loves to jump to the tracking ads and data selling, since they are easy targets, but the scope of the law is much broader than that.

flukus(3848) 5 days ago [-]

There is also this in Article 7(4) (https://gdpr-info.eu/art-7-gdpr/):

> 4 It shall be as easy to withdraw as to give consent.

If you prompt me to accept on every page then you must also prompt me to decline on every page, otherwise you fail this test. Hiding the option to withdraw consent in some random settings page is obviously not as easy as clicking yes when prompted.

Most sites have already created all their tracking cookies before the user even sees the opt-in form too, which isn't compliant with the GDPR or the old cookie law.

alkonaut(10000) 5 days ago [-]

> If you want to use external tracking and be GDPR-compliant, you must offer a clear choice ('yes/no') and you must not use pre-ticked boxes (i.e. an opt-out approach)

Exactly this. Basically, what the GDPR says is: if your business doesn't require the data, you can't use it without the user's consent. And data used for better advertising is NOT essential to, e.g., a news site.

What's more, the regulation says that you can NOT simply say 'accept or leave' in that case. You then have to provide the service to the user without storing that non-essential data. You can't provide a service, even for free, that you condition on storing data not essential for that service. There is no 'if you don't like it, leave' clause.

Basically: spiegel.de has to be prepared to show their news to anyone, including those who do not wish to be tracked by their ads. Right now we are in a period of denial where site owners believe they can have these 'By entering you agree to...' banners. Once the first large fines are handed out, it'll be fun to watch.

radium3d(10000) 5 days ago [-]

As a web developer I gotta say the only true solution to this is to stop using the internet altogether. Might as well shut the internet down.

We can't authenticate you without cookies or some other form of identification, so that throws out any site with an account.

Even if I am not remotely interested in tracking what pages you view on my website, if I need to have you log in and authenticate, I need some form of cookie / session ID.

If you need to be anonymous use a browser like Firefox Focus or similar but understand that you won't be able to log in for longer than a single session, if at all.

Cancel GDPR and similar privacy laws before we outlaw the [useful] internet completely. These laws are a mess written by people who honestly are not remotely qualified to make these kinds of decisions.

I'm all for the option of privacy, but it's your own responsibility: stop using the internet, leave all your electronics at home, and go ride your horse into the wilderness and breathe some fresh air if you want privacy.
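
For what it's worth, and as other replies note, the session ID described above doesn't conflict with the law. A minimal sketch of an authentication cookie that identifies a session without tracking anything, assuming Express and cookie-parser (my choice of stack, not the commenter's):

    import express from "express";
    import cookieParser from "cookie-parser";
    import { randomBytes } from "crypto";

    const app = express();
    app.use(cookieParser());

    // In-memory session store; the cookie holds only an opaque random key.
    const sessions = new Map<string, { userId: string }>();

    app.post("/login", (req, res) => {
      const sessionId = randomBytes(32).toString("hex");
      sessions.set(sessionId, { userId: "alice" }); // imagine real auth here
      // HttpOnly + SameSite: usable for login, useless for cross-site tracking.
      res.cookie("session_id", sessionId, { httpOnly: true, sameSite: "strict" });
      res.send("logged in");
    });

    app.get("/me", (req, res) => {
      const session = sessions.get(req.cookies.session_id);
      res.send(session ? `hello ${session.userId}` : "not logged in");
    });

    app.listen(3000);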

throwaway-hn123(3114) 5 days ago [-]

Wow, you are a complete moron.

sarcasmOrTears(10000) 5 days ago [-]

Your post is at the bottom, as proof of why we can't have nice things. People don't understand that it's MY website, not the property of the users or some fat EU bureaucrat. If they don't like my website, they can just not visit, instead of telling me how my website must behave.

JoshTriplett(160) 5 days ago [-]

While I'm certainly not going to argue in favor of the GDPR, the 'cookie warning' actually specifically excludes cookies used for things like authentication. It covers cookies used for other purposes, such as trackers.

scrollaway(2878) 5 days ago [-]

As I'm sure you'll be told by the time I finish writing this comment: authentication cookies do not need a compliance banner.

brundolf(3518) 5 days ago [-]

What if the EU instead mandated that browser makers have to keep up with, implement, and enable by default the highest levels of tracking protection possible? It would be far easier to enforce that on four or five organizations (half of which already do that) than to try and corral millions of websites into compliance.

lmkg(3987) 5 days ago [-]

GDPR applies to a lot of things that are not mediated by the data subject's web browser. For example, when your grocery store takes your purchase history (from your rewards card) and sells it to a marketing or credit reporting agency. Or when housing or job applications are rejected by error-prone automated background checks. The website stuff may be more visible, but it's not the main thrust of the legislation except insofar as it has exposed how many websites are collecting and trading in your information.

wereHamster(4030) 5 days ago [-]

So I have a question regarding GDPR. I've recently done two projects for intergovernmental organizations (like the UN, to give you an example). Both organizations claim that they do not have to comply with GDPR. I kind of doubt that, but IANAL. Short of reading the GDPR laws or contacting an expensive lawyer, what's the best way to find out if they are right? I'd like to find out before I start the next project for such an organization. Thanks.

asgrdz(10000) 4 days ago [-]

Read the document: https://eur-lex.europa.eu/legal-content/EN/TXT/PDF/?uri=CELE...

That's the cheapest option. It's not really that hard to understand. I'm not a lawyer; I read it and had a GDPR consulting firm review the tweaks we made to our systems. They were happy with it.

But if you're working for a larger company, then consult with their legal department or have the company hire a GDPR consulting firm. They should be able to afford it without a problem, and will likely be happy to support such an action if the non-compliance risk is deemed large enough. A business decision, not a technical one.

weinzierl(540) 5 days ago [-]

To add insult to injury, the big players with the most trackers just refuse to show the cookie warnings at all. At least that's the situation in Germany, where most major news outlets are full of ads and trackers and handle all of it via opt-out(!) in the privacy policy. For an example, see spiegel.de, the most widely read German-language news website.

It's mostly small and medium-sized firms that show the cookie warning out of fear. That's completely the opposite of what I want, as a consumer as well as a small-time webmaster.

EDIT: Just to be clear: spiegel.de never shows a cookie warning, even if you visit with a fresh browser. You can opt out by visiting their privacy page [1] (in English).

[1] http://m.spiegel.de/extra/what-we-do-with-your-data-a-121194...

rebelde(4025) 5 days ago [-]

Can somebody explain how Der Spiegel does this legally? If they can do it, maybe others can use the same justification.

fixermark(3856) 5 days ago [-]

Anyone who didn't see this as the predictable end-result of requiring cookie awareness and consent was hopelessly naive about how large vs. small organizations respond to unfunded government mandates.

The money and time could have been a lot better spent on international awareness campaigns arming consumers with more privacy knowledge, instead of expecting website owners to shoulder the informational burden (because those orgs in aggregate have no core incentive to treat users' privacy as an inherent good).

Angostura(3476) 5 days ago [-]

Popped in a complaint to your local data protection body?

alkonaut(10000) 5 days ago [-]

There is a window now when players can argue that the regulation text isn't clear (it is). Let's hope regulators pick a few high-profile targets that are in violation and hit them with massive fines.

lixtra(4016) 5 days ago [-]

Spiegel.de also sometimes refuses to serve you content if you have the DNT flag set, which in theory would be a convenient way of getting rid of cookie banners.

tmikaeld(3900) 5 days ago [-]

That's a lot of tracking, uBlock blocks 13 domains before even loading google tag manager.

dexen(3178) 5 days ago [-]

>players with most trackers just refuse to show the cookie warnings at all

The results of the legislation, in regard to cookies, are inconsistent, annoying, unevenly enforced, create a moral hazard and a two-tier system, and, I presume, have negative overall utility.

This is why I do not consider the law to have been written with good intentions. The intentions were claimed to be good, but I don't see the lawmakers having put in the necessary effort to ensure a privacy improvement, nor admitting there are shortcomings to the legislation that need either fixing or perhaps scrapping it. Did they really intend to exert enough effort to write the legislation well? To shoulder the blame if it does not work out? To take responsibility? Or to shore it up as the situation develops?

Right now I perceive the cookie warnings to be merely the EU's advertising banners ('Heeey, this is the EU taking care of you!') plastered all around the web, just like banner ads used to be plastered all over the web. Morally the same: pompous self-promotion, except paid for with a legal rubber stamp rather than money.

reaperducer(3842) 5 days ago [-]

internet visitors must be asked for permission in advance for any tracking software to be placed — such as third-party tracking cookies; tracking pixels; and browser fingerprinting tech — and that that permission must be freely obtained... Is this really what we want?

Yes. I'm not even European, and I'm perfectly fine with this.

Ask me to track me. If I like you, your web site, or your content, then sure. I'll give you a little personal data.

Permit(2180) 5 days ago [-]

>Yes. I'm not even European, and I'm perfectly fine with this.

>Ask me to track me. If I like you, your web site, or your content, then sure. I'll give you a little personal data.

This should be a setting you choose in your browser and the rest of us (> 99%) should be able to ignore.

Mirioron(10000) 5 days ago [-]

>Ask me to track me. If I like you, your web site, or your content, then sure. I'll give you a little personal data.

You are asking them for the website. They are asking for your data in return which your browser provides because that's how it is configured. Your browser could simply refuse at any point.

l9k(10000) 5 days ago [-]

I agree they need to ask to track us.

But the little personal data you mention is not only given to them but to 224 different ad networks. Most of them make it impossible or difficult to opt out.

blensor(10000) 5 days ago [-]

So once you've determined that you like a site, do you go through the hassle of figuring out how to re-enable the tracking after you disabled it on your first visit?

dash2(10000) 5 days ago [-]

This is like a case study of well-intentioned, carefully designed regulation doing more harm than good. Honestly, I'd rather just have a browser add-in that blocks the cookies I don't want. The market was working fine. Now every new website is a pain, and my organization has hired some amiable lady to be a 'GDPR expert'. She doesn't appear to know anything about anything, but she sure seems nice.

paxys(10000) 5 days ago [-]

I wouldn't call GDPR 'carefully designed' at all. It is ridiculously broad and vague, and so far all implementations (including the cookie warnings all over the internet) are best guesses.

rcxdude(10000) 5 days ago [-]

The harm is being done by companies attempting to keep the status quo by nagging users unless they give consent. I hope some of the worst offenders in this regard get slapped by regulators. You can be 100% GDPR compliant and have a functional website without needing any cookie or GDPR consent boxes.

scrollaway(2878) 5 days ago [-]

GDPR absolutely does not do 'more harm than good'. It extends well, well beyond these dumb cookie warnings.

GDPR puts the citizen/customer in control of their own data. They can ask for their data, they can ask for it to be deleted, and they have (however shitty the UX) control over where it goes. They can contact large corporations and request these things and be heard.

I don't know how to explain it any other way: These things are fucking important.

As for your organization's GDPR expert who doesn't know anything about anything, this sounds like a 'your organization' problem. Replace GDPR with some other acronym such as HIPAA, PCI or even SEO or PHP, it's still your organization's fault for hiring someone who doesn't know their stuff. How is that GDPR's fault?

Edit: Yes, keep downvoting facts. GDPR isn't just cookie warnings, how is that controversial?

ska(10000) 5 days ago [-]

   The market was working fine.
At the very least, this is up for debate.

dcbadacd(10000) 5 days ago [-]

It's time GDPR is actually enforced. I tried to get my country's law enforcement on the tail of some violators but they're toothless.

I don't understand the downvotes though, are you disagreeing that certain countries do not have the manpower to enforce GDPR to the extent they could? Please.

jolmg(10000) 5 days ago [-]

Are those violators outside the EU? I'm interested to know if the EU would seriously try to enforce their laws in foreign lands that never agreed to them.

Semaphor(10000) 5 days ago [-]

I think they are currently going after the big ones. Personally, I expect that to change once they are through with them; it simply makes no sense to go after smaller companies when there are still Google and Facebook to go after. And medium and small ones? I think those still have another year or two.

tgsovlerkhgsel(10000) 5 days ago [-]

NOYB helps enforce it through DPAs, and judging by the fines that have been handed out, it's far from toothless.

It's just that basically the entire world is violating the law, and it'll take a while to get to everyone.

Sir_Cmpwn(339) 5 days ago [-]

The problem isn't dumb ol' grandpa EU's folly lawmaking. The lawmaking is on point, and the cookie banners are the problem, not the symptom. EU regulations are trying to eliminate privacy-invasive practices like tracking. It's not about making users aware that they're being tracked. It's about making businesses cut that shit out.

Of course, on HN I have to acknowledge that half of the users on this website get their paychecks from invading the privacy of the general public. If you fall into this category: what you're doing has always been wrong and now the law is catching up to that.

Mirioron(10000) 5 days ago [-]

It doesn't matter what they're trying to do. What matters is what actually ends up happening. The socialists in the Soviet Union tried to improve things as well.

It's also quite ironic that the EU now cares about privacy considering that a decade ago they passed the Data Retention Directive. I guess privacy didn't matter then, huh?

gnud(10000) 5 days ago [-]

Most cookie warnings are beyond useless, in that they don't even try to actually comply with the GDPR.

The fact that your site uses cookies is irrelevant, and there's no need to tell anyone. However! If your site stores personal information (directly or via a partner), you need to have a valid reason.

The definitions of 'personal information' and 'valid reason' are, fortunately, not exhaustively enumerated in the GDPR. I say fortunately, because if they were exhaustively enumerated, Facebook would find a loophole, and the whole law would be worthless.

One of the 'valid reasons' for storing personal information, is a clear, freely given, consent from the user. This is the one that all the tracking companies want to get, because they think it allows them to do shady things if they can trick the user into pressing 'OK'. But if the user was tricked or coerced, the consent was not really clear or freely given. Hence the sort of court rulings that the article mentions.

So, if you store a cookie for your domain saying 'tracking_consent=false', this is probably not personally identifiable, so you can just do it. No reason for any banner.

But if you track the 'browser fingerprint' that Troy Hunt is talking about, without consent, you are probably in violation of the GDPR. Even if it's not a cookie. And you had a cookie banner.
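
A minimal client-side sketch of that idea (the cookie name matches the comment above; the helper names and the prompt hook are mine, purely illustrative):

    // Remember only the refusal itself: no ID, nothing personally identifiable.
    function rememberRefusal(): void {
      document.cookie = [
        "tracking_consent=false",
        "max-age=31536000", // keep the choice for a year
        "path=/",
        "samesite=strict",
      ].join("; ");
    }

    function hasRefused(): boolean {
      return document.cookie.split("; ").includes("tracking_consent=false");
    }

    // Load trackers only after an explicit opt-in, never when the user refused.
    if (!hasRefused()) {
      // showConsentPrompt(); // hypothetical UI hook that may call rememberRefusal()
    }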

currysausage(3648) 5 days ago [-]

Very true, that can't be stressed enough. One note though:

> The fact that your site uses cookies is irrelevant, and there's no need to tell anyone.

Let's not forget the infamous ePrivacy directive, e.g. Recital 66:

'Third parties may wish to store information on the equipment of a user, or gain access to information already stored, for a number of purposes, ranging from the legitimate (such as certain types of cookies) to those involving unwarranted intrusion into the private sphere (such as spyware or viruses). It is therefore of paramount importance that users be provided with clear and comprehensive information when engaging in any activity which could result in such storage or gaining of access. ... Exceptions to the obligation to provide information and offer the right to refuse should be limited to those situations where the technical storage or access is strictly necessary for the legitimate purpose of enabling the use of a specific service explicitly requested by the subscriber or user.'

Of course, the relationship between GDPR and the old ePrivacy directive is rather ambiguous.

tylerrobinson(10000) 5 days ago [-]

I find the clarification about cookie walls being out of compliance with GDPR to be a real headscratcher. Here's part of the Dutch authority's FAQs[0] thanks to Google Translate:

'At a cookie wall, website visitors have no real or free choice. It is true that they can refuse tracking cookies, but that is not possible without adverse consequences. Because refusing tracking cookies means that they cannot access the website. That is why cookie walls are prohibited under the AVG.'

I find this fascinating. Is there really not a free choice to simply leave the website? A paywall is surely legal. But it is illegal for me to 'pay' using something other than money.

Am I allowed to offer users additional functionality in exchange for access to their data?

[0] https://autoriteitpersoonsgegevens.nl/nl/onderwerpen/interne...

roywiggins(3918) 5 days ago [-]

That's how the GDPR is designed. I'm not a lawyer or a GDPR expert, but I believe this means that unless your service actually requires a particular type of data collection to function, you're not allowed to make access to that service contingent on that data collection.

http://www.privacy-regulation.eu/en/r43.htm

> Consent is presumed not to be freely given if it does not allow separate consent to be given to different personal data processing operations despite it being appropriate in the individual case, or if the performance of a contract, including the provision of a service, is dependent on the consent despite such consent not being necessary for such performance.

In other words, people aren't allowed to sell their data in exchange for services. I suppose the argument is that people are really bad at valuing their own data. They don't know how it can be combined with other datasets and how it might be resold and repackaged and never go away. There are other things people can't legally sell, such as organs, votes and sex, so it's not an entirely brand new idea.

shaki-dora(3389) 5 days ago [-]

I am befuddled by your befuddlement.

Consumer protection laws regularly put limits on the freedom of contracts between companies and consumers. A rental car agency can't give you a rebate for renting an unsafe car.

The fact that no money changes hands doesn't change this. If you're offering free taster portions of bread to passers-by, you cannot use lead as an ingredient. Neither being free, nor putting up a sign with the list of ingredients, will change that.

lbarrow(3474) 5 days ago [-]

That's how the GDPR works - unless the data being collected is required for the operation of the service, you cannot make data collection a requirement of using the service.

EpicEng(3790) 5 days ago [-]

Of course you have a choice, but in reality who leaves a site because they see that warning? I imagine almost no one. Hell, I understand the implications and I don't care because site X has what I'm after and perhaps no one else does (or they all have the same warning anyway.) I have no reasonable option but to accept whatever these sites want.

The warnings accomplish nothing. It's just another nag screen. It was a bad idea when it was thought up and it still is today. It's an attempt to seal a wound (misuse of personal data) with a piece of string. It wasn't even a half decent band-aid.

pilsetnieks(4028) 5 days ago [-]

No, it's not a free choice. You cannot pay with your personal data the same way that you cannot sell yourself into slavery, even if you wanted to.

These cookie notices are also almost invariably violating the GDPR. There must be a clear choice, and if you choose not to be tracked, you must still be able to use the service unimpeded; there must be a clear and understandable description of the intent of the data collection; and lastly, the opt-out choice must be as accessible as the opt-in choice (none of this 'Accept' vs. 'Manage options' bullshit). For example, one of the very few larger pages where I've seen it done right is Wikia/Fandom.

> Am I allowed to offer users additional functionality in exchange for access to their data?

In a way, yes, but you're phrasing it in a roundabout way. You can ask for personal data to enable additional functionality that requires that data. For example, you're allowed to ask for location if you want to show them some offers nearby. They are allowed to refuse, and in that case they cannot use the particular function that's tied to their realtime location. If they've given permission to use their data, you, however, are not allowed to use that location data for any purpose other than what they explicitly agreed to and what's actually needed to provide the service. I.e. you can ask for the location to provide a location-based service, but you don't need their age and income data; also, you cannot use their location for other purposes they aren't informed about. And you certainly aren't allowed to sell it to someone else without an express permission.

In short - you need a clear and explicit permission for specific purposes, and you cannot deny access to those parts of your service that don't require personal data.

dheera(3689) 5 days ago [-]

Do they expect a user to re-enter their password every time they click a button on a website? Without requiring cookies how do they expect a session to persist between pages?

TheGrumpyBrit(10000) 5 days ago [-]

Session cookies are fine, it's only advertising and tracking cookies which are covered by the legislation.
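
To make the distinction concrete, here is a minimal Go sketch (the cookie names and the consent mechanism are illustrative, not anything the regulation prescribes): the session cookie needed for login state is set unconditionally as "strictly necessary", the tracking cookie is only set after an explicit opt-in, and the page is served either way.

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "net/http"
    )

    // randomID stands in for real session/analytics ID generation.
    func randomID() string {
        b := make([]byte, 16)
        rand.Read(b) // error ignored in this sketch
        return hex.EncodeToString(b)
    }

    func handler(w http.ResponseWriter, r *http.Request) {
        // Functional session cookie: keeps the user logged in, generally
        // treated as "strictly necessary" and exempt from consent.
        http.SetCookie(w, &http.Cookie{Name: "session_id", Value: randomID(), HttpOnly: true, Secure: true})

        // Tracking cookie: set only after an explicit opt-in.
        if c, err := r.Cookie("consent"); err == nil && c.Value == "tracking-ok" {
            http.SetCookie(w, &http.Cookie{Name: "analytics_id", Value: randomID()})
        }

        // Declining tracking doesn't block access: the page is served regardless.
        w.Write([]byte("hello"))
    }

    func main() {
        http.HandleFunc("/", handler)
        http.ListenAndServe(":8080", nil)
    }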

Causality1(10000) 5 days ago [-]

Congratulations, there's now a pop-up added to every single website on the internet that isn't blocked by adblockers. It makes zero difference to me whether the pop-up is telling me about cookies or advertising 'huge anime tiddy'; it's still an obstruction and a delay.

timbit42(10000) 5 days ago [-]

This Firefox add-on hides most of them: https://addons.mozilla.org/en-US/firefox/addon/i-dont-care-a...

DavideNL(10000) 5 days ago [-]

In The Netherlands the Data Protection Authority announced this month that websites are no longer allowed to block access when people click 'NO' in the cookie warning;

Clicking 'no' should still allow people to view the website, but without placing any tracking cookies.

Source (in Dutch): https://autoriteitpersoonsgegevens.nl/nl/nieuws/websites-moe...

jrochkind1(2380) 5 days ago [-]

Hmm, there are features that one literally can't provide without state (cookies).

I think the real problem here is that the technical feature of cookies providing browser state is a poor proxy for what EU/DPA _really_ wants to regulate, which is privacy-related tracking.

There are tons of sites I've written which use cookies but have no ads and perform no user-tracking whatsoever, not even Google Analytics. It is true that cookies are the _easiest_ (if not the only) way to do the latter; but making 'cookies' the thing that effectively gets 'regulated'... I realize this isn't necessarily the intent of the regulations, but I'm suggesting it's part of what results in the situation troy mentions. There are very few sites that don't use cookies, but there are (or at least could be) more sites that don't track you in a privacy-compromising way; training the user that these are the same thing just creates too much noise for the user to actually make any discernment.

mobilemidget(10000) 5 days ago [-]

DPA can track you for at least 31 days before their log files rotate and get aggregated. It is not via a tracking cookie, but unique enough in my opinion to track you, and that happens without consent.

For a privacy-advocating party, it would be fitting not to log anything and to be very clear about that.

Source: https://autoriteitpersoonsgegevens.nl/nl/over-deze-site/cook...

wsy(10000) 5 days ago [-]

We are now in the middle game of GDPR. Companies whose business model depends on tracking users essentially have now an illegal business model, because practically no user will give informed consent when they are offered the same service without consenting.

So what can these - now shady - companies do? They probe the limits of the law, and try to keep their business model alive as long as they can. We need to wait and see. In my opinion, the most probable development is that European data protection agencies will start to hand out fines. Of course, the shady companies will fight them in court, and of course, they will lose. Then they will retreat a step, and try again with a little bit less intrusion into the user's privacy. Over time, courts will rule, and fines will increase, until the shady companies will give up in EU.

Then EU will essentially become free of tracking networks. It might take a few years, but I think the intermediate annoyance is worth it.

paulcole(4001) 5 days ago [-]

> Clicking 'no' should still allow people to view the website, but without placing any tracking cookies.

No! It doesn't. Viewing a website isn't anyone's god-given right. The tracking cookies are part of the business model. If you don't agree with how a business makes money, stop patronizing them.

If this is serious, look for companies to just block all traffic from The Netherlands. Why even bother dealing with the hassle.

danielfoster(3806) 5 days ago [-]

I haven't read the full decision but I'm always surprised at how little regard European courts have for property rights. If it's my website, I should be able to decide who has access and under what terms.

Don't like cookies? No one is forcing you to visit a particular website.

I also feel like tech companies could adopt an open standard for cookie acceptance preferences in web browsers, but they're afraid to, lest they be forced to deal with even more regulation later.

Wowfunhappy(10000) 5 days ago [-]

I don't understand how this wasn't obvious from the start. GDPR was quite clear that you can't 'punish' users who reject cookies.

plopz(10000) 5 days ago [-]

So websites are supposed to just absorb the cost? That seems like a ridiculous stance.

duxup(10000) 5 days ago [-]

Could, say, Germany decide differently?

GDPR seems like it could evolve quite quickly, and sort of fork a bit here...

Also makes me wonder, do they have to have access to the whole site?

This feels like a war on cookies, and there are bad things done with cookies, but I'm not sure if they're fighting on the right front long term here.

If people simply just say yes all the time / don't know, not sure we're making progress.

PopeDotNinja(4013) 5 days ago [-]

Does that affect the use of session & local storage?

1nverseMtx(10000) 5 days ago [-]

Which doesn't mean they have to provide access for free; they are most likely allowed to ask a fee for that.

From the same website you link [1]: "Does anyone refuse tracking cookies? Then you still need to give this person access to your website or app, for example after payment." (google translated)

[1]: https://autoriteitpersoonsgegevens.nl/nl/onderwerpen/interne...

danra(4029) 5 days ago [-]

What is the significance of this decision? This is already explicitly stated as disallowed in GDPR. Is this just a declaration of intention to enforce the existing law?

dev_dull(10000) 5 days ago [-]

This is honestly what your "regulated internet" looks like. None of the solutions ultimately address the real issue and are done purely for liability purposes.

anonymousab(3921) 5 days ago [-]

It's what it looks like without any enforcement of the spirit of the law, sure.

You'll see malicious or smart-aleck compliance with any rule that a group doesn't agree with or when they feel that it personally spites them.

OscarTheGrinch(10000) 5 days ago [-]

Train people to click away on annoying shit, what could go wrong?

ihuman(2787) 5 days ago [-]

Aren't people already trained to ignore cookie banners? They've been around for years.

underdown(10000) 5 days ago [-]

ideal for clickjacking

Mirioron(10000) 5 days ago [-]

That's how we got pop-up blockers, wasn't it?

bjt2n3904(4021) 5 days ago [-]

FTA he's quoting:

> And the Dutch DPA's guidance makes it clear internet visitors must be asked for permission in advance for any tracking software to be placed — such as third-party tracking cookies; tracking pixels; and browser fingerprinting tech — and that that permission must be freely obtained. Ergo, a free choice must be offered.

Neither cookies, nor tracking pixels, nor browser fingerprinting are software. Your web browser is software. The server side runs software. These are data.

It seems pedantic, but I think it shows that the lawmakers have an underlying misunderstanding of how tech (and the world) works.

To make an analogy, cookies and tracking pixels are akin to license plates. I think the authors of this law thought they were more like cellular GPS beacons.

It's one thing to say, 'no installing a device which actively communicates home on your visitors'. It's quite another to say, 'No remembering your visitor's face unless they tell you it's ok.'

tivert(10000) 5 days ago [-]

>> And the Dutch DPA's guidance makes it clear internet visitors must be asked for permission in advance for any tracking software to be placed — such as third-party tracking cookies; tracking pixels; and browser fingerprinting tech — and that that permission must be freely obtained. Ergo, a free choice must be offered.

> Neither cookies, nor tracking pixels, nor browser fingerprinting are software. Your web browser is software. The server side runs software. These are data.

> It seems pedantic, but I think it shows that the lawmakers have an underlying misunderstanding of how tech (and the world) works.

No, that's just TechCrunch's summary. This is the Dutch DPA's actual guidance: https://autoriteitpersoonsgegevens.nl/nl/nieuws/websites-moe...

It's in Dutch. I would not be surprised if 'software' has a slightly different meaning or connotations than in English.

And even if it doesn't, you don't need a precise command of technical jargon (as a practitioner would use it) to have a good understanding of an area. The meaning of TechCrunch's translation was perfectly clear to me, and better than alternate formulations I can think of that avoid using 'software' to refer to cookies. Maybe they should have just used government jargon and called them 'tracking cybers.'

sk5t(10000) 5 days ago [-]

Tracking cookies, pixels, etc., are implemented by server-side software; perhaps the matter you would like to contend is what the meaning of the word 'placed' is.

cmenge(10000) 5 days ago [-]

> It seems pedantic, but I think it shows that the lawmakers have an underlying misunderstanding of how tech (and the world) works.

In Germany, they actually demanded an 'Internet eraser' (https://www.heise.de/newsticker/meldung/Digitaler-Radiergumm...) [so ridiculous I don't think anyone ever attempted to translate] so content, mostly images, would somehow 'automatically expire'. Never worked, images could be screenshotted, etc. etc. Said this was 'highest standards made in Germany'. Never made sense, never took off.

The US came up with Snapchat.

jrochkind1(2380) 5 days ago [-]

That post reminded me about the https://amiunique.org/ site for seeing how trackable you are with browser fingerprints.

I had remembered that I had installed a 'Random User-Agent' plugin in Chrome, with privacy concerns in mind. Sometimes it sends a user-agent that causes a site to send me a page that can't actually be rendered by my browser, so I have to turn it off on some sites.

But I was curious to see what amiunique.org would make of the various random user-agent strings that the plugin would send it.

Ironically, the plugin seems to break the amiunique.org site; I can only get a white screen or occasionally a spinner forever, unless I disable the 'random user-agent' plugin.

Not sure what to make of that.

kevindqc(10000) 5 days ago [-]

I think it's having problems right now. I don't use anything and it's slow, blank page, 500 errors, etc.





Historical Discussions: Give Me Back My Monolith (March 13, 2019: 851 points)

(856) Give Me Back My Monolith

856 points 5 days ago by zdw in 55th position

www.craigkerstiens.com | Estimated reading time – 4 minutes | comments | anchor

It feels like we're starting to pass the peak of the hype cycle of microservices. It's no longer multiple times a week that we see a blog post of "How I migrated my monolith to 150 services". Now I often hear a bit more of the counter: "I don't hate my monolith, I just care that things stay performant". We've actually seen some migrations from micro-services back to a monolith. When you go from one large application to multiple smaller services there are a number of new things you have to tackle. Here is a rundown of all the things that were simple that you now get to re-visit:

Setup went from intro chem to quantum mechanics

Setting up a basic database and my application with a background process was a pretty defined process. I'd have the readme on Github, and often in an hour or maybe a few I'd be up and running when I started on a new project. Onboarding a new engineer, at least for an initial environment, would be done in the first day. As we ventured into micro-services, onboarding time skyrocketed. Yes, we have docker and orchestration such as K8s these days to help, but the time from start to up and running a K8s cluster just to onboard a new engineer is orders of magnitude larger than we saw a few years ago. For many junior engineers this is a burden that really is unnecessary complexity.

So long for understanding our systems

Let's stay on the junior engineer perspective for just a moment. Back when we had monolithic apps, if you had an error you had a clear stacktrace to see where it originated from and could jump right in and debug. Now we have a service that talks to another service, that queues something on a message bus, that another service processes, and then we have an error. We have to piece together all of these pieces to eventually learn that service A was on version 11 and service Q was expecting version 12 already. This is in contrast to my standard consolidated log, and let's not forget my interactive terminal/debugger for when I wanted to go step by step through the process. Debugging and understanding are now inherently more complicated.

If we can't debug them, maybe we can test them

Continuous integration and continuous delivery are now becoming commonplace. Most new apps I see nowadays automatically build and run their tests with a new PR and require tests to pass and review before check-in. These are great processes to have in place and have been a big shift for a lot of companies. But now to really test my service I have to bring up a complete working version of my application. Remember back to onboarding that new engineer with their 150-service K8s cluster? Well, now we get to teach our CI system how to bring up all those systems to actually test that things are working. That is probably a bit too much effort, so we're just going to test each piece in isolation; I'm sure our specs were good enough that APIs are clean and service failure is isolated and won't impact others.

All the trade-offs are for a good reason. Right?

There are a lot of reasons to migrate to micro-services. I've heard cases for more agility, for scaling your teams, for performance, for a more resilient service. The reality is that we've invested decades into the development practices and tooling around monoliths, while the equivalents for micro-services are still maturing. In my day to day I work with a lot of folks from all different stacks. Usually we're talking about scaling because they're running into the limits of a single-node Postgres database. Most of our conversation focuses on scaling the database.

But in all the conversations I'm fascinated to learn about their architecture. Where are they in their journey to micro-services? It has been an interesting trend to see more and more reactions of "We're happy with our monolithic app." The road to micro-services may work fine for lots, and the benefits may outweigh the bumpy road to get there, but personally give me my monolithic app and a beach somewhere and I'll be happy.




All Comments: [-] | anchor

ivanbakel(10000) 5 days ago [-]

>I'm sure our specs were good enough that APIs are clean and service failure is isolated and won't impact others.

Surely if you're building microservices, this line of thinking would be a failure to stick to the design? If your failures aren't isolated and your APIs aren't well-made, you're just building a monolith with request glue instead of code glue.

I appreciate the point is more that this methodology is difficult to follow through on, but integration tests are a holdover - you can test at endpoints: you should be testing at endpoints! That's the benefit.

ljm(10000) 5 days ago [-]

I've had this feeling in some places that 'SOA' is a bit of a dirty word because it connotes a certain style of systems architect, or working like you do in Java or enterprise-scale PHP.

Many monolithic apps would benefit from a refactoring towards that rather than distributing a call stack across the network. The microservices can come later on if there's a need for it. If nothing else, it'll present a clearer picture of how things fit together when you start enforcing boundaries.

herval(3757) 5 days ago [-]

> If your failures aren't isolated and your APIs aren't well-made, you're just building a monolith with request glue instead of code glue.

That's pretty much every single microservice architecture I've ever seen, and I've seen a lot of them :(

int_19h(10000) 5 days ago [-]

But conversely, how much of the purported benefits of microservices are really the benefits from having well-defined contracts between components? Are microservices mostly a forcing function for good architectural practices, that could be applied equally in a monolithic design (with internal component boundaries) with enough discipline?

avinium(10000) 5 days ago [-]

Can you explain what you mean by 'request glue'?

briandoll(3201) 5 days ago [-]

If you want the 'simple' dev experience of a monolith, but the technical advantages (or just plain reality of your distributed systems) of services-based architectures, Tilt is a really great solution: https://tilt.dev/

It makes both development of services-based apps easier, and the feedback/debugging of those services. No more 'which terminal window do I poke around in to find that error message' problem, for one.

yjftsjthsd-h(10000) 5 days ago [-]

> No more 'which terminal window do I poke around in to find that error message' problem, for one.

What? Just throw everything in syslog/journal, then stream that to an aggregator like logstash. Now you can get all logs from one system with journalctl, and all logs for an environment from kibana.

pantulis(10000) 5 days ago [-]

There were no silver bullets, there aren't and there won't be. IMHO, I'd bet you would never hear a construction contractor say 'give me back my hammer'. The value remains in the choice of the tools and methodology in order to solve a problem.

Of course the author's point of view is totally valid, and so the microservices trend is also valid, and so are solutions in-between. One size won't fit everyone, and as with anything, going blindly for any one solution can cause trouble.

al2o3cr(10000) 5 days ago [-]

    IMHO, I'd bet you would never hear a construction contractor say 'give me back my hammer'.
I'd bet they'd say it if half the construction industry had decided that using wood was 'not webscale' and switched to using carbon fiber for everything, even where it was inappropriate and made things difficult.
groestl(10000) 5 days ago [-]

We've used a monolithic microservice architecture before and were happy enough about it. The application was basically structured in microservices, but developed in a single project (monorepo and all) and the build produced a single build artifact. At deployment time, configuration decided what set of services the monolith would boot and expose.

Probably not for everyone (i.e. polyglot is hardly possible and it takes a lot of discipline to avoid a hairy ball of interdependencies), but it scales in ops complexity from very small setups to large ones, when needed.
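
A minimal Go sketch of that deployment style (service names, ports and the config mechanism are invented for illustration): one binary contains every service, and deployment-time configuration decides which subset an instance boots.

    package main

    import (
        "log"
        "net/http"
        "os"
        "strings"
    )

    // Every "microservice" lives in the same binary and build artifact.
    var services = map[string]func() error{
        "users":   func() error { return http.ListenAndServe(":7001", nil) },
        "catalog": func() error { return http.ListenAndServe(":7002", nil) },
        "billing": func() error { return http.ListenAndServe(":7003", nil) },
    }

    func main() {
        // e.g. SERVICES=users,catalog on one box, SERVICES=billing on another,
        // or all three for a small single-node setup.
        for _, name := range strings.Split(os.Getenv("SERVICES"), ",") {
            run, ok := services[name]
            if !ok {
                continue
            }
            n, r := name, run // avoid loop-variable capture on older Go
            go func() { log.Fatal(n, ": ", r()) }()
            log.Println("booted", n)
        }
        select {} // keep the process alive while the enabled services run
    }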

int_19h(10000) 5 days ago [-]

This sounds a lot like what traditional Unix apps would do with fork().

metaphyze(10000) 5 days ago [-]

I'd like to point out that microservices are not always as cheap as you may think. In the AWS/Lambda case, what will probably bite you is the API Gateway costs. Sure, they give you 1,000,000 calls for free, but it's $3.50 per million after that. That can get very expensive, very quickly. See this Hacker News post from a couple years ago. The author's complaint is still valid: 'The API gateway seems quite expensive to me. I guess it has its use cases and mine doesn't fit into it. I run a free API www.macvendors.com that handles around 225 million requests per month. It's super simple and has no authentication or anything, but I'm also able to run it on a $20/m VPS. Looks like API gateway would be $750+data. Bummer because the ecosystem around it looks great. You certainly pay for it though!'

https://news.ycombinator.com/item?id=13418332
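
For concreteness, a back-of-the-envelope version of that comparison, using only the numbers quoted above (a real API Gateway bill would also add data transfer):

    package main

    import "fmt"

    func main() {
        const (
            requestsPerMonth = 225_000_000
            freeTier         = 1_000_000
            pricePerMillion  = 3.50 // USD, the quoted API Gateway rate
            vpsCost          = 20.0 // USD/month, the quoted VPS
        )
        billableMillions := float64(requestsPerMonth-freeTier) / 1_000_000
        gatewayCost := billableMillions * pricePerMillion
        // Prints roughly: API Gateway ~$784/month vs VPS $20/month (39x)
        fmt.Printf("API Gateway ~$%.0f/month vs VPS $%.0f/month (%.0fx)\n",
            gatewayCost, vpsCost, gatewayCost/vpsCost)
    }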

JMTQp8lwXL(10000) 5 days ago [-]

The $750/month is well worth it to organizations with billions in revenue, wishing to protect user data. Better to route all traffic through an API gateway than to expose all of your microservices on the public internet.

Everyone has to communicate through the API gateway. Then, you get a single point where things are easily auditable.

It has a lot of benefits that apply to business use cases. Your free API may not have as strict requirements.

013a(10000) 5 days ago [-]

Worth saying: Now that ALBs support Lambda as a backend, reaching for APIG w/ a lambda proxy makes less sense, unless you're actually using a lot of the value-adds (like request validation/parsing and authn). Most setups of APIG+Lambda I've seen don't do this, and prefer to just Proxy it; use an ALB instead.

ALB pricing is a little strange thanks to the $5.76/mo/LCU cost and the differentiation between new connections and active connections. The days are LONG GONE when AWS just charged you for 'how much you use', and many of their new products (Dynamo, Aurora Serverless, ALB) are moving toward a crazy 'compute unit' architecture five abstraction layers behind units that make sense.

But it should be cheaper; back of the napkin math, 225M req/month is about 100RPS averaged, which can be met with maybe 5 LCUs on an ALB. So total cost would be somewhere in the ballpark of $60/month, plus the cost of lambda which would probably be around $100/month.

Is it cheaper than a VPS? Hell no. Serverless never is. But is it worth it? Depends on your business.
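
A sketch of that napkin math in Go. The $5.76/LCU-month figure is from the comment above; the 25-new-connections-per-second LCU dimension and the ~$0.0225/hour ALB base rate are my assumptions from AWS's published pricing, and treating every request as a new connection is the pessimistic case:

    package main

    import "fmt"

    func main() {
        const (
            requestsPerMonth = 225_000_000.0
            secondsPerMonth  = 30 * 24 * 3600.0
            newConnsPerLCU   = 25.0   // assumed ALB LCU dimension
            lcuMonthUSD      = 5.76   // from the comment above
            albBaseHourUSD   = 0.0225 // assumed ALB base hourly rate
        )
        rps := requestsPerMonth / secondsPerMonth // ~87, the "about 100" above
        lcus := rps / newConnsPerLCU              // ~3.5 if every request is a new connection
        cost := albBaseHourUSD*720 + lcus*lcuMonthUSD
        // Lands around $36/month: the same ballpark as the ~$60 estimate
        // above, which rounds up to 5 LCUs.
        fmt.Printf("%.0f RPS, %.1f LCUs, ALB ~$%.0f/month\n", rps, lcus, cost)
    }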

todd3834(3596) 5 days ago [-]

I've never used API gateway outside of quick prototype tests to access lambda. $750 per month doesn't sound like a lot of money if you have 225 million requests per month. A free API is probably an exception, but I do realize why that would be too expensive for your use case.

zhobbs(10000) 5 days ago [-]

Probably cheaper to invoke the lambda functions from Cloudflare workers.

staticassertion(10000) 5 days ago [-]

I've built my personal side project as microservices. I started with an initial POC in Python and then I had a clear vision for what services to build.

https://github.com/insanitybit/grapl

> I'd have the readme on Github, and often in an hour or maybe a few I'd be up and running when I started on a new project.

I can deploy all of my services with one command. It's trivial - and I can often just deploy the small bit that I want to.

I don't use K8s or anything like that. Just AWS Lambdas and SQS based event triggers.

One thing I found was that by defining what a 'service' was upfront, I made life a lot easier. I don't have snowflakes - everything uses the same service abstraction, with only one or two small caveats.

I don't imagine a Junior developer would have a hard time with this - I'd just show them the service abstraction (it exists in code using AWS-CDK)[0].

> This in contrast to my standard consolidated log, and lets not forget my interactive terminal/debugger for when I wanted to go step by step through the process.

It's true, distributed logging is inherently more complex. I haven't run into major issues with this myself. Correlation IDs go a really long way.

Due to serverless I can't just drop into a debugger though - that's annoying if you need to. But also, I've never needed to.

> But now to really test my service I have to bring up a complete working version of my application.

I have never seen this as necessary. You just mock out service dependencies like you would a DB or anything else. I don't see this as a meaningful regression tbh.

> That is probably a bit too much effort so we're just going to test each piece in isolation, I'm sure our specs were good enough that APIs are clean and service failure is isolated and won't impact others.

Honestly, enforcing failure isolation is trivial. Avoid synchronous communication like the plague. My services all communicate via async events - if a service fails the events just queue up. The interface is just a protobuf defined dataformat (which is, incidentally, one of the only pieces of shared code across the services).

Honestly, I didn't find the road to microservices particularly bumpy. I had to invest early on in ensuring I had deployment scripts and the ability to run local tests. That was about it.

I'm quite glad I started with microservices. I've been able to think about services in isolation, without ever worrying about accidental coupling or accidentally having shared state. Failure isolation and scale isolation are not small things that I'd be happy to throw away.

My project is very exploratory - things have evolved over time. Having boundaries has allowed me to isolate complexity and it's been extremely easy to rewrite small services as my requirements and vision change. I don't think this would have been easy in a monolith at all.

I think I'm likely going to combine two of my microservices - I split up two areas early on, only to realize later that they're not truly isolated components. Merging microservices seems radically simpler than splitting them, so I'm unconcerned about this - I can put it off for a very long time and I still suspect it will be easy to merge. I intend to perform a rewrite of one of them before the merge anyways.

I've suffered quite a lot from distributed monolith setups. I'm not likely to jump into one again if I can help it.

[0] https://github.com/insanitybit/grapl/blob/master/grapl-cdk/i...
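
As a concrete illustration of the async-events point, a minimal sketch using the aws-sdk-go v1 SQS client (the queue URL and event shape are invented, and JSON stands in for the protobuf format mentioned above):

    package main

    import (
        "encoding/json"
        "log"

        "github.com/aws/aws-sdk-go/aws"
        "github.com/aws/aws-sdk-go/aws/session"
        "github.com/aws/aws-sdk-go/service/sqs"
    )

    // Event is the envelope every service emits; the correlation ID is what
    // lets you stitch one logical operation back together across logs.
    type Event struct {
        CorrelationID string `json:"correlation_id"`
        Kind          string `json:"kind"`
        Payload       []byte `json:"payload"`
    }

    func emit(q *sqs.SQS, queueURL string, e Event) error {
        body, err := json.Marshal(e)
        if err != nil {
            return err
        }
        // Fire-and-forget: if the consuming service is down, events queue up
        // instead of failing the caller (the failure-isolation argument above).
        _, err = q.SendMessage(&sqs.SendMessageInput{
            QueueUrl:    aws.String(queueURL),
            MessageBody: aws.String(string(body)),
        })
        return err
    }

    func main() {
        q := sqs.New(session.Must(session.NewSession()))
        e := Event{CorrelationID: "req-123", Kind: "node-identified"}
        if err := emit(q, "https://sqs.us-east-1.amazonaws.com/123456789012/example", e); err != nil {
            log.Fatal(err)
        }
    }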

jcims(10000) 5 days ago [-]

Grapl looks quite interesting. I'm looking for something similar for public cloud (e.g. cloudtrail+config+?? for building graph+events). Is there a general pattern you employ for creating the temporal relationship between events? E.g. Word executing a subprocess and then making a connection to some external service. Just timestamp them, or is there something else?

purrcat259(10000) 5 days ago [-]

Scaling the database link unfortunately 404s. Would love to read the accompanying blog post.

craigkerstiens(65) 5 days ago [-]

Thanks for the catch, should be updated now.

faizshah(3904) 5 days ago [-]

Check out these two articles from Shopify on their Rails monolith: https://engineering.shopify.com/blogs/engineering/deconstruc...

https://engineering.shopify.com/blogs/engineering/e-commerce...

Specifically relevant to the discussion is this passage:

> However, if an application reaches a certain scale or the team building it reaches a certain scale, it will eventually outgrow monolithic architecture. This occurred at Shopify in 2016 and was evident by the constantly increasing challenge of building and testing new features. Specifically, a couple of things served as tripwires for us.

> The application was extremely fragile with new code having unexpected repercussions. Making a seemingly innocuous change could trigger a cascade of unrelated test failures. For example, if the code that calculates our shipping rate called into the code that calculates tax rates, then making changes to how we calculate tax rates could affect the outcome of shipping rate calculations, but it might not be obvious why. This was a result of high coupling and a lack of boundaries, which also resulted in tests that were difficult to write, and very slow to run on CI.

> Developing in Shopify required a lot of context to make seemingly simple changes. When new Shopifolk onboarded and got to know the codebase, the amount of information they needed to take in before becoming effective was massive. For example, a new developer who joined the shipping team should only need to understand the implementation of the shipping business logic before they can start building. However, the reality was that they would also need to understand how orders are created, how we process payments, and much more since everything was so intertwined. That's too much knowledge for an individual to have to hold in their head just to ship their first feature. Complex monolithic applications result in steep learning curves.

> All of the issues we experienced were a direct result of a lack of boundaries between distinct functionality in our code. It was clear that we needed to decrease the coupling between different domains, but the question was how

I've tried a new approach at hackathons where I build a Rails monolith that calls serverless cloud functions. So collaborators can write cloud functions in their language of choice to implement functionality and the Rails monolith integrates their code into the main app. I wonder how this approach would fare for a medium sized codebase.

blt(3707) 5 days ago [-]

shopify's problem can be fixed without microservices by writing modular code. The monolith should be structured as a set of libraries. I find it so strange, the way these microservice debates always assume that any codebase running in a single process is necessarily spaghetti-structured. The microservice architecture seems to mainly function as a way to impose discipline on programmers who lack self-discipline.

acdha(3560) 5 days ago [-]

> All of the issues we experienced were a direct result of a lack of boundaries between distinct functionality in our code

This is the key lesson to learn: if you are struggling to have clear separation of responsibilities, you are going to have a bad time with either approach. To the extent that a replacement system ends up being better it's probably due to having been more conscious about that issue.

JMTQp8lwXL(10000) 5 days ago [-]

Microservices at least force people to draw a line in the sand between subsystems/services. How effective or useful the lines you draw are, that's up to the skill of the engineers building your stuff.

I'm not saying microservices are better, but people should really take more serious considerations between the boundaries between subsystems. Because it's so easy to create exceptions, and things end up infinitely more complex in the grand scheme of things.

Clear, well-defined boundaries matter. It's the only way a developer can focus on a small part of a problem, and become an expert at working on that subsystem without needing greater context.

NicoJuicy(413) 5 days ago [-]

1400 people work on Visual Studio, no microservices possible.

Modular code

jwr(3610) 5 days ago [-]

I am very happy with my monolith. I've been watching the K8s craze with amusement.

I will be splitting off pieces of my monolith soon, but docker-compose is a very reasonable compromise for running stuff, and the pieces I'm splitting off are for aggregation and background computation, so not really micro-services at all.

mooreds(166) 5 days ago [-]

I worked for a number of years on a large webapp. It talked to a couple of databases and used them as a bus. There were a number of other back end processes that read and wrote to the database. Not sexy, but solid.

discobean(10000) 5 days ago [-]

Microservices are just small monoliths

externalreality(10000) 5 days ago [-]

I agree. The popularity of Microservices stems from messy large systems. So why not just have messy small systems instead.

Why is that people believe they need a Microservice architecture in the first place? None of the benefits of Microservices are absent in a carefully designed monolith.

If we are not going to give up our frenetic rapid development practices then we just need tools that help us move fast while keeping code understandable. Maybe we just need higher level languages where the machine can just keep track of all the details from extremely high level specifications. Software is too hard for humans.

sheeshkebab(10000) 5 days ago [-]

The author is doing it wrong - they don't need to run a local k8s cluster with 150 services - that is the monolith way, and they should have stayed with a monolith if they want to do this.

Microservices require quite a bit of dev setup to get right, but often it comes down to being able to run a service locally against a dev environment that has all those 150 other microservices already running.

Queues are set up so they can be routed to your local workstation, the local UI should have the ability to proxy to the UI running in dev (so that you don't run the entire amazon.com or such locally), deployments to dev have to be all automated and largely lights-out, and so on... it takes a bit of time to get these dev things right, but it doesn't require running the entire environment locally just to write a few lines of code.

Debugging and logging/tracing are an issue - but these days there are some pretty good solutions to that too - Splunk works quite well, and saves a lot of time tracking issues down.

kevindqc(10000) 5 days ago [-]

For tracing, I tried Jaeger recently and it looks promising! https://www.jaegertracing.io/

ChicagoDave(3992) 5 days ago [-]

I've noticed a large difference in opinion between Eurocentric and America-centric architecture. The U.S. seems to favor ivory-tower, RDBMS-centric systems, while Europe is headlong into domain-driven design, event storming, serverless, and event-driven architectures.

Monolithic design is fine for simple systems, but as complexity and scale increase, so do the associated costs.

I'm currently using DDD, micro services, and public cloud because complex systems are better served that way.

ivalm(10000) 5 days ago [-]

I mean, if you can engineer a simple system you're better off than making a complex system. I think, as many mentioned, the main advantage of microservices is that they (a) force people to have boundaries and (b) are conceptually easier to scale (because the 'bespoke' part of the architecture needs to only do simple things).

paulddraper(3866) 5 days ago [-]

> Monolithic design is fine for simple systems

Most systems are 'simple'. Or mostly simple.

What's the saying? It should be as simple as possible (but no simpler).

jrochkind1(2380) 5 days ago [-]

Hmmmmm, what makes you say 'domain-driven design, event storming, serverless, and event driven architectures' is less 'ivory tower'?

'ivory tower' to me means academic, theoretical, 'interesting', 'pure', vs on the other end of pragmatic, practical, get-it-done, whatever-works, maybe messy. (either end of the spectrum has plusses and minuses).

'DDD, event storming, event driven architectures' don't sound... not 'ivory tower' to me. :) Then again, I am a U.S. developer!

jcims(10000) 5 days ago [-]

Are you sure the alignment is continental across the board? Are you talking a specific industry?

amluto(3746) 5 days ago [-]

I work on a project that is somewhere in the middle. We have one repo that builds some microservices. We deploy them like a monolith, though. We have absolutely no compatibility between microservices built from different versions of the repo, and we have some nice tooling to debug the communication.

And we have a little script that fires up a testable instance of the whole shebang, from scratch, and can even tear everything down afterwards. And, through the magic of config files and AF_UNIX, you can run more than one copy of this script from the same source tree at the same time!

(This means we can use protobuf without worrying about proper backwards/forwards compat. It's delightful.)
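
A sketch of what such a harness can look like in Go (binary paths and the --socket-dir flag are invented): each run gets its own socket directory, which is what lets multiple copies run from one source tree at the same time.

    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // Per-run directory: AF_UNIX sockets live here, so two concurrent
        // runs never collide on addresses the way fixed TCP ports would.
        dir, err := os.MkdirTemp("", "stack-*")
        if err != nil {
            log.Fatal(err)
        }
        defer os.RemoveAll(dir) // tear everything down afterwards

        var procs []*exec.Cmd
        for _, svc := range []string{"./bin/frontend", "./bin/worker"} {
            cmd := exec.Command(svc, "--socket-dir", dir)
            cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
            if err := cmd.Start(); err != nil {
                log.Fatal(err)
            }
            procs = append(procs, cmd)
        }
        // ... run tests against the stack, then shoot everything:
        for _, p := range procs {
            p.Process.Kill()
        }
    }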

JohnBooty(10000) 5 days ago [-]

I worked at a company where we did something similar to that once. It was a nice compromise.

It was a Rails monolith; one of the larger ones in the world to the best of our knowledge. We (long story greatly shortened) split it up into about ten separate Rails applications. Each had their own test suite, dependencies, etc.

However, they lived in a common monorepo and were deployed as a single monolith.

This retained some of the downsides of a Rails monolith -- for example each instance of the app was fat and consumed a lot of memory. However, the upside was that the pseudo-monolith had fairly clear internal boundaries and multiple dev teams could more easily work in parallel without stepping on each other's toes.

sterlind(10000) 5 days ago [-]

My current project does something similar. There's a single hierarchical, consistent, scalable database, and a library of distributed systems primitives that implement everything, from RPC to leader election to map/reduce, through database calls.

All other services are stateless. I just shoot the thing and redeploy, and it only costs me an acceptable few seconds of downtime.

vasilipupkin(3983) 5 days ago [-]

how can these types of discussions be held in the abstract? the number of components or services, micro or otherwise, should depend on the specific application needs.

jasonm23(4011) 4 days ago [-]

Because patterns replace thinking in too many corners of our world

t0astbread(10000) 5 days ago [-]

Every time I read a pro-monolith article it's just 'oh you don't need a microservice arch, monoliths are simpler' and every time I read a microservice article it's 'microservices are more scalable' and both claims sound valid to me.

Yet I never see anyone talking about how we could combine the two to get the best of both worlds. It's always just microservices vs monoliths. (Similar things are happening in the frontend community with JS vs. no-JS debates.)

kaidax(10000) 4 days ago [-]

Actually, people do think about combining both approaches, see the Roles concept - https://github.com/7mind/slides/raw/master/02-roles/target/r...

Nasrudith(10000) 5 days ago [-]

I agree that when it comes to 'arrangement' aspects like this arguing one or the other misses the point and seems more like fetishism in the 'shamanic' sense. Engineering involves trade-offs and one may fit for one task or environment but not the other. It could be the proverbial square wheels that look horrible but actually work perfectly for its niche.

One would be rightfully considered batty for trying to do /everything/ recursively for the sake of it, while dogmatically avoiding recursion by creating massively multi-dimensional arrays would also be considered insane.

(Ironically I must disagree with client-side JS as something to be avoided whenever possible but that is over concrete concerns of trust, bloat, and abuse where 'but we can't do that then' is greeted with 'mission accomplished'. If it is locked away server-side I particularly don't care if you use assembly code or a lolcat based language.)

mlthoughts2018(3498) 5 days ago [-]

I cannot comprehend how someone could believe monoliths are simpler. It sounds like someone is drastically confused about the difference in kind that exists between the inherent coupling of monolith / monorepo systems and the utterly superficial overhead of configuration and individual tooling of microservices / polyrepo.

Having worked on many examples of both Fortune 500 monoliths and start-up scale monoliths, I feel confident saying monoliths just fail, hands down, at all these scales.

santoshalper(10000) 5 days ago [-]

Microservices is a fad, and a poorly named one at that. SOLID principles and loose coupling are a foundation for long-term design.

asaph(3647) 5 days ago [-]

Poorly named? I happen to think microservices succinctly describes what they are: small services each focused on a single task or area, and assembled together to form a whole system.

NicoJuicy(413) 5 days ago [-]

I've seen a lot of comments here about microservices.

At work we are transforming also, so I'm in the process of setting up a personal environment for it.

I'm also joining a hackerspace and pitching for it next week (hands-on learning).

About the architecture, not much made 'sense' in practice until I encountered Akka, which uses the Actor model for creating microservices.

It seems like a much better approach than everything I learned elsewhere.

Does anyone already have experience with it? (P.S. Akka.NET exists as well.)

quasar_ken(10000) 5 days ago [-]

I use elixir, same thing. The erlang VM is very powerful and makes separation of concerns easy. Splitting an app apart is hard because you get boundaries wrong, but there is no way to scale without adding more complexity somewhere.

jaequery(2924) 5 days ago [-]

This is a never ending cycle

davidw(206) 5 days ago [-]

I feel we're about due for another round of

'It makes programming so easy that anyone could do it because it's basically like writing English!'

ascendantlogic(10000) 5 days ago [-]

The first part of the hype cycle is 'I have a hammer and now everything is a nail'. The second part of the hype cycle is 'I need to hammer some nails but I'm tired of hearing about how great hammers are'.

jrootabega(10000) 5 days ago [-]

When all you have is a hammer you spend a lot of time on hacker news reading about everybody else's hammers

weberc2(3998) 5 days ago [-]

So how do you scale your monolith? Just run more instances even when your few interesting routes are the primary bottlenecks?

mooreds(166) 5 days ago [-]

Exactly. If the option is more servers on one hand, and servers plus k8s plus specialized skills plus additional deployment and development complexity on the other, I know which one I'd choose.

40acres(3582) 5 days ago [-]

I'll be honest, I don't understand the difference between what defines a monolith vs. a microservice. My 'organization' is about 15 developers, and we all contribute to the same repo.

Visually the software we provide can be conceptually broken apart into three major sections, and share the same utility code (stuff like command line parsing, networking, environment stuff, data structures).

Certain sections are very deep technically, others are lightweight modules that serve as APIs to more complex code. Every 'service' can be imported by another 'service' because it's all just a Python module. Also, a lot of our 'services' are user facing, but perform a specialized task in an 'assembly line' way. A user may run process A, which is a pre-requisite to process B, but may pass off the execution of process B to a co-worker.

Is this a microservice or a monolith?

jacquesm(42) 5 days ago [-]

Microservices are vertically integrated, they have their own endpoints, storage and logic and do not connect horizontally to other microservices.

A monolith does not have any such restrictions, data structures are shared and a hit on one endpoint can easily end up calling functions all over the codebase.

Spearchucker(4006) 5 days ago [-]

A monolith is a big, often stateful app. An insurance quotation web site, for example. A micro service is a discrete and often stateless service that can be re-used by both the quotation web site and an underwriting web site. A service that looks up financial advisor commission rates, for example. Another good use for a micro service is logging.

iambvk(10000) 5 days ago [-]

To me personally, it is not monolith vs. microservice that bothers me, but stateful vs. stateless services.

If a service can't assume local state, it creates unnecessary design overhead. For example, you cannot achieve exactly-once semantics between two services without local state. If you replace local state with message queues, you just turned a 1-network-1-disk op into a 5-network-3-disk op and introduced loads of other problems.
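
To make the local-state point concrete, here is a minimal Go sketch of effectively-once processing with a local dedupe table (an in-memory map stands in for a local disk store; names are illustrative). Queues typically guarantee at-least-once delivery, so without state like this next to the consumer, a redelivered message triggers its side effect twice:

    package main

    import (
        "fmt"
        "sync"
    )

    // Processor applies each message at most once by remembering message
    // IDs in state local to the consumer.
    type Processor struct {
        mu   sync.Mutex
        seen map[string]bool // in-memory here; a local disk store in practice
    }

    func (p *Processor) Handle(msgID, body string) {
        p.mu.Lock()
        defer p.mu.Unlock()
        if p.seen[msgID] {
            return // duplicate delivery: skip the side effect
        }
        p.seen[msgID] = true
        fmt.Println("processing:", body) // the side effect we must not repeat
    }

    func main() {
        p := &Processor{seen: map[string]bool{}}
        p.Handle("m1", "charge customer") // processed
        p.Handle("m1", "charge customer") // redelivered: ignored
    }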

linkmotif(3622) 5 days ago [-]

What do you think of Kafka Streams?

cortesoft(10000) 5 days ago [-]

If you are relying on local state, you can never scale to more than one machine.

padobson(3862) 5 days ago [-]

I don't think I blame the author at all. I'm not sure why you would start with microservices, unless you wanted to show that you could build a microservices application. Monoliths are quicker and easier to setup when you're talking about a small service in the first place.

It's when an organization grows and the software grows and the monolith starts to get unwieldy that it makes sense to go to microservices. It's then that the advantage of microservices both at the engineering and organizational level really helps.

A team of three engineers orchestrating 25 microservices sounds insane to me. A team of thirty turning one monolith into 10 microservices and splitting into 10 teams of three, each responsible for maintaining one service, is the scenario you want for microservices.

cortesoft(10000) 5 days ago [-]

I really hate the term 'microservice', because it carries the implication that each service should be really small. In reality, I think the best approach is to choose good boundaries for your services, regardless of the size.

People forget the original 'microservice': the database. No one thinks about it as adding the complexity of other 'services' because the boundaries of the service are so well defined and functional.

kakwa_(4002) 5 days ago [-]

Microservices are interesting.

Not technically, as they increase complexity.

But they enable something really powerful: continuity of means and continuity of responsibility, so that a small team has full ownership of developing AND operating a piece of a solution.

Basically, organizations tend to be quite efficient when dealing with small teams (about a dozen people, the pizza rule and everything); that way information flows easily, with point-to-point communication and no need for a coordinator.

However, with such an architecture, greater emphasis should be put on interfaces (aka APIs). A detailed contract must be written (or even set as a policy):

* how long will the API remain stable?

* how will it be deprecated? with a Vn and Vn-1 scheme?

* how is it documented?

* what are the limitations? (performance, call rates, etc)?

If you don't believe me, just read 'Military-Standard-498'. Say what you will about military standards, but military organizations, as people who have been specifying, ordering and operating complex systems for decades, know a thing or two about managing complex systems. And interfaces have a good place in their documentation corpus, with the IRS (Interface Requirements Specification) and IDD (Interface Design Description) documents. Keep in mind this MIL-STD is from 1994.

pbreit(2350) 5 days ago [-]

I've worked at 2 companies with monoliths that had great products and tremendous business success.

And 3 companies with micro service infrastructures that had lousy products and little business success.

Can't totally blame microservices but I recall a distinctly slower and more complicated dev cycle.

These were mostly newer companies where micro services make even less sense and improving product and gaining users is king.

bunderbunder(3531) 5 days ago [-]

It might depend a bit on how you scope it, too.

I once worked at a company where a team of 3 produced way more than 25 microservices. But the trick was, they were all running off the same binary, just with slightly different configurations. Doing it that way gave the ops team the ability to isolate different business processes that relied on that functionality, in order to limit the scale of outages. Canary releases, too.

It's 3 developers in charge of 25 different services all talking to each other over REST that sounds awful to me. What's that even getting you? Maybe if you're the kind of person who thinks that double-checking HTTP status codes and validating JSON is actually fun...

linkmotif(3622) 4 days ago [-]

You start with microservices when you realize that including the Elasticsearch API in your jar causes dependency conflicts that are not easy to resolve.

pjmlp(363) 5 days ago [-]

Even then, that is what libraries are for.

kabes(3996) 4 days ago [-]

If your monolith grows unwieldy, you have a problem with your code structure which microservices won't solve. As we all know, you need well-isolated, modular code with well-defined boundaries. You can achieve this just as well in a monolith (and you can also achieve total spaghetti code between microservices).

Microservices is a deployment choice. It's the choice to talk between the isolated parts with RPCs instead of local function calls.

So are there no reasons to have multiple services? No, there are reasons, but since it's about deployments, the reasons are related to deployment factors. E.g. if you have a subsystem that needs to run in a different environment, or a subsystem that has different performance/scalability requirements, etc.

tylerl(10000) 5 days ago [-]

Well, THERE'S your problem!

If you are doing http/json between microservices then you are definitely holding it wrong.

Do yourself a favor and use protobuf/grpc. It exists specifically for this purpose, specifically because what you're doing is bad for your own health.

Or Avro, or Thrift, or whatever. Same thing. Since Google took forever to open source grpc, every time their engineers left to modernize some other tech company, Facebook or Twitter or whatever, they'd reimplement proto/stubby at their new gig. Because it's literally the only way to solve this problem.

So use whatever incarnation you like.. you have options. But json/http isn't one of them. The problem goes way deeper than serialization efficiency.

(edit: d'oh! Replied to the wrong comment. Aw well, the advice is still sound.)

skohan(10000) 5 days ago [-]

I also think designing in the microservices mindset (i.e. loose coupling, separable, dependency free architecture) is something which can be done on a continuum, and there's not a strict dichotomy between The Monolith and Microservices(tm).

Even if you're working on an early prototype which fits into a handful of source files, it can be useful to organize your application in terms of parallel, independent pieces long before it becomes necessary to enforce that separation on an infrastructure/dev-ops level.

Cthulhu_(10000) 5 days ago [-]

10 teams of 3 each owning their own little slice of the pie sounds like an organizational nightmare; mostly, you can't keep each team fully occupied with just that one service; that's not how it works. And any task that touches more than one microservice will involve a lot of overhead with teams coordinating.

While I do feel like one team should hold ownership of a service, they should also be working on others and be open to contributions - like the open source model.

Finally, going from a monolith to 10 services sounds like a bad idea. I'd get some metrics first, see what component of the monolith would benefit the most (in terms of overall application performance) from being extracted and (for example) rewritten in a more specialized language.

If you can't prove with numbers that you need to migrate to a microservices architecture (or: split up your application), then don't do it. If it's not about performance, you've got an organizational problem, and trying to solve it with a technical solution is not fixing the problem, only adding more.

IMO, etc.

dunk010(3632) 4 days ago [-]

That's just 'services', though, and it's been the way that people have been building software for a very long time. I can attest to have done this in 2007 at a large website, which was at least 7 years before the 'microservices' hype picked up (https://trends.google.com/trends/explore?date=all&q=microser...). When people say 'microservices' they're referring to the model of many more services than what you describe, and the associated infrastructure to manage them.

mirkules(4022) 5 days ago [-]

We've done exactly this - gone from a team of 15 engineers managing one giant monolith to two teams managing about 10 or so microservices (docker + kubernetes, OpenAPI + light4j framework).

Even though we are in the early stages of redesign, I'm already seeing some drawbacks and challenges that just didn't exist before:

- Performance. Each of the services talks to the other services via a well-defined JSON interface (OpenAPI/Swagger yaml definitions). This sounds good in theory, but parsing JSON and then serializing it N times has a real performance cost (see the sketch after this comment). In a giant "monolith" (in the Java world) EJBs talked to each other, which, despite being Java-only (in practice), was relatively fast and could work across web app containers. In hindsight, it was probably a bad decision to JSON-ize all the things (maybe another protocol?)

- Management of 10-ish repositories and build jobs. We have Jenkins for our semi-automatic CI. We also have our microservices in a hierarchy, all depending on a common parent microservice. So naturally, branching, building and testing across all these different microservices is difficult. Imagine having to roll back a commit, then having to find the equivalent commit in the two other parent services, then rolling back the horizontal services to the equivalent commit, some with different commit hooks tied to different JIRA boards. Not fun.

- Authentication/Authorization also becomes challenging since every microservice needs to be auth-aware.

As I said we are still early in this, so it is hard to say if we reduced our footprint/increased productivity in a measurable way, but at least I can identify the pitfalls at this point.
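
On the serialization point, the overhead is easy to measure directly; a rough Go sketch with a toy payload (not the commenter's actual schema):

    package main

    import (
        "encoding/json"
        "fmt"
        "time"
    )

    type Order struct {
        ID    string   `json:"id"`
        Items []string `json:"items"`
        Total float64  `json:"total"`
    }

    func main() {
        o := Order{ID: "o-1", Items: []string{"a", "b", "c"}, Total: 9.99}
        const n = 100_000
        start := time.Now()
        for i := 0; i < n; i++ {
            b, _ := json.Marshal(o) // serialize on the way out of one service...
            var back Order
            json.Unmarshal(b, &back) // ...parse again on the way into the next
        }
        // Multiply the per-hop cost by the number of service hops per request
        // to see the cumulative tax that an in-process call (or a binary
        // protocol) would avoid.
        fmt.Printf("~%v per JSON round-trip\n", time.Since(start)/n)
    }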

boomlinde(10000) 5 days ago [-]

While there are cases where I think microservices make it easier to scale an application across multiple hosts, I don't understand the organizational benefits compared to just using modules/packages within a monolith. IMO a team that makes an organizational mess of a monolith and makes it grow unwieldy will likely repeat that mistake with a microservice oriented design.

darkerside(10000) 5 days ago [-]

And then you pray to whatever God you believe in that you happened to get those 10 abstractions just right!

ragona(10000) 5 days ago [-]

The definition of "micro" appears to be hugely variable! If you'd asked me I'd say that sure, my last team definitely built microservices. A team of around 10 engineers built and maintained something like 3 services for a product launch, each with a very different purpose, and over time we added a couple more. Three people maintaining 25 services sounds absolutely bonkers to me.

stcredzero(3023) 5 days ago [-]

Funny, but we saw a debate around monolithic codebases and the monolithic image in Smalltalk.

> A team of three engineers orchestrating 25 microservices sounds insane to me. A team of thirty turning one monolith into 10 microservices and splitting into 10 teams of three, each responsible for maintaining one service, is the scenario you want for microservices.

A team size of 10 should be able to move fast and do amazing things. This has been the common wisdom for decades. Get larger, then you spend too much time communicating. There's a reason why Conway's Law exists.

https://en.wikipedia.org/wiki/Conway%27s_law

ataturk(10000) 4 days ago [-]

Every place I have worked that had a monolith sucked. It has only been in the last two years as microservices have been rolling out with containerized platforms like OSE and Kubernetes that my life has gotten better.

I was in the same boat about a dozen years ago when I was doing a lot of UI work in Javascript and hating it and then jQuery came out and saved my career. Nobody thinks much of jQuery today, but back then it was such a breath of fresh air.

I feel that way about Kubernetes right now - my hands were so tied by 'operations' people gatekeeping, and having been mostly a back-end developer for many years, I felt like I was between a rock and a hard place until DevOps finally broke loose.

alexk(2944) 5 days ago [-]

I think that microservices are just a deployment model of the service boundary, and there should not really be a distinction between whether something is deployed as a microservice or a monolith, because an application should support both for the scenarios where each makes sense.

Consider the following API:

  UsersService:
   CreateUser
   GetUser
  AppCatalog:
   GetApp
   CreateApp
What if AppCatalog and UsersService implement both a local version of the interface and a gRPC one? Then the distinction between a microservice and a monolith goes away; it becomes a matter of whether they are deployed in a single Linux process or across the boundaries of processes/servers.

I have implemented this technique in teleport:

https://github.com/gravitational/teleport/tree/master/lib/se...

Integration test suite is run against RPC version and local version at the same time to make sure the contract remains the same:

https://github.com/gravitational/teleport/blob/master/lib/se...

A single teleport binary can be deployed on one server with all microservices, or across multiple servers in cluster scenarios,

where the binary is simply instantiated with different roles:

  auth_service:
    enabled: yes
  node_service:
    enabled: no
Is Teleport a monolith? Yes! Is it a micro-service app? Yes! I'm so happy that we don't have to think about this split any more.

AmrMostafa(4027) 5 days ago [-]

I don't think it is OK to try to make service boundaries transparent and swappable. A service speaking to another service has to know the cost and overhead of the call it is making; otherwise it can't provide an efficient interface.

qaq(4018) 5 days ago [-]

The question is transaction boundaries: try unrolling a change that had to touch the state of several services when one of the requests failed.

scarmig(2622) 5 days ago [-]

> It feels like we're starting to pass the peak of the hype cycle of microservices

I feel like any article I see on microservices bemoans how terrible/unnecessary they are. If anything, we're in the monolith phase of the hype cycle =)

If you're moving to microservices primarily because you want serving path performance and reliability, you're doing it wrong. The reasons for microservices are organizational politics (and, if you're an individual or small company, you shouldn't have much politics), ease of builds, ease of deployments, and CI/CD.

nwah1(3797) 5 days ago [-]

Developers think of microservices this way, now. Managers still think of it as exciting.

ori_b(3985) 5 days ago [-]

There are also two purely engineering considerations: scalability and crash isolation.

Scalability -- for when your processes no longer fit on a single node, and you need to split into multiple services handling a subset of the load. This is rare, given that vendors will happily sell you a server with double-digit terabytes of RAM.

Crash isolation -- for when you have some components with very complex failure recovery, where 'just die and recover from a clean slate' is a good error handling policy. This approach can make debugging easy, and may make sense in a distributed system where you need to handle nodes going away at any time anyways, but it's not a decision to take lightly, especially since there will be constant pressure to 'just handle that error, and don't exit in this case', which kills a lot of the simplicity that you gain.

Both are relatively rare.
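
As a rough sketch of the 'just die and recover from a clean slate' policy, here is a toy restart-on-panic supervisor in Go; superviseRestarts is a made-up helper, and a real supervisor would add backoff limits, jitter, and alerting:

  package main

  import (
      "fmt"
      "time"
  )

  // superviseRestarts runs work in a loop, treating any panic as a crash
  // and restarting from a clean slate.
  func superviseRestarts(name string, work func()) {
      for {
          func() {
              defer func() {
                  if r := recover(); r != nil {
                      fmt.Println(name, "crashed:", r, "- restarting")
                  }
              }()
              work()
          }()
          time.Sleep(time.Second) // naive fixed backoff between restarts
      }
  }

  func main() {
      go superviseRestarts("worker", func() { panic("transient failure") })
      time.Sleep(3 * time.Second)
  }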

cc81(4018) 5 days ago [-]

You need to build a pretty big system for ease of builds, ease of deployments and CI/CD to become easier with microservices.

cphoover(10000) 5 days ago [-]

100%

jacquesm(42) 5 days ago [-]

It's, like almost everything else, a matter of balance. A monolith that cross-connects all kinds of stuff will become an unmaintainable mess over time; a ridiculously large number of microservices will just move the mess to the communications layer without solving the problem.

A couple of independent vertically integrated microservices (or simply: services, screw fashion) is all most companies and applications will ever need, the few that expand beyond that will likely need more architectural work before they can be safely deployed at scale.

marcrosoft(10000) 5 days ago [-]

Microservices are a business organizational tool. They literally bring nothing to the table from a technical standpoint.

baumy(10000) 5 days ago [-]

What? This comment seems ridiculous to me. They aren't a panacea and aren't right in all circumstances, but they have plenty of technical advantages. You can write code for different services in different languages or stacks; prototype with a new language/technology/stack on a small piece of the overall application; develop and deploy in parallel more easily; if one component fails, it's less likely to bring down the whole application; and you get more freedom to scale when certain components need more resources, or different types of resources, than others.

That's off the top of my head. These all come with tradeoffs of course, but to say they bring nothing to the table is absurd.

marsrover(3761) 5 days ago [-]

Isolated horizontal scalability? Sure, microservices aren't the end-all be-all of architecture design, but let's not act like they 'bring nothing to the table' technically.

xissy(10000) 5 days ago [-]

I doubt the author really had '150 micro-services'. A monolith that could be separated into more than 100 microservices is already hellishly complicated, and its engineers live with the pain.

hitpointdrew(10000) 5 days ago [-]

cough, cough, SAP

jacquesm(42) 5 days ago [-]

I've seen half that IRL recently; given my sample size, I do not doubt there is an architectural whiz somewhere who has pushed it to double that.

user5994461(2724) 5 days ago [-]

I know of a team that has 150 microservices. It's probably more; I should count them.

Needless to say, it is a giant clusterfuck.

jacquesm(42) 5 days ago [-]

Recent encounter: 70+ microservices for a minor ecommerce application. Total madness. While I'm all for approaching things in a modular way, if you really want to mimic Erlang/BEAM/OTP, just switch platforms rather than re-invent the wheel. In Erlang it would make perfect sense to have a small component be a service all by itself, with a supervision tree to ensure that all components are up and running.

lucian(3991) 5 days ago [-]

Erlang/OTP, and Elixir on top of that, bring more than 30 years of experience in building reliable, scalable, highly available systems.

It's difficult to fight this.

Somebody new to Erlang can get a feel of what Systems Architecture in Erlang really means from a great article by Fred Hebert:

'The Hitchhiker's Guide to the Unexpected' https://ferd.ca/the-hitchhiker-s-guide-to-the-unexpected.htm...

keithnz(3673) 5 days ago [-]

I'm always curious why the Actor concept isn't more widely used. Many platforms / languages have some form of it.
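
For anyone unfamiliar: an actor in most formulations is just private state plus a mailbox processed one message at a time. A minimal sketch of the idea in Go (channels standing in for a mailbox; no supervision, so only a loose analogue of Erlang's gen_server):

  package main

  import "fmt"

  // incMsg asks the actor to increment and reply with the new count.
  type incMsg struct{ reply chan int }

  // counterActor owns its state and processes one message at a time from
  // its mailbox, so no locks are needed.
  func counterActor(mailbox <-chan incMsg) {
      count := 0
      for msg := range mailbox {
          count++
          msg.reply <- count
      }
  }

  func main() {
      mailbox := make(chan incMsg)
      go counterActor(mailbox)

      for i := 0; i < 3; i++ {
          reply := make(chan int)
          mailbox <- incMsg{reply: reply}
          fmt.Println("count is", <-reply)
      }
  }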

elgenie(4024) 5 days ago [-]

In Erlang it makes perfect sense to have a single counter variable be a gen_server by itself and be part of a supervision hierarchy. Deploying and versioning that counter separately will still correctly get laughed at.

Erwin(3918) 5 days ago [-]

On my Samsung TV (and via casting) I have access to 6 streaming platforms: Netflix, HBO Nordic, Viaplay, TV 2 Play, C More (related to the French Canal+), and a local streaming service managed by the national public library system (limited to loaning 3 movies per week).

Of those Netflix is famous for its complex distributed architecture and employs 100s (if not 1000s?) of the very best engineers in the world (at $400k+/year compensation). I haven't heard about ground-breaking architecture from the others and don't imagine they spend 10s of millions of $ every year on the software like Netflix does.

I'm not really seeing any difference in uptime or performance. In fact, if I want to stream a new movie, I will use Viaplay (I can rent new stuff there for $5), or the library streaming service (which has more interesting arthouse stuff).

So why is Netflix in any way a software success story, if competitors can do the same thing for 1/100th the cost?

mymythisisthis(4031) 5 days ago [-]

Glad that you gave a shout-out to the library!

user5994461(2724) 5 days ago [-]

Others pay tens of millions to CDNs like Akamai. Not to mention that many streaming services are affiliated with ISPs, whose specialty is distributing content.

Netflix has to be a bit more efficient on the tech, because they have lower revenues and they don't own the pipes.

Besides that, the war is about content rights, not distribution. Netflix can maintain an image as the cool kid in town, unlike older companies that don't care about that.

barbecue_sauce(10000) 5 days ago [-]

I would say it's impressive that Netflix (an international organization present in 190 countries) has been able to leverage its operational and development expertise to offer an experience on par with those services that only need limited local (small-nation rather than international) distribution.

cuddlecake(10000) 5 days ago [-]

Not sure if the competitors have the same kind of scaling demands as Netflix does.

If we were to go 'extreme' with your comparison: why does Facebook need such a large infrastructure for messaging, if my monolithic homebrew family-only messaging application has just as much uptime and performance for $20 a month?

jakeinspace(10000) 5 days ago [-]

Because scale is not trivial. Netflix offers more titles (I'm assuming) than all of those services combined, and more importantly, over many many times more simultaneous streams. Sure, there may be engineering effort being wasted in the ML-fueled-recommendation department, but their back-end is expensive for a reason.

nisa(3825) 5 days ago [-]

Netflix has 15% of Internet traffic worldwide.

jeremyjh(3828) 5 days ago [-]

I often go for weeks where HBO Now won't work all, or at least much, of the time. I try to watch a movie; it says an error occurred and gives me a trace ID. I contact support; they ask me to reboot my router. They have no idea what trace IDs are for. Could I reboot it again? HBO Now still doesn't support 4K. Netflix virtually never fails for me and is always streaming in high-quality 4K. Whatever they are doing, it is working, and they are operating at a scale much larger than those other players you mention.

gambler(3883) 5 days ago [-]

It seems like Erlang strikes the perfect balance between what people want from both worlds. Scalability and fault-tolerance, but also coherence and established ways of doing things.

jacquesm(42) 5 days ago [-]

It does, but it isn't quite as cool (or as good for your job security) to roll your own, preferably from the ground up without any libraries or other battle tested code.

bcheung(10000) 5 days ago [-]

I've come to the conclusion that microservices work for large organizations where division of labor is important but for small development teams it actually makes things worse.

What once was just a function call now becomes an API call. And now you need to manage multiple CI/CD builds and scripts.

It adds a tremendous amount of overhead and there is less time spent delivering core value.

Serverless architectures and app platforms seem to correct a lot of this overhead and frustration while still providing most of the benefits of microservices.

afgionio(10000) 5 days ago [-]

I've never found the division of labor argument all that compelling. Using multiple services compels clean divisions and impenetrable abstractions, but shouldn't it be possible to achieve that within a single program? A language with strong support for information hiding should be able to enforce the restrictive interfaces that microservices compel, but without the complexity and overhead of going over the network.

If that's not possible, I'd take that as a sign that we need new languages and tools.
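
For what it's worth, some existing languages already get part-way there. A minimal sketch in Go, using a hypothetical billing package: exporting only an interface and a constructor means other packages in the same binary cannot touch the hidden state, and the compiler enforces the boundary without a network hop.

  // Package billing exposes a narrow, service-like surface; other
  // packages in the same binary cannot reach its internals.
  package billing

  import "errors"

  // Invoicer is the whole public contract.
  type Invoicer interface {
      Invoice(customerID string, cents int) error
  }

  // New is the only way to obtain an Invoicer.
  func New() Invoicer {
      return &invoicer{ledger: map[string]int{}}
  }

  // invoicer and its ledger are unexported: hidden state, enforced by
  // the compiler rather than by process boundaries.
  type invoicer struct {
      ledger map[string]int
  }

  func (inv *invoicer) Invoice(customerID string, cents int) error {
      if cents <= 0 {
          return errors.New("invoice amount must be positive")
      }
      inv.ledger[customerID] += cents
      return nil
  }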

ngngngng(3702) 5 days ago [-]

I'm on a small team working with microservices. I have different complaints than yours. The main issue I run into with microservices is that I lose the benefit of my Go compiler. I don't like working in dynamic languages because of all the runtime errors I run into. With microservices, even using a statically typed language becomes a nightmare of runtime errors.

If I change the type on a struct that I'm marshaling and unmarshaling between services, I can break my whole pipeline if I forget to update the type on each microservice. This feels like something that should be easy to catch with a compiler.
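
To see why the compiler can't help across process boundaries, consider a service that still emits the old wire shape (Order here is a hypothetical type); the mismatch only surfaces at runtime. One common mitigation is to keep such wire types in a single shared package that every service imports, which turns this drift back into a compile-time error:

  package main

  import (
      "encoding/json"
      "fmt"
  )

  // Order is the wire type. If it lives in a shared package imported by
  // every service, changing a field breaks stale callers at compile time.
  type Order struct {
      ID       string `json:"id"`
      Quantity int    `json:"quantity"`
  }

  func main() {
      // Payload from a service still using the old schema, where
      // "quantity" was a string. This only fails at runtime.
      old := []byte(`{"id":"o-1","quantity":"3"}`)

      var o Order
      if err := json.Unmarshal(old, &o); err != nil {
          fmt.Println("runtime drift error:", err)
      }
  }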

outworlder(3442) 5 days ago [-]

I agree with the author to some extent.

The main thing, however, is that many people think that by breaking up their monolith into services, they now have microservices. No, you don't. You have a distributed monolith.

Can you deploy services independently? No? You don't have microservices. Can you change one microservice data storage and deploy it just fine? If you are changing a table schema and you now have to deploy multiple services, they are not microservices.

So, you take a monolith, break it up, add a message broker, centralized logging, maybe deploy it all on K8s, and then you achieve... nothing at all. At least, nothing that will help the business. Just more complexity and a lot more stuff that needs to be managed and can go wrong.

And probably a much bigger footprint. Every stupid hello world app now wants 8GB of memory and its own DB for itself. So you added costs too. And accomplished nothing a CI/CD pipeline plus sane development and deployment practices wouldn't have achieved.

It is also sometimes used in lieu of team collaboration. Now everyone can code their own thing in their own language without talking to anyone else. Except collaboration is still needed, so you are accruing tech debt that you know nothing about. You can break interfaces and assumptions, where your monolith wouldn't even compile. And now no-one understands how the system works anymore.

Now, if you are designing a system using microservices properly, then it can be a dream to work on, and manage in production. But that requires good teamwork on each team and good collaboration between teams. You also need a different mindset.

qaq(4018) 5 days ago [-]

Do you have a recommended way of handling transaction boundaries that span multiple services? Everyone likes to outline how the happy path works, but in the real world you now have an eventually consistent distributed system, and there is no generally valid way to unroll a change across multiple services if one of the calls fails.
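
There is no fully general answer, but the usual partial one is the saga pattern: pair each step with a compensating action and unwind completed steps in reverse when a later one fails. A minimal Go sketch (hypothetical step/runSaga helpers); as the comment implies, the real pain starts when a compensation itself fails:

  package main

  import (
      "errors"
      "fmt"
  )

  // step pairs a forward action with its compensating action.
  type step struct {
      name       string
      do         func() error
      compensate func() error
  }

  func runSaga(steps []step) error {
      var done []step
      for _, s := range steps {
          if err := s.do(); err != nil {
              // Unwind already-completed steps in reverse order.
              for i := len(done) - 1; i >= 0; i-- {
                  if cerr := done[i].compensate(); cerr != nil {
                      // Real systems must log and retry here; this is
                      // exactly where eventual consistency bites.
                      fmt.Println("compensation failed:", done[i].name, cerr)
                  }
              }
              return fmt.Errorf("saga aborted at %s: %w", s.name, err)
          }
          done = append(done, s)
      }
      return nil
  }

  func main() {
      err := runSaga([]step{
          {
              name:       "reserve-stock",
              do:         func() error { fmt.Println("stock reserved"); return nil },
              compensate: func() error { fmt.Println("stock released"); return nil },
          },
          {
              name: "charge-card",
              do:   func() error { return errors.New("payment declined") },
          },
      })
      fmt.Println(err)
  }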

shapiro92(3749) 5 days ago [-]

You can break up the monolith, use a message broker, or even let the services communicate via HTTP, but you do not need K8s. It's pointless unless you want to orchestrate multiple VMs/images and your infra scales to more than 10-15 servers/containers.

matt2000(1920) 5 days ago [-]

When I started programming professionally, it was the era of 'Object-Oriented Design will save us all.' I worked on an e-commerce site that had a class hierarchy 18 levels deep just to render a product on a page. No one knew what all those levels were for, but it sure was complicated and slow as hell. The current obsession with microservices feels the same in many ways.

There appear to be exactly two reasons to use microservices:

1. Your company needs APIs to define responsibility over specific functionality. Usually happens when teams get big.

2. You have a set of functions that need specific hardware to scale: GPUs, huge memory, high-performance local disk, etc. It might not make sense to scale as a monolith then.

One thing you sure don't get is performance. You're going to take an in-process shared-memory function call and turn it into a serialized network call and it'll be _faster_? That's crazy talk.
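
A back-of-the-envelope way to convince yourself: even with the network removed entirely, just serializing and deserializing the arguments tends to cost orders of magnitude more than the call itself. A rough Go sketch (exact numbers vary by machine):

  package main

  import (
      "encoding/json"
      "fmt"
      "time"
  )

  type req struct{ A, B int }

  func add(a, b int) int { return a + b }

  func main() {
      const n = 1000000
      sum := 0

      start := time.Now()
      for i := 0; i < n; i++ {
          sum += add(i, i)
      }
      fmt.Println("in-process calls:    ", time.Since(start))

      start = time.Now()
      for i := 0; i < n; i++ {
          b, _ := json.Marshal(req{i, i})
          var r req
          _ = json.Unmarshal(b, &r)
          sum += add(r.A, r.B)
      }
      // Serialization alone dwarfs the call cost, before adding any
      // network hop, retries, or load balancing.
      fmt.Println("with JSON round-trip:", time.Since(start))
      fmt.Println("checksum:", sum)
  }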

So why are we doing it?

1. Because we follow the lead of large tech companies since they have great engineers, but unfortunately they have very different problems than we do.

2. The average number of years of experience in the industry is pretty low. I've seen two of these kinds of cycles now, and we just keep making the same mistakes over and over.

Anyway, I'm not sure who I'm writing this comment for, I guess myself! And please don't take this as criticism, I've made these exact mistakes before too. I just wish we as an industry had a deeper understanding of what's been done before and why it didn't work.

Quarrelsome(4004) 4 days ago [-]

> One thing you sure don't get is performance.

You can optimise per use case. In the monolith everything has to work for every use case. In a given service you might write rarely, or might not care if your writes are very async. This means you can start to take liberties with the back-end (e.g. denormalising where necessary) and you have room to breathe.

rossdavidh(3966) 3 days ago [-]

A great many of our problems in tech are the result of '...we follow the lead of large tech companies...but unfortunately they have very different problems than we do.'

Imagine if we built single-family houses based on what made sense for skyscrapers. Or if we built subcompact cars based on a shrunk-down version of semi-tractor trailers. They would not be efficient, or even effective.

But, if your aspiration is to get a job at a skyscraper-builder, then it MIGHT be what makes sense to do. 'Have you used appropriate-only-for-largest-websites technology X?' 'Why yes I have.' The same incentives probably apply to the tech management, btw. We have an incentives problem.

pojzon(10000) 5 days ago [-]

It's not about the age of the engineers but their maturity. Some just don't care about the quality of their work because they get paid either way. Look at Silicon Valley: 'ageism' is real there. They need young devs with ideas and huge skill to bring them to life, to stay ahead of the competition. Most companies don't understand that and blindly try to copy it, often because their management is not competent enough.

There are plenty of reasons for the current situation. The world and people are complicated.





Historical Discussions: Dwarf Fortress is coming to Steam with graphics (March 13, 2019: 812 points)

(812) Dwarf Fortress is coming to Steam with graphics

812 points 5 days ago by danso in 5th position

www.polygon.com | Estimated reading time – 5 minutes | comments | anchor

Dwarf Fortress, the famously complex and often inscrutable colony simulation, is coming to Steam and itch.io. The new version will be published by Kitfox Games, a Montreal-based independent studio, and will include graphics, music, sounds, and Steam Workshop integration. An ASCII-based mode will still be available in this new version, and development of the original game will continue unabated.

Also known as Slaves to Armok: God of Blood Chapter 2: Dwarf Fortress, the game has been in development since 2003 by the two-man team of Tarn Adams and Zach Adams, aka Bay 12 Games. The elaborate simulation has three different modes. In Fortress Mode, players guide a small band of dwarves into creating a settlement from scratch. Meanwhile, Adventure Mode plays like a classic roguelike dungeon crawler. Legends Mode allows players to create an elaborate procedurally generated world and then inspect it in detail.

All three modes record player actions, meaning that you can explore your own deserted settlements in Adventure Mode, or read about the exploits of the city you helped to create in Legends Mode.

The new, premium edition of Dwarf Fortress will include actual graphics for the first time. Traditionally, the game has only offered ASCII-style icons. According to an FAQ published alongside today's announcement, the graphics will be handled by Michał "Mayday" Madej and Patrick Martin "Meph" Schroeder, two well-regarded members of the game's modding community. Another community member, who goes by the handle Dabu, will be handling a musical score inspired by the seasons and select bits of audio.

Steam Workshop integration may be the biggest selling point here. Fans have made many mods to supplement the core Dwarf Fortress experience. One of the most popular, called Dwarf Therapist, allows you to troubleshoot individual dwarves, digging down into their wants and needs and even controlling them in ways the base game simply doesn't allow. Many consider Dwarf Therapist and other mods essential to the gameplay experience, and Steam Workshop support will make accessing them easier than ever before.


In today's FAQ, Bay 12 stressed that Dwarf Fortress isn't being changed to make it "fit" into Steam or itch.io. It will be the exact same experience plus a few modern quality-of-life features. Kitfox said that it has "no access to the source code, and will have no influence on the design, programming, or updates to Dwarf Fortress."

Virtually the same game, minus these new features, will still be available for free at the Bay 12 website.

"Steam/itch.io and Kitfox will get cuts of the sales on those platforms," states the FAQ, "so if you want to give MAXIMUM money to Tarn and Zach, direct donation is the way to go."

Traditionally, Dwarf Fortress has only been available as a free download direct from the Bay 12 website. The Adams brothers say they've been offered large sums of money in the past for licensing deals, but the team has always subsisted on donations alone. It even made the move to Patreon in 2015. But, in today's FAQ, the pair said that they're selling this premium version in part to pay for health care for ailing family members.

"Dwarf Fortress is going premium because we want more people to encounter the game, grow the community," Bay 12 wrote in the statement, and because "some of the creator's close family members have developed serious health issues within the past 6 months, and money to support them is tight. As it's a sensitive and difficult matter, please respect Tarn and Zach's privacy about this, but keep some well wishes in your hearts for them."

Dwarf Fortress currently has no release date on Steam or itch.io, since, as the developers indicate, "time is subjective."




All Comments: [-] | anchor

danbolt(3923) 5 days ago [-]

Kitfox Games is run by Tanya Short, who's co-authored a book on procedural generation in games.[1]

She gave a pretty interesting talk in Vancouver about how it integrates into her studio's production methodology. A lot of Kitfox's games employ procgen, so I feel like they're a good fit for the title. [2]

[1] https://www.amazon.ca/Procedural-Generation-Design-Tanya-Sho...

[2] https://m.youtube.com/watch?v=TH11Q7VPXj8

jyxent(10000) 5 days ago [-]

Co-authored with Tarn Adams, the co-creator of Dwarf Fortress.

katamaritaco(10000) 5 days ago [-]

Tanya Short and Tarn Adams (of DF) are collaborating again for a new book built around procedural storytelling that is set to release soon. [1]

[1] https://www.crcpress.com/Procedural-Storytelling-in-Game-Des...

bdz(1426) 5 days ago [-]

The problem with DF was never the graphics but the UI itself. See RimWorld, which is hugely popular but much more 'playable'. Basically, the UI is holding the game back from becoming a much bigger success.

mlindner(3804) 5 days ago [-]

RimWorld is also a MUCH simpler game. It's so much simpler that when I played it, RimWorld felt 'dead' compared to how 'alive' the DF simulation felt.

kuwze(3257) 5 days ago [-]

I wish they would release on GOG too, but I guess that kind of loses the benefits of the Steam Workshop. Maybe an independent mod manager?

cwyers(3942) 5 days ago [-]

If you want a DRM-free version without Steam integration, they're also releasing on itch.io.

Pfhreak(10000) 5 days ago [-]

I've often wondered if DF will ever get an open source version that tries to recreate the same experience but focuses on consistent interface design, performance and multithreading, and ease of extensibility.

DF has a huge head start, and two of the most incredibly passionate developers out there, but it's also clear they prioritize expanding the model first and foremost rather than trying to tackle any of the user experience issues that have plagued the game forever.

(To be clear, I adore DF. I've put many, many hours into DF. It's a truly amazing game and I continue to wish the developers success.)

snazz(3727) 5 days ago [-]

Dungeon Crawl Stone Soup is to Nethack as __________ is to Dwarf Fortress. My understanding is that Gnomoria isn't there yet, but I haven't played it, so I could be wrong. It certainly would be an interesting technical challenge to write a competitor in a modular, maintainable way that allows tile-based frontends without injecting memory.

Gene_Parmesan(10000) 5 days ago [-]

Rimworld is kind of this. It's obviously a different, far more accessible experience. But I can't help but feel like it took much of its inspiration from this very idea of making a DF with much better UX.

earenndil(3751) 5 days ago [-]

I've heard a rumour that DF is planned to be open-sourced once Zach and Tarn die, but they're keeping this a secret in order to keep from being assassinated.

roenxi(10000) 5 days ago [-]

Hopefully when the developers are eventually incapacitated they've made provisions for the source code to be released. I know I stopped playing because occasionally there were bugs so bad that the fortress had to be abandoned; and there is substantial evidence in the bug count and framerate that there are underlying architectural issues in their design.

An open source version would be technically superior; although how that would affect their income stream I do not know.

mikekchar(10000) 5 days ago [-]

I think there will eventually be an open source game that captures the same kind of magic that DF has. I don't think it will be DF, though. If the DF source code were ever opened, I think it would be pretty similar to what happens when any classic game opens its code: nobody touches it for a long time. You need someone passionate about developing the code to keep it going -- and if you are passionate about developing it, you're more likely to write your own.

If only I could stop playing the game long enough to write some code... ;-)

LyndsySimon(10000) 5 days ago [-]

I feel like every time someone sets out to create a F/OSS successor to DF, they end up mired in creating dungeon and world generators and never produce a functional game.

Really, that's what DF is anyhow; they even call it a 'story generator' IIRC. Tarn just managed to put enough of a playable game together before getting lost in the details of world generation, and people have stayed engaged enough with it to keep him involved in the project.

tlynchpin(10000) 5 days ago [-]

Interesting choice of Steam and itch. I'd be very interested to hear more about their selection process on platform once they decided to Get The Money. Specifically I wonder if they explored the new Epic platform, guess I'll have to tune into the AMA tomorrow.

ilaksh(3042) 5 days ago [-]

It's not that interesting to me that they picked Steam and itch.io, since those have many more users than anything else, including something that barely launched.

cma(3301) 5 days ago [-]

Buy through Patreon and they get a much larger portion of your payment.

shadowfacts(4027) 5 days ago [-]

I think you mean Itch.io: https://kitfoxgames.itch.io/dwarf-fortress

wyldfire(718) 5 days ago [-]

I've never played it but it sounds like a compelling game with an in-depth model.

I started out playing games on a monochrome amber monitor + Hercules graphics. But I still think that going beyond ASCII characters is a nice touch that will make the game easier on the eyes.

> in today's FAQ, the pair said that they're selling this premium version in part to pay for health care for ailing family members.

That's too bad that it's come to this but perhaps it will bring the game to a wider audience.

lainga(10000) 5 days ago [-]

The game really is just the hyper-detailed model. The gameplay is emergent and comes most easily if you know how to spin a good story as you're playing.

crooked-v(4018) 5 days ago [-]

The gameplay model is extremely elaborate and compelling, but the UI is wildly inconsistent and vigorously inscrutable, even compared to 'every key is a separate command' classics like Nethack.

ceejayoz(2056) 5 days ago [-]

> But, in today's FAQ, the pair said that they're selling this premium version in part to pay for health care for ailing family members.

'Name one positive thing about the US health insurance setup.'

'Uh... it got us Dwarf Fortress with graphics, I guess?'

tptacek(75) 5 days ago [-]

DF already has graphical tilesets, right? This is just a packaging of them?

YukonMoose(10000) 5 days ago [-]

I was consumed by DF for about six months, to the detriment of everything else.

I've never been able to go back, but I always remember it as the best gaming I ever did.

DF is a piece of art comparable to anything created by mankind to date. In 1000 years I believe DF will be compared with Mozart and the Mona Lisa.

And hopefully they've improved the efficiency... I want my descendants to be able to build that 1000-dwarf fortress!

nyolfen(3876) 5 days ago [-]

> DF is a piece of art comparable to anything created by mankind to date. In 1000 years I believe DF will be compared with Mozart and the Mona Lisa.

you're not the only one who thinks so: https://www.moma.org/collection/works/164920

intended(10000) 5 days ago [-]

> 1000 dwarf fortress!

What did the sons of today's processors ever do to you, for you to hate them so?

crooked-v(4018) 5 days ago [-]

'No access to the source code' is a seriously weird thing to note. Is this going to be like one of the existing memory-injecting/manipulating mods, with all the downsides that implies?

mont(10000) 5 days ago [-]

That is really interesting, though IIRC the dev of DF doesn't use any reasonable sort of VCS, and I got the impression that the source code was essentially enough spaghetti to feed an Italian wedding.

0xffff2(4032) 5 days ago [-]

DF has long had support for graphical tile sets, and the announcement on Patreon mentions that the developers will be doing some work specifically to support the new graphics. I think that comment is there to allay any fears that this release and future releases might be burdened with any kind of functional user interface.

StrangeDoctor(10000) 5 days ago [-]

It's a roundabout way of assuring the existing community that the integrity of the simulation/game is the same as always. All of these mods are memory injection/watching, but nothing is being done to circumvent the modders.

tomku(10000) 5 days ago [-]

I think they're just making it clear that Kitfox is only distributing the game and not participating in development at all. All the development (including anything necessary for adding the 'official' graphics/music) is still being done by Bay12.

ssully(4022) 5 days ago [-]

Yeah, that's kind of weird. I think they said that to reinforce that this will be the exact same game. That's great, but not providing the port team the source just sounds like they are tying both arms behind their back.

mlindner(3804) 5 days ago [-]

No. Tarn is modifying Dwarf Fortress to support the graphics and sound. Kitfox/graphics people will tell Tarn what they need to support the graphics/sound and he'll add it.

ajuc(3683) 5 days ago [-]

How is it possible that reasonable people are OK with this kind of bullshit healthcare policy?

muzani(3544) 5 days ago [-]

Because capitalism lets people charge according to supply and demand, and America will defend capitalism till the end.

hiccuphippo(10000) 5 days ago [-]

The corner grass in the pics looks weird. Are they supposed to be cliffs?

I've heard of this game but never tried it. I've only played nethack on a terminal. I'll definitely try it!

legohead(4022) 5 days ago [-]

The game has a z-axis, so you can build down (or up) into mountains and such. I haven't played the tiled version, but maybe this is its way of showing depth.

Retra(10000) 5 days ago [-]

Those are cliffs. The grass is at a lower elevation than the mountain.

aresant(656) 5 days ago [-]

Three points of interest:

1) Full developer announcement on their Patreon page -> https://www.patreon.com/posts/25343688

2) They are putting it on Steam largely because of the USA's shit healthcare system: 'after Zach's latest cancer scare, we determined that with my healthcare plan's copay etc., I'd be wiped out if I had to undergo the same procedures . . '

3) Cool to see that they are going to use graphics built by two of the most popular community modders:

-=> MayDay built one of the most popular current graphics packs @ http://goblinart.pl/vg-eng/df.php

-=> And Meph has built a fairly massive tile set as well @ http://www.bay12forums.com/smf/index.php?topic=161047.0

Will be interesting to see how they change with native support.

fzeroracer(10000) 5 days ago [-]

It's such a disaster to see people so severely affected by the healthcare system in America. If you haven't been keeping track, there's been a similar issue with SomethingAwful's Lowtax, who is deep in debt as a result of some of the medical issues he's going through.

It's enough to certainly make me consider working in a different country. I wonder if there would be any possibilities of the Dwarf Fortress developers moving elsewhere such that they don't have to worry about a medical disaster bankrupting them.

MrFoof(3910) 5 days ago [-]

I'm one of the longer time Dwarf Fortress supporters, with somewhere around $2500 donated over the past decade or so via PayPal. If I remember correctly, it was around January 2007 when Tarn started accepting donations.

I don't play it much, only a week every year or so to see what's new, get lost in the wacky reality of it all, and put it down to tend to other things. I read the communities from time to time. The wacky stories. Occasionally watch someone have a go on Twitch and enjoy seeing everything unfold.

Dwarf Fortress is an unsung touchstone of internet culture. Game devs in AAA studios (I know a few) talk of it in hushed whispers -- most are aware of it, even if only a few of them have played it. In fact, Dwarf Fortress is what sparked Markus Persson to create Minecraft, which was originally intended to be a voxel version of Dwarf Fortress itself.

I regret the circumstances that have resulted in Tarn and Zach having to find ways to bring in more income. They moved onto Patreon in the past year or so, and thankfully it was a meaningful increase in income. Hopefully Steam can help in that regard, as Tarn seems to manage his time and communication with the community quite well, and at least the Bay 12 forums by and large respects that he takes the time to reach out before continuing with the game as he sees fit.

It's something very novel and lovingly crafted that creates real joy and sparks the imagination in ways most people will rarely see. Although a bit obtuse upfront, it is one of the most novel toys mankind has ever created. Hopefully this can ensure that its development can continue for much longer still, and help other people discover the magic that it is.

Pharmakon(3182) 5 days ago [-]

Oh man, I'm so sorry for the people involved who are in this position, yet I'm torn because the outcome here beyond that is ideal. I'm going to assuage my conflict with a big donation on bay12 and then I'm going to tell everyone I know who didn't want to get into DF that the time is now!

novok(10000) 5 days ago [-]

Since they live around Seattle, maybe they should consider immigrating a few hundred miles north to Canada? They might qualify under the self-employed artist category?

https://www.settler.ca/english/immigration-to-canada-for-art...

coldacid(10000) 5 days ago [-]

Also in the article but buried:

4) Steam/itch.io and Katfox take a cut so if you want to really help out Zach and Tarn you should donate directly @ http://www.bay12games.com/support.html

odorousrex(10000) 5 days ago [-]

Incidentally, Meph has also made one of the most fantastic game enhancing DF mods out there, Masterwork Dwarf Fortress.[1]

I'm really glad to hear he is involved with this.

[1] http://www.bay12forums.com/smf/index.php?topic=98196.0

stcredzero(3023) 5 days ago [-]

> 2) They are putting it on Steam largely because of the USA's shit healthcare system

I know both sides of this. Obamacare was truly a boon to me when I was an 'independent consultant.' We in the USA need to value our liberty. But if you want people to value liberty, you first have to keep them safe from Kafkaesque nightmares which involve their bodies not working correctly and not getting fixed. Basically, it works just like hunger.

In particular, in the present-day US, the societal cohorts which create media seem to be comprised of 1) out-of-touch left-leaning upper-class people or 2) people with so few resources they are at risk. Together, they form a larger faction. This is a recipe for a society that stops telling itself it wants freedom and changes into a society that wants to take care of people. (Group 1 identify as the ones doing the taking care of, and group 2 are the ones who would be taken care of.)

If you want people who appreciate and take care of their freedom, you have to make sure they're not hungry and sick. History shows us this clearly.

http://smbc-comics.com/comic/healthcare

tkiley(3614) 5 days ago [-]

> They are putting it on Steam largely because of the USA's shit healthcare system 'after Zach's latest cancer scare, we determined that with my healthcare plan's copay etc., I'd be wiped out if I had to undergo the same procedures . . '

I have mixed feelings about this.

On the one hand, it totally sucks.

On the other hand, I suspect that as as consequence of this crappy pressure, Dwarf Fortress will reach a substantially wider audience and bring joy to a greater number of people.

I'll be interested to see how Tarn looks back on this moment in, say, five years.

coldacid(10000) 5 days ago [-]

Oh no. There goes everyone's productivity, killed by catsplosions.

crocal(3976) 5 days ago [-]

And by killer carps. Never turn your back on carps. They want you dead!

Explanation: http://dwarffortresswiki.org/index.php/40d:Carp

umvi(10000) 5 days ago [-]

One thing that amazes me about Dwarf Fortress is that the creator(s) don't use version control[1] (as of 2014, things may have changed):

'I don't use version control -- I didn't like the feeling of having the code get committed into a black box thingy with no immediate upside.'

I can't fathom how you can manage the complexity of a game like DF without a VCS.

[1] https://www.reddit.com/r/IAmA/comments/1avszc/im_tarn_adams_...

daemin(4028) 4 days ago [-]

I think as long as you have a system for versioning the source code you are working on, you will be fine. It's just a bit more of a manual process (hence you might do it less often), but if it's a stable process then it should suffice.

Configuration Management isn't just about keeping your source in source control software / server somewhere, it's also about the tools, dependencies, and other artefacts that you need to build and distribute the software.

How many people take great care for the version control system but forget everything else?

danielbarla(10000) 5 days ago [-]

Honestly, for personal projects where I'm the sole developer, I only use git nominally. I mean, it's there, and sure, I commit; but it's not like I'm spawning feature branches or anything like that. It's a rather linear set of commits, functionally almost equivalent to having a zip of the source every now and then. I'm sure many people are the same.

stcredzero(3023) 5 days ago [-]

> One thing that amazes me about Dwarf Fortress is that the creator(s) don't use version control (as of 2014, things may have changed)

A friend of mine, years ago, took the 'daring, revolutionary' step of convincing NASA management to embrace version control. I also know of a European consulting firm which made a lot of money swooping in and saving a major Euro bank from problems caused by lack of version control.

ethbro(3683) 5 days ago [-]

No development team, no need for version control.

I meta-love DF for its primitive development practices too.

Guy decides to make a game with a block of stone and a chisel, development community calls him crazy and says games can't be made that way, but he keeps chiseling away.

tom_(10000) 5 days ago [-]

They take backups, name each folder based on the date, and what feature that was just completed (or just about to be started) that prompted the backup. I bet you a pound.

This is what everybody that says they don't use version control does.

honkycat(10000) 5 days ago [-]

Dwarf Fortress is a popular game, but its progress has been stalled for years.

One of its big issues is performance. For an ASCII game.

If anything, it is an example of how saying 'Eh, screw it' can stall your progress.

rcxdude(10000) 5 days ago [-]

I think one problem is that he doesn't, or at least that it is managed through heroic effort rather than efficiently. Updates take longer and longer and have a longer period of 'everything's broken' as the game progresses. From all reports the code is a complete mess. It's an amazing piece of work, but there does seem to be some wasted potential there (though I can believe that it may not be possible to harness that potential without giving up on the incredible ambition of the game, because this is the kind of thing that can only really be accomplished by someone who is completely unreasonable).

AdmiralAsshat(1563) 5 days ago [-]

Copy of Copy of Copy of DFSource.zip

tareqak(403) 5 days ago [-]

Tarn Adams did, however, release the RAWs of the game to the public domain. As a personal project, I wrote a script to take the different zipped releases, extract the RAWs, and layer them into a git repository while attributing everything to Adams' handle, Toady One. I separated the script I used to create it and the layered RAWs into two separate repos. It's not up to date, but here it is:

https://github.com/tareqak/df_raws_helper https://github.com/tareqak/df_raws

Pfhreak(10000) 5 days ago [-]

They almost certainly use some sort of version control, like all novice programmers do, right? -- copying files/directories, creating backups, etc. (MyProject02.old.bak.zip).

jandrese(4030) 5 days ago [-]

It makes a little sense. They aren't working with any kind of time limit, so if they break something they just work at it until it is fixed. There's no rolling back to the last good version.

But yeah, this is undoubtedly a factor in the game's rather glacial development rate.

Sargos(10000) 5 days ago [-]

Tarn isn't known for being a good developer. He's just very dedicated :)

In fact it's pretty well known that Dwarf Fortress has a horrible code base and it would be a huge project to bring it up to a good standard.

marcotaves(10000) 5 days ago [-]

Don't know why, but this reminds me of a story from Hacker News...

I call this story 'The Black Dice on the White Table':

In October of 1994, I'd just started as an honest-to-goodness videogame programmer at a small startup called SingleTrac, which later went on to fame and glory (but unfortunately not much in the way of fortune) with such titles as Warhawk, the Twisted Metal series, and the Jet Moto series. But at the time, the company was less than 20 employees in size and had only been officially in business for about a month. It was sometime in my first week, possibly my first or second day. In the main engineering room, there was a whoop and cry of success.

Our company financial controller and acting HR lady, Jen, came in to see what incredible things the engineers and artists had come up with. Everyone was staring at a television set hooked up to a development box for the Sony Playstation. There, on the screen, against a single-color background, was a black triangle.

"It's a black triangle," she said in an amused but sarcastic voice. One of the engine programmers tried to explain, but she shook her head and went back to her office. I could almost hear her thoughts... "We've got ten months to deliver two games to Sony, and they are cheering over a black triangle? THAT took them nearly a month to develop?"

marcotaves(10000) 3 days ago [-]

So what the fuck is going on...

You think I am the guy who was there in 1994 and made the black triangle video game? You know I am not!

You think I have a flock of flying robots made with A.I.?

You know I don't. I couldn't even finish the most basic phototransistor prototype, or any experiments, and you know that!

I don't know what it is, or what the hell it is going to be, going on with some people that are not me...

Honestly!

Thanks...

marcotaves(10000) 2 days ago [-]

delete the parent comment

Grue3(10000) 4 days ago [-]

This is true, but Tarn Adams was in fact offered a big commercial deal before and rejected it. He really values the independence of his vision.

joshstrange(3536) 5 days ago [-]

I've never been able to get into DF but I would suggest people check out RimWorld. I've heard it described as 'Dwarf Fortress is Rimworld for people who hate fun' [0] and people often draw parallels between the two [1].

[0] https://www.reddit.com/r/RimWorld/comments/7dm7j8/how_does_r...

[1] https://www.reddit.com/r/RimWorld/comments/6xeie3/coming_ove...

billfruit(10000) 5 days ago [-]

I heard some people recommend Oxygen Not Included as being in a similar vein, albeit at a smaller scale (i.e., you have only 2 or 3 people in your colony), but with nice visuals (Klei are generally good with animations: most things are animated, machines have cranks and pinions that move, water and other liquids actually flow with gravity, and the visuals match the simulation as far as I have seen). ONI has a somewhat detailed array of systems like electrical wiring, plumbing (separate systems for liquids and gases, with pumps, sumps, valves, etc.), thermal management, and people management (food, sleep, breathing air quality, lighting, waste management, morale, etc.).

mlindner(3804) 5 days ago [-]

RimWorld doesn't really compare. The world is dead comparatively.

billfruit(10000) 5 days ago [-]

Also (since RimWorld has been mentioned), Prison Architect may have some similar qualities, like complex intertwined systems and emergent events such as prison riots.

rjbwork(10000) 5 days ago [-]

I love RimWorld, and have hundreds of hours in it, but I'd say it's really more like Babby's First DF game than truly comparable to DF. It just doesn't have the depth, or capability to present a deep challenge forever. It also takes place in a single layer, whereas DF has gobs of layers.

Symmetry(2581) 5 days ago [-]

After the second time I stayed up to 2 AM playing that I deleted it from my computer. Too much fun :(

chobeat(3933) 5 days ago [-]

Rimworld is kinda lame if you've played DF. They are not even comparable.

soneca(1418) 5 days ago [-]

I always read about Dwarf Fortress in HN and it always interested me. But some elements, like the ASCII art, always seemed to me as entry barriers purposefully built to maintain a protected subculture that I had no interest in being part of. I just wanted to try out a good game.

So this comes as good news to me, I'll probably give it a try now with the tile graphics.

mLuby(4032) 5 days ago [-]

>...there's way too much information to decode Dwarf Fortress. You get used to it, though. Your brain does the translating. I don't even see the code. All I see is dwarf, tree, bed.

mikekchar(10000) 5 days ago [-]

To understand DF, you need to understand that the reason the ASCII art exists is that Tarn is not an artist and decided to prioritise other things. Art was not considered important enough to spend time on. Neither was a consistent UI. Neither was fixing 'quality of life' bugs.

This was never an attempt to purposely build a subculture that enjoys puzzling out how to play the game. It's a choice of prioritising the other aspects of the work over these aspects. The people who avidly play DF self-selected by being able to persist through all of the sub-optimal parts to discover the wonderful parts.

Just keep that in mind if you decide to give it a go. I think to enjoy it, you need to have a relaxed attitude. I see a lot of incensed people on the forums who loudly proclaim, 'How can you put up with such a crappy game?'. The reputation of being the most complex game in the world, or being the hardest game to play is just wrong, unfortunately. People get the idea that there is some benefit to the complexity -- like using Blender or Vim. But there isn't. It's just complex because the work flows have never been thought through that much and there are bugs all over the place.

The reason to play the game is the emergent story telling, for which there is nothing else that even comes close. You persist through the problems, to get to the gems. Maybe some people enjoy the persistence, I don't know, but that's not the point of DF.

Anyway, I hope that puts things into perspective.

jandrese(4030) 5 days ago [-]

IMHO, the graphics aren't nearly as much of a barrier as the interface. If you just install DF and open it up, you will be overwhelmed, and there's no help system holding your hand. The word 'inscrutable' gets tossed around a lot with Dwarf Fortress, because it is the perfect encapsulation of the first-time player's experience.

gmfawcett(10000) 5 days ago [-]

You've been able to play DF with tile graphics for many years. Check out the 'Lazy Newb Pack':

http://dwarffortresswiki.org/Utility:Lazy_Newb_Pack

Retra(10000) 5 days ago [-]

To be fair, it is a video game and a personal project. There's no expectation of inclusiveness there. And I wouldn't say that the interface is an intentional barrier to entry, it is just not their favored feature to work on, so it simply receives little work.

uglygoblin(10000) 5 days ago [-]

Another Roguelike game called ADOM that has been around since the 90's was brought to Steam with upgraded assets and sounds a couple years back.

muzani(3544) 5 days ago [-]

And it highlighted how aged the game was. It's a roguelike masterpiece, but there have been many more playable ones since. DF hasn't aged as badly, but standing up on Steam against more playable games will be challenging.

bradford(10000) 5 days ago [-]

I've played DF a lot.

Biggest problem that I had is that it eventually becomes a challenge to manage the framerate. I realize there are ways to fix this that are intertwined with the gameplay, but I'd rather play the game instead of butchering kittens and other things in an effort to keep the game performant.

Second issue is with military organization. I never really became confident in my ability to get the squads doing what I wanted them to do (wearing the proper uniforms, training with a crossbow).

Despite this, I've had a lot of fun with the game. Building, farming, and managing a metal industry is a lot of fun. The barrier to entry is still pretty high, and I don't think the announced additions are going to change that.

(haven't played in the last year or two, so my criticism may be outdated)

Derpdiherp(10000) 5 days ago [-]

No the criticisms are still valid. There's still severe frame rate issues under quite a few circumstances. The military thing is more subjective though, it's still a complicated system, but such is the way of DF.

As much as I can see why Tarn wants to keep the code closed source, I really wish he'd allow someone to help him with some optimisations.

Fjolsvith(1629) 5 days ago [-]

I once had a mature fortress laid waste by a black dragon. My dwarves got it killed, but there were only three surviving dwarves (who had been fully plate armored) out of 200, and after the battle they were so stricken with grief they could hardly do anything.

I've had invasions of orcs and goblins thrown back by fortresses. What a complex and !FUN! game.

LyndsySimon(10000) 5 days ago [-]

> The barrier to entry is still pretty high, and I don't think the announced additions are going to change that.

This would be a problem for a new game, especially in Steam's early access program - but at this point DF has a huge 'brand', and people know what they're getting into. I think it could even be argued that the difficult UI is part of the experience, as weird as that sounds.

Floegipoky(3991) 5 days ago [-]

Don't forget how dwarves drop their personal items all over your fortress, even when they have an assigned bedroom. Some ahole leaves their sock in your front door and the vile forces of darkness just waltz on through.

But all of the half-implemented features, years-old bugs, 'screw you' approach to UX, and general jankiness are worth it when a fire imp immolates 10 people in your great hall only to have its skull caved in when it tangles with the wrong toddler.

shrimp_emoji(10000) 5 days ago [-]

Multithread w h e n ???

Also, early embark setup fatigue is real. Set up the stockpiles. Plan the rooms. Have the plans outpace your current productivity and have it take way too long. Do this every time.

Once that's over, though, you get to enjoy sorting through droves of migrants, assigning each to the tasks they're most appropriate to or which need the most dwarves right now (the game actually approximates this automatically in that it generates migrants vaguely in reflection to the fortress's needs, but it's not like it assigns all burly, tough, slow-to-tire dwarves to your melee squads or that talented bonecrafter to more useful crafting jobs, so it's up to your manic OCD). Do this every season.

Somewhere in there, have some !FUN!.





Historical Discussions: Facebook's Data Deals Are Under Criminal Investigation (March 13, 2019: 810 points)

(810) Facebook's Data Deals Are Under Criminal Investigation

810 points 5 days ago by tysone in 174th position

www.nytimes.com | Estimated reading time – 5 minutes | comments | anchor

Federal prosecutors are conducting a criminal investigation into data deals Facebook struck with some of the world's largest technology companies, intensifying scrutiny of the social media giant's business practices as it seeks to rebound from a year of scandal and setbacks.

A grand jury in New York has subpoenaed records from at least two prominent makers of smartphones and other devices, according to two people who were familiar with the requests and who insisted on anonymity to discuss confidential legal matters. Both companies had entered into partnerships with Facebook, gaining broad access to the personal information of hundreds of millions of its users.

The companies were among more than 150, including Amazon, Apple, Microsoft and Sony, that had cut sharing deals with the world's dominant social media platform. The agreements, previously reported in The New York Times, let the companies see users' friends, contact information and other data, sometimes without consent. Facebook has phased out most of the partnerships over the past two years.

"We are cooperating with investigators and take those probes seriously," a Facebook spokesman said in a statement. "We've provided public testimony, answered questions and pledged that we will continue to do so."




All Comments: [-] | anchor

BucketSort(2788) 5 days ago [-]

Create a culture that is so fixated on wealth above all else, and this is what happens. When everyone around you is judging you based on what you have and on the successes you've achieved, what motivation does one have to behave morally? God died and the dollar took its place. I know we will one day look back on this in complete confusion as to how we just watched such clearly destructive things destabilize the country. It's always hard to understand how these things happen when looking at history without being in the contemporary madness of the times.

gotduped(10000) 5 days ago [-]

> God died and the dollar took its place

'God died' is such bull. The worst people in the world do what they do in his name.

kweinber(3993) 5 days ago [-]

It is possible to act morally without God as motivation. God was a strong motivator in the Dark Ages -- did people act more morally then?

It is possible to be a capitalist and act morally. Rates of crime are at some of the lowest levels in all of human history now... is that a sign of immorality?

Morality is about making responsible choices... often tough ones. Abdicating responsibility to a deity or a monetary system is the opposite of what is needed for morally responsible decision making.

tabs_masterrace(10000) 5 days ago [-]

> destructive destabilize the country

I don't think Facebook is destabilizing the country. In fact, I don't think anything bad at all is happening. Like, please explain to me your negative repercussions from Facebook sharing some of their data with Amazon. How did that undermine the country? People have gotten so hysterical about this topic, it's madness, and that is what's destabilizing the country IMHO.

Like, OP is calling to put some of the most brilliant minds in Silicon Valley in jail, which is ridiculous! But if anything like this were to happen, I think you're going to see FAANG & Co seriously consider re-basing outside the U.S. (well, maybe not G).

kevin_thibedeau(10000) 5 days ago [-]

Facebook isn't doing anything worse than what Acxiom and other data brokers have been doing for decades. None of it is criminal without any general purpose data protection laws. This is just pitchfork populism.

anonymfus(2389) 5 days ago [-]

How do you know it?

reaperducer(3842) 5 days ago [-]

Facebook isn't doing anything worse than what Acxiom and other data brokers have been doing for decades

Because other people are doing bad things, it's OK for Facebook to do bad things.

I'm not sure that's how the law works.

m4x(10000) 5 days ago [-]

If federal prosecutors are conducting a criminal investigation, it's almost certain those investigators believe(d) a law was broken. I don't think anyone here is in a position to claim 'they did nothing criminal' without inside knowledge of the investigation - in which case you certainly wouldn't be commenting here

badfrog(10000) 5 days ago [-]

No indication of what the charges are? The closest thing in the article is:

> the partnerships seemed to violate a 2011 consent agreement between Facebook and the F.T.C

which doesn't seem like it would be criminal?

Does the US actually have criminal laws regarding selling data? Any educated guesses on what's actually going on?

realodb(10000) 5 days ago [-]

I would entertain a wire fraud argument.

otterley(3661) 5 days ago [-]

Attorney here!

Violation of a consent decree can result in criminal contempt-of-court charges. See 18 U.S.C. section 401 (https://www.law.cornell.edu/uscode/text/18/401). See also United States v. Schine, 125 F. Supp. 734 (W.D.N.Y. 1954).

shhehebehdh(10000) 5 days ago [-]

At this point it is just an investigation. Nobody has been charged with anything. They have to find a law Facebook has broken first, and presumably establish enough evidence that they expect to succeed at a trial.

solomatov(3994) 5 days ago [-]

Do you have any idea what charges these may be? AFAIU, there are no laws which make it criminal to share information.

badfrog(10000) 5 days ago [-]

Doesn't a grand jury meet after charges have been filed? Or is federal different than state in that respect?

mic47(10000) 5 days ago [-]

Yeah, I was kind of writing that under the assumption 'if it had been found criminal'.

bitxbit(10000) 5 days ago [-]

When are all the revelations going to end? It's been every few months for the past five years.

dd36(3872) 5 days ago [-]

They're a regular Wells Fargo.

dustinmoris(929) 4 days ago [-]

> F.T.C. officials, who spent the past year investigating whether Facebook violated the 2011 agreement, are now weighing the sharing deals as they negotiate a possible multibillion-dollar fine. That would be the largest such penalty ever imposed by the trade regulator.

A multibillion dollar fine? That's great, but even greater would be to put Facebook's execs behind bars. A CEO shouldn't walk away with a stuffed bank account after years of criminal offences, violating the privacy of millions of people all around the world, and then take no personal responsibility for it before our courts. The fine is attributed to Facebook, but there also needs to be a hefty penalty for the people who ran Facebook, and that is the executive team. Jail terms must be given. In the long term this will set an important precedent and deter possible future offenders!

creaghpatr(3714) 4 days ago [-]

Which Facebook exec(s) would you put behind bars and for how long?

navigatesol(10000) 4 days ago [-]

>but even greater would be to put Facebook's exces behind bars.

What is with this place wanting to throw everyone in prison? Is there some thrill you get from seeing executives in an orange jumper?

Fines can be far more beneficial to society. Make them pay in a way that actually helps other people.

hodder(3547) 4 days ago [-]

What specific charges can be filed against him? What laws have Facebook broken?

I suspect FB have broken many laws but perhaps the country simply has inadequate consumer protection.

Advocating for jail terms without specific charges is not the direction to go.

3xblah(10000) 5 days ago [-]

'It is not clear when the grand jury inquiry, overseen by prosecutors with the United States attorney's office for the Eastern District of New York, began or exactly what it is focusing on.'

What is the NYT's source for this story?

A leak about the existence of an investigation?

NYT journalist saw entries for grand jury subpoenas on PACER?

How do they know the crime has to do with data deals?

We must wait until a complaint is filed before anyone can disclose the statute allegedly violated, correct?

ovi256(3800) 5 days ago [-]

>NYT source for this story

Most probably a US DA turning the screws on Facebook.

IfOnlyYouKnew(3770) 5 days ago [-]

"according to two people who were familiar with the requests and who insisted on anonymity to discuss confidential legal matters."

Yes, there is a lot (more) we would like to know.

But this is as good a time as ever to do a little experiment regarding the practice of anonymous sources: at some point, we are likely to learn more about this investigation. Then, you can check if the information we have now was correct. Or, as the common accusation goes, it was a wholesale fabrication by the Times.

onetimemanytime(2792) 4 days ago [-]

Are you suggesting the NYT is lying...? I mean, why would they lie about it? Prosecutors use the media routinely to push their POV, so it's not that strange...

Not revealing sources is SOP.

mudil(314) 5 days ago [-]

Wholesale surveillance of individuals by internet companies has to stop. And that includes Google.

panarky(155) 4 days ago [-]

That's the 'deflect' part of 'Delay, Deny and Deflect'

https://www.nytimes.com/2018/11/14/technology/facebook-data-...

luckycharms810(10000) 5 days ago [-]

It has always been the advertisers asking for the sort of tooling that can be taken advantage of during the election cycle. If you don't think Russia should be able to manipulate the black voting populace, then maybe it's worth thinking about whether Unilever should be able to specifically target and market Axe body spray in the same fashion. The tools only exist because advertisers are paying for them. Blaming Facebook is akin to being mad at a Martin Shkreli; both are playing within the rules of a bad system.

3327(3832) 5 days ago [-]

Facebook is a criminal corp. And indeed, when you are selling data to clients that resell or do with the data as they please (including mining it), you should be under criminal investigation. Thank God the FBI exists; I'm counting down to the day that criminal charges are brought against executives who broke the law.

evolvedcleaning(10000) 5 days ago [-]

Considering they are accused of broad criminal activity, your statement should be considered a valid expression of public opinion on the matter, and should not be downvoted.

_underfl0w_(10000) 4 days ago [-]

Did anybody else catch this?

> Apple was able to hide from Facebook users all indicators that its devices were even asking for data.

I've seen a lot of discourse here that seems to favor Apple over the big 'G', but... this seems pretty shady. Does anyone else know anything about this practice or what specifically they might've been referring to?

r00fus(4031) 4 days ago [-]

It was opt-in. And hardly used.

denzil_correa(69) 4 days ago [-]

This was discussed the previous time this topic came up [0, 1, 2]. This is basically OS-level integration which enabled users to share Facebook updates without opening Facebook. Apple removed the Facebook, Twitter, Flickr etc. integration in iOS 11 [3].

[0] https://news.ycombinator.com/item?id=17223926

[1] https://news.ycombinator.com/item?id=17229301

[2] https://news.ycombinator.com/item?id=17224071

[3] https://www.cultofmac.com/485346/ios-11-ditches-facebook-twi...

vesinisa(3013) 4 days ago [-]

Apple fights privacy battles on its users' behalf only where it sees a PR benefit, like the high-publicity FBI iPhone encryption dispute a few years ago.

sonnyblarney(3260) 5 days ago [-]

I was personally involved in a leadership role in one of these key strategic deals while at a major handset maker.

To be clear, we built the 'Facebook experience' for our device because only we really could. During this era, APIs were a disaster of a mess, moreover, the special API that made the device so special was not available to the public. Ironically it was our internal APIs that were making the special sauce!

For this purpose, Facebook provided users of the app we designed access to their own profiles. Obviously, this is a fairly wide API and it had to be made available specially for users of our app.

At no time did we ever have access to FB users' private information. At no time did anyone even remotely suggest anything inappropriate or nefarious. There were simply no moral or legal discussions on this front because the issue was moot.

The situation, net, was akin to Facebook having hired a 3rd party to design an app for them, giving that app the internal FB API necessary to function, and then distributing the app.

This isn't an issue of 'times have changed' or 'looking back we'd have done something different' rather - I can affirm that there was simply no bad acting, no breach of individuals accounts, and no undue risk to individuals accounts.

Obviously this situation is very specific, and conditions will have varied.

If FB was truly giving Bing special access without people's consent - this is a big problem.

The Cambridge issue - well - this is a tricky one, because Cambridge merely took advantage of the APIs the entire world had access to. There was little if any discussion of the inherent problems with those APIs, and when it looked like maybe they were being abused, Facebook did the right thing and closed them. They even went ahead and investigated Cambridge to ensure the data was gone, and Cambridge presented them with evidence that it had been deleted. I think in this case Facebook was a responsible actor.

Clearly there are more situations to consider, but we should be thoughtful in terms of how we approach newly released information and not get caught up with the mob.

Personally, I loathe the koolaid mob that built Facebook up, but I equally loathe the hate mob wanting to take them down.

artificial(10000) 5 days ago [-]

As far as the responsibility goes, I'll raise you a Zynga, and especially the Obama campaign, which was lauded by the media and facilitated by the platform because, after they noticed the abuse, "they're on the same side". I doubt this would be such a pernicious issue if another candidate had won.

corebit(10000) 4 days ago [-]

I'm so glad you wrote this. It's mind-bogglingly stupid what people are saying about the nefarious purposes this stuff was put to when that's obviously so far from the case and EVERYBODY involved in working on this stuff 10 years ago understands it.

thinkcomp(970) 5 days ago [-]

See also, for the next likely criminal investigation:

http://www.plainsite.org/realitycheck/facebook.html

rhizome(3994) 5 days ago [-]

That page reads like a mishmash. Fake sites is it? 'Plainsite not so plain,' it would be helpful to provide some kind of flashlight.

NN88(940) 5 days ago [-]

IS THIS WHY THE DATA SERVICE WAS DOWN ALL DAY? https://www.theverge.com/2019/3/13/18264092/facebook-instagr...

pageald(10000) 5 days ago [-]

My first thought after reading the headline was that they had been placed under some kind of litigation hold and had to immediately back up all of their data on US customers to comply.

Just baseless speculation; I don't work for Facebook or have an inside source.

drugme(3955) 5 days ago [-]

Not directly.

But employee burnout (and the nascent mass exodus of top talent we'll inevitably be hearing about shortly) may very well be.

subfay(10000) 5 days ago [-]

OT: While I like seeing FB struggle recently, I have to say that their open-source contributions and the teams working on those are by far the best in this industry. I hope they keep up this great work despite all their problems.

cat199(10000) 5 days ago [-]

by far?

redhat, ibm, (linux, java, etc) for starters...

Dahoon(10000) 4 days ago [-]

Looking at code in Linux contributed by Googlers I don't see how what Facebook has made is better 'by far'. In my opinion it isn't better at all but that's just me.

8bitsrule(3960) 5 days ago [-]

IANAL. But this:

'reports last June and December that Facebook had given business partners — including makers of smartphones, tablets and other devices — deep access to users' personal information, letting some companies effectively override users' privacy settings.'

suggests breach-of-contract. Just by creating those privacy settings, the company is 'saying' to (promising) the customer who elects to use them that it will protect that information. That's an (implied) contract. Subsequently allowing anyone else to access that information is a breach.

It'd be interesting to see FB argue in court that breaking promises was okay as a form of restitution for the services they provide.

Crontab(4015) 5 days ago [-]

Referring to Facebook users as customers is quaint.

foamflower(10000) 4 days ago [-]

IANAL either, but two things stand out. First, breach of contract is not a criminal wrong, only a civil one. Second, it would be a big stretch to say that the relationship between users and Facebook (or any nominally free service) is a contract, as valid contracts must follow certain requirements like consideration or a meeting of the minds. This is partly why things like the Computer Fraud and Abuse Act that e.g. Aaron Swartz was being prosecuted under (persecuted, IMHO) can be so alarming: things like terms of service normally wouldn't even stand up as a contract, but under CFAA, they can give rise to criminal charges.

I have no idea what charges the Eastern District of NY might be seeking pursuant to these data deals, but maybe something like mail/wire fraud or honest services fraud? Again IANAL, but those are fairly broad and the government could make the case that Facebook fraudulently breached its duties to its users.

sixtram(10000) 5 days ago [-]

If it's for one user, it's a breach of contract; if you do that for 10 million, that's a deliberate act of breaching all contracts, and that is usually criminal.

Scale matters on these issues as well.

And if you are doing it as a big utility company (e.g. banks), then you are free to go :)

minimaxir(111) 5 days ago [-]

FB Response: https://twitter.com/fbnewsroom/status/1105993038671691776?s=...

> It's already been reported that there are ongoing federal investigations, incl. by the Dept of Justice. As we've said, we're cooperating w/ investigators and take those probes seriously. We've provided public testimony, answered questions, and pledged that we'll continue to do so

mehrdadn(3544) 5 days ago [-]

Does anyone ever say they're not cooperating with investigators?

decebalus1(10000) 5 days ago [-]

Is facebook still down or something? Why are they posting the responses on twitter?

JumpCrisscross(52) 5 days ago [-]

There has been a tremendous amount of grassroots lobbying, fundraising, and private investigation in New York over the past two years with respect to Facebook. It's an area where I feel Silicon Valley has abdicated its moral obligation to stand up to its own. Hoping we can develop the evidence that comes out of this case into criminal charges for individual engineers and senior officers.

godzillabrennus(10000) 5 days ago [-]

There is a movement starting to put ownership of data back into the hands of the people who created it.

https://hu-manity.co/

Feels like the right solution even if the task is monumental.

darawk(10000) 5 days ago [-]

Criminal charges for what exactly? They shared their user data with other companies. Since when is that criminal?

resters(3479) 5 days ago [-]

It started after Facebook started getting blamed for landing Trump in the White House, and the pile-on has increased steadily since then.

I'm no fan of FB's extensive use of dark patterns, but the concerted attack on FB is meant as pressure so that Zuck agrees to let FB be used as a great firewall.

Thank you, Zuck, for resisting the pressure if indeed you have. It will only increase as 2020 approaches.

elorant(546) 5 days ago [-]

So let's incriminate engineers for building a faulty airplane too.

peteradio(10000) 5 days ago [-]

What kind of scenario could you see an engineer getting charged? I find that pretty hard to imagine here.

Despegar(10000) 5 days ago [-]

It's going to be great seeing HN commenters who spent years complaining that no bankers went to jail now insist that it's bad policy for software engineers to be held criminally liable.

jdc(3248) 5 days ago [-]

If in order to get equal liability we have to _really_ advocate for our interests.

samename(10000) 5 days ago [-]

Usually when I see those calls to hold bankers accountable (especially when referring to the recent recession), they are talking about the executives of the company. While Zuckerberg was an engineer, his day-to-day role now is executive of the company. So I think the comparisons of bank executives and tech executives are apt. However, pinning this on software engineers is an entirely different ballgame.

1000units(10000) 5 days ago [-]

I'd like to go on record advocating both groups be imprisoned.

dvtrn(10000) 5 days ago [-]

Meanwhile I hope this thread stays at the top of HN for the day to see how the thread summary reads over at n-gate

mic47(10000) 5 days ago [-]

Ok, so those partnerships were basically allowing Samsung and others to build a Facebook app on their phones (i.e. allowing an alternative client).

Would this mean that it will be criminal to allow companies to create alternative clients? That would be a really interesting development.

nradov(988) 5 days ago [-]

Samsung devices shipped with the same Facebook client app as on the Google Play store, it wasn't an alternative. The back end data access was through other means.

jxdxbx(10000) 5 days ago [-]

Individually negotiated confidential deals are not the same as open APIs and protocols which is where alternative clients normally come into play. And the "alternative client" stuff doesn't even apply to Amazon, Bing etc

sintaxi(3411) 5 days ago [-]

My guess would be data harvesting to FB without the app installed.

IfOnlyYouKnew(3770) 5 days ago [-]

No, of course not. Why would you ask this?

It seems like Facebook somehow gave these partners rather deep access: to all users, not just those using those phones or those who opted into the arrangement.

donohoe(236) 5 days ago [-]

Eh, no. Dig deeper.





Historical Discussions: Write yourself a Git (2018) (March 14, 2019: 673 points)

(675) Write yourself a Git (2018)

675 points 5 days ago by adamnemecek in 16th position

wyag.thb.lt | Estimated reading time – 4 minutes | comments | anchor

Now that we have repositories, putting things inside them is in order. Also, repositories are boring, and writing a Git implementation shouldn't be just a matter of writing a bunch of mkdir. Let's talk about objects, and let's implement git hash-object and git cat-file.

Maybe you don't know these two commands — they're not exactly part of an everyday git toolbox, and they're actually quite low-level ("plumbing", in git parlance). What they do is actually very simple: hash-object converts an existing file into a git object, and cat-file prints an existing git object to the standard output.

Now, what actually is a Git object? At its core, Git is a "content-addressed filesystem". That means that unlike regular filesystems, where the name of a file is arbitrary and unrelated to that file's contents, the names of files as stored by Git are mathematically derived from their contents. This has a very important implication: if a single byte of, say, a text file, changes, its internal name will change, too. To put it simply: you don't modify a file, you create a new file in a different location. Objects are just that: files in the git repository, whose path is determined by their contents.

Warning

Git is not (really) a key-value store

Some documentation, including the excellent Pro Git, calls Git a "key-value store". This is not incorrect, but may be misleading. Regular filesystems are actually closer to a key-value store than Git is. Because it computes keys from data, Git should rather be called a value-value store.

Git uses objects to store quite a lot of things: first and foremost, the actual files it keeps in version control — source code, for example. Commits are objects, too, as well as tags. With a few notable exceptions (which we'll see later!), almost everything in Git is stored as an object.

The path of an object is computed from the SHA-1 hash of its contents. More precisely, Git renders the hash as a lowercase hexadecimal string and splits it into two parts: the first two characters, and the rest. It uses the first part as a directory name and the rest as the file name (this is because most filesystems hate having too many files in a single directory and would slow down to a crawl; Git's method creates 256 possible intermediate directories, hence dividing the average number of files per directory by 256).

Note

What is a hash function?

Simply put, a hash function is a kind of unidirectional mathematical function: it is easy to compute the hash of a value, but there's no way to compute which value produced a hash. A very simple example of a hash function is the strlen function. It's really easy to compute the length of a string, and the length of a given string will never change (unless the string itself changes, of course!), but it's impossible to retrieve the original string given only its length. Cryptographic hash functions are a much more complex version of the same, with the added property that computing an input meant to produce a given hash is hard enough to be practically impossible. (With strlen, to produce an input i with strlen(i) == 12, you just have to type twelve random characters. With algorithms such as SHA-1, it would take much, much longer — long enough to be practically impossible.)

Before we start implementing the object storage system, we must understand the exact storage format. An object starts with a header that specifies its type: blob, commit, tag or tree. This header is followed by an ASCII space (0x20), then the size of the object in bytes as an ASCII number, then a null byte (0x00), then the contents of the object. The first 48 bytes of a commit object in Wyag's repo look like this:

00000000  63 6f 6d 6d 69 74 20 31  30 38 36 00 74 72 65 65  |commit 1086.tree|
00000010  20 32 39 66 66 31 36 63  39 63 31 34 65 32 36 35  | 29ff16c9c14e265|
00000020  32 62 32 32 66 38 62 37  38 62 62 30 38 61 35 61  |2b22f8b78bb08a5a|

In the first line, we see the type header, a space (0x20), the size in ASCII (1086) and the null separator 0x00. The last four bytes on the first line are the beginning of that object's contents, the word "tree" — we'll discuss that further when we talk about commits.

The objects (headers and contents) are stored compressed with zlib.
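To make that format concrete, here is a minimal Python sketch (not from the tutorial itself) of what git hash-object -w does for a blob: build the header, hash header plus contents with SHA-1, derive the two-level path, and store everything zlib-compressed. The function name is my own.

  import hashlib
  import os
  import zlib

  def hash_object(data, obj_type='blob', gitdir='.git'):
      # Header: type, ASCII space, size in ASCII, a null byte -- then the contents.
      store = obj_type.encode() + b' ' + str(len(data)).encode() + b'\x00' + data
      sha = hashlib.sha1(store).hexdigest()

      # The first two hex characters name the directory, the rest the file.
      path = os.path.join(gitdir, 'objects', sha[:2], sha[2:])
      os.makedirs(os.path.dirname(path), exist_ok=True)
      with open(path, 'wb') as f:
          f.write(zlib.compress(store))  # header and contents compressed together
      return sha

  # Should print the same id as `echo 'hello' | git hash-object -w --stdin`.
  print(hash_object(b'hello\n'))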




All Comments: [-] | anchor

ansible(3877) 5 days ago [-]

It might be fun to try this in Rust as well.

Vogtinator(10000) 4 days ago [-]

Or in Haskell. Or in C++.

nukeop(4026) 4 days ago [-]

If you find these kinds of 'build your own X' articles interesting, there's a repo on Github that aggregates them, sorted in various categories:

https://github.com/danistefanovic/build-your-own-x

jyriand(4023) 4 days ago [-]

I think I found my learning projects for the next 10 years.

kiuraalex(10000) 4 days ago [-]

Thank you!

driusan(10000) 4 days ago [-]

Having written my own git client, I can tell you that 'the most complicated part will be the command-line arguments parsing logic' doesn't go away. I wouldn't be surprised to wake up one day and find someone published a proof that NP != P, and the proof involved trying to parse the git command line.

cabalamat(3905) 4 days ago [-]

I have an admission to make: I don't understand git. By this I mean I have a few simple commands I use (status/add/commit/push/pull) and if I try to do anything more complicated it always ends up with lots of complex error messages that I don't understand and me nuking the repository and starting again.

So I think: there must be a better way.

I have often thought about implementing a VCS. The idea behind one doesn't seem particularly complex to me (certainly it's simpler than programming languages). If I did I would quite probably use WYAG as a starting point. My first step would be to define the user's mental model -- i.e. what concepts they need to understand such that they can predict what the system will do. Then I would build a web-based UI that presents the status of the system to the user in terms of that model.

neals(10000) 4 days ago [-]

I think this describes a large majority of Git users. The last company I worked at had 30 good developers, only 1 of whom I think really, deeply understood git.

So I guess 97% of users don't really get git.

derekp7(3974) 4 days ago [-]

Look for a video on YouTube called Git Happens. I've found it fairly effective with my coworkers. It doesn't go over the command syntax, but instead dives into a logical overview of the underlying data structures.

tsukikage(10000) 4 days ago [-]

git is to version control systems as vim is to text editing or dwarf fortress is to god sims.

(dear everyone here and elsewhere recommending git incantations 'but of course you have to know what you're doing': if you regularly have to take a backup of your working area before interacting with the vcs, because the interaction may do things you did not intend from which the simplest way back is to reset hard and start over, I humbly suggest that the vcs has failed in its primary purpose)

rsp1984(3714) 4 days ago [-]

Just curious: Have you tried any of the git GUIs out there? There are many good ones: Fork, Git Tower, GitX-dev.

I guess I would be lost as well just using the command line.

rezeroed(10000) 4 days ago [-]

I'm not much better. I knew SVN inside out. I've read up on git internals, poked around with them, but eventually forget it all because 99% of the time I only use the same five commands. It's a real iceberg, Pareto principle piece of software for me.

js2(730) 4 days ago [-]

Perhaps the Git Parable will help:

http://tom.preston-werner.com/2009/05/19/the-git-parable.htm...

To provide an alternate viewpoint, I have never had trouble with Git. I'm a bottom-up how-does-this-thing-work sort of person so when I first started using Git, I sought to understand how it worked. That part of Git is pretty easy to understand. Knowing that made its CLI a lot easier to grok. Of course, at the time I was having to use ClearCase at work and Subversion on the side so Git, IMO, was a vast improvement to either of those tools.

cmroanirgo(4002) 4 days ago [-]

I'm like you. I use SourceTree to get a 'visual grasp' on what I find to be the noise of git commands. However, if you're into the command line, you can try Fossil: it's got lots going for it.

Your idea of a 'user's mental model' might land you in trouble though, because all of us come from different backgrounds (subversion, SSafe, git, HG...) and they all maddeningly redefine terms in different ways (e.g. branch, forks, commits, checkout).

AnIdiotOnTheNet(3888) 4 days ago [-]

Yeah, I too don't really understand git. It seems that it was developed without any concern for affording a good mental model of its operation to its users, and thus it is just a complex black box you chant arcane rituals at and hope it doesn't decide to burn your world down. I know I could build a mental model of it if I put enough time into it, but who wants to do that when there's actually useful things to do? So instead when I have to use it to contribute to open source projects I have a sheet of notes with incantations to cover the specific things I've had to do with it in the past.

elviejo(3999) 4 days ago [-]

I'm in the same boat. I could teach subversion in 1 hour and people would get it. I can't teach git in a whole week. So in the end my students use the same 5 commands.

dprophecyguy(3893) 4 days ago [-]

Does anybody know any more 'write your own X'-type tutorials for other projects?

Please point them out here

inetsee(3103) 4 days ago [-]

Doing a search on HN for 'write your own' returns a lot of answers, including 'Ask HN: "Write your own" or "Build your own" software projects' https://news.ycombinator.com/item?id=16591918 from a year ago.

ssivark(3866) 5 days ago [-]

I have nothing to directly comment on the tutorial. Just a tangential mention: regarding the tedious argument-parsing boilerplate in Python, I have found Python Fire to be much more convenient: https://github.com/google/python-fire

It would have shaved off another 15-20 lines from the 503-line example ;-)
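For reference, a minimal sketch of what a Fire-based CLI looks like (the add_file command is just an illustrative stand-in for the tutorial's subcommands, not anything from the article):

  import fire

  def add_file(file):
      print('Added ' + file)

  if __name__ == '__main__':
      # `python app.py add_file README.md` dispatches to add_file('README.md')
      fire.Fire({'add_file': add_file})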

Myrmornis(3619) 4 days ago [-]

docopt deserves a link in this thread: https://github.com/docopt/docopt

It's a magical idea if you haven't seen it. You just write the help text and it automatically creates the argument parsing code.
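A minimal sketch of the idea (the program and command names are made up for illustration):

  """wyag-mini.

  Usage:
    wyag.py add <file>
    wyag.py (-h | --help)
  """
  from docopt import docopt

  if __name__ == '__main__':
      args = docopt(__doc__)  # parses sys.argv against the Usage text above
      if args['add']:
          print('Added ' + args['<file>'])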

oweiler(4017) 4 days ago [-]

For Java (or any other JVM language), nothing beats https://picocli.info/. Works well with GraalVM, too.

eadan(10000) 5 days ago [-]

When using argparse, I find subparsers [1] useful in these situations.

  import argparse
  
  def add_file(file):
      print('Added ' + file)
  
  def main():
      parser = argparse.ArgumentParser()
      # dest and required make the subcommand mandatory (Python 3.7+),
      # so parse_args always sets args.func below
      subparsers = parser.add_subparsers(title='Sub Commands',
                                         dest='command', required=True)
  
      # 'add' subcommand
      add_parser = subparsers.add_parser('add', help='Add a file')
      add_parser.add_argument('file', help='File to add')
      add_parser.set_defaults(func=lambda args: add_file(args.file))
  
      args = parser.parse_args()
      args.func(args)
  
  if __name__ == '__main__':
      main()
[1] https://docs.python.org/3/library/argparse.html#sub-commands
lapinot(10000) 5 days ago [-]

Yeah, I couldn't help but notice the first piece of code is an ugly 'switch case'. There's a Python idiom for this: putting your functions in a dictionary and doing something like `cmds_dict.get(args.command, default)(args)`. I guess we all have our religious habits for argument parsing (more of a docopt-er myself).
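A small sketch of that idiom, assuming an argparse-style namespace with a command attribute (the command names here are invented):

  from argparse import Namespace

  def cmd_add(args):
      print('Added ' + args.file)

  def cmd_status(args):
      print('Nothing to commit')

  def unknown(args):
      print('Unknown command: ' + args.command)

  COMMANDS = {'add': cmd_add, 'status': cmd_status}

  def dispatch(args):
      # A dictionary lookup replaces the if/elif chain over args.command.
      COMMANDS.get(args.command, unknown)(args)

  dispatch(Namespace(command='add', file='README.md'))  # prints 'Added README.md'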

bibyte(3893) 5 days ago [-]

Python Fire looks much more concise. But do you know of any other languages that handle argument parsing better than Python?

aepiepaey(10000) 5 days ago [-]

I largely find argparse to be OK apart from a couple of issues.

First, it allows the user to abbreviate flags. They can pass --fl and it will be interpreted as --flag, assuming no other flag shares the same prefix.

This sucks for maintainability: add a new flag and any abbreviation for a previously existing flag that shares the same prefix will now stop working, breaking user workflows.

Since Python 3.5 there's the allow_abbrev parameter that allows disabling this behaviour, but then you also lose the ability to combine multiple single-character flags (so you can't pass e.g. '-Ev' any more, and would have to pass '-E -v' instead[1]).

The other issue is that it's tedious to keep all the .add_argument calls readable, while maintaining a reasonable maximum line length.

[1]: https://bugs.python.org/issue26967
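A quick sketch of the behaviour described above (the flag names are invented for the example):

  import argparse

  parser = argparse.ArgumentParser(allow_abbrev=False)  # Python 3.5+
  parser.add_argument('--flag', action='store_true')
  parser.add_argument('-E', action='store_true')
  parser.add_argument('-v', action='store_true')

  args = parser.parse_args(['-E', '-v'])   # fine
  # parser.parse_args(['--fl'])  -> error: prefix matching is now disabled
  # parser.parse_args(['-Ev'])   -> also an error with allow_abbrev=False,
  #                                 per the bug report linked above [1]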

giancarlostoro(3206) 5 days ago [-]

Reminds me of CherryPy. An object oriented app can become a web server with a few function decorators (to make them public endpoints). Coincidentally still my favorite Python web framework.

https://cherrypy.org/

mcqueenjordan(10000) 5 days ago [-]

I'm a huge fan of the Click library.

Click => command line interface creation kit.

Ironic little name...

sransara(10000) 4 days ago [-]

Thanks for sharing. I too agree on this: '... Git is complex is, in my opinion, a misconception... But maybe what makes Git the most confusing is the extreme simplicity and power of its core model. The combination of core simplicity and powerful applications often makes thing really hard to grasp...'

If I may do a self-plug, I recently wrote a note on 'Build yourself a DVCS (just like Git)'[0]. The note is an effort to discuss the reasoning behind the design decisions of the Git internals, while conceptually building a Git step by step.

[0] https://s.ransara.xyz/notes/2019/build-yourself-a-distribute...

s4vi0r(10000) 4 days ago [-]

Inconsistent commands are pretty annoying - why is it git stash list instead of git stash -ls or --list?

jordigh(427) 4 days ago [-]

While this is nice, I think it should be emphasised that the blob-tree-commit-ref data structure of git is not essential to a DVCS. One of the disadvantages of everything being git is that everyone can only think in terms of git. This makes things like Pijul's patch system, Mercurial's revlogs, or Fossil's sqlite-based data structures more obscure than they should be. People not knowing about them and considering their relative merits has resulted in a bit of a stagnation in the VCS domain.

linkmotif(3622) 4 days ago [-]

What makes it hard is that it's taught wrong. All this pull/checkout/commit/push, whereas for me it took a long time to discover that fetch/rebase/show-branch/reset/checkout --amend, and especially the interactive -p variants, are the core tools that really make it a pleasure to use. They give you flexibility and let you write and rewrite your story, whereas the commands you're introduced with provide no control to the user. It's remarkable the number of users who think you can't rewrite a Git branch.

asdkhadsj(10000) 4 days ago [-]

On the note of Git being difficult, I'm really curious to see if Pijul[1] ends up being easier to understand than Git.

[1]: https://pijul.org

thomascgalvin(10000) 4 days ago [-]

I really like the way Pijul 'thinks'. Unfortunately, I can't see myself using it (or Fossil, for that matter) for anything except toy code, because I have contract requirements to store everything in GitLab.

nemetroid(10000) 4 days ago [-]

From my limited knowledge (mostly based on jneem's article series[1]), I think Pijul is more powerful, but for the same reason also considerably more difficult to understand than Git.

In particular, Pijul supports (and depends on) working with repository states that are, in Git terms, not fully resolved. In addition, those states are potentially very difficult to even represent as flat files (see e.g. [2]). Git is simpler in that it mandates that each commit represents a fully valid filesystem state.

That said, I still think Pijul might have a place, if it turns out that it supports superior workflows that aren't possible in Git. But the 'VCS elitism' would probably become worse than it is today.

[1]: https://jneem.github.io/merging/ [2]: https://jneem.github.io/cycles/

seleniumBubbles(10000) 5 days ago [-]

This is great, thanks for sharing.

People in this thread might also appreciate this essay: https://maryrosecook.com/blog/post/git-in-six-hundred-words

And the more expanded version: https://maryrosecook.com/blog/post/git-from-the-inside-out

It really helped me comprehend Git enough to start understanding the more complex work flows.

JustSomeNobody(3792) 4 days ago [-]

I really enjoy the way MRC explains topics. I, too, would recommend her essay.





Historical Discussions: DARPA Is Building a $10M, Open-Source, Secure Voting System (March 14, 2019: 639 points)

(646) DARPA Is Building a $10M, Open-Source, Secure Voting System

646 points 4 days ago by shpat in 4007th position

motherboard.vice.com | Estimated reading time – 10 minutes | comments | anchor

For years security professionals and election integrity activists have been pushing voting machine vendors to build more secure and verifiable election systems, so voters and candidates can be assured election outcomes haven't been manipulated.

Now they might finally get this thanks to a new $10 million contract the Defense Department's Defense Advanced Research Projects Agency (DARPA) has launched to design and build a secure voting system that it hopes will be impervious to hacking.

The first-of-its-kind system will be designed by an Oregon-based firm called Galois, a longtime government contractor with experience in designing secure and verifiable systems. The system will use fully open source voting software, instead of the closed, proprietary software currently used in the vast majority of voting machines, which no one outside of voting machine testing labs can examine. More importantly, it will be built on secure open source hardware, made from special secure designs and techniques developed over the last year as part of a special program at DARPA. The voting system will also be designed to create fully verifiable and transparent results so that voters don't have to blindly trust that the machines and election officials delivered correct results.

But DARPA and Galois won't be asking people to blindly trust that their voting systems are secure—as voting machine vendors currently do. Instead they'll be publishing source code for the software online and bring prototypes of the systems to the Def Con Voting Village this summer and next, so that hackers and researchers will be able to freely examine the systems themselves and conduct penetration tests to gauge their security. They'll also be working with a number of university teams over the next year to have them examine the systems in formal test environments.

"Def Con is great, but [hackers there] will not give us as much technical details as we want [about problems they find in the systems]," Linton Salmon, program manager in DARPA's Microsystems Technology Office who is overseeing the project, said in a phone call. "Universities will give us more information. But we won't have as many people or as high visibility when we do it with universities."

The systems Galois designs won't be available for sale. But the prototypes it creates will be available for existing voting machine vendors or others to freely adopt and customize without costly licensing fees or the millions of dollars it would take to research and develop a secure system from scratch.

"We will not have a voting system that we can deploy. That's not what we do," said Salmon. "We will show a methodology that could be used by others to build a voting system that is completely secure."

Joe Kiniry is the principal scientist at Galois who is leading the project at his company. Kiniry has been involved in efforts to secure elections for years as part of a separate company he runs called Free & Fair. He's consulted with foreign governments about their election systems, and his company has been working with states in the US to design robust post-election audits. But the idea to create a secure voting system didn't come from Kiniry; it came from DARPA.

"DARPA was searching for a sexy demonstration for the [secure hardware] program. What could you put on secure hardware that people would care about and understand?" Kiniry said.

They needed a project that would be unclassified so DARPA could talk about it publicly.

"We wanted something where there could be a lot of people who could look at this in an open way and critique it and find problems," said Salmon.

The project will leverage the hefty resources of DARPA and its considerable security experience, and if it works, it could help solve a pressing national problem around election security and integrity.

"If we were to build a fake radar system, it could demonstrate secure hardware, but it wouldn't be useful to anybody. [DARPA] love the fact that we're building a demonstrator that might actually be useful to the world," Kiniry said.

Kiniry said Galois will design two basic voting machine types. The first will be a ballot-marking device that uses a touch-screen for voters to make their selections. That system won't tabulate votes. Instead it will print out a paper ballot marked with the voter's choices, so voters can review them before depositing them into an optical-scan machine that tabulates the votes. Galois will bring this system to Def Con this year.

Many current ballot-marking systems on the market today have been criticized by security professionals because they print bar codes on the ballot that the scanner can read instead of the human-readable portion voters review. Someone could subvert the bar code to say one thing, while the human-readable portion says something else. Kiniry said they're aiming to design their system without barcodes.

The optical-scan system will print a receipt with a cryptographic representation of the voter's choices. After the election, the cryptographic values for all ballots will be published on a web site, where voters can verify that their ballot and votes are among them.

"That receipt does not permit you to prove anything about how you voted, but does permit you to prove that the system accurately captured your intent and your vote is in the final tally," Kiniry said.

Members of the public will also be able to use the cryptographic values to independently tally the votes to verify the election results so that tabulating the votes isn't a closed process solely in the hands of election officials.

"Any organization [interested in verifying the election results] that hires a moderately smart software engineer [can] write their own tabulator," Kiniry said. "We fully expect that Common Cause, League of Women Voters and the [political parties] will all have their own tabulators and verifiers."

The second system Galois plans to build is an optical-scan system that reads paper ballots marked by voters by hand. They'll bring that system to Def Con next year.

*

The voting system project grew out of a larger DARPA program focused on developing secure hardware. That program, called System Security Integrated Through Hardware and Firmware or SSITH, was launched in 2017 and is aimed at developing secure hardware, and design tools to build that hardware, so that hardware would be impervious to most of the software attacks prevalent today.

Currently most security is focused on software protections for operating systems, browsers, and other programs.

"This is only the beginning. This is a problem that is so big that one DARPA program isn't going to solve even 20 percent of the problem."

"In general, software has been the way people try to solve the problems because software is adaptable," Salmon noted. There are some hardware security solutions already, he said, 'but they don't go far enough and ... require too much power and performance....We want to fix this in hardware, and then no matter what [vulnerabilities] you have in software, [attackers] would not be able to [exploit] them."

The basic problem, he said, is that most hardware is gullible and has no way of distinguishing between acceptable and unacceptable behavior. If an attacker's exploit tells the machine to do something malicious, the hardware complies without making judgments about whether it should do this.

"I'm trying to change that and make hardware part of the solution to security rather than a bystander," said Salmon. "This is only the beginning. This is a problem that is so big that one DARPA program isn't going to solve even 20 percent of the problem."

In a voting system, this means the hardware would prevent, for example, someone entering a voting booth and slipping a malicious memory card into the system and tricking the system into recording 20 votes for one vote cast, as researchers have shown could be done with some voting systems.

"Our goal is to make this so that the hardware is blocked against all of these various types of attack from the external world. If this is successful, and if the software put on top is equally successful, then it means people can't hack in and ... alter votes. It would also mean that the person who votes would get some verification that they did vote and all of that would be done in a manner that hackers couldn't change," Salmon said.

The DARPA secure hardware program involves six teams from several universities as well as Lockheed Martin. Each team was tasked with creating three secure CPU designs. Galois, which is part of the SSITH project, plans to build its voting system on top of the secure hardware designed by these teams, and will create a prototype for each CPU design.

"It's normal, open source voting system software, which just happens to be running on top of those secure CPUs," said Kiniry. "Our contention is... that a normal voting system running on COTS [commercial off-the-shelf hardware] will be hacked. A normal voting system running on the secure hardware will probably not be hacked."

Not only are teams developing secure CPUs but to best take advantage of what a secure CPU offers, they're developing new versions of open source C-compilers so they can recompile the entire software stack on a system—the operating system, the kernel, all the libraries and all the user software that's written in C.

"So it really is a powerful software play and hardware play," Kiniry said.

The program isn't about re-architecting new CPUs, but proving that existing hardware can be modified to be made secure, thereby avoiding the need to re-design hardware from scratch.

"Galois and DARPA have just stepped up and filled a vacuum of leadership at the federal level to address the well-documented vulnerabilities in US voting machines that constitute a national security crisis."

But even so, the secure designs are expected to change how new CPUs are architected going forward.

Joe Fitzpatrick, a noted hardware security expert who trains professionals on hardware hacking and security, calls the DARPA secure hardware project a lofty goal that will be great if it succeeds.

"I can't tell if they truly are architecting a new CPU that is truly resistant to all these [attacks]. But if they designed a new CPU that was able to understand and determine malicious or correct operations from the software, that's not trivial. That would be pretty amazing," said Fitzpatrick.

Peiter "Mudge" Zatko, a former program manager at DARPA and noted security professional who has testified to Congress on security issues, said this and other DARPA projects are beneficial because they usually spawn new solutions. But he cautions that secure CPUs won't solve all security problems.

"We should [also] work towards building processors that have more security principles inherent in them," he told Motherboard.

Susan Greenhalgh, policy director for the National Election Defense Coalition, an election integrity group, hopes the systems Galois and DARPA are building will finally change the status quo of insecure voting.

"The [current systems are] woefully equipped and too prosaic to drive the quantum changes needed to face the nation-state actors that are threatening our democracy," she told Motherboard. "Galois and DARPA have just stepped up and filled a vacuum of leadership at the federal level to address the well-documented vulnerabilities in US voting machines that constitute a national security crisis."




All Comments: [-] | anchor

cabalamat(3905) 4 days ago [-]

> allow voters to verify that their votes were recorded accurately

This sounds like it means it's no longer a secret vote and voters can be bribed or blackmailed to vote a particular way.

themacguffinman(10000) 4 days ago [-]

Only if the voter is allowed to keep the receipt. The system could require voters to put the paper in a box before they leave like we do now.

gsich(10000) 4 days ago [-]

Nothing beats paper.

samirm(4024) 4 days ago [-]

scissors does

sverige(2060) 4 days ago [-]

Why does this keep coming up? What is the compelling argument against paper ballots? There is no need for results to be known immediately, so how does making voting an exercise done by computers make anything better, particularly when computers are much more vulnerable to remote interference?

sonnyblarney(3260) 4 days ago [-]

'What is the compelling argument against paper ballots?'

Repudiation, verification, etc.

I suggest this technology is part of a 'pro democracy' agenda, as opposed to some kind of existential need within the US.

The tech might ostensibly be destined for S. America, Africa and parts of Asia.

abecedarius(2263) 4 days ago [-]

If you could make voting much cheaper and faster, it could be used to decide more things. (If your immediate reaction is 'But voting is a terrible way to make decisions!', well, there's considerable evidence in your favor. I think we should be researching collective decision-making a lot more broadly, but voting tech could be one building block.)

brownbat(2738) 4 days ago [-]

> What is the compelling argument against paper ballots?

To play devil's advocate...

Paper is just a medium. With apologies to Claude Shannon, critical properties of information are best ensured through secure protocols, not by picking a particular medium.

E.g., if the property you want is security, encryption is more provably secure than invisible ink. The properly encrypted message can be stored on paper, radio, magnets, or neurons, it doesn't matter.

The properties we want from ballots are somewhat uncommon and therefore very unintuitive. They are still properties of information. Availability and deniability simultaneously? (So you can personally confirm, but never provably sell your vote).

We could design a cryptographic protocol to meet those unique design goals. But not using paper alone, because the math would be too hard.

Paper appears to guarantee availability and privacy, just as invisible ink appears to guarantee security. In practice, each often fall short. Ballot boxes disappear. Absentee ballots travel through the postal system, which is a bit like blasting one unencrypted UDP packet and hoping for the best. No individual can take their paper ballot and later confirm how it was counted.

You could do these things with electrons though. It would require some fast math, like almost all useful protocols in information theory.

chapium(10000) 4 days ago [-]

Why can't we just issue paper ballots with a signed SHA-256 hash?

adrianmonk(10000) 4 days ago [-]

What's wrong with paper as a technology? Nothing. What's wrong with paper as a proposed solution? Education and public perception.

People who work with computers understand their limitations. But the average person on the street doesn't seem to see them the same way. They think computers equal modernization equals reliability. True or not, if you want a paper voting system to be a political reality, you'd have to change public opinion, and we've spent more than a decade trying but haven't gotten that done.

IshKebab(10000) 4 days ago [-]

Because of the reasons explained in the article - you can verify that your vote was recorded, and you can calculate the total yourself. There's also no need for recounts, it uses less labour and you know the result immediately.

Paper voting isn't perfect.

zepearl(4012) 4 days ago [-]

In Switzerland the swiss Post is implementing something similar => my thoughts are very similar to yours (we can even vote by letter, and an electronic vote might in comparison save me at most 5 seconds out of the avg 3 hours of debate with friends and family & reading & watching debates on TV for each round of voting).

The swiss Post organized recently a public review (with awards to identify bugs - see another older thread on HN) for the software that they'll try to launch.

On one hand the swiss Post's solution would allow me to actively check if my vote was part of the total, which I think is absolutely fantastic.

On the other hand I did access the source repository of the new potential voting system <with sparkling eyes expecting something 'special'>, but I gave up on digging into it as soon as I saw that it was written in Java.

I thought that such software, which is the foundation of the future of a nation (its voting system), would 1) use a language that leaves very little room for technical and functional bugs (e.g. something used in the aerospace industry?), 2) be structured using an extremely well-known-for-its-reliability workflow engine, and 3) be submitted to testing covering basically ALL possible combinations at ALL levels (not just e.g. '10000 cycles of randomness' but all possible input values, for all layers).

When I saw that it was written in Java (nothing against Java - same thing for e.g. C/C++) I immediately gave up because, even if that SW is made to be absolutely unhackable >>now<<, this won't be true anymore starting from the next releases as the $ and 'attention' will inevitably be reduced more and more and the whole tower will start to crumble.

Summarized: I'd like such a system, but I would need it to implemented in an extremely strict way that is able to survive times of low budgets and/or bad employees and/or bad management and/or of course corruption, which is when coincidentally a stable solution would be needed the most.

I usually (have to) choose between dark- or light-grey when I vote, but in this case, to replace the current system, it's one of the rare occasions for which I would need a 'pure white' solution :)

hansjorg(3734) 4 days ago [-]

Because paper ballots increase the cost of manipulating elections.

samirm(4024) 4 days ago [-]

Paper ballots aren't scalable or transparent. Open-source hardware and software can be audited by anyone and everyone, and can be formally verified.

therealdrag0(10000) 4 days ago [-]

Isn't the count of ballots always wrong? Like, every time there is a recount the number changes...

What's wrong with electronic ballots? If we can have a secure and auditable banking system (and every other aspect of our lives), surely we can have the same for voting?

deogeo(3999) 4 days ago [-]

Open source, open hardware? What a joke. Neither are resistant to chip/compiler level attacks such as https://www.schneier.com/blog/archives/2018/03/adding_backdo... and https://www.win.tue.nl/~aeb/linux/hh/thompson/trust.html

That's all assuming the voting machine is actually running the software/hardware they tell you - how would a voter check?

The article briefly mentions 'That receipt does not permit you to prove anything about how you voted, but does permit you to prove that the system accurately captured your intent and your vote is in the final tally,'. But if that receipt doesn't let you prove anything about how you voted, how can you tell from it that your vote was captured 'correctly'? The machine can print anything on the receipt!

Then there is the question - what problem is e-voting trying to solve? Hand-counting scales perfectly and is extremely difficult to covertly tamper with. So the only 'problem' e-voting solves is that of being unable to covertly and fully subvert elections.

kevin_thibedeau(10000) 4 days ago [-]

> That's all assuming the voting machine is actually running the software/hardware they tell you - how would a voter check?

Have dedicated hardware compute a hash from the content of program ROM on demand with a button press and present it on an auxiliary 7-segment display. Compare against the hash of the vetted image. No software need be involved.

At some point in the process, machines will be used for tabulation. You have to trust the hardware to some extent. Just keep it as simple as possible to minimize confounding complexity that an attacker can hide in.

unethical_ban(3994) 4 days ago [-]

I think it's unfair to say there is no point in e-voting besides malice.

e-Voting could make it easier / cheaper to deploy polling stations, collect ballots faster, and potentially to use more complex (but more fair and accurate) voting methods like Ranked Choice or others.

As for the 'We won't tell you how you voted but you can validate it', my first guess would be some kind of PKI where you are given the equivalent of a private key, and your results are signed.

There are issues trusting hardware vs. trusting the sight of paper and two humans, I get that. But it's worth researching.

LinuxBender(551) 4 days ago [-]

Have there been any competitions to make an open source, highly scalable and verifiable anti-tampering voting system? Maybe even a competition to see how few resources can be allocated to facilitate millions of simultaneous voters? i.e. 'did it in 50 lines of python!' like the javascript 1k competitions. [1]

[1] - https://js1k.com/

zAy0LfpBZLC8mAC(10000) 4 days ago [-]

> Have there been any competitions to make an open source, highly scalable and verifiable anti-tampering voting system

Yes, for thousands of years. The result is called the paper ballot.

You cannot have a verifiable anti-tampering voting system using computers. You need verifiability by the general public. Auditing a microchip is not something members of the general public know how to do, and in any case, it destroys the chip, so it's kinda useless anyway.

abakker(3456) 4 days ago [-]

>The systems Galois designs won't be available for sale. But the prototypes it creates will be available for existing voting machine vendors or others to freely adopt and customize without costly licensing fees or the millions of dollars it would take to research and develop a secure system from scratch.

I guess the devil is always in the details. 'freely adopt and customize' to me says that the code will not be verifiable or open source anymore? Or that the implementation could be flawed. Open sourcing the code, and then letting commercial entities change it, cut corners, make money, etc seems to be a good way to ensure that all the hard work that went into designing the system is rapidly compromised.

devoply(10000) 4 days ago [-]

When the military is building voting systems you should be a little leery.

masswerk(3501) 4 days ago [-]

Isn't there a law in the US prohibiting public institutions from competing with private businesses? This may provide a cause for not rolling it out, but rather handing it over to private enterprises for implementation.

Edit: I recall the US having to withdraw from the Human Genome Project because of this as soon as a private enterprise claimed it as a field of business.

KurtKoolbrain(10000) 4 days ago [-]

> 'impervious to certain kinds of hacking'

Guess that about sums it up. It's DARPA after all, folks...

bluedino(2207) 4 days ago [-]

Could this be a useful application of blockchain?

bushin(10000) 4 days ago [-]

Yes.

mspecter(10000) 4 days ago [-]

No.

thanatos_dem(4011) 4 days ago [-]

Could be a good application of hash chaining as it has existed since the '80s. Blockchains wouldn't add much value over that here, however.
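A minimal sketch of such a hash chain, assuming SHA-256 and a JSON record encoding chosen only for illustration (tampering with any entry breaks every later link):

    import hashlib, json

    def append_entry(chain, record):
        # Link each record to the hash of the previous entry
        prev = chain[-1]['hash'] if chain else '0' * 64
        body = json.dumps({'record': record, 'prev': prev}, sort_keys=True)
        chain.append({'record': record, 'prev': prev,
                      'hash': hashlib.sha256(body.encode()).hexdigest()})

    def verify(chain):
        # Recompute every hash from the start of the chain
        prev = '0' * 64
        for e in chain:
            body = json.dumps({'record': e['record'], 'prev': prev}, sort_keys=True)
            if e['prev'] != prev or e['hash'] != hashlib.sha256(body.encode()).hexdigest():
                return False
            prev = e['hash']
        return True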

weej(3615) 4 days ago [-]

Title is misleading. This is a 3rd-party contractor that won an RFP bid to push out hard-copy verification of the ballot and the voter's choice with some 'DARPA techniques'. Not quite the secure, confidential system with data integrity I was hoping for.

> We will show a methodology that could be used by others to build a voting system that is completely secure.

This really feels like a Proof-of-concept or reference architecture, at best.

weej(3615) 4 days ago [-]

That said, at least it's progress in the right direction (I hope). We'll see how it turns out.

rossdavidh(3966) 4 days ago [-]

'This really feels like a Proof-of-concept or reference architecture, at best.'

I think that's DARPA's primary mission, though, isn't it?

swalsh(1926) 4 days ago [-]

My ideal voting system would allow me to have a real-time feed of votes as they come in, so that at the end of the night I can check my records against the 'official' records. Names can be detached; all I need is a ballot ID. The BallotId can be something as simple as the hash of RegisteredVoterId + password + Salt + ElectionId.

As long as the voter remembers their password, they can look up their record, and the record can be fully public while preserving anonymity.
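A minimal sketch of that construction exactly as described, with the caveat that a real scheme would use a slow KDF (scrypt/argon2) rather than a bare hash:

    import hashlib

    def ballot_id(registered_voter_id, password, salt, election_id):
        # Field names taken from the comment above; '|' as separator is arbitrary
        material = '|'.join([registered_voter_id, password, salt, election_id])
        return hashlib.sha256(material.encode()).hexdigest()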

arendtio(10000) 4 days ago [-]

How do you validate that there are no 'additional' votes? Why do you require a password? Simply give them an anonymous id when they vote.

beat(3624) 4 days ago [-]

Altering your vote after the fact is not the actual problem, though.

jedberg(2122) 4 days ago [-]

The problem with any voting system that allows you to verify the vote after the fact is that it makes it too easy to coerce someone to vote a certain way.

I can promise you money (or threaten you with violence) to vote a certain way, but I can't follow you into the booth, and no matter how I make you 'verify', you can always change the vote between verification and depositing it in the box.

If there is a way to verify after, then I can withhold payment until you verify your vote, or hurt you after I've seen your vote isn't what I wanted. Not allowing after-the-fact verification means that can't happen, and greatly reduces coerced votes.

So as cool as it would be to verify my vote after the fact, it has too many unintended consequences.

thanatos_dem(4011) 4 days ago [-]

In addition to the issues of vote buying described in other comments, you're also amplifying the spoiler effect to a massive degree with a real time vote feed.

Anyone in later time zones will be less incentivized to vote if they can see the results of all the votes that came before them.

IMHO even exit polling should be outlawed. This day-long televised circus during elections is really damaging to democracy...

michaelbuckbee(3034) 4 days ago [-]

A feature/detriment of per vote verification is that it opens up the entire system to vote buying - are you describing verifying that your vote happened, or who it was cast for?

snowwrestler(3910) 4 days ago [-]

Your ideal voting system is vulnerable to coercion ('log in and show me who you voted for or else') and phishing.

Voting systems should provide confidence to voters that votes are counted correctly, but not permit anyone, including the voters themselves, to learn how they voted after the ballot is cast.

teawrecks(10000) 4 days ago [-]

Allowing everyone to verify that their vote was counted as they intend is a start, but... I'm not saying it has to use blockchain, but for its veracity to actually be openly verifiable, the voting ledger has to be publicly visible.

exolymph(492) 4 days ago [-]

Votes can't be public. Leads to coercion.

MrXOR(3646) 4 days ago [-]

Good news. A fork of the Agora voting system powered by SGX/TrustZone and verified by Cryptol?

kajecounterhack(3738) 4 days ago [-]

https://www.youtube.com/watch?v=HVmHruNg6m0

This amazing talk by Ben Adida is really relevant. He has worked on solving voting for a long time now and does a great job here of breaking down some of the salient parts of the problem.

specialist(4030) 3 days ago [-]

I have the impression that Ben Adida is no longer advocating cryptographic voting technologies. Which is encouraging.

https://www.usenix.org/conference/enigma2019/presentation/ad...

equalunique(3951) 4 days ago [-]

I'm a fan of Galois, so I'll keep tabs on this project.

danpalmer(3561) 4 days ago [-]

Agreed. I was about to write this off as a boring project that might go nowhere, but I have great confidence that Galois will treat this with the gravitas necessary from a computing and security theory point of view.

It might still go nowhere, but I expect there will be very interesting developments as a result of it.

pmoriarty(48) 4 days ago [-]

Say goodbye to democracy wherever electronic voting is rolled out.

jonahhorowitz(10000) 4 days ago [-]

You still have paper ballots - with audits.

sonnyblarney(3260) 4 days ago [-]

If there are decent identity foundations, then we have a repudiation benefit here which is better than what we have now.

In places where elections are fabricated, this might help quite a lot.

It won't make a difference in well functioning countries.

bushin(10000) 4 days ago [-]

But think of the children!

masswerk(3501) 4 days ago [-]

Thought experiment: as in aviation, build units out of two separate but parallel architectures, designed and built by unrelated, independent manufacturers, with software written by independent teams in different languages, and deploy them redundantly. (Airbus does this.) Now you have cranked up the cost of any manipulation to that of successfully attacking two separate architectures in the same realtime timeframe, maybe at several redundant units at once. That leaves the message path, so you're still screwed. (Simply because the win-to-cost ratio may be near infinity: if we have concerns regarding personal messages, how could we possibly guarantee this one?) Enter the paper trail and printers. However, does anyone remember the Xerox scanner debacle, where the compression algorithm misarranged and falsely duplicated data, or the debates about Obama's birth certificate (due to image portions duplicated by the compression algorithm)? Things like these went unnoticed for years.

What we may learn from this: a) there's no perfect system involving software; b) if we do not want to invest as much in democracy as we do in shuffling around a few people by aviation, what is democracy worth to us? Anyway, voting methods shouldn't be about cost reduction.

grepper(3941) 4 days ago [-]

For those who were perhaps intrigued, as I was--here is a bit more information I found through a cursory search about how Airbus's consensus system works. Interesting stuff. [0][1]

[0] https://aviation.stackexchange.com/questions/15234/how-does-...

[1] https://aviation.stackexchange.com/questions/21744/how-do-re...

thanatos_dem(4011) 4 days ago [-]

I use this premise as one of my architectural interview questions: design a voting system.

Having asked it dozens of times, I've come to the conclusion that I don't trust anyone to build a voting system. I like it as a question though, since it's open-ended enough to really let the candidate focus on the domains interesting to them: scalability, security, data modeling, whatever they want really.

tommd(4028) 1 day ago [-]

That's a huge leap from 'arbitrary candidates can't give a satisfactory answer during an interview' to 'I don't trust it can be done.'

Do you apply the same test to cryptographic algorithms?

nathan_long(3811) 4 days ago [-]

> Kiniry said Galois will design two basic voting machine types. The first will be a ballot-marking device that uses a touch-screen for voters to make their selections. That system won't tabulate votes. Instead it will print out a paper ballot marked with the voter's choices, so voters can review them before depositing them into an optical-scan machine that tabulates the votes. Galois will bring this system to Def Con this year.

This sounds great: paper trail, no chance of 'hanging chads' or bad handwriting, verifiable by the voter at the moment before scanning and hand-countable if necessary.

cmonnow(10000) 4 days ago [-]

Close to a billion people are going to vote using this method in a month's time.

tvbusy(10000) 4 days ago [-]

The code should be anonymous, so that it can't be used to trace who made the vote, yet can still be used to verify that it was counted. This way, anyone can verify that their vote was actually counted, so the voting system will be verifiable later on.

bdamm(10000) 4 days ago [-]

The paper trail is not so wonderful.

What we saw in 2016 was that even if a candidate were to contest a result, none of the election committees were willing to commit to a full hand recount; instead, the only options were to retabulate through the very same tabulation processes and machines that had produced the questionable results in the first place.

Without a low barrier to recounting by hand, the electronic systems' production of paper trails is worthless. Arguably worse than worthless, because it leaves everyone thinking there is a usable backup when there isn't.

tivert(10000) 4 days ago [-]

> The first will be a ballot-marking device that uses a touch-screen for voters to make their selections. That system won't tabulate votes. Instead it will print out a paper ballot marked with the voter's choices, so voters can review them before depositing them into an optical-scan machine that tabulates the votes.

That seems backwards. Touch screens suck. Why not build a validation machine that voters can feed manually completed optical-scan ballots into before they go to the tabulator? Clear feedback would help catch incorrectly filled-out votes before they're cast, no touch screen required.

The validation machine could have a very clear and user-friendly display, with candidate pictures in large type. That would definitely be easier to verify than a computer-generated optical-scan ballot.

fixermark(3856) 4 days ago [-]

Only part I don't care for is the touchscreen.

People consistently overestimate the reliability of that solution, especially for older voters with mobility challenges. Pushbuttons or levers that demand macroscopic elbow/shoulder motion are easier for that demographic to use than sensitive screens requiring fine motor control.

And that's all to say nothing of what happens when the screens become miscalibrated and accept taps a few pixels off. I'm fairly confident most of the 'It switched my vote' reports we hear are actually this category of 'user-error' (which should really be counted as 'machine malfunction').

simongr3dal(10000) 4 days ago [-]

I hate being outright dismissive, but it sounds like an expensive HTML/PDF form with a printer attached.

I do agree that the paper trail is a great thing. I'm not fundamentally against electronic voting, but I haven't heard of a system that can really compete with the simplicity and verifiability of the immutability you get from paper ballots inside ballot boxes being watched over by interested parties on all sides.

drilldrive(10000) 4 days ago [-]

The best thing about it is assuring voter confidence. And this is something I have been looking forward to for years; I hope it will be implemented soon enough.

Entangled(10000) 4 days ago [-]

Software is perfectible, skinware is not. As long as corruptible human beings are in charge, there will be room for fraud.

k_sh(4000) 4 days ago [-]

You're right, but that doesn't mean it's a waste of time to design systems more resilient to the human element.

reaperducer(3842) 4 days ago [-]

Software is perfectible, skinware is not. As long as corruptible human beings are in charge, there will be room for fraud.

Skinware writes the software.

(Is 'skinware' the new 'wetware?')

hannasanarion(10000) 4 days ago [-]

A corrupt human being can change one vote, or a few hundred if they're very industrious, in a paper ballot system. A corrupt human being can change every vote in an electronic ballot system. I would rather use the system where fraud is difficult and expensive and low-impact.

bdamm(10000) 4 days ago [-]

Corruptible humans will always be in charge, until Terminator. The question is, how much corruption are we willing to put up with, how would we know it is happening, and how robust are the apparatus for correcting those abuses?

Beefin(4018) 4 days ago [-]

What I truly don't understand is why we can't vote with our phones in this age

zanny(10000) 4 days ago [-]

Because you cannot verify your phone is not compromised at either a software or hardware level.

You would need independently verifiable hardware and all software running on a closed system (i.e., no third-party modifications to running software, which would mean at most a trusted sandbox for other applications outside the proven path) to be able to trust it to reliably take your vote.

That's on the order of correctness provability that NASA puts into launch vehicles, but NASA doesn't have to contend with hostile actors seeking to undermine their software and hardware.

rtkwe(10000) 4 days ago [-]

TL;DR: hardware security, software security, authentication of voters, and the tech literacy of the average person.

Because now, instead of securing centralized voting locations and machines, you somehow have to create perfectly secure software running on your Aunt Florence's machine with 51 toolbars and 3 different botnets installed, and also make sure she can use it properly and securely. Oh, also, now you're accepting votes as bits over the internet, giving nation states probably the juiciest target and the widest possible attack surface (see: securing every voter's computer).

Even using something like the IME and secure enclaves to take the computation outside the range of your average exploit, it's still vulnerable to attack.

Then even if you've perfectly secured the hardware and software, you're just left with the largest login/key infrastructure problem of all time, with the average voter having to understand how not to be tricked into not actually using your secured software and hardware environment...

anth_anm(10000) 4 days ago [-]

My design uses paper and pen.

Deployment requires mailing ballots out and having places where people can come in to fill them out.

10 million dollars please.

TomMarius(10000) 4 days ago [-]

How is that better than whatever we have now?

gjs278(10000) 4 days ago [-]

homeless people can't vote under this system

MBCook(639) 4 days ago [-]

How well does it work for people with motor disabilities? Vision disabilities? Does an X mean a choice, or that they crossed out their choice? What happens when the pens run out of ink? What if they can't read English?

Helpers? What do you pay them? Can they understand that dialect of that obscure language? Do you trust them not to lie about what they're marking on the ballot for someone?

The truth is electronic voting machines have upsides. Having the system fill out the ballot, which the voter then hands in, seems like an almost ideal use to me. It's totally verifiable, but can help many people who wouldn't be able to vote without help.

Barrin92(10000) 4 days ago [-]

I legitimately don't understand what the invention is here. If all you're trying to do is avoid an invalid or ambiguous ballot, and you print out a paper copy anyway, why invest 10 million into a new system instead of just using some bog-standard computer + printer?

If you're going to get the physical ballot anyway, what's the point?





Historical Discussions: Tesla Model Y (March 15, 2019: 641 points)
Model Y (March 15, 2019: 26 points)
Model 3: Now Available to Order (July 25, 2018: 1 points)
Model 3 achieves the lowest probability of injury of any car tested by NHTS (October 08, 2018: 322 points)
Model S or Model 3 (April 06, 2017: 3 points)
Model X Wins the Golden Steering Wheel (November 09, 2016: 2 points)
Tesla Model X Is First SUV to Achieve 5-Star Crash Rating in Every Category (June 13, 2017: 182 points)
$35,000 Tesla Model 3 Available Now (February 28, 2019: 785 points)
New Tesla Model S Now the Quickest Production Car (August 23, 2016: 269 points)
$35,000 Tesla Model 3 Available Now (February 28, 2019: 10 points)

(645) Tesla Model Y

645 points 4 days ago by kiddz in 3549th position

www.tesla.com | | comments | anchor

Model Y is fully electric, so you never need to visit a gas station again. If you charge overnight at home, you can wake up to a full battery every morning. And when you're on the road, it's easy to plug in along the way—at any public station or with the Tesla charging network. We currently have over 12,000 Superchargers worldwide, with six new locations opening every week.





All Comments: [-] | anchor

vcavallo(3924) 4 days ago [-]

How is that an SUV?

trymas(2516) 4 days ago [-]

What is an SUV?

lisper(118) 4 days ago [-]

'Production is expected to begin late next year.'

I'll give long odds against.

tristanperry(3748) 4 days ago [-]

Their new Shanghai factory (Gigafactory 3) seems to be partially built with the Model Y in mind so I wouldn't count against this actually being true.

redindian75(3860) 4 days ago [-]

He just announced the prices (took a screengrab)

Standard Range (230miles) -> $39K (Spring 2021)

Long Range (300miles) -> $47K (Fall 2020)

Dual ($51k) Performance ($60k) (Fall 2020)

mortenjorck(1137) 4 days ago [-]

The first question in my mind is 'are these retail prices, or are they gas-savings-spitball prices?'

matz1(10000) 4 days ago [-]

I guess I'm not much of a car person, but I can't see much difference between all the models.

azhenley(2802) 4 days ago [-]

They certainly have a common theme but they are fairly easy to distinguish in person. The 3 and S are really close until you compare their size or interior.

m463(10000) 4 days ago [-]

I usually check the door handles, but now the model Y door handles look like the model 3.

sillypuddy(2835) 3 days ago [-]

I think the exteriors are driven by aerodynamic requirements to maximize their range. So they all basically take the most aerodynamic shape possible which makes them look the same.

jcfrei(2411) 3 days ago [-]

'Production is expected to begin early 2021.' So first major deliveries in 2022; I wonder how much excitement will be left at that point.

leesec(10000) 3 days ago [-]

Where does it say that?

samcheng(4032) 4 days ago [-]

The website is up: https://www.tesla.com/modely

300 mile range, seats 7, looks a lot like a model 3 (so I guess kind of like a Mercedes GLC?)

$51,000 for the all-wheel-drive version, although Tesla is notorious for playing games with their pricing...

mandeepj(3069) 4 days ago [-]

I had to do - 'Empty cache and hard reload' to get the updated page.

Model Y looks like a child of Model 3 and Model X.

tqi(10000) 4 days ago [-]

That third row looks like it will have 0 headroom

myself248(4005) 4 days ago [-]

Snow-land wants to know: Front-wheel drive, ever?

I'm not driving rear-only, ever. Did that once, not again.

I'm not paying another $11,000 to get the dual-motor version when all I care about is the front. That's a whole 'nother car worth of money.

Keep on building California cars, Elon. I'll buy one as soon as it fits both my budget and my climate. Some models fit one or the other, but nothing does both.

ramenmeal(10000) 4 days ago [-]

Interesting point. This should actually be an advantage for Tesla, as the engineering and manufacturing costs to produce front-, rear-, and all-wheel-drive options would be much lower than for a traditional ICE car. I'm guessing they're hoping people buy the dual-motor option in this scenario though.

colechristensen(10000) 4 days ago [-]

Silicon Valley transport tech needs to visit the north some time. There is going to be a very rude awakening when everyone realizes their machine learning models only handle light rain in the southwest.

kolinko(3101) 4 days ago [-]

Hm, on the website it says that there is an all-wheel drive version just 10-15% more expensive ($4-$5k).

Drunk_Engineer(10000) 4 days ago [-]

You realize the Tesla does not have a heavy engine in the front? The term FWD is meaningless here.

bryanlarsen(3464) 3 days ago [-]

I bet RWD drive cars have fewer fatalities in the snow than AWD drive cars do.

While their ability to go forward is much better, their ability to stop is very similar.

I see lots of SUVs driving stupid fast on snow.

JustSomeNobody(3792) 3 days ago [-]

I'm from and in Florida and have never driven in ice or snow. Why not just call an Uber on snow days? :)

audunw(10000) 4 days ago [-]

RWD EV is probably better than FWD ICE. They'll generally have better weight distribution and the fine-grained control over the torque is better.

Anyway, if grip is important, you might as well go for AWD. The difference from RWD to FWD is tiny, if any (I also live in snowy-lands and have driven both). But going to AWD, especially with the dual motor AWD you get with EVs, is a world of difference and probably worth the upgrade.

fingerlocks(10000) 4 days ago [-]

I prefer to have an optional locking differential on the drive wheels for snow.

In my experience, the most common problem with snow is getting enough traction to move out of parking spot or up a steep incline. Like most AWD vehicles, the AWD Tesla has an open differential which does not help much in these scenarios. When traction gets dicey it behaves like a 2WD with torque spread between one front and one rear wheel.

sliken(10000) 4 days ago [-]

FWD has a reputation for being good in the snow; I grew up with a FWD Saab that was pretty awesome in the snow. Mostly because it had tall narrow tires, FWD, and well over 60% of the car's weight over the front wheels.

However RWD is actually better... assuming the car has a good front/rear balance near 50%... like the Tesla. Any FWD gets LESS traction when climbing than driving on the flats. A RWD gets MORE traction when climbing than driving on the flats. Of course when driving down hills you can always throttle off and just use the brakes.

Additionally your limited traction budget in a RWD allows the front wheels to be dedicated to steering only. On a FWD you have to spend part of your traction budget on acceleration.

So get the RWD, it's cheaper than AWD, and better in the snow than FWD.

jakobegger(3429) 4 days ago [-]

You can just put sand bags in the trunk in winter :)

My parents used to do that so they could drive our Toyota Hiace in winter. As a bonus, there's fresh sand for the sandbox every spring!

eemil(10000) 4 days ago [-]

Not an issue in this day and age, IMO. While FWD is more controllable, and safer in the snow, modern stability control systems have done a _lot_ to bridge the gap.

My 2006 era RWD car has the manufacturer's optional stability control, and I'm constantly impressed by how effectively it keeps the car in-line in snow. Even on long, sweeping turns, where it's hard to know when you're beginning to slide. This is without any kind of steering control, with only engine power limiting and individual-wheel braking.

Much more important, is to have proper tires for the season. Even if you don't experience snow, you need tires with the appropriate temperature ratings. Summer tires will turn rock-hard in freezing temperatures, and lose a lot of grip. Even on bare pavement.

statictype(3226) 4 days ago [-]

Note how the links are arranged on the website:

S 3 X Y

dwd(10000) 4 days ago [-]

He's been waiting 10 years to make the joke about his s3xy lineup of cars.

I preferred the BFR working name (with a nod to Doom).

hi5eyes(10000) 4 days ago [-]

Elon also tweeted that meme yesterday

https://twitter.com/elonmusk/status/1106063248581894144

awad(3440) 4 days ago [-]

Fun fact: it really was meant to be Model E, so as to spell out S E X Y properly but Ford, the trademark owner, was having none of it.

sjwright(10000) 4 days ago [-]

It was a moderately funny lame joke 8 years ago. Today it might be the most drawn-out lame joke in corporate history.

elchief(3267) 4 days ago [-]

I want my Tesla truck

biswaroop(4000) 4 days ago [-]

Check out Rivian: https://products.rivian.com/

a13n(3056) 3 days ago [-]

I gotta give it to them, it's insane how fast they're moving.

In less than a decade they've gone from one highly niche electric supercar to a luxury sedan, a luxury SUV, a mid-range sedan, and a mid-range SUV.

And in sales they're crushing competition that have been building cars for literally a hundred years.

So excited for the zero-exhaust future.

tootie(10000) 3 days ago [-]

I live near one of the busiest highways in the country and it's just sorta background noise for the neighborhood. I'm not sure people realize not only how much exhaust is spewing out of that road, but also the noise. The volume level of major cities is going to decrease dramatically.

techslave(10000) 3 days ago [-]

How again are they crushing the competition? They sell in a year what Toyota manufactures in a day. A long reservation list isn't an indicator of success, it's an indicator of demand. And the demand, in relative scale, is low.

magicnubs(10000) 3 days ago [-]

> a mid-range sedan, and a mid-range SUV

Is there a definition for mid-range vs luxury? I wouldn't consider a sedan that starts at $35K (with an average sale price of $60K), or an SUV that starts at $47K (over 50% higher than the median US worker's gross personal income[0]) mid-range. Both the Mercedes A-class and Audi A3 start at $32.5K even.

[0] https://en.wikipedia.org/wiki/Personal_income_in_the_United_...

jes5199(3965) 3 days ago [-]

In what sense is this an SUV? It looks just like a sedan.

kop316(10000) 3 days ago [-]

'So excited for the zero-exhaust future.'

Is it zero exhaust? Where do you get your energy from? Is it from a coal plant? A nuclear plant? It's more that you don't see the exhaust.

glaberficken(3980) 3 days ago [-]

Can someone explain to me how EVs can scale in older urban residential areas (most European cities) where cars are typically parked on the street (i.e. no private parking spaces)?

I just can't see how the logistics of charging would work for more than a few EVs per city block...

andrepd(3537) 3 days ago [-]

I get what you're saying, but 55,000€ is not midrange. It's more expensive than a BMW 5 Series.

clouddrover(838) 3 days ago [-]

> And in sales they're crushing competition that have been building cars for literally a hundred years.

I don't think that's a realistic assessment of where Tesla is at as a car company. Tesla is still not very good at the actual making of cars. For example, in the last five years Tesla has had more health and safety violations in their factory than the top ten automakers in the US combined:

https://www.forbes.com/sites/alanohnsman/2019/03/01/tesla-sa...

Tesla cars have among the worst reliability of any car brand:

https://www.consumerreports.org/media-room/press-releases/20...

https://www.truedelta.com/car-reliability-by-brand

Consumer Reports no longer recommends the Model 3 due to its lack of reliability:

https://www.consumerreports.org/car-reliability-owner-satisf...

In 2018, Toyota and Volkswagen each sold over 10 million cars:

https://www.autoblog.com/2019/01/11/vw-group-2018-total-sale...

Whereas Tesla has sold about 550,000 cars in 11 years:

https://en.wikipedia.org/wiki/Tesla,_Inc.#Sales

Volkswagen is starting its push into EVs. They'll be releasing multiple electric models across multiple brands every year from now on. Porsche, Audi, VW, Skoda, and SEAT to start. I'm sure there'll be electric Lamborghinis, Bentleys, and Bugattis eventually (if you're in the market for those):

https://www.reuters.com/article/us-volkswagen-electric-insig...

Volkswagen also wants to license its MEB electric car platform to other manufacturers. They already have one licensee:

https://cleantechnica.com/2019/03/05/volkswagen-wants-to-sha...

I think Tesla's main problems are that they are a small car company with an erratic CEO, inefficient and unreliable manufacturing, and they're about to face a lot of electric car competition from one of the biggest car companies in the world.

Rebelgecko(10000) 4 days ago [-]

Model Y will have Full Self-Driving capability, enabling automatic driving on city streets and highways pending regulatory approval, as well as the ability to come find you anywhere in a parking lot.

Pending regulatory approval, and also they need to figure out how to make it work first.

Off the top of my head I remember similar claims being made about the summoning feature of the Model S. Has it lived up to the marketing promises?

unethical_ban(3994) 3 days ago [-]

I assume they mean they have equipped it with the sensor and computational capacity, and motor control, required (in their analysis) to implement self-driving when the software and regulators are ready.

sandworm101(3930) 3 days ago [-]

What if I get that regulatory approval? There is plenty of privately owned land where I could Tesla all day, some with traffic lights and everything. Or perhaps this feature is 'pending' far more than regulatory approval.

x38iq84n(10000) 4 days ago [-]

Nope, Tesla is still blatantly lying.

simonebrunozzi(911) 3 days ago [-]

I call BS on this. EVERY autonomous driving expert agrees that Tesla is years away from Waymo or GM, and years away from their own claims.

ernesth(10000) 4 days ago [-]

In the hope of getting standard units, I pretended to be German. https://www.tesla.com/de_DE/modely?redirect=no

Tells me 65 cuft cargo, 540 km range.

I asked Wolfram Alpha what 65 cuft meant. I now know that it is 1/10 the volume of a gray whale. Or 1841 L.

Compared to the American version (66 cuft, 300 mi), the German one has a marginally smaller cargo volume but a far greater range (~483 km vs 540 km).
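The arithmetic behind those numbers, for anyone who wants to skip Wolfram Alpha (conversion factors only; the cargo and range figures are the ones quoted above):

    CUFT_TO_L = 28.3168   # litres per cubic foot
    MI_TO_KM = 1.60934    # kilometres per mile

    print(round(65 * CUFT_TO_L))    # 1841 L
    print(round(300 * MI_TO_KM))    # 483 km (EPA rating), vs. 540 km (NEDC)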

rypskar(10000) 4 days ago [-]

>(480 km vs 540 km)

Probably because of different standards for calculating range in USA and Europe (EPA vs NEDC)

sliken(10000) 4 days ago [-]

The EPA testing is well known to return lower numbers than the EU version. Generally it seems like the EPA numbers are pretty good for electric cars: without trying, consumers get pretty close to them, and if you are careful you can beat them. The EU numbers, on the other hand, tend to be overly optimistic.

VikingCoder(3978) 4 days ago [-]

Towing capacity? Anybody know?

jpgvm(3135) 4 days ago [-]

Probably more than you would expect. Electric motors have frankly insane torque compared to their ICE equivalents.

LeoPanthera(2780) 4 days ago [-]

I refuse to sign up for spam just to watch their announcement.

xxpor(3886) 4 days ago [-]

I hope they enjoy sending email to [email protected]

rmason(70) 4 days ago [-]

I was a bit shocked at the price being so low. They're putting in a Supercharger two miles away. Before today I appreciated what Musk was doing but never considered getting a Tesla. As of tonight I am reconsidering. My only unknown right now is going to be service.

elcomet(4029) 4 days ago [-]

I don't get it, this is more expensive than the Model 3, so the price can't be the thing that made you change your mind. Was it the size?

walrus01(1801) 4 days ago [-]

Supercharging does measurable damage to batteries. If you're on a long-distance road trip and need to use it, okay, but if you regularly supercharge, your car WILL have poorer battery condition by the time it reaches 80,000 to 100,000 miles.

This is not a problem for most owners who charge overnight at home.

The battery chemistry and heating / high amperage damage issues are unavoidable with current lithium ion chemistry.

https://electrek.co/2017/05/07/tesla-limits-supercharging-sp...

https://cleantechnica.com/2017/07/09/tesla-limiting-supercha...

I don't think you will find this information anywhere on Tesla's website. It's kind of bullshit in my opinion that they don't have at least a medium sized disclaimer saying 'hey, don't supercharge all the time... or this will happen'. I'm sure it's buried deep in the sales contract terms and conditions.

oblio(3158) 4 days ago [-]

What's the price? Can't see it outside of the US.

jread(3628) 4 days ago [-]

I'm in So Cal. My S got backed into - repair of quarter and door panels took about 6 weeks.

For annual service and repairs, it's been pleasant for me. Granted, it takes a while to get the appointment now, but they've always given me a loaner that's often nicer/newer than mine (or $700 Lyft credit one time), so I haven't minded delays. Mobile service has also been great, responsive and very convenient.

Contrast that to the Mercedes dealership. Every time we take in our warrantied SUV, I feel like they're trying to take us for every penny they can - very unpleasant.

samcheng(4032) 4 days ago [-]

Service with Tesla has been pretty good. For small things (like problems with those notorious doors) they will drive to you and fix it, even if the car is in a parking lot at work. You can schedule over text message and in the app. It's a refreshing improvement over your average luxury car dealer, who treats service as a profit center.

The issue is with parts - delays for some body parts mean your car may be sitting in the shop for MONTHS waiting for key pieces.

driverdan(1345) 3 days ago [-]

Since when is $48k considered a 'low' new car price? That's a lot of money for a car. Only a small fraction of the population is willing to spend that much.

Cyclone_(4025) 4 days ago [-]

Is it just me or does that thing not really look like an SUV? It looks a little small; when I heard it was being called an SUV I was a little surprised.

WhompingWindows(10000) 3 days ago [-]

Have you been in a Tesla before? They are surprisingly roomy on the inside. If you've been in ICE cars with much less efficient use of volume, you will be impressed when you spend time in an EV without all the huge engine, transmission, and various extra parts that aren't a flat battery and watermelon-sized electric motor.

amyjess(3045) 3 days ago [-]

Yes, it looks much more like an MPV/minivan to me, crossed with one of those SUV Coupes (think BMW X4).

bryanlarsen(3464) 4 days ago [-]

The specs say it seats seven adults, so even if that's a good squeeze, it must be larger than it looks.

audunw(10000) 4 days ago [-]

It's usually been called a CUV though

BEVs have more interior space compared to exterior size. The Model 3 is already comparable to ICE CUVs in cargo volume (not shape of the volume of course)

https://cleantechnica.com/2019/03/11/who-needs-the-model-y-h...

gambiting(10000) 4 days ago [-]

It's probably the same size as a Nissan Qashqai, maybe Skoda Karoq since it can seat 7.

gamblor956(3869) 4 days ago [-]

I was thinking the same thing. Unless the angle is off in all the videos, it's not much taller than some sedans/hatchbacks like the Avalon and most Subarus.

Even calling it a crossover seems like a stretch, since it appears to have a foot less headspace (or more!) than crossovers like the RAV4 or CRV.

josefresco(3811) 3 days ago [-]

Well it's not that weird. Kia calls my car, the Soul, a 'compact SUV' in their latest commercials, which I find hilarious. It's a tiny, tiny car.

walkingolof(1510) 4 days ago [-]

I also wonder how you would fit a roof box on that thing, sort of a requirement for calling a car an SUV IMO.

abhisuri97(2806) 4 days ago [-]

Is it supposed to be a midway point between Model 3 and Model X? Or is it supposed to be the cheaper version of Model X? I can't tell exactly where it fits into the lineup since 'midsize SUV' can mean a lot of different things and is pretty vague (especially since the model X is considered a 'compact crossover SUV' per wikipedia).

Gaelan(4021) 4 days ago [-]

Y is to X as 3 is to S

davedx(2807) 4 days ago [-]

This is the electric car I've been waiting for. We're a family of 6 and for the longest time the Model X or some huge hybrids were our only options. This is half the price of an X. I can afford it. Bring on the 7 seat version... in Europe... (starts waiting).

koonsolo(3925) 4 days ago [-]

I'm also looking for a 7 seat model. But does it look like the 3rd row has a lot of space?

A quick search didn't show me any details on this. Not even a single picture of the rear part inside.

kieranyo(10000) 3 days ago [-]

I'm in the same situation as you. This is perfect. 2021 feels so far away though!

vowelless(3677) 4 days ago [-]

Here is the direct, unlisted YouTube link: https://www.youtube.com/watch?v=3ydPFR6xb3I

WestCoastJustin(381) 4 days ago [-]

Thank you. This should be the main link.

iceninenines(10000) 4 days ago [-]

Thanks. Lmao at all the fangirls in the audience who hang on every little thing Musk says.

dang(163) 4 days ago [-]

All the Youtube links in this thread are now pointing to unavailable videos, so we switched to the tesla.com URL above.

drilldrive(10000) 4 days ago [-]

Is there any way to just receive the audio for these sorts of things? Live video is too hard on my bandwidth.

dlgeek(2884) 4 days ago [-]

Elon Musk is many things, but a charismatic public speaker is not one of them.

slg(3227) 4 days ago [-]

He also has the humor of a 6th grader. He is able to accomplish a lot of impressive things, but his humor almost makes it painful to watch one of these presentations or follow him on Twitter.

sbr464(3441) 4 days ago [-]

Idk, I appreciate a sincere, non-corporate spokesperson in our current climate.

Phobophobia(10000) 4 days ago [-]

He's also not the founder of Tesla :P

Waterluvian(3927) 4 days ago [-]

Just my own opinion but that's exactly what I like. I am completely tired of and done with Silicon Valley Startup CEO types.

agorabinary(3933) 4 days ago [-]

I've always felt that Elon talks like his tongue is numb

lettergram(1518) 4 days ago [-]

"Full self-driving capabilities"

One of the items listed is your car finding you in a parking lot. I gotta say... I really don't want to get hit by a self-driving Tesla in a parking lot.

Having experienced some of the auto-pilot issues first hand... I have serious doubts about this one.

mikejb(10000) 4 days ago [-]

'pending regulatory approval'

They won't get that for a long time. Autopilot is a driver assist feature. The step to 'the driver doesn't have to pay attention/be present' is gigantic, and I'm not holding my breath until Tesla will get that right.

ec109685(4005) 4 days ago [-]

It's not like humans are perfect in parking lots though.

samcheng(4032) 4 days ago [-]

Here's the unveil (over an hour into the video): https://youtu.be/3ydPFR6xb3I?t=4775

emehrkay(10000) 4 days ago [-]

They're lined up by model: S, 3, X, Y. Mature. Oh, he says it multiple times at the end. Dork.

userbinator(871) 4 days ago [-]

I wonder what will come after the Model Z (if they do make one.)

perilunar(4032) 4 days ago [-]

If they do a light truck it could be the Tesla U (for ute), then R for the Roadster, giving RU S3XY ?

newnewpdro(10000) 4 days ago [-]

I presume the next model is the 2: 2S3XY

jfoutz(3787) 4 days ago [-]

So far they have been S3XY. I'm pretty sure the next model has to be a space character so they can move on to the next word.

SamuelAdams(4028) 3 days ago [-]

Is it possible to buy a version without self-driving capabilities? Personally just having a nice EV would be great, especially if that reduces the cost by 5-10k.

robotresearcher(10000) 3 days ago [-]

Autopilot is a $3k option, FSD a $5k option (USD).

isolli(4027) 4 days ago [-]

Touch screens should be banned on security [edit: safety] grounds. You need to take your eyes off the road to perform simple operations such as adjusting heating. Relying on muscle memory with physical knobs is much safer. And it's not just Tesla, it's a worrying trend for many car manufacturers.

kondro(3659) 4 days ago [-]

Whilst I agree with you, and I honestly don't know whether I would ever buy a car that was basically touch-screen only (although there are controls on the wheel), Teslas have a bunch of features that allow them to safely self-pilot, at least for the small amounts of time that you would take your eyes off the road to make adjustments like this.

I've only driven cars with radar cruise control (with always-on auto-braking) and lane and blind-spot detection, and I now find these features invaluable to feeling safe whilst driving and avoiding the pitfalls distractions can cause.

yzfr12006(10000) 4 days ago [-]

Fully agree with this statement. As a person who studies UX for a living: a lot of people don't actually understand what user experience stands for. It is not just a nice interface and simple, clear, minimalistic design, although those are also important factors.

When you put a large touchscreen to the right of the driver's seat, the moment you need to get feedback from the vehicle or do something, you are distracted and your eyes are not on the road.

Now, before all Tesla owners say 'yeah, but my autopilot is on' and 'yeah, it is not as bad as I thought it would be': you are still distracted, and the primary task you have in the vehicle (driving) now has a suboptimal user experience, which might lead to worst-case scenarios.

Physical knobs are a far better way to perform tasks in your vehicle while driving, mainly because of the muscle memory your body and brain will build; you will not only perform the task faster, but you will not strain your brain to read, watch, or do whatever else the task requires.

You also forget that you are not alone in your Tesla on the road. There are thousands of other drivers who might be equally distracted or worse. So imagine what happens when you play with your screen on autopilot and are not watching your rear-view mirror while a drunk driver is approaching very fast.

Now, I am not against Tesla or autonomous driving; quite the opposite. I can't wait for the day autonomous driving is so advanced that people won't need to drive, mainly because the majority of people don't take driving seriously and the end result is the worst one possible: people losing their lives in car accidents.

There were statistics in my country alone that more people die every year from car accidents than died in active wartime.

I believe that Tesla can do a much better job of building futuristic vehicles than just placing a tablet in the middle of the car.

Shivetya(612) 4 days ago [-]

Disclaimer: I own a TM3.

At first I was worried about the center console but quickly came to realize it was a non-issue. Even the speed being displayed there did not matter, as normal eye movement when driving would pick it up, if not out of the 'corner' of my eye. I was totally comfortable with it before I made it home from picking up the car. My eighty-plus-year-old father had zero issues with it. He is of the type where you don't play with buttons etc. unless stopped. To each his own.

As for buttons, many cars have automatic climate control and I rarely if ever have changed mine. If I need the front or rear defroster it's merely a glance and a tap; muscle memory almost as much as with a button. Heated seat: my seat is right there on the bottom of the display. Temp is a simple tap left or right. All again 'muscle memory', because I am used to the car. Same as if I had to drive a friend's car: you learn, and you learn quite quickly.

I give my friends a simple test with their cars. Put yellow dots on each control you use during a drive to and from work. You can do this before or during. You would be surprised how much you don't use center console buttons. There is a reason why some controls are replicated on the steering wheel.

Plus, if you want to get down to it, if I really want to change something involved while in traffic (though honestly I don't know what that would be), I let the car drive for a while. It can do that.

As for the presentation, I had to laugh. My TM3 is blue, and for a bit when watching the TMY driving videos I was hard pressed to see the physical difference.

HyperTalk2(10000) 4 days ago [-]

I believe Elon is part of a tragically large group of people that continue to stubbornly define their expectations of the future based entirely on stuff they saw in Star Trek TNG. 'Just because you can put a touch screen somewhere doesn't mean you should' is a thought that will never enter their minds regardless of how many people die as a result.

ps(10000) 4 days ago [-]

Model S owner here, and as many others in this thread noted, it is not any more dangerous than knobs. I've also heard the opinion that the UX is much worse than with physical controls. Indeed it would be if you designed the UI poorly, but that is not the case with Tesla. And I have yet to see a physical UI that can be improved over the car's lifetime.

natch(3956) 4 days ago [-]

This is such a red herring. The critical factor isn't whether an operation is simple (adjusting heating) but rather whether it is time-sensitive (force a wiper swipe, hit the brakes, etc.), and also how frequent the operation is. With something like heating there is much more leeway for giving attention to the road as needed, because it's not as time-sensitive. Nor is it a very frequent operation, especially if the car has excellent climate control, which makes the example verge on moot.

If people are going to adjust the volume more often than adjusting the heat, then you get a hardware button in the Model 3 for example, but not for heat adjustment.

There's a lot of misinformation out there. I've seen HN posters flat out state (wrongly) that you have to use the screen for things where you don't have to. Examples: volume control, pause/play, wipers, windshield wash. Take these skeptics with a grain of salt when they assert opinions about stuff they have no experience with.

Flankk(10000) 4 days ago [-]

This is why I use a BlackBerry. So I can safely text while driving. I'm actually writing this on the freeway and

josefresco(3811) 3 days ago [-]

Anecdotally, I have a 'normal' car with hardware buttons and still look down to adjust the AC/heat etc. because there are many, many controls. My car also has a touch screen that controls the stereo and navigation; many other cars have this same setup. I don't see this as an issue worthy of concern over, say, cell phone use while driving.

Dumb question maybe (excuse me for not searching this out) but does Tesla have voice control? Seems like an obvious way to tackle this problem.

'Tesla turn my heat up' 'Tesla tune to SiriusXM channel 100' etc.

martin_bech(10000) 4 days ago [-]

How do you explain Tesla's higher safety record if that were true?

It's absolutely no problem using a touch screen; I have been for 4 years and 100,000 km in my Model S.

sliken(10000) 4 days ago [-]

I was hoping the Y would add just one dial for whatever is the most popular thing the touch screen is used for. Surely one nice dial wouldn't cost much or ruin the rather spartan aesthetic.

jfoster(3800) 4 days ago [-]

Is this a response to a problem or just the imagining of a problem? Regulations for the sake of regulations? It's not as though Tesla have just switched to touch screens.

EngineerBetter(4009)