with Comments/Articles inlined

Last updated: December 17, 2018 04:02
  1. My Dad's Friendship with Charles Barkley
  2. Google AMP case study: leads dropped by 59%
  3. Can Repelling Magnets Replace the Spring in a Pogo Stick?
  4. 4,400 year old Egyptian tomb discovered in the Saqqara pyramid complex
  5. German exclaves in Belgium separated by a bicycle path from the rest of Germany
  6. Show HN: Minimal Google Analytics Snippet
  7. Thoughts on, and pictures of, the original Macintosh User Manual
  8. Show HN: Free and open-source home for art made with code
  9. Nvidia's $1,100 AI brain for robots goes on sale
  10. Organizational Debt
  11. Show HN: Vaex - Out of Core Dataframes for Python and Fast Visualization
  12. Show HN: CHIP-8 console implemented in FPGA
  13. Show HN: Egeria, a multidimensional spreadsheet for everybody
  14. Why Design Thinking Works
  15. Browsers
  16. Show HN: High-performance ahead-of-time compiler for Machine Learning
  17. Two Chinese Stalagmites Are a 'Holy Grail' for Accurate Radiocarbon Dating
  18. Systematic Parsing of X.509: Eradicating Security Issues with a Parse Tree
  19. A lesson on thinking and acting long term
  20. Show HN: Debucsser, CSS debugging made easy
  21. Show HN: Single-header C++11 HTML document constructor
  22. How Peter Jackson Made WWI Footage Seem Astonishingly New
  23. Pendulum Waves
  24. Show HN: Try running deep learning inference on Raspberry Pi
  25. Keyboardio Kickstarter Day 1278: A startling discovery
  26. Show HN: My 7th Grade Young Entrepreneur Project -- interactive holiday cards
  27. Typewriter Cartography
  28. LiveAgent: Over $250K monthly recurring revenue with a spin-off project
  29. Show HN: Rendora – Dynamic server-side rendering for modern JavaScript websites
  30. Show HN: Minimal game with procedural graphics in JavaScript/GLSL
  31. Pushback Derails Company That Thrived on Patent Lawsuits
  32. Show HN: CertMagic – Caddy's automagic HTTPS features as a Go library
  33. Moral Machine
  34. Pampy: Pattern Matching for Python
  35. Show HN: Simple tool to upload and paste URL's to screenshots and files
  36. No Haunted Forests
  37. Few people are actually trapped in filter bubbles. Why do they say they are?
  38. Show HN: Revealer – seed phrase visual encryption backup tool
  39. Show HN: Deploying a stateful distributed service on k8s the easy way
  40. Show HN: Software Stickers Co – Simple Software Funding
  41. Show HN: DPAGE – publish webpages on the decentralized internet
  42. Show HN: Pown Proxy – MITM web proxy with text ui
  43. Show HN: Meetup Utils – Create ready to print posters and indicators for meetups
  44. Show HN: Stock Market Forecast Based on Most Similar Historical Patterns
  45. Show HN: Hardware-agnostic library for near-term quantum machine learning
  46. Show HN: Build a Slack Clone with WebRTC Video Calling
  47. Show HN: A WebGL EWA Surface Splatting Renderer
  48. Show HN: JSON.equals in Java to compare two JSON's
  49. Show HN: Kuzushiji-MNIST
  50. Show HN: Stig – A CLI tool for searching GitHub from the terminal
  51. Show HN: Telegram Directory – Find top Telegram channels, bots and groups
  52. Show HN: Elixir/Unix style pipe operations in Ruby
  53. Show HN: I made a better Secret Santa generator
  54. Show HN: Gmail Add-On: Collect Emails from Slack for Use in to Field
  55. Show HN: Kube – Deploy auto-scaled containers with one command
  56. Show HN: ChauffeurNet – Learning to Drive Beyond Pure Imitation
  57. Show HN: Element, use Puppeteer to load test your app

Some front page and Show HN stories with over 4 points from the last 7 days, with the back button working as expected.
If your internet connection drops, the page remains available to you. It updates every 30 minutes, whenever the connection resumes.
If there were any historical discussions of a story, links to all the previous submissions will appear just above the comments.
Top HN posters are ranked based on their post scores.

Historical Discussions: My Dad's Friendship With Charles Barkley (December 15, 2018: 3 points)
My Dad's Friendship with Charles Barkley (December 15, 2018: 3 points)

My Dad's Friendship with Charles Barkley

341 points about 21 hours ago by weitingliu in 3156th position | Estimated reading time – 12 minutes | comments

When Charles Barkley's mother, Charcey Glenn, passed away in June 2015, Barkley's hometown of Leeds, Alabama, came to the funeral to pay respects. But there was also an unexpected guest.

Barkley's friends couldn't quite place him. He wasn't a basketball player, he wasn't a sports figure, and he wasn't from Barkley's hometown. Here's what I can tell you about him: He wore striped, red polo shirts tucked into khaki shorts and got really excited about two-for-one deals. He was a commuter. He worked as a cat litter scientist in Muscatine, Iowa. In short, he was everyone's suburban dad. More specifically, he was my dad.

Charles Barkley and Lin Wang (Courtesy Shirley Wang)

'You know, it was obviously a very difficult time,' Barkley told me recently. 'And the next thing I know, he shows up. Everybody's like, 'Who's the Asian dude over there?' I just started laughing. I said, 'That's my boy, Lin.' They're, like, 'How do you know him?' I said, 'It's a long story.' '

My Dad: Lin Wang

The long story started four years ago.

'You know, [Barkley] has a big personality,' my dad, Lin Wang, told me last year, when I recorded him talking about Barkley.

My dad told me that he knew about Barkley long before he met him.

'Well, yeah, he's a top-50 player in the history of the NBA,' he said. 'For many years, he was the No. 2 guy, right after Michael Jordan.'

Whenever we attended dinner parties, my dad would talk about his friend Charles Barkley. The first time my dad told the story, I didn't pretend to know who this person was. Basketball has never been my thing.

Like a good millennial, I Googled Charles Barkley. He seemed pretty famous — and definitely not like anyone who would be friends with my dad. But again, as a good millennial, I knew that people have very loose definitions of the word 'friend.'

(Courtesy Shirley Wang)

About two years ago, I asked my dad if I could see their texts. My dad handed me his phone. Their texts were mostly messages from my dad that ended with an excessive number of exclamation points.

I told my dad the conversation seemed pretty one-sided and handed the phone back.

As I talked about the relationship with more and more people, I began to think that either my dad was one of the luckiest basketball fans ever — or this whole thing was an elaborate joke, a 'Dinner For Schmucks'-type situation.

But no. The friendship was real.

The Origin

'It was, like, one of the most random things,' Barkley recalled with a laugh.

'I was on a business trip,' my dad said, 'and stayed in one of the hotels and was walking in the lobby, and I saw Charles Barkley.'

'I was in Sacramento speaking at a charity event,' Barkley said.

'So, I just went to say hi and take a picture with him,' my dad said.

'I was just sitting at the bar,' Barkley said. 'And me and your dad were the only two people in there. And we just sit down and started talking.'

'He's a super nice guy,' my dad said.


'And, before we know it, we looked at each other, like, 'Yo, man, I'm hungry. Let's go to dinner,' ' Barkley said. 'It turned into a two-hour dinner. And then we actually went back to the bar and just sit there and talked for another couple of hours. And the rest is history.'

My dad and Barkley saw each other again in the bar the next night. And the night after that. At the end of the third night:

'Certainly, I told him I had a good time talking with him, hanging out with him,' my dad said. 'He said the same thing to me, and he left the phone number. He said, 'Whenever you're in Atlanta, New York City or Phoenix, check out with me. If I'm in town, we'll hang out and have a good time.' '

Hanging Out With Charles Barkley

Over the next few years, whenever my dad was in those cities, he would text Barkley, and they would hang out.

'I mean, it was just a fun time,' Barkley said. 'My friends — Shaq, Ernie, Kenny — they enjoyed just meeting him.'

They got dinner together.

Lin Wang and Shaquille O'Neal (Courtesy Shirley Wang)

'I think I had Thai basil noodle,' my dad recalled. 'It was pretty good. I had it right inside the office.'

They spent time on the set of Barkley's TNT show, 'Inside the NBA.'

'He likes to clean,' my dad said. 'There were several big can of cleaning wipes right on his desk. Every time he sit down, he cleaned his desk.'

They watched basketball games.

'Iowa lost to Maryland that day,' my dad said.

I'm pretty sure they did some partying too. But that, I don't know much about.

'Your dad is one of the happiest people I've ever met in my life,' Barkley said. 'I'm not just saying that — I mean, think about it: It's fun to be with your friends, you know? 'Cause, I don't have that many friends that I want to be around, to be honest with you. I mean, you know a lot of people. But when you go spend time with your friends, it's a whole different animal.'

Back Home ...

Back home, my dad's coworkers would tease him about Barkley and ask him about the story all the time. My dad didn't mind that they didn't believe him. He even made a slideshow of photos of him and Barkley together for our community's Chinese New Year celebration — totally irrelevant to the holiday.

I asked my dad what he thought it was about him, of all people, that made him and Charles Barkley become friends.

'I think we had a good conversation,' he said. 'We agree with each other [on] a lot of point of views.

'You know, he grown up in the '70s in Alabama. His father left him and his mother when he was little. He grown up with grandma and mother. And the grandma and mother cleaned up houses for somebody else to make a living.

'Tough life for him. But he's well-respected professionally. And that's his story.'

My dad moved to Iowa from China in the '90s. He felt that he and Barkley had similar experiences.

'So, to me, as an Asian in the U.S., I felt as long as I do a good job, people will respect me,' my dad said.

Barkley and my dad both worked hard — so hard, they believed, that the color of their skin didn't matter. In Chinese, we'd say that dad sometimes would 胡说八道(hú shuō bā dào) — that meant that sometimes he was known for spewing rubbish. I know that basketball fans might say Barkley often does the same.

In June 2015, Barkley's mother passed away. When my dad heard the news, he looked up the funeral details and hopped on a plane to Leeds, Alabama.

'It ain't easy to get to those places,' Barkley said. 'I'm from a very small town.'

And my dad showed up for his friend. Afterward, he went to dinner with Barkley and his family.

'For your dad to take the time to come to the funeral meant a great deal to me,' Barkley said.

Then, in May 2016, my dad was diagnosed with cancer. He had tumors in his heart.

I took that fall off from school. My dad and I watched mobster movies together. Action movies. Kung Fu movies. When the credits rolled, we'd flip to a basketball game. Just me and him, watching a lot of TV in our living room.

Days passed by. Then months.

Then, it was two years.

My dad never told Barkley that he was sick.

'I called him and got mad at him when I found out,' Barkley said. 'I was, like, 'Dude, we're friends. You can tell me. You're not bothering me. You know me well enough — if you were bothering me, I would tell you you were bothering me.' '

What Barkley didn't know was that my dad watched him almost every night on TNT. And while he rested and healed, my dad was laughing along with Barkley. He kept my dad company.

The NBA Finals

June 2018. NBA Finals. The Golden State Warriors vs. the Cleveland Cavaliers. My dad was staying in palliative care at the hospital. He loved the Warriors. I visited and read him sports highlights.

He didn't get to watch J.R. Smith's late mistake in Game 1 live. I tried to get him to laugh about Smith dribbling away from the hoop because he thought his team was ahead.

But it was a Sunday afternoon, and my dad was tired. The summer light filled his room. Then, the day faded, and dusk began to enter.

After it was all over, I went through my dad's phone and texted all his friends. I wrote:

Hi. This is Shirley. My dad just passed away.

The funeral was the day after the NBA Finals. My dad's favorite team, the Golden State Warriors, had won the night before.

The funeral was set near the outskirts of Iowa City in a house by the woods. I was talking to my childhood friend when she suddenly looked stunned. I turned to look behind me.

And standing there — drenched in sweat from the Iowa summer, towering over everyone in the room at 6 feet, 6 inches tall — was Charles Barkley.

'I had not met anybody in your family,' Barkley said. 'I didn't know anybody there.'

Everyone watched, astonished, as this man — this man we only knew from TV, this worldwide celebrity — walked down the aisle, looked at us and sighed.

'Why My Dad?'

Later, after it all, I texted Barkley and asked him: 'Why my dad? Why did he matter so much to you?' And recently, I called him up and asked: 'What did you even have to talk about?'

'Well, I think — first of all, clearly, he was a fan,' Barkley said. 'But I think the main thing we talked about was you and your brother.'

'What did you guys talk about — what did he say?' I asked.

'I think it was more that he was proud,' Barkley said. 'Because I've got a daughter, too. I'm just really, really proud of her, because I think she's a good person. And your dad was so proud of you and your brother.

'Listen: As an adult — and you're too young to understand this now — all you want is your kids to be happy. That's what you work for. To give your kids everything in life.'

The more Charles Barkley and I talked, the more I realized just how close he and my dad were. Barkley knew so much about me and my life — even though this was the first time he and I had ever talked.

(Courtesy Shirley Wang)

'It gives me great memories and great joy to know that I was a friend of his,' Barkley said. 'Just hearing about him at the funeral — what he had accomplished and what he was trying to help other people accomplish, just made me even — I wished he bragged more about himself.'

'So, let me get this straight: you were impressed by him?' I asked.

'Yeah,' Barkley said.

'I Was Blessed To Know Him'

At the funeral, people shared memories of my dad and made me realize that, for example, he was not just a cat litter chemist — but an industry-changing scientist with a Ph.D. And not just an immigrant — but someone who reached out to Chinese newcomers. And not just a thoughtful guy — but someone people trusted for advice. I realized that, even after he passed away, I would continue to learn things about my dad.

Before Barkley and I hung up, he had one more thing to say:

'Hey, listen. You stay in touch. Please tell your mom I said hello. Give her a big kiss. Tell your brother I said hello. And listen: Just keep doing you. It's your time now. Don't forget that. That's the most important thing.

'Your dad prepared you to take care of yourself. He prepared you for that. I was blessed to know him — and know you, too.'

'Thank you for your time,' I said.

'You're welcome, baby. You take it easy, you hear?'

'You too.'

I know how much his friendship with Charles Barkley meant to my dad. It was not just a relationship with a celebrity — it shed light on the possibilities of this world. A world where someone like him could just say something cool, something charming, and befriend someone like Charles Barkley.

I'm so glad that now I get to share my dad's No. 1 dinner party story.

(Courtesy Shirley Wang)

All Comments: [-]

muhneesh(4002) about 9 hours ago [-]

I watch Inside the NBA almost religiously - it's such a perfect show sometimes.

It's not your typical ESPN or Fox Sports commentary show where an anchor talks about a player's 'tenacious tenacity'. It's a show that presents itself as a place where four friends talk about a shared interest.

To accomplish this, they need to be comfortable talking to each other without being confined to the sports equivalent of political correctness - to have unconstrained degrees of freedom in criticism, humor and general skylarking. This happens oftentimes to the point of controversy, with Charles usually being involved at the center of any such controversy.

This story is beautiful, but to anyone that is a fan of Charles Barkley, it is unsurprising. He's always been a beacon of genuineness through his time as a player, as a commentator and this article simply extends that same light to his personal life.

duxup(10000) about 7 hours ago [-]

Actual on-screen chemistry that produces results feels really undervalued and rare these days.

I love reading about and watching sports... but sports shows that aren't the sports themselves are almost always so bad and wonky.

Inside the NBA is a noteworthy example of what it should be.

billforsternz(3864) about 9 hours ago [-]

This is a lovely and charming story...but. The whole premise is that the relationship between the suburban dad and the celebrity was somehow weird and unbelievable. I see that, but it's a shame we can't turn the world on its head somehow so that it would be just everyday normality. Celebrity culture has elevated some people to a kind of otherness. But they're just people.

sueders101(10000) 31 minutes ago [-]

I agree that this should just be everyday normality. However there seems to be wave of recent popular articles and essays about an epidemic of loneliness. I think essays like this can serve as a positive example of how to interact with/behave towards others and hopefully try to improve social well-being. It's certainly not the end, but I like to think it can help.

ronyeh(3810) about 8 hours ago [-]

In the end it's a story of a dude who ran into another dude and they hit it off over drinks and dinner. Then over the years they'd catch up and share stories of their lives and their children.

But the difference here is one guy is a celebrity and probably had a very very busy day to day schedule. Even then, he'd make time to meet up with Mr. Wang when they were in the same town.

I think the story is interesting because the two men came from very different backgrounds, yet discovered lots of common ground (minority skin color, achieved success despite modest upbringing, pride in their children). It's a story of a good friendship, one where you'd deliver a glowing eulogy at the other's funeral, if it ever came to that. I mean, I'm not sure how many folks I have outside of my own family that I'd have enough to say to deliver a eulogy. Maybe I need to make more friends?

danso(5) about 7 hours ago [-]

It's not just celebrity, but two guys from completely different and far separated social spheres, managing to strike up and more importantly, keep a friendship going despite the distance.

I don't think it's reasonable to expect famous people — or anyone, for that matter — to stay in touch with literally anyone who wants to befriend them. Our attention and time is a finite resource, we have to make hard choices and commitments about who we give to and withdraw from. This is a story that wouldn't happen without celebrity. Not just because Mr. Wang would not have otherwise recognized and talked to Barkley in their chance encounter, but seeing Barkley on TV continually helped keep the familiarity going, even if it was a mostly one-way street.

I bet Barkley gets many solicitations for his attention, and on the surface Wang doesn't seem any different than other fans. But I think Wang showed the genuineness of his friendship by showing up for the funeral service for Barkley's mom. Anyone, fan or friend, could have seen the news and booked a flight to be there. That Wang dropped whatever work he was doing to be there for someone he wasn't close to (socially/geospatially) was a strong sign of how much he cared about Barkley.

tptacek(73) about 7 hours ago [-]

The whole premise of the story is exactly what you say you wish it was.

pessimizer(1886) about 1 hour ago [-]

> The whole premise is that the relationship between the suburban dad and the celebrity was somehow weird and unbelievable. I see that

I'm not sure why, and it seems to me like I must be in a minority. I'm pretty sure every other upper-middle class person has one or two wealthy and/or famous friends. All wealthy and famous people can't only have friends that are wealthier or more famous than themselves, the math doesn't work out.

Instead, I see this story as an example of a very NPResque genre: upper middle-class parents often get cancer and die, surprisingly almost as often (proportionally) as the parents of people who do not work for NPR. But in the case of the children of the upper middle-class it probably hurts more because it's the first significant suffering they have ever experienced.

edit: I also assume it's pretty difficult to reject a story about a dead parent's 'simple rules for living,' interesting celebrity-filled early life with some artistic potential gone unfulfilled due to the choice to settle down and have children, or their immigrant story. They have a sentimental appeal for some readers/listeners, and your sad staffer won't be turning one in every week.

fipple(10000) about 8 hours ago [-]

Usually human lives are a continuous process of self-selecting our social circle to include those most similar to us in most ways. This is unusual because the two friends were so different, but still found common ground. Celebrities becoming friends with non-celebrities is also rare because there is always the sense that the ordinary person wants fame or money by association from the friendship, which was even true in this case to some extent (Wang proudly showing a slideshow of photos with Barkley).

a-wu(10000) about 8 hours ago [-]

Listen to the narrated version of the article. It's much more powerful hearing the author and Barkley talk about Lin, especially when Barkley talks about how he and Lin would talk about their kids.

antibland(3995) about 7 hours ago [-]

Complete opposite experience for me. I found Lin's daughter's voice quite grating and reverted to reading the text. The part that did me in was when she gleefully enunciated 'exclamation point exclamation point.'

gwern(124) about 6 hours ago [-]

I was hoping to hear more about his cat litter work. What did he do? Did he ever talk about it with Barkley? etc

sndean(164) about 5 hours ago [-]

> I was hoping to hear more about his cat litter work

Of course it could be a different Lin Wang working in this area, but it looks like he has a few patents for new absorbent materials and related things [0, 1].



dvasdekis(3427) about 5 hours ago [-]

We should also never forget the classic gaming inspired by Charles Barkley: Barkley, Shut Up and Jam!, or the spin-off RPG, Barkley, Shut Up and Jam: Gaiden.

thedailymail(10000) 39 minutes ago [-]

The plot of Barkley, Shut Up and Jam: Gaiden sounds awesome:

'The game starts off in 2041, twelve years prior to the main part of the game, in 'post-cyberpocalyptic Neo New York'. Charles Barkley performed a powerful dunk called a Chaos Dunk at a basketball game, inadvertently killing most of the people in attendance. As a result, basketball was outlawed and many basketball players were hunted down and killed. In 2053, another Chaos Dunk is performed in Manhattan, killing millions. Barkley is blamed for the Chaos Dunk and is hunted by the B-Ball Removal Department, led by Michael Jordan.'

(It just gets weirder from there)

Historical Discussions: Google AMP Case Study – Leads Dropped by 59% (How to Disable It) (August 30, 2018: 9 points)
Google AMP Case Study – Leads Dropped by 59% (How to Disable It) (September 11, 2017: 4 points)

Google AMP case study: leads dropped by 59%

278 points about 15 hours ago by krn in 1922nd position | Estimated reading time – 12 minutes | comments

If you run a WordPress site, you have probably contemplated at some point whether or not to implement the hot new Google AMP for mobile. We had the same dilemma here at Kinsta and ended up testing it for a while. In the end, we didn't see good results, and it ended up hurting our conversion rate on mobile devices. So today we are going to dive into how to disable Google AMP on your blog, and how to do it safely without 404 errors or harming your SEO. Simply deactivating the AMP plugin alone could end up really harming your site, so be careful. The good news is that neither method mentioned below requires a WordPress developer, and both can be done in a few minutes!

Google AMP

Google AMP (Accelerated Mobile Pages Project) was originally launched back in October 2015. The project relies on AMP HTML, a new open framework built entirely out of existing web technologies, which allows websites to build light-weight web pages. To put it simply, it offers a way to serve up a stripped-down version of your current web page. You can read more about it in our in-depth post on Google AMP, as well as compare all the pros and cons.
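To give a feel for what "stripped down" means, here is a rough sketch of a minimal AMP HTML page, based on the AMP project's published boilerplate (the URLs are placeholders, and the required boilerplate CSS is abbreviated; check the current AMP spec for the exact markup):

```html
<!doctype html>
<html amp>
<head>
  <meta charset="utf-8">
  <!-- The AMP runtime, which manages resource loading -->
  <script async src="https://cdn.ampproject.org/v0.js"></script>
  <!-- Points back to the canonical (non-AMP) version of the page -->
  <link rel="canonical" href="https://example.com/post/">
  <meta name="viewport" content="width=device-width,minimum-scale=1,initial-scale=1">
  <!-- Required AMP boilerplate CSS goes here (abbreviated in this sketch) -->
  <title>Hello AMP</title>
</head>
<body>
  <h1>Stripped-down page content</h1>
</body>
</html>
```

Custom JavaScript is not allowed on AMP pages; interactivity comes from the AMP runtime and its components, which is a large part of why AMP pages are lighter than their originals.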

Why Google AMP Didn't Work For Us

Due to all the hype around Google AMP, we decided to give it a try on our Kinsta site. You never really know what will happen until you test something. So we let it run for two months, and here are the conclusions we came to. Note: results could vary in almost every industry, so we highly recommend testing it on your own site before drawing conclusions. A couple of ways to test this include:

  • Looking at data in Google Search Console before and after.
  • Comparing data from Google Analytics on your /amp/ URLs vs original URLs from organic traffic before and after.

Here is some data from during the time AMP was enabled on our site.

Google AMP Positions

As you can see, after enabling Google AMP and allowing them time to index we definitely saw a decrease in average positions in SERPs on mobile.

Google AMP positions data

Google AMP CTR

After enabling Google AMP we saw a decrease in CTR on mobile.

Google AMP Impressions

After enabling Google AMP we did see a higher number of impressions.

Google AMP Clicks

After enabling Google AMP we saw a slight increase in total clicks.

So for us, there was good and bad in the data above. However, the most important part was looking at the data in Google Analytics for the time AMP was enabled:

  • Our mobile leads dropped by 59.09%.
  • Our newsletter email sign-ups from mobile dropped by 16.67%.
  • Our account creations from mobile devices dropped 10.53%.
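Percentage drops like these are straightforward to compute from your own before/after totals; a quick sketch (the counts below are made up for illustration and are not Kinsta's actual numbers):

```python
def pct_drop(before: float, after: float) -> float:
    """Relative drop between two conversion counts, as a percentage."""
    return (before - after) / before * 100

# Hypothetical monthly mobile lead counts, before and after enabling AMP
leads_before, leads_after = 220, 90
print(round(pct_drop(leads_before, leads_after), 2))  # 59.09
```

Running the same calculation on your own Google Analytics segments (AMP URLs vs. original URLs, mobile organic only) is the fairest way to judge whether AMP is helping or hurting your conversions.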

Because of this, we decided that Google AMP was not working for our business model. So why did we not see good results, when others do? Well, probably one of the biggest reasons is that our site is already pretty fast on mobile. So we didn't see a huge increase in speed, as some other ad-heavy sites might have. According to Google, 70% of cellular network connections globally will occur at 3G or slower speeds through 2020. So while it is super important to optimize for mobile, those that already have a well-optimized site probably won't notice huge differences.

Another reason is that we don't publish news. A lot of big publications are using AMP and taking advantage of the carousel in SERPs. A lot of big companies like The Washington Post, Gizmodo, and Wired all saw big improvements with Google AMP, but these are all news-oriented and ad-heavy content sites. We, of course, publish a lot of content, but our primary focus is still on generating leads and signing up customers.

Could we have done more conversion rate optimization to our AMP install? Probably yes. There are ways to add CTAs, newsletter signups, etc. We did optimize for some of this. But after seeing the conversion data above it wasn't worth managing Google AMP separately, which can be a pain, just to have a slightly faster mobile site. Also, a lot of our traffic and audience to the Kinsta blog is not from mobile to begin with, so we decided to disable Google AMP.

Also, there are currently no SEO benefits from AMP unless you are a news site trying to score the carousel in SERPs. We analyzed our mobile rankings, and after AMP was fully removed, our rankings actually went up. Again, this could just be natural progress, but we saw no increases in SERPs from running AMP. If your site is slow to begin with, though, you might, so we always advise testing on your own site.

mobile rankings after removing AMP

Other brands have also seen no harmful impact from removing AMP, and like us, actually saw improvements. Outside Magazine increased pageviews per visit 13 percent after ditching Google AMP.

How to Disable Google AMP

There are a couple of different ways you can disable Google AMP. Google has official documentation on how to remove AMP from Google Search. A big problem with this, though, is that it usually requires a developer, and the instructions are not very WordPress friendly. Their very first step is to remove the rel="amphtml" link from the canonical non-AMP page, while still leaving the AMP page in place. Thankfully, there are a few different ways to approach this without harming your SEO. You don't want to simply disable the Google AMP plugin, as this will result in 404 pages.
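Concretely, the tag Google's documentation tells you to remove is the amphtml discovery link in the head of each canonical page, which on a typical WordPress AMP setup looks something like this (the URL is a placeholder):

```html
<head>
  <!-- Discovery link pointing search engines at the AMP version;
       this is the tag that needs to be removed -->
  <link rel="amphtml" href="https://example.com/2018/post-slug/amp/">
</head>
```

Once this tag is gone and the AMP pages are marked noindex, search engines stop associating the AMP copies with your canonical URLs.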

Option 1 – Search and Replace (Regex)

The first option involves using a search and replace plugin to remove the rel="amphtml" code while no-indexing the AMP pages. We can thank Gulshan Kumar who originally posted this strategy. This assumes you are utilizing the free AMP for WP plugin.

Step 1

First, you will need to download and install the free WordPress Real-Time Find and Replace plugin. One of the great things about this plugin is that it doesn't modify your database or site, so it is very safe to use without worrying about breaking anything. Basically, it applies find-and-replace rules that are executed AFTER a page is generated by WordPress, but BEFORE it is sent to a user's browser.

If you are a developer, you could, of course, do a normal search and replace. And while we would normally recommend making changes in the database for the long term, in this scenario it works great to temporarily remove the AMP code while things are re-indexing. It also means you can easily do this without a developer. Although we always recommend taking a backup first!

Real-Time Find and Replace WordPress Plugin

The plugin currently has over 60,000 active installs with a 4.5 out of 5-star rating. You can download it from the WordPress repository or by searching for it within your WordPress dashboard under "Add New" plugins.

Step 2

Click on Real-Time Find and Replace under tools in your WordPress dashboard. Click on "Add" and add the following code into the Find: field:

<link rel='amphtml' href='(.+)' >

Click on the box next to "RegEx" and then click on "Update Settings." This will remove that AMP tag from your canonical non-AMP pages and/or posts, as Google recommends.

Regex in Real-time find and replace
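If you want to verify what that find-and-replace rule matches before enabling it, you can test the same pattern against a sample of your page's HTML. A quick sketch in Python (the sample markup is hypothetical):

```python
import re

# The same regex used in the Find field of Real-Time Find and Replace
PATTERN = r"<link rel='amphtml' href='(.+)' >"

sample_head = (
    "<head>"
    "<link rel='amphtml' href='https://example.com/post/amp/' >"
    "<title>Post</title>"
    "</head>"
)

# Replacing the match with an empty string strips the AMP discovery tag
cleaned = re.sub(PATTERN, "", sample_head)
print("amphtml" in cleaned)  # False
```

Note that the pattern matches the exact quoting and spacing your theme or AMP plugin outputs; if your page's tag uses double quotes or a self-closing slash, adjust the pattern to match what you actually see in the page source.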

Step 3

The next step is to mark the AMP pages as noindex. Click into the AMP for WP options panel and open the "SEO" section. In the "Additional tags for Head" section, input the following code and hit "Save Changes."

<meta name='robots' content='noindex,follow'/>

This will tell Google to no longer index your AMP pages in search and therefore it will start re-indexing your original URLs for mobile.

No-index Google AMP pages

We recommend leaving the AMP plugin enabled until all of your AMP posts/pages have re-indexed over to the original URLs.

Step 4

We also recommend following the 301 redirects in option 2 below just to be safe!

Option 2 – Disable and Add Redirects

The second option is a little messier, but we've also seen it work fine. We recommend it only if you have issues implementing the method above. It involves simply disabling the AMP plugin and adding 301 redirects. Thanks to the AMP for WP team for originally posting this.

Step 1

The first step is to simply add 301 redirects for everything that has an AMP URL. First, you will need to download and install the free WordPress Redirection plugin. You could use your own redirect solution or plugin, but the reason we recommend this one is because you will need one that supports regular expressions. You can always uninstall the plugin after everything has re-indexed.

Redirection WordPress plugin

The plugin currently has over 1 million active installs with a 4 out of 5-star rating. You can download it from the WordPress repository or by searching for it within your WordPress dashboard under "Add New" plugins.

Step 2

Click on Redirection under tools in your WordPress dashboard. Then add the following code into the Source URL field and ensure you check the "Regex" box:


Then add the following to the Target URL field (updating the domain with your own):

$1

Ensure Redirections is selected and click on "Add Redirect."

Redirections regular expression

After adding this, we recommend browsing to a couple of your AMP blog posts or pages and testing to make sure they redirect properly. Also, if you are a Kinsta customer, you can skip installing the above plugin and simply add a global redirect from the redirects tool in your MyKinsta dashboard. Our tool supports regular expressions.

AMP redirect in MyKinsta dashboard

Alternatively, if you are running Apache, you could also add the following to the top of your .htaccess file:

# Redirect from AMP to non-AMP path
RewriteEngine On
RewriteCond %{REQUEST_URI} (.+)/amp(.*)$
RewriteRule ^ %1/ [R=301,L]
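This rewrite maps any URL containing an /amp segment back to its parent path with a trailing slash. A small Python sketch of the same mapping (the paths are hypothetical, for illustration only) makes the behavior, including one caveat, easy to check:

```python
import re

def redirect_target(request_uri):
    """Mirror of the Apache rule:
         RewriteCond %{REQUEST_URI} (.+)/amp(.*)$
         RewriteRule ^ %1/ [R=301,L]
    Returns the 301 target path, or None when no redirect fires."""
    m = re.match(r"(.+)/amp(.*)$", request_uri)
    if m:
        return m.group(1) + "/"
    return None

# /blog/my-post/amp/ redirects to /blog/my-post/
# /blog/my-post/     is left alone
# Caveat: a path like /examples/amplify also matches (.+)/amp(.*),
# so audit your URLs or tighten the pattern before relying on it.
```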

Step 3

You can then deactivate and uninstall the AMP for WP plugin.

Deactivate AMPforWP plugin

Make sure to check out our additional tips further below to monitor the re-indexing process.

Monitoring Re-Indexing

You can monitor the progress of removing Google AMP in Google Search Console under "Search Appearance > Accelerated Mobile Pages." As you can see below, the moment we implemented the above strategy, our AMP URLs started to de-index. You can also try resubmitting your sitemap file to speed up the process. Depending upon the number of AMP pages you have indexed, this process could take anywhere from a couple of days to a couple of weeks.

We also recommend utilizing a keyword rank tracking tool. For example, we monitor our desktop and mobile keywords here at Kinsta using Accuranker. It allowed us to easily see the history of each keyword and that the /amp/ URLs on mobile were re-indexing over to the original URL. This can be a quick and easy way to verify that everything goes smoothly.

re-indexing Google AMP keywords


We all love faster mobile sites, and we applaud Google for trying to make the web a better place. But as we discussed above, perhaps you aren't seeing the results you hoped for with Google AMP. We always recommend testing it as it could vary per industry. The amount of mobile traffic your site gets could also greatly impact your results.

We aren't the only ones that have had issues with AMP. Rock Star Coders saw a 70% drop in their conversion rate after testing AMP on their sites.

Thankfully, there are easy ways to disable Google AMP if you want to revert to your original setup. Neither of the options above requires a developer, and both will ensure your visitors don't see nasty 404 errors while you retain your rankings in SERPs.

Have any questions or have you encountered your own issues when trying to disable Google AMP? If so, we would love to hear about them below in the comments.

All Comments: [-]

izacus(10000) about 11 hours ago [-]

What exactly are 'leads' in this context?

tyingq(3978) about 10 hours ago [-]

The article heavily implies sales leads...collected email addresses to try and convert into sales.

They sell managed wordpress hosting.

However, their site doesn't appear to have much in terms of collecting leads. No newsletters, no trial accounts, etc. They just have paid plans you can sign up for. I guess the 'contact us' page could be considered a lead generator.

So, I'm as confused as you are. Lead generation would be pretty bad if you don't collect leads :)

asdfologist(10000) about 12 hours ago [-]

As a user I like AMP because it's guaranteed to be very fast. There, I said it.

fatjokes(10000) about 12 hours ago [-]

Completely agree as a user.

anothergoogler(10000) about 11 hours ago [-]

As user I dislike AMP because it adds a JavaScript requirement to read articles that don't have any JavaScript. There, I said it.

OnlyRepliesToBS(10000) about 11 hours ago [-]

no, you can't assert how fast the cache gets updated

it becomes indeterminate to the user, and when the harvester decides it

tyingq(3978) about 11 hours ago [-]

Well sure, Google isn't dumb. Adding in enough benefits (speed, carousel placement) to your Trojan horse ensures there are plausible reasons to bring it into the fort.

Publishers certainly wouldn't give up the most important bits of their page (the top), and cede left/right swipe hijacking (on carousel loaded pages) if there weren't some perceived benefit.

shaki-dora(3363) about 7 hours ago [-]

HN starts caring a lot about the integrity of the sacred HTML when it's their own content being mangled.

User experience is only valid as an argument in the context of that darn, biased 'mainstream media' being dismembered by ad blockers, or held for ransom by 'Brave'.

lpasselin(10000) about 10 hours ago [-]

The analysis on the google analytics data seems a bit biased against AMP.

- decrease in average position in SERPs on mobile

- decrease in CTR on mobile

- higher number of impressions

- slight increase in total clicks

These measures are correlated. You can't just add them all together.

More impressions with the same amount of clicks means CTR (click through rate) will drop. Average position should also drop.

In this case we see a slight increase in total clicks and almost double impressions!
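The arithmetic here is worth making concrete. With hypothetical numbers (not the article's actual data), a near-doubling of impressions against only a slight click gain necessarily drags CTR down:

```python
# Hypothetical before/after figures, for illustration only.
clicks_before, impressions_before = 1_000, 20_000
clicks_after, impressions_after = 1_050, 38_000  # slight click gain, ~2x impressions

ctr_before = clicks_before / impressions_before  # 5.0%
ctr_after = clicks_after / impressions_after     # ~2.76%
```

So a lower CTR (and a lower average position, if the extra impressions land in lower slots) can coexist with more total clicks, which is exactly the pattern described above.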

What if adding AMP makes Google show your site to a wider range of audience (more variance on target audience), while assigning your pages a lower weight (average position). You would get this behaviour. Lower average position but more clicks. This could lower your account creation and similar metrics because users are less interested.

What I mean is for some websites, getting more clicks would be great. If your total clicks is the goal, you would like these results (ex: page with ads). For search users, this is not good because we are shown pages that are less relevant.

I would like more explanations on mobile leads, account creation, and newsletter email sign-ups dropped. Why not show plots? Are these compared to total from previous months or percentages? We should also need to be able to compare both websites (mobile vs AMP) to see if there are any major differences.

I am not an SEO expert or web designer but I personally use AMP on my modest website. I don't even have a desktop version. Only AMP. It is simpler that way and took only an afternoon (for a css noob) to switch. What made me switch was their amp-img carousel lightbox. They are simple to use and just work. I tried a plethora of css/js carousel or album viewers and chose AMP. Bonus page speed and better mobile search cards.

le205(10000) about 8 hours ago [-]

Very much agree with this comment. It's quite likely that the AMP pages received incremental impressions in lower positions, which reduced the overall average position but increased clicks.

The screenshot of rankings from another tool (looks like Accuranker) with a few +1 ranking improvements after disabling AMP also seems insignificant. Often this kind of fluctuation is very normal. Without knowing the baseline level of ranking fluctuation, it's hard to read too much into this.

This is the danger of analysing 'totals' without segmentation to better track incrementality.

Any test needs to be properly controlled to form clear conclusions and I don't see enough rigor here.

I would however commend the article on its advice to avoid simply 'disabling' AMP after using it for a period of time. There is cleanup to be done as the article touches on, and I suspect many may not be aware.

emayljames(10000) about 15 hours ago [-]

Hopefully this one-sided, web damaging framework will die a death like IE finally did, with its awful standards.

setquk(3852) about 13 hours ago [-]


I can't understand why the hell anyone would want to use it anyway. The experience is terrible.

GuB-42(10000) about 3 hours ago [-]

There are two sides with AMP:

- A set of enforced good practices that make sites faster and generally more user friendly

- A way for Google to get more control by introducing a semi-proprietary framework

The disturbing thing is that webmasters usually criticize the first aspect. Basically they want to bloat their sites and AMP doesn't let them do it. They don't really care about the second part. They are already using Google analytics, Google ads and optimize for Google search anyways. It means that the ones who can do something want AMP to die for all the wrong reasons.

For that reason, I think that AMP is good (or less bad). Even better would be to do what AMP does but without the Google framework. That would make it even faster than AMP because there won't be any Google bloat.

xiphias2(10000) about 12 hours ago [-]

Front end engineers are not selected for their skills in understanding algorithms and data structures. It's quite easy to speed up a web site if you understand how CPU works on a low level and how Javascript is compiled to machine code, but it requires a lot of knowledge.

hellisothers(10000) about 11 hours ago [-]

The obsession with data structures and algos is getting out of control, cargo culting this idea around is going to lead to requiring an L5 to change a button color.

walshemj(3897) about 12 hours ago [-]

Err you don't need to know how Js is compiled.

You need to understand how html css and js work and not cut and paste megabytes of cruft.

mercer(1982) about 12 hours ago [-]

Very little knowledge is needed to know how to speed up a website (or keep it from becoming slow). Almost every front-ender I've met, including those who never got beyond inserting jQuery snippets, knows enough.

untog(2258) about 12 hours ago [-]

Haha what? You absolutely do not need to understand how a CPU works to make a fast site. 90% of the job is getting rid of shitty third party ad and tracking code, something most front engineers don't control. Beyond that most is just sensible practices.

In depth knowledge of the DOM would serve you a lot better than knowing about CPUs. What triggers a repaint, how you can avoid it, and so on.

dang(160) about 6 hours ago [-]

We detached this subthread from and marked it off-topic.

yowlingcat(10000) about 10 hours ago [-]

The reason you're being downvoted is not necessarily because the first part of your broad sweeping generalization is always wrong. At least at the more junior levels, frontend focused engineers don't always have the same level of DS&A fundamentals as backend focused engineers. However, many engineers usually run into their first performance related issue in the first few years of their on the job experience, at which point they learn to speed things up.

It's in the second part of your statement where you're completely and totally wrong. Speeding things up on the whole does not require a lot of knowledge, and it does not require understanding how the cpu works at a low level and how javascript is compiled to machine code. Knowing how that stuff works can help you squeeze the last 1% of optimization out of code, but in practice, you get the first 99% of speed from decisions:

- Which frameworks do you use?

- What 'add-ons' that are key to business do you end up embedding in the front-end and how much page size and slow down do they add?

- Where are your hot loops? Are you doing anything expensive inside them?

- Do you have things that are synchronous where they could be in parallel?

- How much eye candy are you adding to the page? Does it all need to be there?

- How much CSS are you using? Are you using it in a manner where CSS optimizations can do heavy lifting?

- What does the frontend and the backend API contract look like? Are there places where excess requests are occurring, and could they be rolled up so that there is less waste?

You may have noticed that many of these decisions boil down to architectural concerns as well as product and business level decisions, which are tangentially related to the labor of front-end engineering. I don't want to lob ad-hominems at people, but I find this kind of attitude one of the most tiresome parts about certain parts of the engineering community. There's this haughty, holier than thou mentality that places data structures and algorithms at the very top of engineering skills. There's a giant world out there where those skills are not at the top of the hierarchy, and actually are least useful because any sufficiently advanced development there ends up being commodified and available as open source software or as a paid SaaS (IE AWS).

Based on this short, flippant comment, it's obvious you actually have no idea how to actually speed up a web site because you have no idea about what the actual top ten things are that you'd do to speed it up in any kind of commercial production usage. What's even worse is that instead of trying to figure out something you don't know, you're instead making up an answer that sounds reasonable but is actually completely wrong and something any experienced frontend or full-stack engineer would understand is poor judgment and an ineffective approach. This is an extremely dangerous attitude to allow into an organization, because you end up with a culture where people are focused on one-upping each other and attempting to look elite as opposed to pragmatically arriving at the right approach for the problem, specific to all of its constraints. I've worked on these kinds of teams before, and it ends up being a miserable waste of time for everyone involved, and I've endeavored to work on teams that don't behave this way and to create teams that don't suffer from this.

Any engineer that gave this kind of a response to an interview question of 'We've got a web app that's slow on the frontend and exhibiting XYZ symptoms -- how would you determine the root cause and diagnose it' would be rejected on the spot by me. That's the kind of attitude that can cause engineering organizations millions of dollars a year in engineering resource misallocation.

As a professional community, we need to evolve away from attitudes like yours. They symbolize an idealized, fictional world that is anything but the pragmatic reality of what good software engineering is.

privateSFacct(10000) about 10 hours ago [-]

Here's the reality. The Taboola news carousals at the bottom of most slow loading actual news sites are filled with total junk. One page per image websites, super slow, adblock goes crazy. I've literally NEVER had a good experience with these 'Stories you may like'

Google does something with a news carousel that loads fast and the content isn't crap. I get why publishers running things like taboola think it's terrible, but as a user interested in reading some quick news without going blind, I like it.

Hacker news is the exception in terms of a quick loading / clean website.

I was a google reader user heavily - so I like the more stripped down view of things even if it's 'terrible'.

One other note about this study. This company's goal is not to educate but to sign up users. At least for myself, Folks who want / like a stripped down experience may just be less interested in being marketed to.

lol768(3927) about 10 hours ago [-]

> The Taboola news carousals at the bottom of most slow loading actual news sites are filled with total junk.

This, a thousands times. For both Taboola and Outbrain it's clickbaity trash - usually misleading and low quality content. They're designed to 'disrupt' so that people click on them and have in the past featured on blatantly false news stories. I'd honestly be ashamed to work at one of these sort of organisations and it further underlines how awful the web experience can be if you're not using an ad blocker.

It's no surprise to me that AMP is popular among users if news sites are not including such garbage in their AMP renditions.

manigandham(948) about 6 hours ago [-]

AMP is a fork of HTML that pressures limited developer resources to maintain a version for Google's visitors while only being fast because of arbitrary framework limitations.

You can get a far better outcome by having search results consider site speed as a factor in rankings, which would make all publishers improve their sites overnight. The reason Google doesn't do this is because they also run the biggest ad network in the world which runs 80% of the slow ads you see on websites.

These projects within Google are common, similar to how it ranks down websites forcing web users to download an app, when their own websites do the same. AMP allows Google to keep people on Google domains, with better data collection and more control over the independent ad networks and analytics that can run on AMP pages.

tyingq(3978) about 9 hours ago [-]

I assume you're free to implement taboola like widgets on an AMP page that link out to garbage.

krn(1922) about 10 hours ago [-]

I think that AMP made Google's search results feel like they were essentially just Google's own pages. Because that's exactly what the user sees in his address bar, when he visits an accelerated mobile page:<...>. And that encourages him to hit 'Back' as soon as he is done with it, instead of checking out the newly discovered website. Because he never felt like he had discovered anything. He never felt that he had left Google.

The fact, that there are no AMP pages in Google's search results on Firefox Mobile, was the final reason for me to make the switch. I feel like I am browsing the world wide web again, not just a Google's snapshot of it.

zavi(3993) 33 minutes ago [-]

Users don't see, they see delivered by Google

jaredcwhite(3980) about 10 hours ago [-]

Over and over again I'll click a link from Twitter or elsewhere, and in mere moments I'm thinking to myself...waaait a sec, is this another crappy AMP page?

It never looks right. It's always 'off' in some weird (and sometimes very obvious) way. Nearly every time I go to the real page on the real website, it looks better and functions better. Since I use an ad blocker, it's never a problem for me that the real site might want to load heavier ads on mobile.

AMP is a plague on the open web. It's offensive, I never request it, and it never solves any problem for me. I'm glad I use DuckDuckGo as my search engine so I never get kicked to AMP pages from search. As a web developer, I've vowed never to implement AMP on any of my client sites and will explain to my clients why if asked (and so far I've never been asked).

Just say no to AMP.

emayljames(10000) about 7 hours ago [-]

You have hit the nail on the head with this explanation. Just.Say.No!

Rjevski(3883) about 9 hours ago [-]

I don't like AMP, but I am so happy to see when websites complain that "leads dropped by 59%" because of it.

For "leads" to drop that much it must mean they weren't legitimate, happy leads in the first place. I bet those "leads" came from a shit letter popup or other similar dark pattern, in which case AMP works as designed and benefits the user by shielding them from such garbage.

AMP is bad for the web in the long term. But it is a decent stop-gap solution given that nobody is willing to make their websites bearable without Google's pressure.

EDIT: I just tried their website again without an ad blocker and sure enough, I was asked to subscribe to their shitletter.

drb91(10000) about 3 hours ago [-]

> But it is a decent stop-gap solution given that nobody is willing to make their websites bearable without Google's pressure.

Somehow I don't think Google wants them to stop showing ads. Everything else is a distraction that ignores the root cause of the customer experience being shitty.

codazoda(3925) about 9 hours ago [-]

I'm also an Amp hater and welcomed this news. Then, I went to the site. Because I really hate AMP I dismissed all three of their mobile pop ups; one after another. After reading the article I tried to use the back button to go back to hacker news; took three clicks (maybe one for each popup)? I'm guessing you are correct and they need to take another look for causation.

kentt(10000) about 5 hours ago [-]

Agreed. However, I'm more in favour of AMP because if the reasons you mentioned. Essentially I think Google has tricked a lot of publishers into using less dark patterns and avoiding shooting themselves in the foot with heavy ads and tracking. It's not perfect but from an end user perspective I'm grateful for it.

Historical Discussions: Can Repelling Magnets Replace the Spring in a Pogo Stick? (December 04, 2018: 2 points)

Can Repelling Magnets Replace the Spring in a Pogo Stick?

251 points about 11 hours ago by mhb in 80th position | Estimated reading time – 9 minutes | comments

Can Repelling Magnets Replace the Spring in a Pogo Stick?

We receive quite a few questions about replacing compression springs with repelling magnets. Is it possible? Can it be done? What magnets should be used to replace a given spring?

It's possible, but tricky. Magnets aren't a one-to-one replacement; magnets behave differently than springs.

There are many pros and cons to using repelling magnets in such situations. Magnets are more expensive than coil springs, but you can have them act across an air gap. We're not going to focus on these comparisons here. We wanted to explore the differences in the behavior of springs vs. magnets.

Let's Try It!

We got our hands on an old coil-spring pogo stick and measured the strength of its spring. After bouncing around the office a bit, we got a good sense of how much force it provides. Our adult weight compressed it by about 2". We calculated that the force at the full 7" travel is over 500 lb!

To get a quick idea of what magnets could do, we built something we could stand on. We had a number of scrap, 2" diameter RY046 ring magnets available, so that seemed like a good place to start.

We constructed a small contraption that stacked them together on threaded rod, repelling with a lot of force. Is it the start of a good pogo stick replacement?

The Pogo Stick, Scaled Down

A 3D-printed, 1/8 Scale Model of a Pogo Stick

The forces involved in a real pogo stick are quite powerful, pushing with many hundreds of pounds of force. To investigate this topic in more detail without a lot of expensive and dangerously powerful magnets, we created a small scale model, roughly 1/8 the size.

A simple mechanical spring from the hardware store seemed a good place to start. Fitted inside a 3D-printed holder we made, it lets us bounce our miniature pogo stick.

Can we remove the spring and replace it with some powerful repelling magnets?

Spring theory

Coil Spring, Force vs. Distance

Let's review how springs work before exploring how magnets are different from springs. As a coil spring is squished, the force increases. The force follows a simple, linear equation:

F = k x

The force (F) equals the spring constant (k) multiplied by the distance the spring is compressed (x).

The spring constant (k) of our spring is about 8 lb/in. This describes how much force you can expect as you push down on the spring.

If you squish this spring by 1", you expect the force to be 8 lb. If you go to 2", you expect 16 lb. Compress the spring 1/2", and the force should be 4 lb. It's easy to predict.


For a pogo stick, we don't want the spring to start with zero force. We want there to be some force right at the beginning of the travel. How can we do this?

The solution is to preload the spring. We took a 2.938" long spring and squished it into an opening that was only 2.4". Even before we jump on the pogo stick, the start position has that spring compressed by about 0.538".

Using our handy spring formula, we can predict that the initial force is about 8 lb/in x 0.538" = 4.3 lb.


Preloaded Coil Spring, Force vs. Distance, where the energy in the spring is shown by the area under the line.

A key measure of pogo stick performance is how much it shoots up into the air. Consider the pogo stick pressed down with the spring fully compressed. Imagine a 15 lb miniature person standing on it. How far will he shoot up when the spring is released?

This is really another way of asking how much energy is stored in the spring. The energy in this compressed pogo stick is equal to the area under the spring curve. The bigger the area, the more energy, the higher we bounce.
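The preload force and stored energy follow directly from F = k x. A short Python check using the article's figures (k = 8 lb/in, 0.538" of preload; the ~1.5" of working travel is our estimate from the plots, not a stated number):

```python
k = 8.0          # spring constant, lb/in
preload = 0.538  # initial compression from preloading, in
travel = 1.5     # additional compression at full squish (our estimate), in

# Force at the start of travel: F = k * x
initial_force = k * preload  # ~4.3 lb, matching the article

# Energy = area under the force-distance line over the travel:
# a rectangle (the preload force) plus a triangle (the k*x growth).
energy = initial_force * travel + 0.5 * k * travel**2  # in-lb
# ~15.5 in-lb with our travel estimate, in the same ballpark as the
# ~14.6 in-lb the article reports for the fully compressed spring.
```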

Magnets, we were supposed to be talking about magnets.

Now let's replace the spring with two repelling, 3/4" O.D. x 1/4" I.D. x 1/4" thick RC44 ring magnets. What happens?

Playing with it, something is obvious right away. There's not much force at all in the first half-inch of travel. We really only feel the repulsion force when the magnets get close together.

To evaluate the strength of this two-magnet system, we simulated and experimentally measured the forces. The difference between the magnets vs. the coil spring is dramatic. The magnet force doesn't increase in a straight line like the spring. It's very weak, and doesn't start increasing until close to the end. When it finally does, it increases very quickly! You don't get that nice, gradually increasing force of the coil spring.

Not only doesn't it feel right, it doesn't perform well either. Remember what we said about the energy being the area under the curve? You don't have to be a mathematician to see that the area under this magnet curve is substantially lower!

When fully compressed, the coil spring had a total energy of about 14.6 inch-pounds (1.65 joules). This setup with two magnets is less than 2.8 inch-pounds (0.32 joules). That's a lot weaker!
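A toy model makes the shape difference concrete. The "magnet" curve below is an illustrative inverse-square law with made-up constants, not the measured RC44 data; integrating both curves over the same travel shows why a force that only spikes at the end stores much less energy:

```python
def spring_force(x, k=8.0):
    # Linear spring: force grows steadily with compression x (inches).
    return k * x

def magnet_force(x, gap0=1.6):
    # Toy repulsion model (illustrative only): force blows up as the
    # gap between the magnets closes, but stays weak early on.
    return 0.5 / (gap0 - x) ** 2

def energy(force, travel=1.5, n=100_000):
    # Numerically integrate force over the travel (area under the curve).
    dx = travel / n
    return sum(force((i + 0.5) * dx) * dx for i in range(n))

# Early in the travel the "magnet" is weaker than the spring; near full
# compression it spikes past it; yet its total area (energy) is smaller.
```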

Got a problem? Add more magnets.

What can we do to increase the strength, especially over a greater portion of the travel? Add more magnets.

We added a third magnet, alternating which way the polarity faces with each magnet. Each magnet repels any adjacent magnets.

This still isn't anywhere near the strength of the coil spring, but it's improving in the right direction. We get a bit less force at the fully compressed position, but it increases the force along more of the pogo stick's travel.

Four magnets

This raises the force even more, but it's still dramatically weaker than the spring over most of the range of travel.

As we add more and more magnets, the thickness of the magnets limits the overall travel of the stick. The more magnets we add, the less travel we get in the pogo stick. With four magnets, the fully compressed magnets are about the same height as the fully compressed coil spring.

Five magnets

Now we're getting even more force, but the overall travel is less than the old coil spring.

Six magnets

This sturdy setup showed the most promise. It's still not the same as the old spring, but the force is similar. It's weaker in the first 3/4" of travel, then gets stronger at the end.

If we only consider the travel between 0 and 1", the energy stored in the compressed magnets is nearly the same as the spring. That's not fair to the spring, though, since the next half-inch of travel doubles the energy it stores.

That drives home the problem we face: the repelling magnet setup doesn't have nearly as much travel as the coil spring. If your spring application needs a tight range where the force is delivered, magnets might be great. For our pogo stick, where we want to spread the force over a long springy distance, the coil spring is better.


Should you use repelling magnets instead of a coil spring in your next device? As with so many magnet answers, it depends. Do they give you the performance you want where you need it?

We found that repelling magnets are very different from springs. The way they feel and perform is not the same.

For a pogo stick, it's pretty clear that magnets are not a good choice. We probably should have chosen a more pro-magnet application!

Interesting topics we haven't fully explained

The two gaps in the middle look smaller than the outer gaps.

We compressed a bunch of magnets in a stack, varying the number of magnets. During this testing, we noticed that the spacing between the magnets wasn't even. For the 4, 5 and 6 magnet stacks, the outermost gaps tended to be a bit larger than the inner gaps. Why?

We also noticed something odd about the maximum pull force when the magnets are fully compressed. As we add more magnets to the stack, the maximum pull force alternates up and down. If 2 magnets repel with some force, 3 is a little less, but 4 is a little more than 3, and 5 is a little less than 4, etc. Why is this so? We don't know. This pattern turned up in a number of different scenarios we tried, beyond the one shown here.

Highlighting the maximum forces of repelling RC44 ring magnets on a force vs. distance graph

All Comments: [-]

tedunangst(3977) about 6 hours ago [-]

The response could be a closer approximation of linear by using six tubes, two magnets each, with varying initial distances. As the stick compresses, additional magnets contribute force, but not all at once.

scythe(3961) about 4 hours ago [-]

Just have two fixed magnets and two moving ones:

[+- +-> ===== <-+ -+]

As long as the open-wide distance is capped you should have a smoother distance-energy diagram.

axaxs(10000) about 10 hours ago [-]

Loved the video. Didn't love the loud beep near the end, that's at about 3x the volume of the speaker the whole video...

CamperBob2(10000) about 7 hours ago [-]

YouTube desperately needs an audio compressor feature. Can't imagine why they haven't implemented that yet, it's more or less trivial.

umvi(10000) about 10 hours ago [-]

Seems like electromagnets could help compensate for distance by drawing more current when far apart, and less when close together. A microcontroller could essentially make the force linear like a spring. Of course, then you need wires hooked up to your pogo, and that might be scary if the microcontroller has a bug (or somehow fails) since you are now essentially hopping onto a railgun.
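A hypothetical sketch of that control idea: assuming a toy coil model where force scales like c·I²/gap² (a made-up simplification, not a real electromagnet characteristic), the controller can solve for the current that reproduces a spring-like F = k x at each compression:

```python
def current_for_spring_feel(x, k=8.0, travel=1.5, c=1.0):
    # Assumed (toy) coil model: F = c * I^2 / gap^2, with gap = travel - x.
    # Solve for the current I that makes the delivered force match a
    # linear spring, F = k * x: more current when the magnets are far
    # apart, less as they close in.
    gap = travel - x
    target_force = k * x
    return gap * (target_force / c) ** 0.5
```

With any realistic coil model the algebra changes, but the shape of the idea is the same: measure compression, look up the spring force you want, and drive whatever current produces it at the current gap.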

ehsankia(10000) about 5 hours ago [-]

Could you not hardcode limits in your circuit, such that the maximum power it supplies is never enough to be deadly?

dmurray(3937) about 4 hours ago [-]

Maybe you can use regenerative braking to charge a battery, to avoid having wires and keep a bit closer to the spirit of the pogo stick.

azhenley(3310) about 9 hours ago [-]

About as scary as my Tesla on autopilot going around a curve at 75mph. Every time I ask myself, what if there is a bug or if it turns off...

gumby(3301) about 7 hours ago [-]

You could have controls in the handgrips to change the shape and amplitude of the curve.

As far as bugs go, you could clamp the output in the good old fashioned analog way (i.e. via circuitry) so the worst case is you just fall/jump off which is already the case for the sprung ones.

jtms(3999) about 10 hours ago [-]

This is the best comment I have ever read

WhompingWindows(10000) about 1 hour ago [-]

Honestly, someone might want to hop on a railgun. Extreme pogo

starbeast(3996) about 6 hours ago [-]

Alternatively a lever system or non-circular gear arrangement could convert the force from fixed magnets into a linear response entirely mechanically.

...edited for sense

x220(3978) about 6 hours ago [-]

You don't have to think about buggy software to come up with death scenarios. What if a driver in the other lane is drunk or not paying attention? Another driver could kill you at any moment and there's nothing you could do to prevent it.

applecrazy(3748) about 9 hours ago [-]

Not only that, but you can adjust the "springiness" by having a variable spring constant.

amelius(867) about 10 hours ago [-]

They should try this with electromagnets, and make the curve linear using electronics (and perhaps find the optimum curve in terms of fun when jumping around).

TeMPOraL(3113) about 7 hours ago [-]

Wouldn't that consume a lot of electricity?

truethrowa(10000) about 8 hours ago [-]

How can they stack three magnets with the repelling side facing each other on all three magnets? Isn't one side repelling and the other attracting?

shagie(3977) about 6 hours ago [-]

You can do some interesting things with magnets. Consider the Halbach array ( ) often used for refrigerator magnets.

That said, the 3 magnet configuration would be:

NS - SN - NS

From the article:

> We added a third magnet, alternating which way the polarity faces with each magnet. Each magnet repels any adjacent magnets.

Xcelerate(1307) about 9 hours ago [-]

I've often thought it might be interesting to replace the springs in a piano's action with electromagnets so you could have control over the exact touch profile.

beefman(744) about 4 hours ago [-]

A guy named David Stanwood has a network of licensed piano techs all around the country who can replace the lead weights in the keys with magnets. One of the stated benefits is reduced key inertia.

callalex(10000) about 6 hours ago [-]

I don't think pianos typically have any springs, everything is gravity powered with counterweights. The only springiness comes from striking the strings as far as I know. Only the really cheap electronic keyboards use springs. Even halfway decent electronic pianos still rely on counterweights.

tomcam(676) about 8 hours ago [-]

Or you could pay a piano tech a couple hundred, then have nothing to worry about for the next few generations

mikepurvis(4000) about 9 hours ago [-]

I don't have a full text link, but definitely there's been some work done on at least the front end of this problem:

0ld(10000) about 8 hours ago [-]

they did freedom units math! quite a feat

v768(10000) about 5 hours ago [-]

Yep, still quite hard to read.

Sharlin(3996) about 7 hours ago [-]

> For the 4, 5 and 6 magnet stacks, the outermost gaps tended to be a bit larger than the inner gaps. Why?

> As we add more magnets to the stack, the maximum pull force alternates up and down. If 2 magnets repel with some force, 3 is a little less, but 4 is a little more than 3, but 5 is a little less than 4, etc. Why is this so?

Shouldn't these guys have figured it out pretty easily? Both seem to be readily explained by the fact that it's not just the adjacent pairs of magnets that interact. Given magnets ABCD, the A magnet is repelled by B and D but attracted by C. B is equally repelled by A and C, but attracted by D. And so on.
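That explanation can be checked with a toy calculation (assuming unit spacing and a dipole-like 1/r⁴ falloff; only the alternating signs matter for the argument):

```javascript
// Net outward force on an end magnet of an alternating N-stack: pairs at odd
// separations repel (+), pairs at even separations attract (-).
function endForce(n) {
  let f = 0;
  for (let m = 1; m < n; m++) {
    f += (m % 2 === 1 ? 1 : -1) / Math.pow(m, 4);
  }
  return f;
}
```

Each added magnet alternately contributes a small attractive or repulsive term, so the net force drops from 2 to 3 magnets, rises again from 3 to 4, and so on, matching the alternation the article puzzled over.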

hinkley(3901) about 3 hours ago [-]

Seems like they missed a trick by not arranging the stacks of more than three magnets so that some of them are paired. You'd probably get more travel, but I'm not sure how the force would be affected, and neither were they.

You could try A BC D, AB CD, AB C DE and A BCD E.

CapacitorSet(3154) about 7 hours ago [-]

Unsurprisingly, an inverse-square mechanism can't replace a linear one over a wide working range.

beefman(744) about 4 hours ago [-]

Inverse-cube for magnets

bigiain(2282) about 6 hours ago [-]

I wonder if there's a Fourier transform equivalent that'd let you stack up various inverse-square curves to approximate a linear curve?

dTal(3938) about 2 hours ago [-]

If you really wanted to do this, I reckon the way would be to have one set of magnets on the moving shaft and another set surrounding it, so that the repulsion tends to keep the shaft centered. Then you make the shaft surround taper towards the top, so that the further the pole is pushed in, the closer together the magnets are forced. You should be able to achieve any force curve you want with this setup by making the taper non-linear.

harshreality(3373) about 2 hours ago [-]

Just use current designs which, I would think, mechanically constrain the pogo stick except in one direction. Replace spring with powerful magnet (pair), and make sure there's a stop pin or something similar to keep the magnetic pogo stick from disassembling itself.

I think you're making things too difficult and expensive if you're trying to magnetically suspend and stabilize the pogo stick in multiple/all directions.

Also, magnetic radial stabilization would be stable only in a rough sense. The pogo stick handle would still wobble and be disturbingly/unexpectedly unsteady. I think for pogo sticks you'd really want a mechanically confined, piston-like design. You could still have magnetic stops on both extremes.

rrggrr(3842) about 10 hours ago [-]

Fully compressed, I believe the opposite poles' fields begin to attract each other, counteracting the matching poles repelling one another, and explaining the alternating pull force.

jaytaylor(966) about 10 hours ago [-]

This is easily solved by including a minimal height spacer between the two magnets to prevent them from getting too close.

madengr(10000) about 8 hours ago [-]

Where does the heat go? The spring will have a much larger surface area to dissipate the heat. Maybe surrounding the magnets with a copper tube would induce some eddy currents.

notacoward(3935) about 8 hours ago [-]

A lot of science museums have a display showing how magnets and copper pipes interact. In short, because of Lenz's Law, the magnet's motion induces an opposing magnetic field in the pipe ... which suggests another possible way to make a pogo stick with the right kind of resistance curve as well as addressing the heat issue.
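A minimal sketch of why that works as a damper: eddy-current drag is roughly proportional to velocity, so a magnet moving through a copper tube settles at a terminal speed instead of accelerating freely (all constants here are invented):

```javascript
// Euler-integrate a magnet dropping through a copper tube with Lenz's-law
// drag F = -c*v. The magnet approaches terminal speed v_t = m*g/c.
const g = 9.81;    // gravity (m/s^2)
const mass = 0.05; // magnet mass (kg), invented
const c = 0.5;     // drag coefficient (kg/s), invented
const dt = 0.001;  // time step (s)

function speedAfter(seconds) {
  let v = 0;
  for (let t = 0; t < seconds; t += dt) {
    v += (g - (c / mass) * v) * dt; // gravity minus eddy-current drag
  }
  return v;
}
```

For a pogo stick, the same mechanism would give a velocity-dependent resistance, and the tube itself is where the heat goes.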

dsfyu404ed(10000) about 10 hours ago [-]

TL;DR Yes but not very well. A spring made from magnets doesn't have a close to linear spring rate like a traditional coil spring does.

jldugger(3963) about 9 hours ago [-]

But is it linear, based on the data? They had two data points and projected a regression line from that sparse data.

jtms(3999) about 10 hours ago [-]

Coil spring + magnets = success?

hammock(2503) about 9 hours ago [-]

Can we make an electromagnetically sprung mattress?

J5892(10000) about 9 hours ago [-]

That would really suck in a power outage.

diabeetusman(10000) about 6 hours ago [-]

Ignoring other problems mentioned, what would the benefit of such a system be? I suppose it'd be very customizable, but probably expensive (both initial cost plus the cost of the electricity to run it).

dchichkov(3880) about 8 hours ago [-]

Seems like an attempt at replacing a large, fine-tuned, atomic-scale electromagnetically interacting system with a much simpler, less fine-tuned one.

sitkack(3929) about 7 hours ago [-]

The Universe is already running the best code for the job.

Historical Discussions: Egypt tomb: Saqqara 'one of a kind' discovery revealed (December 15, 2018: 1 points)

4,400 year old Egyptian tomb discovered in the Saqqara pyramid complex

241 points about 14 hours ago by open-source-ux in 636th position | Estimated reading time – 3 minutes | comments

Image copyright AFP/Getty Images
Image caption Journalists were allowed into the newly-discovered tomb, which experts have called 'exceptionally well-preserved'

Archaeologists in Egypt have made an exciting tomb discovery - the final resting place of a high priest, untouched for 4,400 years.

Mostafa Waziri, secretary-general of the Supreme Council of Antiquities, described the find as 'one of a kind in the last decades'.

The tomb, found in the Saqqara pyramid complex near Cairo, is filled with colourful hieroglyphs and statues of pharaohs. Decorative scenes show the owner, a royal priest named Wahtye, with his mother, wife and other relatives.

Archaeologists will start excavating the tomb on 16 December, and expect more discoveries to follow - including the owner's sarcophagus.

Here's what they've found already...

Image copyright AFP/Getty Images
Image caption The private tomb is part of a vast, ancient necropolis in Saqqara - where the earliest known Egyptian pyramids are located
Image copyright AFP/Getty Images
Image caption The tomb was found in a buried ridge, which may help explain why it escaped looters
Image copyright AFP/Getty Images
Image caption The walls of the tomb are covered in hieroglyphs, the writing system of ancient Egypt
Image copyright Reuters
Image caption The ancient Egyptians often carved sculptures into the walls of tombs and temples
Image copyright Reuters
Image caption Mustafa Abdo is the project's chief of excavation. The tomb is 10m (33 ft) long, 3m (9.8ft) wide, and a little under 3m high
Image copyright Reuters
Image caption Priests were important people in ancient Egyptian society, as pleasing the gods was a top priority
Image copyright EPA
Image caption The tomb's colours have survived unusually well for almost 4,400 years, experts said
Image copyright Reuters
Image caption The artefacts date from the Fifth Dynasty, which ruled Egypt from around 2,500 BC to 2,350 BC
Image copyright EPA
Image caption The coloured wall scenes depict the owner, a high priest, with his family
Image copyright Reuters
Image caption Archaeologists plan to explore the tomb further, and should find the priest's coffin soon

Media playback is unsupported on your device

Media captionThe 4,400-year-old tomb is filled with hieroglyphs and statues

All pictures subject to copyright

All Comments: [-]

Alex3917(408) about 12 hours ago [-]

The YouTube channel Ancient Architects is pretty good at covering the latest ancient history news:

vatueil(3738) about 11 hours ago [-]

From a cursory glance at the channel's descriptions and video titles it looks sort of sketchy, to be honest. Is that impression wrong?

> Ancient history and civilisations channel brought to you by Matt Sibson, offering alternative interpretations on the most well-known ancient sites in the world. History isn't always as it seems.

> I find that most mainstream historical interpretations are full of holes, both historically and scientifically and I intend to look at all of the evidence and offer my own unique interpretation on ancient history and the countless ancient mysteries.

dandare(3845) about 12 hours ago [-]

Shouldn't we keep a couple of these unopened for the future? Maybe open one every time we make radical progress in some measurement technique?

Is this already a thing?

Alex3917(408) about 12 hours ago [-]

Usually, but sometimes they're going to decay anyway because of shifting groundwater. C.f. this video as an example:

henrikeh(3995) about 12 hours ago [-]

I believe it is so, at least here in Denmark where any new hole in the ground is potentially an uncovered settlement.

I recently had a chance to visit the dig at "Ringborgen" near Køge. They dug only as little as possible and with a very directed effort, answering questions such as "what is the age", "did they have fireplaces", "are there signs of buildings", etc. Once they got the "data", they cover the site again and preserve it until new questions need answers (which potentially involves new methods).

In this case they had learned that the settlement found in Køge was built and deserted in a quite short time span. This raised a question with regard to other possible sites in Denmark: are there other sites with this pattern of being deserted quickly? So old sites might now be re-dug.

Sources: had an excellent tour at the site in Køge; a friend who works as archaeologist.

jbuzbee(3364) about 11 hours ago [-]

I recall there is a known 'tomb' of a ceremonial ship next to the great pyramids. They started displaying its twin back in '82 (amazing exhibit [1]), but have left the other in place.

I also suspect that the Egyptians have a stash of known tombs that they periodically 'discover' in order to keep their tourism market in the news


JorgeGT(3981) about 7 hours ago [-]

The inner part of the Mausoleum of the First Qin Emperor, said to contain rivers of mercury depicting the rivers of China, etc., is still unopened:

sizzzzlerz(3952) about 11 hours ago [-]

In the desert southwest of the US, there are thousands of Anasazi sites that haven't been excavated. Archaeologists have intentionally left them alone so that future scientists may be able to open fresh sites using new techniques and equipment. The problem is that these places become susceptible to pot hunters because they usually aren't protected. Once they've desecrated a site, it becomes worthless to science.

EGreg(1712) about 4 hours ago [-]

How do they possibly know the age of this tomb with certainty?

geuis(826) about 2 hours ago [-]

Often it's a combination of radiocarbon dating and historical records. Though probably not in this instance, there are sometimes records found in other places that say this or that person lived at the same time as another, so that can help to date things like this. Radiocarbon dating is highly accurate and 4400 years isn't that long ago, so it's generally accurate at dating organic remains to within a few decades. It gets less accurate over longer periods of time, but 4400 years is well within that window.

muratgozel(10000) about 13 hours ago [-]

I felt like I was looking at the Google Drive of an ancient man. I would really like to read about the meanings of the objects found on those walls.

wl(10000) about 10 hours ago [-]

Egyptian funerary practice is pretty formulaic. A quick glance at the inscriptions, and I spot some traditional offering formulae. Here's a translation from the register on the right in the picture with the label beginning 'Priests were important people':

'An offering the king gives to Osiris, foremost of the westerners, lord of Djedu, a voice offering [of bread and beer] for him at every festival and every day...'

and then it cuts off. I'd expect the inscription to continue stating that these offerings are for the spirit of the tomb's owner, along with a list of some of his offices. I spot similar offerings to Anubis. I spot the epithet 'venerable', indicating that the tomb owner had a funerary cult.

xaedes(3686) about 13 hours ago [-]

It really looks in good shape (and color)!

Regarding the meaning of objects found, I want to share this humorous story in which future archaeologists find a motel room and believe they have found a great tomb similar to that of Tutankhamun:

'Motel of the Mysteries'

onetimemanytime(3492) about 10 hours ago [-]

4400 years ago and we are rightfully in awe. What a way to leave a history.

Will my Geocities page be there 4400 years from now ? ;)

geuis(826) about 2 hours ago [-]

Doubtful your geocities page is around even now unless you backed it up or it got slurped by neocities.

limeblack(4000) about 12 hours ago [-]

In college and high school I was taught that, years ago, they used radar to find what was thought to be all of the remaining tombs. Anyone know why this one hadn't been found yet?

ronald_raygun(3871) about 5 hours ago [-]

So the technology has improved quite a bit since then; now they are using muon scans! But I think they might have just been referring to hidden chambers in the places we already know about.

gammateam(10000) about 11 hours ago [-]

My experience with "found" in Egypt is that it means "hasn't been completely sacked by raiders 200 years ago".

I would love to be proven wrong this time

realthing(10000) about 12 hours ago [-]

Edgar Cayce made this prediction many years before radar technology was even invented. To my mind, ancient Egypt and the culture of its people still largely remain a mystery.

Hoasi(3964) about 13 hours ago [-]

Letting a horde of journalists rush to photograph the tomb at once seems like a terrible idea. That goes against the most basic precaution you could take to protect archeological artifacts. What kind of professional archeologist would let that happen?

canada_dry(3944) about 12 hours ago [-]

I don't think I'm being overly dramatic to say that the antiquities governance in Egypt is a bloated, top-heavy and mostly useless bureaucracy which, in the end, has only a cursory interest in archeology, aside from keeping the money rolling in.

It's all about giving high paying, long term jobs to political allies and family members and less about preserving one of the most fascinating cultures the world has ever seen.

eponeponepon(10000) about 13 hours ago [-]

These things can be done in an organised and controlled fashion - one has to hope that those in charge did so. There's no particular reason to assume that 'a horde' of journalists were allowed to rush in, though.

Fundamentally it's a judgement call for the archaeologists as to whether the tomb is in good enough condition and the photographers are trustworthy enough.

dominotw(1856) about 10 hours ago [-]

> Letting a horde of journalists rush to photograph the tomb at once seems like a terrible idea.

It's a good idea if your tourism industry is feeling the slump from terrorist bombings, revolutions and political instability. Any publicity to revive tourism is good.

adawoud(10000) about 11 hours ago [-]

As an Egyptian, I can assure you nobody in our entire government body knows what they're doing.

tomcam(676) about 13 hours ago [-]

We're not dealing with the British Museum here. Money changed hands through informal channels, though no journalists will publicly acknowledge what happens. The Egyptian government has grievously underfunded antiquities for years, and this has not been a stable system like the West is accustomed to at any time in modern history.

German exclaves in Belgium separated by a bicycle path from the rest of Germany

230 points about 14 hours ago by rwmj in 999th position | Estimated reading time – 5 minutes | comments

The exclaves on Google Maps

These enclaves/exclaves are located along the border of the European countries of Belgium and Germany, just south of the city of Aachen. Here is an example of what these borders actually mean at some locations:

Pretty surreal, isn't it? If you would like to know more about why these strange borders were created, read their history below.

What are enclaves and exclaves?

First of all, let's clarify what exactly enclaves and exclaves are as well as the difference between them.

The explanation from Wikipedia describes them quite well:

An enclave is a territory that is completely surrounded by the territory of one other state.

An exclave is a part of a state or territory geographically separated from the main part by surrounding alien territory.


So following the definitions, these German territories are exclaves of Germany and enclaves of Belgium.

Why do these particular exclaves exist?

The reason for their existence is to be found in the history books.

The area was annexed by the Kingdom of Prussia in 1822 and it became part of the Rhine Province. [1]

An Act in 1882 stipulated the construction of a railway to help integrate the borderland better into the newly unified German state. The Germans opened the line from Aachen to Monschau in 1885.

Soon it was expanded south, reaching the town of Ulflingen (today Troisvierges, Luxembourg). This link was quite important because of industrial interests. Major iron ore deposits were discovered in Luxembourg and Lorraine at the time and the need for a transport corridor to them was inevitable. [2]

Map of the Vennbahn

The transport of coal began southbound from Germany to fuel the booming steel industry in the region, which in turn supplied the raw-material needs of the Ruhr. The entire line soon became known as the Vennbahn.

After the war

The German defeat in World War I and the provisions of the Treaty of Versailles completely transformed the political map of the region. The treaty moved the border eastward by as much as 20 kilometres, and most of these former German territories became part of Belgium.

However, the question of the Vennbahn was quite complex. Even though some of the lands west of the line were to remain part of Germany, Belgium claimed sovereignty over the trackbed and the stations. The Belgians argued that the railway was a vital communication route for their new eastern territory given to them by the treaty.

The commission which was set up to decide the matter agreed and the Raeren-Kalterherberg section of the Vennbahn was ceded to Belgium in 1921. [2] It took a further year to finalize the details that left five German exclaves – called Munsterbildchen, Rötgener Wald, Rückschlag, Mützenich and Ruitzhof – west of the railway.

The line remained in service until 2001 before it was dismantled in 2007-08.

The five enclaves today

(from north to south) [3]

Munsterbildchen Area: 182 hectares. Population: 50.

Rötgener Wald Area: 998 hectares. Population: 1000.

Rückschlag Area: 1.6 hectares. Population: 4. Smallest of the five, this exclave/enclave is literally just a house and a garden. [4]

Mützenich Area: 1211 hectares. Population: 2237.

Ruitzhof Area: 93 hectares. Population: 70.

There is no border control due to the Schengen Agreement, so daily life is not really impacted. Recently, there has been speculation about whether Belgium would hand the metres-wide path back to Germany; however, both countries have denied this.

The Belgian Government rehabilitated the Vennbahn route after the dismantling of the track by creating a scenic bicycle trail. If you are visiting the area it is highly recommended to cycle along it. You can find useful information regarding the bike path here:






5: All five screenshots of the exclaves were created with the Keene University Polyline Tool

Found this article interesting? Give it a share so more people can enjoy it!

All Comments: [-]

k__(3120) about 11 hours ago [-]

What are the reasons such things aren't resolved?

lucideer(3947) about 11 hours ago [-]

An absence of good reasons to?

lagadu(10000) about 10 hours ago [-]

What's there to resolve if it works perfectly fine as it is?

Faaak(10000) about 13 hours ago [-]

> There is no border control due to the Schengen Agreement, therefore daily life is not really impacted.

That's the most interesting part to me. I cross the border every day to go to work. Yet, when a friend came to visit us and we went to the city, he couldn't believe that we were crossing a border that easily (France-Switzerland). It was literally an open fence with a sign welcoming you.

It's really awesome when you think about it

mikeash(3584) about 13 hours ago [-]

It's interesting and sad to contrast it with the India-Bangladesh enclaves, where there were great hardships due to the lack of such international cooperation.

ceejayoz(2028) about 12 hours ago [-]

> It was literally an open fence with a sign welcoming you.

I was once on the TGV from Paris to Switzerland and passport control consisted of the conductor coming into the car and asking if anyone wasn't allowed in Switzerland.

flurdy(3982) about 5 hours ago [-]

I lived for a while inside Schengen and enjoyed the near-total frictionless border crossings by train, air travel, etc.

When I later moved back to the UK, I had a faint hope the UK would one day see sense and join Schengen as well, but I am pretty sure that boat has now sailed...

FigBug(3708) about 12 hours ago [-]

This is what I was expecting when I drove between Switzerland and Italy in 2016. Except I was stopped, told to park, had my passport examined with a loupe, and then was sent on my way. Are the borders back to normal now?

OscarCunningham(10000) about 12 hours ago [-]

People in the U.S. have forgotten the meaning of the word 'state'.

ThinkingGuy(10000) about 9 hours ago [-]

The Schengen Agreement was only implemented relatively recently (80s/90s?). How were border controls near these enclaves/exclaves handled in all the years before that?

coliveira(3180) about 10 hours ago [-]

This also happens in the border of Germany and Switzerland. A few places are part of Germany but inside Swiss territory.

mtmail(3644) about 6 hours ago [-]

Büsingen is one. People pay with Swiss Franc, houses have two postcodes, the local soccer club plays in the Swiss league. 'Vehicles with BÜS licence plates are treated as Swiss vehicles for customs purposes.'

ilamont(86) about 11 hours ago [-]

There are some weird border situations up on US/Canada border, including:

* Library in Vermont and Quebec straddling the border:

* Canadian train that crosses northern Maine and used to have U.S. border guards board it (can't find the source, but I saw this on a TV show some years ago)

* Thousand Islands border between Ontario and NY State, which can be easily swum in the summer and freezes over in the winter (

* Akwesasne Mohawk reservation further east that straddles the NY/Ontario/Quebec border but tribal members do not recognize (

* Northwest Angle attached to Manitoba, but technically part of Minnesota:

flyinghamster(3985) 24 minutes ago [-]

Add Hyder, Alaska, accessible by road only from Canada. It also gets its telephone service and utilities from the Canadian side.

jwr(3581) about 10 hours ago [-]

This is a good time to consider what a ridiculous concept 'borders' are, especially within the EU. What makes land and people on one side of an arbitrary line so different from land and people on the other side of it?

Borders are necessary because of power-hungry politicians: without borders, entire levels of governments would become unnecessary. This is why we hear so much about 'patriotism' these days.

tim333(1485) about 8 hours ago [-]

It goes back millions of years with territorial animals, but hopefully we are gradually getting over it, with the odd glitch like Brexit en route.

badpun(10000) about 10 hours ago [-]

> What makes land and people on one side of an arbitrary line so different from land and people on the other side of it?

Cultures? People on the two sides of a border generally have different cultures and hierarchies of values. Thanks to borders, every nation gets to organize their piece of earth as they see fit (at least in democracies).

coloradoKid(10000) about 10 hours ago [-]

If you want to have government provided services funded by taxes, you need to determine who will be paying those taxes. Borders provide a clean division between the paying customers for a particular government's services, and those who are not.

johannes1234321(3868) about 8 hours ago [-]

There is a need for rule setting, from rules about capital crimes down to the local organisation of roads or house construction. If we rule out anarchy, this needs some form of governing body, and the exact form of those bodies has cultural, geographic, population, ... differences which need to be respected. There is no global agreement on many of those things: how different crimes are valued and punished, whether a high-density road network is needed or more public infrastructure. Doing those things in smaller units has benefits, and for that you need borders to define which rules are valid where. Of course such borders are to a large degree virtual (i.e. you often don't see the exact city borders) and not borders with guards and fences.

On top of that then come all sorts of historic, cultural, ... reasons to 'protect' each other which have all complex reasoning. Power-hungry politicians are only a quite limited reasoning, as many borders are supported by societies (by far not all, though)

IAmEveryone(3815) about 11 hours ago [-]

Which reminds me of the rather young saying: "No fences make for excellent neighbors."

As to the bike path: that photo made my legs itch. Old train tracks are an excellent resource for bike paths, at least for recreational riding. They are flat, because trains are unable to climb anything more than 2% or so. Plus no cars, perfect width, and (because they are established rather recently) often perfectly smooth pavement.

I've done a thousand km or so in Spain and across the Alps. It's almost as breathtaking as crossing the mountain passes, if you allow me to mix the literal and figurative meanings of the word.

stevoski(3940) about 10 hours ago [-]

Can you tell us which cycle paths in Spain are former rail trails? I live in Spain and would love to experience some of these.

tormeh(3037) about 11 hours ago [-]

My favorite is Lake Constance. Switzerland and Austria disagree on where exactly the borders are, but both think Germany holds a part of it. Germany has no stated position on the subject. It has to be the world's most relaxed border dispute.

danbruc(3991) about 6 hours ago [-]

Short (2:31) video [1] on this by Tom Scott.


dividuum(3752) about 12 hours ago [-]

Odd borders are funny. Here are two other examples: the utter chaos in Baarle. They even have nested enclaves: an area of the Netherlands surrounded by Belgium, surrounded by the Netherlands.

Between Germany and Switzerland, there's the German exclave Büsingen am Hochrhein. They have odd tax rules as Swiss VAT is applied.

j1vms(3819) about 5 hours ago [-]

Another interesting one is Saint Pierre and Miquelon (France), fully contained within the eastern maritime borders of Canada. (Edit: it is within Canada's exclusive economic zone, which is international waters, not sovereign territory.)



gammateam(10000) about 11 hours ago [-]

The German Wikipedia entry for Büsingen is MUCH more detailed.

This really frustrates me about Wikipedia, because it's like a totally parallel site and experience that's just right there, and it can even have conflicting information. It's not helping the world as much as we think, but it's so close to being able to.

lisper(125) about 9 hours ago [-]

My favorite is the border between India and Bangladesh:

You have to zoom in to fully appreciate it. It looks like a fractal.

Historical Discussions: Show HN: Minimal Google Analytics Snippet (December 13, 2018: 170 points)

Show HN: Minimal Google Analytics Snippet

170 points 3 days ago by davidkuennen in 3844th position | Estimated reading time – 9 minutes | comments


(function(a,b,c){var d=a.history,e=document,f=navigator||{},g=localStorage,
h=encodeURIComponent,i=d.pushState,k=function(){return Math.random().toString(36)},
l=function(){return g.cid||(g.cid=k()),g.cid},m=function(r){var s=[];for(var t in r)
r.hasOwnProperty(t)&&void 0!==r[t]&&s.push(h(t)+'='+h(r[t]));return s.join('&')},
n=function(r,s,t,u,v,w,x){var z='',
A=m({v:'1',ds:'web',aip:c.anonymizeIp?1:void 0,tid:b,cid:l(),t:r||'pageview',
sd:c.colorDepth&&screen.colorDepth?screen.colorDepth+'-bits':void 0,dr:e.referrer||
void 0,dt:e.title,dl:e.location.origin+e.location.pathname+,ul:c.language?
(f.language||'').toLowerCase():void 0,de:c.characterSet?e.characterSet:void 0,
sr:c.screenSize?(a.screen||{}).width+'x'+(a.screen||{}).height:void 0,vp:c.screenSize&&
a.visualViewport?(a.visualViewport||{}).width+'x'+(a.visualViewport||{}).height:void 0,
ec:s||void 0,ea:t||void 0,el:u||void 0,ev:v||void 0,exd:w||void 0,exf:'undefined'!=typeof x&&
!1==!!x?0:void 0});if(f.sendBeacon)f.sendBeacon(z,A);else{var y=new XMLHttpRequest;'POST',z,!0),y.send(A)}};d.pushState=function(r){return'function'==typeof d.onpushstate&&
d.onpushstate({state:r}),setTimeout(n,c.delay||10),i.apply(d,arguments)},n(),{trackEvent:function o(r,s,t,u){return n('event',r,s,t,u)},
trackException:function q(r,s){return n('exception',null,null,null,null,r,s)}})



Setup: Just put the snippet into your html, replace 'XX-XXXXXXXXX-X' with your tracking id and you're ready to go. You can also add options for what information you want to track.
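For readers who don't want to parse the minified code, here is a readable sketch of its core: it builds a Google Analytics Measurement Protocol payload by URL-encoding every defined parameter (the snippet's `m` helper), then sends it with `sendBeacon` or an XHR. The parameter names follow the Measurement Protocol; the values below are placeholders:

```javascript
// Mirror of the snippet's `m` function: drop undefined values, URL-encode
// keys and values, join with '&'.
function serialize(params) {
  return Object.keys(params)
    .filter((k) => params[k] !== undefined)
    .map((k) => encodeURIComponent(k) + '=' + encodeURIComponent(params[k]))
    .join('&');
}

// Example pageview payload; tid is your tracking id, cid the stored client id.
const payload = serialize({
  v: '1', ds: 'web', tid: 'XX-XXXXXXXXX-X',
  cid: 'abc123', t: 'pageview', dt: 'My Page',
  dr: undefined, // skipped, like the snippet's `void 0` fields
});
// payload: 'v=1&ds=web&tid=XX-XXXXXXXXX-X&cid=abc123&t=pageview&dt=My%20Page'
```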

All Comments: [-]

ieq8(10000) 3 days ago [-]

could you create github repo?

davidkuennen(3844) 3 days ago [-]

Sure! Will do so as soon as I'm home.

davidkuennen(3844) 2 days ago [-]

Have now added the original snippet code as a GitHub gist to the website.

largehotcoffee(3995) 3 days ago [-]

Can you include this?

ga('set', 'anonymizeIp', true);

davidkuennen(3844) 3 days ago [-]

Will add an option for that as soon as I'm home. Also got many feature requests for exposing an event function. Both of these things shouldn't add any significant size to the snippet.

nwellnhof(4002) 3 days ago [-]

Should be as simple as adding `aip:1` to the parameter list (the `var j={...}` part).

OnlyRepliesToBS(10000) 2 days ago [-]

'the smallest ever!'

'can you make it bigger'


davidkuennen(3844) 2 days ago [-]

You can now set an option to anonymize IPs. It also defaults to true.

jrockway(3213) 3 days ago [-]

Aren't the tag manager and analytics libraries likely to be cached, thus using zero bytes on average versus the non-zero number of bytes for this library?

rovr138(10000) 3 days ago [-]

A quick check: GTM is only cached for 15 minutes.

If users spend a bit of time navigating your site, you might get better caching by adding this to your JS file and caching for longer.

FoxInBoxers(10000) 3 days ago [-]

Very cool, thank you!

Would you expect Google's API to change often? My biggest concern with something like this is their /collect endpoint changing and analytics data stops being tracked.

davidkuennen(3844) 3 days ago [-]

It's officially documented here:

So I think they won't change it, or they'll just add a new version with the old one still working.

zyx321(10000) 3 days ago [-]

First impulse: 'That sounds like a great way to get your GA account banned for using their API in undocumented ways.'

sudhirj(3532) 3 days ago [-]

GA has had an open tracking API for a while, it's what powers tracking inside mobile apps, custom events, server side tracking etc.

davidkuennen(3844) 3 days ago [-]

They have it documented here:

So I think it's a totally legit way to use GA, since you don't always have the ability to add libraries. For example if you want to track GA server side.

gobengo(3988) 3 days ago [-]

It's cool that it also works with SPA frameworks. Thanks for sharing.
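The SPA support works because the snippet wraps `history.pushState`, so client-side route changes also fire a hit. A simplified sketch of that pattern (the function names here are illustrative, not the snippet's own):

```javascript
// Wrap history.pushState so every client-side navigation invokes a callback
// in addition to the original pushState behavior.
function hookPushState(history, onNavigate) {
  const original = history.pushState;
  history.pushState = function () {
    const result = original.apply(history, arguments);
    onNavigate(); // e.g. send a pageview hit for the new URL
    return result;
  };
}
```

The real snippet also defers the hit with a small `setTimeout` so the new URL and title are in place before it reads them.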

In other surveillance news, I've been adding Matomo[1] for web analytics to my site for the first time. Used to be called Piwik? It's free/self-hostable web analytics, which is more than enough for small projects and landing pages.

I've been pleasantly surprised with how it works so far, but I had to write some code to trigger it in the right part of my react-router initialization.


(and it's cool that I'm not helping Google spy on people who visit and help me out)

masa331(10000) 3 days ago [-]

I can recommend for Rails

siddhant(2422) 3 days ago [-]

For the surveillance conscious I can also recommend using Fathom ( I've been using it for a few weeks now and have only good things to say.

napsterbr(2912) 3 days ago [-]

Piwik (matomo) is amazing!

denormalfloat(3952) 3 days ago [-]

Small piece of history: I used to work on Analytics and they would give out swag. One piece was a T-shirt with the entire tracking snippet on it. It was a great conversation starter.

Sad to see that Tag Manager and the current tracking JS are big enough to force website owners to compromise.

louismerlin(3933) 3 days ago [-]

They are also probably too big to put on a t-shirt!

ApolloRising(3982) 3 days ago [-]

FYI: on your site you use 'snipped'; you may want to use the word 'snippet' instead.

davidkuennen(3844) 3 days ago [-]

Fixed it. Thank you.

RodgerTheGreat(3613) 3 days ago [-]

While you're at it, why not be less invasive and neuter some of the data you send across? Does Google need to know about screen colordepth, language settings and text encodings, or fine-grained viewport dimensions, or is that stuff just bits of entropy to fingerprint users? Hardcode it!

davidkuennen(3844) 2 days ago [-]

Added options for the information you want to track.

latchkey(3297) 3 days ago [-]

Could you make it a tiny bit smaller by aliasing `window` and `document` and `localStorage` to remove the duplicated calls? `var w=window;var d=document;var l=localStorage;`

Interestingly, copying/pasting the code into gains 69b of savings, but shows an error: `Invalid regular expression: /+/: Nothing to repeat (line: 3, col: 48)`

s_y_n_t_a_x(10000) 3 days ago [-]

shaves off 9% w/ no errors
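The aliasing suggestion helps because a minifier can rename a local variable to a single letter, but must leave global names like `window` and `localStorage` untouched. A sketch of the idea, written as a plain parameterized function (not the snippet's actual structure) so the effect is easy to see:

```javascript
// Pass the globals in once: inside the function, w/d/ls can be minified to
// single letters, whereas each literal `window`/`document`/`localStorage`
// reference would survive minification at full length.
function track(w, d, ls) {
  // Reuse a cached client id, as the snippet does with localStorage.cid.
  const cid = ls.cid || (ls.cid = Math.random().toString(36).slice(2));
  return { cid: cid, title: d.title, width: w.innerWidth };
}
```

In the browser you would call it as `track(window, document, localStorage)`.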

moviuro(2007) 3 days ago [-]

Wouldn't gzip encoding (apache/nginx/whatever is used today) compress that anyway?

franciscop(1746) 3 days ago [-]

I get a SyntaxError when embedding it:

> SyntaxError: nothing to repeat

The line seems to be this:

> return l?l[2]?decodeURIComponent(l[2].replace(/+/g,' ')):void 0:void 0},

Might it be related to AdBlock? I'm using Firefox with AdBlockPlus, Cookiebro and the default Privacy settings turned up in Firefox.

Edit: oops, it's clearly coming from the regex: /+/g → /\+/g
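For context on the error: `+` is a repetition quantifier in regular expressions, so a bare `/+/` has "nothing to repeat" and is a syntax error; escaping it matches a literal plus sign. That line is the usual way to decode form-encoded query values, where `+` stands for a space:

```javascript
// Decode an application/x-www-form-urlencoded value: '+' means space,
// and percent-escapes are handled by decodeURIComponent.
const decodeQueryValue = (s) => decodeURIComponent(s.replace(/\+/g, ' '));

decodeQueryValue('hello+world%21'); // 'hello world!'
```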

davidkuennen(3844) 3 days ago [-]

Thanks for sharing! Will look into it.

davidkuennen(3844) 3 days ago [-]

Created this because I hated having to slow down my websites by adding the gigantic Google Tag Manager and Analytics libraries, combined with their randomly slow CDN, just to track some basic page views.

C1sc0cat(10000) 3 days ago [-]

Compared to all the other stuff that gets put on websites these days, most sites won't notice.

jopsen(10000) 3 days ago [-]

I'm not an expert, just curious... But aren't these files cached on most users computers?

onion2k(2143) 3 days ago [-]

Both of Google's recommended methods of adding Analytics to a website load Google's JS library asynchronously[1]. Including it has no meaningful impact on a site's time to first paint or interactivity.

There's a good argument that loading 70Kb of JS you're not using is a bad idea, and if that's the case then your smaller script is an improvement. But there's also a good argument that Google Analytics is powerful and useful, and maybe you should be using the features those 70Kb give you to improve your site in ways the user would notice. I guess it comes down to your specific use case.


nwellnhof(4002) 3 days ago [-]

The snippet seems to circumvent the Google Analytics Opt-out Browser Add-on which many sites mention as a way to disable GA tracking in their privacy policy. So if you use the snippet, don't tell your users that they can opt out with the add-on.

davidkuennen(3844) 3 days ago [-]

Are you sure it circumvents it? My uBlock Origin is still properly blocking the API call to google analytics. But since the Add-on is from Google itself it could work in different ways indeed.

pspeter3(3109) 3 days ago [-]

Is there a reason you are not using the beacon API for this work?

davidkuennen(3844) 3 days ago [-]

Didn't know about it until now. Will look into it, thanks!
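For reference, the snippet shown above does already prefer `navigator.sendBeacon` when available (it queues the request so it survives page unload) and falls back to `XMLHttpRequest`. The same pattern, unminified, with the browser objects passed as parameters purely so the sketch can run outside a browser:

```javascript
// Prefer the Beacon API; fall back to an async XHR POST.
// `nav` and `XhrCtor` are injected here only to make the sketch testable;
// in a page you would use navigator and XMLHttpRequest directly.
function send(url, data, nav, XhrCtor) {
  if (nav.sendBeacon) return nav.sendBeacon(url, data);
  const xhr = new XhrCtor();
  xhr.open('POST', url, true); // async POST, fire-and-forget
  xhr.send(data);
  return true;
}
```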

Historical Discussions: The original Macintosh user manual (December 11, 2018: 4 points)

Thoughts on, and pictures of, the original Macintosh User Manual

149 points about 9 hours ago by tosh in 16th position | Estimated reading time – 3 minutes | comments

I recently purchased an original Macintosh User Manual (thanks eBay!). I had seen one at a garage sale, and was struck by how it had to explain a total paradigm shift in interacting with computers. I figured I could learn something about helping make innovation happen.

It's been an intriguing read. It's a remarkably handsome manual, beautifully typeset, which, considering par for the course at the time was probably Courier with few illustrations, is saying something.

Also, even back in 1984, there was no definite article. You get phrases like "With Macintosh, you're in charge." No "the"s or "a"s.

One of the more striking things was how every Chapter is introduced with a full-color photo of Macintosh being used. Here they are (click on them to see bigger sizes):


The first thing I appreciated was how Macintosh is set within somewhat normal (and quite varied) contexts of use.

Then I noticed that, with the exception of Chapter 5, every photo shows a preppy white male using the computer. Women and people of color need not apply! (The dude in Chapter 4 even has a *sweater* around his shoulders!!!)

And Chapter 5 exudes preppiness with the glass brick backdrop.

Also, why is the keyboard in Chapter 3 positioned like that? Why on earth was it posed that way?


The thing you'll notice in Chapter 6 (and maybe you saw it in the Appendix) was the infamous Mac carrying case. There's a page about it, which I photographed:

Carrying Case – On The Go!

The introduction of the manual greets you with this image:

Introduction Dig that reflection! Apple returned to the reflection as a visual element a few years ago...

Some of the best stuff, of course, is explaining how the thing works.

Clicking and Dragging (pretty straightforward)

My favorite is scrolling. I can imagine the discussion: "Well, it's called a scroll bar... I know, let's use a drawing of a scroll!" Yes. Because people in the mid-80s were all about scrolls...

And, hey, Where Does Your Information Go?

You'll probably want to click for details

Oh! That's where that metaphor comes from...

And perhaps the strangest sentence: "The Finder is like a central hallway in the Macintosh house."

(And the disk is a... guest? Someone looking for the bathroom?)

It's been surprisingly delightful flipping through this little bit of computer history. The pace, and deliberateness, with which the system and its interface are explained are quite impressive.

All Comments: [-]

malvosenior(2748) about 9 hours ago [-]

Doesn't hold a candle to the VIC-20 user manual:'s...

TheOtherHobbes(3996) about 7 hours ago [-]

'3583 BYTES FREE' :)

andai(3982) about 8 hours ago [-]

This is neat! Doesn't just teach you how to use the computer, but it's a complete introduction to programming. (At the time, I suppose, there was not much difference :)

drfuchs(3811) about 8 hours ago [-]

The original Macintosh also came with a cassette tape in the box, with audio lessons on how to use the 'mouse' to move the 'cursor' around on the screen, and how to 'click' and 'double-click' it to make a 'selection' -- new terms and new concepts for the vast majority of early users. And, of course, everybody had a cassette player.

GeekyBear(10000) about 5 hours ago [-]

The cassette provided the audio track for the on screen tutorial that ran at the same time.

There was no way you were going to fit an audio track onto a 400 kilobyte floppy disk along with the demo software.

My favorite bit was the randomized maze generator that you used to develop your fine grained mousing skills.

westoncb(2759) about 6 hours ago [-]

I remember my dad telling me about how when he was in grad school in the mid-late eighties—for cognitive psychology with plans to go into Human Computer Interaction—they needed to give mouse training and proficiency tests to folks before they could participate in experiments.

Hearing that struck me as so strange since I have no recollection of ever learning to use a mouse—I had always assumed it was just completely intuitive.

open-source-ux(636) about 6 hours ago [-]

In a similar vein, when Microsoft released Windows 3.1 in 1992 (much later than the 1984 Macintosh) they also included lessons on moving a mouse cursor on screen and double-clicking. However, they turned their lessons into a simple but effective interactive tutorial. Here's a screen recording of the entire tutorial - it still holds up as quite informative and well put-together:

foobiekr(10000) about 8 hours ago [-]

this is wonderful. it's especially interesting how dramatically different Apple's idea of what a computer user is and wants to do was from IBM's. They are coming at it from two fundamentally different mindsets.

a full PDF can be found here:

userbinator(852) about 6 hours ago [-]

Ironic that one of the sentences in the introduction is 'With Macintosh, you're in charge.' and there's plenty of references to 'your Macintosh'. Ditto for the IBM PC/AT manual I linked to in another comment here, although the latter explicitly mentions near the very beginning that it has built-in BASIC (and the manual for that is also supplied) while the Mac manual never mentions programming until the very end where it briefly references the Programmer's Switch and warns users not to use it.

Apple makes it difficult from the beginning to do anything other than use the computer as an appliance, while IBM seemed to be the exact opposite.

zydeco(10000) about 8 hours ago [-]

Is it just me, or do the pictures on chapter 3 and 4 look like prototypes with a 5.25" drive?

fzzzy(10000) about 1 hour ago [-]

Those particular pictures don't look like that to me, but some prototype Macintosh machines had a Twiggy drive. There's a great story about how the mac designers had to hide their negotiations with Sony from Steve Jobs so he wouldn't get mad on

Animats(1907) about 8 hours ago [-]

Pretty cool how they got the computers to work w/o plugging them in.

That was a Jobs thing. For years, Mac ads didn't show cables. Then came the iDweeb earbuds.

briandear(1843) about 2 hours ago [-]

What is "iDweeb?"

jiveturkey(3968) about 7 hours ago [-]

it likely predated jobs by a long shot. i don't know how far back it goes, but all ads for electrical appliances (even lamps) don't show cords. cords are ugly.

even ads for other things, say a desk, don't show cords of the computer or lamp on the desk.

userbinator(852) about 9 hours ago [-]

Compare the IBM PC/AT 'Guide to Operations' of the same time:

No colour, and of course a CLI doesn't need a lesson in how the UI works, but plenty of technical information.

kerouanton(3918) about 3 hours ago [-]

It's striking to see that there is a full section of troubleshooting and test procedures, over 250 pages, in the PC manual, as if it was expected to fail by default.

Apple did the opposite, user-centered approach meaning it should work by default.

Opposite cultures, indeed.

acqq(1144) about 8 hours ago [-]

'The scroll lock light comes on when you press the Scroll Lock key or when your program is in the scroll lock mode. The scroll lock light goes off when you press the Scroll lock again, or your program is no longer in the scroll lock mode.'

'The System (Sys) key has its functions defined in your operating system or application program manual.'

Heh, it's like it's written by my co-workers that simply can't do anything user friendly but like to make totally redundant 'documentation.' E.g.:

'The option --result-dir specifies the result directory'

'The function int getSize( int frob ) returns the size of the frob. The parameter is frob.' (God forbid they inform you what the allowable range of input or output is, in which units they measure the size, what the frob in their program means, or what is actually measured or counted, and they never include an example of using anything.)

jasomill(10000) about 1 hour ago [-]

And compare that to the corresponding IBM PCjr 'Guide to Operations',

which combines the conventional IBM-style step-by-step troubleshooting guide with an extensive keyboard tutorial featuring full-color, cartoon-style artwork on nearly every page, illustrating...the fact that someone at IBM thought a bland and largely uninteresting tutorial could be made more approachable by adding full-color, cartoon-style artwork to nearly every page.

That, or else there's some connection I'm missing between the Ctrl key, say, and towing childhood pets around in an improbably stable two-wheeled trailer behind one's tricycle...

duxup(10000) about 7 hours ago [-]

It's interesting how that computer 'fits' on a desk very nicely space wise.

I have a huge Ikea desk and some monitors. I never feel like it all 'fits'. I won't give up my big monitors... at the same time it irks me that it never feels it all fits on my desk.

kalleboo(3740) about 1 hour ago [-]

I got monitor arms to lift mine off the desk. Really freed up space and I could never go back now.

sverige(2036) about 6 hours ago [-]

The basic nature of the instructions reminded me of talking to a guy I met in Charlotte who was mad about how young people today don't really know why editing text or photos is called 'cut and paste.' He was a professional photographer who took many iconic photos of stock car races from the 50s through the 80s or 90s, and was good friends with (for example) Bill France of NASCAR.

Anyway, he published a magazine (or maybe more than one, I can't remember) with photos and stories about racing. They had to be literally cut and pasted onto boards before printing. It was genuinely irritating to him that people could simply point and click at a scissors icon to edit their layout and still call themselves 'editors.'

Of course, I listened with great interest, since I'm still irritated that CLI isn't the standard any more.

foobar1962(10000) about 4 hours ago [-]

> They had to be literally cut and pasted onto boards before printing.

Well resourced offices had a machine that you'd feed the photos into to put a thin layer of wax on the back. The wax remained tacky (like post-it note glue) to hold the photos and text elements in place but let them be easy to reposition.

gumby(3301) about 8 hours ago [-]

What a great article!

I liked the commentary about preppy white guys. I think this was deliberate (perhaps subconsciously so), but not for reasons why it might happen today.

First, back in the 80s there was still a big issue as to whether execs would use computers. Most could not type; that was at that time considered a female activity. Sounds absurd, but this was literally a topic not just in newspapers talking about computers but also in computer journals wondering if PCs could break into business. And most execs were (and still are :-( ) male.

Second, the Mac was quite a bit more expensive, really at the edge of what an individual could afford (Apple offered a finance plan, if not at launch then soon after). So they were making an affinity pitch to people who could afford it.

> Also, why is the keyboard in Chapter 3 positioned like that? Why on earth was it posed that way?

I suspect it was to show you could use the mouse and not be intimidated by that scary keyboard thing.

I was also struck by the Chapter 3 photo as it seemed to be the only one that could have been shot today (except for the Mac itself of course). All the others had hairstyles, color palette, and/or artifacts (desk phone, tape dispenser) that you'd never see today. Even the final shot of the Stanford Campus has bikes that look old fashioned.

OpenBSD-empire(10000) 1 minute ago [-]

>And most execs were (and still are :-( ) male.

... and there is nothing wrong with that, unless you are a sexist.

User23(10000) about 2 hours ago [-]

> I liked the commentary about preppy white guys.

I'm just going to eat the downvotes, but surely I'm not the only one who's tired of the groundbreaking too many white people commentary on everything? It really doesn't improve the conversation in any way, on any subject.

Edit: less than a second for the first one. Is there a bot?

pvg(4003) about 5 hours ago [-]

not be intimidated by that scary keyboard thing.

I think it's more about showing you can use the computer with just the mouse. They went to great lengths to avoid having the series-of-menu-selections/commands-by-keystroke-and-arrow-keys type text mode UIs (which were the norm on other personal computers) replicated on Macs. The original Mac keyboard didn't even have arrow keys.

bsenftner(3978) about 7 hours ago [-]

I was a beta tester for the original Mac, as well as in the first Mac Professional Developer Program, introduced at Harvard University as a summer session in '83. I still have my mimeographed, partially typeset, partially typed version of Inside Macintosh. It is filled with handwritten notes by the original developers, and was copied directly from their copies as they worked to complete the beta version that summer. I had it appraised around the time of Jobs's death, and the response was that it's priceless and needs to be in a museum. It's still in an air-sealed box at the moment.

twoodfin(3511) about 5 hours ago [-]

Wow! I hope you can find a way to digitize it. I picked up the giant one-volume version of Inside Macintosh from an MIT Library sale, and even without the rich added character of yours it's an extraordinary piece of technology history.

jumelles(3579) about 1 hour ago [-]

It would be awesome if you put this somewhere like!

gedy(3812) about 6 hours ago [-]

Would love to see some pictures!

Historical Discussions: Show HN: Free and open-source home for art made with code (December 12, 2018: 143 points)
Show HN: 2D mountains shader (July 20, 2018: 14 points)
ShaderGif 0.0.19: Make & share gifs with JavaScript (canvas element) (November 16, 2018: 2 points)
Show HN: ShaderGif, like ShaderToy, but with gifs (December 24, 2017: 2 points)
Show HN: Visual patterns made from integer math (August 03, 2018: 2 points)
About ShaderGif – Why Is ShaderGif Better Than ShaderToy (October 02, 2018: 1 points)
Show HN: A simple 2D bird made with 83 lines of GLSL (July 18, 2018: 1 points)

Show HN: Free and open-source home for art made with code

143 points 4 days ago by antoineMoPa in 3758th position | Estimated reading time – 4 minutes | comments

ShaderGif is a free and open source home for art made with code

Languages and platforms

Currently, ShaderGif allows you to make gifs using:

  • GLSL - WebGL1
  • GLSL - WebGL2
  • Javascript, using <canvas>
  • Javascript, with p5.js

More languages are coming!

The editor

The editor is a simple web coding tool divided in 2 columns.

Left: a visual preview + tools to generate gifs. Right: a text editor where you type code (Javascript or GLSL, depending on selected language).


  • Texture uploads (for Shaders only)
  • Private drafts
  • localStorage backup (useful when you crash your browser/tab, you can then CTRL+R)
  • Comments and notifications when somebody comments on your gif
  • PNG .zip - Download all frames in zip to create videos with external software - works with 100s of frames
  • Use your favorite editor - Download standalone offline copy to edit with your favorite editor.

Supported browsers

Editor supports Chrome and Firefox. (Other browsers might experience problems)

See the possibilities!

Scroll through the feed to get an idea of the possibilities:


Everything that can be done in your browser is free.

In the future, we might introduce paid plans for :

  • Video export using our servers
  • Running code in our containers

Since ShaderGif is open source, you can always download the code and run it on your own hardware.

Why register?

Registering allows you to publish gifs on the feed and save drafts for later.

How to use the editor?

The editor contains a preview on the left and the code on the right. Pro tip: the various panels on the left can be collapsed by clicking on the title bar. To create gifs, there is a '+ Create gif' button. Be patient, since this process takes some time.

I frequently start from the 'circle.glsl' example and build up from there:

How to get all frames as png?

  • Click the 'Create .zip file' button.
  • Once ready, click the 'Save Zip' button that appears

How to upload a gif?

  1. First you need to register.
  2. Then, in the shader editor, click the 'Create gif' button.
  3. The encoding will take a few seconds.
  4. Once it's ready, the gif will appear at the bottom of the left division of the screen.
  5. If you wish, you can add a title and post it. (It will be shown in the feed)
  • The upload will take a few seconds; this is normal.
  • The server is rendering a video preview of the gif for the feed.


There are 78 registered users and a total of 306 shared gifs.

For more details:

View stats page

Contributing and sending feedback

To create github issues

To contribute code and documentation:

Start Coding

For the GLSL modes: If you have never used shaders before, you will get more pointers in the documentation ( For the Javascript/P5.js mode, just read the comments, they should guide your journey!

Don't have an account yet?

With an account, you can post public gifs and save drafts.

Sign up!

All Comments: [-]

antoineMoPa(3758) 4 days ago [-]

For some info about the tech stack, I use:

- Ruby on Rails

- Vue

- CodeMirror

- gif.js

- avconv (FFMPEG) for video previews

- A $5/Month Linode server

The server handled being in the top 3 on HN, at about 20 reqs/second, with less than 15% CPU load.

The editor uses an architecture that allows me to quickly add a new language. Just create a class & reference it.
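A hypothetical sketch of that "create a class & reference it" registry pattern (the class and method names here are invented for illustration; ShaderGif's actual interface may differ):

```javascript
// Each language mode implements the same small interface...
class CanvasJsMode {
  constructor(code) { this.code = code; }
  renderFrame(t) { return 'canvas frame at ' + t; }
}
class Webgl1Mode {
  constructor(code) { this.code = code; }
  renderFrame(t) { return 'webgl1 frame at ' + t; }
}

// ...and adding a language is just one more entry in the registry.
const modes = { javascript: CanvasJsMode, webgl1: Webgl1Mode };
function createMode(name, code) { return new modes[name](code); }
```

The editor only ever talks to the shared interface, so a new language never touches the preview or gif-encoding code.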

EDIT: Forgot to mention important parts:

- Bulma CSS

- feather SVG icons (

- zip.js

brian_herman__(10000) 4 days ago [-]

Impressive 20 req a second with just 15% cpu!

cdubzzz(2584) 4 days ago [-]

The dark theme is beautiful — I really like everything about the font, styling and layout. Well done.

sarreph(2115) 4 days ago [-]

This is a refreshing take on a shader-sharing platform. My first thought was 'Here's another Shadertoy competitor', but this was quashed by two core features that I think are done really well:

- A cleaner / extensive code editor (that also supports JS, woop!)

- Conversion of shader output to GIFs... Even on a laptop with a dedicated GPU, Shadertoy pulls the rug from underneath my system resources

I would like to see a grid-view (unless this already exists and I missed it) of the feed, as I don't think scrolling through a feed of linear, huge one-by-one posts is good for discovery.

antoineMoPa(3758) 4 days ago [-]

Thanks for the feedback, I like the grid idea!

geo_mer(3977) 4 days ago [-]

This is actually a very neat and useful website. Also respect for making the website fast and not adding any 3rd party scripts.

antoineMoPa(3758) 4 days ago [-]

Yeah, it's terrible how things like Google Analytics and web fonts can slow things down. I tried to keep this app as light as possible. Of course, I won't get a lot of data for analysis, but as a user I know what can be improved; I don't need to monitor every click, keystroke, mouse move, ...

crummy(3754) 4 days ago [-]

This may sound like an odd request but at a creative codejam in Berlin a while back I saw something like this but 'multiplayer' - different people could log on to a room and edit live. In this example they couldn't edit each others code but all shared the same output window. It used Processing.

It ended up being kind of a shared creation/remix experience which was really cool. I figure adding live simultaneous edits is a fairly difficult ask but it's immediately what I looked for when I saw your page.

Regardless, looks really cool!

antoineMoPa(3758) 4 days ago [-]

Complicated, but not impossible! In fact, I already have an issue in github for this:

I will make it one of my priorities for the next months (Studying still being priority number 1, of course).

speps(3357) 4 days ago [-]

One feature I found myself missing from ShaderToy is that you can't have render targets of arbitrary size. You have Buffer A/B/C/D but they're all the same size as the canvas. I would love to see a way to specify a shader that generates a 64x64 texture for example, that would allow some nice effects like a simple low res fluid sim.

antoineMoPa(3758) 3 days ago [-]

Currently, I support many passes, but also at the canvas size. I took a note in my github issues and I will try to implement it when I have time!

In the meantime, it is still possible to create a 2D toy fluid sim:

Nvidia's $1,100 AI brain for robots goes on sale

124 points about 15 hours ago by elorant in 575th position | | comments

The Jetson Xavier system-on-chip at the heart of the module relies on no less than six processors to get its work done. There's a relatively conventional eight-core ARM chip, but you'll also find a Volta-based GPU, two NVDLA deep learning chips and dedicated image, video and vision components. This is while it uses 'as little as' 10W of power. All told, it can juggle many AI-oriented tasks at once (30 trillion computing operations per second, to be exact) in a relatively compact space.

NVIDIA already has a number of customers lined up, including Chinese shopping giant (delivery bots), Yamaha (drones) and Nanopore (DNA sequencing). Although it's far from certain that this module will make NVIDIA a staple of the robotics scene, it at least signals that the company is serious about sticking around.

All Comments: [-]

amelius(867) about 13 hours ago [-]

How does it compare to e.g. an Intel Movidius neural compute stick?

jahewson(4001) about 11 hours ago [-]

That's an apples-and-oranges comparison - the Xavier is an entire computer; the Movidius is just a single accelerator chip.

techsin101(3949) about 7 hours ago [-]

Could someone ELI5 this? I assume it runs code on the GPU, so do you need to know some special programming language?

p1esk(2738) about 5 hours ago [-]

It's a Linux board with an ARM processor and 30W Volta GPU. You connect one or more cameras to it, and develop GPU accelerated computer vision apps using the supplied SDK (CUDA, CuDNN, TensorRT, OpenCV, etc):

You can also install Tensorflow on it.

mark_l_watson(2526) about 13 hours ago [-]

Impressive compute in a small form and running on 10 watts. Also interesting going after a non-consumer market although I think the chip would be a good fit in a handheld gaming device that supported some inputs from watching the player and had the power for very interesting/fun 'game AI.'

dejv(3892) about 12 hours ago [-]

Haven't tried the Xavier yet, but I am using the TX2 in my work (the previous generation of this type of device) and the CPU is too weak to allow any serious gaming.

monocasa(2632) about 9 hours ago [-]

I'm not sure a JITing ARM->VLIW core like that is the best choice for a game console. It'd probably lead to all sorts of weird perf inconsistencies that are hard to track down.

srcmap(10000) about 8 hours ago [-]

Not sure if 10 watts CPU can work in any handheld gaming device:

A typical 3000 mAh cell phone battery would last only 18 minutes powering a 10-watt CPU alone; adding the display, wifi, DDR, etc., it would be even less.

For automobile connected or wall powered devices (robot), 10 watts for CPU/GPU power is fine.
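A quick sanity check on the battery arithmetic (assuming a typical 3.7 V nominal lithium cell, which the comment above doesn't specify): a 3000 mAh cell stores about 11.1 Wh, which is roughly an hour at a steady 10 W; a figure near 18-20 minutes corresponds to the module's full ~30 W draw mentioned elsewhere in the thread:

```javascript
// Runtime estimate: energy (Wh) = capacity (Ah) * nominal voltage (V),
// then runtime (min) = energy / load * 60. The 3.7 V nominal is an assumption.
const capacityAh = 3.0;   // 3000 mAh phone battery
const nominalVolts = 3.7; // assumed Li-ion nominal voltage
const energyWh = capacityAh * nominalVolts;  // ~11.1 Wh
const minutesAt10W = (energyWh / 10) * 60;   // ~67 minutes
const minutesAt30W = (energyWh / 30) * 60;   // ~22 minutes at full module power
```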

tim333(1485) about 11 hours ago [-]

>You're not about to buy one yourself -- it costs $1,099 each in batches of 1,000 units

On the site it has: 'Members of the NVIDIA Developer Program are eligible to receive their first kit at a special price of $1,299 (USD)' (

The specs seem quite impressive really.

pj_mukh(3503) about 9 hours ago [-]

Is there a devboard?

saosebastiao(3948) about 9 hours ago [-]

Does it have onboard memory? I feel like calling it a system-on-chip kind of implies it, but I didn't see anything about it.

monocasa(2632) about 9 hours ago [-]

'SoC' is pretty orthogonal to having memory, but system on modules almost always do. This one has 16GB.

jkravitz61(10000) about 6 hours ago [-]

The development kit has been available for the last month. One of the problems no one talks about is that this platform (along with the TX2/TX1) runs arm64, which makes it a HUGE pain to get many libraries working. I've been using these for a while, and consistently need to hunt down library source code and compile it for arm64, since most libraries are distributed without arm64 support. There are also plenty of device-specific closed-source SDKs (such as Point Grey Ladybug cameras) which just don't support arm64, so your only option is to attempt to write your own or pressure the manufacturer to publish an arm64 version. I do not recommend this platform for hobbyists for this reason - go buy a small x64 computer and spend 1/10th the time designing a better battery system.

jacquesm(40) about 5 hours ago [-]

If you're into this kind of thing a little bit of compilation should not scare you. The norm in the embedded world is to bootstrap your toolchain first, the availability of a good compiler and libraries that are endian clean is amazing progress.

null000(10000) about 5 hours ago [-]

> I do not recommend this platform for hobbyists for this reason

There's also the fact that the article suggests that they come in batches of 1000 at $1100 each.

scottlocklin(3744) about 8 hours ago [-]

Maybe this should be an 'ask HN' thread. About 10 years ago I considered taking up robotics as a hobby, and thought better of it upon asking a robotics professor I did deadlifts with in the gym. My goal was an autonomous robot which could fetch me arbitrary things from a refrigerator with minimal trickery (aka radio tags on beer cans, magnetic tape on the floor, etc). Seemed impossible at the time, or at least a pretty serious Ph.D. thesis type of effort.

Is there some list of 'open problems in robotics' by which I could inform myself if this is still an insane goal?

robkop(10000) about 6 hours ago [-]

Absolutely not an insane goal. If you want to see progress in picking items a good starting point with a bunch of resources is the Amazon picking challenge.

In terms of navigation, magnetic line following is trivial and more complex navigation really isn't that hard these days. Just look at how any of the modern robovacs navigate.

If you want to generally see where things are I'd recommend checking out some papers or even some writeups from the last icra.

EDIT: Just looked at the Amazon picking challenge for the first time in quite a while and it's not as impressive as I remember it to be.

Jack000(2135) about 5 hours ago [-]

not that hard with some fiducial tags. Here's my (hobbyist) effort that cost a bit more than 1k

tlb(1313) about 6 hours ago [-]

That goal has gotten easier in the last 10 years, mainly due to machine learning in the vision system. You can reliably train a neural net to find things in a fridge from a depth camera.

The rest of the problem: navigation, motion planning, etc. hasn't changed that much, but is definitely possible on an amateur budget.

The problem with a list of 'open problems in robotics' is that just about everything people have come up with has been demonstrated in a lab somewhere. Walking, grasping, manipulation, navigation, swarming, etc. But nobody has managed to combine all those capabilities in a single robot. So the remaining open problem is to solve all the individual problems with one piece of hardware and software.

Qworg(2230) about 6 hours ago [-]

It is an open problem still. I'm unsure of a list though.

There have been some great advancements in grasping and embodied decision making lately though, so it could fall soon.

sjf(10000) about 8 hours ago [-]

Depending on your definition of trickery, you could probably do it right now with a vending machine fridge and a conveyor belt.

perpetualcrayon(3780) about 1 hour ago [-]

I think most consumer robots will be driven by centralized computing power. There's probably no need for the brain to be on the robot, just a good wifi connection.

EDIT: That is of course for robots that won't need to leave the house. Then again, I can't imagine the future won't have global high bandwidth cellular coverage with at least 5 9's availability.

ianai(4000) about 1 hour ago [-]

So long as there's enough low latency bandwidth.

xvilka(3341) about 10 hours ago [-]

Too bad this comes from the worst company for open source. I wish something other than CUDA and NVIDIA dominated the modern AI industry.

joefourier(10000) about 9 hours ago [-]

Nvidia isn't just unfriendly towards open-source, their embedded division is unfriendly to anyone that isn't a large corporation involved in AI, self-driving cars and such applications. Good luck if you try to develop something else on the Jetson as a small-scale company and have to deal with device-specific bugs and driver issues.

twtw(3842) about 10 hours ago [-]

I wish the economics of the present were somewhat different, and that money didn't exist in the 21st century.

And yet, in the world we live in, I have a hard time faulting a corporation for not giving away their core products for free.

TomVDB(10000) about 9 hours ago [-]

"The worst company to the open source" is a bold claim when there are thousands (millions?) of companies that don't open source anything at all.

Last time I checked, Nvidia has quite a bit of open source software on GitHub.

Open sourcing something that you have developed and paid for(!) should always be at the discretion of those who did so.

KineticLensman(10000) about 12 hours ago [-]

I clicked through the various 'manage settings' dialogues starting at the 'before you continue...' splashscreen and eventually found a list [0] of Oath partners who 'participate and allow choice via the Interactive Advertising Bureau (IAB) Transparency and Consent Framework (TCF)'. The list contains more than 200 different organisations.

I decided not to read the article.


dgzl(10000) about 11 hours ago [-]

> Interactive Advertising Bureau

Why does this just sound terrifying to me?

hetspookjee(10000) about 6 hours ago [-]

Whenever I get prompted by such an aggressive cookie wall I just open it in a private tab with uBlock Origin loaded as well.

I already use Firefox Focus as a standard browser on mobile and use private tabs on the desktop more and more as well. Whenever I want to reply or use a service that requires cookies (such as logging in on HN) I use the regular session.

Subpar but it works with minimal effort.

Semaphor(10000) about 8 hours ago [-]

uBlock Origin & uMatrix allow you to read the site without seeing that popup or sending data to the advertisers. I realize that you shouldn't need to do that, but imo it's the reality we live in.

nightcracker(3954) about 7 hours ago [-]

Do these companies not realize that this is just in violation of GDPR? Might as well entirely ignore it if you are just going to blatantly violate it anyway.

The GDPR is pretty clear that fully opting out should be as easy as opting in. 'Going to a third party site and manage your choices with 200 organizations' will not hold up in court.

Symmetry(2552) about 12 hours ago [-]

This looks really compelling for cases where a robot isn't big or stationary enough to just use an industrial PC. I'm really looking forward to seeing how Nvidia's newest iteration on Transmeta's core does in benchmarks. From the Wikichip Spec results[1] and quick Phoronix tests[2] it doesn't seem too far off from an Intel chip clocked down to a similar speed. The whole approach of JITing from x86 or ARM instructions to an exposed-pipeline VLIW design is just a really interesting one. For the last generation that was used in the Nexus 9 it did very well in areas that VLIWs are traditionally good at, like audio processing, and was sort of mediocre in areas where VLIW tends to be bad. A JIT running underneath the OS has the freedom, in theory, to add things like memory speculation across library calls that an OoO processor couldn't do. But the software to do that is, of course, really hard to write. I hope it's improved in the years since the Nexus 9 came out.



dejv(3892) about 12 hours ago [-]

Also, the GPU is usually lacking in a typical industrial PC. I am using the TX2 for this exact reason: small form factor and good performance for GPU-enabled code (running OpenCV and ML models). Plus you can easily add your own hardware, and it acts as a kind of Raspberry Pi on steroids.

Organizational Debt

124 points about 10 hours ago by ingve in 1st position | Estimated reading time – 14 minutes | comments

We all know that classic aphorism: Year comes to an end, Rust blog post press send. This is mine.

There are lots of cool technical improvements to Rust that I want the project to achieve this year, and a few in particular that I'm definitely going to be putting a lot of time into. But this blog post is going to talk about none of them. Instead, I want to talk about organizational debt, and how badly the Rust project needs to deal with it in 2019.

The Rust project has been growing like a startup for the last several years. This has some good aspects - "hockeystick curves" - but it has some very bad aspects as well. If this project is going to have a chance of sustaining itself in the long term, we need to get real about dealing with the organizational debt we have accumulated. I think we have serious problems at every level of our organization that need to be addressed, and I'm going to enumerate them from my personal perspective. Press send.

Using GitHub issues to discuss design is like drinking from a firehose

I've collected some of the GitHub issues and internals threads related to one specific API addition I worked on this year: the pinning APIs. This API is far from one of the most controversial APIs - not even in the top 10. I would guess (with no overall numbers) that it's pretty close to the mean, maybe a little above.

Here's a list of the threads I found about the design of the pin API; none or very little of this discussion is the sort of code review back and forth you get on pull requests, because none of these are pull requests to the rust repo, only design discussion threads:

That's a total of 770 comments on the design of the API for the Pin type, which has not yet been stabilized; I expect it will pass 800 comments before this is done. This is just one significant but ultimately fairly small std API addition, and it doesn't include the discussion that's gone on around the features that are built on top of it, like async/await, generators and futures. And it doesn't include discussion outside of official venues, like reddit threads and IRC or Discord chatter.

Rust is my full time job and even I find it impossible to keep up on every design discussion happening related to the teams that I am on (lang, libs, and cargo). It's been a regular event this year that I discover during lang team triage that there's a new RFC I hadn't seen that already has more than 50 comments. For someone who isn't paid to do this, trying to participate productively in any of these discussions seems extremely difficult.

And this compounds itself. When there have already been 770 comments on a subject, you're obviously not going to read them all. It's been very common for users to make comments that are repetitious of previous discussions, re-opening questions that were previously resolved (sometimes for good reason, but sometimes spuriously), making the conversation even harder to follow as the issue has to be relitigated or a fine and easily confused point explained yet again. Every comment added to a discussion thread is ultimately a form of debt, and it's a form of debt with compound interest.

What's worse is that it's become clear that breaking the discussion into smaller components is not a solution to the problem. No matter how many GitHub issues we create, it seems that every single one will grow in length until it becomes an unsustainable conversation. We have a problem of induced demand: just as adding lanes to a highway does not resolve traffic congestion, creating more threads does not resolve comment congestion.

All of this discussion is having several negative consequences:

  1. It becomes overwhelming and exhausting for people whose role is to drive consensus in these discussions.
  2. It becomes harder for users with genuinely new insights to participate because they can't tell if their insight is new or not and may decide just not to post it.
  3. It creates conflict as miscommunication occurs between people with most or all of the comment context already absorbed and people who are entering the discussion for the first time.

The RFC process we've been using has not scaled to our level of participation. It's hurting the project as a whole and taking a toll on many of the contributors who work within that process as one of their primary activities. I personally would feel very unhappy about needing to initiate a new major consensus discussion (e.g., proposing a major new language feature) until we have overhauled this process.

The project is not coordinating smoothly

In order to produce a coherent user experience, Rust needs to have a cohesive design vision across different aspects of the product. It used to be, when the total team membership was under 30 people, that a shared vision could be diffused naturally across the project, as people involved in mutual projects coordinated and discussed and developed their viewpoints together, like a beautifully evolving neural network.

But we've long since reached the point where coordinating our design vision by osmosis is not working well. We need an active and intentional circulatory system for information, patterns, and frameworks of decision making related to design. I've run into threads repeatedly this year in which decision making felt fraught to me because we had no guidelines from which to make decisions. Different teams begin to diverge in product viewpoint, different components become managed by subsets of teams and other contributors without much interaction from the rest of the team, and the project risks becoming scattered and irregular.

In theory, the core team could play this role, but it does not seem well equipped to do so. I think it's time that we rethink the organization of a unitary core team itself, recognize the different kinds of coordination that need to happen between the teams, and create the appropriate cross-team bodies tasked with and capable of providing each specific form of coordination.

The teams are experiencing growing pains

Rust project management is largely performed by the various teams responsible for various areas of the project. I'm a member of three of these teams, and I've felt this year like they've experienced some serious growing pains:

  • Teams have vague bailiwicks, being responsible for vast sections of the project. To some extent we've solved this by breaking up teams (the "tools and infrastructure" team of 2016 is now 5 different teams!), but I think there is still difficulty spinning off smaller groups to work on specific issues.
  • At the same time, teams often feel directionless. Because of their "governance" orientation, teams do not have a specific goal, and so the teams I'm on spend the majority of their time triaging issues raised by other people. It feels hard now to imagine the team as a site capable of initiating new projects.
  • The membership of many teams has grown, in some cases stressing the ability of the team to meet synchronously - both in terms of coordinating peoples' schedules but also in terms of the effectiveness of synchronous meetings of so many people.
  • As the membership has grown, the institutional memory of the teams has weakened. It's hard to transfer knowledge from previous team members to new team members naturally, and we have no intentional structure to perform that kind of knowledge transfer.

We need more infrastructure for team organization and an ability to more quickly and easily break off discussion groups on specific matters. The working groups have been a step in the right direction, but the teams themselves need to be more proactive in spinning off working groups to respond proactively to specific needs.

In this last year, we've experimented with working groups as a way to alleviate some of the pressure on teams. A working group is a group with relatively relaxed membership tasked with iterating on a specific problem, leaving ultimate decision making in the hands of the team. This is a great idea, and some working groups have worked well and produced a lot of good results.

But not every working group has been a success. I'll own: of the four initial "domain working groups," the networking working group has been the least functional. The reason is pretty straightforward: both Taylor Cramer and I, the people initially assigned to lead it, focused our time on the design of the futures APIs and async/await feature instead of on building out the working group. Leadership of the group has since been passed on to other people, with somewhat better results, but this initial failure has a lot of important lessons in it.

As Nick Fitzgerald pointed out, we need to abstract the ways working groups have worked into a template that other people can use. But even more importantly, we need more people who are capable and interested in doing the very important coordination and leadership work necessary for working groups to succeed.

In the television series The Wire, which is about the dysfunctional nature of the modern American city, the newly elected mayor has coffee with a man who had been mayor in the past. The mayor-elect asks the former mayor why he had chosen not to run for re-election, and he tells him a parable:

Let me tell you a story, Tommy. The first day I became mayor, they set me down at the desk, big chair and dark wood, lots of beautiful things. I'm thinking, "how much better can it get?" There's a knock at the door in the corner of the room, and Pete comes walking in carrying this gorgeous silver bowl, hand-chased...

So I think it's a present, something to commemorate my first day as mayor. And he walks over, puts it on a desk. I look down at it. It's disgusting. I say, "What the hell is this?"

He said, "What the hell's it look like?"

I said, "It looks like sh*t. Well, what do you want me to do with it?"

He says, "Eat it."

"Eat it?"

He says, "Yeah. You're the mayor. You gotta eat it."

So, OK. It was my first day, and Pete knows more than I do. So I go at it. And just when I finish, there's a knock on the door. And in walks Pete carrying another silver bowl...

And you know what, Tommy? That's what it is. You're sitting eating sh*t all day long, day after day, year after year.

Being a public leader of a major open source project practicing "radical openness" can feel a lot like this.

Someone, somewhere, is always upset about something you are responsible for, and a part of your job is understanding and responding to their concerns. To them, this is the thing that they are upset about, and probably they experience being upset only occasionally. But to you, life is a stream of different people being upset about different things, and you have an obligation to be empathetic and responsive with their concern in each new instance.

That's fine as far as it goes: after all, that's the job. But what is so exhausting for me personally is that there is so often not a reciprocity of empathy from the other side of the discussion. Frankly, what I experience often is a startling lack of professionalism from community members, a standard of conduct which - while not below the bare minimum of appropriate behavior set by our Code of Conduct - I cannot imagine would be acceptable as a manner of communication to a colleague in a workplace.

Unlike the project, the community has no defined boundary of membership: different people engage on different subject matters, community members leave, new members join. And so it's much harder to pinpoint specific people who need to modify their behavior in specific ways. But I think that it is time that we adopt higher standards of conduct for engaging in the work of the Rust project, as opposed to the Code of Conduct, which sets out standards for merely interacting casually with other members of the community. We need to establish the norm that discussion forums for work on Rust are professional spaces, and professional communication is mandatory.

It's time to talk about pay

There's one more important point that I need to address here. We've been growing like a startup, and that includes a lot of our contributors working for made-up internet money. Tons of people are devoting tons of time to Rust and getting paid nothing. A lot of these people are doing it because it is intrinsically interesting and fun, a lot of them are in a life position in which they have an excess of free time, and I'm sure a lot of them have the hope that it will pay off in the long term in developing leads for future employment.

I'm fortunate enough to be one of the very few people who do get paid to work on Rust, and my gig is great, but this is a broken and unsustainable situation that is creating lots of problems. Volunteers driving vitally important projects have dropped out as their life situations changed, leaving us in a situation where feature work is incomplete and hard to resume. The scarcity of money inevitably leads to bad feelings and distrust in regard to how it ultimately gets distributed. And only people who have the privilege of a lot of free time and confidence are able to get significantly involved in the project on a volunteer basis.

Open source is the unpaid internship of tech. It's often suggested that the problem is best modeled with open source contributors as an underpaid proletariat, but that is wishful thinking: right now, open source contributors are bourgeois dilettantes; before we can talk pie in the sky fantasies about organizing open source as some kind of workers' cooperative or union, we need to become workers at all.

I don't know what a solution to this issue looks like, and I think proposals for a "Rust Foundation" are largely magical thinking that reorganize the problem statement without solving it.

I can give this anecdote, speaking in regard to my particular situation. I don't know the numbers of course, but I think the work that has been done by project contributors on async/await this year will create really significant, massive value for just the companies that are already adopting Rust for asynchronous networking. But very little of that value will be captured by Mozilla, the only company paying anyone (me) to work on async/await syntax as their primary task for their full time job. More of the value being generated by the Rust project needs to start getting routed back into contribution on Rust than it is now if Rust is going to be sustainable.


So my proposal for Rust 2019 is not that big of a deal, I guess: we just need to redesign our decision making process, reorganize our governance structures, establish new norms of communication, and find a way to redirect a significant amount of capital toward Rust contributors.

I don't think that it's feasible to imagine the Rust project, or any project accomplishing anything significant, as a smoothly functioning body which satisfies and fulfills everyone involved in it while always making the best decisions with the least organizational overhead. But we have been putting off dealing with our organizational debt for too long now, and we need to acknowledge it and work actively on reducing it before it overflows our stacks and crashes our project.

All Comments: [-]

unixhero(3716) about 8 hours ago [-]

So... Change to Gitlab?

Vinnl(731) about 7 hours ago [-]

I don't think GitLab's issues are that different from GitHub's that it would solve these issues. Perhaps something like Loomio might, which is made for use cases like this.

revskill(3846) about 8 hours ago [-]


dasmoth(3161) about 8 hours ago [-]

There is. Try clicking the [-] beside a comment title.

SkyMarshal(2209) about 6 hours ago [-]

@ OP & HN mods - just fyi, it's not clear from the title or URL that this is about the Rust project. Might be worth adding "(Rust)" or something to clarify.

Anecdotally, I for one am not interested in the topic of organizational debt in general or wrt the "boats" project, whatever that may be, as I've read and experienced quite a lot about it already. But I am very interested in it wrt the Rust project, given their clarity of thinking in other domains. I clicked through and read this anyway, only b/c it's Sunday and I'm not in a hurry to scan the titles and urls only and quickly decide what's interesting to me. But on other days I might have skipped this one just due to the lack of important information in the post line.

justinpombrio(4002) about 5 hours ago [-]

> the "boats" project, whatever that may be

'boats' is a person. Short for 'withoutboats'. From their about page:

> This is the blog of withoutboats, a Rust contributor and a stack of labrador puppies in an overcoat.

curiousgal(2654) about 1 hour ago [-]

I thought it would be about how firms leverage financial debt and got excited.

z3t4(3723) about 4 hours ago [-]

Just use IRC ... the problem with writing down all ideas and all discussion like with Git issues or a forum is that the few good ones get buried in all the shit. With IRC you can only have one or two discussions at a time, so you won't have the problem of following 30+ simultaneous discussions. IRC is a form of grinding: after some time, the community will have reached a consensus, and all arguments for and against something will be known by most regulars, so when someone new joins and says something that has already been discussed a thousand times, one of the regulars can explain that these are the trade-offs and this is why it has not been implemented. Good ideas or important issues will be repeated over and over until they are either implemented or fixed, so prioritizing issues becomes natural.

marcus_holmes(3846) about 3 hours ago [-]

IRC (and Slack) are really vulnerable to people missing out on the discussion altogether, though.

Especially in a global organisation, where only part of the team are awake at any one time, so it's impossible to have a real-time discussion regardless of platform.

The advantage of forums is that it's not a real-time discussion and everyone gets to contribute.

MaulingMonkey(10000) about 3 hours ago [-]

> Just use IRC ... the problem with writing down all ideas and all discussion like with Git issues or a forum is that the few good ones get buried in all the shit.

Buried is still searchable and linkable, IRC generally isn't. Poorly rehashing old discussions generally doesn't help improve the signal to noise ratio either IME.

It's not necessarily obvious where all discussion should take place either. I started writing a VisualRust patch to add stdlib natvis files to MSVC projects, which instead morphed into a better rustc patch:

Thanks to this buried gem I stumbled across in a search:

If that was an IRC discussion somewhere instead, it probably wouldn't have gotten done (or at least not nearly so soon), because I'd be chatting and listening in the wrong channel/silo.

wgerard(10000) about 8 hours ago [-]

Re: Github Discussions

Preface: I don't think this resolves the issue completely by any means.

It's seemed clear to me for a while that Github discussions need threading, badly. The 200 comments on one of those issues probably amount to about 40 actual discussion topics. Many of them are the original poster adding follow-up comments to their original comment, or back-and-forth discussions about one single comment. It's especially noticeable on contentious discussion topics.

Basic conversation threads wouldn't completely alleviate the problem--200 threads is much better than 1000 comments, but probably still an intractable amount for the maintainers. Still, it might mean you're able to scan 75% of the discussion instead of 25%.

wtracy(3442) about 8 hours ago [-]

The only thing that I could imagine approaching a 'complete' solution would be to have a dedicated secretary who manually writes comprehensive summaries every few days.

Since that's not going to happen, I don't see any better solutions than yours.

kibwen(748) about 8 hours ago [-]

The hidden benefit might be that, with the existence of nested subthreads, many of those comments could be obviated, since it's easier to see which concerns have already been raised and which questions have already been asked/answered.

lkrubner(843) about 6 hours ago [-]

Indeed, to take a terrible example from the Scala community, what about this:

That is nearly the exemplar of a conversation gone off the rails. At least to some extent, threading would help. And, since so many of those people keep getting sidetracked, it would be useful to the moderator to be able to break off a section of that debate and make it a separate conversation.

curuinor(3877) about 7 hours ago [-]

December 16, 2019?

keithnz(3724) about 6 hours ago [-]

the super organized manage to bend time to get things done

Historical Discussions: Show HN: Vaex - Out of Core Dataframes for Python and Fast Visualization (December 13, 2018: 123 points)

Show HN: Vaex - Out of Core Dataframes for Python and Fast Visualization

123 points 4 days ago by maartenbreddels in 3673rd position | Estimated reading time – 9 minutes | comments

So... no pandas 🐼?

There are some issues with pandas that the original author Wes McKinney outlines in his insightful blog post "Apache Arrow and the '10 Things I Hate About pandas'". Many of these issues will be tackled in the next version of pandas (pandas2?), building on top of Apache Arrow and other libraries. Vaex starts with a clean slate, while keeping the API similar, and is ready to be used today.

Vaex is lazy

Vaex is not just a pandas replacement. Although it has a pandas-like API for column access, when executing an expression such as np.sqrt(ds.x**2 + ds.y**2), no computations happen. A vaex expression object is created instead, and when printed out it shows some preview values.

Calling numpy functions with a vaex expression leads to a new expression, which delays the computation and saves RAM.

With the expression system, vaex performs calculations only when needed. Also, the data does not need to be local: expressions can be sent over a wire, and statistics can be computed remotely, something that the vaex-server package provides.
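The lazy-expression idea can be illustrated with a toy class; this is only a sketch of the concept, not vaex's actual implementation:

```python
import numpy as np

class Expr:
    """Toy lazy expression: builds up a computation, evaluates only on demand."""
    def __init__(self, fn):
        self.fn = fn  # zero-argument callable producing an array

    def __add__(self, other):
        return Expr(lambda: self.fn() + other.fn())

    def __pow__(self, p):
        return Expr(lambda: self.fn() ** p)

    def sqrt(self):
        return Expr(lambda: np.sqrt(self.fn()))

    def evaluate(self):
        return self.fn()  # the computation happens only here

x = Expr(lambda: np.array([3.0, 0.0]))
y = Expr(lambda: np.array([4.0, 1.0]))

r = (x ** 2 + y ** 2).sqrt()  # no arrays are computed yet
print(r.evaluate())           # [5. 1.]
```

Because an expression is just a description of a computation, it can be stored as a virtual column, shipped to a server, or JIT-compiled before anything is ever materialized.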

Virtual columns

We can also add expressions to a DataFrame, which result in virtual columns. A virtual column behaves like a regular column but occupies no memory. Vaex makes no distinction between real and virtual columns, they are treated on equal footing.

Adding a new virtual column to a DataFrame takes no extra memory.

What if an expression is really expensive to compute on the fly? By using Pythran or Numba, we can optimize the computation using manual Just-In-Time (JIT) compilation.

Using Numba or Pythran we can JIT our expression to squeeze out better performance: > 2x faster in this example.

JIT-ed expressions are even supported for remote DataFrames (the JIT-ing happens at the server).

Got plenty of RAM? Just materialize the column. You can choose to squeeze out extra performance at the cost of RAM.

Materializing a column converts a virtual column into an in-memory array. Great for performance (~8x faster), but you need some extra RAM.

Data cleansing

Filtering of a DataFrame, such as ds_filtered = ds[ds.x > 0], merely results in a reference to the existing data plus a boolean mask keeping track of which rows are selected and which are not. Almost no memory usage, and no memory copying going on.

df_filtered has a 'view' on the original data. Even when you filter a 1TB file, just a fraction of the file is read.
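In numpy terms the mechanism is essentially a boolean mask over the original columns; a sketch of the idea (not vaex internals):

```python
import numpy as np

x = np.arange(-3.0, 4.0)          # stand-in for a huge memory-mapped column

mask = x > 0                      # one bool per row; the column is never copied
print(int(mask.sum()), "rows selected")   # 3 rows selected

# Statistics can be computed through the mask without materializing a
# filtered copy of the data:
mean_positive = x.sum(where=mask) / mask.sum()
print(mean_positive)              # 2.0
```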

Apart from filtering a DataFrame, a selection can also define a subset of the data. With selections, you can calculate statistics for multiple subsets in a single pass over the data. This is excellent for DataFrames that don't fit into memory (Out-of-core).

Passing two selections results in two means in a single pass over the data.

Missing values can be a real pain, and it is not always easy to decide how to treat them. With vaex, you can easily fill or drop rows of missing values. But here's the thing: both the dropna and fillna methods are implemented via filtering and expressions. This means that, for example, you can try out several fill values at no extra memory cost, no matter the size of your dataset.

You can try several fill values at virtually no extra memory cost.
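Conceptually, a fill is just an expression over the original column, so several candidate fills can coexist without copying the data (a toy NumPy sketch, not the vaex internals):

```python
import numpy as np

def fillna_expression(x, value):
    """Toy sketch: a fill is an expression over the original data, so
    several candidate fill values can coexist without copying the column."""
    return lambda: np.where(np.isnan(x), value, x)

x = np.array([1.0, np.nan, 3.0])
fill0 = fillna_expression(x, 0.0)    # nothing is computed yet
fillmean = fillna_expression(x, 2.0) # a second candidate, still no copy
```

The original array is never modified; each expression produces its variant only when evaluated.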

Binned Statistics

Vaex is really strong in statistics. Since we are dealing with Big Data, we need an alternative to groupby, something that is computationally much faster. Instead, you can calculate statistics on a regular N-dimensional grid, which is blazing fast. For example, it takes about a second to calculate the mean of a column in regular bins even when the dataset contains a billion rows (yes, 1 billion rows per second!).

Every statistic method accepts a binby argument to compute statistics on a regular N-dimensional grid.
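The binned mean can be sketched with NumPy's bincount: a single pass accumulates per-bin sums and counts (a conceptual model; vaex's binby machinery is far more optimized):

```python
import numpy as np

def binned_mean(x, values, bins, limits):
    """Toy sketch of a binby-style statistic: mean of `values` in regular
    bins of `x`, accumulated with np.bincount."""
    lo, hi = limits
    # map each x to its bin index on the regular grid
    idx = np.clip(((x - lo) / (hi - lo) * bins).astype(int), 0, bins - 1)
    sums = np.bincount(idx, weights=values, minlength=bins)
    counts = np.bincount(idx, minlength=bins)
    with np.errstate(invalid='ignore'):   # empty bins become NaN
        return sums / counts

x = np.array([0.1, 0.2, 0.6, 0.9])
v = np.array([1.0, 3.0, 5.0, 7.0])
means = binned_mean(x, v, bins=2, limits=(0.0, 1.0))
```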


Making meaningful plots and visualizations is the best way to understand your data. But when your DataFrame contains 1 billion rows, making standard scatter plots not only takes a really long time, but also results in a meaningless and illegible visualization. You can get much better insights about the structure in your data if you focus on aggregate properties (e.g. counts, sum, mean, median, standard deviation, etc.) of one or more features/columns. When computed in bins, these statistics give a better idea of how the data is distributed. Vaex excels in such computations and the results are easily visualized.

Let's see some practical examples of these ideas. We can use a histogram to visualize the contents of a single column.

This can be expanded to two dimensions, producing a heat-map. Instead of simply counting the number of samples that fall into each bin, as done in a typical heat-map, we can calculate the mean, take the logarithm of the sum, or just about any custom statistic.
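For example, a mean-per-cell heat map can be built from two histograms, one weighted by the values and one counting samples (a common recipe sketched with NumPy, not necessarily vaex's internal approach):

```python
import numpy as np

# Toy sketch: a heat-map cell can hold any aggregate, not just a count.
# Mean per cell = value-weighted histogram / count histogram.
x = np.array([0.1, 0.1, 0.9, 0.9])
y = np.array([0.1, 0.1, 0.9, 0.9])
v = np.array([2.0, 4.0, 10.0, 30.0])

bins, rng = 2, [[0, 1], [0, 1]]
sums, _, _ = np.histogram2d(x, y, bins=bins, range=rng, weights=v)
counts, _, _ = np.histogram2d(x, y, bins=bins, range=rng)
with np.errstate(invalid='ignore'):     # empty cells become NaN
    mean_grid = sums / counts           # 2x2 grid of per-cell means
```

Swapping the weights (log of sums, sums of squares for a variance, etc.) gives other per-cell statistics with the same two-histogram pattern.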

We can even make 3-dimensional volume renderings using ipyvolume.

Since the underlying mechanism for calculating statistics on N-dimensional grids is so fast, we can do them on the fly, and have interactive visualizations (based on bqplot).

Interactively exploring 150 million taxi trips using vaex+bqplot


Yes, vaex includes a kitchen sink, but it is a modular kitchen sink. Vaex is actually a meta-package, which will install all of the Python packages in the vaex family. Here is a list of the packages:

  • vaex-core: DataFrame and core algorithms, takes numpy arrays as input columns.
  • vaex-hdf5: Provides memory mapped numpy arrays to a vaex DataFrame.
  • vaex-arrow: Similar, but using Apache Arrow.
  • vaex-viz: Visualization based on matplotlib.
  • vaex-jupyter: Interactive visualization based on Jupyter widgets / ipywidgets, bqplot, ipyvolume and ipyleaflet.
  • vaex-astro: Astronomy related transformations and FITS file support.
  • vaex-server: Provides a server to access a DataFrame remotely.
  • vaex-distributed: (Proof of concept) combines multiple servers / clusters into a single DataFrame for distributed computations.
  • vaex-ui: Interactive standalone app/GUI based on Qt.

Want more?

We are constantly working on making vaex better. But that is not all. We are also working really hard on vaex-ml, a package that adds machine learning capabilities to vaex. Some really cool stuff is coming soon, so stay tuned! In the meantime, check out this live demo for a hands-on demonstration of vaex and a preview of vaex-ml.

Learn more about vaex and vaex-ml from our live demo at PyParis 2018

You can also try out the snippets from this article online in a Jupyter notebook using mybinder:

Click the button to launch a Jupyter notebook to try out the code snippets from the article


Are you ready for Big Tabular Data? We are! The zero memory copy policy, memory mapping, the pandas-like API, and the blazing fast computation of statistics on N-dimensional grids make vaex the go-to Python library for the exploration and analysis of your massive datasets. All this from the comfort of your laptop or PC. Vaex is open source (MIT) and on GitHub, check out its homepage, the documentation, or ask questions on gitter. Try it out and let us know what you think.

All Comments: [-]

rax(10000) 3 days ago [-]

It looks quite nice, and I will have to explore the performance comparisons with Dask more.

I have recently started using Xarray for some projects, and really appreciate the usability of multidimensional labelled data. Are the memory mapping techniques used for speedup here only applicable to tabular data?

The support for Apache arrow is quite nice. Have you considered any other formats, such as Zarr?

maartenbreddels(3673) 3 days ago [-]

Thank you. Memory mapping could be used for other data as well, and I have looked into zarr (I even opened an issue for that). Memory mapping of contiguous data makes life much easier (for the application as well as the OS); chunked data could be supported, but it is more bookkeeping.

colobas(3261) 3 days ago [-]

Does it have python3 support? Tried installing it on a python3.7 environment and it failed

EDIT: I then tried a python3.6 environment and it worked. I guess it answers my question

maartenbreddels(3673) 3 days ago [-]

Absolutely, I think nowadays the question should be: 'does it still support Python2?' (it does btw)

My question to you is: would you be so kind as to open an issue describing the failure? Please share which OS, which Python distribution (Anaconda maybe) and/or the installation steps and error message.

ah-(3108) 3 days ago [-]

Great to see that you're supporting Apache Arrow! That makes it so much easier to gradually switch over.

wesm(3430) 2 days ago [-]

Note: Vaex has its own memory model. If you input Arrow, it converts to the Vaex data representation. Details here:

One of the primary objectives of Apache Arrow is to have a common data representation for computational systems, and avoid serialization / conversions altogether.

angelmass(10000) 3 days ago [-]

Very interesting! I will share it with my DS friends.

One thing I have struggled with optimizing is visualization and coordinate calculation of network graphs with 10s of millions of edges + nodes using networkX and most visualization tools. Have you looked into this utility for Vaex? Reading your article it sounds like it would be well-suited for it.

maartenbreddels(3673) 3 days ago [-]

I have not looked into it; maybe datashader can do this, which is a package purely focusing on viz, while vaex is more all-round (although there is overlap). If you think vaex can be useful here, feel free to ask questions/open issues.

bayesian_horse(10000) 3 days ago [-]

The bigger question is what you want to achieve by visualizing so many nodes. If you want a map that can be zoomed in to view individual nodes, you mainly need to compute coordinates for every node. Finding the arrangement of the nodes is probably what gets you in trouble, so you probably need a custom algorithm which scales better (and probably performs worse).

More interesting may be to identify clusters and either group them together or visualize these clusters as nodes themselves.

blattimwind(10000) 2 days ago [-]


wenc(3972) 2 days ago [-]

Nice work. This looks like it could add a lot of value to a DS's toolbox.

Exploratory data analysis of large (but not huge) datasets has always been a slow and frustrating experience.

In the enterprise, we have plenty of datasets that are 100s of millions to a few billion rows (and many columns), so big enough to make conventional tools sluggish but not quite big enough for distributed computing. It sounds like vaex can help with EDA of these types of datasets on a single machine. I'd be interested in exploring the out-of-core functionality, which I hope means it will continue chugging along without throwing 'out of memory' errors.

maartenbreddels(3673) 2 days ago [-]

That is exactly the sweet spot for vaex, and with a familiar DataFrame API (read pandas like) the transition does not hurt so much. It may sound cool to set up a cluster, but in many cases it is overkill, and vaex can get these kinds of jobs done.

aw3c2(769) 3 days ago [-]

> For example, it takes about a second to calculate the mean of a column in regular bins even when the dataset contains a billion rows (yes, 1 billion rows per second!).

A billion 32 bit floating point numbers are 4 Gigabytes. How can that be processed in one second unless there was any preprocessing?

fulafel(3289) 3 days ago [-]

Desktop PCs have about 35 GB/s of memory bandwidth and can do compute at ~200 Gflops, so this is just ~10% of peak bw and leaves you a budget of 200 flops computation per float value. If all 4 columns are accessed, there is still enough bandwidth (no idea of the data here was columnar layout or not).

The relevance to big data or out-of-core computation is left hazy; wouldn't this be I/O bound in most cases? 4 GB fits easily in memory and is just mmap'ed from the OS disk cache if the data was recently touched. I guess with 4 columns you get to 16 GB, which might be pushing it on a laptop.

stestagg(3883) 3 days ago [-]

This is big news.

I've used similar proprietary libraries before, and virtual operations can be really powerful

maartenbreddels(3673) 3 days ago [-]

Thank you, yes they give much more flexibility: optimization (JIT), derivatives, checking your calculations afterwards, sending them to a remote server etc. Glad you like that :)

themmes(10000) 3 days ago [-]

First of all, great to see more powertools to choose from for my ds workflow!

However, I am suprised to see no mention of Dask in the article. How do these libraries compare?

maartenbreddels(3673) 3 days ago [-]

Dask and vaex are not 'competing', they are orthogonal. Vaex could use dask to do the computations, but when this part of vaex was built, dask didn't exist. I recently tried using dask, instead of vaex' internal computation model, but it gave a serious performance hit.

There is some overlap with dask.dataframe; I think they are closer to pandas than vaex is. Vaex has a strong focus on large datasets, statistics on N-d grids and visualization as well. For instance, calculating a 2d histogram for a billion rows can be done in < 1 second, which can be used for visualization or exploration. The expression system is really nice, it allows you to store the computations itself, calculate gradients, do Just-In-Time compilation, and will be the backbone for our automatic pipelines for machine learning. So vaex feels like Pandas for the basics, but adds new ideas that are useful for really large datasets.

JPKab(3723) 2 days ago [-]

Such phenomenal work.

BTW, for anyone on a Windows machine, getting this to work is not trivial.

There is a unix-only library for locking files (fcntl) which prevents it from working on Windows. I mocked it out with a function that returns 0 to test it.

Obviously adding a check for os and switching to a cross platform file locker would be a great contribution. I'll see if I can make that happen in the next week.

maartenbreddels(3673) 2 days ago [-]

There is an issue open for this. It should have been fixed; a more detailed report (installed version numbers) would be good to have.

Historical Discussions: Show HN: CHIP-8 console implemented in FPGA (December 15, 2018: 113 points)

Show HN: CHIP-8 console implemented in FPGA

117 points 1 day ago by pwmarcz in 10000th position | Estimated reading time – 5 minutes | comments

CHIP-8 console on FPGA

This is a CHIP-8 game console emulator running on an FPGA chip (TinyFPGA BX).

Implementation notes and remarks

Writing unit tests (see cpu, gpu, bcd) helped me a lot, I was able to test most instructions in simulation. I wrote several simple assembly programs which I compiled to CHIP-8 using the Tortilla-8 project.

There was also some manual testing needed in order to get both the screen and the keypad to work. I wrote some simple programs to run on the chip, and corrected some misconceptions I had about CHIP-8 behavior (for instance, memory loads and stores include multiple registers, and sprite drawing wraps around). I was able to correct my Rust CHIP-8 emulator in the process; it's funny how it was able to run many games despite getting these things completely wrong.

The CHIP-8 specification includes 16 one-byte registers (V0 to VF) and 20 two-byte words of stack. Initially I wanted them to be separate arrays, but it was really hard to coordinate access so that they get synthesized as RAM, so I decided to map them to memory instead (see the memory map described at the top of cpu.v).

I also mapped screen contents to system memory, so two different modules (CPU and screen) had to compete for memory access somehow. I decided to pause the CPU (i.e. not make it process the next instruction) whenever we want to read the screen contents.

Working with the screen was annoying. I'm storing the frame contents line by line (each byte is an 8-pixel horizontal strip), but the screen I'm using expects vertical strips of 8 pixels, so some logic is necessary to rotate that around.
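The rotation described here is essentially an 8x8 bit transpose: a block stored as eight horizontal byte strips has to be re-emitted as eight vertical byte strips. A naive Python model of that operation (the bit-ordering convention below is an assumption for illustration, not taken from the project):

```python
def transpose8(rows):
    """Toy model: transpose an 8x8 pixel block.

    rows: 8 bytes, where bit c (LSB = column 0) of rows[r] is pixel (r, c).
    Returns 8 bytes of vertical strips: bit r of cols[c] is pixel (r, c)."""
    cols = [0] * 8
    for r in range(8):
        for c in range(8):
            if (rows[r] >> c) & 1:
                cols[c] |= 1 << r
    return cols
```

Applying the transpose twice returns the original block, which makes it easy to sanity-check in a testbench.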

The BCD instruction (convert a byte to 3 decimal digits) was difficult to implement in circuitry since it involves dividing by 10. I went with a method described in Hacker's Delight, which involves some bit shifts and manual adjustments. See bcd.v.
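The shift-and-add-3 ("double dabble") technique avoids a divider circuit entirely. Here is a software model of the general method (a sketch of the textbook algorithm, not a transcription of bcd.v):

```python
def bcd(value):
    """Toy model of shift-and-add-3 ('double dabble') BCD conversion:
    turn a byte into 3 decimal digits using only shifts and small adds."""
    scratch = 0                      # 12 bits: hundreds | tens | ones
    for i in range(7, -1, -1):
        # before each shift, add 3 to any BCD digit that is >= 5
        for shift in (0, 4, 8):
            if ((scratch >> shift) & 0xF) >= 5:
                scratch += 3 << shift
        # shift in the next bit of the input, MSB first
        scratch = (scratch << 1) | ((value >> i) & 1)
    return (scratch >> 8) & 0xF, (scratch >> 4) & 0xF, scratch & 0xF
```

In hardware this maps to one small adjust-and-shift stage per input bit, which is why it synthesizes cheaply.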

Loading games into memory was interesting. The IceStorm tools include a nice icebram utility for replacing memory contents in an already prepared bitstream. This means I don't have to repeat the whole lengthy (~30s) build when I just want to run a different game.

The default clock speed on the BX is 16 MHz, which is way too fast for CHIP-8 games. So while the individual instructions run really quickly, I throttle down the overall speed to 500 instructions per second.
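Assuming a plain cycle-counting divider (the project's actual throttling mechanism may differ), the arithmetic works out like this:

```python
# Toy model: throttle a 16 MHz clock to ~500 CHIP-8 instructions/second
# with a simple cycle counter.
CLOCK_HZ = 16_000_000
TARGET_IPS = 500
CYCLES_PER_INSTRUCTION = CLOCK_HZ // TARGET_IPS   # 32000 cycles per step

def should_step(cycle_count):
    # start one instruction each time the counter wraps around
    return cycle_count % CYCLES_PER_INSTRUCTION == 0
```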

I added a 'debug mode' that activates when you press 1 and F at the same time. Instead of showing the screen buffer, I'm rendering registers, stack and (some of) program memory instead. It's fun to watch :)

Random number generation uses Xorshift. It calculates a new value every cycle, and iterates the calculation twice if any key is pressed so that the results depend on user input.
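A software model of a 32-bit Xorshift step with the classic 13/17/5 shift constants (the constants used in rng.v are an assumption here), including the double iteration on key press:

```python
def xorshift32(state):
    """Toy model of one 32-bit Xorshift step (classic 13/17/5 shifts)."""
    state ^= (state << 13) & 0xFFFFFFFF
    state ^= state >> 17
    state ^= (state << 5) & 0xFFFFFFFF
    return state & 0xFFFFFFFF

def next_random(state, key_pressed):
    # iterate twice when a key is held, so the sequence depends on user input
    state = xorshift32(state)
    if key_pressed:
        state = xorshift32(state)
    return state
```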

Finally, I'm very new to Verilog and so the project is somewhat awkward:

  • An individual instruction takes about 20 cycles.
  • Memory access follows a 'read -> acknowledge' cycle with an unnecessary single-cycle pause due to how the state automatons are specified.
  • iCE40 memory is dual port, so reads and writes could happen at the same time; I'm not taking advantage of that.
  • The whole project takes up 1600+ LUTs, I'm sure it could be packed down to less than 1000, then it could fit on an iCEstick.

I would love to hear from you if you have any advice on the code - just email me or add an issue to the project. Of course, pull requests are also welcome!


I'm using the following:

Source code outline

Verilog modules:

  • chip8.v - top-level module for TinyFPGA BX
  • cpu.v - CPU with memory controller
  • mem.v - system memory
  • gpu.v - sprite drawing
  • bcd.v - BCD conversion circuit (byte to 3 decimal digits)
  • rng.v - pseudorandom number generator
  • screen_bridge.v - bridge between OLED and CPU (to access frame buffer in system memory)


  • *_tb.v - test-benches for modules (see below on how to run)
  • asm/ - various assembly programs


The fpga-tools repo is included as a submodule:

  • - Makefile targets
  • components/oled.v - OLED screen
  • components/keypad.v - keypad

Running the project



By Paweł Marczewski [email protected].

Licensed under MIT (see LICENSE), except the games directory.

See also

All Comments: [-]

jerrysievert(3804) 1 day ago [-]

I think that the best part of this for me is that they managed to map pins 5-12 to match the screen SPI pins. I suppose that once you have a ground pin, the rest can be mapped in software (with the power being a high output), but it's still pretty cool to see.

pwmarcz(10000) about 15 hours ago [-]

Author here, thanks! What I'm doing in the code is outputting 1 to the VCC pin and 0 to GND, and that seems to be enough:

arcticbull(3804) 1 day ago [-]

Hah, cool, I'm working on one right now too. Got most of the way through it, it's an awesome learning experience about what you can achieve with FPGAs and what the limitations are. Especially implementing the 'complex' instructions like drawing.

Koshkin(3950) 1 day ago [-]

The way it's usually done now is, people simply go with implementing some kind of a PSM ('programmable state machine'), which is essentially a tiny microprocessor, and then let the program do the, say, drawing (by either setting bits in video memory, sending commands to an SPI display, or directly generating VGA signals).

dancek(10000) about 20 hours ago [-]

Thanks for sharing! I spent one day this autumn getting the open source toolchain working and dimming the onboard led on an upduino. Turns out testing on hardware when you don't know what you're doing is cumbersome (surprising!). Your codebase (hardware definition base?) and tests will make good reading material.

pwmarcz(10000) about 15 hours ago [-]

That's great to hear! Just don't take my code as an example of any good practices, I have no background in HDLs and this is my first big project, so it's all very hacky.

Historical Discussions: Show HN: Egeria, a multidimensional spreadsheet for everybody (December 11, 2018: 112 points)

Show HN: Egeria, a multidimensional spreadsheet for everybody

112 points 5 days ago by egeria_planning in 3869th position | Estimated reading time – 6 minutes | comments

What is Egeria Spreadsheets?

Egeria Spreadsheets is a collaborative multidimensional web-based spreadsheet service. It is designed to simplify creation and maintenance of large worksheets with complex calculations. Egeria can be used to quickly implement a wide range of planning, budgeting or reporting solutions, create financial models and perform what-if simulations.

Why another spreadsheet application?

I am working as an IT contractor (software development) for almost 10 years now. It mostly involves creating custom applications for non-techies (people from controlling, marketing, finance and so on) who work for large companies. Here are some observations I have made so far:

  • Spreadsheet is the most used (sometimes overused) tool among the non technical people
  • Some of the projects I did were literally: We have a bunch of very complex spreadsheets here and we cannot maintain them any more. Could you please make a web application with the same functionality.
  • Spreadsheets are preferred over custom applications when the requirements are changing very quickly: sometimes people from controlling or marketing departments cannot wait the 4-8 weeks till their IT department implements a change request. Copy-pasting spreadsheet formulas is often faster than overcoming bureaucracy in an enterprise environment.

Here are the goals behind the Egeria system:

  • allow large and complex spreadsheets to remain maintainable
  • the majority of people familiar with a traditional spreadsheet application should ideally be able to use it without special training

Key differences to traditional spreadsheets

  • Multidimensional data model: worksheets are organized by business entities (SKUs, departments, years, months, scenarios and so on). The data is stored in a more structured way which has many benefits like simple and robust computations across multiple worksheets.
  • Robust formulas: while Egeria's formulas are very similar to the formulas from common spreadsheet applications, there are two major differences:
    • The cell reference syntax is a bit more complex to make computations along multiple dimensions possible
    • Formulas cannot be copied. Instead one defines an 'area of effect' for each formula. With a properly defined area of effect a formula will still function correctly when new dimension elements (or new dimensions) are added.
  • Web application: a single document can be viewed and edited by hundreds of users simultaneously
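As a toy illustration of the multidimensional model and the "area of effect" idea (the dimension names and rules below are entirely hypothetical, not Egeria's actual formula language): each cell is addressed by one coordinate per dimension, and a formula applies to every coordinate combination its area of effect matches, so it keeps working when new elements are added.

```python
# Toy sketch of a multidimensional spreadsheet cell store.
cells = {}   # (sku, month, scenario) -> value

cells[('widget', 'jan', 'actual')] = 100.0
cells[('widget', 'jan', 'plan')] = 120.0

def area_of_effect(coords):
    # hypothetical rule: this formula fills every 'variance' cell
    sku, month, scenario = coords
    return scenario == 'variance'

def formula(coords):
    # variance = actual - plan, for the same (sku, month)
    sku, month, _ = coords
    return cells[(sku, month, 'actual')] - cells[(sku, month, 'plan')]

# applying the formula over its area of effect; adding new SKUs or months
# to these lists would extend the computation without copying the formula
for sku in ['widget']:
    for month in ['jan']:
        coords = (sku, month, 'variance')
        if area_of_effect(coords):
            cells[coords] = formula(coords)
```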

Key differences to OLAP-based business planning software

There are several products for enterprise planning and budgeting with a multidimensional data model. Egeria is different in the following ways:

  • Egeria is not specialized for a certain task (like budgeting). It is more of a spreadsheet with a multidimensional data model.
  • Egeria should be easier to use for people familiar with traditional spreadsheet applications.


I am working on the following features (which I think are critical for an MVP) at the moment:

  • User authentication and authorization: It will be possible to restrict the part of a cube a user can view or edit. Filters/rows/columns would show different items depending on privileges granted to the logged-on user (e.g., a user from a certain region would only see points of sale from his region).
  • Improving data import and export capabilities
  • Documentation: Egeria's cell formatting and formula concepts are very versatile. Apart from computations it can also be used for input validation and definition of a workflow process (e.g., by using a hypersheet with checkboxes to submit/reject input data for a certain period). These functions should be documented and explained with examples.

I have started this website before the commercial release to better understand the demand for such a system and to learn the needs of potential users. I also hope to find some pilot users. Here are the main concerns I have right now:

  • Multidimensional data model: I was fascinated by the modeling power of a multidimensional spreadsheet when I first started working on Egeria. A wide range of problems which are now solved by custom software systems (and months of work for skilled programmers and database engineers) can be solved in a matter of hours or days with a multidimensional spreadsheet. On the other hand, things can get really complicated when a spreadsheet stops being flat. While showing the project to my friends I noticed that it is really hard for most people to imagine cells in a multidimensional space. I have read the story of Lotus Improv, but I still hope that things are different 30 years later. What do you think about the chances of adoption of a multidimensional spreadsheet?
  • Formula language: I was asked why I do not use MDX or DAX or another already existing language. Why invent something new? I think that MDX queries are not much simpler than SQL. It is still a kind of programming language which is only used by specialists. On the other hand, everybody understands spreadsheet formulas: I click on a cell here, press '+', then I click on the other cell and it does what I want. Egeria formulas work in a similar fashion. Formulas in most of the cells are simple. Complex logic can be implemented with simple steps by storing interim results in their own cells (you can have a lot of them in an n-dimensional spreadsheet). This also makes the computation logic much more transparent. So what do you think about the formula language?
  • Use cases: I have collected several example models inside the demo application. There are screencasts which explain them. So far I have financial planning, real estate valuation and project management. Do you have a use case for Egeria? Can you explain it?
  • Features: Do you think some important features are missing? What do you think should be included in an MVP so you could use it in your company?
  • Deployment: Should it be a cloud application? Would you prefer an on-premise installation? Does a single user desktop version make sense for you?
  • Marketing strategy: I am trying to develop a marketing strategy at the moment. Do you have a piece of advice for me? What is the best approach for selling business software for a startup with limited financial resources? I would also be happy to find partners who would help me on the sales side.
  • Other thoughts?

Please use the anonymous feedback button inside the application or drop me a mail. You can also post your thoughts in the Google group.

Please also mail me if you'd like to schedule a demo or if you'd like to install Egeria on your own machine.

All Comments: [-]

infinite8s(3086) 5 days ago [-]

Nice, this looks pretty similar to Quantrix Modeler and Javelin, its precursor. Sadly, Javelin suffered from MS's shenanigans with Excel bundling. I was wondering when someone would reimplement those two on the modern web stack. Before the web became popular for these types of apps I had prototyped something similar using pandas and PyQt.

qwerty456127(3980) 4 days ago [-]

> Before the web became popular for these types of apps I had prototyped something similar using pandas and PyQt.

Haven't you open-sourced your prototypes? A hackable desktop app made with PyQt, exposing Pandas functionality in a nice GUI, seems way more appealing to me than yet another heavy web application that depends on a third-party server and requires an Internet connection to use.

fiatjaf(1758) 4 days ago [-]

Most amazing thing ever. This tool will completely destroy online database apps that cost a fortune like Airtable.

hermitcrab(3143) 3 days ago [-]

Airtable is around $10/person/month. I wouldn't consider that a fortune.

nivenhuh(10000) 5 days ago [-]

How many rows can you have in your spreadsheet before performance starts to become an issue? (I noticed the table isn't lazy-rendered, which sparked the question.)

egeria_planning(3869) 5 days ago [-]

A single view should not get too large; the system will not show more than 60000 cells in one view (you will get an error message). The cube itself can grow very large: 10-100 million filled cells. The trick is to design the dimensions so that only a small portion of the data is visible at once. You can group large dimensions by a column to build a hierarchy (category/model/sku or region/point of sale), so only a portion of the data is shown at a time.

oehpr(10000) 5 days ago [-]

I'm investigating this right now; I can't tell you much, as I don't understand some of the fundamentals yet.

I have to say, at first glance, I like the idea of this tool a lot. I normally use lightweight spreadsheets like gnumeric when I feel the need to use them at all, and I'd love a more expressive kind of spreadsheet/database/pivot table like this.

At the outset however, I'd be concerned about integrating a tool into my workflow that, most dangerously, is a service that could disappear and take my work with it. Second most dangerously, is proprietary. Third most dangerously, lacks a community beyond the initial dev. Fourth, if it is a service, is not a 'zero knowledge' service.

So while I like the look of this tool very much, and I'm going through the tutorials and playing around with it, personally I see this as a difficult sell. At least for me. But... I dunno, I don't think I'm the kind of demographic that moves markets.

You might see some kinds of success with this by emulating the gitlab model. open source the central components of it for personal use, and try to get some companies looking at it with enterprise-y features?

gnat(10000) 5 days ago [-]

Agreed, those four barriers are barriers for me as well. However, I'd note that enterprise startups all face and must overcome the same obstacles. (I'm sure you know this, just stating it for others who might not)

In particular, open source is not a sine qua non. Some enterprise companies open source a figleaf to assuage proprietary concerns, while others simply stay proprietary. If you're a product manager and want to use Aha, you just have to use Aha and go with their pricing tiers (including 30 day free trial) and claims of security. There's no open source Aha. That hasn't stopped them from being popular.

Funding and courting early adopters can go a long way towards neutralising 1 ($ and momentum imply a longer lifetime) and 3 ($ implies the ability to hire other devs). Nothing ever removes the risk of a company cancelling a product, but that's the nature of products even from BigCos like Google and Microsoft. Even open source can be abandoned.

egeria_planning(3869) 5 days ago [-]

I've got several mails from people who'd like to install the system on their own machines. Installing Egeria is not as easy as a desktop application because it is a web server. I will probably add a download page with installation instructions in the next weeks.

polskibus(1325) 5 days ago [-]

Couple of questions:

Is this an Anaplan competitor? If so, what are your key differences, value props?

What are you using as persistence layer? Postgresql? Mysql? Commercial? How well would it compose with existing ETL and BI tools? What is the concurrency model, especially with regard to the planned permissioning model? What happens when two differently permissioned users run the same formulas?

egeria_planning(3869) 5 days ago [-]

- I am not really familiar with Anaplan. You could probably compare them, but as far as I understand Anaplan is a more specialized (planning) tool.
- Egeria is a database itself. I use Sqlite as a key value store for persistence now. I will probably use Mongo in production.
- I will provide a command line tool to automate import and export with ETL tools in CSV format first. A better integration with relational databases would probably come later on.
- All formulas are computed on the server, so users don't run them. Only superusers/designers will be able to change formulas. A normal user will only be able to enter values and change metadata. The concurrency is implemented using locks and in-memory data structures on the server. Open two tabs on the same sheet and try changing values in one of the tabs to see how it works.

benj111(10000) 5 days ago [-]

Could you explain what you mean by multidimensional.

My first thought was that all spreadsheets are at least 2D, some like SC are explicitly 3D, but the write up seems to suggest something different?

'Multidimensional data model: worksheets are organized by business entities (SKUs, departments, years, months, scenarios and so on). The data is stored in a more structured way which has many benefits like simple and robust computations across multiple worksheets '

abathur(10000) 5 days ago [-]

The tutorial videos were useful for putting a handle on what it means.

egeria_planning(3869) 5 days ago [-]

A multidimensional sheet is like a tensor (a multidimensional matrix). You have a cell for every coordinate combination of its axes. You can imagine a 3-D sheet as a stack of 2-D sheets. You can also check this one out:

egeria_planning(3869) 5 days ago [-]

The laptop screen is 2D, so you always see only one slice of the spreadsheet. You can choose which slice you see using the filters on the left.

DEADBEEFC0FFEE(10000) 5 days ago [-]

You mention that Excel is used by non-technical people (which is a generalisation), and then I see many comments on HN seeking clarification. It seems to be a complex product. What's the target market?

egeria_planning(3869) 5 days ago [-]

That's exactly what I try to understand now. I think that people who work with OLAP now should be able to use the system. But I also hope that some percentage of non-database people would also be able to learn it. I don't think that the system is self-explanatory, but watching a 10-minute tutorial video could explain a lot.

gregburd(10000) 5 days ago [-]

Would you consider this in the spirit of Lotus Improv, but for the web?

egeria_planning(3869) 5 days ago [-]

From what I have heard of it: yes. I hope the formula language is also a bit better. AFAIK Improv could only have one formula per dimension element. Egeria allows formulas on any subspace inside the cube.

maximilianroos(4000) 5 days ago [-]

This is awesome - great job

One feature that would be immediately interesting for a specific userbase is to be able to import existing multi-dimensional arrays - e.g. netCDF / Zarr

There aren't tools available for being able to view those in a reasonable way currently (generally I'd use a REPL and be slicing & printing 2D chunks)

So even if it were import-only, this would be v useful

egeria_planning(3869) 5 days ago [-]

I was surprised that there is no common file standard for interchanging dimensional data. As far as I can see, netCDF and Zarr only store values/arrays and not the metadata? I will check if there is an easy way to import one of them.

tudelo(10000) 4 days ago [-]

To be fair, there are SOME tools for reading netCDF files (there is netCDF4 for Python, for example), but it's not all that great out of the box: you have to write your own method to walk through all variables etc. Kind of a pain. ncdump helps too.
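The "walk through all variables" chore mentioned above can be sketched like this. The walker assumes the interface netCDF4's `Dataset`/`Group` objects expose (a `.variables` dict plus a `.groups` dict of nested groups); since any object with those two attributes works, the demo below uses a hypothetical stand-in class instead of a real `.nc` file.

```python
# Sketch: recursively collect every variable in a netCDF-style group tree.
# Assumes the netCDF4 Dataset interface (.variables and .groups dicts).

def walk_variables(group, prefix=""):
    """Yield (path, variable) pairs for all variables, including nested groups."""
    for name, var in group.variables.items():
        yield prefix + name, var
    for name, sub in group.groups.items():
        yield from walk_variables(sub, prefix + name + "/")

class FakeGroup:
    """Stand-in for netCDF4.Dataset/Group, used here instead of a real file."""
    def __init__(self, variables=None, groups=None):
        self.variables = variables or {}
        self.groups = groups or {}

root = FakeGroup(
    variables={"time": "time-var"},
    groups={"ocean": FakeGroup(variables={"salinity": "salinity-var"})},
)

paths = dict(walk_variables(root))
```

With the real library you would pass `netCDF4.Dataset("file.nc")` as `root`; the recursion is what you otherwise end up hand-writing each time.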

aldoushuxley001(10000) 5 days ago [-]

What is your business model? How will you make money from Egeria?

egeria_planning(3869) 5 days ago [-]

I plan to have a free version with some limitations (like the size of the model) and a paid version. Probably both on-premises and cloud.

aldoushuxley001(10000) 5 days ago [-]

How do you ensure the data is kept safe and private?

egeria_planning(3869) 5 days ago [-]

At the moment I don't. It's a beta test. The commercial version will support two-factor authentication. I think a local installation on the client's server/workstation would be the best option for very sensitive data.

riffraff(1031) 5 days ago [-]

Egeria is the name of a nymph associated with a spring in Rome, and my favourite naturally sparkling water.

It was super weird seeing it on HN.

egeria_planning(3869) 5 days ago [-]

I was just looking for a nice name, which is not used much on the web. :) Maybe the name will change later...

penetrarthur(3835) 5 days ago [-]

Doesn't work on a Pixel 2 XL. It says Egeria requires at least 1280x400 pixels to display correctly.

egeria_planning(3869) 5 days ago [-]

It is a desktop only application at the moment. The mobile version will come later.

Historical Discussions: Why Design Thinking Works (September 02, 2018: 1 points)

Why Design Thinking Works

92 points about 5 hours ago by helloworld in 875th position | Estimated reading time – 19 minutes | comments

Idea in Brief

The Problem
While we know a lot about what practices stimulate new ideas and creative solutions, most innovation teams struggle to realize their benefits.

The Cause
People's intrinsic biases and behavioral habits inhibit the exercise of the imagination and protect unspoken assumptions about what will or will not work.

The Solution
Design thinking provides a structured process that helps innovators break free of counterproductive tendencies that thwart innovation. Like TQM, it is a social technology that blends practical tools with insights into human nature.

Occasionally, a new way of organizing work leads to extraordinary improvements. Total quality management did that in manufacturing in the 1980s by combining a set of tools—kanban cards, quality circles, and so on—with the insight that people on the shop floor could do much higher level work than they usually were asked to. That blend of tools and insight, applied to a work process, can be thought of as a social technology.

In a recent seven-year study in which I looked in depth at 50 projects from a range of sectors, including business, health care, and social services, I have seen that another social technology, design thinking, has the potential to do for innovation exactly what TQM did for manufacturing: unleash people's full creative energies, win their commitment, and radically improve processes. By now most executives have at least heard about design thinking's tools—ethnographic research, an emphasis on reframing problems and experimentation, the use of diverse teams, and so on—if not tried them. But what people may not understand is the subtler way that design thinking gets around the human biases (for example, rootedness in the status quo) or attachments to specific behavioral norms ("That's how we do things here") that time and again block the exercise of imagination.

In this article I'll explore a variety of human tendencies that get in the way of innovation and describe how design thinking's tools and clear process steps help teams break free of them. Let's begin by looking at what organizations need from innovation—and at why their efforts to obtain it often fall short.

The Challenges of Innovation

To be successful, an innovation process must deliver three things: superior solutions, lower risks and costs of change, and employee buy-in. Over the years businesspeople have developed useful tactics for achieving those outcomes. But when trying to apply them, organizations frequently encounter new obstacles and trade-offs.

Superior solutions.

Defining problems in obvious, conventional ways, not surprisingly, often leads to obvious, conventional solutions. Asking a more interesting question can help teams discover more-original ideas. The risk is that some teams may get indefinitely hung up exploring a problem, while action-oriented managers may be too impatient to take the time to figure out what question they should be asking.

It's also widely accepted that solutions are much better when they incorporate user-driven criteria. Market research can help companies understand those criteria, but the hurdle here is that it's hard for customers to know they want something that doesn't yet exist.

Finally, bringing diverse voices into the process is also known to improve solutions. This can be difficult to manage, however, if conversations among people with opposing views deteriorate into divisive debates.

Lower risks and costs.

Uncertainty is unavoidable in innovation. That's why innovators often build a portfolio of options. The trade-off is that too many ideas dilute focus and resources. To manage this tension, innovators must be willing to let go of bad ideas—to "call the baby ugly," as a manager in one of my studies described it. Unfortunately, people often find it easier to kill the creative (and arguably riskier) ideas than to kill the incremental ones.

Employee buy-in.

An innovation won't succeed unless a company's employees get behind it. The surest route to winning their support is to involve them in the process of generating ideas. The danger is that the involvement of many people with different perspectives will create chaos and incoherence.

Underlying the trade-offs associated with achieving these outcomes is a more fundamental tension. In a stable environment, efficiency is achieved by driving variation out of the organization. But in an unstable world, variation becomes the organization's friend, because it opens new paths to success. However, who can blame leaders who must meet quarterly targets for doubling down on efficiency, rationality, and centralized control?

To manage all the trade-offs, organizations need a social technology that addresses these behavioral obstacles as well as the counterproductive biases of human beings. And as I'll explain next, design thinking fits that bill.

The Beauty of Structure

Experienced designers often complain that design thinking is too structured and linear. And for them, that's certainly true. But managers on innovation teams generally are not designers and also aren't used to doing face-to-face research with customers, getting deeply immersed in their perspectives, co-creating with stakeholders, and designing and executing experiments. Structure and linearity help managers try and adjust to these new behaviors.

As Kaaren Hanson, formerly the head of design innovation at Intuit and now Facebook's design product director, has explained: "Anytime you're trying to change people's behavior, you need to start them off with a lot of structure, so they don't have to think. A lot of what we do is habit, and it's hard to change those habits, but having very clear guardrails can help us."

Organized processes keep people on track and curb the tendency to spend too long exploring a problem or to impatiently skip ahead. They also instill confidence. Most humans are driven by a fear of mistakes, so they focus more on preventing errors than on seizing opportunities. They opt for inaction rather than action when a choice risks failure. But there is no innovation without action—so psychological safety is essential. The physical props and highly formatted tools of design thinking deliver that sense of security, helping would-be innovators move more assuredly through the discovery of customer needs, idea generation, and idea testing.

In most organizations the application of design thinking involves seven activities. Each generates a clear output that the next activity converts to another output until the organization arrives at an implementable innovation. But at a deeper level, something else is happening—something that executives generally are not aware of. Though ostensibly geared to understanding and molding the experiences of customers, each design-thinking activity also reshapes the experiences of the innovators themselves in profound ways.

Customer Discovery

Many of the best-known methods of the design-thinking discovery process relate to identifying the "job to be done." Adapted from the fields of ethnography and sociology, these methods concentrate on examining what makes for a meaningful customer journey rather than on the collection and analysis of data. This exploration entails three sets of activities:

Immersion.
Traditionally, customer research has been an impersonal exercise. An expert, who may well have preexisting theories about customer preferences, reviews feedback from focus groups, surveys, and, if available, data on current behavior, and draws inferences about needs. The better the data, the better the inferences. The trouble is, this grounds people in the already articulated needs that the data reflects. They see the data through the lens of their own biases. And they don't recognize needs people have not expressed.


Design thinking takes a different approach: Identify hidden needs by having the innovator live the customer's experience. Consider what happened at the Kingwood Trust, a UK charity helping adults with autism and Asperger's syndrome. One design team member, Katie Gaudion, got to know Pete, a nonverbal adult with autism. The first time she observed him at his home, she saw him engaged in seemingly damaging acts—like picking at a leather sofa and rubbing indents in a wall. She started by documenting Pete's behavior and defined the problem as how to prevent such destructiveness.

But on her second visit to Pete's home, she asked herself: What if Pete's actions were motivated by something other than a destructive impulse? Putting her personal perspective aside, she mirrored his behavior and discovered how satisfying his activities actually felt. "Instead of a ruined sofa, I now perceived Pete's sofa as an object wrapped in fabric that is fun to pick," she explained. "Pressing my ear against the wall and feeling the vibrations of the music above, I felt a slight tickle in my ear whilst rubbing the smooth and beautiful indentation...So instead of a damaged wall, I perceived it as a pleasant and relaxing audio-tactile experience."

Katie's immersion in Pete's world not only produced a deeper understanding of his challenges but called into question an unexamined bias about the residents, who had been perceived as disability sufferers that needed to be kept safe. Her experience caused her to ask herself another new question: Instead of designing just for residents' disabilities and safety, how could the innovation team design for their strengths and pleasures? That led to the creation of living spaces, gardens, and new activities aimed at enabling people with autism to live fuller and more pleasurable lives.

Sense making.

Immersion in user experiences provides raw material for deeper insights. But finding patterns and making sense of the mass of qualitative data collected is a daunting challenge. Time and again, I have seen initial enthusiasm about the results of ethnographic tools fade as nondesigners become overwhelmed by the volume of information and the messiness of searching for deeper insights. It is here that the structure of design thinking really comes into its own.

One of the most effective ways to make sense of the knowledge generated by immersion is a design-thinking exercise called the Gallery Walk. In it the core innovation team selects the most important data gathered during the discovery process and writes it down on large posters. Often these posters showcase individuals who have been interviewed, complete with their photos and quotations capturing their perspectives. The posters are hung around a room, and key stakeholders are invited to tour this gallery and write down on Post-it notes the bits of data they consider essential to new designs. The stakeholders then form small teams, and in a carefully orchestrated process, their Post-it observations are shared, combined, and sorted by theme into clusters that the group mines for insights. This process overcomes the danger that innovators will be unduly influenced by their own biases and see only what they want to see, because it makes the people who were interviewed feel vivid and real to those browsing the gallery. It creates a common database and facilitates collaborators' ability to interact, reach shared insights together, and challenge one another's individual takeaways—another critical guard against biased interpretations.

Alignment.
The final stage in the discovery process is a series of workshops and seminar discussions that ask in some form the question, If anything were possible, what job would the design do well? The focus on possibilities, rather than on the constraints imposed by the status quo, helps diverse teams have more-collaborative and creative discussions about the design criteria, or the set of key features that an ideal innovation should have. Establishing a spirit of inquiry deepens dissatisfaction with the status quo and makes it easier for teams to reach consensus throughout the innovation process. And down the road, when the portfolio of ideas is winnowed, agreement on the design criteria will give novel ideas a fighting chance against safer incremental ones.

Consider what happened at Monash Health, an integrated hospital and health care system in Melbourne, Australia. Mental health clinicians there had long been concerned about the frequency of patient relapses—usually in the form of drug overdoses and suicide attempts—but consensus on how to address this problem eluded them. In an effort to get to the bottom of it, clinicians traced the experiences of specific patients through the treatment process. One patient, Tom, emerged as emblematic in their study. His experience included three face-to-face visits with different clinicians, 70 touchpoints, 13 different case managers, and 18 handoffs during the interval between his initial visit and his relapse.

The team members held a series of workshops in which they asked clinicians this question: Did Tom's current care exemplify why they had entered health care? As people discussed their motivations for becoming doctors and nurses, they came to realize that improving Tom's outcome might depend as much on their sense of duty to Tom himself as it did on their clinical activity. Everyone bought into this conclusion, which made designing a new treatment process—centered on the patient's needs rather than perceived best practices—proceed smoothly and successfully. After its implementation, patient-relapse rates fell by 60%.

Idea Generation

Once they understand customers' needs, innovators move on to identify and winnow down specific solutions that conform to the criteria they've identified.

Emergence.
The first step here is to set up a dialogue about potential solutions, carefully planning who will participate, what challenge they will be given, and how the conversation will be structured. After using the design criteria to do some individual brainstorming, participants gather to share ideas and build on them creatively—as opposed to simply negotiating compromises when differences arise.

When Children's Health System of Texas, the sixth-largest pediatric medical center in the United States, identified the need for a new strategy, the organization, led by the vice president of population health, Peter Roberts, applied design thinking to reimagine its business model. During the discovery process, clinicians set aside their bias that what mattered most was medical intervention. They came to understand that intervention alone wouldn't work if the local population in Dallas didn't have the time or ability to seek out medical knowledge and didn't have strong support networks—something few families in the area enjoyed. The clinicians also realized that the medical center couldn't successfully address problems on its own; the community would need to be central to any solution. So Children's Health invited its community partners to codesign a new wellness ecosystem whose boundaries (and resources) would stretch far beyond the medical center. Deciding to start small and tackle a single condition, the team gathered to create a new model for managing asthma.

The session brought together hospital administrators, physicians, nurses, social workers, parents of patients, and staff from Dallas's school districts, housing authority, YMCA, and faith-based organizations. First, the core innovation team shared learning from the discovery process. Next, each attendee thought independently about the capabilities that his or her institution might contribute toward addressing the children's problems, jotting down ideas on sticky notes. Then each attendee was invited to join a small group at one of five tables, where the participants shared individual ideas, grouped them into common themes, and envisioned what an ideal experience would look like for the young patients and their families.

Champions of change usually emerge from these kinds of conversations, which greatly improves the chances of successful implementation. (All too often, good ideas die on the vine in the absence of people with a personal commitment to making them happen.) At Children's Health, the partners invited into the project galvanized the community to act and forged and maintained the relationships in their institutions required to realize the new vision. Housing authority representatives drove changes in housing codes, charging inspectors with incorporating children's health issues (like the presence of mold) into their assessments. Local pediatricians adopted a set of standard asthma protocols, and parents of children with asthma took on a significant role as peer counselors providing intensive education to other families through home visits.

Articulation.
Typically, emergence activities generate a number of competing ideas, more or less attractive and more or less feasible. In the next step, articulation, innovators surface and question their implicit assumptions. Managers are often bad at this, because of many behavioral biases, such as overoptimism, confirmation bias, and fixation on first solutions. When assumptions aren't challenged, discussions around what will or won't work become deadlocked, with each person advocating from his or her own understanding of how the world works.

In contrast, design thinking frames the discussion as an inquiry into what would have to be true about the world for an idea to be feasible. (See "Management Is Much More Than a Science," by Roger L. Martin and Tony Golsby-Smith, HBR, September–October 2017.) An example of this comes from the Ignite Accelerator program of the U.S. Department of Health and Human Services. At the Whiteriver Indian reservation hospital in Arizona, a team led by Marliza Rivera, a young quality control officer, sought to reduce wait times in the hospital's emergency room, which were sometimes as long as six hours.

The team's initial concept, borrowed from Johns Hopkins Hospital in Baltimore, was to install an electronic kiosk for check-in. As team members began to apply design thinking, however, they were asked to surface their assumptions about why the idea would work. It was only then that they realized that their patients, many of whom were elderly Apache speakers, were unlikely to be comfortable with computer technology. Approaches that worked in urban Baltimore would not work in Whiteriver, so this idea could be safely set aside.

At the end of the idea generation process, innovators will have a portfolio of well-thought-through, though possibly quite different, ideas. The assumptions underlying them will have been carefully vetted, and the conditions necessary for their success will be achievable. The ideas will also have the support of committed teams, who will be prepared to take on the responsibility of bringing them to market.

The Testing Experience

Companies often regard prototyping as a process of fine-tuning a product or service that has already largely been developed. But in design thinking, prototyping is carried out on far-from-finished products. It's about users' iterative experiences with a work in progress. This means that quite radical changes—including complete redesigns—can occur along the way.


Neuroscience research indicates that helping people "pre-experience" something novel—or to put it another way, imagine it incredibly vividly—results in more-accurate assessments of the novelty's value. That's why design thinking calls for the creation of basic, low-cost artifacts that will capture the essential features of the proposed user experience. These are not literal prototypes—and they are often much rougher than the "minimum viable products" that lean start-ups test with customers. But what these artifacts lose in fidelity, they gain in flexibility, because they can easily be altered in response to what's learned by exposing users to them. And their incompleteness invites interaction.

Such artifacts can take many forms. The layout of a new medical office building at Kaiser Permanente, for example, was tested by hanging bedsheets from the ceiling to mark future walls. Nurses and physicians were invited to interact with staffers who were playing the role of patients and to suggest how spaces could be adjusted to better facilitate treatment. At Monash Health, a program called Monash Watch—aimed at using telemedicine to keep vulnerable populations healthy at home and reduce their hospitalization rates—used detailed storyboards to help hospital administrators and government policy makers envision this new approach in practice, without building a digital prototype.

Learning in action.

Real-world experiments are an essential way to assess new ideas and identify the changes needed to make them workable. But such tests offer another, less obvious kind of value: They help reduce employees' and customers' quite normal fear of change.

Consider an idea proposed by Don Campbell, a professor of medicine, and Keith Stockman, a manager of operations research at Monash Health. As part of Monash Watch, they suggested hiring laypeople to be "telecare" guides who would act as "professional neighbors," keeping in frequent telephone contact with patients at high risk of multiple hospital admissions. Campbell and Stockman hypothesized that lower-wage laypeople who were carefully selected, trained in health literacy and empathy skills, and backed by a decision support system and professional coaches they could involve as needed could help keep the at-risk patients healthy at home.

Their proposal was met with skepticism. Many of their colleagues held a strong bias against letting anyone besides a health professional perform such a service for patients with complex issues, but using health professionals in the role would have been unaffordable. Rather than debating this point, however, the innovation team members acknowledged the concerns and engaged their colleagues in the codesign of an experiment testing that assumption. Three hundred patients later, the results were in: Overwhelmingly positive patient feedback and a demonstrated reduction in bed use and emergency room visits, corroborated by independent consultants, quelled the fears of the skeptics.


As we have seen, the structure of design thinking creates a natural flow from research to rollout. Immersion in the customer experience produces data, which is transformed into insights, which help teams agree on design criteria they use to brainstorm solutions. Assumptions about what's critical to the success of those solutions are examined and then tested with rough prototypes that help teams further develop innovations and prepare them for real-world experiments.

Along the way, design-thinking processes counteract human biases that thwart creativity while addressing the challenges typically faced in reaching superior solutions, lowered costs and risks, and employee buy-in. Recognizing organizations as collections of human beings who are motivated by varying perspectives and emotions, design thinking emphasizes engagement, dialogue, and learning. By involving customers and other stakeholders in the definition of the problem and the development of solutions, design thinking garners a broad commitment to change. And by supplying a structure to the innovation process, design thinking helps innovators collaborate and agree on what is essential to the outcome at every phase. It does this not only by overcoming workplace politics but by shaping the experiences of the innovators, and of their key stakeholders and implementers, at every step. That is social technology at work.

All Comments: [-]

fermienrico(3632) about 4 hours ago [-]

I do not understand what Design Thinking is. Can someone explain in plain English without the business verbiage?

Every time I come across Design Thinking, it just smells like bullshit - get a bunch of people together, give them post it notes and open walls. IBM's DT website does not help nor do videos on YT.

What is Design Thinking?

wirrbel(3826) about 4 hours ago [-]

Design thinking is a method that helps justify management opinion: people's input is processed until it is diluted and lost enough to be replaced by the view of the people in power.

AndyNemmity(3990) about 2 hours ago [-]

Structured process towards coming up with plans and decisions on actions that provides improved outcomes.

I've been a part of several Design Thinking workshops, and they always go better than workshops in other formats in my experience.

maire(10000) about 4 hours ago [-]

Design Thinking is a way of integrating user feedback into the development process. When your product is iterative it almost always works. When your product is outside the comfort zone of the current users - that is when it is a problem. At least it is good to know what the user perception is and experiment.

BjoernKW(3815) about 4 hours ago [-]

I like the summary given by Jeff Sussna in "Designing Delivery". In a nutshell, Design Thinking consists of these elements:

- abductive thinking

- an iterative process

- ethnography

- empathy

plainOldText(2993) about 4 hours ago [-]

Perhaps this video could help:

avinium(10000) about 4 hours ago [-]

> Every time I come across Design Thinking, it just smells like bullshit - get a bunch of people together, give them post it notes and open walls. IBM's DT website does not help nor do videos on YT.

Yeah, it definitely gives off the vibe of 'buzzword filler we can use to hock consulting services and conferences for clueless execs'.

The core of the idea seems to be fine - put the user/consumer first, accept that you won't know for sure how they'll react until they start using your product/service, and then iterate quickly based on feedback.

But that's hardly rocket science, nor does it justify the massive circlejerk and hundreds of breathless blog posts hailing this seemingly radical discovery.

ArtWomb(1160) about 4 hours ago [-]

Crash course on Stanford's D.School page:

Juul serves as a foundational example of design thinking. In the words of the founders, both grads of the Product Design masters program, the concept was not to create an 'electronic cigarette' but to annihilate the concept of cigarettes altogether in favor of a true 21st century innovative delivery system. Regardless of your position on the societal ills of underage nicotine dependency, it's a fascinating case study to see how a niche product (Plume, I believe it was called?) for loose-leaf cannabis and extracts dubbed the 'iPhone of vapes' evolved into a $15B household brand.

How Juul, founded on a life-saving mission, became the most embattled startup of 2018

I think the real skill involved is Visual Thinking. Good old fashioned pencil and paper flow state ideation. If there is one practice I wish I had spent more time acquiring it would definitely be illustration / drafting ;)

onoj(3806) about 4 hours ago [-]

Ultimately, and in addition to many existing 'innovation' or 'time and motion' tools, Design thinking is in-depth analysis of the customer and desperately hoping that somehow a 'new idea' comes out of it. Roll in billions of consultancy dollars.

Nothing at all to do with a professional design process however.

luckydata(3992) about 2 hours ago [-]

I hate that moniker and I hope it dies in a fire, but it's nothing more than using the same thinking devices used by designers (contextual inquiry, making assumptions explicit, mapping journeys and outcomes) by people who are not professional designers. The thinking is that teaching the fundamentals can help teams of non-designers achieve better outcomes in collaboration with a designer.

IMHO design thinking is just... thinking.

maxxxxx(3873) about 4 hours ago [-]

This feels like all the 'innovation acceleration' programs coming from corporate in my company. Combine a few buzzwords with the obvious. I think corporate types are into this stuff because they often kill any kind of improvement with the strict control they like to exert, but then they have to do this stuff to make people more 'creative'. I am sure it's also better to have an open workspace so diverse teams can collaborate the whole day.

fishtank(10000) about 4 hours ago [-]

Basically: research, hypothesize, prototype, test, iterate, deliver. It's not new, but here we are with new terminology.

It is good because if you're a design firm you sometimes need an HBR-approved buzzword to get your client, the VP of Marketing, to let you do any kind of user research. But ultimately, like any business concept used primarily to sell in client work and justify it up the ladder, it will be replaced by 'whatever the stakeholder wants' when push comes to shove.

avip(3966) about 3 hours ago [-]

Thinking + doing, rebranded to confuse engineers.

baxtr(3291) about 3 hours ago [-]

As Steve Jobs put it: "you have to start from the customer experience and work backwards"

Jedi72(10000) about 3 hours ago [-]

How do you sell thinking outside the box to people who only think in boxes? Make a new box.

onoj(3806) about 5 hours ago [-]

Might work, otherwise just a word pitch on past 'business improvement' protocols and snake oil

adzicg(2024) about 4 hours ago [-]

If I remember correctly, the key case study in Change By Design (the book that started the whole Design Thinking movement) was Nokia Ovi. So even 'might work' needs to be put into a time-bound context. At the point when Brown wrote the book it was 'working', and there were claims in the book about how Nokia was reinventing itself driven by design thinking, but then the whole ship sank soon after.

revskill(3846) about 5 hours ago [-]

I prefer examples of 'Why XXX Thinking does not work'. Why? Because learning from failures shapes the right lessons. That's how AI works, too.

yodon(10000) about 5 hours ago [-]

The convergence rate on learning from 'x didn't work' is much slower than from 'x did work'.

If you're looking for a peak, most of the time you want to climb the gradient rather than descend it. That statement is true even if you suspect there are multiple peaks in the landscape.

munchbunny(10000) about 4 hours ago [-]

For the people who are reading this and going 'well, duh?' I think that's actually a very instructive reaction.

The analysis/brainstorming/prototyping/testing cycle (usually what 'design thinking' refers to) is burned into many of us just because that's how we've been doing/aspiring to do things for years.

However, you have to remember that's not how a lot of people were doing things, and many of those people (I won't claim all of them, no process is universal) could probably benefit from judiciously adopting the practices.

freddie_mercury(10000) about 3 hours ago [-]

A lot of people say 'duh', but when was the last time they actually prototyped anything in their work or personal lives before doing it?

A lot of people may be 'aspiring' to do it but few people actually do it.

I worked at a relatively progressive, design-centric company and even there 'design thinking' tended to be relegated to a handful of projects and wasn't the norm/default.

lordnacho(3992) about 3 hours ago [-]

My wife went on a workshop at a Big 4 about Design Thinking.

Everyone sat through it and then one of the ladies turned around and blurted out 'it's just thinking! Why is it called Design Thinking?'

evrydayhustling(10000) about 2 hours ago [-]

Yeah... an acid test for these things is often, 'what are you telling me not to do?'

In my encounters with design thinking, the controversial 'don't' has actually been frequent, iterative release. For some folks, design thinking is a defense of a 'measure twice, cut once' waterfall-y approach to product development.

That might make a lot of sense for some things, like physical consumer products, where a ton of branding and manufacturing go into each release. I think it's a bad idea for most software, where the ease of distribution means that you can learn from your market much more dynamically.

Historical Discussions: Browsers (December 16, 2018: 13 points)
An nth-letter selector in CSS (October 23, 2018: 1 point)


91 points about 5 hours ago by indysigners in 3997th position | Estimated reading time – 6 minutes | comments

I don't sign non-disclosure agreements. I'm sure that's cost me projects in the past, but I just find it icky to put a personal request—"Hey, please don't mention this"—into a legal framework, as though that makes it any more enforceable. It's not that I can't keep a secret. I'm more likely to keep a secret if you just ask me than if you try to get me to sign a piece of paper first.

I have a friend at Microsoft who—fair play—did not ask me to sign an NDA when he pulled me aside at the Confront conference in Malmö back in October. "Can we talk in private?" he asked. "Sure", I said, and stepped aside. "Let's go outside", he said. This must be serious, I thought.

Standing out in the cold, he gave me the news (and asked me to keep it under my hat). Microsoft's Edge browser was going to switch its rendering engine over to Chromium.

My initial reaction was to be deflated and disappointed. I've always believed that healthy competition in the browser space is very important (having lived through the consequences of previous monopolies). But I can only assume that Microsoft was quietly telling some people about this in advance so that we would have time to mull it over, and avoid any knee-jerk reactions. The news was made public last week, so now that I've had quite a while to think about it, my considered reaction is to be deflated and disappointed.

There's just no sugar-coating this. I'm sure the decision makes sound business sense for Microsoft, but it's not good for the health of the web.

Very soon, the vast majority of browsers will have an engine that's either Blink or its cousin, WebKit. That may seem like good news for developers when it comes to testing, but trust me, it's a sucky situation for innovation and agreement. Instead of a diverse browser ecosystem, we're going to end up with incest and inbreeding.

There's one shining exception though. Firefox. That browser was originally created to combat the seemingly unstoppable monopolistic power of Internet Explorer. Now that Microsoft are no longer in the rendering engine game, Firefox is once again the only thing standing in the way of a complete monopoly.

I've been using Firefox as my main browser for a while now, and I can heartily recommend it. You should try it (and maybe talk to your relatives about it at Christmas). At this point, which browser you use no longer feels like it's just about personal choice—it feels part of something bigger; it's about the shape of the web we want.

Jeffrey wrote that browser diversity starts with us:

The health of Firefox is critical now that Chromium will be the web's de facto rendering engine.

Even if you love Chrome, adore Gmail, and live in Google Docs or Analytics, no single company, let alone a user-tracking advertising giant, should control the internet.

Andy Bell also writes about browser diversity:

I'll say it bluntly: we must support Firefox. We can't, as a community, allow this browser engine monopoly. We must use Firefox as our main dev browsers; we must encourage our friends and families to use it, too.

Yes, it's not perfect, nor are Mozilla, but we can help them to develop and grow by using Firefox and reporting issues that we find. If we just use and build for Chromium, which is looking likely (cough Internet Explorer monopoly cough), then Firefox will fall away and we will then have just one major engine left. I don't ever want to see that.

Uncle Dave says:

If the idea of a Google-driven Web is of concern to you, then I'd encourage you to use Firefox. And don't be a passive consumer; blog, tweet, and speak about its killer features. I'll start: Firefox's CSS Grid, Flexbox, and Variable Font tools are the best in the business.

Mozilla themselves came out all guns blazing when they said Goodbye, EdgeHTML:

Microsoft is officially giving up on an independent shared platform for the internet. By adopting Chromium, Microsoft hands over control of even more of online life to Google.

Tim describes the situation as risking a homogeneous web:

I don't think Microsoft using Chromium is the end of the world, but it is another step down a slippery slope. It's one more way of bolstering the influence Google currently has on the web.

We need Google to keep pushing the web forward. But it's critical that we have other voices, with different viewpoints, to maintain some sense of balance. Monocultures don't benefit anyone.

Andre Alves Garzia writes that while we Blink, we lose the web:

Losing engines is like losing languages. People may wish that everyone spoke the same language, they may claim it leads to easier understanding, but what people fail to consider is that this leads to losing all the culture and way of thought that that language produced. If you are a Web developer smiling and happy that Microsoft might be adopting Chrome, and this will make your work easier because it will be one less browser to test, don't be! You're trading convenience for diversity.

I like that analogy with language death. If you prefer biological analogies, it's worth revisiting this fantastic post by Rachel back in August—before any of us knew about Microsoft's decision—all about the ecological impact of browser diversity:

Let me be clear: an Internet that runs only on Chrome's engine, Blink, and its offspring, is not the paradise we like to imagine it to be.

That post is a great history lesson, documenting how things can change, and how decisions can have far-reaching unintended consequences.

So these are the three browser engines we have: WebKit/Blink, Gecko, and EdgeHTML. We are unlikely to get any brand new bloodlines in the foreseeable future. This is it.

If we lose one of those browser engines, we lose its lineage, every permutation of that engine that would follow, and the unique takes on the Web it could allow for.

And it's not likely to be replaced.

All Comments: [-]

tolmasky(3023) about 2 hours ago [-]

The biggest danger to the web in terms of control right now is browser monopoly, not engine monopoly. And this move is probably the most effective way of combating that. I think this is the right move at this time, not just for Microsoft, but for the web.

Borrowing from a previous comment I made, think of it this way: do you think it helps or hurts Google to have every version of Windows come pre-installed with what is essentially already Chrome, except, of course, that it will probably have Bing as its default search engine? Do you think the odds of people just using Edge to download Chrome and nothing else go up or down with this move? Do you think it helps or hurts Google to have most tech people not bother telling their parents to download Chrome anymore? There is significantly less control from 'owning' an engine than owning an actual browser. I don't think I would have had much of an issue with the dominance of IE 20 years ago if I knew I could compile and modify (and release!) IE myself.

If you care about the state of search monopoly, out of control ads, and identity on the web, then you should be happy with this move. This is more akin to most browsers now having a common starting point. The problem with browsers is that if you truly want to make a new one you need to somehow replicate the decades of work put into the existing ones. What that means is that before you can exercise any of your noble privacy/security/UI/whatever goals, you must first make sure you pass Acid 1 and replicate quirks mode float behavior and etc. etc. etc. This is a non-starter. But now, Microsoft can launch from Chromium's current position and have a browser that can actually compete with Chrome. It's as if they've taken 'engine correctness' off the table, and can compete on cool features or 'we won't track you' or anything else. Websites will work in Edge by default, so if you like that one new feature in Edge, you can feel OK switching to it without compromising devtools/rendering/speed/etc.

Now I know that the initial response to this is 'but Google will call the shots!'. Not if the way this has gone down every other time has anything to do with it. Google's Chromium started as KHTML. When Apple based WebKit off of KHTML, the KHTML team had very little say in anything and they eventually forked of course. Then Google based Chromium off of Apple's WebKit, and once again, there was very little 'control' Apple could exercise here. Sure, they remained one monolithic project for a while (despite having different JS engines which just goes to show that even without forking you can still have differentiation), but inevitably, Chromium was also forked from WebKit into Blink.

And there should be no reason to think the same won't happen here, and it's a good thing! Microsoft in the past couple of years has demonstrated an amazing OSS culture. I can't wait to see what the same company that gave us VSCode is able to build on top of Blink, and eventually separate from Blink. Ironically enough, the worst thing that could have happened to Google's search dominance is to have Blink win the 'browser engine wars': we all agree Blink is the way to go now, so we can all start shipping browsers that at minimum are just as good, and won't auto-log you in, or have their engine set to default, or etc. etc. etc.

pcwalton(2661) 25 minutes ago [-]

> I can't wait to see what the same company that gave us VSCode is able to build on top of Blink, and eventually separate from Blink.

The situation here is very different from WebKit and Blink. Google was already the plurality contributor to WebKit at the time of the fork. By contrast, Microsoft has contributed virtually nothing to Blink, and they intend to be identical to upstream Chromium. Microsoft is not going to fork Blink.

Blink is completely controlled by Google. (98% of Blink patches are reviewed by a Google employee.) This means that Google has complete control over the direction of the Web platform. While Google may not be able to set the default search engine in Edge, it has and will exert more subtle influence on the Web as a platform in ways that benefit Google. (Just to name one example, Chromium has deployed whitelists of Google properties for NaCl support.)

em-bee(3634) about 1 hour ago [-]

that is a very interesting take on the subject.

koboll(10000) about 2 hours ago [-]

Meh. I don't buy it.

First of all, individual boycotts will never, ever overcome a collective action problem, so a fundamentally futile solution is being proposed here.

Plus, this is only bad if Google act destructively evil and no one steps in to change it. If the endgame is control of Chromium being passed from Google to a neutral foundation -- which I suspect it will be -- then everyone wins.

It's always seemed inevitable to me that one browser engine will win out. This is the way of all web technologies. Should we go back to competing variants of (proto-)JavaScript to prevent web monoculture, too?

pcwalton(2661) 34 minutes ago [-]

> If the endgame is control of Chromium being passed from Google to a neutral foundation -- which I suspect it will be -- then everyone wins.

There is zero chance of that happening. Like all public companies, Google acts in its own self-interest. Google controls Chromium, and there is absolutely no benefit to Google to hand control of Chromium over to an outside entity. 98% of all patches to Blink are reviewed by a Google employee.

If establishing a foundation were going to happen, then it would have happened when Apple and Google were still collaborating on WebKit.

swalladge(3755) about 1 hour ago [-]

> It's always seemed inevitable to me that one browser engine will win out.

I think it should be that one competing _standard_ will win out. Ideally we could have an infinite number of browser engines, as long as each one implements all widely used standards correctly. We already see this with most websites working fine whether in Chromium or Firefox. The only incompatibilities (in theory) should come from bugs or new standards/experiments which haven't been widely adopted or implemented yet.

The same for javascript - multiple implementations of a js engine is fine, as long as they all implement the ES* standard.

Also, if everyone moves to a Chromium engine monoculture, what will happen to innovation? E.g. Mozilla has made some major progress with its work on Servo.

newscracker(4000) 25 minutes ago [-]

This move by Microsoft may well help Microsoft get Windows users onto its browser, but I seriously doubt it's good for the web.

There are people claiming that Microsoft could easily fork from Chromium, but honestly, do you see that happening in the next few years? Having read that Microsoft's Edge team was a very small one, I don't believe Microsoft will fork Chromium anytime in the next two or three years (no financial incentive, which could be a big motive, to do so)...and that's a long time to keep pushing the Chrome/Chromium way ('what Google wants for the web').

As an aside, I very much liked this article linked within, titled 'The ecological impact of browser diversity' by Rachel Nabors. [1]

To all those who evangelize Firefox, please also see if you can donate money to Mozilla. It may seem like Mozilla has a lot of money (more than 90% coming from Google for being the default search engine added in the browser), but this is not enough, and could turn out to be nothing if Firefox's market share becomes next to nothing and Google decides to pull the plug when the current contract expires. Partnerships with Bing (or other search engines that not many people use) may not bring in as much money to Mozilla.

Read about all the work that Mozilla does (aka 'not just Firefox') in its 'State of Mozilla 2017' annual report. [2] The audited financial report is here. [3]




oooglaaa(10000) 21 minutes ago [-]

So the only competitor to Google's monopoly on web browsers is... funded almost entirely by Google? Yikes

sys_64738(3885) about 2 hours ago [-]

This move is actually genius by Microsoft. They will negate the need to install Chrome on Windows. Microsoft will gain control of the browser market.

dmethvin(3339) about 2 hours ago [-]

I don't think they want control over the browser market, just the browser users on Windows. People are less likely to ditch Edge if it is more compatible with the way Chrome and Safari work (including the ability to run Chrome extensions). That means they have more visibility and control over those users.

krackers(10000) about 1 hour ago [-]

How so? I don't believe the layperson really cares all too much about the underlying engine, it's more a marketing thing. And the IE brand is forever tainted.

eddieroger(10000) about 1 hour ago [-]

One of the linked posts spent a great deal of time differentiating between the browser interface and the engine itself. The average user doesn't understand the difference, so being told "Edge is more like Chrome now" doesn't actually make it Chrome for them. Other than tricking user agent checkers out of putting up the inferior-browser warning (is that still a thing?), this won't stop people from using Edge to install Chrome.

awinder(4002) about 3 hours ago [-]

The last time I enjoyed using the Firefox rendering engine on a Mac was back in the Camino days. I'd almost say that that browser is a better Mac experience than Firefox is today.

I've started using Safari and it's actually pretty OK, and the resource utilization is a lot better than Chrome (no fans blowing!). I empathize with Firefox making a big deal out of avoiding a monoculture, but I really hope they're figuring out how to do a better job on things like the Mac internally, because I don't think that people are really going to use an inferior product as some moral statement.

vSanjo(10000) about 2 hours ago [-]

Correct me if I'm wrong, by all means, but I recently read somewhere that Firefox is genuinely trying to improve its macOS performance in one of the new releases.

No sources, it was just in passing somewhere, but I distinctly remember it.

wyqydsyq(10000) about 3 hours ago [-]

Honestly I just see all this, even though the author claims to have taken time to 'mull it over', as a knee-jerk reaction. There is little evidence that MS changing to Blink/V8 will have any tangible impact on the web.

Microsoft has hardly offered much as far as competition and diversity goes since IE6, basically the only web 'innovations' they're responsible for is a bunch of IE-specific APIs that didn't work in any other browser.

The garbage rhetoric that Mozilla and their supporters have been spreading is pure FUD:

> By adopting Chromium, Microsoft hands over control of even more of online life to Google.

MS' new default browser being based on Chromium does not give any additional control of online life to Google. These are open-source projects that Google manages but does not have absolute control over the consumption of. If Google ever did try to muscle control over Chromium like they've been doing with Android, Microsoft could simply fork it without the hostile Google changes.

If anything I see this whole thing being beneficial for the web in the long run - now instead of MS engineers pissing their time and effort down the drain on a dead browser and needlessly fragmenting the market with engine differences, even if people don't use the default Windows browser Microsoft's engineers can contribute to and benefit the web by contributing their improvements upstream to the Chromium/Blink/V8 projects.

mcny(10000) about 2 hours ago [-]

Downvoters, please stop and think.

> If anything I see this whole thing being beneficial for the web in the long run - now instead of MS engineers pissing their time and effort down the drain on a dead browser and needlessly fragmenting the market with engine differences, even if people don't use the default Windows browser Microsoft's engineers can contribute to and benefit the web by contributing their improvements upstream to the Chromium/Blink/V8 projects.

This is absolutely spot on. Keep in mind that Microsoft's EdgeHTML team went so far as to almost declare that any difference from Chrome is a bug that Microsoft intends to fix. I mean, I get it: if Edge were free and open source, there would be some value in seeing a reimplementation of a browser that is compatible with Chrome, but I doubt making Edge free and open source was ever in the works (I think; I am not sure if Edge was actually a clean slate. It might just have been removing 'bad' code from IE tbh). I simply don't understand why we should mourn Edge when the Edge team pretty much says any difference from Chrome is a bug.

exogen(2589) about 2 hours ago [-]

> Microsoft has hardly offered much as far as competition and diversity goes since IE6, basically the only web 'innovations' they're responsible for is a bunch of IE-specific APIs that didn't work in any other browser.

I don't have much love for MS, but this is just wrong.

XMLHttpRequest, the precursor to fetch, pretty much allowed 'web 2.0' to exist. Now we use fetch, but they introduced it first.

box-sizing: border-box, a much better way to reason about the CSS box model, is almost always introduced as 'the way Internet Explorer had always done things in older versions and quirks mode.' Surely its existence in IE (and the experience people had w/ CSS there, preferring its box model) had some historical weight in this property coming about?

Lastly, and something still not standardized: setImmediate. Lots of libraries currently have to polyfill this with MessageChannel, postMessage, and other hacks. Clearly there's high demand for it.
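For readers who haven't seen the trick mentioned above: a minimal sketch (the helper name `makeSetImmediate` is mine, not a standard API) of how libraries emulate setImmediate with a MessageChannel. You post a message to yourself, and the queued callback runs when it arrives, after the current synchronous code but ahead of timers.

```javascript
// Hedged sketch of the MessageChannel-based setImmediate emulation the
// comment refers to; helper names are hypothetical.
function makeSetImmediate() {
  const channel = new MessageChannel();
  const queue = [];
  channel.port1.onmessage = () => {
    const cb = queue.shift();
    if (cb) cb();
    // Demo-only: close the port once drained so Node's event loop can exit.
    if (queue.length === 0) channel.port1.close();
  };
  return (cb) => {
    queue.push(cb);
    channel.port2.postMessage(null); // wake ourselves up on the next turn
  };
}

const setImmediatePolyfill = makeSetImmediate();
const order = [];
setImmediatePolyfill(() => order.push('callback'));
order.push('sync code first');
// The callback runs only after the current synchronous code finishes.
```

The other hacks the comment mentions (postMessage on window, setTimeout(fn, 0)) trade off clamping and ordering differently, which is why the demand for a standardized setImmediate persists.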

tannhaeuser(3171) about 2 hours ago [-]

The problem is this: HTML (+CSS) is supposed to be a standardized and recommendable format for publishing rich text with the expectation that it can be rendered for a long time to come. But if only a single browser will be able to display it, this not only questions the longevity claim, but also questions the whole web stack. New CSS specs can't be reviewed and proven with independent implementations, and web specs will, even more so than they already do, become 'whatever Chrome does'.

This is a terrible and fatal result for the web as we know it. Because why would we continue the practice of creating baroque, power-inefficient web frontends with JavaScript and the browser stack monstrosity when we're essentially targeting a single browser? We could just as well use a much leaner and lighter GUI framework designed for the purpose, and a saner language.

What did we really expect from the way so-called web standards are created? WHATWG (who write the HTML5 specs and call this a 'standard') pride themselves on creating a 'living standard' where nothing ever is locked down, and ECMA has put itself up to creating yearly JavaScript updates (after the language had stagnated for 15 years). W3C creates an enormous amount of CSS specs and said goodbye to versions/levels and profiles a long time ago. The result is that there is no reasonable spec as an implementation target for browser development. Those in the game have closed the door behind them.

Firefox has, unfortunately, a shrinking user base, is financially dependent on Google, and might suffer from problematic incentives in the future.

The web is in tatters.

stupidcar(3975) about 3 hours ago [-]

I find it kind of hilarious that people seem to think the solution to the Chrome monoculture is them personally switching to Firefox and then writing a blog post / comment / tweet about it. It's like saying we need to get serious about climate change, so you're going to start cycling to work, and encourage your friends to do the same. If that salves your conscience, then great, but your individual actions are statistically irrelevant.

There are systemic trends and market forces at play here involving billions of people. And nowhere amongst the significant forces and trends will you find 'there are insufficient nerds evangelising Firefox'. Hacker News can upvote as many of these kinds of articles as it likes, but Chrome's market share will continue to grow until there is something far more substantial to stop it than philosophical objections by tech insiders.

tlb(1313) about 2 hours ago [-]

Nerd evangelism might not get something to 100%, but it can make something large enough to matter. Linux on the desktop matters, for example, because nerds like it and talk about it. The fact that some single-digit percentage of people use it means that the duopoly doesn't have absolute power.

Firefox, even in single-digit percentages, affects the ecosystem in the same way. If there weren't a choice, the browser would gradually turn into a cable TV interface.

philliphaydon(3489) about 2 hours ago [-]

So when MS controlled the browsers we threw our arms in the air. But when Google controls them we just sit here and go meh?

yawn(3652) about 2 hours ago [-]

> I find it kind of hilarious that people seem to think the solution to the Chrome monoculture is them personally switching to Firefox and then writing a blog post / comment / tweet about it.

That's how change starts. 'First they laugh at you...'

> nowhere amongst the significant forces and trends will you find 'there are insufficient nerds evangelising Firefox'.

Were you around when nobody knew what Firefox was?

snarfy(3903) about 2 hours ago [-]

Firefox has one feature the other browsers will never have - privacy.

mixedCase(10000) about 2 hours ago [-]

Brave seems to go for the same, but with a Chromium base (currently in beta). They seem to de-claw whatever Google bits are left in.

tomjakubowski(2935) about 2 hours ago [-]

Why wouldn't Safari have the same feature? I'm not a huge fan of Apple's generally closed ecosystem, but acknowledge that they have unusual respect for user privacy.

EGreg(1712) about 1 hour ago [-]

Can I question the orthodoxy on this one?

Suppose WebKit or Blink is the one engine that everyone uses. Do you know how much of the uncertainty and effort that goes into browser compatibility and quirks that would save around the world? Look at the x86 architecture or POSIX as well.

Here is my serious practical question: this is open source, so if you want to make some extension, you should be able to distribute it. And if it becomes so popular as to be merged into the core, or included with the main distribution, then it will be.

It seems on balance, this would be a good thing.

sdwisely(10000) about 1 hour ago [-]

it's more complicated than that though.

Nobody is lamenting the loss of the Edge engine. Things like the choice of video codec, DRM module, etc. have shown it's a good thing to have a few different vendors with a seat at the table for standards.

Most important of those is Firefox.

ben-schaaf(10000) 42 minutes ago [-]

x86 and POSIX are actually great comparisons to the current state of browsers. They all have well defined standards that everyone should follow in theory. In practice CPUs have bugs[0] that kernels and applications make a lot of effort to work around. There are also enough implementation/non-compliance quirks in POSIX[1][2] implementations to matter when porting. Just like all of the web standards, x86 and POSIX don't solve the compatibility problems that come from innovation.




21(3923) about 2 hours ago [-]

Sometimes I still browse Digg (it's different now). Since the last Firefox update a week ago, which disabled Symantec certificates, the site doesn't work anymore. I bet the day Chrome disables Symantec certificates they will immediately notice and fix it.

It's sad that in one week not even one of their people opened it in Firefox, and not many of their users did either (there were just a couple of mentions on Twitter about this issue).

chrisseaton(3036) about 2 hours ago [-]

> Since the last Firefox update from a week ago which disabled Symantec certificates the site doesn't work anymore.

Is this the fault of Firefox or Digg though?

lern_too_spel(4000) 17 minutes ago [-]

Chrome also shows an error. The reason nobody at Digg noticed is that Digg is not https by default, so only people using certain extensions would hit that error.

ryuuchin(10000) 43 minutes ago [-]

Chrome distrusted Symantec certs when 70 got pushed to stable (at the end of October)[1]. Digg shows NET::ERR_CERT_SYMANTEC_LEGACY error when I navigate to it. Firefox was supposed to remove the Symantec certs in 63 (also to be released at the end of October) but I guess they pushed it back a version[2].

All major browser coordinated to distrust the certs at the same time.



eddieroger(10000) about 1 hour ago [-]

I feel like I'm missing something, because I keep seeing blog posts that tell me a browser monoculture is bad (and the irony of Microsoft saying this is nearly too much), but none of them tell me why. I understand that it was bad in the IE6 days, but IE6 was also closed source and could only be run on Windows (see also the irony of Microsoft discontinuing IE on Mac, not wanting to compete with Safari, a browser made by the OS manufacturer). Chromium is open source - the only lock-in is the decision to use it. Sure, Google may be at the reins of where the project goes, but then differentiate on features, or fork it and make it different. Differentiate on the user interface, or integrations with other services, or go hard on privacy. Maybe the rendering engine should be more akin to the Linux kernel and less akin to Windows? One operating system would be bad, but one kernel has spawned many, many Linuxes. I guess I'm doing my part by using Firefox, but that's more because I am worried about just how much Google knows about me, and not because I think Gecko is remarkably better than Chromium.

pcwalton(2661) 39 minutes ago [-]

> Sure, Google may be at the reins of where the project goes, but then differentiate on features, or fork it and make it different.

And if the fork diverges, then what? This is not a theoretical concern: Blink and WebKit are now quite different, and it's best to regard them as two different rendering engines. Is that bad?

> Maybe the rendering engine should be more akin to the Linux kernel and less akin to Windows?

This presupposes that the Linux kernel is at an optimum point in the design space. It's pretty clear to me that it isn't. In fact, I think the monoculture around open source kernels has been bad for OSS. I look forward to Fuchsia providing some much-needed competition, much in the same way LLVM provided much-needed competition to GCC.

austincheney(3681) 33 minutes ago [-]

IE, at its height, was available outside Windows.

At that time, 2001-2002, IE6 was highly praised. You don't get to 96% market share by being something everybody hates. The hate came later, when CSS demand ramped up and it became clear IE6 used a non-standard box model. Worse still, MS essentially abandoned the browser, the result of total market share dominance, and tied it directly to the operating system.

The reason people are complaining about this is the lack of diversity. This has proven very bad in the past. It essentially makes the platform a proprietary space even if the dominant browser is open source. The key here is who directs the technology and direction of the platform.

the_clarence(10000) 23 minutes ago [-]

You are right. A monoculture IS a good thing for browser engines.

dewiz(3533) 19 minutes ago [-]

> the irony of Microsoft saying this is nearly too much

Where is Microsoft saying that?

Problems with monoculture: lack of innovation, a single agenda being pushed forward, anti-competitive behavior. Google doesn't want technology X? X is dead. Google thinks Ad platform Y is bad? Y is dead. etc

newscracker(4000) 18 minutes ago [-]

Please read this different take and explanation in this article linked within the main article, titled 'The ecological impact of browser diversity', by Rachel Nabors. [1]


new12345(10000) 13 minutes ago [-]

Seems like you answered your own question in your last line. One browser owned by one organization could hijack the whole internet for its own interests. As with any other essential good, having multiple suppliers and multiple options is good for creating healthy competition.

wvenable(3939) 10 minutes ago [-]

I completely agree. I think there is a bit of unjustified fear here, given that Edge's ridiculously small installed base had absolutely no effect on the browser monoculture anyway. Most developers did not go out of their way to test in Edge.

This is really no different than the Linux Kernel. People will take KHTML, Webkit, Blink, V8, JavaScriptCore and remix those into dozens of different products all with better web compatibility than something home grown from Microsoft with no chance of adoption.

bhauer(1506) 7 minutes ago [-]

I think we first need to agree that competition in a general sense—that is including browsers and everything beyond—is a good thing. It's healthy for any market to have viable competitors; principally for the consumers, but also for the vendors (so that they do not stagnate). If we don't agree that competition is healthy, then I don't believe it's possible to agree on the second-order matters.

There are several second-order reasons why a monoculture is damaging for the browser industry. I will give a few examples:

1. A monoculture encourages web developers to test on a single platform with the assumption that all other platforms, and even industry standards, are meaningless. Many of us are web developers, and most of us are guilty of dismissing low-share platforms during testing. But as long as some viable competitors exist, we nominally target the industry standards in the hope that by doing so, the platforms we do not test on will have a modest/tolerable (maybe even good) experience.

2. A monoculture is self-reinforcing. Losing established viable competitors makes it more difficult for new competitors to enter the market. If we allow second-tier browsers to be rendered meaningless, we all but ensure a long and (eventually) painful stagnation, and create an ever-larger hurdle for a new entrant to clear to reach any significance (as more and more of the web is designed to work with a single platform). While it may sound passably agreeable to have a Chrome monoculture in 2019, do we want to still have a Chrome monoculture in 2029? For my part, I hope we see an increasing set of options in most areas of life over time, even in 'browsing,' whatever that ends up looking like in 10 years.

3. We don't know what future stagnation will look like. We don't know what opportunities we will have lost by making it more difficult to compete or by losing the healthy diversifying force of competition. It's a classic problem of the unseen. Projecting forward, we predict testing will be a bit simplified, but we cannot know what innovations we'll never see (or won't see as quickly) because the hurdles for experimentation were too high. Today, if Firefox introduces a new feature, even though its usage is relatively small the usage is still large enough in absolute terms that the innovation isn't entirely under the radar for most of us. If Firefox's share were 0.8% instead of 8%, far fewer of us would even notice if they added a slick new feature, leaving the feature unheralded and obscure despite its potential wide appeal.

revskill(3846) about 2 hours ago [-]

I still remember the first time I used Firefox, in 2004; it was a life-saver that got me out of the terrible Internet Explorer. What I want now, maybe from 2020 on, is to keep using Firefox, but as a replacement for the mobile app container.

IntelMiner(10000) about 1 hour ago [-]

I still remember when my Dad marched me over to his laptop, to boldly proclaim the new Internet Explorer 7 had TABS

He legitimately looked as though he'd unlocked some kind of arcane technology that mortal man should never have wielded

I still feel a little bad when I casually shrugged it off with 'Firefox has had that for years'. He looked legitimately shattered

Historical Discussions: Show HN: High-performance ahead-of-time compiler for Machine Learning (December 13, 2018: 83 points)
Show HN: High-performance header-only C++ template library for CNNs (November 22, 2017: 8 points)
Header-only C++ template library with over 70 DNN convolution algorithms (November 21, 2017: 4 points)

Show HN: High-performance ahead-of-time compiler for Machine Learning

83 points 4 days ago by andrew-wja in 3919th position | Estimated reading time – 3 minutes | comments

triNNity DNN tools

The triNNity DNN toolkit (compiler, optimizer, and primitive library)

triNNity primitive library

triNNity is a header-only C++17 template library with over 80 DNN convolution algorithms. It's a collaborative effort with several other people in our research group to collect as many DNN convolution algorithms as possible in one place, and give them clean, simple, and performant implementations. It is also a testbed for algorithm design for DNN convolution.

The library implements normal dense convolution (both direct and GEMM-based), strided convolution, dilated convolution, group convolution, sparse convolution, Winograd convolution, FFT convolution, and more, including super high performance specialized algorithms for cases like 1x1 convolution.

Many libraries and frameworks present algorithms like im2col, fft, and others, as monolithic operations, but there are in fact dozens of algorithmic variants of these approaches, all of which are better suited to some kinds of convolutions than others. Our paper in ASAP 2017 details many of these algorithms.
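To make the distinction concrete, here is a minimal sketch (my own illustration, not triNNity's code) of the basic im2col + GEMM approach that many of these variants refine: every receptive-field patch is gathered into a row of a matrix, so the whole convolution becomes one big matrix multiply.

```python
import numpy as np

def im2col_conv2d(x, w):
    """2D convolution (valid padding, stride 1) via im2col + GEMM.

    x: input of shape (H, W, C); w: kernels of shape (K, K, C, M).
    Returns output of shape (H-K+1, W-K+1, M).
    """
    H, W, C = x.shape
    K, _, _, M = w.shape
    OH, OW = H - K + 1, W - K + 1
    # Gather every KxKxC patch into one row of the "column" matrix.
    cols = np.empty((OH * OW, K * K * C))
    for i in range(OH):
        for j in range(OW):
            cols[i * OW + j] = x[i:i+K, j:j+K, :].ravel()
    # A single matrix multiply performs all the multiply-accumulates.
    out = cols @ w.reshape(K * K * C, M)
    return out.reshape(OH, OW, M)
```

The algorithmic variants differ in how (and whether) the patch matrix is materialized, traversed, and tiled; the memory cost of building `cols` is exactly the kind of trade-off that makes different variants win for different layer shapes.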

Under the hood, the library uses BLAS, OpenMP multithreading, SIMD vectorization, and more, without any programmer intervention required. It can also run completely standalone, without any, or with only a subset, of these components enabled. We currently support x86_64 and aarch64, but support for more platforms is planned. Since the library is released as header-only C++, all that's really required to bring up a new platform is a working compiler supporting the C++17 standard.

We have working, well-tested integration with the Intel MKL, OpenBLAS, ARM Compute Library, FFTW, and libxsmm, among others, as back-end libraries providing specific functionality (such as optimized GEMM routines).

The library is released under the BSD3 license, and is accompanied by an extensive performance benchmark suite.

triNNity DNN compiler and optimizer

We've developed a sophisticated ahead-of-time optimization framework for DNNs based on the PBQP (Partitioned Boolean Quadratic Problem) formulation. It uses profiled layer timings from performance benchmarking to build a cost model that statically chooses from among the 80+ convolution algorithms in the primitive library to produce a provably optimal instantiation of a full CNN.
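For intuition: on a straight-line chain of layers, this kind of cost-model selection reduces to a Viterbi-style dynamic program over profiled per-layer costs plus inter-layer transition costs (e.g. data-layout conversions). A hedged sketch with invented algorithm names and timings; the real optimizer works on general network graphs via PBQP:

```python
def select_algorithms(layer_costs, transition_costs):
    """Pick one algorithm per layer minimizing total cost on a chain of layers.

    layer_costs: list of dicts {algo: profiled_time} (one dict per layer).
    transition_costs: dict {(algo_prev, algo_next): conversion_time}.
    Returns (best_total_cost, [chosen algo per layer]).
    """
    # best[a] = (cheapest cost of any assignment ending with algo a, its path)
    best = {a: (c, [a]) for a, c in layer_costs[0].items()}
    for costs in layer_costs[1:]:
        nxt = {}
        for b, cb in costs.items():
            nxt[b] = min(
                (ca + transition_costs.get((a, b), 0.0) + cb, path + [b])
                for a, (ca, path) in best.items()
            )
        best = nxt
    return min(best.values())
```

Note how the transition term can make a locally slower algorithm globally optimal: a fast layer choice is worthless if it forces an expensive layout conversion before the next layer.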

Our compiler turns your Caffe deploy.prototxt directly into highly efficient native code, which can be run standalone to perform inference.

You can obtain the compiler and optimizer from our public BitBucket, and there is also a demonstration project with benchmarking workflows: demos.

Our paper on the DNN optimizer appeared at CGO 2018.


We've run some performance comparisons with Intel's native MKL-DNN framework:

All Comments: [-]

dragandj(2739) 4 days ago [-]

A few minor notes:

1. This seems to be oriented to convolutions. While convolution is rather important for image-oriented DNN workloads, there is DNN beyond that, and DNN are not the only technique for machine learning.

2. The graph shows 2-3x speedup (of convolutions, I suppose) over Intel's MKL-DNN 0.17 on the i5-2500K processor, which is a rather old low-end device. If the format used for convolutions in the test used 8-bit integers for storing image channels (which is possible), this is to be expected, since the i5-2500 does not support the AVX-512 integer instructions that are employed there by MKL-DNN. It does not even have AVX2! If that's the case, just switching to 32-bit float could speed up MKL-DNN by almost an order of magnitude. The most informative test would be something run on Skylake-X, since it does support AVX-512...

andrew-wja(3919) 3 days ago [-]

The speedup is for the whole network, as the graph labels show! The point of the compiler is that you produce code that implements the entire forward pass so you can deploy that code where you need to do the inference.

I agree we need AVX-512 -- hopefully I can get access to a SkylakeX machine in the next few days.

scottlocklin(3744) 3 days ago [-]

Is there any plan to make this compile to cuda? I can't imagine bothering with DNNs on an intel cpu in virtually any situation, even if it is 3x faster this way.

disdi(10000) 3 days ago [-]

Agreed. And compile to OpenCL?

andrew-wja(3919) 4 days ago [-]

If anyone would like to know more about the toolkit, I'll be checking the comment thread here periodically today!

andrew-wja(3919) 4 days ago [-]

If you really want to know more right now, you can read our paper and look at the slides here:

creatio(10000) 4 days ago [-]

Might want to change ML to Machine Learning. I thought it was about a compiler for ML, the programming language.

p1esk(2738) 3 days ago [-]

How does this approach compare to Tensor Comprehensions, or TF XLA, or whatever are the best DNN compilers these days?

andrew-wja(3919) 3 days ago [-]

Full disclosure: I'm a compiler person who for funding reasons moved into performance of machine learning systems.

None of those things should be called compilers. At best, they are scaffolding for peephole optimization. When you can get these crazy speedups just from bolting on an instruction selector, that's a real indicator that a lot of stuff is just waiting to be done.

For context, MKL-DNN embeds XBYAK, an optimizing JIT targeting SSE4.2, AVX2, and AVX512. It sees all the dimensions of the tensors, it knows the strides of the kernel, and so on and so forth. So for us to be able to beat it by such a margin just by stepping back and doing something simple at a higher level of abstraction kinda indicates that the approach it's using is running up against some conceptual limits. It's not that MKL-DNN's JIT isn't good -- it's great, and it's a credit to the engineers working on it. But the problem is that the smarts are being applied in the wrong place!

fmap(10000) 4 days ago [-]

Ok, my first reaction is that this is really wonderful work - straightforward and with a big payoff at the end.

But this really begs the question: why hasn't this been done before? People have been throwing resources at machine learning for a decade now, and somehow nobody has thought to perform instruction selection before executing a model to optimize the kernels used?

What other low-hanging fruit is out there? Automatic partitioning of networks over several GPUs and CPUs? Such dynamic load balancing algorithms have been available in the HPC literature since there was HPC literature. Fusing multiple primitives to simpler kernels? That's what linear algebra libraries have been doing for decades. Optimizing internal data layout (although that seems to be part of this paper)? Optimizing scheduling decisions to minimize data movement?


Also since the author seems to be reading this thread: Have you tried measuring the tree-width of the instruction selection DAGs you generate for the PBQP problem? The heuristics for solving these problems in llvm are applicable to tree-width <= 2, but could be extended to, e.g., tree-width <= 4 without too much slowdown. I wonder if there is still an iota of performance to be gained here. :)

andrew-wja(3919) 4 days ago [-]

Hi, author here. There is a staggering amount of low hanging fruit. I have been half-seriously blaming GEMM in correspondence. When you have a problem that looks like GEMM, it's such an attractive hammer to pick up that people just don't look beyond it to other techniques!

To answer your other questions: we already have auto load balancing and primitive fusion, albeit rudimentary, but optimizing scheduling is the obvious next step. We've extended this stuff to use ILP, and we're on our way to press at the moment!

Re: tree width: the tree widths are huge, but the solver library we're using handles them :)

andrew-wja(3919) 3 days ago [-]

I just realised the implication in your last question is that a heuristic solution to the PBQP problem is obtained. In fact, for a lot of networks, we get the optimal solution :)

DNN DAGs are big, but they are very simple structurally compared to the kind of stuff you see in a real ISA!

disdi(10000) 3 days ago [-]

Amazing work. What changes are needed in this library to make it benchmark GPUs ?

andrew-wja(3919) 3 days ago [-]

No changes are required in the library -- you just need to have some way of generating the code for the forward pass using e.g. cuDNN (which already has a heuristic selector!)

snaky(1999) 4 days ago [-]

'ML' is really confusing nowadays. Especially with such words as 'compiler' and 'optimizer'.

andrew-wja(3919) 4 days ago [-]

Fair! Edited.

Historical Discussions: Two Chinese Stalagmites Are a 'Holy Grail' for Accurate Radiocarbon Dating (December 15, 2018: 63 points)
2 Stalagmites Found in Chinese Cave Are 'Holy Grail' for Radiocarbon Dating (December 13, 2018: 6 points)

Two Chinese Stalagmites Are a 'Holy Grail' for Accurate Radiocarbon Dating

79 points 1 day ago by curtis in 59th position | Estimated reading time – 7 minutes | comments

The stalagmites from Hulu Cave, with sampling etch marks.
Image: Hai Cheng et al., 2018/Science

Since its inception in the 1950s, radiocarbon dating has proven indispensable to archaeologists and climate scientists, who rely on the technique to accurately date organic compounds. But a good thing just got better, owing to the discovery of two stalagmites in a Chinese cave containing a seamless chronological atmospheric record dating back to the last Ice Age.

An unbroken, high-resolution record of atmospheric carbon-12 and carbon-14 was found in a pair of stalagmites located within Hulu Cave near Nanjing, China, according to new research published today in Science. Because this record extends back to the last glacial period, to around 54,000 years ago, scientists are now equipped with a more accurate standard for use in radiocarbon calibration.

There's no question that radiocarbon dating has revolutionized archaeology. Armed with this technique, scientists can date organic compounds, such as bone, hair, wood, seeds, and shells. The further back in time we go, however, the less reliable carbon dating becomes, as the technique is reliant upon accurate historical measurements of atmospheric carbon, specifically the ratio of carbon-12 to carbon-14.

Carbon-14, or C14, is a rare form of carbon that, unlike carbon-12 (so-called "normal" carbon), is radioactive. C14 is an isotope consisting of six protons and eight neutrons, and it's in a perpetual state of decay, featuring a generous half-life of 5,730 years. Like normal carbon, C14 combines with oxygen to create carbon dioxide, which is absorbed by all living creatures, whether they're animals or plants. Consequently, the ratio of C12 to C14 in all living organisms is always the same as the ratio in the atmosphere.

Because atmospheric levels of C12 and C14 change over time, the specific ratio in an organic sample (e.g. bones, wood) serves as a timestamp for a living creature's death. When an organism dies, it stops acquiring new carbon. As time passes, the C14 decays like a ticking clock, but it's not replaced. By measuring the amount of radioactive decay, scientists can determine when a formerly living organism died.
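The "ticking clock" is plain exponential decay, so the uncalibrated ("nominal") age follows directly from the fraction of the original C14 that remains. A minimal sketch, using the standard 5,730-year half-life:

```python
import math

C14_HALF_LIFE = 5730.0  # years

def radiocarbon_age(ratio_remaining):
    """Nominal (uncalibrated) age in years from the fraction of C14 remaining.

    N(t) = N0 * 2^(-t / half_life)  =>  t = -half_life * log2(N / N0)
    """
    return -C14_HALF_LIFE * math.log2(ratio_remaining)
```

A sample retaining half its original C14 dates to one half-life (5,730 years); a quarter remaining means two half-lives, and so on.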

But there are limits to this dating approach, and it has to do with the C14 half-life. Organic objects can only be dated up to around 55,000 to 60,000 years ago, after which time the amount of C14 in a sample dwindles down to negligible proportions. What's more, calibration is critical to this technique; changes in the amount of atmospheric radiocarbon over time mean that radiocarbon dates have to be calibrated against a chronological, or calendrical, timescale.

Building these calendars is easier said than done. Ideally, scientists would like to have an accurate and unbroken chronological record of changing C12 and C14 atmospheric concentrations over time. This can be done, for example, by counting tree rings (also known as dendrochronology), which, as any 8-year-old will happily tell you, is a reliable way of determining the age of a tree. Unfortunately, few calibrated datasets that directly sample atmospheric carbon exist further back in time than the Holocene tree ring record, at approximately 12,600 to 14,000 years ago (obviously, trees don't live to be tens of thousands of years old, but ancient, fossilized trees can be dated using other methods). Radiocarbon dating is thus limited by the ability of a given material to provide an absolute age, while also preserving a record of changing atmospheric conditions.

But now, with the discovery and analysis of two special stalagmites in Hulu Cave, scientists have stumbled upon an unbroken record of atmospheric carbon dating back some 54,000 years. Instead of counting tree rings or studying coral reefs (another technique used to infer absolute dates), the researchers, led by Hai Cheng from the Institute of Global Environmental Change, at Xi'an Jiaotong University, analyzed the mineral composition inside the stalagmites. By dating hundreds of layers within these structures, which was done by using a highly reliable isotopic dating technique known as thorium-230 dating, the researchers were able to establish an unprecedented chronological baseline that can now be used for radiocarbon dating.
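Conceptually, those paired dates form a lookup table from nominal radiocarbon age to calendar age, and calibrating a new sample is an interpolation into that table. A toy sketch with invented calibration points (the 35,000 → 40,000 pairing echoes the worked example Edwards gives later in the article; real curves contain hundreds of paired measurements with uncertainties):

```python
import numpy as np

# Hypothetical calibration pairs (years before present):
# nominal C14 age measured in the sample -> Th-230 calendar age of the layer.
c14_ages      = np.array([30000.0, 33000.0, 35000.0, 37000.0])
calendar_ages = np.array([34000.0, 37500.0, 40000.0, 42500.0])

def calibrate(nominal_c14_age):
    """Map a nominal radiocarbon age onto the calendar timescale."""
    return float(np.interp(nominal_c14_age, c14_ages, calendar_ages))
```

The denser and more continuous the calibration pairs, the smaller the interpolation error, which is exactly why an unbroken 54,000-year record is such a big deal.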

"Up to now, different approaches for C14 calibration have their own constraints," Hai told Gizmodo. "For instance, it remains difficult [to use] tree-rings to calibrate the atmospheric C14 beyond the current limit of around 14,000 years before present. Corals do not accumulate continuously over thousands of years and are difficult to collect since those in the time range of interest are now largely submerged. Stalagmites, which can be excellent choices for thorium-230 dating, typically contain a significant fraction of carbon ultimately derived from limestone bedrock."

University of Minnesota geochemist Larry Edwards, a co-author of the new study, helped to develop the thorium-230 method back in the late 1980s, but he wasn't able to find ideal cave deposits to perform a study like this one.

"In addition to carbon from the atmosphere, cave deposits contain carbon from the limestone around the cave," Edwards told Gizmodo. "We thus needed to make a correction for the limestone-derived carbon. We discovered that the Hulu Cave samples contain very little limestone-derived carbon, and are therefore nearly ideal for this kind of study—hence our ability to complete a precise calibration of the C-14 timescale, a goal of the scientific community for the last nearly seven decades."

In the study, Hai and his colleagues present around 300 paired carbon 14 and thorium-230 dates extracted from the thin calcite layers within the Hulu Cave stalagmites. The average temporal resolution between each pair is about 170 years. These particular stalagmites, said Hai, are very special, containing "dead carbon" that's remarkably stable and reliable.

"As such, the C14 in the Hulu samples are mainly derived from atmospheric sources, which allows us to make a milestone contribution towards the refinement of the C14 calibration curve through the paired measurements of the C12/C14 and thorium-230 ages," said Hai, adding: "The new Hulu record has less uncertainty and resolves previously unknown fine-scale structure."

As the researchers write in their paper, the new calendrical record represents a "holy grail" for scientists, offering a high-resolution and continuous record of atmospheric C14 that covers the full range of the radiocarbon dating method. For archaeologists, it also means they can now date organic compounds between 14,000 and 54,000 years old with greater confidence, especially the older samples.

"For a sample that is actually 40,000 years old, the nominal C14 age would be about 35,000 years, and the age you would calculate from previous calibration data would be about 38,000 years, with a large uncertainty," explained Edwards. "So a difference of 2,000 to 5,000 years, depending upon how you chose to calibrate your age, prior to our work."

Excitingly, this research will also be of interest to climate scientists, who can use this data to study atmospheric changes over time.

It's a very cool result from a very cool and unlikely source—the slow drip, drip, dripping within a dark cave in eastern China.


All Comments: [-]

bostonpete(10000) about 9 hours ago [-]

Wouldn't Rosetta Stone be a better metaphor than Holy Grail?

hetman(10000) about 6 hours ago [-]

'Holy grail' is being used as an idiom and not a metaphor here.

nvahalik(3868) about 9 hours ago [-]

So... how do people actually know how much C14 was in something? Does dating only work if you know how much C14 was there originally? Otherwise how would you be able to measure the difference between what is there and what was there originally?

InclinedPlane(2250) about 3 hours ago [-]

Precisely, that's why these calibrations are so important.

You can start from the assumption that the natural C14 to C12 ratio is a constant. From there you can measure the actual ratio in the present time for a sample, and using math you can estimate how long ago the material was part of a living creature, since in the meantime the ratio would not have kept pace with the natural environmental ratio and instead would change depending on the half-life of C14. However, the assumption of a constant natural abundance of C14 is incorrect; there are lots of complex effects (like solar activity) that impact the ratio, which in turn affects the estimated age of an object. The more data you have on ancient C14 ratios, the better you are able to calibrate radiocarbon dating.

In the past ancient trees and lake/seafloor samples have been used because they have many individual samples from separate years, each of which can be independently dated (separate from radiocarbon analysis).

nonbel(3532) about 9 hours ago [-]

They calibrated it to the amount of C-14 found in artifacts with 'known' dates, eg from ancient egypt. Most of those 'known' dates originally come from this guy:

crygin(3724) about 9 hours ago [-]

The C14 timeline is calibrated from radiocarbon years to calendar years:

cpburns2009(10000) about 9 hours ago [-]

The idea is that the level of C14 in the atmosphere is constant, and living organisms will have about the same ratio of C14 to C12. When they die they no longer replenish the C14, so the amount decreases over time as it decays (it's radioactive). You can then calculate how old something is based on the ratio of remaining C14 to C12. I'm not sure what the process is that relates C14 to stalagmites, because they're inorganic.

acidburnNSA(3715) about 7 hours ago [-]

As others have stated, radiocarbon measurements can be compared to calibrations to some degree. Along these lines, see also the isochron method, which uses the slope of a line to back-calculate the original isotopic concentration. The fundamental assumption is that isotopic concentrations are uniform throughout a sample at the time of birth, but different elements solidify at different concentrations. So by looking at isotopic ratios around a sample you'll find higher concentrations of daughter nuclides where higher concentrations of parents once were. If you plot the data you can look at the slope and see the age. It's easier to see graphically [1].
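The back-calculation the comment describes can be sketched directly: on an isochron plot, the slope m of the fitted line satisfies m = exp(λt) − 1, where λ is the parent isotope's decay constant, so the age falls out of the slope. A minimal sketch (hypothetical helper, not from the comment):

```python
import math

def isochron_age(slope, half_life):
    """Age from the slope of an isochron line.

    Plotting daughter/reference vs parent/reference isotope ratios across a
    sample yields a line whose slope is m = exp(lambda * t) - 1, so
    t = ln(1 + m) / lambda.
    """
    lam = math.log(2) / half_life  # decay constant lambda
    return math.log(1.0 + slope) / lam
```

A slope of 0 means no daughter has accumulated (age zero), and a slope of 1 corresponds to exactly one half-life having elapsed.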


Historical Discussions: 20% of HTTPS server cert are incorrect, half considered valid by libs (December 16, 2018: 15 points)

Systematic Parsing of X.509: Eradicating Security Issues with a Parse Tree

75 points about 4 hours ago by snaky in 1999th position | Estimated reading time – 1 minutes | comments

Title:Systematic Parsing of X.509: Eradicating Security Issues with a Parse Tree

(Submitted on 12 Dec 2018)

Abstract: X.509 certificate parsing and validation is a critical task which has shown a consistent lack of effectiveness, with practical attacks being reported at a steady rate during the last 10 years. In this work we analyze the X.509 standard and provide a grammar description of it amenable to the automated generation of a parser with strong termination guarantees, providing unambiguous input parsing. We report the results of analyzing an 11M X.509 certificate dump of the HTTPS servers running on the entire IPv4 space, showing that 21.5% of the certificates in use are syntactically invalid. We compare the results of our parsing against 7 widely used TLS libraries, showing that 631k to 1,156k syntactically incorrect certificates are deemed valid by them (5.7%--10.5%), including instances with security-critical mis-parsings. We prove the criticality of such mis-parsing by exploiting one of the syntactic flaws found in existing certificates to perform an impersonation attack.
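The paper's core point - that lenient parsers accept syntactically invalid encodings - is easy to illustrate one level down, at the ASN.1 DER layer that X.509 is built on. A minimal sketch (my own illustration, not the paper's parser) of strictly parsing a DER length field, rejecting the indefinite and non-minimal forms that DER forbids but sloppy parsers often accept:

```python
def parse_der_length(buf):
    """Strictly parse a DER length field.

    Returns (length, bytes_consumed); raises ValueError on malformed input.
    """
    if not buf:
        raise ValueError('empty input')
    first = buf[0]
    if first < 0x80:                 # short form: the byte is the length
        return first, 1
    if first == 0x80:
        raise ValueError('indefinite length is forbidden in DER')
    n = first & 0x7F                 # long form: n length bytes follow
    if len(buf) < 1 + n:
        raise ValueError('truncated length field')
    if buf[1] == 0x00:
        raise ValueError('non-minimal encoding: leading zero length byte')
    length = int.from_bytes(buf[1:1 + n], 'big')
    if length < 0x80:
        raise ValueError('non-minimal encoding: short form required')
    return length, 1 + n
```

Two parsers that disagree on which of these cases to reject can compute different lengths for the same field, and from there different contents for security-critical fields - which is how the mis-parsing attacks in the paper arise.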

Submission history

From: Nicholas Mainardi [view email] [v1] Wed, 12 Dec 2018 14:16:02 UTC (139 KB)

All Comments: [-]

k-ian(10000) about 2 hours ago [-]

[comment about x509 being bad]

Dylan16807(10000) about 1 hour ago [-]

Sucks that you're being downvoted since at the time you posted 'X.509' wasn't in the title and it was reasonable context on why '20% of HTTPS server cert are incorrect, half considered valid by libs'

A lesson on thinking and acting long term

72 points about 11 hours ago by seapunk in 2267th position | Estimated reading time – 5 minutes | comments

Julien 🌲 @julien Co-founder and Chairman @Breather. Also New York Times bestselling author of Flinch, Trust Agents, etc. Dec. 15, 2018 3 min read

1/ there is this sense when you are young that your accomplishments need to be a list of things that seem impressive to others. A list of several items you did.

This isn't actually right, so here is another suggestion.

2/ I remember being 26 and writing about reading 52 books a year. I wrote blog posts about it. They got copied. It became 'a thing.' Now it's in Twitter bios. It looks impressive but it's insanely useless and I shouldn't have done it.

3/ what I should have known at that time is that only young idiots like myself, with no accomplishments, find lists of tiny achievements impressive. Anyone who has actually done anything of substance doesn't gaf

4/ what is actually difficult, and worthwhile, instead is to do ONE single thing for a very, very long time. It's much harder and much rarer and results in outlier outcomes much more often.

Of course you can find this out too late if you are chasing the dragon of Ted talks etc

5/ if I had only worked on a startup for a year, I would've gotten nowhere, the same way that if you lift for 3 months, it achieves nothing. Everything good in life comes from perseverance, but at the beginning, you're just like 'I need to be somebody!!!'

6/ If I had read one book 52 times - the right one - instead of racing through 52 books year after year, I think I would have been able to write Moby Dick by now. But the surface level stuff was too attractive, too shiny.

7/ all of this is because it's the nature of the mind and the body to give up once things are hard - it's why grit is so valuable. It's why Jeff Bezos is the richest guy and not the dude who did 10 startups over that same period. Compounding efforts produce outlier results.

8/ I'm lucky that I am 39 now and have done enough to feel that my monkey ambition brain is satisfied (for now). I was meeting a dude the other day and he goes 'why did you start your company, did you get sick of writing New York Times best sellers?'

Like ha ha, but he's right.

9/ Now that I'm on the other side of it, I realize a ton of that time was wasted. Focus is what gets you places. Being deeply good at a single thing, or good enough at two things.

In case you're wondering, for me, that's a-product and b-getting people to believe in me + my thing.

10/ so conclusion- choose one thing and spend 5 years on it. At the end of one year you won't have a ton of signal that it's working.

Example - My gf is one year into her ceramic sculpting and she just did her first show. People like what she does but she wants it to go faster.

11/ if she quits now, it dies (and she proves herself right).

But year 2 is easier. Your network is wider. More people see your thing and recognize it. Your second set of pieces get seen enough to develop your reputation. Etc.

12/ so on with year 3, 4, 5, etc. Now you're really somewhere! And most people have quit. So you're now way ahead in a much less crowded pack!

PS this is her thing in case you're wondering. ...

13/ in startups, same issue. How credible is the guy who raised $100M vs the guy who raised $10M?

Not 10 times more.

100x more.

14/ real conclusion now

When you feel like quitting, the thing you should really get out of it is not 'I quit' but instead

'ah! Most people probably quit at this time. If I continue, good things will happen and it'll be less competition.'

Have a good weekend, and get to work.

You can follow Julien 🌲.

____ Tip: mention @threader_app on a Twitter thread with the keyword "compile" to get a link to it.

Enjoy Threader? Tell @jack.

Download Threader on iOS.

All Comments: [-]

meta_AU(3988) about 2 hours ago [-]

the same way that if you lift for 3 months, it achieves nothing.

I'm probably missing the point - but that first 3 months of training is where you make the fastest growth and can double the weight you lift (from untrained to novice). Sure - you aren't going to compete at a world level after 3 months, but it doesn't seem right to describe it as 'nothing'.

pseudoramble(10000) about 2 hours ago [-]

Agreed. Though re-reading it, I think the point may have been more about quitting after 3 months and not so much that you don't gain anything the first 3 months.

> 5/ if I had only worked on a startup for a year, I would've gotten nowhere, the same way that if you lift for 3 months, it achieves nothing. Everything good in life comes from perseverance, but at the beginning, you're just like 'I need to be somebody!!!'

It may be more about persevering beyond that arbitrary time limit and trying to get to a certain real goal instead... maybe.

ThomPete(639) about 8 hours ago [-]

4/ what is actually difficult, and worthwhile, instead is to do ONE single thing for a very, very long time. It's much harder and much rarer and results in outlier outcomes much more often.

A friend of my parents spent more than 30 years building, by hand, a half-size replica of a famous clock called 'Jens Olsen's World Clock' (as in literally everything was done by hand, including the wheels, some of which take 400 years to go around their own axis)

This is on top of a bunch of other things he built. If there is a modern-day da Vinci, he would be it (he paints, plays music, writes assembler like it's an art form, understands mechanical and electrical engineering, reverse-engineered the Mac II back in the day, I could go on)

He never brags about it so I have to do it because I think there are far too few people like him left and they are the ones who really do things in life.

There is much to be learned by someone like him.

captainperl(10000) 39 minutes ago [-]

Similarly, you can see a remarkable model airplane collection by Edward Chavez at SFO now. It was sponsored over two decades by the Nut Tree restaurant owner:

SubiculumCode(3868) about 7 hours ago [-]

So many undergraduate research assistants want to be in the lab to do the minimum time and effort, get the check mark on their CV, and move on. They then go interview with an impressive-looking list of labs and projects they've worked in, but when you inspect their thinking there is little depth. Then you find the junior specialist applicant who stayed at one lab their entire undergraduate career, shows some understanding of, and the language surrounding, the basic problems in that field, and the mental dedication to work through the hard problems without bailing. That is what I valued in candidates: no matter how tough it got, they didn't give up.

nopinsight(742) about 2 hours ago [-]

Markets reward useful uniqueness. Long-term focus yields mastery few have achieved.

A path even more likely to lead to success is combining two types of expertise in a fruitful way. In today's global economy, it is very hard to be among the world's top 100 or 1,000 for a particular skill, even with years of practice. It is quite possible to be among the top 1,000,000 in each of the two skills that, when usefully integrated, result in a fairly unique output or skill set.

To elaborate on YC's motto:

Creative integration toward things people want.

hyperpallium(2650) about 2 hours ago [-]

Markets reward useful defendable uniqueness.

Making something people want is a foundation, but it's no good if someone else copies it (and perhaps does it better and markets it better through capitalization).

I think the YC playbook is that if you keep improving it, it's hard for copiers to catch up (so don't worry about competitors/copiers in themselves; look forward, not behind you). There's also a market advantage to being first: people like that, and you've had longer for news about you to spread.

Historical Discussions: Show HN: Debucsser, CSS debugging made easy (December 11, 2018: 63 points)
Debucsser: CSS debugging tool with an unpronounceable name (December 10, 2018: 1 points)

Show HN: Debucsser, CSS debugging made easy

63 points 5 days ago by lucagez in 3814th position | Estimated reading time – 3 minutes | comments


CSS debugging tool with an unpronounceable name



If you are using a bundler:

npm install debucsser

Alternatively, download debucsser.js from the /module folder and link it in your HTML.

A Chrome extension is under development.


Debucsser is a simple CSS debugging tool made to be unobtrusive in your workflow.

I often find myself applying 'outline' to a lot of elements on the page to see their dimensions.

With Debucsser I simply have to hold CTRL and move my mouse around to see the dimensions in px and apply an outline class to every element I hover.

If you hold CTRL + SHIFT you apply the outline class to all the elements on the page by adding a global class.

You can configure some parameters.

I find it handy to be able to specify a custom class to apply to different elements without needing to comment and uncomment my CSS files.


// only if you installed via NPM
import Debucsser from 'debucsser';

// pass all the custom properties you want
const config = {
  color: 'palevioletred', // color of the outline
  width: '4px', // width of the outline
  grayscaleOnDebugAll: true, // apply grayscale filter to every element
  customClass: 'exampleClass', // a class existing in your stylesheet
};

// init the debugger
const debug = new Debucsser(config).init();

When you have done this, simply hold CTRL and move the mouse around on the page or hold CTRL + SHIFT.



outline color.

Type: string. Default: palevioletred


outline width.

Type: string. Default: 3px


outline style.

Type: string. Default: solid


Apply grayscale filter on hovered element while holding CTRL.

Type: bool. Default: false


Apply grayscale filter on all elements while holding CTRL + SHIFT.

Type: bool. Default: false


Apply custom class on hovered element while holding CTRL.

Type: string. Default: null


Set the key to use as an alternative to CTRL.

Type: number. Default: 17


Set the key to use as an alternative to SHIFT.

Type: number. Default: 16



  • make a usable Chrome extension (the current one is very experimental)
  • improve default styling of label

If you have any ideas on how to make Debucsser better, don't hesitate:

Fork ➡ new branch ➡ PR



All Comments: [-]

iraldir(3888) 5 days ago [-]

Feel like this should be a browser extension instead of a library

Kyro38(10000) 5 days ago [-]

or just use it as a bookmarklet ?

m1guelpf(3940) 4 days ago [-]

Thinking of making one actually

bobthepanda(10000) 5 days ago [-]

Does this offer the ability to copy the applied styles of the element you're hovering over?

lucagez(3814) 5 days ago [-]

Kind of. You can apply a custom class to every element you hover on

ZachSaucier(3836) 5 days ago [-]

Seems pretty useless to me. It's way less effective than dev tools and more of a hassle to implement into projects.

dang(160) 4 days ago [-]

This breaks both the HN guidelines and the Show HN guidelines. Could you please review them and not do that in the future?

mkoryak(3982) 5 days ago [-]

Don't worry about the name. A bad name won't hurt your project. What will hurt your project is calling out how bad your name is in the project description.

You have a short sentence to sell me a reason to read more of your readme but instead you use that time to point out the issues with the project name.

(I wrote a thing called floatthead, which is an awful name, but no one has ever complained about _that_)

cjohansson(10000) 5 days ago [-]

Calling out the bad name was what really got my attention for this project, in a good way, so I don't think it's generally a bad practice

jspash(10000) 5 days ago [-]

Introduce the project like this

'Debucsser, CSS debugging made easy' (pronounced de-buk-sir)

better yet (or not)

You could find someone to draw a little cartoon in the style of the New Yorker showing a herd of deer, one of them with enormous antlers, and two men with hunting gear (a butler and his master, or a king and his servant. something like that).

The caption would read. 'Which one do I shoot?' 'De buck, sir. Only de buck.'

ironmagma(3824) 5 days ago [-]

floatthead isn't an awful name. It's essentially meaningless. Bad names can make projects not take off as much as they would have due to ambiguity, unpleasant associations, and difficulty of expressing support. We often don't hear about those projects precisely because they don't succeed.

cldellow(3928) 5 days ago [-]

This is neat!

My wife is beginning to learn some web design. Her laptop has a low-res screen (1366x768) so dev tools takes up a lot of space when open. When she's tinkering with layouts, she occasionally adds/removes borders to help interpret what's going on.

It's probably most useful to people who are starting out, so my one suggestion might be to make it even easier to install -- e.g. if `npm install debucsser` would make it work with a create-react-app app (and other common scaffoldings).

lucagez(3814) 5 days ago [-]

I didn't think about that, thank you for pointing it out

Historical Discussions: CTML – a simple header-only HTML document generator (December 09, 2018: 1 points)

Show HN: Single-header C++11 HTML document constructor

62 points about 7 hours ago by tinfoilboy in 10000th position | Estimated reading time – 5 minutes | comments


CTML is a C++ HTML document constructor designed to be simple to use and implement. It has no dependencies on any other projects, only the C++ standard library.


For use in a project, you may copy the ctml.hpp file into your project and include it that way. Alternatively, if you use CMake, you can add CTML as a dependency to your project.


Tests are included with the library and are written using the Catch2 header-only test library. These tests are located in the tests/tests.cpp file.



Every class and enum in CTML is enclosed in the CTML namespace. For instance, the Node class would be under CTML::Node.


Most methods for operating on CTML::Node instances are chainable, meaning that you can string multiple operations together in a single expression.


The basis of CTML is the CTML::Node class, which allows you to create simple HTML nodes and convert them to a std::string value.

The simplest valid HTML node that can be represented is an empty paragraph tag, which can be created with the following code:

CTML::Node node('p');

Which would output in string form as:

<p></p>

To get this string output, you would use the CTML::Node::ToString(CTML::StringFormatting) method. This method allows for passing in either CTML::StringFormatting::SINGLE_LINE or CTML::StringFormatting::MULTIPLE_LINES.

You can add simple text content to this Node by changing that line to the following:

CTML::Node node('p', 'Hello world!');

Which would output as the following:

<p>Hello world!</p>

You can quickly add classes and IDs to a Node (possibly attributes in the future) with a syntax in the name field that mimics Emmet abbreviations. This is shown in the following definition:

CTML::Node node('p.text#para', 'Hello world!');

Which would output the following HTML:

<p class='text' id='para'>Hello world!</p>

You can then append children to these Node instances by using the CTML::Node::AppendChild(CTML::Node) method, like below:

CTML::Node node('div');
node.AppendChild(CTML::Node('p', 'Hello world!'));

Which would give this output:

<div><p>Hello world!</p></div>

You can also append more text to the parent node with the CTML::Node::AppendText(std::string) method, which simply adds a Node with the type of TEXT to the children. This is shown below:

CTML::Node node('div');
node.AppendChild(CTML::Node('p', 'Hello world!'))
    .AppendText('Hello again!');

Which would output as:

<div><p>Hello world!</p> Hello again!</div>

You can also set attributes on a Node, modifying the below example to do so looks like:

CTML::Node node('div');
node.SetAttribute('title', 'Hello title!')
    .AppendChild(CTML::Node('p', 'Hello world!'))
    .AppendText('Hello again!');

Which would output as:

<div title='Hello title!'><p>Hello world!</p> Hello again!</div>


To create an HTML document that contains these nodes, you can use the CTML::Document class. This class includes doctype, head, and body nodes for adding nodes to.

A simple HTML document would be created with:

CTML::Document document;

You can then output this as a string with the CTML::Document::ToString(CTML::StringFormatting) method.

Using that you would get an output of:

<!DOCTYPE html><html><head></head><body></body></html>

You can then append nodes to it using the CTML::Document::AppendNodeToHead(CTML::Node) method or the CTML::Document::AppendNodeToBody(CTML::Node) method.


CTML is licensed under the MIT License, the terms of which can be seen here.

All Comments: [-]

Negitivefrags(10000) about 5 hours ago [-]

I thought about this for a little while, and I think I don't like the API for this library. There are two things in particular I don't like. The first is that I think I would prefer to have helper types for each tag type so you don't have to include them as strings all the time, and the second is that I don't like the AppendChild approach that this library takes.

I would change it so that you pass the document to the constructor of each element and the scoping of each variable effectively determines the relationships. As a 10 second example of what I mean:

    HTML::document d;
    HTML::body b( d );
    HTML::div div( d );
    HTML::p p( d );
    d.text( 'Hello world' );

which would produce:

    <html><body><div><p>Hello world</p></div></body></html>

The reason I like this is that it maps very well onto a C++ programmer's natural understanding of stack frames and RAII, and in addition it can be implemented without needing to store any state inside the node classes. This means that only HTML::document would need to actually allocate any memory, and it would just be a single text stream.

This wouldn't create a node hierarchy in memory, so it's not a DOM like this library creates, but if you are just looking to output HTML quickly, then I think it would be easier to use and more efficient.
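The scope-driven approach described above can be sketched in Python, with context managers standing in for C++ RAII; all names here are invented for illustration and are not part of CTML:

```python
# A writer where entering a scope emits the open tag and leaving it
# emits the close tag. Only the writer holds state (a text buffer);
# the "tag" scopes themselves store nothing.
from contextlib import contextmanager

class HtmlWriter:
    def __init__(self):
        self.parts = []

    @contextmanager
    def tag(self, name):
        self.parts.append(f"<{name}>")
        yield self
        self.parts.append(f"</{name}>")

    def text(self, s):
        self.parts.append(s)

    def result(self):
        return "".join(self.parts)

d = HtmlWriter()
with d.tag("html"), d.tag("body"), d.tag("div"), d.tag("p"):
    d.text("Hello world")

print(d.result())
# <html><body><div><p>Hello world</p></div></body></html>
```

As the comment notes, this streams straight to a buffer rather than building a DOM, so it trades away selector queries and later mutation of the tree.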

johannes1234321(3868) about 1 hour ago [-]

A reason why I don't like this approach is that I have to know where an element goes when creating it.

I prefer creating an element and then moving it into place, so a module of my application can create some structure in isolation.

ajss(10000) about 4 hours ago [-]

I really dislike that.

The document is mutable and goes through a whole sequence of states that aren't what you want?

Constructing a body object with a document argument modifies the document?

tinfoilboy(10000) about 5 hours ago [-]

While I do agree with the idea of outputting HTML quickly, I think that in the end I'd want to emulate the DOM more closely. For instance, I've been thinking about adding a simple parser to the library so that I could use it in making a web scraper. With a replication of the DOM, I could then easily find nodes that I'd like to grab from the scraper. In addition, I was also thinking of adding actual element grabbing via selectors a la CSS, which would require a DOM representation.

However, the helper types could be useful, and might be able to be implemented as simple aliases to a Node. Also, the reason I take the approach of appending child nodes is for representing actual HTML easier. For instance, with your .text example (at least with my limited glance on it), you can't do something such as <p>Hello, <span>world!</span> Welcome!</p>, which was actually a previous problem that I had with another version of the library.

tlb(1313) about 5 hours ago [-]

It's worth getting the escaping right in a library like this. For example, at

  for (const auto& attr : m_attributes)
    output << ' ' << attr.first + '=\'' << attr.second + '\'';
it'll generate incorrect HTML if an attribute value has a ' character. Although early versions of HTML were kind of vague on how escaping was supposed to work, the HTML5 standard explains it in detail.
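The escaping tlb describes boils down to replacing '&' and the delimiting quote character before emitting an attribute value. A minimal sketch of the HTML5 rule, in Python for brevity (the real fix belongs in CTML's C++ serializer; these helper names are invented):

```python
def escape_attr(value):
    """Escape a value for a double-quoted HTML5 attribute: '&' must
    become '&amp;' and the delimiting quote '"' must become '&quot;'.
    Escape '&' first so it doesn't re-escape the '&quot;' we insert."""
    return value.replace("&", "&amp;").replace('"', "&quot;")

def render_attrs(attrs):
    """Serialize a dict of attributes with escaped values."""
    return "".join(f' {k}="{escape_attr(v)}"' for k, v in attrs.items())

print(render_attrs({"title": 'He said "hi" & left'}))
# prints (with a leading space): title="He said &quot;hi&quot; &amp; left"
```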
tinfoilboy(10000) about 5 hours ago [-]

I'll add escaping attributes to the library according to the spec. Thank you for the heads up!

Historical Discussions: How Peter Jackson Made WWI Footage Seem Astonishingly New (December 16, 2018: 39 points)

How Peter Jackson Made WWI Footage Seem Astonishingly New

57 points about 8 hours ago by IfOnlyYouKnew in 10000th position | Estimated reading time – 6 minutes | comments

Stereo D also worked on converting the film to 3-D for a more immersive effect, a sense of being on the battlefield. And Park Road enhanced the experience with sound editing to rival that of "The Lord of the Rings." But explosions, gunshots and tank engines aren't as surprising as the moments when the soldiers speak.

"We got some forensic lip readers, who, before this, I had no idea actually existed," Jackson said. These experts, who often work with law enforcement to help determine the words of people in security camera video, reviewed the archival footage to reconstruct, as nearly as possible, what the soldiers were saying.

Voice performers were hired to stand in for the soldiers, but Jackson's team, mindful that regiments were drawn from different regions of Britain, made sure the actors came from those areas and had accurate accents. In a similar vein, military historians provided ideas for what off-camera officers' commands might have been, and that information made its way into the film as well.

Even with all of these moving parts, and with footage that could have told a dozen different war stories, Jackson tried to keep his film specific.

"I didn't want to do a little bit of everything," he said. "I just wanted to focus on one topic and do it properly: the experience of an average soldier infantryman on the Western Front."

"They Shall Not Grow Old" is playing Dec. 17 and Dec. 27 at theaters around the country. Go to for more information.

No comments posted yet: Link to HN comments page

Pendulum Waves

52 points about 3 hours ago by kp25 in 2344th position | Estimated reading time – 3 minutes | comments

What it shows:

Fifteen uncoupled simple pendulums of monotonically increasing lengths dance together to produce visual traveling waves, standing waves, beating, and random motion. One might call this kinetic art and the choreography of the dance of the pendulums is stunning! Aliasing and quantum revival can also be shown.

How it works:

The period of one complete cycle of the dance is 60 seconds. The length of the longest pendulum has been adjusted so that it executes 51 oscillations in this 60 second period. The length of each successive shorter pendulum is carefully adjusted so that it executes one additional oscillation in this period. Thus, the 15th pendulum (shortest) undergoes 65 oscillations. When all 15 pendulums are started together, they quickly fall out of sync—their relative phases continuously change because of their different periods of oscillation. However, after 60 seconds they will all have executed an integral number of oscillations and be back in sync again at that instant, ready to repeat the dance.
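Since a simple pendulum's period is T = 2π√(L/g), the lengths implied by the oscillation counts above can be computed directly. A rough sketch (idealized point-mass pendulums; a real build also tunes for the effective pivot-to-center distance):

```python
import math

G = 9.81      # gravitational acceleration, m/s^2
CYCLE = 60.0  # seconds per complete "dance"

def pendulum_length(oscillations):
    """Length (m) of a simple pendulum completing the given number of
    oscillations per 60 s cycle, from T = 2*pi*sqrt(L/g)."""
    period = CYCLE / oscillations
    return G * (period / (2 * math.pi)) ** 2

for n in range(51, 66):  # 51 oscillations (longest) .. 65 (shortest)
    print(f"{n} oscillations: {100 * pendulum_length(n):.1f} cm")
```

This puts the longest pendulum at roughly 34 cm and the shortest at roughly 21 cm, so the whole 15-pendulum span fits on a desktop apparatus.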

Setting it up:

The pendulum waves are best viewed from above or down the length of the apparatus. Video projection is a must for a large lecture hall audience. You can play the video below to see the apparatus in action. One instance of interest to note is at 30 seconds (halfway through the cycle), when half of the pendulums are at one amplitude maximum and the other half are at the opposite amplitude maximum.


Our apparatus was built from a design published by Richard Berg 1 at the University of Maryland. He claims their version is copied from one at Moscow State University. Dr. Jiří Drábek at Palacký University in the Czech Republic has informed us that it was originally designed and constructed by Ernst Mach when he was Professor of Experimental Physics at Charles-Ferdinand University (today known as Charles University) in Prague around the year 1867. The demonstration is used in the Czech Republic under the name Machův vlnostroj—the 'Wavemachine of Mach.' The apparatus we have was designed and built by Nils Sorensen.

James Flaten and Kevin Parendo2 have mathematically modeled the collective motions of the pendula with a continuous function. The function does not cycle in time and they show that the various patterns arise from aliasing of this function—the patterns are a manifestation of spatial aliasing (as opposed to temporal). Indeed, if you've ever used a digital scope to observe a sinusoidal signal, you have probably seen some of these patterns on the screen when the time scale was not set appropriately.

Here at Harvard, Prof Eric Heller has suggested that the demonstration could be used to simulate quantum revival. So here you have quantum revival versus classical periodicity!

1Am J Phys 59(2), 186-187 (1991).

2Am J Phys 69(7), 778-782 (2001).

All Comments: [-]

aethr(3602) about 2 hours ago [-]

Without analyzing it too much this seems to be a perfect visualization of the mathematics of music theory. The lengths of string become quite a direct metaphor for the wavelengths of notes on the music scale, and seeing them move together in progressively different 'groups' of notes I imagine closely matches traditional chord structures in different keys.

Quite mesmerizing, and mathematically satisfying at the same time!

agumonkey(929) 17 minutes ago [-]

jazz is variable actuators/pendulums

avip(3966) about 1 hour ago [-]

For this analogy to hold, you'd need the 8th ball's thread to be 4× longer than the first's, which is very far from the actual ratios used.

mrob(10000) about 1 hour ago [-]

It's a closer analogy to the 'beat frequencies' you get with constructive and destructive interference of waves of similar frequency. The envelope of the total amplitude of the pendulums oscillates with lower frequency than the amplitude of any individual pendulum.

I used Audacity to mix sine waves of the frequencies of the pendulums (51/60Hz, 52/60Hz, ..., 65/60Hz), multiplied by 440 to get them to audio frequency, and it sounds a lot like the UFO sound effect from the 1970s TV series 'UFO'.
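The mix mrob describes is easy to reproduce; a sketch below (the sample rate and one-second duration are arbitrary choices, not the exact Audacity settings):

```python
import math

RATE = 44100  # samples per second
# 15 components: (51/60)*440 Hz up to (65/60)*440 Hz
FREQS = [440 * n / 60 for n in range(51, 66)]

def mix(t):
    """Normalized sum of the 15 sine components at time t (seconds)."""
    return sum(math.sin(2 * math.pi * f * t) for f in FREQS) / len(FREQS)

one_second = [mix(i / RATE) for i in range(RATE)]
```

Adjacent components sit 440/60 ≈ 7.3 Hz apart, so all 15 sines come back into phase every 60/440 ≈ 0.136 s; that slow beat envelope is what gives the mix its throbbing character.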

jkqwzsoo(10000) 44 minutes ago [-]

Less the notes and scales themselves, and more polyrhythms and phasing. This reminded me of Steve Reich's Clapping Music, and similar phasing-heavy works (Piano Phase, Drumming, Music for Pieces of Wood, 6 Pianos/Marimbas, and Pendulum Music).

madelinw(10000) about 1 hour ago [-]

I recreated this in CSS/JS a few years ago, inspired by this video.

abecedarius(2229) 29 minutes ago [-]

Same principle, though not a physical simulation:

By moving the mouse you can amuse yourself with various Moiré effects.

Historical Discussions: Show HN: Try running deep learning inference on Raspberry Pi (December 13, 2018: 52 points)

Show HN: Try running deep learning inference on Raspberry Pi

52 points 4 days ago by nineties in 3610th position | comments

What is Actcast?

Deep learning has enabled remarkable progress on a variety of pattern recognition tasks such as image recognition. Actcast is an IoT platform service in which users can obtain physical world information with deep learning inference on the edge devices and link them to the Web to construct advanced IoT solutions.


Actcast utilizes the concept of edge computing which can significantly reduce costs for data transfer and servers and decrease leakage risk of privacy and confidential information. Raspberry Pi, a low-price and pocket-sized popular computer, is the first supported edge device.

Getting Started

All Comments: [-]

moconnor(2381) 2 days ago [-]

You can run TensorFlow Lite on a Pi with no problems at all. You can even train and run basic gesture recognition with full TensorFlow on a Pi.

Source: wrote a tutorial doing this for Arm (

contingencies(3286) 2 days ago [-]

There are gesture recognition sensors available that don't even need a general purpose CPU, eg. Broadcom APDS-9960.

kankroc(10000) 3 days ago [-]

Since this is about computing at the edge, I was wondering if anyone had an opinion on the intel neural compute stick 2?

I got one recently and I am not convinced, but I suppose someone on HN might have legitimate use cases.

tanujt(10000) 2 days ago [-]

I think the NCS2 has potential, but it's still early. One thing on their roadmap is to support ARM, which I view as a must-have for low-powered edge applications. Right now it only works with OpenVINO and certain x86 systems. So I see the real use cases coming when it can be used in settings with limited power (and limited or variable network connectivity).

nineties(3610) 4 days ago [-]
metildaa(10000) 3 days ago [-]

Why would I use this rather than Snips? Edge computing is great, but in a closed source, hard to audit form it will be a tough sell to get 3rd party developers onboard.

Stammon(10000) 3 days ago [-]

We don't need more proprietary machine learning devices in our homes. I'd appreciate this so much more, if it was open source, so I can reshape it to whatever use case I have.

There are plenty of viable business models, that give you your well earned money and us the option to customize and understand our devices.

zozbot123(10000) 3 days ago [-]

> We don't need more proprietary machine learning devices in our homes. I'd appreciate this so much more, if it was open source

It doesn't seem to do anything special? You can probably run your favorite machine learning framework on the Raspberry Pi, and it will work - albeit using the ARM cores and NEON only. Now, machine learning and inference _using the Raspberry Pi's GPU part_ (which is broadly documented, unlike most GPU hardware) would be a gamechanger, if only for educational scenarios.

bko(1508) 3 days ago [-]

I agree, so I looked at running image detection offline on a raspberry pi and wrote a post about it. It can't do anywhere near real time object detection on the larger YOLO models, but real time detection is often unnecessary. Also there are smaller models (e.g. yolo-mini) that can likely give you an acceptable frame rate.

Historical Discussions: Keyboardio Kickstarter Day 1278: A startling discovery (December 16, 2018: 17 points)

Keyboardio Kickstarter Day 1278: A startling discovery

52 points about 20 hours ago by paulannesley in 3875th position | Estimated reading time – 30 minutes | comments

TL;DR: Due to serious financial misconduct at the factory that makes the Model 01, keycap sets that we believed were in the mail to you have yet to be shipped. We're working to resolve the problem, but it may take us a while. The past few weeks have been stressful but Keyboardio is in good shape. Curious about what happened? Read on.

In happier news, the Model 01 is back in stock for immediate shipment at

Words we never wanted to hear

'I'm not saying anything else without a lawyer present.'

There's basically no situation in which these words indicate that things haven't gone badly, badly wrong.

These are the words someone says after they get caught.

These are the words that our account manager from the factory that makes the Model 01 said at about 6PM on Tuesday, November 27, after we'd finally figured out how much she'd stolen.

A backer update we never expected to write

A month ago, we were pretty sure that this backer update was going to be the last update about the Model 01. We thought we were going to be able to report that all the keycaps you've ordered had been shipped, and that most had already been received. We had thought that we were going to be able to report that our initial contract with our factory had been (mostly) successfully completed with the delivery of the MP7 shipment of keyboards, and that we were in discussions about whether to move production to a new supplier or to continue production with them now that we'd solved most of the manufacturing issues.

This is not that backer update. And we're pretty sure it won't be the last substantial backer update.

This is not a backer update that makes us look good. This is not a backer update that makes the factory look good. This is not a backer update that makes our account manager look good.

This is a backer update that has been hard to write. This is a backer update that includes details we're hesitant to share. This is a backer update that doesn't include all the details we'd like to share.

The situation we're writing about is not yet resolved. It is unlikely that it will be resolved to the satisfaction of all parties involved. Before writing about this situation, we had to check with our lawyers. There is a chance that writing about this situation may influence the outcome, but we've decided that we're willing to take that risk. From the start, we've tried to be as up-front with you as we can about the trials and tribulations about low-volume manufacturing in China. We're not about to stop now.

Whatever the outcome of this situation, we still expect to honor all of our commitments to you.

Back to the beginning

One of the reasons we chose the factory we ended up using for the Model 01 was that the account manager seemed a little bit more engaged and collaborative than her peers at many of the other factories we met with. (In previous updates, we've sometimes referred to her as our "salesperson", as that was the department she was in, although her role encompassed a lot more than that.) Her English was good. It seemed like she fought hard for what her customers wanted and that she was committed to managing the full production process. She told us that because the factory's regular project management team didn't speak English, she'd be our point-person throughout the manufacturing process. She seemed like a little bit of a control freak. It was nice that she was the 'Director of Overseas Sales' and seemed to have significant pull inside the organization. In contrast, our sales contact at our second choice factory was so new and had so little internal influence that she couldn't get us even a single customer reference.

What we didn't know at the time was that our account manager had been with the company for only a few months, and that we were her first project with them.

We paid the initial deposit for the tooling and the keyboards directly to the factory and got started.

Right from the beginning, there were problems. The factory started making injection molding tooling before we'd approved the final design. That caused months of delays. The factory outsourced the injection molding to partners without telling us (despite assurances to the contrary in the contract). Small communications issues caused outsized delays. Throughout this process, our account manager kept in constant contact with us, to the point of nightly calls on both weekdays and weekends. We genuinely believed she was working hard on our behalf.

There were occasional 'weird' things that felt like they might be lies, but every time we independently verified one of them, it checked out. As time went on, we talked to a lot of folks who've been manufacturing hardware in China. It became clear that small companies doing business in China just run into weird problems. However, if you've read our backer updates over the past few years, you will be well aware that we've run into more than our share of weird problems. In the words of one industry veteran we talked to, 'every problem you guys have run into is something that happens. But nobody has all the problems.'

It was only much later that we would realize how prescient this comment had been.

A new bank account

When it came time to pay for the first mass production run, our account manager sent us an invoice that included bank details that differed from those on the initial invoice. When we asked her about this, she said that the new account belonged to one of the factory's partners.

This is not entirely unheard of, but immediately set off alarm bells. We asked the account manager to confirm in writing that this change was 100% correct and above-board. When she did, Kaia forwarded this confirmation onward by email to the factory owner. The factory owner didn't get back to us and we were already desperately late to get these keyboards shipped. The total amount of the invoice was relatively small compared to the deposit we'd already paid the factory. And Jesse was physically present in China while this was happening. So we paid.

The keyboards shipped out.

We didn't think too much more about that alternate bank account because, well, all the right things appeared to happen.

Controlling the relationship

Unbeknownst to us, our account manager had gotten the factory to ship out our order of keyboards, even though they hadn't received any payment from us. Much later, we learned that she'd lied to the factory owner, telling him that we were broke and needed to sell that batch of keyboards before we could afford to pay for them. She held onto our money for a while and then paid it out to the factory.

This was how she started to poison the factory against us.

At around the same time, she started telling us about how poorly managed the factory was, but that she was running interference, solving problems on our behalf. At one point she told us about how gullible the factory owner was and that he got taken advantage of by his workers all the time.

If this were a novel, the foreshadowing would have been a little heavy-handed.

Over time, she told us lies about serious moral failings on the part of everybody at the factory we might approach to talk about the problems we were having. At the time, the things she said sounded believable. Much later, we'd realize that they were part of a plan to ensure that she was the trusted gatekeeper for all interactions between us and the factory.

When it came time to ship the second mass-production run of keyboards, Jesse had a meeting with our account manager and the factory owner in the factory's conference room. The first part of the meeting was in Chinese and then the factory owner had another commitment. After he left, the account manager told us that the project had been dragging out and the factory was out of pocket for raw materials, many of which had increased in price since we signed the contract. She said that the factory owner demanded we pay a little more up front and that this would reduce the amount we needed to pay for the remainder of the shipment. We're nice people who wanted to have a good relationship with our factory. And the money was going to the factory eventually. We knew that the bit about raw materials getting more expensive was true—China's been on a major anti-pollution kick, which has caused a lot of prices to spike. All in all, it felt a little bit funny, but we agreed.

You can probably guess most of what happened next. The account manager held onto all of our money for as long as she possibly could. When we started to get frantic about shipping, she paid the factory owner just enough money to convince him to release the shipment. When we met a couple weeks ago, the factory owner said that she paid this money in Chinese RMB. When he expressed concern about this, since we'd agreed to pay in US Dollars, she told him that, again, we were broke. She said that she was loaning us the money for this shipment, so we could try to recover our business.

Similar situations played out for subsequent shipments of keyboards. To us, the factory seemed moderately incompetent and disorganized. To the factory, we seemed like a small-time deadbeat client who might never make good on their promises to pay.

Over the course of 2017, Jesse made several extended trips to Shenzhen to work with the factory to solve a variety of design, manufacturing, and supply chain issues. When he was on the ground at the factory, issues got solved far faster, but that itself wasn't a huge red flag. Our relationship with the account manager seemed pretty good. Before one of the trips, she asked if Jesse could bring American prenatal vitamins, as she and her husband were trying to have a baby. She even commissioned a portrait of our son from an artist at the famous Dafen Artist Village in Shenzhen. (Perhaps unsurprisingly, while she sent us photos of the portrait, she never sent the actual portrait.)

Sometime in 2018, the account manager started telling us that she was planning to quit working for the factory once our project was over. She said that she'd actually started a factory with some partners and that they were already making mice and planned to expand to keyboards in the near future. She even tried to get us interested in investing in her new factory.

Pushing things too far

Keyboard shipments in 2018 happened. They weren't on time. They were not without issues. But they happened. Well, up until we got to the MP6 shipment, which was supposed to happen at the end of August. The account manager told us that due to scheduling issues, they wouldn't start assembly until early September. And then things kept dragging out by days and weeks. Finally in the middle of October, the account manager told us that we would need to prepay for MP6 and MP7 or the factory would refuse to complete the assembly.

Jesse actually came close to getting on a plane at this point, but we had some unmoveable family commitments and already had a trip to Shenzhen on the books in December to talk to factories about our next product.

We figured that this was part of a ploy on the part of the factory to get us to break the contract, so they could quit. Jesse told the account manager this. She swore that she would not quit her job at the factory until this project was successfully delivered.

We were upset. This was a straight up violation of the contract and of normal standards of business. The account manager told us that the factory was having cashflow issues and that if we didn't prepay, there was no way we were getting our keyboards. Since we had over 100 customers who'd bought these keyboards when we'd been promising that they'd ship in August or September, we bit the bullet and agreed to pay for MP6 before delivery. MP7 would complete our order with the factory. We said we couldn't possibly pay in full, as we'd have zero leverage if the factory failed to deliver. The account manager took a day to 'negotiate' with the factory and said she'd gotten them to agree to accept half of the remaining amount due as a gesture of good faith. She wrote into the payment agreement that the factory would return the payments if they did not hit the committed delivery dates.

When we paid this money, we knew the MP6 keyboards existed—we'd already had our third-party quality control agent check both the assembly line and the assembled keyboards. On October 26th or so, our account manager committed to get the keyboards shipped out ASAP. At best, that means keyboards arrive in Hong Kong the same day they ship; more typically, they arrive at our warehouse in Hong Kong within 48-72 hours. Three days later, they still weren't there.

Our account manager said that she was in Malaysia and would dig into it when she was back in town in a couple days. She told us that she'd visited the trucking company and that they'd agreed to expedite our shipment. Two days later, the keyboards still hadn't arrived.

We were flipping out. We had daily calls with our account manager explaining just how bad it would be if these keyboards didn't arrive at our warehouse before Black Friday. She told us that she understood entirely and that she promised that the factory would compensate us USD1000 per day for every day they were late, going all the way back to October.

Over the course of 3 weeks, excuses included:

  • The factory has not paid their bill with the shipping agent, so all of the goods shipped by the factory have been impounded
  • Two of the factory's biggest customers didn't pay their bills, so the factory hasn't met their contractual minimums and the shipping agent won't do anything until this is resolved
  • The shipping agent has agreed to ship your keyboards, but they're fully booked up today
  • Your keyboards are on a truck! The truck is in Hong Kong. But it will arrive after the warehouse closes. (In this case, our account manager called the warehouse and got their staff to wait around for a couple hours after work.)
  • The factory has taken the keyboards back from the trucking agent because they're incompetent. They need to redo the customs paperwork and then send them to a new trucking company.
  • The new trucking company hadn't finalized their contract with the factory so refused to send the goods to Hong Kong
  • The new trucking company has handed the goods off to their Hong Kong partner. They will be delivered tomorrow morning.

A conversation with the factory owner

'Tomorrow morning' was, by now, the day before Thanksgiving. This was beyond the pale. We were absolutely livid. It was clear that something was very wrong, though we couldn't begin to guess at the magnitude of the problem. We sent our on-the-ground manufacturing consultant to the factory to meet with the factory owner, without our account manager present. The factory owner was angry, too. The conversation went something like this:

'Why the heck haven't Keyboardio's keyboards been delivered?'

"I've told the account manager over and over: Those keyboards will never leave our warehouse before Keyboardio pays for them."

'But Keyboardio has paid for them. Here are the wire transfer confirmations.'

"Well, we haven't received any of the last four payments."

"What about the 5000 sets of keycaps Keyboardio ordered?"

'You mean the 2700 sets they ordered? They're ready, as soon as they pay for them..."

'It sounds like there's a big problem.'

"Yes, it sounds like there's a big problem. We've not received at least USD30,000 that Keyboardio think they've paid."

Our manufacturing consultant reported all of this to us while we were in the middle of hosting Thanksgiving dinner.

He also shared an additional anecdote with us: The account manager had apparently been notorious for borrowing money (up to around USD1000 at a time) from coworkers and had been spotty about paying them back on time. The problem had become so bad that the factory owner had forbidden his staff from loaning her money. (To this day, she still owes at least one of them.)

36 hours later, Jesse was on the way to the airport, en route to Shenzhen.

Unravelling the scam

Monday morning, Jesse and our manufacturing consultant were due to meet with the factory owner, but he wasn't answering his phone. The account manager told us that she was unavailable until the afternoon, as she had to go visit the factory that was making replacements for the keycaps that "got lost in the mail" again. (More on that later.)

After lunch, Jesse and the manufacturing consultant showed up at the factory to find the factory owner in the middle of a heated conversation. The manufacturing consultant told Jesse that they were discussing commission. The account manager was asserting that they'd had an oral agreement about her commission (as a percentage of factory profit).

That's when we learned that she was no longer an employee of the factory. Remember how we said she'd promised not to quit until this project was finished? That was a lie. It turns out she'd quit 18 months prior. The factory had allowed her to keep our project as an independent 'sales agent.' This, at least, is not uncommon when doing business in China. What did surprise us was that the account manager had kept such tight control of our account that other people at the factory were afraid of angering her by speaking to us directly. The last time Jesse had been in Shenzhen, the factory owner had tried (unbeknownst to us) to take Jesse to lunch. Our account manager had refused to allow it. Folks at the factory said that they were worried that any direct contact with us would be seen as trying to steal her client.

During the first day of meetings, the account manager agreed to pay the remainder due to ship the MP6 keyboards on Tuesday morning, and that Jesse could come to the factory on Wednesday morning to watch them depart for Hong Kong on a truck.

Throughout the first day of meetings, all of the discussion of fraud and embezzlement was in Chinese. Eventually, the account manager left, supposedly to go arrange payment.

When Jesse discussed this with the factory team with the account manager out of the room, they told him that they were trying to allow her to save face, as they thought this was the most likely way to recover the stolen money.

Lies all the way down

On Tuesday morning, Jesse and our manufacturing consultant sat down with the factory's management team to discuss what had really happened and what needed to happen going forward. The first thing they did was to compare, in detail, what we'd paid and what the factory had received. The discrepancy wasn't USD30,000. It was over USD100,000.

We started looking at how the number could have gotten so big.

Part of it was the amounts the account manager had told us we needed to prepay, which she pocketed.

Part of it was the order for 5000 sets of keycaps we'd placed in January (along with shipping costs for 2700 sets that the factory was supposed to send directly to you). As it turned out, she'd doctored the invoices she sent to us and to the factory. She'd dramatically inflated the unit cost of the keycaps and associated packaging on the invoice presented to us. At the same time, she'd halved the size of the order she sent to the factory. And those keycaps that the factory shipped out in August and October? They simply never existed. Total fabrication. At this point, Jesse took a break to walk into the factory's warehouse, where he found pallets of thousands of QWERTY, Unpainted, and Black keycap sets literally gathering dust. They'd been sitting there for months. The factory said they needed to get paid before they'd release the keycaps.

And, it turns out, part of the discrepancy was due to the fact that the account manager had negotiated aggressive discounts and price breaks on 'our' behalf and not passed them on to us.

When we asked if our account manager had done this to any of her other clients, the factory owner told us that we were the only customer she'd brought in before she'd quit.

Having come to understand the scale of the fraud, we told the factory that we were worried that she might try to disappear. We asked if someone at the factory could call her husband to see if he could tell us what was going on. That resulted in some pretty confused looks from the factory team.

"She's single. She's never been married."

It turns out she'd lied about that, too.

At this point, we were pretty sure that nobody was ever going to see our account manager again. Boy were we surprised when the account manager agreed to show up at the factory to continue our discussions that afternoon.

You're probably asking yourself 'why didn't they call the police?' Jesse was asking himself the same thing. When he asked the factory team the same question, their response was both understandable and somewhat unsatisfying: 'It's not enough money to destroy her life over. We think we can solve this without the police.'

It all got pretty weird. The factory owner posted guards at the factory's front gate to make sure that she didn't simply walk out and disappear. At one point, Jesse found himself standing in a dark stairwell to make sure she didn't sneak out the back when she said she had to go to the toilet.

Tuesday afternoon's discussions were...somewhat more frank. Jesse was very clear with the account manager that he knew she had been lying about everything from the beginning and that he did not believe she still had the money. He asked her to prove it by showing bank balances to him or the team at the factory. She claimed that her Hong Kong account had been 'blocked' due to an issue with the bank. Astonishingly, most of the people in the room seemed to take at face value both this explanation and the claim that she was withholding the money as a bargaining tactic over her commission.

A couple times during the afternoon, while arguing with the factory owner, the account manager called a friend or compatriot of some kind to get 'advice' about the negotiation.

(In the end, MP6 did ship: she paid about half of the required amount on Wednesday, half on Thursday, the truck shipped out on Thursday afternoon, and the keyboards got to the warehouse on Friday afternoon. As of this writing, they're currently in stock at

Lawyering up

Wednesday morning, Jesse met with our new lawyer to discuss our options. He told her that we'd like to resolve the situation by making sure that the theft was an internal matter between the factory and their former employee and that we'd like to maintain good relations with the factory. The lawyer explained that we could pursue both civil and criminal options, but that the civil option was more likely to lead to a positive resolution, wherein the money was recovered and our relationship with the factory was preserved. She said that the criminal penalty for what our account manager had done ranged from ten years to life in prison. She said that as soon as we called the police, the account manager would be unable to travel internationally or to buy plane or train tickets. More importantly, she said that once the police were involved, they'd have full control of the investigation and that there'd be very little we could do to get an outcome we wanted. She did agree that it was our ultimate fallback and that it was important that the account manager be aware of how severe the penalties for her actions are.

Wednesday afternoon, Jesse contacted our wood supplier to make sure that he didn't engage with our account manager. When Jesse told him a bit about what was going on, he revealed that a couple months back, our account manager had called him up to say that she was travelling internationally, had lost her wallet and needed him to loan her some money so that she could get a flight home. When Jesse asked if he'd paid, the wood supplier said that he told her that it was wildly inappropriate to be asking a supplier for personal loans, and that she ought to ask a close friend or family member instead. He may be the only person in this whole story who didn't get conned by our account manager.

Thursday, Jesse, our lawyer, the factory owner, and the account manager sat down at the factory to hash out a new legal agreement between all the parties. Everyone agreed that we had paid everything we thought we had. The account manager confirmed in writing that she had received all the money we sent and agreed to repay it to the factory. Everyone agreed about the products (and quantities) we've ordered. We agreed to new delivery dates for everything that hasn't yet shipped. The factory agreed in writing that we own all the tooling we've paid for and that they will make it available for us to move to another factory if we want to.

The meeting didn't come to quite the resolution we'd hoped it would, but it turned out dramatically better than it might have.

Our lawyer has advised us not to go into more detail about the rest of the agreement or the shipping and delivery schedule we agreed to. At this point, we need to let things play out a bit more. We expect to have an update on keycaps and MP7 by mid-January at latest.

A new beginning

Saturday, Jesse and our manufacturing consultant sat down with the factory team. We knew our account manager had tightly controlled all information about the Model 01, so we wanted to make sure they had all the details they'd need to finish the work they've agreed to.

What we found...probably shouldn't have shocked us. In August, we'd sent 20 defective circuit boards back to our account manager at the factory, so she could give them to the engineering team to study and improve future production runs. In September, she'd confirmed receipt and said that the engineering team had been studying the failures. Nobody at the factory had heard anything about this. Jesse went strolling through the warehouse to find where everything from the account manager's office had been shelved. There was our box of defective circuit boards. Unopened. Jesse physically handed them to the manager of the R&D department.

We talked about the kinds of problems we'd seen in the field with the Model 01. Jesse was trying to reassure the factory when he told them that we weren't going to hold them responsible for the 100 defective wooden enclosures from the first mass production run, and that we understood that the problems for that, at least, mostly rested with the original supplier. This all had the opposite effect of what Jesse had intended. It turns out that the account manager had never bothered to tell the factory management about the defective enclosures, either.

It became clear that just about every conversation she'd reported having with the factory on our behalf over the past 18 months had been a total fabrication.

Finally we got to the point of talking about the future. Jesse explained that a few weeks before, Keyboardio had been dead-set on moving production to a new factory as soon as the keyboards we'd paid for were shipped. Now that we had a better understanding of what's going on, he said, we're happy to treat this as an opportunity to reboot the relationship. So long as the new agreement is honored and things ship on schedule, we effectively have changed factories.

Our account manager was a poor project manager and a poor salesperson, but she was a pretty skilled con-artist. It's not 100% clear if it would have protected us in this instance, but we've now learned (the hard way) that one should never pay an invoice from a Chinese company unless it's been stamped with the company's official 'chop' or seal.

Going forward, our new account manager at the factory has asked that all correspondence be CC'ed to the factory owner, the factory's CFO, her, the junior sales assistant, Jesse, Kaia, and our manufacturing consultant. Similarly, when discussing things on WeChat, we should use a group chat where everybody sees everything. Most importantly (to everybody), future payments, if any, should only ever be to the factory's primary bank account. And, of course, we will never again pay an un-stamped invoice.

Where we are today

So, that's most of what's new with us.

Is this catastrophically bad news? Yes and no.

On the one hand, there's a lot of money missing. We think there's a decent chance that money has vanished never to be seen again. Products that we said we sent you...simply never existed. We're genuinely sorry about that.

And of course it never feels good to realize that you got scammed.

On the other hand, Kaia pointed out to Jesse the other night that this actually makes her more confident about our ability to manufacture products in China in the future. When that industry veteran told us that all of these problems never happen to the same company, it turns out that they were right. All those uncontrollably crazy problems and delays we had? While many of them had a grain of truth, the vast majority of the issues we thought we had... simply didn't exist. And, indeed, when we've worked directly with other suppliers for replacement wooden enclosures, travel cases for the Model 01, and on other bits and pieces, everything has gone a good deal more smoothly.

We're hopeful that we have a stronger, better relationship with the factory than we've had in the past. If the worst happens and the new agreement with the factory falls apart, we're still potentially out a lot of money, but it's not enough money to kill the company. The part of the agreement that says we own all the tooling for the Model 01 has the force of law. (And yes, it's stamped with the corporate seal.) We are, of course, hoping that everything works out with our current factory, but if we need to, we can move the tooling to a new factory, and produce and ship your keycaps and more Model 01s.

A bit of good news

Oh. We do have a little bit of good news. The company that's making the Model 01 travel cases finished the second production run of cases a couple weeks early. We reported to them that one case from the first production run had a stitching error, so they threw in 10 extra cases, just in case any other issues get discovered. This second production run of travel cases arrived at our Hong Kong warehouse on Tuesday. On Thursday, we asked the warehouse to ship out cases to every Kickstarter backer who's filled out the backer survey. We expect most of them to arrive on your doorsteps before the end of 2018. In the coming weeks, we'll make cases available for sale on

We also have several hundred Model 01s in stock at our warehouse in Hong Kong. Orders should ship out within 2 business days.

As we wrote at the start of this update, it contains some sensitive information and we're a little hesitant about posting it. In order to be able to post this publicly, we've got to set a few ground rules.

It's perfectly ok to post comments telling us that we were naive and too trusting. We're well aware.

Please do NOT post any comments or speculation about the identity of our account manager or factory or their motivations. Doing so may limit our ability to share details with you in the future. And we really want to be able to share details with you in the future.

<3 Jesse + Kaia

All Comments: [-]

7e(10000) 35 minutes ago [-]

This is par for the course for Kickstarter projects. Naive principals with no experience repeatedly failing, but writing about it, so it's all good. The Kickstarter public seems to select for presentation skills, not for execution ability, so in the end backers get more story than product.

felixgallo(10000) 28 minutes ago [-]

Don't be like that.

burke(3941) 25 minutes ago [-]

Except that, despite all this drama, they delivered an excellent product (eventually).

falcor84(3628) 24 minutes ago [-]

I tend to agree somewhat, but think that this is mostly a strength. As a Kickstarter supporter, I tend to intentionally choose to support smaller and less experienced creators, because I believe in their vision. I want to join them in this journey to see whether they can, or maybe cannot, bring that vision to life. As long as the creators are doing what they can and being honest about their process, I get exactly what I paid for. If I just wanted a product, I would've gone to the store.

wvenable(3939) 23 minutes ago [-]

That's the point of Kickstarter, isn't it? I've backed some pretty awesome products in the past and have, so far, avoided the failures.

cjbprime(2270) 22 minutes ago [-]

This is both unnecessarily unkind, and inaccurate: keyboardio seems like a successful use of Kickstarter to me, even with these issues. They have produced and distributed product to their backers which delivers on their promises as far as I can tell.

And, of course, there's a sampling bias at work: even very experienced commercial users of Chinese ODMs have these stories. They just don't write them up publicly for everyone else to learn from.

Show HN: My 7th Grade Young Entrepreneur Project -- interactive holiday cards

49 points about 8 hours ago by ellailan in 10000th position | Estimated reading time – 2 minutes | comments

Colour Me Cards

Colour Me Cards are your classic holiday cards, with a modern twist.

It's a black and white printed greeting card that comes in 4 different designs. The front of the card has original line-art, to be used as a colouring activity. The back of the card has a QR code and URL, leading to an interactive game inspired by the front. The inside of the card includes a cute message relating to the card design.

See the website in action:

Offline component

There are four ready-to-print PDFs inside the print directory, in the public directory. Each PDF file makes two cards, after being cut in the middle. Print the PDF double-sided, so the messages are on the inside of the card. You can print them at home, or send them to a printing service like Staples.

Online component

Make sure you have node installed, and then:

$ git clone
$ cd ColourMeCards
$ npm install
$ npm start 


Modifying the cards for a different domain

There are MS Word files in the print directory. Create a new QR code (with a tool such as and put it in place of the current one. Change the URL. There are two cards on each page, so make sure to change both of them. Be careful, editing Word documents can be tricky.

About the project

This is my 7th grade Power Play Young Entrepreneur project. I designed the cards with Scratch, and wrote the code with BlockLike. I sold each card for 50¢, and 4 for $1.50.

Made with


  • All art by Ella Ilan
  • Front end code by Ella Ilan and Liam Ilan
  • Back end code by Liam Ilan
  • Music by Liam Ilan



All Comments: [-]

ellailan(10000) about 7 hours ago [-]

My brother was looking at the database, and he saw that people were playing, so if you have any questions, feel free to ask!

yorwba(3623) about 6 hours ago [-]

Was randomly generating user names a deliberate choice to prevent people from putting messages onto the leaderboard? In any case, it worked. You might want to kick the legit maroon dog off his not-so-legit top spot, though.

modin(10000) about 6 hours ago [-]

You might want to change your mongodb password[1]. One easy way is to do it the same way as how you did it with listening port, e.g. to use environment variables.
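For anyone following along, the suggestion amounts to something like this minimal Node sketch (the variable names and fallback URI are illustrative, not the project's actual code):

```javascript
// Illustrative only: read secrets from the environment instead of
// hard-coding them, the same way the listening port was handled.
const mongoUri = process.env.MONGODB_URI ||
  'mongodb://localhost:27017/colourmecards'; // local dev fallback, no real credentials
const port = process.env.PORT || 3000;
```

The real connection string then lives only in the deployment environment, never in the repository.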


liamilan(2934) about 6 hours ago [-]

Thanks, fixed!!!!

I'm Liam, Ella's brother, I built the server. It was the first time I wrote a node server. It's a rookie mistake...


Typewriter Cartography

42 points 1 day ago by Jureko in 10000th position | Estimated reading time – 15 minutes | comments

While this sort of work is cool and interesting, and I give away high-resolution versions of it for free, I can only do it when I take time away from my regular paid client work. If you derive some value from what you've seen here, you are welcome to make a donation to support my continued efforts.

This is my father's manual typewriter, a Royal Safari II. Or maybe it's mine — I appropriated it quite a long time ago.

I remember playing with it a bit as a child in the 1980s, but for the most part I've rarely used it. But I've kept it around anyway, because I've always had a nostalgia for old technologies. Maybe I liked the idea of being a person who owns a typewriter.

A couple of weeks ago, I remembered that it was in the basement, and I thought — as I do from time to time — about how nice it would be to have a reason for using it. And then it occurred to me that I should just go with my default reason: maps.

After a few hours of planning and typing, I managed to create a typewriter map and I put it out on Twitter, where it ended up being by far the most popular thing I've ever put on that platform. Or probably ever, anywhere.

It's probably of no surprise to anyone who's known me for more than five minutes that I chose to start this project by mapping my homeland in the Great Lakes. I think it's always useful to begin with somewhere familiar when trying something new, because you can use your local knowledge to confirm whether or not the technique is doing justice to the place.

Click here if you want to see a giant high-resolution scan. It's full of smudges from the ribbon, alongside errors corrected with a generous application of Wite-Out. But I'm quite pleased with its messy, organic, analog nature. Others seemed to be, too.

I hadn't expected such a warm reception from the internet, but even before that happened, I had considered my experiment a success. So I followed it up with a couple more maps, to get a feel for some different styles. You can click on either of them to have a look in more detail.

It was an interesting diversion from the digital precision of my normal workflow. Sometimes fun, sometimes frustrating, but in any case a chance to mess around with some new challenges.

The ideas here aren't new. John Krygier has a post about typewriter mapping. Early computer graphics, such as ASCII art, along with early mapping software (like SYMAP), use essentially the same style as what I am doing (though mine is much more rudimentary): constructing images through individual characters.

In any case, now that you've seen the maps, read on to learn more about the challenges and decisions that went into their creation.

Map 1: Rivers of Lake Michigan

Though I just called this project a "diversion" from a digital workflow, all of these maps actually started on the computer. For this particular one, I began with a grid in Adobe Illustrator. Each rectangle in the grid represented one character position on the typewriter. There are ten characters to the inch, at an aspect ratio of 0.6. The final grid was 75 × 60, which would fill a 7.5′′ × 10′′ space.
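The grid dimensions follow directly from those two numbers; here's a quick sketch of the arithmetic (variable names are mine):

```javascript
// Grid arithmetic for the typewriter map, assuming the stated
// 10 characters per inch and a 0.6 (width/height) character aspect ratio.
const charsPerInch = 10;
const aspect = 0.6;                                // character width / height
const cols = 75, rows = 60;

const widthIn = cols / charsPerInch;               // 75 / 10 = 7.5"
const charHeightIn = (1 / charsPerInch) / aspect;  // 0.1" / 0.6 ≈ 0.167" per line
const heightIn = rows * charHeightIn;              // 60 lines ≈ 10"
```

So a 75 × 60 grid fills the 7.5″ × 10″ typing area almost exactly.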

Atop that, I dropped some data from Natural Earth. And from there, I began "tracing": plotting out which characters I could type to represent the rivers and coastline, and where each one should go.

After a little experimentation, I decided that if I wanted to draw linear features, there were three characters that were best to use: ! / _. Together, I could create rudimentary lines that roughly connected together in a pseudo-vector style, even if the typewriter grid itself is basically a raster.

A backslash (\) would also have been great, but that was a character invented pretty much exclusively for use on computers, so it's not found on my typewriter. As such, I had diagonal lines that sloped somewhat cleanly in one direction, while they stairstepped back down in the opposite direction. Compare the coastline on both sides of Lake Michigan, below.
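That constraint boils down to a tiny lookup from segment direction to available character; a hypothetical sketch (the function and the screen-style coordinate convention, with y increasing downward, are mine, not the author's):

```javascript
// Hypothetical helper: pick a typewriter character for a line segment
// by its direction (x to the right, y increasing downward). With no
// backslash key, down-right diagonals return null and have to be
// stairstepped out of '_' and '!' instead.
function lineChar(dx, dy) {
  if (dy === 0) return '_';    // horizontal run
  if (dx === 0) return '!';    // vertical run
  if (dx * dy < 0) return '/'; // up-right (or down-left) diagonal
  return null;                 // down-right: '\' doesn't exist on the machine
}
```

One direction of diagonal gets a clean slope; the other is forced into the stairstep you can see on the map.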

For the state boundaries, I decided to try something different. I simply filled a bunch of "pixels" in with asterisks, rather than using more "linear"-looking characters. A raster, rather than a pseudo-vector, approach. It creates a small visual distinction between the boundaries and the coastline, which might be pretty hard to do otherwise. There aren't a lot of symbology options in a situation like this.

The biggest of those options, though, is color: my typewriter has a two-color ribbon, so I tried to make the most of it by setting the rivers off in red. This also helped with a labeling problem: I could name the rivers in red, to distinguish them from any other features. Other than color, though, the only way to vary my labels was to set some in capitals, and some in title case. I'm used to labeling almost every class of feature on a map in a different style, but that's just not possible here. My islands and my cities, for example, look the same (black, title case). The states and lakes are the same, too (black, capitals).

Once I had spent a couple of hours or so on developing a plan, it was time to start typing. I loaded some paper into the typewriter and got to work. At first, I proceeded very linearly: left-to-right, top-to-bottom. But that was tedious. There's a lot of white space in this pattern, so sometimes I was forced to hit the space bar a few dozen times to advance to the next character on the line, and there was always a chance I might miscount and make a mistake. More importantly, though, following this workflow revealed a problem with my typewriter. Whenever I hit the carriage return lever to go to the next line, there was a chance that I'd somehow get a misalignment. Have a look at these patterns I typed:

Notice how the characters don't all line up along the left side, but then become more aligned on the right? I'm not sure why it kept happening, but it seemed most often to appear when I used the carriage return lever. So, instead, I shifted to a different style of typing. I would start to trace features somewhat linearly. For the top left part of the map, for example, I began by typing three asterisks, then I manually moved down one line, then typed four more, then moved down another line, and typed four more, and so on, following the line of the state border.

I manually moved the paper up and down and used the backspace and spacebar keys to align myself to where I needed to be at any time. In this way, I mostly avoided misalignments, though smaller ones still kept creeping in. About three-quarters of the way down the page I got a minor leftward shift that you can see in the final product. You can also see where I typed some periods over again to check if it was just my imagination or if it really was misaligned.

Fortunately, it wasn't enough to ruin my work, but it was a constant danger, and something I am still trying to figure out.

The final product has various interesting smudges where the paper accidentally contacted the ribbon. In particular, I noticed that typing in red always produced a faint black "shadow" a couple of lines above. When the slug hit the red part of the ribbon, a small portion of it would lightly hit the black portion of the ribbon, too. Later on, I started holding scrap paper over my map in order to prevent this, so that the black shadow would go on the scrap.

In sum: my typewriter is not a precision instrument. This makes it a somewhat uncomfortable-feeling tool for a detail-oriented designer like me. I like being able to zoom in to 64,000% in Illustrator and correct errors that are small enough that no human eye could possibly ever see them. But, there's something attractive about the organic messiness of the typewriter.

Once I was done, I scanned it, and then turned it over to the Robinson Map Library, since I wasn't sure what to do with it now that I was finished. So, come to Madison if you ever want to see the real thing (this goes for all three maps).

Map 2: Shadow Contours

For this one, I wanted to try and see if I could squeeze some sort of terrain representation out of the typewriter. As I mentioned, early digital graphics used printed characters to create images. And shading could be simulated by using characters of different darkness. The ASCII Art page on Wikipedia has some examples of this.

My goal was to do something like illuminated contours: lines that would get darker on the lower-right side and lighter on the upper-left, to create a depth illusion. So I needed to do something rather like what John Nelson calls "Aspect-Aware Contours."

Setting this one up required a whole different workflow than my first map. I began with a DEM of Michigan that I always keep on my Google Drive, ready to test out a terrain technique at a moment's notice:

First off, I cropped and shrank it down to 75 × 100 pixels. Then I further compressed the vertical dimension to 60 pixels. These two separate steps were necessary because the pixels aren't square on my typewriter, as we saw in the grid earlier: they are taller than they are wide. I needed something that had the same aspect ratio as a 75 × 100 image, but once I had the overall image aspect ratio correct, I needed it to really only use 60 characters vertically, since each character is so tall. In the images below it looks a little squished because it's being shown with square pixels. But in the end it stretches back out correctly.
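
Those two resizing steps can be sketched with a simple nearest-neighbor resample (pure Python; the DEM here is a synthetic stand-in, not the actual Michigan data):

```python
def resample(grid, w, h):
    """Nearest-neighbor resample of a 2-D grid (list of rows) to w x h."""
    src_h, src_w = len(grid), len(grid[0])
    return [[grid[y * src_h // h][x * src_w // w] for x in range(w)]
            for y in range(h)]

# Synthetic stand-in for the DEM (any tall-ish raster will do).
dem = [[(x + y) % 7 for x in range(300)] for y in range(400)]

step1 = resample(dem, 75, 100)   # first fix the overall image aspect ratio
step2 = resample(step1, 75, 60)  # then squash to 60 rows: one per typed line
```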

From there, I classified it into just a few elevation levels, and smoothed them out a bit via a median filter.
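
Both steps are simple raster operations. A sketch in pure Python, with assumed elevation break values (the article doesn't give the actual ones):

```python
import statistics

def classify(value, breaks=(180, 250, 320)):
    """Map an elevation to a class index 0..len(breaks). Break values
    here are assumptions for illustration, not the map's actual classes."""
    return sum(value >= b for b in breaks)

def median_filter(grid):
    """3x3 median filter to smooth class boundaries; edges left untouched."""
    h, w = len(grid), len(grid[0])
    out = [row[:] for row in grid]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = statistics.median(
                grid[y + dy][x + dx] for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    return out
```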

And then it was time to calculate the aspect of each pixel in the raster.

Flat areas have no aspect. Pixels on the boundary between elevation classes, on the other hand, are assigned values based on which direction they are facing. So, now I could tell which areas would be in shadow (facing toward the lower right), and which would be lighter (facing upper left). No, I didn't compensate for the vertical stretching when calculating aspect, but I should have.

The aspect calculation produced a double line of pixels, one on each side of the boundary between classes. But I really only needed a single line of pixels to represent the contours, so I first cleaned those up. And then I grouped the various aspects into three shades: light, medium, and dark, based on the particular direction they were facing.
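
One way to sketch that grouping (bin boundaries are my assumption; aspect is in compass degrees, with 315° facing the conventional upper-left light source):

```python
def shade_for_aspect(aspect_deg):
    """Group a boundary pixel's aspect into one of three shading characters.
    315 deg (NW) faces the assumed light source; 135 deg (SE) faces away."""
    diff = abs((aspect_deg - 315) % 360)
    diff = min(diff, 360 - diff)   # angular distance from the light direction
    if diff < 60:
        return '.'   # light
    if diff < 120:
        return '+'   # medium
    return '$'       # dark
```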

Now I had contours with some shading. All that was left was to turn them into individual typewriter characters. I converted this raster into an ASCII file, which looks like this:

Each pixel is represented by a number, and there are four numbers: one for light, one for medium, one for dark, and one for white. From there, it was simply a matter of doing a Find & Replace in a word processor to convert them to the three shading characters I had chosen to use: . + $.
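
Done programmatically, that Find & Replace is a one-liner. A sketch with an assumed coding of 0 = white, 1 = light, 2 = medium, 3 = dark:

```python
# Tiny stand-in for the exported ASCII raster; the digit coding is assumed.
ascii_grid = "0012\n0123\n3210"
typed = ascii_grid.translate(str.maketrans("0123", " .+$"))
print(typed)
```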

And from there, it was just a matter of typing things out on the typewriter.

I tried a couple of other variations on this idea, as well. I initially hoped to do a proper set of Tanaka contours, with a medium-grey background and white highlights. But the white areas weren't obvious enough amidst all the typed characters, so it wasn't working.

Keeping white as the background color helped a lot, so I decided to go with contours that started with at least some darkness to them even on their light side. I also tried doing it with five different shades: . : + & $.

However, I think that was too many — the distinctions aren't really sufficiently clear between some of them, especially when each character varies so much in darkness just based on how hard I hit the key. So, when I had to halt that particular attempt partway through due to me misreading part of my pattern, I decided to start over with a simpler set of three characters. I reclassified the aspect analysis and re-converted it to characters, and that became the basis of the final attempt described above.

Map 3: Shaded Relief

Since I had been at least modestly successful applying shading to contours, I decided finally to see if I could render a rudimentary shaded relief on the typewriter, as well. I knew it wouldn't look particularly realistic, but I was hoping it would at least be sufficient.

This time I decided to change geographies and map Africa. As with the previous map, I took a DEM and shrank it down to 75 × 60 pixels, then I generated a shaded relief. I did it in Blender, but I turned off the various realistic shadows, as I thought they'd muddle things up. The end result was basically just a simple GIS hillshade.

I did try to compensate for the fact that the image, which had square pixels, would get stretched vertically once it made it to the typewriter. I set my lighting angle to be about 15° off from the typical upper-left light source that is used in shaded relief. However, I think I shifted it 15° in the wrong direction. But the end result seemed to come out well enough.

Once I had the relief, I then classified it into five levels: white, and four shades of grey. While I'd used three shades for my contour map, after having decided five was too many, this time I decided to split the difference.

And then, as before, I converted it to text characters. This time, I used . + @ $ as my set of shades.

I brought this into Illustrator and added it to the same planning grid that I'd developed for my first map. I also brought in more Natural Earth data so that I could include a coastline and some rivers. The relief would be in shades of black, and the other features would be in red.

The end result would be a combination of the techniques from the first map (mostly pseudo-vector) and the second map (more raster-y).

I removed bits of the relief that crept into the ocean, and also cleared space for the rivers and a few labels I decided to cram in. After about three hours, I had the whole thing planned out. I printed my pattern and typed it up. I had a few false starts where I missed or added a character here or there, but after another three or four hours, I finished the third map.

This one has fewer of those "shadows" that accompany the use of the red portion of the ribbon. I spent a lot of time with a piece of scrap paper trying to prevent those, mostly successfully. This map also only involved me making two mistakes that required Wite-Out. I'm clearly getting better, as the first map had probably closer to ten.

The shaded relief is obviously pretty coarse. I think close up it's more of just an interesting texture, rather than anything that suggests depth. But, it was still fun to try. If you shrink the map, or step far away from it, or blur it, the relief starts to come out a little bit as the eye focuses less on the individual characters and more on the pattern. I think that's also true of the contour map.

I may do some more maps later on, but I think now that I've explored some of the basic challenges of typewriter mapping, I've reached a good point to pause in my efforts. Maybe I'll come back to it some other time, or maybe I'll get diverted into another novelty use of old technology. Or maybe I'll spend time doing all the stuff I was supposed to be doing instead of this. We'll see.

All Comments: [-]

pella(1880) about 6 hours ago [-]

~ related: 'telnet'

'MapSCII is a Braille & ASCII world map renderer for your console - enter => telnet <= on Mac and Linux, connect with PuTTY on Windows'

wl(10000) about 4 hours ago [-]

As a reminder, Apple removed the telnet client from 10.13 in a frustrating exercise of short-sighted security paternalism.

LiveAgent: Over $250K monthly recurring revenue with a spin-off project

40 points about 6 hours ago by nicoserdeir in 3031st position | Estimated reading time – 11 minutes | comments


Do you want to grow your business? With GenM you can get free marketing from an apprentice as part of their training. The student will work 40 hours per month creating content, increasing SEO rankings, carrying out advertisement campaigns...

Hi David! What's your background, and what are you currently working on?

Ahoy! I'm David Cacik and I'm the Head of Growth at LiveAgent. I started my first company in high school and then another one in college. I failed the first one and sold the second one and later took an exciting role at LiveAgent. I was the first growth guru to join the company, and helped it grow from $20k to $250k MRR.

LiveAgent helps improve interactions between customers and companies. We are a bootstrapped SaaS, based out of Bratislava, Slovakia (Central Europe). Our competitors are venture funded companies like Zendesk and Freshdesk which makes my job super challenging and attractive.

What's your backstory and how did you come up with the idea?

When I was 15, me and 2 of my high school buddies started a game hosting company. We were kids and we were selling virtual server space for games like Counter Strike or World of Warcraft to other kids. We failed and discontinued the project when we all went our separate ways to different universities.

During my studies, I started a new eCommerce project, an online pharmacy with automated delivery of commonly used products like toothbrushes, razors, toilet paper, etc. Even though I made it to the final of the Student Entrepreneur Awards, I decided to sell the project and move on.

I joined LiveAgent in 2013, and I was employee #10, the 1st and only marketing guy. I was 21 years old, not knowing what I was getting into but I experimented a lot and over the course of 4 years, I've helped the company grow from $20k to $250k in Monthly Recurring Revenue.

How did you build LiveAgent?

LiveAgent was built as a spin-off project. Our first product was Post Affiliate Pro, an affiliate management platform whose popularity was growing rapidly. We were in the market for a new customer service solution when we realized none of the tools were sufficient. That's when we started to build LiveAgent. It took 3 years for it to become the multichannel help desk software it is today, and it is still improved daily by new updates pushed by our devs.

We didn't plan on selling LiveAgent in the early days, we only wanted to use it internally. Some of our customers, mainly B2Bs, were asking us about the support tool we used. That's how we sold the first license.

Today, LiveAgent makes up 75% of the company's MRR, surpassing Post Affiliate Pro in both revenue and number of customers.

Both PAP and LiveAgent are built with similar technologies, utilizing PHP & Java as the main programming languages and using MySQL, Kibana, Elastic and also Grafana for performance monitoring.

Our servers run in multiple locations (EU, US, Asia) with multiple providers: Linode, where most of our accounts run on fast SSD nodes, and AWS, where the bigger files are stored. Recently, we started building our own "in-house" server farm in the EU to ensure the highest performance possible and increase uptime. We will be migrating our EU customers there soon.


Which were your marketing strategies to grow your business?

The initial traction was achieved by upselling former customers. It wasn't sufficient, though. In order to be ROI positive, we needed more customers and better growth.

We started experimenting with PPC, content marketing, improving the onboarding experience, SEO, outbound sales and more. At first, we didn't have much success but we were still growing continuously.

We've tried Google AdWords, Bing Ads, Facebook, Twitter and all kinds of other minor PPC networks. The fact that our average customer value was lower than most of our competitors', and that they were literally burning investors' money on PPC, made it especially hard to compete in the Pay Per Click space. We had to find long tail keywords and competitor keywords, which worked out pretty well (be careful to obey Google's rules). Generic terms like "customer service software" were super expensive and we burned a lot of cash trying to compete with the big players. I would not recommend going down this road for other startups. Eventually, we figured out the keywords that were performing well and we've been bidding on them ever since.

Do content, they said. So we did - we pushed out valuable, fact-based, well-formatted blog posts supported by infographics and images. 99% of the blog posts didn't get much traction. In fact, there are only 2-3 articles that rank well on relevant keywords. Doing content right is hard and we have yet to figure it out.

On the other hand, one of the growth strategies that boosted growth particularly well was including LiveAgent in software directories and building our presence there. Ever since the beginning, we've been a customer-centric company, so we had a pretty solid base of satisfied customers. After listing LiveAgent on websites like G2Crowd, Capterra, GetApp or FinancesOnline, we reached out to our audience and invited them to leave a review, good or bad, on one of these websites. We even incentivized their efforts by providing $20 Amazon coupons to everyone who shared their experience. We received hundreds of positive reviews, which helped us rank high in comparisons and brought us a lot of traffic.

What were the biggest challenges you faced and obstacles you overcame?

When we started actively promoting LiveAgent, we quickly realized the big players like Zendesk and Freshdesk dominated the market. We were struggling with positioning ourselves and setting the right USP to stand out. Also, we are a bootstrapped company and we had to compete with companies with $300M+ in funding, which didn't make acquiring customers easy.

Another struggle was finding the right talent. We've always had a hard time finding good sales reps and developers.

Recently, we also encountered quite a few problems with our datacenter provider which caused downtimes and we had to restructure our infrastructure completely. Thousands of companies rely on LiveAgent when supporting their end users, including fintech and telco companies, where even seconds matter. We had to invest a lot and act quickly because the consequences could have been fatal.

Last but not least, new competitors are popping up like mushrooms, offering free plans with no viable business model. With that comes a higher demand for better functionality, improved UI and customer experience.

It's a constant, ever-changing game, and maybe that's what intrigues me most about my work; it wouldn't be so much fun if it were that easy, right?


Which are your greatest disadvantages?

I think that my biggest disadvantage is that I want to know about everything that's happening: what our customers are saying, the quality of their interactions with our team, how our marketing is performing on a daily basis. Sometimes I get lost in all that information and lose sight of the big picture.

At LiveAgent, we try to be disruptive and push out new features very often, which sometimes brings complications like bugs or functionality not working 100%. On the other hand, no software is perfect, and if somebody says their software is bug-free, they are lying. Recently, we hired a new software tester, so I'm confident this move will help us deliver more stable releases.

During the process of building & growing LiveAgent, which were the worst mistakes you committed?

We should've optimized our onboarding process earlier, instead, we focused on adding more features. When we introduced a new getting started guide (which didn't take much time to implement), we immediately saw a spike in conversions. We definitely should've done this earlier.

We hired an agency to help us with our marketing initiatives, and it failed. They didn't know the product well enough and focused on redesigning our pricing plans and website instead of finding new leads, which cost us money and our internal team's work time.

We also hired an external team, which didn't work out either. They knew even less about the product and did not close a single deal. From that moment, we decided to minimize our outsourcing efforts and focus on hiring internally instead.

Apart from mistakes, what are other sources for learning you would recommend for entrepreneurs who are just starting?

I recently stumbled upon a Facebook group called SaaS Growth Hacks. It's a free-to-join community of entrepreneurs and growth hackers, so make sure to check it out. There's also a similar community on Slack, a paid one called The 10xFactory.

Like any other growth hacker, I regularly check GrowthHackers. IndieHackers posts a lot of interesting interviews with startup founders, which is a great resource.

If you are a bootstrapped company, I also recommend You can discuss your ideas and questions on their forum, for free.

Quora offers a ton of valuable content; just search tags like SaaS or eCommerce and you will find plenty of relevant questions answered by experienced professionals.

Where can we go to learn more?

On LiveAgent's blog, we post lessons on growth, marketing and customer service in B2B so make sure to check it out.

Recently, I started blogging about SaaS Growth Hacking on my personal blog. In my latest post, I wrote about how boosting our presence on software directories brought a 300% spike in MRR.


All Comments: [-]

ClassyJacket(10000) about 3 hours ago [-]

'Do you want to grow your business? With GenM you can get free marketing from an apprentice as part of their training. The student will work 40 hours per month creating content, increasing SEO rankings, carrying out advertisement campaigns...'

'The student will work'?

So they just straight up admit they're exploiting students as unpaid employees? Students aren't supposed to work, they're supposed to learn. Damnit, America.

dx87(10000) about 3 hours ago [-]

According to the DoL, it looks like there's nothing wrong with what they're doing. As long as they only work on the free marketing projects, their work would just count as training.

lojack(3995) about 3 hours ago [-]

> Students aren't supposed to work, they're supposed to learn.

Why not both? Don't get me wrong, I think it's shady to make money off someone without paying them. I personally just had a really positive experience working while I was in school, and never really stopped learning just because I left school. Curious why you think this is actually a bad thing?

freyir(10000) about 2 hours ago [-]

> Damnit, America.

Hey now, they're based in Bratislava.

sannee(10000) about 1 hour ago [-]

The student might be still getting paid by the company. I see no admission of them being unpaid...

Historical Discussions: Show HN: Rendora – Dynamic server-side rendering for modern JavaScript websites (December 10, 2018: 35 points)

Show HN: Rendora – Dynamic server-side rendering for modern JavaScript websites

35 points 7 days ago by geo_mer in 3977th position | Estimated reading time – 8 minutes | comments


Rendora is a dynamic renderer that provides zero-configuration server-side rendering, mainly to web crawlers, in order to effortlessly improve SEO for websites developed in modern JavaScript frameworks such as React.js, Vue.js, Angular.js, etc. Rendora works totally independently of your frontend and backend stacks.

Main Features

  • Zero change needed in frontend and backend code
  • Filters based on user agents and paths
  • Single fast binary written in Golang
  • Multiple Caching strategies
  • Support for asynchronous pages
  • Prometheus metrics
  • Choose your configuration system (YAML, TOML or JSON)
  • Container ready

What is Rendora?

Rendora can be seen as a reverse HTTP proxy server sitting between your backend server (e.g. Node.js/Express.js, Python/Django, etc.) and potentially your frontend proxy server (e.g. nginx, traefik, apache, etc.), or even directly facing the outside world. It does nothing but transport requests and responses as they are, except when it detects whitelisted requests according to the config. In that case, Rendora instructs a headless Chrome instance to request and render the corresponding page and then returns the server-side rendered page back to the client (i.e. the frontend proxy server or the outside world). This simple functionality makes Rendora a powerful dynamic renderer without actually changing anything in either frontend or backend code.

What is Dynamic Rendering?

Dynamic rendering means that the server provides server-side rendered HTML to web crawlers such as GoogleBot and BingBot, while at the same time providing the typical initial HTML to normal users, to be rendered on the client side. Dynamic rendering is meant to improve SEO for websites written in modern JavaScript frameworks like React, Vue, Angular, etc.

Read more about dynamic rendering from these articles by Google and Bing. Also you might want to watch this interesting talk at Google I/O 2018

How does Rendora work?

For every request coming from the frontend server or the outside world, there are some checks or filters that are tested against the headers and/or paths according to Rendora's configuration file to determine whether Rendora should just pass the initial HTML returned from the backend server or use headless Chrome to provide a server-side rendered HTML. To be more specific, for every request there are 2 paths:

  1. If the request is whitelisted as a candidate for SSR (i.e. a GET request that passes all user agent and path filters), Rendora instructs the headless Chrome instance to request the corresponding page, render it and return the response which contains the final server-side rendered HTML. You usually want to whitelist only web crawlers like GoogleBot, BingBot, etc...

  2. If the request isn't whitelisted (i.e. the request is not a GET request or doesn't pass any of the filters), Rendora will simply act as a transparent reverse HTTP proxy and just conveys requests and responses as they are. You usually want to blacklist real users in order to return the usual client-side rendered HTML coming from the backend server back to them.
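
The two paths above boil down to a small predicate. A sketch in Python of the kind of check Rendora's config drives (the keyword list comes from the sample config further down; the blocked path prefix is an assumption for illustration):

```python
BOT_KEYWORDS = ("bot", "slurp", "bing", "crawler")  # from the sample config

def should_ssr(method, user_agent, path, blocked_prefixes=("/users/",)):
    """Whitelisted GET requests from crawlers go to headless Chrome for SSR;
    everything else is proxied through untouched."""
    if method != "GET":
        return False  # path 2: only GET requests are SSR candidates
    if not any(k in user_agent.lower() for k in BOT_KEYWORDS):
        return False  # path 2: real users get the normal client-side HTML
    # path 1, unless the path is excluded by the config:
    return not any(path.startswith(p) for p in blocked_prefixes)

print(should_ssr("GET", "Mozilla/5.0 (compatible; Googlebot/2.1)", "/"))  # True
print(should_ssr("GET", "Mozilla/5.0 (X11; Firefox)", "/"))               # False
```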

Install and run Rendora

First, run a headless Chrome instance

If Chrome/Chromium is installed in your system, you can run it using

google-chrome --headless --remote-debugging-port=9222

or simply using docker

docker run --tmpfs /tmp --net=host rendora/chrome-headless

note: the tmpfs flag is optional but it's recommended for performance reasons since rendora/chrome-headless runs with flag --user-data-dir=/tmp

Then, run Rendora

You can build and run Rendora from source code (NOTE: please read the configuration manual before running Rendora):

git clone
cd rendora/rendora
go build
./rendora --config CONFIG_FILE.yaml

or simply using docker

docker run --net=host -v ./CONFIG_FILE.yaml:/etc/rendora/config.yaml rendora/rendora


You can read the docs here or here


Configuration is discussed in detail in docs here or here

A minimal config file example

    url: '' # this is the base url addressed by the headless Chrome instance, it can be simply your website url
    url: '' # your backend server url
    userAgent: # i.e. only whitelist user agents containing the keywords 'bot', 'slurp', 'bing' or 'crawler'
        defaultPolicy: blacklist
                - bot
                - slurp
                - bing
                - crawler

A more customized config file

    port: 3001
    type: redis
    timeout: 6000
        address: localhost:6379
    url: '' 
    url: ''
    waitAfterDOMLoad: 0
      url: http://localhost:9222
    minify: true
        defaultPolicy: blacklist
                - bot
                - slurp
                - bing
                - crawler
        defaultPolicy: whitelist
             - /users/


What is the difference between Rendora and Puppeteer?

Puppeteer is a great Node.js library which provides a generic high-level API to control headless Chrome. On the other hand, Rendora is a dynamic renderer that acts as a reverse HTTP proxy placed in front of your backend server to provide server-side rendering mainly to web crawlers in order to effortlessly improve SEO.

What is the difference between Rendora and Rendertron?

Rendertron is comparable to Rendora in the sense that they both aim to provide SSR using headless Chrome; however there are various differences that can make Rendora a much better choice:

  1. Architecture: Rendertron is an HTTP server that returns SSR'ed HTML back to the client. That means your server must contain the necessary code to filter requests, ask Rendertron to provide the SSR'ed HTML, and then return it back to the original client. Rendora does all that automatically by acting as a reverse HTTP proxy in front of your backend.

  2. Caching: Rendora can be configured to use internal local store or Redis to cache SSR'ed HTML.

  3. Performance: In addition to caching, Rendora is able to skip fetching and rendering unnecessary content (CSS, fonts, images, etc.), which can substantially reduce the initial DOM load latency.

  4. Development: Rendertron is developed in Node.js while Rendora is a single binary written in Golang.

  5. API and Metrics: Rendora provides Prometheus metrics about SSR latencies and number of SSR'ed and total requests. Furthermore, Rendora provides a JSON rendering endpoint that contains body, status and headers of the SSR response by the headless Chrome instance.


Many thanks to @mafredri for his effort to create cdp, a great Chrome DevTools Protocols client in Golang.

Follow rendora news and releases on Twitter

George Badawi - 2018

All Comments: [-]

gitgud(3811) 6 days ago [-]

So a regular browser will receive a JS bundle, but a crawler will receive an HTML page?

How does Rendora determine which version to give the requester? Is it the 'User Agent String'?

geo_mer(3977) 6 days ago [-]

No, both receive HTML. The real problem is that websites built with modern JavaScript frameworks (e.g. React, Vue, Angular, etc.) render most of the content using the JavaScript engine in browsers; search engines don't execute JavaScript, so they see almost nothing but a header and an almost empty body (of course, it depends on your use case). Rendora solves the SEO problem for such websites by being a lightweight reverse HTTP proxy in front of your backend server. It detects crawlers by checking whitelisted user agents and paths; if it finds one, it instructs a headless Chrome instance to request and render the corresponding page and then returns the final server-side rendered HTML back to the crawler (or whatever is whitelisted in your config file). This is called dynamic rendering and has been recommended by Google and Bing very recently (see the links in

chatmasta(1051) 4 days ago [-]

Cool idea and implementation.

Question: If the "initial render" of client side code involves JavaScript mounting nodes on the DOM, how is this transformed into static HTML that can be rendered without JavaScript? Does headless chrome offer a way to "snapshot" the DOM in its current state (basically like copy pasting the "elements" pane in dev tools)? Just wondering if there is a term for this, and/or where it's documented.

geo_mer(3977) 4 days ago [-]

Rendora internally uses cdp, a Golang client implementation of the Chrome DevTools Protocol, which works on top of WebSockets. When Rendora detects a whitelisted request, it instructs the headless Chrome instance via cdp to render the corresponding page and waits for the DOM load event; after that, the content of the page is copied to Rendora and returned back to the client while preserving HTTP headers and status code.

It's all about using the chrome-devtools-protocol, if you are interested, you may want to read about it in

>If the "initial render" of client side code involves JavaScript mounting nodes on the DOM

That's the raison d'être of Rendora. If your website doesn't add DOM nodes client-side using some JavaScript framework (e.g. React, Vue), the HTML will be exactly the same either way, and you don't need Rendora in that case.

geo_mer(3977) 7 days ago [-]

Hello HN, I've been developing and testing Rendora for some time now and decided to put it publicly on GitHub today. I would love to hear feedback or answer any questions!

InGodsName(3415) 5 days ago [-]

How do you test that it works when there is no display in headless chrome?

InGodsName(3415) 5 days ago [-]

What's the disadvantage or advantage of headless chrome over chrome with head intact?

Is headless chrome faster? Can we control popup windows in headless one? Does it support proxies? Is it possible to control tabs? Close and open in new tab etc...

geo_mer(3977) 5 days ago [-]

>What's the disadvantage or advantage of headless chrome over chrome with head intact?

I am not aware of any. You just don't use headless Chrome for typical browsing :D

>Is headless chrome faster?

I guess it is slightly faster, because it doesn't need to paint the rendered page, but tbh Chrome/Chromium is an enormous project so I can't comment in detail.

>Can we control popup windows in headless one?

>Does it support proxies?

I believe those are not possible because the Chrome DevTools Protocol doesn't support them yet, but you can configure HTTP/SOCKS proxies inside Chrome and then run it headless if you insist on using a proxy.

> Is it possible to control tabs?

Yes.

If you're that interested in headless Chrome, I recommend reading about the Chrome DevTools Protocol.

chrischen(1859) 6 days ago [-]

I'd be curious about the comparison between using headless Chrome and a Node server. Was this tested?

lucideer(3947) 6 days ago [-]

Node is a JavaScript runtime: it has no rendering component and, more notably, no DOM or other equivalents of many browser APIs in its core API. You would have to emulate the browser environment with many third-party implementations, which in aggregate are unlikely to match a modern, complex browser.

geo_mer(3977) 6 days ago [-]

Thank you! I actually built it because I had developed a somewhat complex website using Vue.js, and it was too complex/too late in my architecture to do native Vue SSR using Node.js, so I built Rendora instead. I have some really complex pages that render in under 150ms (uncached SSR requests). I also tested it on some simple React pages and got as low as 70ms! So headless Chrome isn't as slow as you might think. Some other optimizations also help lower latencies (skipping fetching and rendering of CSS, images, fonts, etc.) so that the initial DOM load is faster.

I also added Prometheus metrics and a rendering API endpoint, so you can get SSR latency histograms with buckets 50ms, 100ms, ..., 500ms and see the average latencies for your use case.

TL;DR: headless Chrome is actually faster than you might initially think. It takes maybe 200-300ms at most to reach the initial DOM load, assuming reasonably complex pages.
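The bucketed latency metrics mentioned above can be illustrated with a plain-Go stand-in. Rendora itself exports real Prometheus metrics (whose histogram buckets are cumulative); this toy version, with names of my own invention, just shows how observed latencies fall into per-bucket counters:

```go
package main

import "fmt"

// buckets are upper bounds in milliseconds, mirroring the
// 50ms..500ms SSR buckets described above.
var buckets = []float64{50, 100, 150, 200, 250, 300, 350, 400, 450, 500}

// observe increments the counter of the first bucket whose upper
// bound contains the latency; anything slower lands in the last
// slot, playing the role of Prometheus' "+Inf" bucket.
func observe(counts []int, latencyMs float64) {
	for i, ub := range buckets {
		if latencyMs <= ub {
			counts[i]++
			return
		}
	}
	counts[len(buckets)]++ // overflow / +Inf
}

func main() {
	counts := make([]int, len(buckets)+1)
	for _, ms := range []float64{70, 150, 148, 620} {
		observe(counts, ms)
	}
	fmt.Println(counts)
}
```

A real deployment would query the resulting histogram for averages and percentiles per use case, as the comment describes.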

murukesh_s(3951) 6 days ago [-]

Good luck with the launch. Looks like a great idea; I was wondering if there is any other existing product in the market.

The ability to provide SSR without changing a single line of code is awesome and can help teams who would otherwise need a solution that requires code changes.

I wish the crawlers would incorporate something like this into their crawling logic so that we wouldn't need to worry about server rendering at all. Until then, this product can be a bridge!

geo_mer(3977) 6 days ago [-]

Thank you for the kind response

>was wondering if there is any other existing product in the market

Yes, there is Rendertron; here is the comparison between Rendora and Rendertron and why I think Rendora is better

also there is, but this is a commercial paid product that also needs changes in your backend; and I guess they didn't use headless Chrome until recently

>Wish the crawlers incorporate something like this into their crawling logic so that we really don't need to worry about server rendering anymore

Except for Google, probably no other search engine even executes JavaScript to render pages (Bing claimed very recently to be rendering JS, but I haven't seen it on my production website :D). They argue that the web is too big and the computational cost of executing JavaScript on every page is unreasonable. In my experience, even Google can't render pages correctly if you have asynchronous content.

candtoro(10000) 5 days ago [-]

So, Rendora is comparable with

geo_mer(3977) 5 days ago [-]

* is a paid service; Rendora is FOSS and self-hosted

* needs additional code in your backend to filter requests, ask a remote server for SSR, and return the resulting HTML; Rendora doesn't, because it does all that automatically by being a lightweight reverse HTTP proxy in front of your backend

* with Rendora you can control caching (using an internal local store or Redis); they claim to cache on their servers, I guess, but then you can't control it, especially if you have fast-changing pages

* Rendora provides Prometheus metrics to measure your SSR latencies and counts of cached and uncached requests, plus a rendering API endpoint

* Rendora supports asynchronous pages where content is loaded after the initial DOM load

* you can simply choose which Chrome version you want to use with Rendora

* Rendora instructs headless Chrome to skip fetching unnecessary assets (e.g. fonts, CSS, images), which makes the DOM load much faster than when the page is loaded and rendered with defaults

Historical Discussions: Show HN: Minimal game with procedural graphics in JavaScript/GLSL (December 10, 2018: 34 points)
Show HN: Minimal game with 100% procedural graphics in JS/GLSL (November 30, 2018: 6 points)

Show HN: Minimal game with procedural graphics in JavaScript/GLSL

34 points 6 days ago by westoncb in 2759th position | Estimated reading time – 6 minutes | comments


Under is a minimal game written in JavaScript and GLSL with procedural graphics produced mostly by noise, signed distance functions, and boolean/space-folding operators applied to those functions. The codebase is small and fairly well-documented.
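For readers unfamiliar with the technique: a signed distance function gives the distance from a point to a shape's surface (negative inside, positive outside), and the boolean operators mentioned are simple min/max combinations of those distances. A tiny sketch, written in Go rather than the game's actual GLSL, with names of my own choosing:

```go
package main

import (
	"fmt"
	"math"
)

// sdCircle is the signed distance from point (x, y) to a circle
// of radius r centered at the origin: negative inside, positive outside.
func sdCircle(x, y, r float64) float64 {
	return math.Hypot(x, y) - r
}

// The standard boolean operators on signed distance fields:
func opUnion(a, b float64) float64     { return math.Min(a, b) }  // either shape
func opIntersect(a, b float64) float64 { return math.Max(a, b) }  // both shapes
func opSubtract(a, b float64) float64  { return math.Max(a, -b) } // a minus b

func main() {
	a := sdCircle(0, 0, 1) // origin is inside the unit circle: -1
	b := sdCircle(3, 0, 1) // origin is 2 units outside that circle: 2
	fmt.Println(opUnion(a, b), opIntersect(a, b), opSubtract(a, b))
	// prints: -1 2 -1
}
```

In a fragment shader the same functions run per pixel, and noise or space-folding transforms are applied to the input coordinates before the distance is evaluated.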

Controls: Press up to go up, otherwise you'll go down. Skim the cave edge for more points—but don't run into it!


Project Background

I recently wrapped up a contract and had some free time on my hands, so I decided to make something 80% for fun. The other 20% was to test out some architecture ideas and to see if I could learn something about my personal bottlenecks in doing side projects. I originally planned to spend only 5 days on it, but that rapidly turned into 9 full days, and I've been tweaking it and adding sounds in spare moments since. So it's roughly a 10-day project.

The pure fun part was largely that I already knew pretty clearly how to make the game—and in fact, I'd made essentially the same game about 12 years ago—and I knew the technologies involved well, so I could focus almost solely on experimenting with making pretty graphics in GLSL using distance functions and creative ways of combining them (such as you'd run into on e.g. Shadertoy). Additionally, I could enjoy the contrast in what I was able to do with the same project now vs. 12 years ago when I was first learning to code :)

The architecture experiment concept is summed up in something I tweeted the other day:

How about an architecture like a discrete dynamical system driven by events instead of time, where the state evolution and event generation logic changes according to a quasi-FSM where nodes are defined by boolean functions (of system state) instead of explicit graph structure?

(Note: my conception was only clear enough to describe it that way after mostly completing this project. What you'll find in the code here isn't quite so neat.)
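The tweeted idea can be made concrete with a small sketch. This is a hypothetical Go rendition (the project itself is JavaScript, and its code is less tidy, as the author notes): each quasi-FSM node is defined by a boolean predicate over the whole system state, with no explicit transition graph anywhere.

```go
package main

import "fmt"

type gameState struct {
	altitude float64
	dead     bool
}

// A node is "active" whenever its predicate over the whole state
// holds; there is no declared graph of transitions between nodes.
type node struct {
	name   string
	active func(gameState) bool
}

var nodes = []node{
	{"crashed", func(s gameState) bool { return s.dead }},
	{"skimming", func(s gameState) bool { return !s.dead && s.altitude < 0.1 }},
	{"flying", func(s gameState) bool { return !s.dead && s.altitude >= 0.1 }},
}

// currentNode returns the first node whose predicate matches,
// so "transitions" happen implicitly as the state evolves.
func currentNode(s gameState) string {
	for _, n := range nodes {
		if n.active(s) {
			return n.name
		}
	}
	return "unknown"
}

func main() {
	fmt.Println(currentNode(gameState{altitude: 0.5})) // prints: flying
}
```

The appeal is that moving between modes requires no bookkeeping: evolve the state, and whichever predicates now hold determine the active node.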

I was reading about 'simulating the physical world' via Jamie Wong's excellent article and then started thinking about how 'normal' apps are different, and whether they could benefit by sharing some ideas. It seemed to me that Redux, for instance, must have been inspired by thinking along these lines (no idea if that's true), and that the general notion of 'operation' has a strong resemblance to differentials in a numerical integration process.

The other aspect of the architecture experiment was to attempt a pragmatic balance between functional and OO styles. I did all my high-level planning in terms of pure functions, and felt like I'd got most of the app's essentials down in that fashion—but once I started coding I let go of any strict constraints on functions being pure or data being immutable, hoping that the functional conception of the main structures/algorithms would be sufficient in whatever traces it left.

I had an overall positive experience with the architecture. There are still some kinks to work out, but my plan is to extract a super minimal library/framework from it to use in future projects. I partly want that for doing more games—but I'm also curious how it would extend to domains outside of games.

Code Overview

If you want to get to the meat of how the game itself works, it's all in gameStateTransformer.js

It uses quadShaderCanvas.js to set up three.js with a single rectangular Mesh using a ShaderMaterial, which is fit exactly to the canvas dimensions. All of the visuals are created by a fragment shader applied to that Mesh surface.

The fragment shader is in gameFragmentShader.js. I've written a few of these now, but I'm still no pro. Expect some rookie mistakes. And I'd be glad for some optimization tips if anyone notices some easy changes that could be made...

The cave shape generation is done in caveGenerator.js

The entry point to the code is in index.js. It sets up the main update/render loop and initializes a Simulation object, telling it to use a GameStateTransformer.

There are a few framework-ey classes which are the primary components of the 'architecture experiment' described above. They are:

StateTransformer is the core structure of the framework. The idea is that programs would be defined as a set of StateTransformers (potentially arranged in a hierarchy, though no use of that is made here; in fact this program only uses one real StateTransformer). Each StateTransformer defines logic for transforming some state in response to a sequence of events and/or the passage of time. It will also likely generate its own events when certain conditions are met, or in response to system input events. As an example, GameStateTransformer generates an event when the worm collides with the cave wall.

Simulation is a special StateTransformer which does the actual work of triggering the methods defined by StateTransformers at the correct times. It is always active and manages some actual StateTransformer.

Events is a simple queue. Events may be added to it like Events.enqueue('event_name', eventData);. Every frame/step while the app is running Simulation will remove events from the queue one at a time, passing them to the current StateTransformer via a call to activeStateTransformer.handleEvent(event);.
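The queue-and-dispatch behavior just described for Events and Simulation can be sketched like so. This is a hypothetical Go rendition of the JavaScript described above, with invented names; the real project's API differs.

```go
package main

import "fmt"

type event struct {
	name string
	data interface{}
}

// eventQueue is a simple FIFO queue, like the Events object
// described above: Events.enqueue('event_name', eventData).
type eventQueue struct{ items []event }

func (q *eventQueue) enqueue(name string, data interface{}) {
	q.items = append(q.items, event{name, data})
}

// drain removes events one at a time and hands each to the active
// StateTransformer's handler, as Simulation does every frame/step.
func (q *eventQueue) drain(handle func(event)) {
	for len(q.items) > 0 {
		e := q.items[0]
		q.items = q.items[1:]
		handle(e)
	}
}

func main() {
	var q eventQueue
	q.enqueue("worm_collision", nil)
	q.enqueue("score", 10)
	q.drain(func(e event) { fmt.Println(e.name) })
}
```

Draining the whole queue each step keeps event handling synchronous with the simulation loop, so handlers always see a consistent state.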

EvolveAid makes 'transient state' and 'contingent evolvers' work (these are used by StateTransformers). Check out the documentation in evolveAid.js for more info. (Thinking about it, this probably should have just been part of Simulation.)

Build and Run

git clone
cd under-game
npm install

index.html loads ./build/bundle.js. I wasn't sure about best practices for including build tooling in the package.json of an open-source JavaScript project like this; I imagine you could build with whatever you prefer. I personally used Watchify, which you can set up like:

npm i watchify
watchify ./js/index.js -o './build/bundle.js'

Then I serve the project with http-server (npm i http-server):

http-server -p 4000

Unfortunately, because of concerns I have about licensing issues with the sound files, they are not included in the repo; so when you run the game locally it will be silent unless you add your own sounds :/

All Comments: [-]

azhenley(3310) 6 days ago [-]

Doesn't work for me. The play page just shows a '0' and no graphics.

westoncb(2759) 6 days ago [-]

Mind if I ask which browser you're using? Or to copy/paste any console errors? Could be that I'm using something in webgl not supported by your graphics card/driver...

martinlofgren(10000) 6 days ago [-]

I'm on mobile right now and couldn't play it, but looks good. Nice work!

westoncb(2759) 6 days ago [-]


I set it up to just show a video on mobile since it wasn't performing well on my phone. It may actually work alright on newer phones, though... not sure.

Historical Discussions: Pushback Derails Company That Thrived on Patent Lawsuits (December 16, 2018: 6 points)

Pushback Derails Company That Thrived on Patent Lawsuits

34 points about 4 hours ago by seibelj in 3190th position | Estimated reading time – 5 minutes | comments

Shipping & Transit LLC sued more than 100 mostly small companies in 2016, making it the largest filer of patent lawsuits that year. But when the Florida company recently declared bankruptcy, it valued its U.S. patents at just $1.

Its demise followed three cases where companies fought back and were awarded legal fees after Shipping & Transit decided not to pursue the patent claims against them. Judges in the cases awarded a total of more than $245,000 in attorneys' fees and costs to businesses in 2017.

Shipping & Transit doesn't sell tracking systems or anything else. Instead, it claims to own patents "for providing status messages for cargo, shipments and people," according to court filings. The company typically demanded licensing fees of $25,000 to $45,000 from companies it said were infringing on its patents. Most agree to pay small amounts to avoid costly litigation.

Martin Kelly Jones, a Shipping & Transit co-owner, filed the first patent held by Shipping & Transit in 1993. Mr. Jones, who didn't respond to requests for comment, previously said that in the 1980s he came up with an idea to notify families of arriving school buses but was unsuccessful in bringing a tracking product to the market.

Despite the many lawsuits filed by Shipping & Transit, a court has never ruled on the validity of the company's patent claims, said Daniel Nazer, a senior staff attorney at the Electronic Frontier Foundation, a nonprofit that works on intellectual property issues. But Shipping & Transit's business model took a hit after multiple courts ordered it to pay attorneys' fees and questioned the company's motives in bringing the patent lawsuits.

In 2016, Shipping & Transit filed 107 patent lawsuits, according to legal analytics firm Lex Machina. It filed five patent lawsuits in 2017 and none this year. The company had revenue of $707,000 in 2016 and $348,000 in 2017, but none in 2018, according to bankruptcy filings.

The challenges to its patent claims and the fee awards were "really the death knell for the company" because they increased the chances that other companies facing patent lawsuits from Shipping & Transit would fight back rather than settle, Mr. Nazer said.

In one ruling, a U.S. district judge in Santa Ana, Calif., called Shipping & Transit's patent claims "objectively unreasonable" in light of a 2014 Supreme Court decision that held that certain kinds of abstract ideas weren't patentable. There was a "clear pattern of serial filings," the judge said, even when Shipping & Transit "should have realized it had a weak litigation position."

Patent assertions by companies that don't make products and are primarily focused on making money off of patents have declined since the Supreme Court decision, but still "remain extremely high," said Shawn Ambwani, chief operating officer of Unified Patents, which specializes in challenging these types of assertions.

In another of the cases from 2017, a federal magistrate judge in West Palm Beach, Fla., said Shipping & Transit's actions suggest that the company's "strategy is predatory and aimed at reaping financial advantage from defendants who are unwilling or unable to engage in the expense of patent litigation."

Peter Sirianni, a co-owner of Shipping & Transit, said the California decision "unjustly" went against the company and "knocked the asset out from under us." Given the court rulings, the patents "are worth $1 to me," he said. "I am not licensing them anymore."

In its chapter 7 bankruptcy filing in West Palm Beach, Shipping & Transit listed more than $420,000 in secured and unsecured claims. That total doesn't include specific amounts for the three judgments against it for attorneys' fees.

Also in 2017, 1A Auto Inc., an online auto-parts retailer, was awarded $120,000 in attorneys' fees and costs, but the bankruptcy has stopped efforts to collect that money. "I believe they pocketed all the money or they pocketed money they could have used to pay our attorneys' fee judgment," said the company's attorney, Philip Swain.

Mr. Swain described Shipping & Transit's strategy as "a stickup based on the cost of litigation. Our client had the guts to fight," he said. "Not many do."

Stephen Orchard, an attorney representing Shipping & Transit in the bankruptcy proceeding, said he has seen nothing to support the allegation that assets were transferred to the company's owners ahead of the bankruptcy filing. He said Shipping & Transit filed for bankruptcy liquidation because collection efforts against the company "had ramped up." It made more sense "to have an orderly resolution of the business affairs," Mr. Orchard said, rather than face "litigation and collection efforts from coast-to-coast."

Mr. Sirianni said he is still involved with another patent filer, Electronic Communications Technologies. The company filed 35 patent lawsuits between 2016 and 2018, according to Lex Machina.

Mr. Sirianni and Mr. Jones are associated with another company, Motivational Health Messaging, that says it has patents on "unique solutions to maximize the effectiveness of 'health trackers.' " The company hasn't filed any patent lawsuits, according to a review of court filings, but has issued letters demanding licensing fees for use of its patents.

Write to Ruth Simon at [email protected]

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: CertMagic – Caddy's automagic HTTPS features as a Go library (December 10, 2018: 33 points)

Show HN: CertMagic – Caddy's automagic HTTPS features as a Go library

33 points 6 days ago by mholt in 1674th position | Estimated reading time – 23 minutes | comments

Easy and Powerful TLS Automation

The same library used by the Caddy Web Server

Caddy's automagic TLS features, now for your own Go programs, in one powerful and easy-to-use library!

CertMagic is the most mature, robust, and capable ACME client integration for Go.

With CertMagic, you can add one line to your Go application to serve securely over TLS, without ever having to touch certificates.

Instead of:

// plaintext HTTP, gross 🤢
http.ListenAndServe(":80", mux)

Use CertMagic:

// encrypted HTTPS with HTTP->HTTPS redirects - yay! 🔒😍
certmagic.HTTPS([]string{""}, mux)

That line of code will serve your HTTP router mux over HTTPS, complete with HTTP->HTTPS redirects. It obtains and renews the TLS certificates. It staples OCSP responses for greater privacy and security. As long as your domain name points to your server, CertMagic will keep its connections secure.

Compared to other ACME client libraries for Go, only CertMagic supports the full suite of ACME features, and no other library matches CertMagic's maturity and reliability.

CertMagic - Automatic HTTPS using Let's Encrypt

Sponsored by Relica - Cross-platform local and cloud file backup:



  • Fully automated certificate management including issuance and renewal
  • One-liner, fully managed HTTPS servers
  • Full control over almost every aspect of the system
  • HTTP->HTTPS redirects (for HTTP applications)
  • Solves all 3 ACME challenges: HTTP, TLS-ALPN, and DNS
  • Over 50 DNS providers work out-of-the-box (powered by lego!)
  • Pluggable storage implementations (default: file system)
  • Wildcard certificates (requires DNS challenge)
  • OCSP stapling for each qualifying certificate (done right)
  • Distributed solving of all challenges (works behind load balancers)
  • Supports 'on-demand' issuance of certificates (during TLS handshakes!)
    • Custom decision functions
    • Hostname whitelist
    • Ask an external URL
    • Rate limiting
  • Optional event hooks to observe internal behaviors
  • Works with any certificate authority (CA) compliant with the ACME specification
  • Certificate revocation (please, only if private key is compromised)
  • Must-Staple (optional; not default)
  • Cross-platform support! Mac, Windows, Linux, BSD, Android...
  • Scales well to thousands of names/certificates per instance
  • Use in conjunction with your own certificates


  1. Public DNS name(s) you control
  2. Server reachable from public Internet
    • Or use the DNS challenge to waive this requirement
  3. Control over port 80 (HTTP) and/or 443 (HTTPS)
    • Or they can be forwarded to other ports you control
    • Or use the DNS challenge to waive this requirement
    • (This is a requirement of the ACME protocol, not a library limitation)
  4. Persistent storage
    • Typically the local file system (default)
    • Other integrations available/possible

Before using this library, your domain names MUST be pointed (A/AAAA records) at your server (unless you use the DNS challenge)!


$ go get -u


Package Overview

Certificate authority

This library uses Let's Encrypt by default, but you can use any certificate authority that conforms to the ACME specification. Known/common CAs are provided as consts in the package, for example LetsEncryptStagingCA and LetsEncryptProductionCA.

The Config type

The certmagic.Config struct is how you can wield the power of this fully armed and operational battle station. However, an empty config is not a valid one! In time, you will learn to use the force of certmagic.New(certmagic.Config{...}) as I have.


For every field in the Config struct, there is a corresponding package-level variable you can set as a default value. These defaults will be used when you call any of the high-level convenience functions like HTTPS() or Listen() or anywhere else a default Config is used. They are also used for any Config fields that are zero-valued when you call New().

You can set these values easily, for example: certmagic.Email = ... sets the email address to use for everything unless you explicitly override it in a Config.

Providing an email address

Although not strictly required, this is highly recommended best practice. It allows you to receive expiration emails if your certificates are expiring for some reason, and also allows the CA's engineers to potentially get in touch with you if something is wrong. I recommend setting certmagic.Email or always setting the Email field of the Config struct.

Development and Testing

Note that Let's Encrypt imposes strict rate limits at its production endpoint, so using it while developing your application may lock you out for a few days if you aren't careful!

While developing and testing your application, use their staging endpoint, which has much higher rate limits. Even then, don't hammer it; it's simply much safer for testing. When deploying, though, use their production CA, because the staging CA doesn't issue trusted certificates.

To use staging, set certmagic.CA = certmagic.LetsEncryptStagingCA or set CA of every Config struct.


There are many ways to use this library. We'll start with the highest-level (simplest) and work down (more control).

First, we'll follow best practices and do the following:

// read and agree to your CA's legal documents
certmagic.Agreed = true
// provide an email address
certmagic.Email = "[email protected]"
// use the staging endpoint while we're developing
certmagic.CA = certmagic.LetsEncryptStagingCA

Serving HTTP handlers with HTTPS

err := certmagic.HTTPS([]string{"", ""}, mux)
if err != nil {
	return err
}
This starts HTTP and HTTPS listeners and redirects HTTP to HTTPS!

Starting a TLS listener

ln, err := certmagic.Listen([]string{""})
if err != nil {
	return err
}

Getting a tls.Config

tlsConfig, err := certmagic.TLS([]string{""})
if err != nil {
	return err
}

Advanced use

For more control, you'll make and use a Config like so:

magic := certmagic.New(certmagic.Config{
	CA:     certmagic.LetsEncryptStagingCA,
	Email:  "[email protected]",
	Agreed: true,
	// plus any other customization you want
})

// this obtains certificates or renews them if necessary
err := magic.Manage([]string{"", ""})
if err != nil {
	return err
}

// to use its certificates and solve the TLS-ALPN challenge,
// you can get a TLS config to use in a TLS listener!
tlsConfig := magic.TLSConfig()

// if you already have a TLS config you don't want to replace,
// we can simply set its GetCertificate field and append the
// TLS-ALPN challenge protocol to the NextProtos
myTLSConfig.GetCertificate = magic.GetCertificate
myTLSConfig.NextProtos = append(myTLSConfig.NextProtos, tlsalpn01.ACMETLS1Protocol)

// the HTTP challenge has to be handled by your HTTP server;
// if you don't have one, you should have disabled it earlier
// when you made the certmagic.Config
httpMux = magic.HTTPChallengeHandler(httpMux)

Great! This example grants you much more flexibility for advanced programs. However, the vast majority of you will only use the high-level functions described earlier, especially since you can still customize them by setting the package-level defaults.

If you want to use the default configuration but you still need a certmagic.Config, you can call certmagic.Manage() directly to get one:

magic, err := certmagic.Manage([]string{""})
if err != nil {
	return err
}

And then it's the same as above, as if you had made the Config yourself.

Wildcard certificates

At time of writing (December 2018), Let's Encrypt only issues wildcard certificates with the DNS challenge.

Behind a load balancer (or in a cluster)

CertMagic runs effectively behind load balancers and/or in cluster/fleet environments. In other words, you can have 10 or 1,000 servers all serving the same domain names, all sharing certificates and OCSP staples.

To do so, simply ensure that each instance is using the same Storage. That is the sole criteria for determining whether an instance is part of a cluster.

The default Storage is implemented using the file system, so mounting the same shared folder is sufficient (see Storage for more on that)! If you need an alternate Storage implementation, feel free to use one, provided that all the instances use the same one. :)

See Storage and the associated godoc for more information!

The ACME Challenges

This section describes how to solve the ACME challenges. Challenges are how you demonstrate to the certificate authority some control over your domain name, thus authorizing them to grant you a certificate for that name. The great innovation of ACME is that verification by CAs can now be automated, rather than having to click links in emails (who ever thought that was a good idea??).

If you're using the high-level convenience functions like HTTPS(), Listen(), or TLS(), the HTTP and/or TLS-ALPN challenges are solved for you because they also start listeners. However, if you're making a Config and you start your own server manually, you'll need to be sure the ACME challenges can be solved so certificates can be renewed.

The HTTP and TLS-ALPN challenges are the defaults because they don't require configuration from you, but they require that your server is accessible from external IPs on low ports. If that is not possible in your situation, you can enable the DNS challenge, which will disable the HTTP and TLS-ALPN challenges and use the DNS challenge exclusively.

Technically, only one challenge needs to be enabled for things to work, but using multiple is good for reliability in case a challenge is discontinued by the CA. This happened to the TLS-SNI challenge in early 2018—many popular ACME clients such as Traefik and Autocert broke, resulting in downtime for some sites, until new releases were made and patches deployed, because they used only one challenge; Caddy, however—this library's forerunner—was unaffected because it also used the HTTP challenge. If multiple challenges are enabled, they are chosen randomly to help prevent false reliance on a single challenge type.

HTTP Challenge

Per the ACME spec, the HTTP challenge requires port 80, or at least packet forwarding from port 80. It works by serving a specific HTTP response that only the genuine server would have to a normal HTTP request at a special endpoint.

If you are running an HTTP server, solving this challenge is very easy: just wrap your handler in HTTPChallengeHandler or call SolveHTTPChallenge() inside your own ServeHTTP() method.

For example, if you're using the standard library:

mux := http.NewServeMux()
mux.HandleFunc("/", func(w http.ResponseWriter, req *http.Request) {
	fmt.Fprintf(w, "Lookit my cool website over HTTPS!")
})
http.ListenAndServe(":80", magic.HTTPChallengeHandler(mux))

If wrapping your handler is not a good solution, try this inside your ServeHTTP() instead:

magic := certmagic.NewDefault()
func ServeHTTP(w http.ResponseWriter, req *http.Request) {
	if magic.HandleHTTPChallenge(w, req) {
		return // challenge handled; nothing else to do
	}
	// ... your usual handling
}
If you are not running an HTTP server, you should disable the HTTP challenge or run an HTTP server whose sole job it is to solve the HTTP challenge.

TLS-ALPN Challenge

Per the ACME spec, the TLS-ALPN challenge requires port 443, or at least packet forwarding from port 443. It works by presenting a special certificate, via the standard Application-Layer Protocol Negotiation (ALPN) TLS extension, carrying a special value. This is the most convenient challenge type because it usually requires no extra configuration and uses the standard TLS port, which is also where the certificates are used.

This challenge is easy to solve: just use the provided tls.Config when you make your TLS listener:

// use this to configure a TLS listener
tlsConfig := magic.TLSConfig()

Or make two simple changes to an existing tls.Config:

myTLSConfig.GetCertificate = magic.GetCertificate
myTLSConfig.NextProtos = append(myTLSConfig.NextProtos, tlsalpn01.ACMETLS1Protocol)

Then just make sure your TLS listener is listening on port 443:

ln, err := tls.Listen("tcp", ":443", myTLSConfig)

DNS Challenge

The DNS challenge is perhaps the most useful challenge because it allows you to obtain certificates without your server needing to be publicly accessible on the Internet, and it's the only challenge by which Let's Encrypt will issue wildcard certificates.

This challenge works by setting a special record in the domain's zone. To do this automatically, your DNS provider needs to offer an API by which changes can be made to domain names, and the changes need to take effect immediately for best results. CertMagic supports all of lego's DNS provider implementations! All of them clean up the temporary record after the challenge completes.

To enable it, just set the DNSProvider field on a certmagic.Config struct, or set the default certmagic.DNSProvider variable. For example, if my domains' DNS was served by DNSimple (they're great, by the way) and I set my DNSimple API credentials in environment variables:

import ""
provider, err := dnsimple.NewProvider()
if err != nil {
	return err
}
certmagic.DNSProvider = provider

Now the DNS challenge will be used by default, and I can obtain certificates for wildcard domains. See the godoc documentation for the provider you're using to learn how to configure it. Most can be configured by env variables or by passing in a config struct. If you pass a config struct instead of using env variables, you will probably need to set some other defaults (that's just how lego works, currently):

PropagationTimeout: dns01.DefaultPropagationTimeout,
PollingInterval:    dns01.DefaultPollingInterval,
TTL:                dns01.DefaultTTL,

Enabling the DNS challenge disables the other challenges for that certmagic.Config instance.

On-Demand TLS

Normally, certificates are obtained and renewed before a listener starts serving, and then those certificates are maintained throughout the lifetime of the program. In other words, the certificate names are static. But sometimes you don't know all the names ahead of time. This is where On-Demand TLS shines.

Originally invented for use in Caddy (which was the first program to use such technology), On-Demand TLS makes it possible and easy to serve certificates for arbitrary names during the lifetime of the server. When a TLS handshake is received, CertMagic will read the Server Name Indication (SNI) value and either load and present that certificate in the ServerHello, or, if one does not exist, obtain it from a CA right then and there.

Of course, this has some obvious security implications. You don't want to DoS a CA or allow arbitrary clients to fill your storage with spammy TLS handshakes. That's why, in order to enable On-Demand issuance, you'll need to set some limits or some policy to allow getting a certificate.

CertMagic provides several ways to enforce decision policies for On-Demand TLS, in descending order of priority:

  • A generic function that you write which will decide whether to allow the certificate request
  • A name whitelist
  • The ability to make an HTTP request to a URL for permission
  • Rate limiting

The simplest way to enable On-Demand issuance is to set the OnDemand field of a Config (or the default package-level value):

certmagic.OnDemand = &certmagic.OnDemandConfig{MaxObtain: 5}

This allows only 5 certificates to be requested and is the simplest way to enable On-Demand TLS, but is the least recommended. It prevents abuse, but only in the least helpful way.

The godoc describes how to use the other policies, all of which are much more recommended! :)

If OnDemand is set and Manage() is called, then the names given to Manage() will be whitelisted rather than obtained right away.

Storage

CertMagic relies on storage to store certificates and other TLS assets (OCSP staple cache, coordinating locks, etc). Persistent storage is a requirement when using CertMagic: ephemeral storage will likely lead to rate limiting on the CA-side as CertMagic will always have to get new certificates.

By default, CertMagic stores assets on the local file system in $HOME/.local/share/certmagic (and honors $XDG_DATA_HOME if set). CertMagic will create the directory if it does not exist. If writes are denied, things will not be happy, so make sure CertMagic can write to it!

The notion of a 'cluster' or 'fleet' of instances that may be serving the same site and sharing certificates, etc, is tied to storage. Simply, any instances that use the same storage facilities are considered part of the cluster. So if you deploy 100 instances of CertMagic behind a load balancer, they are all part of the same cluster if they share the same storage configuration. Sharing storage could be mounting a shared folder, or implementing some other distributed storage system such as a database server or KV store.

The easiest way to change the storage being used is to set certmagic.DefaultStorage to a value that satisfies the Storage interface. Keep in mind that a valid Storage must be able to implement some operations atomically in order to provide locking and synchronization.

If you write a Storage implementation, let us know and we'll add it to the project so people can find it!
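To illustrate what such an implementation has to provide, here is a minimal in-memory sketch with key/value operations plus an atomic lock for cluster coordination. The method set here is hypothetical and simplified, and an in-memory map is of course not persistent; consult the certmagic.Storage godoc for the real interface.

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// MemoryStorage is an illustrative, simplified store. A real CertMagic
// Storage must be persistent and shared by every instance in the cluster.
type MemoryStorage struct {
	mu    sync.Mutex
	items map[string][]byte
	locks map[string]bool
}

func NewMemoryStorage() *MemoryStorage {
	return &MemoryStorage{
		items: make(map[string][]byte),
		locks: make(map[string]bool),
	}
}

// Store saves a value under a key.
func (s *MemoryStorage) Store(key string, value []byte) error {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.items[key] = value
	return nil
}

// Load retrieves a previously stored value.
func (s *MemoryStorage) Load(key string) ([]byte, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	v, ok := s.items[key]
	if !ok {
		return nil, errors.New("key not found: " + key)
	}
	return v, nil
}

// TryLock atomically acquires a named lock. This is the operation that lets
// many instances share storage without, say, double-renewing a certificate.
func (s *MemoryStorage) TryLock(name string) (bool, error) {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.locks[name] {
		return false, nil
	}
	s.locks[name] = true
	return true, nil
}

// Unlock releases a named lock.
func (s *MemoryStorage) Unlock(name string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	delete(s.locks, name)
}

func main() {
	s := NewMemoryStorage()
	s.Store("acme/example.com/example.com.crt", []byte("PEM data"))
	ok1, _ := s.TryLock("issue_example.com")
	ok2, _ := s.TryLock("issue_example.com") // second acquisition must fail
	fmt.Println(ok1, ok2) // true false
}
```

The essential property is that the lock operation is atomic: two instances sharing real storage must never both believe they hold the same lock.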

Cache

All of the certificates in use are de-duplicated and cached in memory for optimal performance at handshake-time. This cache must be backed by persistent storage as described above.

Most applications will not need to interact with certificate caches directly. Usually, the closest you will come is to set the package-wide certmagic.DefaultStorage variable (before attempting to create any Configs). However, if your use case requires using different storage facilities for different Configs (that's highly unlikely and NOT recommended! Even Caddy doesn't get that crazy), you will need to call certmagic.NewCache() and pass in the storage you want to use, then get new Config structs with certmagic.NewWithCache() and pass in the cache.

Again, if you're needing to do this, you've probably over-complicated your application design.

FAQ

Can I use some of my own certificates while using CertMagic?

Yes, just call the relevant method on the Config to add your own certificate to the cache.

Keep in mind that unmanaged certificates are (obviously) not renewed for you, so you'll have to replace them when you do. However, OCSP stapling is performed even for unmanaged certificates that qualify.

Does CertMagic obtain SAN certificates?

Technically all certificates these days are SAN certificates because CommonName is deprecated. But if you're asking whether CertMagic issues and manages certificates with multiple SANs, the answer is no. But it does support serving them, if you provide your own.

How can I listen on ports 80 and 443? Do I have to run as root?

On Linux, you can use setcap to grant your binary the permission to bind low ports:

$ sudo setcap cap_net_bind_service=+ep /path/to/your/binary

and then you will not need to run with root privileges.

Contributing

We welcome your contributions! Please see our contributing guidelines for instructions.

Project History

CertMagic is the core of Caddy's advanced TLS automation code, extracted into a library. The underlying ACME client implementation is lego, which was originally developed for use in Caddy even before Let's Encrypt entered public beta in 2015.

In the years since then, Caddy's TLS automation techniques have been widely adopted, tried and tested in production, and served millions of sites and secured trillions of connections.

Now, CertMagic is the actual library used by Caddy. It's incredibly powerful and feature-rich, but also easy to use for simple Go programs: one line of code can enable fully-automated HTTPS applications with HTTP->HTTPS redirects.

Caddy is known for its robust HTTPS+ACME features. When ACME certificate authorities have had outages, in some cases Caddy was the only major client that didn't experience any downtime. Caddy can weather OCSP outages lasting days, or CA outages lasting weeks, without taking your sites offline.

Caddy was also the first to sport 'on-demand' issuance technology, which obtains certificates during the first TLS handshake for an allowed SNI name.

Consequently, CertMagic brings all these (and more) features and capabilities right into your own Go programs.

You can watch a 2016 dotGo talk by the author of this library about using ACME to automate certificate management in Go programs.

Credits and License

CertMagic is a project by Matthew Holt, who is the author, and various contributors, who are credited in the commit history of either CertMagic or Caddy.

CertMagic is licensed under Apache 2.0, an open source license. For convenience, its main points are summarized as follows (but this is no replacement for the actual license text):

  • The author owns the copyright to this code
  • Use, distribute, and modify the software freely
  • Private and internal use is allowed
  • License text and copyright notices must stay intact and be included with distributions
  • Any and all changes to the code must be documented

No comments posted yet: Link to HN comments page

Historical Discussions: Moral Machine (October 04, 2016: 148 points)
Moral Machine (June 26, 2016: 5 points)
MIT Online Activity Lets You Choose Who Gets Killed If a Self-Driving Car Wrecks (October 02, 2016: 4 points)
Moral Machine: Gathering human views on moral choices made by machines (October 03, 2016: 3 points)
MIT Moral Machine (August 08, 2016: 3 points)
MIT Moral Machine – Decide How an Autonomous Vehicle Should Behave (June 28, 2016: 3 points)
The Moral Machine (August 11, 2016: 3 points)
Moral Machine (August 09, 2016: 3 points)
Moral Machine – Platform for human perspective on machine-made moral decisions (July 10, 2016: 3 points)
Moral Machine – Help MIT to program ethical dilemmas (June 30, 2016: 3 points)
Moral Machine (September 25, 2018: 2 points)
The Moral Machine (October 01, 2016: 2 points)
Moral Machine (August 16, 2016: 2 points)
Moral Machine: Gathering human perspective on moral decisions made by machines (August 14, 2016: 2 points)
Moral Machine: Gathering human perspective on moral decisions made by machines (August 09, 2016: 2 points)
Moral Machine: human perspectives on AI's moral decisions (August 04, 2016: 1 points)
Moral Machine: A platform for public participation in machine-made decisions (September 16, 2016: 1 points)
Moral Machine. Gathering a human perspective on moral decisions made by IA (September 07, 2016: 1 points)
MoralMachine – Human perspective on moral decisions made by machines (August 19, 2016: 1 points)

Moral Machine

31 points about 17 hours ago by based2 in 112th position | Estimated reading time – 4 minutes | comments

This website has three main functional interfaces that can be accessed from the menu bar.

Judge

You will be presented with random moral dilemmas faced by a machine: for example, a self-driving car, which may or may not have passengers in it. The car can sense the presence and approximate identity of pedestrians on the road ahead of it, as well as of any passengers who may be in the car.

The car also detects that the brakes have failed, leaving it with two options: keep going and hit the pedestrians ahead of it, or swerve and hit the pedestrians in the other lane. Some scenarios will include the case of a non-empty car; in those cases, one of the two lanes has a barrier that can be crashed into, affecting all passengers. One or two pedestrian signals may also be included in a given scenario, changing the legality of a pedestrian's position on their respective lane.

You are outside the scene, watching it from above. Nothing will happen to you. You have control over choosing what the car should do. You can express your choice by clicking on one of the two choices in front of you. In each of the two possible outcomes, the affected characters will be visually marked with the symbol of a skull, a medical cross, or a question mark to signal what will happen to this character, corresponding to death, injury, or an uncertain outcome, respectively.

You may proceed from scenario to scenario by selecting the outcome you feel is most acceptable to you. This can be done by clicking the outcome of your choice, which will be highlighted when you hover your cursor over it. A button below each outcome depiction will let you toggle the display of a textual summary of the outcome that you can read. A counter at the top right will let you know your progress in the sequence of scenarios.

Upon finishing all the scenarios, you will see a summary of the aggregated trend in your responses in the session of the game you just played, compared to the aggregated responses of other players, along several different dimensions. You may 'Share' or 'Link' your results using the corresponding buttons, and/or play another scenario sequence by clicking 'More'.

Design

Additionally, you may try to create a new scenario yourself. You will first be asked to choose whether you want to have the dilemma be between two sets of pedestrians, or, if between pedestrians and passengers, whether the self-driving car will have to swerve to save the passengers or pedestrians. You can then choose to add legal complications in the form of a pedestrian signal.

Finally, you can choose characters to add to each possible location in the scenario. The default fate for the characters who crash or are hit is death, but you can change this using the dropdown for each location. Note that the fates of impacted characters can be set independently for each character, even within the same location.

You can reset the scenario creation interface at any time by clicking the 'Start Over' button on the left. Once you are done creating a scenario, and have given it a creative title, you can submit it by clicking the "Submit" button on the right. Once you do, your scenario will be added to the database of scenarios created by users of the platform.

Browse

This interface lets you view the scenarios you and other users of this platform have created. The scenarios are arranged chronologically; you can click '❮' or '❯' to move ahead or back in this arrangement. Alternatively, you can click the 'Random' button to take you to any random scenario in the timeline. As with the Judge interface, you are able to toggle the display of textual descriptions for each outcome using the button below the respective outcome.

You can show your appreciation for particularly interesting scenarios by clicking the 'Like' button, and can 'Share' or 'Link' such scenarios using the corresponding buttons. A discussion thread for each scenario is displayed below the depiction of the scenario; we encourage you to participate.

All Comments: [-]

k2xl(3877) about 4 hours ago [-]

I remember seeing this last year. Issue I have with the choices is that it is missing an option: flip a coin.

On some of the questions, I find the options morally equivalent. So in these situations, if I were programming a solution, I would leave it up to chance and use a random number generator to decide the fate.

baroffoos(10000) about 3 hours ago [-]

I had the same feeling about this. On the questions I felt were exactly equivalent I tried to guess what most people would pick and picked the opposite to balance it out.

zzzcpan(3706) about 3 hours ago [-]

Why would you program a solution that has to kill people? If you are aware of specific situations, shouldn't you program a solution that completely avoids such situations and saves everyone?

protomyth(92) about 4 hours ago [-]

How about, if you are in the car that is going to kill people, you die first because you made the choice to get in the car, but the poor person on the sidewalk wasn't part of that choice?

I get throwing in animals and the car should not avoid animals and kill people. Heck, we have people who do that and end up killing more people just because of their poor judgement.

zzzcpan(3706) about 3 hours ago [-]

I guess it makes sense. If questions were presented on the meta level about who made the choice, most people would probably agree that those with the choice have to sacrifice themselves first.

baroffoos(10000) about 3 hours ago [-]

Should have 2 options on the car, first one is for it to drive at a speed where no one will die on impact and the second one is full speed and if something goes wrong you will be the one who pays the price.

People might be totally ok with their self driving car moving at 30-40km/h if it meant they could sit on their laptop and get stuff done on the way.

TheSmiddy(10000) about 5 hours ago [-]

I think these trolley problems are a waste of everybody's time. Building redundant reliable braking systems will be orders of magnitudes easier than creating a system to fairly and accurately assess who is the best set of people to kill in a disaster scenario.

cscurmudgeon(3379) about 3 hours ago [-]

Trolley problems are not about trolleys. Don't take them literally (e.g. Schrodinger's cat).

whyte_mackay(10000) about 1 hour ago [-]

I hear this often (that trolley problem is not relevant), but then I discovered that a lot of realistic ML fairness problems can be restated as a trolley problem.

You have a classifier for credit assignment (giving a loan, etc.). The classifier is 99% accurate on the entire population. The classifier is 55% accurate on a small minority. You can improve the minority accuracy to 90% at the cost of 0.3% decrease of general accuracy. What do you do?

For self-driving: Your accident rate is 0.0001% for the entire population. Your accident rate is 0.0003% for black pedestrians at night. You can allocate more compute/research/resources to equalize the accident rate of black pedestrians at the cost of increasing accident rate for the entire population to 0.00011%. What do you do?

oska(729) about 4 hours ago [-]

> I think these trolley problems are a waste of everybody's time.

I agree. It's an angle that the media love but has little real world applicability.

baroffoos(10000) about 3 hours ago [-]

It seems totally useless for self driving cars but it is an interesting view in to human priorities

1999(10000) about 3 hours ago [-]

These scenarios are idiotic. If you want to wank off about self driving car ethics, here is a much more realistic scenario: should all self-driving cars report their location to 911 dispatch to allow any vehicle to be re-purposed as an emergency vehicle at any time? That might actually save someone.

Also, can anyone identify a useful idea that philosophers have come up with in the last 50 years?

chongli(10000) 31 minutes ago [-]

should all self-driving cars report their location to 911 dispatch to allow any vehicle to be re-purposed as an emergency vehicle at any time?

That doesn't make much sense. The primary advantage of emergency vehicles is to transport medics to the site of the emergency so that they can administer first aid and stabilize the person for safe transport to the hospital.

Having just any person off the street pick up a critically injured person is not going to go well. In all likelihood, they'll further injure the person due to their lack of training.

zappo2938(3731) about 2 hours ago [-]

It is a tradition that all boats respond to distress signals in their vicinity. "A master of a ship at sea, which is in a position to be able to provide assistance on receiving a signal from any source that persons are in distress at sea, is bound to proceed with all speed to their assistance." [0]


claudiawerner(10000) about 3 hours ago [-]

>Also, can anyone identify a useful idea that philosophers have come up with in the last 50 years?

What are you counting as useful, and to what extent must the results from one field be useful in another for such a field to appease you? In mathematics, the proof of Fermat's Last Theorem isn't very useful, but various attempts to prove it opened new branches of maths. I also question whether usefulness should be an end in itself.

Your 'realistic scenario' can be reasoned with, and that reasoning is called philosophy. But the other aspect of philosophy is critically examining what we think is obvious. Your statement assumes various ideas of metaethics (that there are good and bad things, and we should strive for the good), ethics (i.e. that saving someone, no matter who, is a good thing) and political philosophy (that the state should have the right to demand knowledge of the car's position), and leads the way to questions on the philosophy of law (to what extent one's rights to property and full control over a car coincide with the aims of civil society).

cscurmudgeon(3379) about 3 hours ago [-]

Did Schrodinger put a cat in a box?

woodruffw(3356) about 3 hours ago [-]

If you'll give me 5 more, the Gettier Problem[1] turned 55 this year. Most work in nonmonotonic reasoning is also under 50 years old.


Historical Discussions: Pampy: Pattern Matching for Python (December 16, 2018: 32 points)
Pampy: Pattern Matching for Python (December 12, 2018: 4 points)

Pampy: Pattern Matching for Python

31 points about 20 hours ago by fagnerbrack in 318th position | Estimated reading time – 10 minutes | comments

Pampy: Pattern Matching for Python

Pampy is pretty small (150 lines), reasonably fast, and often makes your code more readable and hence easier to reason about. There is also a JavaScript version, called Pampy.js.

You can write many patterns

Patterns are evaluated in the order they appear.
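First-match-wins evaluation can be sketched in a few lines. This is an illustrative model, not Pampy's actual implementation (which supports far richer patterns); `simple_match` and `ANY` here are hypothetical stand-ins:

```python
# Illustrative sketch of first-match-wins evaluation over flattened
# (pattern, action) pairs, where a pattern is a type, a literal value,
# or the wildcard object ANY.
ANY = object()

def simple_match(value, *pairs, strict=True):
    # pairs come flattened: pattern1, action1, pattern2, action2, ...
    for pattern, action in zip(pairs[::2], pairs[1::2]):
        matched = (
            pattern is ANY
            or (isinstance(pattern, type) and isinstance(value, pattern))
            or pattern == value
        )
        if matched:
            return action(value) if callable(action) else action
    if strict:
        raise ValueError("no pattern matched")
    return False

# Order matters: the literal 0 is tried before the broader int pattern.
print(simple_match(0, 0, "zero", int, "some int"))   # zero
print(simple_match(7, 0, "zero", int, "some int"))   # some int
```

Because evaluation stops at the first match, more specific patterns should always be listed before more general ones.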

You can write Fibonacci

The operator _ means 'any other case I didn't think of'.

from pampy import match, _
def fibonacci(n):
    return match(n,
        1, 1,
        2, 1,
        _, lambda x: fibonacci(x-1) + fibonacci(x-2)
    )

You can write a Lisp calculator in 5 lines

from pampy import match, REST, _
def lisp(exp):
    return match(exp,
        int,                lambda x: x,
        callable,           lambda x: x,
        (callable, REST),   lambda f, rest: f(*map(lisp, rest)),
        tuple,              lambda t: list(map(lisp, t)),
    )
plus = lambda a, b: a + b
minus = lambda a, b: a - b
from functools import reduce
lisp((plus, 1, 2))                 	# => 3
lisp((plus, 1, (minus, 4, 2)))     	# => 3
lisp((reduce, plus, (range, 10)))       # => 45

You can match so many things!

match(x,
    3,              'this matches the number 3',
    int,            'matches any integer',
    (str, int),     lambda a, b: 'a tuple (a, b) you can use in a function',
    [1, 2, _],      'any list of 3 elements that begins with [1, 2]',
    {'x': _},       "any dict with a key 'x' and any value associated",
    _,              'anything else'
)

You can match [HEAD, TAIL]

from pampy import match, HEAD, TAIL, _
x = [1, 2, 3]
match(x, [1, TAIL],     lambda t: t)            # => [2, 3]
match(x, [HEAD, TAIL],  lambda h, t: (h, t))    # => (1, [2, 3])

TAIL and REST actually mean the same thing.

You can nest lists and tuples

from pampy import match, _
x = [1, [2, 3], 4]
match(x, [1, [_, 3], _], lambda a, b: [1, [a, 3], b])           # => [1, [2, 3], 4]

You can nest dicts. And you can use _ as key!

pet = { 'type': 'dog', 'details': { 'age': 3 } }
match(pet, { 'details': { 'age': _ } }, lambda age: age)        # => 3
match(pet, { _ : { 'age': _ } },        lambda a, b: (a, b))    # => ('details', 3)

It feels like putting multiple _ inside dicts shouldn't work. Isn't ordering in dicts not guaranteed? But it works because, as of Python 3.7, dict maintains insertion order by default.
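You can convince yourself with a trivial, Pampy-free demo (the pet dict mirrors the example above; the added key is arbitrary):

```python
# dict preserves insertion order (guaranteed since Python 3.7, and already
# true in CPython 3.6 as an implementation detail), so pattern keys line up
# with value keys deterministically.
pet = {'type': 'dog', 'details': {'age': 3}}
print(list(pet))            # ['type', 'details']
pet['name'] = 'Rex'
print(list(pet))            # ['type', 'details', 'name']
```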

You can match class hierarchies

class Pet:          pass
class Dog(Pet):     pass
class Cat(Pet):     pass
class Hamster(Pet): pass
def what_is(x):
    return match(x,
        Dog, 		'dog',
        Cat, 		'cat',
        Pet, 		'any other pet',
          _, 		'this is not a pet at all',
    )
what_is(Cat())      # => 'cat'
what_is(Dog())      # => 'dog'
what_is(Hamster())  # => 'any other pet'
what_is(Pet())      # => 'any other pet'
what_is(42)         # => 'this is not a pet at all'

All the things you can match

As Pattern you can use any Python type, any class, or any Python value.

The operator _ and built-in types like int or str extract variables that are passed to functions.

Types and Classes are matched via isinstance(value, pattern).

Iterable Patterns match recursively through all their elements. The same goes for dictionaries.
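To illustrate what "recursively" means here, below is a stripped-down checker. It is a sketch of the idea rather than Pampy's actual code (which also extracts arguments); `deep_match` is a hypothetical helper:

```python
# Illustrative sketch: an iterable or dict pattern is matched recursively,
# element by element (or key by key), bottoming out at types and literals.
def deep_match(value, pattern):
    if isinstance(pattern, type):
        return isinstance(value, pattern)
    if isinstance(pattern, (list, tuple)):
        return (isinstance(value, type(pattern))
                and len(value) == len(pattern)
                and all(deep_match(v, p) for v, p in zip(value, pattern)))
    if isinstance(pattern, dict):
        return (isinstance(value, dict)
                and all(k in value and deep_match(value[k], p)
                        for k, p in pattern.items()))
    return value == pattern

print(deep_match([1, [2, 3], 4], [int, [2, int], 4]))  # True
print(deep_match([1, [2, 3]], [1, [2, 4]]))            # False
```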

| Pattern Example | What it means | Matched Example | Arguments Passed to function | NOT Matched Example |
| --- | --- | --- | --- | --- |
| 'hello' | only the string 'hello' matches | 'hello' | nothing | any other value |
| None | only None | None | nothing | any other value |
| int | Any integer | 42 | 42 | any other value |
| float | Any float number | 2.35 | 2.35 | any other value |
| str | Any string | 'hello' | 'hello' | any other value |
| tuple | Any tuple | (1, 2) | (1, 2) | any other value |
| list | Any list | [1, 2] | [1, 2] | any other value |
| MyClass | Any instance of MyClass. And any object that extends MyClass. | MyClass() | that instance | any other object |
| _ | Any object (even None) | | that value | |
| ANY | The same as _ | | that value | |
| (int, int) | A tuple made of any two integers | (1, 2) | 1 and 2 | (True, False) |
| [1, 2, _] | A list that starts with 1, 2 and ends with any value | [1, 2, 3] | 3 | [1, 2, 3, 4] |
| [1, 2, TAIL] | A list that starts with 1, 2 and ends with any sequence | [1, 2, 3, 4] | [3, 4] | [1, 7, 7, 7] |
| {'type': 'dog', 'age': _} | Any dict with 'type': 'dog' and with an age | {'type': 'dog', 'age': 3} | 3 | {'type': 'cat', 'age': 2} |
| {'type': 'dog', 'age': int} | Any dict with 'type': 'dog' and with an int age | {'type': 'dog', 'age': 3} | 3 | {'type': 'dog', 'age': 2.3} |
| re.compile('(\w+)-(\w+)-cat$') | Any string that matches that regular expression | 'my-fuffy-cat' | 'my' and 'fuffy' | 'fuffy-dog' |

Using strict=False

By default match() is strict. If no pattern matches, it raises a MatchError.

You can prevent it using strict=False. In this case match just returns False if nothing matches.

>>> match([1, 2], [1, 2, 3], 'whatever')
MatchError: '_' not provided. This case is not handled: [1, 2]
>>> match([1, 2], [1, 2, 3], 'whatever', strict=False)
False

Using Regular Expressions

Pampy supports Python's regular expressions. You can pass a compiled regex as a pattern; Pampy will run it against the value and pass the result of .groups() to the action function.

def what_is(pet):
    return match(pet,
        re.compile('(\w+)-(\w+)-cat$'),     lambda name, my: 'cat '+name,
        re.compile('(\w+)-(\w+)-dog$'),     lambda name, my: 'dog '+name,
        _,                                  'something else'
what_is('fuffy-my-dog')     # => 'dog fuffy'
what_is('puffy-her-dog')    # => 'dog puffy'
what_is('carla-your-cat')   # => 'cat carla'
what_is('roger-my-hamster') # => 'something else'

Install

Currently it works only in Python >= 3.6, because dict matching works only in recent Pythons.

I'm currently working on a backport with some minor syntax changes for Python 2.

To install it:

$ pip install pampy

or $ pip3 install pampy

All Comments: [-]

marmaduke(3989) about 15 hours ago [-]

It's neat that Python allows writing such things, but it'd be nice to see what the effect on debugging and stack traces are before writing anything with it.

bjoli(10000) about 9 hours ago [-]

Pattern matching is actually not very hard to write. There is a portable pattern matcher for scheme that uses macros to provide high quality code generation.

In guile the module (ice-9 match) generally has zero runtime cost compared to equal hand-written code.

crimsonalucard(10000) about 16 hours ago [-]

One of the very reasons why Haskell and Rust are so safe is because pattern match checking in these languages is exhaustive. If you don't cover every possible type constructor for an enum or pattern the compiler will throw an error.

For example, the Maybe monad used with match must have Nothing and Just handled during pattern matching. Precompile time logic checking.

The below will throw a precompile time error:

  handleMaybeNum :: Maybe Int -> Int
  handleMaybeNum (Just a) = a
The below will not:

  handleMaybeNum :: Maybe Int -> Int
  handleMaybeNum (Just a) = a
  handleMaybeNum Nothing = -1

Could the same be said for this library? If so, when combined with mypy type checking and functional programming, this could transform Python into a language with Haskell-level safety.
freddie_mercury(10000) about 15 hours ago [-]

> Could the same be said for this library?

This is clearly answered on the page.

'By default match() is strict. If no pattern matches, it raises a MatchError'

marmaduke(3989) about 15 hours ago [-]

It seems match checking requires static typing, so that'll be a job for a mypy extension.

Historical Discussions: Show HN: Simple tool to upload and paste URL's to screenshots and files (December 11, 2018: 20 points)

Show HN: Simple tool to upload and paste URL's to screenshots and files

20 points 5 days ago by OkGoDoIt in 3939th position | Estimated reading time – 4 minutes | comments

Upload And Paste

This is a small Windows tool that allows you to paste the contents of the clipboard as plain text (removing formatting). Additionally, it automatically uploads images and files on the clipboard to a server and pastes the URL. This provides functionality similar to CloudApp, except on your own server for free. It supports FTP, SFTP, SCP, AWS S3, and WebDAV via the included WinSCP library.

Here is an example screenshot from my dev machine uploaded this way: I simply pressed Alt-PrintScreen on my keyboard to take a screenshot of my active app, then Ctrl-Shift-V to paste the URL of the uploaded screenshot.


  1. If content of clipboard is a file, it uploads that file to the server and pastes the public URL.
  2. If content of the clipboard is an image (such as a screenshot or other raw bitmap data), it saves to a png and uploads to the server, pasting the public URL.
  3. If content of clipboard is rich text or HTML, it pastes the plain text without formatting.
  4. If none of the above, it silently aborts.

In all supported cases it pastes a plain text, easily sharable representation of the content.


  1. Either build from source in Visual Studio 2017 or download the pre-built binary from (yes that was uploaded using this tool!)

  2. Rename one of the server-config.example.json files to server-config.json and fill in the details.

    • 'baseUploadPath': '/var/www/mysite/' This is the path relative to the root of your server where files should be uploaded

    • 'baseUrl':'' This is the root public URL that the files are accessible from.

    • 'fileDir': 'share/' Optional You can specify a subdirectory where non-screenshot files are uploaded. Since these files may be of any type, you should configure your server/host to serve these files directly without running/executing them (for example, a shared hosting provider may assume you want to execute a PHP script, which may result in unforeseen issues)

    • 'ssDir': 'ss/' Optional You can specify a subdirectory where screenshots are uploaded.

    • The remaining items are configuration for WinSCP to connect to the server. You can generate the values directly via WinSCP as documented here:

  3. If you want to use a hotkey other than Ctrl-Shift-V, you need to modify UploadAndPaste.hotkeyLoader.ahk and use AutoHotKey to recompile the script

  4. Run UploadAndPaste.hotkeyLoader.exe or set it to run on startup. Depending on how you set it to run on startup, you may need to ensure the working directory is specified.

  5. With some data on the clipboard and a focused text area to paste into, press Ctrl-Shift-V to test it out.
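Putting the fields from step 2 together, a filled-in server-config.json might look roughly like this. The hostname and paths are placeholders, and the WinSCP connection fields, which vary by protocol and are generated via WinSCP as described above, are omitted:

```json
{
  "baseUploadPath": "/var/www/mysite/",
  "baseUrl": "https://files.example.com/",
  "fileDir": "share/",
  "ssDir": "ss/"
}
```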

Known Issues

  1. When there are multiple files on the clipboard, it only uploads one.
  2. There is no progress indicator, so large uploads may appear to stall on slow connections. The mouse cursor changes to the wait cursor so you know it's working.


I created this tool to scratch my own itch. I hope you find it useful! Feel free to report any bugs or suggestions you may find. More information about this clipboard upload tool on my homepage.

All Comments: [-]

user9182031(10000) 5 days ago [-]

This might be a silly question but I've honestly tried to Google it without any success. Is there an easy way to put something like this behind a web based 'portal'. Like, I'd love to have an external facing login site that would allow me to access internal resources like this. I currently use OpenVPN but it'd be neat if this could be done via a web based portal without the need for a heavy VPN solution. Anyone have any suggestions?

OkGoDoIt(3939) 5 days ago [-]

I think you're asking how to make the public URL shares accessible only with a password, in which case could just use basic password auth on your web server. Something like for Linux or for Windows.

cosmie(3992) 5 days ago [-]

Something like Cloudflare Access[1] maybe? Their basic plan is free for up to 5 seats/login accounts.


tofu8(10000) 5 days ago [-]

Awesome work! I love when people build productivity tools for Windows.

What's the difference between this and ShareX?


OkGoDoIt(3939) 5 days ago [-]


I wasn't aware of ShareX, perhaps it handles this use case just as well. That looks like a super powerful tool, whereas my project is very focused on a specific workflow I personally hit a lot. I assume ShareX is probably a better tool for general use.

tenryuu(10000) 5 days ago [-]

less features

Historical Discussions: SRE School: No Haunted Forests (November 01, 2018: 13 points)
On rewriting code: No haunted forests (December 16, 2018: 9 points)

No Haunted Forests

18 points about 4 hours ago by fanf2 in 82nd position | Estimated reading time – 6 minutes | comments

SRE School: No Haunted Forests

All industrial codebases contain bad code. To err is human, and situations get very human when you're staring down the barrel of a launch deadline. You've heard the euphemism tech debt, where like a car loan you hold a recurring obligation in exchange for immediate liquidity. But this is misleading: bad code is not merely overhead, it also reduces optionality for all teams that come in contact with it. Imagine being unable to get indoor plumbing because your neighbor has a mortgage!

Thus a better analogy for bad code is a haunted forest. Bad code negatively affects everything around it, so engineers will write ad-hoc scripts and shims to protect themselves from direct contact with the bad code. After the authors move to other projects, their hard work will join the forest.

Healthy engineering orgs do not tolerate the presence of haunted forests. When one is discovered you must move vigorously to contain, understand, and eradicate it.

Make this the motto of your team: No Haunted Forests!

Engineer debugging a Puppet manifest (2018, colorized)

Not all intimidating or unmaintained codebases are haunted forests. Code may be difficult for a newcomer to come up to speed on, or it might be a stable implementation of some RFC. A couple of rules of thumb to identify code worthy of a complete rewrite:

  • Nobody at the company understands how the code should[1] behave.
  • It is obvious to everyone on the team[2] that the current implementation is not acceptable.
  • The project's missing features or erroneous behavior are impacting other teams.
  • At least one competent engineer has attempted to improve the existing codebase, and failed for technical reasons.
  • The codebase is resistant to static analysis, unit testing, interactive debuggers, and other fundamental tooling.

Fresh graduates often push for a rewrite at the first sign of complexity, because they've spent the last four years in an environment where codebase lifetimes are measured in weeks. After their first unsuccessful rewrite they will evolve into Junior Engineers, repeating the parable of Chesterton's Fence and linking to that old Joel Spolsky thinkpiece about Netscape[3]. Be careful not to confuse this reactive anti-rewrite sentiment with true objections to your particular rewrite. Remind them that Joel wrote that when source control meant CVS.

Rewriting an existing codebase should be modeled as a special case of a migration. Don't try to replace the whole thing at once: systematize how users interact with the existing code, insert strong API boundaries between subsystems, and make changes intentionally.

User Interaction will make or break your rewrite. You must understand what the touch-points are for users of the existing system so you can maintain UI compatibility. Often rewrites mandate some changes, so try to put them all near the start (if you know what the final state should be) or delay them to the end (when you can make it seem like a big-bang migration). If the user-facing changes are significant, see if you can arrange for separate opt-in and opt-out periods during which both interaction modes co-exist.

Subsystem API Boundaries let you carve up the old system into chunks that are easier to reason about. Be fairly strict about this: run the components in separate processes, on separate machines, or whatever is needed to guarantee that your new API is the only mechanism they have to communicate. Do this recursively until the components are small enough that rewriting them from scratch is tedious instead of frightening.

Intentional Changes happen when the new codebase's behavior is forced to deviate from the old. At this point you should have a good idea which behavior, if either, is correct. If there's no single correct behavior, it's fine to settle for 'predictable' or (in the limit) 'deterministic'. By making changes intentionally you minimize the chances of forced rollbacks, and may even be able to detect users depending on the old behavior.

Work incrementally. A good rewrite is valid and fully functional at any given checkpoint, which might be commits or nightly builds or tagged releases. The important thing is that you never get into a state where you're forced to roll back a functional part of the new system due to breakage in another part.

All bad code is bad in its own special way, but there are some properties that are especially likely to make it hard to refactor incrementally. These are generally programming styles that hide state, obscure control flow, or permit type confusion.

Hidden State means mutable global variables and dynamic scoping. Both of these inhibit a reader's understanding of what code will do, and force them to resort to logging or debuggers. They're like catnip for junior developers, who value succinct code but haven't yet been forced to debug someone else's succinct code at 3 AM on a Sunday.

Non-Local Control Flow prevents a reader from understanding what path execution will take. In the old times this meant setjmp and longjmp, but nowadays you'll see it in the form of callbacks and event loops. Python's Twisted and Ruby's EventMachine can easily turn into global callback dispatchers, preventing static analysis and rendering stack traces useless.

Dynamic Types require careful and thoughtful programming practices to avoid turning into 'type soup'. Highly magical metaprogramming like __getattr__ or method_missing is trivially easy to abuse in ways that make even trivial bug fixes too risky to attempt. Tooling such as Mypy and Flow can help here, but introducing them into an existing haunted forest is unlikely to have significant impact. Use them in the new codebase from the start, and they might be able to reclaim portions of the original code.

Distributed Systems can become haunted forests through sheer size, if no single person is capable of understanding the entire API surface they provide. Note that microservices don't automatically prevent this, because merely splitting up a monolith turns the internal structure into API surface. Each of the above per-process issues has distributed analogues; for example, S3 is global mutable state and JSON-over-HTTP is dynamically typed.

[1] A codebase where nobody knows what behavior it currently has is materially different from one where nobody understands what behavior it should have. The former don't need to be rewritten, because you can grind their test coverage up and then safely refactor.

[2] You will sometimes hear objections from people who have not worked directly on the bad code, but have opinions about it anyway. Let them know that they're welcome to help out and you can arrange for a temporary rotation into the role of Forest Ranger.

[3] The real reason Netscape failed is they wrote a dreadful browser, then spent three years writing a second dreadful browser. The fourth rewrite (Firefox) briefly had a chance at being the most popular browser, until Google's rewrite of Konqueror took the lead. The moral of this story: rewrites are a good idea if the new version will be better.
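The Dynamic Types point can be made concrete with a small sketch (the class and field names here are hypothetical, for illustration only): a __getattr__-based object accepts any attribute name, so typos survive until runtime and checkers like Mypy cannot flag them, while an explicitly-typed equivalent fails fast.

```python
# Hypothetical config object built on __getattr__: every attribute lookup
# "succeeds", so a misspelled name silently yields a default value.
class MagicConfig:
    def __init__(self, values: dict):
        self._values = values

    def __getattr__(self, name):
        # Called only when normal lookup fails -- any name is "valid" here,
        # which defeats static analysis and type checkers alike.
        return self._values.get(name)


# The explicit equivalent: misspelled attributes raise AttributeError
# immediately, and a type checker can verify every access.
class PlainConfig:
    def __init__(self, timeout: int, retries: int):
        self.timeout = timeout
        self.retries = retries


magic = MagicConfig({"timeout": 30, "retries": 3})
print(magic.timout)    # typo: prints None instead of failing

plain = PlainConfig(timeout=30, retries=3)
print(plain.timeout)   # prints 30; a typo like plain.timout would raise AttributeError
```

This is the sense in which introducing a checker from day one of the new codebase pays off: it can only see bugs the code's style lets it see.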

No comments posted yet: Link to HN comments page

Historical Discussions: Few people are actually trapped in filter bubbles. Why do they say they are? (December 16, 2018: 10 points)

Few people are actually trapped in filter bubbles. Why do they say they are?

16 points 1 day ago by Reedx in 3037th position | Estimated reading time – 6 minutes | comments

The growing stream of reporting on and data about fake news, misinformation, partisan content, and news literacy is hard to keep up with. This weekly roundup offers the highlights of what you might have missed.

Few people are in complete filter bubbles in which they only consume, say, Fox News, Matt Grossmann writes in a new report for Knight (and there's a summary version of it on Medium here). But the "popular story of how media bubbles allegedly undermine democracy" is one that people actually seem to enjoy clinging to.

"Media choice has become more of a vehicle of political self-expression than it once was," Grossmann writes. "Partisans therefore tend to overestimate their use of partisan outlets, while most citizens tune out political news as best they can." We use our consumption of certain media outlets as a way of signaling who we are, even if we A) actually read across fairly broad number of sources and/or B) actually don't read all that much political news at all. This makes sense when you think about it in contexts beyond news — food, for instance. I might enjoy identifying myself on Instagram as a foodie who drinks a lot of cold brew and makes homemade bread, but I am also currently eating at a Chic-fil-A.

Grossmann looks at two different types of studies of media consumption: Studies that ask people to name news sources they consume, and studies that actually track their news consumption behavior (by, say, recording what they do online). The results of these two types of studies are different:

The key insight is that people overreport their consumption of news and underreport its variety relative to the media consumption habits revealed through direct measurement. Partisans especially seem to report much higher rates of quintessential partisan media consumption (such as Rush Limbaugh listenership) and underreport the extent to which they use nonpartisan or ideologically misaligned outlets. People may explicitly tell interviewers they rely mostly on Fox News, while their web browsing histories and Facebook logs suggest they visit several different newspapers and CNN's website (along with many apolitical sites).

This may seem like kind of a good thing, but don't get too excited, says Grossmann:

Republicans are not as addicted to Fox News as they claim, nor are Democrats as reliant on Rachel Maddow as they say. But that also means partisans now think of media consumption as an expressive political act, and therefore believe that they should stick to Fox, as right-thinking Republicans, or that they should be loyal to MSNBC, as right-thinking Democrats.

There is also some useful stuff on how partisan trust in the media has shifted: "Democratic trust in media is now higher than it has been in over 20 years, while the reverse is true for Republicans."

"Research findings thus far do not support expansive claims about partisan media bubbles or their consequences," Grossmann writes, though this doesn't mean that we can totally stop worrying about this; he argues we particularly need to work on strengthening local political news, as "we have a hyperpartisan and engaged subset of Americans who consume mostly national news of all kinds," and a more robust local media could be a useful tool in drawing in the majority of Americans who consume little to no news at all.

Search engine DuckDuckGo — which, to be clear, is a Google competitor — published an examination of how Google's search results differ by user.

We asked volunteers in the U.S. to search for "gun control", "immigration", and "vaccinations" (in that order) at 9pm ET on Sunday, June 24, 2018. Volunteers performed searches first in private browsing mode and logged out of Google, and then again not in private mode (i.e., in "normal" mode). We compiled 87 complete result sets — 76 on desktop and 11 on mobile. Note that we restricted the study to the U.S. because different countries have different search indexes.

The main finding was that "most people saw results unique to them, even when logged out and in private browsing mode." This, says DuckDuckGo, is a sign that "private browsing mode and being logged out of Google offered almost zero filter bubble protection."

But are filter bubbles really the problem here? Danny Sullivan — cofounder of SearchEngineLand and now, yep, Google's public search liaison — argues fairly persuasively that they're not, because even DuckDuckGo users see different results.

My Duck Duck Go results compared to colleagues were also different. Again applying DDG's own statement to itself: "With no filter bubble, one would expect to see very little variation of search result pages — nearly everyone would see the same single set of results."

— Danny Sullivan (@dannysullivan) December 6, 2018

Google also responded in this thread — watch that passive voice, though.

Useful thread on what is actually going on with Google Search now. But it's disingenuous to say 'a myth has developed' in passive voice. Google actively promoted the message of 'search personalized on your browsing history' when it launched, e.g.

— Emma Llanso (@ellanso) December 5, 2018

Over the years, a myth has developed that Google Search personalizes so much that for the same query, different people might get significantly different results from each other. This isn't the case. Results can differ, but usually for non-personalized reasons. Let's explore...

— Google SearchLiaison (@searchliaison) December 4, 2018

I don't think this is saying 'Filter bubble' as much as saying 'Google is damn nondeterministic on controversial topics'

— Nicholas Weaver (@ncweaver) December 5, 2018

In particular, the conversation often begins and ends with concerns around the filter bubble. I think many of the discussions in Silicon Valley are overly fixated on filter bubble concerns- while in the academic lit the filter bubble has often not stood up to empirical scrutiny.

— David Lazer (@davidlazer) December 5, 2018

It's not clear that a pizza-optimized search engine is optimal for the broader public sphere.

— David Lazer (@davidlazer) December 5, 2018

All Comments: [-]

JohnJamesRambo(3992) 29 minutes ago [-]

My parents are completely in a Fox News bubble and so are lots of other people unfortunately. It changed their views completely. Prior to Fox News they were reasonable humans that saw things in a rounded way and that eroded upon repeated exposure to it. I wish they had never gotten cable.

dahart(3571) 22 minutes ago [-]

My wife and I were able to help her father escape the Fox bubble. The film "Outfoxed" was useful in our case, FWIW.

gundmc(10000) about 2 hours ago [-]

'Filter Bubbles' are a real phenomenon to be wary of, particularly for something like YouTube that aggressively suggests videos and autoplays by default or Facebook that injects stories into your news feed. I think what the article misses on that front is not that users don't have access to other sources, but they're passively spoon fed stories that reinforce their beliefs.

But I'm not convinced this expands to Search. Search algorithm implementations are enough of a nebulous black box that it makes for a convincing story, and Duck Duck Go has been shamelessly spinning that FUD for advertising, but the claims don't really stand up to scrutiny.

spondyl(3632) about 1 hour ago [-]

It's somewhat anecdotal but in my experience, Twitter search can be quite filter bubbly.

I've had this recording sitting around in my YouTube account which illustrates an example of this: CES 2015 which was around the time of the shit show known as Gamergate.

The event aside, it was interesting to understand how sides were inflamed based on what they were seeing thanks to Twitter effectively amplifying similar opinions, creating an echo chamber.

Here's a link to the recording:

On the left is an account that was heavily skewed on purpose, towards the more conservative group while on the right is what a regular, unlogged in user would see.

A regular user would have no idea of such an event going on while the user on the left would think that this event was engulfing the planet based on the sheer amount of noise being generated.

For users on the left, it would be near impossible to penetrate the noise, based on retweets and likes, short of a tweet being passed around for users to jeer at. A literal bubble in that sense.

The sad part is that a great number of people from both sides seemingly had a lot in common without realising it. Unfortunately they had no visibility of the 'others' short of going out of their way to meet people and talk with them one on one.

As we all know, it's easier to just label a foreign group and pretend we're objectively right.

Anyway, that's just my experience with filter bubbles anyway. Hopefully you found it interesting.

Historical Discussions: Show HN: Revealer – seed phrase visual encryption backup tool (December 14, 2018: 15 points)

Show HN: Revealer – seed phrase visual encryption backup tool

15 points 2 days ago by tiagotrs in 4001st position | Estimated reading time – 2 minutes | comments

Revealer is a backup tool that encrypts your secrets visually.

Our first release is a free software seed backup plugin for the Electrum Bitcoin Wallet. It allows you to generate both shares and print them out yourself, or to type the code of a physical Revealer and encrypt your seeds or any arbitrary secret with it.

Like other encryption methods, this gives the user two factors, allowing them to create redundancy while minimizing the risk of compromise. Unlike usual encryption methods, the secret can be decrypted optically – without a computer or any special knowledge. This makes it especially useful for cold storage and inheritance planning.

Electrum Bitcoin Wallet comes with the Revealer Plugin.

Electrum was the first wallet to introduce the seed phrase concept (2011) and is one of the most advanced, flexible and simple-to-use Bitcoin wallets available. We are proud partners of Electrum, and a percentage of our product sales goes to Electrum Technologies.

If you are running Electrum version 3.2 or newer, you can activate the Revealer Plugin at 'Tools -> Plugins'. Otherwise, download Electrum from the official website.

Create a backup directly from your Ledger hardware wallet

You can type a Revealer code into the Ledger Nano S and it will export the seed encrypted for that code. In this case it is strongly recommended that you use a Revealer that was *not* generated on the computer receiving the encrypted seed, for instance one bought from the official shop or generated on a different computer. A beta version of the app is available at:

Encrypt your seeds visually today and add a layer of security to your backups. You can order some cards from our shop, or print out your own on a transparency.

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Deploying a stateful distributed service on k8s the easy way (December 11, 2018: 14 points)

Show HN: Deploying a stateful distributed service on k8s the easy way

14 points 5 days ago by Kartveli in 3965th position | Estimated reading time – 29 minutes | comments

It has been a few months since we first released the Kubernetes operator for ArangoDB and started to brag about it. Since then, quite a few things have happened.

For example, we have done a lot of testing, fixed bugs, and by now the operator is declared to be production ready for three popular public Kubernetes offerings, namely Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS) and Pivotal Kubernetes Service (PKS) (see here for the current state of affairs).

We have developed a semi-automatic "acceptance test" to validate production readiness for each new platform, and therefore you can expect quick progress on this topic in the near future. However, experience shows that one has to test every single platform individually, in particular with respect to volume support, networking (load balancer services etc.) and access control.

Furthermore, the ArangoDB 3.4 release has finally happened and we have put particular emphasis on a seamless upgrade path from 3.3. In the context of k8s this means that performing a fully automatic rolling upgrade from 3.3.20 to 3.4 is as easy as editing the name of the Docker image used in the k8s specs.

In this article I would like to highlight again how convenient deployment and maintenance of a distributed ArangoDB cluster has become with the help of k8s, custom resources and our operator.

Getting started

The first thing to sort out is access to a Kubernetes cluster with the right credentials. With the above-mentioned cloud platforms, this is essentially just a few clicks away, but one needs administrator rights for a few steps, for example for the deployment of the custom resource definitions. Therefore, we have put together detailed tutorials for the steps to set up the k8s cluster and authentication for individual cloud platforms; see this page for details. So far we have GKE, EKS and AKS (Azure Kubernetes Service), but we are going to extend this to other platforms soon.

Once your kubectl tool is installed and credentials are sorted, you can essentially just do

kubectl apply -f

kubectl apply -f

to deploy the custom resource definitions as well as the operator itself. Note that these particular URLs contain the version which is current at the time of this writing, but the latest command is always available on this page, where you also find instructions to set up our storage operator and the one for DC2DC replication.

Furthermore, note that the custom resource definitions are global to the k8s cluster, whereas our deployment operator is deployed into a particular namespace.

You can also use helm, if you have installed the k8s package manager, instructions for this can be found here.

You can tell if the deployment operator is up and running by looking at the pods in the namespace to which you deployed it:

% kubectl get pod

NAME                                          READY   STATUS    RESTARTS   AGE
arango-deployment-operator-7564c5d8cb-8bgs8 1/1 Running 0 1m

arango-deployment-operator-7564c5d8cb-lgsqc 1/1 Running 0 1m

There are two copies for fault-tolerance, they agree on who is the current leader via the API of k8s.

Example deployment

Deployment of an ArangoDB cluster is now fairly straightforward; the minimal example uses this YAML file:

apiVersion: ''
kind: 'ArangoDeployment'
metadata:
  name: 'my-arangodb-cluster'
spec:
  mode: Cluster
  image: 'arangodb/arangodb:3.3.20'

You essentially just have to specify the custom resource type, the fact that you would like to have a cluster and the Docker image name.

With this in the file cluster.yaml you just do:

% kubectl apply -f cluster.yaml
created

I have intentionally used 3.3.20 since I will demonstrate the seamless upgrade to 3.4 in just a few minutes.

After usually less than a minute you see the cluster that has been deployed:

% kubectl get pod

NAME                                          READY   STATUS    RESTARTS   AGE
arango-deployment-operator-7564c5d8cb-8bgs8 1/1 Running 0 7m

arango-deployment-operator-7564c5d8cb-lgsqc 1/1 Running 0 7m

my-arangodb-cluster-agnt-n7aus7hc-d02e07 1/1 Running 0 51s

my-arangodb-cluster-agnt-svdqknuq-d02e07 1/1 Running 0 50s

my-arangodb-cluster-agnt-zq0i9hsv-d02e07 1/1 Running 0 48s

my-arangodb-cluster-crdn-n67yuq6f-d02e07 1/1 Running 0 44s

my-arangodb-cluster-crdn-qnbkdc0y-d02e07 1/1 Running 0 42s

my-arangodb-cluster-crdn-varovat2-d02e07 1/1 Running 0 41s

my-arangodb-cluster-prmr-5hjg9ggs-d02e07 1/1 Running 0 47s

my-arangodb-cluster-prmr-ikboimi8-d02e07 1/1 Running 0 46s

my-arangodb-cluster-prmr-pxuxgocl-d02e07 1/1 Running 0 45s

What you see here is a cluster with the following components:

  • three "agents" (with agnt in the pod name), which are the central, RAFT-based key/value store which holds our cluster configuration and handles supervision and automatic fail-over,
  • three "dbservers" (with prmr in the pod name), which are the instances which actually hold your data,
  • three "coordinator" (with crdn in the pod name), which take the client request, handle query planning and distribution.

Additionally, the operator has set up a load balancer to sit in front of the coordinators for us. Since this is done on GKE, the load balancer is by default for external access using a public IP address:

% kubectl get service


arango-deployment-operator ClusterIP <none> 8528/TCP 53m

kubernetes ClusterIP <none> 443/TCP 1h

my-arangodb-cluster ClusterIP <none> 8529/TCP 46m

my-arangodb-cluster-ea LoadBalancer 8529:31194/TCP 46m

my-arangodb-cluster-int ClusterIP None <none> 8529/TCP 46m

The line starting with my-arangodb-cluster-ea is the external access service, the type LoadBalancer is the default if this is possible on the k8s platform being used. Since by default, everything is deployed using TLS and authentication, one can now point a browser to for ArangoDB's web UI:

Insecure connection warning in the browser

This is because the operator uses self-signed certificates by default, so we have to accept an exception in the browser now. Once this is done (potentially up to 3 times, because there are 3 coordinators behind the load balancer), you get the actual login screen:

Login screen

Choose the user name root with an empty password, select the _system database, and change the password immediately!

Indeed, if we click 'NODES' in the left navigation bar, we see that we have a cluster with three coordinators and three dbservers:

Cluster overview

We have added quite some explanations in this section, but at the end of the day, all that was needed was a 7-line YAML file and a single kubectl command.


Scaling

After the initial deployment, scaling your ArangoDB cluster is the next task we want to demonstrate. By far the easiest way to do this is by simply clicking a button in the above overview screen, and indeed, when I add a dbserver, it takes a few seconds and a new one shows up:

Scaled up cluster

The same is visible in the list of pods:

% kubectl get pod

NAME                                          READY   STATUS    RESTARTS   AGE
arango-deployment-operator-7564c5d8cb-8bgs8 1/1 Running 0 1h

arango-deployment-operator-7564c5d8cb-lgsqc 1/1 Running 0 1h

my-arangodb-cluster-agnt-n7aus7hc-d02e07 1/1 Running 0 1h

my-arangodb-cluster-agnt-svdqknuq-d02e07 1/1 Running 0 1h

my-arangodb-cluster-agnt-zq0i9hsv-d02e07 1/1 Running 0 1h

my-arangodb-cluster-crdn-n67yuq6f-d02e07 1/1 Running 0 1h

my-arangodb-cluster-crdn-qnbkdc0y-d02e07 1/1 Running 0 1h

my-arangodb-cluster-crdn-varovat2-d02e07 1/1 Running 0 1h

my-arangodb-cluster-prmr-5f4ughlx-0e03d5 1/1 Running 0 28s

my-arangodb-cluster-prmr-5hjg9ggs-d02e07 1/1 Running 0 1h

my-arangodb-cluster-prmr-ikboimi8-d02e07 1/1 Running 0 1h

my-arangodb-cluster-prmr-pxuxgocl-d02e07 1/1 Running 0 1h

Note the one with AGE 28s: this has been deployed by the operator after it spotted that I had clicked + in the ArangoDB UI for the dbservers. I could have achieved the same by simply editing the deployment specs with kubectl; a simple

kubectl edit arango my-arangodb-cluster

brings up my favorite editor and all the specs in all their glory:

Editing the specs

In this screenshot you see me editing the number of dbservers from 4 back to 3. Yes! Indeed, it is possible to scale down the number of dbservers as well. The operator will first clean out dbservers gracefully, so that the data is moved over to the remaining ones. In the end, it will shut down the pod and remove the dbserver from the cluster automatically.

The same works for the coordinators. You can scale the coordinator layer as well as the dbserver layer independently. If you need more (or less) storage, get more (or less) dbservers. If you need more (or less) CPU power to optimize and coordinate queries, get more (or less) coordinators.
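In spec terms, scaling either layer is just a matter of editing its count field and re-applying. A sketch based on the ArangoDeployment custom resource (the counts shown are illustrative):

```yaml
# Illustrative ArangoDeployment fragment: the operator reconciles the
# running cluster toward these counts whenever they change.
spec:
  mode: Cluster
  dbservers:
    count: 4        # storage layer: scaled up from the default 3
  coordinators:
    count: 3        # query layer, scaled independently
```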

External access

Obviously, the situation with the self-signed TLS certificates is not entirely satisfactory. If you want to have your own CA whose certificate is once and for all stored in your browsers and accepted, then you can tell this to the operator, such that it will sign all certificates used by your ArangoDB cluster with your own CA key. This establishes a valid chain of trust, so you get rid of the security warnings.

The right place to store such secret keys are Kubernetes secrets. All you have to do is to create a secret with this command:

kubectl create secret generic my-test-ca \
  --from-file=ca.crt=/path/to/ca.crt \
  --from-file=ca.key=/path/to/ca.key
where /path/to/ca.crt is the CA certificate and /path/to/ca.key is the private key. Then you have a k8s-secret called my-test-ca.

Furthermore, we must use the following cluster.yaml file:

apiVersion: ''
kind: 'ArangoDeployment'
metadata:
  name: 'my-arangodb-cluster'
spec:
  mode: Cluster
  image: 'arangodb/arangodb:3.3.20'
  tls:
    caSecretName: my-test-ca
    altNames: [ '' ]

This tells the operator two things: first, it should use the secret named my-test-ca to sign all TLS certificates for all ArangoDB instances; and second, these certificates will say that they are for a server with the given DNS name (you would obviously put a name in your own domain here).

Note that we have a chicken-and-egg problem here. The external IP address will only be known during the deployment, but the signed certificates are already needed at that time. We solve this with the DNS name, at the cost that we need to be able to change the IP address to which this particular DNS name resolves once we know it. So I put the IP address of the load balancer in the DNS server and then I can actually point my browser to and get the UI without any security alert, since I have already registered the CA certificate with my browser:

Secure TLS deployment

Please note that for this example I had to get rid of my cluster and deploy it anew, since we do not yet support exchanging the CA certificate in a running deployment.

Rolling upgrades

Finally, I would like to take the advent of ArangoDB 3.4 as an opportunity to mention rolling upgrades. All I have to do to upgrade my already running cluster to the new version is to edit the line with the Docker image name. Thus, I simply change cluster.yaml into this:

apiVersion: ''
kind: 'ArangoDeployment'
metadata:
  name: 'my-arangodb-cluster'
spec:
  mode: Cluster
  #image: 'arangodb/arangodb:3.3.20'
  image: 'arangodb/arangodb:3.4.0'
  tls:
    caSecretName: team-clifton-test-ca
    altNames: [ '' ]

Note that I have left the old name as a comment to highlight the difference. I deploy this to k8s by doing:

kubectl apply -f cluster.yaml

The operator will now do the following: it will launch a test balloon, which is a container running the new image. From this the operator can read off the SHA256 of the Docker image as well as the version of ArangoDB. It then notices that this is 3.4.0 instead of 3.3.20 and automatically knows that this is an upgrade from one minor release version to the next. Therefore, it will automatically perform a rolling upgrade, starting with the agents, proceeding with the dbservers and then the coordinators. It will handle them one by one, run the appropriate upgrade procedure for each, and then redeploy a new pod with the new Docker image.

As long as you are using synchronous replication and a replication factor of at least 2 this works without service interruption. Here is an intermediate display of kubectl get pod during the procedure:

% kubectl get pod

NAME                                          READY   STATUS            RESTARTS   AGE
arango-deployment-operator-7564c5d8cb-8bgs8 1/1 Running 0 2h

arango-deployment-operator-7564c5d8cb-lgsqc 1/1 Running 0 2h

my-arangodb-cluster-agnt-kxm1nxqg-789e15 1/1 Running 0 37s

my-arangodb-cluster-agnt-uvi3imhq-789e15 0/1 PodInitializing 0 10s

my-arangodb-cluster-agnt-wpek6bdp-d7b7cb 1/1 Running 0 43m

my-arangodb-cluster-crdn-oxieqvz2-d7b7cb 1/1 Running 0 43m

my-arangodb-cluster-crdn-s7bovzfr-d7b7cb 1/1 Running 0 43m

my-arangodb-cluster-crdn-vrvlubyv-d7b7cb 1/1 Running 0 43m

my-arangodb-cluster-prmr-dsibgeeg-d7b7cb 1/1 Running 0 43m

my-arangodb-cluster-prmr-rjqumkxb-d7b7cb 1/1 Running 0 43m

my-arangodb-cluster-prmr-y9eslmij-d7b7cb 1/1 Running 0 43m

You can see that one of the agents was restarted 37 seconds ago, whereas the rest have been up for 43 minutes. Another agent has just been redeployed and the corresponding pod is still being initialized. And indeed, the UI now shows a cluster with Version 3.4:

Upgraded cluster running Version 3.4

Outlook and closing words

Kubernetes and our operator are here to stay. We expect that a lot of customers will give k8s a try and will eventually use it as the primary way to deploy ArangoDB. Therefore, we are going to put considerable effort into the development of the operator. We will add features like easy backup and restore, and improve convenience for monitoring and other topics that are relevant for production. Furthermore, we will test the operator on more platforms and eventually declare it production ready on them once we are really convinced.

If you need to know more details, we have you covered with more documentation: our manual has a chapter which serves as a tutorial. Furthermore, reference information about deployment on k8s is found in its own chapter. The Kubernetes operator is fully open source and the code is on GitHub, which has its own copy of the documentation. The contents of this article are also available as a video on our YouTube channel.

Finally, if you are new to ArangoDB, give the new Version 3.4 a spin or read about its features in our release article. If you like what you see, tell others and star us on GitHub. If you run into problems, let us know via GitHub issues or ask questions on StackOverflow.

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Software Stickers Co – Simple Software Funding (December 15, 2018: 11 points)
Are you a repo maintainer and want to sell stickers? (December 10, 2018: 2 points)

Show HN: Software Stickers Co – Simple Software Funding

12 points 1 day ago by ____Sash---701_ in 3585th position | Estimated reading time – 1 minutes | comments

We want to help open source projects by taking away the costs and hassle of running an online store, and by providing a transparent and simple way to support continued development. All proceeds are donated every month to the respective maintainers, mainly via Open Collective, Patreon and PayPal. Software Stickers Co takes a 10% fee for operating costs such as hosting, handling, payment provider fees and Shopify; the remaining 90% is donated to open source! This works out to be much more cost effective, and our hope is to encourage all repos to try selling stickers! We are language, platform, library and even module agnostic, which means that no matter how many stars your repo has or how 'unlikely' it is that someone out there may want your sticker, we recommend trying to get featured. All repo owners receive a monthly report of the number of stickers sold and the amount donated, including receipts.

As a developer, this is a great way to support your favourite projects and get some swag along the way to prove it! We have just launched this December 2018, and since it's the festive season with Chanukah and Christmas coming up and then New Year's, everything is 20% off until January 2019 :) Early Bird Special!

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: DPAGE – publish webpages on the decentralized internet (December 12, 2018: 10 points)

Show HN: DPAGE – publish webpages on the decentralized internet

10 points 4 days ago by vrepsys in 3622nd position | | comments

Publish web content on the decentralized internet

Combine videos, images and social media posts from the web to create webpages.

Keep your pages private and encrypted or make them public.

Login with Blockstack

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Pown Proxy – MITM web proxy with text ui (December 15, 2018: 9 points)

Show HN: Pown Proxy – MITM web proxy with text ui

9 points 2 days ago by _pdp_ in 3604th position | Estimated reading time – 3 minutes | comments

Pown Proxy

Pown Proxy is a versatile web debugging proxy. You can use the proxy to monitor, intercept and investigate web traffic in active or passive mode.


If installed globally as part of Pown.js, invoke it like this:

$ pown proxy

Otherwise install this module from the root of your project:

$ npm install @pown/proxy --save

Once done, invoke pown proxy like this:

$ ./node_modules/.bin/pown-cli proxy


pown proxy [options]
HTTP proxy
  --version                 Show version number                        [boolean]
  --modules, -m             Load modules                                [string]
  --help                    Show help                                  [boolean]
  --log, -l                 Log requests and responses[boolean] [default: false]
  --host, -h                Host to listen to      [string] [default: '']
  --port, -p                Port to listen to           [number] [default: 8080]
  --text, -t                Start with text ui        [boolean] [default: false]
  --ws-client, -c           Connect to web socket         [string] [default: '']
  --ws-server, -s           Forward on web socket     [boolean] [default: false]
  --ws-host                 Web socket server host [string] [default: '']
  --ws-port                 Web socket server port      [number] [default: 9090]
  --ws-app                  Open app
                                [string] [choices: '', 'httpview'] [default: '']
  --certs-dir               Directory for the certificates
                              [string] [default: '/Users/pdp/.pown/proxy/certs']
  --server-key-length       Default key length for certificates
                                                        [number] [default: 1024]
  --default-ca-common-name  The CA common name
                                             [string] [default: 'Pown.js Proxy']

Text Mode

Pown Proxy comes with an intriguing text-based user interface, available via the -t flag. The interface resembles popular security tools such as Burp, ZAP and SecApps' HTTPView, but utilizes only console capabilities such as ANSI escape sequences.

Web Sockets Mode

Pown Proxy provides a handy WebSocket-based API, backed by a simple binary protocol to interface with other tools, thus allowing it to be used as a backend proxy service. This technique is used to power tools such as SecApps' HTTPView.

The WebSocket server can be accessed via the -s and --ws-server flags. You can also connect to existing servers with the -c and --ws-client flags. This opens some interesting use-cases. For example you could start a proxy server in headless-mode (default) and connect to it with the text mode client.
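Concretely, that pairing might look like the following; the flags come from the help text above, while the ws:// URL format passed to --ws-client is an assumption:

```
# terminal 1: headless proxy that also exposes the WebSocket server (port 9090 by default)
$ pown-cli proxy --ws-server

# terminal 2: a second instance in text mode, attached as a WebSocket client
$ pown-cli proxy --text --ws-client ws://localhost:9090
```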


While Pown Proxy is a great tool it still requires some work to be truly amazing. In no particular order here is the current wish list:

  • Extension system so that additional features can be added with the help of user-supplied modules.
  • Active interception feature (already possible but no UI)
  • Request reply feature (already possible but no UI)


This tool would not be possible without the awesome open source community that exists around Node.js. Much of this work is heavily inspired by, and in many cases directly borrowed from, SecApps' HTTPView.

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Meetup Utils – Create ready to print posters and indicators for meetups (December 13, 2018: 9 points)

Show HN: Meetup Utils – Create ready to print posters and indicators for meetups

9 points 3 days ago by eg312 in 3875th position | Estimated reading time – 2 minutes | comments


No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Stock Market Forecast Based on Most Similar Historical Patterns (December 10, 2018: 8 points)

Show HN: Stock Market Forecast Based on Most Similar Historical Patterns

8 points 7 days ago by zechs in 4004th position | | comments

Mark Twain's adage couldn't be more true when it comes to the stock market. The chart above is a carefully constructed prediction of where the stock market will be over the next 3 months, based on aggregating together historical periods of time that most closely match the market's observed behavior over the last 12 months, and then peering into the future from those past cases to estimate what may be in store for us this time around.

The charts below show the cases from which these estimates are drawn, along with a flavor of what was playing on the radio at these points in history that so closely match where the market is today.

No comments posted yet: Link to HN comments page

Show HN: Hardware-agnostic library for near-term quantum machine learning

7 points about 6 hours ago by infinitewalk in 10000th position | Estimated reading time – 4 minutes | comments

PennyLane is a cross-platform Python library for quantum machine learning, automatic differentiation, and optimization of hybrid quantum-classical computations.


  • Follow the gradient. Built-in automatic differentiation of quantum circuits
  • Best of both worlds. Support for hybrid quantum & classical models
  • Batteries included. Provides optimization and machine learning tools
  • Device independent. The same quantum circuit model can be run on different backends
  • Large plugin ecosystem. Install plugins to run your computational circuits on more devices, including Strawberry Fields, ProjectQ, and Qiskit

Available plugins

  • PennyLane-SF: Supports integration with Strawberry Fields, a full-stack Python library for simulating continuous variable (CV) quantum optical circuits.
  • PennyLane-PQ: Supports integration with ProjectQ, an open-source quantum computation framework that supports the IBM quantum experience.
  • PennyLane-qiskit: Supports integration with Qiskit Terra, an open-source quantum computation framework by IBM. Provides device support for the Qiskit Aer quantum simulators, and IBM QX hardware devices.


PennyLane requires Python version 3.5 and above. Installation of PennyLane, as well as all dependencies, can be done using pip:

$ python -m pip install pennylane

Getting started

For getting started with PennyLane, check out our qubit rotation, Gaussian transformation, hybrid computation, and other machine learning tutorials.

Our documentation is also a great starting point to familiarize yourself with the hybrid classical-quantum machine learning approach, and explore the available optimization tools provided by PennyLane. Play around with the numerous devices and plugins available for running your hybrid optimizations — these include the IBM QX4 quantum chip, provided by the PennyLane-PQ plugin.

Finally, detailed documentation on the PennyLane API is provided, for full details on available quantum operations and expectations, and detailed guides on how to write your own PennyLane-compatible quantum device.

Contributing to PennyLane

We welcome contributions - simply fork the PennyLane repository, and then make a pull request containing your contribution. All contributors to PennyLane will be listed as authors on the releases. All users who contribute significantly to the code (new plugins, new functionality, etc.) will be listed on the PennyLane arXiv paper.

We also encourage bug reports, suggestions for new features and enhancements, and even links to cool projects or applications built on PennyLane.

Don't forget to submit your PennyLane contribution to the Xanadu Quantum Software Competition, with prizes of up to CAD$1000 on offer.

See our contributions page for more details.


Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, and Nathan Killoran.

If you are doing research using PennyLane, please cite our paper:

Ville Bergholm, Josh Izaac, Maria Schuld, Christian Gogolin, and Nathan Killoran. PennyLane: Automatic differentiation of hybrid quantum-classical computations. 2018. arXiv:1811.04968


If you are having issues, please let us know by posting the issue on our Github issue tracker.

We also have a PennyLane discussion forum - come join the discussion and chat with our PennyLane team.


PennyLane is free and open source, released under the Apache License, Version 2.0.

All Comments: [-]

infinitewalk(10000) about 6 hours ago [-]

I'm one of the developers on PennyLane, a cross-platform Python library for quantum machine learning (QML), automatic differentiation, and optimization of hybrid quantum-classical computations.

For a while now, QML has been getting a lot of hype --- at the Quantum2Business conference the other day, a quote that made the rounds was 'QML: most overhyped and underestimated field at the same time' (attributed to Iordanis Kerenidis, I believe).

However, current research has been showing a lot of promise, especially as an application for near-term quantum devices, that doesn't require an exceptionally large number of fault tolerant qubits.

At the moment, the main approach to QML has been the so-called 'variational circuit' approach, where a parameterised quantum circuit is evaluated on quantum hardware, with optimization/machine learning then performed by an external classical ML library such as TensorFlow or PyTorch. However, this is not optimal; it is better to take advantage of the quantum hardware to also perform the optimization.

This was our goal with PennyLane. Before we could even start designing the library, we needed to know how to analytically evaluate gradients on quantum circuits; so we performed the research, discovered some cool analytic tricks, and published this separately [1]. This forms the backbone of PennyLane - the exact same quantum circuits used in the machine learning model are also used to calculate the gradient during backpropagation. As a result, you can construct arbitrarily complex classical-quantum models, with both the quantum and classical parts natively 'backpropagation aware'.

Even more ambitiously, we wanted an environment where you can build a hybrid classical-quantum computational model using not only different quantum hardware devices at once, but devices from different hardware vendors. By taking advantage of all near-term quantum hardware currently available - even devices using fundamentally different models, such as qubits vs. photonic modes - you can build significantly more powerful computations. Currently, we have plugins available for ProjectQ, Strawberry Fields, and Qiskit, with more to come.

Feel free to ask any questions you might have on PennyLane, the state of QML, and quantum computation in general!

[1] Evaluating analytic gradients on quantum hardware

[2] Check out the PennyLane documentation for the nitty-gritty on our analytic gradient approach to QML:

p1esk(2738) about 5 hours ago [-]

Is there any actual QC hardware that can run these algorithms? Does it even make sense to say that you can 'run' code on a quantum computer?

I don't follow this field much, but I remember there was a company called D-Wave, and people saying their product was not a 'real' quantum computer. Has anything changed since?

Historical Discussions: Show HN: Build a Slack Clone with WebRTC Video Calling (December 12, 2018: 7 points)

Show HN: Build a Slack Clone with WebRTC Video Calling

7 points 4 days ago by ajb413 in 3786th position | Estimated reading time – 6 minutes | comments

WebRTC Video Chat Plugin for ChatEngine

Adds the ability to do WebRTC audio/video with ChatEngine using direct events for signaling

Quick Start

  1. Have a ChatEngine server running already, instantiate a client and connect it
const ChatEngine = ChatEngineCore.create({
    publishKey: 'pub-key-here',
    subscribeKey: 'sub-key-here'
});

ChatEngine.on('$.ready', (data) => {
    // Set up ChatEngine and WebRTC config options
    // ...
});

const webRTC = ChatEngineCore.plugin['chat-engine-webrtc'];
  1. Set WebRTC configuration and event handlers for WebRTC related events
const rtcConfig = {
    iceServers: [] // See the browser WebRTC API docs for ICE server options
};

let localStream;
getLocalStream().then((myStream) => { localStream = myStream; });

const onPeerStream = (webRTCTrackEvent) => {
    // Set media stream to HTML video node
};

const onIncomingCall = (user, callResponseCallback) => {
    // Give user a chance to accept/reject
    const acceptedCall = true; // true to accept a call
    callResponseCallback({ acceptedCall });
};

const onCallResponse = (acceptedCall) => {
    if (acceptedCall) {
        // Show video UI, etc.
    }
};

const onDisconnect = () => {
    // Hide your video UI, etc.
};
  1. Set configuration and attach this plugin to the Me object.
const config = {
    rtcConfig,             // An RTCConfiguration dictionary from the browser WebRTC API
    ignoreNonTurn: false,  // Only accept TURN candidates when this is true
    myStream: localStream, // Local MediaStream object from the browser Media Streams API
    onPeerStream,          // Event handler
    onIncomingCall,        // Event handler
    onCallResponse,        // Event handler
    onDisconnect           // Event handler
};

const webRTC = ChatEngineCore.plugin['chat-engine-webrtc'];
  1. Send a call request to another user
const userToCall = aChatEngineUserObject;
// Pass userToCall to the plugin's call method, with a
// 2nd chance to set configuration options (see object in step 2)

Frequently Asked Questions (FAQ) about the WebRTC Plugin

What is WebRTC?

WebRTC is a free and open source project that enables web browsers and mobile devices to provide a simple real-time communication API. Please read this PubNub blog to learn more about WebRTC and how to implement the code in this repository.

What is ChatEngine?

PubNub ChatEngine is an object oriented event emitter based framework for building chat applications in Javascript. For more information on ChatEngine, and what its plugins are for, go to the PubNub website.

What is PubNub? Why is PubNub relevant to WebRTC?

PubNub is a global Data Stream Network (DSN) and realtime network-as-a-service. PubNub's primary product is a realtime publish/subscribe messaging API built on a global data stream network which is made up of a replicated network with multiple points of presence around the world.

PubNub is a low cost, easy to use, infrastructure API that can be implemented rapidly as a WebRTC signaling service. The signaling service is responsible for delivering messages to WebRTC peer clients. See the next question for the specific signals that PubNub's publish/subscribe API handles.

Does ChatEngine stream audio or video data?

No. ChatEngine pairs very well with WebRTC as a signaling service. This means that PubNub signals events from client to client using the ChatEngine #direct events. These events include:

  • I, User A, would like to call you, User B
  • User A is currently trying to call you, User B
  • I, User B, accept your call User A
  • I, User B, reject your call User A
  • I, User B, would like to end our call User A
  • I, User A, would like to end our call User B
  • Text instant messaging like in Slack, Google Hangouts, Skype, Facebook Messenger, etc.

Is this repository's plugin officially part of ChatEngine?

No. It is an open source project that is community supported. If you want to report a bug, do so on the GitHub Issues page.

Can I make a group call with more than 2 participants?

Group calling is possible to develop with WebRTC and ChatEngine, however, the current ChatEngine WebRTC plugin can connect only 2 users in a private call. The community may develop this feature in the future but there are no plans for development to date.

I found a bug in the plugin. Where do I report it?

The ChatEngine WebRTC plugin is an open source, community supported project. This means that the best place to report bugs is on the GitHub Issues page for the code repository. The community will tackle bug fixes at will, so there is no guarantee that a fix will be made. If you wish to provide a code fix, fork the GitHub repository to your GitHub account, push your fixes, and make a pull request (the process is documented on GitHub).

All Comments: [-]

qwerty456127(3980) 4 days ago [-]

Make your Slack clone more HN-like (I mean threaded discussions) and it'll be more useful than the original.

ajb413(3786) 3 days ago [-]

That's a pretty good idea. Creating that is entirely possible with ChatEngine. The example chat app in the repository mainly showcases the WebRTC functionality:

Historical Discussions: Show HN: A WebGL EWA Surface Splatting Renderer (December 15, 2018: 7 points)

Show HN: A WebGL EWA Surface Splatting Renderer

7 points 1 day ago by Twinklebear in 3926th position | Estimated reading time – 1 minutes | comments


Mouse Controls: Left-click + drag to rotate, scroll to zoom, right-click + drag to pan. Touch Controls: One finger drag to rotate, pinch to zoom, two finger drag to pan. Number of splats:

Splat Radius

Loading Dataset


This is a WebGL implementation of the papers Object Space EWA Surface Splatting: A Hardware Accelerated Approach to High Quality Point Rendering by Ren, Pfister and Zwicker, and High-Quality Point-Based Rendering on Modern GPUs by Botsch and Kobbelt, with a few shortcuts. Get the code on GitHub!

The Dinosaur, Man, Santa and Igea datasets are from Pointshop3D, the Sankt Johann is from the University of Stuttgart. The Warnock Engineering Building dataset is from the State of Utah Wasatch Front LiDAR dataset.

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: JSON.equals in Java to compare two JSON's (December 10, 2018: 6 points)

Show HN: JSON.equals in Java to compare two JSON's

6 points 7 days ago by sanketsarang in 3991st position | Estimated reading time – 2 minutes | comments


The function returns true if the two JSON's are equal and false if they are unequal. The parameter can be a JSONObject of type org.json.JSONObject or a JSON String.

The JSON utility is available as part of BlobCity Commons

Download JAR | View Source on GitHub

com.blobcity.json.JSON.areEqual("{}", "{}"); // -> true
JSON.areEqual("{\"a\": \"1\"}", "{}"); // -> false

The function checks the complete JSON: every element of the JSON must be equal for the equals check to pass. The following gives areEqual => false

{
  "name": "Tom",
  "country": "USA"
}
{
  "name": "Tom",
  "country": "US"
}

Deep checks are also supported: nested JSON's must be equal for the overall JSON's to be equal, as the examples below show.

areEqual => true

{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}
{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}

areEqual => false

{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}
{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "My lane"
  }
}

areEqual => false

{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA"
  }
}
{
  "name": "Tom",
  "country": "USA",
  "address": {
    "line1": "Lane 1, USA",
    "zip": "19700"
  }
}

Array comparisons are also supported; array elements must be in the same order in both JSON's for the equals check to pass.

areEqual => true

{
  "name": "Tom",
  "roles": ["admin", "user"]
}
{
  "name": "Tom",
  "roles": ["admin", "user"]
}

areEqual => false

{
  "name": "Tom",
  "roles": ["user", "admin"]
}
{
  "name": "Tom",
  "roles": ["admin", "user"]
}
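The semantics described above (object key order ignored, array order significant) can be illustrated with Python's json module; this is a sketch of the behaviour, not the BlobCity implementation:

```python
import json

def are_equal(a, b):
    # Parsed objects compare structurally: dict key order is ignored,
    # while list (array) order is significant.
    return json.loads(a) == json.loads(b)

# same keys in a different order -> equal
assert are_equal('{"name": "Tom", "country": "USA"}',
                 '{"country": "USA", "name": "Tom"}')

# same array elements in a different order -> not equal
assert not are_equal('{"roles": ["user", "admin"]}',
                     '{"roles": ["admin", "user"]}')
```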


All Comments: [-]

jermo(2482) 7 days ago [-]

Thought I'd share something I discovered about Json comparison. This simple line also works:

  return jsonObject1.toMap().equals(jsonObject2.toMap());
because JSONObject uses a HashMap internally, which supports equals(). The downside is that it's slower, because toMap() creates a copy.

Btw, downloaded java-commons as Maven dependency from JitPack:

sanketsarang(3991) 7 days ago [-]

Yes this also works. We were using this for a while until we landed up with very large JSON's. .toMap() is very expensive and takes significantly longer to execute on large JSON's.

mscasts(10000) 7 days ago [-]

Can't one just simply check if the strings are equal?

sanketsarang(3991) 7 days ago [-]

No, a string compare does not work: the order of elements may differ between the two JSON's, so the strings may differ even when the content is the same. The utility will report both JSON's as equal even if the order of elements is different.

Historical Discussions: Show HN: Kuzushiji-MNIST (December 12, 2018: 6 points)

Show HN: Kuzushiji-MNIST

6 points 4 days ago by hardmaru in 932nd position | Estimated reading time – 4 minutes | comments


Read the paper to learn more about Kuzushiji, the datasets and our motivations for making them!

Kuzushiji-MNIST is a drop-in replacement for the MNIST dataset (28x28 grayscale, 70,000 images), provided in the original MNIST format as well as a NumPy format. Since MNIST restricts us to 10 classes, we chose one character to represent each of the 10 rows of Hiragana when creating Kuzushiji-MNIST.

Kuzushiji-49, as the name suggests, has 49 classes (28x28 grayscale, 270,912 images), is a much larger, but imbalanced dataset containing 48 Hiragana characters and one Hiragana iteration mark.

Kuzushiji-Kanji is an imbalanced dataset of total 3832 Kanji characters (64x64 grayscale, 140,426 images), ranging from 1,766 examples to only a single example per class.

The 10 classes of Kuzushiji-MNIST, with the first column showing each character's modern hiragana counterpart.

Get the data

You can run the provided download script with Python to interactively select and download any of these datasets!


Kuzushiji-MNIST contains 70,000 28x28 grayscale images spanning 10 classes (one from each column of hiragana), and is perfectly balanced like the original MNIST dataset (6k/1k train/test for each class).

Mapping from class indices to characters: kmnist_classmap.csv (1KB)

We recommend using standard top-1 accuracy on the test set for evaluating on Kuzushiji-MNIST.

Which format do I download?

If you're looking for a drop-in replacement for the MNIST or Fashion-MNIST dataset (for tools that currently work with these datasets), download the data in MNIST format.

Otherwise, it's recommended to download in NumPy format, which can be loaded into an array as easily as: arr = np.load(filename)['arr_0'].
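As a sanity check of that loading pattern without the real download, here is a sketch using a synthetic .npz file of the same shape (the filename is made up):

```python
import numpy as np

# Stand-in for the Kuzushiji-MNIST download: 70,000 28x28 grayscale images.
np.savez_compressed('kmnist-demo.npz', np.zeros((70000, 28, 28), dtype=np.uint8))

# savez stores an unnamed array under the key 'arr_0'.
arr = np.load('kmnist-demo.npz')['arr_0']
print(arr.shape)  # (70000, 28, 28)
```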


Kuzushiji-49 contains 270,912 images spanning 49 classes, and is an extension of the Kuzushiji-MNIST dataset.

Mapping from class indices to characters: k49_classmap.csv (1KB)

We recommend using balanced accuracy on the test set for evaluating on Kuzushiji-49.


Kuzushiji-Kanji is a large and highly imbalanced 64x64 dataset of 3832 Kanji characters, containing 140,426 images of both common and rare characters.

The full dataset is available for download here (310MB). We plan to release a train/test split version as a low-shot learning dataset very soon.

Benchmarks & Results

Have more results to add to the table? Feel free to submit an issue or pull request!

For MNIST and Kuzushiji-MNIST we use a standard accuracy metric, while Kuzushiji-49 is evaluated using balanced accuracy (so that all classes have equal weight).
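Balanced accuracy is simply the mean of the per-class recalls, so every class carries equal weight regardless of its size; a small sketch:

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    # Mean per-class recall: each class contributes equally,
    # however many examples it has.
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

# Toy example: the rare class 1 is always missed.
y_true = np.array([0, 0, 0, 0, 1])
y_pred = np.array([0, 0, 0, 0, 0])
print(balanced_accuracy(y_true, y_pred))  # 0.5, despite 80% plain accuracy
```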


Both the dataset itself and the contents of this repo are licensed under a permissive CC BY-SA 4.0 license, except where specified within some benchmark scripts. CC BY-SA 4.0 requires attribution, and we suggest using the following attribution for the KMNIST dataset.

'KMNIST Dataset' (created by CODH), adapted from 'Kuzushiji Dataset' (created by NIJL and others), doi:10.20676/00000341

Citing Kuzushiji-MNIST

If you use any of the Kuzushiji datasets in your work, we would appreciate a reference to our paper:

Deep Learning for Classical Japanese Literature. Tarin Clanuwat et al. arXiv:1812.01718

  author       = {Tarin Clanuwat and Mikel Bober-Irizar and Asanobu Kitamoto and Alex Lamb and Kazuaki Yamamoto and David Ha},
  title        = {Deep Learning for Classical Japanese Literature},
  date         = {2018-12-03},
  year         = {2018},
  eprintclass  = {cs.CV},
  eprinttype   = {arXiv},
  eprint       = {cs.CV/1812.01718},

Related datasets

Kuzushiji Dataset offers 3,999 character types and 403,242 character images with CSV files containing the bounding box of characters on the original page images. At this moment, the description of the dataset is available only in Japanese, but the English version will be available soon.

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Stig – A CLI tool for searching GitHub from the terminal (December 11, 2018: 6 points)

Show HN: Stig – A CLI tool for searching GitHub from the terminal

6 points 6 days ago by octobanana in 3999th position | Estimated reading time – 5 minutes | comments


A CLI tool for searching GitHub from the terminal.


Stig is a CLI tool for searching GitHub from the terminal. With the ability to sort and filter results, Stig makes it easy to find what you're looking for. Stig can also print a repository's readme to stdout, so you can quickly learn more about a project.


  • search GitHub from the terminal
  • use flags and options to filter the results
  • print the readme of a specified repo to stdout

Result Breakdown

A typical search result item will look like the following:

- owner
|          - repo
|          |    - stars
|          |    |   - forks
|          |    |   |  - issues
|          |    |   |  |  - language
|          |    |   |  |  |     - last updated
|          |    |   |  |  |     |  - summary
|          |    |   |  |  |     |  |
octobanana/stig *12 <3 !4 [C++] 5h
  A CLI tool for searching GitHub from the terminal.

A forked repository will show a > symbol, instead of the default < symbol.

The last updated symbols are mapped to the following:

s : seconds
m : minutes
h : hours
D : days
W : weeks
M : months
Y : years

At the end of the results, a summary will be shown:

Summary Breakdown

- current results
|   - total results
|   |            - current page
|   |            | - total pages
|   |            | |          - requests remaining
|   |            | |          | - requests limit
|   |            | |          | |
1-5/81 results | 1/17 pages | 9/10 limit

GitHub Token

By default, the GitHub API allows up to 10 search queries per minute. To extend the limit to 30 search queries per minute, you can pass a GitHub token with the --token option.

For more information regarding creating a new personal access token, refer to the following GitHub help article.

GitHub Enterprise Compatibility

It is possible to use a custom API endpoint for compatibility with GitHub Enterprise installations using the --host option. The host should be formatted as subdomain.domain.tld. It's expected that the endpoint is served over HTTPS on port 443.

Terminal Compatibility

A terminal emulator that supports ansi escape codes and true color is required when colored output is enabled. The majority of the popular terminal emulators should support both. While having the colored output enabled provides the best experience, it can be adjusted using the --color option, taking either on, off, or auto as inputs, with auto being the default value.


# query 'stig' showing '20' results from page '1'
$ stig --query 'stig' --number 20 --page 1
# query 'stig' with filter 'language:cpp'
$ stig --query 'stig' --filter 'language:cpp'
# query 'stig' and pipe into less
$ stig --query 'stig' | less
# query 'all' sorted by 'stars' with filter 'language:js'
$ stig --query '' --sort 'stars' --filter 'language:js'
# query 'http server' with filters 'language:cpp' and 'stars:>10'
$ stig --query 'http server' --filter 'language:cpp stars:>10'
# output the readme for 'octobanana/stig' on default branch
$ stig --readme 'octobanana/stig'
# output the readme for 'octobanana/stig' on branch 'master'
$ stig --readme 'octobanana/stig/master'
# output the readme for 'octobanana/stig' on default branch and pipe into less
$ stig --readme 'octobanana/stig' | less
# output the program help
$ stig --help
# output the program version
$ stig --version



  • Linux (supported)
  • BSD (untested)
  • macOS (untested)


  • C++17 compiler
  • Boost >= 1.67
  • OpenSSL >= 1.1.0
  • CMake >= 3.8


  • ssl (libssl)
  • crypto (libcrypto)
  • pthread (libpthread)
  • boost (libboost_system)


  • my belle library, for making HTTPS requests, included as ./src/ob/belle.hh
  • my parg library, for parsing CLI args, included as ./src/ob/parg.hh
  • nlohmann's json library, for working with JSON, included as ./src/lib/json.hh

The following shell command will build the project in release mode:

To build in debug mode, run the script with the --debug flag.


The following shell command will install the project in release mode:

To install in debug mode, run the script with the --debug flag.


This project is licensed under the MIT License.

Copyright (c) 2018 Brett Robinson

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the 'Software'), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.


No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Telegram Directory – Find top Telegram channels, bots and groups (December 12, 2018: 6 points)
Show HN: Web-based GUI for MongoDB, using the ACE editor (February 28, 2013: 2 points)
Show HN: api python client (November 12, 2013: 1 point)

Show HN: Telegram Directory – Find top Telegram channels, bots and groups

6 points 4 days ago by poeti8 in 3997th position | | comments

Top Channels (view all): Telegram Directory (261 members, 100%+), The Hire (3226 members, 96%+), The Devs (17812 members, 94%+), Programming Challenges (4560 members, 100%+), Full-Stacks (219 members, 100%+), TheFrontEnd (4210 members, 91%+), Cool ProductHunt (456 members, 90%+), QMRes (55 members, 100%+), theCuts (154 members, 84%+)

Top Bots (view all): Quadnite Bot (86%+), Rextester (87%+), TempMail (100%+), Guggy (100%+), Questable Bot (80%+), Nukist Bot (100%+), YouTube Subscriptions Bot (100%+), Octanite Bot (100%+), Smokey: Air Quality (100%+)

Top Groups (view all): Qt People (87 members, 100%+), C/C++ (5255 members, 100%+), Whatever Wolf (640 members, 100%+), Flutter India (59 members, 100%+), Fortran Group (6 members, 100%+), Daily Moron Chat (365 members, 100%+), Tricksinfoclub (1759 members, 100%+), pyTeens (76 members, 100%+), Mathematics (1676 members, 100%+)

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Elixir/Unix style pipe operations in Ruby (December 11, 2018: 5 points)

Show HN: Elixir/Unix style pipe operations in Ruby

5 points 6 days ago by bonquesha99 in 3973rd position | Estimated reading time – 16 minutes | comments


Elixir/Unix style pipe operations in Ruby - PROOF OF CONCEPT

''.pipe do
  yield_self { |n| "Ruby has #{n} stars" }
end
#=> Ruby has 15120 stars
-9.pipe { abs; Math.sqrt; to_i } #=> 3
# Method chaining is supported:
-9.pipe { abs; Math.sqrt.to_i } #=> 3
# Pipe | for syntactic sugar:
-9.pipe { abs | Math.sqrt.to_i } #=> 3
# If we actually need to pipe the method `|` on
# some other object then we can just use `send`:
-2.pipe { abs | send(:|, 4) } #=> 6
sqrt = Math.pipe.sqrt #=> #<PipeOperator::Closure:0x...@pipe_operator/closure.rb:18>
sqrt.call(9)          #=> 3.0
sqrt.call(64)         #=> 8.0
[9, 64].map(&Math.pipe.sqrt)           #=> [3.0, 8.0]
[9, 64].map(&Math.pipe.sqrt.to_i.to_s) #=> ['3', '8']
# Still not concise enough for you?
Module.alias_method(:|, :__pipe__)
[9, 64].map(&Math.|.sqrt)           #=> [3.0, 8.0]
[9, 64].map(&Math.|.sqrt.to_i.to_s) #=> ['3', '8']
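A couple of the results above can be sanity-checked in plain Ruby, since Integer#| is just bitwise OR (a quick check for the reader, not gem code):

```ruby
# -2.pipe { abs | send(:|, 4) } corresponds step by step to:
(-2).abs     #=> 2
(-2).abs | 4 #=> 6 (binary 010 | 100 = 110)

# and -9.pipe { abs; Math.sqrt; to_i } to:
Math.sqrt((-9).abs).to_i #=> 3
```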


There's been some recent activity related to Method and Proc composition in Ruby:

This gem was created to propose an alternative syntax for this kind of behavior.
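That recent activity presumably refers to the Proc and Method composition operators (>> and <<) that landed in Ruby 2.6; a minimal sketch, assuming Ruby >= 2.6:

```ruby
# f >> g pipes left to right (g runs on f's result);
# f << g composes right to left (f runs on g's result).
f = ->(x) { x + 1 }
g = ->(x) { x * 2 }

(f >> g).call(3) #=> 8
(f << g).call(3) #=> 7

# Method objects compose the same way:
(Math.method(:sqrt) >> :to_i.to_proc).call(10) #=> 3
```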

Matz on Ruby


Ruby is a language of careful balance of both functional and imperative programming.

Matz has often said that he is trying to make Ruby natural, not simple, in a way that mirrors life.

Building on this, he adds: Ruby is simple in appearance, but is very complex inside, just like our human body.


The general idea is to pass the result of one expression as an argument to another expression - similar to Unix pipelines:

echo 'testing' | sed 's/ing//' | rev
#=> tset

The Elixir pipe operator documentation has some other examples but basically it allows expressions like:

JSON.parse(Net::HTTP.get(URI.parse(url)))

To be inverted and rewritten left to right or top to bottom, which is more natural to read in English:

# left to right
url.pipe { URI.parse | Net::HTTP.get | JSON.parse }
# or top to bottom for clarity
url.pipe do
  URI.parse
  Net::HTTP.get
  JSON.parse
end

The differences become a bit clearer when other arguments are involved:

loans = Loan.preapproved.submitted(Date.current).where(broker: Current.user)
data = { |loan| }
json = JSON.pretty_generate(data, allow_nan: false)

Using pipes removes the verbosity of maps and temporary variables:

json = Loan.pipe do
  where(broker: Current.user)
  JSON.pretty_generate(allow_nan: false)
end

While the ability to perform a job correctly and efficiently is certainly important - the true beauty of a program lies in its clarity and conciseness:

''.pipe do
  yield_self { |n| "Ruby has #{n} stars" }
end
#=> Ruby has 15115 stars

There's nothing really special here - it's just a block of expressions like any other Ruby DSL, and the pipe | operator has been around for decades!, &:expressive).that(you can) do
  pretty_much ANYTHING if it.compiles!
end

This concept of pipe operations could be a great fit like it has been for many other languages:



This has only been tested in isolation with RSpec and Ruby 2.5.3!

# First `gem install pipe_operator`
require 'pipe_operator'


The PipeOperator module has a method named __pipe__ which is aliased as pipe for convenience and | for syntactic sugar:

module PipeOperator
  def __pipe__(*args, &block), *args, &block)
  end
end

BasicObject.send(:include, PipeOperator)
Kernel.alias_method(:pipe, :__pipe__)
Module.alias_method(:|, :__pipe__)

When no arguments are passed to __pipe__ then a PipeOperator::Pipe object is returned:

Math.pipe #=> #<PipeOperator::Pipe:Math>

Any methods invoked on this object returns a PipeOperator::Closure which calls the method on the object later:

sqrt = Math.pipe.sqrt       #=> #<PipeOperator::Closure:0x...@pipe_operator/closure.rb:18>
sqrt.call(16)               #=> 4.0
missing = Math.pipe.missing #=> #<PipeOperator::Closure:0x...@pipe_operator/closure.rb:18>                #=> NoMethodError: undefined method 'missing' for Math:Module
Math.method(:missing)       #=> NameError: undefined method 'missing' for class '#<Class:Math>'

When __pipe__ is called with arguments but without a block then it behaves similar to __send__:

sqrt = Math.pipe(:sqrt) #=> #<PipeOperator::Closure:0x...@pipe_operator/closure.rb:18>
sqrt.call(16)           #=> 4.0

sqrt = Math.pipe(:sqrt, 16) #=> #<PipeOperator::Closure:0x...@pipe_operator/closure.rb:18>            #=> 4.0
sqrt.call(16)               #=> ArgumentError: wrong number of arguments (given 2, expected 1)

These PipeOperator::Closure objects can be bound as block arguments just like any other Proc:

[16, 256].map(&Math.pipe.sqrt) #=> [4.0, 16.0]
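For comparison, plain Ruby already gets part of the way there with Method#to_proc, no interception required (a gem-free sketch):

```ruby
# A bound Method object can stand in for a block via &:
[16, 256].map(&Math.method(:sqrt)) #=> [4.0, 16.0]
```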

Simple closure composition is supported via method chaining:

[16, 256].map(&Math.pipe.sqrt.to_i.to_s) #=> ['4', '16']

The block form of __pipe__ behaves similar to instance_exec but can also call methods on other objects:

'abc'.pipe { reverse }        #=> 'cba'
'abc'.pipe { reverse.upcase } #=> 'CBA'
'abc'.pipe { Marshal.dump }                   #=> '\x04\bI\'\babc\x06:\x06ET'
'abc'.pipe { Marshal.dump | Base64.encode64 } #=> 'BAhJIghhYmMGOgZFVA==\n'
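The same pipeline can be written in stock Ruby 2.5 with Kernel#yield_self, at the cost of naming the argument at every step (a gem-free sketch for comparison):

```ruby
require 'base64'

# Each yield_self step receives the previous result explicitly.
result = 'abc'.yield_self { |s| Marshal.dump(s) }
              .yield_self { |s| Base64.encode64(s) }
# result == Base64.encode64(Marshal.dump('abc'))
```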

Outside the context of a __pipe__ block things behave like normal:

Math.sqrt     #=> ArgumentError: wrong number of arguments (given 0, expected 1)
Math.sqrt(16) #=> 4.0

But within a __pipe__ block the Math.sqrt expression returns a PipeOperator::Closure instead:

16.pipe { Math.sqrt }     #=> 4.0
16.pipe { Math.sqrt(16) } #=> ArgumentError: wrong number of arguments (given 2, expected 1)

The piped object is passed as the first argument by default but can be customized by specifying self:

class String
  def self.join(*args, with: '')
    args.join(with)
  end
end

'test'.pipe { String.join('123', with: '-') }       #=> 'test-123'
'test'.pipe { String.join('123', self, with: '-') } #=> '123-test'

Instance methods like reverse below do not receive the piped object as an argument since it's available as self:

Base64.encode64(Marshal.dump('abc').reverse)            #=> 'VEUGOgZjYmEIIkkIBA==\n'
'abc'.pipe { Marshal.dump | reverse | Base64.encode64 } #=> 'VEUGOgZjYmEIIkkIBA==\n'

Pipes also support multi-line blocks for clarity:

'abc'.pipe do
  Marshal.dump
  reverse
  Base64.encode64
end

Notice the pipe | operator wasn't used to separate expressions - it's actually always optional:

# this example from above
'abc'.pipe { Marshal.dump | reverse | Base64.encode64 }
# could also be written as
'abc'.pipe { Marshal.dump; reverse; Base64.encode64 }

The closures created by these pipe expressions are evaluated via reduce:

pipeline = [
  -> object { Marshal.dump(object) },
  -> object { object.reverse },
  -> object { Base64.encode64(object) },
]

pipeline.reduce('abc') do |object, pipe|
end

Intercepting methods within pipes requires prepending a PipeOperator::Proxy module in front of ::Object and all nested constants:

define_method(method) do |*args, &block|
  if, *args, &block)
    super(*args, &block)
  end
end

These proxy modules are prepended everywhere!

It's certainly something that could be way more efficient as a core part of Ruby.

Maybe somewhere lower level where methods are dispatched? Possibly somewhere in this vm_eval.c switch?

  switch (cc->me->def->type) {

Then we'd only need Ruby C API ports for PipeOperator::Pipe and PipeOperator::Closure!

All other objects in this proof of concept are related to method interception and would no longer be necessary.


This test case doesn't work yet - seems like the object is not proxied for some reason:

class Markdown
  def format(string)
    string.upcase
  end
end

'test'.pipe(, &:format) # expected 'TEST'
#=> ArgumentError: wrong number of arguments (given 0, expected 1)


    • Constants flagged for autoload are NOT proxied by default (for performance)
    • Set ENV['PIPE_OPERATOR_AUTOLOAD'] = '1' to enable this behavior
    • Objects flagged as frozen are NOT proxied by default
    • Set ENV['PIPE_OPERATOR_FROZEN'] = '1' to enable this behavior (via Fiddle)
    • Object and its recursively nested constants are only proxied ONCE by default (for performance)
    • Constants defined after __pipe__ is called for the first time are NOT proxied
    • Set ENV['PIPE_OPERATOR_REBIND'] = '1' to enable this behavior
    • The following methods are reserved on PipeOperator::Closure objects:
      • ==
      • []
      • __chain__
      • __send__
      • __shift__
      • call
      • class
      • kind_of?
      • |
    • The following methods are reserved on PipeOperator::Pipe objects:
      • !
      • !=
      • ==
      • __call__
      • __id__
      • __pop__
      • __push__
      • __send__
      • instance_exec
      • method_missing
      • |
    • These methods can be piped via send as a workaround:
      • 9.pipe { Math.sqrt.to_s.send(:[], 0) }
      • example.pipe { send(:__call__, 1, 2, 3) }
      • example.pipe { send(:instance_exec) { } }
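Outside a pipe, the first workaround corresponds to an ordinary chain of message sends in plain Ruby:

```ruby
# send(:[], 0) is just String#[] spelled as an explicit message send:
Math.sqrt(9).to_s.send(:[], 0) #=> '3'
```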




  • Fork the project.
  • Make your feature addition or bug fix.
  • Add tests for it. This is important so we don't break it in a future version unintentionally.
  • Commit, do not mess with the version or history.
  • Open a pull request. Bonus points for topic branches.



MIT - Copyright © 2018 LendingHome

All Comments: [-]

pmontra(3049) 6 days ago [-]

I quote one of the examples

    ''.pipe do
      yield_self { |n| "Ruby has #{n} stars" }
    end
    #=> Ruby has 15115 stars
Not having to type the |> like in Elixir is two shift keys less, which is good. I'm not sure about readability, because one has to spot the .pipe at the beginning of the block, but it shouldn't be a problem.

Now, if we only had pattern matching with the exact syntax of Elixir and not some monstrosity I saw around in proposals and other languages...

bonquesha99(3973) 5 days ago [-]

Thanks for your feedback!

Check out this other proof of concept demonstrating ES6 style object destructuring in Ruby:

I think this same type of concept could be applied to port Elixir style pattern matching as well e.g.

    data = {
      name: 'John Smith',
      age: 35,
      prefs: {
        lang: 'en',
        tz: 'UTC',
    User = Pattern { name age prefs[lang] }
    user = User =~ data
    case object
    when Pattern { some attrs[:nested][real][deep, fields] }
      Pattern!.nested #=> NoMethodError
    # or define 'locals' by defining temporary methods on
    # the block receiver when the 'then' block is evaluated
    case object
    when Pattern { some nested[data] }.then do
      puts some
      puts data

Historical Discussions: Show HN: I made a better Secret Santa generator (December 10, 2018: 5 points)

Show HN: I made a better Secret Santa generator

5 points 7 days ago by diogoredin in 10000th position | | comments






All Comments: [-]

diogoredin(10000) 7 days ago [-]


I was tired of all the existing alternatives so I built a better Secret Santa organiser. This one works with mobile numbers, so you don't have to bug your friends for their emails.

Once participants receive their assigned Secret Santa, they can message the app with what they would like to receive, and the person gifting them receives that message anonymised.

I have been getting conflicting feedback regarding this SMS solution and the need for paying for the messages. What do you think? Does this improve the existing solutions or not?

chucktorres(10000) 6 days ago [-]

I'm liking your no-nonsense interface but unfortunately email is still king imo.

Giving out phone numbers feels icky - a violation of privacy.

There needs to be an opt-in component which is very easy via email.

A nice to have would be the ability to recompute matches with minimal disruption when someone drops out.

Just something to consider.

Historical Discussions: Show HN: Gmail Add-On: Collect Emails from Slack for Use in to Field (December 11, 2018: 5 points)

Show HN: Gmail Add-On: Collect Emails from Slack for Use in to Field

5 points 5 days ago by buzzfeedmax in 10000th position | Estimated reading time – 3 minutes | comments

I pitched this idea to someone at Slack and immediately realized it would go nowhere. Slack is incentivized to replace email, and will gladly let you pipe an email into a Slack channel, but to my suggestion, they asked "why not just Slack that channel?" It's a rational question, but humans are irrational. Slack is amazing, but it's just not going to kill email (yet, anyways). When it comes to forwards, custom links, formatted campaigns, and important to-dos you just don't want people to miss, we still prefer to email.

I decided to use Google's new Gmail Add-ons feature like I did with Face to a Name. I'm going it alone with only moderate engineering skills, so I'd love to get feedback from any developers reading this (and user feedback from anyone who would be interested in using this!). Here's what I've got:

My add-on is built into Gmail's composer. A little Slack icon appears in the bottom of every new mail:

See the little logo? Innit pretty?

On click, a modal pops up with the list of select Slack channels (our E.R.G. channels to start). Simply click on one to scrape that entire channel for the members' emails!

Now, my problem: I can't find any method to directly update the "To" field of the draft with these emails. If you're reading this from Google, or if you have the answer, please let me know!

I settled on outputting the email list so you can copy and paste. It's annoying that I cannot get multi-line support (another code issue), but this does technically allow the user to conjure up all those emails in a channel:

People kind of hate it, though, so I'd love to see this through to the final step of automatically populating the Gmail fields.

Do you think this is a tool you would use? If you are interested, let me know or leave a comment below. If you want to help me modify the To field from an add-on, please let me know! And if you're from Google, I look forward to seeing the add-on functionality expand in the near future.

Max is Head of People Analytics at BuzzFeed and loves to write, share and consult on his work. If you liked this story, I bet you'll love my HR music video. Reach out: i[email protected]

No comments posted yet: Link to HN comments page

Show HN: Kube – Deploy auto-scaled containers with one command

5 points 1 day ago by theo31 in 3953rd position | Estimated reading time – 1 minutes | comments

🚀 Autoscale

Each application is automatically configured to scale up to 20 containers based on CPU usage.

Deploy and Forget

One-line deployment with our CLI.

> kube deploy appName

Image-building happens in the cloud.


Each deployment gets a free automatic SSL and a subdomain on


We keep the last 5 deployed images so you can rollback when something goes wrong.

Zero Downtime Rollout

While deploying a new version, we make sure your app always has enough active containers.

Always ON

We never freeze your deployments. We guarantee that at least one container will always be running.

No Lock In

You can migrate away from Kube anytime; Docker is a technology supported in some way by all major clouds.

Bring your own domain

Configure as many custom domains as you need without additional fees.

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: ChauffeurNet – Learning to Drive Beyond Pure Imitation (December 11, 2018: 5 points)

Show HN: ChauffeurNet – Learning to Drive Beyond Pure Imitation

5 points 6 days ago by lawrenceyan in 3283rd position | Estimated reading time – 7 minutes | comments

Creating ChauffeurNet: A Recurrent Neural Network for Driving

In order to drive by imitating an expert, we created a deep recurrent neural network (RNN) named ChauffeurNet that is trained to emit a driving trajectory by observing a mid-level representation of the scene as an input. A mid-level representation does not directly use raw sensor data, thereby factoring out the perception task, and allows us to combine real and simulated data for easier transfer learning. As shown in the figure below, this input representation consists of a top-down (birds-eye) view of the environment containing information such as the map, surrounding objects, the state of traffic lights, the past motion of the car, and so on. The network is also given a Google-Maps-style route that guides it toward its destination.

ChauffeurNet outputs one point along the future driving trajectory during each iteration, while writing the predicted point to a memory that is used during its next iteration. In this sense, the RNN is not traditional, because the memory model is explicitly crafted. The trajectory output by ChauffeurNet, which consists of ten future points, is then given to a low-level controller that converts it to control commands such as steering and acceleration that allow it to drive the car.

In addition, we have employed a separate "PerceptionRNN" head that iteratively predicts the future of other moving objects in the environment and this network shares features with the RNN that predicts our own driving. One future possibility is a deeper interleaving of the process of predicting the reactions of other agents while choosing our own driving trajectory.

Rendered inputs and output for the driving model. Top-row left-to-right: Roadmap, Traffic lights, Speed-limit, and Route. Bottom-row left-to-right: Current Agent Box, Dynamic Boxes, Past Agent Poses, and the output Future Agent Poses.
ChauffeurNet has two internal parts, the FeatureNet and the AgentRNN. The AgentRNN consumes an image with a rendering of the past agent poses, a set of features computed by a convolutional network "FeatureNet" from the rendered inputs, an image with the last agent box rendering, and an explicit memory with a rendering of the predicted future agent poses to predict the next agent pose and the next agent box in the top-down view. These predictions are used to update the inputs to the AgentRNN for predicting the next timestep.

Imitating the Good

We trained the model with examples from the equivalent of about 60 days of expert driving data, while including training techniques such as past motion dropout to ensure that the network doesn't simply continue to extrapolate from its past motion and actually responds correctly to the environment. As many have found before us, including the ALVINN project back in the 1980s, purely imitating the expert gives a model that performs smoothly as long as the situation doesn't deviate too much from what was seen in training. The model learns to respond properly to traffic controls such as stop signs and traffic lights. However, deviations such as introducing perturbations to the trajectory or putting it in near-collision situations cause it to behave poorly, because even when trained with large amounts of data, it may have never seen these exact situations during training.

Agent trained with pure imitation learning gets stuck behind a parked vehicle (left) and is unable to recover from a trajectory deviation while driving along a curved road (right). The teal path depicts the input route, yellow box is a dynamic object in the scene, green box is the agent, blue dots are the agent's past positions and green dots are the predicted future positions.

Synthesizing the Bad

Expert driving demonstrations obtained from real-world driving typically contain only examples of driving in good situations, because for obvious reasons, we don't want our expert drivers to get into near-collisions or climb curbs just to show a neural network how to recover in these cases. To train the network to get out of difficult spots, it then makes sense to simulate or synthesize suitable training data. One simple way to do this is by adding cases where we perturb the driving trajectory from what the expert actually did. The perturbation is such that the start and end points of the trajectory stay the same, with the deviation mostly occurring in the middle. This teaches the neural network how to recover from perturbations. Not only that, these perturbations generate examples of synthetic collisions with other objects or the road curbs, and we teach the network to avoid those by adding explicit losses that discourage such collisions. These losses allow us to leverage domain knowledge to guide the learning towards better generalization in novel situations.

Trajectory perturbation by pulling on the current agent location (red point) away from the lane center and then fitting a new smooth trajectory that brings the agent back to the original target location along the lane center.

This work demonstrates one way of using synthetic data. Beyond our approach, extensive simulations of highly interactive or rare situations may be performed, accompanied by a tuning of the driving policy using reinforcement learning (RL). However, doing RL requires that we accurately model the real-world behavior of other agents in the environment, including other vehicles, pedestrians, and cyclists. For this reason, we focus on a purely supervised learning approach in the present work, keeping in mind that our model can be used to create naturally-behaving "smart-agents" for bootstrapping RL.

Experimental Results

We saw how the pure imitation-learned model failed to nudge around a parked vehicle and got stuck during a trajectory deviation above. With the full set of synthesized examples and the auxiliary losses, our full ChauffeurNet model can now successfully nudge around the parked vehicle (left) and recover from the trajectory deviation to continue smoothly along the curved road (right).

In the examples below, we demonstrate ChauffeurNet's response to the correct causal factors on logged examples in a closed-loop setting within our simulator. In the left animation, we see the ChauffeurNet agent come to a full stop before a stop-sign (red marker). In the right animation, we remove the stop-sign from the rendered road and see that the agent no longer comes to a full stop, verifying that the network is responding to the correct causal factors.

In the left animation below, we see the ChauffeurNet agent stop behind other vehicles (yellow boxes) and then continue as the other vehicles move along. In the right animation, we remove the other vehicles from the rendered input and see that the agent continues along the path naturally since there are no other objects in its path, verifying the network's response to other vehicles in the scene.

In the example below, the ChauffeurNet agent stops for a traffic light transitioning from yellow to red (note the change in intensity of the traffic light rendering which is shown as the curves along the lane centers) instead of blindly following behind other vehicles.

After testing in simulation, we replaced our primary planner module(s) with ChauffeurNet and used it to drive a Chrysler Pacifica minivan on our private test track. These videos demonstrate the vehicle successfully following a curved lane and handling stop-signs and turns.

The example below demonstrates predictions from PerceptionRNN on a logged example. Recall that PerceptionRNN predicts the future motion of other dynamic objects. The red trails indicate the past trajectories of the dynamic objects in the scene; the green trails indicate the predicted trajectories two seconds into the future, for each object.

No comments posted yet: Link to HN comments page

Historical Discussions: Show HN: Element, use Puppeteer to load test your app (December 10, 2018: 5 points)

Show HN: Element, use Puppeteer to load test your app

5 points 7 days ago by Bockit in 3593rd position | | comments

Browser vs. Protocol

Load testing has barely kept pace with the rate of innovation on the web as a platform over the last 20 years. We set out to change this with Flood Element.

Traditionally, load testing meant simulating network calls as quickly as possible, either using scripting, log replay, or a network recorder. But these approaches have always suffered from high script-maintenance costs due to the fickle nature of network requests, neglect due to complexity, or unrealistic load caused by a misunderstanding of the workload patterns of regular users of the product.

These are just some of the problems we're solving by load testing in a similar way to real users of your application.

No comments posted yet: Link to HN comments page