Hacker News with comments/articles inlined for offline reading

Authors ranked on leaderboard
Last updated: February 16, 2019 20:03
Reload to view new stories



Front Page/ShowHN stories over 4 points from last 7 days
If the internet connection drops, you can still read the stories
If there were any historical discussions of a story, links to all the previous submissions on Hacker News will appear just above the comments.

Historical Discussions: Show HN: DeskGap – Like Electron, but uses the system webview (February 13, 2019: 316 points)

(317) Show HN: DeskGap – Like Electron, but uses the system webview

317 points 4 days ago by patr0nus in 10000th position

deskgap.com | Estimated reading time – 6 minutes | comments | anchor

DeskGap

DeskGap is a framework for building cross-platform desktop apps with web technologies (JavaScript, HTML and CSS).

To enable native capabilities while keeping the size down, DeskGap bundles a Node.js runtime and leaves the HTML rendering to the operating system's webview.

Supported Platforms

  • Mac OS X Yosemite (version 10.10) or later
  • Windows 10 October 2018 Update (version 1809) or later

Downloads

Prebuilt Binaries

npm install --save-dev deskgap

API Demos

The DeskGap API Demos app shows some of the DeskGap features and APIs with interactive scripts.

Pym: A Real-Life App Built With DeskGap

To field-test DeskGap, squoosh was wrapped into a desktop app, 'Pym', with DeskGap and submitted to the app stores.

Getting Started

Creating a Node.js Package for your app

hello-deskgap/
├── package.json
├── index.js
└── index.html

package.json points to the app's entry file and provides the script that starts your app:

{
  "name": "hello-deskgap",
  "main": "index.js",
  "scripts": {
    "start": "deskgap ."
  }
}

index.js is the entry file that creates a window which will render an HTML page:

const { app, BrowserWindow } = require('deskgap');
app.once('ready', () => {
    const win = new BrowserWindow();
    win.loadFile('index.html');
});

index.html is the page to render:

<!DOCTYPE html>
<html>
  <head><meta charset='utf-8' /><title>Hello DeskGap</title></head>
  <body><h1>Hello DeskGap</h1></body>
</html>

Installing DeskGap

npm install --save-dev deskgap

Starting Your App
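
With the "start" script defined in package.json above, the app can be launched through npm (assuming deskgap was installed in the previous step):

```shell
npm start
```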

Documentation

Work in Progress

FAQ

What's the difference between DeskGap and Electron?

DeskGap is designed to be a more lightweight alternative to Electron. It does not bundle Chromium or any other web engines. Instead, the ability of rendering HTML pages comes from the webview provided by the operating system, specifically, WKWebView on macOS and Microsoft.Toolkit.Forms.UI.Controls.WebView on Windows.

DeskGap is at an early stage. The API is still quite limited compared to Electron's. Many features are under development, and some of them will probably never be possible. See this and this for more information.

There are already similar attempts (electrino and Quark for instance) out there. What makes DeskGap different?

With a Node.js runtime bundled, DeskGap comes with support for npm packages and all the battle-tested native capabilities in Node.js such as fs, net, http. The price is a larger executable size (about 8 MB zipped and 20 MB unzipped).

Why is the supported version of Windows so high? Any plan of supporting Windows 7 and Linux?

Older versions of Windows do not ship a modern browser engine, only the one that powers Internet Explorer. Windows 10 1809 is the first version that provides a modern webview with enough functionality to make DeskGap possible.

To support Windows 7, app developers would have to face compatibility issues coming from as low as IE 8. I personally don't have enough motivation and interest to do this, but pull requests are always welcome.

Linux support would be great but I have little knowledge of Linux app development. For now I am looking at Qt WebEngine. Any advice & help is appreciated.

If you want to try DeskGap but dropping Windows 7 (or Linux) support is a no-go for your app, consider packaging the app with Electron for the unsupported platform. The DeskGap API is intentionally designed to be like Electron's. The following code is a good start:

let appEngine;
try {
  appEngine = require('deskgap');
}
catch (e) {
  appEngine = require('electron');
}
const { app, BrowserWindow } = appEngine;

So I can port my Electron app to DeskGap?

Probably not. The DeskGap API is still quite limited. If you start building an app with DeskGap, getting it running on Electron may be easy, but not the other way around.




All Comments: [-] | anchor

vijaybritto(3947) 4 days ago [-]

The main problem that everyone has with electron is the RAM consumption right?! If the webviews are anyway gonna increase the RAM to electron levels while being significantly hard to test and deploy then this will not work out ever. I'm only bothered about low RAM memory usage not the disk space.

accatyyc(4018) 3 days ago [-]

If you're concerned about RAM, this approach makes a lot of sense. Every process using the system browser engine (different tabs, different apps with webviews) runs the same binary, so the OS can share memory between them. With Electron, each app bundles its own browser, and memory cannot be shared since they're different binaries.

snarfy(3931) 4 days ago [-]

A simple hello-world-style app in Electron and in my own project [1] show Electron using ~100 MB while the webview-based project uses ~50 MB, so there's that.

[1] https://github.com/zenakuten/webview-cs

Klonoar(3275) 4 days ago [-]

These projects completely overlook _why_ people choose Electron over the system view.

- Nobody wants to be testing against multiple browser/rendering engines in 2019.

- Nobody wants to wait for a vendor to update their implementation when Chrome has the feature available almost immediately.

Edit: Since I can already see the litany of armchair-quarterback-desktop-app-authors, I'm just going to link to the comment from the guy who actually migrated Slack away from WKWebView.

https://news.ycombinator.com/item?id=18763449

Blows my mind we're still debating this.

laumars(3241) 4 days ago [-]

I get that writing tests often isn't fun but it's as much a part of responsible development as writing useful error messages and not using experimental features on production builds.

What really blows my mind is that we've reached a point in computing where the developer's laziness outranks the customer's needs.

> Since I can already see the litany of armchair-quarterback-desktop-app-authors

I think you'll find those who are offering counter arguments aren't just armchair critics. Eg I've been writing software since the 80s. This includes a considerable amount of desktop software. So I'd like to think my opinion is just as valid as your own. ;)

_pmf_(10000) 3 days ago [-]

> Since I can already see the litany of armchair-quarterback-desktop-app-authors

I'm so very sorry our 20 years of experience that causes us to reject the subpar monstrosities you like to call 'desktop apps' rubs you the wrong way.

atoav(10000) 3 days ago [-]

As somebody who deals with HTML and CSS often I like the idea to be able to use these layout skills in a desktop GUI.

But size and speed matter (you don't see many Electron-based games..). We as a species can't just keep building faster and more powerful computers and then let developers throw those gains away on ever more elaborate Rube Goldberg machines that take 100 times the space they need, run more things in the same old time we're used to waiting, with dependencies no single human has ever reviewed at once.

Many of the same people who waste tremendous amounts of energy and collective human time writing inefficient applications may be committed environmentalists outside their jobs, but when it comes to computers suddenly we don't care. As long as it's acceptable on a new machine, who cares, right?

It would be entirely possible to write perfectly fast and efficient GUI applications using HTML and CSS as layout engines without dragging whole browsers into this. Using the system webview might not be the solution, but maybe something that doesn't feel like it has been taped together with duct tape would help. A fully fledged browser engine is overkill for most common desktop use cases, and while I understand why one settles on Electron, I'd rather see a really thought-out solution than a duct-taped one.

darkblackcorner(3976) 3 days ago [-]

> Nobody wants to be testing against multiple browser/rendering engines in 2019

If you don't want to test multiple (browsers) ANYTHING, you shouldn't be a (web) developer.

Seriously, this is why we still have sites that only work in IE.

EGreg(1741) 4 days ago [-]

I wonder how all those commenters from HN who always say "we should have more competition in the browser engines" feel about your sentiment.

Octoth0rpe(10000) 4 days ago [-]

> Nobody wants to be testing against multiple browser/rendering engines in 2019.

That's certainly _one_ reason why.

I would posit that the main reason is that web developers would like to reuse their webapp skillset for building desktop apps. This seems like it accomplishes the goal.

Re: testing against multiple browsers, frankly this is _mostly_ a solved problem. The big gaps between rendering engines are on the real fringe of CSS at this point, which most of us aren't bothering to use. Also, many people building on top of Electron (and presumably this platform) are using CSS toolkits like Blueprint, and most of those take care of most of your cross-browser issues anyway. So, largely not a concern.

iffycan(3711) 4 days ago [-]

Also, Electron provides things like single-command building, automatic updates and menu bar support (which looks to be supported in DeskGap).

zaarn(10000) 4 days ago [-]

These two 'nobody' reasons are why we are teetering on the edge of a browser monopoly, which will likely hurt desktop apps too if they use Electron.

pjmlp(363) 4 days ago [-]

Right on, this generation has no right to complain about IE only web sites. /s

CyberDildonics(10000) 4 days ago [-]

Doesn't anyone who makes a webpage need to test it in different browsers?

patr0nus(10000) 4 days ago [-]

> Nobody wants to be testing against multiple browser/rendering engines in 2019.

That's why DeskGap uses EdgeHTML on Windows. With toolchains such as webpack and babel, building an app that runs on WebKit and EdgeHTML (which is going to be replaced by Chromium[0]) can't be that hard.

> Nobody wants to wait for a vendor to update their implementation when Chrome has the feature available almost immediately.

DeskGap isn't meant to be a complete replacement for Electron. But after a glance at Electron Apps[1], I suspect many simple apps do not require state-of-the-art features.

[0] https://blogs.windows.com/windowsexperience/2018/12/06/micro...

[1] https://electronjs.org/apps
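A minimal sketch of the toolchain setup the parent comment describes: a Babel config whose targets cover both engines, so one codebase compiles down to syntax both webviews understand. The version numbers here are illustrative assumptions, not DeskGap requirements (WKWebView roughly tracks Safari, and EdgeHTML tracks Edge).

```json
{
  "presets": [
    ["@babel/preset-env", {
      "targets": {
        "safari": "10",
        "edge": "17"
      }
    }]
  ]
}
```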

maaaats(2796) 4 days ago [-]

> Blows my mind we're still debating this.

Maybe you should consider that a sign that it's not as black&white as you tout it to be? I mean, what a way to just dismiss everything..

petecox(4029) 4 days ago [-]

That would be an argument for Microsoft switching to Chromium, albeit 'Edgium' might not entail the bleeding edge 6 week Chrome release cycle.

microcolonel(4016) 4 days ago [-]

> Nobody wants to be testing against multiple browser/rendering engines in 2019.

Soon they'll all be WebKit or Chromium.

josteink(3411) 3 days ago [-]

> Nobody wants to be testing against multiple browser/rendering engines in 2019.

Speak for yourself.

I hate to break it to you, but in the real world there are multiple browsers.

Not testing in more than one is lazy, standards ignorant and an insult to your users.

Let's not repeat the MSIE-fiasco again. We know better. We can do better.

aylmao(3651) 4 days ago [-]

> Nobody

This is like claiming nobody wants to write C because no one wants to manage their own memory in 2019, or miss out on all the cool new packages in the JS ecosystem.

Evidently _some people_ do. I can assure you there are at least _n > 1_ people who care more about app size than either of your points.

I personally wrote a side project in system web-view, because I don't want my macOS-only system-tray application to weigh 115+MB to make sure I have APIs I don't need in platforms I don't support.

stonogo(10000) 4 days ago [-]

Then again, the Slack app segfaulted on launch for months because of a node.js incompatibility with newer glibc. So I guess what you've traded for is

- Nobody bothers to be testing against multiple operating systems in 2019.

- Everybody gets to wait for Slack to get around to it while the system webview gets the compatibility update almost immediately.

Great for the authors, shit for the users. On the bright side, it was the straw that broke the camel's back, and my org doesn't pay Slack any more.

tambourine_man(108) 4 days ago [-]

>Nobody wants to be testing against multiple browser/rendering engines in 2019

Nobody does. CSS is pretty mature; differences in rendering between modern engines are extremely rare.

blauditore(10000) 3 days ago [-]

> Nobody wants to be testing against multiple browser/rendering engines in 2019

You mean like when building websites?

mch82(4013) 4 days ago [-]

> Nobody wants to be testing against multiple browser/rendering engines in 2019

Yes, lots of projects want multiple, competitive rendering engines.

It used to be that Windows was the only viable target OS for commercial software. Firefox and Safari gave IE competition and gave developers who wanted to target Mac and Linux a way to get a foot in the door. That kicked off the whole generation of web apps we've just lived through. If developers give up on Gecko and WebKit, then they allow a monopoly again. And since Electron is made by GitHub and is now owned by Microsoft, Microsoft will again have the monopoly (with the caveat that Google has primary influence over Blink).

Render engine competition keeps web standards relevant, which keeps the web open. Take away competition & we'll be back in a world where people build to IE6 for a decade instead of targeting standard HTML/CSS/JS.

kalleboo(3769) 4 days ago [-]

The Slack dev you're quoting doesn't mention any of your points. His points were that the native extension API was clunky (DeskGap could improve upon this), that users of old OS versions are completely left out in the cold for updates (he mentions severe UI bugs, which is very different from 'we need bleeding-edge Chrome features'), and that the difference would be negligible anyway since the poor RAM and CPU usage is from their own garbage JavaScript code anyway, not the browser engine.

finchisko(3958) 4 days ago [-]

I want a small app package. By not bundling a webview, DeskGap looks perfect to me.

eridius(3786) 4 days ago [-]

Slack migrating to Electron was by far the worst decision they've ever made for UX. That one migration introduced a ton of bugs and platform inconsistencies, most of which are still around.

One I just complained about the other day is Electron apparently doesn't handle font fallback properly, so if I type an emoji that Slack doesn't support, such as [frozen face], it just renders as a placeholder square in desktop Slack even though it renders as the emoji on both the website and in the iOS app.

Edit: Apparently HN strips the emoji from the comment [facepalm]

disillusion(10000) 3 days ago [-]

I think it's interesting how debates about Electron inevitably end up with the arguments 'the developer wants X' vs 'the user wants Y'. However, there's one aspect a lot of people seem to overlook in their arguments: pragmatism.

The ideal application:

- uses almost no memory

- uses almost no disk space

- is extremely fast

- costs (next to) nothing

- and has all the features in the world

- presented in a manner that automatically shows the user only the exact features (s)he cares about

In the real world, we have to balance the project/product requirements. In the end, only these things matter:

- It yields a net profit (monetary or otherwise) for the company or owner

- It has (and keeps) added value in comparison to similar software

- It's fun to design and develop in/for

- It has bugfixes and new features in a timely manner, without taking up too much development time

- It has a pretty, easy to use interface

- It's quick and snappy enough to run

Add that all up, and developing in something like Electron is a no-brainer: mean time between iterations is shorter, design and development are more fun, and the end user gets a product that is fast enough for their needs and packed with features. Try that in any low-level language, or without control over the engine, and you'll have to severely compromise one of these goals.

sigi45(10000) 3 days ago [-]

Slack is Electron. Slack is installed on many laptops. Slack App just sucks and wastes real resources.

blablabla123(10000) 3 days ago [-]

>The ideal application:

>- and has all the features in the world

In this case it's okay if it uses up all disk space and memory. Actually, it can even cost something in this case, because only one application is needed.

It should be open source though so users can audit it and fix bugs if required. You see, just by requiring consistency in the requirements, one can get very far. Optimizing for typical startup metrics is just one way to do things but not the only valid one.

diegoperini(3934) 3 days ago [-]

This comment should be posted under each Electron vs The World discussion as a reminder by a bot.

Disclaimer: I hate Electron. (but really appreciate and admire its developers)

sh4rk(10000) 3 days ago [-]

I disagree.

The ideal application:

- uses no memory

- uses no disk space

- is instant

- costs nothing

- and has all the features in the world

- presented in a manner that automatically shows the user only the exact features (s)he cares about

The real-world application, when the developers work for the customer and not for themselves:

- uses almost no memory

- uses almost no disk space

- is extremely fast

- costs next to nothing

- and has almost all the features in the world

- presented in a manner that automatically shows the user only the exact features (s)he cares about

jamil7(4018) 3 days ago [-]

Pretty perfect summary and the reason why I pick native for side projects and electron for paid work (unless explicitly requested).

fxfan(10000) 3 days ago [-]

> snappy enough to run ... Electron is a no-brainer

Not everyone on earth is a dev/gamer/video-editor. For regular people, Electron is a cancer that needs to be nipped in the bud. Remember, 91% of the desktops on earth run Windows (meaning regular USD 300 machines). And of those, I'd wager fewer than 10% have machines where Electron is snappy ($1.2K ThinkPads).

What the earth needs is a better cross platform native GUI framework. NOT electron.

tobyhinloopen(10000) 3 days ago [-]

I feel like I've seen this argument before, for Java desktop applications.

janci(10000) 4 days ago [-]

You mean IE on windows? No, thanks.

zapzupnz(10000) 3 days ago [-]

According to the website, it doesn't use IE's engine, it uses Edge's. Soon, that will be Chromium.

patr0nus(10000) 4 days ago [-]

Hi HN,

DeskGap is another attempt to build a lightweight alternative to Electron. Compared to the existing attempts [0, 1], I chose not to go that far, and bundle a real Node.js with it[2].

And to battle test the framework, I wrapped squoosh[3] into a desktop app[4] with DeskGap, and successfully submitted it to the app stores.

[0] https://github.com/pojala/electrino

[1] https://github.com/jscherer92/Quark

[2] https://deskgap.com/#faq

[3] https://squoosh.app

[4] https://github.com/patr0nus/Pym/

styfle(3105) 3 days ago [-]

Thanks for making DeskGap!

I have been watching these types of tools (desktop js frameworks) and I'm glad the DeskGap docs[0] explain the difference between existing tools like Electrino and Quark although the most notable difference is that those two projects are no longer maintained.

I just added DeskGap to my list of Awesome Desktop JS frameworks[1].

[0]: https://deskgap.com/#there-are-already-similar-attempts-elec...

[1]: https://github.com/styfle/awesome-desktop-js

kodablah(3391) 4 days ago [-]

Howdy, can you compare the system-side webview with https://github.com/zserge/webview? Specifically, what control do you use on Windows: MSHTML, or have you incorporated the recently-freed-from-UWP (I think) Edge API?

EDIT: Appears the latter [0]. Great work. I wonder how this affects bundling...does this make it a UWP app?

0 - https://github.com/patr0nus/DeskGap/blob/master/core/src/win...

alexandernst(3978) 4 days ago [-]

How is DeskGap more lightweight than Electron if, at the end of the day, it's using the browser on my OS (which, let's assume, is Chrome)?

Wehrdo(10000) 4 days ago [-]

Looks like a great project!

The webpage mentions the app size, but not RAM usage, which to me is a bigger concern with Electron apps. Can you comment on how it compares?

cotelletta(10000) 3 days ago [-]

I'm going to avoid installing Windows 10 for as long as possible, so I hope this never takes off. No offense.

pault(3964) 3 days ago [-]

It has barely started and is still pre-alpha, but I would add Revery[0]. I know the team working on it and they have a good history of finishing what they start.

[0] https://github.com/revery-ui/revery

hardwaresofton(3220) 3 days ago [-]

BTW for those looking for interesting cross-platform focused UI systems, Nuklear is pretty cool: https://github.com/vurtun/nuklear

atrilumen(4021) 3 days ago [-]

Cool, thanks. I'm tempted to make a Node binding.

tptacek(75) 4 days ago [-]

There was about a year and a half worth of security work done on Electron (particularly targeting the Node integration and how Node APIs were exposed). I worry that not a lot of people know just how insecure Electron apps used to be, and would generally worry that new Electron frameworks not designed specifically to be secure are going to recapitulate a lot of that.

Erlich_Bachman(10000) 4 days ago [-]

What is the attack vector that this protects against? Electron apps don't usually just run user-provided code off the internet? They just run the code provided by the app vendor?

Gaelan(4021) 4 days ago [-]

I mean, the easy answer is make the web view send messages to Node instead of doing the node integration.

wolframhempel(1839) 3 days ago [-]

I'm glad to see how this debate has matured from focusing on the technical downsides to focusing on the business and end-user benefits. But at the same time, whenever I have Blender open, effortlessly displaying eight different 3D views and UV mappers, next to the Spotify app almost crashing my system just by playing music, I can't help but wish that more developers would walk the extra mile for the user experience.

fauigerzigerk(3065) 3 days ago [-]

>Spotify app, almost crashing my system by playing music

Apparently the debate has matured from focusing on technical downsides to making unfalsifiable claims about technical downsides.

flintchip(10000) 3 days ago [-]

random and probably irrelevant spotify tip:

I absolutely hated how slow and buggy the app was, until I realised it was down to it attempting to index music locally all the time

my app speed increased massively by going to settings > local files, and turning off all of the options of where to show songs from

vianneychevalie(10000) 3 days ago [-]

Some apps, like Spotify or more recently Microsoft Teams, are completely equivalent in their web versions. And the browser is better at preventing system crashes from a tab than a single encapsulated web app is.

tbodt(2788) 4 days ago [-]

Requiring your Windows users to have the October 2018 update makes no sense.

airstrike(3121) 4 days ago [-]

Read the entire article till you get to this part:

https://deskgap.com/#why-is-the-supported-version-of-windows...

userbinator(871) 4 days ago [-]

I find it amusing that the opening paragraph describes it as a 'framework for building cross-platform desktop apps' and then shortly below that in the 'Supported Platforms' section it lists one Mac OS and one very specific version of Windows. No Linux at all. Even native Windows apps are more cross-platform than that! (I have written a few which will work on any version of Windows starting at Win95...)

patr0nus(10000) 4 days ago [-]

I shared your concern before I started the project, and talked about this in the readme's FAQ[0].

[0] https://deskgap.com/#faq

filmor(10000) 3 days ago [-]

People are always complaining about Electron shipping a browser along with every single application, but actually, the modern Linux desktop is not that far from this all by itself. On a relatively standard desktop installation I currently have:

- webkit-gtk2
- webkit-gtk3
- qtwebkit
- qtwebengine

Additionally, I have two copies of Gecko, for Firefox and Thunderbird. All of them use Skia, which thus gets compiled six times in total.

Using Gentoo, I'm a bit sensitive to this sort of thing ;)

Razengan(3892) 3 days ago [-]

As a user, I don't like Electron; 'webapps' always feel clunky and alien compared to native apps and the rest of the OS.

As a developer, I can appreciate Electron's utility in targeting multiple platforms, but one has to wonder:

Why isn't there a good, open, cross-platform UI library already, that compiles to the native UI of each OS?

Has there even been an initiative to make one?

You'd think the global developer community would have come together to tackle this problem by now, considering how it's such a pain point for all of us, and for users as well.

pault(3964) 3 days ago [-]

Native UI frameworks don't have a 1:1 match on features and UI components, and a cross-platform library would have to compromise on the lowest common denominator or use so many conditionals that you would be maintaining three different codebases anyway.

cpburns2009(10000) 3 days ago [-]

There's actually plenty of them. Off the top of my head: (C) libui; (C++) wxWidgets; (Java) SWT; (JavaScript) ReactNative; (Pascal) Lazarus. Then there's the non-native toolkits that either emulate the OS: (C) GTK; (C++) Qt; (Java) Swing. There's also Tk (Tcl) but I'm not sure where that falls.

c-smile(3909) 3 days ago [-]

> Why isn't there a good, open, cross-platform UI library already, that compiles to the native UI of each OS?

It is either 'good native' or cross-platform. But not both in reality.

In principle you can do something very basic using stock platform widgets: an application that uses only basic widgets, buttons and plain-text textareas. Everything else is too different across platforms.

If your app is anything more than that, you will have problems even with basic stuff. Compare this native-UI example (Notepad++): https://i.kinja-img.com/gawker-media/image/upload/c_lfill,w_... with something like Sublime Text.

Notepad++ is a disaster even with the native platform API. Sublime Text uses a custom non-native renderer and gets consistent UI styling.





Historical Discussions: Google needed to build a graph serving system (February 15, 2019: 8 points)
Why Google Needed a Graph Serving System (February 15, 2019: 6 points)

(235) Google needed to build a graph serving system

235 points 1 day ago by pplonski86 in 487th position

blog.dgraph.io | Estimated reading time – 25 minutes | comments | anchor

When I introduce myself and explain what we are building at Dgraph Labs, I am typically asked if I worked at Facebook, or if what I'm building is inspired by Facebook. A lot of people know about the efforts at Facebook to serve their social graph, because they have published multiple articles about the graph infrastructure they put together.

Word from Google has been limited to serving the Knowledge Graph, but nothing has been said about the internal infrastructure which makes this happen. There are specialized systems in place at Google to serve the knowledge graph. In fact, we (at Google) placed big bets on graph serving systems. I myself put at least two promotions at stake to go work on the new graph thing back in 2010 and see what we could build.

Google needed to build a graph serving system to serve not just the complex relationships in the Knowledge Graph data, but also all the OneBoxes which had access to structured data. The serving system needed to traverse facts, with high enough throughput and low enough latency to be hit by a good chunk of web search queries. No available system or database was able to do all three.

Now that I've answered the why, I'll take the rest of the blog post to walk you through my journey, intertwined with Google's, of building the graph system to serve the Knowledge Graph and OneBoxes.

How do I know anything about this?

I'll quickly introduce myself. I worked at Google from 2006 to 2013. First as an intern, then as a software engineer in Web Search Infrastructure. In 2010, Google acquired Metaweb and my team had just launched Caffeine. I wanted to do something different and started working with Metaweb folks (in SF), splitting my time between San Francisco and Mountain View. My aim was to figure out how we could use the knowledge graph to improve web search.

There were projects at Google which predate my entrance into graphs. Notably, a project called Squared was built out of the NY office, and there was some talk of Knowledge Cards. These were sporadic efforts by individuals/small teams, but at this time there was no established chain of command in place, something that would eventually cause me to leave Google. But we'll get to that later.

The Metaweb Story

As mentioned before, Google acquired Metaweb in 2010. Metaweb had built a high-quality knowledge graph using multiple techniques, including crawling and parsing Wikipedia, and using a Wikipedia-like crowd-sourced curation run via Freebase. All of this was powered by a graph database they had built in-house called Graphd – a graph daemon (now published on GitHub).

Graphd had some pretty typical properties. Like a daemon, it ran on a single server and all the data was in memory. The entire Freebase website was being run out of Graphd. After the acquisition, one of the challenges Google had was to continue to run Freebase.

Google has built an empire on commodity hardware and distributed software. A single server database could have never housed the crawling, indexing and serving for Search. Google built SSTable and then Bigtable, which could scale horizontally to hundreds or thousands of machines, working together to serve petabytes of data. Machines were allocated using Borg (K8s came out of that experience), communicated using Stubby (gRPC came out of that), resolved IP addresses via Borg name service (BNS, baked into K8s), and housed their data on Google File System (GFS, think Hadoop FS). Processes can die and machines can crash, but the system just keeps humming.

That was the environment in which Graphd found itself. The idea of a database serving an entire website running on a single server was alien to Google (myself included). In particular, Graphd needed 64GB or more memory just to function. If you are sneering at the memory requirement, note that this was back in 2010. Most Google servers were maxed at 32GB. In fact, Google had to procure special machines with enough RAM to serve Graphd as it stood.

Replacement for Graphd

Ideas were thrown around about how Graphd could be moved or rewritten to work in a distributed way. But see, graphs are hard. They are not key-value databases, where one can just take a chunk of data, move it to another server and when asked for the key, serve it. Graphs promise efficient joins and traversals, which require software to be built in a particular way.

One of the ideas was to use a project called MindMeld (IIRC). The promise was that memory from another server could be accessed much faster via network hardware. This was supposedly faster than doing normal RPCs, fast enough to pseudo-replicate direct memory accesses required by an in-memory database. That idea didn't go very far.

Another idea, which actually became a project, was to build a truly distributed graph serving system: something that could not only replace Graphd for Freebase but also serve all knowledge efforts in the future. It was named Dgraph (distributed graph), a spin on Graphd (graph daemon).

If you are now wondering, the answer is yes: Dgraph Labs, the company, and Dgraph, the open source project, are named after this project at Google.

For the majority of this blog post, when I refer to Dgraph, I am referring to the project internal to Google, not the open source project that we built. But more on that later.

The Story of Cerebro: A Knowledge Engine

Inadvertently building a graph serving system.

While I was generally aware of the Dgraph effort to replace Graphd, my goal was to build something to improve web search. I came across a research engineer, DH, at Metaweb, who had built Cubed.

As I mentioned before, a ragtag group of engineers at Google NY had built Google Squared. Going one up on them, DH built Cubed. While Squared was a dud, Cubed was very impressive. I started thinking about how I could build that at Google. Google had bits and pieces of the puzzle that I could readily use.

The first piece of the puzzle was a search project which provided a way to understand which words belonged together with a high degree of accuracy. For example, when you see a phrase like [tom hanks movies], it can tell you that [tom] and [hanks] belong together. Similarly, from [san francisco weather], [san] and [francisco] belong together. These are obvious things for humans, not so obvious to machines.

The second piece of the puzzle was to understand grammar. When a query asks for [books by french authors], a machine could interpret it as [books] by [french authors] (i.e. books by those authors who are French). But it could also interpret it as [french books] by [authors] (i.e. French-language books by any author). I used Stanford's Part-Of-Speech (POS) tagger to understand the grammar better and build a tree.

The third piece of the puzzle was to understand entities. [french] can mean many things. It could be the country (the region), the nationality (referring to French people), the cuisine (referring to the food), or the language. There was another project I could use to get a list of entities that the word or phrase could correspond to.

The fourth piece of the puzzle was to understand the relationship between entities. Now that I knew how to associate words into phrases, the order in which phrases should be executed, i.e. their grammar, and what entities they could correspond to, I needed a way to find relationships between these entities to create machine interpretations. For example, a query asks for [books by french authors] and POS tells us it is [books] by [french authors]. We have a few entities for [french] and a few for [authors], and the algorithm needs to determine how they are connected. They could be connected by birth, i.e. authors who were born in France (but could be writing in English), or authors who are French nationals, or authors who speak or write French (but could be unrelated to France, the country), or authors who just enjoy French food.

Search Index Based Graph System

To determine if and how the entities are connected, I needed a graph system. Graphd was never going to scale to Google levels, but I understood web search. Knowledge Graph data was formatted in triples, i.e. every fact was represented by three pieces: subject (entity), predicate (relationship), and object (another entity). Queries had to go from [S P] → [O], from [P O] → [S], and sometimes from [S O] → [P].

I used Google's search index system, assigned a docid to each triple and built three indices, one each for S, P and O. In addition, the index allowed attachments, so I attached the type information about each entity (i.e. actor, book, person, etc).
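As a toy illustration of this layout, here is a hypothetical Python sketch of a triple store with a docid per triple and one index per position. The class, names, and in-memory sets are all invented for illustration; this is not the actual search-index machinery.

```python
# Hypothetical sketch: triples get a docid and are indexed three ways,
# once each for subject (S), predicate (P), and object (O).
from collections import defaultdict

class TripleIndex:
    def __init__(self):
        self.triples = []             # docid -> (s, p, o)
        self.by_s = defaultdict(set)  # subject   -> docids
        self.by_p = defaultdict(set)  # predicate -> docids
        self.by_o = defaultdict(set)  # object    -> docids

    def add(self, s, p, o):
        docid = len(self.triples)
        self.triples.append((s, p, o))
        self.by_s[s].add(docid)
        self.by_p[p].add(docid)
        self.by_o[o].add(docid)

    def query(self, s=None, p=None, o=None):
        """Intersect the docid sets for whichever positions are bound."""
        bound = [(self.by_s, s), (self.by_p, p), (self.by_o, o)]
        sets = [index[key] for index, key in bound if key is not None]
        docids = set.intersection(*sets) if sets else set()
        return [self.triples[d] for d in sorted(docids)]

idx = TripleIndex()
idx.add("amelie", "directed_by", "jeunet")
idx.add("amelie", "starring", "tautou")
# [P O] -> S: what did Jeunet direct? Query with p and o bound.
print(idx.query(p="directed_by", o="jeunet"))
```

Attachments (the type information mentioned above) would simply be extra payload stored alongside each docid.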

I built this graph serving system, with the knowledge that it had the join depth problem (explained below) and was unsuited for any complex graph queries. In fact, when someone from the Metaweb team asked me to make it generally accessible to other teams, I flat-out rejected the idea.

Now, to determine relationships, I'd run queries to see how many results each yielded. Do [french] and [author] yield any results? Pick those results and see how they are connected to [books], and so on. This resulted in multiple machine interpretations of the query. For example, when you run [tom hanks movies], it would produce interpretations like [movies directed by tom hanks], [movies starring tom hanks], [movies produced by tom hanks], and automatically reject interpretations like [movies named tom hanks].

With each interpretation, it would generate a list of results – valid entities in the graph – and it would also return their types (present in attachments). This was extremely powerful because understanding the type of the results allowed capabilities like filtering, sorting or further expansion. For movie results, you could sort the movies by year of release, length of the movie (short, long), language, awards won, etc.

This project seemed so smart that we (DH was loosely involved as the knowledge graph expert) named it Cerebro, after the device of the same name in X-Men.

Cerebro would often reveal very interesting facts that one didn't originally search for. When you'd run a query like [us presidents], Cerebro would understand that presidents are humans, and humans have height. Therefore, it would allow you to sort presidents by height and show that Abraham Lincoln is the tallest US president. It would also let people be filtered by nationality. In this case, it showed America and Britain in the list, because US had one British president, namely George Washington. (Disclaimer: Results based on the state of KG at the time; can't vouch for the correctness of these results.)

Blue Links vs Knowledge

Cerebro had a chance to truly understand user queries. Provided we had data in the graph, we could generate machine interpretations of the query, generate the list of results and know a lot about the results to support further exploration. As explained above, once you understand you're dealing with movies or humans or books, etc., you can enable specific filtering and sorting abilities. You could also do edge traversals to show connected data, going from [us presidents] to [schools they went to], or [children they fathered]. This ability to jump from one list of results to another was demonstrated by DH in another project he'd built called Parallax.

Cerebro was very impressive and the leadership at Metaweb was supportive of it. Even the graph serving part of it was performant and functional. I referred to it as a Knowledge Engine (an upgrade from search engine). But there was no leadership for knowledge in place at Google. My manager had little interest in it, and after being told to talk to this person and that, I got a chance to show it to a very senior leader in search.

The response was not what I was hoping for. For the demo of [books by french authors], the senior search leader showed me the Google search results for the query, which showed ten blue links, and asserted that Google could do the same thing. He also said that Google did not want to take traffic away from websites, because then the owners would be pissed.

If you are thinking he was right, consider this: When Google does a web search, it does not truly understand the query. It looks for the right keywords, at the right relative position, the rank of the page, and so on. It is a very complex and extremely sophisticated system, but it does not truly understand either the query or the results. It is up to the user to read, parse, and extract the pieces of information they need from the results, and make further searches to put the full list of results together.

For example, for [books by french authors], first one needs to put together an exhaustive list which might not be available in a single web page. Then sort these books by year of publication, or filter by publication houses, etc. — all those would require a lot of link following, further searches, and human aggregation of results. Cerebro had the power to cut down on all that effort and make the user interaction simple and flawless.

However, this was the typical approach towards knowledge back then. Management wasn't sure of the utility of the knowledge graph, or how and in what capacity that should be involved with search. This new way of approaching knowledge was not easily digestible by an organization that had been so massively successful by providing users with links to web pages.

After butting heads with the management for a year, I eventually lost interest in continuing. A manager from the Google Shanghai office reached out to me, and I handed the project over to him in June 2011. He put a team of 15 engineers on the project. I spent a week in Shanghai transferring whatever I had built and learned to the engineers. DH was involved as well and he guided the team for the long term.

Join-Depth Problem

The graph serving system I had built for Cerebro had a join-depth problem. A join is executed when the result set from an earlier portion of the query is needed to execute a later portion of it. A typical join would involve some SELECT, i.e. filtering certain results from the universal data set, then using these results to filter against another portion of the dataset. I'll illustrate with an example.

Say, you want to know [people in SF who eat sushi]. Data is sharded by people, and has information about who lives in which city and what food they eat.

The above query is a single-level join. If an application external to a database was executing this, it would do one query to execute the first step. Then execute multiple queries (one query for each result), to figure out what each person eats, picking only those who eat sushi.

The second step suffers from a fan-out problem. If the first step has a million results (population of San Francisco), then the second step would need to put each result into a query, retrieving their eating habit, followed by a filter.

Distributed system engineers typically solve this by doing a broadcast. They would batch up the results, corresponding to their sharding function, and make a query to each server in the cluster. This would give them a join, but cause query latency issues.
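The fan-out can be sketched in a few lines of hypothetical Python. The data, helper names, and one-lookup-per-person shape are invented to mirror the description above, not taken from any real system:

```python
# Toy data: who lives where, and who eats what.
LIVES_IN = {"alice": "SF", "bob": "SF", "carol": "NY"}
EATS = {"alice": {"sushi"}, "bob": {"pizza"}, "carol": {"sushi"}}

def people_in(city):
    # Step 1: a single query for the first filter.
    return [p for p, c in LIVES_IN.items() if c == city]

def eats(person):
    # Step 2: one lookup per candidate -- this is the fan-out.
    return EATS.get(person, set())

def people_in_city_who_eat(city, food):
    candidates = people_in(city)                       # 1 query
    return [p for p in candidates if food in eats(p)]  # N more queries

print(people_in_city_who_eat("SF", "sushi"))  # ['alice']
```

With a million San Franciscans, that second step becomes a million lookups (or, batched by shard, a broadcast to every server holding part of the eating-habits data).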

Broadcasts in a distributed system are bad. This issue is best explained by Jeff Dean of Google in his "Achieving Rapid Response Times in Large Online Services" talk (video, slides). The overall latency of a query is always greater than the latency of the slowest component. Small blips on individual machines cause delays, and touching more machines per query increases the likelihood of delays dramatically.

Consider a server with 50th-%ile latency of 1ms, but 99th-%ile latency of 1s. If a query only touches one server, only 1% of requests take over a second. But if the query touches 100 of these servers, 63% of requests would take over a second.
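The arithmetic behind that 63% figure is just the probability that at least one of 100 independent servers hits its slow tail:

```python
# If 1% of requests to a single server take over a second, a query that
# must wait for all of 100 independent servers is slow whenever at least
# one of them is slow: 1 - 0.99^100.
p_slow_one = 0.01
n_servers = 100
p_slow_query = 1 - (1 - p_slow_one) ** n_servers
print(round(p_slow_query, 3))  # 0.634, i.e. ~63% of requests over a second
```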

Thus, broadcasts to execute one query are bad for query latency. Now consider if two, three or more joins need to happen. They would get too slow for real-time (OLTP) execution.

This high fan-out broadcast problem is shared by most non-native graph databases, including JanusGraph, Twitter's FlockDB, and Facebook's TAO.

Distributed joins are a hard problem. Existing native graph databases avoid this problem by keeping the universal dataset within one machine (standalone DB), and doing all the joins without touching other servers, e.g. Neo4j.

Enter Dgraph: An Arbitrary Depth Join Engine

After wrapping up Cerebro, and having experience building a graph serving system, I got involved with the Dgraph project, becoming one of the three tech leads on the project. The concepts involved in Dgraph design were novel and solved the join-depth problem.

In particular, Dgraph sharded the graph data in a way where each join can be executed entirely by one machine. Going back to subject-predicate-object (SPO), each instance of Dgraph would hold all the subjects and objects corresponding to each predicate in that instance. Multiple predicates would be stored on the instance, each predicate being stored in its entirety.

This allowed queries to execute arbitrary-depth joins while avoiding the fan-out broadcast problem. Taking the query [people in SF who eat sushi], this would result in at most two network calls within the database, irrespective of the size of the cluster. The first call would find all people who live in SF. The second call would send this list of people and intersect it with all the people who eat sushi. We could then add more constraints or expansions; each step would still involve at most one more network call.
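A toy sketch of predicate sharding, with invented data and a dict standing in for a network call to the shard that owns a predicate (this is a sketch of the idea, not Dgraph's actual implementation):

```python
# Each predicate lives in its entirety on one shard: object -> subjects.
SHARDS = {
    "lives_in": {"SF": {"alice", "bob"}, "NY": {"carol"}},
    "eats":     {"sushi": {"alice", "carol"}, "pizza": {"bob"}},
}

def call_shard(predicate, obj):
    """One 'network call': the shard owning `predicate` returns matching subjects."""
    return SHARDS[predicate].get(obj, set())

# [people in SF who eat sushi]: two calls total, no broadcast,
# regardless of how many machines hold other predicates.
in_sf = call_shard("lives_in", "SF")        # call 1
sushi_eaters = call_shard("eats", "sushi")  # call 2, intersected with call 1
print(sorted(in_sf & sushi_eaters))         # ['alice']
```

The key property is that each join step touches exactly one shard, because no predicate is scattered across machines.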

This introduces the problem of very large predicates located on a single server, but that can be solved by further splitting up a predicate among two or more instances as the size grows. Even then, a single predicate split across the entire cluster would be the worst-case behavior in only the most extreme cases where all the data corresponds to just one predicate. In the rest of the cases, the design of sharding data by predicates works better to achieve much faster query latency in real-world systems.

Sharding was not the only innovation in Dgraph. All objects were assigned integer IDs and were sorted and stored in a posting list structure to allow for quick intersections of posting lists. This would allow fast filtering during joins, finding common references, etc. Ideas from Google's web serving system were involved.
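The posting-list intersection can be sketched with the classic two-pointer merge over sorted integer IDs (a generic search-index technique, not Dgraph's actual code):

```python
def intersect(a, b):
    """Intersect two sorted posting lists of integer IDs in O(len(a) + len(b))."""
    i = j = 0
    out = []
    while i < len(a) and j < len(b):
        if a[i] == b[j]:
            out.append(a[i])
            i += 1
            j += 1
        elif a[i] < b[j]:
            i += 1
        else:
            j += 1
    return out

people_in_sf = [3, 7, 11, 42, 99]  # sorted UIDs (invented)
sushi_eaters = [2, 7, 42, 100]
print(intersect(people_in_sf, sushi_eaters))  # [7, 42]
```

Keeping the lists sorted is what makes join steps like the one above cheap: intersections are linear merges rather than hash probes over unordered data.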

Uniting OneBoxes via Plasma

Dgraph at Google was not a database. It was a serving system, the equivalent of Google's web search serving system. In addition, it was meant to react to live updates. As a real-time updating serving system, it needed a real-time graph indexing system. I had a lot of experience with real-time incremental indexing systems having worked on Caffeine.

I started a project to unite all Google OneBoxes under this graph indexing system, which involved weather, flights, events, and so on. You might not know the term, but you definitely have seen them. OneBox is a separate display box which gets shown when certain types of queries are run, where Google can return richer information. To see one in action, try [weather in sf].

Before this project, each OneBox was being run by standalone backends and maintained by different teams. There was a rich set of structured data, but no data was being shared between the boxes. Not only was maintaining all these backends a lot of work operationally but also the lack of knowledge share limited the kind of queries Google could respond to.

For example, [events in SF] can show the events, and [weather in SF] can show the weather. But if [events in SF] could understand that the weather is rainy and know whether events are indoors or outdoors, it can filter (or at least sort) the events based on weather (in a heavy rainstorm, maybe a movie or a symphony is the best option).

Along with help from the Metaweb team, we started converting all this data into the SPO format and indexing it under one system. I named the system Plasma, a real-time graph indexing system for the graph serving system, Dgraph.

Management Shuffle

Like Cerebro, Plasma was an under-funded project but kept on gaining steam. Eventually, when management came to realize that OneBoxes were imminently moving to this project, they needed the "right people" in charge of Knowledge. In the midst of that game of politics, I saw three different management changes, each with zero prior experience with graphs.

During this shuffle, Dgraph was considered too complex by management who supported Spanner, a globally distributed SQL database which needs GPS clocks to ensure global consistency. The irony of this is still mind-boggling.

Dgraph got canceled, Plasma survived. And a new team under a new leader was put in charge, with a hierarchy in place reporting to the CEO. The new team – with little understanding or knowledge of graph issues – decided to build a serving system based on Google's existing search index (like I had done for Cerebro). I suggested using the system I had already built for Cerebro, but that was rejected. I changed Plasma to crawl and expand each knowledge topic out a few levels, so this system could treat it like a web document. They called it TS (name abbreviated).

This meant that the new serving system would not be able to do any deep joins. A curse of a decision that I see in many companies: engineers start with the wrong idea that "graphs are a simple problem that can be solved by just building a layer on top of another system."

After a few more months, I left Google in May 2013, having worked on Dgraph/Plasma for just about two years.

After Story

  • A few years later, Web Search Infrastructure was renamed to Web Search and Knowledge Graph Infrastructure and the leader to whom I had demoed Cerebro came to lead the Knowledge effort, talking about how they intend to replace blue links with knowledge — directly answering user queries as much as possible.

  • When the team in Shanghai working on Cerebro was close to getting it into production, the project got pulled from under their feet and moved to the Google NY office. Eventually, it got launched as Knowledge Strip. If you search for [tom hanks movies], you'll see it at the top. It has sort of improved since the initial launch, but still does not support the level of filtering and sorting offered by Cerebro.

  • All the three tech leads (including me) who worked on Dgraph eventually left Google. To the best of my knowledge, the other two leads are now working at Microsoft and LinkedIn.

  • I did manage to get the two promotions and was due for a third one when I left Google as a Senior Software Engineer.

  • Based on some anecdotal knowledge, the current version of TS is actually very close to Cerebro's graph system design, with subject, predicate and object each having an index. It, therefore, continues to suffer from the join-depth problem.

  • Plasma has since been rewritten and renamed, but still continues to act as a real-time graph indexing system, supporting TS. Together, they continue to host and serve all structured data at Google, including Knowledge Graph.

  • Google's inability to do deep joins is visible in many places. For one, we still don't see a marriage of the various data feeds of OneBoxes: [cities by most rain in asia] does not produce a list of city entities despite that weather and KG data are readily available (instead, the result is a quote from a web page); [events in SF] cannot be filtered based on weather; [US presidents] results cannot be further sorted, filtered, or expanded to their children or schools they attended. I suspect this was also one of the reasons to discontinue Freebase.

Dgraph: A Phoenix

Two years after leaving Google, I decided to build Dgraph. Outside of Google, I witnessed the same indecision about graph systems as I did inside. The graph space had a lot of half-baked solutions, in particular, a lot of custom solutions, hastily put together on top of relational or NoSQL databases, or as one of the many features of multi-model databases. If a native solution existed, it suffered from scalability issues.

Nothing I saw had a coherent story with a performant, scalable design at heart. Building a horizontally scalable, low-latency graph database with arbitrary-depth joins is an extremely hard problem and I wanted to ensure that we built Dgraph the right way.

The Dgraph team has spent the last three years — not only learning from my own experience but also putting a lot of original research into the design — building a graph database unparalleled in the market, so companies have the choice of using a robust, scalable, and performant solution instead of yet another half-baked one.

Thanks for reading this post. If you liked this post, show us some love and give us a star on GitHub.


P.S. Special thanks to Daniel Mai for helping put together the graphics in the blog post.




All Comments: [-] | anchor

mark_l_watson(2554) about 8 hours ago [-]

Thanks for writing this up! I worked with the Knowledge Graph as a contractor at Google in 2013. My manager had a neat idea for adding our own Schema and triples (actually quads) for a specific application.

It surprises me how many large companies do not have a 'knowledge graph strategy' while everyone is on board with machine learning (which is what I currently do, managing a machine learning team). I would argue that a high capacity, low query latency Knowledge Graph should be core infrastructure for most large companies and knowledge graphs and machine learning are complementary.

taherchhabra(3999) about 6 hours ago [-]

Sounds like a good Enterprise saas product idea

mrjn(3690) about 3 hours ago [-]

I agree. And I saw how averse companies were to graph databases because of the perception that they are 'not reliable.' So, we built Dgraph with the same concepts as Bigtable and Google Spanner, i.e. horizontal scalability, synchronous replication, ACID transactions, etc.

Once built, we engaged Kyle and got Jepsen testing done on Dgraph. In fact, Dgraph is the first graph database to be Jepsen tested. http://jepsen.io/analyses/dgraph-1-0-2 (all pending issues are now resolved).

Dgraph is now being used as a primary DB in many companies in production (including Fortune 500), which to me is an incredible milestone.

chao-(10000) about 11 hours ago [-]

This a really neat historical perspective, although I do find one detail odd:

>If you are sneering at the memory requirement, note that this was back in 2010. Most Google servers were maxed at 32GB.

As I recall, Nehalem EP (launched 2008 or 2009?) could handle in excess of 100GB per socket? Not cheap necessarily, but definitely still counted as 'commodity hardware' for that era. I say this recalling that even my mid-tier workstation from then could handle 48GB (in that wonky era of triple-channel RAM), though I only had it loaded with 12GB. Then again I could see, if said servers in 2010 were at the end of a purchasing cycle from 2007 or so, that they were 'maxed' at 32GB?

Anyway, my from-memory nitpick doesn't detract from the article's ultimate point, though: distribution was an obvious need that would only become more pressing.

orestes910(10000) about 11 hours ago [-]

My first guess would be that power consumption drove those decisions.

shereadsthenews(10000) about 6 hours ago [-]

The hardware replacement cycle is much longer than you've implied. Servers I am building today will still be TCO-positive for 15 years. In 2010 Google data centers would still have been full of their first generation dual-socket Opteron machine.

mrjn(3690) about 4 hours ago [-]

Hi all, author of Dgraph and the article here.

Glad to see this post trending on HN. I'm around if you want to ask any questions, and will share whatever I can about my experience with graphs and the decisions at Google.

Since leaving Google, I've built Dgraph [1]. It is open source, designed for horizontal scalability, ACID transactions and provides incredible performance for both reads and writes. And of course, it solves the join-depth problem as explained in the post. So, do check it out!

And do remember to give us a star on GitHub [1]. Your love helps other open source users give Dgraph a real shot in their companies with similar leadership struggles as I experienced at Google.

[1]: https://github.com/dgraph-io/dgraph

puzzle(3943) about 2 hours ago [-]

Yours is the first public mention of MindMeld I've seen. There were a bunch of other related projects. Some of that experience eventually percolated to Tensorflow.

Anyway, good luck with Dgraph! It looks very useful.

v4r(10000) about 1 hour ago [-]

What is your opinion on network representation learning? Can it be used in Dgraph? For instance, do you think it is possible to use node embeddings as indices for faster retrieval?

ryanworl(3959) 27 minutes ago [-]

To summarize the join-depth problem as you describe, it is because other solutions use hash partitioning instead of range partitioning to distribute data?

i.e. if all S/P/O were stored in sorted order such that a given S/P/O were on one machine, you no longer have to do broadcasts and can instead issue queries to just the machines holding that range of the keyspace. Since most 'scalable' systems use hash partitioning, they have to broadcast, whereas dgraph uses something more like range partitioning.

Or is this too simplistic of an explanation?

Game_Ender(10000) about 4 hours ago [-]

I have some experience with Bazel, the open source version of Blaze, and I noticed that its build graph implementation appears to be very memory hungry as well, such that a few million lines of code already use 10-30GB of memory. Like you mentioned, it requires the entire build graph to fit in memory on one machine.

Do you know what sort of graph solution Blaze uses to handle 2 orders of magnitude more code, ie. manage 100's of millions of lines of code? I always assumed it was a distributed graph database, but your article seems to indicate something else.

throwawaygoog10(10000) about 2 hours ago [-]

I'm sorry your efforts failed internally. Our infrastructure is somewhat ossified these days: the new and exotic are not well accepted. Other than Spanner (which is still working to replace Bigtable), I can't think of a ton of really novel infrastructure that exists now and didn't when you were around. I mean, we don't even do a lot of generic distributed graph processing anymore. Pregel is dead, long live processing everything on a single task with lots of RAM.

I suspect your project would have been really powerful had you gotten the support you needed, but without a (6) or a (7) next to your name it's really hard to convince people of that. I know a number of PAs that would benefit now from structuring their problems in a graph store with arbitrary-depth joins and transactions. I work on one of those and cringe at some of the solutions we've made.

We've forgotten what it's like to need novel solutions to performance problems. Instead, we have services we just throw more RAM at. Ganpati is over 128GB (it might even be more) of RAM now, I suspect a solution like dgraph could solve its problems much more efficiently not to mention scalably.

Good on you for taking your ideas to market. I'm excited to see how your solution evolves.

sandGorgon(896) about 13 hours ago [-]

There is also Janusgraph which Google contributes to http://janusgraph.org

ddorian43(3470) about 9 hours ago [-]

If you don't do datastructures right, then it will suck on performance.

Imagine doing a search-engine in a rdbms without special datastructures.

geuszb(10000) about 9 hours ago [-]

Insightful article / ad. Maybe the problem with the idea is that in practice not many users want to do joins at arbitrary depths, e.g. 'sort all the children of US presidents by height' is probably not a very common query needing a massively distributed architecture?

yorwba(3585) about 7 hours ago [-]

I suspect that although any given query that needs a join is rare, there is a long tail of many different such queries. Just having more than one predicate is enough, e.g. 'weather in <city> during <event>' or any of the queries in footnote 2 of the article.

mrjn(3690) about 3 hours ago [-]

Consider [sort all comedy movies by rating]. Such queries happen on a daily basis on movie sites like IMDB or Rotten Tomatoes.

The only way to avoid joins is when you specialize your data to a particular vertical. Therefore, your flat tables are then built to serve say movie data, and can avoid some joins.

But if you're building something spanning multiple verticals, like Knowledge Graph, which houses movie data, celebrity data, music data, events, weather, flights, etc., then building flat tables specific to each vertical's properties is almost impossible.

pacala(4012) about 2 hours ago [-]

'Travel from NY to LA over the weekend' was probably not a very common demand needing a massive investment in air travel infrastructure and equipment circa 1900. User demand is bounded by the capability of the tools commonly available.

combatentropy(4015) about 3 hours ago [-]

This reminded me of SQL.

> Say, you want to know [people in SF who eat sushi]....If an application external to a database was executing this, it would do one query to execute the first step. Then execute multiple queries (one query for each result), to figure out what each person eats, picking only those who eat sushi.

A query like that in SQL could also suffer from 'a fan-out problem' and could get slow. It's often faster to put subqueries in the From clause than the Select. It's certainly faster than an app taking the rows of one query and sending new queries for each row, as many developers do. For example:

  select
      p.name,
      (
          select max(visit_date)
          from visits v
          where v.person = p.id
      ) as last_visit
  from people p
  where p.born < '1960-01-01'
can be slower than:

  select p.name, v.last_visit
  from people p
      join (
          select person, max(visit_date) as last_visit
          from visits
          group by person
      ) v on p.id = v.person
  where p.born < '1960-01-01' 
In the second example, you first form a new table through a subquery of the original. This is not what a new programmer would first try. The first example, with the subquery in the Select clause, is closer to the train of thought. Also you would guess that getting the last visit dates of each person is more efficient after you know who to look for (like, only the people born before 1960). But in my experience, it often hasn't been.

Therefore likewise with this San Francisco sushi query, I was thinking that if it were SQL then I would (1) get all people in San Francisco, (2) get all people who like sushi, and then (3) join them, to find their intersection. Lo and behold, I then read that it is the same solution in this humongous graph database:

> The concepts involved in Dgraph design were novel and solved the join-depth problem.... The first call would find all people who live in SF. The second call would send this list of people and intersect with all the people who eat sushi.

mrjn(3690) about 2 hours ago [-]

Now consider doing this in a distributed setting, with SQL tables split across dozens or hundreds of machines (see Facebook TAO).

wiradikusuma(1998) about 10 hours ago [-]

OOT, but the illustrations on their home page ( https://dgraph.io/ ) are veeery cute, you should check it out.

saagarjha(10000) about 7 hours ago [-]

Is that a skunk? Kind of an interesting choice...

mrjn(3690) about 3 hours ago [-]

We recently made this image, inspired by my favorite movie. It's licensed under CC-by-ND.

https://twitter.com/dgraphlabs/status/1088516069852012544

Feel free to use it! :-)

pvg(4032) about 2 hours ago [-]

It makes you wonder how the product handles mushroom and more importantly, snake.

Tharkun(3764) about 7 hours ago [-]

Slightly off topic, but while we're on the subject of graph databases: could anyone point me at some useful introductions to the subject? Thank you.

amirouche(2751) about 6 hours ago [-]

Look at my Python graph database 0.8.1 is the last release with the vanilla graphdb API, after that it only expose a triple store https://github.com/amirouche/hoply/tree/fa77d31757c835688098...

mrjn(3690) about 3 hours ago [-]

I'd suggest starting with https://docs.dgraph.io/get-started/, and then going to https://tour.dgraph.io. It gives you all you need to understand the query language (based on GraphQL) and get you up to speed building your first app on a graph DB.

agumonkey(925) about 6 hours ago [-]
bufferoverflow(3816) about 5 hours ago [-]

Thank you. That page freezes Chrome, ironically.

gibsonf1(460) about 6 hours ago [-]

For the scalability side of graph, we've just started using Amazon Neptune RDF, and have been amazed that we can easily and very quickly run sparql queries on 2.6 billion triples on their smallest 2 core 15 gig machine. Incredible capacity.

staticassertion(10000) 12 minutes ago [-]

Where Neptune appears to fall down is write performance. This is what made it non-viable for me. I have colleagues who are struggling / hacking around the write performance issues.

DGraph's write perf seems to be considerably better - I haven't benchmarked formally, just going off of discussions I've had with others.

mrjn(3690) about 3 hours ago [-]

As others mention, Neptune is based on Blazegraph. It is a layer on top of Amazon Aurora and has the typical graph layer issues I mention in my blog post (in particular the join-depth problem).

When Neptune was launched, I wrote another article critiquing its design. Worth a read: https://blog.dgraph.io/post/neptune/

macawfish(3908) about 6 hours ago [-]

FYI, Neptune is based on Blazegraph. Amazon's acquisition of Blazegraph halted the database's open development. I'm sure they would welcome interested contributors: https://github.com/blazegraph/database/issues/86

taherchhabra(3999) about 6 hours ago [-]

We are using AWS Neptune in building our marketing analytics product. What I have realized is that it has the best of both worlds: the querying power of SQL databases and the schemaless flexibility of NoSQL databases.

macawfish(3908) about 6 hours ago [-]

Whenever I read mention of Neptune, I feel obligated to mention Blazegraph, which Amazon based Neptune on.

As I've shamelessly advertised elsewhere in these comments, Blazegraph is in the early stages of a fork and/or reboot, as Amazon's acquisition severely hampered progress on the open source project.

https://github.com/blazegraph/database/issues/86





Historical Discussions: Show HN: I implemented Pong as a cellular automaton (February 13, 2019: 225 points)

(225) Show HN: I implemented Pong as a cellular automaton

225 points 3 days ago by uranium in 10000th position

ericu.github.io | comments | anchor

Left player: human or AI

Right player: human or AI

Keyboard controls:

Left player: w/s

Right player: arrows

Pause/continue: space




All Comments: [-] | anchor

uranium(10000) 3 days ago [-]

I'd been explaining sonic booms and high explosives to my kids, and thought that showing them a simulation of wavefront propagation might be a fun way to illustrate how they worked. I figured I could do that with a simple cellular automaton, so I threw something together in a few hours, but my sim didn't have anything like the right behavior. So there I was with a general cellular automaton framework, thinking about wave propagation, and wondering what else I could do with it. Inspiration struck, and I just had to try doing this.

basementcat(10000) 3 days ago [-]

Reminded me of a problem set I was once assigned. https://en.wikipedia.org/wiki/Firing_squad_synchronization_p...

RosanaAnaDana(10000) 3 days ago [-]

I wish I could give you an award for this. I've been working on developing cellular automata for solving sudoku, but you're lightyears beyond what I've so far been able to do.

sansnomme(10000) 3 days ago [-]

Very impressive! Congrats on shipping!

baddox(3975) 2 days ago [-]

This is awesome. Now I am wondering if you could do effectively the same thing in Conway's game of life, where if you zoomed out far enough you would see the game's graphics.

Cyphase(4030) 2 days ago [-]

You mean something like this?

If you don't want to be 'spoiled' as to what exactly the video is going to show as it zooms out, use this link, and don't pay attention to the browser tab title, or the video title in the top left corner, until it disappears in about 3 seconds along with the rest of the UI:

https://www.youtube.com/embed/xP5-iIeKXE8?autoplay=1 [1:29]

Canonical link:

https://www.youtube.com/watch?v=xP5-iIeKXE8 [1:29]

himlion(3925) 2 days ago [-]

If you zoom out far enough you can do anything, I recall somebody made a whole system with pixels to display arbitrary data.

sgentle(3189) 2 days ago [-]

This is super cool! By any chance have you seen Dave Ackley's Movable Feast Machine and related projects? He's working on some similar ideas – cellular automata with different classes of cell that pass local state around to build large-scale systems.

Here's 'demon horde sort', which is a kind of stochastic self-healing bubble-sorting automaton: https://www.youtube.com/watch?v=lbgzXndaNKk

uranium(10000) 2 days ago [-]

I hadn't seen that; thanks for the pointer.

imode(4032) 3 days ago [-]

This is remarkably beautiful. Could you, perhaps, give some detail as to how it functions? I see wave propagation, but I'm interested in the state dynamics of a particular cell.

uranium(10000) 3 days ago [-]

There's some general background at https://github.com/ericu/CellCulTuring/blob/master/README.md but basically I've got a number of different general classes of cell [background, counter, wall, ball, paddle, etc.] and each knows the kinds of situations that will cause it to change into something else. Within each class of cell, there's further state, so e.g. the ball's a different color when it's moving up and to the right than when it's moving down and to the left. Each cell is basically its own state machine, taking its state and the states of its neighbors into account, and progressing through the substates of its class until it turns into something else.

Say a blank background cell sees a ball cell to its left. It looks at the precise color of the ball to determine that the ball's moving to the right, so the next cycle, the background cell must become a ball cell with the same motion. It's a bit more complicated than that because balls only move every other cycle, they've got internal counters [colors] to deal with moving diagonally, bouncing off walls, hitting the paddle, etc., but that's basically it.

The actual state machine code takes thousands of lines of JavaScript to express.
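The neighbor rule described above can be sketched roughly like this: a toy one-dimensional version with made-up state names, ignoring the every-other-cycle timing and the author's color encoding:

```javascript
// Toy 1-D version of the rule described above (hypothetical state names, not
// the author's color encoding; real balls in the game move every other cycle).
const BG = 'background';
const BALL_RIGHT = 'ball-right';

// Each cell's next state depends only on its own state and its left neighbor.
function nextState(cell, left) {
  if (cell === BG && left === BALL_RIGHT) return BALL_RIGHT; // ball arrives
  if (cell === BALL_RIGHT) return BG;                        // ball moves on
  return cell;
}

// One synchronous update of the whole row, as a cellular automaton does.
function step(row) {
  return row.map((cell, i) => nextState(cell, i > 0 ? row[i - 1] : BG));
}

let row = [BALL_RIGHT, BG, BG];
row = step(row); // the ball advances one cell to the right
console.log(row);
```

Every cell applies the same function simultaneously, which is what makes it a cellular automaton rather than ordinary game code that moves a ball object around.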




(193) Google .dev domain early access

193 points about 5 hours ago by jonseitz in 10000th position

domains.google | comments | anchor

The Early Access Fee is a one-time payment to secure your desired .dev domain early. From February 19th at 8:00am PST to February 28th at 7:59am PST, you can get a .dev domain before General Availability for an additional fee (this fee decreases the closer we get to General Availability). During General Availability, starting February 28th at 8:00am PST, .dev domains will be available without an Early Access Fee.

During both the Early Access Program and General Availability, there is a $12/year cost for .dev domains. Annual fees may vary for Premium domains.




All Comments: [-] | anchor

40four(10000) about 4 hours ago [-]

I don't know how this stuff works; what gives Google the ability to sell these domains early? Also, why is everyone cursing Google for taking their local .dev? Google is not the only place you can buy these. Namecheap has .dev in the works, and GoDaddy is taking pre-orders as well (I didn't keep searching, but I'm sure it's available at many other registrars), yet nobody is cursing and shaking their fist in the air at those services. So what does Google have to do with it? Why are we all holding them responsible?

bhartzer(2172) about 4 hours ago [-]

Google is the registry and not the registrar. That's why they are ultimately responsible.

nisuni(10000) about 5 hours ago [-]

How much did it cost to buy the TLD? Looks like a very good investment that will repay itself quite easily.

kowdermeister(2387) about 5 hours ago [-]

I guess the usual ~$200k.

outime(10000) about 5 hours ago [-]

>The evaluation fee is US$185,000. Applicants will be required to pay a US$5,000 deposit fee per requested application slot when registering. The deposit will be credited against the evaluation fee. Other fees may apply depending on the specific application path. See the section 1.5 of the Applicant Guidebook for details about the methods of payment, additional fees and refund schedules.

Section 2.2 https://newgtlds.icann.org/en/applicants/global-support/faqs...

barbarr(10000) about 5 hours ago [-]

There's a nonzero chance that the investment could be a dud if people simply don't like the TLD, the same way how no one (except a few people [1]) like the .bike TLD.

[1] http://poop.bike

snek(3963) about 5 hours ago [-]

It costs USD$185,000 to submit a TLD application, and there are probably additional costs beyond that.

0x00000000(4032) about 4 hours ago [-]

I'm surprised there is no site to crowdfund TLDs by taking a deposit from people in order to reserve a name

c0llision(10000) about 5 hours ago [-]

Damn, now I have to change my hosts file, I've been using .dev for local development

pluc(3545) about 5 hours ago [-]

It literally happened to me yesterday, setting up a VM and aliasing it to a .dev domain for local testing. Took me an hour to figure out what was wrong =/

tedmiston(3519) about 3 hours ago [-]

I doubt they'll buy .local.

https://security.stackexchange.com/questions/14802/if-someon...

But generally speaking a subdomain on a domain you really own seems like a better idea.

https://serverfault.com/questions/17255/top-level-domain-dom...

sago(4023) about 2 hours ago [-]

This is common, but sadly has always been incorrect.

.test is the TLD you want, specified by RFC2606.

everdev(3005) about 3 hours ago [-]

localhost.dev should be fun

adtac(2744) 12 minutes ago [-]

If someone ends up buying it, please set your DNS records to point to something in 127.0.0.0/8

johnchristopher(3572) about 2 hours ago [-]

I would have thought localhost as a second level domain name would be reserved but it seems like it's not.

AznHisoka(3581) about 5 hours ago [-]

Would it be wise to hoard a bunch of brand name domains in hopes they become valuable?

ocdtrekkie(2602) about 5 hours ago [-]

No, because doing so is against ICANN rules, so the maximum value of a brand domain is the cost to open an ICANN dispute.

bhartzer(2172) about 5 hours ago [-]

Most likely the registry won't let you buy trademarked domains if you can't prove you have the trademark. There is a period set aside for tm holders.

I would stay far away from any brand or tm domains at any time. You will eventually lose the domain through a udrp. And you could likely get sued as well.

demarq(10000) about 4 hours ago [-]

Google is such a big company, I find it bizarre they are trying to make money in this way.

It would be interesting to know what they make at the end of the pre-sale.

gruez(3533) about 4 hours ago [-]

>I find it bizarre they are trying to make money in this way.

Why not? If they charged a flat rate at launch, domain speculators would snatch up all the valuable domains and you'll have to buy it from them at an inflated price. At least with this system google is pocketing the premium rather than third parties.

ameliaquining(10000) about 3 hours ago [-]

My guess would be that this isn't primarily about the money (there are probably domains they could get more than $11,500 for), but rather about allocating the most in-demand domains in a way that reflects how badly people want them. A Dutch-auction-type thing like they're doing is a crude solution to this problem, but if they just went straight to GA too much valuable real estate would immediately be claimed by squatters and trolls.

barbarr(10000) about 5 hours ago [-]

Just to make sure, you can't get multiple domains with early access, right?

bhartzer(2172) about 5 hours ago [-]

You can get multiple domains if you can justify it.

tedmiston(3519) about 3 hours ago [-]

> The Early Access Fee is a one-time payment to secure your desired .dev domain

That sounds like one fee per domain to me

sascha_sl(10000) about 5 hours ago [-]

Besides the usual disregard for decency, let's HSTS-preload a TLD people have been using for local development for decades...

This is terrible gatekeeping and I hope Google perishes soon. Wow. I can't get over how fucking dumb this is.

This is the mark of a corporation filled with people who think putting an early price tag on a supposedly scarce good will deter bad actors. Clearly everyone who thought this up has way too much money in their account.

ocdtrekkie(2602) about 5 hours ago [-]

Yes, this, though I really am disappointed in ICANN here. ICANN should not have sold a TLD known to be in heavy use already. Their desire for cash from this process seems to have overwhelmed their good sense in managing the domain system.

fastball(3950) about 5 hours ago [-]

If you've been using .dev for local development, that means you've either been setting it in your hosts or using your own DNS server until now, which you can continue to do with a .dev TLD.

taf2(3563) about 5 hours ago [-]

Wow, we migrated our local development off of .dev 5 or 6 years ago; just buy a real domain and set its DNS to localhost.

willmadden(10000) about 2 hours ago [-]

Alphabet has too much cash. They have a well established track record of enthusiastically backing exciting new projects way outside of their core competency just to dump them like hot garbage several years later.

They also compete in random new industries each time this happens.

It doesn't seem like a smart move to lease a domain from a politically active mega-monopoly that might decide to randomly become your competitor in 2 years.

tehlike(3800) 15 minutes ago [-]

To be honest, the TLD business is probably not one of them. Even if Google becomes your competitor, they won't touch the domain.

rmoriz(1771) about 2 hours ago [-]

It's risk management through diversification attempts. Google, like any other large scale extremely successful single product[1] company, hedges market risks in occupying 'hot' spaces. Same with MS, Apple and all other giants, even in non-tech.

[1] see the financial report for the last year (Form 10-K). Search for 'We operate our business in multiple operating segments. Google is our only reportable segment. None of our other segments meet the quantitative thresholds to qualify as reportable segments' and 'How we make money' (Source: https://www.sec.gov/Archives/edgar/data/1652044/000165204419... )

HillaryBriss(2125) about 1 hour ago [-]

> ... might decide to randomly become your competitor in 2 years

think of the bright side. they might be willing to buy your neato .dev domain name back ... for $12.

ghobs91(3813) 9 minutes ago [-]

If I had to bet, the majority of .dev pages will be engineers posting their portfolio, not startups building developer tools.

benologist(986) about 2 hours ago [-]

I think the risk isn't that they become your competitor, it's that an algorithm flags your website for perceived abuse and that cascades down to you and your workplace being banned forever from Google, with no recourse because they choose to provide fake support despite having $100+ billion in savings and plenty of funding for, e.g., global tax evasion.

skybrian(1831) 37 minutes ago [-]

You made a fully general argument against anything that Google touches.

This isn't useful, because everyone knows that argument already. I'd rather know what Google's track record is specifically having to do with DNS (or fundamental Internet infrastructure).

CydeWeys(3866) 17 minutes ago [-]

Hi. I'm the Tech Lead of Google Registry, the team that is launching .dev (not to be confused with the linked Google Domains, which is one of many registrars selling .dev domains to end users).

You'll be glad to know that TLDs can't simply be discontinued like other products might be. ICANN doesn't allow it. The procedures in place preventing a live TLD from shutting down are called EBERO; more details here: https://www.icann.org/resources/pages/ebero-2013-04-02-en

The way it works is that all registries must send daily full backups to a third-party escrow provider, which are then used to restore the TLD under a different operator if the original operator shuts down unexpectedly. This is not some theoretical backup/restore procedure that goes untested; it's been used in the past, e.g. with .wed: https://www.icann.org/news/announcement-2017-12-08-en

But this typically only happens when the registry operator goes abruptly bankrupt, and is thus quite rare. Many, many widely used TLDs have been seamlessly sold/transferred across registry operators without you ever realizing it, including .io last year. That would be the 'worst' you would expect from TLDs launched by large established players like Google. You actually get a lot more protections with gTLDs than you do with ccTLDs (such as .io), as ccTLDs aren't bound by contract with ICANN and thus aren't forced to do EBERO, or anything else for that matter.

pluc(3545) about 5 hours ago [-]

The .dev TLD worked fine before it was bought. Now we have to use .test or .local.

I don't get what Google thinks they'll get out of sponsoring putting sites under development online.

fastball(3950) about 5 hours ago [-]

Why do you need to use .test or .local?

Presumably you were already editing your hosts file or running your own DNS server in order to make .dev resolve for local development, which you can continue to do?

jtreminio(3896) about 5 hours ago [-]

If you've any developers on your team that use MacOS, avoid .local since it does some listening on this for Bonjour: https://blog.scottlowe.org/2006/01/04/mac-os-x-and-local-dom...

I believe .localhost is the 'official' recommended TLD for local development.

solatic(10000) about 3 hours ago [-]

Why do you have to use either one of those?

If you're already running your own internal DNS servers (to serve .dev, .test, etc.) , then just buy a domain for your org for internal use (e.g. '<mycompany>-internal.<tld>' or '<mycompany>-private.<tld>', or if your company is '<mycompany>.com' then purchasing '<mycompany>.net' or similar), split-horizon so that queries from the Internet direct to some CDN-hosted static page saying 'nothing to see here, internal use only, if you are an employee please VPN in' and internally you find the actual services.

You never run the danger of your internal domain being unroutable (since you indisputably own it), none of the stuff on subdomains of your internal domain are internet-discoverable (since none of the internal services are exposed externally), you retain the flexibility of eventually making internal services Internet-routable when you get around to building out a BeyondCorp model (if you ever do), and it probably costs a negligible <$10/year in registration fees.
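The split-horizon setup described above can be sketched in BIND's `named.conf` (hypothetical zone names and client ranges; other resolvers like Unbound or PowerDNS have equivalents):

```
// named.conf sketch (hypothetical names; assumes BIND with views)
view "internal" {
    match-clients { 10.0.0.0/8; 192.168.0.0/16; };  // VPN/office ranges
    zone "mycompany-internal.net" {
        type master;
        file "zones/internal.db";   // real service records
    };
};
view "external" {
    match-clients { any; };
    zone "mycompany-internal.net" {
        type master;
        file "zones/external.db";   // just the "employees, VPN in" page
    };
};
```

The same zone name resolves differently depending on where the query comes from, which is the whole trick: internal clients see services, the Internet sees a static placeholder.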

robjan(10000) about 4 hours ago [-]

It's always best to follow the RFCs [1] to avoid issues like this. '.dev' worked but it was never safe to use it.

1: https://tools.ietf.org/html/rfc2606#page-2

byuu(10000) about 2 hours ago [-]

.test also works, but I like .dev more, so I have and continue to use .dev via hosts file (edit: hearing Firefox is doing the .dev HSTS preload as well, that's very disappointing to hear.)

I do wish /etc/hosts accepted wildcards, though. It can be a touch annoying having to add a new rule every time I create a new subdomain.
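For what it's worth, dnsmasq (rather than /etc/hosts itself) gives exactly this wildcard behavior; a minimal sketch of the config, shown here with the reserved .test suffix (any suffix works the same way):

```
# /etc/dnsmasq.d/local-dev.conf  (assumes dnsmasq as the local resolver)
# One line resolves every *.test name to loopback -- the wildcard
# that /etc/hosts lacks; new subdomains need no further edits.
address=/test/127.0.0.1
```

dnsmasq's `address=/<domain>/<ip>` matches the domain and all of its subdomains, so `app.test`, `api.app.test`, etc. all resolve to 127.0.0.1.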

sschueller(2659) about 5 hours ago [-]

How long until Google will start removing domains that don't fit their 'values'?

nabla9(705) about 1 hour ago [-]

Good question. Nobody seems to know the answer here.

HN seems to be generally knowledgeable, but in this case nobody seems to be able to provide answers with good references, just guesswork. Google operates the domain. How much regulatory power it has? What is the agreement with ICANN. What is the dispute solution mechanism?

----

edit

after looking around, the operator agreement with ICANN seems to include public interest commitments etc. The operator can't do whatever they want, and their policy should be transparent.

> Registry Operator will operate the TLD in a transparent manner consistent with general principles of openness and non-discrimination by establishing, publishing and adhering to clear registration policies.

https://newgtlds.icann.org/sites/default/files/agreements/ag...

kabwj(10000) about 4 hours ago [-]

What if they remove domains owned by open source projects that refuse to implement a CoC?

I don't think they will, but buying a domain from a company that's so involved in politics doesn't seem wise.

wolco(3551) about 5 hours ago [-]

Very quickly.

hobs(3337) about 4 hours ago [-]

Surprise! Pretty much every provider has a terms of service.

bitL(10000) about 3 hours ago [-]

'Our ML predictor calculated with 95% confidence that in the next 3 years your software will become our competitor, damaging our projected profits. We decided to terminate your domain ownership and auctioned it off to PigsDoAds Inc., effective immediately.

Our ensemble of customer support chatbots wishes you a wonderful day! '

have_faith(3979) about 5 hours ago [-]

> A domain just for developers

> $11,500 for 9 days early access

Makes video games' early access look like child's play.

Izmaki(10000) about 5 hours ago [-]

It's a nice way to earn a bit of cash from larger companies who do not want 'company.dev' associated with porn or malicious content. Imagine if 'unity.dev' or 'disney.dev' pointed to a cesspool of viruses.

_wmd(2776) about 5 hours ago [-]

A domain just for developers! Until it abruptly shuts down Feb 19, 2021. Why would anyone trust Google with something with any kind of long-term requirements as important as a name? They should have registered '.chump' instead, because that is really what users of a Google-managed TLD are declaring

'Dear VALUED developer, thanks so much for being a part of the .dev experiment! After a long hard period of staring at our shoes, we realized we cannot extract sufficient value from our users in this manner. As of midnight, your name will automatically be migrated to our newer, better, faster TLD, .plus, the future of the open web'

z3t4(3752) about 4 hours ago [-]

Domains will be useful for Google search ...

lazyjones(3950) about 5 hours ago [-]

A TLD 'with benefits' from Google is a money-printing press. While Google has a certain track record, this makes it rather unlikely for them to discontinue it in the near future.

gkoberger(676) about 5 hours ago [-]

I mean, I get your point, but I think this is a meme exclusive to HN. Other than Google Reader, there really isn't much Google has shut down without having a newer, better product you can transition to.

When it comes to something like this (AKA not a consumer product, makes money, relatively easy to run, etc), I genuinely can't see them shutting down a TLD like this.

If anything, the complaint should be about the lack of official support you can probably expect for it.

bhartzer(2172) about 5 hours ago [-]

Yeah, that won't happen. Even if a registry goes out of business the domains and tlds will still resolve, most likely operated by another registry.

gipp(10000) about 5 hours ago [-]

Somebody makes this post on every HN thread involving Google now. But the three big public deprecations I'm aware of are Reader, Inbox, and G+. All consumer products. All the other stuff on that Killed by Google site is stuff I've barely even heard of, and all consumer products.

To my knowledge, no major GCP component has ever been deprecated. Am I missing something? This seems like it has just become a groupthink HN meme at this point.

chomp(3724) about 5 hours ago [-]

I don't think this is a huge concern - at least ICANN has the power to designate a successor registry, and I guarantee someone will break down the doors trying to get that particular gtld.

ceejayoz(2056) about 5 hours ago [-]

I was already irritated that they took the .dev TLD. This sort of blatant money grab over it is simply gross. $8k premium to get in on the first day.

bhartzer(2172) about 5 hours ago [-]

They didn't take it. They applied, went through the process, and paid a premium for it.

$8k for a domain is actually cheap compared to other premiums in other new gtlds.

unilynx(10000) about 5 hours ago [-]

120KB of javascript and still the accordion doesn't open on OSX Safari.

idiot900(10000) about 3 hours ago [-]

And, hilariously, iOS Chrome, because it uses the same renderer as Safari.

tambourine_man(108) about 5 hours ago [-]

Made for developers, by developers!

ebg13(10000) about 2 hours ago [-]

You may get a kick out of this...

Google Support

<SUPPORT_PERSON>12:53 PM Thank you for contacting Google Domains. My name is <SUPPORT_PERSON> and I'll be happy to assist you. Let me quickly read your notes here.

<SUPPORT_PERSON>12:54 PM Hi there

<SUPPORT_PERSON>12:54 PM How are you?

<ME>12:54 PM Hi. I'm trying to read your website but it's broken in one of the dominant web browsers in the world.

<SUPPORT_PERSON>12:54 PM Hi you said that the link https://domains.google/tld/dev/ doesn't work on Safari?

<ME>12:55 PM The accordion links are broken.

<SUPPORT_PERSON>12:55 PM Have you tried in Chrome already though or maybe a private window in Safari already?

<ME>12:55 PM 'Is this a one-time payment? Will I still need to pay $12 every year to keep my domain?' click (nothing happens)

<SUPPORT_PERSON>12:55 PM It's just maybe a cache

<ME>12:56 PM It's not just a cache

<SUPPORT_PERSON>12:57 PM Alright but have you tried other browsers maybe?

<SUPPORT_PERSON>12:57 PM I've checked it here and the link you sent works just fine

<ME>12:57 PM Did you test in the latest Safari on the latest macOS? Because it doesn't work fine.

<SUPPORT_PERSON>12:57 PM Sorry, not using Mac

<SUPPORT_PERSON>12:58 PM But we'll look into it if we get feed backs similarly

<SUPPORT_PERSON>12:59 PM We apologize for the inconvenience but please take a look into it on a different browser like Chrome for the time being

<ME>12:59 PM here are other reports https://news.ycombinator.com/item?id=19178833

<SUPPORT_PERSON>12:59 PM Oh alright thank you

<SUPPORT_PERSON>1:00 PM Let me check that

<SUPPORT_PERSON>1:04 PM We are already looking into it <ME>

lazyjones(3950) about 4 hours ago [-]

Also, the navigation and some other parts aren't usable with ad blocker enabled, since there are no non-JS links/anchors. Embarrassing for an entity that once tried to position itself as a supporter of usability and user-friendly web pages.

dictum(3372) about 4 hours ago [-]

I recently found out that the sidebar menus on pages within https://developers.google.com/web don't work with JS off (usually a trivial thing: show sub-items by default and hide them with JS). Almost every Alphabet website has something like that.

So yes, minor anecdote, and I genuinely appreciate the hundreds of Google employees who really help the Web and share useful knowledge (and don't lead developers into using techniques best suited for billion-user websites, as FB often does) but I'll reserve the right to side-eye anything Google says.

Drdrdrq(4024) about 5 hours ago [-]

> You can purchase an SSL certificate through one of our web partners or a Certificate Authority. Read this article to learn more.

Really? No mention of Lets Encrypt? Does anyone still buy certificates nowadays, especially for dev sites?

lawnchair(10000) about 2 hours ago [-]

Not all certificates are created equal

> Let's Encrypt offers Domain Validation (DV) certificates. We do not offer Organization Validation (OV) or Extended Validation (EV) primarily because we cannot automate issuance for those types of certificates.

byuu(10000) about 2 hours ago [-]

I buy them. I don't like paying for them, but I want a certificate I know will just work for years without having to run certbot or one of its clones on my server. Well, that and LE didn't yet allow wildcard certs when I bought mine. They've already dropped their maximum validity from three to two years though, so I'll probably throw in the towel when they further reduce their validity to less than six months.

henvic(4029) about 3 hours ago [-]

Of course they do.

Let's Encrypt provides DV (Domain Validation). Not OV (Organization Validation).

Obviously .dev is intended for software development and most domains there would probably be using DV only so this might not apply to it, though.

rndgermandude(10000) about 5 hours ago [-]

They explicitly mention letsencrypt as a free provider when you click the 'Read this article' link.

aboutruby(10000) about 5 hours ago [-]

Pretty sure the first domains acquired will be targeted at local development setups, like tmp.dev, staging.dev, production.dev, etc. etc.

edit: just a reminder that .local is the reserved one https://en.wikipedia.org/wiki/.local

bhartzer(2172) about 4 hours ago [-]

Most likely the domains like tmp.dev, staging.dev, and production.dev are premium domains that will never be released by the registry.

Many premium domains are held back by the registry, which is allowed under ICANN rules. Those domains typically can't be registered or used by anyone.

wolco(3551) about 5 hours ago [-]

I've been collecting country level three character long domains and using them. I suggest everyone does the same.

dehrmann(10000) about 2 hours ago [-]

When in doubt, racketeer.

I've been annoyed by how Google uses AdWords for a while; suppose you're a company in a competitive, undifferentiated space. I just searched for 'enterprise rental cars,' and the first thing below the search box, an ad, was for getaround.com. The second was an ad for Enterprise, the third was the organic result for Enterprise. Google is effectively telling these companies, 'You wouldn't want someone to happen to see a competitor first and click them when they search for you, would you? Then pay up.' That's a racket.

Same with this. They're inventing the demand for this TLD, then telling developers to pay up if they don't want someone to take their name.

zanny(10000) 38 minutes ago [-]

google.com isn't a public place, and it also isn't a particularly valuable plot of 'land' aside from the private goliath built upon it.

The fact that people go to Google gives them their power to extort businesses for ranking. But that's because Google remains valuable: people use it because it works, or at least out of inertia because nothing is remotely better yet, and as long as people still value the search, companies can do the cost-benefit analysis to know if paying the rent is worth it.

And it's fortunate that the only people who really care about search ranking are those trying to make money off it. There are far more egregious crimes being committed in the privacy space by Alphabet, or by rent seekers across the economy, than a business making money as a parasite off other businesses trying to make money.

GordonS(1058) about 2 hours ago [-]

You could make the same argument about a lot of the (mostly pointless) TLDs that have been released recently, such as .uk, which was a clear money-grab targeting existing .co.uk owners, or .sucks, which feels like an attempt to extort all domain owners.

sirn(2589) about 5 hours ago [-]

I'm not familiar with ICANN rules, but I wonder if a company can do whatever they want with a gTLD once it's accepted, even if it's not within the intended usage in the gTLD application? As far as I know, Google said they registered .dev for internal use and for Google-related products, and that it would remain completely closed for the sole use of Google, according to their application[1]:

> The mission of this gTLD, .dev, is to provide a dedicated domain space in which Google can enact second-level domains specific to its projects in development. Specifically, the new gTLD will provide Google with greater ability to create a custom portal for employees to manage products and services in development.

> Charleston Road Registry intends to operate the proposed gTLD as a closed registry with Google as the sole registrar and registrant. The goal of the proposed gTLD is to allow Google to manage the domain name space for its projects in development. The proposed gTLD will provide Google with the ability to customize its domain and website names for its projects and signal to users that .dev websites are managed by Google

> Charleston Road Registry believes that given its intended use by Google, the .dev gTLD will best add value to the gTLD space by remaining completely closed for the sole use of Google.

...and now they're opening it up to the public... :\

[1]: https://gtldresult.icann.org/applicationstatus/applicationde...

CydeWeys(3866) 5 minutes ago [-]

To be fair, people are generally much happier with it being opened up than it being kept closed.

bhartzer(2172) about 5 hours ago [-]

I believe they can change their business plan. Even if they say it will be closed they could offer domains to the public.

sascha_sl(10000) about 5 hours ago [-]

That application is bullshit. Everything in Google's network already uses their pseudo-'goto' TLD.

I don't think they use any domains beyond the few used for the registry page either.

4ad(3249) about 4 hours ago [-]

I reserved unix.dev for the regular domain reservation price, not the crazy 'premium domain' price because unix.dev was not part of the premium domain list.

However, Google determined that no, unix.dev should be a premium domain, and 'stole' the reservation from me (after I had already paid for it). They later added it to the premium domain list and asked me for $11k to keep the reservation.

TBH, I expected to lose the domain because of trademarks or whatever, but apparently it was simple highway robbery.

Btw, I didn't even get my money back, just 'store credit'.

toyg(3898) about 3 hours ago [-]

And what were you going to do with unix.dev, if I may ask?

EamonnMR(3709) about 4 hours ago [-]

Is the gouge for early access normal for registrars with a new TLD?

toyg(3898) about 3 hours ago [-]

Sadly, yes. The whole domain business is like that, profiteering at every turn.

jasonbarone(3988) about 4 hours ago [-]

Does anyone know how the "premium" renewal prices are set and governed? I'm going through this right now with .app domains (also owned by Google). Apparently domains categorized as "premium" have a much higher yearly pricing, and how it's priced is not specified anywhere on the registry website. I called a few registrars and they said that the premium pricing is set by the registry. What's concerning is that the registrars admitted that there really isn't anything stopping the registry from increasing the premium pricing at will.

Anyone have any more info about this? It seems concerning that a registry has no restrictions on what it can do with pricing after an individual has invested in an expensive domain.

towb(10000) about 4 hours ago [-]

On my so-far-unused premium .app, the renewal is the same as the initial price, but who knows what happens if I put something successful on it.

tedmiston(3519) about 4 hours ago [-]

Are you saying the registry can arbitrarily change the .app domain renewal price without control, or that they can charge a different renewal price for different .app domains?

I've experienced weird price fluctuations with .io domain renewals in the past, but haven't owned one in a few years.

ioddly(4007) about 3 hours ago [-]

I'm unsure how it is determined but when I asked my .app registrar they said it should be $120/year for each renewal, which was less than I paid to register it. Guess I'll find out.

I believe in the original .app launch thread on HN they stated that premium domain pricing was based on machine learning. So...who knows.

icebraining(3455) about 3 hours ago [-]

AFAIU, the ICANN New gTLD registry agreement means you can't have 'premium' prices for renewals:

'Registry Operator must have uniform pricing for renewals of domain name registrations ("Renewal Pricing"). For the purposes of determining Renewal Pricing, the price for each domain registration renewal must be identical to the price of all other domain name registration renewals in place at the time of such renewal, and such price must take into account universal application of any refunds, rebates, discounts, product tying or other programs in place at the time of renewal.'

REGISTRY AGREEMENT - 2.10 (b)

https://newgtlds.icann.org/sites/default/files/agreements/ag...

hayksaakian(3212) about 4 hours ago [-]

You could try transferring your domain to another registrar if you feel like you're getting gouged

bigend(10000) about 4 hours ago [-]

There are quite a few Russian last names that end in 'dev', like Medvedev. I think there is going to be quite a rush for these.

wtmt(4026) about 3 hours ago [-]

There are also Indian (male) names that are just "Dev" or that end in "dev". The word means deity or god. There could be many Indians buying these when early access ends on February 27.

xiphias2(10000) about 3 hours ago [-]

> '.dev lets your clients know what you do before they even open your site.'

I hate the new trend of companies having multiple domains with different TLDs, because that way I never know whether it's the same company or not.

lucideer(3976) about 3 hours ago [-]

I'm in favour of this trend for this very reason. One should never implicitly assume you know it's the company you expect in any case.

benatkin(3016) about 5 hours ago [-]

Having used 'sudo vi /etc/hosts' to add .dev domains, this gets no <3 from me.

I'm going to avoid buying one, but if there are any popular .dev domains I hope I will forget about it before long. I don't want negative feelings from this so often. On the other hand there are plenty of reminders of negative things in politics and the environment, so I guess I've gotten used to it and developed outrage fatigue.

eeeeeeeeeeeee(10000) about 2 hours ago [-]

We have RFCs for a reason, so people follow predictable behavior everywhere. And so you don't build up a ton of work/customization and then suddenly have a surprise like this. Worse, you embed that custom design into a system and then leave and someone else needs to fix that mess.

.test is what you should have been using, precisely for this reason.

glennpratt(10000) about 3 hours ago [-]

/etc/hosts doesn't accept wildcards, so you probably didn't catch much. This was settled a long time ago:

https://tools.ietf.org/html/rfc2606

.test is only one more letter.
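For readers who haven't set this up: /etc/hosts maps exact names only, so a wildcard dev setup needs a local resolver on top. A minimal sketch (the hostnames are illustrative; dnsmasq is one common choice of resolver, not the only one):

```
# /etc/hosts -- one exact hostname per line, no wildcards:
127.0.0.1   myapp.test
127.0.0.1   api.myapp.test

# dnsmasq.conf -- a local resolver can do what /etc/hosts cannot,
# resolving every *.test name to localhost with a single rule:
address=/test/127.0.0.1
```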

tambourine_man(108) about 5 hours ago [-]

I'll never forgive them for competing with my hosts file.

.test is not as nice

gpm(10000) about 3 hours ago [-]

You could replace it with .d; ICANN isn't giving out single-letter TLDs yet.

whatshisface(10000) about 3 hours ago [-]

A while ago, some posts on HN went around saying that you shouldn't use third-world TLDs for your startup because some of them are unreliable stewards who would put your domain at risk. Does registering a .dev domain make you dependent on Google in the same way that registering a .ly makes you dependent on Libya?

dweekly(3035) about 2 hours ago [-]

The short answer is that yes, this domain is administered by Google through a holding entity. See https://www.iana.org/domains/root/db/dev.html

vtange(10000) about 2 hours ago [-]

There was this one guy who lost a valuable Twitter handle because he used his custom-domain email (via GoDaddy) as his login email.[0] I'd like to imagine Google is better than GoDaddy when it comes to security and fighting social-engineering attacks, but who knows. And if Google is a weak link, wouldn't that also mean all our Gmail accounts are not as safe as we think?

[0] https://medium.com/@N/how-i-lost-my-50-000-twitter-username-...

dcplogic(10000) about 2 hours ago [-]

I trust Google a heck of a lot more (to stay around and honor contracts) than I trust Libya.

TaylorAlexander(3995) about 2 hours ago [-]

It's a good question. Does anyone know who legally owns the domain and what responsibility google has to ensure you can use the TLD in perpetuity?

jmiserez(10000) about 2 hours ago [-]

(removed)

cramforce(1247) about 2 hours ago [-]

They can use a different TLD. One big benefit is that the literal TLD is in the list (not every individual domain), keeping the size of the list O(1) instead of O(n) as it is for other TLDs.

bhartzer(2172) about 3 hours ago [-]

I actually hope Google decides not to index .dev domains/sites. It's crazy how many times I've seen a dev or staging site get indexed by Google when it shouldn't be.

hombre_fatal(10000) about 3 hours ago [-]

That wouldn't make sense since the TLD isn't just for the staging phase of a website. It's for anything.

kemyd(4031) about 5 hours ago [-]

You can only purchase on Google Domains if your billing address is in a supported country (15 countries listed).

Will .dev domains be available from other countries?

CydeWeys(3866) 3 minutes ago [-]

.dev domains will be available from many dozens of registrars, including most likely the one(s) you're already using. You'll for sure be able to get one.

robjan(10000) about 4 hours ago [-]

I pre-ordered one on Gandi

shereadsthenews(10000) about 4 hours ago [-]

What will you build on .dev? A site so broken that it still has a horizontal scrollbar no matter how wide the browser window.

fgkramer(10000) about 3 hours ago [-]

Are you on a Mac? 99% of the time it's because an external mouse is connected.

edit: the CSS for the element that overflows has `grid-template-columns: 50vw 50vw;`, and based on the spec (https://www.w3.org/TR/css3-values/#viewport-relative-lengths), scrollbars are not taken into account. Therefore, as long as you have a _forced_ scrollbar, the content WILL overflow regardless of your screen size.

By 'forced' I mean a scrollbar you cannot remove, one that takes up layout space; you can clearly see this behaviour on a Mac if you toggle the 'Show scroll bars' setting to 'Always'.

Safari and WebKit-based browsers can avoid this issue with `::-webkit-scrollbar { display: none }`, but it's not a cross-browser solution, nor is hiding scrollbars a wise decision overall.
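The overflow can be reproduced and fixed without hiding scrollbars. A minimal sketch (the class name is illustrative): since `vw` units include the scrollbar gutter by spec, sizing the grid tracks relative to the container instead of the viewport avoids the problem:

```css
/* Broken: 50vw + 50vw spans the full viewport width, which by spec
   includes the area under a forced vertical scrollbar, so the row
   overflows and a horizontal scrollbar appears. */
.hero { display: grid; grid-template-columns: 50vw 50vw; }

/* Fixed: fractional tracks split the element's own width, which
   already excludes the scrollbar. */
.hero { display: grid; grid-template-columns: 1fr 1fr; }
```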

tedmiston(3519) about 4 hours ago [-]

It renders without a horizontal scroll bar on my iPad 6g in either orientation

cachvico(10000) about 3 hours ago [-]

.deveap-hero grid-template-columns needs to be 49vw 50vw

49vw not 50vw, to approximately make room for the vertical scrollbar when it appears.

OK, on with my day ;)

molteanu(3173) about 3 hours ago [-]

Chromium on Arch Linux. Can confirm.

bhartzer(2172) about 3 hours ago [-]

Or just build a site that pitches your development services.

PinguTS(4028) about 1 hour ago [-]

Is this .dev only available through Google Domains? WTH? Google Domains is still not available in my country.

profmonocle(10000) about 1 hour ago [-]

It's available through other registrars as well. Gandi is a European registrar that supports it, for example.

chinathrow(3833) about 4 hours ago [-]

I hope they fail.

Google forced me to migrate my local .dev domains to .devo because Chrome refused to connect to my locally configured domain names via /etc/hosts.

stingraycharles(3598) about 4 hours ago [-]

Isn't this just as silly as wishing Cloudflare's public DNS would fail because you were using 1.1.1.1 as a development IP?

catern(3791) about 4 hours ago [-]

This 'early-access' price, which decreases over time, is an approximation of a Dutch auction: https://en.wikipedia.org/wiki/Dutch_auction

Dutch auctions are incentive-compatible: they allocate the resource to the person who gains the highest utility from having it. Maybe Google got some of the people working on ad auctions to design this pricing structure.
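The mechanism can be sketched in a few lines of Python. This is a toy model, not Google's actual pricing: the posted price ticks down until some bidder's private valuation meets it, so the highest-valuation bidder is the first willing buyer:

```python
def dutch_auction(start_price, floor_price, step, valuations):
    """Descending-price (Dutch) auction sketch: drop the price until
    some bidder's private valuation meets it; that bidder wins and
    pays the current price. `valuations` maps bidder name -> valuation."""
    price = start_price
    while price >= floor_price:
        takers = [name for name, v in valuations.items() if v >= price]
        if takers:
            return takers[0], price
        price -= step
    return None, floor_price  # nobody wanted it even at the floor

# The bidder who values the name most wins, paying roughly that value.
winner, paid = dutch_auction(10_000, 100, 100, {"alice": 350, "bob": 8_250})
print(winner, paid)  # → bob 8200
```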

tedmiston(3519) about 4 hours ago [-]

Or to companies that value the $100-10k early-registration fee so little that it's a drop in the bucket, compared to individuals.

I'd like to have a very short domain personally but it's hard to anticipate what the demand will be like here.

icebraining(3455) about 4 hours ago [-]

Google has a long history with the Dutch auction, they used it in their IPO, back in '04.

bhartzer(2172) about 3 hours ago [-]

Hmm, that's an ICANN rule for launching new TLDs. The early access period always costs more; then there are several other periods before a TLD reaches general availability.

This pricing structure is not just for .dev domains.




(128) We Must Revive Gopherspace (2017)

128 points about 5 hours ago by stargrave in 1045th position

box.matto.nl | Estimated reading time – 3 minutes | comments | anchor

We must revive Gopherspace

Last edited Thu Dec 28 15:49:13 2017

Both the world of HTML and the world of Gopher originate from the same era. The world wide web of HTML has become huge. Gopherspace has not.

The web has changed

Although the world wide web is a huge success, it also has turned into an area of great concern. Every step you take on the web, every site you visit, every page you view, is used to create and enhance a profile about you. Everything you do is carefully tracked and monitored.

Surveillance marketing is evil.

Google, which once started with the core value of 'do no evil', has become one of the most privacy-invading organisations, just like Facebook and some others.

Webpages started out as just text, and later also got images. Today, a webpage consists of a number of JavaScript files, a lot of links to surveillance marketers, links to Facebook, Twitter, Google, Instagram and so on, and then some content in an HTML file.

Many websites and many webpages do not exist to give you information, but to sell you advertisements, to lure you to commercial webpages, or to sell your private information to surveillance marketers who build your profile.

Gopherspace is not evil

Maybe Gopher's weakness proves to be its biggest strength.

Gopher is a protocol with far fewer features than HTML. This is probably why it lost the race against HTML. But this not only makes Gopher blazingly fast, it also protects you from all those evil properties of the world wide web. Trackers have no chance in Gopherspace.

Surveillance marketing can not thrive in Gopherspace.

We must revive Gopherspace

Everybody has given up on Gopher. Hardly any browser supports it any more. There are just a few Gopher servers left.

In order to make a comeback, Gopherspace needs two things:

  • People visiting Gopherspace
  • Contemporary content

We need more people using Gopher. So, spread the word. Start using it yourself. Ask the creators of web browsers to revive their support for Gopher. Or create a plugin.

If you build it, they will come.

Most of the content on Gopherspace is outdated. Often it is kept alive out of nostalgia. This is great, but not if there are no gopher sources that provide contemporary content.

This is where the technically less challenged folks come in. This is what you can do.

  • You can set up a gopher server. It is not that hard.
  • You can add content to Gopherspace.

Got a blog? Got a website? Put a Gopher server alongside it, and share content on both platforms.

I have already started




All Comments: [-] | anchor

icebraining(3455) about 4 hours ago [-]

HTML ain't the problem; you can build websites without tracking. If you somehow managed to pull enough users to Gopher, they'd just write Gopher Chrome and start adding new features that conveniently allow tracking into it, and gradually kill off the original protocol (see EEE). The problem is economic, and the solution must be too.

bunderbunder(3531) about 3 hours ago [-]

Moving to another protocol, like Gopher, seems like an economic solution to me.

A parallel protocol and hypermedia format that's restricted enough to prevent tracking isn't going to attract everyone. It's going to attract the subset of users who care enough about privacy to give up 'rich Web' features like single-page applications and animated HTML canvas elements in return.

That's not a group that's likely to start immediately demanding the features needed to create infiniscrolling Pinterest feeds. And it's also going to be a much more restricted group, meaning businesses won't stand to profit much by pushing for it. So there might not be any economic incentive to do it.

At least, that's how it might be at first. If it remains a nice place to be for 5 years, I'd call it a decent run. 10, and I'd be ecstatic.

rixrax(3967) about 1 hour ago [-]

I'm old enough to have used gopher on vt100 terminals as an undergrad in college to try and do some 'work'. And when http/www arrived, it didn't take long to switch to a better mousetrap. And this wasn't just because you could now render a gif in NCSA Mosaic on indigo workstation. Everything was just better in this new http world.

Let's fast-forward to today: yes, we've gone overboard all over, but then again, Gopher [I think] doesn't come standard with TLS; it hasn't gone through the evolution that HTTP[S] has, which makes it the robust and scalable backbone it is today.

What I'm trying to say is that we should not casually float around pipe dreams about switching to ancient tech that wasn't that good to begin with. Yes, electric cars were a thing already in the early 1900s, and we maybe took a wrong turn with the combustion engine, but with Gopher, I think we should let sleeping dogs lie and focus on improving the next version of QUIC, or even on inventing something entirely new that would address many of the concerns in the article without sacrificing years of innovation since we abandoned Gopher. Heck, this new thing might as well run on TCP/70, never mind that UDP appears to be the thing now[0].

[0] https://en.m.wikipedia.org/wiki/HTTP/3

lazyjones(3950) about 2 hours ago [-]

> The problem is economic, and the solution must be too.

Alright, let's talk to our representatives and ask them to consider taxing tracking and data collection.

joering2(937) about 3 hours ago [-]

> The problem is economic, and the solution must be too.

FB, for example, is a US-based company. Last time I checked, you are not forced to accept outside money, do VC rounds, or go IPO.

Mark chose to go the 'American path', that is, capitalism to the maximum, so of course I will lose an argument over why he is trying to maximize profits. But nothing stopped him from building sponsorship agreements with, e.g., Fortune 500 corps, instead of building a bidding platform a la Google.

I'm pretty sure that if you signed up the Fortune 500 and had, for example, 500 rotating banners, it would give you enough funds to run operations and pay every employee a $150,000 salary. Plus having exactly ZERO tracking cookies and ZERO malicious JS scripts following you. It's quite possible given FB's size and reach, but again, 'this is America, this is business.'

kleer001(3707) 34 minutes ago [-]

Pay people to use Gopher? Maybe state funded.

helij(10000) about 3 hours ago [-]

Indeed[1]. You always have a choice!

[1] https://artlists.org/privacy-policy/

TeMPOraL(3142) about 2 hours ago [-]

That's totally true.

But then again, the harm of advertising and surveillance capitalism is a thing. So is the focus on data hoarding, vendor lock-in, favoring prettiness over utility.

I really wish we could run a parallel web. One optimized for utility, where data and content are available in maximally useful form, where users are in control of their rendering, and are free to use whatever automation they want. Not a replacement web, just one for people who are willing to jump through some hoops in order to avoid the crap that's on the mainstream one.

I don't know much about Gopher yet (I'm starting to learn now), but maybe such a parallel web could be developed there?

petra(3943) about 3 hours ago [-]

Say we've built the system of economic incentives for privacy (opt-in), and managed to convince 10% of users and 50% of site owners to convert. Extremely hard.

What do users get in the end? Half of the web is still tracking them. And many of the big guys still track them.

Not enough if you ask me. That's what makes it so difficult.

So let's solve that: let's build a search engine that lets me filter sites according to privacy. Ah, and it has to be perceived as as good as Google, because in today's world, in many jobs, you cannot give up an information advantage.

That's kind of an impossible mission.

MistahKoala(10000) about 4 hours ago [-]

The article discusses reviving Gopher, but doesn't mention how to access it (sure, I could invest a bit of time and effort googling how to do that, but that seems beside the point for an article evangelising its revival).

classichasclass(3402) about 3 hours ago [-]

At the risk of shameless self-promotion, there are Firefox add-ons and at least several mobile clients. Disclaimer: I wrote a number of the ones on this page.

https://gopher.floodgap.com/overbite/

(Yes, it's accessible over Gopher too, just to be difficult)

Joe-Z(10000) about 3 hours ago [-]

Yes, I was a little disappointed by that too. It even has a 'gopher://...' link at the end and when I click on it I can't even open it. Just tell me how I can open the one example you provide man!

TheRealPomax(3721) about 3 hours ago [-]

Nor does it try to explain what it actually is and why someone would need to care, let alone actually use it.

mahkoh(10000) about 4 hours ago [-]

> Every step you take on the web, every site you visit, every page you view, is used to create and enhance a profile about you. Everything you do is carefully tracked and monitored.

Bold of the author to openly admit this.

peterkelly(1367) about 3 hours ago [-]

How is that bold? It's common knowledge.

clubm8(4026) about 3 hours ago [-]

>Every step you take on the web, every site you visit, every page you view, is used to create and enhance a profile about you. Everything you do is carefully tracked and monitored.

* if you use traditional web browsers.

I've been moving more and more of my browsing over to Tor.

Both Reddit and HN can be browsed, though the former requires JS to fully function. (A persistent problem across the web)

I can't do all my browsing on Tor, but I can do a substantial chunk. Conversely, I can maintain 'clean' profiles tied to my real name that seem to simply check email, read the news a bit, and check the weather.

psim1(10000) about 4 hours ago [-]

Why not just serve static text over HTTP? At least then you'd have the ability to inline images. This (the use of JavaScript and other technology for tracking purposes) isn't a problem for Gopher to solve. It's a problem for web content creators.
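Serving tracker-free static text over HTTP is indeed only a few lines with the Python standard library; a minimal sketch (the port number is arbitrary):

```python
from http.server import HTTPServer, SimpleHTTPRequestHandler

def make_server(port: int = 8070) -> HTTPServer:
    """Serve the current directory's files over plain HTTP --
    no JavaScript, no trackers, just whatever static files are here."""
    return HTTPServer(("127.0.0.1", port), SimpleHTTPRequestHandler)

# make_server().serve_forever()  # run from a directory of .txt/.html files
```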

fouc(3894) 7 minutes ago [-]

Instead of fixing the protocol, maybe the solution is to come up with a modern browser that doesn't support javascript at all. If there's widespread adoption of such a browser, then that would change the trend.

I think the USP would have to be something like 'reading-friendly' or 'a consistent reading experience'.

DebtDeflation(10000) about 2 hours ago [-]

I don't know if I necessarily want Gopher back, but I often dream of returning to the days when 'the Internet' was primarily Usenet, IRC, Telnet, and email.

teddyh(2639) about 2 hours ago [-]

Don't forget FTP. So much FTP.

fimdomeio(4025) about 3 hours ago [-]

I was toying with an idea a while back of making sites just for non-visual browsers. There was basically just a piece of CSS blocking the visualization of content and letting users know: 'This is a web 0.5 website. This site is best viewed in a terminal.' The enforced rules were kind of a gentleman's (gentleperson's) code of no CSS, no JS.

The conclusions I reached were that the thing loaded crazy fast (it's even weird when you can no longer distinguish local from server), that it would actually be quite an enjoyable coding experience as it's suddenly just 50% of the work, and that the rendering of web pages in terminal browsers is actually really nice.

pmlnr(1507) about 2 hours ago [-]

It's called txt files.

Good example: http://textfiles.com/magazines/LOD/lod-1

yoz-y(10000) about 2 hours ago [-]

How about reviving the "blogosphere" instead? Does it even need reviving? Most of the personal or tech blogs I visit do not have heavy ads or tracking on them, still offer full RSS articles and so on. People who care still have a lot of nice web sites to go to.

Maybe what we need is a search engine that penalises JS and tracker use.
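A search engine like that could fold tracker use directly into its ranking function. A toy sketch in Python (the tracker hostnames and penalty weights are invented for illustration, and real crawlers would parse the DOM rather than grep the markup):

```python
import re

# Hostnames of a few widely embedded trackers (illustrative, not exhaustive).
TRACKER_HOSTS = ("googletagmanager.com", "doubleclick.net", "connect.facebook.net")

def tracker_penalty(html: str) -> float:
    """Crude page penalty: each <script> tag costs a little,
    each known tracker host found in the markup costs a lot."""
    scripts = len(re.findall(r"<script\b", html, flags=re.IGNORECASE))
    trackers = sum(host in html for host in TRACKER_HOSTS)
    return 0.05 * scripts + 0.5 * trackers

def rank_score(relevance: float, html: str) -> float:
    """Downweight an otherwise-relevant page by its tracker penalty."""
    return relevance / (1.0 + tracker_penalty(html))

clean = "<html><body><p>Just text.</p></body></html>"
heavy = "<script src='https://doubleclick.net/tag.js'></script>"
print(rank_score(1.0, clean) > rank_score(1.0, heavy))  # → True
```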

vortico(3486) about 1 hour ago [-]

Fewer people read blogs. Why should more people write them?

superkuh(4032) 22 minutes ago [-]

The web of the '90s is alive on Tor. On Tor, the idea of running third-party executable code in the age of Spectre is (properly) seen as absurd. We just need to bring back webrings and we'll be set.

Since it's on tor there's no need for evil centralization for DoS protection since it's baked into the protocol. Additionally your onion vanity name you brute forced cannot simply be taken away from you if there's political or social pressure on your registrar or above.

No, we don't need gopher. We need people to stop running third party code like it's some normal thing. We need devs to stop making websites that don't render unless you run their code.

It's really not that hard to run a hidden service. No harder than running a webserver. And everyone's home connections are fast enough now.

TheRealPomax(3721) about 3 hours ago [-]

As someone who grew up with a 1200 baud modem and never used gopher: why would I start using gopher? What even is it? Can I use it to host webpages? It sounds like if 'tracking is impossible' it probably can't use html+javascript? Why would I want to use that?

welly(3998) about 2 hours ago [-]

There is no good reason to use gopher other than for nostalgic reasons.

pbreit(2350) 33 minutes ago [-]

I was wondering if Net News / NNTP / Usenet could address some of the distributed use cases people are trying to throw at blockchain?

mrweasel(3923) 3 minutes ago [-]

Blockchain wasn't really on my mind, but NNTP is something I think we should consider reviving.

Reddit and Facebook have taken over the old forums and mailing lists, but I feel that those markets would be served equally well, or better, by NNTP.

The Reddit redesign makes it clear what direction they are moving in, and I fear that it will kill off all the interesting subreddits, where people have real discussions. In its place will be an endless stream of memes, pictures and angry anti-Trump posts. All these subreddits will scatter and their users will be left without a 'home'.

The village I live in has a Facebook group; it's a closed group, so no browsing without a Facebook account. I'm relying on my wife to inform me if anything interesting is posted. It's sad, because it's pretty much the only source you can turn to if it smells like the entire village is burning or the local power plant is making a funny sound. All the stuff that's too small for even local news, or is happening right now.

Usenet would, in my mind, be a great place to host the communities currently on Facebook and Reddit. They would be safe from corporate control, or shifts in focus from their 'hosting partner', and everyone would have equal and open access. Spam might be the unsolved problem, but I feel like that is something we can manage.

I know that a Usenet comeback, with all the hopes and dreams I have for it, isn't coming. People don't like NNTP, they like Facebook.

floatingatoll(3820) about 3 hours ago [-]

One tangent from this consideration would be:

What would it take to make Content-Type: text/markdown a reality for web publishers?

krapp(3994) 35 minutes ago [-]

What would be the point?

Markdown is a textual format intended to result in HTML anyway, and it includes the entirety of HTML already in its spec.

yati(3297) about 3 hours ago [-]

To start with, deciding what constitutes markdown, i.e., a spec. There are a bunch of incompatible 'flavours' out there.

giancarlostoro(3206) about 4 hours ago [-]

What do Gopher pages look like? Are they mostly ASCII, or is the format weird? Why did HTML/HTTP become the standard over Gopher? It seems like Gopher could be capable of doing similar things to the web; just nobody bothered to expand on it, or the standard is frozen in time.

decebalus1(10000) about 4 hours ago [-]

With the risk of sounding patronizing, the wikipedia page provides answers to all your questions https://en.wikipedia.org/wiki/Gopher_(protocol)

classichasclass(3402) about 3 hours ago [-]

I think the unchecked expansion of arguably questionable capabilities is exactly what the author's objection is. Otherwise, it just makes Gopher into an unnecessary second-rate HTTP.

(Disclaimer: I maintain gopher.floodgap.com)

diminish(1922) about 3 hours ago [-]

We need a new mode for Firefox: an extremely restricted form of HTML5 without JavaScript. Call it html0.

<doctype html0>

No JS, no third-party content; only HTML5+, CSS3+, text, images, videos, audio and other stuff.

okl(3911) about 3 hours ago [-]

Like reading mode (F9) on certain websites? Doesn't show ads, so why should big business support it?

ancarda(3227) about 3 hours ago [-]

It seems possible to achieve this today with Content-Security-Policy and Feature-Policy[1].

However, even without the help of those headers, one could also have some discipline (perhaps also respect for users) and refrain from putting tracking and other undesirable things onto their website.

This doesn't seem to be a technical problem, so a technical solution - especially an opt-in one - probably won't help.

[1] https://scotthelme.co.uk/a-new-security-header-feature-polic...
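A sketch of what such response headers might look like for a script-free, first-party-only page (the directive lists are illustrative, not exhaustive):

```http
Content-Security-Policy: default-src 'self'; script-src 'none'; frame-src 'none'
Feature-Policy: geolocation 'none'; camera 'none'; microphone 'none'
Referrer-Policy: no-referrer
```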

Casseres(3269) about 3 hours ago [-]

If you have the ability to set the doctype of a page, don't you also already have the ability to not load third-party content?




(123) Most Americans don't realize what companies can predict from their data

123 points about 8 hours ago by pseudolus in 165th position

theconversation.com | Estimated reading time – 9 minutes | comments | anchor

Sixty-seven percent of smartphone users rely on Google Maps to help them get to where they are going quickly and efficiently.

A major feature of Google Maps is its ability to predict how long different navigation routes will take. That's possible because the mobile phone of each person using Google Maps sends data about its location and speed back to Google's servers, where it is analyzed to generate new data about traffic conditions.

Information like this is useful for navigation. But the exact same data that is used to predict traffic patterns can also be used to predict other kinds of information – information people might not be comfortable with revealing.

For example, data about a mobile phone's past location and movement patterns can be used to predict where a person lives, who their employer is, where they attend religious services and the age range of their children based on where they drop them off for school.

These predictions label who you are as a person and guess what you're likely to do in the future. Research shows that people are largely unaware that these predictions are possible, and, if they do become aware of it, don't like it. In my view, as someone who studies how predictive algorithms affect people's privacy, that is a major problem for digital privacy in the U.S.

How is this all possible?

Every device that you use, every company you do business with, every online account you create or loyalty program you join, and even the government itself collects data about you.

The kinds of data they collect include things like your name, address, age, Social Security or driver's license number, purchase transaction history, web browsing activity, voter registration information, whether you have children living with you or speak a foreign language, the photos you have posted to social media, the listing price of your home, whether you've recently had a life event like getting married, your credit score, what kind of car you drive, how much you spend on groceries, how much credit card debt you have and the location history from your mobile phone.

It doesn't matter if these datasets were collected separately by different sources and don't contain your name. It's still easy to match them up according to other information about you that they contain.

For example, there are identifiers in public records databases, like your name and home address, that can be matched up with GPS location data from an app on your mobile phone. This allows a third party to link your home address with the location where you spend most of your evening and nighttime hours – presumably where you live. This means the app developer and its partners have access to your name, even if you didn't directly give it to them.
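The 'where you spend your evenings and nights' heuristic described above is simple enough to sketch in a few lines. A toy version in Python (the coordinates, hours, and grid size are invented for illustration):

```python
from collections import Counter

def infer_home_cell(pings, night=lambda h: h >= 20 or h < 6):
    """Guess a device's home area: bucket GPS pings into ~1 km grid
    cells (2 decimal places of lat/lon) and return the cell that
    holds the most nighttime pings."""
    cells = [
        (round(lat, 2), round(lon, 2))
        for hour, lat, lon in pings
        if night(hour)
    ]
    if not cells:
        return None
    return Counter(cells).most_common(1)[0][0]

# Toy location trace: (hour_of_day, latitude, longitude)
trace = [
    (9, 40.7580, -73.9855),   # daytime pings: office area
    (13, 40.7581, -73.9856),
    (22, 40.6782, -73.9442),  # evening and night pings: home area
    (23, 40.6783, -73.9441),
    (2, 40.6782, -73.9443),
]
print(infer_home_cell(trace))  # → (40.68, -73.94)
```

Joining that cell against a public records database of home addresses is then a simple lookup, which is how an app ends up knowing your name without you giving it.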

In the U.S., the companies and platforms you interact with own the data they collect about you. This means they can legally sell this information to data brokers.

Data brokers are companies that are in the business of buying and selling datasets from a wide range of sources, including location data from many mobile phone carriers. Data brokers combine data to create detailed profiles of individual people, which they sell to other companies.

Combined datasets like this can be used to predict what you'll want to buy in order to target ads. For example, a company that has purchased data about you can do things like connect your social media accounts and web browsing history with the route you take when you're running errands and your purchase history at your local grocery store.

Employers use large datasets and predictive algorithms to make decisions about who to interview for jobs and predict who might quit. Police departments make lists of people who may be more likely to commit violent crimes. FICO, the same company that calculates credit scores, also calculates a "medication adherence score" that predicts who will stop taking their prescription medications.

Research shows that people are only aware of predictions that are shown to them in an app's user interface, and that make sense given the reason they decided to use the app. (Photo: SIFO CRACHO/Shutterstock.com)

How aware are people about this?

Even though people may be aware that their mobile phones have GPS and that their name and address are in a public records database somewhere, it's far less likely that they realize how their data can be combined to make new predictions. That's because privacy policies typically only include vague language about how data that's collected will be used.

In a January survey, the Pew Internet and American Life project asked adult Facebook users in the U.S. about the predictions that Facebook makes about their personal traits, based on data collected by the platform and its partners. For example, Facebook assigns a "multicultural affinity" category to some users, guessing how similar they are to people from different race or ethnic backgrounds. This information is used to target ads.

The survey found that 74 percent of people did not know about these predictions. About half said they are not comfortable with Facebook predicting information like this.

In my research, I've found that people are only aware of predictions that are shown to them in an app's user interface, and that makes sense given the reason they decided to use the app. For example, a 2017 study of fitness tracker users showed that people are aware that their tracker device collects their GPS location when they are exercising. But this doesn't translate into awareness that the activity tracker company can predict where they live.

In another study, I found that Google Search users know that Google collects data about their search history, and Facebook users are aware that Facebook knows who their friends are. But people don't know that their Facebook "likes" can be used to accurately predict their political party affiliation or sexual orientation.

What can be done about this?

Today's internet largely relies on people managing their own digital privacy.

Companies ask people up front to consent to systems that collect data and make predictions about them. This approach would work well for managing privacy, if people refused to use services that have privacy policies they don't like, and if companies wouldn't violate their own privacy policies.

But research shows that nobody reads or understands those privacy policies. And, even when companies face consequences for breaking their privacy promises, it doesn't stop them from doing it again.

Requiring users to consent without understanding how their data will be used also allows companies to shift the blame onto the user. If a user starts to feel like their data is being used in a way that they're not actually comfortable with, they don't have room to complain, because they consented, right?

In my view, there is no realistic way for users to be aware of the kinds of predictions that are possible. People naturally expect companies to use their data only in ways that are related to the reasons they had for interacting with the company or app in the first place. But companies usually aren't legally required to restrict the ways they use people's data to only things that users would expect.

One exception is Germany, where the Federal Cartel Office ruled on Feb. 7 that Facebook must specifically ask its users for permission to combine data collected about them on Facebook with data collected from third parties. The ruling also states that if people do not give their permission for this, they should still be able to use Facebook.

I believe that the U.S. needs stronger privacy-related regulation, so that companies will be more transparent and accountable to users about not just the data they collect, but also the kinds of predictions they're generating by combining data from multiple sources.




All Comments: [-] | anchor

theNJR(10000) about 4 hours ago [-]

Funny how the brain works.

People do realize something is happening. It's why so many think Facebook is "listening" to their conversations then showing ads for "products I've never searched for then talked to about with a friend". No, FB inferred you would buy it because your friend just did.

It's hard to comprehend the effects of data collection. Which, of course, makes it even more powerful.

cc439(10000) about 1 hour ago [-]

How would FB know what my friends have purchased? I've experienced the 'ad that's way too specific to a conversation held just moments ago to be mere coincidence' phenomenon several times. Each incident has involved the kind of product neither of us owned, none of our other friends would ever buy, and dull/boring enough that no one would have a reason to post/chat about it on Facebook.

drdeadringer(4032) about 4 hours ago [-]

Just yesterday I read a post on Reddit where the OP was wondering about an online ad being mere coincidence or some deep data-collection plot.

Per the story: They had purchased ice cream at the grocery store using a credit card 'never used for online purchases', and then at home they see an online ad for that very brand/flavor of ice cream. This raised alarm bells, hence the post to sanity-check.

Sometimes it's like we shouldn't fear the Terminator but the access terminal in our pocket. Other times it seems like both, or neither.

crispyambulance(3699) about 5 hours ago [-]

The thing that makes me increasingly concerned is the possibility of an entity using this data-surveillance NOT so they can sell us more crap, but for ulterior malicious purposes.

We already got a taste of what this can mean with cambridge analytica.

But what if some hate group (or other extremist org) with deep pockets decided to buy up and use such data in more sinister ways, targeting individuals or organizations at large scale, developing 'Stasi style' dossiers to use as leverage for future actions?

The information would not need to be 'perfect' but it could get increasingly more accurate over time depending on how much attention they focus on their targets.

bilbo0s(4016) about 4 hours ago [-]

Just imagine if it's just some tech guy, maybe unemployed or something, tired of being broke, who just needs to pay rent or something? That's what's concerning, people don't even need to have a cause, they can just be desperate. The government will be tracking the people who have a cause. You can go to the government to get help against those people because those people have something to lose. What about the people with nothing to lose? More and more people are out of work or underemployed, but I'm pretty sure the number of people who need to pay their bills remains constant.

All of a sudden all this data starts to make well-off people look more like meal tickets. Imagine how easy it would be to get money out of that rich-looking lawyer guy who's having an affair? Or maybe the well-off-looking doctor lady who voiced some views about blacks that her hospital, and the local NAACP, might find interesting?

This economy, combined with massive data retention and security breaches, will make for some real perverse incentives in the future. We could conceivably get to the point where all you'd need to be is some guy with internet access who needs to pay rent by the end of the month.

rixrax(3967) about 2 hours ago [-]

If you haven't seen The Lives of Others [0], you probably should. It speaks pretty directly to what the parent is proposing.

[0] https://en.m.wikipedia.org/wiki/The_Lives_of_Others

dontbenebby(10000) about 3 hours ago [-]

>The thing that makes me increasingly concerned is the possibility of an entity using this data-surveillance NOT so they can sell us more crap, but for ulterior malicious purposes.

Like causing a measles outbreak?

https://www.oregonlive.com/clark-county/2019/02/measles-outb...

lettergram(1518) about 6 hours ago [-]

I wrote about this relatively recently, basically there's now enough data and enough good systems out there that companies can start predicting what you'll do next.

This has been a thing since credit scores. However, now it's to the point where they can even mimic your voice and predict how you'll respond to situations.

We are walking dangerously and blindly into a nightmare right now, and no one seems to realize it.

echevil(3719) about 3 hours ago [-]

Seems like it. I tend to watch specific type of videos at specific time of day on YouTube. Even though I watch tons of other videos on the same account, YouTube can do a pretty good prediction at that time of day and present me the videos I'm going to watch. I find it so useful!

specialist(4030) about 3 hours ago [-]

Everything about every person, living or dead, is known in near real-time.

Seisint (bought by LexisNexis) was being used to solve cold cases in the mid-aughts, just by using fragmentary data and sifting through millions of demographic profiles to see who matched.

[FWIW, that 'What data brokers know' table is pretty good.]

--

If we choose to protect people's privacy and give individuals control over what is publicly known about them, we'll need to encrypt demographic data at rest.

Meaning translucent database strategies. Just like how password files are salted and then encrypted. You need your pass

Meaning using universal identifiers. Like implementing Real ID.

It is counterintuitive that identifying (cataloging) everyone is how we protect everyone. But if there's another way, I haven't heard of it.
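The "translucent database" idea gestured at above, storing salted hashes instead of raw identifiers, can be sketched roughly as follows. This is a toy illustration, not a vetted scheme; the names and the salt handling are assumptions:

```python
import hashlib
import hmac
import os

# A per-table secret salt, kept outside the database (an assumption of the scheme).
SALT = os.urandom(16)

def opaque_id(identifier: str) -> str:
    """Store this instead of the raw identifier: it supports exact-match
    lookups, but it can't be reversed, and it can't be joined against
    another table that uses a different salt."""
    return hmac.new(SALT, identifier.encode(), hashlib.sha256).hexdigest()

# The database holds hashes, never the raw values.
db = {opaque_id("alice@example.com"): {"age_range": "30-39"}}

# A lookup by someone who already knows the real identifier still works...
print(opaque_id("alice@example.com") in db)   # True
# ...but a data broker holding a copy can't enumerate who is in the table.
```

The point is the asymmetry: legitimate exact-match queries keep working, while bulk collation of the stolen or sold table becomes much harder.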

--

There will be some upsides to finally having one master identifier.

Data quality will dramatically improve.

Truly portable health records.

Nearly 100% accurate voter registration (& eligibility).

The government 'census' will be just running a report.

We'll daylight all the bad data broking actors.

lioeters(4014) about 2 hours ago [-]

Setting aside the ethical/political implications of a universal identifier for everyone, your description makes me wonder about the technical implementation of the ID itself.

My first thought was whether it's possible to design the syntax of the IDs so that they're not just sequential or random, but have some inherent properties that make them easier to organize, i.e., for sorting/categorizing. Kind of like Open Location Code [0], but for people. Since they should be immutable (same ID for a lifetime), I suppose it could encode birth date/location, or maybe genetic 'markers'... (Edit: On the other hand, that would by itself be a leak of private data.)

Once that's globally practiced (easier said than done!), there could be a searchable database of all registered individuals on the planet. I could see the practical advantages of having such a system, but it sure does have a hint of dystopian future.

[0] https://en.wikipedia.org/wiki/Open_Location_Code
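To illustrate the leak mentioned in the edit above: any ID whose syntax encodes personal attributes discloses them to anyone who can read the ID. A toy example with an invented format:

```python
from datetime import date

def make_id(birth: date, region_code: str, serial: int) -> str:
    # Hypothetical "structured" ID: sortable and self-describing,
    # which is exactly the problem.
    return f"{birth:%Y%m%d}-{region_code}-{serial:06d}"

def decode(id_: str):
    """Anyone holding the ID can trivially recover the encoded attributes."""
    ymd, region, serial = id_.split("-")
    return date(int(ymd[:4]), int(ymd[4:6]), int(ymd[6:])), region, int(serial)

uid = make_id(date(1990, 5, 17), "GB", 123)
print(uid)          # 19900517-GB-000123
print(decode(uid))  # the ID itself discloses birth date and region
```

This is why national ID schemes that embed birth dates (several real ones do) are considered a privacy defect: the identifier doubles as a data record.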

NeedMoreTea(3667) about 7 hours ago [-]

Is this the only reason the current house of cards stays up and functions? I think so.

Almost no one realises what companies predict and infer from their data, or the extent of its collection. Once they see some of the surface effects they start calling it creepy or scary.

If people ever start realising the true extent, expect a backlash, surely?

teddyh(2639) about 7 hours ago [-]

> If people ever start realising the true extent, expect a backlash, surely?

No. Since it has been going on for so long, people, when they realize it now, will rationalize:

• To not have realized it for this long, I must have been stupid.

• I am not stupid.

• Therefore, I must be OK with what is happening.

And they will come up with numerous ridiculous rationalizations – fake reasons to be OK with the current situation, all to avoid admitting to themselves that they did not realize it.

everdev(3005) about 6 hours ago [-]

> If people ever start realising the true extent, expect a backlash, surely?

I'm not sure. We seem to be in a sharing culture where we want to broadcast on the internet where we are, who we're with, what we're eating, what we think about current events, etc.

And it seems to be a source of pride to publicly identify with political groups and social movements.

So, I feel like people freely share much of this info.

mtgx(138) about 6 hours ago [-]

Yes it is. And I always mention it on this board and others whenever people start concluding that 'people just don't care about privacy.'

It's not that they don't care; they just don't understand the true implications of someone like Google or Facebook having tracking pixels on websites all over the web, or tracking wherever you go, and the thousand ways in which that data could be misused by them, their partners, or people stealing that data from those companies.

I've noticed from other older stories that even pro-surveillance politicians don't understand what they are pushing for, as some of them were later 'shocked' to discover that those very powers could also be used to gather information on them. And then they started singing a different tune about the surveillance powers spy agencies should be given.

imgabe(1552) about 7 hours ago [-]

> For example, data about a mobile phone's past location and movement patterns can be used to predict where a person lives, who their employer is, where they attend religious services and the age range of their children based on where they drop them off for school.

Is it me? I don't consider any of these to be particularly sensitive information.

Where you live: We used to have these things called 'phone books' where they listed the name, address, and phone number of everyone in town. The world didn't collapse.

Who their employer is: My name and picture are listed on my employer's public website. Not exactly hard to find.

where they attend religious services: I don't, but of the people I know who do, nobody has ever considered it something they need to hide. Many would want to tell you and ask you to join them.

age range of their children: So? If somebody knows you have a kid aged 5-10, then they can...what?

I mean, I still try to limit how much information I expose online, but if anything this makes me less worried rather than more.

grawprog(3801) about 5 hours ago [-]

> For example, data about a mobile phone's past location and movement patterns can be used to predict where a person lives, who their employer is, where they attend religious services and the age range of their children based on where they drop them off for school.

Well, as far as Google can tell, I warp immediately from Vancouver to Calgary once I switch from wifi to data and back again every day. I'm guessing it has to do with my phone company, but they always give me results from Calgary and show my location as being in Calgary based on my internet address.

Broken_Hippo(10000) about 6 hours ago [-]

Now pretend all this is public and someone stalks you. Maybe it is your abusive ex that occasionally sends you creepy messages and harasses anyone you date.

It isn't like we have many protections in place to keep the bad folks from the good. And I'll add that phone books didn't have everyone in town - only the person who paid for the line. For many, you had to look up a family member's or roommate's name to get their phone number and address. Your name and picture might be, but most people's are not. You might not want folks in your conservative town to know which brand of religion you follow. You might not want those folks in that conservative town to know you go to a gay bar most Saturday nights either.

throw2016(10000) about 5 hours ago [-]

This is a narrow and reductionist view that misses the scope and scale of the issue to make it personal when this is not personal.

Nobody is interested in the random individual so looking at this the purely personal level is pointless.

It's the ability to do this enmasse, 'collect' and 'collate', analyze and drill down to 'people of interest' and the power it gives the data holders that makes it toxic and ominous.

marcinzm(10000) about 6 hours ago [-]

In general, when you're in the majority this information doesn't matter much since you're washed out among the crowd. When you're in a minority, however, it can be used for nefarious purposes by the right group. For example, the KKK may like to know all the black people who live in predominantly white neighborhoods so they can set a few crosses on fire. Or maybe you live in a fundamentalist Christian town and prefer your neighbors not know you're not actually Christian, since they'd harass you (and the cops would help).

jacquesm(42) about 6 hours ago [-]

That's a rehash of a number of silly 'nothing to hide' pseudo arguments.

Phone books were not instantly searchable in bulk all across the globe. Who your employer is is not important, but in bulk to know who your employer employs and to be able to access that information in bulk and within a couple of milliseconds gives a lot of power to outsiders. I should know because I use that power regularly for my work, and trust me, lots of people we read up on would do better to keep a much lower profile online. It does not benefit their employers either.

Whether you attend religious services or not has been used to target (and kill) people in the past, and if you are willing to extrapolate a bit, was used for mass murder.

The age range of your children may not be so important, the fact that you have children may be, depending on your station in life.

The fact that you personally have not been inconvenienced by any of this - yet - is not a datapoint worth recording.

ses1984(10000) about 6 hours ago [-]

Putting all this info together used to be much more labor intensive, basically infeasible to do at scale. The scale problem is solved, and now companies can mine that data for profit.

Maybe it's not that scary in a western society that's generally considered free, but imagine that power in a fascist state.

rixrax(3967) about 2 hours ago [-]

I'm increasingly thinking that targeted ads are eerily similar to Isaac Asimov's psychohistory [0]. E.g. you cannot reliably predict individual behavior, but with enough of the right data you can reliably predict how a large enough population will act.

This is why individually we often feel that they're off the mark, or that we're savvy enough to ignore the ads or political or other targeting. But like others have pointed out, the data is out there, and 'they' have infinite tries to get it right. And more importantly, it works already today. And it's impacting everyone, so as individuals we also get impacted in indirect and subtle ways when, e.g., a friend of ours raves about a new toy she bought without even realizing that she chose this product over the other because of all the ads that she never clicked.

[0] https://en.m.wikipedia.org/wiki/Psychohistory_(fictional)
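The aggregate-versus-individual point can be demonstrated with a tiny simulation (base rate invented): knowing only the population rate tells you almost nothing about any one person, but it pins down the population total very precisely:

```python
import random

random.seed(0)
TRUE_RATE = 0.30      # invented: fraction of the population that will buy
POP = 100_000

# Each person buys with probability TRUE_RATE; no individual signal exists.
outcomes = [random.random() < TRUE_RATE for _ in range(POP)]

# Best individual-level strategy given only the base rate: always predict
# the majority class ("won't buy").
individual_accuracy = sum(not bought for bought in outcomes) / POP

# Population-level estimate of the purchase rate:
aggregate_estimate = sum(outcomes) / POP

print(f"individual accuracy: {individual_accuracy:.3f}")  # unimpressive, ~0.70
print(f"aggregate estimate:  {aggregate_estimate:.3f}")   # very close to 0.30
```

The sampling error of the aggregate shrinks like 1/sqrt(N), so at population scale the "psychohistory" prediction is tight even while every per-person prediction stays weak.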

hari_seldon_(10000) about 1 hour ago [-]

Great analogy.

FlowNote(10000) about 1 hour ago [-]

If data collection can be sifted to discover behavior of groups...

And groups of people can have that behavior correlated to cultural and ethnic and racial factors...

Then, technically, isn't all of Silicon Valley violating the Civil Rights Acts of the 1960s? For example, let's say black males statistically swipe phones a certain length and certain time... doesn't this mean ads targeting them can be engaging in disparate impact?

fucking_tragedy(10000) about 1 hour ago [-]

> Then, technically, isn't all of Silicon Valley violating the Civil Rights Acts of the 1960s? For example, let's say black males statistically swipe phones a certain length and certain time... doesn't this mean ads targeting them can be engaging in disparate impact?

Yes, and lawyers have had, and will continue to have, a field day in every instance that they can prove this is true.

darkpuma(10000) about 1 hour ago [-]

Maybe. If such a swipe gesture discrepancy existed, ML could pick up on it as a proxy for race, despite having no concept of race and despite no human directing it to do so. One example I've heard of is lending software learning to use zip codes as a proxy for race, then systematically denying loans to minorities.

Much more egregious than this though is for years facebook was apparently allowing realtors to target only certain ethnicities. This was a case of deliberate human-driven discrimination, and as far as I know nobody has been held accountable for it. So far the tech industry has proven itself pretty good at getting around the law.
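The zip-code-as-proxy effect from the lending example can be shown with made-up numbers: a model that never sees the protected attribute still splits equally creditworthy applicants along group lines, because zip code carries the signal:

```python
# Invented numbers throughout. The "model" is trained only on zip code and
# historical approval rates, never on the protected attribute itself.
historical_approval_rate = {"10001": 0.2, "20002": 0.9}  # biased history, per zip

def model_approves(zip_code: str) -> bool:
    # A model fit to biased history simply reproduces the per-zip rate.
    return historical_approval_rate[zip_code] >= 0.5

# Two equally creditworthy applicants from group-correlated zip codes:
applicants = [("10001", "group A"), ("20002", "group B")]
for zip_code, group in applicants:
    print(group, "approved" if model_approves(zip_code) else "denied")
# group A denied, group B approved: the decision splits along group lines
# even though "group" never appears in the model's inputs.
```

Dropping the protected column does nothing here, which is the legal crux of disparate impact: the outcome, not the input schema, is what's discriminatory.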

mattkrause(10000) about 6 hours ago [-]

I'm not thrilled about the amount of information being hoovered up, but....the predictions being made with it aren't terribly impressive.

Facebook should know a great deal about me, but the advertising categories it puts me in are either completely obvious (based on locations and group memberships that I've explicitly told it), or bonkers. It currently thinks I'm part of multiple wildly incompatible religions and political groups and am interested in a weird collection of abstract concepts ("decay"?)

Amazon thinks that my interests are dominated by textbooks and vacuum cleaners (if only!)

Twitter has correctly sussed out that someone who mostly follows scientists might be interested in science....or dogs.

mlthoughts2018(3498) about 3 hours ago [-]

There's a distinction between what these companies can predict about you vs what they can roll up into products that other entities want to pay for.

For example, a ton of advertisers will predetermine some targeting criteria based on simple demographics, brand loyalty or rewards program data, etc. and then commit to it for a whole ad campaign, even if it means leaving money on the table by not electing to dynamically shift into more precise targeting.

For these clients, no amount of fancy predictive capability or algorithmic targeting will matter, they just don't care. So Facebook or whomever just offers them big, sloppy and easy-to-conceptualize segments or buckets of users. They just want aggregate intelligence anyway, so nobody in the transaction cares much if it's wrong about your age bracket or general TV interests.

This doesn't mean Facebook is unable to produce far more alarming forecasts of your behavior, or assign you to categories based on things like political activism, privacy conscientiousness, or detect personal life details like your location trail, purchases, etc.

The really scary stuff just doesn't end up being surfaced in connection to lowest common denominator adtech products.

robertAngst(3996) about 6 hours ago [-]

Came here to say this. It matches my own experience, and an Amazon software engineer agreed.

You can do a lot with data, but in the end it's often uselessly applied.

qaq(4018) about 6 hours ago [-]

Totally agree. I wonder if they ever had a control group where they showed things at random, just to measure how good their tech is.

deogeo(3999) about 1 hour ago [-]

What if the data is used to find whistleblowers? Or predict them? I'm sure there's plenty of corporations and governments that will be willing to pay for such services.

https://www.abc.net.au/triplej/programs/hack/how-team-of-pre...

pdkl95(3153) about 5 hours ago [-]

> the predictions being made with it aren't terribly impressive

So what? They still have the data and can refine their methods tomorrow. Today their predictions might be low quality, but they can retry as many times as they want. The problem is not the predictions they are making today; it's the many predictions (or inferences) they are able to keep making in the future.

> political groups

Remember that some types of advertising are not targeted. Some political or branding advertising is intended to reach 'all voters', or maybe a very broad category like 'every Californian of voting age'. Branding campaigns don't care if you're interested in e.g. vacuum cleaners. They just want you to think of their name first every time you happen to think of or hear about vacuum cleaners.

edit: (Multiple contradicting groups could be pushing ads at your (very general) demographic.)

> science....or dogs

Many scientists like dogs?

> Amazon thinks that my interests

No, they think that showing you textbooks and vacuum cleaners has a greater chance of increasing their revenue, according to various statistical models. Targeted advertising isn't about targeting what you are interested in. It's about letting other people target you with what they think they can sell you.

edit2: Of course, it could also be a terrible model trying to use data in stupid ways. I'm just suggesting that there are many plausible explanations.
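That "what they can sell you" framing is just expected-value ranking. A sketch with invented numbers:

```python
# Invented numbers: (item, predicted click rate, revenue per click).
candidates = [
    ("textbook",         0.020, 4.00),
    ("vacuum cleaner",   0.015, 6.00),
    ("novel you'd love", 0.050, 0.50),
]

# Rank by expected revenue, not by how interested the user is.
ranked = sorted(candidates, key=lambda c: c[1] * c[2], reverse=True)
for item, ctr, revenue in ranked:
    print(f"{item}: expected value {ctr * revenue:.3f}")
# The item the user is most likely to click ranks last, because the
# platform is optimizing its own revenue.
```

Seen through this lens, "Amazon thinks my interests are textbooks and vacuum cleaners" really means "textbooks and vacuum cleaners maximize expected revenue for this impression", which can look nothing like the user's actual tastes.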

marcinzm(10000) about 6 hours ago [-]

Amazon has emailed me multiple times to answer questions about an item I never bought (in the email they literally say 'as the owner of' which I'm not). If they can't even get that right then I suspect they have a lot of data quality issues which would result in bad predictions.

darkpuma(10000) 37 minutes ago [-]

Something you should keep in mind is that companies like Amazon are all well aware of the anecdote about Target and teenage pregnancies. And the lesson they learned from that story is they should conceal from the user the full extent of what they know about that user, to avoid creeping people out.

If you receive 10 product recommendations, perhaps only one of them is actually targeted with a very high degree of confidence and the other 9 are basically noise added to deliberately deceive you, to lead you to believe Amazon knows less about you than they really do.

luckylion(10000) about 5 hours ago [-]

> I'm not thrilled about the amount of information being hoovered up, but....the predictions being made with it aren't terribly impressive.

Conspiracy theory: what if they intentionally throw random garbage in there so you don't get paranoid? The things they want to hit you with will still be there, but they'll be surrounded by misses, and you're inclined to think 'wow, they sent me an ad for a new dishwasher just as soon as mine broke, but they also sent me an ad for cat toys knowing full well I'm allergic to cats'? The dishwasher is still a great hit, but less suspiciously so.
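Whether or not any platform actually does this, the camouflage strategy itself is trivial to implement. A hypothetical sketch:

```python
import random

def obfuscated_recs(targeted, catalog, k=10, seed=None):
    """One genuinely targeted item hidden among k-1 random decoys."""
    rng = random.Random(seed)
    decoys = rng.sample([item for item in catalog if item != targeted], k - 1)
    recs = decoys + [targeted]
    rng.shuffle(recs)
    return recs

catalog = [f"item-{i}" for i in range(100)]
recs = obfuscated_recs("item-7", catalog, k=10, seed=1)
print(recs)                 # looks mostly random...
print("item-7" in recs)     # ...but the real hit is always in there: True
```

From the user's point of view nine of the ten recommendations are misses, so the one precise hit reads as luck rather than surveillance.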

brudgers(126) about 3 hours ago [-]

Low cost drives ordinary predictions. The average prediction is probably a few milliseconds of in-memory operations at best. No network calls. Certainly no IOs to disk. Even with more work, you're targeted because you're among the best matches within your zip code. The best results do not imply good results. Ordinarily, ads reflect what an advertiser specified, not what the platform thinks is most likely to be successfully sold to you.

cirgue(4023) about 5 hours ago [-]

I do modeling for an advertising recommendation engine (not at Amazon). We do controls against both human-made rulesets and random, and we consistently outperform both by a big margin. However, click through rate is always really low, and a model that massively increases revenue still serves irrelevant content most of the time, and still seems random from the POV of the user. The point of ML in advertising recommendation is to guess better in aggregate, not get it right all the time.
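The "better in aggregate" point above can be made concrete with a simulated comparison against a random baseline (all rates invented):

```python
import random

random.seed(42)
IMPRESSIONS = 200_000

def clicks(ctr, n):
    """Simulate n ad impressions at a given click-through rate."""
    return sum(random.random() < ctr for _ in range(n))

# Invented rates: both tiny, i.e. "seems random to the user" either way.
random_ctr, model_ctr = 0.002, 0.006

baseline = clicks(random_ctr, IMPRESSIONS)
model = clicks(model_ctr, IMPRESSIONS)

print(f"random baseline: {baseline} clicks")
print(f"model:           {model} clicks")
print(f"lift: {model / baseline:.1f}x")  # a big aggregate win, yet ~99.4% still miss
```

A 3x revenue lift is a huge business result even though, from any one user's seat, the model still serves irrelevant content more than 99% of the time.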




(84) Is it creepy when brands pester you on social media?

84 points about 8 hours ago by edent in 295th position

shkspr.mobi | Estimated reading time – 8 minutes | comments | anchor

You're sat in a pub, chatting with your mates. You start to moan about how the cheap lager they serve gives you a headache. All of a sudden, a stranger runs up to you and says:

'Oh no! Headaches? Have you tried the refreshing taste of Pepsi® Cola? It's the Flavour That Keeps Giving™!'

I suspect you would complain to the bar manager and then find a new watering hole. You might, perhaps, tell the stranger never to contact you again.

And, yet, this is what happens fairly regularly on Twitter. I was complaining to my ISP (Virgin) when a different ISP (Sky) butted in to the conversation to sell their wares.

I can definitely see how this would get in the way of making your day a productive one. Do you find this happens often? If it does, I'd be happy to chat to you about a reliable alternative with us during your lunch break! ☕ PM me for a chat! ^JH

— Sky (@SkyUK) February 13, 2019

WHAT THE JUDDERING FUCK? Why would anyone want a brand hijacking a conversation like that? Is that sort of unsolicited electronic marketing even legal?

How are they doing this?

Companies like Sky are using BrandsEye to mine social media to send you unsolicited advertising.

Their whole shtick is 'find people complaining about your competitors - then bombard them with adverts.' Innovative... And no mention of legal obligations or data protection.

In fairness, BrandsEye have a privacy policy where they promise 'We will treat your information as confidential.' Except for all these people we're happy to give it to. Interesting use of the word 'confidential' there!

They do let you opt-out of their creepy marketing database (not that I ever opted in).

So I dropped a Subject Access Request to [email protected] - to see what fascinating insights they had on me. It was truly pathetic.

What data did they have on me?

A few days later, they sent me 104 JSON files of my data. Some only contained a single tweet, like this one:

Oooh! Saffa mock champagne. Pretty good! 🥂 . Wine from @BoschendalWines via @Vivino: https://t.co/CAweXonshx pic.twitter.com/NYDuKTUgKe

— Terence Eden (@edent) June 24, 2017

Here's the full data in the file:

[{
    "link": "http://twitter.com/edent/statuses/878697839135731714",
    "published": "2017-06-24T19:34:24.000+0000",
    "authorLocation": "Oxford, UK",
    "authorTimezone": "London",
    "city": {
        "name": "Oxford"
    },
    "region": {
        "name": "England"
    },
    "country": {
        "name": "United Kingdom"
    },
    "language": {
        "name": "English"
    },
    "category": {
        "label": "Consumer"
    },
    "gender": {
        "label": "Male"
    }
}]

Not much, is it? My location, gender, and that I'm a 'consumer'. What amazing insights they must be generating for trusted brands. FFS...
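A dump of files like this can be summarized in a few lines of Python (the directory name is hypothetical; the field names follow the file above):

```python
import glob
import json
from collections import Counter

def summarize(paths):
    """Count mentions and tally which fields appear across SAR JSON files."""
    fields, mentions = Counter(), 0
    for path in paths:
        with open(path) as f:
            for mention in json.load(f):   # each file is a JSON array of mentions
                mentions += 1
                fields.update(mention.keys())
    return mentions, fields

if __name__ == "__main__":
    files = glob.glob("brandseye-sar/*.json")   # hypothetical directory name
    mentions, fields = summarize(files)
    print(f"{mentions} mentions across {len(files)} files")
    for field, count in fields.most_common():
        print(f"  {field}: {count}")
```

Running something like this over all 104 files is a quick way to see exactly which attributes a broker has recorded and how often.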

Other files held dozens of Tweets. Often on a single theme. Here's one which is just me talking about buses:

[{
        "link": "http://twitter.com/edent/statuses/1062220277738860544",
        "published": "2018-11-13T05:47:03.000+0000",
        "extract": "@mskarengibb it is about a 30 minute walk into the city centre. Or, there's a bus stop just outside the venue. Google Maps will show you which bus to catch."
    },
    {
        "link": "http://twitter.com/edent/statuses/1061890559864963075",
        "published": "2018-11-12T07:56:53.000+0000",
        "extract": "@mskarengibb you'll need to go into the centre of Oxford and catch the busy from the Gloucester Green bus station,\nOxford OX1 2BX\nhttps://goo.gl/maps/kdJYicoRbpB2"
    },
    {
        "link": "http://twitter.com/edent/statuses/1012358934521372672",
        "published": "2018-06-28T15:35:53.000+0000",
        "extract": "@TfL please can I get a response to this?\nHaving a bus stop listed in the wrong location is very annoying. Thanks."
    },
    {
        "link": "http://twitter.com/edent/statuses/1011984324982394882",
        "published": "2018-06-27T14:47:19.000+0000",
        "extract": "Some idle thoughts while I wait for a bus...\n\nDo you notice when you receive a 'Nudge'?\nIf so, how does it make you feel?\n\nI'm talking about those messages which include things like '95% of people pay their bill on time' or 'people in [your postcode] tend to donate £X' etc."
    },
    {
        "link": "http://twitter.com/edent/statuses/1011857722038505472",
        "published": "2018-06-27T06:24:14.000+0000",
        "extract": "*sigh* How do I view the latest NaPTAN database & see who is responsible for incorrect information?\nI assume it is TfL's responsibility to update bus stops in the capital. https://twitter.com/OxfordBusCo/status/1011285002162667520"
    },
    {
        "link": "http://twitter.com/edent/statuses/1011277489212198913",
        "published": "2018-06-25T15:58:36.000+0000",
        "extract": "@OxfordBusCo this *still* isn't fixed. Signs up at the bus stop - but your app is still wrong.\nI reported this a month ago, you keep promising to correct it.\nCan you please tell me what's going on? https://t.co/2yIR2tJrMM"
    },
    {
        "link": "http://twitter.com/edent/statuses/1008732886378471424",
        "published": "2018-06-18T15:27:16.000+0000",
        "extract": "@OxfordBusCo the Marylebone Road bus stop listed in your app is wrong. When will it be fixed? https://t.co/QseqgIY29e"
    },
    {
        "link": "http://twitter.com/edent/statuses/1008618346651242496",
        "published": "2018-06-18T07:52:07.000+0000",
        "extract": "@OxfordBusCo it is now Monday, and the app still isn't fixed.\nYour website says one thing, the app says another, the physical bus stops say a third.\nYour customers are confused.\nPlease tell me when this will be fixed? https://t.co/s9md2f8Nfq",
    },
    {
        'link': 'http://twitter.com/edent/statuses/1006518900278906880',
        'published': '2018-06-12T12:49:40.000+0000',
        'extract': 'Do you like open data? \uD83D\uDCCA\nLove statistics about bus travel? \uD83D\uDE8C\nCrave files in Open Document Format? \uD83D\uDD13\n\nDo we have a website for you! \n\nhttps://www.gov.uk/government/statistical-data-sets/bus01-local-bus-passenger-journeys https://t.co/OXmOBHiBOS',
    }]

I can only assume that a bus company wants to know what sad, middle-aged blokes think about buses in Oxford. Hopefully my dazzling Tweets will provide value added insights into their corporate synergies.

How are they doing this?

BrandsEye's website claims that they use 'a proprietary mix of AI and crowdsourcing'.

I'm going to call shenanigans on that. As far as I can tell, they're using the bog-standard Twitter API to mine keywords and locations. They might be using some basic Natural Language Processing to weed out the obvious errors. But judging from their LinkedIn page, their 'crowdsourcing' is cheaply paid students in Kenya, and freelancers in Jordan, Venezuela, Mauritius, and Mumbai.

It's the old SpinVox trick - sell AI magic to customers, and get Mechanical Turks to do the real work.

Is this creepy?

OK, so BrandsEye don't appear to be doing a great job of mining social media. But is what they're doing ethical or useful?

On the one hand, I spew my tweets out into the ether. Anyone with an Internet connection has the technical capability to read them. But, those Tweets contain my personal data - be it my text, location, or images.

We've moved to a world where companies have to get an explicit opt-in to process our data and to use it for marketing.

I would argue that I have not opted-in to Sky's marketing. I have given no consent for them to send me electronic spam - no matter how highly targeted. Similarly, I haven't told BrandsEye that they can process my personal data. It isn't my job to opt-out of every half-baked business; it is their job to convince me to opt-in.

Even if the law were different, and companies could spam with impunity, is this an effective marketing strategy? Superficially, I get the appeal of searching for people complaining about your competitors - and then targeting them. But does it work?

Using special AI Algorithms and BlockChain Technology, I spent half-an-hour looking through Sky's BrandsEye-powered marketing Tweets.

It looks so needy and pathetic. Hundreds of spammy messages. The only replies to Sky seem to be telling them to piss off. I'm sure once in a while they hit a receptive mark - but is it really worth annoying so many people?

The future is not brands sliding into your conversations like they're your mates.




All Comments: [-] | anchor

gnicholas(1632) about 2 hours ago [-]

Lots of opinions here! Would love to know what people think about:

1: Size of company — does it matter if the company engaging with a random tweet is big or small? If it's the founder/creator who is doing the outreach, versus some member of the marketing team?

2: Is it OK for a company's twitter account to "like" a tweet that complains about a competitor or voiced a problem that the company's products are aimed to solve?

yoz-y(10000) about 2 hours ago [-]

An opinion of one, but I think it really depends more on whether your service really is an answer for a specific problem. If you complain about an ISP, chances are than any other ISP will have the same problems. However if you complain about something that you currently can't do (e.g.: ask Twitter followers for a solution for X) then a comment from a company actually having a solution might be very welcome.

xefer(4019) about 5 hours ago [-]

If an account is blocked they shouldn't be able to see or interact with your Twitter feed.

Twitter allows for the mass importation of block lists.

Is there an active project to create a master list of corporate accounts to block?

tragic(3994) about 5 hours ago [-]

Apparently somebody tried this to get rid of Alex Jones[0]

[0] https://www.adweek.com/digital/twitter-users-are-blocking-hu...

makecheck(3790) about 3 hours ago [-]

EVERY communication from a company that isn't "thank you for your business" or "invoice attached" is creepy. We need to return to a strict relationship. And I want to have more control: I want key exchange, where once I've paid I can revoke your ability to reach me so I never hear from you again unless I want to reach you again.

Company communications out of the blue come off as having an extremely exaggerated sense of self-importance. I have a hard enough time keeping up with friends and other desirable contacts on a regular basis, yet your stupid company thinks I want to see your weekly E-mail spams or "notifications" (read: ads) every day on my phone? (And we all know they'd send you "notifications" every 5 minutes if the damn platform didn't step in to prevent it.)

implying(10000) 43 minutes ago [-]

Ideally, this would be the case, but delivering information such as product recalls and safety notices would then be impossible.

8note(10000) about 2 hours ago [-]

I'm perfectly okay with beer brands telling me when they're going to be open with a new limited time product.

tyingq(4004) about 6 hours ago [-]

Still a bit creepy, but I enjoy Wendy's trolling:

https://static.boredpanda.com/blog/wp-content/uploads/2017/1...

https://cdn.funpic.us/wendys_tweets_are_great-47-242772.jpg

Edit: If they do ever bag on you, though...ask how they make the chili.

Any questionable patty from the grill goes in an unrefrigerated pan, all day long. Then, they mix in the beans/sauce and freeze it for 30 plus days. It's a notable exception to their militant 'never frozen', 'fresh', schtick. Odd, since the burgers are actually done well, and are fresh.

Alterlife(10000) about 4 hours ago [-]

These are funny.

It is a fine line between being charming and creepy. The person who made the tweets in the article isn't even trying.

rootusrootus(10000) about 3 hours ago [-]

It's not questionable patties, it's ones which did not get used in time for a burger. Cooked a little too long, that probably makes them better for chili, not worse. And they are put in a warming drawer, which is a lot different than an 'unrefrigerated pan'. Also, they freeze the meat for up to 7 days, not 30+.

Honestly, that sounds perfectly fine. The best ground beef has been thoroughly browned, that's probably why people think their chili tastes so good.

taeric(2411) about 5 hours ago [-]

This at least is just them being on social media. The article is about brands soliciting folks online in a clearly automated way with basically no personality.

sokoloff(3748) about 5 hours ago [-]

When you choose to call out a person/company (good or bad) in an electronic public square, I think it's fair for other people/companies to jump into the conversation if they so choose.

ardy42(3982) about 4 hours ago [-]

> When you choose to call out a person/company (good or bad) in an electronic public square, I think it's fair for other people/companies to jump into the conversation if they so choose.

It's also tacky and annoying. None of the intended participants in the conversation want that, and that kind of behavior is either going to push the conversation onto private channels or motivate blocking.

A lot of people value an audience of 'real people like me, minus the corporate hustlers and corporate ad-bots.' It's unfortunate that our current society lets the latter swoop in and choke to death all kinds of good things.

dawnerd(10000) 7 minutes ago [-]

I've tried to get two competing companies to reply in the same thread multiple times. Pretty funny when it does happen and usually problems get solved very quickly. Most recently I had an issue with Tmobile and simply tagging Verizon too really got the ball rolling with Tmobile support.

NeedMoreTea(3667) about 4 hours ago [-]

So if you were having a discussion about someone in Starbucks you'd be happy for me to change table and jump into the conversation?

jdormit(3956) about 5 hours ago [-]

I honestly don't understand complaints like this. When you post something on the internet (on a service that has a documented, public API!) anyone can see it, by definition. Twitter isn't your local pub - it's a public forum visited by millions of people daily. If you don't want those people reading and reacting to the things you write, don't write them on Twitter!

And yeah, brands aren't people. But guess what? Brands are made out of people, and those people have just as much a right to be on Twitter as you do!

There are lots of forms of electronic communication that are non-public. If you don't want people reading what you say, use those.

/rant

mattmanser(3514) about 5 hours ago [-]

Isn't this actually a damning indictment of how normal humans expect social networks to work, versus how they actually do?

You don't expect big corporations to have spies listening to every conversation in a random party in your local park, but you don't mind other people joining in, it's kinda the point.

People think of social media as a big party of real people, not as a massive corporate espionage listening to every conversation for certain trigger words so they can slimely sidle up, butt into the conversation and try and sell you something.

gus_massa(1386) about 4 hours ago [-]

I don't mind if they read the post. The annoying part is that they send spam.

TheTruth321(10000) about 5 hours ago [-]

If you're using public information to make 'unsolicited' approaches, then, and please hear it clearly ...

[email protected] brands, and all who sail in them.





Historical Discussions: Show HN: Cortex – machine learning infrastructure for developers (February 14, 2019: 72 points)

(76) Show HN: Cortex – machine learning infrastructure for developers

76 points 2 days ago by deliahu in 10000th position

github.com | Estimated reading time – 7 minutes | comments | anchor




All Comments: [-] | anchor

camuel(10000) about 4 hours ago [-]

How does this compare to KubeFlow?

deliahu(10000) about 3 hours ago [-]

We have a lot of respect for the work that the KubeFlow team is doing. Their focus seems to be on helping you deploy a wide variety of open source ML tooling to Kubernetes. We use a more narrow stack and focus more on automating common workflows.

For example, we take a fully declarative approach; the "cortex deploy" command is a request to "make it so", rather than "run this training job". Cortex determines at runtime exactly what pipeline needs to be created to achieve the desired state, caching as aggressively as it can (e.g. if a hyperparameter to one model changes, only that model is re-trained and re-deployed, whereas if a transformer is updated, all transformed_columns which use that transformer are regenerated, all models which use those columns are re-trained, etc). We view it as an always-on ML application, rather than a one-off ML workload.

gleenn(3934) about 19 hours ago [-]

Somewhat unfortunate name clash with the now-back-burnered deep learning framework written in Clojure, also called Cortex: https://github.com/originrose/cortex

TaupeRanger(10000) about 8 hours ago [-]

LOL...every other AI related project these days is called some variation or part of: Cortex Mental Mind Neural/Neurons

Which is, of course, hilarious, since none of them have anything to do with the brain or mind whatsoever, being slightly advanced computational statistics programs.

deliahu(10000) about 13 hours ago [-]

Thanks for letting us know, we'll take a look

SlowRobotAhead(10000) about 19 hours ago [-]

And as far as SEO goes, ARM Cortex.

Really, if you want people to use your thing, make it easy for them to find.

ipsum2(3586) about 14 hours ago [-]

I work on ML infrastructure at some big company, so I'm always interested to see what's new in this field. This seems to be a thin wrapper around tf.estimator, which provides a majority of the functionality. The only novel things are the YAML config for defining some basic transformations and the data format. It doesn't seem super useful, am I missing something?

deliahu(10000) about 13 hours ago [-]

The main thing we try to help with is orchestrating Spark, TensorFlow, TensorFlow Serving, and other workloads without requiring you to manage the infrastructure. You're right that we have a thin layer around tf.estimator (by design) because our goal is to make it easy to create scalable and reproducible pipelines from building blocks that people are familiar with. We translate the YAML blocks into workloads that run as a DAG on Kubernetes behind the scenes.

doppenhe(3418) about 12 hours ago [-]

curious what kind of infrastructure have you built for deployment, serving and managing models?

weitingster(10000) 2 days ago [-]

Congrats!

deliahu(10000) about 13 hours ago [-]

Thank you!





Historical Discussions: IIR filters can be evaluated in parallel (February 15, 2019: 10 points)

(65) IIR filters can be evaluated in parallel

65 points about 21 hours ago by zdw in 55th position

raphlinus.github.io | Estimated reading time – 8 minutes | comments | anchor

(Author's note: I got slightly stuck writing this, so am publishing a somewhat unfinished draft, largely so I can get to the many other items in my blogging queue (there's a thread on the xi Zulip for those more interested). I can come back to this and do a more polished version if there's interest.)

At the risk of oversimplification, there are basically two types of digital filter: finite impulse response and infinite impulse response. The former is basically taking the dot product of the filter response with a slice of input samples, for each input sample. Analysis of the latter is trickier, and involves internal state, which in general decays over time but never goes exactly to zero.

It is trivial to see how to evaluate FIR filters in parallel; each dot product is independent of the others, so it's basically embarrassingly parallel. There's more to the story, especially as the filter kernel becomes larger. Then the simple O(nm) approach yields to O(n log n) techniques based on FFT, and these techniques are why convolutional reverb is practical. But we know how to do FFT with high parallelism.

By contrast, IIR filters at first glance look like they must be evaluated in series. But the first glance is misleading. Their linear nature means they can be evaluated quite efficiently in parallel, but it's not obvious. I'm not stating a new fact here, it's in the literature, but I haven't found a particularly clear statement of it, nor a clear discussion of whether it only applies to linear time-invariant filters or whether the filter parameters can be modulated (spoiler alert: they can).

Take the simplest IIR filter, the so-called "RC lowpass filter", better known as a one-pole filter:

y[i] = x[i] * c + y[i - 1] * (1 - c)

From this formulation, it's not possible to start calculation of y[i] until y[i-1] is known. This is basically a serial chain of data dependencies. But there's more we can do, because of the linear nature of the filter. Let's see if we can unroll it to do two at a time:

y[i] = x[i] * c + (x[i - 1] * c + y[i - 2] * (1 - c)) * (1 - c)

This calculates the same value (subject to floating point roundoff), but the data dependency is twice as long. On the other hand, it seems to require more multiplications, so it's not obvious it's a win.
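As a sanity check on that algebra, here's a minimal sketch in plain Python (hypothetical inputs and coefficient) verifying that the two-at-a-time unrolled form matches the serial recurrence:

```python
def one_pole_serial(x, c, y0=0.0):
    """y[i] = x[i]*c + y[i-1]*(1-c), evaluated one sample at a time."""
    y, prev = [], y0
    for xi in x:
        prev = xi * c + prev * (1 - c)
        y.append(prev)
    return y

def one_pole_unrolled(x, c, y0=0.0):
    """Same filter with y[i] expressed directly in terms of y[i-2]:
    y[i] = x[i]*c + (x[i-1]*c + y[i-2]*(1-c))*(1-c)."""
    n = len(x)
    y = [0.0] * n
    if n > 0:
        y[0] = x[0] * c + y0 * (1 - c)
    if n > 1:
        y[1] = x[1] * c + y[0] * (1 - c)
    for i in range(2, n):
        y[i] = x[i] * c + (x[i - 1] * c + y[i - 2] * (1 - c)) * (1 - c)
    return y

x = [1.0, 0.0, 0.0, 0.5, 2.0, -1.0]
assert all(abs(u - v) < 1e-12
           for u, v in zip(one_pole_serial(x, 0.25), one_pole_unrolled(x, 0.25)))
```

Note that in the unrolled version the even-indexed and odd-indexed outputs each depend only on y[i - 2], so the two chains could be advanced independently.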

This idea generalizes to more sophisticated filters. Rather than writing it in direct form, which emphasizes the serial evaluation strategy, it's better to use matrices. Basically, y becomes a state vector, and a becomes a matrix. This is known as the state space approach to filters and has many advantages. I'd go so far as to say that direct form is essentially obsolete now, optimizing for the number of multiplies at the expense of parallelism, numerical stability, and good modulation properties.
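To make the matrix formulation concrete, here is a small sketch in NumPy with an assumed 2x2 state matrix A standing in for a second-order filter (the particular numbers are hypothetical). It checks that two serial steps of s[n+1] = A @ s[n] + B * x[n] equal one fused step built from the precomputed products A @ A and A @ B, which is the property that exposes parallelism:

```python
import numpy as np

A = np.array([[1.8, -0.81],
              [1.0,  0.0 ]])   # assumed state matrix (double pole at 0.9)
B = np.array([1.0, 0.0])

def step(s, x):
    """One serial state update: s[n+1] = A @ s[n] + B * x[n]."""
    return A @ s + B * x

s0 = np.zeros(2)
x0, x1 = 1.0, 0.25              # two hypothetical input samples

# Two serial steps:
s2_serial = step(step(s0, x0), x1)

# The same two steps fused into one, using precomputed matrix products;
# this is what lets SIMD lanes or parallel cores each advance the state
# by a block of samples at a time:
s2_fused = (A @ A) @ s0 + (A @ B) * x0 + B * x1

assert np.allclose(s2_serial, s2_fused)
```

The output would then be read off as y[n] = C @ s[n] + D * x[n] for some row vector C and scalar D, but the parallelism lives entirely in the state update.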

When I implemented the filter in music-synthesizer-for-android, I was looking for techniques to speed up the code using SIMD. I came across some papers, notably Implementation of recursive digital filters into vector SIMD DSP architectures, that gave working recipes for more general filtering, optimized for SIMD. These techniques work, and I wrote about them a bit in a notebook entitled Second order sections in matrix form. In that notebook, I also argue for the advantages for numerical stability and modulation, and I won't go into that more here.

Reviewing the literature again, a much more detailed reference is Compiling High Performance Recursive Filters, which emphasizes code generation techniques but does cover the underlying math, including good references. Likely the earliest reference showing that IIR filters can be evaluated in parallel is Sung and Mitra's 1986 paper, "Efficient multi-processor implementation of recursive digital filters" (no direct link available, but of course sci-hub works). Indeed, Sung and Mitra state quite clearly that time-varying filters work, as long as they're linear.

Monoid homomorphism time

One of my favorite mathematical frameworks, monoid homomorphism, is powerful enough to accommodate parallel evaluation of IIR filters as well. The basic insight is that the target monoid is a function from input state to output state, and represents any integral number of samples. The monoid binary operator is function composition, which is by nature associative and has an identity (the identity function).

This is the same fundamental trick as lifting a regular expression (or, equivalently, finite state machine) into a monoid. In general, the amount of state required to represent such a function, as opposed to a single state value, is intractable, but in these two cases it works. In the case of a regular expression, it works because the domain of the regular expression is finite. In the case of an IIR, it works because the function is linear.

Let's dig into more detail. The target monoid is a function of this form, an affine map from the previous filter state to the next:

y ↦ a * y + b

This can obviously be represented as two floats; we can write the representation as simply (a, b). The homomorphism then binds a single sample of input into a function. Given the filter above, an input of x maps to (1 - c, x * c).

Similarly, we can write out the effect of function composition in the two-floats representation space. Given (a1, b1) and (a2, b2), their composition is (a1 * a2, a2 * b1 + b2). Not especially complicated or difficult to compute.

If we just evaluate this homomorphism, then we get the filter state at the end, which is nice but doesn't really count as evaluating the filter. What we want is the filter state at the end of each prefix of the input. Fortunately, that's possible too. It's most generally called "scan," and is often thought of as a generalization of prefix sum. There is a great literature on parallel evaluation of this primitive - one of the more sophisticated approaches has a depth of 2 * log2(n) and a work factor of 2, meaning twice the number of primitive evaluations as the serial approach. A number of other intermediates are possible, especially including the SIMD-friendly variants we saw earlier. A good read on scans is the paper that (I believe) introduced them, Scans as Primitive Parallel Operations.
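Here's a minimal sketch of the scan formulation in plain Python (hypothetical input values). Each sample of the one-pole filter y[i] = x[i]*c + y[i-1]*(1-c) is lifted to the pair (a, b) = (1 - c, x*c), representing the affine map y ↦ a*y + b; the pairs are combined with the associative composition operator, and a serial inclusive scan (itertools.accumulate) stands in for a parallel prefix-scan just to check the algebra:

```python
from itertools import accumulate

def compose(f, g):
    """Apply f first, then g: (a1, b1) . (a2, b2) = (a1*a2, a2*b1 + b2)."""
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a2 * b1 + b2)

def one_pole_scan(x, c, y0=0.0):
    # Lift each input sample into the monoid of affine maps.
    maps = [(1 - c, xi * c) for xi in x]
    # Inclusive prefix scan under composition (serial here; any
    # parallel scan algorithm would give the same prefixes).
    prefixes = accumulate(maps, compose)
    # Each prefix is a function of the initial state; apply it to y0.
    return [a * y0 + b for a, b in prefixes]

def one_pole_serial(x, c, y0=0.0):
    y, prev = [], y0
    for xi in x:
        prev = xi * c + prev * (1 - c)
        y.append(prev)
    return y

x = [1.0, 0.0, -0.5, 2.0]
assert all(abs(u - v) < 1e-12
           for u, v in zip(one_pole_scan(x, 0.3), one_pole_serial(x, 0.3)))
```

Because compose is associative, the accumulate call can be swapped for any work-efficient parallel scan without changing the results (up to floating-point roundoff).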

Time-varying parameters

Note that there's nothing about the homomorphism above that requires the filter to be time-invariant. We can take both the input signal and the filter parameters as input to the map, and the math works out the same. Note that this is in stark contrast to convolutional reverb techniques, which do require that the filter kernel be linear time invariant.
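The same lifting goes through with a per-sample coefficient, which is the time-varying case. A minimal sketch (plain Python, hypothetical values) comparing serial evaluation of a modulated one-pole filter with a left-to-right fold of the composed maps; since composition is associative, any grouping of that fold, and hence any parallel schedule, gives the same answer:

```python
def tv_one_pole_serial(x, c, y0=0.0):
    """Time-varying one-pole: y[i] = x[i]*c[i] + y[i-1]*(1 - c[i])."""
    y, prev = [], y0
    for xi, ci in zip(x, c):
        prev = xi * ci + prev * (1 - ci)
        y.append(prev)
    return y

def tv_one_pole_composed(x, c, y0=0.0):
    """Fold the per-sample affine maps (a, b) = (1 - c[i], x[i]*c[i])
    left to right, applying each running composition to y0."""
    y, (a_acc, b_acc) = [], (1.0, 0.0)   # identity map
    for xi, ci in zip(x, c):
        a, b = 1 - ci, xi * ci
        # Compose the accumulated map with the new one.
        a_acc, b_acc = a_acc * a, a * b_acc + b
        y.append(a_acc * y0 + b_acc)
    return y

x = [1.0, -0.5, 2.0, 0.0]
c = [0.3, 0.6, 0.1, 0.9]   # coefficient modulated every sample
assert all(abs(u - v) < 1e-12
           for u, v in zip(tv_one_pole_serial(x, c), tv_one_pole_composed(x, c)))
```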

Possible extension to nonlinearity

This technique is for linear filters, but nonlinear filters are also interesting in audio (and other) applications. For example, many subtractive synthesizers use a "virtual analog" technique, which is often based on a linear core but has nonlinearities, more faithfully capturing the actual performance of electronic components used in active filters. (For a comprehensive treatment of this topic, see The art of VA filter design (PDF).)

I think it's an interesting question whether the parallelization techniques described here can be extended to nonlinear filters as well. My guess is, probably not realistically for faithful simulation of analog circuits, but it might be possible to design a nonlinear response that can be represented with a relatively small number of parameters, and also composes (and respects associativity, the main requirement of monoid homomorphisms). I haven't come up with one yet, but would be interested to explore the space more.

Other references

There's a great historical article comparing FIR and IIR that talks about how the relative advantages and disadvantages of each have evolved over time, and cites exploitation of parallelism as one of the reasons for FIR's success.

For an extremely in-depth presentation of digital filters, with solid mathematical foundations, see Julius Smith's online book Introduction to Digital Filters. In particular, it's a great exposition of the state space approach.




All Comments: [-] | anchor

saagarjha(10000) about 9 hours ago [-]

Sorry for being 'that' person, but this article uses terminology I'm so unfamiliar with that I don't know what it's about, or what it's trying to explain. Would someone mind enlightening me? The Wikipedia page on IIR (which I assume means infinite impulse response) isn't very helpful: https://en.wikipedia.org/wiki/Infinite_impulse_response

SonOfLilit(3354) about 4 hours ago [-]

One of my favorite technical books teaches Digital Signal Processing (and on the way, a lot of engineering pearls of wisdom), and it's available for free as a PDF from the author's website:

http://www.dspguide.com/pdfbook.htm

acjohnson55(779) about 7 hours ago [-]

Click through to the page on digital filters for a better entry point. It'll still be pretty tough to understand though unless you get hands on and start playing with inputs and outputs and filters in something like Matlab or Numpy.

the8472(3978) about 9 hours ago [-]

Since an infinite impulse response is a specific type of impulse response its article might be a better introduction to the topic.

https://en.wikipedia.org/wiki/Impulse_response

panic(119) about 9 hours ago [-]

If x is an array of input samples and y is an array of output samples, a simple FIR filter might look like:

  y[t] = a*x[t] + b*x[t-1]
Whereas a simple IIR filter might look like:

  y[t] = a*x[t] + b*y[t-1]
In the first equation, only two input samples (x[t] and x[t-1]) are used to compute each output sample. A sharp 'impulse' at x[0] will only affect the values of y[0] and y[1]. That's what makes it an FIR filter—the number of output samples affected by an impulse is finite.

On the other hand, the second equation has a y[t-1] term on the right hand side. Each output sample feeds back into the next one. To see what this looks like purely in terms of x samples, you can substitute using the equation for y[t], then multiply:

  y[t] = a*x[t] + b*y[t-1]
  y[t] = a*x[t] + b*(a*x[t-1] + b*y[t-2])
  y[t] = a*x[t] + b*a*x[t-1] + b^2*y[t-2]
  y[t] = a*x[t] + b*a*x[t-1] + b^2*(a*x[t-2] + b*y[t-3])
  y[t] = a*x[t] + b*a*x[t-1] + b^2*a*x[t-2] + b^3*y[t-3]
  ...
Eventually you'll get the sum (assuming the samples start at x[0]):

  y[t] = sum from n=0 to t of b^n*a*x[t-n]
The sum ranges over the entire array of samples, all the way back to the beginning. Each input sample from x influences every output sample afterward. The output for an impulse—a single positive x[0] sample followed by x[1] = x[2] = ... = 0—is

  y[t] = b^t*a*x[0]
Since these samples are non-zero for all t (though decaying exponentially according to the value of b), the second equation describes a filter with infinite impulse response.
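As a quick numeric check of the closed form above (with hypothetical values a = 0.5, b = 0.8), feeding an impulse into y[t] = a*x[t] + b*y[t-1] does produce y[t] = b**t * a:

```python
a, b = 0.5, 0.8                # assumed filter coefficients
x = [1.0] + [0.0] * 9          # impulse at t = 0

y, prev = [], 0.0
for xt in x:
    prev = a * xt + b * prev   # y[t] = a*x[t] + b*y[t-1]
    y.append(prev)

# Matches the closed-form impulse response b^t * a at every t.
assert all(abs(y[t] - b**t * a) < 1e-12 for t in range(len(x)))
```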

CapacitorSet(3183) about 9 hours ago [-]

Think of a filter as an electronic circuit that has a voltage input and a voltage output. In control theory a filter is described by how it reacts to an 'impulse', which is essentially a brief peak followed by zero voltage.

Finite Impulse Response means that the output of the filter will eventually go to zero - after a finite duration. Infinite Impulse Response means that the output of the filter may never go to zero.

It can be proved that FIR and IIR filters have different properties; the article mentions that IIR filters are necessarily stateful, for instance.

ttoinou(3946) about 7 hours ago [-]

  y[i] = x[i] * c + (x[i - 1] * c + y[i - 2] * (1 - c)) * (1 - c)
So you can compute y[i]=f(y[i-n],x) for any n, but that doesn't give you the intermediary results: all y[i-j] for j between n and 0...

I feel like computing (y[i-j] for j between n and 0) on the GPU (n+1 things to compute in parallel here) would not be efficient, because the more you increase n, the longer y[i] takes to compute (at least linear in n), and you'll need to wait for the calculation of y[i] to finish before having the result of y[i-n]. What am I not seeing here?

mntmoss(10000) about 3 hours ago [-]

You can treat it as a multiplexing operation. The first y[i] is computed normally. y[i+1], y[i+2] etc. are computed with a parallel form, up to as many cores as is optimal. Normally cores will wait for data, finish it very quickly, and then sit idle since they're waiting on memory, but this allows each processing core to return more results from data at a given time i without a serial readback (which introduces memory bandwidth pressure). The optimal throughput strategy is to push the parallelization upwards until the latency of doing this heavy computation outstrips the memory latency savings.

Stenzel(10000) about 10 hours ago [-]

The transposed direct form II allows a biquad to be calculated with two scalar by vector multiplications and some shuffling, which should be faster than the proposed matrix solution I believe.

raphlinus(3363) about 5 hours ago [-]

I'd be happy to see benchmarks of that. The problem is that the 'shuffling' creates serial data dependencies, while the matrix form doesn't. Sure, the number of multiplications is smaller for direct forms, but that's not what has the most effect on performance.





Historical Discussions: Show HN: Pino – Open source web app for membership management (February 12, 2019: 54 points)

(54) Show HN: Pino – Open source web app for membership management

54 points 4 days ago by Risse in 4017th position

pinomembers.com | Estimated reading time – 1 minutes | comments | anchor

Pino is an open source web app built on Drupal 8. Pino can be easily extended with Drupal modules & PHP libraries.

Pino is built by Vaiste Productions and Kristian Polso. The project came to life when we realized that managing our associations' members with a spreadsheet program just wasn't suitable and we needed something better and easier.

Order now Try the demo

Ordering includes a 30-day trial, no payment information is required.

We also offer a free and self-hosted option of our web app. With this option you can install the Pino software on your web hosting platform of choice and enjoy the full features of Pino. Please read more how to install and use Pino in the Documentation section.

Due to being based on Drupal, Pino can be extended easily with a plethora of Drupal modules and PHP libraries. The full source code of Pino can be found at our Gitlab page.

We are committed to providing this open source project for everyone, with focus on accessibility and ease of use! Contributions are also more than welcome on our Gitlab page or Drupal.org page.




All Comments: [-] | anchor

system2(10000) 3 days ago [-]

Wouldn't be more appropriate to describe this as a drupal plugin / extension?

Also, why would a extension / plugin cost monthly for CRUD type application? What kind of support do you provide?

Risse(4017) 3 days ago [-]

The term that Drupal uses for these 'full website' packages are 'distributions': https://www.drupal.org/docs/8/distributions

The monthly cost is mainly for support, hosting and email delivery. The support includes security updates for server and Drupal libraries, customer support via email and helping with import / export of member data (of course, due to GDPR and sensitive contact information, a processing agreement has to be signed and agreed upon)

joekrill(10000) 3 days ago [-]

Not to be confused with the relatively well-known JavaScript logging library called Pino.

jilles(10000) 3 days ago [-]

It always baffles me when people don't do their research before releasing a project / product. A few months ago there was a guy releasing his own language called Flux...

james_s_tayler(10000) 3 days ago [-]

Or the Japanese chocolate.

kowdermeister(2387) 3 days ago [-]

User / pass doesn't work.

Risse(4017) 3 days ago [-]

Heh, someone decided to be funny and changed it. It should be fixed, please try again.

masha_sb(10000) 3 days ago [-]

1. anything similar in python?

2. what is so special, about yet another membership management solution?

Risse(4017) 3 days ago [-]

1. There is at least Tendenci: https://github.com/tendenci/tendenci

2. It's open source and Drupal 8-based, meaning it's easy to extend with current selection of Drupal modules and PHP libraries

csixty4(3728) 3 days ago [-]

Is the domain name a play on GoMembers?

Risse(4017) 3 days ago [-]

Hah, actually not, never heard of GoMembers before. I of course wanted as short a URL as possible, but 'pino' with all common TLDs was taken. So putting 'members' at the end seemed to make sense to me.

funkaster(3507) 3 days ago [-]

> managing our associations' members with a spreadsheet program just wasn't suitable and we needed something better and easier

ok... what about not reinventing the wheel and using something like LDAP[0] and one of its many, many UIs? How is this different from all the other solutions?

[0]: https://en.wikipedia.org/wiki/Lightweight_Directory_Access_P...

h1d(10000) 3 days ago [-]

I have been using OpenLDAP to manage users for like 10 years, but what GUI/web-based tool do you suggest for managing its data?

The only non-complicated solution looks to be PHPLdapAdmin, which I use, but it is pretty much abandoned and a fork is keeping it alive.

kplex(10000) 3 days ago [-]

Does the demo reset? Clicking around trying stuff resulted in the landing page showing 'not found'.

Worth disabling the https://demo.pinomembers.com/admin/structure/member/settings page for the demo perhaps?

Risse(4017) 3 days ago [-]

Yes, the site should reset every 15 minutes, please try again soon.

sigfubar(10000) 3 days ago [-]

> Pino is an open source web app built on Drupal

closes tab

I do love me a hint of scandal, but the whole 'thou shalt not BDSM in your spare time' thing is a major turnoff.

sucrose(10000) 3 days ago [-]

Can you provide more context about whatever you're talking about?

noir_lord(3939) 3 days ago [-]

Having used drupal back in the day they definitely got the SM part right.

Never again.

headcanon(3967) 3 days ago [-]

Is there a repository (like an awesome-* list) of open-source self-hosted appliance apps like this? I'm sure there's an awesome-drupal list, but I'm thinking more like a list that might also include this and Discord.

Edit: I meant Discourse. and I also answered my own question: https://github.com/unicodeveloper/awesome-opensource-apps

eeZah7Ux(3031) 3 days ago [-]

+1, a repository with proper indexing would be nice, but also a place where people can request such applications or find contributors.

the_common_man(3943) 3 days ago [-]

Search for awesome-selfhosted, one of the most comprehensive lists.





Historical Discussions: Show HN: Purview – A server-side component framework (February 15, 2019: 52 points)

(52) Show HN: Purview – A server-side component framework

52 points 1 day ago by karthikksv in 3819th position

github.com | Estimated reading time – 14 minutes | comments | anchor

Purview

What if your React components ran on the server-side? The server renders components to HTML and sends it to the client. The client renders HTML and notifies the server of DOM events.

With this architecture, your components can directly make database queries, contact external services, etc., as they're running exclusively on the server. There's no more REST or GraphQL; the client-server interface is abstracted away, and all you deal with are standard components, event handlers, and lifecycle events.

Below is a snippet of an example; see full example code here.

import Purview from 'purview'
import * as Sequelize from 'sequelize'
const db = new Sequelize('sqlite:purview.db')
class Counter extends Purview.Component<{}, { count: number }> {
  async getInitialState(): Promise<{ count: number }> {
    // Query the current count from the database.
    const [rows] = await db.query('SELECT count FROM counters LIMIT 1')
    return { count: rows[0].count }
  }
  increment = async () => {
    await db.query('UPDATE counters SET count = count + 1')
    this.setState(await this.getInitialState())
  }
  render(): JSX.Element {
    return (
      <div>
        <p>The count is {this.state.count}</p>
        <button onClick={this.increment}>Click to increment</button>
      </div>
    )
  }
}

Benefits

  • Make database queries, contact external services, etc. directly within your components, with no need for REST or GraphQL.
  • Extensive type-checking: Comprehensive JSX typings ensure that your HTML tags/attributes, event handlers, component props, etc. are all statically type-checked, courtesy of TypeScript.
  • Server-side rendering is the default, so you get fast time to first meaningful paint.
  • Persistent two-way WebSocket connections allow the server to trigger updates at any time. You can push realtime changes from your database or external services directly to the client with a simple call to this.setState().
  • Client-side virtual DOM diffing for efficient updates.
  • Your front-end and back-end are both encapsulated into reusable components. It's easy to see and modify the functionality of any part of your page.

Caveats

  • Every event and re-render incurs a network round-trip cost. Applications that require minimal latency (e.g. animations, games) are not well suited for Purview. That being said, many applications are primarily CRUD-based, and hence work well under Purview's architecture.
  • Not React compatible due to the differences listed below, so you can't use existing React components/libraries with Purview.
  • You can't directly access the DOM within your components. For example, if you need to attach listeners to window, that's currently unsupported.

Installation

  1. Install with npm: npm install purview

  2. Set your JSX transform to be Purview.createElem. For TypeScript, in your tsconfig.json, you can do this like so:

    {
      'compilerOptions': {
        'jsx': 'react',
        'jsxFactory': 'Purview.createElem'
      }
    }

    For other compilers/transpilers, you can use the JSX comment pragma: /* @jsx Purview.createElem */.

    You can also reference our full tsconfig.json, which enables various strict TypeScript features that we'd recommend.

Usage

  1. Write components by extending Purview.Component.
  2. Send down (a) the server-rendered HTML of your component and (b) a script tag pointing to Purview's client-side JS file.
    • For (a), call Purview.render(<Component />, req), where Component is your root component, and req is the standard request object, of type http.IncomingMessage, from express or http.createServer. This returns a promise with HTML.
    • For (b), either serve the JavaScript in Purview.scriptPath directly (see example below) or, in an existing client-side codebase, import 'purview/dist/browser'.
  3. Handle WebSocket connections by calling Purview.handleWebSocket(server, options), where server is an http.Server object. If you're using Express, call http.createServer(app) to create a server from your app object. Then call server.listen() instead of app.listen() to bind your server to a port.
    • options should be an object with one key: origin, whose value is a string.
    • origin should be the protocol and hostname (along with the port if it's non-standard) of the server (e.g. https://example.com). This is used to perform WebSocket origin validation, ensuring requests originate from your server. You can set origin to null to skip origin validation, but this is not recommended.
    • Note that, if you incorrectly specify origin, the page will keep refreshing in an attempt to re-connect the WebSocket.

Below is a full working example:

import Purview from 'purview'
import * as Sequelize from 'sequelize'
import * as http from 'http'
import * as express from 'express'
const db = new Sequelize('sqlite:purview.db')
// (1) Write components by extending Purview.Component. The two type parameters
// are the types of the props and state, respectively.
class Counter extends Purview.Component<{}, { count: number }> {
  async getInitialState(): Promise<{ count: number }> {
    // Query the current count from the database.
    const [rows] = await db.query('SELECT count FROM counters LIMIT 1')
    return { count: rows[0].count }
  }
  increment = async () => {
    await db.query('UPDATE counters SET count = count + 1')
    this.setState(await this.getInitialState())
  }
  render(): JSX.Element {
    return (
      <div>
        <p>The count is {this.state.count}</p>
        <button onClick={this.increment}>Click to increment</button>
      </div>
    )
  }
}
async function startServer(): Promise<void> {
  // (2) Send down server-rendered HTML and a script tag with Purview's
  // client-side JavaScript.
  const app = express()
  app.get('/', async (req, res) => {
    res.send(`
      <body>
        ${await Purview.render(<Counter />, req)}
        <script src='/script.js'></script>
      </body>
    `)
  })
  app.get('/script.js', (_, res) => res.sendFile(Purview.scriptPath))
  // (3) Handle WebSocket connections.
  const server = http.createServer(app)
  const port = 8000
  Purview.handleWebSocket(server, {
    origin: `http://localhost:${port}`,
  })
  // Reset database and insert our initial counter.
  db.define('counter', { count: Sequelize.INTEGER }, { timestamps: false })
  await db.sync({ force: true })
  await db.query('INSERT INTO counters (count) VALUES (0)')
  server.listen(port, () => console.log(`Listening on localhost:${port}`))
}
startServer()

Differences from React

Purview mimics React in many ways, but differs significantly when it comes to event handlers, controlled form inputs, and getInitialState().

Event handlers

Because your components run on the server-side, your event handlers are not passed standard DOM event objects. Instead, Purview determines relevant information associated with certain events and creates its own event objects. Here's a description of the event object that Purview passes to your handler for various event types:

  • onInput: The event object is of type InputEvent<T> = { value: T }. T is boolean for checkboxes, number for <input type='number'>, and string for all other inputs.

  • onChange: The event object is of type ChangeEvent<T> = { value: T }. T is boolean for checkboxes, number for <input type='number'>, string[] for <select multiple> and string for all other inputs.

  • onKeyDown, onKeyPress, and onKeyUp: The event object is of type KeyEvent = { key: string }, where key is the key that was pressed.

  • onSubmit: The event object is of type SubmitEvent = { fields: { [key: string]: any } }. fields is a mapping of form field names to values. It is your responsibility to perform validation on fields for both the types and values, just as you would do if you were writing a server-side route handler. class-validator is a helpful library here.

    When you add an onSubmit handler, the default action of the submit event is automatically prevented (i.e. via event.preventDefault()). This stops the browser from navigating to a different page.

All other event handlers are passed no arguments.
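Since the fields in a submit event are untyped, validation is up to you, as noted above. A small sketch of what such a handler might look like (the type is declared locally to mirror the SubmitEvent shape described above; the handler itself is hypothetical):

```typescript
// Local stand-in for Purview's SubmitEvent shape described above.
type SubmitPayload = { fields: { [key: string]: any } }

// Hypothetical onSubmit handler: validate the untyped fields before use,
// just as you would in a server-side route handler.
function handleSubmit(event: SubmitPayload): string {
  const name = event.fields["name"]
  if (typeof name !== "string" || name.trim() === "") {
    throw new Error("name is required")
  }
  return name.trim()
}

console.log(handleSubmit({ fields: { name: "  Ada  " } })) // "Ada"
```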

Controlled Form Inputs

If you specify a value attribute on a text input or textarea, a checked attribute on a radio/checkbox input, or a selected attribute on an option, the form input will be controlled. Upon each re-render, the value will be forcibly set to the value you specify.

Unlike React, a controlled form input's value can be modified, but it'll be reset to the specified value when re-rendered. To prevent modification, use the standard readonly or disabled HTML attributes.

Purview does not let you specify a value attribute for select tags like React does. Instead, you must use the selected attribute on option tags, just like you would in regular HTML. Purview controls the select if at least one option has a selected attribute.

If you want to set an initial, uncontrolled value, use the attribute defaultValue for text inputs and textareas, defaultChecked for radio/checkbox inputs, and defaultSelected for options.

Do note that events require a round-trip to the server, so controlling form inputs is more expensive than in React. That being said, it's quite fast given a reasonable Internet connection, and this expense can often be ignored.

getInitialState()

Components can define a getInitialState() function that returns a promise with the initial state of the component. This can be used to e.g. fetch information from a database or service prior to the component rendering.

The call to Purview.render() returns a promise that resolves once all initial state has been fetched and components have been rendered. This prevents the user from seeing a flash of empty content before your components load their state.

Other differences

In addition to the above, Purview also differs from React in the following ways:

  • The only supported lifecycle methods are componentDidMount(), componentWillReceiveProps(), and componentWillUnmount().
  • Context, refs, fragments, error boundaries, portals, and hooks are unsupported.

Inspiration

Phoenix Live View -- https://www.youtube.com/watch?v=Z2DU0qLfPIY

Contributors

Karthik Viswanathan -- [email protected]

If you're interested in contributing to Purview, get in touch with me via the email above. I'd love to have you help out and potentially join the core development team.

License

Purview is MIT licensed.




All Comments: [-] | anchor

kgwxd(3425) 1 day ago [-]

I'm sure there's a use case for this but, for me, it's pretty rare to need to pull new data or persist anything after most events, let alone every event. A single fetch and single save is usually all that's needed.

karthikksv(3819) 1 day ago [-]

Whenever any information needs to be saved or processed by the server, you can perform your server-side logic inline in the event handler (e.g. update the database, contact external services, etc.). There's no need to make an AJAX request and have a corresponding API route. This abstracts away the client-server interface, and you get safety guarantees with type-checking.

janpot(10000) 1 day ago [-]

looks like it's the react version of https://dockyard.com/blog/2018/12/12/phoenix-liveview-intera... ?

edit: nvm, it's listed in the readme as inspiration

mcintyre1994(4020) 1 day ago [-]

This looks awesome, thanks for sharing! :)

elmo2you(10000) 1 day ago [-]

I was thinking the same. I expect Elixir/Phoenix to easily outperform JS/Node. I believe with Phoenix it actually makes sense what they are doing, which I can't really say for this 'thing'.

karthikksv(3819) 1 day ago [-]

Yes, that was my inspiration! Chris gave a great talk about it at ElixirConf if anyone is interested in watching: https://www.youtube.com/watch?v=Z2DU0qLfPIY

Some differences compared to LiveView:

- Type-checking: there are extensive JSX typings (https://github.com/karthikv/purview/blob/master/src/types/js...) that ensure you're attaching event handlers with the correct signatures, specifying supported props, using valid HTML tags and attributes, etc. Static-typing guarantees are one of my big priorities.

- I'm not sure if LiveView intends to support nested components like React does. Having the ability to split up complex pages into components that you can nest and reuse (with mostly one-way data flow) is a key part of maintainability. I wanted to maintain a very familiar React interface, so you can pick up Purview quickly if you're comfortable with React.

root_axis(10000) about 22 hours ago [-]

> What if your React components ran on the server-side?

Huh? React components can run on the server side, that is one of the primary benefits of React compared to its predecessors.

karthikksv(3819) about 21 hours ago [-]

Perhaps I could've explained myself better. I meant that all the business logic of the components (i.e. event handlers, lifecycle hooks, setState() calls, etc.) run on the server, unlike just the initial server-side rendering that React provides. The server maintains the state of all components, and when an event occurs, the client notifies the server to run the appropriate event handler.

vlucas(2776) 1 day ago [-]

It's a JS framework that targets server-side (node.js), and is written in a language that is not natively supported - TypeScript, thus requiring transpilation right out of the box to use. Does that seem odd to anyone else?

On the update: This does seem kinda neat, and certainly useful in some situations.

janpot(10000) 1 day ago [-]

Not to me, if it's distributed on npm in its transpiled form it doesn't matter.

karthikksv(3819) 1 day ago [-]

You're not required to use TypeScript. As janpot mentioned, the library is transpiled prior to being published on npm, so you can use it with regular JavaScript. This is true of most TypeScript libraries.

Scooty(10000) 1 day ago [-]

I've been playing around with a similar idea using nextjs. My idea was to create some kind of RPC interface with typescript that works seamlessly between server/client rendering, so during server rendering, RPC calls would just be regular function calls, and during client rendering/event handling, the calls would be made through sockets or HTTP requests.

There's a lot of criticism in this thread. I'm curious if you're using this in production. I love projects like this that really push the bounds of how these problems are typically solved.
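The isomorphic RPC idea in the comment above could be sketched roughly as follows (all names, the `isServer` switch, and the transports are illustrative stand-ins, not a real library API):

```typescript
// Rough sketch: the same typed signature is a direct function call during
// server rendering and a network call on the client.
type Rpc<A, R> = (arg: A) => Promise<R>

function makeRpc<A, R>(
  isServer: boolean,
  serverImpl: Rpc<A, R>,      // runs directly on the server
  clientTransport: Rpc<A, R>  // would wrap fetch() or a socket on the client
): Rpc<A, R> {
  return isServer ? serverImpl : clientTransport
}

// Usage: both sides share one type-checked signature.
const getUser = makeRpc<number, string>(
  true, // pretend we're rendering on the server
  async (id) => `user-${id}`, // direct DB/service call
  async () => { throw new Error("network transport not wired up") }
)

getUser(42).then((name) => console.log(name)) // "user-42"
```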

karthikksv(3819) 1 day ago [-]

We've been using this in production at the company I work at for the past ~1.5 months. We've gone through a fair number of fixes and improvements throughout that time (bugs, race conditions, performance, etc.). It's definitely still a nascent project, but we're excited to continue using Purview and developing it. Would love to get your thoughts if you try it out sometime.

stevebmark(4019) 1 day ago [-]

God, GOD WHY. Whhhyyyyy. Build APIs, not database calls from templates like Rails or PHP garbage. We know how to do this nowwwwwwww don't make this Rails User.friends.comments garbage DB calls from templates

benbristow(3311) 1 day ago [-]

Why's that garbage? Sometimes a full SPA is not needed.

pennaMan(3946) 1 day ago [-]

so it's a next.js clone?

karthikksv(3819) about 21 hours ago [-]

All the business logic of the components (i.e. event handlers, lifecycle hooks, setState() calls, etc.) run on the server, unlike just the initial server-side rendering that Next.js provides. The server maintains the state of all components, and when an event occurs, the client notifies the server to run the appropriate event handler.

solarkraft(3857) 1 day ago [-]

> Applications that require minimal latency (e.g. animations, games) are not well suited for Purview

It's not actually React compatible.

Just why?

janpot(10000) 1 day ago [-]

Because the DOM is not synchronously accessible, I guess.

weego(10000) 1 day ago [-]

So now we're stuffing SQL in our React as well as HTML and CSS? React is on course to reinvent the PHP that everyone spent so long laughing about.

throwawayy1001(10000) 1 day ago [-]

Personally I took a break from frontend work, I'll circle back in a few years once web assembly is ready.

The JS ecosystem is insane once you take a step back and look at it.

karthikksv(3819) 1 day ago [-]

A common criticism I heard about React a few years back was combining JS and HTML within the same file, as opposed to having them separate. As with most decisions, I think separation is an important trade-off you need to consider: it's easier to reason about the individual, separated parts that are well-contained, but it becomes harder to reason about the system as a whole.

If you have a complex front-end and back-end with a REST API, each time you modify a server route, you need to find all instances in your client where you make an AJAX request to that route and change them appropriately. The potential for inconsistency can cause numerous bugs.

GraphQL solves this by having a very flexible and standardized server (albeit complex), giving your client access to arbitrary structured information using a query language.

Purview solves this by moving your logic to the server-side, where you can access your database directly. Because everything runs on the server, the client-server interface is abstracted away, and you don't need to worry about using GraphQL or REST. You can make database queries, contact external services, etc. directly within your components.

To keep this maintainable, you split up your page into logical, reusable components, just like you would with React, and each component now not only contains your view logic, but also the server-side logic. Given that you've decomposed your components well, this makes it easy to reason about the whole system when it comes to any part of your page.

thecatspaw(4028) 1 day ago [-]

True, but you could move it into a model class, maybe use mobx or another state library

robinduckett(3607) 1 day ago [-]

<?php print('this is far easier lol'); ?>

forgotmyhnacc(10000) 1 day ago [-]

I know this comment is a joke, but this is actually how react was created. Facebook made a templating language in php called xhp (https://docs.hhvm.com/hack/XHP/introduction) and some people at fb developed react as a natural extension to xhp but in JavaScript.

rienbdj(4030) 1 day ago [-]

Normally security would be implemented at the API level. You would have to be careful to implement that in the renderer here.

karthikksv(3819) 1 day ago [-]

You're right, there's validation for the WebSocket messages sent to the server. For certain events (e.g. submit), the validation of user form data is left to you, just as you would normally be responsible for.

mplewis(3812) 1 day ago [-]

In React, forms are often built with this pattern:

* Type/change events fire on the field (e.g. username)

* Username updates the internal model (this.username = 'karthikksv')

* The submit button fires an event so the parent page can act on the entire data model ({ username: 'karthikksv', password: 'fluffykittens' })

I'm concerned that this model won't work well using server-side React components on a network with high latency – each type event would have to round-trip to the server to update in the DOM.

karthikksv(3819) 1 day ago [-]

Higher latency connections will certainly have longer delays, but it's quite fast on a reasonable connection thanks to the persistent WebSocket, even if you're sending each letter that's being typed. We're using Purview in production at the company I work at, and we haven't had issues with input-related delays.

If you don't need to send each letter, it's recommended to not do so; the submit event object includes all form data, so you can do one final validation at the end, similar to what you'd do with a normal web server.

choeger(10000) 1 day ago [-]

Wait, why couldn't we render the HTML on the server and only let the browser do the compositing? If only there were a protocol that would allow us to exchange input events and some drawing commands over some arbitrary network... let's fantasize for a moment and call that protocol X. I would use it when it's mature, so maybe from X11 onwards.

Naturally, sending draw commands has a big downside in efficiency. Maybe we could later exchange X with something that only sends bitmaps?

martell(3976) 1 day ago [-]

> Maybe we could later exchange X with something that only sends bitmaps?

Can we call the followup Wayland because that is the only way to go :)





Historical Discussions: The British-Irish Dialect Quiz (February 15, 2019: 2 points)

(50) The British-Irish Dialect Quiz

50 points about 5 hours ago by open-source-ux in 589th position

www.nytimes.com | Estimated reading time – 5 minutes | comments | anchor

For each question, choose whichever answer comes closest to how you talk casually with friends.


The map shows places where answers most closely match your own, based on more than 73,000 respondents who said they were from Ireland or Britain.


Do you call the common playground game tag, tig or it? Is that bit of bread on the table a roll, a bap or a bun? And do the words but and put rhyme when you say them out loud?

The answers to these questions and others like them divide the various regions of Ireland and Britain just as much as, say, the question of soda, pop or coke splits the United States.

The way that people speak — the particular words they use and how they sound — is deeply tied to their sense of identity. And it's not just about geography. Education, gender, age, ethnicity and other social variables influence speech patterns, too.

These dialect markers are so ingrained into people's sense of self that they tend to persist well after they move away from home. "Identity is what underlies most people's retention of at least some of their local features," said Clive Upton, professor emeritus of English language at the University of Leeds, "because ultimately what we say is who we are."

In Ireland and Britain, the local dialect can change wildly just 10 or 20 miles down the road. There's a vast amount of variation over a small area, especially when compared with a place like the United States.

Language differentiation takes time, so the longer a language has to simmer in one location, the more diverse it becomes, said Raymond Hickey, a professor of linguistics at the University of Duisburg-Essen. English speakers first settled in Ireland in the late 12th century, and Old English has its beginnings in, no surprise, England, almost 1,600 years ago. So it has had plenty of time to diversify.

Do you have suggestions or English language regionalisms that you're curious about? Please let us know.

For dialectologists, the patterns of people's speech reveal a great deal about the historical development of the English language. "Regional dialect variation allows you to hear echoes of earlier forms of the language — it isn't just about chronicling, 'Oh, that's a funny noise' or 'Oh, that's a strange word,'" Mr. Upton said. "Underneath all that it's very seriously trying to get to grips with the question of how language changes."

And the English language is always changing. While some of the finer village-by-village accent distinctions in Ireland and Britain are eroding, there is no evidence that regional speech differences are about to disappear, regardless of technological changes. We are not about to start all speaking the same way anytime soon — that's not how language works. People always form linguistic communities, each with its own speech patterns. Additionally, a generational component built into language development ensures that English will continue to evolve.

"Each generation wants to separate themselves from the previous one," Mr. Hickey said. "That change is not going to stop. You're not going to get teenagers who want to dress like their parents and listen to the same music as their parents, and there's a linguistic aspect to that as well."

As English continues to spread around the world, it will lead to an increasing number of new, emergent varieties of world Englishes. "It doesn't belong to England anymore," Mr. Upton said. "Just as you in North America have done things with it and own it yourselves, so people in Africa and the Far East and so on are doing the same. It's all very exciting."

Methodology

Once a good base of questions had been compiled, a pilot version of the quiz posted to Reddit garnered a few thousand responses and generated useful feedback and ideas for more questions. Further questions came from readers or from additional research and reporting.




All Comments: [-] | anchor

thechao(10000) about 2 hours ago [-]

This quiz located my Scots coworker to within a few miles of his home.

Marazan(10000) about 2 hours ago [-]

Located Scots me South of Oxford. So not perfect.

noir_lord(3939) about 1 hour ago [-]

They got it bang on for me.

I'm actually quite impressed because I don't have a particularly strong regional accent, people find it hard to place me other than 'Northern'.

mcjiggerlog(3752) 14 minutes ago [-]

Same for me - people are surprised I'm from Liverpool as I don't have a strong accent but this got it absolutely bang on.

Wildgoose(10000) about 2 hours ago [-]

Narrowed me down to Yorkshire very accurately.

jashmatthews(4023) about 2 hours ago [-]

Yeah, it got me too, even though I moved from Yorkshire to New Zealand when I was 12.

cannam(3813) about 1 hour ago [-]

Narrowed me down to Yorkshire as well - wrongly, because I come from the north-east. My mother grew up in Derbyshire, so perhaps it split the difference.

noir_lord(3939) about 1 hour ago [-]

Aye, me too, and our accent isn't noticeably Yorkshire. I suspect 'bread cake' would be enough on its own though.

forinti(10000) 26 minutes ago [-]

I lived in Cambridge for 5 years in the 1980s. My English must be skewed to the American variety by now, but the result was quite accurate.

forinti(10000) 12 minutes ago [-]

Or should I say spot-on?

scj(10000) about 2 hours ago [-]

I'm in North America, and one of the questions that struck me was naming a running body of water smaller than a river.

My answer was based on the naming of bodies of water near where I grew up, which originated from English settlers...

I'd be interested in seeing something similar that captures all areas where English is spoken as a first language.

cannam(3813) about 1 hour ago [-]

This older quiz from the same source, covering the US, came up in a comment here the other day: https://www.nytimes.com/interactive/2014/upshot/dialect-quiz...

The questions are quite different (presumably chosen based on what will segment readers suitably) and I don't recall anything about running bodies of water. So we don't get to compare, and it isn't sufficient to answer what you want - but it's interesting if you haven't seen it already.





Historical Discussions: Show HN: HTML5 MMORPG – almost 7 years in the making (February 15, 2019: 40 points)

(40) Show HN: HTML5 MMORPG – almost 7 years in the making

40 points 1 day ago by marxdeveloper in 3954th position

data.mo.ee | comments | anchor

A simple yet addictive multiplayer game where you can fight monsters and increase levels in 17 different skills. Come and invite your friends too, it is fun! Free to play!





All Comments: [-] | anchor

mosselman(3735) 1 day ago [-]

Looks like that was a lot of work. Will try more later.

marxdeveloper(3954) 1 day ago [-]

It has been, we are still very actively doing updates. Small team of 2 people.

Konnstann(10000) about 23 hours ago [-]

I like the game and it is very impressive from a development perspective, but the amount of chat moderation combined with the slow (by design, probably) pace of the game made me stop playing. Might pick it up again, however.

marxdeveloper(3954) about 22 hours ago [-]

Thank you for trying! We do offer different public chat channels that have different rules; channel 18 has quite relaxed rules compared to other channels. If you are willing to spend $5 once, you can get your own private channel where you can decide the rules, invite who you like, make them moderators, etc.





Historical Discussions: Show HN: Building a zero-latency WordPress front-end (February 12, 2019: 39 points)

(39) Show HN: Building a zero-latency WordPress front-end

39 points 4 days ago by chungleong in 4024th position

github.com | Estimated reading time – 37 minutes | comments | anchor

Zero-latency WordPress Front-end

In this example, we're going to build a zero-latency front-end for WordPress. When a visitor clicks on a link, a story will instantly appear. No hourglass. No spinner. No blank page. We'll accomplish this by aggressively prefetching data in our client-side code. At the same time, we're going to employ server-side rendering (SSR) to minimize time to first impression. The page should appear within a fraction of a second after the visitor enters the URL.

Combined with aggressive back-end caching, we'll end up with a web site that feels very fast and is cheap to host.

This is a complex example with many moving parts. It's definitely not for beginners. You should already be familiar with technologies involved: React, Nginx caching, and of course WordPress itself.

Live demo

For the purpose of demonstrating what the example code can do, I've prepared three web sites:

All three are hosted on the same AWS A1 medium instance. It's powered by a single core of a Graviton CPU and backed by 2 GB of RAM. In terms of computational resources, we have roughly one fourth that of a phone. Not much. For our system though, it's more than enough. Most requests will result in cache hits. Nginx will spend most of its time sending data already in memory. We'll be IO-bound long before we're CPU-bound.

pfj.trambar.io obtains its data from a test WordPress instance running on the same server. It's populated with random lorem ipsum text. You can log into the WordPress admin page and post an article using the account bdickus (password: incontinentia). Publication of a new article will trigger a cache purge. The article should appear on the front page automatically after 30 seconds or so (no need to hit the refresh button).

You can see a list of what's in the Nginx cache here.

et.trambar.io and rwt.trambar.io obtain their data from ExtremeTech and Real World Tech respectively. They are meant to give you a better sense of how the example code fares with real-world content. Both sites have close to two decades' worth of articles. Our server does not receive cache purge commands from these WordPress instances, so the content could be out of date. Cache misses will also lead to slightly longer pauses.

Server-side rendering

Isomorphic React components are capable of rendering on a web server as well as in a web browser. One primary purpose of server-side rendering (SSR) is search engine optimization. Another is to mask JavaScript loading time. Rather than displaying a spinner or progress bar, we render the front-end on the server and send the HTML to the browser. Effectively, we're using the front-end's own appearance as its loading screen.

The following animation depicts how an SSR-augmented single-page website works. Click on it if you wish to view it as separate images.

While the SSR HTML is not backed by JavaScript, it does have functional hyperlinks. If the visitor clicks on a link before the JavaScript bundle is done loading, he'll end up at another SSR page. As the server has immediate access to both code and data, it can generate this page very quickly. It's also possible that the page exists already in the server-side cache, in which case it'll be sent even sooner.

Back-end services

Our back-end consists of three services: WordPress itself, Nginx, and Node.js. The following diagram shows how contents of various types move between them:

Note how Nginx does not fetch JSON data directly from WordPress. Instead, data goes through Node first. This detour is due mainly to WordPress not attaching e-tags to JSON responses. Without e-tags the browser cannot perform cache validation (i.e. conditional request → 304 not modified). Passing the data through Node also gives us a chance to strip out unnecessary fields. Finally, it lets us compress the data prior to sending it to Nginx. Size reduction means more contents will fit in the cache. It also saves Nginx from having to gzip the same data over and over again.

Node will request JSON data from Nginx when it runs the front-end code. If the data isn't found in the cache, Node will end up serving its own request. This round-trip will result in Nginx caching the JSON data. We want that to happen since the browser will soon be requesting the same data (since it'll be running the same front-end code).

Uncached page access

The following animation shows what happens when the browser requests a page and Nginx's cache is empty. Click on it to view it as separate images.

Cached page access

The following animation shows how page requests are handled once contents (both HTML and JSON) are cached. This is what happens most of the time.

Cache purging

The following animation depicts what happens when a new article is published on WordPress.

Getting started

This example is delivered as a Docker app. Please install Docker and Docker Compose if they aren't already installed on your computer. On Windows and OSX, you might need to enable port forwarding for port 8000.

In a command-line prompt, run npm install or npm ci. Once all libraries have been downloaded, run npm run start-server. Docker will proceed to download four official images from Docker Hub: WordPress, MariaDB, Nginx, and Node.js.

Once the services are up and running, go to http://localhost:8000/wp-admin/. You should be greeted by WordPress's installation page. Enter some information about your test site and create the admin account. Log in and go to Settings > Permalinks. Choose one of the URL schemas.

Next, go to Plugins > Add New. Search for Proxy Cache Purge. Install and activate the plugin. A new Proxy Cache item will appear in the side navigation bar. Click on it. At the bottom of the page, set the Custom IP to 172.129.0.3. This is the address of our Node.js service.

In a different browser tab, go to http://localhost:8000/. You should see the front page with just a sample post:

Now return to the WordPress admin page and publish another test post. After 30 seconds or so, the post should automatically appear on the front page:

To see the code running in debug mode, run npm run watch. The client-side code will be rebuilt whenever changes occur.

To populate your test site with dummy data, install the FakerPress plugin.

To shut down the test server, run npm run stop-server. To remove Docker volumes used by the example, run npm run remove-server.

If you have a production web site running WordPress, you can see how its contents look in the example front-end (provided that the REST interface is exposed and permalinks are enabled). Open docker-compose-remote.yml and change the environment variable WORDPRESS_HOST to the address of the site. Then run npm run start-server-remote.

Nginx configuration

Let us look at the Nginx configuration file. The first two lines tell Nginx where to place cached responses, how large the cache should be (1 GB), and for how long to keep inactive entries (7 days):

proxy_cache_path /var/cache/nginx/data keys_zone=data:10m max_size=1g inactive=7d;
proxy_temp_path /var/cache/nginx/tmp;

proxy_cache_path is specified without levels so that files are stored in a flat directory structure. This makes it easier to scan the cache. proxy_temp_path is set to a location on the same volume as the cache so Nginx can move files into it with a rename operation.

The following section configures reverse-proxying for the WordPress admin page:

location ~ ^/wp-* {
    proxy_pass http://wordpress;
    proxy_set_header Host $http_host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header X-Forwarded-Host $server_name;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_pass_header Set-Cookie;
    proxy_redirect off;
}

The following section controls Nginx's interaction with Node:

location / {
    proxy_pass http://node;
    proxy_set_header Host $http_host;
    proxy_cache data;
    proxy_cache_key $uri$is_args$args;
    proxy_cache_min_uses 1;
    proxy_cache_valid 400 404 1m;
    proxy_ignore_headers Vary;
    add_header Access-Control-Allow-Origin *;
    add_header Access-Control-Expose-Headers X-WP-Total;
    add_header X-Cache-Date $upstream_http_date;
    add_header X-Cache-Status $upstream_cache_status;
}

We select the cache zone we defined earlier with the proxy_cache directive. We set the cache key using proxy_cache_key. The MD5 hash of the path plus the query string will be the name used to save each cached server response. With the proxy_cache_min_uses directive we tell Nginx to start caching on the very first request. With the proxy_cache_valid directive we ask Nginx to cache error responses for one minute.

The proxy_ignore_headers directive is there to keep Nginx from creating separate cache entries when requests to the same URL have different Accept-Encoding headers (additional compression methods, for example).

The first two headers added using add_header are there to enable CORS. The last two X-Cache-* headers are for debugging purposes. They let us figure out whether a request has resulted in a cache hit when we examine it using the browser's development tools:

Back-end JavaScript

HTML page generation

The following Express handler (index.js) is invoked when Nginx asks for an HTML page. This should happen infrequently as page navigation is handled client-side. Most visitors will enter the site through the root page and that's inevitably cached.

The handler detects whether the remote agent is a search-engine spider and handles the request accordingly.

async function handlePageRequest(req, res, next) {
    try {
        let path = req.url;
        let noJS = (req.query.js === '0');
        let target = (req.isSpider() || noJS) ? 'seo' : 'hydrate';
        let page = await PageRenderer.generate(path, target);
        if (target === 'seo') {
            // not caching content generated for SEO
            res.set({ 'X-Accel-Expires': 0 });
        } else {
            res.set({ 'Cache-Control': CACHE_CONTROL });
            // remember the URLs used by the page
            pageDependencies[path] = page.sourceURLs;
        }
        res.type('html').send(page.html);
    } catch (err) {
        next(err);
    }
}

PageRenderer.generate() (page-renderer.js) uses our isomorphic React code to generate the page. Since the fetch API doesn't exist on Node.js, we need to supply a compatible function to the data source. We use this opportunity to capture the list of URLs that the front-end accesses. Later, we'll use this list to determine whether a cached page has become out-of-date.

async function generate(path, target) {
    console.log(`Regenerating page: ${path}`);
    // retrieve cached JSON through Nginx
    let host = NGINX_HOST;
    // create a fetch() that remembers the URLs used
    let sourceURLs = [];
    let fetch = (url, options) => {
        if (url.startsWith(host)) {
            sourceURLs.push(url.substr(host.length));
            options = addHostHeader(options);
        }
        return CrossFetch(url, options);
    };
    let options = { host, path, target, fetch };
    let rootNode = await FrontEnd.render(options);
    let appHTML = ReactDOMServer.renderToString(rootNode);
    let htmlTemplate = await FS.readFileAsync(HTML_TEMPLATE, 'utf-8');
    let html = htmlTemplate.replace(`<!--REACT-->`, appHTML);
    if (target === 'hydrate') {
        // add <noscript> tag to redirect to SEO version
        let meta = `<meta http-equiv=refresh content='0; url=?js=0'>`;
        html += `<noscript>${meta}</noscript>`;
    }
    return { path, target, sourceURLs, html };
}

FrontEnd.render() returns a ReactElement containing plain HTML child elements. We use React DOM Server to convert that to actual HTML text. Then we stick it into our HTML template, where an HTML comment sits inside the element that will host the root React component.

FrontEnd.render() is a function exported by our front-end's bootstrap code:

async function serverSideRender(options) {
    let basePath = process.env.BASE_PATH;
    let dataSource = new WordpressDataSource({
        baseURL: options.host + basePath + 'json',
        fetchFunc: options.fetch,
    });
    dataSource.activate();
    let routeManager = new RouteManager({
        routes,
        basePath,
    });
    routeManager.addEventListener('beforechange', (evt) => {
        let route = new Route(routeManager, dataSource);
        evt.postponeDefault(route.setParameters(evt, false));
    });
    routeManager.activate();
    await routeManager.start(options.path);
    let ssrElement = createElement(FrontEnd, { dataSource, routeManager, ssr: options.target });
    return harvest(ssrElement);
}
exports.render = serverSideRender;

The code initiates the data source and the route manager. Using these as props, it creates the root React element <FrontEnd />. The function harvest() (from relaks-harvest) then recursively renders the component tree until all we have are plain HTML elements:

Our front-end is built with the help of Relaks, a library that lets us make asynchronous calls within a React component's render method. Data retrievals are done as part of the rendering cycle. This model makes SSR very straightforward. To render a page, we just call the render methods of all its components and wait for them to finish.

JSON data retrieval

The following handler is invoked when Nginx requests a JSON file (i.e. when a cache miss occurs). It's quite simple. All it does is change the URL prefix from /json/ to /wp-json/ and set a couple of HTTP headers:

async function handleJSONRequest(req, res, next) {
    try {
        // exclude asterisk
        let root = req.route.path.substr(0, req.route.path.length - 1);
        let path = `/wp-json/${req.url.substr(root.length)}`;
        let json = await JSONRetriever.fetch(path);
        if (json.total) {
            res.set({ 'X-WP-Total': json.total });
        }
        res.set({ 'Cache-Control': CACHE_CONTROL });
        res.send(json.text);
    } catch (err) {
        next(err);
    }
}

JSONRetriever.fetch() (json-retriever.js) downloads JSON data from WordPress and performs error correction to deal with rogue plugins:

async function fetch(path) {
    console.log(`Retrieving data: ${path}`);
    let url = `${WORDPRESS_HOST}${path}`;
    let res = await CrossFetch(url);
    let resText = await res.text();
    let object;
    try {
        object = JSON.parse(resText);
    } catch (err) {
        // remove any error msg that got dumped into the output stream
        if (res.status === 200) {
            resText = resText.replace(/^[^\{\[]+/, '');
            object = JSON.parse(resText);
        }
    }
    if (res.status >= 400) {
        let msg = (object && object.message) ? object.message : resText;
        let err = new Error(msg);
        err.status = res.status;
        throw err;
    }
    let total = parseInt(res.headers.get('X-WP-Total'));
    removeSuperfluousProps(path, object);
    let text = JSON.stringify(object);
    return { path, text, total };
}

Fields that aren't needed are stripped out before the JSON object is stringified again.
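removeSuperfluousProps() isn't listed above; a minimal sketch of what such a stripping step might look like follows. The field list here is a guess on our part; the real implementation lives in json-retriever.js and decides what to drop based on the request path.

```javascript
// Illustrative sketch only; the actual implementation in json-retriever.js
// may strip different fields for different endpoints.
function removeSuperfluousProps(path, object) {
    const unwanted = ['guid', '_links', 'ping_status', 'comment_status'];
    const strip = (item) => {
        for (const key of unwanted) {
            delete item[key];
        }
    };
    // list endpoints (e.g. /wp/v2/posts) return arrays of objects
    if (Array.isArray(object)) {
        object.forEach(strip);
    } else {
        strip(object);
    }
}
```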

Purge request Handling

The Proxy Cache Purge plugin sends out PURGE requests whenever a new article is published on WordPress. We configured our system so that Node receives these requests. Before we carry out the purge, we check that the request really is from WordPress. It may give us either a URL or a wildcard expression. We watch for two specific scenarios: when the plugin wants to purge the whole cache and when it wants to purge a single JSON object. In the latter case, we proceed to purge all queries that might be affected.

async function handlePurgeRequest(req, res) {
    // verify that the request is coming from WordPress
    let remoteIP = req.connection.remoteAddress;
    res.end();
    let wordpressIP = await dnsCache.lookupAsync(WORDPRESS_HOST.replace(/^https?:\/\//, ''));
    if (remoteIP !== `::ffff:${wordpressIP}`) {
        return;
    }
    let url = req.url;
    let method = req.headers['x-purge-method'];
    if (method === 'regex' && url === '/.*') {
        pageDependencies = {};
        await NginxCache.purge(/.*/);
        await PageRenderer.prefetch('/');
    } else if (method === 'default') {
        // look for URLs that looks like /wp-json/wp/v2/pages/4/
        let m = /^\/wp\-json\/(\w+\/\w+\/\w+)\/(\d+)\/$/.exec(url);
        if (!m) {
            return;
        }
        // purge matching JSON files
        let folderPath = m[1];
        let pattern = new RegExp(`^/json/${folderPath}.*`);
        await NginxCache.purge(pattern);
        // purge the timestamp so CSR code knows something has changed
        await NginxCache.purge('/.mtime');
        // look for pages that made use of the purged JSONs
        for (let [ path, sourceURLs ] of Object.entries(pageDependencies)) {
            let affected = sourceURLs.some((sourceURL) => {
                return pattern.test(sourceURL);
            });
            if (affected) {
                // purge the cached page
                await NginxCache.purge(path);
                delete pageDependencies[path];
                if (path === '/') {
                    await PageRenderer.prefetch('/');
                }
            }
        }
    }
}

For example, when we receive PURGE /wp-json/wp/v2/posts/100/, we perform a purge of /json/wp/v2/posts.*. The approach is pretty conservative. Entries will often get purged when there's no need. This isn't terrible since the data can be reloaded fairly quickly. Since e-tags are based on contents, when no change has actually occurred we would end up with the same e-tag. Nginx will still send 304 not modified to the browser despite a back-end cache miss.
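The URL-to-pattern translation in that example can be isolated into a small helper. This is just a restatement of the regex already used in handlePurgeRequest() above, not a separate function in the repo:

```javascript
// Translate a per-object PURGE URL into the broad cache-purge pattern
// described above, e.g. /wp-json/wp/v2/posts/100/ -> ^/json/wp/v2/posts.*
function purgePatternFor(url) {
    const m = /^\/wp\-json\/(\w+\/\w+\/\w+)\/(\d+)\/$/.exec(url);
    return m ? new RegExp(`^/json/${m[1]}.*`) : null;
}
```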

After purging JSON data, we purge the /.mtime timestamp file. This acts as a signal to the browser that it's time to rerun data queries.

Then we purge HTML files generated earlier that made use of the purged data. Recall how in handlePageRequest() we had saved the list of source URLs.

Only Nginx Plus (i.e. the paid version of Nginx) supports cache purging. NginxCache.purge() (nginx-cache.js) is basically a workaround for that fact. The code is not terribly efficient but does the job. Hopefully cache purging will become available in the free version of Nginx in the future.

Timestamp handling

The handler for timestamp requests is extremely simple:

async function handleTimestampRequest(req, res, next) {
    try {
        let now = new Date;
        let ts = now.toISOString();
        res.set({ 'Cache-Control': CACHE_CONTROL });
        res.type('text').send(ts);
    } catch (err) {
        next(err);
    }
}

Front-end JavaScript

DOM hydration

The following function (main.js) is responsible for bootstrapping the front-end:

async function initialize(evt) {
    // create data source
    let host = process.env.DATA_HOST || `${location.protocol}//${location.host}`;
    let basePath = process.env.BASE_PATH;
    let dataSource = new WordpressDataSource({
        baseURL: host + basePath + 'json',
    });
    dataSource.activate();
    // create route manager
    let routeManager = new RouteManager({
        routes,
        basePath,
        useHashFallback: (location.protocol !== 'http:' && location.protocol !== 'https:'),
    });
    routeManager.addEventListener('beforechange', (evt) => {
        let route = new Route(routeManager, dataSource);
        evt.postponeDefault(route.setParameters(evt, true));
    });
    routeManager.activate();
    await routeManager.start();
    let container = document.getElementById('react-container');
    if (!process.env.DATA_HOST) {
        // there is SSR support when we're fetching data from the same host
        // as the HTML page
        let ssrElement = createElement(FrontEnd, { dataSource, routeManager, ssr: 'hydrate' });
        let seeds = await harvest(ssrElement, { seeds: true });
        plant(seeds);
        hydrate(ssrElement, container);
    }
    let csrElement = createElement(FrontEnd, { dataSource, routeManager });
    render(csrElement, container);
    // check for changes periodically
    let mtimeURL = host + basePath + '.mtime';
    let mtimeLast;
    for (;;) {
        try {
            let res = await fetch(mtimeURL);
            let mtime = await res.text();
            if (mtime !== mtimeLast) {
                if (mtimeLast) {
                    dataSource.invalidate();
                }
                mtimeLast = mtime;
            }
        } catch (err) {
        }
        await delay(30 * 1000);
    }
}

The code creates the data source and the route manager. When SSR is employed, we 'hydrate' DOM elements that are already in the page. We first perform the same sequence of actions that was done on the server. Doing so pulls in data that will be needed for CSR later (while the visitor is still looking at the SSR HTML). Passing { seeds: true } to harvest() tells it to return the contents of asynchronous Relaks components in a list. These 'seeds' are then planted into Relaks, so that asynchronous components can return their initial appearances synchronously. Without this step, the small delays required by asynchronous rendering would lead to mismatches during the hydration process.

Once the DOM is hydrated, we complete the transition to CSR by rendering a second <FrontEnd /> element, this time without the prop ssr.

Then we enter an endless loop that polls the server for content update every 30 seconds.
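The delay() helper used by that polling loop isn't defined in the excerpt; a conventional Promise-based implementation would look like this (an assumption on our part, the repo may define it differently):

```javascript
// Resolve after the given number of milliseconds; lets the polling loop
// use await instead of nesting setTimeout callbacks.
function delay(ms) {
    return new Promise((resolve) => setTimeout(resolve, ms));
}
```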

Routing

We want our front-end to handle WordPress permalinks correctly. This makes page routing somewhat tricky since we cannot rely on simple pattern matching. The URL /hello-world/ could potentially point to either a page, a post, or a list of posts with a given tag. It all depends on slug assignment. We always need information from the server in order to find the right route.

relaks-route-manager was not designed with this usage scenario in mind. It does provide a means, however, to perform asynchronous operations prior to a route change. When it emits a beforechange event, we can call evt.postponeDefault() to defer the default action (permitting the change) until a promise is fulfilled:

routeManager.addEventListener('beforechange', (evt) => {
    let route = new Route(routeManager, dataSource);
    evt.postponeDefault(route.setParameters(evt, true));
});

route.setParameters() (routing.js) basically displaces the default parameter extraction mechanism. Our routing table is reduced to the following:

let routes = {
    'page': { path: '*' },
};

This simply matches any URL.

route.setParameters() itself calls route.getParameters() to obtain the parameters:

async setParameters(evt, fallbackToRoot) {
    let params = await this.getParameters(evt.path, evt.query);
    if (params) {
        params.module = require(`pages/${params.pageType}-page`);
        _.assign(evt.params, params);
    } else {
        if (fallbackToRoot) {
            await this.routeManager.change('/');
            return false;
        } else {
            throw new RelaksRouteManagerError(404, 'Route not found');
        }
    }
}

The key parameter is pageType, which is used to load one of the page components.

At a glance, route.getParameters() (routing.js) might seem incredibly inefficient. To see if a URL points to a page, it fetches all pages and checks whether one of them has that URL:

let allPages = await wp.fetchPages();
let page = _.find(allPages, matchLink);
if (page) {
   return { pageType: 'page', pageSlug: page.slug, siteURL };
}

It does the same check on categories:

let allCategories = await wp.fetchCategories();
let category = _.find(allCategories, matchLink);
if (category) {
    return { pageType: 'category', categorySlug: category.slug, siteURL };
}

Most of the time, the data in question would be cached already. The top nav loads the pages, while the side nav loads the categories (and also top tags). Resolving the route wouldn't require actual data transfer. On cold start the process would be somewhat slow. Our SSR mechanism would mask this delay, however. A visitor wouldn't find it too noticeable. Of course, since we have all pages at hand, a page will pop up instantly when the visitor clicks on the nav bar.
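The matchLink predicate used in the two snippets above isn't shown; one plausible form, assuming it closes over the site URL and the requested path, might be the following. This is hypothetical: the repo's version may normalize URLs differently.

```javascript
// Hypothetical: build a predicate that tests whether an object's WordPress
// permalink corresponds to the path the visitor navigated to.
function makeMatchLink(siteURL, path) {
    return (object) => object.link === siteURL + path;
}

// e.g. inside getParameters(), with example.net standing in for the site
const matchLink = makeMatchLink('https://example.net', '/hello-world/');
```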

route.getObjectURL() (routing.js) is used to obtain the URL to an object (post, page, category, etc.). The method just removes the site URL from the object's WP permalink:

getObjectURL(object) {
    let { siteURL } = this.params;
    let link = object.link;
    if (!_.startsWith(link, siteURL)) {
        throw new Error(`Object URL does not match site URL`);
    }
    let path = link.substr(siteURL.length);
    return this.composeURL({ path });
}

Since we must download a post before we can link to it, clicking on an article will nearly always bring it up instantly.

For links to categories and tags, we perform explicit prefetching:

prefetchObjectURL(object) {
    let url = this.getObjectURL(object);
    setTimeout(() => { this.loadPageData(url) }, 50);
    return url;
}

The first ten posts are always fetched so the visitor sees something immediately after clicking.

WelcomePage

WelcomePage ([welcome-page.jsx](https://github.com/trambarhq/relaks-wordpress-example/blob/master/src/pages/welcome-page.jsx)) is an asynchronous component. Its renderAsync() method fetches a list of posts and passes them to WelcomePageSync for actual rendering of the user interface:

async renderAsync(meanwhile) {
    let { wp, route } = this.props;
    let props = { route };
    meanwhile.show(<WelcomePageSync {...props} />)
    props.posts = await wp.fetchPosts();
    meanwhile.show(<WelcomePageSync {...props} />)
    props.medias = await wp.fetchFeaturedMedias(props.posts, 10);
    return <WelcomePageSync {...props} />;
}

WelcomePageSync, meanwhile, delegates the task of rendering the list of posts to PostList:

render() {
    let { route, posts, medias } = this.props;
    return (
        <div className='page'>
            <PostList route={route} posts={posts} medias={medias} minimum={40} />
        </div>
    );
}

PostList

The render method of PostList ([post-list.jsx](https://github.com/trambarhq/relaks-wordpress-example/blob/master/src/widgets/post-list.jsx)) doesn't do anything special:

render() {
    let { route, posts, medias } = this.props;
    if (!posts) {
        return null;
    }
    return (
        <div className='posts'>
        {
            posts.map((post) => {
                let media = _.find(medias, { id: post.featured_media });
                return <PostListView route={route} post={post} media={media} key={post.id} />
            })
        }
        </div>
    );
}

The only thing noteworthy about the component is that it performs data loading on scroll:

handleScroll = (evt) => {
    let { posts, maximum } = this.props;
    let { scrollTop, scrollHeight } = document.body.parentNode;
    if (scrollTop > scrollHeight * 0.5) {
        if (posts && posts.length < maximum) {
            posts.more();
        }
    }
}

Cordova deployment

This is a bonus section. It shows how you can create a cheapskate mobile app with the help of Cordova. To get started, first install Android Studio or Xcode. Then run npm install -g cordova in the command line. Afterward, go to relaks-wordpress-example/cordova/sample-app and run cordova prepare android or cordova prepare ios. Open the newly created project in Android Studio or Xcode. You'll find it in relaks-wordpress-example/cordova/sample-app/platforms/[android|ios]. If nothing has gone amiss, you should be able to deploy the example to an attached phone. Cordova is a notoriously brittle platform, however. Your mileage may vary.

The Cordova code in the repo retrieves data from https://et.trambar.io. To change the location, set the environment variable CORDOVA_DATA_HOST to the desired address and run npm run build.

Final words

I hope this example lends you some new inspiration. While WordPress is old software, with a bit of clever coding we can greatly enhance the end-user experience. Our demo system feels fast on initial load. It feels fast during subsequent navigation. More importantly perhaps, the system is cheap to operate.

The concepts demonstrated here aren't specific to WordPress. Server-side rendering (SSR) in particular is a very useful technique for any single-page web app. It lets us festoon our project with JavaScript libraries without having to worry too much about the negative impact on load time. For instance, no effort was made to optimize the example code, and as you can see in the webpack build report, our front-end takes up a whopping 850KB (242KB gzipped). Yet thanks to SSR, the garbage has no discernible impact.




All Comments: [-] | anchor

martin_a(4030) 3 days ago [-]

Interesting, but somehow missing a point for me. You can have quite fast WordPress pages without rolling out this big of a tech stack.

It all comes down to a good theme (most themes are not good) and utilizing a little bit of tech, most of which is already integrated into WP.

For example, I love the Transients API. You can easily cache database calls for posts with the Transients API and save lots of lookup/query time for complex datasets. One could argue that the Transients API is not meant to be used this way, but I've built large sites which run fine (and fast) with that.

In the end it comes down to making smart choices: No tracking, few dependencies, minify all your assets, use browser caching, use scaled images, the whole toolset of good practices.

This will get you fast WordPress pages without worrying much about the tech stack you have to build around it.

chungleong(4024) 3 days ago [-]

The setup is more useful in a mobile situation, where latency is high and connectivity isn't so certain. Imagine you're travelling by metro and reading something on your phone.

pushedx(3651) 4 days ago [-]

Live demo is hosed at the moment. I think you should host the one that allows cache purges on a separate instance.

chung-leong(4028) 3 days ago [-]

The server might be hitting the open file limit or something. A lot of sockets get opened when the server generates an HTML page.

Theodores(3998) 4 days ago [-]

It serves div soup rather than modern HTML5. I am a stuck record on this topic but I think your document structure needs to come first. Simplicity always trumps complexity when it comes to speed, no matter what the discipline.

zamadatix(10000) 4 days ago [-]

Does it matter what it serves considering it's written in something completely different and the output is generated by a server not a human?

chatmasta(1058) 4 days ago [-]

This is cool, but the end result is you are basically using WordPress as a document store. Because you need to implement an entirely custom frontend, you lose various wordpress templating abilities. Granted, that might be a worthwhile tradeoff.

lukifer(4032) 4 days ago [-]

Honestly, the sophistication, usability, and hackability* of modern wp-admin makes it worth the price of admission alone (even considering the performance hit from awkward wp_posts schema). I once used a siteless WP as a CMS for a lean CodeIgniter mobile app JSON API.

* Hackable in the positive sense; WP has its security reputation for a reason, but the majority of vulnerabilities are attributable to third-party plugins and themes.




(37) Microsoft's New MT-DNN Outperforms Google BERT

37 points about 2 hours ago by nshm in 4007th position

medium.com | Estimated reading time – 2 minutes | comments | anchor

Multi-task learning and language model pre-training are popular approaches for many of today's natural language understanding (NLU) tasks. Now, Microsoft researchers have released technical details of an AI system that combines both approaches. The new Multi-Task Deep Neural Network (MT-DNN) is a natural language processing (NLP) model that outperforms Google BERT in nine of eleven benchmark NLP tasks.

In their paper Multi-Task Deep Neural Networks for Natural Language Understanding, the Microsoft Research and Microsoft Dynamics 365 authors show MT-DNN learning representations across multiple natural language understanding (NLU) tasks. The model "not only leverages large amounts of cross-task data, but also benefits from a regularization effect that leads to more general representations to help adapt to new tasks and domains."

MT-DNN builds on a model Microsoft proposed in 2015 and integrates the network architecture of BERT, a pre-trained bidirectional transformer language model proposed by Google last year.

As shown in the figure above, the network's low-level layers (i.e., text encoding layers) are shared across all tasks, while the top layers are task-specific, combining different types of NLU tasks. Like the BERT model, MT-DNN is trained in two phases: pre-training and fine-tuning. But unlike BERT, MT-DNN adds multi-task learning (MTL) in the fine-tuning phases with multiple task-specific layers in its model architecture.
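The shared-layers/task-specific-heads split described above can be sketched numerically. This is a toy illustration of hard parameter sharing (hypothetical shapes and task names, not the paper's actual architecture):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared text-encoding layer: one weight matrix used by every task.
W_shared = rng.standard_normal((8, 4))

# Task-specific output heads: separate weights per NLU task.
heads = {
    "classification": rng.standard_normal((4, 3)),
    "similarity": rng.standard_normal((4, 1)),
}

def forward(x, task):
    """Encode with the shared layer, then apply the task's own head."""
    h = np.tanh(x @ W_shared)   # shared representation across tasks
    return h @ heads[task]      # task-specific output

x = rng.standard_normal((2, 8))
print(forward(x, "classification").shape)  # (2, 3)
print(forward(x, "similarity").shape)      # (2, 1)
```

Fine-tuning a new task in this scheme only adds another head while reusing (and further regularizing) the shared encoder.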

MT-DNN achieved new SOTA results on ten NLU tasks, including SNLI, SciTail, and eight out of nine GLUE tasks, elevating the GLUE benchmark to 82.2% (a 1.8% absolute improvement). The researchers also demonstrate that, using the SNLI and SciTail datasets, representations learned by MT-DNN allow domain adaptation with substantially fewer in-domain labels than pre-trained BERT representations.

For more details, please find the paper on arXiv. Microsoft will release the code and pre-trained models.

The GLUE Benchmark leaderboard is here.




All Comments: [-] | anchor

braindead_in(3918) 40 minutes ago [-]

Was the code released?

phowon(10000) 35 minutes ago [-]

It's a relatively small modification of BERT with multi-task fine-tuning and slightly different output heads. It should be easy for any NLP researcher to replicate.

pacala(4012) 34 minutes ago [-]

'Microsoft will release the code and pre-trained models.', though there is no pointer to where the release will happen. Training gargantuan language models is getting quite expensive, so releasing code + pre-trained models is significant.

The architecture is a derivation of PyTorch BERT [0], with an MTL loss function on top.

[0] https://github.com/huggingface/pytorch-pretrained-BERT

TaylorAlexander(3995) 27 minutes ago [-]

Compare also to OpenAI's recent post on their own NLP work.

https://blog.openai.com/better-language-models/

mlevental(3528) 8 minutes ago [-]

what exactly is the comparison?

buboard(3998) 1 minute ago [-]

they buried that model with their meme-like release





Historical Discussions: Show HN: Matlock Extension – Discover the Open Source Libraries Pages Are Using (February 12, 2019: 34 points)

(34) Show HN: Matlock Extension – Discover the Open Source Libraries Pages Are Using

34 points 4 days ago by onassar in 3916th position

getmatlock.github.io | Estimated reading time – 4 minutes | comments | anchor

🤷‍♂️ What is Matlock?

Matlock is our first attempt at building an extension (currently for Chrome and Firefox) which detects and lists the Open Source libraries a webpage is using, along with relevant data about those libraries.

At the moment, this is limited to GitHub-hosted libraries, but we do have tests for 1,000+ libraries (and counting).

📸 Example Screenshots


🔨 How does it work?

Matlock works by running a series of tests on each page you navigate to, checking for specific variables, strings, headers, cookies 🍪 or function responses.

The results of these tests tell Matlock whether a certain library is being used, and in some cases, which version.

👮‍♀️ Why does Matlock require so many permissions?

Here's a breakdown of each of the permissions Matlock needs, and the reason why:

<all_urls>

This permission allows Matlock to run on each of the pages you browse.

It's the core permission required to test the Matlock breadcrumbs 🍞.

cookies 🍪

This permission allows Matlock to access the cookies 🍪 that are saved for the webpages you visit.

It's particularly useful for determining the frameworks that a webpage may possibly be running on.

storage

The storage permission is required to cache the response of certain requests.

tabs

This permission is required to load Matlock in each of your open tabs upon installation, without requiring you to reload each tab.

webRequest

This permission allows Matlock to access the headers for webpages you visit.

It's useful for determining Open Source libraries which include programming languages, frameworks, or servers that a webpage may be running on.

This one is important because we could determine this artificially by re-requesting the page you're on and checking those headers. However, doing so is risky: while some other extensions take this approach, it can cause serious security issues (e.g. re-submitting requests on poorly designed financial websites).


🍞 What's a breadcrumb?

Breadcrumbs are JavaScript files that Matlock uses to determine whether an Open Source library is being used and, if so, which version is being used.

Quick examples:

Existential testing

The following approaches are used for determining whether an Open Source library is being used on a page:

  • Searching for JavaScript references
  • Evaluating JavaScript statements (and/or functions)
  • Running a CSS selector query and checking for results
  • Checking a page's markup for specific strings
  • Checking a page's markup for specific patterns
  • Checking a request's cookies 🍪 for specific strings
  • Checking a request's cookies 🍪 for specific patterns
  • Checking a request's headers for specific strings
  • Checking a request's headers for specific patterns

Coming soon

  • Checking a request's stylesheets for specific strings
  • Checking a request's stylesheets for specific patterns
  • Checking a request's scripts for specific strings
  • Checking a request's scripts for specific patterns

Version testing

The following approaches are used for determining the (possible) version of an Open Source library that is being used on a page:

  • Searching for JavaScript references
  • Evaluating JavaScript statements
  • Running a CSS selector query and checking an attribute of the response for a specific pattern
  • Checking a page's markup for specific patterns
  • Checking a request's cookies 🍪 for specific patterns
  • Checking a request's headers for specific patterns

Coming soon

  • Checking a request's stylesheets for specific patterns
  • Checking a request's scripts for specific patterns
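As a rough illustration of the pattern-based checks listed above (written in Python rather than the extension's JavaScript; the library, regex, and function are hypothetical, not Matlock's actual breadcrumbs):

```python
import re

# Hypothetical breadcrumb: detect jQuery from page markup and, when
# possible, capture a version string from the script filename.
MARKUP_PATTERN = re.compile(r'jquery(?:[.-](\d+\.\d+\.\d+))?(?:\.min)?\.js')

def detect(markup):
    """Return (found, version) for a page's markup; version may be None."""
    match = MARKUP_PATTERN.search(markup)
    if not match:
        return False, None
    return True, match.group(1)

print(detect('<script src="/js/jquery-3.3.1.min.js"></script>'))
# (True, '3.3.1')
print(detect('<script src="/js/app.js"></script>'))
# (False, None)
```

A single regex like this can serve both roles above: its mere match answers the existential test, and its capture group answers the version test.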

👬 Who's behind it?

Matlock was built by Oliver Nassar and Adam Masson. We're based in Toronto 🇨🇦, and we both like coffee ☕.

🙏 Acknowledgments




All Comments: [-] | anchor

njn(2085) 4 days ago [-]

I just received this email:

  Hi Nikolas,
 
   My friend Oliver and I have included d3 in our chrome 
 extension https://getmatlock.github.io/, which works by 
 identifying the open source libraries a webpage is using. 
 Our goal is to give credit to the work that open source developers make.
 
   We just submitted it to Hacker News news.ycombinator.com, 
 so if you think this would be useful for open source 
 developers, we hope you'd consider throwing an upvote our way ;)
 
  Hope you don't mind the unsolicited email
Lol. Okay, every time I submit something to Hacker News I'm just going to spam a bunch of people as well :P
onassar(3916) 4 days ago [-]

Hey njn; apologies for the annoyance. My buddy Adam who I worked on it with reached out because we included d3 as part of the extension. Thought it would be useful, but apologies that you got annoyed by it :(





Historical Discussions: Master art forger Eric Hebborn keeps 'playing a great trick from the grave' (February 14, 2019: 7 points)

(33) Master art forger Eric Hebborn keeps 'playing a great trick from the grave'

33 points 2 days ago by pseudolus in 165th position

www.cbc.ca | Estimated reading time – 5 minutes | comments | anchor

Some might consider Eric Hebborn one of the 20th century's greatest visual artists, but not for his own work.

Always a troublemaker, Hebborn was a master forger who created nearly identical copies of famous art works.

Though his penchant for pirating paintings and drawings has been known for more than three decades, a prominent art historian believes one of Hebborn's fakes may hang at the National Gallery in London, England, to this day.

Hebborn, whose original work was panned by the art community, died of blunt force trauma in 1996.

He trained at the Royal Academy of Arts and was skilled in restoration and conservation, art fraud expert Colette Loll says.

But, 'he learned how to blur the lines between producing, enhancing and restoring artworks.'

This painting, known as A Man Reading (Saint Ivo?), is credited to Rogier van der Weyden and believed to date back to about 1450. (National Gallery)

The disputed painting, known as A Man Reading (Saint Ivo?), depicts a holy man reading a legal text — possibly Saint Ivo, the patron saint of lawyers. It's believed to be by Rogier van der Weyden, a 15th century painter from the Netherlands.

But Christopher Wright, an expert in old masters paintings, told the Observer that modern inaccuracies point to Hebborn's hand in the work.

For its part, the National Gallery maintains that the painting is original, calling the claims 'baseless' in a statement to the Observer.

Punking collectors

Hebborn claims to have faked — and sold — more than 500 works. That number is disputed, however, as it was common for forgers to brag and inflate their numbers, Loll says.

She thinks there are two factors that likely drove Hebborn's fakes.

On one hand, the man lived a life 'that he could not afford' as a starving artist and supplemented his income with the forgeries.

Colette Loll, founder and director of Art Fraud Insights, is pictured in 2016 at the Winterthur Museum. (Johannes Worsøe Berg/Submitted by Colette Loll)

The second reason is more nefarious: he was pranking the art world.

'What also drives a lot of forgers is really this kind of perverse need to get back at the art establishment that may not have admired or appreciated their own work,' Loll said.

According to Loll, Hebborn would only sell his works to art dealers and historians who 'just should have known better.'

If an expert couldn't tell the difference between an authentic and an imitation piece, in Hebborn's eyes it was their own fault.

'The art historian is not really very interested in art. I mean, he studies it, but he's much more interested in his career, whether he's going to go up, whether he's going to become the head of some great museum,' he said in the 1991 BBC documentary Portrait of a Master Forger.

'He is playing a great trick from the grave and I think that was part of his master strategy.' – Colette Loll, art fraud expert

Chemically similar

Aside from Hebborn's technical prowess when it came to duplicating works, the painter tricked experts on a deeper level.

He sourced materials commonly used by artists in their respective time periods — paper similar to that used in the 14th and 15th centuries. He would even create his own inks.

'Chemically, oftentimes his fakes would pass muster if you were doing a scientific analysis,' Loll said.

But his 'master stroke,' Loll says, was convincing collectors that artists' original works were his own forgeries to keep experts on their toes.

'I think he loved to really create that chaos in that question in the minds of the experts who he so despised,' she said.

Art scholar Christopher Wright believes that the National Gallery in London, England, is home to one of Eric Hebborn's fakes. (Oli Scarff/AFP/Getty Images)

Artistic fraud like Hebborn's, though amusing to some, has a lasting impact on art history.

'If you insert things into the art historical record that don't belong there then you're distorting the record for future generations and I think that's important,' Loll said.

But as new methodologies to understand the origins of art emerge, Loll believes that the doubt Hebborn created — and indeed the paintings he forged — will become clearer.

Until then, she has no doubt about the legacy he left in the wake of his death.

'He is playing a great trick from the grave and I think that was part of his master strategy.'






All Comments: [-] | anchor

StavrosK(527) about 7 hours ago [-]

> On one hand, the man lived a life 'that he could not afford' as a starving artist and supplemented his income with the forgeries.

> The second reason is more nefarious: he was pranking the art world.

More nefarious?

everdev(3005) about 6 hours ago [-]

> Hebborn would only sell his works to art dealers and historians who 'just should have known better.'

> If an expert couldn't tell the difference between an authentic and an imitation piece, in Hebborn's eyes it was their own fault.

It was more than pranking, it was selling forgeries.

mhh__(3995) about 6 hours ago [-]

Was he murdered?

selflesssieve(10000) about 4 hours ago [-]

From Wikipedia - 'On 8 January 1996, shortly after the publication of the Italian edition of his book The Art Forger's Handbook, Eric Hebborn was found lying in a street in Rome, having suffered massive head trauma possibly delivered by a blunt instrument. He died in hospital on 11 January 1996.[4]'

onetimemanytime(2792) about 3 hours ago [-]

Amazing, isn't it? If he can paint so good that experts can't tell him from extremely famous artists...shouldn't his work be valued a lot? I get the novelty on having, say, of one the few DaVinci paintings, but art in itself is valued as well.

>>'On 8 January 1996, shortly after the publication of the Italian edition of his book The Art Forger's Handbook, Eric Hebborn was found lying in a street in Rome, having suffered massive head trauma possibly delivered by a blunt instrument. He died in hospital on 11 January 1996.'

probably revenge...someone may have lost a fortune due to his antics.

thejohnconway(10000) about 2 hours ago [-]

The technique of the greatest painters is often not particularly difficult to copy. In my late teens I considered doing replica paintings to make money. I did a few, and I was somewhat surprised at how good they looked and how easy they were, and I'm only technically middling. (The dealer was only offering $200-$600 for them, and with material costs, it wasn't worth it in the end.)

Some painters are technically amazing and difficult to copy, but they are rarely the greatest artists. John Singer Sargent is one; he had an amazing brush technique that can't be overworked, yet captures forms and light very realistically - but you'd hardly call him a great artist.

weitzj(2840) about 10 hours ago [-]

Another interesting person who did forgery: https://en.m.wikipedia.org/wiki/Wolfgang_Beltracchi

eleclady(10000) 9 minutes ago [-]

He forges art, nets 100m+ euros, goes to prison for 3 years. I spend 40 years at a job I'd rather not be at, lucky to retire with 1m+ euros. Leaves a sour taste.

Someone(1031) 30 minutes ago [-]

There's also https://en.wikipedia.org/wiki/Han_van_Meegeren, who initially wasn't tried for forgery, but for aiding and abetting the Nazis by selling the Vermeers he 'found' to Hermann Göring.

His defense (facing, potentially, a death penalty) that the painting was a forgery was only believed after he had shown how to paint a 'Vermeer'.





Historical Discussions: Show HN: Synesthesia – Optimizing brainfuck compiler implemented as Nim macros (February 11, 2019: 33 points)

(33) Show HN: Synesthesia – Optimizing brainfuck compiler implemented as Nim macros

33 points 5 days ago by jeff_ciesielski in 10000th position

github.com | Estimated reading time – 8 minutes | comments | anchor

Synesthesia - A (mildly) optimizing brainfuck compiler implemented as Nim macros

How this came about

My career has been mostly in the embedded space, and while this arena is largely dominated by C (which I have a great affinity for), one thing I've always enjoyed is playing with interesting languages that work on small targets to scratch my language polyglot itch.

Nim has been my weapon of choice for this lately, but I had been toying around with the idea of writing a forth interpreter/compiler in Nim to work on small embedded targets.

While I was working on my first draft (which I don't think will ever see the light of day, since it's so dreadful), it struck me that the self-modifying, compile-time-evaluation nature of forth programs was a very good fit for nim's compile-time macro system, and that it would be a really neat project to implement a forth->nim compiler as nim macros, which could then be compiled for embedded targets to produce efficient native machine code rather than interpreting on the fly.

To that end, I decided that a proof of concept was in order, and that brainfuck would be a great target for a first attempt given its simplicity and the wealth of knowledge on the subject on the internet and on great sites like esolangs.

Once I got started, I found that there was also a bunch of great information about optimizing BF, so I figured 'why not implement some of that too?' and it just sort of ran away from me.

Whew, sorry about that novel, but before going any further, I'd like to thank the proprietors of the following sites for their excellent descriptions of various optimizations, as they were critical to the outcome of this project:

Requirements

  • Nim compiler (v 0.18) (I recommend using the excellent choosenim)
  • Some brainfuck source code you'd like to compile

Use

Installation:

  • Install the nim compiler (see above)
  • Clone the repo
  • Type nimble install

(I plan to eventually upload this to the nimble package directory)

Compiling BF files

To compile, use the -c flag like so:

synesthesia -c mendel.bf

By default, the compiler will generate an a.out file in the current directory. If you'd like to specify an alternative output file, use the -o flag:

synesthesia -c mendel.bf -o mendelbrot

Interpreting BF files

synesthesia also includes an optimizing brainfuck interpreter. To interpret a file, use the -i flag:

synesthesia -i mendel.bf

How compilation works

Nim includes a number of useful properties that uniquely position it for this sort of project. The first is its hygienic macro system which allows for compile time code generation.

The second is the ability to execute 'pure' code at compile time (pure being code that doesn't use FFI). Not everything works (I've found nested generators to fail pretty interestingly), but the vast majority of the nim language can be used. Combining this with the Macro/AST generation system allows one to perform interesting transforms on AST nodes.

Finally, nim allows one to read files at compile time and act on their contents. In the past, I've used this to generate register definitions for microcontrollers from their header files, but in this instance, this functionality is used to slurp the BF source file and iterate over its contents.

(Note before reading further: I'm hardly an expert on compiler construction, so please be gentle if I use incorrect terminology :) )

Step 0: Generate a temp source file

For simplicity, we generate a very simple nim source file containing the imports required to use the compiler module, and a call to synesthesia.compile(<path/to/bf/source>). We then call out to the nim compiler with this file as the target to begin compilation.

This file is compiled with the release and optimize-for-size flags applied (size optimization tends to produce faster code than speed optimization due to the nature of the generated code).
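A minimal sketch of this step (Python for illustration; the module name and call follow the description above, and the Nim flags shown in the comment are the standard release/size flags, not necessarily the project's exact invocation):

```python
def make_wrapper(bf_path):
    """Build the throwaway Nim source Step 0 describes: import the
    compiler module and hand it the BF source path at compile time."""
    return 'import synesthesia\nsynesthesia.compile("{}")\n'.format(bf_path)

# The wrapper is then fed to the Nim compiler with release +
# size-optimization flags, e.g.:
#   nim c -d:release --opt:size wrapper.nim
print(make_wrapper("mendel.bf"))
```

Because the BF path is baked into the wrapper, all of the remaining steps below happen inside Nim's macro evaluation while that file compiles.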

Step 1: Transformation to a list of tokens

Once a BF source file has been opened and the contents read into a sequence of characters, this sequence is iterated over and each relevant character is converted into an object: BFToken. BFToken is a variant type (i.e. it includes a kind field, think tagged unions in c).

For example, the '>' character causes the AP (memory cell index) to be incremented by one, and '<' causes it to be decremented by one.

Given that, we can conclude that we need an ApAdjust token for +1, and another for -1. With variant types, we can simply include an amt field in the bfsApAdjust token, and generate an appropriate variant when each token is encountered.

(The same idea goes for memory adjustment with the bfsMemAdjust variant)

A full listing of character => token mappings can be found in src/synesthesiapkg/common.nim
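The character-to-token step can be sketched in Python, with tagged tuples standing in for Nim's variant objects (the token names here mirror the description above but are not the project's actual identifiers):

```python
# Map each BF character to a tagged token. Amounts are included so that
# '>'/'<' and '+'/'-' become +1/-1 adjustments, as described above.
CHAR_TOKENS = {
    '>': ('ApAdjust', +1),
    '<': ('ApAdjust', -1),
    '+': ('MemAdjust', +1),
    '-': ('MemAdjust', -1),
    '.': ('Print',),
    ',': ('Read',),
    '[': ('Block',),
    ']': ('BlockEnd',),
}

def tokenize(source):
    """Turn BF source into a token list, skipping any other character."""
    return [CHAR_TOKENS[c] for c in source if c in CHAR_TOKENS]

print(tokenize('>><+'))
# [('ApAdjust', 1), ('ApAdjust', 1), ('ApAdjust', -1), ('MemAdjust', 1)]
```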

Step 2: Optimization

synesthesia implements a set of peephole optimizers that are applied to the resulting list of tokens. Some of these optimizations are obvious from the top level BF source (coalescing adjustments for example), while others work best if applied after other optimizations have already been made (dead adjustments / combining memory sets)

A full accounting of the optimizations applied can be found in src/synesthesiapkg/optimizer.nim, but to give the reader an idea of the sorts of things that are going on:

  • Adjacent AP and Mem adjustments (i.e. >>>>> or +++) can be squished into single instructions (ap += 5 and mem[ap] += 3, respectively). We use the amt field in the object to track the total amount. Note that because this tracks the net amount, +++--- becomes mem[ap] += 0
  • Dead adjustments can be eliminated, so any ap or mem adjustment with an amt of 0 can simply be removed from the set of instructions.
  • Clearing the current memory cell is a common pattern in BF [-]. Rather than sitting in a loop and decrementing the current cell until it hits zero, one can simply translate this to mem[ap] = 0, which is constant time.

More interesting optimizations include things like transforming loops into multiplication instructions and deferring AP adjustments by using offsets.
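The coalescing, dead-adjustment, and clear-loop rewrites above can be sketched on a token list (a Python illustration of the same peephole ideas, not the Nim optimizer's actual code; token names are made up for the sketch):

```python
def coalesce(tokens):
    """Merge runs of the same adjustment kind into one net adjustment."""
    out = []
    for tok in tokens:
        if (out and tok[0] == out[-1][0]
                and tok[0] in ('ApAdjust', 'MemAdjust')):
            out[-1] = (tok[0], out[-1][1] + tok[1])
        else:
            out.append(tok)
    return out

def clear_loops(tokens):
    """Rewrite the '[-]' pattern as a constant-time memory set."""
    out, i = [], 0
    while i < len(tokens):
        if tokens[i:i+3] == [('Block',), ('MemAdjust', -1), ('BlockEnd',)]:
            out.append(('MemSet', 0))
            i += 3
        else:
            out.append(tokens[i])
            i += 1
    return out

def drop_dead(tokens):
    """Remove adjustments whose net amount is zero."""
    return [t for t in tokens
            if not (t[0] in ('ApAdjust', 'MemAdjust') and t[1] == 0)]

# '+++---[-]' : the +3/-3 run coalesces to a net-zero adjustment (then
# disappears), and the loop becomes a single memory set.
toks = ([('MemAdjust', 1)] * 3 + [('MemAdjust', -1)] * 3
        + [('Block',), ('MemAdjust', -1), ('BlockEnd',)])
print(drop_dead(clear_loops(coalesce(toks))))  # [('MemSet', 0)]
```

As the text notes, some passes pay off only after others have run: dead-adjustment elimination here relies on coalescing having already computed net amounts.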

Step 3: AST Generation

Once all optimizations are applied, AST generation can begin. For the most part, AST generation is pretty straightforward: tokens are simply transformed into NimNode objects representing their underlying purpose.

For example:

  • bfsApAdjust(amount) => ap += amount
  • bfsMemAdjust(offset, amount) => mem[ap + offset] += amount
  • bfsPrint => putChar(mem[ap])

One notable exception to this is bfsBlock and bfsBlockEnd (i.e. loops in BF).

synesthesia implements blocks as while loops (sort of, but we use if => doWhile for performance reasons)

As we need to keep track of loops, we maintain a stack of 'blocks' during compilation. As other tokens are decoded, their NimNodes are added to the top block in the stack (i.e. their statements exist under the lexical scope of the last known open loop). When a new block is encountered ([ in BF), we generate a while loop scope and push it onto the stack. When a block ends (]), we pop the block off the stack and continue on.

Once all AST nodes have been generated, the resulting nim code (which we never see) is compiled to C, and then to machine code.
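The stack discipline for loops can be sketched in Python, with nested lists standing in for the Nim AST scopes (a hypothetical illustration, not the compiler's own code):

```python
def nest_blocks(tokens):
    """Group tokens between 'Block'/'BlockEnd' markers into nested lists,
    using the stack described above: '[' pushes a new scope, ']' pops it
    and attaches it to the enclosing scope."""
    stack = [[]]                      # bottom entry is the program body
    for tok in tokens:
        if tok == ('Block',):
            stack.append([])          # open a new loop scope
        elif tok == ('BlockEnd',):
            loop = stack.pop()        # close the current loop...
            stack[-1].append(loop)    # ...and attach it to its parent
        else:
            stack[-1].append(tok)     # ordinary statement in top scope
    return stack[0]

toks = [('MemAdjust', 1), ('Block',), ('ApAdjust', 1), ('BlockEnd',)]
print(nest_blocks(toks))
# [('MemAdjust', 1), [('ApAdjust', 1)]]
```

In the real compiler each popped scope becomes a loop node in the generated Nim AST rather than a list, but the push/pop bookkeeping is the same.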

License

The synesthesia compiler is licensed under the GPLv2. Any resulting binaries are licensed at the creator's discretion.




No comments posted yet: Link to HN comments page



(32) Nuclear Waste Dumpsters in Massachusetts Are Costing Taxpayers a Fortune

32 points about 5 hours ago by mimixco in 3818th position

www.bostonglobe.com | Estimated reading time – 9 minutes | comments | anchor

ROWE — The nuclear plant deep in the woods of this Western Massachusetts town stopped producing power 27 years ago when George H.W. Bush was still president. It was dismantled, piece by piece. Buried piping was excavated. Tainted soil was removed. But nestled amid steep hills and farmhouses set on winding roads, something important was left behind.

Under constant armed guard, 16 canisters of highly radioactive waste are entombed in reinforced concrete behind layers of fencing. These 13-foot-tall cylinders may not be much to look at, but they are among the most expensive dumpsters in the country, monuments to government inaction.

Lawyers for Rowe's defunct plant and long-dismantled reactors in Maine and Connecticut are poised to march into a federal courtroom in coming weeks and, for the fourth time in recent years, extract a huge sum of taxpayer money to cover ongoing security and maintenance costs. Taxpayers have already ponied up $500 million as a result of lawsuits filed by the plants' owners, and they are poised to pay $100 million more this time.

Nationally, the US government's failure to keep its vow to dispose of spent nuclear fuel and other high-level waste is proving staggeringly expensive. So far, the government has paid out more than $7 billion in damages for violating its legal pledge to begin hauling away nuclear waste by 1998.


And costs are expected to soar as more of the nation's aging reactors close permanently: Pilgrim Nuclear Power Station in Plymouth, for instance, is slated to go offline by June. Eventually, the remaining staff may have the sole job of safeguarding the radioactive detritus.

For more than 60 years, the Globe covered the Yankee Rowe nuclear plant in western Massachusetts.

By the Department of Energy's own optimistic estimates, the government will be forced to cough up a whopping $28 billion more in taxpayer funds as a result of litigation in coming years.

Long before the 35-day partial government shutdown crippled Washington, the dug-in debate over where to dump the nation's civilian nuclear waste set the radioactive standard for government dysfunction. For more than 60 years, government officials have tried to solve the problem, but plan after plan has collapsed amidst nationwide cries of "Not in my backyard!" So far, all officials have to show for the work is an enormous $10 billion-plus hole in Nevada that will probably never be used.

Instead of consolidating waste in one place, it has left material that is toxic for thousands of years at scores of current and former civilian nuclear plants. Neighbors fear the waste will stay permanently, siphoning money from other needs, thwarting redevelopment, and eventually posing a safety risk.

Senator Edward J. Markey, a longtime nuclear skeptic, said lingering nuclear waste tends to focus the attention of nearby cities and towns on a simple question: "When is this problem going to be solved? Or am I going to have a nuclear waste site in my community for the rest of my family's life?"

***

The promise of nuclear power burned bright in 1960 when the Yankee Atomic Electric Co. first fired up its reactor in Rowe. But, even then, proponents of the new power source knew they were creating a problem: the super-hot, super-radioactive uranium fuel rods left over from generating power. Most plants dumped them in deep pools of water, but that was only a temporary solution.

Yankee Atomic Electric Company was reflected in the dammed Deerfield River in 1996. (Globe Staff)

By the early 1980s, as waste accumulated, Congress made this pledge: The Department of Energy would haul away nuclear plants' spent fuel and other high-level waste starting by 1998 and the owners would pick up the tab, in part through a fee in customers' electric bills.

The law was supposed to jump-start a scientific process to choose the best repository for waste. But not-in-my-backyard politics repeatedly got in the way. Who, after all, wants a national nuclear waste dump buried nearby forever?

Congress later zeroed in on a remote desert site called Yucca Mountain in Nevada, about 75 miles from Las Vegas.

But Nevada didn't want the nation's spent nuclear fuel either, and the state's top politician, senator Harry Reid, the majority leader from 2007 to 2015, strongly opposed the plan. After the United States spent more than $10 billion drilling down into and studying the site, the Obama administration effectively killed Yucca around 2010. Congress has not restarted funding for the effort.

The spent fuel storage pool at Yankee Rowe in 1995. (Globe Staff)

Proposals to create a consolidated repository to store the waste for an interim period in New Mexico and West Texas are moving forward. But those, too, face huge hurdles.

Meanwhile, electric ratepayers from New England, home to seven current and former nuclear power plants, have paid what is now an estimated $3 billion with interest into the fund to dispose of nuclear waste.

But the account has not brought its intended benefit.

Even with strong support for a permanent fix from the nuclear power industry, environmentalists, and local officials, Congress has remained deadlocked on a final resting place for spent fuel and other highly radioactive waste.

"It's the sad story of government ineptitude," said Andrew Kadak, who was chief executive of Yankee Atomic Electric for eight years, including during the Rowe reactor's closure in the early 1990s. "There are technical solutions. . . . It's the politics that is preventing implementation."

So nuclear plants continue to keep the waste on hand. And they continue to get reimbursed for payroll, security, supplies, and more, because the courts have found the government is in partial breach of its contract to haul away the waste.

In a twist, the government's payments can't come from that nuclear waste fund, a federal court ruled. Instead, it is taken from a separate pool of taxpayer dollars for court judgments and settlements of lawsuits against the government.

Workers at the Yankee Rowe Atomic plant donned protective clothing before entering the nuclear area as part of decommissioning work in 1996. (Globe Staff)

The latest suit from Yankee Rowe and the two other fully shuttered New England plants in Wiscasset, Maine, and Haddam, Conn., is set to soon go to trial and cost taxpayers more than $100 million.

And it probably won't be the last lawsuit. Company officials say each plant spends about $10 million a year safeguarding its waste and maintaining corporate structures solely for that task.

Meanwhile, soon-to-close Pilgrim is getting ready to follow in Yankee Rowe's footsteps, moving its remaining spent fuel from cooling pools to huge concrete cylinders, known as dry cask storage, by 2022.

So far, across the country, there haven't been any serious accidents with the casks, according to the Nuclear Regulatory Commission. But as the time frame for their use stretches out indefinitely, no one can be sure how long before the waste poses a threat.

The uncertainty also is forcing plant operators to plan for longer-term issues including climate change and rising sea levels. Officials at Pilgrim, which is oceanfront property, said last year that the plant will move its current cylinders to higher ground and place new ones there, too.

The NRC believes the casks should be safe for years to come, licensing their use for up to 40 years at a time.

The agency has ruled that, with proper inspection and maintenance, casks could last more than 100 years before the waste would have to be transferred to a new steel canister and concrete shell.

But Allison M. Macfarlane, a former NRC chairwoman, said there's no guarantee the infrastructure will be in place to monitor them for safety.

"That assumes our institutions are robust and will last hundreds of years and I think that's a poor assumption based on no evidence whatsoever," Macfarlane said in the midst of the partial federal shutdown.

That is why, experts insist, a permanent subterranean repository like the one planned for Yucca Mountain is the only real solution.

"You should really put it underground where the risk is much lower and you don't have to worry about institutional failures," said MIT researcher Charles W. Forsberg, a chemical and nuclear engineer.

In the meantime, communities that host closed and closing nuclear plants face yet another cost: prime real estate that's potentially locked up for generations.

State Senator Viriato M. deMacedo of Plymouth said, "We have a mile of oceanfront property where that plant is. Once it closes, it will never be able to be used as long as those spent fuel rods are there."

Some still hope that politicians will find a final graveyard for the nuclear waste, and that the bucolic valley where Yankee Rowe stood and the beach where Pilgrim stands will be redeveloped.

But, after three generations of failed efforts to permanently dispose of the waste, another vision is more likely. Plymouth, where the Pilgrims made the West's first permanent mark in New England, could be home to its last: 61 gigantic casks of nuclear waste forever overlooking the sea.

Joshua Miller can be reached at [email protected]. Follow him on Twitter @jm_bos.





All Comments: [-] | anchor

sandworm101(3930) about 4 hours ago [-]

There are so many layers this article has missed. That 'waste' is also viewed by some as a resource, a repository of potential material for weapons. So it cannot be put somewhere out of reach. There has also been progress in deep-drill disposal options. It could be placed a few miles down from where it is now, in bedrock that will one day melt back into the earth. Drill a really deep hole (many miles), cement it over and forget about it. But that means we wouldn't be able to get at it later.

nkurz(28) about 3 hours ago [-]

This sounds plausible, but I haven't previously heard of this being something that affects current policy. Do you have any sources you could point to that would defend the theory that a safe permanent disposal is being overlooked because it doesn't allow for future retrieval?

erentz(3238) about 2 hours ago [-]

It's only useful for dirty bomb style weapons that a terrorist might make. You can't make weapons grade plutonium from it. All plutonium used in US weapons was made in special purpose reactors at Hanford, WA. We made so much of it and have enough stockpiled to build all the bombs we'd ever need that we shut down the production there in 1987.

It may be a useful resource for other types of nuclear plants though.

Ideally we could bury this really deep (as you propose) and forget about it. Or buy out Nevada's opposition and reopen Yucca.

zunzun(3968) about 3 hours ago [-]

I don't see any mention of solar waste.

dj_gitmo(10000) about 2 hours ago [-]

I think they're pretty clear in the article that the problem is that there is no centralized place to ship the waste. Having it sit out in the open where it needs constant security, next to a plant that closed decades ago, is the sign of a political failure.

They keep trying to put it in Nevada, but Nevadans aren't having it. They don't trust the federal government after being lied to about the dangers of the open-air atomic bomb testing that they carried out in the 1940s and 50s.

I understand storage and bomb testing are not equivalent, but I don't blame Nevadans for being skeptical. The government should look someplace else at this point.





Historical Discussions: Artificial intelligence, algorithmic pricing, and collusion (February 03, 2019: 4 points)
Artificial intelligence, algorithmic pricing, and collusion (February 14, 2019: 3 points)
Artificial intelligence, algorithmic pricing, and collusion (February 13, 2019: 3 points)

(32) Artificial intelligence, algorithmic pricing, and collusion

32 points about 1 hour ago by mgulaid in 3853rd position

voxeu.org | Estimated reading time – 8 minutes | comments | anchor

Emilio Calvano, Giacomo Calzolari, Vincenzo Denicolò, Sergio Pastorello 03 February 2019

Remember your last online purchase? Chances are, the price you paid was not set by humans but rather by a software algorithm. Already in 2015, more than a third of the vendors on Amazon.com had automated pricing (Chen et al. 2016), and the share has certainly risen since then – with the growth of a repricing software industry that supplies turnkey pricing systems, even the smallest vendors can now afford algorithmic pricing.

Unlike the traditional revenue management systems long in use by such businesses as airlines and hotels, in which the programmer remains effectively in charge of the strategic choices, the pricing programs that are now emerging are much more 'autonomous'. These new algorithms adopt the same logic as the artificial intelligence (AI) programs that have recently attained superhuman performances in complex strategic environments such as the game of Go or chess. That is, the algorithm is instructed by the programmer only about the aim of the exercise – winning the game, say, or generating the highest possible profit. It is not told specifically how to play the game but instead learns from experience. In a training phase, the algorithm actively experiments with the alternative strategies by playing against clones in simulated environments, more frequently adopting the strategies that perform best. In this learning process, the algorithm requires little or no external guidance. Once the learning is completed, the algorithm is put to work.
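The training loop described above — an agent that is told only its objective and refines its strategy through repeated play against a clone of itself — can be sketched as a minimal Q-learning simulation. Everything here (the price grid, the toy demand rule, the learning parameters) is illustrative and is not taken from the authors' actual implementation:

```python
import random

# Minimal sketch of a self-learning pricing agent (illustrative only; the
# price grid, demand rule, and parameters are NOT from Calvano et al. 2018).
PRICES = [1.0, 1.5, 2.0]  # discrete price grid; 2.0 plays the "collusive" role

def profit(p_own, p_rival):
    """Toy demand: the cheaper firm takes the whole market; ties split it."""
    if p_own < p_rival:
        share = 1.0
    elif p_own == p_rival:
        share = 0.5
    else:
        share = 0.0
    return p_own * share

def train(episodes=5000, alpha=0.1, eps=0.1, seed=0):
    """Epsilon-greedy Q-learning against a clone that mirrors the agent."""
    rng = random.Random(seed)
    # State = the rival's most recent price; Q[state][action] = value estimate.
    q = {s: {a: 0.0 for a in PRICES} for s in PRICES}
    state = rng.choice(PRICES)
    for _ in range(episodes):
        # Mostly exploit the best-known price, occasionally experiment.
        if rng.random() < eps:
            action = rng.choice(PRICES)
        else:
            action = max(q[state], key=q[state].get)
        reward = profit(action, state)   # the clone charged `state` this round
        q[state][action] += alpha * (reward - q[state][action])
        state = action                   # the clone mirrors the agent next round
    return q
```

Because this toy agent is myopic (it values only the current round's profit), undercutting always pays and it drifts to the competitive price; the paper's point is that agents that also weigh future profits can instead learn the reward-punishment strategies that sustain high prices.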

From the antitrust standpoint, the concern is that these autonomous pricing algorithms may independently discover that if they are to make the highest possible profit, they should avoid price wars. That is, they may learn to collude even if they have not been specifically instructed to do so, and even if they do not communicate with one another. This is a problem. First, 'good performance' from the sellers' standpoint, i.e. high prices, is bad for consumers and for economic efficiency. Second, in most countries (including Europe and the US) such 'tacit' collusion, not relying on explicit intent and communication, is not currently treated as illegal, on the grounds that it is unlikely to occur among human agents and that, even if it did occur, it would be next to impossible to detect. The conventional wisdom, then, is that aggressive antitrust enforcement would be likely to produce many false positives (i.e. condemning innocent conduct), while tolerant policy would result in relatively few false negatives (i.e. excusing anticompetitive conduct). With the advent of AI pricing, however, the concern is that the balance between the two types of error might be altered. Though no real-world evidence of autonomous algorithmic collusion has been produced so far,1 antitrust agencies are actively debating the problem.2

Those who are concerned (e.g. Ezrachi and Stucke 2015) argue that AI algorithms already outperform humans at many tasks, and there seems to be no reason why pricing should be any different. These commentators refer also to a computer science literature that has documented the emergence of some degree of uncompetitively high prices in simulations where independent pricing algorithms interact repeatedly. Some scholars (e.g. Harrington 2018) are developing paths towards making AI collusion unlawful.

Sceptics counter that these simulations do not use the canonical model of collusion, thus failing to represent actual markets (e.g. Kuhn and Tadelis 2018, Schwalbe 2018).3 Furthermore, the degree of anti-competitive pricing appears to be limited, and in any case high prices as such do not necessarily indicate collusion, which instead must involve some kind of reward-punishment scheme to coordinate firms' behaviour. According to the sceptics, achieving genuine collusion without communication is a daunting task not only for humans but even for the smartest AI programs, especially when the economic environment is stochastic. Whatever over-pricing is found in the simulations could be due to the algorithms' failure to learn the competitive equilibrium. If this were so, then there would be little reason to worry, given that the problem will presumably fade away as artificial intelligence develops further.

To inform this policy debate, in a recent paper (Calvano et al. 2018a) we construct AI pricing agents and let them interact repeatedly in controlled environments that reproduce economists' canonical model of collusion, i.e. a repeated pricing game with simultaneous moves and full price flexibility. Our findings suggest that in this framework even relatively simple pricing algorithms systematically learn to play sophisticated collusive strategies. The strategies mete out punishments that are proportional to the extent of the deviations and are finite in duration, with a gradual return to the pre-deviation prices.

Figure 1 illustrates the punishment strategies that the algorithms autonomously learn to play. Starting from the (collusive) prices on which the algorithms have converged (the grey dotted line), we override one algorithm's choice (the red line), forcing it to deviate downward to the competitive or Nash price (the orange dotted line) for one period. The other algorithm (the blue line) keeps playing as prescribed by the strategy it has learned. After this exogenous deviation in period t = 1, both algorithms regain control of the pricing.

Figure 1 Price responses to deviating price cut

Note: The blue and red lines show the price dynamic over time of two autonomous pricing algorithms (agents) when the red algorithm deviates from the collusive price in the first period.

The figure shows the price path in the subsequent periods. Clearly, the deviation is punished immediately (the blue line price drops immediately after the deviation of the red line), making the deviation unprofitable. However, the punishment is not as harsh as it could be (i.e. reversion to the competitive price), and it is only temporary; afterwards, the algorithms gradually return to their pre-deviation prices.

What is particularly noteworthy is the behaviour of the deviating algorithm. Plainly, it is responding not only to the rival but also to its own action. (If it responded only to the rival, there would be no reason to cut the price in period t = 2, as the rival has charged the collusive price in period t = 1). This kind of self-reactive behaviour is a distinctive sign of genuine collusion, and it would be difficult to explain otherwise.

The collusion that we find is typically partial – the algorithms do not converge to the monopoly price but a somewhat lower one. However, we show that the propensity to collude is stubborn – substantial collusion continues to prevail even when the active firms are three or four in number, when they are asymmetric, and when they operate in a stochastic environment. The experimental literature with human subjects, by contrast, has consistently found that they are practically unable to coordinate without explicit communication save in the simplest case, with two symmetric agents and no uncertainty.

What is most worrying is that the algorithms leave no trace of concerted action – they learn to collude purely by trial and error, with no prior knowledge of the environment in which they operate, without communicating with one another, and without being specifically designed or instructed to collude. This poses a real challenge for competition policy. While more research is needed before considering policy moves, the antitrust agencies' call for attention would appear to be well grounded.

References

Calvano, E, G Calzolari, V Denicolò and S Pastorello (2018a), "Artificial intelligence, algorithmic pricing and collusion," CEPR Discussion Paper 13405.

Calvano, E, G Calzolari, V Denicolò and S Pastorello (2018b), "Algorithmic Pricing: What Implications for Competition Policy?", forthcoming in Review of Industrial Organization.

Chen, L, A Mislove and C Wilson (2016), "An Empirical Analysis of Algorithmic Pricing on Amazon Marketplace", in Proceedings of the 25th International Conference on World Wide Web, WWW'16, World Wide Web Conferences Steering Committee, pp. 1339-1349.

Ezrachi, A and M E Stucke (2015), "Artificial Intelligence and Collusion: When Computers Inhibit Competition", Oxford Legal Studies Research Paper No. 18/2015, University of Tennessee Legal Studies Research Paper No. 267.

Harrington, J E, Jr (2018), "Developing Competition Law for Collusion by Autonomous Price-Setting Agents," working paper.

Schwalbe, U (2018), "Algorithms, Machine Learning, and Collusion," working paper.

Kühn K U and S Tadelis (2018), "The Economics of Algorithmic Pricing: Is collusion really inevitable?", working paper.

Endnotes

[1] The only antitrust case involving algorithmic pricing was the successful challenge by US and British antitrust agencies of a pricing software allegedly designed to coordinate the price of posters by multiple online sellers. See Wired Magazine, U.S. v. Topkins, 2015 and CMA case 2015 n. 50223.

[2] See, for instance, the remarks of M. Vestager, European Commissioner, at the Bundeskartellamt 18th Conference on Competition, Berlin, 16 March 2017 ('Algorithms and Competition'), and the speech of M. Ohlhausen, Acting Chairman of the FTC, at the Concurrences Antitrust in the Financial Sector conference, New York, 23 May 2017 ('Should We Fear the Things That Go Beep in the Night? Some Initial Thoughts on the Intersection of Antitrust Law and Algorithmic Pricing'). The OECD sponsored a Roundtable on Algorithms and Collusion in June 2017, and in September 2017 the Canadian Competition Bureau released a discussion paper on the ability of algorithms to collude as a major issue for antitrust enforcement ("Big data and Innovation: Implications for Competition Policy in Canada"). More recently, the British CMA published a white paper on "Pricing Algorithms" on 8 October 2018. Lastly, the seventh session of the FTC Hearings on competition and consumer protection, 13-14 November 2018, centred on the 'impact of algorithms and Artificial Intelligence.'

[3] These simulations typically use models of staggered prices that do not fit well with algorithmic pricing (Calvano et al. 2018a, 2018b).




No comments posted yet: Link to HN comments page



(26) See China's Chang'e 4 on moon's far side

26 points about 3 hours ago by longdefeat in 3224th position

earthsky.org | Estimated reading time – 6 minutes | comments | anchor

Image of the moon's far side, taken January 30, 2019, via NASA's Lunar Reconnaissance Orbiter (LRO). At the time of this image, LRO was 205 miles (330 km) east of the landing site. Thus the Chang'e 4 lander is only about two pixels across (bright spot between the two arrows), and the smaller Yutu-2 rover is not detectable. For a closer look, see the image below this. Image via LROC.

The first-ever successful landing on the far side of the moon took place just last month – January 3, 2019 – when the Chinese National Space Administration (CNSA) safely set down its Chang'e 4 spacecraft. One month later, NASA's Lunar Reconnaissance Orbiter (LRO) passed over the spot where the Chinese spacecraft and rover rested on the lunar surface. It rolled 70 degrees to the west to acquire the spectacular image above.

Help EarthSky keep going! Please donate what you can to our once-yearly crowd-funding campaign.

NASA released the LRO images on February 8, 2019. The first new image – shown above – was taken on January 30, 2019. It shows the landing site in an oblique limb-shot view, looking across the floor of Von Kármán crater. Only the lander, not the smaller rover (called Yutu-2), was visible in this image, since LRO was over 124 miles (200 km) from the area at the time. Even the lander was only a few pixels across.

Both the rover and lander are visible, however, in the second image taken the next day. The rover only shows as two tiny pixels, but it is there, as well as shadows from both the lander and rover. See the image below.

Both the Chang'e 4 lander and Yutu 2 rover are visible in this image, taken by the Lunar Reconnaissance Orbiter on January 31, 2019. Image via NASA/GSFC/Arizona State University.

Wider view of the Chang'e 4 landing site from LRO. Image via NASA/GSFC/Arizona State University. To see all the images, go to LROC's website.

The images were released at the website of the LROC, which stands for Lunar Reconnaissance Orbiter Camera. It's a system of three cameras mounted on the orbiter that capture high resolution black-and-white images – and moderate resolution multi-spectral images – of the moon's surface.

Go to LROC's website to view more images of Chang'e 4

Chang'e 4 landed on January 3 at 02:26 UTC (10:26 a.m. Beijing time; January 2 at 10:26 p.m. U.S. East Coast time). The event was covered extensively on Chinese media and by some media in the West. Jason Davis at the Planetary Society said:

Chang'e 4 itself launched on December 8, 2018. It entered lunar orbit four days later, where mission controllers spent 22 days testing the spacecraft's systems, waiting for the sun to rise at the landing site. [On January 2-3, 2019] Chang'e 4 successfully de-orbited and landed.

The Chinese Chang'e 4 lander touched down in the Von Kármán crater on the moon's far side. This image is a simulation of the lunar far side, via Alan Dyer (@amazingskyguy on Twitter).

The landing site is within Von Kármán crater – about 110 miles (180 km) in diameter – on the far side of the moon. After the crater first formed, its floor was covered by eruptions of basaltic lava, similar to the eruptions in Hawaii last summer. Scientists are wondering whether those basaltic rocks are any different from the basaltic rocks on the near side of the moon. Chang'e 4 should be able to help answer that question.

Since the crater is so large, it contains many much smaller craters inside it. Most of those are less than 660 feet (200 meters) in diameter, dating back more than 3 billion years. Interestingly, because of the high density of small craters, when a new crater forms it does not increase the total number of craters much, if at all, since any new crater tends to erase an older crater below it.

The Chang'e 4 lander, photographed by the Yutu 2 rover on the far side of the moon. Image via CNSA/CLEP.

The Yutu 2 rover, shortly after deployment. Image via CNSA/CLEP.

View of Earth and the far side of the moon on October 28, 2014, from the earlier Chang'e 5 test vehicle mission. Image via CNSA.

It's also thought that some of the moon's mantle – the layer just beneath the crust – may even have been exposed when Von Kármán crater formed.

The Yutu 2 rover was deployed about 12 hours after the landing of Chang'e 4. A few days later, the rover entered standby mode – i.e. "took a nap" – to protect itself from temperatures reaching close to 200 degrees Celsius. It later woke up again and continued its study of the landing area.

Bottom line: We've seen the fantastic views of the lunar surface from the Chang'e 4 landing site on the moon's far side – for the first time ever – and now we also have the first high-resolution images of the landing area from lunar orbit, thanks to NASA's Lunar Reconnaissance Orbiter.

Via LROC





All Comments: [-] | anchor

ttsda(10000) about 1 hour ago [-]

After this page finishes loading on Safari the scrollbar disappears and it becomes impossible to scroll.

porphyrogene(10000) 22 minutes ago [-]

It loads right away on Firefox! Unfortunately it also immediately loads a full-page ad that obscures all of the page's content to beg for money before I have even had a full second to look at the page.

tosca(10000) about 1 hour ago [-]

The earth should not be that small.

chmod775(3742) 3 minutes ago [-]

Nonsense. You can make 2 objects appear any size relative to each other depending on your viewpoint/what kind of lens you use.

In this case there's nothing special going on though. The moon is really as far away as it looks.

Earth and moon are separated by a distance roughly equal to 30 earth diameters.

jedberg(2122) about 2 hours ago [-]

FYI Nasa was given $1.5B more than they asked for in their budget this year, with the new funding bill signed yesterday. I suspect this has a lot to do with it.

Edit: Changed $6B to $1.5B per jakeinspace below.

jakeinspace(10000) about 1 hour ago [-]

I think it was more like 1.5B more, but still, a nice little boost.




(26) Awesome Startup Credits – List of free/discounted plans for startups

26 points about 1 hour ago by dakshshah96 in 10000th position

github.com | comments | anchor




No comments posted yet: Link to HN comments page




Historical Discussions: Further Reflections on Amanita Muscaria as an Edible Species (2012) (February 15, 2019: 6 points)

(25) Further Reflections on Amanita Muscaria as an Edible Species (2012)

25 points 1 day ago by benbreen in 63rd position

bayareamushrooms.org | Estimated reading time – 48 minutes | comments | anchor

Further Reflections on Amanita muscaria as an Edible Species

by Debbie Viess

Over twenty five years ago, a tiny perfect grisette seduced me into the world of mushrooms. Barely three inches tall, it glowed pearly gray and grew from the middle of my favorite Bay Area hiking trail. The sight of it drew me to my knees. It was too beautiful to disturb, so I sketched it on a bank deposit slip, the only scrap of paper that I had with me. I carried that paper in my wallet for years and eventually identified it as a grisette, a member of the Amanita vaginata group, one of the many edible Amanita species found here in California.

The hook was set, and amanitas in the wild continued to intrigue me. I obsessively read mushroom field guides, paying particular attention to the amanitas. A desire to eat what I had tentatively identified as a "Coccora" (Amanita calyptroderma), a locally popular edible Amanita, coupled with a strong sense of self-preservation, caused me to join a local mycological society and begin my mushroom studies in earnest.

Since then, I have become a proponent of the safe and mindful collection and consumption of various edible California Amanita species, as well as a mushroom poison identifier and mushroom educator, and I continue to have an abiding passion for all of the members of the genus Amanita. It was therefore with great interest that I first learned of the paper discussing Amanita muscaria and its use as food by William Rubel and David Arora, in the October 2008 special mushroom issue of the Journal of Economic Botany.

For as long as I have known him, David Arora has recounted the story of the modern day treatment of muscaria as an edible species in the Nagano Prefecture of Japan. Along with many others who attended his lectures and forays, I was fascinated by the concept. I was also aware of the many instances of serious muscaria poisonings that have occurred both through the ages and in modern times, so I was curious how their argument in the Economic Botany paper would proceed.

The title of the article was "A Study of Cultural Bias in Field Guide Determinations of Mushroom Edibility Using the Iconic Mushroom, Amanita muscaria, as an Example." The authors start by noting that there is a broad scientific acceptance that muscaria toxins are water soluble, and that there are a few isolated practices of people around the world detoxifying and eating Amanita muscaria. From this they conclude that somehow "cultural bias" causes North American field guide authors to continue to list muscaria as a poisonous rather than edible species (Rubel, Arora, 2008).

But is this really a case of "cultural bias," or is it just good, common sense?

The basic hypothesis of field guide authors' mushroom edibility bias is sound. Any mushroom book that deals with edibility preferences is subject to the whims of its author: their culinary experience, personal judgment, and prevailing opinions all help to determine poisonous and edible designations. Since no one wants to recommend a mushroom that might be harmful to others, it is in everyone's best interest for field guide authors to err on the side of caution, even if you might be aware of exceptions to the rule.

As I read Rubel's arguments and selected quotes in his muscaria paper, my uneasiness grew. His several attempts to redefine the word "poisonous" so that it didn't apply to muscaria were disturbing. His suggestion that future mushroom book authors should list muscaria as an edible species, and that it would be perfectly unremarkable to do so, was also troubling. Do we really want to encourage folks to use less caution about known poisonous species, even if, as mycophagists of long standing, we may know of ways to circumvent these poisons? And finally, I wasn't buying the premise that muscaria was commonly accepted as a perfectly safe edible species anywhere in the world.

Amanita muscaria is one of the most beautiful and eye-catching mushrooms found anywhere. Although muscaria is a seriously toxic mushroom, its most abundant toxins – ibotenic acid and its decarboxylation by-product muscimol – are water soluble, and can be leached from the mushroom flesh through careful and prolonged boiling.

It is true that very small numbers of people around the world have indeed discovered that it can be made edible through careful and sometimes elaborate preparation; but it is also imperative to remember to throw out the water into which the toxins were leached. One American couple who forgot to do so became seriously intoxicated, to the point of damaging both themselves and their household (Beug, 2010)!

Field Guide Bias? Sí! Muscaria as a Safe Edible Species? No!

On these basic points (water soluble toxins in muscaria, field guide bias) I think that we can all agree. But rather than going on to demonstrate how most field guide authors show bias in all of their edibles' designations, the Rubel/Arora paper chose to present an elaborate justification for the treatment of muscaria as a perfectly safe edible species. The authors based this hypothesis upon the evidence that they selected, but I will show that this evidence is incomplete and therefore insufficient for declaring muscaria to be a perfectly safe edible species.

As an intellectual exercise, digging through dusty tomes to find a few scattered references to folks who ate muscaria as food in the course of history can make interesting reading. Conjecture can be strengthened by selective examples to support a hypothesis. However, it is difficult to prove a hypothesis beyond the shadow of doubt through the fog of centuries. What becomes troubling is when this conjecture and conflation of anecdotal evidence gets stamped with the imprimatur of someone of David Arora's stature, a man to whom many look for answers to mycological questions, especially in terms of mushroom edibility.

The authors' central hypothesis of the purported safe edibility of Amanita muscaria soon left the relative obscurity of subscribers to Economic Botany, and traveled to the boundless territory of the Internet, with links to the paper on both Rubel's and Arora's websites, in addition to many other places online, such as Wikipedia, Springerlink.com, Ingentaconnect.com, Discoverlife.org, Tititudorancea.org, etc. Now the paper, with what I believe to be major misconceptions, was being referenced by a wide variety of mushroom enthusiasts worldwide as "common knowledge," and a recipe for the "safe" preparation of muscaria was freely shared.

"Edible" Amanita muscaria: A Recipe for Disaster?

As public educators, on a topic that is mostly unknown here in North America, I believe that we must consider the impact of our words. Although many experienced mushroomers are aware of the fact that it is possible to remove the toxins from Amanita muscaria, it is naïve at best to assume that people will always carefully follow a recipe, especially one that includes a potentially dangerous mushroom. Ironically enough, even the original muscaria detoxification recipe that Rubel and Arora provided in the Economic Botany article had important numerical conversion errors, listing 250 gm. of muscaria as the equivalent of 4 ounces. In the online version of this paper, linked to from his website, Arora changed the amount of muscaria in the recipe to the correct weight of 110 gm. (Arora, 2009).

Yet even a perfectly reasonable recipe can have unreasonable translation into a real-time meal. If many folks have difficulties following any recipe, why start with a troublesome and sometimes even dangerous ingredient? I know of at least four folks who had unpleasant experiences after attempting to detoxify muscaria at home. One told me of her experiences directly, another wrote it up in great and glorious detail online (Konecney, 2009), and two others published their story in Mushroom, the Journal of Wild Mushrooming (Millman, Haff, 2004). Even the recent book Mycophilia by Eugenia Bone describes a less than ideal experience (waking up in a chair not remembering anything) after eating muscaria as an "edible" species, with two well known Western amateur mycologists who brought the muscaria to her vacation home in Colorado (Bone, 2011). Do you think they got the recipe wrong, too, or perhaps didn't care if it ended with the diners in a muscaria dream state?

What seems to be a fairly obvious factor that the authors failed to consider was this: the vast majority of folks who would even want to try muscaria as an edible are undoubtedly already primed for eating muscaria as an entheogen (in layperson's terms: to get high). In other words, they would have even less reason to want to follow exactly the elaborate procedures necessary to make this mushroom wholly non-toxic. For these "psychonauts," a nice, neuro-toxic poisoning could be looked upon as a bonus. Pity those poor folks who just want a nice mushroom meal for their families, though, and not a trip to the emergency room. Wouldn't a bit of warning be in order for them?

Redefining Poisonous to Exempt Muscaria

According to Rubel, one shouldn't even consider muscaria to be poisonous, at least in the strict sense. After all, a small piece won't kill you (Rubel, Arora, 2008). But in fact, although seldom fatal (its deadly designation in many older field guides does indeed seem like "overkill" to most) muscaria can certainly be dangerously toxic.

Ibotenic acid-containing mushrooms (Amanita pantherina and A. muscaria and their close relatives) are a major cause of serious mushroom poisonings, especially in the Pacific Northwest, often resulting in hospitalizations (Benjamin, 1995; Beug, 2006; Spoerke et al, 1994). Usually, these poisonings are self-limiting. The folks who were poisoned, regardless of the reason the mushroom was eaten, have no wish to repeat the experience.

Recent North American Deaths Linked to Amanita Muscaria Ingestions

Sometimes the unforeseen results of eating muscaria are more serious than "merely" an unpleasant poisoning and hospital stay. The National Poison Data System for 2004, established by the American Association of Poison Control Centers (AAPCC), listed a fatal outcome for a young man who ate 6-10 freeze-dried muscaria caps (Watson, 2004). He was discovered in cardiac arrest, and died 10 days later from anoxic brain injury. Another fatal muscaria poisoning case from 2007, recounted in a NAMA Toxicology Committee Report in the 2009 issue of McIlvainea, tells of an otherwise healthy young man who died twelve hours after ingesting 6 or 7 muscaria caps. After falling into a muscaria-induced swoon the night before, he was found dead in bed the next morning. The medical examiner who autopsied the corpse labeled it death by mushroom poisoning, since he could find no other contributing cause of death (Beug, 2009), although since there were other drugs involved, the exact cause of death remained unclear (Beug, 2012).

Blithe assurances of the safe and unremarkable edibility of muscaria would be cold comfort indeed to the families of the two separate cases of young men who ate muscaria and then fell into comas. While in this helpless state, one froze to death while camping, and the other died after aspirating vomitus (Beug, 2006).

In a more recent case, recounted to me by Marilyn Shaw, toxicology expert and poison identifier for the Rocky Mountain Poison Control, a young man in Aurora, Colorado narrowly escaped death when he was discovered naked and unconscious, with a severely lowered body temperature and in cardiac arrest, after the recreational ingestion of muscaria (Shaw, 2012).

Who knows how many other incidental deaths after muscaria ingestion there may have been? Testing for muscimol levels is hardly part of a coroner's normal toxicology panel (Benjamin, 2012).

Recent Muscaria Deaths in the Southern Hemisphere

Documented deaths from the ingestion of Amanita muscaria are not restricted to North America. Formerly found only in the Northern hemisphere, Amanita muscaria has been inadvertently introduced to the Southern hemisphere in Pinus tree farms, bringing a toxic red Amanita to places where no ibotenic acid-containing amanitas had been found before. This has had tragic consequences in Tanzania, where locals had safely gathered a number of choice, edible Amanita species for many generations, without a thought to careful identification. Often only the Amanita caps were gathered, leaving the bases buried. A muscaria cap in age, with its warts removed and with a striate margin, can closely resemble local edible species in Amanita section Caesarea.

While Finnish mycologists were in Tanzania describing some of these local edible Amanita species for science, they consulted on a muscaria poisoning case, where two women and a child were poisoned and in hospital. After reassuring doctors that the poisonings would resolve on their own, since that was indeed their experience with muscaria poisonings in Scandinavia, they were horrified to learn that one of the women died from her meal the next day. Upon further interviews with other Tanzanian locals gathering amanitas, they discovered even more recent muscaria deaths (Harkonen, 1994).

Our Most Famous North American Muscaria Fatality

If one is willing to go back a little over a hundred years, one discovers the unfortunate death of Italian diplomat Count de Vecchj, who requested amanita mushrooms from the Virginia countryside for his breakfast, believing them to be local examples of Amanita caesarea. Unfortunately, the mushrooms brought to and consumed by the Count were not the choice, edible caesarea of Italy but the toxic muscaria, and the Count ate a gluttonous meal of somewhere between a dozen and two dozen caps, which resulted in convulsions so great that he broke his hotel bed (Rose, 2006).

The Count, who prided himself upon his mushroom identification skills, died from his meal. Out of his death and its ensuing lurid and widespread publicity, sprang a renewed North American interest in mushroom societies, especially in the Northeast, to provide much needed public education about edible and poisonous wild mushrooms (Rose, 2006).

In the face of all this evidence, it is disingenuous at best to claim that muscaria is not a poisonous mushroom. But poisonous is an off-putting word, pleads Rubel, a fan of muscaria eating to be sure.

Evidence for Amanita muscaria as a Poisonous Mushroom

In his muscaria paper, Rubel states: "Listing A. muscaria as edible rather than poisonous is a completely unremarkable judgment in a culinary context." Here is how Rubel describes the effects of muscaria ingestion on his website:

Amanita muscaria is not poisonous in the sense that it can kill you. It is poisonous in the sense that if not parboiled in plentiful water (the "toxins" are water soluble), then raw or undercooked mushrooms eaten (in moderation) will cause you to become inebriated and possible nauseous. (Rubel, 2011).

The above statement assumes that future muscaria eaters, perhaps lulled into complacency by assurances that muscaria isn't really poisonous, will use moderation and carefully follow a recipe. But wouldn't a stronger emphasis on its very real toxicity be a better way to get any future muscaria mycophagists to be cautious in its preparation and consumption? Or maybe reject the idea of eating muscaria as an edible species altogether?

Here's what the Emergency Physicians Monthly website had to say about Amanita muscaria and the many ibotenic acid poisonings that they have collectively observed:

A toxic dose in adults is approximately 6 mg. muscimol or 60 mg. ibotenic acid—the amount found in one cap of Amanita muscaria. However, the amount and ratio of chemical compounds per mushroom varies widely from region to region and season to season. Spring and summer mushrooms have been reported to contain up to 10 times as much ibotenic acid and muscimol compared to autumn specimens. Toxic components are not distributed uniformly in the mushroom. Most of the muscimol and ibotenic are contained in the cap or pileus. A fatal dose has been calculated at approximately 15 caps. Fly agarics are known for unpredictable clinical effects which can be highly variable between individuals exposed to similar doses. Symptoms typically appear after 30 to 90 minutes and peak within three hours. Certain effects can last for days, but the majority of cases completely recover within 12 to 24 hours. Unlike other toxic mushroom ingestions, vomiting is uncommon. Patients may exhibit ataxia, auditory and visual hallucinations (described as sliding vision and "the ability to see through walls"), as well as hysteria. Central nervous system depression, coma, myoclonic jerking, hyperkinetic behavior, and seizures have been described in larger doses. Retrograde amnesia and somnolence can result following recovery. (Erickson, 2010)

Whoa, that sounds rather more unpleasant than "inebriation" and "mild nausea," doesn't it?

Let's take the informed opinion of another North American mycologist and toxicologist, Michael Beug, PhD. Beug fields poisoning calls in the Pacific Northwest, where muscimol poisonings account for the majority of all serious mushroom poisonings. He had also heard (first through the work of R. Gordon Wasson, and later from Dr. Daniel Stuntz) that some Russians living outside Moscow eat detoxified muscaria as an edible species, but he doesn't know how many do so, nor to what degree this is practiced. Here is what Beug had to say about eating Amanita muscaria:

Both Amanita muscaria and Amanita pantherina are large, showy, and delicious, though poisonous mushrooms (unless cooked by boiling them and then discarding the water, but if you don't get rid of all the water, look out!). Though some people in Russia apparently parboil and eat Amanita muscaria, it is not a practice I recommend. Amanita muscaria and Amanita pantherina are frequently eaten intentionally by people seeking to get high and are also frequently eaten by mistake (believe it or not, often from people thinking they had an Agaricus). The "trip" from Amanita muscaria and Amanita pantherina is generally not pleasant and involves hospitals more predominately than hallucinations. (Beug, 2004)

Attitudes about Eating Amanita muscaria from Outside of North America

A very different viewpoint of Russian fungal proclivities is provided by Gary Lincoff, mycologist and author of The Audubon Society Field Guide to North American Mushrooms. Lincoff and a group of 15 or so others traveled to the Kamchatka Peninsula of Russia in 2004 and 2005. Their purpose was to investigate firsthand the statements made by Gordon Wasson about Amanita muscaria use in Siberia, taken from his 1968 book, Soma: Divine Mushroom of Immortality. Here is what Lincoff had to say about local attitudes towards muscaria:

The hunter-gatherer peoples differ from the Russians in many ways but none more dramatic than in their use of mushrooms. The Russians hunt many kinds of edible mushrooms but avoid one mushroom in particular, the fly-agaric, Amanita muscaria, which they regard as very poisonous. In fact, it is used in Russia and Europe as a fly-killer: the mushroom is placed in a cup of milk to which flies are attracted and become numbed. The hunter-gatherers, on the other hand, collect and eat just one single mushroom, the same fly-agaric that the Russians avoid. (Lincoff, 2005)

The Eastern Siberian Koryak and Even (or Evensk) tribes, the hunter-gatherers to which Lincoff refers, eat their muscaria sun-dried and uncooked, for maximum mind-altering potency. It is used as a sort of tonic within that traditional society, especially by the elderly. It is not eaten as a food species, but as medicine.

Russian mycologist Tatiana Bulyankova, a scientist from Western Siberia who has been contributing field observations to the popular website Mushroomobserver.org, sent me a long letter about first-hand Russian mushroom eating practices. She also spoke laughingly about how in Russia, American field guide authors were roundly ignored, and warned me that it was pretty impossible to generalize anything about a country the size of Russia, or as she put it, 1/7 of all land mass. Point taken, Tatiana!

Here is what she had to say about Amanita muscaria:

The Fly Agaric, predictably, is very common here (and everywhere else in cold to temperate-climate Russia, I guess). It is the symbol of all toxic mushrooms here, I'd even say it's the symbol of poison. It's featured in countless books, cartoons, artworks... everyone knows that it's poisonous. Of course there are young idiots who try it as a recreational drug but that's a bad influence of the Internet, I guess. It's also consumed by tribe shamans of Yugra, Yakutia and other Northern territories but it's something I've only read in the books. (Bulyankova, 2011)

I think we may safely draw the conclusion that even in obsessively fungiphilic Russia, the common-sense cultural bias is against eating Amanita muscaria as an edible.

A quick survey of various field guides and online sources where the eating of muscaria as an edible species is mentioned shows very little empirical or even local evidence to bolster the claims – most muscaria eating was reported from elsewhere. A modern Lithuanian field guide stated that muscaria was poisonous, but also: "eaten in mountainous France and Austria." No word about Lithuanian edibility practices, though, despite an apparent historical tradition of muscaria inebriation.

The main market for muscaria in Eastern Europe seems to be in high potency, dried muscaria caps, harvested in Latvia and Bulgaria, and then sold online for "scientific purposes." Beware of what is sometimes deadly home research.

George Atkinson, in his 1900 mushroom book Studies in American Fungi, claimed that muscaria was: "eaten as food in parts of France and Russia, and sometimes in North America," but again, this is repeating information drawn from other sources without explicit verification of facts.

Bruno Cetto's more recent Italian mushroom guide, I Funghi dal Vero, Vol. 1, claims that muscaria was "eaten cooked and pickled in Russia, France and the Lake Garda region" of Italy. Again, there is no verification of these claims, and the information appears to be merely copied from one source to another without citation. There may well be a few folks in Russia who eat muscaria as an edible species, and perhaps Pouchet (detailed later in this essay) managed to convince some of the poor to do so in France as well, but these are hardly widespread practices.

A Food of Desperation in Italy

Attempting to track down some of the Italian muscaria-eating references from the Lake Garda region prior to WWII, I came up with the following, from Pierluigi Cornacchia's online article, "L'Amanita muscaria in Italia". This modern-day writer remarks upon the difficulty of tracking down these old references, even within Italy, and lists many local variations of common names for muscaria, all of which refer to its poisonous properties. Here are two quoted instances where locals in the past had detoxified and eaten muscaria (Cornacchia, 2006):

F. Cavara (1897) confirmed that in Vallombrosa (Firenze) Amanita muscaria was commonly consumed and stated, "I can assure you that many report, in some villages of Tuscany, for example above Pontassieve, in late autumn, this agaric is harvested in quantity and put to soak in basins where the water is changed every day, for 10 or 12 days, after which it is treated like other edible fungi and found excellent. It helps [in the preservation] that the season is cold." This information has been verified directly in the field. I have been collecting testimonies of elderly inhabitants of the villages of Reggello, Saltino, Pian di Melosa and Vallombrosa. The ovolo malefic ["evil egg"], as it is called in those parts, was usually consumed after appropriate preparations (boiling with vinegar, salting, rinsing with running water). According to the testimonies, the use of this fungus as food, which lasted until the beginning of World War II, was due solely to economic problems.

In other words, the ovolo malefic was a food of desperation, and the preparation needed to make it edible was hardly trivial.

Pouchet's Place in History

Another country for which the historical and "culturally accepted" practice of eating muscaria has been claimed is France. Although I could find zero evidence of current muscaria eating practices, and in fact a respected French mycologist of my acquaintance scoffed at the very idea (Wuilbaut, 2012), in his muscaria paper Rubel devoted a good bit of ink to the work of a Frenchman and scientist who apparently tried to popularize muscaria eating amongst the poor in the 1800s: Felix Archimede Pouchet.

Pouchet in his time – like Rubel in ours – equated the preparation and eating of poisonous muscaria to that of poisonous manioc, a staple food across Africa. Manioc starts out deadly poisonous and is made edible through careful preparation. But this is a poor analogy. Nobody in modern-day North America needs to eat muscaria to survive. Fresh or dried, dangerously poisonous, cyanide-containing manioc is often the only high-quality starch available to millions, mostly across Africa, where it can be grown in poor soil and under drought conditions. Its deadly toxins also discourage crop predation. But manioc, too, can be faultily prepared, and can cause some very serious illnesses.

Perhaps, like me, you had never heard of Pouchet? He was indeed a respected scientist of his time, and a popular science writer, but also one of the strongest proponents of the theory of spontaneous generation. Would it be safe to hold the rest of his science up to a modern light?

To prove that muscaria was a safe edible species, he fed dogs both muscaria-infused broth (to show that muscaria toxins were water soluble; the dogs died) and boiled and drained muscaria (the dogs survived) (Pouchet, 1839). He also claimed to have "fattened dogs" on boiled muscaria, but details of that experiment were not available, and fortunately for local dogs, none of these experiments were repeated, to my knowledge, by any other researchers. Pouchet's work was widely cited by others at the time (Rubel, Arora, 2008). But do a few dog studies really translate to human safety?

If muscaria was such a wonderful and safe edible species, why would Pouchet limit its use to the poor?

Pouchet is best known today for being a fierce public critic of Louis Pasteur, another scientist of the day who publicly disputed the commonly held theory of spontaneous generation. Pasteur was, of course, the French scientist who managed to keep lots of folks from dying in various horrible ways, by creating the process of pasteurization that prevented formerly widespread milk-borne diseases (typhoid and scarlet fever, septic sore throat, diphtheria, and diarrheal diseases) and by creating life-saving vaccines against the scourges of rabies and anthrax (Swayze and Reed, 1978).

Pasteur gave a public demonstration, to which Pouchet was formally invited, to prove once and for all that it was microorganisms, not spontaneous generation, that accounted for life appearing where there was apparently none before. Pasteur gave birth to the science of microbiology.

Pouchet was a no-show at this triumphant exhibition by Pasteur, but he did give us boiled muscaria for the poor as his legacy.

Sketchy historical evidence of muscaria eating around the world, couched in terms of "it is said" and "it is reported that," should not be used to bolster claims of its safety. There is no evidence that it was ever a commonly accepted edible species anywhere in the world, and for good reason.

Amanita muscaria Consumption in Japan: the Exception, Not the Rule

What about in Japan, specifically the Nagano Prefecture, where the consumption of muscaria as an edible species is often cited?

I first learned about the unusual practice of mundane (as opposed to ritual) muscaria munching at David Arora's annual Mendocino Thanksgiving Foray, an event that I attended, both as a participant and as staff, for over a dozen years running. The story that he told was both fascinating and charming: he claimed to have passed local mushroom hunters along a Nagano mountainside whose baskets were filled with muscaria. Arora's basket brimmed with Boletus edulis, and they each looked at the other's find in horror! Great theatre, but what is the deeper reality?

While visiting the Nagano Prefecture, Arora tried the muscaria pickles that are a traditional but in fact seldom eaten food there. Nagano is a landlocked prefecture, wholly cut off from the sea. The practice of pickling muscaria began after "salt roads" were built from the coast into the mountains over a hundred years ago.

In addition to Arora's experiences there, a young man by the name of Allen Phipps, who spoke and understood the Japanese language, spent a good bit of time researching the localized treatment of muscaria as an edible species for his Master's Thesis at Florida International University. His results were quite interesting, and showed that eating muscaria is hardly typical for the Japanese culture as a whole (Phipps, 2000).

Phipps' thesis showed that local consumption of muscaria as an edible species is severely restricted, both in the amounts of muscaria eaten and in general acceptance of the practice. Muscaria eating takes place not in the already limited Nagano Prefecture as a whole, but merely among a subset of people in one town: Sanada Town, with a population around 10,000. Within that subset, Phipps located 123 muscaria-favorable individuals, and from them he winnowed out the ten most likely subjects for interviews (Phipps, 2000, p. 29).

Even more telling, he discovered these interview subjects by attending local mushroom fairs (three per year in Sanada Town), of a similar style to our North American mushroom fairs, with general collecting on one day, identification by local experts on the next, and then public displays with labeled mushrooms. At all of these fairs, within ground zero of muscaria eating in Japan, displays of muscaria were clearly labeled as poisonous mushrooms! These fairs were sponsored by the Japanese government and local insurance companies in hopes of preventing mushroom poisonings (my emphasis). Phipps found his interview subjects by hanging out at the muscaria display table and targeting those that scoffed at the poison label (Phipps, 2000, p. 29).

Indeed, within Sanada Town only (adjoining towns within the Nagano Prefecture treat muscaria as a wholly poisonous mushroom) muscaria is made into pickles, which have been shown through careful lab analysis to contain no detectable toxins. These pickles are then eaten in small amounts, for special occasions such as the New Year.

The process of making them is extremely involved (Phipps, 2000, p. 62). There are four steps to pickling muscaria, as relayed to Phipps by Sanada Town muscaria pickle devotees: boiling for ten minutes, or five minutes three times, washing, salting and soaking. Mushrooms are often initially boiled until all color is removed; the water is always tossed. After boiling, the mushrooms are rinsed under running water for 1-3 minutes. Mushrooms are then packed in salt and compressed, and left for at least one month. Prior to consumption, pickles were soaked for several hours or overnight to remove the salt (and any remaining traces of the toxins). These muscaria pickles were then used as culinary accents, not meals. They were and still are eaten for special occasions only, or served to special guests (Phipps, 2000, p. 37).

But frankly, the above method for preparing a wholly non-toxic snack does not sound like a reasonable recipe for today's want-it-now cooks. In fact, the tradition is dying in Sanada Town, because modern Japanese youth can't be bothered to go through all the preparation steps needed to make a toxic mushroom edible (Phipps, 2000).

It was also illuminating to read in Phipps' thesis that unboiled muscaria is grilled and eaten in small quantities (Phipps' emphasis) by certain local men. Here is what a subject told Phipps:

"He compared the experience of eating a known poisonous mushroom like muscaria to eating fugu, the poisonous blowfish. The combined thrill of eating something poisonous and the outstanding taste makes this mushroom worth the risk."

But only small amounts are ever eaten, and there remains a good bit of paranoia attached to the process, with the men fearful of possible cumulative effects in addition to directly toxic ones. In other words, despite the limited local tradition of treating muscaria as an edible species, they are still uneasy about actually eating it (Phipps, 2000, p. 41).

It is safe to say that muscaria eating in Japan is by no means a culturally accepted practice – and as Rubel himself pointed out, in apparent disbelief, even Japanese field guides list muscaria as an unambiguously toxic mushroom.

Limited Historical Evidence of Muscaria Consumption in North America

Rubel and Arora were "intrigued" by unsubstantiated reports of African-Americans in the southern states in the 1800s who may have eaten muscaria, but convincing evidence is lacking. Even if it were true, what reasons might an enslaved people have for eating muscaria? Was it another food of desperation? Or perhaps it was even eaten unboiled for its entheogenic strengthening effects, qualities a desperate slave could surely use. This is of course mere conjecture on my part, but so is any other imagined historical scenario.

A single verified example of historical muscaria eating in the Washington, D.C. area was also cited by Rubel, in hopes, I believe, of showing that it was at one time an accepted practice here in North America, so why not now? But even here in North America the evidence is not only flimsy but conjectural. Yes, there was apparently one black woman mushroom vendor who prepared muscaria for her table, discovered at a mushroom market outside of Washington, D.C. in the late 1800s, and of course there was that famous fatal encounter with muscaria by the late Count de Vecchj.

Even more telling to me than one individual with out-of-the-norm eating habits was this quote by Frederick Vernon Coville – a botanist who in 1898 investigated the recent, sensational muscaria poisoning of Count de Vecchj for the U.S. Department of Agriculture. Coville searched for potential sources of muscaria at the Washington, D.C., K Street Market:

Though most [my emphasis] of the colored women of the markets look upon the species [Amanita muscaria] with horror one [my emphasis] of them recited in detail how she was in the habit of cooking it. (Coville, 1898)

Oddly, in his muscaria paper Rubel also showed the entire quote, but somehow didn't derive the same meaning that I did: that eating muscaria was not a common practice, but one observed by a single individual among many others who rightly feared it. Rubel went on to seemingly conjecture that because this one woman's recipe (again, an anomaly among the rest of the mushroom sellers) for muscaria was printed in a USDA publication (Coville, 1898), and then cited by others, it was therefore a locally acceptable practice.

I drew a very different conclusion from these same facts: that the behavior of one woman does not a trend make, and that in fact, the publication of these recipes and the quotes about the market seller were widely cited and published for the very same reason that this topic gets press today: its shock value.

The muscaria prep by the African American market woman was quite elaborate, and the mushroom hardly resembled a muscaria when it was done: the cap was peeled, the stipe was peeled and the gills were removed. The remaining mushroom bits were then parboiled and the boiling water tossed before cooking. Coville went on to suggest that since, as was believed at the time, most of the toxins were contained in the gills and the cap cuticle, parboiling would have most certainly removed any remaining toxins, and he praised her ability to detoxify a known poisonous mushroom without a scientific background. But he was hardly advocating its use.

It is possible that some misconstrued Coville's remarks as a recommendation that muscaria be treated as an edible species. A few months later, in a revised version of the original USDA Circular 13, Coville firmly recommended that no one eat this mushroom, as did physicians during the same time period who were writing for the medical rather than the botanical community. Here is his quote:

this process (of muscaria preparation) is cited not to recommend its wider use, but as a matter of general interest. The writer's recommendation is that a mushroom containing such a deadly poison should not be used for food in any form. (Coville, 1898, revised)

Coville also noted in the revision of USDA Circular 13 that the muscaria that poisoned the Count was not purchased at the K Street Market, but rather was brought to him from the Virginia countryside by a countryman who delivered it "under protest" to the Count.

As an added public safety precaution, muscaria selling was banned at local Washington, D.C. markets shortly after Coville's original article appeared (Chestnut, 1898). But there is no evidence that it was ever for sale in these markets to begin with.

Lacking in many of these historical eating-muscaria-as-an-edible accounts are firsthand reports of the effects, or lack thereof, post-ingestion; but there is certainly a broadly based fear of eating muscaria, here and across the world. This is reflected in the universal treatment of muscaria as a poisonous mushroom by field guides worldwide, including those from regions with a strong mycophilic and mycophagic history, like Europe (Courtecuisse, 1994) and Japan (Hongo, Izawa, 1994).

Amanita muscaria Fed to Participants in Arora Forays

What about modern-day use of muscaria here in North America? In his paper, Rubel states that David Arora has served muscaria to hundreds as a justification for its safety as an edible, and those numbers are probably true. But I would lay a wager, based on my first-hand evidence, that none of them ate a full meal of it; and the kitchen preparation for their muscaria tastings included thin slicing, several boilings, carefully measured waters (thrown out between batches) and a good splash of vinegar at the end. That last seems more for good measure than for any real benefit.

I was at the first Mendocino California foray, back in the 90s, when Arora served about 70 of us boiled muscaria, and I participated in about a dozen forays after that one. Most folks, with a bit of peer pressure and the reassurances of the "god of mushrooms," would try a piece or two – though, according to one of the people who passed the mushrooms around, at least a third of the group declined. Several folks that I have talked to who attended those forays did not wish to repeat the experience of eating parboiled muscaria, and who knows how many others, over the years, felt the same way? Like the Japanese gentlemen in Nagano Prefecture, they were thrilled by their daring, but still uneasy about eating a mushroom widely believed to be poisonous.

Peer Reviewed is not the Same as Peer Approved

In correspondence with me, Rubel apparently attempted to bolster his claims about the safety of muscaria as an edible species by informing me that he and Arora had published their paper in a peer-reviewed journal (Rubel, 2009). But since only two of his reviewers were actual toxicologists (Michael Beug, PhD and Denis Benjamin, MD) and they both had issues with the paper as originally presented to them (neither read the final version), I was hardly reassured (Beug, 2009; Benjamin, 2009). Dr. Benjamin informed me that his opinions on this topic were undergoing some evolution and he asked not to be quoted here, but Dr. Beug had no such qualms. Here is what he told me, and I quote:

I did not review the final version of the [muscaria] paper but was highly critical of the draft and recommended that it not be published. (Beug, 2012)

Does it count as much if the peers who review your work and are intimately familiar with toxic mushrooms pan it?

Evaluate All of the Evidence and Decide for Yourselves

I will not go through a point by point rebuttal of the Rubel/Arora paper, although I have certainly been doing so in my mind and in various forums online, ever since I first read it over five years ago. I would hope by now that you the reader are starting to see the bigger picture on your own: that despite the fact that a few people, here and there around the world, have indeed eaten Amanita muscaria after elaborate detoxification preparations, it is hardly a broadly accepted practice to eat muscaria as an edible species anywhere, nor has it ever been so. And it is preposterous to pretend that it is not, at times, a dangerously poisonous mushroom, when there is a wealth of evidence to the contrary.

When field guides both here and abroad list Amanita muscaria as a toxic mushroom they are representing both the universal cultural and common sense norm. Perhaps these various American field guide authors, scoffed at by Rubel, who list muscaria as a poisonous mushroom, were more concerned with the safety of innocent foragers than with presenting all of the possible ways that one might circumvent the poisons?

As new mycophagists delve deeper into the study of mushrooms in their readings of other places and times, perhaps they will be tempted to try a piece or two of muscaria, boiled or unboiled. But to recommend its safe practice as an edible species, with the justification that it was ever commonly eaten in other places and has little toxic downside, is a highly implausible parsing of history.

In edible Amanita lectures that I have given around the country, I often cite the official United Nations Food and Agricultural Organization (FAO) document on edible fungi, where dozens of species of edible amanitas, among many, many other species of edible mushrooms, are listed by name. These are all amanitas, from caesarea to zambiana, commonly eaten or sold in markets around the world. This list even includes some Amanita species that might reasonably give us the willies, like Amanita manginiana, an edible, market amanita from China that is related to and even resembles Amanita phalloides (Boa, 2004). But even in this very even-handed, strongly fungiphilic international document, muscaria is listed not as an edible but as a medicinal mushroom. Even more emphatically, the U.N. actually proposed a resolution against its sale and use as an edible species:

Article 622 – None of the genera of poisonous mushrooms listed hereinafter may be used as food, even if they have undergone special treatments to deprive them of their toxic principles [italics mine]: Amanita: Mushrooms with fleshy caps colored green (Green Amanita or Amanita phalloides), or red with white warts (Fly Agaric or Amanita muscaria)...

Has it started to sink in yet? That maybe, just maybe, more than a little caution is called for when considering Amanita muscaria as an edible species?

In the interest of full disclosure, I admit that I have personally eaten very small amounts of Amanita muscaria as an edible three times: once, at a long-ago Arora foray, where it was first par-boiled (slimy, tasteless, but still thrilling in a naughty way), once at a camping foray at Salt Point State Park, on the California Coast, where it was grilled by a master Japanese chef (delicious; the best amanita that I ever ate), and once atop pizza, after rehydrating dried muscaria mushrooms and throwing out the pretty red water. And yet, I believe that to encourage folks to eat muscaria is a bad idea, and I feel safe in saying that the vast majority of the rational, mushroom-loving world agrees with me.

Ironically, perhaps Denis Benjamin's recent satirical piece on muscaria eating in FUNGI magazine (Winter, 2011), really does hold the answer: if you must recommend the eating of muscaria, treat it as a poisonous mushroom that can be presented as a daring culinary adventure – the land-based American fugu experience, if you will. Go ahead, flirt with danger and have a muscaria snack at some future foray or in the privacy of your own home; certainly a piece or two of muscaria with the crap boiled out of it won't kill you, and then you'll have those bragging rights (Benjamin, 2011).

But please, gentlemen, don't tout Amanita muscaria as a perfectly reasonable edible species with a long history of safe usage and cultural acceptance both here and overseas, when the evidence clearly refutes your claim. And if you do someday revise Mushrooms Demystified, Mr. Arora, please, err on the side of caution. The many people who look to you for personal safety as well as honest answers will appreciate it.

Muscaria Treatment in American Mushroom Field Guides

Just to see what all the fuss was about in the treatment of muscaria by American mushroom field guide authors, I read the muscaria edibility descriptions in over a dozen modern guides that I own. All authors, reasonably enough, cited muscaria as a toxic mushroom. None, other than McIlvaine's One Thousand American Fungi, cited it as deadly. Some mentioned its potential hallucinogenic properties. Some talked about its historic use as an inebriant. The most recent mushroom field guide from California, A Field Guide to Mushrooms of Western North America (Davis et al., 2012), touched upon some of this recent edibility controversy by expanding a bit upon the usual dismissive toxicity statements. The authors stated that muscaria was:

Poisonous and hallucinogenic; the toxins are water soluble, but given the preparation required to remove the toxins, this is not a good mushroom for the table.

The strongest argument against its use, however, was this one, from a mushroom field guide published in 1986:

Poisonous and hallucinogenic. Fatalities are extremely rare, but it is undoubtedly dangerous in large or even moderate doses. Too many people have had unpleasant experiences for me to recommend it.

The author? David Arora, in Mushrooms Demystified (Arora, 1986).

Those wise words still ring true today.

Acknowledgments

Much gratitude to Michael Beug and Denis Benjamin for their many conversations with me on this topic; thanks to Marilyn Shaw for her many toxicology insights and for sharing her concerns about the treatment of muscaria as an edible species; and special thanks to my dedicated reviewers, Jan Lindgren, Roy Halling, Gary Lincoff and Michael Beug: your suggestions and corrections were much appreciated.

Correspondence about this article may be sent to: [email protected]

References

  • Arora, David. 1986. Mushrooms Demystified: A Comprehensive Guide to the Fleshy Fungi, 2nd Edition, Ten Speed Press, Berkeley.
  • Arora, D. 2009. "A Study of Cultural Bias...," Corrected muscaria recipe on page 243. http://davidarora.com/uploads/rubel_arora_muscaria_revised.pdf
  • Atkinson, George. 1900. Studies of American Fungi: Mushrooms Edible Poisonous etc. Henry Holt and Co.
  • Barron, George. 1999. Mushrooms of Northeastern North America: Midwest to New England. Lone Pine Publishing.
  • Benjamin, Denis R. 1995. Mushrooms, Poisons and Panaceas – A Handbook for Naturalists, Mycologists and Physicians. W.H. Freeman, New York.
  • Benjamin, D. 2009. Personal communication. August 10, 2009.
  • Benjamin, D. 2011. "Amanita muscaria – an entrepreneurial opportunity (A modern satire)." FUNGI Magazine, Vol. 4:1, Winter, 2011.
  • Benjamin, D. 2012. Personal communication. September 18, 2012.
  • Bessette, Alan, A. Bessette, D. Fischer. 1997. Mushrooms of Northeastern North America. Syracuse University Press, Syracuse, New York.
  • Bessette, A., W. Roody, A. Bessette and D. Dunway. 2007. Mushrooms of the Southeastern United States. Syracuse University Press, Syracuse, New York.
  • Beug, Michael 2004. "An Overview of Mushroom Poisonings in North America." The Mycophile Vol. 45:2, pp. 4-5 April 2004.
  • Beug, M. 2006. "Thirty-Plus Years of Mushroom Poisoning: Summary of the Approximately 2,000 Reports in the NAMA Case Registry." McIlvainea 16 (2) Fall 2006
  • Beug, M. 2009. Personal communication. August 10, 2009.
  • Beug, M. 2009. "NAMA Toxicology Committee Report for 2007: Recent Mushroom Poisonings in North America." McIlvainea, Vol. 18, 2009.
  • Beug, M. 2010. "Amanita Bravado." http://tech.groups.yahoo.com/group/BayAreaMushrooms/message/7749, Feb. 28, 2010.
  • Beug, M. 2011. "NAMA Toxicology Committee Report for 2010, North American Mushroom Poisonings." McIlvainea, Vol. 20, 2011. pp. 2-3.
  • Boa, Eric. 2004. "Wild edible fungi: a global overview of their use and importance to people." Food and Agriculture Organization (FAO) of the United Nations, Department of Forestry, Corporate Document Repository, http://www.fao.org/docrep/007/y5489e/y5489e00.htm
  • Bone, Eugenia. 2011. Mycophilia: Revelations from the Weird World of Mushrooms. Rodale Books.
  • Bulyankova, Tatiana. 2011. Personal communication. November 4, 2011.
  • Cetto, Bruno. 1994. I Funghi dal Vero. Vol. 1. Arti Grafiche Saturna, Trento, Italy.
  • Chestnut, V.K., 1898. "Principal Poisonous Plants of the United States." USDA, Department of Agriculture, Division of Botany, Bulletin 20.
  • Cornaccia, Pierluigi. 1980. "Amanita muscaria in Italy." (in the original Italian) http://www.autistici.org/mirrors/www.psicoattivo.it/media/libri/amanita/am_06.html
  • Courtecuisse, R. and Duhem, B. 1995. Mushrooms and Toadstools of Britain and Europe. Harper Collins Publishers.
  • Coville, F.V. 1898. "Observations on Recent Cases of Mushroom Poisoning in the District of Columbia. Circular 13." United States Department of Agriculture, Division of Botany, U.S. Government Printing Office, Washington, D.C.
  • Coville, F.V. 1898. Ibid. revised version of Circular 13.
  • Davis, R.M., R. Sommer, J. Menge, 2012. Field Guide to Mushrooms of Western North America. University of California Press, Berkeley, California.
  • Erickson, Timothy, MD. 2010. "Name that Toxin: Amanita muscaria." Emergency Physicians Monthly website: http://www.epmonthly.com/subspecialties/toxicology/name-that-toxin/1/ March 8, 2010.
  • Evenson, V.S., 1997. Mushrooms of Colorado and the Southern Rocky Mountains. Denver Botanic Garden, Westcliffe Publishers.
  • Glick, P. 1979. The Mushroom Trail Guide. Holt, Reinhart and Winston, New York.
  • Hall, I., S. Stephenson, P. Buchanon, W. Yun and A. Cole. 2003. Edible and Poisonous Mushrooms of the World. Timber Press, Portland, Cambridge.
  • Harkonen, M., T. Saarimake and L. Mwasumbi. 1994. "Tanzanian mushrooms and their uses. 4. Some reddish edible and poisonous Amanita species." Karstenia 34: 47-60.
  • Imazeki, R., Y. Otani, and T. Hongo. 1988. Fungi of Japan. Yama-kei, Tokyo.
  • Konecney, Tony. Dec. 2, 2009. http://blog.tonx.org/2009/12/cooking-with-amanita-muscaria/
  • Lincoff, Gary. 1981. Simon and Schuster's Guide to Mushrooms. Simon and Schuster, New York.
  • Lincoff, G. 1981. The Audubon Society Field Guide to North American Mushrooms. Chanticleer Press, Knopf, New York.
  • Lincoff, G. 2005. "Amanita muscaria in Kamchatka." http://www.nemf.org/files/various/muscaria/part1.html
  • McIlvaine, C. and R.K. Macadam. 1902. One Thousand American Fungi. Bowen-Merrill, Indianapolis. (Reprinted in 1973 by Dover publications, New York).
  • Metzler, Sue and V. Metzler. 1992. Texas Mushrooms. University of Texas Press, Austin.
  • Miller, Orson. K., 6th edition, 1984. Mushrooms of North America. Chanticleer Press, New York.
  • Miller, O.K. 2006. North American Mushrooms: A Field Guide to Edible and Inedible Fungi. Globe Pequot Press, Guilford, Connecticut.
  • Millman, L., and T. Haff. 2004. "Notes on the Ingestion of Amanita muscaria." Mushroom, The Journal of Wild Mushrooming. 223:55.
  • National Poison Data System, 2004. American Association of Poison Control Centers (AAPCC), 2004 Annual Report, pg. 604 http://www.aapcc.org/dnn/Portals/0/AJEM%20%20AAPCC%20Annual%20Report%202004.pdf
  • Phipps, Alan. 2000. Japanese Use of Beni-Tengu-Dake (Amanita muscaria) and the Efficacy of Traditional Detoxification Methods. Master's Thesis, Biology Department, Florida International University.
  • Pouchet, Felix A. 1839. "Expériences sur l'alimentation par les champignons vénéneux." Journal de chimie médicale, de pharmacie et de toxicologie. V. 322-328.
  • Roody, William C. 2003. Mushrooms of West Virginia and the Central Appalachians. The University Press of Kentucky.
  • Rose, David. 2006. "The Poisoning of Count Achilles de Vecchj and the Origins of American Amateur Mycology." McIlvainea, Vol. 16, No. 1, 2006.
  • Rubel, William. 2009. Personal communication. August 26, 2009.
  • Rubel, William and David Arora. 2008. "A Study of Cultural Bias in Field Guide Determinations of Mushroom Edibility Using the Iconic Mushroom, Amanita muscaria, as an Example." Economic Botany, 62 (3), 2008, pp. 223-243, New York Botanical Garden Press, Bronx, NY 10458-5126 U.S.A.
  • Rubel, W. 2011. "Amanita muscaria, edible if parboiled." The Magic of Fire: Traditional Foodways with William Rubel. http://www.williamrubel.com/2011/09/30/amanita-muscaria-edibile-if-parboiled/
  • Shaw, Marilyn. 2012. Personal communication. June, 28, 2012.
  • Spoerke, David G. and B. Rumack. 1994. Handbook of Mushroom Poisoning Diagnosis and Treatment. CRC Press, Inc. Boca Raton, FL. pp. 273-275.
  • Swazey, J. and Reeds, K. 1978. "Louis Pasteur: Science and the Applications of Science." Essays on Paths of Discovery in the Bio-Medical Sciences. DHEW Publication No. (NIH) 78-244, U.S. DEPARTMENT OF HEALTH, EDUCATION, AND WELFARE, Public Health Service National Institutes of Health. http://newman.baruch.cuny.edu/digital/2001/swazey_reeds_1978/chap_02.htm
  • Uzelac, Branislav. 2009. Gljive Srbije Izapadnog Balkana [Mushrooms of the Balkans]. BGV Logik, Beograd.
  • Watson, William A. et al. 2004. Annual Report of the American Association of Poison Control Centers Toxic Exposure Surveillance System. http://www.aapcc.org/dnn/Portals/0/AJEM%20-%20AAPCC%20Annual%20Report%202004.pdf. p. 605.
  • Wuilbaut, J.J., 2012. Personal communication. June 23, 2012.

This article first appeared in Mushroom The Journal, Issue 110, Fall 2011 - Winter 2012, p. 42.




All Comments: [-] | anchor

derefr(3493) about 5 hours ago [-]

What's the difference between a mushroom that is poisonous unless boiled, and, say, a potato? If we consider potatoes "edible"—do we?—then it'd make sense to me to classify these mushrooms the same way.

klyrs(10000) about 4 hours ago [-]

The big difference is that Amanita muscaria is a hallucinogen, so if it got popular it would probably end up as a Schedule I drug...

mykowebhn(3923) about 4 hours ago [-]

The toxins in Amanita muscaria can apparently be removed via sufficient boiling and discarding the water.

The toxins in the potato (and tomato for that matter) have been removed over generations of breeding. However, I believe some of the toxins remain in those green parts you see on potatoes.

nkurz(28) about 3 hours ago [-]

That's an interesting example. My impression is that most Americans consider raw potatoes to be unpalatable, but not poisonous. It's rare to eat them, but mostly because of taste rather than health consequences. Other countries seem more likely to view them as actually harmful. I was surprised in Russia to realize that potato peels were always removed, with the Russians I spoke to being alarmed that Americans didn't worry about eating them. Are you possibly Russian, or at least close enough to incorporate similar cultural views?

This article (https://www.healthline.com/nutrition/raw-potatoes) is 'fringe' for suggesting that one might eat raw potatoes, but I think summarizes the scientific viewpoint: 'Raw potatoes are more likely to cause digestive issues and may contain more antinutrients and harmful compounds. Yet, they're higher in vitamin C and resistant starch, which may provide powerful health benefits. In truth, both raw and cooked potatoes can be enjoyed in moderation as part of a healthy diet.'

Anyway, while not the same as the Amanita Muscaria case, I wonder if there is something close to parallel happening with potatoes, with one culture feeling a particular preparation is necessary for safety, and another thinking it is just a culinary desire.

TheSpiceIsLife(3946) about 2 hours ago [-]

You can eat raw potatoes.

Just not a bucket full of green potatoes, cooked or raw.

dfsegoat(3820) about 5 hours ago [-]

With many edible fungi species - Boletes for instance - boiling can be the difference between getting sick (2-3 days of GI distress) and a good meal.

When it comes to other species like Amanita phalloides (death cap) or Galerina sp. - which contain Amatoxins and resemble edibles - boiling will have no effect: You will need a liver transplant [1].

Amanita muscaria does not contain amatoxins - but it does contain muscimol and ibotenic acid, which are interesting in their own right [2].

1 - https://en.wikipedia.org/wiki/Amatoxin

2 - https://en.wikipedia.org/wiki/Muscimol

lioeters(4014) about 3 hours ago [-]

Back when I studied biology, I found it fascinating that two of the major categories of neurotransmitter receptors are muscarinic and nicotinic receptors, named after: muscarine, from the Amanita muscaria mushroom; and nicotine, from tobacco and other plants from the nightshade family.

twic(3365) about 2 hours ago [-]

An amazing amount of biochemistry has been worked out in reverse from drugs and poisons. I once worked on a protein called TOR, which is a key regulator of cell growth; that gets its name because it's the Target Of Rapamycin, rapamycin being a drug.

In fruitflies and nematodes, the preferred way to shake things until they break is mutation rather than drugs, so genes often have names describing what happens when the gene is broken - which is quite confusing. So, another protein I worked on is called Dlg, short for 'discs large', because in fruitflies, when missing, the imaginal discs get too big. So you know Dlg's function is somehow to keep imaginal discs small.

mda(4023) about 2 hours ago [-]

Nit: Amanita muscaria does not really contain muscarine, but rather muscimol and ibotenic acid. The latter causes stomach upset.





Historical Discussions: Show HN: Karaoke for Piano Overlaid and Synched with YouTube Videos (February 09, 2019: 24 points)

(24) Show HN: Karaoke for Piano Overlaid and Synched with YouTube Videos

24 points 7 days ago by robbrown451 in 10000th position

pianop.ly | Estimated reading time – 18 minutes | comments | anchor

Pianoply is color coded karaoke for piano. In your browser, free. It layers gorgeous graphics as well as sounds right on top of the original YouTube music videos for the songs you love. Users supply the MIDI 'piano rolls' for songs, which they can create and edit right in the app. The app is very interactive and talks to your MIDI piano, unlike all the zillions of 'piano tutorials' on Youtube.

(brain candy)

Even if you don't have a piano (and have no interest in learning to play), you still might find it to be some interesting eye, ear and brain candy.

If you are somewhere you can take in some audio and video (and are on a computer or really fast phone), please check out the demo. You can also scroll down this page for a list of over 50 full songs you can try, ranging from old classics to just-released pop hits.

The video below is a better option for those on mobile or on slower computers.

And if you'd rather just read about it (until you can make it back to your computer or to somewhere you can crank up the volume), read on. Also, scroll down for a bunch of links to complete songs that you can play along with right now, if you have an electronic piano attached (first you'll want to read the first couple of items in the FAQ, though).

FAQ...

(part 1, the essentials)

How do I play songs on Pianoply?

You simply connect a digital piano or MIDI controller to your computer, and click on any links to songs below. Currently only Chrome supports MIDI devices, but we understand that Firefox and Safari will have support soon.

Pianorolls, also known as MIDI sequences, are named for player piano rolls.

More songs will arrive soon, since users contribute the 'pianorolls'. It helps if you have colored stickers (or marks from washable markers) on the white keys of your piano, but it isn't necessary. The colors are red, orange, yellow, green, blue, purple, and pink, for C D E F G A B. Black keys are 'striped' with the colors of the two adjacent white keys. There isn't a lot to learn to be able to play songs. That's pretty much it.
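The color scheme above is simple enough to capture in a small lookup. Here's a minimal sketch in plain JavaScript; the function and object names are mine for illustration, not Pianoply's actual code:

```javascript
// Pianoply's documented white-key scheme: C D E F G A B map to
// red, orange, yellow, green, blue, purple, pink.
const WHITE_KEY_COLORS = {
  C: 'red', D: 'orange', E: 'yellow', F: 'green',
  G: 'blue', A: 'purple', B: 'pink',
};

// Black keys are "striped" with the colors of the two adjacent white keys.
const BLACK_KEY_NEIGHBORS = {
  'C#': ['C', 'D'], 'D#': ['D', 'E'], 'F#': ['F', 'G'],
  'G#': ['G', 'A'], 'A#': ['A', 'B'],
};

// Returns the one or two colors to paint a given key.
function keyColors(note) {
  if (note in WHITE_KEY_COLORS) return [WHITE_KEY_COLORS[note]];
  return BLACK_KEY_NEIGHBORS[note].map(n => WHITE_KEY_COLORS[n]);
}
```

So a C key gets a single red sticker, while C# would be striped red and orange.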

Anything else I need to know?

You should learn a few of the 'keyboard shortcuts,' which are handy for doing things like changing the instrument, toggling the piano display (photoreal vs. minimal), adjusting the volume of the video, adjusting the volume of the instrument, jumping around in a song, changing the speed, transposing, turning on and off note names (D# etc), and turning on and off 'autoplay.' Most of them are two letters (or a letter and a number), so for instance to jump to the 40% mark in the video, simply type 'j' followed by '4'. A menu is shown after the first letter, so to see all the 'jump' commands, press 'j' and you'll see this:

To adjust video volume, press 'v' and a number. To adjust instrument volume, press 'i' and a number. To change the instrument, press up and down arrow keys. To change the speed, press 's' and look at the menu choices. To toggle settings, press 't' and look at the options. One of the most important toggles is autoplay, which you turn on and off by pressing 't' then 'a'. If you want to see all the keyboard shortcuts, just press 'k'.
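The two-keystroke scheme described above (a prefix letter, then an argument) can be sketched as a tiny dispatcher. This is a hypothetical reconstruction in plain JavaScript, not Pianoply's internals; the action names are made up for illustration:

```javascript
// Maps a prefix key plus its argument to an action descriptor,
// mirroring the shortcuts described above: 'j' + digit jumps to
// (digit * 10)% of the song, 'v'/'i' set volumes, 't' toggles settings.
function dispatchShortcut(prefix, arg) {
  switch (prefix) {
    case 'j': // jump, e.g. 'j' then '4' jumps to the 40% mark
      return { action: 'jump', percent: Number(arg) * 10 };
    case 'v': // video volume level (a digit)
      return { action: 'videoVolume', level: Number(arg) };
    case 'i': // instrument volume level (a digit)
      return { action: 'instrumentVolume', level: Number(arg) };
    case 't': // toggles, e.g. 't' then 'a' toggles autoplay
      return { action: 'toggle', setting: arg };
    default:
      return null; // unknown prefix: ignore
  }
}
```

A real implementation would also show the menu after the first keystroke; the point here is just the prefix-plus-argument shape of the commands.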

Please note that keyboard shortcuts aren't going to work on a tablet unless you have a bluetooth keyboard. The first version of Pianoply is optimized for a computer rather than a touch screen device. We're working on it and intend to have full touchscreen support in the not so distant future.

Full songs

that you can play right now...

over the rainbow - judy garland Amazing how well this song (and the whole movie) holds up after 80 years. Funny that a song about a rainbow is in the black and white part of the film, but it looks great with flying colored notes on top. Here the song is transposed to be all on the white keys, with a happy coincidence that the word 'blue' is a blue note. This is super easy to play.
thunderclouds - lsd The trippy video goes well with the group's name, which actually stands for Labrinth, Sia and Diplo. Dancer Maddie Ziegler plays Sia's younger alter ego. The song, the video and the pianoroll, all together, please my brain.
alan walker - faded The song that got Alan Walker famous. Over 2 billion views on this one. He's 21 years old and learned piano on YouTube. Easy and satisfying song to play. Sounds good with the instrument 'chimes' (remember, up and down arrow keys change the instrument).
a thousand years - christina perri Really beautiful song. Pretty singer too. I guess the 'I've loved you for a thousand years' thing makes sense because it is from a vampire movie. I like all the candles.
delicate - taylor swift Taylor is full-on electropop now I guess. This song has an excellent groove though, and the video is great as well. Rolling Stone said it was the best song of 2017. Maybe a bit repetitive (isn't it? isn't it...?), but it is fun to play on piano, with her dancing in the nighttime rain as a backdrop. Also all black keys.
how far i'll go - auli'i cravalho Beautiful and inspiring 'gotta be me' princess song from the fantastic animated Disney film Moana. Lin-Manuel Miranda composed.
live your story - auli'i cravalho The same Hawaiian teenager who voiced Moana does another song for Disney. She's got a great voice and a bright future. Lots of Disney princesses make appearances here.
mountains - lsd More from Labrinth, Sia and Diplo. When I added the pianoroll, my daughter listened to this at least 20 times in a row. At least she has good taste. She's convinced it says 'Wreck it Ralph' at 36 seconds in.
what does the fox say? - Ylvis Ok it's supposed to be a joke song, but it is amazingly well done. They seem so sincere about their question regarding the fox. Kids love this one. I do too, but admittedly it doesn't hold up to being played over and over and over again as well as 'Mountains' does.
try everything - shakira Good song from the movie Zootopia. I like videos from Disney movies, since you know every frame cost like ten thousand dollars.
seven rings - ariana grande I think I mentioned that I love Ariana Grande, especially when she is sampling Sound of Music. This video is very pink.
ray of light - madonna Gotta be my favorite Madonna song, from 1998. The video was made for Pianoply. This song is a remake of a kind of weird song from 1971, and this one is like a million times better than the original.
hey soul sister - train From 2012. Fun and easy to play. Love this song. The video was shot in Echo Park, LA.
solo - clean bandit with demi lovato One of my favorites to play. Glad my kid is too young to understand what the lyrics are about. The video, though, is about a mean boyfriend that gets turned into a rainbow dog (thanks, science!), so that is pretty awesome.
david guetta and sia - flames We're big Sia fans in this household. Fantastic song and awesome martial arts movie video. Easy to play, especially as a duet (the bass line can be played by a 3 year old).
someone like you - adele Everyone loves Adele. I think the pianoroll drifts off partway through, sorry. Good song, though.
africa - toto You have to have Africa. Have to. I love the Yamaha synth piano...only a few of them were made, it is the precursor to the DX-7 which dominated 80s music.
dark horse - katy perry Love the Egyptian theme. Jiff the Pomeranian steals the show. And yeah it's a good song.
it's my life - bon jovi This was well past their hair band days, but they could still do arena rock. From 2000.
superman - five for fighting Beautiful, sweet song. Gained extra emotional weight after becoming associated with 9/11 first responders. Also a favorite of my little girl.
girls like you - maroon 5 Great video, although I think the pianoroll gets off for the second half (note to self: fix). It's fun to play though, as it is very rhythmic and has a lot of fast repeated notes. That's always a good thing.
despacito - luis fonsi The most viewed YouTube video ever. The most fun song to play on Pianoply. You can't lose here. The lyrics are naughty but you only know that if you know Spanish.
shape of you - ed sheeran And this is the second most viewed video on YouTube. It's a perfectly fine song and video, but I admit that surprised me.
chandelier - sia Amazing song. Amazing dance performance by a precocious 11 year old Maddie Ziegler.
hey jude - the beatles Hard to believe something recorded for TV 50 years ago can look so good today. Classic, obviously. The pianoroll is only for the first verse, sorry.
franz schubert - du bist die ruh There are a lot of vapid pop songs in here, so I thought I'd provide at least one classical composition. It takes some extra work, but it will make you a better person.
back to you - selena gomez My favorite Selena Gomez song and video, and really fun to play. The video is a bit of a parody of the 1965 French New Wave film 'Pierrot le Fou.'
all falls down -- alan walker What can I say I love Alan Walker's music. This one features Miley Cyrus's little sister Noah on vocals, as well as Juliander, who is one of the prettiest men I know of. The lyrics are kind of sad, but it sure sounds upbeat.
let it go - idina menzel From Frozen. If you have a young daughter, you know this song. Possibly all too well. Still, Idina/Elsa belts it out quite impressively, and it feels mighty good to play along on piano. The ice castle stuff is some pretty amazing visuals, although I don't think I'd actually want to live there.
alone - alan walker Noonie Bao does the vocals. I love her name. I can play along with this song on piano over and over, for hours. It really is that much fun.
babe - sugarland ft. taylor swift Taylor Swift wrote this catchy-yet-sad country song with Patrick Monahan from Train, and she appears in the video as the mistress. The video does a great job of showing the 60s, Mad Men style. The pianoroll enhances it a lot (recommended instrument: 'tinkle bell'), even if you are just watching and listening to it play itself. I like doing it as a duet, with my 4 year old handling the bass line.
the spectre -- alan walker I can't pick a favorite Alan Walker song, so many are so, so good. Like many of his, some of it is easy and then it gets...complicated and crazy and brain-ticklingly awesome. I usually play the easy part then switch it into autoplay and let it play the hard parts for me. (did I admit I'm not a good piano player?)

FAQ...

(part 2, the rest)

How do I submit/share a pianoroll for a music video?

It's pretty straightforward; however, right now we are requiring that people host their own pianoroll files either on their own server, or by simply putting them in a pastebin. It's both to assure you that 'you own your own data,' and because we are being careful regarding copyright issues. Note that you can even do short pianorolls that are not hosted, by creating long urls that contain all the information and that you can easily paste into an email. There is currently a private beta that you should probably join if you want a bit more help with the process. (email rjbrown at gmail if you want to be in the beta) It will soon get easier, and tips and techniques will be published. However, for the next month or two, I expect that most casual users will want to just play along with existing songs rather than create their own.

I found a piano tutorial on YouTube I'd like to bring into Pianoply. Is there any easy way to do that?

Actually there is. Some of our beta users have been using a powerful 'tracer tool' to do that. It allows adjusting the geometry of the video and Pianoply's note display to match one another, and superimposing some colors to help out. You can run it at half or quarter speed and then play along, and can merge, edit and so on. Once you are done, you can then bring it into the 'real' video and synch it up perfectly. It sounds complicated but it's actually quite straightforward and quick. Join the beta if you want to do some of this before it is fully documented.

Here is a video showing where someone has traced Ariana Grande's Seven Rings, and has our notes superimposed and in synch with the ones in the tutorial.

This seems like it would be perfect for kids. Is that the idea?

Pianoply was inspired by my daughter Stella's love of music. She was less than 2 when I started the project and now she's nearly five. She loves music and she absolutely loves Pianoply. Whenever a new music video comes out that she likes, she's on my case to 'put colors on it.' Here is a video of her using Pianoply almost a year ago:

Stella and I often play duets, where she'll play the bass line and I'll play the melody. Even if we are just listening to music (or she is singing along), we usually prefer to have a pianoroll playing along with it. I think she gets a lot out of seeing music presented that way, and that it will give her an advantage later if she pursues music seriously.

All that said, it doesn't mean Pianoply is 'for' kids. Not at all. It's a ton of fun for adults too, whether they are new to piano, or are advanced (and possibly want to create pianorolls themselves). Pianoply is for anyone who loves music.

Are you really learning music when using Pianoply?

Absolutely. You just aren't learning traditional notation. But remember, a whole lot of modern music is composed using software (Ableton Live, Logic Pro, etc) that has a pianoroll display. Pianoroll display is simply more appropriate for computers than traditional notation, and is much more flexible and better able to capture subtleties of timing. It's a perfectly valid way of representing music, even if it seems to make some old-school piano teachers bristle.

How does copyright affect all of this?

It's complicated...

Of course, we are allowed to play music videos from YouTube on our web site, unless the content owner decides they don't want us to. Almost all music videos on YouTube can be embedded into a web site this way. We suspect that people who are actively engaged with the video, rather than just passively listening to the music, are more likely to click on the ads. The point is, they make money from videos being played on Pianoply.

Pianorolls themselves are potentially subject to copyright, but we think they fall under fair use, as they certainly qualify as 'transformative' under US law. The DMCA's 'safe harbor' provision may come into play as well. Meanwhile the EU is now in the process of changing things around with Article 13 (a.k.a. the 'meme killer'). We're going to see how things shake out.

Regardless, if you are an artist or record company that finds yourself getting angry at the idea of people having too much fun while consuming your music videos, even though you are making money from them, it is simple to block those videos from playing on our site.

Can YouTubers who do piano tutorials make money by putting their stuff on Pianoply?

We're going to try to figure something out so they can. Most of these YouTubers are already concerned about YouTube shutting them down due to copyright issues, especially if Article 13 goes through. We hope to provide them a viable alternative. We love their work and think it would shine here. I'm currently reaching out to all of them that I can, in hopes of bringing them into the beta and getting their ideas and input.

Can you send me colored stickers for my piano keys, for free?

We're planning on printing up a batch of very cool colored stickers. Send me an email (rjbrown at gmail) and promise me that you'll tell everyone you know about Pianoply, and I'll send you stickers as soon as we print them (US and Canada only). We'll also give you first dibs on a username when we are ready with that. Don't worry, I won't sell your address or do anything evil. First come, first served, but I'll remove this from the page when it no longer applies.

Where do the instrument sounds come from?

An excellent open source library called WebAudio TinySynth. (everything else in Pianoply is 'vanilla JavaScript' and built without 3rd party libraries)

How'd you do that 'photo-real' piano keyboard where the keys change color? For that matter how'd you do those pianos right here that change colors?

I'm glad you noticed. The ones here are PNGs with 'alpha transparency', and the color behind them is changing slowly (using css transitions to make it smooth). To prepare the image, I used this thing I built some years ago. For the one in the app, I spray painted an actual piano red, photographed it, and applied the same image-processing technique to replace the red with alpha transparency. There are SVG shapes behind it that are changing colors and opacity as needed. Here is a video I made while I was first exploring the technique for the piano keys.
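The red-to-alpha step can be sketched as a pure function over a flat RGBA pixel array (the layout a canvas `ImageData.data` gives you). This is my guess at the general technique, not the author's actual tool:

```javascript
// Replace "redness" with transparency: the more a pixel's red channel
// exceeds its green/blue channels, the more transparent it becomes,
// so whatever color sits behind the image shows through.
function redToAlpha(rgba) {
  const out = new Uint8ClampedArray(rgba); // copy; [r, g, b, a, r, g, b, a, ...]
  for (let i = 0; i < out.length; i += 4) {
    const r = out[i], g = out[i + 1], b = out[i + 2];
    // Extra red beyond the other channels (0 for gray/neutral pixels).
    const redness = Math.max(0, r - Math.max(g, b));
    // Subtract that from the alpha channel.
    out[i + 3] = Math.max(0, out[i + 3] - redness);
  }
  return out;
}
```

A pure-red pixel becomes fully transparent, while neutral pixels (the blacks, whites and grays of the piano photo) are left opaque.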

Is Pianoply always going to be free?

That's the plan. It would be great to make some money off this (this wasn't slapped together in a weekend), but I think the best route to that is getting as many people as possible using it and spreading the word, gently suggesting that people buy new pianos, and taking a little cut if they do. I'm generally not big on advertising but this seems like a harmless model, especially if we aren't all pushy about it.

I want Pianoply to do X. Can you make it do it?

Maybe. But even better, Pianoply has a 'mod page' where you can easily write snippets of JavaScript and run them on the site, save them, etc. I'd be happy to help you get started. The hope is that hackers will figure out cool things to do that I haven't thought of, and if they are good we can integrate them into the product somehow. Who knows maybe we can pay them for it, if we are able to make a buck ourselves. We'll see how things go...




All Comments: [-] | anchor

GistNoesis(4019) 5 days ago [-]

Hi, I just did a show hn yesterday about pianorolls using tensorflow.js that might interest you https://news.ycombinator.com/item?id=19128287

robbrown451(10000) 5 days ago [-]

Interesting. I have to admit I'm not sure how to actually use it. Like how is it tutoring you?

Regardless, if you are doing sound analysis to try to pull out 'pianorolls' a.k.a. MIDI data, I'd be interested in talking. It's a very interesting problem and could be very useful.

cseebach(10000) 7 days ago [-]

I'm a terrible piano player, but this is still lots of fun. Particularly notable for me here is the use of https://github.com/g200kg/webaudio-tinysynth to generate the tones - the WebAudio et al latency is now good enough in browsers for stuff like this to work!

robbrown451(10000) 7 days ago [-]

Thanks. Yeah I'm not so good at piano either but I'm getting better.

I spent a solid month trying to build decent sounding instruments, and it's really hard and they didn't sound that great. Then I noticed webaudio-tinysynth, and it saved the day! They sound surprisingly good. Although the actual piano sounds aren't the best....piano is incredibly complicated to synthesize...sympathetic resonance and all that. I'm personally happy with all the other ones though.

pilothouse(10000) 7 days ago [-]

Awesome job...pretty amazing when you consider what's involved in the graphic overlays and timing synchronization!

robbrown451(10000) 7 days ago [-]

Thanks! Yeah it took a good bit just to get a proof of concept working. YouTube's API doesn't have very accurate time, so I did a bit of smoothing. (you might notice they take a second or so to 'lock in')

The 'notes' are just divs with css transform and transitions....their position gets updated every second with where they are supposed to be two seconds later. Works surprisingly well. But yeah, lots of work. :) I'll be making videos to show how to do all the recording and editing and stuff in the coming weeks.




(20) Deep learning applications in drug discovery and protein structure analysis

20 points about 6 hours ago by msapaydin in 10000th position

msapaydin.wordpress.com | Estimated reading time – 3 minutes | comments | anchor

I have been reading papers on applications of deep learning to drug discovery. So far, I have seen a number of protein (and ligand) representations used:

1- a grid-based representation, which is very straightforward: a grid with a certain resolution (such as 1 Angstrom) is placed on top of the protein or the protein-ligand complex, and then the effect of all or some atoms of the ligand and the protein is accumulated at the grid cell positions. This approach is not rotation and translation invariant, in the sense that the orientation of the protein changes how it is presented to the deep learning model, and this may presumably change the resulting computation. This is similar to how the rotation and translation of a face in an image may skew the recognition of that face.

This representation has been used for pose prediction and virtual screening, e.g. in papers from Prof. Koes' group at the University of Pittsburgh.

2- a distance matrix based representation, where the pairwise distances between e.g. all C-alpha atoms are represented. This representation has the advantage of being rotation and translation invariant.

This representation has been used for classifying predicted protein structures against CASP targets, e.g. in [1].
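The rotation and translation invariance of representation (2) is easy to verify numerically. Below is a minimal sketch using random stand-in coordinates (not real protein data); the array shapes and the rigid motion are illustrative assumptions.

```python
import numpy as np

# 50 stand-in "C-alpha" positions in 3-D (random, for illustration only).
rng = np.random.default_rng(0)
coords = rng.normal(size=(50, 3))

def distance_matrix(x):
    """Pairwise Euclidean distances between all points (shape (n, n))."""
    diff = x[:, None, :] - x[None, :, :]   # pairwise displacement vectors
    return np.linalg.norm(diff, axis=-1)

D = distance_matrix(coords)

# Apply a random rigid motion: orthogonal matrix (rotation, possibly with a
# reflection) followed by a translation. Distances are preserved, so the
# distance matrix is unchanged.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
moved = coords @ q.T + rng.normal(size=3)
assert np.allclose(D, distance_matrix(moved))
```

The same check fails for the grid representation in (1): voxelizing `coords` and `moved` on a fixed grid generally yields different tensors.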

3- a graph-based representation, where the nodes are the atoms and the bonds are the edges of the graph. A drawback of such an approach is that it does not capture spatial neighborhood information if only the graph structure is taken into account.

This representation has been used in predicting toxicity properties of ligands in the graph convolutional neural network paper by Duvenaud et al [4].

4- an approximate representation called ACNN (for atomic convolutional neural networks), where only the atoms within 12 Angstrom of a center atom are considered, and these atoms are furthermore "pooled" together.

This representation has been used for predicting free energy of a ligand-protein complex in the paper by Pande and his coworkers, although the predictions suggest that the system is heavily overfitting.

5- a topological representation [2], where the barcode of the protein is obtained through persistent homology (i.e. the Betti numbers) and discretized to represent the protein. Such a representation is also translation and rotation invariant and is not too sensitive to the fine details of atomic coordinates, which are subject to experimental error.

6- I have not seen a paper on this yet, but a point cloud representation of a protein structure is also feasible, although it too does not take into account the bond structure of the protein. PointNet [3] could be used for this purpose.

This has been used in classification and regression tasks such as scoring protein structure prediction candidates in the CASP competition [1] or predicting binding affinity in the papers referenced below.

[1] Deep convolutional networks for quality assessment of protein folds. Georgy Derevyanko, Sergei Grudinin, Yoshua Bengio, and Guillaume Lamoureux . arXiv:1801.06252v1 [q-bio.BM] 18 Jan 2018

[2] TopologyNet: Topology based deep convolutional neural networks for biomolecular property predictions Zixuan Cang, and Guo-Wei Wei. arXiv:1704.00063v1 [q-bio.QM] 31 Mar 2017.

[3] PointNet: Deep Learning on Point Sets for 3D Classification and Segmentation. Charles R. Qi, Hao Su, Kaichun Mo, Leonidas J. Guibas. arXiv:1612.00593v2 [cs.CV] 10 Apr 2017

[4] Convolutional Networks on Graphs for Learning Molecular Fingerprints. David Duvenaud et al. NeurIPS 2015.





No comments posted yet: Link to HN comments page




Historical Discussions: Show HN: Share your Git hooks and config (February 10, 2019: 17 points)

(17) Show HN: Share your Git hooks and config

17 points 6 days ago by stefanhoelzl in 10000th position

github.com | Estimated reading time – 2 minutes | comments | anchor

share your git hooks and config

How to share your git hooks and config with your team members and put them under version control

Demo

# change your 'core.hookspath' to the tracked 'hooks' directory
$ git config --global core.hookspath hooks
# clone the repository
$ git clone https://github.com/stefanhoelzl/commit-git-hooks-and-config
$ cd commit-git-hooks-and-config
# use the custom alias defined in .gitconfig
$ git head-ref
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx refs/remotes/origin/HEAD
# use the custom pre-commit hook in hooks
$ touch test
$ git commit -a -m'test git hook'
About to commit using tracked git hook
[master d330c48] test git hook
 1 file changed, 1 insertion(+), 1 deletion(-)

Details

Let's start with the git hooks

Since it would be a security vulnerability to just allow arbitrary git hooks to be executed on your system, we need to set one global git configuration value:

git config --global core.hookspath <your-hooks-path>

where <your-hooks-path> is relative to your repository root. This allows git hooks to be stored in a tracked directory. You can also save this in your local config, but then you have to set it for every repository you clone.

Now you can put all your git hooks into <your-hooks-path> and have them tracked and shared with your team members.
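The scheme above can be sketched end to end. This is a self-contained demo in a throwaway repository; the repo location, hook name, and echoed message are illustrative, not part of the original project.

```shell
# Create a scratch repository with a tracked hooks/ directory and point
# core.hookspath at it (set locally here; use --global to apply everywhere).
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name 'Example User'
git config core.hookspath hooks

# The hook lives in a tracked directory, so it is versioned and shared.
mkdir hooks
cat > hooks/pre-commit <<'EOF'
#!/bin/sh
echo 'About to commit using tracked git hook'
EOF
chmod +x hooks/pre-commit

git add hooks
git commit -m 'add tracked hook'   # the pre-commit hook runs here
```

Anyone who clones this repository and sets `core.hookspath hooks` gets the same hook automatically.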

To be sure there is no malicious software hidden in the hooks coming with this repository, check them out in the hooks directory.

How can we use this to share our git configuration?

Now that we can have a shared post-checkout git hook, we can use it to set an include.path pointing to our custom git configuration file.

So create a post-checkout hook in <your-hooks-path>:

# content of <your-hooks-path>/post-checkout
git config --local include.path ../<your-gitconfig>

Now you can create a <your-gitconfig> file in your repo, and the post-checkout hook will ensure that your team members always work with the correct gitconfig.
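Here is a runnable sketch of the include.path trick in a throwaway repository. The file name `shared.gitconfig` and the `head-ref` alias are illustrative; the hook is executed by hand at the end, standing in for git running it automatically after a checkout or clone.

```shell
# Scratch repo with a tracked post-checkout hook and a tracked gitconfig.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email you@example.com
git config user.name 'Example User'
git config core.hookspath hooks

# The hook wires the tracked config into the local repo config.
# include.path is resolved relative to .git/config, hence the "../".
mkdir hooks
cat > hooks/post-checkout <<'EOF'
#!/bin/sh
git config --local include.path ../shared.gitconfig
EOF
chmod +x hooks/post-checkout

# The shared, versioned configuration: a custom alias.
cat > shared.gitconfig <<'EOF'
[alias]
    head-ref = rev-parse HEAD
EOF

git add -A
git commit -q -m 'add shared hooks and config'
./hooks/post-checkout   # normally triggered by git after checkout/clone
git head-ref            # alias from the tracked shared.gitconfig now works
```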




All Comments: [-] | anchor

Dunedan(10000) 6 days ago [-]

I'm a big fan of pre-commit (https://pre-commit.com/) for common git hooks.

stefanhoelzl(10000) 6 days ago [-]

One goal of my approach was to avoid any dependencies you have to install before you can use it.

It is certainly not as mighty as pre-commit, but it doesn't add an extra layer around the stuff you already know (hooks and gitconfig), and it can be enabled by setting just one git config value.





Historical Discussions: Show HN: Always Be Closing – Pull Request Management Service (February 14, 2019: 16 points)

(16) Show HN: Always Be Closing – Pull Request Management Service

16 points 2 days ago by drmajormccheese in 10000th position

www.thoughtdealership.com | Estimated reading time – 1 minutes | comments | anchor

Beta testers are wanted for the GitHub Apps Always Be Closing and Always Be Closing Jr.

Always Be Closing (ABC) is a GitHub App that provides several advanced features that are not provided by GitHub, many of them related to pull request management. Any administrator of your repository can enable or disable features on an easy-to-use configuration dashboard. Each feature is a simple on/off toggle with no further configuration options, and all are designed to be intuitive.

Install Always Be Closing via GitHub apps. Select the "Configure" button on the GitHub apps page to enable the service for your personal repositories or your GitHub organization. Use the configuration dashboard to enable and disable features. Leave feedback at the ABC community site.

The features include:

ABC requires the following GitHub permissions:

  • Repository administration - read-only access
  • Repository metadata - read-only access
  • Organization members - read-only access
  • Commit statuses - read-only access
  • Issues - read and write access
  • Pull Requests - read and write access
  • Repository contents - read and write access

If you prefer not to grant read or write access to repository contents (the code in your git repositories), try Always Be Closing Jr. It provides all the features of ABC that do not require access to repository contents.




All Comments: [-] | anchor

aequitas(4016) 2 days ago [-]

This should really be a GitHub feature that is on be default on every repo: https://www.thoughtdealership.com/post/delete-comments/

> Delete Reaction Comments is an Always Be Closing (ABC) feature that deletes comments from pull requests that are better expressed as GitHub reactions.

drmajormccheese(10000) 2 days ago [-]

Yeah. Recently (past half year) GitHub enabled a feature where duplicate comments are automatically edited to be hidden comments. I think it's a continuation of the GitHub feature that allows repository owners to hide a comment. Two caveats: (1) you can't hide a comment using the GitHub API; (2) I haven't tested whether duplicate comments trigger an email notification. Ideally they would not trigger notification.





Historical Discussions: Satellite Harpoons Space Debris in Test (February 15, 2019: 7 points)

(15) Satellite Harpoons Space Debris in Test

15 points 1 day ago by ChuckMcM in 733rd position

www.theverge.com | Estimated reading time – 4 minutes | comments | anchor

A British satellite in orbit around Earth has successfully tested out a particularly pointed method for cleaning up space debris: piercing objects with a harpoon. In a new video taken from the spacecraft, the satellite shoots its onboard harpoon to puncture a target panel that's about five feet away.


The test was part of the University of Surrey's RemoveDEBRIS mission, which is designed to try out various ways of getting rid of debris in orbit. Space debris has become a growing concern for the aerospace community over the last few decades, as it makes the space environment more dangerous for future satellites. These objects typically consist of defunct spacecraft and other uncontrollable objects circling around Earth at more than 17,000 miles per hour. Getting hit by even a small piece of this debris could be enough to take out a functioning satellite, and the collision could create even more dangerous pieces of junk in the process.

That's why those in the aerospace industry are interested in figuring out ways to remove debris from the space environment to make Earth orbit cleaner and safer for future space travel. The RemoveDEBRIS satellite, which was deployed from the International Space Station in June 2018, is equipped with different tech that's capable of ensnaring space junk. Before this harpoon test, the spacecraft successfully deployed a net for grabbing debris.

Now, those behind the RemoveDEBRIS mission say that harpoons could also be a good method of capture. "I think we have demonstrated the technology is viable," Guglielmo Aglietti, principal investigator of the RemoveDEBRIS mission and director of the Surrey Space Centre at the University of Surrey, tells The Verge. "Although what we have done will have to be scaled up in order to touch really large pieces of debris, the method has been successfully tested."


Aglietti says that the harpoon did not create any small, unexpected pieces of debris during the test; it just created a hole where it pierced the panel. He also noted that future space harpoons would need to have more taut tethers to keep the debris from moving around after it's pierced. In the video, the targeted panel moves around quite a bit after it's hit with the harpoon. "Once you have caught your piece of debris, then you have to tighten," he says. Aglietti also notes that larger vehicles should not move around as much as the panel did once they're harpooned.

The idea is that once a vehicle grabs hold of a piece of junk, it can then bring the debris down closer to Earth where it will burn up in the planet's atmosphere. RemoveDEBRIS is going to test out this part of the removal process, too. The satellite will do one final experiment in March when it will inflate a giant "sail" that will help take the spacecraft out of orbit.

This sail is meant to increase the surface area of the vehicle, making it more susceptible to the air in Earth's atmosphere. While objects in low Earth orbit are technically in space, there is still a very thin atmosphere at this height, and the tiny particles and gas within the atmosphere constantly push on spacecraft, dragging them down to the ground. This is why most objects in low Earth orbit fall to the planet eventually. A drag sail expedites this process by providing more surface for the gas to hit.

When the RemoveDEBRIS satellite tests out its sail, it will ultimately be destroyed during the process as it plunges into the air surrounding Earth. But if the sail is successful, it means the RemoveDEBRIS mission may have demonstrated some key technologies for cleaning up space.

In the future, similar satellites may be able to drag themselves down to Earth with a harpooned piece of debris in tow.




All Comments: [-] | anchor

cgb223(4023) about 1 hour ago [-]

Considering the amount of space debris out there, is a single harpoon the most practical way to get rid of it all?

ChuckMcM(733) 19 minutes ago [-]

I don't think there is any one solution to 'all' space debris (perhaps there is but it seems intractable with a single solution to me). However there is always benefit in removing space debris. If this technique was only successful on pieces 5' (10 cm) in size or greater it could still get a lot of stuff out of the way. Especially things that are prone to creating Kessler cascades.





Historical Discussions: Browning Fever: A story of fandom, literary societies, and impenetrable verse (February 14, 2019: 5 points)

(15) Browning Fever: A story of fandom, literary societies, and impenetrable verse

15 points 2 days ago by lermontov in 147th position

www.laphamsquarterly.org | Estimated reading time – 16 minutes | comments | anchor

Hiram Corson was many things: a scholar of Anglo-Saxon literature, a translator of Roman satires, a theorist of education, but above all a diehard fan of the English poet Robert Browning. The Cornell professor dedicated the better part of his career to promoting, explicating, and declaiming Browning's poems to an American audience, as well as founding a club exclusively dedicated to the poet in 1877. Toward the end of his life, Corson's steadfast service was rewarded with one last opportunity to converse directly with the man he regarded as the greatest poetical mind since Shakespeare.

Over the course of several face-to-face exchanges, Browning assured the aging academic of their mutual intellectual understanding and thanked Corson effusively for the years spent propagating his poetry. None of this was particularly remarkable in itself—the two had met decades earlier and had even briefly traveled together in Italy. Far more noteworthy was the fact that at the time of these final conversations, Browning had been in his grave for twenty-two years.

These spectral testimonies, recorded in Corson's book Spirit Messages, were the product of a series of séances held at the home of a Boston medium named Minnie Meserve Soule in 1910. They were celebrity-studded affairs—deceased interlocutors from the sessions included Robert's wife, Elizabeth Barrett Browning; Nathaniel Hawthorne; Alfred Tennyson; William Gladstone; and Henry Wadsworth Longfellow, who obligingly brought along a "large band of Indian spirits" to protect the séance from otherworldly interference.

Browning, the book's unquestionable star (despite having himself implicitly accused mediums of fraud in his poem "Mr. Sludge, 'The Medium' "), speaks on a wide range of subjects, from the spirit world's intermittently active interest in worldly affairs and the time he spends in the afterlife chatting with Tennyson about the metrical defects of contemporary poets to the enduring postmortem love between him and Elizabeth.1 Corson doesn't ventriloquize the poet in any way contrary to Browning's basic principles, and if the language doesn't sound exactly like Browning, neither does it sound particularly unlike him. It is, in a sense, a formal mimicry of (or perhaps homage to) Browning's own modus operandi: the adoption of others' voices.

As a brazen bid for posterity by appealing to famous ghosts, Spirit Messages is both the product of an overly robust professorial ego and an artifact of the turn-of-the-century vogue for spiritualism. But Corson's lingering obsession with Browning is also symptomatic of a particular, decades-long cultural fixation with the poet. In fact, the small club Corson established at Cornell in 1877 heralded the blossoming of hundreds of such Browning Societies across America and Browning's native United Kingdom, a trend that continued well into the twentieth century. The fad was known as the "Browning craze," "Browning fever," or, as one periodical christened it, "Browningismus," a prolonged flourishing of transatlantic devotion to the study and dissemination of all things Browning.

Spirit Messages, by Hiram Corson, 1911. HathiTrust Digital Library, original from the Library of Congress.

At first glance, the poet was an unlikely candidate to inspire an outpouring of public worship. After all, the most common complaint about his work was that it was frequently impenetrable, whether through density of allusion, breadth of diction, or intricacies of syntax. This forced his defenders into adopting frequently amusing critical defenses of Browning's more abstruse passages. For instance, Corson argued that the seemingly cryptic lines "To be by him themselves made act, / Not watch Sordello acting each of them" from Sordello make perfect sense when one realizes that Browning simply refuses to abide by the syntactical laws that govern an uninflected language like English. "There are difficult passages in Browning which, if translated into Latin, would present no difficulty at all," Corson insists, as though this were self-evident.2 It was precisely the demanding quality of Browning's work that made it so suitable for endless, obsessive scrutiny and debate, however—the tantalizing promise that enough close attention would deliver some form of coherence. Scottish critic Andrew Lang provided a telling anecdote in the year of Browning's death:

There is a story of two clever girls who set out to peruse Sordello, and corresponded with one another about their progress. "Somebody is dead in Sordello," one of them wrote to her friend. "I don't quite know who it is, but it must make things a little clearer in the long run!"

Further jeopardizing Browning's prospects as a figure of popular devotion was the fact that his major work consisted mostly of dramatic monologues, spoken in the voice of figures as various as Pope Innocent XII, the Renaissance painter Andrea del Sarto, and Shakespeare's Caliban, not to mention dozens of anonymous murderers, madmen, and eccentrics both historical and contemporary. Given this choice of genre, some critics argued, how could Browning have any unified sense of poetic self, any discrete personality or philosophical vision? What could anyone know about what Browning thought about anything, if all we have are his characters?

Again, however, Browning societies could transform this difficulty into a kind of useful provocation: through close study, perhaps Browning's many masks could be penetrated, his true self made accessible. In this sense, the poet's elusiveness, along with the obscurity of whatever spiritual or moral "message" he might have been offering, provided infinite interpretive possibilities, making possible an infinite number of Brownings. And so many Brownings promised a lot of spilled ink.

A significant portion of that ink would issue from the most prominent Browning society of them all, the London Browning Society. Its founders were Frederick James Furnivall and Emily Hickey, two Victorian curios who may as well have been Browning monologuists themselves. Hickey was a precocious poet (she published her first long poem at twenty and would ultimately produce twelve volumes of verse) who supported herself by working as a governess, secretary, and journalist. She would eventually convert to Catholicism and spend the rest of her days in monastic seclusion, writing for an explicitly Catholic audience (for which the pope awarded her the Pro Ecclesia et Pontifice decoration), but she remained a devoted Browningite until death.

Furnivall was the more outspoken and visible of the two. "No man in England has done so much work for nothing, so perseveringly, as I've done," he once declared. If this was hyperbole, it was relatively modest in its exaggeration. An ever-zealous philologist, educator, and reformer—and friend of Thomas Carlyle, Charles Kingsley, Elizabeth Gaskell, John Ruskin, and seemingly every other spirit of the age—Furnivall helped establish the Christian Socialist Working Men's College, cofounded the Oxford English Dictionary, and instituted England's first all-women's rowing club in 1896 (years earlier, he had also invented a new, more efficient model of sculling boat). He agitated on behalf of cooperative stores, argued in favor of women's suffrage, and once allegedly sold his library to benefit a group of striking woodcutters. A close friend dubbed him the "Grand Old Optimist."

Above all, Furnivall was a compulsive creator of literary societies. The Early English Text Society, the Ballad Society, the Chaucer Society, the New Shakespeare Society, the Wycliffe Society, and the Shelley Society all could claim Furnivall as their father. ("Of making societies there is no end, and there never will be as long as Dr. F.J. Furnivall lives," quipped one contemporary magazine.) But the Browning Society was a unique venture insofar as it was dedicated not only to a living poet, but to one both its founders personally knew. Days before announcing the society's establishment, Furnivall and Hickey visited Browning, informing him of their plans. Accounts of Browning's response vary, but it seems fair to say that he was neither beside himself with enthusiasm nor entirely opposed to the idea. (This was despite the fact that he had recently concluded an unpleasant tenure as president of Furnivall's New Shakespeare Society, marked by a ferocious dispute between Furnivall and the poet Algernon Charles Swinburne over whether and how the plays should be dated). In the end, however reluctantly, Browning gave the society his blessings.

Robert Browning, by Elliott & Fry, 1884. © National Portrait Gallery, London.

A few months later, on October 28, 1881, in the Botany Theatre at University College, about three hundred Londoners convened for the London Browning Society's inaugural meeting. As its prospectus noted, the society's goals were ambitious: beyond encouraging the "study and discussion" of Browning's poems, it would foster "the publication of Papers on them, and extracts from works illustrating them...the formation of Browning Reading-Clubs, the acting of Browning's dramas by amateur companies, the writing of a Browning Primer, the compilation of a Browning Concordance or Lexicon, and generally the extension of the study and influence of the poet."

The society's monthly gatherings were scrupulously documented. Papers and talks given were preserved, abstracts of meetings were drawn up, byzantine organizational charts were carefully maintained. Members created wildly complex critical bibliographies, obsessively precise records of publication, and comprehensive lists of rhyme changes between various editions of some of the longer poems, all the while squabbling over interpretive minutiae. Furnivall was, in effect, bringing the same philological tools to bear on Browning's corpus that he had on Middle English poetry through the Early English Text Society, the same obsession with textual authority and scholarly meticulousness. Why wouldn't the preeminent poet of the age (in Furnivall's estimation) be worthy of the same study as Chaucer? As Shakespeare?

Meanwhile, across the Atlantic, Corson's endless advocacy work and famously theatrical recitations had spawned Browning societies in nearby Rochester and Syracuse. Browning enjoyed a kind of countercultural reputation, his experimentation a welcome reprieve from the fusty Fireside Poets that had dominated nineteenth-century literary culture. Soon other groups were cropping up in the major cities: Boston, Philadelphia, Baltimore, Chicago. By some counts, there were some nine hundred Browning clubs throughout the United States at the craze's pinnacle. Their internal affairs were the subject of national press attention. An 1894 New York Times article describes in great detail one society's censure of a member for acting with "untimely levity" during a reading of one of Browning's plays. The headline: boston browning society offended.

Browning himself recognized Americans' particular affinity for him, purportedly claiming that Chicagoans were his most ardent and sophisticated readers. It's not improbable—the Chicago and Alton Railroad reprinted Browning's poetry in its official guide during the 1870s, so that passengers might read it on their journey. The world's largest Browning archive is located in the unlikely city of Waco, Texas, the product of one early twentieth-century Baylor professor's career-long infatuation. Dr. A.J. Armstrong fell in love with Browning as a young academic—at one point he was studying the poet's works for up to thirteen hours a day—and began to amass a large collection of Browningiana that ultimately grew into the Armstrong Browning Library. Armstrong's enthusiasm seems to have been infectious: a 1924 children's performance of Browning's Pied Piper of Hamelin at Baylor supposedly drew an audience of ten thousand (the New York Times again: browning, poet, competes with baseball in texas).

The New York Times, June 22, 1924.

Back in England, some members of the London Browning Society demonstrated their affinity for the poet through their own literary production. One Society member, the poet and critic Arthur Symons, published "A Fancy of Ferishtah," a poem in which Browning is cast as an ancient Persian sage ("Thrice-honored Master, Light of Nishapur / Star to whose shining Persis gazeth up!") whose poetry inspires a Symons-esque youth eager to "make some day some / Small, small, however small name for himself." Another passionately committed member, the physician Edward Berdoe, expressed his appreciation by penning a bizarre novel called St. Bernard's: The Romance of a Medical Student under the pen name Aesculapius Scalpel, in which a morally errant young doctor returns to the path of righteousness through careful study of Browning's Paracelsus, a poem about the titular sixteenth-century Swiss alchemist.

The fact that so much Browning adulation was carried on without a whiff of self-consciousness or humor made the societies inevitable targets of satire. Arthur Conan Doyle's justly forgotten 1899 novel A Duet, with an Occasional Chorus, for instance, crudely dismisses Browning societies as the domain of parochial suburban housewives. The novel describes the first meeting of one such society, consisting of just three women, whose attempts at engaging with Browning's poetry are continually derailed by ancillary discussions of ball gowns, hairstyles, and the seasonality of oysters. By the time they get around to tackling "Caliban upon Setebos," they are totally bewildered. "Dear me, I had no idea Browning was like this," notes one member. "What nonsense it is." Another remains confident that there is something in Browning, even if it escapes their analysis: "It is very easy to call everything which we do not understand 'nonsense'...I have no doubt that Browning had a profound meaning in this." If so, they don't discover it. The meeting concludes after a mere hour, and the members all resolve to read Tennyson instead.

Mr. Robert Browning taking tea with the Browning Society. The New York Public Library, The Miriam and Ira D. Wallach Division of Art, Prints and Photographs.

In the popular press, satirical treatments of Browning's disciples were ubiquitous in both the United States and England; most, like Doyle's, portrayed society members as the ignorant thralls of literary modishness. A particularly well-known Max Beerbohm drawing in Punch tweaks the formula slightly, depicting Browningites as fawning, austere sycophants. Illustrated in black and white, Browning's admirers cluster around the master—himself rendered in color—looking both unmistakably dour and wholly rapt. One elderly woman sits directly in front of the poet, looking as though she's either about to wash his feet or administer some shoe polish. The archetypal Society member, it seems, was either an unsophisticated suburbanite or a pitiable flatterer.

There is some kernel of truth to these caricatures: society members were often middle-class women who found a welcome opportunity in Browning societies for public "literary" discourse. The clubs also resonated with the religious, particularly those inclined toward evangelicalism. The inaugural meeting of the London society, as one speaker noted, included a "great proportion of clergymen and ladies." Yet Browning societies on both continents were more diverse than the satirists would suggest, encompassing a wide range of perspective and experience—amateurs and experts, young and old, men and women. It was nevertheless difficult to shake a reputation as the realm of doting dilettantes. (George Bernard Shaw, while a card-carrying member of the London society, later claimed he joined only to make fun of the "pious ladies.")

The London Browning Society, bankrupt and torn apart by internal debates between Christian and agnostic factions, disbanded in 1892. Punch, unsurprisingly, found the occasion a cause for celebration. "Hark! 'tis the knell of the Browning Society," it declared. "Windbags are busting all round us today." Other societies would persevere for decades, but by the 1920s and '30s, it's safe to say that the network of Browning societies across the UK and America had largely collapsed. Perhaps Browning had been outmodernized by the modernists, perhaps the clubs founded in his name had fallen victim to the professionalization of literary study, or perhaps they had simply gone the way of all fads.

Yet certain societies endured and still do, though in far more modest iterations. In his 1969 history of the London Browning Society, the scholar William S. Peterson (to whose work I'm much indebted) noted that "Browning clubs today...carry a slightly musty odor about them, as if they do not quite belong to the twentieth century." Needless to say, the same is true fifty years later, though the clubs still soldier on. The contemporary Browning Society based in London, despite having had to cease publishing the Journal of Browning Studies and awarding its Poetry Prize, still promotes events in honor of both Robert and Elizabeth: a "flower festival" at Pembury Parish Church, where the Brownings' son was married; a "Browning Sunday" at St. Marylebone Parish Church to commemorate the anniversary of the Brownings' marriage; an annual wreath-laying ceremony in Poets' Corner. Musty, yes, but undeniably charming in its commitment to honoring the legacy not only of Robert Browning but also of the assortment of oddballs who conspired to enshrine his status as an actual cult favorite in the first place.


1 In what is perhaps the most outrageous moment in Spirit Messages, Corson has Robert misquote Elizabeth's poem Aurora Leigh, then corrects him in a footnote.

2 Sordello in particular was renowned for its inscrutability. One story has it that the writer Douglas Jerrold, while recuperating from an illness, found himself so unable to make sense out of Sordello that he was convinced the disease had cost him his mind.




All Comments: [-] | anchor

richardhod(3489) about 4 hours ago [-]

As a side note, many in Britain have heard of this Victorian man of letters because of two excellent films (1951, 1994) of Terence Rattigan's 1948 play The Browning Version, where one of his works serves as a McGuffin. https://en.m.wikipedia.org/wiki/The_Browning_Version

branweb(10000) about 2 hours ago [-]

Ah hadn't heard of these. Will check them out.

As to the article: good reading. I was always mildly curious why literary societies should grow around Robert Browning of all people. It's bewildering: America gripped by Browning fever? A children's performance of Browning's Pied Piper of Hamelin drawing more people than a baseball game...in TEXAS?! Truly the past is a foreign country.




(14) Eliza in GnuCOBOL (2017)

14 points about 18 hours ago by abrax3141 in 3583rd position

sourceforge.net | Estimated reading time – 55 minutes | comments | anchor

[r514]: /trunk/samples/eliza/eliza.lst

1915 lines (1830 with data), 104.5 kB
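Before diving into the listing itself, the technique it implements is simple enough to sketch: ELIZA scans each input line for a keyword, reflects first- and second-person words in the remainder of the sentence, and splices that remainder into a canned reply template, cycling through templates so repeated inputs get varied answers. The COBOL below does this with 36 keywords and 112 replies; here is a minimal, hypothetical Python sketch of the same loop (tables abridged to a couple of entries; all names are illustrative, not taken from the COBOL source):

```python
# Abridged ELIZA-style responder: keyword spotting plus pronoun reflection.
# REFLECTIONS swaps person so an echoed fragment reads back naturally;
# KEYWORDS maps a trigger phrase to a reply template with a '{rest}' slot.

REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "i", "your": "my"}

KEYWORDS = {
    "i want": "Why do you want{rest}?",
    "because": "Is that the real reason?",
}
DEFAULT_REPLY = "Can you elaborate on that?"

def reflect(fragment):
    """Swap first/second-person words in the echoed fragment."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.split())

def respond(user_input):
    """Return a reply for one line of user input."""
    text = user_input.lower().strip(" .!?")
    for keyword, template in KEYWORDS.items():
        if keyword in text:
            # Everything after the keyword gets reflected and echoed back.
            rest = reflect(text.split(keyword, 1)[1].strip())
            return template.format(rest=(" " + rest) if rest else "")
    return DEFAULT_REPLY
```

The COBOL version does the same work without dictionaries: its keyword and reply tables are fixed-width `FILLER` strings reinterpreted via `REDEFINES`/`OCCURS`, and the reply-locater table tracks which canned reply was used last for each keyword.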

GnuCOBOL 2.2.0          eliza.cbl            Thu Oct 12 21:22:20 2017  Page 0001
LINE    PG/LN  A...B............................................................
000001  <<<<<<< .mine
error: invalid indicator '<' at column 7
000002         IDENTIFICATION DIVISION.
000003
000004         PROGRAM-ID.             ELIZA.
000005        *AUTHOR.                 ARNOLD J. TREMBLEY.
000006        *DATE-WRITTEN.           2017-10-01.
000007        *SECURITY.               THIS PROGRAM IS PUBLIC DOMAIN FREEWARE.
000008
000009        ****************************************************************
000010        *                                                              *
000011        *    https://en.wikipedia.org/wiki/ELIZA                       *
000012        *    ELIZA is an early natural language processing program     *
000013        *    created around 1964 by Joseph Wiezenbaum at MIT.  This    *
000014        *    version is adapted from ELIZA.BAS which appeared in       *
000015        *    Creative Computing magazine in 1977, written by Jeff      *
000016        *    Shrager and adapted for IBM PC in the early 1980's by     *
000017        *    Patricia Danielson and Paul Hashfield.                    *
000018        *                                                              *
000019        *    COBOL translation by Arnold Trembley, 2017-10-01.         *
000020        *    [email protected]                                   *
000021        *    Using MinGW GnuCOBOL 2.2 for Windows 7 Pro.               *
000022        *    This version is public domain freeware.                   *
000023        *                                                              *
000024        *    ELIZA simulates a psychotherapist interacting with a      *
000025        *    human patient. Enter 'shut up' to stop the dialog.        *
000026        *                                                              *
000027        ****************************************************************
000028
000029         ENVIRONMENT DIVISION.
000030
000031         CONFIGURATION SECTION.
000032
000033         REPOSITORY.
000034             FUNCTION ALL INTRINSIC.
000035
000036         INPUT-OUTPUT SECTION.
000037
000038         FILE-CONTROL.
000039
000040         DATA DIVISION.
000041
000042         FILE SECTION.
000043
000044         WORKING-STORAGE SECTION.
000045
000046         01  100-PROGRAM-FLAGS.
000047             05  100-EOF-FLAG                PIC X(01)   VALUE SPACE.
000048                 88  88-100-ALL-DONE                     VALUE 'Y'.
000049             05  100-KEYWORD-FLAG            PIC X(01)   VALUE SPACE.
000050                 88  88-100-KEYWORD-FOUND                VALUE 'Y'.
000051                 88  88-100-KEYWORD-NOT-FOUND            VALUE 'N'.
000052
000053         01  200-USER-INPUT                  PIC X(80)   VALUE SPACES.
000054
000055         01  210-USER-INPUT-LC               PIC X(80)   VALUE SPACES.
000056
000057         01  220-LAST-USER-INPUT             PIC X(80)   VALUE SPACES.
000058
000059         01  230-TRANSLATED-INPUT            PIC X(80)   VALUE SPACES.
000060
000061         01  240-REPLY                       PIC X(79)   VALUE SPACES.
000062
000063         01  250-SUBSTITUTE-WORK             PIC X(100)  VALUE SPACES.
000064
000065         01  300-PROGRAM-CONSTANTS.
000066             05  300-MAX-KEYWORD-ENTRIES     PIC S9(4)   COMP VALUE +36.
000067             05  300-MAX-SCAN-LEN            PIC S9(4)   COMP VALUE +30.
000068             05  300-SHUT                    PIC X(04)   VALUE 'shut'.
000069             05  300-ASTERISK                PIC X(01)   VALUE '*'.
000070
000071         01  400-PROGRAM-COUNTERS.
000072             05  400-HOLD-KW-LEN             PIC S9(4)   COMP VALUE ZERO.
000073             05  400-SCAN-LEN                PIC S9(4)   COMP VALUE ZERO.
000074             05  400-HOLD-500-K              PIC S9(4)   COMP VALUE +0.
000075             05  400-HOLD-OFFSET             PIC S9(4)   COMP VALUE +0.
000076             05  400-OFFSET                  PIC S9(4)   COMP VALUE +0.
000077             05  400-SUB                     PIC S9(4)   COMP VALUE ZERO.
000078             05  400-SPACES-COUNT            PIC S9(4)   COMP VALUE ZERO.
000079
000080         01  500-KEYWORD-TABLE-DATA.
000081             05  FILLER   PIC X(16)  VALUE '07can you '.
000082             05  FILLER   PIC X(16)  VALUE '05can i '.
000083             05  FILLER   PIC X(16)  VALUE '07you are '.
000084             05  FILLER   PIC X(16)  VALUE '06you're '.
000085             05  FILLER   PIC X(16)  VALUE '07i don't '.
000086             05  FILLER   PIC X(16)  VALUE '06i feel  '.
000087             05  FILLER   PIC X(16)  VALUE '13why don't you '.
000088             05  FILLER   PIC X(16)  VALUE '11why can't i '.
000089             05  FILLER   PIC X(16)  VALUE '07are you '.
000090             05  FILLER   PIC X(16)  VALUE '07i can't '.
000091             05  FILLER   PIC X(16)  VALUE '04i am '.
000092             05  FILLER   PIC X(16)  VALUE '03i'm  '.
000093             05  FILLER   PIC X(16)  VALUE '03you '.
000094             05  FILLER   PIC X(16)  VALUE '06i want '.
000095             05  FILLER   PIC X(16)  VALUE '04what '.
000096             05  FILLER   PIC X(16)  VALUE '03how '.
000097             05  FILLER   PIC X(16)  VALUE '03who '.
000098             05  FILLER   PIC X(16)  VALUE '05where '.
000099             05  FILLER   PIC X(16)  VALUE '04when '.
000100             05  FILLER   PIC X(16)  VALUE '03why '.
000101             05  FILLER   PIC X(16)  VALUE '04name '.
000102             05  FILLER   PIC X(16)  VALUE '05cause '.
000103             05  FILLER   PIC X(16)  VALUE '05sorry '.
000104             05  FILLER   PIC X(16)  VALUE '05dream '.
000105             05  FILLER   PIC X(16)  VALUE '05hello '.
000106             05  FILLER   PIC X(16)  VALUE '02hi '.
000107             05  FILLER   PIC X(16)  VALUE '05maybe '.
000108             05  FILLER   PIC X(16)  VALUE '02no '.
000109             05  FILLER   PIC X(16)  VALUE '04your '.
000110             05  FILLER   PIC X(16)  VALUE '06always '.
000111             05  FILLER   PIC X(16)  VALUE '05think '.
000112             05  FILLER   PIC X(16)  VALUE '05alike '.
000113             05  FILLER   PIC X(16)  VALUE '03yes '.
000114             05  FILLER   PIC X(16)  VALUE '06friend '.
000115             05  FILLER   PIC X(16)  VALUE '08computer '.
000116             05  FILLER   PIC X(16)  VALUE '10NOKEYFOUND'.
000117
000118         01  500-KEYWORD-TABLE       REDEFINES 500-KEYWORD-TABLE-DATA.
000119             05  500-KEYWORD-ENTRY       OCCURS 36 TIMES
000120                                         INDEXED BY 500-K.
000121                 10  500-KW-LEN              PIC 9(02).
000122                 10  500-KEYWORD             PIC X(14).
000123
000124         01  520-TRANSLATION-CONSTANTS.
000125             05 520-THING-IN                 PIC X(05)   VALUE 'thing'.
000126             05 520-HIGH-IN                  PIC X(04)   VALUE 'high'.
000127             05 520-SHI-IN                   PIC X(03)   VALUE 'shi'.
000128             05 520-CHI-IN                   PIC X(03)   VALUE 'chi'.
000129             05 520-HIT-IN                   PIC X(03)   VALUE 'hit'.
000130             05 520-OUR-IN                   PIC X(03)   VALUE 'our'.
000131             05 520-QMARK-IN                 PIC X(02)   VALUE '? '.
000132             05 520-XMARK-IN                 PIC X(02)   VALUE '! '.
000133             05 520-FSTOP-IN                 PIC X(02)   VALUE '. '.
000134
000135             05 520-THING-OUT                PIC X(05)   VALUE 'th!ng'.
000136             05 520-HIGH-OUT                 PIC X(04)   VALUE 'h!gh'.
000137             05 520-SHI-OUT                  PIC X(03)   VALUE 'sh!'.
000138             05 520-CHI-OUT                  PIC X(03)   VALUE 'ch!'.
000139             05 520-HIT-OUT                  PIC X(03)   VALUE 'h!t'.
000140             05 520-OUR-OUT                  PIC X(03)   VALUE '0ur'.
000141             05 520-QMARK-OUT                PIC X(02)   VALUE '  '.
000142             05 520-FSTOP-OUT                PIC X(02)   VALUE '  '.
000143
000144             05 520-ARE-IN                   PIC X(05)   VALUE ' are '.
000145             05 520-WERE-IN                  PIC X(06)   VALUE ' were '.
000146             05 520-YOU-IN                   PIC X(05)   VALUE ' you '.
000147             05 520-YOUR-IN                  PIC X(06)   VALUE ' your '.
000148             05 520-MY-IN                    PIC X(04)   VALUE ' my '.
000149             05 520-IVE-IN                   PIC X(06)   VALUE ' i've '.
000150             05 520-IM-IN                    PIC X(05)   VALUE ' i'm '.
000151             05 520-I-AM-IN                  PIC X(06)   VALUE ' i am '.
000152             05 520-ME-IN                    PIC X(04)   VALUE ' me '.
000153             05 520-I-IN                     PIC X(03)   VALUE ' i '.
000154             05 520-YOURE-IN                 PIC X(08)   VALUE ' you're '.
000155             05 520-YOU-ARE-IN           PIC X(09)   VALUE ' you are '.
000156             05 520-YOURSELF-IN          PIC X(10)   VALUE ' yourself '.
000157
000158             05 520-AM-OUT                   PIC X(04)   VALUE ' am '.
000159             05 520-WAS-OUT                  PIC X(05)   VALUE ' was '.
000160             05 520-I-FIX                    PIC X(04)   VALUE ' i# '.
000161             05 520-IM-FIX                   PIC X(06)   VALUE ' i'm# '.
000162             05 520-I-AM-FIX                 PIC X(07)   VALUE ' i am# '.
000163             05 520-MY-FIX                   PIC X(05)   VALUE ' my# '.
000164             05 520-YOUR-FIX                 PIC X(07)   VALUE ' your# '.
000165             05 520-YOUVE-OUT                PIC X(08)   VALUE ' you've '.
000166             05 520-YOURE-OUT                PIC X(08)   VALUE ' you're '.
000167             05 520-YOU-FIX                  PIC X(06)   VALUE ' you# '.
000168             05 520-MYSELF-OUT               PIC X(08)   VALUE ' myself '.
000169
000170             05 520-I-OUT                    PIC X(03)   VALUE ' I '.
000171             05 520-IM-OUT                   PIC X(05)   VALUE ' I'm '.
000172             05 520-I-AM-OUT                 PIC X(06)   VALUE ' I am '.
000173             05 520-MY-OUT                   PIC X(04)   VALUE ' my '.
000174             05 520-YOUR-OUT                 PIC X(06)   VALUE ' your '.
000175             05 520-YOU-OUT                  PIC X(05)   VALUE ' you '.
000176
000177
000178         01  540-REPLY-TABLE-DATA.
000179             05  PIC x(60)   VALUE '29Don't you believe that I can*'.
000180             05  PIC X(60)   VALUE '29Perhaps you would like me to*'.
000181             05  PIC x(60)   VALUE '29Do you want me to be able to*'.
000182             05  PIC x(60)   VALUE '26Perhaps you don't want to*'.
000183             05  PIC x(60)   VALUE '26Do you want to be able to*'.
000184             05  PIC x(60)   VALUE '26What makes you think I am*'.
000185
000186             05  PIC X(30)   VALUE '35Does it please you to believ'.
000187             05  PIC X(30)   VALUE 'e I am*'.
000188
000189             05  PIC x(60)   VALUE '29Perhaps you would like to be*'.
000190
000191             05  PIC X(30)   VALUE '31Do you sometimes wish you we'.
000192             05  PIC X(30)   VALUE 're*'.
000193
000194             05  PIC x(60)   VALUE '17Don't you really*'.
000195             05  PIC x(60)   VALUE '14Why don't you*'.
000196             05  PIC x(60)   VALUE '26Do you wish to be able to*'.
000197             05  PIC x(60)   VALUE '22Does that trouble you?'.
000198             05  PIC x(60)   VALUE '18Do you often feel*'.
000199             05  PIC x(60)   VALUE '18Do you often feel*'.
000200             05  PIC x(60)   VALUE '21Do you enjoy feeling*'.
000201             05  PIC x(60)   VALUE '30Do you really believe I don't*'.
000202             05  PIC x(60)   VALUE '28Perhaps in good time I will*'.
000203             05  PIC x(60)   VALUE '18Do you want me to*'.
000204
000205             05  PIC X(30)   VALUE '35Do you think you should be a'.
000206             05  PIC X(30)   VALUE 'ble to*'.
000207
000208             05  PIC x(60)   VALUE '14Why can't you*'.
000209
000210             05  PIC X(30)   VALUE '46Why are you interested in wh'.
000211             05  PIC X(30)   VALUE 'ether or not I am*'.
000212
000213             05  PIC x(60)   VALUE '31Would you prefer if I were not*'.
000214             05  PIC x(60)   VALUE '31Perhaps in your fantasies I am*'.
000215             05  PIC x(60)   VALUE '26How do you know you can't*'.
000216             05  PIC x(60)   VALUE '15Have you tried?'.
000217             05  PIC x(60)   VALUE '20Perhaps you can now*'.
000218
000219             05  PIC X(30)   VALUE '35Did you come to me because y'.
000220             05  PIC X(30)   VALUE 'ou are*'.
000221
000222             05  PIC x(60)   VALUE '23How long have you been*'.
000223
000224             05  PIC X(30)   VALUE '34Do you believe it is normal '.
000225             05  PIC X(30)   VALUE 'to be*'.
000226
000227             05  PIC x(60)   VALUE '19Do you enjoy being*'.
000228             05  PIC x(60)   VALUE '31We were discussing you--not me.'.
000229             05  PIC x(60)   VALUE '06Oh, I*'.
000230
000231             05  PIC X(30)   VALUE '44You're not really talking ab'.
000232             05  PIC X(30)   VALUE 'out me, are you?'.
000233
000234             05  PIC X(30)   VALUE '37What would it mean to you if'.
000235             05  PIC X(30)   VALUE ' you got*'.
000236
000237             05  PIC x(60)   VALUE '16Why do you want*'.
000238             05  PIC x(60)   VALUE '21Suppose you soon got*'.
000239             05  PIC x(60)   VALUE '22What if you never got*'.
000240             05  PIC x(60)   VALUE '22I sometimes also want*'.
000241             05  PIC x(60)   VALUE '15Why do you ask?'.
000242             05  PIC x(60)   VALUE '32Does that question interest you?'.
000243
000244             05  PIC X(30)   VALUE '38What answer would please you'.
000245             05  PIC X(30)   VALUE ' the most?'.
000246
000247             05  PIC x(60)   VALUE '18What do you think?'.
000248
000249             05  PIC X(30)   VALUE '38Are such questions on your m'.
000250             05  PIC X(30)   VALUE 'ind often?'.
000251
000252             05  PIC X(30)   VALUE '40What is it that you really w'.
000253             05  PIC X(30)   VALUE 'ant to know?'.
000254
000255             05  PIC x(60)   VALUE '27Have you asked anyone else?'.
000256
000257             05  PIC X(30)   VALUE '37Have you asked such question'.
000258             05  PIC X(30)   VALUE 's before?'.
000259
000260             05  PIC X(30)   VALUE '42What else comes to mind when'.
000261             05  PIC X(30)   VALUE ' you ask that?'.
000262
000263             05  PIC x(60)   VALUE '24Names don't interest me.'.
000264
000265             05  PIC X(30)   VALUE '41I don't care about names -- '.
000266             05  PIC X(30)   VALUE 'Please go on.'.
000267
000268             05  PIC x(60)   VALUE '24Is that the real reason?'.
000269
000270             05  PIC X(30)   VALUE '37Don't any other reasons come'.
000271             05  PIC X(30)   VALUE ' to mind?'.
000272
000273             05  PIC X(30)   VALUE '39Does that reason explain any'.
000274             05  PIC X(30)   VALUE 'thing else?'.
000275
000276             05  PIC X(30)   VALUE '34What other reasons might the'.
000277             05  PIC X(30)   VALUE 're be?'.
000278
000279             05  PIC x(60)   VALUE '23Please don't apologize!'.
000280             05  PIC x(60)   VALUE '28Apologies are not necessary.'.
000281
000282             05  PIC X(30)   VALUE '45What feelings do you have wh'.
000283             05  PIC X(30)   VALUE 'en you apologize?'.
000284
000285             05  PIC x(60)   VALUE '22Don't be so defensive!'.
000286
000287             05  PIC X(30)   VALUE '36What does that dream suggest'.
000288             05  PIC X(30)   VALUE ' to you?'.
000289
000290             05  PIC x(60)   VALUE '19Do you dream often?'.
000291
000292             05  PIC X(30)   VALUE '35What persons appear in your '.
000293             05  PIC X(30)   VALUE 'dreams?'.
000294
000295             05  PIC X(30)   VALUE '33Are you disturbed by your dr'.
000296             05  PIC X(30)   VALUE 'eams?'.
000297
000298             05  PIC X(30)   VALUE '43How do you do ...Please stat'.
000299             05  PIC X(30)   VALUE 'e your problem.'.
000300
000301             05  PIC x(60)   VALUE '29You don't seem quite certain.'.
000302             05  PIC x(60)   VALUE '23Why the uncertain tone?'.
000303             05  PIC x(60)   VALUE '27Can't you be more positive?'.
000304             05  PIC x(60)   VALUE '16You aren't sure?'.
000305             05  PIC x(60)   VALUE '15Don't you know?'.
000306
000307             05  PIC X(30)   VALUE '38Are you saying no just to be'.
000308             05  PIC X(30)   VALUE ' negative?'.
000309
000310             05  PIC x(60)   VALUE '29You are being a bit negative.'.
000311             05  PIC x(60)   VALUE '08Why not?'.
000312             05  PIC x(60)   VALUE '13Are you sure?'.
000313             05  PIC x(60)   VALUE '07Why no?'.
000314             05  PIC x(60)   VALUE '31Why are you concerned about my*'.
000315             05  PIC x(60)   VALUE '20What about your own*'.
000316
000317             05  PIC X(30)   VALUE '36Can you think of a specific '.
000318             05  PIC X(30)   VALUE 'example?'.
000319
000320             05  PIC x(60)   VALUE '05When?'.
000321             05  PIC x(60)   VALUE '25What are you thinking of?'.
000322             05  PIC x(60)   VALUE '15Really, always?'.
000323             05  PIC x(60)   VALUE '23Do you really think so?'.
000324             05  PIC x(60)   VALUE '21But you are not sure*'.
000325             05  PIC x(60)   VALUE '13Do you doubt*'.
000326             05  PIC x(60)   VALUE '12In what way?'.
000327             05  PIC x(60)   VALUE '28What resemblance do you see?'.
000328
000329             05  PIC X(30)   VALUE '40What does the similarity sug'.
000330             05  PIC X(30)   VALUE 'gest to you?'.
000331
000332             05  PIC X(30)   VALUE '34What other connections do yo'.
000333             05  PIC X(30)   VALUE 'u see?'.
000334
000335             05  PIC X(30)   VALUE '38Could there really be some c'.
000336             05  PIC X(30)   VALUE 'onnection?'.
000337
000338             05  PIC x(60)   VALUE '04How?'.
000339             05  PIC x(60)   VALUE '24You seem quite positive.'.
000340             05  PIC x(60)   VALUE '13Are you sure?'.
000341             05  PIC x(60)   VALUE '06I see.'.
000342             05  PIC x(60)   VALUE '13I understand.'.
000343
000344             05  PIC X(30)   VALUE '41Why do you bring up the topi'.
000345             05  PIC X(30)   VALUE 'c of friends?'.
000346
000347             05  PIC x(60)   VALUE '26Do your friends worry you?'.
000348             05  PIC x(60)   VALUE '28Do your friends pick on you?'.
000349
000350             05  PIC X(30)   VALUE '34Are you sure you have any fr'.
000351             05  PIC X(30)   VALUE 'iends?'.
000352
000353             05  PIC x(60)   VALUE '30Do you impose on your friends?'.
000354
000355             05  PIC X(30)   VALUE '42Perhaps your love for friend'.
000356             05  PIC X(30)   VALUE 's worries you.'.
000357
000358             05  PIC x(60)   VALUE '23Do computers worry you?'.
000359
000360             05  PIC X(30)   VALUE '39Are you talking about me in '.
000361             05  PIC X(30)   VALUE 'particular?'.
000362
000363             05  PIC X(30)   VALUE '31Are you frightened by machin'.
000364             05  PIC X(30)   VALUE 'es?'.
000365
000366             05  PIC x(60)   VALUE '29Why do you mention computers?'.
000367
000368             05  PIC X(30)   VALUE '56What do you think machines h'.
000369             05  PIC X(30)   VALUE 'ave to do with your problem?'.
000370
000371             05  PIC X(30)   VALUE '42Don't you think computers ca'.
000372             05  PIC X(30)   VALUE 'n help people?'.
000373
000374             05  PIC X(30)   VALUE '43What is it about machines th'.
000375             05  PIC X(30)   VALUE 'at worries you?'.
000376
000377             05  PIC X(30)   VALUE '44Say, do you have any psychol'.
000378             05  PIC X(30)   VALUE 'ogical problems?'.
000379
000380             05  PIC x(60)   VALUE '30What does that suggest to you?'.
000381             05  PIC x(60)   VALUE '06I see.'.
000382
000383             05  PIC X(30)   VALUE '36I'm not sure I understand yo'.
000384             05  PIC X(30)   VALUE 'u fully.'.
000385
000386             05  PIC X(30)   VALUE '36Come, Come, elucidate your t'.
000387             05  PIC X(30)   VALUE 'houghts.'.
000388
000389             05  PIC x(60)   VALUE '26Can you elaborate on that?'.
000390             05  PIC x(60)   VALUE '26That is quite interesting.'.
000391
000392         01  540-REPLY-TABLE         REDEFINES 540-REPLY-TABLE-DATA.
000393             05  540-REPLY-ENTRY         OCCURS 112 TIMES
000394                                         INDEXED BY 540-R.
000395                 10  540-REPLY-LENGTH        PIC 9(02).
000396                 10  540-REPLY               PIC X(58).
000397
000398
000399         01  560-REPLY-LOCATER-DATA.
000400             05  FILLER      PIC X(12)   VALUE '000100030004'.
000401             05  FILLER      PIC X(12)   VALUE '000400050005'.
000402             05  FILLER      PIC X(12)   VALUE '000600090009'.
000403             05  FILLER      PIC X(12)   VALUE '000600090009'.
000404             05  FILLER      PIC X(12)   VALUE '001000130013'.
000405             05  FILLER      PIC X(12)   VALUE '001400160016'.
000406             05  FILLER      PIC X(12)   VALUE '001700190019'.
000407             05  FILLER      PIC X(12)   VALUE '002000210021'.
000408             05  FILLER      PIC X(12)   VALUE '002200240024'.
000409             05  FILLER      PIC X(12)   VALUE '002500270027'.
000410             05  FILLER      PIC X(12)   VALUE '002800310031'.
000411             05  FILLER      PIC X(12)   VALUE '002800310031'.
000412             05  FILLER      PIC X(12)   VALUE '003200340034'.
000413             05  FILLER      PIC X(12)   VALUE '003500390039'.
000414             05  FILLER      PIC X(12)   VALUE '004000480048'.
000415             05  FILLER      PIC X(12)   VALUE '004000480048'.
000416             05  FILLER      PIC X(12)   VALUE '004000480048'.
000417             05  FILLER      PIC X(12)   VALUE '004000480048'.
000418             05  FILLER      PIC X(12)   VALUE '004000480048'.
000419             05  FILLER      PIC X(12)   VALUE '004000480048'.
000420             05  FILLER      PIC X(12)   VALUE '004900500050'.
000421             05  FILLER      PIC X(12)   VALUE '005100540054'.
000422             05  FILLER      PIC X(12)   VALUE '005500580058'.
000423             05  FILLER      PIC X(12)   VALUE '005900620062'.
000424             05  FILLER      PIC X(12)   VALUE '006300630063'.
000425             05  FILLER      PIC X(12)   VALUE '006300630063'.
000426             05  FILLER      PIC X(12)   VALUE '006400680068'.
000427             05  FILLER      PIC X(12)   VALUE '006900730073'.
000428             05  FILLER      PIC X(12)   VALUE '007400750075'.
000429             05  FILLER      PIC X(12)   VALUE '007600790079'.
000430             05  FILLER      PIC X(12)   VALUE '008000820082'.
000431             05  FILLER      PIC X(12)   VALUE '008300890089'.
000432             05  FILLER      PIC X(12)   VALUE '009000920092'.
000433             05  FILLER      PIC X(12)   VALUE '009300980098'.
000434             05  FILLER      PIC X(12)   VALUE '009901050105'.
000435             05  FILLER      PIC X(12)   VALUE '010601120112'.
000436
000437         01  560-REPLY-LOCATER-TABLE REDEFINES 560-REPLY-LOCATER-DATA.
000438             05  560-REPLY-LOCATER-ENTRY OCCURS 36 TIMES INDEXED BY 560-L.
000439                 10  560-REPLY-LO            PIC 9(04).
000440                 10  560-REPLY-HI            PIC 9(04).
000441                 10  560-REPLY-LAST-USED     PIC 9(04).
000442
000443         01  600-PROGRAM-MESSAGES.
000444             05  600-REPLY-LIST.
000445                 10  FILLER                  PIC X(07)   VALUE 'Reply: '.
000446                 10  600-REPLY-DATA          PIC X(70)   VALUE SPACES.
000447
000448             05  600-INITIAL-MESSAGE         PIC X(40)   VALUE
000449                 'Hi!  I'm ELIZA.  What's your problem?'.
000450
000451             05  600-GOODBYE-MESSAGE         PIC X(40)   VALUE
000452                 'If that's how you feel--goodbye...'.
000453
000454             05  600-NO-REPEAT-MSG           PIC X(32)   VALUE
000455                 'Please don't repeat yourself!'.
000456
000457         PROCEDURE DIVISION.
000458
000459        ****************************************************************
000460        *    0 0 0 0 - M A I N L I N E .                               *
000461        ****************************************************************
000462        *    START THE PSYCHOTHERAPIST DIALOG WITH THE USER, ANALYZE   *
000463        *    THE USER INPUT AND GENERATE THE REPLIES.  THE USER CAN    *
000464        *    TYPE 'SHUT UP' OR SIMPLY 'SHUT' TO TERMINATE THE SESSION. *
000465        ****************************************************************
000466
000467         0000-MAINLINE.
000468
000469             DISPLAY SPACE
warning: DISPLAY statement not terminated by END-DISPLAY
000470             MOVE SPACE                  TO 100-EOF-FLAG
000471             DISPLAY 600-INITIAL-MESSAGE
warning: DISPLAY statement not terminated by END-DISPLAY
000472             PERFORM UNTIL 88-100-ALL-DONE
000473                 ACCEPT 200-USER-INPUT
warning: ACCEPT statement not terminated by END-ACCEPT
000474                 MOVE FUNCTION LOWER-CASE (200-USER-INPUT)
000475                                         TO 210-USER-INPUT-LC
000476                 IF 210-USER-INPUT-LC (1:4) = 300-SHUT
000477                     SET 88-100-ALL-DONE TO TRUE
000478                     DISPLAY 600-GOODBYE-MESSAGE
warning: DISPLAY statement not terminated by END-DISPLAY
000479                 ELSE
000480                     IF 210-USER-INPUT-LC = 220-LAST-USER-INPUT
000481                         DISPLAY 600-NO-REPEAT-MSG
warning: DISPLAY statement not terminated by END-DISPLAY
000482                     ELSE
000483                         MOVE 210-USER-INPUT-LC
000484                                         TO 220-LAST-USER-INPUT
000485                         PERFORM 1000-SCAN-FOR-KEYWORD
000486                         IF 400-HOLD-OFFSET > ZERO
000487                             PERFORM 2000-TRANSLATE-USER-INPUT
000488                         END-IF
000489                         PERFORM 3000-BUILD-KEYWORD-REPLY
000490                     END-IF
000491                 END-IF
000492             END-PERFORM
000493
000494             STOP RUN.
000495
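The 0000-MAINLINE paragraph above is a plain read/check/reply loop. As a hypothetical sketch only: the Python below mirrors that control flow, with `scan_for_keyword`, `translate`, and `build_reply` as illustrative stand-ins for the COBOL paragraphs of the same names (they are not part of the listing).

```python
def run_eliza(readline, write, scan_for_keyword, translate, build_reply):
    """Mirror of 0000-MAINLINE: greet, loop until 'shut', reject repeats."""
    write("Hi!  I'm ELIZA.  What's your problem?")
    last_input = ""
    while True:
        text = readline().lower()
        if text[:4] == "shut":                 # 'shut up' or simply 'shut'
            write("If that's how you feel--goodbye...")
            break
        if text == last_input:
            write("Please don't repeat yourself!")
            continue
        last_input = text
        keyword_index, offset = scan_for_keyword(text)
        # 400-HOLD-OFFSET > ZERO means there is user text worth echoing
        echo = translate(text[offset - 1:]) if offset > 0 else ""
        write(build_reply(keyword_index, echo))
```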
000496        ****************************************************************
000497        *    1 0 0 0 - S C A N - F O R - K E Y W O R D .               *
000498        ****************************************************************
000499        *    SEARCH THE USER INPUT FOR KEYWORDS THAT WILL TRIGGER      *
000500        *    THE RESPONSES FROM THE REPLY TABLE.                       *
000501        ****************************************************************
000502
000503         1000-SCAN-FOR-KEYWORD.
000504
000505             PERFORM 1100-MASK-STRING-HI
000506
000507             SET 88-100-KEYWORD-NOT-FOUND TO TRUE
000508             MOVE ZERO                   TO 400-HOLD-OFFSET
000509             PERFORM VARYING 400-SUB FROM +1 BY +1
000510                     UNTIL   400-SUB > 300-MAX-SCAN-LEN
000511                     OR      88-100-KEYWORD-FOUND
000512                 PERFORM VARYING 500-K FROM +1 BY +1
000513                         UNTIL   500-K > 300-MAX-KEYWORD-ENTRIES
000514                         OR      88-100-KEYWORD-FOUND
000515                     MOVE 500-KW-LEN (500-K)
000516                                         TO 400-HOLD-KW-LEN
000517                     IF 210-USER-INPUT-LC (400-SUB:400-HOLD-KW-LEN) =
000518                             500-KEYWORD (500-K)
000519                         SET 400-HOLD-500-K TO 500-K
warning: some digits may be truncated
000520                         SET 88-100-KEYWORD-FOUND TO TRUE
000521                         COMPUTE 400-HOLD-OFFSET =
warning: COMPUTE statement not terminated by END-COMPUTE
000522                             400-SUB + 400-HOLD-KW-LEN
000523                         COMPUTE 400-SUB = 400-SCAN-LEN + 1
warning: COMPUTE statement not terminated by END-COMPUTE
000524                     END-IF
000525                 END-PERFORM
000526             END-PERFORM
000527
000528             IF 88-100-KEYWORD-NOT-FOUND
000529                 MOVE 300-MAX-KEYWORD-ENTRIES
000530                                         TO 400-HOLD-500-K
000531                 SET 88-100-KEYWORD-FOUND TO TRUE
000532             END-IF
000533
000534             PERFORM 1200-RESTORE-STRING-HI
000535             .
000536
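The scan above slides a window over the first 30 characters of the input and tries every keyword at every position, falling back to the last table entry (NOKEYFOUND) when nothing matches. A minimal Python sketch of that double loop, assuming the keywords arrive as plain trimmed strings rather than the fixed-width table slots:

```python
def scan_for_keyword(text, keywords, max_scan_len=30):
    """Return (1-based keyword index, 1-based offset just past the match).

    Offset 0 means no keyword was found, matching 400-HOLD-OFFSET = ZERO;
    the last entry of `keywords` is the NOKEYFOUND fallback.
    """
    for sub in range(max_scan_len):            # 0-based scan position
        for k, kw in enumerate(keywords, start=1):
            if text[sub:sub + len(kw)] == kw:
                # COBOL computes 400-SUB + keyword length (both 1-based)
                return k, sub + 1 + len(kw)
    return len(keywords), 0
```

The returned offset points at the text to echo back, i.e. `text[offset - 1:]` in 0-based Python indexing.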
000537        ****************************************************************
000538        *    1 1 0 0 - M A S K - S T R I N G - H I .                   *
000539        ****************************************************************
000540        *    WORDS LIKE 'THING' AND 'HIGH' WERE CAUSING A KEYWORD      *
000541        *    'HI' MATCH THAT TRIGGERED THE HELLO/HI KEYWORD RESPONSES, *
000542        *    SO THEY ARE MASKED HERE TO PREVENT THAT.                  *
000543        *    ALSO REMOVE TRAILING '?', '!', AND '.' CHARACTERS.        *
000544        ****************************************************************
000545
000546         1100-MASK-STRING-HI.
000547
000548             MOVE FUNCTION SUBSTITUTE
000549                 (210-USER-INPUT-LC, 520-THING-IN, 520-THING-OUT,
000550                                     520-HIGH-IN,  520-HIGH-OUT,
000551                                     520-SHI-IN,   520-SHI-OUT,
000552                                     520-CHI-IN,   520-CHI-OUT,
000553                                     520-HIT-IN,   520-HIT-OUT,
000554                                     520-OUR-IN,   520-OUR-OUT,
000555                                     520-QMARK-IN, 520-QMARK-OUT,
000556                                     520-XMARK-IN, 520-QMARK-OUT,
000557                                     520-FSTOP-IN, 520-FSTOP-OUT)
000558                                         TO 250-SUBSTITUTE-WORK
000559             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
000560        ****************************************************************
000561        *    REMOVE MULTIPLE TRAILING QUESTION MARKS, EXCLAMATION      *
000562        *    POINTS, AND PERIODS (FULL STOPS).                         *
000563        ****************************************************************
000564             MOVE FUNCTION SUBSTITUTE
000565                 (210-USER-INPUT-LC, 520-QMARK-IN, 520-QMARK-OUT,
000566                                     520-XMARK-IN, 520-QMARK-OUT,
000567                                     520-FSTOP-IN, 520-FSTOP-OUT)
000568                                         TO 250-SUBSTITUTE-WORK
000569             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
000570             MOVE FUNCTION SUBSTITUTE
000571                 (210-USER-INPUT-LC, 520-QMARK-IN, 520-QMARK-OUT,
000572                                     520-XMARK-IN, 520-QMARK-OUT,
000573                                     520-FSTOP-IN, 520-FSTOP-OUT)
000574                                         TO 250-SUBSTITUTE-WORK
000575             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
000576             .
000577
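The masking trick above is simple but easy to miss: words such as 'thing' and 'hit' are rewritten with a '!' (and 'our' with a zero) so the 'hi' keyword cannot fire inside them, and the SUBSTITUTE pass over trailing punctuation is run three times to consume runs like '???'. A Python sketch of the same idea, using the same substitution pairs as the 520-* constants (and assuming, like the listing, that the input is space-padded so punctuation is followed by a blank):

```python
MASK_PAIRS = [("thing", "th!ng"), ("high", "h!gh"), ("shi", "sh!"),
              ("chi", "ch!"), ("hit", "h!t"), ("our", "0ur")]

def mask_hi(text):
    """Hide 'hi' inside longer words, then blank trailing '? ', '! ', '. '."""
    for plain, masked in MASK_PAIRS:
        text = text.replace(plain, masked)
    for _ in range(3):                 # the listing repeats the pass 3 times
        for punct in ("? ", "! ", ". "):
            text = text.replace(punct, "  ")
    return text

def unmask_hi(text):
    """Inverse of the masking pass (1200-RESTORE-STRING-HI)."""
    for plain, masked in MASK_PAIRS:
        text = text.replace(masked, plain)
    return text
```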
000578        ****************************************************************
000579        *    1 2 0 0 - R E S T O R E - S T R I N G - H I .             *
000580        ****************************************************************
000581        *    AFTER COMPLETING THE KEYWORD SEARCH, RESTORE THE 'HI'     *
000582        *    STRING IN THE USER INPUT.                                 *
000583        ****************************************************************
000584
000585         1200-RESTORE-STRING-HI.
000586
000587             MOVE FUNCTION SUBSTITUTE
000588                 (210-USER-INPUT-LC, 520-THING-OUT, 520-THING-IN,
000589                                     520-HIGH-OUT,  520-HIGH-IN,
000590                                     520-SHI-OUT,   520-SHI-IN,
000591                                     520-CHI-OUT,   520-CHI-IN,
000592                                     520-HIT-OUT,   520-HIT-IN,
000593                                     520-OUR-OUT,   520-OUR-IN)
000594                                         TO 250-SUBSTITUTE-WORK
000595             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
000596             .
000597
000598        ****************************************************************
000599        *    2 0 0 0 - T R A N S L A T E - U S E R - I N P U T .       *
000600        ****************************************************************
000601        *    PERFORM PRONOUN REPLACEMENT AND CONJUGATION ON THE USER   *
000602        *    INPUT SO IT WILL SOUND FAIRLY NORMAL WHEN APPENDED TO     *
000603        *    THE DOCTOR'S REPLY.                                       *
000604        ****************************************************************
000605
000606         2000-TRANSLATE-USER-INPUT.
000607
000608             MOVE 210-USER-INPUT-LC (400-HOLD-OFFSET:)
000609                                         TO 230-TRANSLATED-INPUT.
000610
000611             MOVE FUNCTION SUBSTITUTE
000612                 (230-TRANSLATED-INPUT, 520-ARE-IN,  520-AM-OUT,
000613                                        520-WERE-IN, 520-WAS-OUT,
000614                                        520-YOU-IN,  520-I-FIX,
000615                                        520-YOUR-IN, 520-MY-FIX,
000616                                        520-MY-IN,   520-YOUR-FIX,
000617                                        520-IVE-IN,  520-YOUVE-OUT,
000618                                        520-IM-IN,   520-YOURE-OUT,
000619                                        520-I-AM-IN, 520-YOURE-OUT,
000620                                        520-ME-IN,   520-YOU-FIX,
000621                                        520-I-IN,    520-YOU-FIX,
000622                                        520-YOURE-IN, 520-IM-FIX,
000623                                    520-YOU-ARE-IN,  520-I-AM-FIX,
000624                                    520-YOURSELF-IN, 520-MYSELF-OUT)
000625                                         TO 250-SUBSTITUTE-WORK.
000626
000627             MOVE 250-SUBSTITUTE-WORK TO 230-TRANSLATED-INPUT.
warning: sending field larger than receiving field
000628
000629             MOVE FUNCTION SUBSTITUTE
000630                 (230-TRANSLATED-INPUT, 520-I-FIX,     520-I-OUT,
000631                                        520-IM-FIX,    520-IM-OUT,
000632                                        520-I-AM-FIX,  520-I-AM-OUT,
000633                                        520-MY-FIX,    520-MY-OUT,
000634                                        520-YOUR-FIX,  520-YOUR-OUT,
000635                                        520-YOU-FIX,   520-YOU-OUT)
000636                                         TO 250-SUBSTITUTE-WORK.
000637
000638             MOVE 250-SUBSTITUTE-WORK    TO 230-TRANSLATED-INPUT
warning: sending field larger than receiving field
000639             .
000640
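The two SUBSTITUTE calls above implement a two-pass swap: the first pass tags each replaced pronoun with '#' (' you ' becomes ' i# ') so the substitution running in the opposite direction cannot swap it straight back, and the second pass strips the tags. A cut-down Python sketch of that technique, using only a few of the listing's pairs (the real table also handles " i've ", " i'm ", " i am ", and " yourself "):

```python
FIRST_PASS = [(" are ", " am "), (" were ", " was "),
              (" you ", " i# "), (" your ", " my# "),
              (" my ", " your# "), (" me ", " you# "), (" i ", " you# ")]

SECOND_PASS = [(" i# ", " I "), (" my# ", " my "),
               (" your# ", " your "), (" you# ", " you ")]

def translate(text):
    """Swap first/second person; '#' tags stop a pronoun being swapped twice."""
    for old, new in FIRST_PASS:
        text = text.replace(old, new)
    for old, new in SECOND_PASS:
        text = text.replace(old, new)
    return text
```

Without the '#' tag, ' you ' would become ' i ' and then the later ' i ' rule would immediately turn it back into ' you '.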
000641        ****************************************************************
000642        *    3 0 0 0 - B U I L D - K E Y W O R D - R E P L Y .         *
000643        ****************************************************************
000644        *    BUILD THE REPLY BASED ON THE KEYWORD FOUND IN THE USER    *
000645        *    INPUT.  NOTE THERE ARE A VARIABLE NUMBER OF POSSIBLE      *
000646        *    REPLIES FOR EACH KEYWORD, AND SOME REPLIES INCLUDE TEXT   *
000647        *    ECHOED FROM THE USER INPUT.                               *
000648        ****************************************************************
000649
000650         3000-BUILD-KEYWORD-REPLY.
000651
000652             SET 560-L                   TO 400-HOLD-500-K
000653             ADD +1                      TO 560-REPLY-LAST-USED (560-L)
warning: ADD statement not terminated by END-ADD
000654             IF 560-REPLY-LAST-USED (560-L) > 560-REPLY-HI (560-L)
000655                 MOVE 560-REPLY-LO (560-L) TO 560-REPLY-LAST-USED (560-L)
000656             END-IF
000657
000658             SET 540-R                    TO 560-REPLY-LAST-USED (560-L)
000659             MOVE 540-REPLY (540-R)       TO 240-REPLY
000660             MOVE 540-REPLY-LENGTH (540-R)    TO 400-SUB
000661             IF 240-REPLY (400-SUB:1) = 300-ASTERISK
000662                 MOVE SPACE               TO 240-REPLY (400-SUB:1)
000663                 MOVE 230-TRANSLATED-INPUT
warning: sending field larger than receiving field
000664                                          TO 240-REPLY (400-SUB:)
000665                 PERFORM 3100-FIX-MORE-BAD-GRAMMAR
000666                 MOVE ZERO                TO 400-SPACES-COUNT
000667                 INSPECT 240-REPLY TALLYING 400-SPACES-COUNT
000668                     FOR TRAILING SPACES
000669        ****************************************************************
000670        *        MERGE USER INPUT INTO THE REPLY AND THEN CORRECT      *
000671        *        ENDING PUNCTUATION FOR '?' OR '.' (FULL-STOP).        *
000672        ****************************************************************
000673                 IF  400-SPACES-COUNT > ZERO
000674                 AND 400-SPACES-COUNT < (LENGTH OF 240-REPLY) - 1
000675                     COMPUTE 400-OFFSET =
000676                         (LENGTH OF 240-REPLY) - 400-SPACES-COUNT + 1
000677                     END-COMPUTE
000678                     IF 560-REPLY-LAST-USED (560-L) = 02 OR 04 OR 05
000679                     OR 08 OR 18 OR 24 OR 33 OR 39 OR 81
000680                         MOVE '.'         TO 240-REPLY (400-OFFSET:1)
000681                     ELSE
000682                         MOVE '?'         TO 240-REPLY (400-OFFSET:1)
000683                     END-IF
000684                 END-IF
000685             END-IF
000686
000687             DISPLAY 240-REPLY
warning: DISPLAY statement not terminated by END-DISPLAY
000688             .
000689
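The reply builder above cycles round-robin through the slice of the reply table owned by the matched keyword, and a trailing '*' in a reply marks where the translated echo of the user's words is spliced in. A simplified Python sketch under stated assumptions: indices are 0-based, the per-keyword (lo, hi) slice and last-used state are passed in explicitly, and the closing punctuation is always '?' (the listing instead picks '.' for a hard-coded set of reply numbers).

```python
def build_reply(replies, lo, hi, last_used, echo):
    """Pick the next reply in the keyword's (lo, hi) slice; splice echo at '*'.

    `last_used` is a mutable dict keyed by (lo, hi), mirroring
    560-REPLY-LAST-USED, which wraps from hi back to lo.
    """
    nxt = last_used.get((lo, hi), lo - 1) + 1
    if nxt > hi:
        nxt = lo
    last_used[(lo, hi)] = nxt
    reply = replies[nxt]
    if reply.endswith("*"):
        reply = reply[:-1] + " " + echo.strip() + "?"
    return reply
```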
000690        ****************************************************************
000691        *    3 1 0 0 - F I X - M O R E - B A D - G R A M M A R .       *
000692        ****************************************************************
000693        *    HERE ARE SOME MORE FIXUPS FOR GRAMMAR PROBLEMS.  BUT IT   *
000694        *    DOESN'T SOLVE ALL OF THEM.                                *
000695        ****************************************************************
000696
000697         3100-FIX-MORE-BAD-GRAMMAR.
000698
000699             MOVE FUNCTION SUBSTITUTE (240-REPLY,
000700                 ' you want I ',            ' you want me ',
000701                 ' you got I ',             ' you got me ',
000702                 ' to make I ',             ' to make me ',
000703                 ' you been I ',            ' you been me ',
000704                 ' you be I ',              ' you be me ',
000705                 ' to be I ',               ' to be me ',
000706                 ' soon got I ',            ' soon got me ',
000707                 ' never got I ',           ' never got me ',
000708                 ' sometimes also want I ', ' sometimes also want me ',
000709                 ' normal to be I ',        ' normal to be me ',
000710                 ' enjoy being I ',         ' enjoy being me ',
000711                 " can't make I ",          " can't make me ",
000712                 ' can now make I ',        ' can now make me ',
000713                 ' I are ',                 ' I am ',
000714                 ' you am ',                ' you are ',
000715                 ' with I ',                ' with me')
000716                                         TO 250-SUBSTITUTE-WORK.
000717
000718             MOVE 250-SUBSTITUTE-WORK TO 240-REPLY.
warning: sending field larger than receiving field
000719
000720         END PROGRAM ELIZA.
000721  ||||||| .r0
error: invalid indicator '|' at column 7
000722  =======
error: invalid indicator '=' at column 7
000723         IDENTIFICATION DIVISION.
000724
000725         PROGRAM-ID.             ELIZA.
error: redefinition of program ID 'ELIZA'
000726        *AUTHOR.                 ARNOLD J. TREMBLEY.
000727        *DATE-WRITTEN.           2017-10-01.
000728        *SECURITY.               THIS PROGRAM IS PUBLIC DOMAIN FREEWARE.
000729
000730        ****************************************************************
000731        *                                                              *
000732        *    https://en.wikipedia.org/wiki/ELIZA                       *
000733        *    ELIZA is an early natural language processing program     *
000734        *    created around 1964 by Joseph Wiezenbaum at MIT.  This    *
000735        *    version is adapted from ELIZA.BAS which appeared in       *
000736        *    Creative Computing magazine in 1977, written by Jeff      *
000737        *    Shrager and adapted for IBM PC in the early 1980's by     *
000738        *    Patricia Danielson and Paul Hashfield.                    *
000739        *                                                              *
000740        *    COBOL translation by Arnold Trembley, 2017-10-01.         *
000741        *    [email protected]                                   *
000742        *    Using MinGW GnuCOBOL 2.2 for Windows 7.                   *
000743        *    This version is public domain freeware.                   *
000744        *                                                              *
000745        *    ELIZA simulates a psychotherapist interacting with a      *
000746        *    human patient. Enter 'shut up' to stop the dialog.        *
000747        *                                                              *
000748        ****************************************************************
000749
000750         ENVIRONMENT DIVISION.
000751
000752         CONFIGURATION SECTION.
000753
000754         REPOSITORY.
000755             FUNCTION ALL INTRINSIC.
000756
000757         INPUT-OUTPUT SECTION.
000758
000759         FILE-CONTROL.
000760
000761         DATA DIVISION.
000762
000763         FILE SECTION.
000764
000765         WORKING-STORAGE SECTION.
000766
000767         01  100-PROGRAM-FLAGS.
000768             05  100-EOF-FLAG                PIC X(01)   VALUE SPACE.
000769                 88  88-100-ALL-DONE                     VALUE 'Y'.
000770             05  100-KEYWORD-FLAG            PIC X(01)   VALUE SPACE.
000771                 88  88-100-KEYWORD-FOUND                VALUE 'Y'.
000772                 88  88-100-KEYWORD-NOT-FOUND            VALUE 'N'.
000773
000774         01  200-USER-INPUT                  PIC X(80)   VALUE SPACES.
000775
000776         01  210-USER-INPUT-LC               PIC X(80)   VALUE SPACES.
000777
000778         01  220-LAST-USER-INPUT             PIC X(80)   VALUE SPACES.
000779
000780         01  230-TRANSLATED-INPUT            PIC X(80)   VALUE SPACES.
000781
000782         01  240-REPLY                       PIC X(79)   VALUE SPACES.
000783
000784         01  250-SUBSTITUTE-WORK             PIC X(100)  VALUE SPACES.
000785
000786         01  300-PROGRAM-CONSTANTS.
000787             05  300-MAX-KEYWORD-ENTRIES     PIC S9(4)   COMP VALUE +36.
000788             05  300-MAX-SCAN-LEN            PIC S9(4)   COMP VALUE +30.
000789             05  300-SHUT                    PIC X(04)   VALUE 'shut'.
000790             05  300-ASTERISK                PIC X(01)   VALUE '*'.
000791
000792         01  400-PROGRAM-COUNTERS.
000793             05  400-HOLD-KW-LEN             PIC S9(4)   COMP VALUE ZERO.
000794             05  400-SCAN-LEN                PIC S9(4)   COMP VALUE ZERO.
000795             05  400-HOLD-500-K              PIC S9(4)   COMP VALUE +0.
000796             05  400-HOLD-OFFSET             PIC S9(4)   COMP VALUE +0.
000797             05  400-OFFSET                  PIC S9(4)   COMP VALUE +0.
000798             05  400-SUB                     PIC S9(4)   COMP VALUE ZERO.
000799             05  400-SPACES-COUNT            PIC S9(4)   COMP VALUE ZERO.
000800
000801         01  500-KEYWORD-TABLE-DATA.
000802             05  FILLER   PIC X(16)  VALUE '07can you '.
000803             05  FILLER   PIC X(16)  VALUE '05can i '.
000804             05  FILLER   PIC X(16)  VALUE '07you are '.
000805             05  FILLER   PIC X(16)  VALUE "06you're ".
000806             05  FILLER   PIC X(16)  VALUE "07i don't ".
000807             05  FILLER   PIC X(16)  VALUE '06i feel  '.
000808             05  FILLER   PIC X(16)  VALUE "13why don't you ".
000809             05  FILLER   PIC X(16)  VALUE "11why can't i ".
000810             05  FILLER   PIC X(16)  VALUE '07are you '.
000811             05  FILLER   PIC X(16)  VALUE "07i can't ".
000812             05  FILLER   PIC X(16)  VALUE '04i am '.
000813             05  FILLER   PIC X(16)  VALUE "03i'm  ".
000814             05  FILLER   PIC X(16)  VALUE '03you '.
000815             05  FILLER   PIC X(16)  VALUE '06i want '.
000816             05  FILLER   PIC X(16)  VALUE '04what '.
000817             05  FILLER   PIC X(16)  VALUE '03how '.
000818             05  FILLER   PIC X(16)  VALUE '03who '.
000819             05  FILLER   PIC X(16)  VALUE '05where '.
000820             05  FILLER   PIC X(16)  VALUE '04when '.
000821             05  FILLER   PIC X(16)  VALUE '03why '.
000822             05  FILLER   PIC X(16)  VALUE '04name '.
000823             05  FILLER   PIC X(16)  VALUE '05cause '.
000824             05  FILLER   PIC X(16)  VALUE '05sorry '.
000825             05  FILLER   PIC X(16)  VALUE '05dream '.
000826             05  FILLER   PIC X(16)  VALUE '05hello '.
000827             05  FILLER   PIC X(16)  VALUE '02hi '.
000828             05  FILLER   PIC X(16)  VALUE '05maybe '.
000829             05  FILLER   PIC X(16)  VALUE '02no '.
000830             05  FILLER   PIC X(16)  VALUE '04your '.
000831             05  FILLER   PIC X(16)  VALUE '06always '.
000832             05  FILLER   PIC X(16)  VALUE '05think '.
000833             05  FILLER   PIC X(16)  VALUE '05alike '.
000834             05  FILLER   PIC X(16)  VALUE '03yes '.
000835             05  FILLER   PIC X(16)  VALUE '06friend '.
000836             05  FILLER   PIC X(16)  VALUE '08computer '.
000837             05  FILLER   PIC X(16)  VALUE '10NOKEYFOUND'.
000838
000839         01  500-KEYWORD-TABLE       REDEFINES 500-KEYWORD-TABLE-DATA.
000840             05  500-KEYWORD-ENTRY       OCCURS 36 TIMES
000841                                         INDEXED BY 500-K.
000842                 10  500-KW-LEN              PIC 9(02).
000843                 10  500-KEYWORD             PIC X(14).
000844
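The REDEFINES above overlays the FILLER data as 36 fixed 16-byte slots, each a 2-digit keyword length followed by a 14-character space-padded keyword. As an illustrative sketch (the helper name is hypothetical), decoding that layout from the raw table string in Python looks like this:

```python
def parse_keyword_table(raw, entry_size=16):
    """Split a packed table into (length, keyword) pairs.

    Each slot is 2 digits of length plus a space-padded keyword,
    mirroring 500-KW-LEN / 500-KEYWORD in the REDEFINES above.
    """
    entries = []
    for i in range(0, len(raw), entry_size):
        slot = raw[i:i + entry_size].ljust(entry_size)
        kw_len = int(slot[:2])
        entries.append((kw_len, slot[2:2 + kw_len]))
    return entries
```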
000845         01  520-TRANSLATION-CONSTANTS.
000846             05 520-THING-IN                 PIC X(05)   VALUE 'thing'.
000847             05 520-HIGH-IN                  PIC X(04)   VALUE 'high'.
000848             05 520-SHI-IN                   PIC X(03)   VALUE 'shi'.
000849             05 520-CHI-IN                   PIC X(03)   VALUE 'chi'.
000850             05 520-HIT-IN                   PIC X(03)   VALUE 'hit'.
000851             05 520-OUR-IN                   PIC X(03)   VALUE 'our'.
000852             05 520-QMARK-IN                 PIC X(02)   VALUE '? '.
000853             05 520-XMARK-IN                 PIC X(02)   VALUE '! '.
000854             05 520-FSTOP-IN                 PIC X(02)   VALUE '. '.
000855
000856             05 520-THING-OUT                PIC X(05)   VALUE 'th!ng'.
000857             05 520-HIGH-OUT                 PIC X(04)   VALUE 'h!gh'.
000858             05 520-SHI-OUT                  PIC X(03)   VALUE 'sh!'.
000859             05 520-CHI-OUT                  PIC X(03)   VALUE 'ch!'.
000860             05 520-HIT-OUT                  PIC X(03)   VALUE 'h!t'.
000861             05 520-OUR-OUT                  PIC X(03)   VALUE '0ur'.
000862             05 520-QMARK-OUT                PIC X(02)   VALUE '  '.
000863             05 520-FSTOP-OUT                PIC X(02)   VALUE '  '.
000864
000865             05 520-ARE-IN                   PIC X(05)   VALUE ' are '.
000866             05 520-WERE-IN                  PIC X(06)   VALUE ' were '.
000867             05 520-YOU-IN                   PIC X(05)   VALUE ' you '.
000868             05 520-YOUR-IN                  PIC X(06)   VALUE ' your '.
000869             05 520-MY-IN                    PIC X(04)   VALUE ' my '.
000870             05 520-IVE-IN                   PIC X(06)   VALUE " i've ".
000871             05 520-IM-IN                    PIC X(05)   VALUE " i'm ".
000872             05 520-I-AM-IN                  PIC X(06)   VALUE ' i am '.
000873             05 520-ME-IN                    PIC X(04)   VALUE ' me '.
000874             05 520-I-IN                     PIC X(03)   VALUE ' i '.
000875             05 520-YOURE-IN                 PIC X(08)   VALUE " you're ".
000876             05 520-YOU-ARE-IN           PIC X(09)   VALUE ' you are '.
000877             05 520-YOURSELF-IN          PIC X(10)   VALUE ' yourself '.
000878
000879             05 520-AM-OUT                   PIC X(04)   VALUE ' am '.
000880             05 520-WAS-OUT                  PIC X(05)   VALUE ' was '.
000881             05 520-I-FIX                    PIC X(04)   VALUE ' i# '.
000882             05 520-IM-FIX                   PIC X(06)   VALUE " i'm# ".
000883             05 520-I-AM-FIX                 PIC X(07)   VALUE ' i am# '.
000884             05 520-MY-FIX                   PIC X(05)   VALUE ' my# '.
000885             05 520-YOUR-FIX                 PIC X(07)   VALUE ' your# '.
000886             05 520-YOUVE-OUT                PIC X(08)   VALUE " you've ".
000887             05 520-YOURE-OUT                PIC X(08)   VALUE " you're ".
000888             05 520-YOU-FIX                  PIC X(06)   VALUE ' you# '.
000889             05 520-MYSELF-OUT               PIC X(08)   VALUE ' myself '.
000890
000891             05 520-I-OUT                    PIC X(03)   VALUE ' I '.
000892             05 520-IM-OUT                   PIC X(05)   VALUE " I'm ".
000893             05 520-I-AM-OUT                 PIC X(06)   VALUE ' I am '.
000894             05 520-MY-OUT                   PIC X(04)   VALUE ' my '.
000895             05 520-YOUR-OUT                 PIC X(06)   VALUE ' your '.
000896             05 520-YOU-OUT                  PIC X(05)   VALUE ' you '.
000897
000898
000899         01  540-REPLY-TABLE-DATA.
000900             05  PIC x(60)   VALUE "29Don't you believe that I can*".
000901             05  PIC X(60)   VALUE '29Perhaps you would like me to*'.
000902             05  PIC x(60)   VALUE '29Do you want me to be able to*'.
000903             05  PIC x(60)   VALUE "26Perhaps you don't want to*".
000904             05  PIC x(60)   VALUE '26Do you want to be able to*'.
000905             05  PIC x(60)   VALUE '26What makes you think I am*'.
000906
000907             05  PIC X(30)   VALUE '35Does it please you to believ'.
000908             05  PIC X(30)   VALUE 'e I am*'.
000909
000910             05  PIC x(60)   VALUE '29Perhaps you would like to be*'.
000911
000912             05  PIC X(30)   VALUE '31Do you sometimes wish you we'.
000913             05  PIC X(30)   VALUE 're*'.
000914
000915             05  PIC x(60)   VALUE "17Don't you really*".
000916             05  PIC x(60)   VALUE "14Why don't you*".
000917             05  PIC x(60)   VALUE '26Do you wish to be able to*'.
000918             05  PIC x(60)   VALUE '22Does that trouble you?'.
000919             05  PIC x(60)   VALUE '18Do you often feel*'.
000920             05  PIC x(60)   VALUE '18Do you often feel*'.
000921             05  PIC x(60)   VALUE '21Do you enjoy feeling*'.
000922             05  PIC x(60)   VALUE "30Do you really believe I don't*".
000923             05  PIC x(60)   VALUE '28Perhaps in good time I will*'.
000924             05  PIC x(60)   VALUE '18Do you want me to*'.
000925
000926             05  PIC X(30)   VALUE '35Do you think you should be a'.
000927             05  PIC X(30)   VALUE 'ble to*'.
000928
000929             05  PIC x(60)   VALUE "14Why can't you*".
000930
000931             05  PIC X(30)   VALUE '46Why are you interested in wh'.
000932             05  PIC X(30)   VALUE 'ether or not I am*'.
000933
000934             05  PIC x(60)   VALUE '31Would you prefer if I were not*'.
000935             05  PIC x(60)   VALUE '31Perhaps in your fantasies I am*'.
000936             05  PIC x(60)   VALUE "26How do you know you can't*".
000937             05  PIC x(60)   VALUE '15Have you tried?'.
000938             05  PIC x(60)   VALUE '20Perhaps you can now*'.
000939
000940             05  PIC X(30)   VALUE '35Did you come to me because y'.
000941             05  PIC X(30)   VALUE 'ou are*'.
000942
000943             05  PIC x(60)   VALUE '23How long have you been*'.
000944
000945             05  PIC X(30)   VALUE '34Do you believe it is normal '.
000946             05  PIC X(30)   VALUE 'to be*'.
000947
000948             05  PIC x(60)   VALUE '19Do you enjoy being*'.
000949             05  PIC x(60)   VALUE '31We were discussing you--not me.'.
000950             05  PIC x(60)   VALUE '06Oh, I*'.
000951
000952             05  PIC X(30)   VALUE "44You're not really talking ab".
000953             05  PIC X(30)   VALUE 'out me, are you?'.
000954
000955             05  PIC X(30)   VALUE '37What would it mean to you if'.
000956             05  PIC X(30)   VALUE ' you got*'.
000957
000958             05  PIC x(60)   VALUE '16Why do you want*'.
000959             05  PIC x(60)   VALUE '21Suppose you soon got*'.
000960             05  PIC x(60)   VALUE '22What if you never got*'.
000961             05  PIC x(60)   VALUE '22I sometimes also want*'.
000962             05  PIC x(60)   VALUE '15Why do you ask?'.
000963             05  PIC x(60)   VALUE '32Does that question interest you?'.
000964
000965             05  PIC X(30)   VALUE '38What answer would please you'.
000966             05  PIC X(30)   VALUE ' the most?'.
000967
000968             05  PIC x(60)   VALUE '18What do you think?'.
000969
000970             05  PIC X(30)   VALUE '38Are such questions on your m'.
000971             05  PIC X(30)   VALUE 'ind often?'.
000972
000973             05  PIC X(30)   VALUE '40What is it that you really w'.
000974             05  PIC X(30)   VALUE 'ant to know?'.
000975
000976             05  PIC x(60)   VALUE '27Have you asked anyone else?'.
000977
000978             05  PIC X(30)   VALUE '37Have you asked such question'.
000979             05  PIC X(30)   VALUE 's before?'.
000980
000981             05  PIC X(30)   VALUE '42What else comes to mind when'.
000982             05  PIC X(30)   VALUE ' you ask that?'.
000983
000984             05  PIC x(60)   VALUE "24Names don't interest me.".
000985
000986             05  PIC X(30)   VALUE "41I don't care about names -- ".
000987             05  PIC X(30)   VALUE 'Please go on.'.
000988
000989             05  PIC x(60)   VALUE '24Is that the real reason?'.
000990
000991             05  PIC X(30)   VALUE "37Don't any other reasons come".
000992             05  PIC X(30)   VALUE ' to mind?'.
000993
000994             05  PIC X(30)   VALUE '39Does that reason explain any'.
000995             05  PIC X(30)   VALUE 'thing else?'.
000996
000997             05  PIC X(30)   VALUE '34What other reasons might the'.
000998             05  PIC X(30)   VALUE 're be?'.
000999
001000             05  PIC x(60)   VALUE "23Please don't apologize!".
001001             05  PIC x(60)   VALUE '28Apologies are not necessary.'.
001002
001003             05  PIC X(30)   VALUE '45What feelings do you have wh'.
001004             05  PIC X(30)   VALUE 'en you apologize?'.
001005
001006             05  PIC x(60)   VALUE "22Don't be so defensive!".
001007
001008             05  PIC X(30)   VALUE '36What does that dream suggest'.
001009             05  PIC X(30)   VALUE ' to you?'.
001010
001011             05  PIC x(60)   VALUE '19Do you dream often?'.
001012
001013             05  PIC X(30)   VALUE '35What persons appear in your '.
001014             05  PIC X(30)   VALUE 'dreams?'.
001015
001016             05  PIC X(30)   VALUE '33Are you disturbed by your dr'.
001017             05  PIC X(30)   VALUE 'eams?'.
001018
001019             05  PIC X(30)   VALUE '43How do you do ...Please stat'.
001020             05  PIC X(30)   VALUE 'e your problem.'.
001021
001022             05  PIC x(60)   VALUE '29You don''t seem quite certain.'.
001023             05  PIC x(60)   VALUE '23Why the uncertain tone?'.
001024             05  PIC x(60)   VALUE '27Can't you be more positive?'.
001025             05  PIC x(60)   VALUE '16You aren't sure?'.
001026             05  PIC x(60)   VALUE '15Don''t you know?'.
001027
001028             05  PIC X(30)   VALUE '38Are you saying no just to be'.
001029             05  PIC X(30)   VALUE ' negative?'.
001030
001031             05  PIC x(60)   VALUE '29You are being a bit negative.'.
001032             05  PIC x(60)   VALUE '08Why not?'.
001033             05  PIC x(60)   VALUE '13Are you sure?'.
001034             05  PIC x(60)   VALUE '07Why no?'.
001035             05  PIC x(60)   VALUE '31Why are you concerned about my*'.
001036             05  PIC x(60)   VALUE '20What about your own*'.
001037
001038             05  PIC X(30)   VALUE '36Can you think of a specific '.
001039             05  PIC X(30)   VALUE 'example?'.
001040
001041             05  PIC x(60)   VALUE '05When?'.
001042             05  PIC x(60)   VALUE '25What are you thinking of?'.
001043             05  PIC x(60)   VALUE '15Really, always?'.
001044             05  PIC x(60)   VALUE '23Do you really think so?'.
001045             05  PIC x(60)   VALUE '21But you are not sure*'.
001046             05  PIC x(60)   VALUE '13Do you doubt*'.
001047             05  PIC x(60)   VALUE '12In what way?'.
001048             05  PIC x(60)   VALUE '28What resemblance do you see?'.
001049
001050             05  PIC X(30)   VALUE '40What does the similarity sug'.
001051             05  PIC X(30)   VALUE 'gest to you?'.
001052
001053             05  PIC X(30)   VALUE '34What other connections do yo'.
001054             05  PIC X(30)   VALUE 'u see?'.
001055
001056             05  PIC X(30)   VALUE '38Could there really be some c'.
001057             05  PIC X(30)   VALUE 'onnection?'.
001058
001059             05  PIC x(60)   VALUE '04How?'.
001060             05  PIC x(60)   VALUE '24You seem quite positive.'.
001061             05  PIC x(60)   VALUE '13Are you sure?'.
001062             05  PIC x(60)   VALUE '06I see.'.
001063             05  PIC x(60)   VALUE '13I understand.'.
001064
001065             05  PIC X(30)   VALUE '41Why do you bring up the topi'.
001066             05  PIC X(30)   VALUE 'c of friends?'.
001067
001068             05  PIC x(60)   VALUE '26Do your friends worry you?'.
001069             05  PIC x(60)   VALUE '28Do your friends pick on you?'.
001070
001071             05  PIC X(30)   VALUE '34Are you sure you have any fr'.
001072             05  PIC X(30)   VALUE 'iends?'.
001073
001074             05  PIC x(60)   VALUE '30Do you impose on your friends?'.
001075
001076             05  PIC X(30)   VALUE '42Perhaps your love for friend'.
001077             05  PIC X(30)   VALUE 's worries you.'.
001078
001079             05  PIC x(60)   VALUE '23Do computers worry you?'.
001080
001081             05  PIC X(30)   VALUE '39Are you talking about me in '.
001082             05  PIC X(30)   VALUE 'particular?'.
001083
001084             05  PIC X(30)   VALUE '31Are you frightened by machin'.
001085             05  PIC X(30)   VALUE 'es?'.
001086
001087             05  PIC x(60)   VALUE '29Why do you mention computers?'.
001088
001089             05  PIC X(30)   VALUE '56What do you think machines h'.
001090             05  PIC X(30)   VALUE 'ave to do with your problem?'.
001091
001092             05  PIC X(30)   VALUE '42Don''t you think computers ca'.
001093             05  PIC X(30)   VALUE 'n help people?'.
001094
001095             05  PIC X(30)   VALUE '43What is it about machines th'.
001096             05  PIC X(30)   VALUE 'at worries you?'.
001097
001098             05  PIC X(30)   VALUE '44Say, do you have any psychol'.
001099             05  PIC X(30)   VALUE 'ogical problems?'.
001100
001101             05  PIC x(60)   VALUE '30What does that suggest to you?'.
001102             05  PIC x(60)   VALUE '06I see.'.
001103
001104             05  PIC X(30)   VALUE '36I''m not sure I understand yo'.
001105             05  PIC X(30)   VALUE 'u fully.'.
001106
001107             05  PIC X(30)   VALUE '36Come, Come, elucidate your t'.
001108             05  PIC X(30)   VALUE 'houghts.'.
001109
001110             05  PIC x(60)   VALUE '26Can you elaborate on that?'.
001111             05  PIC x(60)   VALUE '26That is quite interesting.'.
001112
001113         01  540-REPLY-TABLE         REDEFINES 540-REPLY-TABLE-DATA.
001114             05  540-REPLY-ENTRY         OCCURS 112 TIMES
001115                                         INDEXED BY 540-R.
001116                 10  540-REPLY-LENGTH        PIC 9(02).
001117                 10  540-REPLY               PIC X(58).
001118
001119
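[Editor's note: illustrative sketch, not part of the listing.] The REDEFINES above reinterprets the 112 concatenated literals as fixed-width entries: a 2-digit length (540-REPLY-LENGTH, PIC 9(02)) followed by up to 58 characters of reply text (540-REPLY, PIC X(58)), with a '*' at the length position marking replies that echo the user's input. A minimal Python sketch of that decoding:

```python
# Decode one 540-REPLY-ENTRY-style string: '<NN><text>' where NN is the
# display length and a trailing '*' (at position NN) flags replies that
# splice in the user's translated input (see 3000-BUILD-KEYWORD-REPLY).
def decode_reply(entry):
    length = int(entry[:2])            # 540-REPLY-LENGTH, PIC 9(02)
    text = entry[2:2 + length]         # 540-REPLY, fixed-width, padded
    echoes_input = text.endswith("*")  # asterisk convention
    return text.rstrip("*"), echoes_input

print(decode_reply("20What about your own*"))  # ('What about your own', True)
```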
001120         01  560-REPLY-LOCATER-DATA.
001121             05  FILLER      PIC X(12)   VALUE '000100030004'.
001122             05  FILLER      PIC X(12)   VALUE '000400050005'.
001123             05  FILLER      PIC X(12)   VALUE '000600090009'.
001124             05  FILLER      PIC X(12)   VALUE '000600090009'.
001125             05  FILLER      PIC X(12)   VALUE '001000130013'.
001126             05  FILLER      PIC X(12)   VALUE '001400160016'.
001127             05  FILLER      PIC X(12)   VALUE '001700190019'.
001128             05  FILLER      PIC X(12)   VALUE '002000210021'.
001129             05  FILLER      PIC X(12)   VALUE '002200240024'.
001130             05  FILLER      PIC X(12)   VALUE '002500270027'.
001131             05  FILLER      PIC X(12)   VALUE '002800310031'.
001132             05  FILLER      PIC X(12)   VALUE '002800310031'.
001133             05  FILLER      PIC X(12)   VALUE '003200340034'.
001134             05  FILLER      PIC X(12)   VALUE '003500390039'.
001135             05  FILLER      PIC X(12)   VALUE '004000480048'.
001136             05  FILLER      PIC X(12)   VALUE '004000480048'.
001137             05  FILLER      PIC X(12)   VALUE '004000480048'.
001138             05  FILLER      PIC X(12)   VALUE '004000480048'.
001139             05  FILLER      PIC X(12)   VALUE '004000480048'.
001140             05  FILLER      PIC X(12)   VALUE '004000480048'.
001141             05  FILLER      PIC X(12)   VALUE '004900500050'.
001142             05  FILLER      PIC X(12)   VALUE '005100540054'.
001143             05  FILLER      PIC X(12)   VALUE '005500580058'.
001144             05  FILLER      PIC X(12)   VALUE '005900620062'.
001145             05  FILLER      PIC X(12)   VALUE '006300630063'.
001146             05  FILLER      PIC X(12)   VALUE '006300630063'.
001147             05  FILLER      PIC X(12)   VALUE '006400680068'.
001148             05  FILLER      PIC X(12)   VALUE '006900730073'.
001149             05  FILLER      PIC X(12)   VALUE '007400750075'.
001150             05  FILLER      PIC X(12)   VALUE '007600790079'.
001151             05  FILLER      PIC X(12)   VALUE '008000820082'.
001152             05  FILLER      PIC X(12)   VALUE '008300890089'.
001153             05  FILLER      PIC X(12)   VALUE '009000920092'.
001154             05  FILLER      PIC X(12)   VALUE '009300980098'.
001155             05  FILLER      PIC X(12)   VALUE '009901050105'.
001156             05  FILLER      PIC X(12)   VALUE '010601120112'.
001157
001158         01  560-REPLY-LOCATER-TABLE REDEFINES 560-REPLY-LOCATER-DATA.
001159             05  560-REPLY-LOCATER-ENTRY OCCURS 36 TIMES INDEXED BY 560-L.
001160                 10  560-REPLY-LO            PIC 9(04).
001161                 10  560-REPLY-HI            PIC 9(04).
001162                 10  560-REPLY-LAST-USED     PIC 9(04).
001163
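[Editor's note: illustrative sketch, not part of the listing.] Each 560-REPLY-LOCATER-ENTRY gives a keyword a contiguous range of replies [LO, HI] plus a LAST-USED cursor that cycles through the range, so repeating a keyword draws a different reply each time (the stored data initializes LAST-USED to HI+1 so the first use wraps to LO). In Python terms:

```python
# Reply rotation as in 560-REPLY-LOCATER-TABLE: increment LAST-USED and
# wrap back to LO once it passes HI.  Starting last_used at hi makes the
# first call wrap to lo, matching the listing's HI+1 initial values.
def make_locater(lo, hi):
    return {"lo": lo, "hi": hi, "last_used": hi}

def next_reply_index(entry):
    entry["last_used"] += 1
    if entry["last_used"] > entry["hi"]:
        entry["last_used"] = entry["lo"]
    return entry["last_used"]

entry = make_locater(1, 3)
print([next_reply_index(entry) for _ in range(5)])  # [1, 2, 3, 1, 2]
```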
001164         01  600-PROGRAM-MESSAGES.
001165             05  600-REPLY-LIST.
001166                 10  FILLER                  PIC X(07)   VALUE 'Reply: '.
001167                 10  600-REPLY-DATA          PIC X(70)   VALUE SPACES.
001168
001169             05  600-INITIAL-MESSAGE         PIC X(40)   VALUE
001170                 'Hi!  I''m ELIZA.  What''s your problem?'.
001171
001172             05  600-GOODBYE-MESSAGE         PIC X(40)   VALUE
001173                 'If that''s how you feel--goodbye...'.
001174
001175             05  600-NO-REPEAT-MSG           PIC X(32)   VALUE
001176                 'Please don''t repeat yourself!'.
001177
001178         PROCEDURE DIVISION.
001179
001180        ****************************************************************
001181        *    0 0 0 0 - M A I N L I N E .                               *
001182        ****************************************************************
001183        *    START THE PSYCHOTHERAPIST DIALOG WITH THE USER, ANALYZE   *
001184        *    THE USER INPUT AND GENERATE THE REPLIES.  THE USER CAN    *
001185        *    TYPE 'SHUT UP' OR SIMPLY 'SHUT' TO TERMINATE THE SESSION. *
001186        ****************************************************************
001187
001188         0000-MAINLINE.
001189
001190             DISPLAY SPACE
warning: DISPLAY statement not terminated by END-DISPLAY
001191             MOVE SPACE                  TO 100-EOF-FLAG
001192             DISPLAY 600-INITIAL-MESSAGE
warning: DISPLAY statement not terminated by END-DISPLAY
001193             PERFORM UNTIL 88-100-ALL-DONE
001194                 ACCEPT 200-USER-INPUT
warning: ACCEPT statement not terminated by END-ACCEPT
001195                 MOVE FUNCTION LOWER-CASE (200-USER-INPUT)
001196                                         TO 210-USER-INPUT-LC
001197                 IF 210-USER-INPUT-LC (1:4) = 300-SHUT
001198                     SET 88-100-ALL-DONE TO TRUE
001199                     DISPLAY 600-GOODBYE-MESSAGE
warning: DISPLAY statement not terminated by END-DISPLAY
001200                 ELSE
001201                     IF 210-USER-INPUT-LC = 220-LAST-USER-INPUT
001202                         DISPLAY 600-NO-REPEAT-MSG
warning: DISPLAY statement not terminated by END-DISPLAY
001203                     ELSE
001204                         MOVE 210-USER-INPUT-LC
001205                                         TO 220-LAST-USER-INPUT
001206                         PERFORM 1000-SCAN-FOR-KEYWORD
001207                         IF 400-HOLD-OFFSET > ZERO
001208                             PERFORM 2000-TRANSLATE-USER-INPUT
001209                         END-IF
001210                         PERFORM 3000-BUILD-KEYWORD-REPLY
001211                     END-IF
001212                 END-IF
001213             END-PERFORM
001214
001215             STOP RUN.
001216
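[Editor's note: illustrative sketch, not part of the listing.] The 0000-MAINLINE loop above lower-cases each input line, ends the session when it starts with 'shut', complains about an exact repeat of the previous line, and otherwise hands the line to the scan/translate/reply paragraphs. A condensed Python rendering (the pipeline is stubbed with a default reply):

```python
# Condensed 0000-MAINLINE control flow.  'respond' stands in for the
# 1000-/2000-/3000- paragraphs, which are sketched separately.
def eliza_session(lines, respond=lambda s: "What does that suggest to you?"):
    out, last = [], None
    for raw in lines:
        line = raw.lower()
        if line[:4] == "shut":          # 'shut up' or just 'shut' ends it
            out.append("If that's how you feel--goodbye...")
            break
        if line == last:                # 220-LAST-USER-INPUT check
            out.append("Please don't repeat yourself!")
            continue
        last = line
        out.append(respond(line))
    return out

print(eliza_session(["Hello", "Hello", "shut up"]))
```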
001217        ****************************************************************
001218        *    1 0 0 0 - S C A N - F O R - K E Y W O R D .               *
001219        ****************************************************************
001220        *    SEARCH THE USER INPUT FOR KEYWORDS THAT WILL TRIGGER      *
001221        *    THE RESPONSES FROM THE REPLY TABLE.                       *
001222        ****************************************************************
001223
001224         1000-SCAN-FOR-KEYWORD.
001225
001226             PERFORM 1100-MASK-STRING-HI
001227
001228             SET 88-100-KEYWORD-NOT-FOUND TO TRUE
001229             MOVE ZERO                   TO 400-HOLD-OFFSET
001230             PERFORM VARYING 400-SUB FROM +1 BY +1
001231                     UNTIL   400-SUB > 300-MAX-SCAN-LEN
001232                     OR      88-100-KEYWORD-FOUND
001233                 PERFORM VARYING 500-K FROM +1 BY +1
001234                         UNTIL   500-K > 300-MAX-KEYWORD-ENTRIES
001235                         OR      88-100-KEYWORD-FOUND
001236                     MOVE 500-KW-LEN (500-K)
001237                                         TO 400-HOLD-KW-LEN
001238                     IF 210-USER-INPUT-LC (400-SUB:400-HOLD-KW-LEN) =
001239                             500-KEYWORD (500-K)
001240                         SET 400-HOLD-500-K TO 500-K
warning: some digits may be truncated
001241                         SET 88-100-KEYWORD-FOUND TO TRUE
001242                         COMPUTE 400-HOLD-OFFSET =
warning: COMPUTE statement not terminated by END-COMPUTE
001243                             400-SUB + 400-HOLD-KW-LEN
001244                         COMPUTE 400-SUB = 400-SCAN-LEN + 1
warning: COMPUTE statement not terminated by END-COMPUTE
001245                     END-IF
001246                 END-PERFORM
001247             END-PERFORM
001248
001249             IF 88-100-KEYWORD-NOT-FOUND
001250                 MOVE 300-MAX-KEYWORD-ENTRIES
001251                                         TO 400-HOLD-500-K
001252                 SET 88-100-KEYWORD-FOUND TO TRUE
001253             END-IF
001254
001255             PERFORM 1200-RESTORE-STRING-HI
001256             .
001257
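[Editor's note: illustrative sketch, not part of the listing.] The nested PERFORM VARYING loops above slide a cursor over the masked input and test every keyword at each position; the first hit records which keyword fired (400-HOLD-500-K) and the offset just past it (400-HOLD-OFFSET), where the echoed tail of the user's input begins. In Python, with a tiny stand-in keyword table:

```python
# First-match keyword scan as in 1000-SCAN-FOR-KEYWORD.  KEYWORDS is a
# tiny stand-in for the 500-KEYWORD-TABLE (72 entries in the listing).
KEYWORDS = ["dream", "computer", "name", "sorry"]

def scan_for_keyword(text):
    for pos in range(len(text)):
        for k, kw in enumerate(KEYWORDS):
            if text[pos:pos + len(kw)] == kw:
                return k, pos + len(kw)   # keyword index, offset past it
    return None, 0                        # no hit: caller uses fallback replies

print(scan_for_keyword("i had a dream about you"))  # (0, 13)
```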
001258        ****************************************************************
001259        *    1 1 0 0 - M A S K - S T R I N G - H I .                   *
001260        ****************************************************************
001261        *    WORDS LIKE 'THING' AND 'HIGH' WERE CAUSING A KEYWORD      *
001262        *    'HI' MATCH THAT TRIGGERED THE HELLO/HI KEYWORD RESPONSES, *
001263        *    SO THEY ARE MASKED HERE TO PREVENT THAT.                  *
001264        *    ALSO REMOVE TRAILING '?', '!', AND '.' CHARACTERS.        *
001265        ****************************************************************
001266
001267         1100-MASK-STRING-HI.
001268
001269             MOVE FUNCTION SUBSTITUTE
001270                 (210-USER-INPUT-LC, 520-THING-IN, 520-THING-OUT,
001271                                     520-HIGH-IN,  520-HIGH-OUT,
001272                                     520-SHI-IN,   520-SHI-OUT,
001273                                     520-CHI-IN,   520-CHI-OUT,
001274                                     520-HIT-IN,   520-HIT-OUT,
001275                                     520-OUR-IN,   520-OUR-OUT,
001276                                     520-QMARK-IN, 520-QMARK-OUT,
001277                                     520-XMARK-IN, 520-QMARK-OUT,
001278                                     520-FSTOP-IN, 520-FSTOP-OUT)
001279                                         TO 250-SUBSTITUTE-WORK
001280             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
001281        ****************************************************************
001282        *    REMOVE MULTIPLE TRAILING QUESTION MARKS, EXCLAMATION      *
001283        *    POINTS, AND PERIODS (FULL STOPS).                         *
001284        ****************************************************************
001285             MOVE FUNCTION SUBSTITUTE
001286                 (210-USER-INPUT-LC, 520-QMARK-IN, 520-QMARK-OUT,
001287                                     520-XMARK-IN, 520-QMARK-OUT,
001288                                     520-FSTOP-IN, 520-FSTOP-OUT)
001289                                         TO 250-SUBSTITUTE-WORK
001290             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
001291             MOVE FUNCTION SUBSTITUTE
001292                 (210-USER-INPUT-LC, 520-QMARK-IN, 520-QMARK-OUT,
001293                                     520-XMARK-IN, 520-QMARK-OUT,
001294                                     520-FSTOP-IN, 520-FSTOP-OUT)
001295                                         TO 250-SUBSTITUTE-WORK
001296             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
001297             .
001298
001299        ****************************************************************
001300        *    1 2 0 0 - R E S T O R E - S T R I N G - H I .             *
001301        ****************************************************************
001302        *    AFTER COMPLETING THE KEYWORD SEARCH, RESTORE THE 'HI'     *
001303        *    STRING IN THE USER INPUT.                                 *
001304        ****************************************************************
001305
001306         1200-RESTORE-STRING-HI.
001307
001308             MOVE FUNCTION SUBSTITUTE
001309                 (210-USER-INPUT-LC, 520-THING-OUT, 520-THING-IN,
001310                                     520-HIGH-OUT,  520-HIGH-IN,
001311                                     520-SHI-OUT,   520-SHI-IN,
001312                                     520-CHI-OUT,   520-CHI-IN,
001313                                     520-HIT-OUT,   520-HIT-IN,
001314                                     520-OUR-OUT,   520-OUR-IN)
001315                                         TO 250-SUBSTITUTE-WORK
001316             MOVE 250-SUBSTITUTE-WORK    TO 210-USER-INPUT-LC
warning: sending field larger than receiving field
001317             .
001318
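[Editor's note: illustrative sketch, not part of the listing.] The mask/restore pair above exists because words like 'thing' and 'high' contain 'hi' and would falsely trigger the hello/hi replies: before the keyword scan the embedded 'hi' is swapped for a placeholder spelling, and after the scan the original text is restored. A Python sketch with a reduced mask table (the listing also masks 'shi', 'chi', 'hit', and 'our'):

```python
# Mask/unmask as in 1100-MASK-STRING-HI / 1200-RESTORE-STRING-HI, using
# placeholder spellings so the 'hi' keyword scan cannot match inside
# ordinary words.  Reduced table; the listing masks more words.
MASKS = [("thing", "thXng"), ("high", "hXgh")]

def mask(text):
    for plain, masked in MASKS:
        text = text.replace(plain, masked)
    return text

def unmask(text):
    for plain, masked in MASKS:
        text = text.replace(masked, plain)
    return text

s = mask("nothing is too high")
assert "hi" not in s                       # now safe to scan for 'hi'
assert unmask(s) == "nothing is too high"  # round-trips exactly
```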
001319        ****************************************************************
001320        *    2 0 0 0 - T R A N S L A T E - U S E R - I N P U T .       *
001321        ****************************************************************
001322        *    PERFORM PRONOUN REPLACEMENT AND CONJUGATION ON THE USER   *
001323        *    INPUT SO IT WILL SOUND FAIRLY NORMAL WHEN APPENDED TO     *
001324        *    THE DOCTOR'S REPLY.                                       *
001325        ****************************************************************
001326
001327         2000-TRANSLATE-USER-INPUT.
001328
001329             MOVE 210-USER-INPUT-LC (400-HOLD-OFFSET:)
001330                                         TO 230-TRANSLATED-INPUT.
001331
001332             MOVE FUNCTION SUBSTITUTE
001333                 (230-TRANSLATED-INPUT, 520-ARE-IN,  520-AM-OUT,
001334                                        520-WERE-IN, 520-WAS-OUT,
001335                                        520-YOU-IN,  520-I-FIX,
001336                                        520-YOUR-IN, 520-MY-FIX,
001337                                        520-MY-IN,   520-YOUR-FIX,
001338                                        520-IVE-IN,  520-YOUVE-OUT,
001339                                        520-IM-IN,   520-YOURE-OUT,
001340                                        520-I-AM-IN, 520-YOURE-OUT,
001341                                        520-ME-IN,   520-YOU-FIX,
001342                                        520-I-IN,    520-YOU-FIX,
001343                                        520-YOURE-IN, 520-IM-FIX,
001344                                    520-YOU-ARE-IN,  520-I-AM-FIX,
001345                                    520-YOURSELF-IN, 520-MYSELF-OUT)
001346                                         TO 250-SUBSTITUTE-WORK.
001347
001348             MOVE 250-SUBSTITUTE-WORK TO 230-TRANSLATED-INPUT.
warning: sending field larger than receiving field
001349
001350             MOVE FUNCTION SUBSTITUTE
001351                 (230-TRANSLATED-INPUT, 520-I-FIX,     520-I-OUT,
001352                                        520-IM-FIX,    520-IM-OUT,
001353                                        520-I-AM-FIX,  520-I-AM-OUT,
001354                                        520-MY-FIX,    520-MY-OUT,
001355                                        520-YOUR-FIX,  520-YOUR-OUT,
001356                                        520-YOU-FIX,   520-YOU-OUT)
001357                                         TO 250-SUBSTITUTE-WORK.
001358
001359             MOVE 250-SUBSTITUTE-WORK    TO 230-TRANSLATED-INPUT
warning: sending field larger than receiving field
001360             .
001361
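[Editor's note: illustrative sketch, not part of the listing.] The two SUBSTITUTE calls above implement a two-pass pronoun swap: the first pass rewrites pronouns to intermediate '-FIX' tokens, the second turns the tokens into the final words. A single pass would be wrong, because 'you' -> 'I' followed by 'I' -> 'you' would undo itself. In Python, with a reduced substitution table:

```python
# Two-pass pronoun conjugation as in 2000-TRANSLATE-USER-INPUT.
# Pass 1 uses placeholder '-FIX' tokens so later rules cannot
# re-substitute text a previous rule just produced.
PASS1 = [(" you ", " I-FIX "), (" i ", " you-FIX "),
         (" your ", " my-FIX "), (" my ", " your-FIX ")]
PASS2 = [("I-FIX", "I"), ("you-FIX", "you"),
         ("my-FIX", "my"), ("your-FIX", "your")]

def translate(text):
    text = f" {text} "            # pad so word-boundary patterns match
    for old, new in PASS1 + PASS2:
        text = text.replace(old, new)
    return text.strip()

print(translate("i think you hate my dog"))  # 'you think I hate your dog'
```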
001362        ****************************************************************
001363        *    3 0 0 0 - B U I L D - K E Y W O R D - R E P L Y .         *
001364        ****************************************************************
001365        *    BUILD THE REPLY BASED ON THE KEYWORD FOUND IN THE USER    *
001366        *    INPUT.  NOTE THERE ARE A VARIABLE NUMBER OF POSSIBLE      *
001367        *    REPLIES FOR EACH KEYWORD, AND SOME REPLIES INCLUDE TEXT   *
001368        *    ECHOED FROM THE USER INPUT.                               *
001369        ****************************************************************
001370
001371         3000-BUILD-KEYWORD-REPLY.
001372
001373             SET 560-L                   TO 400-HOLD-500-K
001374             ADD +1                      TO 560-REPLY-LAST-USED (560-L)
warning: ADD statement not terminated by END-ADD
001375             IF 560-REPLY-LAST-USED (560-L) > 560-REPLY-HI (560-L)
001376                 MOVE 560-REPLY-LO (560-L) TO 560-REPLY-LAST-USED (560-L)
001377             END-IF
001378
001379             SET 540-R                    TO 560-REPLY-LAST-USED (560-L)
001380             MOVE 540-REPLY (540-R)       TO 240-REPLY
001381             MOVE 540-REPLY-LENGTH (540-R)    TO 400-SUB
001382             IF 240-REPLY (400-SUB:1) = 300-ASTERISK
001383                 MOVE SPACE               TO 240-REPLY (400-SUB:1)
001384                 MOVE 230-TRANSLATED-INPUT
warning: sending field larger than receiving field
001385                                          TO 240-REPLY (400-SUB:)
001386                 PERFORM 3100-FIX-MORE-BAD-GRAMMAR
001387                 MOVE ZERO                TO 400-SPACES-COUNT
001388                 INSPECT 240-REPLY TALLYING 400-SPACES-COUNT
001389                     FOR TRAILING SPACES
001390        ****************************************************************
001391        *        MERGE USER INPUT INTO THE REPLY AND THEN CORRECT      *
001392        *        ENDING PUNCTUATION FOR '?' OR '.' (FULL-STOP).        *
001393        ****************************************************************
001394                 IF  400-SPACES-COUNT > ZERO
001395                 AND 400-SPACES-COUNT < (LENGTH OF 240-REPLY) - 1
001396                     COMPUTE 400-OFFSET =
001397                         (LENGTH OF 240-REPLY) - 400-SPACES-COUNT + 1
001398                     END-COMPUTE
001399                     IF 560-REPLY-LAST-USED (560-L) = 02 OR 04 OR 05
001400                     OR 08 OR 18 OR 24 OR 33 OR 39 OR 81
001401                         MOVE '.'         TO 240-REPLY (400-OFFSET:1)
001402                     ELSE
001403                         MOVE '?'         TO 240-REPLY (400-OFFSET:1)
001404                     END-IF
001405                 END-IF
001406             END-IF
001407
001408             DISPLAY 240-REPLY
warning: DISPLAY statement not terminated by END-DISPLAY
001409             .
001410
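[Editor's note: illustrative sketch, not part of the listing.] 3000-BUILD-KEYWORD-REPLY's echo mechanism: when the chosen reply ends in '*', the asterisk is replaced by the translated tail of the user's input and the merged sentence gets closing punctuation (the listing picks '.' or '?' per reply number; the sketch below always uses '?' as a simplification):

```python
# Echoing replies as in 3000-BUILD-KEYWORD-REPLY: a trailing '*' marks
# where the user's translated input is spliced in.  Simplified: always
# ends echoed replies with '?', where the listing chooses '.' or '?'.
def build_reply(reply_text, translated_input):
    if reply_text.endswith("*"):
        merged = reply_text[:-1] + " " + translated_input.strip()
        return merged + "?"
    return reply_text                  # plain replies pass through unchanged

print(build_reply("Why do you need*", "your own computer"))
```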
001411        ****************************************************************
001412        *    3 1 0 0 - F I X - M O R E - B A D - G R A M M A R .       *
001413        ****************************************************************
001414        *    HERE ARE SOME MORE FIXUPS FOR GRAMMAR PROBLEMS.  BUT IT   *
001415        *    DOESN'T SOLVE ALL OF THEM.                                *
001416        ****************************************************************
001417
001418         3100-FIX-MORE-BAD-GRAMMAR.
001419
001420             MOVE FUNCTION SUBSTITUTE (240-REPLY,
001421                 ' you want I ',            ' you want me ',
001422                 ' you got I ',             ' you got me ',
001423                 ' to make I ',             ' to make me ',
001424                 ' you been I ',            ' you been me ',
001425                 ' you be I ',              ' you be me ',
001426                 ' to be I ',               ' to be me ',
001427                 ' soon got I ',            ' soon got me ',
001428                 ' never got I ',           ' never got me ',
001429                 ' sometimes also want I ', ' sometimes also want me ',
001430                 ' normal to be I ',        ' normal to be me ',
001431                 ' enjoy being I ',         ' enjoy being me ',
001432                 ' can''t make I ',         ' can''t make me ',
001433                 ' can now make I ',        ' can now make me ',
001434                 ' I are ',                 ' I am ',
001435                 ' you am ',                ' you are ',
001436                 ' with I ',                ' with me')
001437                                         TO 250-SUBSTITUTE-WORK.
001438
001439             MOVE 250-SUBSTITUTE-WORK TO 240-REPLY.
warning: sending field larger than receiving field
001440
001441         END PROGRAM ELIZA.
001442  >>>>>>> .r513
error: invalid indicator '>' at column 7
NAME                           DEFINED                REFERENCES
PROGRAM ELIZA
100-PROGRAM-FLAGS              46     referenced by child
100-EOF-FLAG                   47      *470    *477
88-100-ALL-DONE                48       472     477
100-KEYWORD-FLAG               49      *507    *520    *531
88-100-KEYWORD-FOUND           50       511     514     520     531
88-100-KEYWORD-NOT-FOUND       51       507     528
200-USER-INPUT                 53      *473     474
210-USER-INPUT-LC              55      *475     476     480     483     517
                                        549    *559     565    *569     571
                                       *575     588    *595     608
220-LAST-USER-INPUT            57       480    *484
230-TRANSLATED-INPUT           59      *609     612    *627     630    *638
                                        663
240-REPLY                      61      *659     661    *662    *664     667
                                        674     676    *680    *682     687
                                        699    *718
250-SUBSTITUTE-WORK            63      *558     559    *568     569    *574
                                        575    *594     595    *625     627
                                       *636     638    *716     718
300-PROGRAM-CONSTANTS          65     referenced by child
300-MAX-KEYWORD-ENTRIES        66       513     529
300-MAX-SCAN-LEN               67       510
300-SHUT                       68       476
300-ASTERISK                   69       661
400-PROGRAM-COUNTERS           71     referenced by child
400-HOLD-KW-LEN                72      *516     517     522
400-SCAN-LEN                   73       523
400-HOLD-500-K                 74      *519    *530     652
400-HOLD-OFFSET                75       486    *508     521     608
400-OFFSET                     76       675     680     682
400-SUB                        77       509     510     517     522     523
                                       *660     661     662     664
400-SPACES-COUNT               78      *666     667     673     674     676
500-KEYWORD-TABLE-DATA         80     not referenced
500-KEYWORD-TABLE              118    referenced by child
500-KEYWORD-ENTRY              119    referenced by child
500-KW-LEN                     121      515
500-KEYWORD                    122      518
520-TRANSLATION-CONSTANTS      124    referenced by child
520-THING-IN                   125      549     588
520-HIGH-IN                    126      550     589
520-SHI-IN                     127      551     590
520-CHI-IN                     128      552     591
520-HIT-IN                     129      553     592
520-OUR-IN                     130      554     593
520-QMARK-IN                   131      555     565     571
520-XMARK-IN                   132      556     566     572
520-FSTOP-IN                   133      557     567     573
520-THING-OUT                  135      549     588
520-HIGH-OUT                   136      550     589
520-SHI-OUT                    137      551     590
520-CHI-OUT                    138      552     591
520-HIT-OUT                    139      553     592
520-OUR-OUT                    140      554     593
520-QMARK-OUT                  141      555     556     565     566     571
                                        572
520-FSTOP-OUT                  142      557     567     573
520-ARE-IN                     144      612
520-WERE-IN                    145      613
520-YOU-IN                     146      614
520-YOUR-IN                    147      615
520-MY-IN                      148      616
520-IVE-IN                     149      617
520-IM-IN                      150      618
520-I-AM-IN                    151      619
520-ME-IN                      152      620
520-I-IN                       153      621
520-YOURE-IN                   154      622
520-YOU-ARE-IN                 155      623
520-YOURSELF-IN                156      624
520-AM-OUT                     158      612
520-WAS-OUT                    159      613
520-I-FIX                      160      614     630
520-IM-FIX                     161      622     631
520-I-AM-FIX                   162      623     632
520-MY-FIX                     163      615     633
520-YOUR-FIX                   164      616     634
520-YOUVE-OUT                  165      617
520-YOURE-OUT                  166      618     619
520-YOU-FIX                    167      620     621     635
520-MYSELF-OUT                 168      624
520-I-OUT                      170      630
520-IM-OUT                     171      631
520-I-AM-OUT                   172      632
520-MY-OUT                     173      633
520-YOUR-OUT                   174      634
520-YOU-OUT                    175      635
540-REPLY-TABLE-DATA           178    not referenced
540-REPLY-TABLE                392    referenced by child
540-REPLY-ENTRY                393    referenced by child
540-REPLY-LENGTH               395      660
540-REPLY                      396      659
560-REPLY-LOCATER-DATA         399    not referenced
560-REPLY-LOCATER-TABLE        437    referenced by child
560-REPLY-LOCATER-ENTRY        438    referenced by child
560-REPLY-LO                   439      655
560-REPLY-HI                   440      654
560-REPLY-LAST-USED            441      653     654    *655     658     678
600-PROGRAM-MESSAGES           443    referenced by child
600-REPLY-LIST                 444    not referenced
600-REPLY-DATA                 446    not referenced
GnuCOBOL 2.2.0          eliza.cbl            Thu Oct 12 21:22:20 2017  Page 0032
NAME                           DEFINED                REFERENCES
600-INITIAL-MESSAGE            448      471
600-GOODBYE-MESSAGE            451      478
600-NO-REPEAT-MSG              454      481
LABEL                          DEFINED                REFERENCES
PROGRAM ELIZA
E ELIZA                        467
P 0000-MAINLINE                467    not referenced
P 1000-SCAN-FOR-KEYWORD        503      485
P 1100-MASK-STRING-HI          546      505
P 1200-RESTORE-STRING-HI       585      534
P 2000-TRANSLATE-USER-INPUT    606      487
P 3000-BUILD-KEYWORD-REPLY     650      489
P 3100-FIX-MORE-BAD-GRAMMAR    697      665
NAME                           DEFINED                REFERENCES
PROGRAM ELIZA
100-PROGRAM-FLAGS              767    referenced by child
100-EOF-FLAG                   768     *1191   *1198
88-100-ALL-DONE                769      1193    1198
100-KEYWORD-FLAG               770     *1228   *1241   *1252
88-100-KEYWORD-FOUND           771      1232    1235    1241    1252
88-100-KEYWORD-NOT-FOUND       772      1228    1249
200-USER-INPUT                 774     *1194    1195
210-USER-INPUT-LC              776     *1196    1197    1201    1204    1238
                                        1270   *1280    1286   *1290    1292
                                       *1296    1309   *1316    1329
220-LAST-USER-INPUT            778      1201   *1205
230-TRANSLATED-INPUT           780     *1330    1333   *1348    1351   *1359
                                        1384
240-REPLY                      782     *1380    1382   *1383   *1385    1388
                                        1395    1397   *1401   *1403    1408
                                        1420   *1439
250-SUBSTITUTE-WORK            784     *1279    1280   *1289    1290   *1295
                                        1296   *1315    1316   *1346    1348
                                       *1357    1359   *1437    1439
300-PROGRAM-CONSTANTS          786    referenced by child
300-MAX-KEYWORD-ENTRIES        787      1234    1250
300-MAX-SCAN-LEN               788      1231
300-SHUT                       789      1197
300-ASTERISK                   790      1382
400-PROGRAM-COUNTERS           792    referenced by child
400-HOLD-KW-LEN                793     *1237    1238    1243
400-SCAN-LEN                   794      1244
400-HOLD-500-K                 795     *1240   *1251    1373
400-HOLD-OFFSET                796      1207   *1229    1242    1329
400-OFFSET                     797      1396    1401    1403
400-SUB                        798      1230    1231    1238    1243    1244
                                       *1381    1382    1383    1385
400-SPACES-COUNT               799     *1387    1388    1394    1395    1397
500-KEYWORD-TABLE-DATA         801    not referenced
500-KEYWORD-TABLE              839    referenced by child
500-KEYWORD-ENTRY              840    referenced by child
500-KW-LEN                     842      1236
500-KEYWORD                    843      1239
520-TRANSLATION-CONSTANTS      845    referenced by child
520-THING-IN                   846      1270    1309
520-HIGH-IN                    847      1271    1310
520-SHI-IN                     848      1272    1311
520-CHI-IN                     849      1273    1312
520-HIT-IN                     850      1274    1313
520-OUR-IN                     851      1275    1314
520-QMARK-IN                   852      1276    1286    1292
520-XMARK-IN                   853      1277    1287    1293
520-FSTOP-IN                   854      1278    1288    1294
520-THING-OUT                  856      1270    1309
520-HIGH-OUT                   857      1271    1310
520-SHI-OUT                    858      1272    1311
520-CHI-OUT                    859      1273    1312
520-HIT-OUT                    860      1274    1313
520-OUR-OUT                    861      1275    1314
520-QMARK-OUT                  862      1276    1277    1286    1287    1292
                                        1293
520-FSTOP-OUT                  863      1278    1288    1294
520-ARE-IN                     865      1333
520-WERE-IN                    866      1334
520-YOU-IN                     867      1335
520-YOUR-IN                    868      1336
520-MY-IN                      869      1337
520-IVE-IN                     870      1338
520-IM-IN                      871      1339
520-I-AM-IN                    872      1340
520-ME-IN                      873      1341
520-I-IN                       874      1342
520-YOURE-IN                   875      1343
520-YOU-ARE-IN                 876      1344
520-YOURSELF-IN                877      1345
520-AM-OUT                     879      1333
520-WAS-OUT                    880      1334
520-I-FIX                      881      1335    1351
520-IM-FIX                     882      1343    1352
520-I-AM-FIX                   883      1344    1353
520-MY-FIX                     884      1336    1354
520-YOUR-FIX                   885      1337    1355
520-YOUVE-OUT                  886      1338
520-YOURE-OUT                  887      1339    1340
520-YOU-FIX                    888      1341    1342    1356
520-MYSELF-OUT                 889      1345
520-I-OUT                      891      1351
520-IM-OUT                     892      1352
520-I-AM-OUT                   893      1353
520-MY-OUT                     894      1354
520-YOUR-OUT                   895      1355
520-YOU-OUT                    896      1356
540-REPLY-TABLE-DATA           899    not referenced
540-REPLY-TABLE                1113   referenced by child
540-REPLY-ENTRY                1114   referenced by child
540-REPLY-LENGTH               1116     1381
540-REPLY                      1117     1380
560-REPLY-LOCATER-DATA         1120   not referenced
560-REPLY-LOCATER-TABLE        1158   referenced by child
560-REPLY-LOCATER-ENTRY        1159   referenced by child
560-REPLY-LO                   1160     1376
560-REPLY-HI                   1161     1375
560-REPLY-LAST-USED            1162     1374    1375   *1376    1379    1399
600-PROGRAM-MESSAGES           1164   referenced by child
600-REPLY-LIST                 1165   not referenced
600-REPLY-DATA                 1167   not referenced
600-INITIAL-MESSAGE            1169     1192
600-GOODBYE-MESSAGE            1172     1199
600-NO-REPEAT-MSG              1175     1202
LABEL                          DEFINED                REFERENCES
PROGRAM ELIZA
E ELIZA                        1188
P 0000-MAINLINE                1188   not referenced
P 1000-SCAN-FOR-KEYWORD        1224     1206
P 1100-MASK-STRING-HI          1267     1226
P 1200-RESTORE-STRING-HI       1306     1255
P 2000-TRANSLATE-USER-INPUT    1327     1208
P 3000-BUILD-KEYWORD-REPLY     1371     1210
P 3100-FIX-MORE-BAD-GRAMMAR    1418     1386
Error/Warning summary:
eliza.cbl: 1: error: invalid indicator '<' at column 7
eliza.cbl: 721: error: invalid indicator '|' at column 7
eliza.cbl: 722: error: invalid indicator '=' at column 7
eliza.cbl: 1442: error: invalid indicator '>' at column 7
eliza.cbl: 469: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 471: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 473: warning: ACCEPT statement not terminated by END-ACCEPT
eliza.cbl: 478: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 481: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 519: warning: some digits may be truncated
eliza.cbl: 521: warning: COMPUTE statement not terminated by END-COMPUTE
eliza.cbl: 523: warning: COMPUTE statement not terminated by END-COMPUTE
eliza.cbl: 559: warning: sending field larger than receiving field
eliza.cbl: 569: warning: sending field larger than receiving field
eliza.cbl: 575: warning: sending field larger than receiving field
eliza.cbl: 595: warning: sending field larger than receiving field
eliza.cbl: 627: warning: sending field larger than receiving field
eliza.cbl: 638: warning: sending field larger than receiving field
eliza.cbl: 653: warning: ADD statement not terminated by END-ADD
eliza.cbl: 663: warning: sending field larger than receiving field
eliza.cbl: 687: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 718: warning: sending field larger than receiving field
eliza.cbl: 725: error: redefinition of program ID 'ELIZA'
eliza.cbl: 1190: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 1192: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 1194: warning: ACCEPT statement not terminated by END-ACCEPT
eliza.cbl: 1199: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 1202: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 1240: warning: some digits may be truncated
eliza.cbl: 1242: warning: COMPUTE statement not terminated by END-COMPUTE
eliza.cbl: 1244: warning: COMPUTE statement not terminated by END-COMPUTE
eliza.cbl: 1280: warning: sending field larger than receiving field
eliza.cbl: 1290: warning: sending field larger than receiving field
eliza.cbl: 1296: warning: sending field larger than receiving field
eliza.cbl: 1316: warning: sending field larger than receiving field
eliza.cbl: 1348: warning: sending field larger than receiving field
eliza.cbl: 1359: warning: sending field larger than receiving field
eliza.cbl: 1374: warning: ADD statement not terminated by END-ADD
eliza.cbl: 1384: warning: sending field larger than receiving field
eliza.cbl: 1408: warning: DISPLAY statement not terminated by END-DISPLAY
eliza.cbl: 1439: warning: sending field larger than receiving field
36 warnings in compilation group
5 errors in compilation group



All Comments: [-] | anchor

acqq(1167) about 3 hours ago [-]

Why link to that .lst? It's some strange output and contains error messages. The source is here:

https://sourceforge.net/p/open-cobol/contrib/514/tree//trunk...

and I'd like to know if the source compiles.

abrax3141(3583) about 2 hours ago [-]

Thanks. It didn't occur to me that that wasn't the source. (It doesn't look like I can edit the OP.) BTW, it does compile. I've been in communication with the author and he has shared with me additional code and details. I've asked his permission to share it here (and encouraged him to share it himself).

azhenley(2802) about 3 hours ago [-]

For everyone who also had not heard of Eliza: https://en.wikipedia.org/wiki/ELIZA

It is a chat bot from the 60s that acts as a therapist.

abrax3141(3583) about 1 hour ago [-]

Thanks, but I doubt anyone hasn't heard of Eliza ... at least no one here!





Historical Discussions: Show HN: Autocert – use TLS to access internal kubernetes services from anywhere (February 12, 2019: 12 points)

(12) Show HN: Autocert – use TLS to access internal kubernetes services from anywhere

12 points 4 days ago by mmalone in 10000th position

github.com | Estimated reading time – 23 minutes | comments | anchor

Autocert

Autocert is a kubernetes add-on that automatically injects TLS/HTTPS certificates into your containers.

To get a certificate simply annotate your pods with a name. An X.509 (TLS/HTTPS) certificate is automatically created and mounted at /var/run/autocert.step.sm/ along with a corresponding private key and root certificate (everything you need for mTLS).

We ❤️ feedback. Please report bugs & suggest enhancements. Fork and send a PR. Give us a ⭐ if you like what we're doing.

Motivation

Autocert exists to make it easy to use mTLS (mutual TLS) to improve security within a cluster and to secure communication into, out of, and between kubernetes clusters.

TLS (and HTTPS, which is HTTP over TLS) provides authenticated encryption: an identity dialtone and end-to-end encryption for your workloads. It's like a secure line with caller ID. This has all sorts of benefits: better security, compliance, and easier auditability for starters. It makes workloads identity-aware, improving observability and enabling granular access control. Perhaps most compelling, mTLS lets you securely communicate with workloads running anywhere, not just inside kubernetes.

Unlike VPNs & SDNs, deploying and scaling mTLS is pretty easy. You're (hopefully) already using TLS, and your existing tools and standard libraries will provide most of what you need. If you know how to operate DNS and reverse proxies, you know how to operate mTLS infrastructure.

There's just one problem: you need certificates issued by your own certificate authority (CA). Building and operating a CA, issuing certificates, and making sure they're renewed before they expire is tricky. Autocert does all of this for you.

Features

First and foremost, autocert is easy. You can get started in minutes.

Autocert uses step certificates to generate keys and issue certificates. This process is secure and automatic, all you have to do is install autocert and annotate your pods.

Features include:

  • A fully featured private certificate authority (CA) for workloads running on kubernetes and elsewhere
  • RFC5280 and CA/Browser Forum compliant certificates that work for TLS
  • Namespaced installation into the step namespace so it's easy to lock down your CA
  • Short-lived certificates with fully automated enrollment and renewal
  • Private keys are never transmitted across the network and aren't stored in etcd

Because autocert is built on step certificates you can easily extend access to developers, endpoints, and workloads running outside your cluster, too.

Getting Started

Warning: this project is in ALPHA. DON'T use it for anything mission critical. EXPECT breaking changes in minor revisions with little or no warning. PLEASE provide feedback.

Prerequisites

All you need to get started is kubectl and a cluster running kubernetes 1.9 or later with admission webhooks enabled:

$ kubectl version --short
Client Version: v1.13.1
Server Version: v1.10.11
$ kubectl api-versions | grep 'admissionregistration.k8s.io/v1beta1'
admissionregistration.k8s.io/v1beta1

Install

To install autocert run:

kubectl run autocert-init -it --rm --image smallstep/autocert-init --restart Never

installation complete.

You might want to check out what this command does before running it. You can also install autocert manually if that's your style.

Usage

Using autocert is also easy:

  • Enable autocert for a namespace by labelling it with autocert.step.sm=enabled, then
  • Inject certificates into containers by annotating pods with autocert.step.sm/name: <name>

Enable autocert (per namespace)

To enable autocert for a namespace it must be labelled autocert.step.sm=enabled.

To label the default namespace run:

kubectl label namespace default autocert.step.sm=enabled

To check which namespaces have autocert enabled run:

$ kubectl get namespace -L autocert.step.sm
NAME          STATUS   AGE   AUTOCERT.STEP.SM
default       Active   59m   enabled
...

Annotate pods to get certificates

To get a certificate you need to tell autocert your workload's name using the autocert.step.sm/name annotation (this name will appear as the X.509 common name and SAN).

Let's deploy a simple mTLS server named hello-mtls.default.svc.cluster.local:

cat <<EOF | kubectl apply -f - 
apiVersion: apps/v1
kind: Deployment
metadata: {name: hello-mtls, labels: {app: hello-mtls}}
spec:
  replicas: 1
  selector: {matchLabels: {app: hello-mtls}}
  template:
    metadata:
      annotations:
        # AUTOCERT ANNOTATION HERE -v ###############################
        autocert.step.sm/name: hello-mtls.default.svc.cluster.local #
        # AUTOCERT ANNOTATION HERE -^ ###############################
      labels: {app: hello-mtls}
    spec:
      containers:
      - name: hello-mtls
        image: smallstep/hello-mtls-server-go:latest
EOF

In our new container we should find a certificate, private key, and root certificate mounted at /var/run/autocert.step.sm:

$ export HELLO_MTLS=$(kubectl get pods -l app=hello-mtls -o jsonpath={$.items[0].metadata.name})
$ kubectl exec -it $HELLO_MTLS -c hello-mtls -- ls /var/run/autocert.step.sm
root.crt  site.crt  site.key

We're done. Our container has a certificate, issued by our CA, which autocert will automatically renew.

Certificates.

Hello mTLS

It's easy to deploy certificates using autocert, but it's up to you to use them correctly. To get you started, hello-mtls demonstrates the right way to use mTLS with various tools and languages (contributions welcome :). If you're a bit fuzzy on how mTLS works, the hello-mtls README is a great place to start.
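The hello-mtls examples are the authoritative reference; as a minimal sketch in Go (using the mount paths shown above, and assuming the certificate files exist in the container), a client that trusts only our root certificate and presents the mounted client certificate looks roughly like this:

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"fmt"
	"net/http"
	"os"
)

// newMTLSClient builds an http.Client that presents the certificate
// autocert mounted for us and trusts only servers signed by our root CA.
func newMTLSClient(rootPath, certPath, keyPath string) (*http.Client, error) {
	rootPEM, err := os.ReadFile(rootPath)
	if err != nil {
		return nil, err
	}
	roots := x509.NewCertPool()
	if !roots.AppendCertsFromPEM(rootPEM) {
		return nil, fmt.Errorf("no certificates found in %s", rootPath)
	}
	cert, err := tls.LoadX509KeyPair(certPath, keyPath)
	if err != nil {
		return nil, err
	}
	return &http.Client{
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{
				RootCAs:      roots,                   // trust only our CA
				Certificates: []tls.Certificate{cert}, // present our client cert
			},
		},
	}, nil
}

func main() {
	client, err := newMTLSClient(
		"/var/run/autocert.step.sm/root.crt",
		"/var/run/autocert.step.sm/site.crt",
		"/var/run/autocert.step.sm/site.key",
	)
	if err != nil {
		fmt.Fprintln(os.Stderr, "mTLS setup failed:", err)
		os.Exit(1)
	}
	resp, err := client.Get("https://hello-mtls.default.svc.cluster.local")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}
```

The server side is symmetric: the same root pool goes in `tls.Config.ClientCAs` with `ClientAuth: tls.RequireAndVerifyClientCert`, which is what makes the connection mutually authenticated.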

To finish out this tutorial let's keep things simple and try curling the server we just deployed from inside and outside the cluster.

Connecting from inside the cluster

First, let's expose our workload to the rest of the cluster using a service:

kubectl expose deployment hello-mtls --port 443

Now let's deploy a client, with its own certificate, that curls our server in a loop:

cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata: {name: hello-mtls-client, labels: {app: hello-mtls-client}}
spec:
  replicas: 1
  selector: {matchLabels: {app: hello-mtls-client}}
  template:
    metadata:
      annotations:
        # AUTOCERT ANNOTATION HERE -v ######################################
        autocert.step.sm/name: hello-mtls-client.default.pod.cluster.local #
        # AUTOCERT ANNOTATION HERE -^ ######################################
      labels: {app: hello-mtls-client}
    spec:
      containers:
      - name: hello-mtls-client
        image: smallstep/hello-mtls-client-curl:latest
        env: [{name: HELLO_MTLS_URL, value: https://hello-mtls.default.svc.cluster.local}]
EOF

Note that the authority portion of the URL (the HELLO_MTLS_URL env var) matches the name of the server we're connecting to (both are hello-mtls.default.svc.cluster.local). That's required for standard HTTPS and can sometimes require some DNS trickery.

Once deployed we should start seeing the client log responses from the server saying hello:

$ export HELLO_MTLS_CLIENT=$(kubectl get pods -l app=hello-mtls-client -o jsonpath={$.items[0].metadata.name})
$ kubectl logs $HELLO_MTLS_CLIENT -c hello-mtls-client
Thu Feb  7 23:35:23 UTC 2019: Hello, hello-mtls-client.default.pod.cluster.local!
Thu Feb  7 23:35:28 UTC 2019: Hello, hello-mtls-client.default.pod.cluster.local!

For kicks, let's exec into this pod and try curling ourselves:

$ kubectl exec $HELLO_MTLS_CLIENT -c hello-mtls-client -- curl -sS \
       --cacert /var/run/autocert.step.sm/root.crt \
       --cert /var/run/autocert.step.sm/site.crt \
       --key /var/run/autocert.step.sm/site.key \
       https://hello-mtls.default.svc.cluster.local
Hello, hello-mtls-client.default.pod.cluster.local!

mTLS inside cluster.

Connecting from outside the cluster

Connecting from outside the cluster is a bit more complicated. We need to handle DNS and obtain a certificate ourselves. These tasks were handled automatically inside the cluster by kubernetes and autocert, respectively.

That said, because our server uses mTLS only clients that have a certificate issued by our certificate authority will be allowed to connect. That means it can be safely and easily exposed directly to the public internet using a LoadBalancer service type:

kubectl expose deployment hello-mtls --name=hello-mtls-lb --port=443 --type=LoadBalancer

To connect we need a certificate. There are a couple different ways to get one, but for simplicity we'll just forward a port.

kubectl -n step port-forward $(kubectl -n step get pods -l app=ca -o jsonpath={$.items[0].metadata.name}) 4443:4443

In another window we'll use step to grab the root certificate, generate a key pair, and get a certificate.

To follow along you'll need to install step if you haven't already. You'll also need your admin password and CA fingerprint, which were output during installation (see here and here if you already lost them :).

$ export CA_POD=$(kubectl -n step get pods -l app=ca -o jsonpath={$.items[0].metadata.name})
$ step ca root root.crt --ca-url https://127.0.0.1:4443 --fingerprint <fingerprint>
$ step ca certificate mike mike.crt mike.key --ca-url https://127.0.0.1:4443 --root root.crt
✔ Key ID: H4vH5VfvaMro0yrk-UIkkeCoPFqEfjF6vg0GHFdhVyM (admin)
✔ Please enter the password to decrypt the provisioner key: 0QOC9xcq56R1aEyLHPzBqN18Z3WfGZ01
✔ CA: https://127.0.0.1:4443/1.0/sign
✔ Certificate: mike.crt
✔ Private Key: mike.key

Now we can simply curl the service:

If you're using minikube or docker for mac the load balancer's 'IP' might be localhost, which won't work. In that case, simply export HELLO_MTLS_IP=127.0.0.1 and try again.

$ export HELLO_MTLS_IP=$(kubectl get svc hello-mtls-lb -ojsonpath={$.status.loadBalancer.ingress[0].ip})
$ curl --resolve hello-mtls.default.svc.cluster.local:443:$HELLO_MTLS_IP \
       --cacert root.crt \
       --cert mike.crt \
       --key mike.key \
       https://hello-mtls.default.svc.cluster.local
Hello, mike!

Note that we're using --resolve to tell curl to override DNS and resolve the name in our workload's certificate to its public IP address. In a real production infrastructure you could configure DNS manually, or you could propagate DNS to workloads outside kubernetes using something like ExternalDNS.

mTLS outside cluster.

Cleanup & uninstall

To clean up after running through the tutorial remove the hello-mtls and hello-mtls-client deployments and services:

kubectl delete deployment hello-mtls
kubectl delete deployment hello-mtls-client
kubectl delete service hello-mtls
kubectl delete service hello-mtls-lb

See the runbook for instructions on uninstalling autocert.

How it works

Architecture

Autocert is an admission webhook that intercepts and patches pod creation requests with some YAML to inject an init container and sidecar that handle obtaining and renewing certificates, respectively.

Enrollment & renewal

It integrates with step certificates and uses the one-time token bootstrap protocol from that project to mutually authenticate a new pod with your certificate authority, and obtain a certificate.

Tokens are generated by the admission webhook and transmitted to the injected init container via a kubernetes secret. The init container uses the one-time token to obtain a certificate. A sidecar is also installed to renew certificates before they expire. Renewal simply uses mTLS with the CA.

Further Reading

Questions

Wait, so any pod can get a certificate with any identity? How is that secure?

  1. Don't give people kubectl access to your production clusters
  2. Use a deploy pipeline based on git artifacts
  3. Enforce code review on those git artifacts

If that doesn't work for you, or if you have a better idea, we'd love to hear! Please open an issue!

Why do I have to tell you the name to put in a certificate? Why can't you automatically bind service names?

Mostly because monitoring the API server to figure out which services are associated with which workloads is complicated and somewhat magical. And it might not be what you want.

That said, we're not totally opposed to this idea. If anyone has strong feels and a good design please open an issue.

Doesn't kubernetes already ship with a certificate authority?

Yes, it uses a bunch of CAs for different sorts of control plane communication. Technically, kubernetes doesn't come with a CA. It has integration points that allow you to use any CA (e.g., Kubernetes the hard way uses CFSSL). You could use step certificates, which autocert is based on, instead.

In any case, these CAs are meant for control plane communication. You could use them for your service-to-service data plane, but it's probably not a good idea.

What permissions does autocert require in my cluster and why?

Autocert needs permission to create and delete secrets cluster-wide. You can check out our RBAC config here. These permissions are needed in order to transmit one-time tokens to workloads using secrets, and to clean up afterwards. We'd love to scope these permissions down further. If anyone has any ideas please open an issue.

Why does autocert create secrets?

The autocert admission webhook needs to securely transmit one-time bootstrap tokens to containers. This could be accomplished without using secrets. The webhook returns a JSONPatch response that's applied to the pod spec. This response could patch the literal token value into our init container's environment.

Unfortunately, the kubernetes API server does not authenticate itself to admission webhooks by default, and configuring it to do so requires passing a custom config file at apiserver startup. This isn't an option for everyone (e.g., on GKE) so we opted not to rely on it.

Since our webhook can't authenticate callers, including bootstrap tokens in patch responses would be dangerous. By using secrets, an attacker can still trick autocert into generating superfluous bootstrap tokens, but they'd also need read access to cluster secrets to do anything with them.

Hopefully this story will improve with time.

Why not use kubernetes service accounts instead of bootstrap tokens?

Great idea! This should be pretty easy to add. However, existing service accounts are somewhat broken for this use case. The upcoming TokenRequest API should fix most of these issues.

TODO: Link to issue for people who want this.

Too. many. containers. Why do you need to install an init container and sidecar?

We don't. It's just easier for you. Your containers can generate key pairs, exchange them for certificates, and manage renewals themselves. This is pretty easy if you install step in your containers, or integrate with our golang SDK. To support this we'd need to add the option to inject a bootstrap token without injecting these containers.

TODO: Link to issue for people who want this.

That said, the init container and sidecar are both super lightweight.

Why are keys and certificates managed via volume mounts? Why not use a secret or some custom resource?

Because, by default, kubernetes secrets are stored in plaintext in etcd and might even be transmitted unencrypted across the network. Even if secrets were properly encrypted, transmitting a private key across the network violates PKI best practices. Key pairs should always be generated where they're used, and private keys should never be known by anyone but their owners.

That said, there are use cases where a certificate mounted in a secret resource is desirable (e.g., for use with a kubernetes Ingress). We may add support for this in the future. However, we think the current method is easier and a better default.

TODO: Link to issue for people who want this.

Why not use kubernetes CSR resources for this?

It's harder and less secure. If any good and simple design exists for securely automating CSR approval using this resource we'd love to see it!

How is this different from cert-manager?

Cert-manager is a great project. But its design is focused on managing Web PKI certificates issued by Let's Encrypt's public certificate authority. These certificates are useful for TLS ingress from web browsers. Autocert is different. It's purpose-built to manage certificates issued by your own private CA to support the use of mTLS for internal communication (e.g., service-to-service).

What sorts of keys are issued and how often are certificates rotated?

Autocert builds on step certificates which issues ECDSA certificates using the P256 curve with ECDSA-SHA256 signatures by default. If this is all Greek to you, rest assured these are safe, sane, and modern defaults that are suitable for the vast majority of environments.

What crypto library is under the hood?

https://golang.org/pkg/crypto/

Building

This project is based on four docker containers. They use multi-stage builds so all you need in order to build them is docker.

Caveat: the controller container uses dep and dep init isn't run during the build. You'll need to run dep init in the controller/ subdirectory prior to building, and you'll need to run dep ensure -update if you change any dependencies.

Building autocert-controller (the admission webhook):

cd controller
docker build -t smallstep/autocert-controller:latest .

Building autocert-bootstrapper (the init container that generates a key pair and exchanges a bootstrap token for a certificate):

cd bootstrapper
docker build -t smallstep/autocert-bootstrapper:latest .

Building autocert-renewer (the sidecar that renews certificates):

cd renewer
docker build -t smallstep/autocert-renewer:latest .

Building autocert-init (the install script):

cd init
docker build -t smallstep/autocert-init:latest .

If you build your own containers you'll probably need to install manually. You'll also need to adjust which images are deployed in the deployment yaml.

Contributing

If you have improvements to autocert, send us your pull requests! For those just getting started, GitHub has a howto. A team member will review your pull requests, provide feedback, and merge your changes. In order to accept contributions we do need you to sign our contributor license agreement.

If you want to contribute but you're not sure where to start, take a look at the issues with the 'good first issue' label. These are issues that we believe are particularly well suited for outside contributions, often because we probably won't get to them right now. If you decide to start on an issue, leave a comment so that other people know that you're working on it. If you want to help out, but not alone, use the issue comment thread to coordinate.

If you've identified a bug or have ideas for improving autocert that you don't have time to implement, we'd love to hear about them. Please open an issue to report a bug or suggest an enhancement!

License

Copyright 2019 Smallstep Labs

Licensed under the Apache License, Version 2.0




No comments posted yet: Link to HN comments page




Historical Discussions: Show HN: Føcal Releases OpenCV Benchmark Tool (February 13, 2019: 9 points)

(10) Show HN: Føcal Releases OpenCV Benchmark Tool

10 points 3 days ago by jrf0cal in 10000th position

app.f0cal.com | Estimated reading time – 124 minutes | comments | anchor

cv::reduce              1280x720, (8UC1, 8UC1), 0, CV_REDUCE_MIN
cv::reduce              1920x1080, (8UC1, 8UC1), 1, CV_REDUCE_MIN
cv::reduce              1920x1080, (8UC1, 8UC1), 0, CV_REDUCE_MAX
cv::reduce              3840x2160, (8UC1, 8UC1), 1, CV_REDUCE_MAX
cv::reduce              3840x2160, (8UC1, 8UC1), 0, CV_REDUCE_MAX
cv::reduce              3840x2160, (8UC1, 8UC1), 0, CV_REDUCE_MIN
cv::remap               640x480, 8UC1, INTER_LINEAR
cv::remap               1920x1080, 8UC1, INTER_NEAREST
cv::absdiff             640x480, 8UC1
cv::absdiff             1280x720, 8UC1
cv::absdiff             1920x1080, 8UC1
cv::absdiff             3840x2160, 8UC1
cv::accumulateProduct   1920x1080, 8UC1
cv::accumulateSquare    1280x720, 8UC1
cv::accumulateSquare    1920x1080, 8UC1
cv::accumulateWeighted  640x480, 8UC1
cv::adaptiveThreshold   127x61, THRESH_BINARY_INV, ADAPTIVE_THRESH_MEAN_C, 5, 10
cv::adaptiveThreshold   127x61, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 10
cv::adaptiveThreshold   127x61, THRESH_BINARY_INV, ADAPTIVE_THRESH_GAUSSIAN_C, 3, 0
cv::adaptiveThreshold   127x61, THRESH_BINARY, ADAPTIVE_THRESH_MEAN_C, 5, 0
cv::adaptiveThreshold   127x61, THRESH_BINARY_INV, ADAPTIVE_THRESH_GAUSSIAN_C, 3, 10
cv::adaptiveThreshold   127x61, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 3, 0
cv::adaptiveThreshold   640x480, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 10
cv::adaptiveThreshold   640x480, THRESH_BINARY_INV, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 0
cv::adaptiveThreshold   640x480, THRESH_BINARY, ADAPTIVE_THRESH_MEAN_C, 3, 0
cv::adaptiveThreshold   1280x720, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 10
cv::adaptiveThreshold   1280x720, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 3, 0
cv::adaptiveThreshold   1280x720, THRESH_BINARY, ADAPTIVE_THRESH_MEAN_C, 5, 10
cv::adaptiveThreshold   1280x720, THRESH_BINARY_INV, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 0
cv::adaptiveThreshold   1920x1080, THRESH_BINARY_INV, ADAPTIVE_THRESH_GAUSSIAN_C, 3, 10
cv::adaptiveThreshold   1920x1080, THRESH_BINARY_INV, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 10
cv::adaptiveThreshold   1920x1080, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 10
cv::adaptiveThreshold   1920x1080, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 5, 0
cv::adaptiveThreshold   1920x1080, THRESH_BINARY, ADAPTIVE_THRESH_GAUSSIAN_C, 3, 10
cv::adaptiveThreshold   1920x1080, THRESH_BINARY, ADAPTIVE_THRESH_MEAN_C, 3, 0
cv::add                 640x480, 8UC1
cv::add                 1920x1080, 8UC1
cv::addWeighted         640x480, 8UC1
cv::addWeighted         1280x720, 8UC1
cv::addWeighted         1920x1080, 8UC1
cv::addWeighted         3840x2160, 8UC1

cv::batchDistance

NORM_L1, 32FC1, true

cv::batchDistance

NORM_HAMMING2, true

cv::batchDistance

NORM_HAMMING, false

cv::batchDistance

8UC1, true

cv::batchDistance

NORM_L2SQR, 32SC1, true

cv::batchDistance

NORM_L1, 32SC1, false

cv::batchDistance

NORM_L2SQR, 32SC1, false

cv::bilateralFilter

640x480 640x480, 3, CV_8UC1

cv::bilateralFilter

1920x1080 1920x1080, 3, CV_8UC1

cv::bilateralFilter

1920x1080 1920x1080, 5, CV_8UC1

cv::bitwise_and

1920x1080 1920x1080, 8UC1

cv::bitwise_not

1920x1080 1920x1080, 8UC1

cv::bitwise_xor

127x61 127x61, 8UC1

cv::bitwise_xor

1280x720 1280x720, 8UC1

cv::bitwise_xor

1920x1080 1920x1080, 8UC1

cv::blendLinear

640x480 640x480, 8UC1

cv::blendLinear

1920x1080 1920x1080, 8UC1

cv::blendLinear

3840x2160 3840x2160, 8UC1

cv::blur

640x480 640x480, 8UC1, BORDER_REFLECT101

cv::blur

640x480 640x480, 8UC1, BORDER_REFLECT

cv::blur

1280x720 1280x720, 8UC1, BORDER_REFLECT

cv::blur

1280x720 1280x720, 8UC1, BORDER_CONSTANT

cv::blur

1920x1080 1920x1080, 8UC1, 5

cv::blur

3840x2160 3840x2160, 8UC1, 3

cv::boxFilter

127x61 127x61, 8UC1, BORDER_REPLICATE

cv::boxFilter

127x61 127x61, 8UC1, BORDER_CONSTANT

cv::boxFilter

127x61 127x61, 3, BORDER_REPLICATE

cv::boxFilter

127x61 127x61, 15, BORDER_REPLICATE

cv::boxFilter

320x240 320x240, 5, BORDER_REPLICATE

cv::boxFilter

320x240 320x240, 15, BORDER_CONSTANT

cv::boxFilter

640x480 640x480, 15, BORDER_CONSTANT

cv::boxFilter

1280x720 1280x720, 5, BORDER_REPLICATE

cv::boxFilter

1280x720 1280x720, 3, BORDER_REPLICATE

cv::boxFilter

1280x720 1280x720, 15, BORDER_REPLICATE

cv::boxFilter

1280x720 1280x720, 3, BORDER_CONSTANT

cv::boxFilter

1280x720 1280x720, 15, BORDER_CONSTANT

cv::buildOpticalFlowPyramid

'cv/optflow/frames/720p_01.png', 11, true, BORDER_TRANSPARENT, true

cv::buildOpticalFlowPyramid

'cv/optflow/frames/720p_01.png', 11, false, BORDER_TRANSPARENT, true

cv::buildOpticalFlowPyramid

'cv/optflow/frames/720p_01.png', 7, false, BORDER_DEFAULT, true

cv::buildOpticalFlowPyramid

'cv/optflow/frames/720p_01.png', 11, true, BORDER_DEFAULT, true

cv::buildPyramid

320x240 320x240, 8UC1

cv::buildPyramid

640x480 640x480, 8UC1

cv::buildPyramid

1280x720 1280x720, 8UC1

cv::calcBackProject

1920x1080 1920x1080

cv::calcHist

2048x1536 2048x1536, 8UC1

cv::calcOpticalFlowFarneback

(5, 1.1), 0, true

cv::calcOpticalFlowFarneback

(5, 1.1), OPTFLOW_FARNEBACK_GAUSSIAN, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 1, (15, 15), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 3, (15, 15), 11, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 4, (15, 15), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 3, (15, 15), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, (9, 9), 7

cv::calcOpticalFlowPyrLK

1000

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 4, (15, 15), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 1, (15, 15), 7, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 3, (9, 9), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 1, (15, 15), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 4, (9, 9), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 3, (9, 9), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 1, (9, 9), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 1, (9, 9), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 3, (15, 15), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 4, (9, 9), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 4, (15, 15), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 1, (9, 9), 7, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, (9, 9), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 4, (9, 9), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, (9, 9), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 3, (9, 9), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, (15, 15), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 4, (15, 15), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 3, (15, 15), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 1, (15, 15), 11, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 1, (15, 15), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 1, (9, 9), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 1, (15, 15), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 1, (9, 9), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, (15, 15), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 3, (15, 15), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 1, (15, 15), 7, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, (9, 9), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 1, (15, 15), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 1, (9, 9), 11, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 4, (9, 9), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, (15, 15), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 3, (15, 15), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 4, (9, 9), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 4, (9, 9), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, (15, 15), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 3, (15, 15), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, (15, 15), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 4, (9, 9), 11, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 1, (9, 9), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 3, (15, 15), 7, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 1, (15, 15), 7

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 4, (9, 9), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 1, (9, 9), 7, true

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 2, 4, (9, 9), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 4, (9, 9), 11, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/720p_%02d.png', 1, 1, (15, 15), 11

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 1, 4, (15, 15), 7, false

cv::calcOpticalFlowPyrLK

'cv/optflow/frames/VGA_%02d.png', 2, 1, (15, 15), 11, false

cv::Canny

'cv/shared/lena.png', 5, false, (100, 120)

cv::Canny

'cv/shared/lena.png', 5, true, (50, 100)

cv::Canny

'cv/detectors_descriptors_evaluation/images_datasets/leuven/img1.png', 5, true, (100, 120)

cv::Canny

'cv/shared/lena.png', 3, true, (0, 50)

cv::Canny

'cv/detectors_descriptors_evaluation/images_datasets/leuven/img1.png', 3, false, (100, 120)

cv::Canny

'cv/shared/lena.png', 5, false, (50, 100)

cv::Canny

'stitching/b1.png', 5, true, (0, 50)

cv::Canny

'stitching/b1.png', 5, true, (50, 100)

cv::Canny

'stitching/b1.png', 3, true, (0, 50)

cv::Canny

'cv/detectors_descriptors_evaluation/images_datasets/leuven/img1.png', 3, false, (50, 100)

cv::Canny

'stitching/b1.png', 3, true, (100, 120)

cv::Canny

'cv/detectors_descriptors_evaluation/images_datasets/leuven/img1.png', 3, true, (0, 50)

cv::Canny::Canny

1280x720 1280x720, 3, true

cv::Canny::Canny

1920x1080 1920x1080, 5, true

cv::Canny::Canny

3840x2160 3840x2160, 3, true

cv::CascadeClassifier::detectMultiScale

'cv/cascadeandhog/cascades/lbpcascade_frontalface.xml', 'cv/shared/lena.png', 90

cv::CascadeClassifier::detectMultiScale

'cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml', 'cv/cascadeandhog/images/bttf301.png', 64

cv::CascadeClassifier::detectMultiScale

'cv/cascadeandhog/cascades/lbpcascade_frontalface.xml', 'cv/cascadeandhog/images/bttf301.png', 90

cv::CascadeClassifier::detectMultiScale

'cv/cascadeandhog/cascades/lbpcascade_frontalface.xml', 'cv/cascadeandhog/images/bttf301.png', 30

cv::CascadeClassifier::detectMultiScale

'cv/cascadeandhog/cascades/haarcascade_frontalface_alt2.xml', 'cv/shared/lena.png', 64

cv::CascadeClassifier::detectMultiScale

'cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml', 'cv/cascadeandhog/images/bttf301.png', 30

cv::CascadeClassifier::detectMultiScale

'cv/cascadeandhog/cascades/haarcascade_frontalface_alt.xml', 'cv/cascadeandhog/images/class57.png', 90

cv::CLAHE::apply

640x480 640x480

cv::CLAHE::apply

1280x720 1280x720, 0

cv::CLAHE::apply

1280x720 1280x720

cv::CLAHE::apply

1920x1080 1920x1080

cv::compare

127x61 127x61, 8UC1, CMP_LE

cv::compare

127x61 127x61, 8UC1, CMP_GE

cv::compare

127x61 127x61, 8UC1, CMP_EQ

cv::compare

127x61 127x61, 8UC1, CMP_NE

cv::compare

640x480 640x480, 8UC1, CMP_LT

cv::compare

640x480 640x480, 8UC1, CMP_EQ

cv::compare

640x480 640x480, 8UC1, CMP_GE

cv::compare

1280x720 1280x720, 8UC1, CMP_LE

cv::compare

1280x720 1280x720, 8UC1, CMP_GE

cv::compare

1280x720 1280x720, 8UC1, CMP_NE

cv::compare

1920x1080 1920x1080, 8UC1, CMP_GT

cv::compare

1920x1080 1920x1080, 8UC1, CMP_LE

cv::compare

1920x1080 1920x1080, 8UC1, CMP_EQ

cv::compare

1920x1080 1920x1080, 8UC1, CMP_NE

cv::compare

3840x2160 3840x2160, 8UC1, CMP_LE

cv::compare

3840x2160 3840x2160, 8UC1, CMP_LT

cv::compareHist

1, 1

cv::compareHist

3, 2

cv::convertScaleAbs

3840x2160 3840x2160, 8UC1

cv::copyMakeBorder

640x480 640x480, 8UC1, BORDER_REFLECT

cv::copyMakeBorder

1280x720 1280x720, 8UC1, BORDER_REFLECT_101

cv::copyMakeBorder

1280x720 1280x720, 8UC1, BORDER_CONSTANT

cv::copyMakeBorder

1280x720 1280x720, 8UC1, BORDER_REPLICATE

cv::copyMakeBorder

1280x720 1280x720, 8UC1, BORDER_REFLECT

cv::copyMakeBorder

3840x2160 3840x2160, 8UC1, BORDER_REPLICATE

cv::copyMakeBorder

3840x2160 3840x2160, 8UC1, BORDER_CONSTANT

cv::cornerEigenValsAndVecs

'stitching/a1.png', 5, 3, BORDER_REPLICATE

cv::cornerEigenValsAndVecs

'cv/shared/pic5.png', 5, 5, BORDER_REFLECT

cv::cornerEigenValsAndVecs

'stitching/a1.png', 3, 5, BORDER_REPLICATE

cv::cornerEigenValsAndVecs

'cv/shared/pic5.png', 3, 3, BORDER_REFLECT_101

cv::cornerEigenValsAndVecs

'stitching/a1.png', 5, 5, BORDER_CONSTANT

cv::cornerEigenValsAndVecs

'cv/shared/pic5.png', 5, 5, BORDER_REPLICATE

cv::cornerEigenValsAndVecs

'cv/shared/pic5.png', 3, 5, BORDER_REFLECT_101

cv::cornerEigenValsAndVecs

'stitching/a1.png', 3, 5, BORDER_REFLECT_101

cv::cornerEigenValsAndVecs

'cv/shared/pic5.png', 5, 5, BORDER_REFLECT_101

cv::cornerHarris

'cv/shared/pic5.png', 5, 3, 0.1, BORDER_REFLECT

cv::cornerHarris

'stitching/a1.png', 5, 5, 0.04, BORDER_CONSTANT

cv::cornerHarris

'stitching/a1.png', 5, 5, 0.1, BORDER_REPLICATE

cv::cornerHarris

'cv/shared/pic5.png', 5, 5, 0.1, BORDER_REPLICATE

cv::cornerHarris

'cv/shared/pic5.png', 5, 5, 0.04, BORDER_CONSTANT

cv::cornerHarris

'cv/shared/pic5.png', 3, 5, 0.1, BORDER_REFLECT

cv::cornerHarris

'cv/shared/pic5.png', 3, 3, 0.1, BORDER_REPLICATE

cv::cornerHarris

'stitching/a1.png', 5, 3, 0.04, BORDER_REFLECT

cv::cornerHarris

'cv/shared/pic5.png', 5, 3, 0.1, BORDER_CONSTANT

cv::cornerHarris

'stitching/a1.png', 3, 3, 0.04, BORDER_REFLECT_101

cv::cornerHarris

'stitching/a1.png', 5, 3, 0.1, BORDER_CONSTANT

cv::cornerHarris

'cv/shared/pic5.png', 3, 3, 0.04, BORDER_REFLECT_101

cv::cornerHarris

'cv/shared/pic5.png', 3, 3, 0.04, BORDER_REFLECT

cv::cornerHarris

'stitching/a1.png', 3, 5, 0.04, BORDER_CONSTANT

cv::cornerHarris

'stitching/a1.png', 5, 3, 0.04, BORDER_CONSTANT

cv::cornerHarris

'stitching/a1.png', 5, 3, 0.1, BORDER_REFLECT_101

cv::cornerHarris

'cv/shared/pic5.png', 5, 5, 0.04, BORDER_REPLICATE

cv::cornerHarris

'cv/shared/pic5.png', 5, 5, 0.04, BORDER_REFLECT

cv::cornerHarris

'stitching/a1.png', 5, 3, 0.1, BORDER_REPLICATE

cv::cornerHarris

'stitching/a1.png', 5, 5, 0.04, BORDER_REPLICATE

cv::CornerHarris

640x480 640x480, 8UC1

cv::CornerHarris

1920x1080 1920x1080, 8UC1

cv::cornerMinEigenVal

'stitching/a1.png', 3, 5, BORDER_REFLECT

cv::cornerMinEigenVal

'cv/shared/pic5.png', 5, 3, BORDER_REFLECT_101

cv::cornerMinEigenVal

'cv/shared/pic5.png', 3, 3, BORDER_REFLECT

cv::cornerMinEigenVal

'stitching/a1.png', 5, 5, BORDER_REFLECT_101

cv::cornerMinEigenVal

'stitching/a1.png', 3, 3, BORDER_REFLECT_101

cv::cornerMinEigenVal

'cv/shared/pic5.png', 3, 5, BORDER_REPLICATE

cv::cornerMinEigenVal

'stitching/a1.png', 3, 3, BORDER_REPLICATE

cv::cornerMinEigenVal

'cv/shared/pic5.png', 3, 5, BORDER_CONSTANT

cv::cornerMinEigenVal

'cv/shared/pic5.png', 5, 3, BORDER_REFLECT

cv::cornerMinEigenVal

'cv/shared/pic5.png', 5, 3, BORDER_CONSTANT

cv::cornerMinEigenVal

'stitching/a1.png', 3, 5, BORDER_CONSTANT

cv::cornerMinEigenVal

'stitching/a1.png', 5, 3, BORDER_REFLECT_101

cv::cornerMinEigenVal

1280x720 1280x720, 8UC1

cv::countNonZero

1280x720 1280x720, 8UC1

cv::cv::normalize

1920x1080 1920x1080, 8UC1, NORM_L1

cv::cvtColor

127x61 127x61, COLOR_YCrCb2RGB

cv::cvtColor

127x61 127x61, CX_RGBA2HLS

cv::cvtColor

127x61 127x61, COLOR_RGBA2BGR555

cv::cvtColor

127x61 127x61, COLOR_GRAY2BGR555

cv::cvtColor

127x61 127x61, CX_YCrCb2BGRA

cv::cvtColor

127x61 127x61, COLOR_RGB2Luv

cv::cvtColor

127x61 127x61, COLOR_RGB2BGR555

cv::cvtColor

127x61 127x61, CX_BGRA2HSV

cv::cvtColor

127x61 127x61, COLOR_BayerRG2BGRA

cv::cvtColor

127x61 127x61, COLOR_BGR2HSV

cv::cvtColor

127x61 127x61, CX_LBGRA2Luv

cv::cvtColor

127x61 127x61, CX_BGRA2Lab

cv::cvtColor

127x61 127x61, CX_Lab2RGBA

cv::cvtColor

127x61 127x61, COLOR_BayerBG2BGR

cv::cvtColor

127x61 127x61, COLOR_YUV2RGB

cv::cvtColor

127x61 127x61, CX_RGBA2HLS_FULL

cv::cvtColor

127x61 127x61, CX_LBGRA2Lab

cv::cvtColor

127x61 127x61, COLOR_BGR5552RGBA

cv::cvtColor

127x61 127x61, COLOR_LRGB2Luv

cv::cvtColor

127x61 127x61, COLOR_RGBA2BGR

cv::cvtColor

127x61 127x61, COLOR_XYZ2BGR

cv::cvtColor

127x61 127x61, COLOR_LBGR2Lab

cv::cvtColor

127x61 127x61, COLOR_RGB2HSV

cv::cvtColor

127x61 127x61, COLOR_HLS2BGR_FULL

cv::cvtColor

127x61 127x61, COLOR_Luv2LBGR

cv::cvtColor

127x61 127x61, CX_YCrCb2RGBA

cv::cvtColor

127x61 127x61, COLOR_RGB2HLS_FULL

cv::cvtColor

127x61 127x61, COLOR_HSV2RGB_FULL

cv::cvtColor

127x61 127x61, COLOR_BGR2Lab

cv::cvtColor

127x61 127x61, CX_RGBA2YCrCb

cv::cvtColor

127x61 127x61, COLOR_Luv2LRGB

cv::cvtColor

127x61 127x61, CX_HSV2BGRA

cv::cvtColor

127x61 127x61, CX_BGRA2Luv

cv::cvtColor

127x61 127x61, COLOR_BGR2RGB

cv::cvtColor

127x61 127x61, COLOR_BGR2GRAY

cv::cvtColor

127x61 127x61, CX_RGBA2HSV

cv::cvtColor

127x61 127x61, COLOR_HLS2RGB_FULL

cv::cvtColor

127x61 127x61, COLOR_BGR5652BGR

cv::cvtColor

127x61 127x61, CX_Luv2LRGBA

cv::cvtColor

127x61 127x61, CX_YUV2BGRA

cv::cvtColor

127x61 127x61, COLOR_RGB2YCrCb

cv::cvtColor

127x61 127x61, CX_Lab2BGRA

cv::cvtColor

127x61 127x61, COLOR_XYZ2RGB

cv::cvtColor

127x61 127x61, CX_XYZ2BGRA

cv::cvtColor

130x60 130x60, COLOR_BayerGR2BGR_EA

cv::cvtColor

130x60 130x60, COLOR_YUV2BGRA_NV21

cv::cvtColor

130x60 130x60, COLOR_BGR2YUV_IYUV

cv::cvtColor

130x60 130x60, COLOR_YUV2BGRA_UYVY

cv::cvtColor

130x60 130x60, COLOR_BGR2YUV_YV12

cv::cvtColor

130x60 130x60, COLOR_RGB2YUV_IYUV

cv::cvtColor

130x60 130x60, COLOR_RGBA2YUV_YV12

cv::cvtColor

130x60 130x60, COLOR_YUV2BGRA_YVYU

cv::cvtColor

130x60 130x60, COLOR_YUV2BGR_UYVY

cv::cvtColor

130x60 130x60, COLOR_YUV2RGB_IYUV

cv::cvtColor

130x60 130x60, COLOR_YUV2RGBA_NV21

cv::cvtColor

640x480 640x480, CX_LBGRA2Lab

cv::cvtColor

640x480 640x480, (COLOR_RGB2GRAY, 3, 1)

cv::cvtColor

640x480 640x480, COLOR_YUV2BGR_YV12

cv::cvtColor

640x480 640x480, COLOR_RGB2YUV_YV12

cv::cvtColor

640x480 640x480, COLOR_BGR2HLS

cv::cvtColor

640x480 640x480, COLOR_YUV2RGB_YV12

cv::cvtColor

640x480 640x480, COLOR_BGR5652RGBA

cv::cvtColor

640x480 640x480, (COLOR_mRGBA2RGBA, 4, 4)

cv::cvtColor

640x480 640x480, COLOR_BGRA2BGR555

cv::cvtColor

640x480 640x480, CX_LRGBA2Luv

cv::cvtColor

640x480 640x480, CX_Luv2BGRA

cv::cvtColor

640x480 640x480, CX_RGBA2YUV

cv::cvtColor

640x480 640x480, COLOR_BayerGB2BGRA

cv::cvtColor

640x480 640x480, COLOR_BayerRG2GRAY

cv::cvtColor

640x480 640x480, COLOR_LBGR2Lab

cv::cvtColor

640x480 640x480, COLOR_YUV2RGB

cv::cvtColor

640x480 640x480, (COLOR_YUV2RGB_YUY2, 2, 3)

cv::cvtColor

640x480 640x480, (COLOR_RGB2Lab, 3, 3)

cv::cvtColor

640x480 640x480, COLOR_BGRA2RGBA

cv::cvtColor

640x480 640x480, COLOR_RGBA2GRAY

cv::cvtColor

640x480 640x480, COLOR_BGR2RGBA

cv::cvtColor

640x480 640x480, CX_YCrCb2BGRA

cv::cvtColor

640x480 640x480, COLOR_BayerRG2BGR_EA

cv::cvtColor

640x480 640x480, (COLOR_XYZ2RGB, 3, 3)

cv::cvtColor

640x480 640x480, CX_YUV2BGRA

cv::cvtColor

640x480 640x480, COLOR_BayerBG2BGR_VNG

cv::cvtColor

640x480 640x480, (COLOR_Lab2BGR, 3, 4)

cv::cvtColor

640x480 640x480, COLOR_RGB2BGR565

cv::cvtColor

640x480 640x480, CX_LRGBA2Lab

cv::cvtColor

640x480 640x480, COLOR_BGR5652RGB

cv::cvtColor

640x480 640x480, COLOR_Luv2RGB

cv::cvtColor

640x480 640x480, COLOR_BGR2BGR555

cv::cvtColor

640x480 640x480, CX_RGBA2HSV_FULL

cv::cvtColor

640x480 640x480, COLOR_GRAY2BGR565

cv::cvtColor

640x480 640x480, COLOR_HLS2BGR

cv::cvtColor

640x480 640x480, CX_HLS2RGBA_FULL

cv::cvtColor

640x480 640x480, COLOR_YUV2BGR_UYVY

cv::cvtColor

640x480 640x480, COLOR_YUV2BGR_YVYU

cv::cvtColor

640x480 640x480, COLOR_HLS2BGR_FULL

cv::cvtColor

640x480 640x480, (COLOR_YUV2GRAY_YUY2, 2, 1)

cv::cvtColor

640x480 640x480, COLOR_BGR2YUV_IYUV

cv::cvtColor

640x480 640x480, CX_XYZ2RGBA

cv::cvtColor

640x480 640x480, COLOR_YUV2RGB_IYUV

cv::cvtColor

640x480 640x480, COLOR_BayerRG2BGR

cv::cvtColor

640x480 640x480, COLOR_BGR5552BGRA

cv::cvtColor

640x480 640x480, COLOR_HLS2RGB_FULL

cv::cvtColor

640x480 640x480, CX_Luv2LBGRA

cv::cvtColor

640x480 640x480, COLOR_BGR5552GRAY

cv::cvtColor

640x480 640x480, COLOR_YUV2RGBA_UYVY

cv::cvtColor

640x480 640x480, COLOR_GRAY2BGRA

cv::cvtColor

640x480 640x480, COLOR_LBGR2Luv

cv::cvtColor

640x480 640x480, COLOR_YUV2BGRA_NV21

cv::cvtColor

640x480 640x480, (COLOR_YUV2RGB_NV12, 1, 3)

cv::cvtColor

640x480 640x480, COLOR_XYZ2BGR

cv::cvtColor

640x480 640x480, COLOR_BayerBG2BGR_EA

cv::cvtColor

640x480 640x480, CX_HSV2RGBA

cv::cvtColor

640x480 640x480, COLOR_Lab2LRGB

cv::cvtColor

640x480 640x480, COLOR_YUV2RGBA_NV21

cv::cvtColor

640x480 640x480, COLOR_RGB2HLS

cv::cvtColor

640x480 640x480, COLOR_BGRA2BGR

cv::cvtColor

640x480 640x480, (COLOR_RGB2YUV_IYUV, 3, 1)

cv::cvtColor

640x480 640x480, COLOR_BayerBG2BGR

cv::cvtColor

640x480 640x480, COLOR_YUV2BGRA_IYUV

cv::cvtColor

640x480 640x480, CX_RGBA2HLS_FULL

cv::cvtColor

640x480 640x480, COLOR_BayerGR2BGR

cv::cvtColor

640x480 640x480, CX_BGRA2XYZ

cv::cvtColor

640x480 640x480, CX_RGBA2YCrCb

cv::cvtColor

640x480 640x480, COLOR_BGR2RGB

cv::cvtColor

640x480 640x480, COLOR_RGB2XYZ

cv::cvtColor

1280x720 1280x720, (COLOR_RGB2HLS, 3, 3)

cv::cvtColor

1280x720 1280x720, (COLOR_BGR5652BGR, 2, 3)

cv::cvtColor

1280x720 1280x720, (COLOR_RGBA2mRGBA, 4, 4)

cv::cvtColor

1280x720 1280x720, COLOR_BGR2YUV_YV12

cv::cvtColor

1280x720 1280x720, (COLOR_XYZ2RGB, 3, 3)

cv::cvtColor

1280x720 1280x720, (COLOR_RGB2Lab, 3, 3)

cv::cvtColor

1280x720 1280x720, (COLOR_Luv2LBGR, 3, 4)

cv::cvtColor

1280x720 1280x720, (COLOR_RGB2YUV_IYUV, 3, 1)

cv::cvtColor

1280x720 1280x720, (COLOR_YUV2RGB, 3, 3)

cv::cvtColor

1280x720 1280x720, (COLOR_YUV2RGB_NV12, 1, 3)

cv::cvtColor

1280x720 1280x720, (COLOR_Lab2BGR, 3, 4)

cv::cvtColor

1280x720 1280x720, (COLOR_YUV2GRAY_YUY2, 2, 1)

cv::cvtColor

1280x720 1280x720, (COLOR_YCrCb2RGB, 3, 3)

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGRA_NV12

cv::cvtColor

1920x1080 1920x1080, (COLOR_YUV2RGB_YUY2, 2, 3)

cv::cvtColor

1920x1080 1920x1080, COLOR_Lab2BGR

cv::cvtColor

1920x1080 1920x1080, (COLOR_RGB2YUV, 3, 3)

cv::cvtColor

1920x1080 1920x1080, COLOR_BGRA2GRAY

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGRA_NV21

cv::cvtColor

1920x1080 1920x1080, COLOR_GRAY2BGR565

cv::cvtColor

1920x1080 1920x1080, CX_LRGBA2Lab

cv::cvtColor

1920x1080 1920x1080, COLOR_HSV2BGR

cv::cvtColor

1920x1080 1920x1080, COLOR_BGRA2BGR565

cv::cvtColor

1920x1080 1920x1080, CX_HSV2RGBA

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2RGBA_NV12

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGRA_IYUV

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2BGR565

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2Lab

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR5552BGRA

cv::cvtColor

1920x1080 1920x1080, COLOR_LBGR2Lab

cv::cvtColor

1920x1080 1920x1080, CX_BGRA2XYZ

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2HSV_FULL

cv::cvtColor

1920x1080 1920x1080, (COLOR_BGR2BGR565, 3, 2)

cv::cvtColor

1920x1080 1920x1080, CX_YUV2RGBA

cv::cvtColor

1920x1080 1920x1080, CX_Luv2LBGRA

cv::cvtColor

1920x1080 1920x1080, (COLOR_RGBA2mRGBA, 4, 4)

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGR_YV12

cv::cvtColor

1920x1080 1920x1080, CX_HSV2BGRA_FULL

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGR_NV12

cv::cvtColor

1920x1080 1920x1080, COLOR_RGBA2BGR

cv::cvtColor

1920x1080 1920x1080, CX_RGBA2HSV

cv::cvtColor

1920x1080 1920x1080, (COLOR_RGB2HLS, 3, 3)

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR5552RGB

cv::cvtColor

1920x1080 1920x1080, COLOR_BayerGR2BGR_EA

cv::cvtColor

1920x1080 1920x1080, COLOR_RGB2GRAY

cv::cvtColor

1920x1080 1920x1080, COLOR_RGB2BGR565

cv::cvtColor

1920x1080 1920x1080, CX_BGRA2HSV_FULL

cv::cvtColor

1920x1080 1920x1080, COLOR_HLS2RGB_FULL

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2HLS

cv::cvtColor

1920x1080 1920x1080, COLOR_BGRA2RGBA

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2XYZ

cv::cvtColor

1920x1080 1920x1080, COLOR_Luv2RGB

cv::cvtColor

1920x1080 1920x1080, COLOR_GRAY2BGR

cv::cvtColor

1920x1080 1920x1080, COLOR_Luv2LRGB

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGRA_YVYU

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2Luv

cv::cvtColor

1920x1080 1920x1080, CX_BGRA2Lab

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2HSV

cv::cvtColor

1920x1080 1920x1080, CX_LBGRA2Luv

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2YCrCb

cv::cvtColor

1920x1080 1920x1080, CX_HLS2BGRA

cv::cvtColor

1920x1080 1920x1080, COLOR_BayerRG2BGR_EA

cv::cvtColor

1920x1080 1920x1080, CX_YUV2BGRA

cv::cvtColor

1920x1080 1920x1080, CX_Luv2BGRA

cv::cvtColor

1920x1080 1920x1080, CX_YCrCb2BGRA

cv::cvtColor

1920x1080 1920x1080, COLOR_BGR2HLS_FULL

cv::cvtColor

1920x1080 1920x1080, COLOR_Lab2LBGR

cv::cvtColor

1920x1080 1920x1080, COLOR_BGRA2YUV_YV12

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGRA_YUY2

cv::cvtColor

1920x1080 1920x1080, (COLOR_RGB2Luv, 3, 3)

cv::cvtColor

1920x1080 1920x1080, CX_RGBA2Luv

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2BGR_NV21

cv::cvtColor

1920x1080 1920x1080, COLOR_YUV2RGB_YV12

cv::cvtColor

1920x1080 1920x1080, (COLOR_YUV2RGB_IYUV, 1, 3)

cv::cvtColor

1920x1080 1920x1080, CX_XYZ2BGRA

cv::cvtColor

3840x2160 3840x2160, (COLOR_RGB2XYZ, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_RGB2Luv, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_YUV2GRAY_YUY2, 2, 1)

cv::cvtColor

3840x2160 3840x2160, (COLOR_RGB2HLS, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_HLS2RGB, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_RGB2Lab, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_YUV2GRAY_420, 1, 1)

cv::cvtColor

3840x2160 3840x2160, (COLOR_RGB2YUV, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_RGB2BGR, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_YCrCb2RGB, 3, 3)

cv::cvtColor

3840x2160 3840x2160, (COLOR_RGB2YUV_IYUV, 3, 1)

cv::DenseOpticalFlow::calc

640x480 'PRESET_MEDIUM', 640x480

cv::DenseOpticalFlow::calc

640x480 'PRESET_ULTRAFAST', 640x480

cv::DenseOpticalFlow::calc

1280x720 'PRESET_MEDIUM', 1280x720

cv::DenseOpticalFlow::calc

1280x720 'PRESET_ULTRAFAST', 1280x720

cv::DenseOpticalFlow::calc

1280x720 'PRESET_FAST', 1280x720

cv::DenseOpticalFlow::calc

1920x1080 'PRESET_ULTRAFAST', 1920x1080

cv::DenseOpticalFlow::calc

1920x1080 'PRESET_FAST', 1920x1080

cv::DenseOpticalFlow::calc

1920x1080 'PRESET_MEDIUM', 1920x1080

cv::detail::BundleAdjusterBase::BundleAdjusterBase

'orb', 'affinePartial'

cv::dft

512x512 2, 512x512, 4

cv::dft

512x512 1, 512x512, 2

cv::dft

512x512 2, 512x512, 5

cv::dft

512x512 3, 512x512, 0

cv::dft

512x512 1, 512x512, 1

cv::dft

512x512 3, 512x512, 5

cv::dft

640x480 1, 640x480, 4

cv::dft

640x480 0, 640x480, 1

cv::dft

640x480 0, 640x480, 0

cv::dft

640x480 3, 640x480, 4

cv::dft

640x480 3, 640x480, 1

cv::dft

640x480 1, 640x480, 5

cv::dft

640x480 0, 640x480, 5

cv::dft

640x480 1, 640x480, 2

cv::dft

640x480 1, 640x480, 3

cv::dft

1024x1024 1, 1024x1024, 0

cv::dft

1024x1024 1, 1024x1024, 5

cv::dft

1024x1024 2, 1024x1024, 3

cv::dft

1024x1024 1, 1024x1024, 2

cv::dft

1024x1024 2, 1024x1024, 5

cv::dft

1024x1024 3, 1024x1024, 2

cv::dft

1024x1024 0, 1024x1024, 3

cv::dft

1024x1024 3, 1024x1024, 1

cv::dft

1024x1024 0, 1024x1024, 4

cv::dft

1280x720 2, 1280x720, 2

cv::dft

1280x720 1, 1280x720, 4

cv::dft

1280x720 0, 1280x720, 5

cv::dft

1280x720 1, 1280x720, 1

cv::dft

1280x720 2, 1280x720, 1

cv::dft

1280x720 1, 1280x720, 0

cv::dft

1920x1080 0, 1920x1080, 3

cv::dft

1920x1080 2, 1920x1080, 3

cv::dft

1920x1080 3, 1920x1080, 1

cv::dft

1920x1080 0, 1920x1080, 4

cv::dft

1920x1080 1, 1920x1080, 1

cv::dft

1920x1080 2, 1920x1080, 5

cv::dft

1920x1080 2, 1920x1080, 1

cv::dft

1920x1080 0, 1920x1080, 2

cv::dft

2048x2048 3, 2048x2048, 2

cv::dft

2048x2048 1, 2048x2048, 2

cv::dft

2048x2048 0, 2048x2048, 0

cv::dft

2048x2048 3, 2048x2048, 3

cv::dft

2048x2048 2, 2048x2048, 0

cv::dilate

800x600 800x600, 8UC1

cv::dilate

1024x768 1024x768, 8UC1

cv::dilate

1920x1080 1920x1080, 8UC1, 3

cv::dilate

3840x2160 3840x2160, 8UC1, 3

cv::distanceTransform

640x480 640x480, DIST_C, DIST_MASK_PRECISE, DIST_LABEL_PIXEL

cv::distanceTransform

640x480 640x480, DIST_C, DIST_MASK_5, CV_32F

cv::distanceTransform

640x480 640x480, DIST_L1, DIST_MASK_PRECISE, DIST_LABEL_PIXEL

cv::distanceTransform

640x480 640x480, DIST_C, DIST_MASK_3, DIST_LABEL_CCOMP

cv::distanceTransform

640x480 640x480, DIST_L2, DIST_MASK_3, CV_32F

cv::distanceTransform

640x480 640x480, DIST_L2, DIST_MASK_5, CV_8U

cv::distanceTransform

640x480 640x480, DIST_C, DIST_MASK_5, DIST_LABEL_CCOMP

cv::distanceTransform

640x480 640x480, DIST_C, DIST_MASK_PRECISE, CV_32F

cv::distanceTransform

640x480 640x480, DIST_L1, DIST_MASK_PRECISE, CV_32F

cv::distanceTransform

640x480 640x480, DIST_L2, DIST_MASK_5, DIST_LABEL_CCOMP

cv::distanceTransform

640x480 640x480, DIST_L2, DIST_MASK_PRECISE, CV_32F

cv::distanceTransform

800x600 800x600, DIST_L2, DIST_MASK_5, DIST_LABEL_PIXEL

cv::distanceTransform

800x600 800x600, DIST_L1, DIST_MASK_5, CV_32F

cv::distanceTransform

800x600 800x600, DIST_L1, DIST_MASK_PRECISE, DIST_LABEL_PIXEL

cv::distanceTransform

800x600 800x600, DIST_C, DIST_MASK_3, DIST_LABEL_PIXEL

cv::distanceTransform

800x600 800x600, DIST_L1, DIST_MASK_5, DIST_LABEL_CCOMP

cv::distanceTransform

800x600 800x600, DIST_L2, DIST_MASK_5, CV_32F

cv::distanceTransform

800x600 800x600, DIST_L1, DIST_MASK_3, DIST_LABEL_CCOMP

cv::distanceTransform

800x600 800x600, DIST_L2, DIST_MASK_PRECISE, DIST_LABEL_CCOMP

cv::distanceTransform

800x600 800x600, DIST_L1, DIST_MASK_3, CV_32F

cv::distanceTransform

800x600 800x600, DIST_L2, DIST_MASK_5, DIST_LABEL_CCOMP

cv::distanceTransform

800x600 800x600, DIST_L1, DIST_MASK_PRECISE, CV_32F

cv::distanceTransform

800x600 800x600, DIST_L2, DIST_MASK_PRECISE, DIST_LABEL_PIXEL

cv::distanceTransform

Benchmark listing (condensed from the interactive table): each entry pairs an OpenCV function with an input size and a parameter combination, e.g. "cv::distanceTransform — 1024x768, DIST_L2, DIST_MASK_PRECISE, CV_8U". Several hundred such configurations are listed, sweeping input sizes from 127x61 up to 3840x2160 and the relevant flags for each function. Functions covered:

cv::distanceTransform, cv::divide, cv::equalizeHist, cv::erode, cv::estimateAffine2D, cv::estimateAffinePartial2D, cv::extractChannel, cv::fastNlMeansDenoising, cv::fastNlMeansDenoisingColored, cv::Feature2D::compute, cv::Feature2D::detect, cv::Feature2D::detectAndCompute, cv::filter2D, cv::findCirclesGrid, cv::findContours, cv::findTransformECC, cv::flip, cv::floodFill, cv::GaussianBlur, cv::getPerspectiveTransform, cv::getUMat, cv::goodFeaturesToTrack, cv::HOGDescriptor::detectMultiScale, cv::HoughLines, cv::HoughLinesP, cv::inpaint, cv::inRange, cv::insertChannel, cv::integral, cv::kmeans, cv::LUT, cv::Mat::clone, cv::Mat::convertTo, cv::Mat::dot, cv::Mat::eye, cv::Mat::zeros, cv::matchTemplate, cv::max, cv::mean, cv::meanStdDev, cv::medianBlur, cv::merge, cv::minMaxLoc, cv::minScalarSameType, cv::moments, cv::morphologyEx, cv::mulSpectrums, cv::multiply, cv::norm, cv::normalize, cv::phase, cv::preCornerDetect, cv::PSNR, cv::pyrDown, cv::pyrUp, cv::QRCodeDetector::decode, cv::QRCodeDetector::detect, cv::reduce, cv::remap, cv::repeat, cv::resize, cv::scaleAdd, cv::Scharr, cv::setIdentity, cv::Sobel, cv::solvePnP, cv::sortIdx, cv::sort, cv::split, cv::sqrBoxFilter, cv::Stitcher::stitch, cv::subtract, cv::sum, cv::threshold, cv::UMat::copyTo, cv::UMat::setTo, cv::VariationalRefinement::calc, cv::warpAffine, cv::warpPerspective, opencv_test::ocl::WarperBase::buildMaps, opencv_test::ocl::WarperBase::warp
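A listing like the one above is essentially a sweep over (function, input size, parameter) combinations, timing each one. A minimal sketch of such a harness in Python, using only the standard library; the workload, grid values, and function names here are placeholders for illustration, not F0cal's actual implementation (a real harness would call e.g. cv2.resize instead):

```python
import itertools
import time

def _timed(fn, params):
    """Run fn once with the given keyword parameters and return elapsed seconds."""
    start = time.perf_counter()
    fn(**params)
    return time.perf_counter() - start

def bench(fn, grid, repeats=3):
    """Time fn(**params) for every combination in the parameter grid.

    Returns a list of (params, best_seconds) tuples, keeping the best of
    `repeats` runs to reduce scheduling noise.
    """
    results = []
    keys = list(grid)
    for values in itertools.product(*grid.values()):
        params = dict(zip(keys, values))
        best = min(_timed(fn, params) for _ in range(repeats))
        results.append((params, best))
    return results

# Stand-in workload so the sketch runs without OpenCV installed.
def workload(width, height, kernel):
    total = 0
    for _ in range(kernel):
        total += sum(range(width)) + sum(range(height))
    return total

grid = {"width": [640, 1920], "height": [480, 1080], "kernel": [3, 5]}
for params, seconds in bench(workload, grid):
    print(params, f"{seconds * 1e3:.3f} ms")
```

Each (size, parameter) row in the table above corresponds to one cell of such a grid; reporting the best of several repeats is one common way to get stable numbers on a noisy device.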



All Comments: [-] | anchor

brian_herman__(10000) 3 days ago [-]

What is F0cal?

jrf0cal(10000) 3 days ago [-]

Hey Brian. F0cal provides a suite of tools to help design, test, optimize, and deploy computer vision systems. We're currently developing automated pipeline profiling and hardware simulation features. Let me know if you're interested in hearing more.

Q6T46nT668w6i3m(3265) 3 days ago [-]

Neat! It would be nice if contrib functions were added.

Q6T46nT668w6i3m(3265) 3 days ago [-]

I just noticed you're in Cambridge. I work down the street in the Broad's Imaging Platform. We should have coffee or something.

rhardih(3635) 1 day ago [-]

Wow, I literally wrote down an idea for a tool like this, just a couple of days ago.

I'm running OpenCV on Android for an app project, and gauging pipeline costs at different steps is a pain. Right now I'm resorting to a 'timing' build, with basic printouts of elapsed times at each step. Archaic.

Do you guys plan to add e.g. Snapdragon etc. as a measure at some point?

Also are the benchmarks based off of real device numbers, or are they fuzzed estimates somehow?

You seem to target businesses, understandably, but I signed up for a beta in any case.

jrf0cal(10000) 1 day ago [-]

Thanks for signing up. I'll reach out via email.





Historical Discussions: Show HN: Slides.ai, a tool to create beautiful slide decks and track everything (February 13, 2019: 10 points)

(10) Show HN: Slides.ai, a tool to create beautiful slide decks and track everything

10 points 3 days ago by priansh in 3973rd position

www.slides.ai | | comments | anchor

Understand who's viewing your deck

See the full picture of who's viewing your deck, so you're prepared to reach the right people, make great first impressions and nurture relationships to fund your company.




No comments posted yet: Link to HN comments page



(10) Where is the universe hiding its missing mass?

10 points about 9 hours ago by dnetesn in 8th position

phys.org | Estimated reading time – 5 minutes | comments | anchor

Credit: Chandra X-ray Center

Astronomers have spent decades looking for something that sounds like it would be hard to miss: about a third of the 'normal' matter in the Universe. New results from NASA's Chandra X-ray Observatory may have helped them locate this elusive expanse of missing matter.

From independent, well-established observations, scientists have confidently calculated how much normal matter—meaning hydrogen, helium and other elements—existed just after the Big Bang. In the time between the first few minutes and the first billion years or so, much of the normal matter made its way into cosmic dust, gas and objects such as stars and planets that telescopes can see in the present-day Universe.

The problem is that when astronomers add up the mass of all the normal matter in the present-day Universe, about a third of it can't be found. (This missing matter is distinct from the still-mysterious dark matter.)

One idea is that the missing mass gathered into gigantic strands or filaments of warm (temperature less than 100,000 Kelvin) and hot (temperature greater than 100,000 Kelvin) gas in intergalactic space. These filaments are known by astronomers as the 'warm-hot intergalactic medium' or WHIM. They are invisible to optical light telescopes, but some of the warm gas in filaments has been detected in ultraviolet light.

Using a new technique, researchers have found new and strong evidence for the hot component of the WHIM based on data from Chandra and other telescopes.

'If we find this missing mass, we can solve one of the biggest conundrums in astrophysics,' said Orsolya Kovacs of the Center for Astrophysics | Harvard & Smithsonian (CfA) in Cambridge, Massachusetts. 'Where did the universe stash so much of its matter that makes up stuff like stars and planets and us?'

Astronomers used Chandra to look for and study filaments of warm gas lying along the path to a quasar, a bright source of X-rays powered by a rapidly growing supermassive black hole. This quasar is located about 3.5 billion light years from Earth. If the WHIM's hot gas component is associated with these filaments, some of the X-rays from the quasar would be absorbed by that hot gas. Therefore, they looked for a signature of hot gas imprinted in the quasar's X-ray light detected by Chandra.

Light Path (Credit: NASA/CXC/K. Williamson; Springel et al.)

One of the challenges of this method is that the signal of absorption by the WHIM is weak compared to the total amount of X-rays coming from the quasar. When searching the entire spectrum of X-rays at different wavelengths, it is difficult to distinguish such weak absorption features—actual signals of the WHIM—from random fluctuations.

Kovacs and her team overcame this problem by focusing their search only on certain parts of the X-ray light spectrum, reducing the likelihood of false positives. They did this by first identifying galaxies near the line of sight to the quasar that are located at the same distance from Earth as regions of warm gas detected from ultraviolet data. With this technique they identified 17 possible filaments between the quasar and us, and obtained their distances.

Because of the expansion of the universe, which stretches out light as it travels, any absorption of X-rays by matter in these filaments will be shifted to redder wavelengths. The amounts of the shifts depend on the known distances to the filament, so the team knew where to search in the spectrum for absorption from the WHIM.

'Our technique is similar in principle to how you might conduct an efficient search for animals in the vast plains of Africa,' said Akos Bogdan, a co-author also from CfA. 'We know that animals need to drink, so it makes sense to search around watering holes first.'

While narrowing their search helped, the researchers also had to overcome the problem of the faintness of the X-ray absorption. So, they boosted the signal by adding spectra together from 17 filaments, turning a 5.5-day-long observation into the equivalent of almost 100 days' worth of data. With this technique they detected oxygen with characteristics suggesting it was in a gas with a temperature of about one million degrees Kelvin.

By extrapolating from these observations of oxygen to the full set of elements, and from the observed region to the local universe, the researchers report they can account for the complete amount of missing matter. At least in this particular case, the missing matter had been hiding in the WHIM after all.

'We were thrilled that we were able to track down some of this missing matter' said co-author Randall Smith, also of CfA. 'In the future we can apply this same method to other quasar data to confirm that this long-standing mystery has at last been cracked.'

A paper describing these results was published in the Astrophysical Journal on February 13, 2019.

Explore further: Researchers find last of universe's missing ordinary matter

More information: Orsolya E. Kovacs et al. Detection of the Missing Baryons toward the Sightline of H1821+643. arXiv:1812.04625 [astro-ph.CO]. arxiv.org/abs/1812.04625

Journal reference: Astrophysical Journal. Provided by: Chandra X-ray Center.



No comments posted yet: Link to HN comments page



(9) Color Spaces

9 points about 10 hours ago by tylerchr in 10000th position

ciechanow.ski | Estimated reading time – 45 minutes | comments | anchor

February 15, 2019

For the longest time we didn't have to pay a lot of attention to the way we talk about color. The modern display technologies capable of showing more vivid shades have, for better or for worse, changed the rules of the game. Once esoteric ideas like a gamut or a color space are becoming increasingly important.

A color can be described in many different ways. We could use words, list amounts of CMYK printer inks, enumerate quite flawed HSL and HSV values, or even quantify the responses of cells in a human retina. Those notions are useful in some contexts, but I'm not going to focus on any of them.

This article is dedicated entirely to RGB values from RGB color spaces. It may seem fairly restrictive, but considering the domination of displays as the medium for color presentation, it is a pragmatic approach and it ultimately won't prevent us from describing everything we can see.

A dry definition of a color space is not a good way to kick things off. Instead, we'll start by playing with one of the most common tools used to specify colors.

Color Pickers

You've probably seen an RGB color picker before; it usually looks like this:

By dragging all the sliders to the right you can create a white color. Lots of red and green with little blue produces shades of yellow. That color picker is not the only way to specify colors. Try using the one below:

The gist of the behavior of that new picker is the same – the red slider controls the red component, the green slider affects the green component, and the blue slider acts on the blue component. Both sets of sliders have the minimum value of 0 and the maximum value of 255. However, the shades and intensities they create are quite different. We can compare some of the colors for the same slider positions side by side. In each plate the top half shows the color from the first picker and the bottom half contains the color from the second picker:

The only color that looks the same is pure black; none of the other colors match despite the same values of red, green, and blue. You may be wondering which halves of the plates are wrong, but the truth is that they're both correct – they just come from different RGB color spaces.

I'll soon explain what causes those differences and we'll eventually see what defines a color space, but for now the important lesson is that on their own the numeric values of the red, green, and blue components have no meaning. A color space is what assigns the meaning to those numeric values.

What I've shown you above is an extreme example in which almost everything about the two color spaces is different. Over the course of the next few paragraphs we will be looking at simpler examples which will help us get a feel for what different aspects of color spaces mean. Before we discuss those aspects we should make a quick detour to revisit how the components of an RGB color are described.

Normalized Range

When dealing with RGB colors you'll often see them specified in a 0 to 255 range:

255, 230, 26

Or in an equivalent hexadecimal form:

ff, e6, 1a

While using the range of 0 to 255 is convenient, especially for specifying colors on the web, it ties the description of color to a specific 8-bit depth. What we actually want to express is the percentage of the maximum intensity a red, green, or blue component can represent.

In our discourse I'll use a so called normalized range where the minimum value is still 0.0, but the maximum value is 1.0. Calculating normalized values from 0 to 255 range is easy – just divide the source numbers by 255.0:

1.000, 0.902, 0.102

This form may feel less familiar, but it lets us talk about the values without having to care what kind of discrete encoding scheme they'll eventually use.
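The conversion is a one-liner in code. A small Python sketch (the `normalize` helper is mine, purely for illustration):

```python
def normalize(r: int, g: int, b: int) -> tuple:
    """Convert 8-bit components (0-255) to the normalized 0.0-1.0 range."""
    return tuple(round(c / 255.0, 3) for c in (r, g, b))

print(normalize(255, 230, 26))  # → (1.0, 0.902, 0.102)
```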

Intensity Mismatch

Let's continue playing with the color pickers by upgrading them to show two colors at the same time. In the top half we'll see the color obtained from the RGB values interpreted by one color space, while the bottom half shows the color obtained from the same RGB values, but interpreted by another color space:

While playing with the sliders you may have noticed something peculiar – if the sliders are at their minimum or maximum the colors look the same, otherwise they don't. Here's a comparison of some of the colors for different slider values:

A color space can specify how the numeric values of the red, green, and blue components map to intensity of the corresponding light source. In other words, the position of a slider may not be equal to intensity of the light the slider controls. The color space from the bottom half uses a simple linear encoding:

(intensity value) = (encoded value)

The light shining at 64% of its maximum intensity will be encoded as the number 0.64. The top color space uses an encoding with a fixed 2.0 exponent:

(intensity value) = (encoded value)^2.0

The light shining at 64% of its maximum intensity will be encoded as 0.8.

This may all seem like a pointless transformation, but there is a good reason for this nonlinear mapping. The human eye is not a simple detector of the power of the incoming light – its response is nonlinear. A two-fold increase in the number of photons emitted per second will not be perceived as a two-fold increase in brightness.

If we were to encode the colors using floating point numbers the need for a nonlinear encoding function would be diminished. However, the numeric values of color are often encoded using the familiar 8 bits per component, e.g. in the most common configurations of JPEG and PNG files. Using a nonlinear tone response curve, or TRC for short, lets us maintain more or less perceptual uniformity and use the chunky, quantized range to keep the detail in the darker parts.
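For the curious, the sRGB curve is a piecewise combination of a short linear segment and a power segment with a 2.4 exponent. A Python sketch of both directions:

```python
def srgb_encode(linear):
    """Apply the sRGB tone response curve to a linear intensity in 0.0-1.0."""
    if linear <= 0.0031308:
        return 12.92 * linear
    return 1.055 * linear ** (1 / 2.4) - 0.055

def srgb_decode(encoded):
    """Invert the curve: recover the linear intensity from an encoded value."""
    if encoded <= 0.04045:
        return encoded / 12.92
    return ((encoded + 0.055) / 1.055) ** 2.4
```

The smallest nonzero 8-bit value, 1/255 ≈ 0.0039, decodes to a linear intensity of roughly 0.0003 – far below anything an 8-bit linear encoding could distinguish from zero.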

Here are the first 14 representable values in 8 bit encoding of linearly increasing amount of light. You can probably tell that the brightness difference we perceive between the first two panes is much larger than between the last two panes:

First 14 representable linear shades of gray in 8 bit encoding

The extremely common sRGB color space employs a nonlinear TRC to make better use of human visual perception. Here are the first 14 representable values in an 8 bit encoding of output sRGB values. You may need to ramp up your display brightness to see them:

First 14 representable sRGB shades of gray in an 8 bit encoding

Notice the hollow circles under all but the first and the last colors. None of those 12 shades of gray would be representable in an 8 bit format if we were to use straightforward linear encoding. You'd actually need a range of 0 to 4095 (12 bits per component) to represent the same very dark shades of sRGB gray without using any tone response curves.

The situation at the other end of the spectrum is more difficult to visualize since the images on this website use sRGB color space, but we can at least show that the white shades in linear color space shown in the upper part of the picture get darker more slowly than corresponding white shades in sRGB at the bottom:

Last 14 representable linear (top) and sRGB (bottom) shades of gray in an 8 bit encoding

Linear encoding has the same precision of light intensity everywhere, which, given our nonlinear perception, results in brightness that increases very quickly at the dark end and darkness that decreases very slowly at the bright end.

Different color spaces use different TRCs. Some, like DCI P3, use a simple power function with a fixed exponent. Others, like ProPhoto, employ a piecewise combination of a linear segment with a power segment. The magnitude of the exponent differs between color spaces – there is no unanimous opinion on what constitutes the best encoding. Finally, some color spaces don't use any special encoding and just represent the component values in a linear fashion.

While TRCs are very useful for encoding the intensities for storage, the crucial thing to remember is that mathematical operations on light only make sense when done on linear values, because they represent the actual intensities of light. It's actually fairly easy to visualize in a gradual mix between a pure red and a pure green:

Notice how in the top halves, which use nonlinear encoding, the colors get darker in the middle, while in the bottom, linear halves the progression looks more balanced.
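The numbers make the effect obvious. A sketch using the fixed 2.0 exponent from the earlier example (not the actual sRGB curve):

```python
def mix_encoded(a, b, t=0.5):
    """Naive mix of gamma-2.0-encoded values: produces the dark band in the middle."""
    return a + (b - a) * t

def mix_linear(a, b, t=0.5):
    """Decode to linear light, mix there, re-encode: the physically correct blend."""
    lin = a ** 2.0 + (b ** 2.0 - a ** 2.0) * t
    return lin ** 0.5

# Halfway between a channel at full intensity and the same channel at zero:
print(mix_encoded(1.0, 0.0))  # 0.5 – noticeably too dark
print(mix_linear(1.0, 0.0))   # ≈ 0.707 – the correct encoded midpoint
```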

What we've discussed in this section is the first component of a typical RGB display color space. We can now say that one of the defining properties of a color space is:

A set of tone response curves for its red, green, and blue components.

The encoding is an essential, but nonetheless, technical part of a color space. The specification of the actual colors is what we usually care about the most.

Primary Differences

Let's look at a different example of two color spaces in a color picker. To make things simpler, both halves operate with linear intensities:

Notice that grayscale colors now match perfectly, however, the colors in the bottom half are clearly more subdued for the same numeric value:

I'll symbolize the red, green, and blue values from the top half with a small bar on top of the letters: R̄ḠB̄. For the values from the bottom half I'll use a bar at the bottom: R̲G̲B̲.

You may be wondering if, despite a rather different behavior, it is possible to make the colors match. In other words, knowing the R̲G̲B̲ values we'd like to express the same color using R̄ḠB̄ values.

The notion may seem contrived, but the underlying ideas are important for understanding how the same color can be described by many different triplets of values. To see the scenario in action we can fix the bottom half of the picker and let the sliders control only the top color:

After some trial and error you may get pretty close, or perhaps even achieve the exact match. Unless you're quite skilled, the task probably wasn't trivial and trying to do the same manual work for every possible color would be a daunting endeavor.

A consistent reproduction of color is crucial for realizing any design – we wouldn't want a carefully selected shade of pastel beige to look like a ripe orange when seen on some other device. We're in dire need of a better approach to making the colors look the same despite the different meanings of the RGB component values.

Primaries Matched

Reproducing arbitrary colors between different color spaces is difficult, but we can try to simplify things a little by just trying to match the pure red, green, and blue colors and seeing where it gets us. Let's start by matching the red from the bottom plate:

Making the border between the two halves disappear may take some tweaking, but eventually we can agree on values that fit perfectly. It's a step in the right direction – since we're dealing with linear values we can now express how much a single unit of R̲ will contribute to the R̄ḠB̄ components. For instance, to recreate the R̲G̲B̲ color of 0.6, 0.0, 0.0 we'd need to use:

0.6 × 0.712 = 0.427 units of R̄
0.6 × 0.100 = 0.060 units of Ḡ
0.6 × 0.024 = 0.014 units of B̄

We can put these results into equations. When a color from the R̲G̲B̲ space has no green and no blue, but some amount of red, we could use the following formulas to see the same color in the top plate:

R̄ = 0.712×R̲
Ḡ = 0.100×R̲
B̄ = 0.024×R̲

We can repeat the experiment for pure G̲:

Having decided on the perfect match, we can write down the impact of G̲ on the R̄ḠB̄ components:

R̄ = 0.221×G̲
Ḡ = 0.690×G̲
B̄ = 0.000×G̲

All that's left is to repeat the experiments for pure B̲. If you're still not tired of dragging the sliders you can do it yourself, or just let me do it:

Once again, the results can be written down as:

R̄ = 0.067×B̲
Ḡ = 0.210×B̲
B̄ = 0.976×B̲

The provided equations apply only when the R̲G̲B̲ half shows just some amount of red, or just some amount of green, or just some amount of blue. Being limited to having non-zero values in only one component won't get us far. To create a more diverse set of colors we need a way of finding out how mixes of basic components in R̲G̲B̲ affect values in R̄ḠB̄.

Mixing Things Up

Thankfully, Grassmann's laws come to our rescue. Those empirical rules tell us that if two colored lights match, then after addition of another set of matching light sources their overlap, or their sum, will also match:

The sum of two matching sets of colored lights also matches

As a result we can do the three matchings for R̲, G̲, and B̲ separately, then just combine the results into one set. For instance, to obtain the final R̄ value from R̲G̲B̲ all we need to do is sum the contribution of R̲ to R̄, the contribution of G̲ to R̄, and the contribution of B̲ to R̄. Repeating the trick for the other two components yields the following set of equations:

R̄ = 0.712×R̲ + 0.221×G̲ + 0.067×B̲
Ḡ = 0.100×R̲ + 0.690×G̲ + 0.210×B̲
B̄ = 0.024×R̲ + 0.000×G̲ + 0.976×B̲

What we just did was a matching of the primary red, green, and blue colors from one color space to another. With the equations in hand we can finally recreate the math behind the matching beige from the first example. By plugging in the original R̲G̲B̲ values of 0.9, 0.5, 0.2 we can calculate the value of the exact match:

R̄ = 0.712×0.9 + 0.221×0.5 + 0.067×0.2 = 0.765
Ḡ = 0.100×0.9 + 0.690×0.5 + 0.210×0.2 = 0.477
B̄ = 0.024×0.9 + 0.000×0.5 + 0.976×0.2 = 0.217
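That computation is a plain matrix application. A Python sketch using the coefficients from the equations above (the variable names are mine):

```python
# The article's example matrix: converts the bottom picker's linear RGB
# values into the top picker's linear RGB values.
M = [[0.712, 0.221, 0.067],
     [0.100, 0.690, 0.210],
     [0.024, 0.000, 0.976]]

def convert(rgb, m=M):
    """3x3 matrix times an RGB column vector."""
    return [sum(row[i] * rgb[i] for i in range(3)) for row in m]

print([round(c, 3) for c in convert([0.9, 0.5, 0.2])])  # → [0.765, 0.477, 0.217]
```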

Seeing the Matrix

If you look closely at the three equations governing the conversion from R̲G̲B̲ to R̄ḠB̄ you may notice the coefficients form a 3×3 grid:

0.712 0.221 0.067
0.100 0.690 0.210
0.024 0.000 0.976

It's actually a 3×3 matrix that defines the conversion from R̲G̲B̲ to R̄ḠB̄. If you're familiar with matrices and vectors, you might have already realized that the transformation we did was a plain old matrix times vector multiplication.

On its own the matrix isn't particularly interesting, but the transformation it does can be visualized in 3D. In the following interactive diagram the outer cube defines the R̄ḠB̄ space, while the inner, skewed cube (a parallelepiped) defines the R̲G̲B̲ space. You can drag the cubes around to see them from different angles and control the R̲G̲B̲ color indicator using the sliders:

As an experiment I encourage you to set the red and green components to 0 and just play with the blue slider. You should be able to see that movement along the pure blue color in R̲G̲B̲ space requires some shift in red and green in R̄ḠB̄.

Notice that the R̲G̲B̲ parallelepiped fits completely inside the R̄ḠB̄ cube, which means that one can recreate every single R̲G̲B̲ color using the R̄ḠB̄ space. The inverse, however, is not true.

There and Back Again

In the example below the background of the top half is set to a pure R̄ at maximum intensity. You may try really hard, but there is no combination of values in R̲G̲B̲ that can cause the seam between the two halves to disappear:

If we ignore the issue for a minute and treat the problem algebraically we can solve the system of 3 equations looking for pure R̄ at the output:

1.0 = 0.712×R̲ + 0.221×G̲ + 0.067×B̲
0.0 = 0.100×R̲ + 0.690×G̲ + 0.210×B̲
0.0 = 0.024×R̲ + 0.000×G̲ + 0.976×B̲

The details of the evaluation are boring, so let's just present the solution as is:

R̲ = +1.470
G̲ = −0.201
B̲ = −0.036

All three values are outside of the 0.0 to 1.0 range – a clear indication of the unreachability of that color in R̲G̲B̲ space!

Values larger than 1.0 have a very straightforward explanation: 1.0 of R̄ is beyond the range of intensity of R̲ – you could say that R̲ is simply not strong enough. Less intense values of R̄ can still be matched within the limits. For example, 0.5 of R̄ requires 0.5 × 1.470 = 0.735 of R̲.
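A tiny Cramer's-rule solver reproduces that solution; the last digits differ slightly from the values quoted above purely due to rounding.

```python
# The article's example conversion matrix (bottom-half RGB -> top-half RGB).
M = [[0.712, 0.221, 0.067],
     [0.100, 0.690, 0.210],
     [0.024, 0.000, 0.976]]

def det3(m):
    """Determinant of a 3x3 matrix."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def solve3(m, v):
    """Solve m @ x = v with Cramer's rule: swap v into one column at a time."""
    d = det3(m)
    x = []
    for col in range(3):
        mc = [row[:] for row in m]
        for r in range(3):
            mc[r][col] = v[r]
        x.append(det3(mc) / d)
    return x

# Which bottom-half values would reproduce the top half's pure red?
print([round(c, 3) for c in solve3(M, [1.0, 0.0, 0.0])])  # → [1.471, -0.202, -0.036]
```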

Negative values are slightly trickier because they are imaginary, but we can still reason about them quite easily.

Negative Values

Let's try to think about an experiment in which we're tasked with matching a light on the right side with a combination of a red, green, and blue light on the left side:

Mixing lights to obtain the color on the right

After some tuning we may realize that even with no green and no blue light the colors still don't match and the left patch of light continues to look a bit orangey indicating that it has too much green in it:

The left patch is still too green

We can't decrease the green light on the left side anymore, so we have to become more creative. We could say that the left side is too green, or we could say that the right side is not green enough! Since all we care about is making the colors match we could try adding some of the green light to the other side:

Adding green light to the target side

Now the right side is too green, but if we adjust the added amount we can eventually make the colors look the same:

A match with just the right amount of green

Let's try to write down what we've achieved in an equation. We have a maximum amount of red and no green or blue on the left side, while on the right side we have the target color with some green light added to it:

1.0×R + 0.0×G + 0.0×B = (target) + 0.2×G

This feels a little mischievous – we wanted to match the pure target color without any additions. Let's try to rearrange the equation by moving the added green light from the right to the left side:

1.0×R − 0.2×G + 0.0×B = (target)

That equivalent equation tells us that getting the exact match with the starting target color requires using a negative amount of the green light on the left side. Naturally, this is something we can't do in a physical world. Mathematically, however, a match of the very saturated red from the right side really requires removing some green light from the left side.

If you're still not convinced consider what would happen if we somehow were able to create −0.2×G on the left side. I can't show you a picture of that imaginary situation, but I can show you what would happen if we then added 0.2×G to both sides:

This is exactly the same image as before since both scenarios are equivalent. Recall from Grassmann's laws that adding the same amount of light to both sides maintains the color match, so indeed the imaginary −0.2×G on the left side must have made it look like the original unmodified saturated red from the right side.

This may all be somewhat unsatisfying; however, when you step away from the physical restrictions and think about lights as just numbers, it all, quite literally, adds up.

The Matrix Seeing

If we go back to our quest for obtaining R̲G̲B̲ values from R̄ḠB̄ colors we can repeat the "system of equations" trick for the other two components and combine the results using Grassmann's laws to end up with the full set of equations:

R̲ = +1.470×R̄ − 0.470×Ḡ + 0.000×B̄
G̲ = −0.201×R̄ + 1.513×Ḡ − 0.311×B̄
B̲ = −0.036×R̄ + 0.011×Ḡ + 1.025×B̄

Once again, the set of coefficients can be presented in a 3×3 matrix:

+1.470 −0.470 +0.000
−0.201 +1.513 −0.311
−0.036 +0.011 +1.025

If you've solved systems of linear equations before, you may have realized that this is just the inverse matrix of the original R̲G̲B̲ to R̄ḠB̄ transformation. This is a very useful property. When establishing the conversion from one linear color space to another we just need to figure out the coefficients of the matrix in one direction – the matrix in the other direction is just its inverse.

One more time we can visualize the transformation the matrix performs. This time R̲G̲B̲ is the perfect cube, while R̄ḠB̄ is the outer skewed cube:

You can easily see how much bigger the R̄ḠB̄ space is, and indeed it lets us express some colors that R̲G̲B̲ can't.

Breaking the Boundaries

So far whenever a numeric value in a color space was outside of the 0.0 to 1.0 range we'd just clip it to the representable limits. Let's try to see what happens when we remove that restriction. We can modify the sliders to allow a −1.0 to 2.0 range and try to match the pure R̄ again:

With that approach we can actually represent the pure R̄ using the R̲G̲B̲ color space. Since the numbers don't care, the entire thing works and is often called unbounded or extended range.

To actually see R̄ḠB̄ colors represented using extended range R̲G̲B̲ values the display has to be capable of showing R̄ḠB̄ colors in the first place, otherwise they would get physically clamped by the display itself. To put it differently, the transformation of values from a color space with an extended range to the native color space of the display should end up with values in the standard range.

While unbounded values are very flexible, there are two important caveats one should consider when dealing with them. First of all, since the range of values is unlimited, it is possible to create colors that truly have no physical meaning, e.g. −1.0, −1.0, −1.0.

Additionally, storing the values in a type with a limited range may not be possible. If we decide to use 8 bits per component and encode the 0.0 to 1.0 range into 0 to 255 range then we won't be able to represent the values outside of normalized range since they simply won't fit.
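A sketch of that limitation: quantizing to 8 bits inevitably clamps extended-range values (the helper name is mine, not any particular API):

```python
def encode_8bit(value):
    """Quantize a normalized component to 8 bits, clamping anything outside 0.0-1.0."""
    return max(0, min(255, round(value * 255)))

print(encode_8bit(1.470))   # 255 – the extended-range red from earlier doesn't fit
print(encode_8bit(-0.201))  # 0
```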

One Space to Rule Them All

All the color transformations we performed are somewhat contrived – the two color spaces we analyzed are defined in relation to each other and have no immutable attachment to the physical world. Their reds, greens, and blues are whatever we ended up seeing on the screen. We have no way of knowing how the colors would look on another device.

If we had a master color space that was derived from physical quantities, we could solve the problem by finding a transformation from any color space to that common connection space. Luckily, in 1931 that very color space was specified by the CIE, resulting in what is known as the CIE 1931 XYZ color space. Before we discuss how the space was defined we need to take a quick look at physics and the human visual system.

And There Was Light

To ground the discussion of color perception in the real world we have to establish it in terms of light – the part of electromagnetic radiation that is visible to the human eye. The visible spectrum can only be approximated on a typical display, but its colors at specific wavelengths look roughly like this:

Visible spectrum with wavelengths shown in nanometers

Colors corresponding to a single wavelength are called spectral colors. For example, a light with wavelength of 570 nm would produce a pure spectral yellow. Notice that boundaries between the colors are soft, so there is no single "spectral red", but a wavelength is all we need to describe a spectral color accurately.

Human eyes are not simple wavelength detectors and the perception of the same color can be created in many different ways. A yellow color can be created using light with wavelength around 570 nm, or as a mixture of a green and a red light.

Since a human retina has three different types of photoreceptive cones, any color sensation can be matched using three fixed colors with varying intensities. It doesn't mean that all colors can be recreated using just additive mixes of three colors. As we've discussed, sometimes one or two of those colors have negative weights and have to be added to the target color instead.

Algebraically, however, we don't need more than three primary colors to represent everything humans can perceive (with some very rare exceptions). This property of the human visual system is called trichromacy and is exploited by various display technologies to present a very broad spectrum of colors using just three RGB primaries.

The Color Matching Experiments

In the late 1920s David Wright and John Guild independently conducted experiments with human observers in which the test subjects tried to match the spectral colors in 10 nm increments with a combination of red, green, and blue light sources. The experiments were fairly similar to what we've been doing with the sliders – trying to match the target color as a combination of some other three colors.

The results of the experiments were standardized as the r̄ḡb̄ color matching functions that can be presented on a graph:

CIE r̄ḡb̄ color matching functions

Notice the presence of negative values. Not all spectral colors could be achieved by a combination of selected RGB values and yet again a negative value means that the color was added to the target value.

The CIE standardized the reference R of the experiments as monochromatic light with wavelength of 700.0 nm, the reference G as 546.1 nm, and the reference B as 435.8 nm, with specific ratio of relative intensities between them. This is a critical step in our discussion of color – we're finally grounded to some physically measurable properties.

Let's look at the yellow wavelength of 570 nm. Reading from the color matching plot we can tell that the color match required 0.16768 units of R, 0.17087 units of G and −0.00135 units of B. A set of three RGB coordinates just begs for a three dimensional presentation. If we read the color matching value for every wavelength we obtain the following plot:

You may wonder why the spectrum curve doesn't go through the pure red, green, or blue endpoints of the CIE RGB cube, but it's simply the result of normalization and scaling applied to the color matching functions by the committee. Within reasonable limits a light source can be constructed to be as powerful as needed, so what constitutes a "1.0" is always defined somewhat arbitrarily.

XYZ Space

CIE RGB color space is rooted to concrete physical properties of monochromatic lights and we could use it to define any color sensation. However, the committee strived to create a derived space that would have a few useful properties, two of which are worth noting:

  • No negative coordinates of spectral colors
  • Separation of chromaticity (hue and colorfulness) from luminance (visual perception of brightness)

After some deliberation the following set of equations was created:

X = 2.769×R + 1.752×G + 1.130×B
Y = 1.000×R + 4.591×G + 0.061×B
Z = 0.000×R + 0.057×G + 5.594×B

The factors may seem arbitrary, and in some sense they are, but this is simply yet another mapping of values from one space to another, similar to what we did with RGB to RGB transformation. The Y component was chosen as the luminance component, while X and Z define the chromaticity.
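The transformation is, once again, a plain mapping of one triplet to another. In code, using the coefficients quoted above:

```python
# CIE RGB -> XYZ with the committee's coefficients.
def cie_rgb_to_xyz(r, g, b):
    x = 2.769 * r + 1.752 * g + 1.130 * b
    y = 1.000 * r + 4.591 * g + 0.061 * b  # Y carries the luminance
    z = 0.000 * r + 0.057 * g + 5.594 * b
    return x, y, z

# The reference red primary lands at X=2.769, Y=1.0, Z=0.0:
print(cie_rgb_to_xyz(1.0, 0.0, 0.0))
```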

We can visualize the CIE XYZ space inside the CIE RGB space:

You can see how the XYZ space is designed around the spectrum curve to make sure it fits in the positive octant. Its smaller size is not really a concern; the scaling makes things more convenient to work with by making sure the perceptually brightest wavelength of 555 nm has a Y value of 1. As a final step we can just show the XYZ space and the spectral colors on their own:

Since the CIE XYZ space is derived from CIE RGB it is also grounded in measurable physical quantities. The XYZ color space is the base color space used for all conversions of matrix-based RGB display color spaces. The mapping between any two color spaces is done through a common CIE XYZ intermediate. This simplifies the color transformations since we don't need to know how to convert colors between any two arbitrary color spaces, we just need to be able to convert both of them to the XYZ space.

xy Chromaticity Diagram

Three dimensional diagrams are fun to play with, but they're often not particularly practical – a 2D plot is often easier to work with and reason about. Since the Y component of the XYZ space is devoid of any colorfulness and hue, we're left with only two components that affect the chromaticity of a color. We can perform three operations intended to reduce the dimensionality of the space:

x = X / (X + Y + Z)
y = Y / (X + Y + Z)
z = Z / (X + Y + Z)

The distinction between a lower and an upper case is important here – X is not the same as x. These seemingly arbitrary equations have a simple visual explanation – it's a projection onto a triangle spanned between the XYZ coordinates of (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), and (0.0, 0.0, 1.0):

Notice that x, y, and z add up to 1, so we can drop the last component since it's redundant – we can always recreate it by subtracting x and y from 1. Rejection of z is equivalent to a flat projection onto xy plane:
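In code the projection and the recovery of the dropped component are one-liners. A minimal sketch, using the XYZ coordinates of the D65 white point as the example input:

```python
def xyz_to_xy(X, Y, Z):
    """Project XYZ onto the xy chromaticity plane."""
    s = X + Y + Z
    return X / s, Y / s

# The z component is redundant: it can always be recreated as 1 - x - y
x, y = xyz_to_xy(0.9505, 1.0, 1.0891)  # XYZ of the D65 white point
z = 1.0 - x - y
```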

If we repeat that step for every combination of spectral colors we can finally present the 2D plot known as CIE xy chromaticity diagram in its full glory. You may have seen this horseshoe shape before:

CIE xy chromaticity diagram

No RGB display technology is capable of presenting all the colors that humans can see, so many of the ones shown in the picture are actually the result of clamping to the representable range of the sRGB color space.

The colors on the inside of the plot are just some combinations of the spectral colors and the colors outside of the plot don't exist. Notice the straight diagonal line connecting the end points of the red and blue area. While every point on the outline of the horseshoe shape has a corresponding spectral wavelength, the colors on that line of purples do not – there is no wavelength of light that looks like magenta. The purples are simply how the human brain interprets the mixes of red and blue light and ultimately they are no different than any other shade. Perception of every color happens in our heads.

It's important to mention that the xy chromaticity diagram is not perceptually uniform. In some areas of the plot one has to move relatively far away from a chosen color to notice the difference in chromaticity, while in some other areas the distance to change is much smaller. Over time CIE developed more uniform chromaticity diagrams, but since the xy diagram is easily obtained from the XYZ space it continues to be used in discussion of RGB color spaces.

Gamut

The chromaticity diagram is useful in visualizing the gamut of a color space – the extent of colors that a color space can represent. It's necessary to note that a gamut is a three dimensional construct, so a 2D projection onto an image, somewhat counterintuitively, does not present a full picture. It's nonetheless a useful tool employed in comparison of color spaces.

We can finally present the chromaticities of the primaries of both RGB color spaces in a single graph:

I'll discuss the meaning of the little cross in the middle soon, but the important fact is that each triangle depicts all representable chromaticities of a color space. Notice how one RGB triangle is smaller than the other, showing us yet again that the larger space can represent more colors.

This diagram summarizes a long journey we took to define the second component that defines every RGB color space:

The location of its red, green, and blue primaries on the CIE xy chromaticity diagram.

In the simulation below you can drag the slider to see how the extent of the chromaticity triangle corresponds to the representable colors. The base values of an image from the sRGB color space are converted to a space with a reduced gamut, clamped to the 0.0 to 1.0 range, then finally converted back to sRGB for display. You can click/tap the image to change it:

Interactive gamut-reduction demo (requires WebGL)

In some cases the color clamping is pretty severe, but for the image with snowy mountains the change is minimal. An almost black and white picture has little chromaticity and therefore a reduced gamut has almost no effect on it.

White Point

The last aspect of color spaces we will discuss is related to the little cross in the middle of the gamut visualization. Similarly to how different color spaces can assign different colors to their pure red defined as (1.0, 0.0, 0.0), the white point of a color space defines its color of white – the represented color when all three components are ones: (1.0, 1.0, 1.0).

In the color picker below both halves have the same RGB primaries, but different white points. See what happens when you drag all the sliders to the right:

The color space from the bottom half defines its white as a slightly different color than the top color space.

A 3D visualization shows two interesting properties. Firstly, the axes of the inner and the outer cubes are collinear, they're just scaled differently. Secondly, the "far" endpoints of the cubes no longer overlap since their white points are different:

The white point is the last piece of the color space puzzle. We can now say that a color space is also defined by:

The location of its white point on the CIE xy chromaticity diagram.

With xy coordinates of the red, green, and blue primaries, and the xy coordinates of the white point one can evaluate the RGB to XYZ transformation for a given color space. The details of this computation are not critical to our discussion and you can read about them in many places online, e.g. on Bruce Lindbloom's website.
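A sketch of that computation in numpy, following the method described on Lindbloom's site: express each primary's XYZ coordinates from its xy chromaticity, then scale the columns so that RGB white lands on the white point. The example input uses the sRGB primaries and the D65 white point:

```python
import numpy as np

def rgb_to_xyz_matrix(xy_r, xy_g, xy_b, xy_w):
    """Build the RGB -> XYZ matrix from the xy chromaticities of the
    three primaries and the white point."""
    def xyz(xy):
        # Lift an xy chromaticity to XYZ with Y normalized to 1
        x, y = xy
        return np.array([x / y, 1.0, (1.0 - x - y) / y])

    # Columns are the (unscaled) XYZ coordinates of the primaries
    m = np.column_stack([xyz(xy_r), xyz(xy_g), xyz(xy_b)])
    # Scale each column so that RGB (1, 1, 1) maps to the white point
    s = np.linalg.solve(m, xyz(xy_w))
    return m * s

# sRGB primaries and D65 white point
M = rgb_to_xyz_matrix((0.6400, 0.3300), (0.3000, 0.6000),
                      (0.1500, 0.0600), (0.3127, 0.3290))
```

The resulting matrix agrees with the commonly published sRGB-to-XYZ matrix to the precision of the rounded inputs.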

While necessary for correct calculation of the RGB to XYZ transformation, in practice it may be difficult to notice that two color spaces have different white points. Most color conversion operations will undergo chromatic adaptation, and the color of (1.0, 1.0, 1.0) in the source color space will be mapped to (1.0, 1.0, 1.0) in the destination color space.

sRGB Color Space

We'll finish our discussion with a showcase of the sRGB color space, described by its authors as "A Standard Default Color Space for the Internet". If you ever were specifying just "RGB" colors it's extremely likely that the components were assumed to come from the sRGB color space. In fact, it's the color space used in the very first color picker in this article.

While the sRGB specification isn't free, Wikipedia and International Color Consortium provide all the information we need to describe it.

Tone Response Curve

Let's look at the plot of the tone response curve of the sRGB color space. The light intensity value is on the vertical axis and it's symbolized with a sun icon. The encoded value is on the horizontal axis and it's symbolized with a binary code:

Tone response curve of sRGB

While the curve may look like a single power function, in the range of 0 to 0.04045 it actually is a linear segment:

(intensity value) = (encoded value)/12.92

When the encoded value is larger than 0.04045, the intensity value is indeed defined by a slightly nudged power function with an exponent of 2.4:

(intensity value) = (((encoded value) + 0.055)/1.055)^2.4

The transition between the two parts is continuous and I marked its location on the plot with a little bump.
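The two-piece definition translates directly to code; a minimal sketch of the decoding direction:

```python
def srgb_to_linear(encoded):
    """Decode an sRGB-encoded value (0..1) to linear light intensity."""
    if encoded <= 0.04045:
        # Linear segment near black
        return encoded / 12.92
    # Nudged power function with exponent 2.4
    return ((encoded + 0.055) / 1.055) ** 2.4
```

At the 0.04045 breakpoint both branches agree to about six decimal places, which is exactly the continuity marked by the little bump on the plot.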

The curve was designed to roughly correspond to the response curve of CRT displays thus to some extent minimizing the need for any color management. That convenience had the unfortunate consequence of creating a widespread confusion by binding the two separate ideas into one.

In general case the purpose of tone response curves is to make the best use of available space when storing the color information in formats with limited bit depth. The non-linear response of an electron gun in a CRT display is a separate and only loosely related concept.

Chromaticity and White Point Coordinates

Chromaticity coordinates of red, green, blue, and white are just 4 pairs of x and y values and we can put them in a table:

       Red     Green   Blue    White
x      0.6400  0.3000  0.1500  0.3127
y      0.3300  0.6000  0.0600  0.3290

You'll probably agree that this form is not particularly exciting, things look much nicer on the xy chromaticity diagram:

Chromaticity and white point coordinates of sRGB

On a technical note it's worth mentioning that the primaries share the same location as Rec. 709 – the standard for HDTV. Additionally, the white point location corresponds to Standard Illuminant D65 – a representation of the average daylight.

While sRGB isn't capable of representing some of the more vivid shades and clearly doesn't contain all the colors humans can see, it has served its purpose as a standard color space of 8-bit graphics remarkably well.

Further Reading

The web is filled with interesting gems related to color. For a very good dissection of many of the major components on a pathway between hex colors and what we end up perceiving with our eyes I suggest reading Jamie Wong's "Color: From Hexcodes to Eyeballs". I especially like his very approachable explanation of spectral distributions.

Bruce MacEvoy's set of pages on color vision is an amazing resource about various aspects of color. I highly recommend the first post in the series in which the author describes the fascinating details of biophysics of the human visual system.

Elle Stone authored an extensive collection of great articles on color spaces, calibration, and image editing. For a different take on what we've touched upon in this article you should see "Completely Painless Programmer's Guide to XYZ, RGB, ICC, xyY, and TRCs".

If you think you'd enjoy a deep dive into the optimization process of creating a minimal sRGB ICC profile, Clinton Ingram has got you covered in his four part series. His pragmatic approach to the problem provides a very entertaining read.

Finally, as a resource in a more traditional form, I recommend the book "Color Imaging: Fundamentals and Applications". It covers a vast selection of topics related to color and its reproduction, all in a very readable form.

Final Words

Many of the concepts we've discussed were developed decades ago, but despite their age even the old ideas continue to be incredibly useful, and the science behind them is expanded to this day.

Color is one of those areas with seemingly infinite depth of complexity. I hopefully showed you that some of that complexity isn't actually as scary as it looks – you just need to shine a light on it.




No comments posted yet: Link to HN comments page




Historical Discussions: Show HN: Interactive GLSL fragment shaders editor made with Qt (February 13, 2019: 9 points)

(9) Show HN: Interactive GLSL fragment shaders editor made with Qt

9 points 3 days ago by vmakeev in 10000th position

github.com | Estimated reading time – 6 minutes | comments | anchor

Repository contents: ShaderWorkshop.pro, channelsettings.{cpp,h,ui}, codeeditor.{cpp,h}, editorpage.{cpp,h,ui}, effect.{cpp,h}, glslhighlighter.{cpp,h}, linenumberarea.h, main.cpp, renderer.{cpp,h}, shaderworkshop.{cpp,h,ui}, LICENSE, README.md



No comments posted yet: Link to HN comments page




Historical Discussions: Show HN: Use Tyk as kubernetes ingress (February 13, 2019: 9 points)

(9) Show HN: Use Tyk as kubernetes ingress

9 points 3 days ago by LeonidBugaev in 3714th position

github.com | Estimated reading time – 5 minutes | comments | anchor

Tyk Pro for Kubernetes Helm Chart

This chart provides a full Tyk Installation (API Management Dashboard and API Gateways with Analytics) for Kubernetes with an ingress controller and service-mesh injector.

This means that a single Tyk installation can be used for both 'north-south' inbound traffic from the internet to protect and promote your services, as well as internal east-west traffic, enabling you to secure your services in any way you see fit, including mutual TLS.

It also means that you can bring the full feature set of the Tyk API Gateway to your internal and external services from a single control plane.

Pre-requisites

  • Redis installed in the cluster or reachable from K8s
  • MongoDB installed in the cluster, or reachable from inside K8s

To get started quickly, you can use these rather excellent Redis and MongoDB charts to get going:

helm repo add tc http://trusted-charts.stackpoint.io
helm repo update
helm install tc/redis --name redis --namespace=redis --set usePassword=false
helm install tc/mongodb-replicaset --name mongodb --namespace=mongodb 

Important Note regarding TLS: This helm chart assumes TLS is being used by default, so the gateways will listen on port 443 and load up a dummy certificate. You can set your own default certificate by replacing the files in the certs/ folder.

To install, first modify the values.yaml file to add redis and mongo details, and add your license:

helm install -f ./values.yaml ./tyk-pro

Follow the instructions in the Notes that follow the installation to install the controller.

Using the Ingress Controller

To enable the ingress controller, simply add the ingress class definition to your ingress annotations:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
spec:
  rules:
  - host: cafe.example.com

By default Tyk will create an Open API (no security enabled), however you can set any property in the API Definition using the following annotations (remember that all annotations are treated as strings):

  • bool.service.tyk.io/{path}: value: Set a value to 'true' or 'false'
  • string.service.tyk.io/{path}: value: Set a value that is a string literal, e.g. 'name'
  • num.service.tyk.io/{path}: value: Set a value that is a number (assumes an int)
  • object.service.tyk.io/{path}: value: Set a whole json object

Here's an example:

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: cafe-ingress
  annotations:
    kubernetes.io/ingress.class: tyk
    bool.service.tyk.io/use-keyless: "false"
    string.service.tyk.io/proxy.target-url: "http://foo.bar/bazington"
    num.service.tyk.io/cache_options.cache-timeout: "20"
    object.service.tyk.io/version_data.versions.Default.extended-paths: '{"hard_timeouts":[{"path":"{all}","method":"GET","timeout":60,"fromDashboard":true}]}'
spec:
  rules:
  - host: cafe.example.com
    http:
      paths:
      - path: /tea
        backend:
          serviceName: tea-svc
          servicePort: 80
      - path: /coffee
        backend:
          serviceName: coffee-svc
          servicePort: 80

You can also directly modify the service in your Tyk Dashboard, though if the ingress is recreated, the changes may not be retained.

When an ingress is removed, the services will be removed from the API Gateway as well.

TLS

Tyk supports the TLS section for the ingress controller. If you set a TLS entry with a secret, the controller will retrieve the certificate from K8s and load it into the encrypted certificate store in Tyk and dynamically load it into the ingress. You can manage the certificate from within Tyk Dashboard.

Using the injector

The service mesh injector will create two services:

  1. An inbound service for the container - this is only loaded into the sidecar for the service and handles all inbound requests for the service from the mesh
  2. The mesh endpoint, this is the route that is circulated to all sidecars as the route to the service in (1)

Setting up a service to use the injector is very simple: just add the inject annotation to your Deployment:

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: sleep
spec:
  replicas: 1
  template:
    metadata:
      annotations:
        injector.tyk.io/inject: 'true'
        injector.tyk.io/route: '/sleep'
      labels:
        app: sleep
    spec:
      containers:
      - name: sleep
        image: tutum/curl
        command: ['/bin/sleep','infinity']
        imagePullPolicy: IfNotPresent

By default, the injector will create the service route on /{pod.Name}, however you can specify the route to advertise using the injector.tyk.io/route annotation; this will set the proxy.listen_path in the Tyk API Definition.

By default the service is open (as with the ingress), however you can use the same service annotations as the ingress to set options on the created service. Note: The changes only affect the inbound service, not the mesh service. This means rate limits, quotas, and security policies are applied in the inbound side-car as opposed to the sender side-car.




No comments posted yet: Link to HN comments page




Historical Discussions: Show HN: A linter for MySQL (February 12, 2019: 9 points)

(9) Show HN: A linter for MySQL

9 points 4 days ago by professorlamp in 10000th position

github.com | Estimated reading time – 4 minutes | comments | anchor

sql-lint

sql-lint is a linter for MySQL. It brings back any error from the MySQL server, as well as custom errors written for sql-lint.

sql-lint will show errors about the following things:

  • DELETE statements missing WHERE clauses
  • DROP statements with invalid things to be dropped
  • Odd code points in queries
  • Any MySQL error*

*sql-lint brings back errors from the MySQL server too. It will catch any MySQL error, including but not limited to:

  • Unknown columns on a table
  • A non-existent database
  • A non-existent table
  • Syntax errors

Running / Installation

There are binaries on the releases page for Mac, Linux and Windows.

Linting a query

sql-lint works on queries with the --query flag.

> sql-lint --query='DELETE FROM test;'
query:1 Delete missing WHERE, intentional?

Linting a file

sql-lint works on files with the --file flag.

> sql-lint --file='test/test.sql' 
test/test.sql:13 Bad code point
test/test.sql:40 Delete missing WHERE, intentional?
test/test.sql:6 Database 'test' does not exist.
test/test.sql:10 Database 'dykes_reservations' does not exist.
test/test.sql:11 Database 'dykes_res' does not exist.
test/test.sql:13 [ER_PARSE_ERROR] You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near 'FRO�' at line 1
test/test.sql:16 [ER_NO_DB_ERROR] No database selected
test/test.sql:29 [ER_NO_DB_ERROR] No database selected
test/test.sql:54 [ER_NO_SUCH_TABLE] Table 'symfony.dont_exist' doesn't exist

Linting stdin

sql-lint works with stdin if you don't supply any arguments.

> echo 'DELETE FROM test;' | sql-lint
stdin:1 Delete missing WHERE, intentional?

Editor integration

There is a patch here which will allow sql-lint to work with ALE for Vim and Neovim.

Configuration

Configuring sql-lint with connection details allows errors from the server to come through. You'll probably want these as they supply even more information about what's going wrong.

You can connect in two ways:

Via CLI

You can connect via the command line if you wish with the respective flags.

sql-lint --host='localhost' --user='root' --password='hunter2' --query='SELECT 1;'

Via config.json

A configuration file for sql-lint can reside in ~/.config/sql-lint/config.json

You should put the following in there for more intelligent errors to come through

{
    "host": "localhost",
    "user": "root",
    "password": "hunter2"
}

Contributing

To test

./build/build.sh # This will run more than just the tests (recommended)

Working with the docker container

First, make sure port 3306 is available locally. (You can do this by inspecting the output of sudo lsof -i :3306 and docker ps, and killing anything using that port.) Now do:

docker-compose up --build -d --force-recreate

At this point the container(s) will be up and ready to use. You can login with the following credentials: mysql -u root -ppassword.

Here's an example of a query:

docker exec sqllint_mysql_1 mysql -u root -ppassword -e 'SHOW DATABASES'

Connecting sql-lint to the docker container

Change your config file in ~/.config/sql-lint/config.json to have the following values:

{
    "host": "localhost",
    "user": "root",
    "password": "password"
}

Troubleshooting

I'm not seeing any warnings

Run sql-lint -f your-file and it will display the exception. Add the -v flag for more information.

It's telling me there's a syntax error when there's clearly not.

Chances are you're using an older version of MySQL. EXPLAINing on INSERT|UPDATE|DELETE was added in MySQL 5.6.




No comments posted yet: Link to HN comments page




Historical Discussions: Show HN: Collection of Go examples for beginner back end developers (February 09, 2019: 9 points)

(9) Show HN: Collection of Go examples for beginner back end developers

9 points 7 days ago by cephaslr in 10000th position

github.com | Estimated reading time – 3 minutes | comments | anchor

Repository contents: db-cache, gdrive, ginswagger, hellocassandra, redis, rest-example, rpc, sqltesting, timing, webdata