by Kevin Simler
If you've spent any time thinking about complex systems, you surely understand the importance of networks.
Networks rule our world. From the chemical reaction pathways inside a cell, to the web of relationships in an ecosystem, to the trade and political networks that shape the course of history.
Or consider this very post you're reading. You probably found it on a social network, downloaded it from a computer network, and are currently deciphering it with your neural network.
But as much as I've thought about networks over the years, I didn't appreciate (until very recently) the importance of simple diffusion.
This is our topic for today: the way things move and spread, somewhat chaotically, across a network. Some examples to whet the appetite:
- Infectious diseases jumping from host to host within a population
- Memes spreading across a follower graph on social media
- A wildfire breaking out across a landscape
- Ideas and practices diffusing through a culture
- Neutrons cascading through a hunk of enriched uranium
A quick note about form.
Unlike all my previous work, this essay is interactive. There will be sliders to pull, buttons to push, and things that dance around on the screen. I'm pretty excited about this, and I hope you are too.
So let's get to it. Our first order of business is to develop a visual vocabulary for diffusion across networks.
A simple model
I'm sure you all know the basics of a network, i.e., nodes + edges.
To study diffusion, the only thing we need to add is labeling certain nodes as active. Or, as the epidemiologists like to say, infected:
This activation or infection is what will be diffusing across the network. It spreads from node to node according to rules we'll develop below.
Now, real-world networks are typically far bigger than this simple 7-node network. They're also far messier. But in order to simplify — we're building a toy model here — we're going to look at grid or lattice networks throughout this post.
(What a grid lacks in realism, it makes up for in being easy to draw ;)
Except where otherwise specified, the nodes in our grid will have 4 neighbors, like so:
And we should imagine that these grids extend out infinitely in all directions. In other words, we're not interested in behavior that happens only at the edges of the network, or as a result of small populations.
Given that grid networks are so regular, we can simplify by drawing them as pixel grids. These two images represent the same network, for example:
Alright, let's get interactive.
The network below has playback controls at the bottom. Press the ▷ button to watch the activation spread, or step through one moment at a time:
In this simulation, an active node always transmits its infection to its (uninfected) neighbors.
But this is dull. Far more interesting things happen when transmission is probabilistic.
SIR vs. SIS
In the simulation below, you can vary the transmission rate using the slider at the bottom:
This is what's called an SIR model. The initials stand for the three different states a node can be in: Susceptible, Infected, and Removed.
Here's how it works:
- Nodes start out as Susceptible, except for a few nodes (like the center node above) which start as Infected.
- At each time step, Infected nodes get a chance to pass the infection along to each of their Susceptible neighbors, with a probability equal to the transmission rate.
- Infected nodes then transition to the Removed state, indicating that they're no longer capable of infecting others or being infected again themselves.

In a disease context, Removed may mean that the person has died or that they've developed an immunity to the pathogen. Regardless, we say that they're 'removed' from the simulation because nothing ever happens to them again.
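To make the rules above concrete, here's a minimal sketch in Python of one SIR time step on a grid. This is purely illustrative — the grid representation, state letters, and function name are my own, not the code behind the widgets:

```python
import random

def sir_step(grid, p_transmit, rng=random):
    """One time step of an SIR model on a 2D grid.

    grid[r][c] is 'S' (Susceptible), 'I' (Infected), or 'R' (Removed).
    Returns a new grid; the input grid is left untouched.
    """
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 'I':
                continue
            # Each Infected node tries to infect each Susceptible neighbor,
            # with probability equal to the transmission rate.
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 'S':
                    if rng.random() < p_transmit:
                        new[nr][nc] = 'I'
            # Infected nodes then transition to Removed.
            new[r][c] = 'R'
    return new
```

Running this repeatedly from a single seed node reproduces the qualitative behavior you see in the widget: a wave of infection expanding outward, leaving Removed nodes in its wake.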
Now, depending on what we're trying to simulate, we may need something other than an SIR model.
If we're simulating the spread of measles or an outbreak of wildfire, SIR is perfect. But suppose we're simulating the adoption of a new cultural practice, e.g., meditation. At first a node (person) is Susceptible, because they've never done it before. Then, if they start meditating (perhaps after hearing about it from a friend), we would model them as Infected. But if they stop practicing, they don't die or drop out of the simulation, because they could easily pick up the habit again in the future. So they transition back to the Susceptible state.
This is an SIS simulation — which (you guessed it) stands for Susceptible–Infected–Susceptible. Here's what it looks like on a grid:
As you can see, this plays out very differently from the SIR model.
Because the nodes never get used up (i.e., never become Removed), even a very small and finite grid can sustain an SIS infection for a long time. The infection simply hops around from node to node and eventually back again.
Despite their differences, SIR and SIS turn out to be surprisingly interchangeable for our purposes today (namely: developing intuition). So we're going to anchor on SIS for the remainder of this essay — mostly because it dances around longer and is therefore more fun.
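In code, the only difference from SIR is what an Infected node becomes after its turn. Here's a sketch of one SIS step, this time written for an arbitrary network as an adjacency list (again an illustration with made-up names, not the widget's implementation):

```python
import random

def sis_step(states, neighbors, p_transmit, rng=random):
    """One SIS time step on an arbitrary network.

    states: list of 'S' or 'I' per node; neighbors: adjacency list.
    Identical to SIR, except infected nodes recover back to 'S',
    so they can be reinfected later.
    """
    new = states[:]
    for node, state in enumerate(states):
        if state != 'I':
            continue
        for nb in neighbors[node]:
            if states[nb] == 'S' and rng.random() < p_transmit:
                new[nb] = 'I'
        new[node] = 'S'  # back to Susceptible, not Removed
    return new
```

That one changed line — recovery to `'S'` instead of `'R'` — is what lets the infection revisit old territory and dance around indefinitely.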
Now, in playing with the simulations above — both SIR and SIS — you may have noticed something about the longevity of the infection.
At very low transmission rates, like 10 percent, the infection tends to die out. Whereas at higher values, like 50 percent, the infection remains alive and takes over most of the network. If the network were infinite, we could imagine it continuing on and spreading outward forever.
This limitless diffusion has many names: 'going viral' or 'going nuclear' or (per the title of this post) going critical.
It turns out that there's a precise tipping point that separates subcritical networks (those fated for extinction) from supercritical networks (those that are capable of neverending growth). This tipping point is called the critical threshold, and it's a pretty general feature of diffusion processes on regular networks.
The exact value of the critical threshold differs between networks. What's shared is the existence of such a value.
Here's an SIS network to play around with. Can you find its critical threshold?
In my tests, the critical value seems to be between 22 and 23 percent. At 22 percent (and below), the infection eventually dies out. At 23 percent (and above), the initial infection occasionally dies out, but on most runs it manages to spread far enough that it survives forever.
(By the way, there's an academic cottage industry devoted to finding these critical thresholds for different network topologies. For a taste, I recommend a quick scroll down the Wikipedia page for percolation threshold.)
In general, here's how it works: Below the critical threshold, any finite infection on the network is guaranteed (with probability 1) to eventually go extinct. But above the critical threshold, it's possible (p > 0) for the infection to carry on forever, and in doing so to spread out arbitrarily far from the initial site.
Note, however, that an infection on a supercritical network isn't guaranteed to go on forever. In fact, the infection will frequently fizzle out, especially in the very early steps of the simulation.
To see this, suppose we start with a single Infected node and its 4 neighbors. On the first step of the simulation, the infection has 5 independent chances to spread (including the chance to 'spread' to itself on the next time step):
Now suppose the transmission rate is 50 percent. In that case, the first step of the simulation amounts to doing 5 coin flips. And if they all come up tails, the infection will be extinguished. This happens about 3 percent of the time — and that's just on the first step. An infection that survives the first step will then have some (typically smaller) probability of going extinct on the second step, and some (even smaller) probability of dying on the third step, etc.
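To make the arithmetic explicit: the infection fizzles on step one exactly when all 5 independent transmission attempts fail.

```python
# A single Infected node with 4 neighbors has 5 independent chances to
# spread (4 neighbors, plus the chance to 're-infect' its own site).
p = 0.5                   # transmission rate (the 50 percent case above)
p_fizzle = (1 - p) ** 5   # all 5 chances miss
print(p_fizzle)           # 0.03125, i.e. about 3 percent
```

At higher transmission rates the first-step fizzle probability shrinks fast — at p = 0.99 it's (0.01)^5, or one in ten billion — but it never reaches zero.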
So even when the network is supercritical — even if the transmission rate is 99 percent — there's a chance that the infection will fizzle out.
But the important thing is that it won't always fizzle out. When you add up the fizzle-probabilities for all the steps out to infinity, the result is less than 1. In other words, with nonzero probability, the infection carries on forever. This is what it means for a network to be supercritical.
SISa: spontaneous activation
Up to this point, all our simulations have started with a little nugget of preinfected nodes at the center.
But what if we decide to start with nothing? Then we would need to model spontaneous activation — the process by which a Susceptible node randomly becomes Infected (without catching the infection from one of its neighbors).

This has been dubbed the SISa model. The 'a' stands for 'automatic.'
Below you can play with an SISa simulation. There's a new parameter, the spontaneous activation rate, which changes how often a spontaneous infection will occur. (The transmission rate parameter, which we saw earlier, is also present.)
What does it take to get the infection to spread across the whole network?
As you may have noticed, increasing the rate of spontaneous activation doesn't change whether or not the infection takes over the network. Instead, in this simulation, it's only the transmission rate that determines whether the network is sub- or supercritical. And when the network is subcritical (transmission rate ≤ 22%), no infection can catch on and spread, no matter how frequently spontaneous activations occur.
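In code, the SISa update is just the SIS step plus one extra clause for Susceptible nodes. Here's an illustrative sketch (the names and representation are my own, not the widget's):

```python
import random

def sisa_step(states, neighbors, p_transmit, p_spontaneous, rng=random):
    """One SISa time step on an arbitrary network.

    states: 'S' or 'I' per node; neighbors: adjacency list.
    Identical to SIS, except each Susceptible node also has an
    independent chance of activating on its own.
    """
    new = states[:]
    for node, state in enumerate(states):
        if state == 'I':
            for nb in neighbors[node]:
                if states[nb] == 'S' and rng.random() < p_transmit:
                    new[nb] = 'I'
            new[node] = 'S'  # SIS-style recovery back to Susceptible
        elif state == 'S' and rng.random() < p_spontaneous:
            new[node] = 'I'  # spontaneous ('automatic') activation
    return new
```

Note that spontaneous activation only plants seeds; whether those seeds grow into a network-spanning infection still depends entirely on the transmission rate.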
This is like trying to start a fire in a wet field. You might get a few dry leaves to catch, but the flame will quickly die out because the rest of the landscape isn't flammable enough (subcritical). Whereas in a very dry field (supercritical), it may only take one spark to start a raging wildfire.
We can see similar things taking place in the landscape for ideas and inventions. Often the world isn't ready for an idea, in which case it may be invented again and again without catching on. At the other extreme, the world may be fully primed for an invention (lots of latent demand), and so as soon as it's born, it's adopted by everyone. In-between are ideas that are invented in multiple places and spread locally, but not enough so that any individual version of the idea takes over the whole network all at once. In this latter category we find e.g. agriculture and writing, which were independently invented ~10 and ~3 times respectively.
Suppose we make some nodes completely resistant or "immune" to activation. This is like putting them in the Removed state, then running SIS(a) on the remaining nodes.

You can play with it below. The 'Immunity' slider controls the percentage of nodes that are Removed. Try varying the slider (while the simulation is running!) to see its effect on whether the network is supercritical or not:
Changing how many nodes are immune absolutely changes whether the network is sub- or supercritical. And it's not hard to see why. When many nodes are immune, each infection has fewer opportunities to spread to a new host.
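One simple way to model this in code (an illustrative sketch; the function name and state letters are my own): pre-mark a random fraction of nodes as Removed before the simulation starts, and let the SIS(a) rules ignore them thereafter.

```python
import random

def add_immunity(states, immunity, rng=random):
    """Mark a random fraction of nodes as Removed before the run.

    Removed ('R') nodes never become Infected and never transmit,
    so raising `immunity` thins out the paths along which the
    infection can spread.
    """
    return ['R' if rng.random() < immunity else s for s in states]
```

Because an SIS(a) step only infects nodes whose state is `'S'`, the `'R'` nodes act as permanent dead ends — exactly the gaps that can tip the network from supercritical to subcritical.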
This turns out to have a number of very important practical implications.
One is preventing the spread of wildfires. Now, individuals should always take their own local precautions (e.g., never leaving an open flame unattended). But at a larger scale, small outbreaks are inevitable. So another mitigation technique is to ensure there are enough 'gaps' (in the network of flammable materials) that an outbreak can't take over the entire network. Thus firewalls and firebreaks:
Another outbreak that's important to stop is infectious disease. Enter here the concept of herd immunity. This is the idea that, even if some people can't be vaccinated (e.g., because they have compromised immune systems), as long as enough people are immunized, the disease won't be able to spread indefinitely. In other words, vaccinating enough people can bring a population down from supercritical to subcritical. When this happens, a single patient might still catch the disease (e.g., by traveling to another region and then back home), but without a supercritical network in which to grow, the disease will only ever infect a small handful of people.
Finally, we can use the concept of immune nodes to understand what happens in a nuclear reactor. In a nuclear chain reaction, a decaying uranium-235 atom releases ~3 neutrons, which trigger (on average) more than one other U-235 atom to split. The new neutrons that are released then trigger further atoms to split, and so on exponentially:
Now, when making a bomb, the whole point is to let the exponential growth proceed unchecked. But in a power plant, the goal is to produce energy without killing everyone in the neighborhood. For this we use control rods, which are made from material that can absorb neutrons (like silver or boron). Because they absorb rather than release neutrons, control rods act like immune nodes in our simulation above, thereby preventing the radioactive core from going supercritical.
The trick to running a nuclear reactor, then, is to keep the reaction hovering just at the critical threshold, while making EXTRA! SPECIAL! SURE! that whenever anything goes wrong, the control rods slam into the core and put a stop to it.
The degree of a node is the number of neighbors it has. Up to this point, we've been looking at networks of degree 4. But what happens when we vary this parameter?
For example, we might connect each node not only to its 4 immediate neighbors, but also to its 4 diagonal neighbors. In such a network, the degree would be 8.
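In grid terms, these two cases correspond to two standard neighborhood definitions, which can be written as lists of coordinate offsets (the constant names here are mine):

```python
# Von Neumann neighborhood: the 4 orthogonal neighbors (degree 4).
NEIGHBORS_4 = [(-1, 0), (1, 0), (0, -1), (0, 1)]

# Moore neighborhood: orthogonal plus diagonal neighbors (degree 8).
NEIGHBORS_8 = NEIGHBORS_4 + [(-1, -1), (-1, 1), (1, -1), (1, 1)]
```

Swapping the degree-8 list into a step function doubles the number of transmission chances each infected node gets per step, which is why higher degree pushes the critical threshold down.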
You can play with this parameter below:
Again, it's not hard to understand what's happening here. When each node has more neighbors, there are more chances for an infection to spread — and thus the network is more likely to go critical.
This can have surprising implications, however, as we'll see below.
Cities and network density
Up to now, our networks have been completely homogeneous. Every node looks the same as every other node. But what if we subvert this assumption and allow things to vary across the network?
For example, let's try to model cities. We'll do this by creating patches of the network that are denser in connections (have higher degree) than the rest of the network. This is motivated by data that suggests that people in cities have wider social circles and more social interactions than people outside cities.
In the simulation below, we color Susceptible nodes based on their degree. Nodes out in the 'countryside' have degree 4 (and are colored in light gray), whereas nodes in the 'cities' have higher degrees (and are colored correspondingly darker), starting at degree 5 on the outskirts and culminating at 8 in the city center.
Can you get the initial activation to spread to the cities, and then remain only in the cities?
I find this simulation both obvious and surprising at the same time.
Of course cities can support more culture than rural areas — everyone knows this. What surprises me is that some of this cultural variety can arise based on nothing more than the topology of the social network.
This is worth dwelling on, so let me try to explain it more carefully.
What we're dealing with here are forms of culture that get transmitted simply and directly from person to person. For example, manners, parlor games, fashion trends, linguistic trends, small-group rituals, and products that spread by word of mouth — plus many of the packages of information we call ideas.
(Note: Person-to-person diffusion is complicated tremendously by mass media. So as you're thinking about these processes, it may help to imagine a more technologically primitive environment, e.g., Archaic Greece, where almost every scintilla of culture is transmitted during meatspace interactions.)
What I learned from the simulation above is that there are ideas and cultural practices that can take root and spread in a city that simply can't spread out in the countryside. (Mathematically can't.) These are the very same ideas and the very same kinds of people. It's not that rural folks are e.g. 'small-minded'; when exposed to one of these ideas, they're exactly as likely to adopt it as someone in the city. Rather, it's that the idea itself can't go viral in the countryside because there aren't as many connections along which it can spread.
This is perhaps easiest to see in the domain of fashion — clothing, hairstyles, etc. In the fashion network, we might say that an edge exists whenever two people notice each other's outfits. In an urban center, each person could see upwards of 1000 other people every day — on the street, in the subway, at a crowded restaurant, etc. In a rural area, in contrast, each person may see only a couple dozen others. Based on this difference alone, the city is capable of sustaining more fashion trends. And only the most compelling trends — the ones with the highest transmission rates — will be able to take hold outside of the city.
We tend to think that if something's a good idea, it will eventually reach everyone, and if something's a bad idea, it will fizzle out. And while that's certainly true at the extremes, in between are a bunch of ideas and practices that can only go viral in certain networks. I find this fascinating.
Not just cities
What we're exploring here are the effects of network density. This is defined, for a given set of nodes, as the number of actual edges divided by the number of potential edges. I.e., the percentage of possible connections that actually exist.
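The definition translates directly into code. A quick sketch (hypothetical function name):

```python
def network_density(num_nodes, num_edges):
    """Share of possible undirected edges that actually exist."""
    possible_edges = num_nodes * (num_nodes - 1) / 2
    return num_edges / possible_edges
```

For instance, 10 people joined by 9 friendships have density 9/45 = 0.2, whereas the same 10 people fully connected would have all 45 possible edges, for a density of 1.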
So, as we've seen, urban centers have higher network densities than rural areas. But cities aren't the only place we find dense networks.
High schools are an interesting example. Consider, in a given neighborhood, the network that exists among the students vs. the network that exists among their parents. Same geographic area and similar population sizes, but one network is many times denser than the other. And it's no surprise, then, that fashion and linguistic trends proliferate among adolescents, while spreading much more slowly among the adults.
Similarly, elite networks are generally much denser than non-elite networks — an underappreciated fact, IMO. (People who are popular or powerful spend more time networking, and so they have more 'neighbors' than ordinary folks.) Based on the simulations above, we would expect elite networks to support some cultural forms that can't be supported by the mainstream, based on nothing more than the average degree of the network. I'll leave it to you to speculate on what these forms might be.
Finally, we can apply this lens to the internet, by choosing to model it as a huge and very densely networked city. Not surprisingly, there are many new kinds of culture flourishing online that simply couldn't be sustained in purely meatspace networks. Most of these are things we want to celebrate: niche hobbies, better design standards, greater awareness of injustices, etc. But it's not all gravy. Just as the first cities were a hotbed for diseases that couldn't spread at lower population densities, so too is the internet a breeding ground for malignant cultural forms like clickbait, fake news, and performative outrage.
"The attention of the right expert at the right time is often the single most valuable resource one can have in creative problem solving." — Michael Nielsen, Reinventing Discovery
We often think of discovery or invention as a process that takes place in the mind of a singular genius. A flash of inspiration strikes and — eureka! — suddenly we get a new way to measure volume. Or the equations for gravity. Or the lightbulb.
But taking the perspective of the lone inventor and zeroing in on the moment of discovery is to take the node's eye view of the phenomenon. Whereas, properly construed, invention is something that happens on a network.
The network is important in at least two ways. First, preexisting ideas have to make their way into the mind of the inventor. These are the citations of a new paper, the bibliography section of a new book — the giants on whose shoulders Newton stood. Second, the network is crucial for getting a new idea back out into the world; an invention that doesn't spread is hardly worth calling an 'invention' at all. And so, for both of these reasons, it makes sense to model invention — or more broadly, the growth of knowledge — as a diffusion process.
In just a moment, I'll present a crude simulation of how knowledge might diffuse and grow within a network. But first I need to explain it.
At the start of the simulation, there will be 4 experts, one positioned in each quadrant of the grid, like so:
Expert 1 has the first version of the idea — let's call it Idea 1.0. Expert 2 is the kind of person who knows how to transform Idea 1.0 into Idea 2.0. Expert 3 knows how to transform Idea 2.0 into Idea 3.0. And finally, Expert 4 knows how to put the finishing touches on the idea to create Idea 4.0.
This might represent a craft (technê) like origami, in which techniques are elaborated and combined with other techniques to produce more interesting constructions. Or it might represent a field of knowledge (epistêmê) like physics, in which later work builds on more fundamental work established by earlier physicists.
The conceit of this simulation is that we need all four experts to contribute to the final version of the idea. And at each phase of development, the idea has to diffuse to the relevant expert.
Here's what it looks like in action:
This is a ridiculously simplified model of how knowledge actually grows. It leaves out a great many important details (see caveats above). Nevertheless, I think it captures an important essence of the process. And so we can, tentatively, use what we've learned so far (about diffusion) to reason about knowledge growth.
In particular, the diffusion model gives us intuition for how to speed things up: make it easier for expert nodes to share ideas. This might mean clearing out the dead nodes that get in the way of diffusion. Or it might mean putting all the experts in a city, where ideas percolate quickly. Or it might mean simply getting them in the same room together:
So... that's all I have to share with you about diffusion.
I have one last thought to share, however, and it's an important one. It's about the growth (and stagnation) of knowledge in scientific communities. This will be a departure in tone and content from everything above, but I hope you'll indulge me.
On scientific networks
The loop below, it seems to me, is among the most important positive feedback loops in the world (and has been for quite some time):
The upstroke of the loop (K ⟶ T) is reasonably straightforward: We use new knowledge to devise new tools. For example, understanding the physics of semiconductors enables us to build computers.
The downstroke, however, warrants some unpacking. How does technological growth lead to knowledge growth?
One way — perhaps the most direct — is when new technology gives us new ways to perceive the world. For example, better microscopes allow us to peer more deeply inside the cell, generating insight into molecular biology. GPS trackers show us where animals are moving. Sonar allows us to explore the oceans. And so on.
This mechanism is vital, no doubt, but there are at least two other paths from technology to knowledge. They may be less straightforward, but I think they're at least as important:
One. Technology leads to economic surplus (i.e., wealth), and more surplus, in turn, allows more people to specialize in knowledge production.
If 90 percent of your country is engaged in subsistence agriculture, and most of the remaining 10 percent are performing some form of commerce (or war), it doesn't leave a lot of people with the free time to ponder the laws of nature. Perhaps this is why most science in premodern times was done by the children of wealthy families.
Today, the US produces over 50,000 PhDs every year. Instead of getting a job at age 18 (or earlier), a PhD student must be subsidized well into their 20s and perhaps into their 30s — and even then, it's unclear that they'll produce anything of real economic value. But this is what's necessary to get people to the frontier of knowledge, especially in difficult domains like physics or biology.
Point is, from a systems perspective, specialists don't come cheap. And the ultimate source of the societal wealth which funds these specialists is new technology; the plow subsidizes the pen.
Two. New technologies, especially in travel and communication, change the structure of the social networks on which knowledge grows. In particular, it allows experts and specialists to network more tightly with one another.
Notable inventions here include the printing press, steamships and railroads (making it easier to travel and/or mail things over long distances), telephones, airplanes, and the internet. All of these technologies serve to increase network density, especially within specialist communities (which is where the vast majority of knowledge growth occurs). For example, the correspondence networks that arose among European scholars during the late Middle Ages, or the way modern physicists use arXiv.
Ultimately both of these pathways are similar. Both lead to a greater network density of specialists, which in turn leads to knowledge growth:
For years I've been fairly dismissive of academia. A short stint as a PhD student left a bad taste in my mouth. But now, when I step back and think about it (and abstract away all my personal issues), I have to conclude that academia is still extremely important.
Academic social networks (e.g., scientific research communities) are some of the most refined and valuable structures our civilization has produced. Nowhere have we amassed a greater concentration of specialists focused full-time on knowledge production. Nowhere have people developed a greater ability to understand and critique each other's ideas. This is the beating heart of progress. It's in these networks that the fire of the Enlightenment burns hottest.
But we can't take progress for granted. If the reproducibility crisis has taught us anything, it's that science can have systemic problems. And one way to look at those problems is network degradation.
Suppose we distinguish two ways of practicing science: Real Science vs. careerist science. Real Science is whatever habits and practices reliably produce knowledge. It's motivated by curiosity and characterized by honesty. (Feynman: 'I just have to understand the world, you see.') Careerist science, in contrast, is motivated by professional ambition, and characterized by playing politics and taking scientific shortcuts. It may look and act like science, but it doesn't produce reliable knowledge.
(Yes this is an exaggerated dichotomy. It's a thought exercise. Bear with me.)
Point is, when careerists take up space in a Real Science research community, they gum up the works. They angle to promote themselves while the rest of the community is trying to learn and share what's true. Instead of striving for clarity, they complicate and obfuscate in order to sound more impressive. They engage in (what Harry Frankfurt might call) scientific bullshit. And consequently, we might model them as dead nodes, immune to the good-faith information exchanges necessary for the growth of knowledge:
Perhaps a better model is one in which careerist nodes aren't just impervious to knowledge, but are actively spreading fake knowledge. Fake knowledge might include minor results that get hyped up and oversold, for example, or genuinely false results that arise from p-hacking or fabricated data.
But regardless of how we model them, careerists certainly have the potential to stifle our scientific communities.
It's like a nuclear reaction that we badly need — an explosion of knowledge — except that our enriched U-235 is salted with too much U-238, the nonreactive isotope that suppresses the chain reaction.
Of course, there's no categorical distinction between careerists and Real Scientists. We all have a little careerism in us. The question is just how much the network can carry before going quiet.
Oh hi, you made it all the way to the end. Thanks for reading.
A quick request
If you're on Twitter and have a few minutes, I'd really appreciate some feedback on this post. I'm excited about this medium (prose + interactive widgets) and plan on doing more posts like this in the future. So I'd love to know what worked and what didn't. Please get in touch.
No rights reserved. You're free to use this work however you see fit :).
- Kevin Kwok and Nicky Case for their thoughtful comments and suggestions on various drafts.
- Nick Barr for moral support throughout the process, and for some of the most helpful feedback I've ever been given on my work.
- Keith A. for pointing me to percolation theory, a field that 'wouldn't know a proof if it bit them in the face.'
- Jeff Lonsdale for the link to this essay, which (despite its many flaws) was my main impetus to work on this post.
Originally published May 13, 2019.