The limits of simulation

In a rather clever confluence of Bostrom’s simulation argument and the Fermi Paradox, Anatoly Karlin hypothesizes that the reason there is no extraterrestrial life in our simulated universe is that it lies beyond the simulation’s limits:

In a classic paper from 2003, Nick Bostrom argued that at least one of the following propositions is very likely true: that posthuman civilizations don’t tend to run “ancestor-simulations”; that we are living in a simulation; or that we will go extinct before reaching a “posthuman” stage [58]. Let us denote these “basement simulators” as the Architect, after the constructor of the Matrix world-simulation in the eponymous film. As Bostrom points out, it seems implausible, if not impossible, that there is a near-uniform tendency to avoid running ancestor-simulations in the posthuman era.

There are unlikely to be serious hardware constraints on simulating human history up to the present day. Assuming the human brain can perform ~10^16 operations per second, this translates to ~10^26 operations per second to simulate today’s population of 7.7 billion humans, and ~10^36 operations over the entirety of humanity’s ~100 billion lives to date [8]. As we shall soon see, even the latter can theoretically be accomplished within about one second by a nano-based computer on Earth running exclusively off its solar irradiance.
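As a rough sanity check on those orders of magnitude, here is a short back-of-the-envelope calculation; the ~30-year average lifespan is my own assumption, used only to recover the ~10^36 total, and is not given in the text.

```python
import math

OPS_PER_BRAIN = 1e16           # ops/s per human brain (figure from the text)
POPULATION = 7.7e9             # humans alive today
LIVES_TO_DATE = 1e11           # ~100 billion humans ever born
AVG_LIFESPAN_S = 30 * 3.15e7   # ~30-year average lifespan (assumption)

ops_per_second_today = OPS_PER_BRAIN * POPULATION
ops_all_human_history = OPS_PER_BRAIN * LIVES_TO_DATE * AVG_LIFESPAN_S

print(f"today's population: ~10^{math.log10(ops_per_second_today):.1f} ops/s")   # ~10^26
print(f"all lives to date:  ~10^{math.log10(ops_all_human_history):.1f} ops")    # ~10^36
```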

Sensory and tactile information is much less data-heavy, and is trivial to simulate in comparison to neuronal processes. The same applies to the environment, which can be procedurally generated upon observation, as in many video games. In Greg Egan’s Permutation City, a sci-fi exploration of simulations, the simulated worlds are designed to be computationally sparse yet highly immersive. This makes intuitive sense: there is no need to model the complex thermodynamics of the Earth’s interior in their entirety, molecular and lower-level details need only be “rendered” on observation, and far-away stars and galaxies shouldn’t require much more than a juiced-up version of the Universe Sandbox video game sim.
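To illustrate the “render on observation” idea, here is a toy sketch of lazy, seed-deterministic procedural generation; the names and structure are mine and not drawn from the text, it simply shows how detail can be computed only when an observer asks for it.

```python
import hashlib

def detail(region: tuple, level: int, seed: int = 42) -> int:
    """Deterministically derive fine-grained detail for a region and zoom level,
    so nothing needs to be stored or precomputed ahead of observation."""
    key = f"{seed}:{region}:{level}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:8], "big")

class LazyWorld:
    """Only the parts of the world an observer looks at ever get computed."""
    def __init__(self, seed: int = 42):
        self.seed = seed
        self.rendered = {}                      # cache of observed regions

    def observe(self, region: tuple, level: int = 0) -> int:
        key = (region, level)
        if key not in self.rendered:            # "render" lazily, on first observation
            self.rendered[key] = detail(region, level, self.seed)
        return self.rendered[key]

world = LazyWorld()
world.observe((12, 7), level=3)                 # fine-grained detail exists only here
print(len(world.rendered))                      # 1; the rest of the world stays unrendered
```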

Bostrom doesn’t consider the costs of simulating the history of the biosphere. I am not sure that this omission is justified, since our biological and neurological makeup is itself a result of billions of years of natural selection. Nor is it likely to be a trivial endeavour, even relative to simulating all of human history. Even today, there are about as many ant neurons on this planet as there are human neurons, which suggests that they place a broadly similar load on the system [9]. Consequently, rendering the biosphere may still require one or two more orders of magnitude of computing power than simulating all humans. Moreover, before the rise of agriculture the human population – and total number of human neurons – was more than three orders of magnitude lower than today, i.e. irrelevant next to the animal world for ~99.9998% of the biosphere’s history [10]. Simulating the biosphere’s evolution may have required as many as 10^43 operations [11].
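The ~10^43 figure is roughly what falls out of running a human-population-scale neuronal load over the biosphere’s entire history; the ~4-billion-year duration used below is my assumption for the check, not a figure from the text.

```python
import math

BIOSPHERE_OPS_PER_S = 1e26     # assume the biosphere's total neuronal load is roughly
                               # comparable to today's human load (per the text)
BIOSPHERE_AGE_YEARS = 4e9      # ~4 billion years of evolution (assumption)
SECONDS_PER_YEAR = 3.15e7

total_ops = BIOSPHERE_OPS_PER_S * BIOSPHERE_AGE_YEARS * SECONDS_PER_YEAR
print(f"biosphere history: ~10^{math.log10(total_ops):.1f} operations")   # ~10^43
```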

I am not sure whether 10^36 or 10^43 operations is the more important number so far as generating a credible and consistent Earth history is concerned. However, we may consider this general range to be a hard lower bound on the amount of “boring” computation the simulators are willing to commit to in search of potentially interesting results.

Even simulating a biosphere history is eminently doable for an advanced civilization. A planet-scale computer based on already known nanotechnological designs and powered by a single-layer Matryoshka Brain that cocoons the Sun will generate 10^42 flops [60]. Assuming the Architect’s universe operates within the same set of physical laws, there is enough energy and enough mass to compute such an “Earth history” within 10 seconds – and this is assuming they don’t use more “exotic” computing technologies (e.g. based on plasma or quantum effects). Even simulating ten billion such Earth histories will “only” take ~3,000 years – a blink of an eye in cosmic terms. Incidentally, that also happens to be the number of Earth-sized planets orbiting in the habitable zones of Sun-like stars in the Milky Way [61].
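Checking those timings against the 10^42 flops figure (the seconds-per-year constant is the only value added here):

```python
MATRYOSHKA_FLOPS = 1e42        # single-layer Matryoshka Brain (figure from the text)
BIOSPHERE_OPS = 1e43           # one full Earth-biosphere history
N_HISTORIES = 1e10             # ten billion Earth histories
SECONDS_PER_YEAR = 3.15e7

one_history_s = BIOSPHERE_OPS / MATRYOSHKA_FLOPS
all_histories_years = N_HISTORIES * one_history_s / SECONDS_PER_YEAR

print(f"one Earth history:     ~{one_history_s:.0f} s")              # ~10 s
print(f"ten billion histories: ~{all_histories_years:,.0f} years")   # ~3,000 years
```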

So far, so good – assuming that we’re more or less in the ballpark on orders of magnitude. But what if we’re not? Simulating the human brain may require as much as 10^25 flops, depending on the required granularity, or even as many as 10^27 flops if quantum effects are important [62,63]. This is still quite doable for a nano-based Matryoshka Brain, though the simulation will approach the speed of our universe as soon as it has to simulate ~10,000 civilizations of 100 billion humans. However, doing even a single human history now requires 10^47 operations, or two days of continuous Matryoshka Brain computing, while doing a whole Earth biosphere history requires 10^54 operations (more than 30,000 years).
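The same arithmetic under the pessimistic per-brain estimate; the totals simply scale the earlier figures by 10^27 / 10^16 = 10^11, and land in the day-or-two and ~30,000-year ranges quoted above.

```python
MATRYOSHKA_FLOPS = 1e42
BRAIN_FLOPS_HIGH = 1e27                    # pessimistic per-brain cost (quantum effects)
SCALE_UP = BRAIN_FLOPS_HIGH / 1e16         # 10^11x the baseline estimate
SECONDS_PER_YEAR = 3.15e7

civs_at_realtime = MATRYOSHKA_FLOPS / (BRAIN_FLOPS_HIGH * 1e11)   # ~10^4 civilizations
human_history_ops = 1e36 * SCALE_UP                               # ~10^47
biosphere_ops = 1e43 * SCALE_UP                                   # ~10^54

print(f"civilizations simulable in real time: ~{civs_at_realtime:.0e}")
print(f"one human history:     ~{human_history_ops / MATRYOSHKA_FLOPS / 86400:.1f} days")   # ~10^5 s
print(f"one biosphere history: ~{biosphere_ops / MATRYOSHKA_FLOPS / SECONDS_PER_YEAR:,.0f} years")
```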

This would still be feasible, or in certain circumstances even trivial, in our own universe. Seth Lloyd calculates a theoretical upper bound of 5×10^50 flops for a 1 kg computer [64]. Converting the entirety of the Earth’s mass into such a computer would yield 3×10^75 flops. That said, should we find that simulating a human brain requires orders of magnitude more than ~10^16 flops, we may start to slowly discount the probability that we are living in a simulation. Conversely, if we find clues that simulating a biosphere is much easier than simulating a human noosphere – for instance, if the difficulty of simulating brains increases faster than linearly with their number of neurons – we may instead have to conclude that it is more likely that we live in a simulation.
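And the final figures, using a standard value for the Earth’s mass (not given in the text); the last line just notes how vanishingly fast even the 10^54-operation biosphere history would run on such a machine, which is what makes it “trivial” in this scenario.

```python
LLOYD_FLOPS_PER_KG = 5e50      # Lloyd's theoretical bound for a 1 kg computer (from the text)
EARTH_MASS_KG = 5.97e24        # standard value (assumption, not from the text)

earth_computer_flops = LLOYD_FLOPS_PER_KG * EARTH_MASS_KG
print(f"Earth-mass ultimate computer:       ~{earth_computer_flops:.0e} flops")     # ~3e75
print(f"high-fidelity biosphere history in: ~{1e54 / earth_computer_flops:.0e} s")  # ~3e-22 s
```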