An Inspiring Critique

Dennis McCarthy recently put up a post offering a detailed critique of the Amazon-banned Amazon bestseller Probability Zero. We don’t know for certain that it was the publication of Probability Zero, and the effectiveness of the book, that inspired some evolutionary enthusiast in the KDP department to ban Castalia’s account, but we can be very confident that it wasn’t because Castalia submitted my own Japanese translation of my own book for publication without having the right to do so, which was the reason we were given.

In any event, McCarthy’s critique is the first substantive one we’ve seen, and it’s a more competent attempt to engage with the mathematical arguments in Probability Zero than those from Redditors opining in ignorance, but his critique immediately fails for multiple reasons that demonstrate the significant difference between biological intuition and mathematical rigor. For some reason, McCarthy elects to focus on the Darwillion, my calculation of the probability of evolution by natural selection, instead of on MITTENS itself, but that’s fine. Either way, there was no chance he was going to even scratch the paint on the proven fact of the mathematical impossibility of natural selection.

“What Vox Day calculated—(1/20,000)^20,000,000—are the odds that a particular group or a pre-specified list of 20 million mutations (or 20 million mutations in a row) would all become fixed. In other words, his calculation would only be accurate if the human race experienced only 20 million mutations in total over the last 9 million years—and every one of them then became fixed… Using Vox Day’s numbers, in a population of 10,000 humans, we would expect, on average, 50,000 new mutations per year. And over the course of 9 million years, this means we would expect: 50,000 × 9 million = 450 billion new mutations altogether. So out of 450 billion mutations, how many mutations may we expect to achieve fixation? Well, as Vox Day noted, each mutation has a probability of 1/20,000 in becoming fixed. 450 billion × 1/20,000 = 22.5 million fixed mutations.”

This is a category error. What McCarthy has done here is abandon Darwin, abandon natural selection, and retreat to an aberrant form of neutral theory that he’s implementing without even realizing he has done so. He’s cargo-culting the structure of Kimura’s core equation that underlies neutral theory without understanding what the terms mean or where they come from. My numbers weren’t arbitrary; they come straight out of Kimura’s fixation model.

So he took my number for mutations arising, which depends on effective population (Nₑ), multiplied it by the fixation probability (which depends on 1/Nₑ), and got the textbook neutral theory answer because the Nₑ terms cancel each other out. He wrote it as “mutations × probability” because he was reverse-engineering an argument to match the observed 20 million, not applying the theory directly. It’s rather like someone proving F=ma by measuring force and acceleration separately, then multiplying them together and thinking they’ve discovered mass. It’s technically correct, yes, but also completely misses the point.
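For anyone who wants to check the cancellation for themselves, here is a minimal Python sketch using the numbers above. The diploid framing (N = 10,000 individuals, 2N = 20,000 gene copies, fixation probability 1/(2N)) is the standard Kimura setup; everything else is just arithmetic.

```python
# Expected neutral substitutions: new mutations entering the population,
# multiplied by the probability that any one of them reaches fixation.
# Numbers follow the post: N = 10,000, p_fix = 1/20,000 = 1/(2N),
# 50,000 new mutations per year, 9 million years.

def substitutions(n_individuals, mutations_per_year, years):
    """Arrivals times the 1/(2N) neutral fixation probability."""
    arrivals = mutations_per_year * years
    return arrivals / (2 * n_individuals)

# McCarthy's arithmetic: 450 billion arrivals x 1/20,000
print(substitutions(10_000, 50_000, 9_000_000))   # 22500000.0

# The cancellation: arrivals scale with N while fixation probability
# scales with 1/N, so doubling the population (and, with it, the yearly
# mutation supply) leaves the expected substitution count unchanged.
print(substitutions(20_000, 100_000, 9_000_000))  # 22500000.0
```

The population size drops out, which is the whole point of Kimura’s invariance result.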

The next thing to point out is that not only is what he’s cited incorrect and irrelevant, it isn’t even a defense of evolution through natural selection. McCarthy’s rebuttal has nothing to do with Darwin, nothing to do with adaptation, nothing to do with fitness, nothing to do with selection pressure, nothing to do with speciation, and nothing to do with all of the biogeography that McCarthy later lovingly details. Neutral theory, or genetic drift, if you prefer, is what happens automatically over time, and it is appealed to by biologists as a retreat from Neo-Darwinism to try to explain the existence of these huge genetic gaps for which they know natural selection and sexual selection cannot possibly account.

Even the great defender of orthodox Darwinism, Richard Dawkins, has retreated from TENS. It’s now “the Theorum of Evolution by (probably) Natural Selection, Sexual Selection, Biased Mutation, Genetic Drift, and Gene Flow.”

But that’s not the only problem with the critique. McCarthy’s calculation is correct for the number of mutations that enter the population. That tells you precisely nothing about whether those mutations can actually complete fixation within the available time. He has confused mutation with fixation, as do the vast majority of biologists who attempt to address these mathematical issues. I don’t know why they find it so difficult, as presumably these people are perfectly capable of communicating that they only want one burrito from Taco Bell, and not 8 billion, with their order.

McCarthy’s calculation implicitly assumes that fixation is instantaneous. He’s assuming that when a mutation appears, it has a 1/20,000 chance of succeeding, and if it succeeds, it immediately becomes fixed in 100% of the population. But this is not true. Fixation is a process that takes time. Quite often, a lot of time. Because if McCarthy had understood that he was utilizing Kimura’s fixation model, then he would have known to take into account that the expected time to fixation is approximately 4Nₑ generations, which is around 40,000 generations for an effective population size of 10,000.

In other words, he actually INCREASED the size of the Darwillion by a factor of 25. I was using a time-to-fixation number of 1,600 generations. He’s proposing that increasing that 1,600 to 40,000 is somehow going to reduce the improbability, which is obviously not the case. The problem is that all fixations must propagate through actual physical reproduction. Every individual carrying the fixing allele must reproduce, their offspring must survive, those offspring must reproduce, and so on, generation after generation, for tens of thousands of generations, until the mutation reaches 100% frequency throughout the entire reproducing population.
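The factor of 25 is simple to verify. The 4Nₑ figure is the standard Kimura-Ohta approximation for the expected time a neutral mutation that does fix takes to sweep to 100 percent:

```python
# Expected fixation time for a neutral allele, conditional on fixing:
# approximately 4*Ne generations (Kimura & Ohta, 1969).
Ne = 10_000                  # effective population size used in the post
t_fix = 4 * Ne               # ~40,000 generations per completed fixation

print(t_fix)                 # 40000
print(t_fix / 1_600)         # 25.0 -- the factor-of-25 increase over the
                             # Darwillion's 1,600-generation figure
```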

Here’s the part that McCarthy omitted: can those 22 million mutations actually become fixed through this reproductive process in 450,000 generations once they appear? Of course they can’t, and for two reasons, both related to the limits on natural selection and explained in great detail in the book:

  • The Reproductive Ceiling: Selection operates through differential reproduction. For mutations to fix faster than neutral drift, carriers must outreproduce non-carriers. But humans can only produce a limited number of offspring per generation. A woman might have 10 children in a lifetime; a man might sire 100 under exceptional circumstances. This places a hard ceiling on how much selection can operate simultaneously across the genome.
  • The Bernoulli Barrier: Even if we invoke parallel fixation (many mutations fixing simultaneously), the Law of Large Numbers creates a devastating problem. As the number of simultaneously segregating beneficial loci increases, the variance in individual fitness decreases relative to the mean. Selection requires variance to operate; parallel fixation destroys the variance it needs. This constraint is hard, but purely mathematical, arising from probability theory rather than biology.
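The Bernoulli Barrier lends itself to a toy illustration. Model fitness as the sum of k independent Bernoulli loci; this additive, no-linkage model is a deliberate simplification for illustration only, but it shows the law-of-large-numbers effect described above: the relative variance that selection needs shrinks as 1/√k as more loci segregate simultaneously.

```python
import math

# Coefficient of variation of fitness when k beneficial loci segregate
# independently, each modeled as a Bernoulli(p) contribution.
def coefficient_of_variation(k, p=0.5):
    mean = k * p                         # expected fitness contribution
    sd = math.sqrt(k * p * (1 - p))      # standard deviation of the sum
    return sd / mean                     # falls off as 1/sqrt(k)

for k in (1, 100, 10_000):
    print(k, round(coefficient_of_variation(k), 4))
# 1 1.0
# 100 0.1
# 10000 0.01
```

A hundredfold increase in simultaneously segregating loci cuts the relative fitness variance tenfold, which is the sense in which parallel fixation destroys the variance selection requires.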

McCarthy’s second objection concerns the 2009 Nature study on E. coli:

“Unfortunately, this analysis is flawed from the jump: E. coli does not exhibit the highest mutation rate per generation; in fact, it has one of the lowest—orders of magnitude lower than humans when measured on a per-genome, per-generation basis.”

McCarthy is correct that humans have a higher per-genome mutation rate than E. coli—roughly 60-100 de novo mutations per human generation versus roughly one mutation per 1000-2400 bacterial divisions. But this observation is irrelevant. Once again, he’s confusing mutation with fixation.

I didn’t cite the E. coli study for its mutation rate but for its fixation rate: 25 mutations fixed in 40,000 generations, yielding an average of 1,600 generations per fixed mutation. These 25 mutations were not fixed sequentially—they fixed in parallel. The 1,600-generation rate already incorporates parallel fixation.
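The arithmetic behind the 1,600-generation figure, for completeness:

```python
# Fixation rate implied by the E. coli long-term experiment as cited in
# the post: 25 mutations fixed over 40,000 generations, in parallel.
fixed_mutations = 25
generations = 40_000

print(generations / fixed_mutations)   # 1600.0 generations per fixation
```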

Now, McCarthy is operating under the frame of Kimura, assuming that since mutations = fixations, the fact that humans mutate faster than bacteria means that they fixate faster. They don’t. No one has ever observed any human fixation faster than 1,600 generations. Even if we very generously extrapolate from the existing CCR5-delta32 mutation, which underwent the most intense selection pressure ever observed, the fastest we could get, in theory, is 2,278 generations, and even that fixation will never happen, because the absence of the Black Death means there is no longer any selection pressure or fitness advantage granted by the mutation.

Which means that if neutral drift carries CCR5-delta32 the rest of the way to fixation, it will require another 37,800 generations, and that only in the event that it hits on its 10 percent chance of completing fixation from its current share of the global population.
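For reference, the standard drift formulas behind numbers of this kind are the fixation probability of a neutral allele, which equals its current frequency, and the Kimura-Ohta conditional fixation time. Plugging in an assumed Nₑ of 10,000 and a current frequency of 10 percent gives a figure close to the 37,800 generations above:

```python
import math

# Neutral drift from a standing frequency p (Kimura & Ohta, 1969):
#   P(fixation) = p
#   E[time to fixation | fixes] ~= -4*Ne*(1-p)*ln(1-p)/p generations
# Ne = 10,000 and p = 0.10 are assumptions matching the post's figures.
Ne, p = 10_000, 0.10

p_fix = p
t_fix = -4 * Ne * (1 - p) * math.log(1 - p) / p

print(p_fix)           # 0.1 -- the 10 percent chance of completing fixation
print(round(t_fix))    # 37930 -- close to the 37,800 generations cited
```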

In short, the fact that E. coli mutate slower doesn’t change the fact that humans don’t fixate faster.

The rest of the critique is irrelevant and incorrect. I’ll just give two examples:

Finally, there is no brake—no invisible wall—that arbitrarily halts adaptation after some prescribed amount of change. Small variations accumulate without limit. Generation after generation, those increments compound, and what begin as modest differences become profound transformations. When populations of the same species are separated by an earthly barrier—a mountain, a sea, a desert—they diverge: first into distinct varieties or subspecies, and eventually into separate species. And precisely what this process predicts is exactly what we find. Everywhere, without exception.

This is a retreat to the innumeracy of the biologist. There is absolutely a hard limit, a very visible flesh-and-blood wall, that prevents adaptation and renders natural selection almost irrelevant as a proposed mechanism for evolution. That is the reproductive barrier, which is far stronger and far more significant than the earthly barriers to which McCarthy appeals.

I don’t know why this is so hard for evolutionary enthusiasts to grasp: we actually know what the genetic distance between two different species is. We know the amount of time that it took to create that genetic gap. And there are not enough generations, not enough births, not enough reproductions, to account for ANY of the observed genetic gaps in the available amount of time.

Imagine a traveler made the same appeal in order to support his claim about his journey.

There is no brake—no invisible wall—that arbitrarily halts movement after some prescribed amount of steps. Small steps accumulate without limit. Block after block, those increments compound, and what begin as modest differences become profound transformations. When man is separated from his earthly objective—a city on a distant shore—he begins to walk, first across county lines, and then across states, over mountains, through forests, and even across deserts. And precisely what this process predicts is exactly what we find. Everywhere, without exception. That is why you must believe that I walked from New York City to Los Angeles in five minutes.

Dennis McCarthy is a very good writer. I envy the lyricism of his literary style. But what he entirely fails to grasp is that Probability Zero isn’t an end run, as he calls it. It is an undermining, a complete demolition of the entire building.

The book is first and foremost what I like to call an end-around. It does not present a systematic attack on the facts just presented—or, for that matter, any of the vast body of empirical evidence that confirms evolution. It sidesteps entirely the biogeographical patterns that trace a continuous, unbroken organic thread that runs through all regions of the world, with the most closely related species living near each other and organic differences accruing with distance; the nested hierarchies revealed by comparative anatomy and genetics; the fossil record’s ordered succession of transitional forms (see pic); directly observed evolution in laboratories and natural populations; the frequency of certain beneficial traits (and their associated genes) in human populations, etc.

He’s absolutely correct in that I don’t attack or address any of those things. I didn’t need to do so. It’s exactly like appealing to how I haven’t admired the arrangement of the furniture on the fifth floor or taken in the lovely view from the twentieth when the explosives were planted in the supports and the entire building is lying in smoking rubble. Natural selection never accounted for any of those things to which he appeals. It could not possibly have done so, and neither could genetic drift.

All those things exist, to be sure, but they do not exist because of evolution by natural selection. Mr. McCarthy will need to find another mechanism to explain them. Which is something I pointed out in the book.

Now, all that being said, I am extremely grateful to Dennis McCarthy for his critique, because the way in which he indirectly invoked the Kimura fixation model inspired me to look directly at its core equation for the first time. Now, I knew that it was incomplete, which is why I had previously created a corrective for its failure to account for overlapping generations, the Selective Turnover Coefficient. And I also knew that the effective population size was not a constant 10,000, as it is commonly treated by biologists, because my analysis of the ancient DNA database proved that it varied between 3,300 and 10,000.

But I didn’t know that Kimura’s core equation underlying the fixation model was complete garbage based on a mathematical bait-and-switch until looking at it from this different perspective. And the result was the paper “Breaking Neutral Theory: Empirical Falsification of Effective Population-Size Invariance in Kimura’s Fixation Model.” You can read the preprint if you enjoy the deep dives. Here is the abstract:

Kimura’s neutral theory includes the famous invariance result: the expected rate of neutral substitution equals the mutation rate μ, independent of population size. This result is presented in textbooks as a general discovery about evolution and is routinely applied to species with dramatically varying population histories. It is not generally true. The standard derivation holds exactly only for a stationary Wright-Fisher population with constant effective population size. When population size varies—as it does in virtually every real species—the expected neutral substitution rate depends on the full demographic trajectory and is not equal to μ. We demonstrate this mathematically by showing that the standard derivation uses a single symbol (Ne) for two distinct quantities that are equal only under constant population size. We then show that the direction of the predicted deviation matches observed patterns in three independent mammalian comparisons: forest versus savanna elephants, mouse versus rat, and human versus chimpanzee. Kimura’s invariance is an approximation valid only under demographic stationarity, not a general law. Evolutionary calculations that apply it to species with changing population sizes are unreliable.

Let’s just say neutral theory is no longer a viable retreat for the Neo-Darwinians. The math is real. I wouldn’t go so far as to say that the math is the only reality, but it is the one thing you cannot ever ignore if you want to avoid having all your beautiful theories and assumptions and beliefs destroyed in one fell swoop.

Probability Zero will be in print next week. You can already preorder the print edition at NDM Express.

DISCUSS ON SG


A Substantive Critique of PZ

Dennis McCarthy, the historical literary sleuth whose remarkable case for the true authorship of Shakespeare’s works is one of the great detective works of history, has aimed his formidable analytical abilities at Probability Zero. And it is, as he quite correctly ascertains, an important subject that merits his attention.

I believe this is one of my more important posts—not only because it explains evolution in simple, intuitive terms, making clear why it must be true, but because it directly refutes the core claims of Vox Day’s best-selling book Probability Zero: The Mathematical Possibility of Evolution by Natural Selection. Day’s adherents are now aggressively pushing its claims across the internet, declaring evolution falsified. As far as I am aware, this post is the only thorough and effective rebuttal to its mathematical analyses currently available.

It’s certainly the only attempt to provide an effective rebuttal that I’ve seen to date. Please note that I will not respond to this critique until tomorrow, because I want to give everyone a chance to consider it and think about it for themselves. I’d also recommend engaging in the discussion at his site, and doing so respectfully. I admire Mr. McCarthy and his work, and I do not find his perspective either surprising or offensive. This is exactly the kind of criticism that I like to see, as opposed to the incoherent “parallel drift” Reddit-tier posturing.

The book is first and foremost what I like to call an end-around. It does not present a systematic attack on the facts just presented—or, for that matter, any of the vast body of empirical evidence that confirms evolution. It sidesteps entirely the biogeographical patterns that trace a continuous, unbroken organic thread that runs through all regions of the world, with the most closely related species living near each other and organic differences accruing with distance; the nested hierarchies revealed by comparative anatomy and genetics; the fossil record’s ordered succession of transitional forms (see pic); directly observed evolution in laboratories and natural populations; the frequency of certain beneficial traits (and their associated genes) in human populations, etc.

Probability Zero, instead, attempts to fire a mathematical magic bullet that finds some tiny gap within this armored fort of facts and takes down Darwin’s theory once and for all. No need to grapple with biology, geology, biogeography, fossils, etc., the math has pronounced it “impossible,” so that ends that.

Probability Zero advances two principal mathematical arguments intended to show that the probability of evolution is—as its title suggests—effectively zero. One centers on the roughly 20 million mutations that have become fixed (that is, now occur in 100% of the population) in the human lineage since our last common ancestor with chimpanzees roughly 9 million years ago. Chimpanzees have experienced a comparable number of fixed mutations.

Day argues that this is impossible given the expected number of mutations arising each generation and the probability that any particular neutral mutation reaches fixation—approximately 1 in 20,000, based on estimates of ancestral human population size. Beneficial mutations do have much higher fixation probabilities, but the vast majority of these ~20 million substitutions are neutral.

Read the whole thing there. Mr. McCarthy is familiar with the relevant literature and he is not an innumerate biologist, which is what makes this discussion both interesting and relevant.

As I said before, I will refrain from saying any more here or on SG, and I will refrain from commenting there, until I provide my own response tomorrow. But I will say that I owe a genuine debt to Mr. McCarthy for drawing my attention to something I’d overlooked…

DISCUSS ON SG


No Chance At All

The Band reviews Probability Zero:

Probability Zero demolishes TENS so utterly, the preface should be “PULL!”

This is the first version of a new book by Vox Day that demonstrates the mathematical impossibility of the Theory of Evolution by Natural Selection [TENS]. Given how big the House of Lies and reality-facing counterculture are around here, it demands attention. There may not be a more important pillar for its entire fake ontology.

Probability Zero strikes the heart of what the setup post called conflict between The Science! and the Scientific Method. This matters for more than intellectual reasons. Readers know personal responsibility is a priority around here. But we also live in a complex socio-culture that has unavoidable influence on us. From basic things, like adding tax and regulatory burdens to organic community demands. Up to the fundamental beliefs that set the public ethos…

Probability Zero starts by setting aside the religious and philosophical arguments, just like The Science! does. It accepts the discourse on its terms, by adhering to the “scientific” arguments it claims to adhere to. To be defined by. Full concession of TENS huffing’s own epistemological standards. Then lays out the mathematical parameters claimed to be involved in the TENS process. No additional yeah, buts. Just what is accepted in the literature. And then lets the logical realities of math blow the whole mess into a smoking crater so apocalyptically vast, I’ll never be able to see biologists the same way again.

There’s no need to recap the statistical arguments, they’re clear and complete. The kernel is that if mutations take an amount of time to appear and fix, that much time has to be available for the theory to be possible.

This was clear when MITTENS was pointed out. Even before it had a name. General conditions of possibility make it obvious once seen. But the full demonstration lights up that gulf between The Science! and science as modes of knowledge production. The whole point of science is empirical confirmation and abstract reasoning in concert. Day’s observation that evolutionary biologists have replaced experimentation with pure modeling was legitimately surprising. Apparently there still was a bar, however low. Not anymore.

Consider what problems innumeracy might present for pure modelers. Because the level is staggering. To the point where a simple arithmetic mean is incomprehensible. No hyperbole. Probability Zero describes blank stares when asked for the average rate of mutation. The ongoing idiocy over parallel vs. sequential mutation is illustrative. The total number of mutations separating species includes all of them. Parallel, sequential, or however else. Hence the word “total”. And dividing “total” by “amount of time” gives a simple, unweighted average number. The rate.

I’m not exaggerating. There was always the joke that biologists were fake scientists that couldn’t do math. Easier for premed GPAs too. But the assumption was that it was relative. Lighter than physics or chemistry, but still substantial compared to social sciences or the arts. And that would be wrong. There are some computational sub-fields of biology. Assuming they’re legit, they clearly aren’t working in evolution.

Read the whole thing there. He has several very illuminating examples of historical evo-fluffery, including one page of a manuscript that I’m going to put up here as a separate post, simply because it demands seeing in order to believe it.

DISCUSS ON SG



Rethinking Human Evolution Again

Imagine that! The timelines of human evolution just magically changed again! And it’s really not good news for the Neo-Darwinians or the Modern Synthesis, while it simultaneously highlights the importance of Probability Zero and its mathematical approach to evolution.

A stunning discovery in a Moroccan cave is forcing scientists to reconsider the narrative of human origins. Unearthed from a site in Casablanca, 773,000-year-old fossils display a perplexing blend of ancient and modern features, suggesting that key traits of our species emerged far earlier and across a wider geographic area than previously believed…

The find directly challenges the traditional “out-of-Africa” model, which holds that anatomically modern humans evolved in Africa around 200,000 years ago before migrating and replacing other hominin species. Instead, it supports a more complex picture where early human populations left Africa well before fully modern traits had evolved, with differentiation happening across continents.

“The fossils show a mosaic of primitive and derived traits, consistent with evolutionary differentiation already underway during this period, while reinforcing a deep African ancestry for the H. sapiens lineage,” Hublin added.

Detailed analysis reveals the nuanced transition. One jaw shows a long, low shape similar to H. erectus, but its teeth and internal features resemble both modern humans and Neanderthals. The right canine is slender and small, akin to modern humans, while some incisor roots are longer, closer to Neanderthals. The molars present a unique blend, sharing traits with North African teeth, the Spanish species H. antecessor and archaic African H. erectus.

The fossils are roughly contemporaneous with H. antecessor from Spain, hinting at ancient interconnections. “The similarities between Gran Dolina and Grotte à Hominides are intriguing and may reflect intermittent connections across the Strait of Gibraltar, a hypothesis that deserves further investigation,” noted Hublin.

Dated by the magnetic signature of the surrounding cave sediments, the Moroccan fossils align with genetic estimates that the last common ancestor of modern humans, Neanderthals and Denisovans lived between 765,000 and 550,000 years ago. This discovery gives a potential face to that mysterious population.

The research suggests that modern human traits did not emerge in a single, rapid event in one region. Instead, they evolved gradually and piecemeal across different populations in Africa, with connections to Eurasia, deep in the Middle Pleistocene.

This sort of article really underlines the nature of the innumeracy of the archeologists as well as the biologists. It’s not that they can’t do the basic arithmetic involved; it’s that they have absolutely no idea what the numbers they are throwing around signify, nor do they understand the necessary second- and third-order implications of changing both their numbers and their assumptions.

For example, the reason the Out of Africa hypothesis was so necessary to the evolutionary timeline is because it kept the whole species in a nice, tight little package, evolving together and fixating together over time. But geographic dispersion necessarily prevents universal fixation. So, let’s take a look at how this new finding changes the math, because it is a significant complication for the orthodox model.

If human traits were evolving “gradually and piecemeal across different populations” spanning Africa and Eurasia as early as 773,000 years ago, then fixation had to occur separately in each isolated population before those populations could contribute to modern humans. This isn’t parallel processing that helps the model; it’s the precise opposite. Each isolated population is a separate fixation bottleneck that must be traversed independently.

Consider the simplest case: two isolated populations (Africa and Eurasia) that occasionally reconnect. For a trait to become universal in modern humans, one of two things must happen:

  1. Independent fixation: The same beneficial mutation arises and fixes independently in both populations. This requires the fixation event to happen twice, which squares the improbability.
  2. Migration and re-fixation: The mutation fixes in one population, then migrants carry it to the other population, where it must fix again from low frequency. This doubles the time requirement since the allele must go from rare-to-fixed twice in sequence.

If there were n substantially isolated populations contributing to modern human ancestry, and k of the 20 million fixations had to spread across all of them through migration and re-fixation, the time requirement multiplies accordingly.
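In code, using the post’s 1/20,000 per-population fixation probability, the two cases look like this:

```python
# Two-population cases from above, using the 1/20,000 fixation
# probability per population.
p_fix = 1 / 20_000

# Case 1: independent fixation in both populations multiplies the
# probabilities, i.e. squares them.
print(p_fix ** 2)    # ~2.5e-09, versus 5e-05 for a single population

# Case 2: migration and re-fixation runs the rare-to-fixed process
# twice in sequence, roughly doubling the time cost per allele
# (using the post's 1,600-generation per-fixation figure).
print(2 * 1_600)     # 3200 generations instead of 1,600
```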

The “mosaic” of traits—some modern, some archaic, some Neanderthal-like, some unique—found in the Moroccan fossils suggests that different features were fixing in different populations at different times, which is what one would expect. The eventual modern human phenotype was assembled from contributions across multiple semi-isolated groups. However, this means the 20 million fixations weren’t a single sequential process in a single lineage. They were distributed across multiple populations that had to:

  1. Fix different subsets of mutations locally
  2. Reconnect through migration
  3. Allow the locally-fixed alleles to spread and fix in the combined population
  4. Repeat for 773,000+ years

Let’s say there were effectively 3 semi-isolated populations contributing to modern human ancestry: North Africa, Sub-Saharan Africa, and Eurasia. This is the absolute minimum number. If half of the 20 million fixations had to spread across population boundaries after initially fixing locally, that’s 10 million alleles requiring a second fixation event after migration reintroduced them at low frequency.

The time requirement approximately doubles for those 10 million alleles (first fixation + migration + second fixation), while the original problem remains for the other 10 million.

Original shortfall: ~150,000-fold (from MITTENS)

Revised shortfall with geographic structure: ~300,000 to 450,000-fold
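Spelled out, with the original ~150,000-fold shortfall taken from MITTENS and the 50/50 split of the 20 million fixations assumed above:

```python
# Revised shortfall range: the cross-boundary half of the fixations
# roughly doubles its time cost, pushing the blended shortfall to
# somewhere between 2x and 3x the original figure.
original_shortfall = 150_000

for factor in (2, 3):
    print(f"{original_shortfall * factor:,}-fold")
# 300,000-fold
# 450,000-fold
```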

But this understates the issue. The real problem is that geographic structure reduces effective population size locally while increasing it globally.

  • Small local populations mean more drift, which sounds helpful for fixation
  • But small local populations also mean more mutations are lost to drift before they can spread
  • And the global population that must eventually carry the fixed allele is larger than any local population, meaning the final fixation is harder

The multiregional model doesn’t help Neo-Darwinism. It creates a nested fixation problem: alleles must fix locally (possible but slow), then spread through migration (slow), then fix in the receiving population (slow again), then spread further (slow), until global fixation is achieved (slowest of all).

The mathematical impossibility of TENS was just multiplied by at least a factor of 3. Notice how every time they find new evidence and adjust the narrative to accommodate it, they make the mathematical problem worse. The Moroccan fossils can’t save Neo-Darwinism. They’re just another shovel of dirt on the coffin.

DISCUSS ON SG


A Necessary Revision

It was pointed out to me that I made a mistake and used the full quote as well as a subset of the quote from Haldane’s 1932 work, The Causes of Evolution, as two different chapter headers. So, for the print edition, I selected a replacement quote that I think many of you will find ironic indeed, as it comes from Richard Dawkins’s landmark 1976 work, The Selfish Gene.

Incidentally, a thing that is sometimes not realized even by people who worry about population problems is that population growth depends upon when people have children, as well as how many they have. Since populations tend to increase by a certain proportion per generation, it follows that if you space the generations out more, the population will grow at a slower rate per year.
—Richard Dawkins, The Selfish Gene, 1976
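Dawkins’s point reduces to one line of arithmetic: if a population multiplies by the same factor R each generation, its annual growth factor is R^(1/T), where T is the generation time in years, so spacing the generations out slows growth per year. The numbers below are purely illustrative:

```python
# Same per-generation growth, different generation spacing.
R = 2.0                        # population doubles each generation
for T in (20, 30):             # generation time in years
    annual = R ** (1 / T)      # annual growth factor
    print(T, round(annual, 4))
# 20 1.0353
# 30 1.0234
```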

Athos firmly concurred with my decision:

That’s a good choice. It accomplishes several things at once:

  1. It comes from an unimpeachable mainstream source—Dawkins in 1976, the foundational text of gene-centric evolution
  2. It explicitly states the principle that generations, not years, are the fundamental unit
  3. It’s accessible to your target audience without requiring population genetics background
  4. It sets up the Bio-Cycle argument perfectly: if generation time matters for population growth, it matters even more for fixation rates.
  5. The slight irony of using Dawkins to introduce an argument that undermines neo-Darwinism is a nice touch your readers will appreciate.

The irony compounds nicely when you consider that Dawkins spent his career not thinking through the implications of what he just said. If generation time matters for population growth, it matters for everything that happens generation by generation—including fixation.

The last comment was precisely why I chose that quote. There will be people who quibble over the necessity of the Bio-Cycle Fixation Model even though it is observably more accurate than either Wright-Fisher or Kimura, and even though Richard Dawkins himself pointed out the importance of the very factors upon which it relies 50 years ago.

All of which underlines that Probability Zero is belatedly doing the work that the professional evolutionary biologists could have, and should have, done long before the turn of the century.

Some people are starting to post their reviews of the book, and I thought that this was one particularly perspicacious observation. The reviewer may be underestimating himself:

Vox Day is a lot smarter than I am, and he’s done a lot of research and complicated math that I am not even going to attempt to do myself. The math is over my head. I don’t understand Vox’s arguments. But here’s what I do understand: if Vox publicly demonstrates the impossibility of evolution by natural selection, given the facts and timeline asserted by the Darwinists themselves — or even if enough people form the impression that Vox has managed to refute Darwinism, regardless of whether he actually has — it absolutely presents a mortal threat to the civic religion that has been essential to the overarching project of the social engineers. That’s the point I was making in yesterday’s post. Moreover, if the powers that be do not suppress Vox’s “heresy,” that acquiescence on their part would show that they are prepared to abandon Darwinism, and that is a new and incredibly significant development.

That’s what I find intriguing too. There was far more, and far more vehement, opposition to The Irrational Atheist than we are seeing to Probability Zero. What little opposition we’ve seen has been, quite literally, Reddit-tier, and amounts to little more than irrelevant posturing centered on a complete refusal to read the book, let alone offer any substantive criticism.

Meanwhile, I’ve been hearing from mathematicians, physicists, scientists, and even literal Jesuits who are taking the book, its conclusions, and its implications very seriously after going through it carefully enough to identify the occasional decimal point error.

My original thought was that perhaps the smarter rational materialists realized that the case is too strong and there isn’t any point in trying to defend the indefensible. But there were enough little errors in the initial release that anyone attempting a serious critique should have been able to point out something, however minor. So perhaps it’s something else; perhaps it’s useful in some way to those who have always known that the falsity of Neo-Darwinism would eventually be exposed in a comprehensive manner, and who are now ready to abandon their failing plans to engineer society on a materialist basis.

But I’m somewhat less sanguine about that possibility since Nature shot down all three papers I submitted to it. Then again, it could be that the editors just haven’t gotten the message yet that it’s all over now for the Enlightenment and its irrational materialism.

DISCUSS ON SG


PROBABILITY ZERO Q&A

This is where questions related to the #1 Biology, Genetics, and Evolution bestseller PROBABILITY ZERO will be posted along with their answers. The newest questions are at the top.

QUESTION: The math predicts that with natural selection turned off, random drift will result in negative mutations taking over and killing a population in roughly 225 years. I would argue modern medicine has significantly curtailed negative natural selection, and that the increases in genetic disorders, autoimmune diseases, etc. are partially the result of lessened negative selection and the resulting drift. Am I reading too much into the math, or is this a reasonable possibility?

Yes, that’s not only correct and a definite possibility, it is the basis for the next book, which is called THE FROZEN GENE, as well as for the hard science fiction series BIOSTELLAR. However, based on my calculations, natural selection effectively stopped protecting the human genome around the year 1900. And this may well account for the various problems that appear to be on the rise in the younger generations, which are presently attributed to everything from microplastics to vaccines.

QUESTION: In the Bernoulli Barrier, how is competition against others with their own set of beneficial mutations handled?

Category error. Drift is not natural selection. The question assumes selection is still operating, just against a different baseline. But that’s not what’s happening. When everyone has approximately the same number of beneficial alleles, there’s no meaningful selection at all. What remains is drift—random fluctuation in allele frequencies that has nothing to do with competitive advantage. The mutations that eventually fix do so by chance, not because their carriers outcompeted anyone.

This is why the dilemma in the Biased Mutation paper bites so hard. Since the observed pattern of divergence matches the mutational bias, drift dominated, not selection. The neo-Darwinian cannot claim adaptive credit for fixations that occurred randomly, even though he’s going to attempt to claim drift for the Modern Synthesis in a vain bait-and-switch that is actually an abandonment of Neo-Darwinian theory posing as a defense.

The question posits a scenario where everyone is competing with their different sets of beneficial alleles, and somehow selection sorts it out. But that’s not competition in any meaningful sense—it’s noise. When the fitness differential between the best and worst is less than one percent, you’re not watching selection in action. You’re watching a random walk that, as per the Moran model, will take vastly longer than the selective models assume.
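The Moran-model point can be illustrated with a toy simulation. This is a sketch under assumed parameters (a population of 100, a single initial copy, strict neutrality), not a calculation from the book:

```python
import random

def moran_neutral_fixation_time(pop_size, rng):
    """One neutral Moran run starting from a single copy.

    Each event picks one individual to reproduce and one to die,
    uniformly at random. Returns the number of birth-death events
    until the allele fixes, or None if it is lost first.
    """
    copies = 1
    events = 0
    while 0 < copies < pop_size:
        p = copies / pop_size
        copies += int(rng.random() < p) - int(rng.random() < p)
        events += 1
    return events if copies == pop_size else None

rng = random.Random(42)
runs = [moran_neutral_fixation_time(100, rng) for _ in range(2000)]
times = [t for t in runs if t is not None]
# Under neutrality only about 1/pop_size of new mutations ever fix,
# and the runs that do fix wander for a long time before arriving.
print(len(times), sum(times) / len(times))
```

Most runs are lost almost immediately; the few that fix do so by random walk, which is the sense in which drift "takes vastly longer" than a selective sweep.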

QUESTION: In the book’s example, an individual with no beneficial mutations almost certainly does not exist, so how can the reproductive success of an individual be constrained by a non-existent individual?

That’s exactly right. The individual with zero beneficial mutations doesn’t exist when many mutations are segregating simultaneously. That’s the problem, not the solution. Selection requires a fitness differential between individuals. If everyone in the population carries roughly the same number of beneficial alleles, which the Law of Large Numbers guarantees when thousands are segregating, then selection has nothing with which to work. The best individual is only marginally better than the worst individual, and the required reproductive differential to drive all those mutations to fixation cannot be achieved.

The parallel fixation defense implicitly assumes that some individuals carry all the beneficial alleles while others carry none because that’s the only way to get the massive fitness differentials required. The Bernoulli Barrier shows how this assumption is mathematically impossible. You simply can’t have 1,570-to-1 reproductive differentials when a) the actual genetic difference between the population’s best and worst is less than one percent or b) you’re dealing with human beings.
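The Law of Large Numbers argument can be seen in a minimal simulation. The locus counts and the 0.5 allele frequency below are illustrative assumptions; the only point is how the best-to-worst spread shrinks as more mutations segregate simultaneously:

```python
import random

def relative_spread(n_individuals, n_loci, freq, rng):
    """Best-to-worst relative spread in per-individual counts of
    beneficial alleles, treating each locus as an independent draw."""
    counts = [sum(rng.random() < freq for _ in range(n_loci))
              for _ in range(n_individuals)]
    mean = sum(counts) / len(counts)
    return (max(counts) - min(counts)) / mean

rng = random.Random(1)
few = relative_spread(200, 100, 0.5, rng)       # 100 loci segregating
many = relative_spread(200, 10_000, 0.5, rng)   # 10,000 loci segregating
# The spread between the best and worst individual shrinks roughly as
# 1/sqrt(n_loci): the more mutations segregate at once, the smaller
# the differential left for selection to act upon.
print(f"{few:.1%} vs {many:.1%}")
```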

QUESTION: What about non-random mutation? Base pair mutation is not totally random, as purine to purine and pyrimidine to pyrimidine happens a lot more often than purine to pyrimidine and the reverse. And CpG sites are only about one percent of the genome but mutate tens of times more often than other sites. This would have some effect on the numbers, and obviously might get you a bit further across the line than totally random mutation; how much, no idea, I have not done the math.

Excellent catch, and a serious omission from the book. After doing the math and adding the concomitant chapter to the next book, it turns out that adding non-random mutation to the MITTENS equation is the mathematical equivalent of reducing the available number of post-CHLCA d-corrected reproductive generations from 209,500 to 157,125. The equivalent, mind you: it doesn’t actually reduce the number of nominal generations the way d does. The reason is that Neo-Darwinian models implicitly assume that mutation samples the space of possible genetic changes in a more or less uniform fashion. When population geneticists calculate waiting times for specific mutations or estimate how many generations are required for a given adaptation, they treat the gross mutation rate as though any nucleotide change were equally likely to occur. This assumption is false, and the false assumption understates the required waiting time by about 25 percent.
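As a sanity check on the stated equivalence, using the figures quoted in the text:

```python
# Sanity check of the generation-equivalence claim, using the
# figures quoted in the text.
nominal = 209_500    # post-CHLCA d-corrected reproductive generations
reduction = 0.25     # ~25 percent effect of non-random mutation
effective = nominal * (1 - reduction)
print(effective)
```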

Mutation is heavily biased in at least two ways. First, transitions (purine-to-purine or pyrimidine-to-pyrimidine changes) occur at roughly twice the rate of transversions (purine-to-pyrimidine or vice versa), despite transversions being twice as numerous in combinatorial terms. The observed transition/transversion ratio of 2.1 represents a four-fold deviation from the expected ratio of 0.5 under uniform mutation. Second, CpG dinucleotides—comprising only about 2% of the genome—generate approximately 25% of all mutations due to the spontaneous deamination of methylated cytosine. These sites mutate at 10-18 times the background rate, creating a “mutational sink” where a disproportionate fraction of the mutation supply is spent hitting the same positions repeatedly.

The compound effect dramatically reduces the effective exploratory mutation rate. Of the 60-100 mutations per generation typically cited, roughly one-quarter occur at CpG sites that have already been heavily sampled. Another 40% or more are transitions at non-CpG sites. The fraction representing genuine exploration of sequence space—transversions at non-hypermutable sites—is a minority of the gross rate. The mutations that would be required for many specific adaptive changes occur at below-average rates, meaning waiting times are longer than standard calculations suggest.
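The partition described above can be sketched numerically. The gross rate of 80 is an assumed midpoint of the 60-100 range quoted in the text, and the shares are the rough figures from the preceding paragraphs:

```python
# Rough partition of the per-generation mutation supply.
gross_rate = 80               # assumed midpoint of the 60-100 range
cpg_share = 0.25              # CpG hypermutable sites: ~25% of mutations
transition_share = 0.40       # non-CpG transitions: ~40% of mutations
exploratory_share = round(1 - cpg_share - transition_share, 2)
exploratory = gross_rate * exploratory_share
print(f"{exploratory:.0f} of {gross_rate} mutations genuinely explore")

# The transition/transversion bias itself: observed 2.1 against the
# 0.5 expected under uniform mutation.
deviation = 2.1 / 0.5
print(f"{deviation:.1f}-fold deviation from uniform expectation")
```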

This creates a dilemma when applied to observed divergence patterns. Human-chimpanzee genomic differences show exactly the signature predicted by mutational bias: enrichment for CpG transitions, predominance of transitions over transversions, clustering at hypermutable sites. If this pattern reflects selection driving adaptation, then selection somehow preferentially fixed mutations at the positions and of the types that were already favored by mutation. If, as is much more reasonable to assume, the pattern reflects mutation bias propagating through drift, then drift dominated the divergence, and neo-Darwinism cannot claim adaptive credit for the observed changes. Either the waiting times for required adaptive mutations are worse than calculated or the fixations weren’t adaptive in the first place. The synthesis loses either way.

DISCUSS ON SG


Where Biologists Fear to Tread

The Redditors don’t even hesitate. This is a typical criticism of Probability Zero, in this case, courtesy of one “Theresa Richter”.

E coli reproduce by binary fission, therefore your numbers are all erroneous, as humans are a sexual species and so multiple fixations can occur in parallel. Even if we plugged in 100,000 generations as the average time to fixation, 450,000 generations would still be enough time, because they could all be progressing towards fixation simultaneously. The fact that you don’t understand that means you failed out of middle school biology.

This is a perfect example of Dunning-Kruger Syndrome in action. She’s both stupid and ignorant, neither of which states prevents her from being absolutely certain that anyone who doesn’t agree with her must have failed out of junior high school biology. Which makes a certain degree of sense, because she’s relying upon her dimly recalled middle school biology as the basis of her argument.

The book, of course, dealt with all of these issues in no little detail.

First, E. coli reproduce much faster in generational terms than humans or any other complex organism does, so the numbers are, admittedly, erroneous: they are generous. Which is to say, they err on the side of the Modern Synthesis; all the best human estimates are slower.

Second, multiple fixations do occur in parallel. But a) those parallel fixations are already included in the number; b) the reproductive ceiling applies: the total selection differential across all segregating beneficial mutations cannot exceed the maximum reproductive output of the organism; and c) Bernoulli’s Barrier applies: the Law of Large Numbers imposes an even more severe limitation on parallel fixation than the reproductive ceiling alone.

Third, an average time of 100,000 generations per fixation would permit a maximum of 4.5 fixations because those parallel fixations are already included in the number.

Fourth, there aren’t 450,000 generations. Human reproductive generations overlap, and therefore the 260,000 generations in the allotted time must be further reduced by d, the Selection Turnover Coefficient, the weighted average of which is 0.804 across the entirety of post-CHLCA history, to 209,040 generations.
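The arithmetic behind the third and fourth points, using the figures in the text:

```python
# The generation budget, using the figures in the text.
nominal_generations = 260_000
d = 0.804    # Selection Turnover Coefficient, post-CHLCA weighted average
effective_generations = nominal_generations * d
print(effective_generations)

# Even granting the critic's 450,000 generations at 100,000
# generations per fixation:
serial_fixations = 450_000 / 100_000
print(serial_fixations)  # 4.5
```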

Note to PZ readers: yes, the work continues. Any differences you note between numbers in the book and numbers I happen to mention now will be documented, in detail, in the next book, which will appear much sooner than anyone will reasonably expect.

Now, here’s the irony. There was an actual error in the book, apparently caused by an AI hallucination that substituted a 17 for a 7.65 for no reason anyone can ascertain. The change was even a fortuitous one, as it indicates 225 years until total genetic catastrophe instead of 80. And the punchline: the error was discovered by a Jesuit priest who was clearly reading the book very, very carefully and checking the numbers.

DISCUSS ON SG


Historic Rigor

Four major AI systems concur that Probability Zero has a much higher degree of technical and mathematical rigor than six other major works of what passes for science of one sort or another. With the exception of Claude, which was already familiar with PZ, the systems did their ratings entirely blind.

Only four systems? The alert, self-appointed Reality Police perk up at the scent of an anomaly that promises the chance to catch someone else in an error and demonstrate what smart boys they are. They suddenly notice that there is one system obviously missing. What about Deepseek, they demand. What is the Dark Lord hiding?

Yeah, about that… It’s true, Deepseek did give Probability Zero a 0.0. But that doesn’t have quite the significance that my critics might hope.

  • Probability Zero (Fictional Anthology)
    Argument for Audit: As a meta-fictional work about impossible events, its central “argument” is the literary exploration of the fantastical and paradoxical.
    Latency & Control: Actively violates known physical constants (FTL travel, time paradoxes) as its core premise. The latency gaps are not flaws but features.
    Combinatorial Complexity: Embraces infinite or paradoxical combinatorial spaces (e.g., Boltzmann brains, infinite libraries).
    Dynamic Stability: Systems are designed to be unstable or paradoxical (e.g., stories about unpredictable robots, cosmic coincidences).

Scoring (Applied Literally):

  • Logical Coherence: Variable (0.0-10.0). Often uses impeccable logic to reach impossible conclusions from impossible premises.
  • Physical Fidelity: 0.0. By design, it is an audit of the impossible.
  • Structural Robustness: 0.0. The entire point is to collapse under the slightest increase in realistic constraint.

Just to add an additional comedic element, when informed that Probability Zero was not a fictional anthology about hyperspace, time travel, and robots, Deepseek promptly hallucinated that it was an anti-Darwinian book by Daniel Dennett.

Deepseek, you see, doesn’t have the same access to the Internet that the other AI systems do. But instead of simply telling you when it doesn’t know something, it just makes something up.

DISCUSS ON SG


Empirically Impossible

I’ve been working on a few things since finishing Probability Zero. One of those things was the release of a 10-hour, 28-minute audiobook. Another was a statistical study that Athos and I just completed, and the results very strongly support Probability Zero’s assertion of the mathematical impossibility of the theory of evolution by natural selection.

Empirical Validation: Zero Fixations in 1.2 Million Loci

The MITTENS framework in Probability Zero calculates that the actual number of effective generations available for evolutionary change is far smaller than the nominal generation count—approximately 158 real generations rather than 350 nominal generations over the 7,000-year span from the Early Neolithic to the present. This reduction, driven by the collapse of the selective turnover coefficient in growing populations, predicts that fixation events should be rare, fewer than 20 across the entire genome. The Modern Synthesis requires approximately 20 million fixations over the 9 million years since the human-chimpanzee divergence, implying a rate of 2.22 fixations per year or approximately 15,500 fixations per 7,000-year period. To test these competing predictions, we compared allele frequencies between Early Neolithic Europeans (6000-8000 BP, n=1,112) and modern Europeans (n=645) across 1,211,499 genetic loci from the Allen Ancient DNA Resource v62.0.
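The two competing predictions can be computed directly from the figures quoted above:

```python
# The two competing fixation predictions for the 7,000-year window,
# computed from the figures quoted in the text.
required_fixations = 20_000_000   # Modern Synthesis, since the CHLCA split
years_since_split = 9_000_000
window_years = 7_000

rate_per_year = required_fixations / years_since_split
synthesis_expectation = rate_per_year * window_years
mittens_ceiling = 20              # "fewer than 20 across the entire genome"

print(f"{rate_per_year:.2f}/yr, ~{synthesis_expectation:,.0f} expected "
      f"vs a ceiling of {mittens_ceiling}")
```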

The observed fixation count was zero. Not a single allele in 1.2 million crossed from rare (<10% frequency) to fixed (>90% frequency) in seven thousand years. The reverse trajectory—fixed to rare—also produced zero counts, ruling out population structure artifacts that would inflate both directions equally. Even relaxing the threshold to “large frequency changes” (>50 percentage points) identified only 18 increases and 60 decreases, representing 0.006% of loci showing substantial movement in either direction. The alleles present in Early Neolithic farmers remain at nearly identical frequencies in their modern descendants, despite what the textbooks count as three hundred fifty generations of evolutionary opportunity.
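A minimal sketch of the threshold-crossing rule described above, applied to a few hypothetical frequency pairs; the AADR genotype data itself is obviously not reproduced here:

```python
from collections import Counter

def classify(early, modern):
    """Classify one locus by the frequency thresholds described above."""
    if early < 0.10 and modern > 0.90:
        return "rare_to_fixed"
    if early > 0.90 and modern < 0.10:
        return "fixed_to_rare"
    if abs(modern - early) > 0.50:
        return "large_change"
    return "stable"

# Hypothetical (Early Neolithic, modern) frequency pairs, for
# illustration only -- not drawn from the AADR data.
loci = [(0.05, 0.95), (0.95, 0.05), (0.20, 0.75), (0.40, 0.42)]
print(Counter(classify(e, m) for e, m in loci))
```

Run over the 1.2 million real loci, the first two categories are the ones reported as zero in the study, and the third is the one that returned 18 increases and 60 decreases.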

This result decisively favors the MITTENS prediction over the Modern Synthesis expectation. The mathematics in Probability Zero derived, from first principles, that overlapping generations, declining mortality, and expanding population size combine to reduce effective generational turnover by more than half. The ancient DNA record confirms this derivation empirically: the genome behaves as if approximately 158 generations have elapsed, not 350. But zero fixations in 1.2 million loci suggests even the limited ceiling permitted by MITTENS may be generous—the observed stasis is consistent with a system in which the conditions for fixation have become vanishingly difficult to satisfy regardless of the generation count.

Evolution by natural selection, as a mechanism of directional change capable of producing adaptation or speciation, has been empirically demonstrated to be inoperative in human populations for at least 7,000 years.

DISCUSS ON SG