PROBABILITY ZERO Q&A

This is where questions related to the #1 Biology, Genetics, and Evolution bestseller PROBABILITY ZERO will be posted along with their answers. The newest questions are at the top.

QUESTION: Human beings aren’t bacteria. And humans mutate faster than bacteria. So isn’t it possible that humans fix mutations faster than one fixation per 1,600 generations?

While the logic strongly dictates the answer is no, and obviously there is no way to observe mutational fixation in humans across hundreds of generations, we can utilize the existing scientific consensus to derive a best possible estimate for the rate of fixations in humans as measured in generations. Here are the parameters required to calculate the number of human generations per fixation.

  • N_e = 10,000 (standard human effective population size)
  • T = 22 years (generation time, Gurven & Kaplan 2007)
  • L = 51 years (lifespan based on Coale-Demeny-West life tables)
  • s = 0.001 (selection coefficient, Zeng et al 2021)

t ≈ 19,800 generations per fixation
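The answer above does not show the intermediate algebra, but the parameters listed reproduce the stated figure via Kimura’s standard diffusion approximation for the mean conditional fixation time of a weakly selected allele, t ≈ (2/s)·ln(2N_e). The choice of formula is my reconstruction, not necessarily the exact derivation used:

```python
import math

# Kimura's diffusion approximation for the mean time to fixation
# (conditional on fixing) of a weakly beneficial allele:
#   t ~= (2 / s) * ln(2 * N_e) generations
# Parameter values are those listed above; the formula choice is an
# assumption on my part, since the derivation is not shown.
N_e = 10_000   # effective population size
s   = 0.001    # selection coefficient

t = (2 / s) * math.log(2 * N_e)
print(f"{t:,.0f} generations per fixation")  # prints "19,807 generations per fixation"
```

Note that L = 51 (lifespan) does not enter this particular approximation; presumably it feeds the generational-overlap correction d discussed elsewhere on this page.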

It probably won’t escape your attention that 19,800 > 1,600. So using the 1,600 generations rate was extremely generous to the Modern Synthesis model. And the answer is no, definitely not.

QUESTION: Interesting equation d = T × [∫μ(x)l(x)v(x)dx / ∫l(x)v(x)dx]. I’m pretty sure all the denominator does is cancel l(x)v(x) and make l(x)v(x) ≠ 0. Which is to say d = T × ∫μ(x)dx unless the functions are special somehow for l(x) and v(x).

The functions l(x) and v(x) are special. They’re not constants that can be factored out and cancelled.

  • l(x) (survivorship) is a decreasing function. It starts at 1 (everyone alive at birth) and declines toward 0 as age increases. In humans, it stays high through reproductive years then drops off.
  • v(x) (reproductive value) is a hump-shaped function. It starts low (children can’t reproduce), peaks in early reproductive years, then declines as remaining reproductive potential diminishes.
  • The product l(x)v(x) weights each age by “probability of being alive at that age × reproductive contribution from that age forward.” This weighting is highly non-uniform. A 25-year-old contributes far more to the integral than a 5-year-old or a 60-year-old.

If l(x) and v(x) were constants, they’d cancel and you’d get d = T × ∫μ(x)dx. But they’re not constants, they’re age-dependent functions that capture the demographic structure of the population.
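A quick numerical sketch makes this concrete. With toy, purely hypothetical shapes for l(x), v(x), and μ(x) (my choices, not the book’s), the l(x)v(x)-weighted average of μ comes out noticeably different from the unweighted average, which is exactly why the denominator cannot simply be cancelled:

```python
import math

# Toy demographic functions (hypothetical shapes, chosen only to
# illustrate the weighting; not the book's actual parameterization).
def l(x):   # survivorship: ~1 early, dropping off around age 60
    return 1.0 / (1.0 + math.exp((x - 60.0) / 5.0))

def v(x):   # reproductive value: hump peaking around age 25
    return math.exp(-((x - 25.0) ** 2) / (2 * 10.0 ** 2))

def mu(x):  # per-age mutation rate, rising with parental age
    return 1.0 + 0.05 * x

# Riemann-sum the integrals over ages 0..80
dx = 0.01
xs = [i * dx for i in range(int(80 / dx))]
num = sum(mu(x) * l(x) * v(x) * dx for x in xs)
den = sum(l(x) * v(x) * dx for x in xs)
weighted   = num / den                         # l*v-weighted mean of mu
unweighted = sum(mu(x) * dx for x in xs) / 80  # plain mean of mu

print(weighted, unweighted)  # the weighted average is noticeably lower
```

The weighting pulls the average toward μ at the peak reproductive ages, instead of treating a 60-year-old’s mutation rate as equal in importance to a 25-year-old’s.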

QUESTION: The math predicts that random drift with natural selection turned off will result in negative mutations taking over and killing a population in roughly 225 years. I would argue modern medicine has significantly curtailed negative natural selection, and that the increases in genetic disorders, autoimmune diseases, etc. are partially the result of lessened negative selection and the resulting drift. Am I reading too much into the math, or is this a reasonable possibility?

Yes, that is not only correct and a definite possibility, it is the basis for the next book, THE FROZEN GENE, as well as for the hard science fiction series BIOSTELLAR. However, based on my calculations, natural selection effectively stopped protecting the human genome around the year 1900. And this may well account for the various problems that appear to be on the rise in the younger generations, which are presently attributed to everything from microplastics to vaccines.

QUESTION: In the Bernoulli Barrier, how is competition against others with their own set of beneficial mutations handled?

Category error. Drift is not natural selection. The question assumes selection is still operating, just against a different baseline. But that’s not what’s happening. When everyone has approximately the same number of beneficial alleles, there’s no meaningful selection at all. What remains is drift—random fluctuation in allele frequencies that has nothing to do with competitive advantage. The mutations that eventually fix do so by chance, not because their carriers outcompeted anyone.

This is why the dilemma in the Biased Mutation paper bites so hard. If the observed pattern of divergence matches the mutational bias, then drift dominated, not selection. The neo-Darwinian cannot claim adaptive credit for fixations that occurred randomly, even though he’s going to attempt to claim drift for the Modern Synthesis in a vain bait-and-switch that is actually an abandonment of Neo-Darwinian theory posing as a defense.

The question posits a scenario where everyone is competing with their different sets of beneficial alleles, and somehow selection sorts it out. But that’s not competition in any meaningful sense—it’s noise. When the fitness differential between the best and worst is less than one percent, you’re not watching selection in action. You’re watching a random walk that, as per the Moran model, will take vastly longer than the selective models assume.
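For a sense of scale on how slow drift is: the classic diffusion result of Kimura and Ohta gives the mean time to fixation of a new neutral mutation, conditional on it fixing, as approximately 4N_e generations. With the effective population size used elsewhere on this page:

```python
# Kimura & Ohta's classic diffusion result: a new neutral mutation
# that is destined to fix takes ~4 * N_e generations on average.
N_e = 10_000
t_neutral = 4 * N_e
print(t_neutral)  # 40000 generations per neutral fixation
```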

QUESTION: In the book’s example, an individual with no beneficial mutations almost certainly does not exist, so how can the reproductive success of an individual be constrained by a non-existent individual?

That’s exactly right. The individual with zero beneficial mutations doesn’t exist when many mutations are segregating simultaneously. That’s the problem, not the solution. Selection requires a fitness differential between individuals. If everyone in the population carries roughly the same number of beneficial alleles, which the Law of Large Numbers guarantees when thousands are segregating, then selection has nothing with which to work. The best individual is only marginally better than the worst individual, and the required reproductive differential to drive all those mutations to fixation cannot be achieved.

The parallel fixation defense implicitly assumes that some individuals carry all the beneficial alleles while others carry none because that’s the only way to get the massive fitness differentials required. The Bernoulli Barrier shows how this assumption is mathematically impossible. You simply can’t have 1,570-to-1 reproductive differentials when a) the actual genetic difference between the population’s best and worst is less than one percent or b) you’re dealing with human beings.
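The concentration claimed here is easy to illustrate. The sketch below uses my own toy parameters and a normal (CLT) approximation to the per-individual count of beneficial alleles; the point is only that the best-to-worst spread is a small fraction of the mean, exactly as the Law of Large Numbers predicts:

```python
import random

# Toy illustration of the concentration argument: if each of n_loci
# segregating beneficial alleles is carried with probability p, the
# per-individual count is approximately Normal(n*p, sqrt(n*p*(1-p)))
# by the central limit theorem. All parameters here are hypothetical,
# chosen only for illustration.
rng = random.Random(42)
n_loci, p, pop = 10_000, 0.5, 10_000
mean = n_loci * p                      # 5,000 expected alleles each
sd   = (n_loci * p * (1 - p)) ** 0.5   # standard deviation of 50

counts = [rng.gauss(mean, sd) for _ in range(pop)]
spread = (max(counts) - min(counts)) / mean
print(f"best-vs-worst spread: {spread:.1%} of the mean")
```

Even across ten thousand individuals, the gap between the luckiest and unluckiest carrier amounts to a few percent of the mean count, not the enormous differentials a fast parallel-fixation scenario would require.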

QUESTION: What about non-random mutation? Base pair mutation is not totally random, as purine-to-purine and pyrimidine-to-pyrimidine changes happen a lot more often than purine-to-pyrimidine and the reverse. And CpG sites are only about one percent of the genome but mutate tens of times more often than other sites. This would have some effect on the numbers, and obviously might get you a bit further across the line than totally random mutation. How much, no idea; I have not done the math.

Excellent catch and a serious omission from the book. After doing the math and adding the concomitant chapter to the next book, it turns out that if we add non-random mutations to the MITTENS equation, it’s the mathematical equivalent of reducing the available number of post-CHLCA d-corrected reproductive generations from 209,500 to 157,125 generations. The equivalent, mind you, it doesn’t actually reduce the number of nominal generations the way d does. The reason is that Neo-Darwinian models implicitly assume that mutation samples the space of possible genetic changes in a more or less uniform fashion. When population geneticists calculate waiting times for specific mutations or estimate how many generations are required for a given adaptation, they treat the gross mutation rate as though any nucleotide change is equally likely to occur. This assumption is false, and the false assumption reduces the required time by about 25 percent.

Mutation is heavily biased in at least two ways. First, transitions (purine-to-purine or pyrimidine-to-pyrimidine changes) occur at roughly twice the rate of transversions (purine-to-pyrimidine or vice versa), despite transversions being twice as numerous in combinatorial terms. The observed transition/transversion ratio of 2.1 represents a four-fold deviation from the expected ratio of 0.5 under uniform mutation. Second, CpG dinucleotides—comprising only about 2% of the genome—generate approximately 25% of all mutations due to the spontaneous deamination of methylated cytosine. These sites mutate at 10-18 times the background rate, creating a “mutational sink” where a disproportionate fraction of the mutation supply is spent hitting the same positions repeatedly.

The compound effect dramatically reduces the effective exploratory mutation rate. Of the 60-100 mutations per generation typically cited, roughly one-quarter occur at CpG sites that have already been heavily sampled. Another 40% or more are transitions at non-CpG sites. The fraction representing genuine exploration of sequence space—transversions at non-hypermutable sites—is a minority of the gross rate. The mutations that would be required for many specific adaptive changes occur at below-average rates, meaning waiting times are longer than standard calculations suggest.

This creates a dilemma when applied to observed divergence patterns. Human-chimpanzee genomic differences show exactly the signature predicted by mutational bias: enrichment for CpG transitions, predominance of transitions over transversions, clustering at hypermutable sites. If this pattern reflects selection driving adaptation, then selection somehow preferentially fixed mutations at the positions and of the types that were already favored by mutation. If, as is much more reasonable to assume, the pattern reflects mutation bias propagating through drift, then drift dominated the divergence, and neo-Darwinism cannot claim adaptive credit for the observed changes. Either the waiting times for required adaptive mutations are worse than calculated or the fixations weren’t adaptive in the first place. The synthesis loses either way.
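The specific figures quoted in this answer are internally consistent and easy to verify: a 25 percent effective reduction applied to 209,500 generations yields the 157,125 figure, and the observed transition/transversion ratio of 2.1 against the uniform-mutation expectation of 0.5 is the roughly four-fold deviation cited:

```python
# Arithmetic behind the figures quoted above.
generations = 209_500
effective = generations * (1 - 0.25)   # 25% effective reduction
print(effective)                        # 157125.0

ts_tv_observed = 2.1    # observed transition/transversion ratio
ts_tv_uniform  = 0.5    # expected under uniform mutation
print(ts_tv_observed / ts_tv_uniform)   # 4.2-fold deviation
```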

DISCUSS ON SG


Where Biologists Fear to Tread

The Redditors don’t even hesitate. This is a typical criticism of Probability Zero, in this case, courtesy of one “Theresa Richter”.

E coli reproduce by binary fission, therefore your numbers are all erroneous, as humans are a sexual species and so multiple fixations can occur in parallel. Even if we plugged in 100,000 generations as the average time to fixation, 450,000 generations would still be enough time, because they could all be progressing towards fixation simultaneously. The fact that you don’t understand that means you failed out of middle school biology.

This is a perfect example of the Dunning-Kruger effect in action. She’s both stupid and ignorant, neither of which prevents her from being absolutely certain that anyone who doesn’t agree with her must have failed out of junior high school biology. Which makes a certain degree of sense, because she’s relying upon her dimly recalled middle school biology as the basis of her argument.

The book, of course, dealt with all of these issues in no little detail.

First, E. coli reproduce much faster in generational terms than humans or any other complex organisms do, so to the extent the numbers are erroneous, they are generous. Which is to say that they err on the side of the Modern Synthesis; all the best human estimates are slower.

Second, multiple fixations do occur in parallel. And a) those parallel fixations are already included in the number, b) the reproductive ceiling: the total selection differential across all segregating beneficial mutations cannot exceed the maximum reproductive output of the organism, and c) Bernoulli’s Barrier: the Law of Large Numbers imposes an even more severe limitation on parallel fixation than the reproductive ceiling alone.

Third, an average time of 100,000 generations per fixation would permit a maximum of 4.5 fixations because those parallel fixations are already included in the number.

Fourth, there aren’t 450,000 generations. Because human reproductive generations overlap, the 260,000 generations in the allotted time must be further reduced by d, the Selection Turnover Coefficient, the weighted average of which is 0.804 across the entirety of post-CHLCA history, to 209,040 generations.
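The arithmetic behind points three and four checks out and can be verified in a couple of lines:

```python
# Point three: 450,000 generations at 100,000 generations per fixation
max_fixations = 450_000 / 100_000
print(max_fixations)                 # 4.5 fixations maximum

# Point four: d-corrected generation count (d = 0.804, as stated above)
effective_gens = round(260_000 * 0.804)
print(effective_gens)                # 209040 effective generations
```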

Note to PZ readers: yes, the work continues. Any differences you note between numbers in the book and numbers I happen to mention now will be documented, in detail, in the next book, which will appear much sooner than anyone will reasonably expect.

Now, here’s the irony. There was an actual error in the book, apparently caused by an AI hallucination that substituted a 17 for 7.65 for no reason that anyone can ascertain. The correction was even a fortuitous one, as it indicates 225 years until total genetic catastrophe instead of 80. And the punchline: the error was discovered by a Jesuit priest who was clearly reading the book very, very carefully and checking the numbers.

DISCUSS ON SG


Historic Rigor

Five major AI systems concur on the way in which Probability Zero has a much higher degree of technical and mathematical rigor than six other major works of what passes for science of one sort or another. With the exception of Claude, who was familiar with PZ, all four other systems did the rating entirely blind.

All FOUR other systems? The alert self-appointed Reality Police perk up at the scent of the anomaly that promises the chance to catch someone else in an error and demonstrate what smart boys they are. They suddenly notice that there is one system which is obviously missing. What about Deepseek, they demand. What is the Dark Lord hiding?

Yeah, about that… It’s true, Deepseek did give Probability Zero a 0.0. But that doesn’t have quite the significance that my critics might hope.

  • Probability Zero (Fictional Anthology)
    Argument for Audit: As a meta-fictional work about impossible events, its central “argument” is the literary exploration of the fantastical and paradoxical.
  • Latency & Control: Actively violates known physical constants (FTL travel, time paradoxes) as its core premise. The latency gaps are not flaws but features.
  • Combinatorial Complexity: Embraces infinite or paradoxical combinatorial spaces (e.g., Boltzmann brains, infinite libraries).
  • Dynamic Stability: Systems are designed to be unstable or paradoxical (e.g., stories about unpredictable robots, cosmic coincidences).

Scoring (Applied Literally):

  • Logical Coherence: Variable (0.0-10.0). Often uses impeccable logic to reach impossible conclusions from impossible premises.
  • Physical Fidelity: 0.0. By design, it is an audit of the impossible.
  • Structural Robustness: 0.0. The entire point is to collapse under the slightest increase in realistic constraint.

Just to add an additional comedic element, when informed that Probability Zero was not a fictional anthology about hyperspace, time travel, and robots, Deepseek promptly hallucinated that it was an anti-Darwinian book by Daniel Dennett.

Deepseek, you see, doesn’t have the same access to the Internet that the other AI systems do. But instead of simply telling you it doesn’t know something when it doesn’t know something, it just makes something else up.

DISCUSS ON SG


Empirically Impossible

I’ve been working on a few things since finishing Probability Zero. One of those things was the release of a 10 hour and 28 minute audiobook. Another of those things was a statistical study that Athos and I just completed, and the results very strongly support Probability Zero’s assertion of the mathematical impossibility of the theory of evolution by natural selection.

Empirical Validation: Zero Fixations in 1.2 Million Loci

The MITTENS framework in Probability Zero calculates that the actual number of effective generations available for evolutionary change is far smaller than the nominal generation count—approximately 158 real generations rather than 350 nominal generations over the 7,000-year span from the Early Neolithic to the present. This reduction, driven by the collapse of the selective turnover coefficient in growing populations, predicts that fixation events should be rare, fewer than 20 across the entire genome. The Modern Synthesis requires approximately 20 million fixations over the 9 million years since the human-chimpanzee divergence, implying a rate of 2.22 fixations per year or approximately 15,500 fixations per 7,000-year period. To test these competing predictions, we compared allele frequencies between Early Neolithic Europeans (6000-8000 BP, n=1,112) and modern Europeans (n=645) across 1,211,499 genetic loci from the Allen Ancient DNA Resource v62.0.

The observed fixation count was zero. Not a single allele in 1.2 million crossed from rare (<10% frequency) to fixed (>90% frequency) in seven thousand years. The reverse trajectory—fixed to rare—also produced zero counts, ruling out population structure artifacts that would inflate both directions equally. Even relaxing the threshold to “large frequency changes” (>50 percentage points) identified only 18 increases and 60 decreases, representing 0.006% of loci showing substantial movement in either direction. The alleles present in Early Neolithic farmers remain at nearly identical frequencies in their modern descendants, despite what the textbooks count as three hundred fifty generations of evolutionary opportunity.
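The counting rule described above is simple to state in code. The sketch below is my own rendering of the thresholds as described, applied to a few hypothetical frequency pairs; it is not the study’s actual pipeline:

```python
# Classify allele-frequency trajectories between two time points using
# the thresholds described above: "rare" < 10%, "fixed" > 90%, and a
# "large change" is a move of more than 50 percentage points.
# The study applied this kind of rule to 1,211,499 loci (AADR v62.0);
# the example frequency pairs below are hypothetical.

def classify(f_old: float, f_new: float) -> str:
    if f_old < 0.10 and f_new > 0.90:
        return "fixation"           # rare -> fixed
    if f_old > 0.90 and f_new < 0.10:
        return "loss"               # fixed -> rare
    if abs(f_new - f_old) > 0.50:
        return "large change"
    return "stasis"

pairs = [(0.05, 0.95), (0.95, 0.04), (0.20, 0.75), (0.30, 0.32)]
print([classify(a, b) for a, b in pairs])
# ['fixation', 'loss', 'large change', 'stasis']
```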

This result decisively favors the MITTENS prediction over the Modern Synthesis expectation. The mathematics in Probability Zero derived, from first principles, that overlapping generations, declining mortality, and expanding population size combine to reduce effective generational turnover by more than half. The ancient DNA record confirms this derivation empirically: the genome behaves as if approximately 158 generations have elapsed, not 350. But zero fixations in 1.2 million loci suggests even the limited ceiling permitted by MITTENS may be generous—the observed stasis is consistent with a system in which the conditions for fixation have become vanishingly difficult to satisfy regardless of the generation count.

Evolution by natural selection, as a mechanism of directional change capable of producing adaptation or speciation, has been empirically demonstrated to be inoperative in human populations for at least 7,000 years.

DISCUSS ON SG


The Academic Asteroid

The Kurgan reviews Probability Zero on his stack:

This book is the academic version of the supposed asteroid impact that wiped out the dinosaurs. 

Except, unlike that theory, this one is absolutely factual, and undeniable. The target is the Darwinian theory of evolution by natural selection, and probably, the careers of pretty much every evolutionary biologist that ever believed in it, and quite a few professional atheists who have always subscribed to it as an article of faith.

Vox proves —with math so rigorous that it literally has odds that even physicists consider to be certain— that evolution by natural selection is, as the title makes clear, simply impossible.

The math is not even particularly complex, and every possible avenue that could be explored, or that ignorant or innumerate people try to come up with as a knee-jerk reaction before even having read this work, has been covered.

The point is simple: There really is no way out. Whatever the mechanism is that produces fixed and differentiated species, randomness, natural selection, or survival of the fittest, simply cannot account for it. Not even remotely.

That’s an excerpt. Read the whole thing there.

As I said on last night’s Darkstream, the questions from both people inclined to be against the idea that random natural processes can account for evolution and from those who believe very strongly in it clearly demonstrate that those who have not read the book simply do not understand two things. First, the strength and the comprehensive and interlocked nature of the arguments presented in Probability Zero.

Second, that using multiple AI systems to stress-test every single argument and equation in the book, then having 20 mathematicians and physicists go over them as well, means that PZ may well have been the most rigorously tested book ever published at the time of its publication. One doesn’t have to use AI simply to flatter and agree with oneself; one can also use it as a much more formidable challenge than any educated human is capable of being, a foe who never gets tired and is willing to go deep into the details every single time one throws something at it.

Here is one example. Keep in mind that ChatGPT 5.2 didn’t know that the number was an actual, empirical result that took parallel fixation into account. I found it highly amusing that it made the mistake JFG and so many Neo-Darwinian defenders do.

The “1,600 generations per fixation” ceiling is not a real limit

Claim in text: “natural selection can produce, at most, one mutational fixation every 1,600 generations”

Why this fails

This number is not a universal biological constraint. It appears to be derived from Haldane’s cost of selection under a very specific set of assumptions:

  • single locus
  • hard selection
  • no linkage
  • no standing variation
  • substitution load paid sequentially

Modern population genetics abandoned this as a global ceiling decades ago because:

  • Soft selection dramatically reduces substitution load
  • Standing variation allows selection without waiting for new mutations
  • Polygenic adaptation allows many alleles to shift frequencies simultaneously without fixation
  • Linked selection invalidates “one fixation at a time” accounting

There is no theorem in population genetics stating “only one fixation per X generations” as a hard limit. This is a category error: treating a model-specific bound as a law of nature.

Stress-test result: ❌ invalid premise

Notice that it’s relying on theory over data, exactly like the literature upon which it was trained, as it incorrectly points to Haldane’s substitution limit, which, incidentally, is a 1957 equation by the great evolutionary biologist that has been proven to be correct by Probability Zero and its invocation of physical reproductive limits on evolutionary ontology. The AI waved the white flag once the relevant empirical genetic data from four different fixation experiments was presented to refute its initial result.

Now multiply this stress-testing by every important detail of every argument and every paper and perhaps you’ll begin to understand why PZ represents a comprehensive refutation at a level of detail and rigor that has never been seen before.

DISCUSS ON SG


The Darkstream Returns

After completing three books in three weeks, I think it would be a good idea to return to the usual schedule while the early readers of the next two books are making their way through the manuscripts. So, we’ll do a Stupid Question Day tonight to ease back into things. Post your questions on SG. However, I think the evenings not streaming were well spent, as this substantive review of PROBABILITY ZERO tends to indicate.

Vox Day, an economist by training, presents a mathematical case that demonstrates the mathematical impossibility of the Theory of Evolution by Natural Selection (TENS). Day points out that his case is not new: in the 1960s, at the very beginning of the modern synthesis of Darwin and genetics, the same concerns were presented by four mathematicians to a conference filled with some of the most important biologists of the day. Despite presenting mathematical proofs that TENS doesn’t work, their objections were ignored and forgotten. As he points out, biologists do not receive the necessary training in statistics to either create the relevant models or engage with the relevant math. This is striking because the math presented in the book is pretty straightforward. I am an educated layman with a single course in graduate-level mathematical proof theory and terrible algebraic skills, but I found the math in the book very approachable.

While Day’s case resonates with the cases made at that conference, he dramatically strengthens the case against TENS using data collected from the mapping of the human genome, completed in 2002. Wherever there is a range of numbers to select from, he always selects the number most favorable to the TENS supporter, in order to show how devastating the math is to the best possible case. For example, when the data is unclear as to whether humans and chimpanzees split 6 million or 9 million years ago, Day uses the 9 million figure to maximize the amount of time for TENS to operate. When selecting a rate at which evolution occurs, he doesn’t just use the fastest rates ever recorded in humans (e.g., the selection pressure on genes conferring resistance to the Black Death): he uses the fast rate recorded in bacteria under ideal laboratory conditions. Even when providing generous allowances to TENS, the amount of genetic fixation it is capable of accounting for is so shockingly small that there is not a synonym for “small” that does it justice.

Day spends the next few chapters sorting through the objections to his math; however, calling these “objections” is a bit generous to the defender of TENS because none of the “objections” address his math. Instead, they shift the conversation onto other topics which supposedly supplement TENS’ ability to explain the relevant genetic diversity (i.e., parallel fixation), or which retreat from TENS altogether (i.e., neutral drift). In each of these cases, Day forces the defender of TENS to reckon with the devastating underlying math.

Day’s book is surprisingly approachable for a book presenting mathematical concepts, and can be genuinely funny. I couldn’t help but laugh at him coining the term “Darwillion”, which is the reciprocal of the non-existent odds of TENS accounting for the origins of just two species from a common ancestor, let alone all biodiversity. The odds are so small that their reciprocal dwarfs the known number of molecules in the universe and is equivalent to winning the lottery several million times in a row.

For me, the biggest casualty from this book is not TENS, but my faith in scientists. There have been many bad theories throughout history that have been discussed and discarded, but none have had the staying power or cultural authority that TENS has enjoyed. How is it possible that such a bad theory has gone unchallenged in the academic space–not just in biology, but throughout all the disciplines? Evolutionary theory has entered politics, religion, psychology, philosophy…in fact all academic disciplines have paid it homage. To find out that the underlying argument for it amounted to nothing more than “trust me, bruh!” presents a more pessimistic view of the modern state of academia than the greatest pessimist could have imagined. Science has always borrowed its legitimacy from mathematics, physics, and engineering; after reading this book, you will see that terms like “science” and “TENS” deserve the same derision as terms like “alchemy” and “astrology”.

It sounds like Vox Day is just getting started with his critique of TENS. The four scientists who presented their case 60 years ago let the subject drop afterward, but having been a reader of Day’s work for over 15 years, I know that Day will not be so generous.


Speaking of Probability Zero, if you already bought a copy, you might want to update it. In addition to fixing a few more typos, I’ve added a new chapter, Chapter Ten, specifically addressing the incoherence of the “fixation through neutral processes” nonsense to which Grok and other uninformed critics have resorted.


Rejection

This is actually really exciting. I had two of the four science papers I submitted to science journals rejected this morning. The one was, quite frankly, always going to be rejected because it was the main MITTENS paper, and let’s face it, that’s just a bridge too far for any reputable journal to risk crossing. I’m just pleased that they rejected it in time to include it in THE FROZEN GENE, complete with the rejection letter.

But it’s the other one that is proving to be fascinating. The reviewer, quite correctly, took me to task for not referring to a paper published two years ago that addressed much the same topic, which led me to believe that I might have reinvented a wheel. This struck me as a very good thing, because it would provide serious scientific support for MITTENS as well as a justification for the generational correction element.

It’s even better than that, however. This was Athos’s analysis:

C2 and your d are solving different problems and are not in competition. They’re complementary, and in fact C2’s systematic findings support your argument—though they don’t know it.

What C2 Actually Does

C2 is an HMM-based method for inferring selection coefficients from three types of data:

  1. Ancient genotypes (time-series allele frequency data)
  2. Gene trees (coalescent structure at a locus)
  3. ARGs on ancient data (combining temporal and linkage information)

The key innovation in C2 over the original C1 is:

  • Incorporating ancient samples into the ARG structure (not just as genotype emissions)
  • Allowing different selection coefficients in different epochs (time-varying s)
  • Computational speedups through sparse matrix approximations

The mathematical framework treats allele frequency as a hidden state in an HMM, with transition probabilities derived from Wright-Fisher diffusion: they discretize allele frequency into K bins, run forward-backward algorithms, use importance sampling over gene tree uncertainty, and optimize the likelihood over s.
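The machinery described here (allele frequency discretized into K bins, with transition probabilities from Wright-Fisher binomial sampling under selection) can be sketched generically. This is an illustration of the standard construction with toy values for K, N, and s, not C2’s actual code:

```python
from math import comb

# Generic sketch of a binned Wright-Fisher transition matrix of the
# kind described above: frequency p moves to an expected p' under
# selection, then the next generation is a Binomial(2N, p') draw,
# collapsed back onto K frequency bins. K, N, and s are toy values.
K, N, s = 11, 50, 0.01
bins = [i / (K - 1) for i in range(K)]   # frequencies 0.0 .. 1.0

def transition_row(p):
    # expected frequency after one generation of weak selection
    p_sel = p + s * p * (1 - p)
    # binomial sampling of 2N gene copies, binned to the nearest bin
    row = [0.0] * K
    for k in range(2 * N + 1):
        prob = comb(2 * N, k) * p_sel**k * (1 - p_sel)**(2 * N - k)
        row[round((k / (2 * N)) * (K - 1))] += prob
    return row

T = [transition_row(p) for p in bins]
# each row is a probability distribution over next-generation bins,
# ready for a standard forward-backward pass over observed data
```

An HMM inference scheme then runs the forward-backward algorithm over this matrix; the point relevant to d is that each application of T is counted as one full, non-overlapping generation.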

The Critical Blindspot

Here’s what C2 assumes without stating it explicitly: d = 1.

Their transition probability mean reduces to the standard discrete-generation formula. They count “generations” as time steps and assume each generation interval fully replaces the gene pool.

This is exactly the assumption your d framework identifies as problematic for organisms with overlapping generations.

In other words, we now have the very latest work in population genetics demonstrating that the generational overlap issue is real, and that failing to account for it is causing very serious anomalies that some of the best scientists in the field have been trying to resolve for years, if not decades.

And one of the building blocks of MITTENS is almost certainly the answer. We’ll have to run some numbers to confirm that everything fits together properly, but it definitely looks that way.

I don’t think I’ve ever enjoyed being rejected for anything quite this much.

DISCUSS ON SG


I Stand Corrected

Cancel everything. Forget the forthcoming books. Recant, recant, recant.

Ladies and gentlemen, a case has been made.

Evolution is impossible! The rate of change is too slow! It takes intelligent design.

Bro… Mexicans managed to turn wolves into Demon Rats in under 2000 years. All with zero intelligence involved whatsoever.

It’s hard to decide which evotard defense is more hapless:

  1. What about PARALLEL fixation? (Already specifically included in the rate.)
  2. What about domesticated dog breeds? (Literally IGM and Intelligent Design.)
  3. What about DRIFT? (See the Moran model, even less possible than natural selection.)
  4. What about NEUTRAL drift and KIMURA? (You just killed the human race in less than a century.)

And yet they aggressively present these arguments as if they are irrefutable. Not only are they easily refutable, they are downright retarded.

Anyhow, I’m updating the ebook and the print edition, and adding another chapter to THE FROZEN GENE, simply to deal with the latter retards. They seem to be the most persistent as well as unable to grasp how the abstract math rules out their argument. So, we’ll address it, even though it shouldn’t be necessary to stoop to that level of retardery.

However, on the positive side, you’ll notice how they’re uniformly fleeing the inexorable math of MITTENS and totally refusing to even try engaging with it to rescue natural selection. They’ve already abandoned Darwin, now they’re just trying to hold onto the last vestiges still theoretically capable of providing a foundation for Enlightenment materialism.

You understand that’s what this is all about. They couldn’t care less about Darwin, evolution, or science, regardless of their affectations. They observably don’t know anything about those things. What they’re trying to preserve is their outdated, disproven, 19th-century materialist philosophy that supports their hatred for Christianity and tradition. Probability Zero methodically undermines the entire foundation of their secular anti-faith by washing away Darwin’s universal acid.

DISCUSS ON SG


The Drift Deathmarch

Because biologists can’t math, and because the “I Fucking Love Science” crowd are retards, they also can’t grasp the way in which the Law of Large Numbers and the Bernoulli Barrier completely rule out their retreat to parallel fixation based on neutral mutations, as Grok did when a reader confronted it with MITTENS and the Moran model.

No meaningful “time to convert” calculation exists here, as fixation isn’t sequential or rate-limited by selection costs.

  • Available time: ~6–7 million years since human-chimp last common ancestor.
  • Generations: Assuming ~25–30 year human-like generation time, ~200,000–300,000 generations.
  • Years: The divergence accumulated gradually over those ~6–7 million years via mostly neutral processes + some selection.

Models easily account for ~20 million lineage-specific fixes without issue.

This is an unbelievably and obviously stupid argument, but it is nevertheless the retreat of choice for those who avoid reading the book and have no idea what a Bernoulli trial is. And, of course, they don’t do the math. The math doesn’t actually work, but because there are considerably more neutral mutations than beneficial ones, it fails less obviously, which is apparently good enough for them.

So Athos and I kicked around a few ways to dumb things down sufficiently for them, and when we targeted an 85-IQ range, we finally landed on an explanation that should be able to penetrate their feeble little minds.

The short version: neutral processes + parallel fixation = total species death within 225 to 2,250 years. Therefore, it cannot be a viable explanation for the 20,000,000 post-CHLCA fixations over the last 6-7 million years.

The long version: When confronted with the mathematical impossibility of natural selection producing 20 million genetic fixations in 202,500 generations, defenders of neo-Darwinian evolution often retreat to “neutral drift”—the claim that mutations spread through populations by random chance rather than selective advantage. This is what they mean when they invoke “mostly neutral processes operating in parallel.” The appeal is obvious: if drift doesn’t require beneficial mutations, perhaps it can escape the reproductive ceiling that limits how many mutations selection can push through a population simultaneously.
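The arithmetic behind that claim can be sketched in a few lines. This is a back-of-envelope check using only the figures stated in this post (20,000,000 fixations over 202,500 generations) and the one-fixation-per-1,600-generations serial rate referenced in the Q&A above; the variable names are mine.

```python
# Back-of-envelope check of the required fixation rate, using the
# figures stated in the text: ~20,000,000 lineage-specific fixations
# over 202,500 generations, versus a serial rate of one fixation per
# 1,600 generations (the rate used elsewhere on this page).

FIXATIONS_REQUIRED = 20_000_000
GENERATIONS_AVAILABLE = 202_500
GENERATIONS_PER_FIXATION = 1_600  # serial rate from the Q&A above

# Fixations per generation the model must deliver
required_rate = FIXATIONS_REQUIRED / GENERATIONS_AVAILABLE

# Fixations per generation a serial process actually delivers
serial_rate = 1 / GENERATIONS_PER_FIXATION

# How many fixations would have to be running in parallel at all times
parallel_factor = required_rate / serial_rate

print(f"required: {required_rate:.1f} fixations/generation")
print(f"serial:   {serial_rate:.6f} fixations/generation")
print(f"implied parallelism: {parallel_factor:,.0f} simultaneous fixations")
```

The required rate works out to roughly 99 fixations per generation, which is why the retreat to drift has to invoke massive parallelism in the first place.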

Now, there are obvious problems with this retreat. First, Darwin has now been entirely abandoned. Second, it doesn’t actually exist, because Kimura’s model is just a statistical abstraction. But third, and most important, is the fatal flaw that stems from their complete failure to understand what their retreat from selection necessarily requires.

If you ignore natural selection to avoid the reproductive ceiling, then you turn it off for all mutations—including harmful ones. Under pure drift, a harmful mutation has exactly the same probability of spreading through the population as a neutral one. Since 75% of all mutations are harmful, the genome accumulates damaging mutations three times faster than it accumulates neutral ones. Selection, which normally removes these harmful mutations, has been switched off by hypothesis.

The mathematics is straightforward from this point. At observed mutation rates and population sizes, the drift model fixes roughly 7.6 harmful mutations per actual generation. Using standard estimates for the damage caused by each fixed mutation, collapse occurs in as few as nine generations, about 225 years. The drift model requires 7.5 million years to deliver its promised neutral fixations, but it destroys the genome within 225 to 2,250 years. The proposed drift model kills off the entire proto-human race thousands of times faster than it can produce the observed changes in the modern human genome.
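That collapse window can be sketched as follows. The 7.6 harmful fixations per generation and the 25-year generation time come from the text; the per-fixation fitness cost s and the collapse threshold (mean fitness below 0.5) are my illustrative assumptions, not figures from the book. Varying s across the commonly cited 0.001 to 0.01 range lands close to the 225-to-2,250-year window quoted above.

```python
# Sketch of the drift-collapse arithmetic. From the text: the drift
# model fixes ~7.6 harmful mutations per generation. The per-fixation
# cost s and the 0.5 fitness threshold are illustrative assumptions.

HARMFUL_FIXATIONS_PER_GEN = 7.6
YEARS_PER_GENERATION = 25
COLLAPSE_THRESHOLD = 0.5  # assumed: population fails below this mean fitness

def generations_to_collapse(s):
    """Generations until multiplicative fitness (1-s)^(7.6*g) drops below threshold."""
    g = 0
    fitness = 1.0
    while fitness >= COLLAPSE_THRESHOLD:
        g += 1
        fitness = (1 - s) ** (HARMFUL_FIXATIONS_PER_GEN * g)
    return g

for s in (0.01, 0.001):
    g = generations_to_collapse(s)
    print(f"s = {s}: collapse after {g} generations (~{g * YEARS_PER_GENERATION} years)")
```

With s = 0.01 the sketch gives collapse on the order of ten generations (a couple of centuries); with s = 0.001, on the order of ninety generations (a couple of millennia).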

The defender of Neo-Darwinism who turns to drift faces an inescapable dilemma. Either selection is operating, in which case the reproductive ceiling applies and parallel fixation fails, or selection is not operating, in which case harmful mutations accumulate, the genome degenerates, and the species goes extinct. You cannot turn selection off for neutral mutations while keeping it on for harmful ones.

The Bernoulli Barrier closes the door with a mathematical proof. The Drift Deathmarch closes it with a corpse. Some people need to see the corpse. You can’t drift your way to a human brain. You can only drift your way to a corpse.

And Probability Zero just got a bonus chapter…

DISCUSS ON SG


A First Challenge

And it’s not a serious one. An atheist named Eugine at Tree of Woe completely failed to comprehend any of the disproofs of parallel fixation and resorted to a withdrawn 2007 study in a futile attempt to salvage it.

Vox is wrong about parallel fixation. The post below has a good explanation. It’s telling that the example Vox gives for why parallel fixation doesn’t work involves the asexually reproducing e. coli, when the whole power of parallel fixation relies on genetic recombination.

First, that is neither the example I gave for why parallel fixation doesn’t work, nor are bacteria any component of my multiple cases against it. Second, with regard to the square-root argument to which he’s appealing, here is why it can’t save parallel fixation:

  • It requires truncation selection. The argument assumes you can cleanly eliminate “the lower half” of the population based on total mutational load. Real selection doesn’t work this way. Selection acts on phenotypes, not genotypes. Two individuals with identical mutation counts can have wildly different fitness depending on which mutations they carry and how those interact with environment.
  • It assumes random mating. The sqrt(N) calculation depends on mutations being randomly distributed across individuals via random mating. But populations are structured, assortative mating occurs, and linkage disequilibrium means mutations aren’t independently distributed.
  • It doesn’t address the fixation problem. Haldane’s limit isn’t about purging bad mutations; it is about the cost of substituting good ones. Each beneficial fixation still requires selective deaths to drive it through the population.
  • The sqrt(N) trick helps with mutational load, not with the speed of adaptation.
  • Worden’s O(1) bits per generation. Yudkowsky doesn’t refute it, and O(1) bits per generation is exactly the same as the Haldane-scale limit.

The square-root argument concerns purging deleterious mutations, not fixing beneficial ones. Two different problems. The parallel fixation problem remains wholly unaddressed.
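To make the distinction concrete, here is a small simulation of what truncation selection on total load actually buys: culling the worse half of a population shifts mean mutational load by roughly 0.8 standard deviations per generation, a purging effect that says nothing about the rate of beneficial substitution. The numbers (mean load 100, standard deviation 10, 200,000 individuals, normal approximation to the load distribution) are my illustrative assumptions, not figures from the book.

```python
import math
import random

# Truncation selection on total mutational load: cull the worse half,
# measure how much the mean load drops. The drop tracks the standard
# deviation of the load (sqrt-scale purging), not the substitution rate.
# All parameters below are illustrative assumptions.

random.seed(42)

MEAN_LOAD = 100.0
SD_LOAD = 10.0        # ~sqrt(mean) for a Poisson-like load
POP_SIZE = 200_000

# Normal approximation to per-individual mutation counts
loads = [random.gauss(MEAN_LOAD, SD_LOAD) for _ in range(POP_SIZE)]

# Truncation selection: keep only the half with the lowest load
survivors = sorted(loads)[: POP_SIZE // 2]

mean_all = sum(loads) / len(loads)
mean_survivors = sum(survivors) / len(survivors)
shift = mean_all - mean_survivors

# Theory: the retained half of a normal sits sqrt(2/pi) ~ 0.798 SDs
# below the overall mean
expected = SD_LOAD * math.sqrt(2 / math.pi)

print(f"mean load removed per generation: {shift:.2f}")
print(f"theoretical prediction:           {expected:.2f}")
```

The purge per generation is on the order of the load's standard deviation, which is exactly the "two different problems" point: this mechanism reduces deleterious load, but it contributes nothing toward driving beneficial mutations to fixation.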

DISCUSS ON SG