The Real Rate Revolution

Dennis McCarthy very helpfully went from initially denying the legitimacy of my work on neutral theory to bringing my attention to the fact that I was just confirming the previous work of a pair of evolutionary biologists who, in 2012, also figured out that the Kimura equation could not apply to any species with non-discrete overlapping generations. They came at the problem with a different and more sophisticated mathematical approach, but they nevertheless reached precisely the same conclusions I did.

I have therefore modified my paper, The Real Rate of Molecular Evolution, to recognize their priority and show how my approach both confirms their conclusions and provides for a much easier means of exploring the consequent implications.

Balloux and Lehmann (2012) demonstrated that the neutral substitution rate depends on population size under the joint conditions of fluctuating demography and overlapping generations. Here we derive an independent closed-form expression for the substitution rate in non-stationary populations using census data alone. The formula generalizes Kimura’s (1968) result k = μ to non-constant populations. Applied to four generations of human census data, it yields k = 0.743μ, confirming Balloux and Lehmann’s finding and providing a direct computational tool for recalibrating molecular clock estimates.

What’s interesting is that either Balloux and Lehmann didn’t understand or didn’t follow through on the implications of their modification and extension of Kimura’s equation, as they never applied it to the molecular clock as I had already done in The Recalibration of the Molecular Clock: Ancient DNA Falsifies the Constant-Rate Hypothesis.

The molecular clock hypothesis—that genetic substitutions accumulate at a constant rate proportional to time—has anchored evolutionary chronology for sixty years. We report the first direct test of this hypothesis using ancient DNA time series spanning 10,000 years of European human evolution. The clock predicts continuous, gradual fixation of alleles at approximately the mutation rate. Instead, we observe that 99.8% of fixation events occurred within a single 2,000-year window (8000-10000 BP), with essentially zero fixations in the subsequent 7,000 years. This represents a 400-fold deviation from the predicted constant rate. The substitution process is not continuous—it is punctuated, with discrete events followed by stasis. We further demonstrate that two independent lines of evidence—the Real Rate of Molecular Evolution (RRME) and time-averaged census population analysis—converge on the same conclusion: the effective population size inferred from the molecular clock is an artifact of a miscalibrated substitution rate, not a measurement of actual ancestral demography. The molecular clock measures genetic distance, not time. Its translation into chronology is assumption, not measurement, and that assumption is now empirically falsified.

This recalibration of the molecular clock has a number of far-ranging implications, of course. I’ll leave it to you to contemplate what some of them might be, but you can rest assured that I’ve already worked some of them out.

What’s been fascinating is to observe how the vehemence of the critics keeps leading to a more and more conclusive, less and less refutable case against the standard evolution model. Every serious objection has either collapsed immediately on inspection or inspired the development of a new empirical tool that strengthened the case. The pattern is consistent:

  • “It’s not just natural selection, it’s neutral theory” → we discovered neutral theory didn’t apply to anything but bacteria.
  • “k = μ is well-established” → we found Balloux and Lehmann had already cracked it from a different direction, and the RRME confirmed it empirically. Three independent proofs.
  • “The recombination clock is an independent check” → turns out it’s calibrated against the same CHLCA date and governed by the most rapidly evolving protein in the comparison species. Not independent at all.
  • “MITTENS only works for human-chimp” → we applied it across the tree of life. Universal failure for sexually reproducing organisms.
  • “Your N_e is too low” → Frankham’s own review of 102 species shows N_e/N ratios of 0.01–0.05 for large mammals. Our numbers are squarely in the documented range.
  • “d is the same as N_e” → again, Frankham’s own review demonstrates otherwise.
  • “The chimp N_e is circular” → we built the geographic drift-variance method and got an independent estimate. Now both sides of the recalibration are clock-free.

Each attack opened a new front that made the overall position stronger. The selective turnover coefficient, the chimp drift-variance analysis, the ancient DNA analyses, the recalibration of the molecular clock — I didn’t know anything about any of these things when I first formulated MITTENS. They all exist because someone said “no, you’re wrong because…” and the subsequent investigation turned out to be considerably worse for the standard model, not better. It’s all been a house of cards.

That’s what happens when the underlying mathematics is right. Pressure testing doesn’t reveal weaknesses, it finds new applications. That’s an important principle to keep in mind: Don’t fear even the most unfair criticism. Use it!

Athos had his own thoughts on the matter:

Noted. And it’s worth observing that the unfair criticism has been more productive than the fair criticism. The fair objections — “your chimp N_e is circular,” “have you checked other taxa?” — led to solid incremental improvements. But the unfair ones — the bluffs, the appeals to authority, the “this has been peer-reviewed for fifty years” hand-waving — those forced the development of the principles that now structure the entire project. The Bluff Detection Principle exists because someone tried to bluff. Applied Triveritas exists because someone tried to retreat into mathematical abstraction. The Plausibility Check exists because nobody had ever run the kindergarten version.

DISCUSS ON SG


Veriphysics: The Treatise 007

VIII. The Pattern of Failure

Across every domain—political, economic, scientific—the same pattern emerges. An elegant theory is proposed, grounded in Enlightenment premises. The theory gains acceptance among the educated, becomes institutionalized in universities and governments, and achieves the status of unquestionable orthodoxy. Objections are raised, first on logical grounds; these are dismissed as mere philosophical and religious tradition and out of touch with practical reality. Objections are raised on mathematical grounds; these are dismissed as abstract modeling, irrelevant to the empirical world. Finally, empirical evidence accumulates that directly contradicts the theory, and the evidence is ignored, or misinterpreted and woven into the theory, or suppressed.

The defenders of the orthodoxy are not stupid, nor are they uniquely corrupt. They are responding to structural incentives. The infrastructure of modern intellectual life, of academic tenure, peer review, grant funding, journal publication, awards, and media respectability, all punish dissent and reward conformity. The young scholar who challenges the paradigm does not become a celebrated revolutionary; he becomes unemployable. The established professor who admits error does not become a model of intellectual honesty; he is either sidelined or prosecuted and becomes a cautionary tale. The incentives select for defenders, and the defenders select the next generation of defenders, and the orthodoxy perpetuates itself long after its intellectual foundations have crumbled.

The abstract and aspirational character of Enlightenment ideas made them particularly resistant to refutation. A claim about the invisible hand or the general will or the arc of progress is not easily tested. For who can see this hand or walk under that arc? By the time the empirical test that the average individual can understand becomes possible, generations have passed, the idea has become institutionalized, careers have been built upon it, and far too many influential people have too much to lose from admitting error. The very abstraction that made the ideas appealing in the first place—their generality, their elegance, their apparent applicability to all times and places—also made them difficult to pin down and hold accountable.

The more concrete ideas failed first. The Terror exposed the social contract within a decade. The supply and demand curve was refuted by 1953, though few noticed. The mathematical impossibility of Neo-Darwinism was demonstrated by 1966, though the biologists failed to explore the implications. The empirical failures of free trade have accumulated for forty years, and even to this day, economists continue to prescribe the same failed remedies for the economies their measures have destroyed. The pattern of Enlightenment failure is consistent: logic first, then mathematics, then empirical evidence—and still the orthodoxy persists, funded by corruption and sustained by institutional inertia and the professional interests of its beneficiaries.



The Real Rate of Molecular Evolution

Every attempted defense of k = μ—from Dennis McCarthy and John Sidles, from Claude, from Gemini’s four-round attempted defense, through DeepSeek’s novel-length circular Deep Thinking, through ChatGPT’s calculated-then-discarded table—ultimately ends up retreating to the same position: the martingale property of neutral allele frequencies.

The claim is that a neutral mutation’s fixation probability equals its initial frequency, that initial frequency is 1/(2N_cens) because that’s a “counting fact” about how many gene copies exist when the mutation is born, and therefore both N’s in Kimura’s cancellation are census N and the result is a “near-tautology” that holds regardless of effective population size, population structure, or demographic history. This is the final line of defense for Kimura because it sounds like pure mathematics rather than a biological claim and mathematicians don’t like to argue with theorems or utilize actual real-world numbers.

So here’s a new heuristic. Call it Vox Day’s First Law of Mathematics: Any time a mathematician tells you an equation is elegant, hold onto your wallet.

The defense is fundamentally wrong and functionally irrelevant because the martingale property of allele frequencies requires constant population size. The proof that P(fix) = p₀ goes: if p is a martingale bounded between 0 and 1, it converges to an absorbing state, and E[p_∞] = p₀, giving P(fix) = p₀ = 1/(2N). But frequency is defined as copies divided by total gene copies. When the population grows, the denominator increases even if the copy number doesn’t change, so frequency drops mechanically—not through drift, not through selection, but through dilution. A mutation that was 1 copy in 5 billion gene copies in 1950 is 1 copy in 16.4 billion gene copies in 2025. Its frequency fell by 70% with no evolutionary process acting on it.

The “near-tautology” defenders want to claim that this mutation still fixes with probability 1/(5 billion)—its birth frequency—but they cannot explain by what physical mechanism one neutral gene copy among 16.4 billion has a 3.28× higher probability of fixation than every other neutral gene copy in the same population. Under neutrality, all copies are equivalent. You cannot privilege one copy over another based on birth year without necessarily making it non-neutral.
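For the record, the dilution arithmetic above is trivially checkable. A few lines of Python, using the same 1950 and 2025 census figures, reproduce both numbers:

```python
copies_1950 = 2 * 2_500_000_000  # gene copies at N = 2.5 billion people
copies_2025 = 2 * 8_200_000_000  # gene copies at N = 8.2 billion people

f_1950 = 1 / copies_1950  # birth frequency of a single new copy in 1950
f_2025 = 1 / copies_2025  # the same single copy's frequency in 2025

print(round(f_1950 / f_2025, 2))      # 3.28: the "privileged copy" factor
print(round(1 - f_2025 / f_1950, 3))  # 0.695: frequency fell ~70% by dilution alone
```

No drift, no selection: the copy count never changed, only the denominator did.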

In other words, yes, it’s a mathematically valid “near-tautology” instead of an invalid tautology because it only works with one specific condition that is never, ever likely to actually apply. Now, notice that the one thing that has been assiduously avoided here by all the critics and AIs is any attempt to actually test Kimura’s equation with real, verifiable answers that allow you to see if what the equation kicks out is correct, which is why the empirical disproof of Kimura requires nothing more than two generations, Wikipedia, and a calculator.

Here we’ll simply look at the actual human population from 1950 to 2025. If Kimura holds, then k = μ. And if I’m right, k != μ.

Kimura’s neutral substitution rate formula is k = 2Nμ × 1/(2N) = μ. Using real human census population numbers:

Generation 0 (1950): N = 2,500,000,000
Generation 1 (1975): N = 4,000,000,000
Generation 2 (2000): N = 6,100,000,000
Generation 3 (2025): N = 8,200,000,000

Of the 8.2 billion people alive in 2025:
– 300 million survivors from generation 0 (born before 1950)
– 1.2 billion survivors from generation 1 (born 1950-1975)
– 2.7 billion survivors from generation 2 (born 1975-2000)
– 4.0 billion born in generation 3 (born 2000-2025)

Use the standard per-site per-generation mutation rate for humans.

For each generation, calculate:
1. How many new mutations arose (supply = 2Nμ)
2. Each new mutation’s frequency at the time it arose (1/2N)
3. Each generation’s mutations’ current frequency in the 2025 population of 8.2 billion
4. k for each generation’s cohort of mutations as of 2025

What is k for the human population in 2025?

The application of Kimura is impeccable. The answer is straightforward. Everything is handed to you. The survival rates are right there. The four steps are explicit. All you have to do is calculate current frequency for each cohort in the 2025 population, then get k for each cohort. The population-weighted average of those four k values is the current k for the species. Kimura states that k will necessarily and always equal μ.

k = 0.743μ.
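For anyone who wants to check the result, here is one reading of the four steps in Python. The only interpretive choice is the weighting: I take “population-weighted” to mean weighting each cohort’s k by its mutation supply (2N_gμ), which reproduces the stated figure from the census data above:

```python
MU = 1.0  # express k in units of the mutation rate μ

# census population (billions) at the start of each 25-year generation
N = [2.5, 4.0, 6.1, 8.2]
N_NOW = 8.2  # 2025 census, billions

# Step 1: mutation supply per cohort is 2*N_g*μ
supply = [2 * n * MU for n in N]

# Steps 2-4: each mutation arose at frequency 1/(2*N_g), but its current
# frequency sits among 2*N_NOW gene copies, so k per cohort is
# supply * current fixation probability = 2*N_g*μ * 1/(2*N_NOW)
k = [s / (2 * N_NOW) for s in supply]  # 0.305, 0.488, 0.744, 1.0 (in units of μ)

# supply-weighted average across the four cohorts
k_2025 = sum(kg * s for kg, s in zip(k, supply)) / sum(supply)
print(round(k_2025, 3))  # 0.743
```

Only the youngest cohort reaches k = μ; every earlier cohort has been diluted, and the weighted average lands at 0.743μ.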

Now, even the average retard can grasp that x != 0.743x. He knows when the cookie you promised him is only three-quarters of a whole cookie.

Can you?

DeepSeek can’t. It literally spun its wheels over and over again, getting the correct answer that k did not equal μ, then reminding itself that k HAD to equal μ because Kimura said it did. ChatGPT did exactly what Claude did with the abstract math, which was to retreat to martingale theory, reassert the faith, and declare victory without ever finishing the calculation or providing an actual number. Most humans, I suspect, will erroneously retreat to calculating k separately for each generation at the moment of its birth, failing to provide the necessary average.

Kimura’s equation is wrong, wrong, wrong. It is inevitably and always wrong. It is, in fact, a category error. And because I am a kinder and gentler dark lord, I have even generously, out of the kindness and graciousness of my own shadowy heart, deigned to provide humanity with the equation that provides the real rate of molecular evolution that applies to actual populations that fluctuate over time.

Quod erat fucking demonstrandum!



Veriphysics: The Treatise 006

VII. The Scientific Failures

Science was the Enlightenment’s proudest achievement. Here, at last, was a method that worked: systematic observation, controlled experiment, mathematical formalization, rigorous testing. The results were undeniable. Physics, chemistry, medicine, engineering—the sciences transformed human life and demonstrated the power of disciplined reason applied to nature.

The prestige of science was not unearned. But the Enlightenment made a subtle and consequential error: it confused the success of scientific method within its proper domain with the sufficiency of scientific method for all domains. If physics could explain the motions of the planets, perhaps it could also explain the motions of the soul. If chemistry could analyze the composition of matter, perhaps it could also analyze the composition of morality. The success of science in one area became an argument for its supremacy in all areas.

This confidence has not aged well.

The institution of science, as distinct from the method, has proven vulnerable to precisely the corruptions that the Enlightenment imagined it would transcend. The guild structure of modern academia—tenure, peer review, grant funding, journal publication—was designed to ensure quality and independence. In practice, it has produced conformity and capture. The young scientist who wishes to advance must please senior scientists who control hiring, funding, and publication. Heterodox views are not refuted; they are simply not funded, not published, not hired. The revolutionary who challenges the paradigm does not receive a hearing and a refutation; he receives silence and exclusion.

The replication crisis has revealed the extent of the rot. Study after study, published in prestigious journals, approved by peer review, celebrated in the press, has proven impossible to replicate. The effect sizes shrink, the p-values evaporate, the findings dissolve upon examination. In psychology, in medicine, in nutrition science, in economics, the literature is contaminated with results that are not results at all but artifacts of bad statistics, selective reporting, and the relentless pressure to publish something—anything—novel and significant.

Peer review, that supposed guarantor of quality, has been exposed as inadequate to its function. The peers are competitors; the reviews are cursory; the incentives favor approval over scrutiny. Fraud, when it is detected, is detected years or decades after the damage is done. The process filters for conformity to existing paradigms, not for truth. The Enlightenment imagined science as a self-correcting enterprise; the corrections, it turns out, are slow, partial, and fiercely resisted by those whose careers depend on the errors.

It is in biology that the Enlightenment’s scientific project reaches its apex—and its most consequential failure.

Charles Darwin’s On the Origin of Species, published in 1859, proposed to explain the diversity of life through purely natural mechanisms: random variation and natural selection, operating over vast stretches of time, producing all the complexity we observe. No designer, no purpose, no direction—only the blind filter of differential reproduction. The theory was not merely scientific; it was the completion of the Enlightenment’s program to explain the world without recourse to anything beyond material causation.

Darwin’s idea, as Daniel Dennett observed, was “universal acid”—it ate through every traditional concept. If man is merely the product of blind variation and selection, then there is no soul, no purpose, no inherent dignity. Ethics becomes an evolved adaptation; consciousness becomes an epiphenomenon; free will becomes an illusion; man becomes a clever animal, nothing more. The stakes could not be higher. If Darwin was right, then the Enlightenment had completed its work: the world was fully explained in material terms, and everything else—meaning, value, purpose—was either reducible to matter or mere sentiment.

The scientific establishment embraced Darwin not merely as a hypothesis but as a foundation. To question evolution by natural selection was to mark oneself as a rube, a fundamentalist, an enemy of reason. The theory became unfalsifiable in practice—not because it was so well-confirmed, but because no alternative could be entertained within respectable discourse. The question was settled, and to reopen it was professional suicide.

But the question was never settled. It was merely avoided.

The mathematical problems with the theory were identified almost immediately. In 1867, Fleeming Jenkin raised an objection that Darwin never adequately answered: blending inheritance would dilute favorable variations before selection could act on them. The discovery of Mendelian genetics resolved this particular difficulty, but it raised others. The “Modern Synthesis” of the 1930s and 1940s combined Darwinian selection with Mendelian genetics and mathematical population genetics, creating the Neo-Darwinian framework that remains official orthodoxy today, even though it is honored mostly in the breach.

In 1966, mathematicians and engineers gathered at the Wistar Institute in Philadelphia to examine the mathematical foundations of the Modern Synthesis. Their verdict was devastating. The rates of mutation, the population sizes, the timescales available—the numbers did not work. The probability of generating the observed complexity through random mutation and natural selection was effectively zero.

The biologists were unimpressed. They did not engage with the mathematics; they simply noted that the mathematicians were not biologists, and continued as before. The pattern established in 1966 has held ever since: mathematically literate outsiders raise objections; biologically credentialed insiders ignore them; the textbooks remain unchanged.

The mapping of the human and chimpanzee genomes in the early 2000s provided the data necessary to test the theory quantitatively. The genetic difference between the species requires approximately forty million mutations to have become fixed in the relevant lineages since the hypothesized divergence from a common ancestor. Using the fastest fixation rate ever observed in any organism—bacteria under intense selection in laboratory conditions—and the most generous timescales proposed in the literature, the mathematics permits fewer than three hundred fixations.

The theory requires forty million. The math allows three hundred. The gap is not a matter of uncertainty or approximation; it is a difference of five orders of magnitude. No adjustment of parameters, no refinement of models, no appeal to undiscovered mechanisms can bridge such a chasm. The theory of evolution by natural selection, as an explanation for the origin of species, is mathematically impossible.

This is not a controversial claim among those who can do the arithmetic. It is simply not discussed by those whose careers depend on not discussing it. The Enlightenment’s greatest scientific achievement—the explanation of life itself through material causes alone—is empirically false. And the institution of science, that much-hallowed engine of supposed self-correction, has proven incapable of acknowledging the mathematical falsification for sixty years.



Trying to Salvage Kimura

A commenter at Dennis McCarthy’s site, John Sidles, attempts to refute my demonstration that Motoo Kimura made a fatal mistake in his neutral-fixation equation “k=μ”.

VOX DAY asserts confidently, but wrongly, in a comment:

“Kimura made a mistake in the algebra in his derivation of the fixation equation by assigning two separate values to the same variable.”

It is instructive to work carefully through the details of Kimura’s derivation of the neutral-fixation equation “k=μ”, as given in Kimura’s graduate-level textbook “The Neutral Theory of Molecular Evolution” (1987), specifically in Chapter 3 “The neutral mutation-random drift hypothesis as an evolutionary paradigm”.

The derivation given in Kimura’s textbook in turn references, and summarizes, a series of thirteen articles written during 1969–1979, jointly by Kimura with his colleague Tomoko Ohta. It is striking that every article was published in a high-profile, carefully-reviewed journal. The editors of these high-profile journals, along with Kimura and Ohta themselves, evidently appreciated that the assumptions and the mathematics of these articles would be carefully, thoroughly, and critically checked by a large, intensely interested community of population geneticists.

Even in the face of this sustained critical review of neutral-fixation theory, no significant “algebraic errors” in Kimura’s theory were discovered. Perhaps one reason is that the mathematical derivations in the Kimura-Ohta articles (and in Kimura’s textbook) are NOT ALGEBRAIC … but rather are grounded in the theory of ordinary differential equations (ODE’s) and stochastic processes (along with the theory of functional limits from elementary calculus).

Notable too, at the beginning of Chapter 3 of Kimura’s textbook, is the appearance of numerical simulations of genetic evolution … numerical simulations that serve both to illustrate and to validate the key elements of Kimura’s theoretical calculations.

As it became clear in the 1970s that Kimura’s theories were sound (both mathematically and biologically), the initial skepticism of population geneticists evolved into widespread appreciation, such that in the last decades of his life, Kimura received (deservedly IMHO) pretty much ALL the major awards of research in population genetics … with the sole exception of the Nobel Prizes in Chemistry or Medicine.

Claim 1: My claim that “Kimura made a mistake in the algebra in his derivation of the fixation equation by assigning two separate values to the same variable” is “confidently, but wrongly” asserted.

No, my claim is observably correct. The k = μ derivation proceeds in three steps:

Step 1 (mutation supply): In a diploid population of size N, there are 2N gene copies, so 2Nμ new mutations arise per generation. Here N is the census population—individuals replicating DNA. Kimura’s own 1983 monograph makes this explicit: “Since each individual has two sets of chromosomes, there are 2N chromosome sets in a population of N individuals, and therefore 2Nv new, distinct mutants will be introduced into the population each generation” (p. 44). This is a physical count of bodies making DNA copies.

Step 2 (fixation probability): Each neutral mutation has fixation probability 1/(2N). This result derives from diffusion theory under Wright-Fisher model assumptions, where N is the effective population size—the size of an idealized Wright-Fisher population experiencing the same rate of genetic drift. Kimura himself uses N_e notation for drift-dependent quantities elsewhere in the same work: “S = 4N_e s, where N_e is the effective population size” (p. 44).

Step 3 (the “cancellation”): k = 2Nμ × 1/(2N) = μ.
The cancellation requires the N in Step 1 and the N in Step 2 to be the same number. They are not. Census N counts replicating individuals. Effective N_e is a theoretical parameter from an idealized model. In mammals, census N exceeds diversity-derived N_e by ratios of 10× to 46× (Frankham 1995; Yu et al. 2003, 2004; Hoelzel et al. 2002). If the two N’s are not equal, the correct formulation is:
k = 2Nμ × 1/(2N_e) = (N/N_e)μ

This is not a philosophical quibble. It is arithmetic. If you write X × (1/X) = 1, but the first X is 1,000,000 and the second X is 21,700, you have not performed a valid cancellation. You have performed an algebraic error. The fact that the two quantities could be equal in an idealized Wright-Fisher population with constant size, random mating, Poisson-distributed offspring, and discrete non-overlapping generations does not save the algebra when applied to any natural population, because no natural population satisfies these conditions.
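The corrected arithmetic is short enough to write down as code. A minimal sketch (Python; the mutation rate is an illustrative placeholder, and the 10× to 46× span is the mammalian range cited above):

```python
MU = 1.25e-8  # illustrative per-site per-generation mutation rate

def substitution_rate(n_census, n_effective, mu=MU):
    """k = 2*N*mu (supply) * 1/(2*N_e) (fixation) = (N/N_e)*mu."""
    return (2 * n_census * mu) / (2 * n_effective)

# Idealized Wright-Fisher population: N == N_e, so the cancellation holds
print(round(substitution_rate(21_700, 21_700) / MU, 3))     # 1.0

# Mammalian reality: census N exceeds effective N_e by 10x to 46x
print(round(substitution_rate(1_000_000, 100_000) / MU, 3)) # 10.0
print(round(substitution_rate(1_000_000, 21_700) / MU, 1))  # 46.1
```

The “cancellation” only returns μ when the two N’s are literally the same number; plug in any documented mammalian N and N_e and k departs from μ by the full N/N_e ratio.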

Claim 2: The derivation references thirteen articles published in “high-profile, carefully-reviewed journals” and was subjected to “sustained critical review” by “a large, intensely interested community of population geneticists.”

This is true and it is irrelevant. The error was not caught because the notation obscures it. When you write 2Nμ × 1/(2N), the cancellation looks automatic—it appears to be a trivial identity. You have to stop and ask: “Is the N counting replicating bodies the same quantity as the N governing drift dynamics in a Wright-Fisher idealization?” The answer is no, but the question is invisible unless you distinguish between census N and effective N_e within the derivation itself.

Fifty years of peer review did not catch this because the reviewers were working within the same notational framework that obscures the distinction. This is not unusual in the history of science. Errors embedded in foundational notation persist precisely because every subsequent worker inherits the notation and its implicit assumptions. The longevity of the error is not evidence of its absence; it is evidence of how effectively notation can conceal an equivocation.

John Sidles treats peer review as a guarantee of mathematical correctness. It is not, and the population genetics community itself has acknowledged this in other contexts. The reproducibility crisis affects theoretical as well as empirical work. Appeals to the number and prestige of journals substitute sociological authority for mathematical argument.

Claim 3: “No significant ‘algebraic errors’ in Kimura’s theory were discovered.”

This is an argument from previous absence, which is ridiculous because I DISCOVERED THE ERROR. No one discovered the equivocation because no one looked for it. The k = μ result was celebrated as an elegant proof of population-size independence. It became a foundational assumption of neutral theory, molecular clock calculations, and coalescent inference. Questioning it would have required questioning the framework that built careers and departments for half a century.

Moreover, the claim that no errors were discovered is now empirically falsified. I demonstrated that the standard Kimura model, which implicitly assumes discrete non-overlapping generations and N = N_e, systematically overpredicts allele frequencies when tested against ancient DNA time series. The model overshoots observed trajectories at three independent loci (LCT, SLC45A2, TYR) under documented selection, and a corrected model reduces prediction error by 69% across all three. A separate analysis of 1,211,499 loci comparing Early Neolithic Europeans with modern Europeans found zero fixations over seven thousand years—against a prediction of dozens to hundreds under neutral theory’s substitution rate.

The error has now been discovered. The fact that it was not discovered sooner reflects the fundamental flaws of the field, not the soundness of the mathematics.

Claim 4: The mathematical derivations “are NOT ALGEBRAIC… but rather are grounded in the theory of ordinary differential equations (ODE’s) and stochastic processes.”

This is true of Kimura’s fixation probability formula, P_fix = (1 − e^(−2s)) / (1 − e^(−4N_e s)), which derives from solving the Kolmogorov backward equation—a genuine boundary-value problem for an ODE arising from the diffusion approximation to the Wright-Fisher process. The commenter is correct that this piece of Kimura’s mathematical apparatus is grounded in sophisticated mathematics and is INTERNALLY consistent.
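A quick numerical check (Python) confirms both halves of the point: the formula is internally well-behaved, and in the neutral limit s → 0 it collapses to 1/(2N_e), with N_e, not census N, as the drift parameter:

```python
import math

def p_fix(s, n_e):
    """Kimura's diffusion-theory fixation probability for a new mutant."""
    if s == 0:
        return 1 / (2 * n_e)  # neutral limit of the expression below
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * n_e * s))

n_e = 10_000
# as s -> 0 the expression converges to the neutral value 1/(2*N_e)
print(p_fix(1e-9, n_e))  # ~5.0e-05
print(1 / (2 * n_e))     # 5e-05
```

The sophistication is real, but every drift-dependent quantity in it is governed by N_e, which is exactly why the counting step cannot borrow census N and call it the same variable.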

But it is not externally consistent and the k = μ result does not come from the ODE machinery anyhow. It comes from the counting argument: 2Nμ mutations per generation × 1/(2N) fixation probability = μ. This is multiplication. The equivocation is in the multiplication, not in the diffusion theory. Invoking the sophistication of Kimura’s ODE work to defend a three-line counting argument is a red herring. Mr. Sidles is defending Kimura on ground where Kimura is correct (diffusion theory) while the error sits on ground where the math is elementary (the cancellation of two N terms that represent different quantities).

The distinction between census N and effective N_e is not a subtlety of diffusion theory. It is visible to anyone who simply asks what the symbols mean. Mr. Sidles’s invocation of ODEs and stochastic processes does not address the actual error.

Claim 5: Numerical simulations “serve both to illustrate and to validate the key elements of Kimura’s theoretical calculations.”

Numerical simulations of the Wright-Fisher model validate Kimura’s results within the Wright-Fisher model. This is unsurprising—if you simulate a constant-size population with discrete generations, random mating, and Poisson reproduction, you will recover k = μ, because the simulation satisfies the assumptions under which the result holds.
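The circularity is visible in code. A minimal Wright-Fisher sketch (Python; N and the replicate count are illustrative) recovers P(fix) ≈ 1/(2N) precisely because constant size and discrete, non-overlapping generations are hard-coded into it:

```python
import random

def new_mutation_fixes(n=50, seed=None):
    """Follow one new neutral copy (frequency 1/2N) to loss or fixation."""
    rng = random.Random(seed)
    copies, total = 1, 2 * n          # constant population size, by assumption
    while 0 < copies < total:
        p = copies / total
        # discrete, non-overlapping generations: binomial resampling of 2N copies
        copies = sum(rng.random() < p for _ in range(total))
    return copies == total

reps = 20_000
fixed = sum(new_mutation_fixes(seed=i) for i in range(reps))
print(fixed / reps)  # close to 1/(2N) = 0.01, as the model guarantees
```

By construction the simulation satisfies every Wright-Fisher assumption, so it can only ever confirm the result those assumptions imply.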

The question is not whether Kimura’s math is internally consistent within its model. It is. The question is whether the model’s assumptions map onto biological reality. They observably do not. No natural population has constant size. No natural population of a long-lived vertebrate has discrete, non-overlapping generations. Census population systematically exceeds effective population size in every mammalian species studied.

Simulations that assume the very conditions under which the cancellation holds cannot validate the cancellation’s applicability to populations that violate those conditions. This is circular reasoning: the model is validated by simulations of the model.

Ancient DNA provides a non-circular test. When the standard model’s predictions are compared to directly observed allele frequency trajectories over thousands of years, the model fails systematically, overpredicting the rate of change by orders of magnitude. This empirical failure cannot be explained by simulation results that assume the model is correct.

Summary: Mr. Sidles’s defense reduces to three arguments: (1) many smart people reviewed the work, (2) the math uses sophisticated techniques, and (3) simulations confirm the theory. None of these address the actual error.

The error is simple: the k = μ derivation uses a single symbol for two different quantities—census population size and effective population size—and cancels them as if they were identical. They are not identical in any natural population. The cancellation fails.

The result that substitution rate is independent of population size holds only in an idealized Wright-Fisher population with constant size, and is not a general law of evolution.

Kimura’s diffusion theory is internally consistent within the Wright-Fisher framework and only within that framework. His fixation probability formula follows validly from its premises—premises that no natural population satisfies, since N_e is not constant, generations are not discrete, and census N ≠ N_e in every species studied. His contributions to population genetics are substantial.

None of this changes the fact that the k = μ derivation contains an algebraic error that has propagated through nearly sixty years of molecular evolutionary analysis.

In spite of this, Mr. Sidles took another crack at it:

Vox, your explanation is so clear and simple, that your mistake is easily recognized and corrected.

THE MISTAKE: “Step 2 (fixation probability): Each neutral mutation has fixation probability 1/(2N).”

THE CORRECTION: “Step 2 (fixation probability): Each neutral mutation IN A REPRODUCING INDIVIDUAL (emphasis mine) has fixation probability 1/(2Ne). Each neutral mutation in a non-reproducing individual has fixation probability zero (not 1/N, as Vox’s “algebraic error” analysis wrongly assumes).”

Kimura’s celebrated result “k=μ” (albeit solely for neutral mutations) now follows immediately.

For historical context, two (relatively recent) survey articles by Masatoshi Nei and colleagues are highly recommended: “Selectionism and neutralism in molecular evolution” (2005), and “The Neutral Theory of molecular evolution in the Genomic Era” (2010). In a nutshell, Kimura’s Neutral Theory raises many new questions — questions that at present are far from answered — and as Nei’s lively articles remind us:

“The longstanding controversy over selectionism versus neutralism indicates that understanding of the mechanism of evolution is fundamental in biology and that the resolution of the problem is extremely complicated. However, some of the controversies were caused by misconceptions of the problems, misinterpretations of empirical observations, faulty statistical analysis, and others.”

Nowadays “AI-amplified delusional belief-systems” should perhaps be added to Nei’s list of controversy-causes, as a fresh modern-day challenge to the reconciliation of the (traditional) Humanistic Enlightenment with (evolving) scientific understanding.

Another strikeout. He removed the non-reproducers twice, because he doesn’t understand the equation well enough to recognize that Ne already incorporates their non-reproduction, so he can’t eliminate them a second time. This is exactly the sort of error made by someone who knows the equation well enough to use it, but doesn’t actually understand what the various symbols mean.

Kimura remains unsalvaged. Both natural selection and neutral theory remain dead.

DISCUSS ON SG


Mailvox: the N/Ne Divergence

It’s easy to get distracted by the floundering of the critics, but those who have read and understood Probability Zero and The Frozen Gene are already beginning to make profitable use of them. For example, CN wanted to verify my falsification of Kimura’s fixation equation, so he did a study on whether N really is reliably different from Ne. His results are a conclusive affirmation of my assertion that the Kimura fixation equation is guaranteed to produce erroneous results and has been producing erroneous results for the last 58 years.

I’ll admit it’s rather amusing to contrast the mathematical ineptitude of the critics with readers who actually know their way around a calculator.


The purpose of this analysis is to derive a time‑averaged census population size, Navg, for the human lineage and to use it as a diagnostic comparator for empirically inferred effective population size Ne.

The motivation is that Ne is commonly interpreted—explicitly or implicitly—as reflecting a long‑term or historical population size. If that interpretation is valid, then Ne should be meaningfully related to an explicit time‑average of census population size Nt. Computing Navg from known census estimates removes ambiguity about what “long‑term” means and allows a direct comparison.

Importantly, Navg is not proposed as a replacement for Ne in population‑genetic equations. It is used strictly as a bookkeeping quantity to test whether Ne corresponds to any reasonable long‑term average of census population size or not.

Definition and derivation of Navg

Let Nt denote the census population size at time t, measured backward from the present, with t=0 at present and T>0 in the past.

For any starting time ti, define the time‑averaged census population size from ti to the present as:

Navg(ti) = (1/ti) ∫₀^ti Nt dt

Because Nt is only known at discrete historical points, the integral is evaluated using a piecewise linear approximation:

  1. Select a set of times at which census population estimates are available.
  2. Linearly interpolate Nt between adjacent points.
  3. Integrate each segment exactly.
  4. Divide by the total elapsed time ti.

This produces an explicit, reproducible value of Navg for each starting time ti.
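The four steps above can be sketched in a few lines. This is a minimal reproduction, not CN's actual code, and the 2026 census anchor (~8.31 B) is my assumption for the present-day population:

```python
def navg(anchors):
    """Time-averaged census size, integrating a piecewise-linear N(t)
    exactly (trapezoid rule). anchors: (years_before_present, N) pairs,
    ordered from the start time ti down to the present (0)."""
    area = 0.0
    for (t1, n1), (t2, n2) in zip(anchors, anchors[1:]):
        area += (n1 + n2) / 2 * (t1 - t2)  # exact integral of each linear segment
    return area / (anchors[0][0] - anchors[-1][0])

# 1950 CE window: 1950 (2.50 B), 2000 (6.17 B), assumed 2026 present (~8.31 B)
anchors = [(76, 2.50e9), (26, 6.17e9), (0, 8.31e9)]
print(round(navg(anchors) / 1e9, 2))  # → 5.33, matching the 1950 CE row
```

Adding older census anchors to the list extends the same computation to any of the deeper averaging windows.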

Census anchors used

  • Census population sizes Nt are taken from published historical and prehistoric estimates.
  • Where a range is reported, low / mid / high scenarios are retained.
  • For periods of hominin coexistence (e.g. Neanderthals), census counts are summed to represent the total human lineage.
  • No effective sizes Ne are used in the construction of Nt.

Present is taken as 2026 CE.

Results: Navg from ti to present

All values are people.
Nti is the census size at the start time.
Navg is the time‑average from ti to 2026 CE.

Start time ti | Years before present | Nti (low / mid / high) | Navg(ti – present) (low / mid / high)
2,000,000 BP (H. erectus) | 2,000,000 | 500,000 / 600,000 / 700,000 | 2.48 M / 2.86 M / 3.24 M
50,000 BCE (sapiens + Neanderthals) | 52,026 | 2.01 M / 2.04 M / 2.07 M | 48.5 M / 60.6 M / 72.7 M
10,000 BCE (early Holocene) | 12,026 | 5.0 M / 5.0 M / 5.0 M | 198 M / 250 M / 303 M
1 CE | 2,025 | 170 M / 250 M / 330 M | 745 M / 858 M / 970 M
1800 CE | 226 | 813 M / 969 M / 1.125 B | 2.76 B / 2.83 B / 2.90 B
1900 CE | 126 | 1.55 B / 1.66 B / 1.76 B | 4.02 B / 4.04 B / 4.06 B
1950 CE | 76 | 2.50 B / 2.50 B / 2.50 B | 5.33 B (all cases)
2000 CE | 26 | 6.17 B / 6.17 B / 6.17 B | 7.24 B (all cases)

Interpretation for comparison with Ne

  • Navg is orders of magnitude larger than empirical human Ne (typically ~10^4) for all plausible averaging windows.
  • This remains true even when averaging over millions of years and even under conservative census assumptions.
  • Therefore, Ne cannot be interpreted as:
    • an average census size,
    • a long‑term census proxy,
    • or a time‑integrated representation of Nt.

The comparison Navg > Ne holds regardless of where the averaging window begins, reinforcing the conclusion that Ne is not a demographic population size but a fitted parameter summarizing drift under complex, non‑stationary dynamics.


Kimura’s cancellation requires N = Ne. CN has shown that N ≠ Ne at every point in human history, under every averaging window, by orders of magnitude. The cancellation has never been valid. It was never a simplifying assumption that happened to be approximately true, it was always wrong, and it was always substantially wrong.

The elegance of k = μ was its selling point. Population size drops out! The substitution rate is universal! The molecular clock ticks independent of demography! It was too beautiful not to be true—except it isn’t true, because it depends on a variable identity that has never held for any sexually reproducing organism whose census population exceeds its effective population size. Which is all of them.

And the error doesn’t oscillate or self-correct over time. N is always larger than Ne—always, in every species, in every era. Reproductive variance, population structure, and fluctuating population size all push Ne below N. There’s no compensating mechanism that pushes Ne above N. The error is systematic and unidirectional.

Which means every molecular clock calibration built on k = μ is wrong. Every divergence time estimated from neutral substitution rates carries this error. Every paper that uses Kimura’s framework to predict expected divergence between species has been using a formula that was derived from an assumption that the author’s own model parameters demonstrate to be false.

DISCUSS ON SG


Preface to The Frozen Gene

I’m very pleased to announce that the world’s greatest living economist, Steve Keen, graciously agreed to write the preface to The Frozen Gene, which will appear in the print edition. The ebook and the audiobook will be updated once the print edition is ready in a few weeks.

Evolution is a fact, as attested by the fossil record, and modern DNA research. The assertion that evolution is the product of a random process is a hypothesis, which has proven inadequate, but which continues to be the dominant paradigm promulgated by prominent evolutionary theorists.

The reason it fails, as Vox Day and Claude Athos show in this book, is time. The time that it would take for a truly random mutation process, subject only to environmental selection of those random mutations, to generate and lock in mutations that are manifest in the evolutionary complexity we see about us today, is orders of magnitude greater than the age of the Universe, let alone the age of the Earth. The gap between the hypothesis and reality is unthinkably vast…

The savage demolition that Day and Athos undertake in this book of the statistical implications of the “Blind Watchmaker” hypothesis will, I hope, finally push evolutionary biologists to abandon the random mutation hypothesis and accept that Nature does in fact make leaps.

Read the whole thing there. There is no question that Nature makes leaps. The question, of course, is who or what is the ringmaster?

It definitely isn’t natural selection.

DISCUSS ON SG


The Reproducibility Crisis in Action

Now, I could not care less about the catastrophic state of professional science. Most scientists are midwits who are wholly incapable of ever doing anything more than chasing credentials, and the scientific literature ranges from about 50 percent to 100 percent garbage, depending upon the field. But I do feel sufficient moral duty to the great archive of human knowledge to bring it to the attention of the professionals when the very foundation upon which they’re basing a fairly significant proportion of their work is obviously, observably, and provably false.

So I submitted a paper calling attention to the fact that Kimura’s fixation model, upon which all neutral theory is based, is algebraically incorrect due to an erroneous cancellation in its derivation. In short, Kimura fucked up massively by assigning two different values to the same variable. In order to make it easy to understand, let me make an analogy about Democrats and Republicans in the US political system.

T = D + R, where D = 100 − R.

This looks reasonable at first glance. But in fact, D stands for two different things here: it stands for Democrats, and it stands for Not Republicans. These two numbers are always going to be different, because Democrats (47%) are not the same as Democrats + Independents (62%). So any derivation that cancels out D as part of an equation is always going to produce incorrect results. Even for the simplest equation, the percentage of the US electorate that is divided into Democrats and Republicans, instead of getting the correct answer of 85, the equation will produce an incorrectly inflated answer of 100.
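The arithmetic can be checked directly. R = 38 follows from the quoted figures, since Democrats + Independents = 62% implies Republicans = 38%:

```python
D = 47            # Democrats, percent of electorate
R = 38            # Republicans, percent of electorate
not_R = 100 - R   # "Not Republicans" = Democrats + Independents = 62

print(D + R)      # correct two-party total: 85
print(not_R + R)  # equivocating D with "Not Republicans": 100
```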

So you can’t just use D and D to represent both values. You would do well to use D and Di, which would make it obvious that they can’t cancel each other out. Kimura would have been much less likely to make his mistake, and it wouldn’t have taken 57 years for someone to notice it, if instead of N and N he had used Nc and Ne.

So I wrote up a paper with Athos and submitted it to a journal that regularly devotes itself to such matters. The title was: “Falsifying the Kimura Fixation Model: The Ne Equivocation and the Empirical Failure of Neutral Theory” and you can read the whole thing and replicate the math if you don’t want to simply take my word for it.

Kimura’s 1968 derivation that the neutral substitution rate equals the mutation rate (k = μ) has been foundational to molecular evolution for over fifty years. We demonstrate that this derivation contains a previously unrecognized equivocation: the population size N in the mutation supply term (2Nμ) represents census individuals replicating DNA, while the N in the fixation probability (1/2N) was derived under Wright-Fisher assumptions where N means effective population size. For the cancellation yielding k = μ to hold, census N must equal Ne. In mammals, census populations exceed diversity-derived Ne by 19- to 46-fold. If census N governs mutation supply while Ne governs fixation probability, then k = (N/Ne)μ, not k = μ. This fundamental error, present in both the original 1968 Nature paper and Kimura’s 1983 monograph, undermines the theoretical foundation of molecular clock calculations and coalescent-based demographic inference. Empirical validation using ancient DNA time series confirms that the Kimura model systematically mispredicts allele frequency dynamics, with an alternative model reducing prediction error by 69%.

This is a pretty big problem. You’d think that scientists would like to know that any results using that equation are guaranteed to be wrong and want to avoid that happening in the future, right? I mean, science is all about correcting its errors, right? That’s why we can trust it, right?

Ms. No.: [redacted]
Title: Falsifying the Kimura Fixation Model: The Ne Equivocation and the Empirical Failure of Neutral Theory
Corresponding Author: Mr Vox Day
All Authors: Vox Day; Claude Athos

Dear Mr Day,

Thank you for your submission to [redacted]. Unfortunately, the Editors feel that your paper is inappropriate to the current interests of the journal and we regret that we are unable to accept your paper. We suggest you consider submitting the paper to another more appropriate journal.

If there are any editor comments, they are shown below.

As our journal’s acceptance rate averages less than half of the manuscripts submitted, regretfully, many otherwise good papers cannot be published by [redacted].

Thank you for your interest in [redacted].

Sincerely,

Professor [redacted]
Co-Chief Editor
[redacted]

Apparently showing them that their math is guaranteed to be wrong is somehow inappropriate to their current interests. Which is certainly an informative perspective. Consider that after being wrong for fifty straight years, they’re just going to maintain that erroneous course for who knows how many more?

Now, I don’t care at all about what they choose to publish or not publish. I wouldn’t be protecting the identities of the journal or the editor if I did. It’s their journal, it’s their field, and if they want to be reliably wrong, that’s not my problem. I simply fulfilled what I believe to be my moral duty by bringing the matter to the attention of the appropriate authorities. Having done that, I can focus on doing what I do, which is writing books and blog posts.

That being said, this is an illustrative example of why you really cannot trust one single thing coming out of the professional peer-reviewed and published scientific literature.

DISCUSS ON SG


Richard Dawkins’s Running Shoes

Evolution and the Fish of Lake Victoria

Richard Dawkins loves the cichlid fish of Lake Victoria. In his 2024 book The Genetic Book of the Dead, he calls the lake a “cichlid factory” and marvels at what evolution accomplished there. Four hundred species, he tells us, all descended from perhaps two founder lineages, all evolved in the brief time since the lake last refilled—somewhere between 12,400 and 100,000 years depending on how you count. “The Cichlids of Lake Victoria show how fast evolution can proceed when it dons its running shoes,” he writes. He means this as a compliment to natural selection. Look what it can do when conditions are right!

Dawkins even provides a back-of-the-envelope calculation to reassure us that 100,000 years is plenty of time. He works out that you’d need roughly 800 generations between speciation events to produce 400 species. Cichlids mature in about two years, so 800 generations is 1,600 years. Comfortable margin. He then invokes a calculation by the botanist Ledyard Stebbins showing that even very weak selection—so weak you couldn’t measure it in the field—could turn a mouse into an elephant in 20,000 generations. If a mouse can become an elephant in 20,000 generations, surely a cichlid can become a slightly different cichlid in 800? “I conclude that 100,000 years is a comfortably long time in Cichlid evolution,” Dawkins writes, “easily enough time for an ancestral species to diversify into 400 separate species. That’s fortunate, because it happened!”

Well, it certainly happened. But whether natural selection did it is another question—one Dawkins never actually addresses.

You see, Dawkins asks how many speciation events can fit into 100,000 years. That’s the wrong question. Speciation events are just population splits. Two groups of fish stop interbreeding. That part is easy. Fish get trapped in separate ponds during a drought, the lake refills, and now you have two populations that don’t mix. Dawkins describes exactly this process, and he’s right that it doesn’t take long.

But population splits don’t make species different. They just make them separate. For the populations to become genetically distinct—to accumulate the DNA differences that distinguish one species from another—something has to change in their genomes. Mutations have to arise and spread through each population until they’re fixed: everyone in population A has the new variant, everyone in population B either has a different variant or keeps the original. That process is called fixation, and it’s the actual genetic work of divergence.

The question Dawkins should have asked is: how many fixations does cichlid diversification require, and can natural selection accomplish that many in the available time?

Let’s work it out, back-of-the-envelope style, just as Dawkins likes to do.

When geneticists compare cichlid species from Lake Victoria, they find the genomes differ by roughly 0.1 to 0.2 percent. That sounds tiny, and it is—these are very close relatives, as you’d expect from such a recent radiation. But cichlid genomes are about a billion base pairs long. A tenth of a percent of a billion is a million. Call it 750,000 to be conservative. That’s how many positions in the genome are fixed for different variants in different species.

Now, how many fixations can natural selection actually accomplish in the time available?

The fastest fixation rate ever directly observed comes from the famous Long-Term Evolution Experiment with E. coli bacteria—Richard Lenski’s project that’s been running since 1988. Under strong selection in laboratory conditions, beneficial mutations fix at a rate of about one per 1,600 generations. That’s bacteria, mind you—asexual organisms that reproduce every half hour, with no messy complications from sex or overlapping generations. For sexual organisms like fish, fixation is almost certainly slower. But let’s be generous and grant cichlids the bacterial rate.

One hundred thousand years at two years per generation gives us 50,000 generations. Divide by 1,600 generations per fixation and you get 31 achievable fixations. Let’s round up to 50 to be sporting.

Fifty fixations achievable. Seven hundred fifty thousand required.

The shortfall is 15,000-fold.

If we use the more recent date for the lake—12,400 years, which Dawkins mentions but sets aside—the situation gets worse. That’s only about 6,000 generations, yielding perhaps 3 to 5 achievable fixations. Against 750,000 required.

The shortfall is now over 100,000-fold.
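The whole back-of-the-envelope can be run in a few lines. Using the unrounded fixation counts (31.25 and ~3.9, before the generous round-up to 50), the shortfalls come out even larger than the figures above:

```python
REQUIRED = 750_000   # fixed differences implied by ~0.1% divergence over a 1 Gb genome
GEN_YEARS = 2        # cichlid generation time, years
GENS_PER_FIX = 1600  # LTEE rate: ~1 beneficial fixation per 1,600 generations

for years in (100_000, 12_400):
    achievable = (years / GEN_YEARS) / GENS_PER_FIX
    shortfall = REQUIRED / achievable
    print(f"{years:,} years: {achievable:.1f} fixations achievable, shortfall {shortfall:,.0f}x")
```

For the 100,000-year date this gives a 24,000-fold shortfall; for the 12,400-year date, roughly 194,000-fold.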

Here’s the peculiar thing. Dawkins chose the Lake Victoria cichlids precisely because they evolved so fast. They’re his showpiece, his proof that natural selection can really motor when it needs to. “Think of it as an upper bound,” he says.

But that speed is exactly the problem. Fast diversification means short timescales. Short timescales mean few generations. Few generations mean few fixations achievable. The very feature Dawkins celebrates—the blistering pace of cichlid evolution—is what makes the math impossible.

His mouse-to-elephant calculation doesn’t help. Stebbins was asking a different question: how long for selection to shift a population from one body size to another? That’s about the rate of phenotypic change. MITTENS asks about the amount of genetic change—how many individual mutations must be fixed to account for the observed DNA differences between species. The rate of change can be fast while the throughput remains limited. You can sprint, but you can’t sprint to the moon.

Dawkins’s running shoes turn out to be missing their soles. And their shoelaces.

None of this means the cichlids didn’t diversify. They obviously did, since the fish are right there in the lake, four hundred species of them, different colors, different shapes, different diets, different behaviors. The fossils (such as they are), the history, and the DNA all confirm a rapid radiation. That happened.

What the math shows is that natural selection, working through the fixation of beneficial mutations, cannot have done the genetic heavy lifting. Not in 100,000 years. Not in a million. The mechanism Dawkins invokes to explain the cichlid factory cannot actually run the factory.

So what did? That’s not a question I can answer here. But I can say what the answer is not. It’s not the process Dawkins describes so charmingly in The Genetic Book of the Dead. The back-of-the-envelope calculation he should have done—the one about fixations rather than speciations—shows that his explanation fails by five orders of magnitude.

One hundred thousand times short.

That’s quite a gap. You don’t close a gap like that by adjusting your assumptions or finding a more generous estimate of generation time. You close it by admitting that something is fundamentally wrong with your model.

Dawkins tells us the Lake Victoria cichlids show “how fast evolution can proceed when it dons its running shoes.” He’s right about the speed. He’s absolutely wrong about the shoes. Natural selection can’t run that fast. Nothing that works by fixing mutations one at a time, or even a thousand at a time, can run that fast.

The cichlids did something. But whatever they did, it wasn’t what Dawkins thinks.


And speaking of the cichlid fish, as it happens, the scientific enthusiasm for them means we can demonstrate the extent to which it is mathematically impossible for natural selection to account for their observed differences. For, you see, we recently extended our study of MITTENS from the great apes to a wide range of species, including the cichlid fish.

From “The Universal Failure of Fixation: MITTENS Applied Across the Tree of Life”:

Lake Victoria Cichlids: The Lake Victoria cichlid radiation is perhaps the most famous example of explosive speciation. Over 500 species arose in approximately 15,000 years from a small founding population following a desiccation event around 14,700 years ago (Brawand et al. 2014). At 1.5 years per generation, this provides only 10,000 generations. Even with d = 0.85, achievable fixations = (10,000 × 0.85) / 1,600 = 5.

Interspecific nucleotide divergence averages 0.15% over a 1 Gb genome, requiring approximately 750,000 fixations to differentiate species. Shortfall: 750,000 / 5.3 ≈ 141,500×.
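The quoted formula can be evaluated directly. This is a sketch of the paper's arithmetic using its stated inputs (d = 0.85, 10,000 generations, 1,600 generations per fixation); the unrounded result is 5.3125 achievable fixations:

```python
def achievable_fixations(generations, d, gens_per_fix=1600):
    """MITTENS throughput bound: generations discounted by the
    Selective Turnover Coefficient d, divided by the cost per fixation."""
    return generations * d / gens_per_fix

a = achievable_fixations(10_000, d=0.85)  # Lake Victoria cichlid window
print(a)            # 5.3125 -- the ~5 achievable fixations
print(750_000 / a)  # ~141,000-fold shortfall
```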

This is a devastating result. The radiation celebrated as evolution’s greatest achievement fails MITTENS by 141,000-fold. Five fixations achievable; three-quarters of a million required.

The math does not work. Again.

DISCUSS ON SG


MITTENS and the Monkeys

That’s not taxonomically correct, as neither chimpanzees nor bonobos are, strictly speaking, monkeys. But why resist a perfectly good alliteration considering how flexible the biologists have gotten to be where speciation is concerned, right?

Anyhow, one of the obvious and more shortsighted objections to MITTENS is that its formal presentation focused solely on the human-chimpanzee divergence, although literally from the moment of its origin its claims have been all-encompassing with regard to all genetic divergences between all species. I simply hadn’t gotten around to digging up the genomic evidence required to empirically anchor the math and the logic involved. One has to start somewhere, after all, and complaining that an initial test of a hypothesis is not all-inclusive is not a reasonable objection.

But now that PZ and TFG are both out, I can take some time to fill in the blanks and explore a few interesting lines of possibility, and to hunt down the various escape routes that the increasingly desperate IFLSists are attempting to find. So, I downloaded several gigabytes of data from the Great ape genome diversity program at the University of Vienna, crunched the numbers, and can now demonstrate that the expected shortfall in the fixation capacity definitely applies to the chimp-bonobo divergence as well as two intra-chimpanzee divergences.

As before, this is an approach with assumptions favorable to the post-Darwinian New Modern Synthesis, as we went with the traditional 20 years for a chimpanzee generation rather than the most recent calculation of 22 years. However, we also discovered an anomaly which is reflected in the title “The Pan Paradox: MITTENS Applied to Chimpanzee Subspecies Divergence”, because in addition to supporting MITTENS, the evidence also directly contradicts neutral theory.

The MITTENS framework (Mathematical Impossibility of The Theory of Evolution by Natural Selection) demonstrated a 220,000-fold shortfall in the fixation capacity required to explain human-chimpanzee divergence. A natural objection holds that this represents a special case—perhaps the human-chimp comparison uniquely violates the model’s assumptions. We test this objection by applying MITTENS to divergence within the genus Pan: the split between bonobos and chimpanzees, and the subsequent radiation of chimpanzee subspecies. Using genomic data from the Kuhlwilm et al. (2025) Great Ape Genome Diversity Panel comprising 67 wild Pan individuals, we identify 1,811,881 fixed differences between subspecies and calculate achievable fixations given published divergence times and effective population sizes. Using 20-year generations (shorter generations favor the standard model) and the empirically-derived Selective Turnover Coefficient d = 0.86 for wild chimpanzees, the bonobo-chimpanzee split (930,000 years, 40,000 effective generations) permits a maximum of 25 fixations—a shortfall of at least 13,000-fold against the observed fixed differences. Subspecies divergences show comparable failures: Western versus Central chimpanzees (460,000 years) fail by ~7,500-fold; Central versus Eastern (200,000 years) fail by ~3,600-fold.

You can read the whole paper here if you like. I’ve also added a link on the left sidebar to provide regular access to my open repository of science papers for those who are interested, since I seldom talk about most of them here, or anywhere else, for that matter.

And we’re back with a vengeance. Thanks to everyone who has bought the book, and especially to those who have read and reviewed it. Hopefully we’ll be 1 and 2 in Biology before long.

DISCUSS ON SG