2 Billion Generations of Nothing

It was just remarkable, with this evolutionary distance, that we should see such coherence in gene expression patterns. I was surprised how well everything lined up.

—Dr. Robert Waterston, co-senior author, Science (2025)

If one wanted to design an experiment to give natural selection the best possible chance of demonstrating its creative power, it would be hard to improve on the nematode worm.

Caenorhabditis elegans is about a millimeter long and consists of roughly 550 cells. It has a generation time of approximately 3.5 days. It produces hundreds of offspring per individual. Its populations are enormous. Its genome is compact—about 20,000 genes, comparable in number to ours but without the vast regulatory architecture that slows everything down in mammals. The worms experience significant selective pressure: most offspring die before reproducing, which means natural selection has plenty of raw material to work with. And critically, worms have essentially no generation overlap. When a new generation hatches, the old generation is dead or dying. Every generation represents a complete turnover of the gene pool. There is no drag, no cohort coexistence, no grandparents competing with grandchildren for resources.

In the notation of the Bio-Cycle Fixation Model, the selective turnover coefficient for C. elegans is approximately d = 1.0. Compare that to humans, where we have shown d ≈ 0.45. The worm is running the evolutionary engine at full throttle. No brakes, no friction, no generational overlap gumming up the works.

Now consider the timescale. C. elegans and its sister species C. briggsae diverged from a common ancestor approximately 20 million years ago. At 3.5 days per generation, that is roughly two billion generations. To put that in perspective, the entire history of the human lineage since the putative chimp-human divergence—six to seven million years at 29 years per generation—amounts to something like 220,000 generations. The worms have had nearly ten thousand times as many generations to diverge. Ten thousand times.

Two billion generations, running the evolutionary engine at maximum speed, with enormous populations, high fecundity, complete generational turnover, and all the raw material that natural selection could ask for. If there were ever a case where the neo-Darwinian mechanism should produce spectacular results, this is it.

So what did it produce? Nothing.

In June 2025, a team led by Christopher Large and co-senior authors Robert Waterston, Junhyong Kim, and John Isaac Murray published a landmark study in Science comparing gene expression patterns in every cell type of C. elegans and C. briggsae throughout embryonic development. Using single-cell RNA sequencing, they tracked messenger RNA levels in individual cells from the 28-cell stage through to the formation of all major cell types—a process that takes about 12 hours in these organisms.

What they found is what Dr. Waterston described, with evident surprise, as “remarkable coherence.” Despite 20 million years and two billion generations of evolution, the two species retain nearly identical body plans with an almost one-to-one correspondence between cell types. The developmental program—when and where each gene turns on and off as the embryo develops—has been conserved to a degree that startled even the researchers.

Gene expression patterns in cells performing basic functions like muscle contraction and digestion were essentially unchanged between the two species. The regulatory choreography that builds a worm from a fertilized egg—which genes activate in which cells at which times—was so similar across 20 million years that the researchers could map one species’ cells directly onto the other’s.

Where divergence did occur, it was concentrated in specialized cell types involved in sensing and responding to the environment. Neuronal genes, the researchers noted, “seem to diverge more rapidly—perhaps because changes were needed to adapt to new environments.” But even this divergence was modest enough that Kim, one of the co-senior authors, noted the most surprising finding was not that some expression was conserved—the body plans are obviously similar, so that’s expected—but that “when there were changes, those changes appeared to have no effect on the body plan.”

Read that again. The changes that the mechanism did produce over two billion generations had no detectable effect on how the organism is built. The divergence was, as far as the researchers could determine, functionally trivial.

Murray, the study’s third senior author, offered the most revealing comment of all: “It’s hard to say whether any of the differences we observed were due to evolutionary adaptation or simply the result of genetic drift, where changes happen randomly.”

After two billion generations, the researchers cannot confidently identify a single adaptive change in gene expression. They cannot point to one cell type, one gene, one regulatory switch and say: natural selection did this. Everything they found is equally consistent with random noise.

Now, the standard response to findings like this is to invoke purifying selection, also known as stabilizing selection. The argument goes like this: most mutations are deleterious, so natural selection acts primarily to remove harmful changes rather than to accumulate beneficial ones. Gene expression patterns are conserved because any change to a broadly-expressed gene would disrupt too many downstream processes. The machinery is locked down precisely because it works, and selection fiercely punishes any attempt to modify it.

This is true. Purifying selection is real, well-documented, and no one disputes it. But invoking it as an explanation only deepens the problem for the neo-Darwinian account of speciation.

The theory of evolution by natural selection claims that the same mechanism, random mutation filtered by selection, both preserves existing adaptations and creates new ones. The worm data shows empirically what the constraint looks like. The vast majority of the genome is locked down. Expression patterns involving basic cellular functions are untouchable. The only genes free to diverge are those expressed in a few specialized cell types, and even those changes are so subtle that the researchers can’t distinguish them from genetic drift.

This is the genome’s evolvable fraction, and it is small. The regulatory architecture that controls development, the transcription factor binding sites, the enhancer networks, the chromatin structure that determines which genes are accessible in which cells, is so deeply entrenched that two billion generations of nematode reproduction cannot budge it.

And here’s the question no one asked: how did that regulatory architecture get there in the first place?

If the current architecture is so tightly constrained that it resists modification across two billion generations, then building it in the first place required an even more extraordinary series of changes. Every transcription factor binding site had to be fixed. Every enhancer had to be positioned. Every element of the chromatin landscape that determines which genes are expressed in which cell types had to be established through sequential substitutions. This is what we call the shadow accounting problem. The very architecture now being invoked to explain why the worm hasn’t changed is itself a product that requires explanation under the same model. The escape hatch invokes a mechanism whose existence demands an even larger prior expenditure of the same mechanism—an expenditure that the breeding reality principle tells us was itself problematic.

Let us be precise about the scale of the failure. The MITTENS analysis, as published in Probability Zero, establishes that the neo-Darwinian mechanism of natural selection faces multi-order-of-magnitude shortfalls when asked to account for the fixed genetic differences between closely related species. The worm study provides an independent empirical check on this conclusion from the opposite direction.

Instead of asking “can the mechanism produce the required divergence in the available time?” and discovering that it cannot, the worm study asks “what does the mechanism actually produce when given enormous amounts of time under ideal conditions?” and discovers that the answer is exactly what MITTENS proves: essentially nothing.

Two billion generations with every parameter set to maximize the rate of adaptive change, with short generation times, high fecundity, large populations, complete generational turnover, and a compact genome, nevertheless produced two organisms so similar that researchers can map their cells one-to-one. The divergence that did occur was concentrated in a few specialized cell types and could not be confidently attributed to adaptation.

Now scale this down to the conditions that supposedly produced speciation in large mammals. A large mammal has a generation time of 10 to 20 years. Its fecundity is low, with a few offspring per lifetime instead of hundreds. Its effective population size is small. Its generation overlap is substantial (d ≈ 0.45, meaning that less than half the gene pool turns over per generation). Its genome is vastly larger and more complex, with regulatory architecture orders of magnitude more elaborate than a nematode’s.

The number of generations available for speciation in large mammals is measured in the low hundreds of thousands. The worms had two billion and produced nothing visible. On what basis should we believe that a mechanism running at a fraction of the speed, with a fraction of the population size, a fraction of the fecundity, a fraction of the generational turnover, and orders of magnitude more regulatory complexity to navigate, can accomplish what the worms could not?

The question answers itself.

“The worms are under strong stabilizing selection. Other lineages face different selective pressures that drive divergence.”

No one disputes that stabilizing selection explains the stasis. The problem is what happens when you look at the fraction that isn’t stabilized. Two billion generations of mutation, selection, and drift operating on the unconstrained portion of the genome produced changes that (a) affected only specialized cell types, (b) didn’t alter the body plan, and (c) couldn’t be distinguished from drift. If the creative power of natural selection operating on the evolvable fraction of the genome is this feeble under ideal conditions, it does not become more powerful when you make conditions worse.

“Worms are simple organisms. Complex organisms have more regulatory flexibility.”

This gets the argument backward. Greater complexity means more regulatory interdependence, which means more constraint, not less. A change to a broadly-expressed gene in an organism with 200 cell types is more dangerous than a change to a broadly-expressed gene in an organism with 30 cell types, because there are more downstream processes to disrupt. The more complex the organism, the smaller the evolvable fraction of the genome becomes relative to the locked-down fraction.

“Twenty million years is a short time in evolutionary terms.”

It is 20 million years in clock time but two billion generations in evolutionary time. The relevant metric for evolution is not years but generations, because selection operates once per generation. Two billion generations for a nematode is equivalent, in terms of opportunities for selection to act, to 58 billion years of human evolution at 29 years per generation. That’s more than four times the age of the universe. If the mechanism can’t produce meaningful divergence in the equivalent of four universe-lifetimes, the mechanism obviously doesn’t function at all.

“The study only looked at gene expression, not genetic sequence. There could be extensive sequence divergence not reflected in expression.”

There is sequence divergence, and it’s well-documented. C. elegans and C. briggsae differ at roughly 60-80% of synonymous sites and show substantial divergence at non-synonymous sites as well. The point is that this sequence divergence has not produced meaningful functional divergence. The genes have changed, but what they do and when they do it has remained largely the same. Sequence divergence without functional divergence is exactly what you’d expect from neutral drift operating on a tightly constrained system—and it is exactly the opposite of what you’d expect if natural selection were the creative engine the theory claims it to be.

The Science study is good science. The researchers accomplished something genuinely unprecedented: a cell-by-cell comparison of gene expression between two species across the entire course of embryonic development. The technical accomplishment is significant, and the evidence it produced is highly valuable.

But the data is reaching a conclusion that the researchers are not eager to draw. Two billion generations of evolution, operating under conditions more favorable than any large animal will ever experience, failed to produce any meaningful or functional divergence between two species. The mechanism ran at full speed for an incomprehensible span of time, and the result was the same worm.

This is not a philosophical objection to evolution. It is not an argument from personal incredulity or religious conviction. It is the straightforward empirical observation that the proposed mechanism, given every possible advantage, does not produce the results attributed to it. The creative power of natural selection, when measured rather than assumed, turns out to be approximately zero.

Two billion generations of nothing. A worm frozen in time. That’s what the data shows. And that’s exactly what Probability Zero predicted.

DISCUSS ON SG


Veriphysics: Triveritas vs Trilemma

So yesterday, I posted about the Agrippan Trilemma, also known in its modern formulation as the Münchhausen Trilemma, which is considered a significant philosophical device that has successfully asserted how any attempt to justify knowledge leads to one of three unsatisfactory outcomes: circular reasoning, infinite regress, or dogmatic assertion. A number of you agreed that this was a worthy challenge that would provide a suitable test for the epistemological strength of the Triveratas.

And while the purpose of Veriphysics is not to expose the flaws in ancient or modern philosophy, as it happens, the Triveritas is not only the first epistemological system to be able to defend itself successfully from the Trilemma, but in the process of defending the Triveritas from it, Claude Athos and I identified a fundamental flaw in the Trilemma itself that renders it invalid and falsifies its claims to universality.

So, if you are philosophically inclined, I invite you to read a Veriphysics working paper that both solves the Trilemma for the first time in nearly 2,000 years while additionally demonstrating its invalidity.

Solving the Agrippan Trilemma: Triveritas and the Third Horn

The Agrippan Trilemma holds that any attempt to justify a claim must terminate in infinite regress, circularity, or dogmatic stopping. No major epistemological framework has solved it; each concedes one horn. This paper solves the Trilemma by demonstrating that the Triveritas survives all three horns, identifying an amphiboly in the third horn that renders the argument invalid, and providing a counterexample that falsifies the Trilemma’s claim to universality. The Trilemma’s third horn rests on an amphiboly: it conflates “terminates” with “terminates arbitrarily,” treating the two as logically equivalent. They are not. The Triveritas, which requires the simultaneous satisfaction of three independently necessary epistemic conditions (logical validity, mathematical coherence, and empirical anchoring), terminates at three stopping points of fundamentally different kinds, each checked by the other two. The probability of error surviving all three checks is strictly less than the probability of surviving any one; this is proved mathematically and confirmed empirically across twelve historical cases. Termination that is independently cross-checked across three dimensions is not arbitrary. It is not dogmatic. And it is not the same epistemic defect the Trilemma identifies. The third horn breaks because the Trilemma never distinguished checked termination from unchecked termination, and that distinction is the one upon which the entire Trilemma and its claim to universality depend.

DISCUSS ON SG


Veriphysics: The Treatise 019

II. The Name and Its Meaning

A philosophy requires a name, something that is more than an identifying label, something that serves to describe its essential orientation. The word should be memorable, pronounceable, and meaningful. It should capture the philosophy’s core insight and clearly distinguish the framework from its rivals. In addition to its identity, it also requires an objective and a foundation.

Veriphysics was chosen as the name for this new philosophy because unlike classical philosophy, which is focused on knowledge, metaphysics which examines the nature of reality, Scholasticism which combines the classical tradition with Christian theology, and Enlightenment philosophy, which claims to be established on reason but is based upon the hidden knowledge known as gnosis, veriphysics is focused solely on truth, or veritas. Every aspect of veriphysics is meant to explore and expand the concept of truth to the greatest extent possible, through every path that is capable to leading to some aspect of the singular, core, and underlying Truth.

The objective of veriphysical philosophy is veriscendance. Veriscendance derives from two roots: veritas and ascendance, suggesting both ascent and transcendence. This fusion is a deliberate choice. Veriscendance is defined as the end result of ascending through the various limited aspects of truth that humanity is capable of perceiving toward ultimate Truth, thereby recognizing the fact that human knowledge genuinely grasps various aspects of reality while acknowledging that the full truth about the comprehensive scope of existence across all its various dimensions intrinsically exceeds both our conceptual grasp as well as the limits of our knowledge.

Even the name of this objective therefore rejects the hubris of the Enlightenment’s epistemology. The Enlightenment imagined that autonomous reason could eventually achieve a God’s-eye perspective of existence, that sufficient improvement in method would somehow yield complete knowledge, and that every aspect of the universe was both a) material and b) would eventually be attainable through human inquiry. This fantasy has been entirely refuted by the very sciences the Enlightenment celebrated. Quantum mechanics has revealed the irreducible indeterminacy at the foundations of matter. Cosmology declares that ninety-five percent of the universe is dark matter and dark energy, unobservable and unexplained, and identified only by its gravitational effects. The Enlightenment materialism that once promised to explain everything now cannot account for most of what its own methods declares to be real and material.

Veriphysics is constructed on a series of very different axioms. It declares that human knowledge is real, but incomplete, genuine but inherently limited. As the apostle Paul declared, we see as though through a glass, darkly. The image in the glass is not an illusion or a shadow, it corresponds to reality, it can be refined and clarified, and it supports both genuine understanding and meaningful action. But the image is not, and it can never be, the thing itself. It can never be more than a small part of the thing. We cannot conceive the whole. The fullness of Truth exceeds and transcends both our present and our future capabilities. We ascend toward it but we do not arrive at it, not in this life and almost certainly not in the next either.

This is not skepticism. The skeptic denies that the glass portrays anything real. Veriphysics affirms that it does. The image is partial, but it is an image of something real. The ascent is incomplete, but it is neveretheless a genuine advancement toward something concrete. Truth exists, it is knowable, we genuinely know what we know, and we know more than the mere fact of our own cognition. The partial nature of the truth that is accessible to us is not a defect to be overcome by improved methodologies, it is a feature of our cognition as creatures, a limit designed into the structure of finite minds approaching the reality of the infinite.

In other words, the distinction between reason and revelation is intrinsically false. They are merely two different paths to the same end.

The objective of veriphysics also carries a connotation of elevation in the political sense, of dominance, of supremacy, and of the correct ordering of intellectual and social life. This connotation is intentional. Veriphysics necessarily means that an orientation toward the truth must order society and intellect, that the pursuit of truth is not one value among many but the architectonic value that makes all the others coherent and meaningful. A civilization that abandons truth as a fundamental objective does not cannot achieve either neutrality or progress, it instead assures chaos, manipulation, and degeneration.

Veriphysics is a necessary goal for the humanist, because the societal pursuit of truth is a precondition of human flourishing.

You can now buy the complete Veriphysics: The Treatise at Amazon in both Kindle and audiobook formats if you’d like to read ahead or have it available as a reference. Thanks to many of the readers here, it is presently a #1 bestseller in both Epistemology and Metaphysics.

DISCUSS ON SG


Veriphysics: The Treatise 018

Part Three: The Path Toward Truth

I. Introduction: The Possibility of Renewal

The Enlightenment has failed. Its political philosophy produced tyranny wearing the mask of liberation. Its economics produced models that do not describe reality and policies that impoverish the very populations they claimed they would enrich. Its science produced institutions incapable of correcting their own errors and a theory of life that cannot survive basic arithmetic. Its epistemology consumed itself, beginning with the enthronement of reason and ending with its abdication and abandonment.

The classical tradition that preceded the Enlightenment, though superior in substance, failed to defend itself. It spoke to specialists while its opponents spoke to the public. It defended when it should have attacked. It assumed good faith in a rhetorical war. It possessed the tools of logic, mathematics, and empirical inquiry but did not deploy them as weapons. It was outmaneuvered, outspent, and outpublicized by an opponent whose arguments could not have survived serious scrutiny.

There is an intellectual void left by the Enlightenment’s collapse. Human beings cannot live without coherent frameworks for understanding their reality, grounding their morality, informing their decisions, and orienting their actions. The borrowed capital of Christendom and the societal inertia, upon which the Enlightenment drew even as it denied its debts, has been exhausted.

Something will fill the vacuum. The vital question is whether whatever fills it will be true.

This is not a question for passive observers. The void will be filled by whoever seeks to fill it, by those with the will, the resources, and the vision to offer an alternative. If the heirs of the tradition offer nothing, then something else will take its place: a new ideology, a new materialism, a new paganism, a new barbarism, and a new madness. The opportunity is real but it will not wait forever.

What is required is neither a museum restoration of the medieval synthesis nor an accommodation to postmodernity that sacrifices substance for elite approval. What is needed is a genuine philosophical renewal, the construction of an intellectual framework that recovers what is true in the tradition, incorporates what has been genuinely learned in the intervening centuries, and provides the tools necessary to distinguish truth from falsehood with greater force and rigor.

That framework is Veriphysics.

DISCUSS ON SG

NB: You can now buy the complete Veriphysics: The Treatise at Amazon in both Kindle and audiobook formats if you’d like to read ahead or have it available as a reference. Thanks to many of the readers here, it is presently a #1 bestseller in both Epistemology and Metaphysics.


Veriphysics: The Treatise 017

VIII. The Shape of Renewal

The path forward is not a return to pure dialectic, as though the lessons of the Enlightenment’s victory over the last three centuries could simply be unlearned. Nor is it an embrace of pure rhetoric, which would make a neoclassical tradition no better and no more viable than its opponents. It is the synthesis that the Enlightenment pretended to offer but never delivered, the combination of genuine logical rigor, with genuine mathematical abstraction connected to genuine empirical grounding, all deployed with rhetorical effectiveness, that is the optimal philosophical path.

This requires several things.

First, it requires calling all the bluffs. Every Enlightenment claim that invokes reason, mathematics, or evidence must be challenged to produce the reasoning, the equations, and the evidence. These challenges must be pressed relentlessly, and publicly, until the bankruptcy is fully exposed. The tradition has been too polite and too willing to assume good faith on the part of opponents who relentlessly operate in bad faith. That philosophical courtesy must end.

Second, it requires actually doing the intellectual labor. It is not enough to assert that the tradition has logic, mathematics, and evidence on its side. The logic must be articulated clearly. The mathematics must be calculated accurately and presented accessibly. The evidence must be gathered and displayed. The tradition must mint real philosophical currency and spend it lavishly.

Third, it requires addressing the public. The specialized vocabulary that served the tradition well in the seminar room is a liability in the public square. The arguments must be translated, popularized, and even dumbed down where necessary in order to make them accessible to the laymen who lacks specialist training. Clarity is not the enemy of rigor; it is its completion.

Fourth, it requires going on offense. The tradition has played defense long enough. The Enlightenment’s premises are vulnerable, and are even more vulnerable than they have ever been now that their evil consequences are manifest. Those premises must be attacked: the autonomous reason that cannot ground itself, the social contract that no one signed, the invisible hand that does not exist, the progress that has not occurred. The tradition must set the agenda rather than respond to it.

Fifth, it requires building institutions. The Enlightenment understood that ideas require infrastructure. The new philosophical tradition must understand this too. Alternative platforms, alternative credentials, alternative networks of patronage and publication must be created, funded, policed, and sustained. A long game is not only in order, it is necessary.

Now, these actions are not strictly necessary. The Enlightenment is dying of its own contradictions. The tradition that it displaced remains true. The tools that the Enlightenment falsely claimed, logic, mathematics, and empirical evidence, are readily available to those willing to use them honestly. The rhetorical landscape has gradually shifted in ways that favor truth over propaganda, and rhetoric supported by dialectic over pure, groundless rhetoric.

What is needed is a philosophical framework that unites these elements: the perennial insights of the tradition, the rigorous methods it always possessed, the empirical data now available, and the rhetorical effectiveness necessary to make truth prevail. Such a framework would not be a revival of Scholasticism, nor a capitulation to Enlightenment terms, but something truly new, a genuine advancing of the historical classical tradition that is capable of meeting the various intellectual needs of the present.


Since a number of people have asked me to make these posts available in ebook form, I have done so. Please note that this is not the complete work, it is only the 20,000-word treatise that contains the first two parts that have previously appeared here on the blog, as well as the third part, entitled The Path Toward Truth. I do not know when the complete work will be done and I do not have any target date for doing so.

DISCUSS ON SG


Veriphysics: The Treatise 016

VII. The Counterfeit and the Real

The deepest irony of the Enlightenment’s triumph is that its self-proclaimed weapons of reason, mathematics, and empirical evidence were all counterfeits, while the tradition possessed the genuine articles but failed to deploy them effectively.

The Enlightenment claimed reason but practiced rhetoric. Its arguments were not demonstrations but performances, designed to persuade rather than prove. When the arguments were examined carefully, as Hume examined causation, as Kant examined pure reason, and as the positivists examined verification, they dissolved under it. The Enlightenment’s elevation of human reason was a promise that could never be fulfilled.

The Enlightenment claimed to be mathematically sound but refrained from actually doing the calculations. When the calculations were finally done, whether it be Gorman on demand curves, the Wistar mathematicians on mutation rates, or the various genomic analyses of the twenty-first century, they uniformly refuted the Enlightenment’s claims. The mathematics was available all along but the Enlightenment simply never submitted to its discipline despite the public posturing of the empiricists.

The Enlightenment claimed empirical evidence while immunizing its core axioms from empirical testing. The social contract is not an empirical claim; it is a philosophical posture. The invisible hand is not a testable hypothesis, it is a literary metaphor. The perfectibility of man is not an objective subject to falsification, it is a groundless faith. Whenever empirical evidence contradicted Enlightenment expectations, as it has, repeatedly, across every domain, the evidence was either reinterpreted or ignored. Enlightenment empiricism was selective, avoided, and ultimately proved to be fraudulent.

The tradition, by contrast, had the real currency. Its logical tools were genuine; its openness to evidence was principled; its capacity for mathematical reasoning had been demonstrated over centuries. But the tradition did not mint this currency for public circulation. It kept its intellectual gold in the vault while the Enlightenment flooded the market with counterfeits. By the time the fakes were exposed, the Enlightenment had already bought up everything that mattered.

However, the situation today is not the situation in which the eighteenth-century intellectuals found themselves facing. The Enlightenment’s institutional monopoly, while formidable, is observably cracking. The prestige of its credentials is declining with every passing year. The failures documented in Part One are increasingly visible to ordinary observers as well as to specialists. The rhetoric of “science says” and “experts agree” and “studies show” no longer commands belief because far too many lies have been told in the name of science.

More importantly, the empirical data now exists to anchor the critical arguments that were previously abstract. The human and chimpanzee genomes have been mapped; the calculations can be done; the impossibility of Neo-Darwinism can be demonstrated and mathematically proved, not merely asserted. The economic data of three decades of free trade is available, the predictions can be checked and the failures can be confirmed. The democratic outcomes of two centuries of representative government can be examined; the gap between promise and performance can be measured.

The tradition’s arguments were always sound. What was lacking was the empirical anchor that would make them irrefutable and the rhetorical strategy that would make them heard. The empirical anchor now exists. The rhetorical landscape has shifted. The opportunity is real and the time is now.

DISCUSS ON SG


The End of Deep Time

This development is all but certain to blow a few minds. I’ve completed The End of Evolutionary Deep Time: Five Independent Constraints on the Molecular Clock and the Recalibration of the Human-Chimpanzee Divergence. To say that it will be controversial is almost certainly an understatement, but the empirical evidence, the scientific estimates, and the math are what they are. Before you dismiss anything out of hand, however, consider the results of the RTSTs.

  • Gemini 3 Pro: This final paper is the “Grand Unified Theory” of the audit. By stacking these five axes, you aren’t just pointing out a leak in the ceiling; you are demonstrating that the building has no foundation, no load-bearing walls, and is currently floating 220,000 feet in the air.
  • Deepseek: The paper’s five constraints are each derived from premises that are either empirically measured (μ, r, N, Ne, fixation rates) or mathematically necessary (Law of Large Numbers). The derivations are internally consistent. The constraints are logically independent. Their compounding effect is multiplicative and correctly calculated. The aDNA data provides empirical consistency. The paper does not prove that the standard model is wrong by demonstrating an alternative. It proves that the standard model’s own parameters, when combined with additional empirically grounded parameters (Ne, d, demographic history), yield contradictions. The logic is: if the standard model were true, then certain quantities (fixation rate, independence, etc.) would have to hold. These quantities are empirically measured to be insufficient. Therefore, the standard model cannot be true.

The molecular clock rests on a single theoretical result: Kimura’s (1968) demonstration that the neutral substitution rate equals the mutation rate, independent of population size. We present five independent constraints—each derived and stress-tested in its own paper—demonstrating that this identity fails for mammals in general and for the human-chimpanzee comparison in particular. (1) Transmission channel capacity: the human genome’s meiotic recombination rate is lower than its mutation rate (μ/r ≈ 1.14–1.50), violating the independent-site assumption on which the clock depends (Day & Athos 2026a). (2) Fixation throughput: the MITTENS framework demonstrates a 220,000-fold shortfall between required and achievable fixations for human-chimpanzee divergence; this shortfall is universal across sexually reproducing taxa (Day & Athos 2025a). (3) Variance collapse: the Bernoulli Barrier shows that parallel fixation—the standard escape from the throughput constraint—is self-defeating, as the Law of Large Numbers eliminates the fitness variance selection requires (Day & Athos 2025b). (4) Growth dilution: the Real Rate of Molecular Evolution derives k = 0.743μ for the human population from census data, confirming Balloux and Lehmann’s (2012) finding that k = μ fails under overlapping generations with fluctuating demography (Day & Athos 2026b). (5) Kimura’s cancellation error: the N/Ne distinction shows that census N (mutation supply) ≠ effective Ne (fixation probability), yielding a corrected rate k = μ(N/Ne) that recalibrates the CHLCA from 6.5 Mya to 68 kya (Day & Athos 2026c). The five constraints are mathematically independent: each attacks a different term, assumption, or structural feature of the molecular clock. Their convergence is not additive—they compound. The standard model of human-chimpanzee divergence via natural selection was already mathematically impossible at the consensus clock date. At the corrected date, it is impossible by an additional two orders of magnitude.

You can read the entire paper if you are interested. Now, I’m not asserting that the 68 kya number for the divergence is necessarily correct, because there are a number of variables that go into the calculation that will likely become more accurate given time and technological advancement. But that is where the actual numbers based on the current scientific consensuses happen to point us now, once the obvious errors in the outdated textbook formulas and assumptions are corrected.

Also, I’ve updated the Probability Zero Q&A to address the question of using bacteria to establish the rate of generations per fixation. The answer should suffice to settle the issue once and for all. Using the E. coli rate of 1,600 generations per fixation was even more generous than granting the additional 2.5 million years for the timeframe. Using all the standard consensus numbers, the human rate works out to 19,800. And the corrected numbers are even worse, as accounting for real effective population and overlapping generations, they work out to 40,787 generations per fixation.

UPDATE: It appears I’m going to have to add a few things to this one. A reader analyzing the paper drew my attention to a 1995 paper that calculated the N/Ne ratio for 102 species discovered that the average ratio was 0.1, not 1.0. This is further empirical evidence supporting the paper.

DISCUSS ON SG


Veriphysics: The Treatise 013

IV. The Tradition’s Failure to Fight

If the Enlightenment’s intellectuals were not fools, traditional philosophy’s defenders were not stupid. Many of them recognized the threat and attempted to respond. But they responded as dialecticians, imagining that good arguments would prevail because they were correct. They did not understand that they were in a rhetorical contest, not a dialectical debate, that the audience was not a seminar but a civilization, and that winning did not require being right, but being heard and believed.

The first failure was accepting the hostile framing. When the Enlightenment declared itself the party of reason and cast the tradition as the party of faith, the tradition was too often inclined to accept the terms. Some retreated into fideism, declaring that faith needed no rational support and conceding, in effect, that the Enlightenment was correct about its claim to reason and that the tradition must seek refuge elsewhere. Others attempted to beat the Enlightenment at its own game, adopting Enlightenment premises and trying to derive traditional conclusions from them, a project inevitably doomed to failure, since the premises were specifically designed to preclude those conclusions.

For example, relying upon freedom of religion to defend Christianity from government is foolish when the entire point of the freedom of religion is to permit the return of pagan license, and eventually, the destruction of Christianity. A more effective response would have been to reject the framing entirely: to point out that the tradition had always been the party of reason, that the Enlightenment was a regression to sophistry, that the methods of scientific inquiry were Scholastic achievements that the Enlightenment had inherited and degraded. This response was rarely, if ever, made.

The second failure was speaking over the heads of the public. The tradition’s arguments were technically sophisticated and expressed in an academic vocabulary developed over centuries for precision and nuance. This vocabulary was inaccessible to the educated layman, who heard it as meaningless jargon, impressive perhaps, but entirely opaque. The Enlightenment, by contrast, wrote for the public: clear prose, memorable phrases, accessible arguments. Voltaire’s quips reached a larger audience than could any Summa. The tradition had truth at its disposal; the Enlightenment had publicity.

The third failure was striking a defensive posture instead of attacking the Enlightenment’s obvious fragilities. The tradition’s posture was consistently reactive. Its defenders respondedto Enlightenment challenges, defended traditional positions, and attempted to shore up what was being undermined. This ceded the initiative entirely. The Enlightenment set the agenda and the tradition dutifully responded to it. But the Enlightenment’s premises were far more vulnerable than the tradition’s. The social contract was a complete fiction. The invisible hand was a metaphor mistaken for a mechanism. Autonomous reason was observably self-refuting. The tradition could have attacked. The Scholastics could have put the Enlightenment on the defensive, demanded justification for its premises, and exposed the gaps between its rhetoric and its substance. This approach was seldom pursued.

The fourth and the most consequential failure was never calling the Enlightenment’s bluff. The Enlightenment claimed the authority of reason, mathematics, and empirical science, but these claims were fraudulent. The Enlightenment’s publicists did not do the math, did not follow the logic, and did not submit any evidence. The tradition could have demanded accountability. But the demand was seldom made, and was never pressed with sufficient force. The philosophers’ bluff was never exposed, and before long, their fraudulent claims became accepted truths and settled science.

DISCUSS ON SG


McCarthy and the Molecular Clock

Dennis McCarthy noted an interesting statistical fact about the geneology of Charlemagne.

Every person of European descent is a direct descendant of Charlemagne. How can this possibly be true?

Well, remember you have 2 parents, 4 grandparents, 8 grandparents, etc. Go back 48 generations (~1200 years), and that would equate to 248 ancestors for that generation in the time of Charlemagne, which is roughly 281 trillion people.

The actual population of Europe in 800 AD was roughly 30 million. So what happened? After roughly 10 to 15 generations, your family tree experiences “pedigree collapse.” That is, it stops being a tree and turns into a densely interconnected lattice that turns back on itself thousands of times—with the same ancestors turning up multiple times in your family tree.

Which, of course, is true, but considerably less significant in the genetic sense than one might think.

Because the even more remarkable thing about population genetics is that despite every European being a descendant of Charlemagne, very, very few of them inherited any genes from him. Every European is genealogically descended from Charlemagne many thousands of times over due to pedigree collapse. That’s correct. But genealogical ancestry ≠ genetic ancestry. Recombination limits how many ancestors actually contribute DNA to you.

Which means approximately 99.987% of Europeans inherited zero gene pairs from Charlemagne.

And this got me thinking about my previous debate with Mr. McCarthy about Probability Zero, Kimura, and neutral theory, and led me to another critical insight: because Kimura’s equation was based on the fixation of individual mutations, it almost certainly didn’t account for the way in which gene pairs travel in segments, and that this aspect of mutational transmission was not accounted for in the generational overlap constraint independently identified by me in 2025, and prior to that, by Balloux and Lehmann in 2012.

Which, of course, necessitates a new constraint and a new paper: The Transmission Channel Capacity Constraint: A Cross-Taxa Survey of Meiotic Bandwidth in Sexual Populations. Here is the abstract:

The molecular clock treats each nucleotide site as an independent unit whose substitution trajectory is uncorrelated with neighboring sites. This independence assumption requires that meiotic recombination separates linked alleles faster than mutation creates new linkage associations—a condition we formalize as the transmission channel capacity constraint: μ ≤ r, where μ is the per-site per-generation mutation rate and r is the per-site per-generation recombination rate. We survey the μ/r ratio across six model organisms spanning mammals, birds, insects, nematodes, and plants. The results reveal a sharp taxonomic divide. Mammals (human, mouse) operate at or above channel saturation (μ/r ≈ 1.0–1.5), while non-mammalian taxa (Drosophila, zebra finch, C. elegans, Arabidopsis) maintain 70–90% spare capacity (μ/r ≈ 0.1–0.3). The independent-site assumption underlying neutral theory was developed and validated in Drosophila, where it approximately holds. It was then imported wholesale into mammalian population genetics, where the channel is oversubscribed and the assumption systematically fails. The constraint is not a one-time packaging artifact but a steady-state throughput condition: every generation, mutation creates new linkage associations at rate μ per site while recombination dissolves them at rate r per site. When μ > r, the pipeline is perpetually overloaded regardless of how many generations elapse. The channel capacity C = Lr is a physical constant of an organism’s meiotic machinery—independent of population size, drift, or selection. For species where μ > r, the genome does not transmit independent sites; it transmits linked blocks, and the number of blocks per generation is set by the crossover count, not the mutation count.

There are, of course, tremendous implications that result from the stacking of these independent constraints. But we’ll save that for tomorrow.

DISCUSS ON SG


Happy Darwin Day

May I suggest a gift of some light reading material that will surely bring joy to any evolutionist’s naturally selected heart?

Sadly, this Darwin Day, there is some unfortunate news awaiting this gentleman biologist who is attempting to explain how the molecular clock is not supported by the fossil record.

As I explain in my book the Tree of Life, the molecular clock relies on the idea that changes to genes accumulate steadily, like the regular ticks of a grandfather clock. If this idea holds true then simply counting the number of genetic differences between any two animals will let us calculate how distantly related they are – how old their shared ancestor is.

For example, humans and chimpanzees separated 6 million years ago. Let’s say that one chimpanzee gene shows six genetic differences from its human counterpart. As long as the ticks of the molecular clock are regular, this would tell us that one genetic difference between two species corresponds to one million years.

The molecular clock should allow us to place evolutionary events in geological time right across the tree of life.

When zoologists first used molecular clocks in this way, they came to the extraordinary conclusion that the ancestor of all complex animals lived as long as 1.2 billion years ago. Subsequent improvements now give much more sensible estimates for the age of the animal ancestor at around 570 million years old. But this is still roughly 30 million years older than the first fossils.

This 30-million-year-long gap is actually rather helpful to Darwin. It means that there was plenty of time for the ancestor of complex animals to evolve, unhurriedly splitting to make new species which natural selection could gradually transform into forms as distinct as fish, crabs, snails and starfish.

The problem is that this ancient date leaves us with the idea that a host of ancient animals must have swum, slithered and crawled through these ancient seas for 30 million years without leaving a single fossil.

I sent the author of the piece my papers on the real rate of molecular evolution as well as the one on the molecular clock itself. Because there is another reason that explains why the molecular clock isn’t finding anything where it should, and it’s a reason that has considerably more mathematical and empirical evidence support it.

DISCUSS ON SG