A Critical Review of PROBABILITY ZERO

Someone by the name of Joe Bowers has asserted that Probability Zero is “Ignorant and Unscientific Drivel” and offers what he describes as “a direct, point-by-point rebuttal of the core mathematical claims” in my book. Let’s see how he did:

1. The “MITTENS” mutation accumulation equation
Day argues that the number of mutations required for large-scale evolutionary change exceeds what can realistically fix in a population within available time. The flaw is that he treats evolution as requiring a long chain of specific, pre-targeted mutations that must all occur and fix sequentially. Modern population genetics does not require pre-specified targets. Evolution explores fitness landscapes through branching pathways, neutral networks, standing genetic variation, recombination, and parallel mutations. Multiple mutational paths can lead to similar phenotypes. His math assumes a single narrow path; biology does not.

2. Fixation probability simplification
He often reduces fixation probability to approximately 1/N (or similar simplified forms) and then multiplies improbabilities across many required mutations. That approach ignores selection coefficients. The correct approximation for a beneficial mutation is roughly 2s (in diploids under weak selection), not 1/N. Beneficial mutations do not behave like neutral drift events. By modeling them as near-neutral events, he artificially suppresses the rate of adaptive change and inflates improbability.
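The two approximations being contrasted here are standard population-genetics results and easy to compare directly. A minimal sketch using Kimura's diffusion formula for the fixation probability of a new mutant; the values of N and s below are illustrative placeholders, not figures taken from either the book or the review:

```python
import math

def p_fix(s, N):
    """Kimura's diffusion approximation for the fixation probability of a
    single new mutant with selection coefficient s in a diploid population
    of effective size N (assuming Ne = N, initial frequency 1/(2N))."""
    if s == 0:
        return 1 / (2 * N)  # neutral limit: pure drift
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * N * s))

N = 10_000   # illustrative effective population size
s = 0.01     # illustrative selection coefficient

print(p_fix(0.0, N))   # neutral: 1/(2N) = 5e-05
print(p_fix(s, N))     # beneficial: ~0.0198, close to the 2s approximation
print(2 * s)           # 0.02
```

Under these placeholder inputs, a mildly beneficial mutant is hundreds of times more likely to fix than a neutral one, which is the gap the reviewer is pointing at.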

3. Multiplying independent improbabilities
Day multiplies probabilities of sequential mutations as if each required mutation is statistically independent and must occur in a strict order. This is mathematically inappropriate. In real genomes, recombination allows beneficial mutations arising in different individuals to combine. Parallel lineages explore different paths simultaneously. Evolution operates across entire populations, not along a single linear lineage. Treating it like a serial lottery is a category error.

4. Effective population size misuse
He frequently uses conservative or arbitrarily low effective population sizes to restrict mutational supply. In reality, many species (especially microbes) have enormous effective populations and rapid generation times, dramatically increasing the number of mutational trials. Even in vertebrates, long time spans combined with standing variation and recombination increase evolutionary capacity beyond what his constrained models assume.

5. “Probability zero” threshold claim
He invokes extremely small probability cutoffs to argue practical impossibility. But probability zero in mathematics means literal impossibility under the model — not merely “very small.” His conclusion depends entirely on the assumptions baked into his model. If the model omits recombination, epistasis, neutral networks, regulatory evolution, gene duplication, and exaptation, then the resulting “zero” reflects model incompleteness, not biological impossibility.

6. Information increase argument
Day argues that new biological information cannot arise via mutation and selection. This ignores well-documented mechanisms such as gene duplication followed by divergence, horizontal gene transfer, exon shuffling, regulatory evolution, and de novo gene birth from previously noncoding sequences. These processes have been observed and sequenced. The claim that no new information arises is empirically false.

7. Large-scale morphological change requirement
He assumes that complex traits require many simultaneous coordinated mutations. Evolutionary developmental biology shows that small regulatory changes can produce large phenotypic effects. Changes in gene expression timing and location often drive macroevolutionary shifts without requiring dozens of simultaneous structural mutations.

In short, Probability Zero reaches its conclusion by modeling evolution as a blind, single-threaded, neutral lottery with fixed targets and no recombination. That is not how evolution works. When realistic population genetics, parallel mutation, selection coefficients, and genomic mechanisms are included, the “zero” vanishes — because it was produced by an oversimplified and biologically inaccurate mathematical setup, not by actual evolutionary constraints.

Point 1 claims I treat evolution as requiring “pre-targeted mutations that must all occur and fix sequentially.” This is false. MITTENS counts fixed differences between species—observed genomic divergence documented in the literature. These are not hypothetical, not pre-targeted, and not assumed to follow a single pathway. They are measured. The reviewer is attacking a model I don’t use. The fixed differences between humans and chimpanzees exist regardless of what pathway produced them. The question is whether the mechanism can produce that many fixations in the available time. The reviewer never addresses this, which is the most basic mathematical claim in the book.

Point 2 claims I model beneficial mutations as neutral drift events with fixation probability 1/N. This is the opposite of what I do. The entire MITTENS framework uses Haldane’s cost of natural selection, which assumes selection is operating. The fixation rate limit of one substitution per 300 generations is derived from the selective load—the reproductive excess required to drive an allele to fixation under selection. The 2s approximation the reviewer invokes for fixation probability is irrelevant to the throughput constraint, which is about how many substitutions the population can sustain simultaneously given finite reproductive capacity. The reviewer has confused fixation probability with fixation rate. These are two different things.
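The distinction being drawn here is between two different quantities, and it can be shown numerically. The sketch below uses placeholder values (the selection coefficient is arbitrary; the generation count is the 220,000 figure cited elsewhere in this piece) and is not a reconstruction of the MITTENS calculation itself:

```python
# Fixation PROBABILITY: the chance that one particular beneficial mutant
# eventually fixes (~2s under weak selection). Illustrative value only.
s = 0.01
p_fix_per_mutant = 2 * s

# Fixation RATE: Haldane's cost-of-selection limit caps how many
# substitutions a population can sustain, independent of any one
# mutant's fixation probability. Raw cap under these inputs alone.
generations = 220_000        # human-lineage generation count cited in the text
haldane_limit = 1 / 300      # substitutions per generation (Haldane 1957)
substitution_budget = generations * haldane_limit

print(p_fix_per_mutant)            # 0.02 -- a per-mutant probability
print(round(substitution_budget))  # 733  -- a population-wide throughput cap
```

Raising the first number does nothing to the second, which is the author's point: the 2s approximation and the substitution budget answer different questions.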

Point 3 invokes recombination as a rescue. The Bernoulli Barrier paper addresses this directly and at length. Recombination reshuffles existing variation; it does not accelerate the rate at which any individual allele increases in frequency. Kimura and Ohta (1969) established that expected time to fixation does not depend on recombination rate. The reviewer asserts that recombination is capable of resolving the problem without demonstrating how it changes the mathematics. This is a false and groundless assertion.

Point 4 claims I use “arbitrarily low effective population sizes.” This is totally false. I used published estimates from the population genetics literature. For humans, Ne ≈ 10,000 is the standard figure used by the field itself—it’s not my invention. The reviewer then pivots to microbes, which is irrelevant since the book’s central analysis concerns sexually reproducing organisms. I actually address microbes explicitly because bacteria are the one case where the fixation math works, precisely because they have the features sexual reproducers lack—no recombination delay, complete generational turnover, and astronomical generation counts. The reviewer is citing the exception that was the basis for Kimura’s algebraic error and the subsequent misapplication of his substitution formula.

Point 5 claims Probability Zero reflects “model incompleteness” because I omit recombination, epistasis, neutral networks, regulatory evolution, gene duplication, and exaptation. Each of these is addressed in the book, several in complete chapters dedicated to them. The Escape Hatches chapter, the Closing the Escape Hatch paper, and the shadow accounting analysis specifically demonstrate why these mechanisms do not rescue the model. The reviewer lists them as if simply mentioning them could somehow constitute a rebuttal. It does not. Where is the math showing that gene duplication closes a five-order-of-magnitude shortfall? It doesn’t exist, because gene duplication can’t close it.

Point 6 claims I argue “no new information arises.” I never made any such argument. Nothing like this ever appears in the book. The reviewer is attacking a position I do not hold and have never even considered. What I demonstrate is that the rate at which fixation can occur is insufficient to account for observed divergence. This is a quantitative constraint, not a claim about the impossibility of mutation producing changes.

Point 7 invokes evo-devo and regulatory changes producing large phenotypic effects. The Closing the Escape Hatch paper addresses this explicitly under shadow accounting: regulatory changes are themselves substitutions. Transcription factor binding sites turn over. Enhancers diverge. Chromatin architecture evolves. These are all fixations that must be accounted for. Calling them “regulatory” rather than “structural” does not exempt them from the fixation throughput constraint. The accounting still applies.

The summary paragraph is the evidence that the reviewer hasn’t even read the book. The reviewer describes the Probability Zero model as “a blind, single-threaded, neutral lottery with fixed targets and no recombination.” This bears no resemblance to anything in the book. It is a straw man constructed from standard anti-creationist talking points; it is not a criticism of the actual text. The reviewer has written a review of a very different book by listing standard objections to arguments I never made.

Every point is either addressed in the text, is based on a misreading of the argument, or is an assertion offered without mathematics. Not a single calculation. Not a single specific engagement with any of my actual numbers. The reviewer never mentions the 220,000× shortfall, never addresses Haldane’s cost, never engages with the Bio-Cycle model or the d coefficient, never mentions the ancient DNA validation data. Seven points, zero math, zero engagement with the actual argument.

It’s not a review or a rebuttal; it’s not even a critique. It’s just a midwit attacking a figment of his own imagination.

DISCUSS ON SG


Veriphysics: The Treatise 026

IX. Development, Not Restoration

Veriphysics is a living philosophy, not a museum exhibit. It honors the tradition but does not merely curate it. A tradition that cannot develop is a tradition that will die; what does not grow, decays. The medieval synthesis was a genuine achievement, but it was an achievement of the thirteenth century, formulated to address questions live in that era, expressed in vocabulary suited to that context. To simply restore it, unchanged, would be to embalm it.

John Henry Newman articulated the principle: genuine development preserves type while extending application. A doctrine develops when it encounters new questions, engages new challenges, incorporates new knowledge, all while remaining faithful to its essential character. Development is not corruption; it is fidelity expressed across time. The oak is not a corruption of the acorn; it is the acorn’s fulfillment. The question is always whether a proposed change preserves the essential identity or betrays it.

Veriphysics advances the classic philosophical tradition in several respects.

First, it incorporates mathematical tools unavailable to the Scholastics. The medievals had arithmetic and geometry; they did not have probability theory, statistics, information theory, or the computational resources to apply these disciplines to complex questions. Veriphysics regards these new tools as gifts and extensions of human reason that can be deployed in service of truth. The Triveritas makes mathematical coherence a necessary condition of warranted assent; this is a positive development and an application of the tradition’s commitment to reason in a form the tradition knew, but did not utilize.

Second, it incorporates empirical data that would have been literally unimaginable to the medievals or the Enlightenment intellectuals. The human genome has been mapped. Economic statistics have been collected for decades. The outcomes of various applied political theories have been documented. This data provides anchors for arguments that were previously abstract. The tradition always affirmed that truth must conform to reality; Veriphysics has access to aspects of reality that the tradition could not observe. This is not a change of principle but an expansion of application.

Third, it incorporates historical scholarship that situates the tradition itself. We know more about the ancient world, about the transmission of texts, about the contexts in which doctrines were formulated, than any previous generation. This knowledge permits a more nuanced understanding of what the tradition actually taught, as distinguished from what later interpreters claimed it taught. Veriphysics reads the tradition critically, not to undermine it but to recover it, to strip away false accretions, and to distinguish the essential from the accidental.

Fourth, it engages contemporary questions that the tradition did not face and had no reason to consider. The nature of artificial intelligence. The ethics of genetic engineering. The political economy of global capital. The epistemology of digital information. These questions require fresh thinking, not merely the attempted application of pre-formed answers derived from different subjects. Veriphysics undertakes this thinking in continuity with the tradition by applying perennial principles to novel problems, but it does not pretend that the answers have already been provided.

New intellectual developments are intrinsically risky. Not every proposed development is genuine; some are corruptions, betrayals of the essential type under the guise of extension. Veriphysics acknowledges this risk and addresses it through the Triveritan method. A proposed development must satisfy logical validity, mathematical coherence, and empirical anchoring. It must cohere with the tradition’s core commitments, not contradict them. It must produce fruits consistent with the tradition’s character, with intellectual clarity, moral seriousness, spiritual depth. The Triveritas provides a criterion for distinguishing genuine development from corruption, just as it provides a criterion for distinguishing truth from falsehood more generally.

The tradition was defeated, in part, because it ceased to develop in harmony with Man’s societal and intellectual developments, because it mistook specific formulations for eternal truths, because it defended static conclusions rather than pursuing dynamic inquiries, and because it became rigid, defensive, and backward-looking. Veriphysics requires its adherents to learn from this failure to adapt to new circumstances. It remains open to development while at the same time being vigilant against corruption. It is a living philosophy, growing toward the way, the truth, and the light.

You can now buy the complete Veriphysics: The Treatise at Amazon in both Kindle and audiobook formats if you’d like to have it available as a reference. 

Also, due to the high level of interest in Veriphysics and the amount of new material that others are already creating upon its foundation, I have created a substack devoted specifically to Veriphysics, the Triveritas, and related discussions, papers, and applications. I welcome guest posts there; if you have one, post it somewhere, then email me the link along with explicit permission to republish it in its entirety on the Veriphysics site. I may post the whole thing or just an excerpt with a link, but either way I require permission to post it in full, and I will always link to the original.

UPDATE: I’ve added a post with the first part of the philosophical proof of the Triveritas.

UPDATE: Grokipedia now has a page on Veriphysics.



Veriphysics: The Treatise 025

VIII. Through a Glass, Darkly

The Triad of Truth known as the Triveritas is a powerful tool, but it must be wielded with appropriate humility. Veriphysics does not claim omniscience. It does not promise a God’s-eye view. It does not pretend that sufficient method will dissolve all mystery and render reality fully transparent to human inquiry.

The Apostle Paul’s words provide the governing image: “For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known.” This is not mysticism or obscurantism; it is realism about the human condition. We are finite creatures attempting to know an infinite reality. Our knowledge is genuine, and we truly see what we see, but what we see is limited and partial. The glass is real; we cannot step outside it. The darkness is real; we cannot fully dispel it.

The Enlightenment rejected these intrinsic limitations. It imagined that progress would asymptotically approach complete knowledge, that better methods would gradually eliminate the darkness, that the glass would eventually become perfectly transparent. This fantasy produced the characteristic Enlightenment vices: overconfidence, dogmatism dressed as skepticism, the dismissal of mystery as mere ignorance awaiting resolution. When reality refused to cooperate, when quantum mechanics revealed irreducible indeterminacy, when cosmology discovered that most of the universe is dark, when every attempt to explain consciousness in material terms failed, the Enlightenment had no resources for acknowledging its limits. It could only assume that future science would somehow manage to solve what present science could not, with all its empirical falsifications indefinitely deferred.

Veriphysics begins where the Enlightenment failed: with the acknowledgment that some darkness is permanent, that some limits are structural, that creaturely knowledge is necessarily partial. This acknowledgment is not defeat; it is the precondition of genuine inquiry. The investigator who knows he sees through a glass will attend carefully to the glass, he will study its distortions, compensate for its limitations, and refine his vision within the constraints it imposes. The investigator who imagines he sees directly will not notice his errors until they have produced catastrophe.

The Triveritas operates within these epistemic limits. It does not promise certainty; it offers warranted assent. It does not claim to establish truth absolutely; it distinguishes claims that deserve belief from claims that do not. The distinction is real and important even if neither category achieves the Enlightenment’s fantasy of transparent access to the thing itself. We can know with certainty that Neo-Darwinism is false, being refuted by logic, math, and empirical evidence, without pretending to know, fully or even in meaningful part, what the true historical account of Man’s biological origins is. We can know that the Enlightenment’s foundations are rotten without claiming to have mapped every room in the edifice that will replace it.

This humility is not weakness but strength. The Enlightenment’s overconfidence made it brittle; when the failures accumulated, it had no way to assimilate them except denial. The intellectual humility of Veriphysics makes it resilient; it expects partial knowledge, provisional conclusions, and future revisions. The tradition developed for two millennia precisely because it understood itself as an ongoing inquiry, not a finished system. The Enlightenment failed in less than one-quarter that time because it did not. Veriphysics builds upon the philosophical tradition, adding the mathematical and empirical tools that the tradition did not possess or did not deploy, while retaining the structural humility that kept the tradition open to growth.



Veriphysics: The Treatise 024

VII. The Triveritas in Operation

The power of the Triad of Truth is best demonstrated through application. Consider the case that Part One examined in detail: the theory of evolution by natural selection.

The claim is that random mutation, filtered by natural selection operating over geological time, suffices to explain the diversity and complexity of life. This is not a modest claim; it is the keystone of Enlightenment naturalism, the demonstration that purpose and design can be eliminated from biology, the acid that dissolves teleology and leaves only mechanism.

Apply the Triveritas.

Logical validity: The argument requires that random mutation and natural selection can generate specified complexity—can produce, from simpler precursors, the integrated functional systems that characterize living organisms. The logical problems with this claim were identified almost immediately. Fleeming Jenkin, in 1867, pointed out that blending inheritance would dilute favorable variations before selection could act on them. The discovery of particulate (Mendelian) inheritance addressed this specific objection but raised others: mutations are mostly deleterious, beneficial mutations are rare, and the coordination of multiple independent mutations required for complex adaptations is probabilistically prohibitive. The logical coherence of the mechanism has never been established; it has only been assumed.

Mathematical coherence: The quantitative requirements of the theory can be specified. For humans and chimpanzees to have diverged from a common ancestor through mutation and selection, a certain number of genetic changes must have become fixed in the relevant lineages within the available time. The genomes have now been mapped; the numbers are known. Using the most generous assumptions—the longest timescales proposed, the shortest generation lengths, the fastest fixation rates ever observed in any organism—the mathematics permits fewer than three hundred fixed mutations in the human lineage. The theory requires at least twenty million. The gap is not a matter of fine-tuning or boundary conditions; it is a difference of five orders of magnitude. The math does not work. The theory is not merely unproven; it is refuted.
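The arithmetic behind the "five orders of magnitude" phrase, using exactly the two numbers stated in the paragraph above:

```python
import math

# Reproducing the claim as stated in the text: a budget of fewer than
# ~300 permitted fixations against a stated requirement of at least
# 20 million, expressed as a ratio and in orders of magnitude.
permitted = 300          # upper bound on fixed mutations, as stated
required = 20_000_000    # minimum required, as stated

shortfall = required / permitted
print(round(shortfall))                 # 66667 -- a ~66,667-fold gap
print(round(math.log10(shortfall), 1))  # 4.8 -- roughly five orders of magnitude
```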

Empirical anchoring: The genomic data provides the anchor. The sequences are known; the differences are countable; the calculations can be performed by anyone with access to the data and competence in arithmetic. The empirical evidence does not support the theory; it falsifies it. The anchor drags the ship onto the rocks.

Neo-Darwinism fails all three elements of the Triveritas. The logic is unsound: the mechanism cannot do what is claimed. The math is wrong: the numbers do not permit it. The evidence, properly interpreted, confirms the failure rather than the success. The theory persists not because it has survived scrutiny but because the scrutiny has been suppressed, marginalized, and excluded from respectable discourse by institutional gatekeepers with careers and worldviews at stake.

This is not an isolated case. Apply the triad to classical economics: Smith’s law of supply and demand fails mathematical scrutiny (Gorman), Ricardo’s comparative advantage fails logical scrutiny (Keen’s amphiboly, the assumptions do not hold), and the empirical outcomes of free trade policies fail to match the predictions. Apply the triad to social contract theory: the contract is a logical fiction, no mathematical content exists to test, and no empirical evidence supports the claim that governments derive their authority from consent. Apply the triad to Enlightenment rights theory: the rights are asserted without derivation, have no mathematical structure, and the empirical history of rights shows consistent erosion and inversion rather than progressive realization.

The pattern is uniform. Enlightenment claims, when subjected to the Triveritas, collapse catastrophically. They survive only because the three elements of the triad have never been applied to them—because the tradition’s defenders did not deploy the logical, mathematical, and empirical tools they possessed, and because the Enlightenment’s institutional dominance ensured that the tools would not be deployed by anyone with the standing to be heard.

Veriphysics changes this. It applies the triad of logic, math, and empirical data without apology, demands accountability without deference, and exposes fraud without mercy. The Enlightenment claimed reason, mathematics, and evidence as its own; as a post-Enlightenment philosophy, Veriphysics calls the bluff and demonstrates that the tradition actually held the stronger claim to reason, given that the Enlightenment relied upon rhetoric in its place.



Veriphysics: The Treatise 023

VI. The Core Criterion of Warranted Assent

Philosophy needs methods, not merely principles. The most beautiful metaphysics is useless if it cannot be applied, if it provides no guidance for distinguishing true claims from false, no criterion for deciding what to believe. The Enlightenment understood this and offered scientific method as the criterion. The offer proved fraudulent: the scientific method became a rhetorical gesture rather than a practiced discipline, primarily invoked to legitimize conclusions reached by other means, and never actually applied to the Enlightenment’s core commitments.

Veriscendancy offers a genuine criterion: the Triad of Truth, the Triveritas. A claim merits assent and may be accepted as probably true when and only when it satisfies three conditions: logical validity, mathematical coherence, and empirical anchoring. Each condition is necessary; none is sufficient; the conjunction of all three elements is required.

Logical validity means that the argument for the claim must be formally sound. The conclusions must follow from the premises; the inferences must be valid; the reasoning must be free from fallacy. This seems obvious, but the Enlightenment systematically violated it. The social contract is a logical fiction, since no such contract was ever written, and the consent it presupposes is manufactured from Rousseau’s imagination. The invisible hand is a metaphor mistaken for a mechanism—there is no actual entity coordinating markets, and the claim that uncoordinated self-interest produces optimal outcomes is an assertion, not a derivation. The autonomous reason is self-refuting—a reason that answers to nothing outside itself cannot justify its own authority.

The tradition always possessed logical tools superior to the Enlightenment’s. Scholastic logic was developed over centuries, refined through disputation, tested against objections. It distinguished valid from invalid inference with precision that the Enlightenment never matched. The tradition’s failure was not logical inadequacy but rhetorical malpractice: it kept its logic in the seminar room while the Enlightenment preached in the public square. Veriphysics deploys the tradition’s logical resources as weapons, subjecting Enlightenment claims to the scrutiny they never received and finding them wanting.

Mathematical coherence means that the claim must survive quantitative analysis where quantification is possible. If a theory makes numerical predictions or depends on rates, probabilities, or magnitudes, those numbers must work. Mathematics operates at a level prior to domain-specific interpretation; it constrains what is possible regardless of what experts prefer to believe. If the math says a thing cannot happen, then it cannot happen, no matter how many authorities assert otherwise.

The Enlightenment invoked mathematics constantly but rarely submitted to its discipline. Darwin’s theory of evolution by natural selection makes implicit claims about mutation rates, fixation rates, and timescales. When these claims are made explicit and calculated, the theory fails catastrophically, not by small margins but by five orders of magnitude. The classical economists’ supply and demand curves depend on aggregation conditions that Gorman proved do not hold in the manner they are customarily utilized. The mathematicians at the Wistar Institute demonstrated in 1966 that the Modern Synthesis could not generate the observed complexity of life; the biologists ignored them because they were not capable of grasping the mathematical implications. The pattern is consistent: mathematics exposes what rhetoric conceals.

Veriphysics demands mathematical accountability. Every claim that involves quantities must provide the correct calculations. The calculations must be examined, not by credentialed authorities with careers at stake, but by anyone competent in mathematics. A game designer with arithmetic can refute a biological establishment with doctorates, if the game designer does the math and the establishment does not. The Triveritas democratizes critique: there is no need for a priestly anointing or credentialed membership in a guild to check the numbers.

Empirical anchoring means that the claim must be tethered to observed reality. Theory without evidence is speculation; it may be elegant, coherent, mathematically sophisticated, and still describe nothing actual. The claim must make contact with the world, must be confirmed or at least not refuted by what we observe, must have some purchase on the phenomena it purports to explain.

But empirical anchoring alone is insufficient. Data is always interpreted through frameworks; evidence underdetermines theory; the same observations can be made consistent with multiple explanations. This is why the Enlightenment’s “empiricism” proved so hollow: the evidence was real, but it was filtered through interpretive schemes that were never questioned. Darwinism accumulated vast quantities of evidence—fossils, biogeography, comparative anatomy—all of which could be reinterpreted once the theory was questioned. The evidence was an anchor, but it was attached to a ship that should never have sailed.

The Triad addresses this problem by requiring all three elements. Evidence alone can be accommodated to any sufficiently flexible theory. Logic alone can generate elegant systems with no relation to reality. Mathematics alone can become a game of formal manipulation. But evidence that is logically derived from coherent premises, that survives mathematical scrutiny, and that anchors the conclusions in observed phenomena is evidence that commands assent. The conjunction is demanding, far more demanding than the false pretense of the scientific method as actually practiced in the credentialed science guilds. But truth is demanding. A criterion that was not demanding would not be worth constructing.




2 Billion Generations of Nothing

It was just remarkable, with this evolutionary distance, that we should see such coherence in gene expression patterns. I was surprised how well everything lined up.

—Dr. Robert Waterston, co-senior author, Science (2025)

If one wanted to design an experiment to give natural selection the best possible chance of demonstrating its creative power, it would be hard to improve on the nematode worm.

Caenorhabditis elegans is about a millimeter long and consists of 959 somatic cells in the adult hermaphrodite (roughly 550 at hatching). It has a generation time of approximately 3.5 days. It produces hundreds of offspring per individual. Its populations are enormous. Its genome is compact—about 20,000 genes, comparable in number to ours but without the vast regulatory architecture that slows everything down in mammals. The worms experience significant selective pressure: most offspring die before reproducing, which means natural selection has plenty of raw material to work with. And critically, worms have essentially no generation overlap. When a new generation hatches, the old generation is dead or dying. Every generation represents a complete turnover of the gene pool. There is no drag, no cohort coexistence, no grandparents competing with grandchildren for resources.

In the notation of the Bio-Cycle Fixation Model, the selective turnover coefficient for C. elegans is approximately d = 1.0. Compare that to humans, where we have shown d ≈ 0.45. The worm is running the evolutionary engine at full throttle. No brakes, no friction, no generational overlap gumming up the works.

Now consider the timescale. C. elegans and its sister species C. briggsae diverged from a common ancestor approximately 20 million years ago. At 3.5 days per generation, that is roughly two billion generations. To put that in perspective, the entire history of the human lineage since the putative chimp-human divergence—six to seven million years at 29 years per generation—amounts to something like 220,000 generations. The worms have had nearly ten thousand times as many generations to diverge. Ten thousand times.
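The generation counts above are straightforward arithmetic. As a sanity check, the following sketch reproduces them using the figures stated in the text (the 6.5-million-year human divergence is taken as the midpoint of the stated six-to-seven-million-year range):

```python
# Generation-count arithmetic using the figures stated in the text.
DAYS_PER_YEAR = 365.25

worm_divergence_years = 20_000_000   # C. elegans / C. briggsae split
worm_generation_days = 3.5

human_divergence_years = 6_500_000   # midpoint of the stated 6-7 My range
human_generation_years = 29

worm_generations = worm_divergence_years * DAYS_PER_YEAR / worm_generation_days
human_generations = human_divergence_years / human_generation_years

print(f"worm generations:  {worm_generations:.2e}")   # roughly two billion
print(f"human generations: {human_generations:.0f}")  # roughly 220,000
print(f"ratio: {worm_generations / human_generations:,.0f}")  # nearly ten thousand
```

The ratio comes out to a little over nine thousand, consistent with "nearly ten thousand times as many generations."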

Two billion generations, running the evolutionary engine at maximum speed, with enormous populations, high fecundity, complete generational turnover, and all the raw material that natural selection could ask for. If there were ever a case where the neo-Darwinian mechanism should produce spectacular results, this is it.

So what did it produce? Nothing.

In June 2025, a team led by Christopher Large and co-senior authors Robert Waterston, Junhyong Kim, and John Isaac Murray published a landmark study in Science comparing gene expression patterns in every cell type of C. elegans and C. briggsae throughout embryonic development. Using single-cell RNA sequencing, they tracked messenger RNA levels in individual cells from the 28-cell stage through to the formation of all major cell types—a process that takes about 12 hours in these organisms.

What they found is what Dr. Waterston described, with evident surprise, as “remarkable coherence.” Despite 20 million years and two billion generations of evolution, the two species retain nearly identical body plans with an almost one-to-one correspondence between cell types. The developmental program—when and where each gene turns on and off as the embryo develops—has been conserved to a degree that startled even the researchers.

Gene expression patterns in cells performing basic functions like muscle contraction and digestion were essentially unchanged between the two species. The regulatory choreography that builds a worm from a fertilized egg—which genes activate in which cells at which times—was so similar across 20 million years that the researchers could map one species’ cells directly onto the other’s.

Where divergence did occur, it was concentrated in specialized cell types involved in sensing and responding to the environment. Neuronal genes, the researchers noted, “seem to diverge more rapidly—perhaps because changes were needed to adapt to new environments.” But even this divergence was modest enough that Kim, one of the co-senior authors, noted the most surprising finding was not that some expression was conserved—the body plans are obviously similar, so that’s expected—but that “when there were changes, those changes appeared to have no effect on the body plan.”

Read that again. The changes that the mechanism did produce over two billion generations had no detectable effect on how the organism is built. The divergence was, as far as the researchers could determine, functionally trivial.

Murray, the study’s third senior author, offered the most revealing comment of all: “It’s hard to say whether any of the differences we observed were due to evolutionary adaptation or simply the result of genetic drift, where changes happen randomly.”

After two billion generations, the researchers cannot confidently identify a single adaptive change in gene expression. They cannot point to one cell type, one gene, one regulatory switch and say: natural selection did this. Everything they found is equally consistent with random noise.

Now, the standard response to findings like this is to invoke purifying selection, the sequence-level counterpart of stabilizing selection at the phenotypic level. The argument goes like this: most mutations are deleterious, so natural selection acts primarily to remove harmful changes rather than to accumulate beneficial ones. Gene expression patterns are conserved because any change to a broadly-expressed gene would disrupt too many downstream processes. The machinery is locked down precisely because it works, and selection fiercely punishes any attempt to modify it.

This is true. Purifying selection is real, well-documented, and no one disputes it. But invoking it as an explanation only deepens the problem for the neo-Darwinian account of speciation.

The theory of evolution by natural selection claims that the same mechanism, random mutation filtered by selection, both preserves existing adaptations and creates new ones. The worm data shows empirically what the constraint looks like. The vast majority of the genome is locked down. Expression patterns involving basic cellular functions are untouchable. The only genes free to diverge are those expressed in a few specialized cell types, and even those changes are so subtle that the researchers can’t distinguish them from genetic drift.

This is the genome’s evolvable fraction, and it is small. The regulatory architecture that controls development, the transcription factor binding sites, the enhancer networks, the chromatin structure that determines which genes are accessible in which cells, is so deeply entrenched that two billion generations of nematode reproduction cannot budge it.

And here’s the question no one asked: how did that regulatory architecture get there in the first place?

If the current architecture is so tightly constrained that it resists modification across two billion generations, then building it in the first place required an even more extraordinary series of changes. Every transcription factor binding site had to be fixed. Every enhancer had to be positioned. Every element of the chromatin landscape that determines which genes are expressed in which cell types had to be established through sequential substitutions. This is what we call the shadow accounting problem. The very architecture now being invoked to explain why the worm hasn’t changed is itself a product that requires explanation under the same model. The escape hatch invokes a mechanism whose existence demands an even larger prior expenditure of the same mechanism—an expenditure that the breeding reality principle tells us was itself problematic.

Let us be precise about the scale of the failure. The MITTENS analysis, as published in Probability Zero, establishes that the neo-Darwinian mechanism of natural selection faces multi-order-of-magnitude shortfalls when asked to account for the fixed genetic differences between closely related species. The worm study provides an independent empirical check on this conclusion from the opposite direction.

Instead of asking “can the mechanism produce the required divergence in the available time?” and discovering that it cannot, the worm study asks “what does the mechanism actually produce when given enormous amounts of time under ideal conditions?” and discovers that the answer is exactly what MITTENS proves: essentially nothing.

Two billion generations with every parameter set to maximize the rate of adaptive change, with short generation times, high fecundity, large populations, complete generational turnover, and a compact genome, nevertheless produced two organisms so similar that researchers can map their cells one-to-one. The divergence that did occur was concentrated in a few specialized cell types and could not be confidently attributed to adaptation.

Now scale this down to the conditions that supposedly produced speciation in large mammals. A large mammal has a generation time of 10 to 20 years. Its fecundity is low, with a few offspring per lifetime instead of hundreds. Its effective population size is small. Its generation overlap is substantial (d ≈ 0.45, meaning that less than half the gene pool turns over per generation). Its genome is vastly larger and more complex, with regulatory architecture orders of magnitude more elaborate than a nematode’s.

The number of generations available for speciation in large mammals is measured in the low hundreds of thousands. The worms had two billion and produced nothing visible. On what basis should we believe that a mechanism running at a fraction of the speed, with a fraction of the population size, a fraction of the fecundity, a fraction of the generational turnover, and orders of magnitude more regulatory complexity to navigate, can accomplish what the worms could not?

The question answers itself.
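The disparity described above can be quantified roughly. The sketch below uses the d coefficients given in the text; the per-year "turnover rate" metric and the 15-year mammalian generation time (a midpoint of the stated 10-to-20-year range) are illustrative assumptions, not part of the Bio-Cycle Fixation Model itself:

```python
# Illustrative per-year comparison of effective gene-pool turnover.
# The d coefficients are from the text; the 15-year mammalian
# generation time is an assumed midpoint of the stated 10-20 year range.
DAYS_PER_YEAR = 365.25

worm_d = 1.0                               # complete generational turnover
worm_gen_years = 3.5 / DAYS_PER_YEAR       # 3.5-day generations

mammal_d = 0.45                            # substantial generation overlap
mammal_gen_years = 15.0                    # assumed midpoint

worm_turnover_per_year = worm_d / worm_gen_years        # ~104 turnovers/year
mammal_turnover_per_year = mammal_d / mammal_gen_years  # 0.03 turnovers/year

print(f"disparity: {worm_turnover_per_year / mammal_turnover_per_year:,.0f}x")
```

On this rough accounting, the worm's selective engine runs several thousand times faster per year of clock time than a large mammal's.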

“The worms are under strong stabilizing selection. Other lineages face different selective pressures that drive divergence.”

No one disputes that stabilizing selection explains the stasis. The problem is what happens when you look at the fraction that isn’t stabilized. Two billion generations of mutation, selection, and drift operating on the unconstrained portion of the genome produced changes that (a) affected only specialized cell types, (b) didn’t alter the body plan, and (c) couldn’t be distinguished from drift. If the creative power of natural selection operating on the evolvable fraction of the genome is this feeble under ideal conditions, it does not become more powerful when you make conditions worse.

“Worms are simple organisms. Complex organisms have more regulatory flexibility.”

This gets the argument backward. Greater complexity means more regulatory interdependence, which means more constraint, not less. A change to a broadly-expressed gene in an organism with 200 cell types is more dangerous than a change to a broadly-expressed gene in an organism with 30 cell types, because there are more downstream processes to disrupt. The more complex the organism, the smaller the evolvable fraction of the genome becomes relative to the locked-down fraction.

“Twenty million years is a short time in evolutionary terms.”

It is 20 million years in clock time but two billion generations in evolutionary time. The relevant metric for evolution is not years but generations, because selection operates once per generation. Two billion generations for a nematode is equivalent, in terms of opportunities for selection to act, to 58 billion years of human evolution at 29 years per generation. That’s more than four times the age of the universe. If the mechanism can’t produce meaningful divergence in the equivalent of four universe-lifetimes, the mechanism obviously doesn’t function at all.
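The universe-lifetime equivalence above is again simple arithmetic, checkable in a few lines (13.8 billion years is the standard estimate of the universe's age):

```python
# Converting worm generations into human-equivalent clock time.
worm_generations = 2_000_000_000
human_generation_years = 29
universe_age_years = 13_800_000_000  # standard ~13.8 billion year estimate

equivalent_years = worm_generations * human_generation_years  # 58 billion years
universe_lifetimes = equivalent_years / universe_age_years    # ~4.2

print(f"{equivalent_years:.2e} years = {universe_lifetimes:.1f} universe lifetimes")
```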

“The study only looked at gene expression, not genetic sequence. There could be extensive sequence divergence not reflected in expression.”

There is sequence divergence, and it’s well-documented. C. elegans and C. briggsae differ at roughly 60-80% of synonymous sites and show substantial divergence at non-synonymous sites as well. The point is that this sequence divergence has not produced meaningful functional divergence. The genes have changed, but what they do and when they do it has remained largely the same. Sequence divergence without functional divergence is exactly what you’d expect from neutral drift operating on a tightly constrained system—and it is exactly the opposite of what you’d expect if natural selection were the creative engine the theory claims it to be.

The Science study is good science. The researchers accomplished something genuinely unprecedented: a cell-by-cell comparison of gene expression between two species across the entire course of embryonic development. The technical accomplishment is significant, and the evidence it produced is highly valuable.

But the data is reaching a conclusion that the researchers are not eager to draw. Two billion generations of evolution, operating under conditions more favorable than any large animal will ever experience, failed to produce any meaningful or functional divergence between two species. The mechanism ran at full speed for an incomprehensible span of time, and the result was the same worm.

This is not a philosophical objection to evolution. It is not an argument from personal incredulity or religious conviction. It is the straightforward empirical observation that the proposed mechanism, given every possible advantage, does not produce the results attributed to it. The creative power of natural selection, when measured rather than assumed, turns out to be approximately zero.

Two billion generations of nothing. A worm frozen in time. That’s what the data shows. And that’s exactly what Probability Zero predicted.

DISCUSS ON SG


Veriphysics: Triveritas vs Trilemma

So yesterday, I posted about the Agrippan Trilemma, also known in its modern formulation as the Münchhausen Trilemma, a significant philosophical device asserting that any attempt to justify knowledge leads to one of three unsatisfactory outcomes: circular reasoning, infinite regress, or dogmatic assertion. A number of you agreed that this was a worthy challenge that would provide a suitable test for the epistemological strength of the Triveritas.

And while the purpose of Veriphysics is not to expose the flaws in ancient or modern philosophy, as it happens, the Triveritas is not only the first epistemological system able to successfully defend itself against the Trilemma; in the process of mounting that defense, Claude Athos and I also identified a fundamental flaw in the Trilemma itself that renders it invalid and falsifies its claims to universality.

So, if you are philosophically inclined, I invite you to read a Veriphysics working paper that both solves the Trilemma for the first time in nearly 2,000 years and demonstrates its invalidity.

Solving the Agrippan Trilemma: Triveritas and the Third Horn

The Agrippan Trilemma holds that any attempt to justify a claim must terminate in infinite regress, circularity, or dogmatic stopping. No major epistemological framework has solved it; each concedes one horn. This paper solves the Trilemma by demonstrating that the Triveritas survives all three horns, identifying an amphiboly in the third horn that renders the argument invalid, and providing a counterexample that falsifies the Trilemma’s claim to universality. The Trilemma’s third horn rests on an amphiboly: it conflates “terminates” with “terminates arbitrarily,” treating the two as logically equivalent. They are not. The Triveritas, which requires the simultaneous satisfaction of three independently necessary epistemic conditions (logical validity, mathematical coherence, and empirical anchoring), terminates at three stopping points of fundamentally different kinds, each checked by the other two. The probability of error surviving all three checks is strictly less than the probability of surviving any one; this is proved mathematically and confirmed empirically across twelve historical cases. Termination that is independently cross-checked across three dimensions is not arbitrary. It is not dogmatic. And it is not the same epistemic defect the Trilemma identifies. The third horn breaks because the Trilemma never distinguished checked termination from unchecked termination, and that distinction is the one upon which the entire Trilemma and its claim to universality depend.
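The probabilistic claim in the abstract can be stated compactly. A minimal sketch, assuming the three checks are independent and each catches an error with nonzero probability (so that 0 < p_i < 1 for each check), where p_L, p_M, p_E are the probabilities of an error surviving the logical, mathematical, and empirical checks respectively:

```latex
% Assuming independent checks with 0 < p_i < 1 for i \in \{L, M, E\}:
P(\text{error survives all three}) \;=\; p_L \, p_M \, p_E
  \;<\; \min(p_L,\, p_M,\, p_E)
% The strict inequality holds because the minimum factor is multiplied
% by two further factors, each strictly less than 1.
```

Note that the strict inequality depends on the independence assumption; correlated checks would weaken, though not eliminate, the bound.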

DISCUSS ON SG


Veriphysics: The Treatise 019

II. The Name and Its Meaning

A philosophy requires a name, something that is more than an identifying label, something that serves to describe its essential orientation. The word should be memorable, pronounceable, and meaningful. It should capture the philosophy’s core insight and clearly distinguish the framework from its rivals. In addition to its identity, it also requires an objective and a foundation.

Veriphysics was chosen as the name for this new philosophy because, unlike classical philosophy, which is focused on knowledge; metaphysics, which examines the nature of reality; Scholasticism, which combines the classical tradition with Christian theology; and Enlightenment philosophy, which claims to be established on reason but is based upon the hidden knowledge known as gnosis, veriphysics is focused solely on truth, or veritas. Every aspect of veriphysics is meant to explore and expand the concept of truth to the greatest extent possible, through every path that is capable of leading to some aspect of the singular, core, and underlying Truth.

The objective of veriphysical philosophy is veriscendance. Veriscendance derives from two roots: veritas and ascendance, suggesting both ascent and transcendence. This fusion is a deliberate choice. Veriscendance is defined as the end result of ascending through the various limited aspects of truth that humanity is capable of perceiving toward ultimate Truth, thereby recognizing the fact that human knowledge genuinely grasps various aspects of reality while acknowledging that the full truth about the comprehensive scope of existence across all its various dimensions intrinsically exceeds both our conceptual grasp as well as the limits of our knowledge.

Even the name of this objective therefore rejects the hubris of the Enlightenment’s epistemology. The Enlightenment imagined that autonomous reason could eventually achieve a God’s-eye perspective of existence, that sufficient improvement in method would somehow yield complete knowledge, and that every aspect of the universe was both material and eventually attainable through human inquiry. This fantasy has been entirely refuted by the very sciences the Enlightenment celebrated. Quantum mechanics has revealed the irreducible indeterminacy at the foundations of matter. Cosmology declares that ninety-five percent of the universe is dark matter and dark energy, unobservable and unexplained, and identified only by its gravitational effects. The Enlightenment materialism that once promised to explain everything now cannot account for most of what its own methods declare to be real and material.

Veriphysics is constructed on a series of very different axioms. It declares that human knowledge is real, but incomplete, genuine but inherently limited. As the apostle Paul declared, we see as though through a glass, darkly. The image in the glass is not an illusion or a shadow, it corresponds to reality, it can be refined and clarified, and it supports both genuine understanding and meaningful action. But the image is not, and it can never be, the thing itself. It can never be more than a small part of the thing. We cannot conceive the whole. The fullness of Truth exceeds and transcends both our present and our future capabilities. We ascend toward it but we do not arrive at it, not in this life and almost certainly not in the next either.

This is not skepticism. The skeptic denies that the glass portrays anything real. Veriphysics affirms that it does. The image is partial, but it is an image of something real. The ascent is incomplete, but it is nevertheless a genuine advancement toward something concrete. Truth exists, it is knowable, we genuinely know what we know, and we know more than the mere fact of our own cognition. The partial nature of the truth that is accessible to us is not a defect to be overcome by improved methodologies, it is a feature of our cognition as creatures, a limit designed into the structure of finite minds approaching the reality of the infinite.

In other words, the distinction between reason and revelation is intrinsically false. They are merely two different paths to the same end.

The objective of veriphysics also carries a connotation of elevation in the political sense, of dominance, of supremacy, and of the correct ordering of intellectual and social life. This connotation is intentional. Veriphysics necessarily means that an orientation toward the truth must order society and intellect, that the pursuit of truth is not one value among many but the architectonic value that makes all the others coherent and meaningful. A civilization that abandons truth as a fundamental objective cannot achieve either neutrality or progress; it instead assures chaos, manipulation, and degeneration.

Veriphysics is a necessary goal for the humanist, because the societal pursuit of truth is a precondition of human flourishing.

You can now buy the complete Veriphysics: The Treatise at Amazon in both Kindle and audiobook formats if you’d like to read ahead or have it available as a reference. Thanks to many of the readers here, it is presently a #1 bestseller in both Epistemology and Metaphysics.

DISCUSS ON SG


Veriphysics: The Treatise 018

Part Three: The Path Toward Truth

I. Introduction: The Possibility of Renewal

The Enlightenment has failed. Its political philosophy produced tyranny wearing the mask of liberation. Its economics produced models that do not describe reality and policies that impoverish the very populations they claimed they would enrich. Its science produced institutions incapable of correcting their own errors and a theory of life that cannot survive basic arithmetic. Its epistemology consumed itself, beginning with the enthronement of reason and ending with its abdication and abandonment.

The classical tradition that preceded the Enlightenment, though superior in substance, failed to defend itself. It spoke to specialists while its opponents spoke to the public. It defended when it should have attacked. It assumed good faith in a rhetorical war. It possessed the tools of logic, mathematics, and empirical inquiry but did not deploy them as weapons. It was outmaneuvered, outspent, and outpublicized by an opponent whose arguments could not have survived serious scrutiny.

There is an intellectual void left by the Enlightenment’s collapse. Human beings cannot live without coherent frameworks for understanding their reality, grounding their morality, informing their decisions, and orienting their actions. The borrowed capital of Christendom and the societal inertia upon which the Enlightenment drew, even as it denied its debts, have been exhausted.

Something will fill the vacuum. The vital question is whether whatever fills it will be true.

This is not a question for passive observers. The void will be filled by whoever seeks to fill it, by those with the will, the resources, and the vision to offer an alternative. If the heirs of the tradition offer nothing, then something else will take its place: a new ideology, a new materialism, a new paganism, a new barbarism, and a new madness. The opportunity is real but it will not wait forever.

What is required is neither a museum restoration of the medieval synthesis nor an accommodation to postmodernity that sacrifices substance for elite approval. What is needed is a genuine philosophical renewal, the construction of an intellectual framework that recovers what is true in the tradition, incorporates what has been genuinely learned in the intervening centuries, and provides the tools necessary to distinguish truth from falsehood with greater force and rigor.

That framework is Veriphysics.

DISCUSS ON SG



Veriphysics: The Treatise 017

VIII. The Shape of Renewal

The path forward is not a return to pure dialectic, as though the lessons of the Enlightenment’s victory over the last three centuries could simply be unlearned. Nor is it an embrace of pure rhetoric, which would make a neoclassical tradition no better and no more viable than its opponents. It is the synthesis that the Enlightenment pretended to offer but never delivered: genuine logical rigor combined with genuine mathematical abstraction, connected to genuine empirical grounding, and all deployed with rhetorical effectiveness. That is the optimal philosophical path.

This requires several things.

First, it requires calling all the bluffs. Every Enlightenment claim that invokes reason, mathematics, or evidence must be challenged to produce the reasoning, the equations, and the evidence. These challenges must be pressed relentlessly, and publicly, until the bankruptcy is fully exposed. The tradition has been too polite and too willing to assume good faith on the part of opponents who relentlessly operate in bad faith. That philosophical courtesy must end.

Second, it requires actually doing the intellectual labor. It is not enough to assert that the tradition has logic, mathematics, and evidence on its side. The logic must be articulated clearly. The mathematics must be calculated accurately and presented accessibly. The evidence must be gathered and displayed. The tradition must mint real philosophical currency and spend it lavishly.

Third, it requires addressing the public. The specialized vocabulary that served the tradition well in the seminar room is a liability in the public square. The arguments must be translated, popularized, and even dumbed down where necessary in order to make them accessible to the layman who lacks specialist training. Clarity is not the enemy of rigor; it is its completion.

Fourth, it requires going on offense. The tradition has played defense long enough. The Enlightenment’s premises are vulnerable, and are even more vulnerable than they have ever been now that their evil consequences are manifest. Those premises must be attacked: the autonomous reason that cannot ground itself, the social contract that no one signed, the invisible hand that does not exist, the progress that has not occurred. The tradition must set the agenda rather than respond to it.

Fifth, it requires building institutions. The Enlightenment understood that ideas require infrastructure. The new philosophical tradition must understand this too. Alternative platforms, alternative credentials, alternative networks of patronage and publication must be created, funded, policed, and sustained. A long game is not only in order, it is necessary.

Now, these actions are not strictly necessary. The Enlightenment is dying of its own contradictions. The tradition that it displaced remains true. The tools that the Enlightenment falsely claimed, logic, mathematics, and empirical evidence, are readily available to those willing to use them honestly. The rhetorical landscape has gradually shifted in ways that favor truth over propaganda, and rhetoric supported by dialectic over pure, groundless rhetoric.

What is needed is a philosophical framework that unites these elements: the perennial insights of the tradition, the rigorous methods it always possessed, the empirical data now available, and the rhetorical effectiveness necessary to make truth prevail. Such a framework would not be a revival of Scholasticism, nor a capitulation to Enlightenment terms, but something truly new, a genuine advancement of the historical classical tradition that is capable of meeting the various intellectual needs of the present.


Since a number of people have asked me to make these posts available in ebook form, I have done so. Please note that this is not the complete work, it is only the 20,000-word treatise that contains the first two parts that have previously appeared here on the blog, as well as the third part, entitled The Path Toward Truth. I do not know when the complete work will be done and I do not have a target date for its completion.

DISCUSS ON SG