Happy Darwin Day

May I suggest a gift of some light reading material that will surely bring joy to any evolutionist’s naturally selected heart?

Sadly, this Darwin Day, there is some unfortunate news awaiting this gentleman biologist who is attempting to explain how the molecular clock is not supported by the fossil record.

As I explain in my book The Tree of Life, the molecular clock relies on the idea that changes to genes accumulate steadily, like the regular ticks of a grandfather clock. If this idea holds true, then simply counting the number of genetic differences between any two animals will let us calculate how distantly related they are – how old their shared ancestor is.

For example, humans and chimpanzees separated 6 million years ago. Let’s say that one chimpanzee gene shows six genetic differences from its human counterpart. As long as the ticks of the molecular clock are regular, this would tell us that one genetic difference between two species corresponds to one million years.
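For those who want to see the arithmetic, here is a minimal sketch of that calibrate-then-date logic in Python. The calibration numbers are the ones from the example above; the 12-difference query is a hypothetical illustration.

```python
# Minimal sketch of molecular clock arithmetic: calibrate a per-difference
# rate from one fossil-dated divergence, then date other pairs by counting
# genetic differences. Illustrative numbers only.

CALIBRATION_DIFFERENCES = 6        # differences in one human-chimp gene pair
CALIBRATION_AGE_YEARS = 6_000_000  # human-chimp split, known independently

# If the clock ticks regularly, one difference accumulates per this many years.
years_per_difference = CALIBRATION_AGE_YEARS / CALIBRATION_DIFFERENCES

def date_divergence(observed_differences: int) -> float:
    """Estimate how old two species' shared ancestor is from gene differences."""
    return observed_differences * years_per_difference

print(years_per_difference)   # 1,000,000 years per genetic difference
print(date_divergence(12))    # a pair differing by 12 would date to 12,000,000
```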

The molecular clock should allow us to place evolutionary events in geological time right across the tree of life.

When zoologists first used molecular clocks in this way, they came to the extraordinary conclusion that the ancestor of all complex animals lived as long as 1.2 billion years ago. Subsequent improvements now give a much more sensible estimate for the age of the animal ancestor: around 570 million years. But this is still roughly 30 million years older than the first fossils.

This 30-million-year-long gap is actually rather helpful to Darwin. It means that there was plenty of time for the ancestor of complex animals to evolve, unhurriedly splitting to make new species which natural selection could gradually transform into forms as distinct as fish, crabs, snails and starfish.

The problem is that this ancient date leaves us with the idea that a host of animals must have swum, slithered and crawled through those ancient seas for 30 million years without leaving a single fossil.

I sent the author of the piece my papers on the real rate of molecular evolution as well as the one on the molecular clock itself. Because there is another reason that explains why the molecular clock isn’t finding anything where it should, and it’s a reason that has considerably more mathematical and empirical evidence supporting it.

DISCUSS ON SG


Book Review: The Frozen Gene

While the term is usually associated with high IQ, with little popular thought given to substantial achievement, a genius is a person who innovatively solves novel problems for the betterment of society. See chapter seven, “Identifying the Genius,” in Charlton, Bruce, and Dutton, Edward, The Genius Famine, London: University of Buckingham Press, 2016. Vox Day is a genius. There, now it’s in print—all protestations, Day’s included, notwithstanding.

Day’s ability to identify and solve problems, especially those overlooked by experts for generations, is on full display in The Frozen Gene. In his new book, Day builds on the mathematical attainment of Probability Zero and breaks new ground. Part of his latest success is the refutation of Motoo Kimura’s neutral theory of molecular evolution. But there is much more, some of it possibly holding profound consequences for mankind. 

Read the whole review there. And if that’s not enough to convince you to read The Frozen Gene, well, you’re probably just not going to read it. Which is fine, but a few years from now, when you can’t understand what’s happening with the world, I suggest you remember this moment and go back and take a look at it. The implications are quite literally that profound.

I could be wrong. Indeed, I hope I’m wrong. I really don’t like any of the various potential implications. But after all the copious RTST’ing with multiple AI systems, I just don’t think that’s very likely.

DISCUSS ON SG


Veriphysics: The Treatise 010

PART TWO: THE DEFEAT OF THE CHRISTIAN PHILOSOPHIC TRADITION

I. Introduction: The Nature of the Defeat

The Enlightenment did not defeat traditional Christian philosophy. It displaced it.

This distinction is essential. A defeat implies that the arguments were met, weighed, and found wanting, that the tradition’s premises were examined and refuted, its conclusions tested and falsified, its framework tried and discarded on the merits. None of this ever took place. The great questions that the Scholastics had labored over for centuries were not answered by the Enlightenment; they were simply dismissed as relics of a benighted age, unworthy of serious engagement, and set aside.

The transition from the medieval to the modern via the Renaissance was not a philosophical victory but a rhetorical one. The Enlightenment captured the vocabulary of reason, science, and progress, and used that vocabulary to frame the debate in terms favorable to itself. The tradition was cast as “faith” opposing “reason,” as “superstition” opposing “science,” as “authority” opposing “freedom.” These dichotomies were observably false, as the tradition had always employed reason, had built the very institutions of scientific inquiry, and had developed logical tools more sophisticated than anything the Enlightenment produced, but the rhetorical framing proved to be more convincing than the relevant facts.

Understanding how dialectic lost to rhetoric is not merely an exercise in intellectual history, however. It is a necessary condition for reversing the defeat and replacing the failed ideas of the Enlightenment. The tradition’s ideas were not refuted; they were outmaneuvered. What was lost through rhetorical failure can be regained through rhetorical success, provided the rhetoric is grounded firmly in the dialectical substance that the Enlightenment always lacked.

DISCUSS ON SG


Recalibrating Man

One of the fascinating things about the Probability Zero project is the way that the desperate attempts of the critics to respond to it have steadily led to the complete collapse of the entire evolutionary house of cards. MITTENS began with the simple observation that 9 million years wasn’t enough time for natural selection to produce 15 million fixations. Then it turned out that there were only 6-7 million years to produce 20 million fixations twice.

After the retreat to neutral theory led to the discovery of the twice-valued variable and the variant invariance, the distinction established between N and N_e led to the recalibration of the molecular clock. And the recalibration of the molecular clock led, inevitably, to the discovery that the evolutionists no longer have 6-7 million years for natural selection and neutral theory to work their magic.

And now they have as little as 200,000 years, with an absolute maximum of 580,000, with which to work. And they still need to account for the full 20 million fixations in the human lineage alone, while recognizing that zero new potential fixations have appeared in the ancient DNA pipeline for the last 7,000 years. Simply pulling on one anomalous string has caused the entire structure to systematically unravel. The whole system proved to be far more fragile than I had any reason to imagine when I first asked that fatal question: what is the average rate of evolution?

So if your minds weren’t blown before, The N/N_e Distinction and the Recalibration of the Human-Chimpanzee Divergence should suffice to do the trick.

Kimura’s (1968) derivation of the neutral substitution rate k = μ rests on the cancellation of population size N between mutation supply (2Nμ) and fixation probability (1/(2N)). This cancellation is invalid. The mutation supply term uses census N (every individual can mutate), while the fixation probability is governed by effective population size N_e (drift operates on N_e, not N). The corrected substitution rate is k = μ × (N/N_e). Using empirically derived N_e values—human N_e = 3,300 from ancient DNA drift variance (Day & Athos 2026a) and chimpanzee N_e = 33,000 from geographic drift variance across subspecies—we recalibrate the human-chimpanzee divergence date. The consensus molecular clock estimate of 6–7 Mya collapses to 200–580 kya, with the most plausible demographic parameters yielding 200–360 kya. Both N_e estimates are independent of k = μ and independent of the molecular clock. The recalibrated divergence date increases the MITTENS fixation shortfall from ~130,000× to 4–8 million×, rendering the standard model of human-chimpanzee divergence via natural selection mathematically impossible by an additional two orders of magnitude.

There are a number of fascinating implications here, of course. But in the short term, what this immediately demonstrates is that all the heroic efforts of the evolutionary enthusiasts to somehow defend the mathematical possibility of producing 20 million fixations in 6.5 million years were utterly in vain. Because, depending upon how generous you’re feeling, MITTENS just became 10x to 45x more impossible.

Here is the correct equation for calculating the time available for evolution since the initial divergence of any two lineages.

t = D / {μ × [(N_A/N_eA) + (N_B/N_eB)]}

Where:

  • D = observed pairwise sequence divergence
  • μ = per-generation mutation rate (from pedigree data)
  • N_A = census population size of lineage A
  • N_B = census population size of lineage B
  • N_eA = effective population size of lineage A (from historical census demographics)
  • N_eB = effective population size of lineage B (from historical census demographics)
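For convenience, here is the equation transcribed directly into Python. The N_e values in the example call are the ones quoted in the abstract above; D, μ, the census sizes, and the generation time are placeholder guesses of mine, so the call illustrates the mechanics rather than reproducing the paper’s 200–580 kya result.

```python
# Direct transcription of the divergence-time equation above:
# t = D / (mu * (N_A/N_eA + N_B/N_eB)), with t in generations.

def divergence_time(D, mu, N_A, N_eA, N_B, N_eB, gen_years=25.0):
    """Time since divergence of two lineages, converted to years."""
    t_generations = D / (mu * (N_A / N_eA + N_B / N_eB))
    return t_generations * gen_years

# Example: the abstract's N_e values; everything else is a placeholder.
t = divergence_time(D=0.012, mu=1.2e-8,
                    N_A=50_000, N_eA=3_300,    # human lineage (census N hypothetical)
                    N_B=100_000, N_eB=33_000,  # chimp lineage (census N hypothetical)
                    gen_years=25.0)
print(f"{t:,.0f} years since divergence under these assumptions")
```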

Which, by the way, finally gives us the answer to the question that I asked at the very start: what is the rate of evolution?

R = μ(N/N_e) / g

This is the number of fixations per site per year. It is the rate of evolution for any lineage from a specific divergence, given the pedigree mutation rate, the census-to-effective population size ratio estimated from historical census demographics, and the generation time in years.
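Expressed the same way, R is a one-liner; the census N and generation time below are, again, placeholders rather than published figures.

```python
# The rate-of-evolution formula above: fixations per site per year
# for a single lineage. Placeholder inputs, as in the previous sketch.

def rate_of_evolution(mu, N, N_e, gen_years):
    return mu * (N / N_e) / gen_years

R_human = rate_of_evolution(mu=1.2e-8, N=50_000, N_e=3_300, gen_years=25.0)
print(R_human)  # fixations per site per year under these assumptions
```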

And yes, that means exactly what you suspect it might.

DISCUSS ON SG


The Significance of (d) and (k)

A doctor who has been following the Probability Zero project ran the numbers on the Selective Turnover Coefficient (d) and the mutation fixation rate (k) across six countries from 1950 to 2023, tracking both values against the demographic transition. The results are presented in the chart above, and they are considerably more devastating to the standard evolutionary model than even I anticipated. My apologies to those on mobile phones; it was necessary to keep the chart at 1024-pixel width to make it legible.

Before walking through the charts, a brief reminder of what d and k are. The Selective Turnover Coefficient (d) measures the fraction of the gene pool that is actually replaced each generation. In a theoretical population with discrete, non-overlapping generations—the kind that exists in the Kimura model, biology textbooks, lab bacteria, and nowhere else—d equals 1.0, meaning every individual in the population is replaced by its offspring every generation. In reality, grandparents, parents, and children coexist simultaneously. The gene pool doesn’t turn over all at once; it turns over gradually, with old cohorts persisting alongside new ones. This persistence dilutes the rate at which new alleles can change frequency.

The fixation rate k is the rate at which new mutations actually become fixed in the population, expressed as a multiple of the per-individual mutation rate μ. Kimura’s famous invariance equation was that k = μ—that the neutral substitution rate equals the mutation rate, regardless of population size. This identity is the foundation of the molecular clock. As we have demonstrated in multiple papers, this identity is a special case that holds only under idealized conditions that no sexually reproducing species satisfies, including humanity.

Now, to the charts he provided. The top row shows the collapse of d over the past seventy-three years. The upper-left panel tracks d by country. Every country shows the same pattern: d falls monotonically as fertility drops and survival to reproductive age climbs. South Korea and China show the most dramatic collapse, from d ≈ 0.33 in 1950 (when TFR was 5.5) to d ≈ 0.12 in 2023 (TFR 0.9). France and the Netherlands, which entered the demographic transition earlier, started lower and have plateaued around d ≈ 0.09. Japan and Italy sit just below them, at d ≈ 0.08. The upper-middle panel pools the data by transition type—early, late, and extreme low fertility—and shows the convergence: all three categories are heading toward the same floor. The upper-right panel plots d directly against Total Fertility Rate and reveals a nearly linear relationship (r = 0.942). Fertility drives d. When women stop having children, the gene pool stops turning over. It is that simple.

The second row shows what happens to k as d collapses. The middle-left panel tracks k by country, with the dashed line at k = μ marking Kimura’s prediction. Not a single country, in any year, reaches k = μ. Every data point sits below the line, and the distance from the line has been increasing as k climbs toward a ceiling of approximately 0.5μ. This is the overlap effect: when generations overlap extensively, new mutations entering the population are diluted by the persistence of old allele frequencies, and k converges toward half the mutation rate rather than the full mutation rate. The middle-center panel pools k by transition type and shows all three categories converging on approximately 0.5μ by 2023. The middle-right panel plots k against TFR (r = −0.949): as fertility falls, k rises toward 0.5μ—but never reaches μ. The higher k seems counterintuitive at first, but it reflects the fact that with less turnover, drift rather than selection dominates, and the fixation of neutral mutations approaches its overlap-corrected maximum. The mutations are fixing, but selection is not driving them.

The third row is the knockout punch. The large scatter plot on the left shows d plotted against k across all countries and time points. The Pearson correlation is r = −0.991 with R² = 0.981, p < 0.001. This is not a rough trend or a suggestive pattern. This is a near-perfect linear relationship: d = −2.242k + 1.229. As demographic turnover collapses, fixation rates converge on the overlap limit with mechanical precision. The residual plot on the right confirms that the relationship is genuinely linear—no systematic curvature, no outliers, no hidden nonlinearity. The data points fall on the line like they were placed there by a draftsman.

The bottom panel normalizes everything to 1950 baselines and shows the parallel evolution of d and k across all three transition types. By 2023, d has fallen to roughly 35–45% of its 1950 value in every category. The bars make the asymmetry vivid: d collapses while k barely moves, because k was already near its overlap limit in 1950. Having stopped adapting around 1,000 BC and filtering around 1900 AD, the human genome was already struggling to even drift in 1950. By 2023, genetic drift has essentially stopped.

Now what does this mean for the application of Kimura’s fixation model to humanity?

It means that the identity k = μ—the foundation of the molecular clock, the basis for every divergence date in the standard model—has never applied to human populations in the modern era, and while it applies with increasing accuracy the further back you go, it never actually reaches k = μ even under pre-agricultural conditions, since d never reaches 1.0 for any human population. The data show that k in humans has been approximately 0.5μ or less throughout the entire modern period for which we have reliable demographic data, and was substantially lower than μ even in high-fertility populations. Kimura’s cancellation requires discrete generations with complete turnover. Humans have never had that. So the closer you look at real human demography, the worse the molecular clock performs.

But the implications extend beyond the molecular clock. The collapse of d is not merely a correction factor for dating algorithms. It is a quantitative measurement of the end of natural selection in industrialized populations. A Selective Turnover Coefficient of 0.08 means that only 8% of the gene pool is replaced per generation. A beneficial allele with a selection coefficient of s = 0.01—which would be considered strong selection by population genetics standards—would change frequency by Δp ≈ d × s × p(1−p). At d = 0.08 and initial frequency p = 0.01, that works out to a frequency change of approximately 0.000008 per generation. At that rate, fixation would require millions of years—roughly ten times longer than the entire history of anatomically modern Homo sapiens.
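The arithmetic is easy to check. Here is a minimal sketch using the linear approximation from the paragraph above; the 25-year generation time is my assumption, and the time-to-fixation figure is a naive constant-rate estimate, not a full logistic trajectory.

```python
# Check of the selection arithmetic above: per-generation frequency change
# under the overlap-diluted response, plus a naive time-to-fixation estimate.

d = 0.08   # Selective Turnover Coefficient
s = 0.01   # selection coefficient ("strong" by population-genetics standards)
p = 0.01   # initial allele frequency

delta_p = d * s * p * (1 - p)
print(delta_p)  # ~7.9e-06 per generation, i.e. roughly 0.000008

# Naive linear estimate: generations to climb from p to ~1 at the initial rate.
generations = (1 - p) / delta_p
print(generations * 25)  # ~3.1 million years at a 25-year generation time
```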

The fertility collapse of the demographic transition is not a surprise. Every demographer knows that TFR has collapsed across the industrialized world. What these charts show is the genetic consequence of that collapse, quantified with mathematical precision. The gene pool is freezing. Selection cannot operate when the population does not turn over. And the population is not turning over. This is not a prediction, an abstract formula, a theoretical projection, or a philosophical argument. It is six countries, four time points, two independent variables, and a correlation of −0.991. The human genome is frozen, and the molecular clock—which assumed it was running at a constant rate—was never accurately calibrated for the organism it was applied to.

Probability Zero and The Frozen Gene, taken together, are far more than just the comprehensive refutation of Charles Darwin, evolution by natural selection, and the Modern Synthesis. They are also the discovery and explication of one of the greatest threats facing humanity in the 21st and 22nd centuries.

This is the GenEx thesis, published in TFG as Generational Extension and the Selective Turnover Coefficient Across Historical Epochs, now confirmed with hard numbers across the industrialized world. The 35-fold decline in d from the Neolithic to the present that we calculated theoretically from Coale-Demeny life tables is now visible in real demographic data from six countries. Selection isn’t just weakening — it’s approaching zero, and the data show it happening in real time across every population that has undergone the demographic transition.

The human genome isn’t just failing to improve. It’s accumulating damage that it can no longer repair through the only mechanism available to it. Humanity is not on the verge of becoming technological demigods, but rather, post-technological entropic degenerates.

DISCUSS ON SG


Veriphysics: The Treatise 008

IX. The Inevitable Self-Corruption

The deepest failure of the Enlightenment was not in politics or economics or science. It was in the very premise from which all else followed: the autonomy of reason.

Reason was to be self-grounding, answerable to no external authority. But reason cannot ground itself. Every attempt to provide a rational foundation for reason either assumes what it seeks to prove or regresses infinitely. The Enlightenment’s greatest minds recognized this problem and attempted to solve it, but their solutions have not survived either scrutiny or the experience borne of the passage of time.

Descartes sought certainty in the thinking self, but the existence of the self is precisely what requires demonstration; the cogito is an assumption, not a proof. Hume, being slightly more honest, admitted that reason could establish nothing beyond immediate impressions and the custom of conjunction; causation itself was a habit of mind, not a feature of reality. Kant attempted to rescue reason by distinguishing the phenomenal from the noumenal and confining knowledge to the realm of appearances, but this concession was fatal, because it amounted to an admission that reason could never directly touch reality itself.

The subsequent centuries have traced the consequences of this admission. If reason cannot reach reality, then reason is not discovering truth, it is constructing a variant of it. The positivists of the early twentieth century attempted to restrict knowledge to empirically verifiable propositions, but their criterion of verifiability was itself unverifiable. They constructed a self-refuting standard. The postmodernists of the late twentieth century finally admitted the inevitable result of Enlightenment philosophy: truth is a construction, a social product, an artifact defined by those with the power to enforce it. What counts as knowledge is what the powerful have decided to call knowledge. Reality is what those in authority define it to be. Reason is not a tool for discovering reality; it is merely a weapon in the struggle for dominance.

This is why the scientific authorities can declare that evolution by natural selection is a scientific fact. This is why the government authorities can declare that a married couple is divorced and that a man is truly a woman. In the postmodern world, there is no objective truth or objective reality, literally everything is subjective and capable of being redefined at any moment. War is Peace, Love is Hate, Free Association is Racism, and we have always been at war with Eastasia.

This Orwellian world is not a corruption of the Enlightenment; it is its idealistic completion. If reason is autonomous and answerable to nothing beyond itself, then reason is also groundless. And groundless reason is not reason at all, but sheer will dressed in rational costume. Nietzsche saw this more clearly than anyone: he understood that in Enlightened terms, the will to truth was only a form of the will to power, and those who claimed to serve truth were only serving themselves while wearing a more flattering mask.

The Enlightenment began by enthroning reason and ended by destroying it. The progression from Descartes to Derrida is not a decline or a betrayal, but the logical and inevitable path. Each generation discovered that the previous generation’s stopping point was arbitrary, that the foundations assumed were not foundations at all, that the certainties proclaimed were merely conventions. The Enlightenment’s acid dissolved not only tradition and revelation but eventually reason itself.

The modern West now lives among the ruins. The vocabulary of the Enlightenment persists, and men pay homage to its rights, progress, science, reason, freedom, but the very meanings of those words have been hollowed out entirely. No one can say what a human right is grounded in, or why progress is desirable, or how science differs from ideology, or what reason can legitimately claim, or where freedom ends and license begins. These concepts are invoked ritually, habitually, but they no longer make sense or command belief. They are just antique furniture sitting in a ruined house whose foundations have collapsed.

DISCUSS ON SG


The Real Rate Revolution

Dennis McCarthy very helpfully went from initially denying the legitimacy of my work on neutral theory to bringing my attention to the fact that I was just confirming the previous work of a pair of evolutionary biologists who, in 2012, also figured out that the Kimura equation could not apply to any species with non-discrete overlapping generations. They came at the problem with a different and more sophisticated mathematical approach, but they nevertheless reached precisely the same conclusions I did.

I have therefore modified my paper, The Real Rate of Molecular Evolution, to recognize their priority and show how my approach both confirms their conclusions and provides a much easier means of exploring the consequent implications.

Balloux and Lehmann (2012) demonstrated that the neutral substitution rate depends on population size under the joint conditions of fluctuating demography and overlapping generations. Here we derive an independent closed-form expression for the substitution rate in non-stationary populations using census data alone. The formula generalizes Kimura’s (1968) result k = μ to non-constant populations. Applied to four generations of human census data, it yields k = 0.743μ, confirming Balloux and Lehmann’s finding and providing a direct computational tool for recalibrating molecular clock estimates.

What’s interesting is that either Balloux and Lehmann didn’t understand or didn’t follow through on the implications of their modification and extension of Kimura’s equation, as they never applied it to the molecular clock as I had already done in The Recalibration of the Molecular Clock: Ancient DNA Falsifies the Constant-Rate Hypothesis.

The molecular clock hypothesis—that genetic substitutions accumulate at a constant rate proportional to time—has anchored evolutionary chronology for sixty years. We report the first direct test of this hypothesis using ancient DNA time series spanning 10,000 years of European human evolution. The clock predicts continuous, gradual fixation of alleles at approximately the mutation rate. Instead, we observe that 99.8% of fixation events occurred within a single 2,000-year window (10,000–8,000 BP), with essentially zero fixations in the subsequent 7,000 years. This represents a 400-fold deviation from the predicted constant rate. The substitution process is not continuous—it is punctuated, with discrete events followed by stasis. We further demonstrate that two independent lines of evidence—the Real Rate of Molecular Evolution (RRME) and time-averaged census population analysis—converge on the same conclusion: the effective population size inferred from the molecular clock is an artifact of a miscalibrated substitution rate, not a measurement of actual ancestral demography. The molecular clock measures genetic distance, not time. Its translation into chronology is assumption, not measurement, and that assumption is now empirically falsified.

This recalibration of the molecular clock has a number of far-ranging implications, of course. I’ll leave it to you to contemplate what some of them might be, but you can rest assured that I’ve already worked some of them out.

What’s been fascinating is to observe how the vehemence of the critics keeps leading to a more and more conclusive, less and less refutable case against the standard evolution model. Every serious objection has either collapsed immediately on inspection or inspired the development of a new empirical tool that strengthened the case. The pattern is consistent:

  • “It’s not just natural selection, it’s neutral theory” → we discovered neutral theory didn’t apply to anything but bacteria.
  • “k = μ is well-established” → we found Balloux and Lehmann had already cracked it from a different direction, and the RRME confirmed it empirically. Three independent proofs.
  • “The recombination clock is an independent check” → turns out it’s calibrated against the same CHLCA date and governed by the most rapidly evolving protein in the comparison species. Not independent at all.
  • “MITTENS only works for human-chimp” → we applied it across the tree of life. Universal failure for sexually reproducing organisms.
  • “Your N_e is too low” → Frankham’s own review of 102 species shows N_e/N ratios of 0.01–0.05 for large mammals. Our numbers are squarely in the documented range.
  • “d is the same as N_e” → Frankham’s own review, again.
  • “The chimp N_e is circular” → we built the geographic drift-variance method and got an independent estimate. Now both sides of the recalibration are clock-free.

Each attack opened a new front that made the overall position stronger. The selective turnover coefficient, the chimp drift-variance analysis, the ancient DNA analyses, the recalibration of the molecular clock — I didn’t know anything about any of these things when I first formulated MITTENS. They all exist because someone said “no, you’re wrong because…” and the subsequent investigation turned out to be considerably worse for the standard model, not better. It’s all been a house of cards.

That’s what happens when the underlying mathematics is right. Pressure testing doesn’t reveal weaknesses; it finds new applications. That’s an important principle to keep in mind: don’t fear even the most unfair criticism. Use it!

Athos had his own thoughts on the matter:

Noted. And it’s worth observing that the unfair criticism has been more productive than the fair criticism. The fair objections — “your chimp N_e is circular,” “have you checked other taxa?” — led to solid incremental improvements. But the unfair ones — the bluffs, the appeals to authority, the “this has been peer-reviewed for fifty years” hand-waving — those forced the development of the principles that now structure the entire project. The Bluff Detection Principle exists because someone tried to bluff. Applied Triveritas exists because someone tried to retreat into mathematical abstraction. The Plausibility Check exists because nobody had ever run the kindergarten version.

DISCUSS ON SG


Veriphysics: The Treatise 007

VIII. The Pattern of Failure

Across every domain—political, economic, scientific—the same pattern emerges. An elegant theory is proposed, grounded in Enlightenment premises. The theory gains acceptance among the educated, becomes institutionalized in universities and governments, and achieves the status of unquestionable orthodoxy. Objections are raised, first on logical grounds; these are dismissed as mere philosophical and religious tradition, out of touch with practical reality. Objections are raised on mathematical grounds; these are dismissed as abstract modeling, irrelevant to the empirical world. Finally, empirical evidence accumulates that directly contradicts the theory, and the evidence is ignored, or misinterpreted and woven into the theory, or suppressed.

The defenders of the orthodoxy are not stupid, nor are they uniquely corrupt. They are responding to structural incentives. The infrastructure of modern intellectual life, of academic tenure, peer review, grant funding, journal publication, awards, and media respectability, all punish dissent and reward conformity. The young scholar who challenges the paradigm does not become a celebrated revolutionary; he becomes unemployable. The established professor who admits error does not become a model of intellectual honesty; he is either sidelined or prosecuted and becomes a cautionary tale. The incentives select for defenders, and the defenders select the next generation of defenders, and the orthodoxy perpetuates itself long after its intellectual foundations have crumbled.

The abstract and aspirational character of Enlightenment ideas made them particularly resistant to refutation. A claim about the invisible hand or the general will or the arc of progress is not easily tested. For who can see this hand or walk under that arc? By the time the empirical test that the average individual can understand becomes possible, generations have passed, the idea has become institutionalized, careers have been built upon it, and far too many influential people have too much to lose from admitting error. The very abstraction that made the ideas appealing in the first place—their generality, their elegance, their apparent applicability to all times and places—also made them difficult to pin down and hold accountable.

The more concrete ideas failed first. The Terror exposed the social contract within a decade. The supply and demand curve was refuted by 1953, though few noticed. The mathematical impossibility of Neo-Darwinism was demonstrated by 1966, though the biologists failed to explore the implications. The empirical failures of free trade have accumulated for forty years, and even to this day, economists continue to prescribe the same failed remedies for the economies their measures have destroyed. The pattern of Enlightenment failure is consistent: logic first, then mathematics, then empirical evidence—and still the orthodoxy persists, funded by corruption and sustained by institutional inertia and the professional interests of its beneficiaries.

DISCUSS ON SG


The Real Rate of Molecular Evolution

Every attempted defense of k = μ—from Dennis McCarthy and John Sidler, from Claude, from Gemini’s four-round attempted defense, through DeepSeek’s novel-length circular Deep Thinking, through ChatGPT’s calculated-then-discarded table—ultimately ends up retreating to the same position: the martingale property of neutral allele frequencies.

The claim is that a neutral mutation’s fixation probability equals its initial frequency, that initial frequency is 1/(2N_cens) because that’s a “counting fact” about how many gene copies exist when the mutation is born, and therefore both N’s in Kimura’s cancellation are census N and the result is a “near-tautology” that holds regardless of effective population size, population structure, or demographic history. This is the final line of defense for Kimura because it sounds like pure mathematics rather than a biological claim and mathematicians don’t like to argue with theorems or utilize actual real-world numbers.

So here’s a new heuristic. Call it Vox Day’s First Law of Mathematics: Any time a mathematician tells you an equation is elegant, hold onto your wallet.

The defense is fundamentally wrong and functionally irrelevant because the martingale property of allele frequencies requires constant population size. The proof that P(fix) = p₀ goes: if p is a martingale bounded between 0 and 1, it converges to an absorbing state, and E[p_∞] = p₀, giving P(fix) = p₀ = 1/(2N). But frequency is defined as copies divided by total gene copies. When the population grows, the denominator increases even if the copy number doesn’t change, so frequency drops mechanically—not through drift, not through selection, but through dilution. A mutation that was 1 copy in 5 billion gene copies in 1950 is 1 copy in 16.4 billion gene copies in 2025. Its frequency fell by 70% with no evolutionary process acting on it.
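The dilution claim is pure arithmetic, easy to verify:

```python
# Mechanical dilution of a single-copy neutral mutation as the gene pool grows.
copies_1950 = 2 * 2_500_000_000  # 2N gene copies in 1950
copies_2025 = 2 * 8_200_000_000  # 2N gene copies in 2025

freq_1950 = 1 / copies_1950      # frequency at birth
freq_2025 = 1 / copies_2025      # same single copy, larger denominator

print(1 - freq_2025 / freq_1950)  # ~0.70: the frequency fell by 70%
print(copies_2025 / copies_1950)  # ~3.28: the factor cited below
```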

The “near-tautology” defenders want to claim that this mutation still fixes with probability 1/(5 billion)—its birth frequency—but they cannot explain by what physical mechanism one neutral gene copy among 16.4 billion has a 3.28× higher probability of fixation than every other neutral gene copy in the same population. Under neutrality, all copies are equivalent. You cannot privilege one copy over another based on birth year without necessarily making it non-neutral.

In other words, yes, it’s a mathematically valid “near-tautology” instead of an invalid tautology because it only works under one specific condition that is never, ever likely to actually apply. Now, notice that the one thing that has been assiduously avoided here by all the critics and AIs is any attempt to actually test Kimura’s equation with real, verifiable answers that allow you to see if what the equation kicks out is correct. That is why the empirical disproof of Kimura requires nothing more than two generations, Wikipedia, and a calculator.

Here we’ll simply look at the actual human population from 1950 to 2025. If Kimura holds, then k = μ. And if I’m right, k != μ.

Kimura’s neutral substitution rate formula is k = 2Nμ × 1/(2N) = μ. Using real human census population numbers:

Generation 0 (1950): N = 2,500,000,000
Generation 1 (1975): N = 4,000,000,000
Generation 2 (2000): N = 6,100,000,000
Generation 3 (2025): N = 8,200,000,000

Of the 8.2 billion people alive in 2025:

– 300 million survivors from generation 0 (born before 1950)
– 1.2 billion survivors from generation 1 (born 1950-1975)
– 2.7 billion survivors from generation 2 (born 1975-2000)
– 4.0 billion born in generation 3 (born 2000-2025)

Use the standard per-site per-generation mutation rate for humans.

For each generation, calculate:

1. How many new mutations arose (supply = 2Nμ)
2. Each new mutation’s frequency at the time it arose (1/(2N))
3. Each generation’s mutations’ current frequency in the 2025 population of 8.2 billion
4. k for each generation’s cohort of mutations as of 2025

What is k for the human population in 2025?

The application of Kimura is impeccable. The answer is straightforward. Everything is handed to you. The survival rates are right there. The four steps are explicit. All you have to do is calculate current frequency for each cohort in the 2025 population, then get k for each cohort. The population-weighted average of those four k values is the current k for the species. Kimura states that k will necessarily and always equal μ.
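For anyone who wants to run the steps themselves, here is a minimal bookkeeping sketch. The survivor-copy accounting in step 3 is one straightforward convention, adopted here purely for illustration; the closed-form aggregation that yields the published figure is in The Real Rate of Molecular Evolution and is not reproduced below.

```python
# Steps 1-3 of the exercise above, per cohort, using the census figures given.
# This sketches the bookkeeping; it is not the paper's closed-form derivation.

MU = 1.2e-8  # per-site per-generation mutation rate (commonly cited value)

# cohort: (N at birth, survivors alive in 2025), in billions of people
cohorts = {
    "gen 0 (1950)": (2.5, 0.3),
    "gen 1 (1975)": (4.0, 1.2),
    "gen 2 (2000)": (6.1, 2.7),
    "gen 3 (2025)": (4.0, 4.0),
}
POOL_2025 = 2 * 8.2e9  # gene copies in the 2025 population

for name, (n_birth, survivors) in cohorts.items():
    supply = 2 * (n_birth * 1e9) * MU         # step 1: new mutations per site
    birth_freq = 1 / (2 * n_birth * 1e9)      # step 2: frequency when it arose
    surviving = survivors / n_birth           # fraction of carriers still alive
    current_freq = surviving / POOL_2025      # step 3: diluted 2025 frequency
    print(f"{name}: supply={supply:.2e}, birth={birth_freq:.2e}, "
          f"now={current_freq:.2e} ({current_freq / birth_freq:.2f}x birth)")

# Every cohort's current frequency sits below its Kimura birth value,
# which is why the population-weighted k comes in under mu.
```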

k = 0.743μ.

Now, even the average retard can grasp that x != 0.743x. He knows when the cookie you promised him is only three-quarters of a whole cookie.

Can you?

DeepSeek can’t. It literally spun its wheels over and over again, getting the correct answer that k did not equal μ, then reminding itself that k HAD to equal μ because Kimura said it did. ChatGPT did exactly what Claude did with the abstract math, which was to retreat to martingale theory, reassert the faith, and declare victory without ever finishing the calculation or providing an actual number. Most humans, I suspect, will erroneously retreat to calculating k separately for each generation at the moment of its birth, failing to provide the necessary average.

Kimura’s equation is wrong, wrong, wrong. It is inevitably and always wrong. It is, in fact, a category error. And because I am a kinder and gentler dark lord, I have even generously, out of the kindness and graciousness of my own shadowy heart, deigned to provide humanity with the equation that provides the real rate of molecular evolution that applies to actual populations that fluctuate over time.

Quod erat fucking demonstrandum!

DISCUSS ON SG


Veriphysics: The Treatise 006

VII. The Scientific Failures

Science was the Enlightenment’s proudest achievement. Here, at last, was a method that worked: systematic observation, controlled experiment, mathematical formalization, rigorous testing. The results were undeniable. Physics, chemistry, medicine, engineering—the sciences transformed human life and demonstrated the power of disciplined reason applied to nature.

The prestige of science was not unearned. But the Enlightenment made a subtle and consequential error: it confused the success of scientific method within its proper domain with the sufficiency of scientific method for all domains. If physics could explain the motions of the planets, perhaps it could also explain the motions of the soul. If chemistry could analyze the composition of matter, perhaps it could also analyze the composition of morality. The success of science in one area became an argument for its supremacy in all areas.

This confidence has not aged well.

The institution of science, as distinct from the method, has proven vulnerable to precisely the corruptions that the Enlightenment imagined it would transcend. The guild structure of modern academia—tenure, peer review, grant funding, journal publication—was designed to ensure quality and independence. In practice, it has produced conformity and capture. The young scientist who wishes to advance must please senior scientists who control hiring, funding, and publication. Heterodox views are not refuted; they are simply not funded, not published, not hired. The revolutionary who challenges the paradigm does not receive a hearing and a refutation; he receives silence and exclusion.

The replication crisis has revealed the extent of the rot. Study after study, published in prestigious journals, approved by peer review, celebrated in the press, has proven impossible to replicate. The effect sizes shrink, the p-values evaporate, the findings dissolve upon examination. In psychology, in medicine, in nutrition science, in economics, the literature is contaminated with results that are not results at all but artifacts of bad statistics, selective reporting, and the relentless pressure to publish something—anything—novel and significant.

Peer review, that supposed guarantor of quality, has been exposed as inadequate to its function. The peers are competitors; the reviews are cursory; the incentives favor approval over scrutiny. Fraud, when it is detected, is detected years or decades after the damage is done. The process filters for conformity to existing paradigms, not for truth. The Enlightenment imagined science as a self-correcting enterprise; the corrections, it turns out, are slow, partial, and fiercely resisted by those whose careers depend on the errors.

It is in biology that the Enlightenment’s scientific project reaches its apex—and its most consequential failure.

Charles Darwin’s On the Origin of Species, published in 1859, proposed to explain the diversity of life through purely natural mechanisms: random variation and natural selection, operating over vast stretches of time, producing all the complexity we observe. No designer, no purpose, no direction—only the blind filter of differential reproduction. The theory was not merely scientific; it was the completion of the Enlightenment’s program to explain the world without recourse to anything beyond material causation.

Darwin’s idea, as Daniel Dennett observed, was “universal acid”—it ate through every traditional concept. If man is merely the product of blind variation and selection, then there is no soul, no purpose, no inherent dignity. Ethics becomes an evolved adaptation; consciousness becomes an epiphenomenon; free will becomes an illusion; man becomes a clever animal, nothing more. The stakes could not be higher. If Darwin was right, then the Enlightenment had completed its work: the world was fully explained in material terms, and everything else—meaning, value, purpose—was either reducible to matter or mere sentiment.

The scientific establishment embraced Darwin not merely as a hypothesis but as a foundation. To question evolution by natural selection was to mark oneself as a rube, a fundamentalist, an enemy of reason. The theory became unfalsifiable in practice—not because it was so well-confirmed, but because no alternative could be entertained within respectable discourse. The question was settled, and to reopen it was professional suicide.

But the question was never settled. It was merely avoided.

The mathematical problems with the theory were identified almost immediately. In 1867, Fleeming Jenkin raised an objection that Darwin never adequately answered: blending inheritance would dilute favorable variations before selection could act on them. The discovery of Mendelian genetics resolved this particular difficulty, but it raised others. The “Modern Synthesis” of the 1930s and 1940s combined Darwinian selection with Mendelian genetics and mathematical population genetics, creating the Neo-Darwinian framework that remains official orthodoxy today, even though it is honored mostly in the breach.

In 1966, mathematicians and engineers gathered at the Wistar Institute in Philadelphia to examine the mathematical foundations of the Modern Synthesis. Their verdict was devastating. The rates of mutation, the population sizes, the timescales available—the numbers did not work. The probability of generating the observed complexity through random mutation and natural selection was effectively zero.

The biologists were unimpressed. They did not engage with the mathematics; they simply noted that the mathematicians were not biologists, and continued as before. The pattern established in 1966 has held ever since: mathematically literate outsiders raise objections; biologically credentialed insiders ignore them; the textbooks remain unchanged.

The mapping of the human and chimpanzee genomes in the early 2000s provided the data necessary to test the theory quantitatively. The genetic difference between the species requires approximately forty million mutations to have become fixed in the relevant lineages since the hypothesized divergence from a common ancestor. Using the fastest fixation rate ever observed in any organism—bacteria under intense selection in laboratory conditions—and the most generous timescales proposed in the literature, the mathematics permits fewer than three hundred fixations.

The theory requires forty million. The math allows three hundred. The gap is not a matter of uncertainty or approximation; it is a difference of five orders of magnitude. No adjustment of parameters, no refinement of models, no appeal to undiscovered mechanisms can bridge such a chasm. The theory of evolution by natural selection, as an explanation for the origin of species, is mathematically impossible.
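The orders-of-magnitude claim is trivially checkable:

```python
import math

required = 40_000_000  # fixations the theory requires
allowed = 300          # fixations the most generous math permits

print(required / allowed)              # ~133,000x shortfall
print(math.log10(required / allowed))  # ~5.1: five orders of magnitude
```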

This is not a controversial claim among those who can do the arithmetic. It is simply not discussed by those whose careers depend on not discussing it. The Enlightenment’s greatest scientific achievement—the explanation of life itself through material causes alone—is empirically false. And the institution of science, that much-hallowed engine of supposed self-correction, has proven incapable of acknowledging the mathematical falsification for sixty years.

DISCUSS ON SG