Kimura’s Fixation Calculator

It occurred to me that since the population genetics and evolutionary biology fields are obsessed with Kimura’s substitution formula to the point of literal unreason, instead of trying to show them how Kimura made an algebraic mistake and why the formula only applies to one specific case instead of everything, it would be much more useful to demonstrate how, with a few modifications, Kimura’s equation could serve as the foundation of a predictive calculator that is considerably more accurate and useful than the original equation.

Kimura’s Fixation Calculator: Providing Neutral Theory With Predictive Capacity

Neutral theory has stood for fifty-seven years on a simple result: the substitution rate k equals the per-site mutation rate μ. This identity, derived by Kimura in three lines, rests on canceling two quantities that share a letter but not a meaning: the census number of breeding adults N (which supplies mutations) and the variance effective population size Nₑ (which governs drift and fixation). The cancellation in the derivation is valid in the special case of asexual bacteria where N ≈ Nₑ. It does not hold in sexually reproducing species, where Nₑ/N is typically ~0.1 (Frankham 1995).

Rejecting the incorrect application of the derivation and treating the realized substitution rate as the minimum of three serial constraints—input flux, polymorphism throughput, and selection cost—yields Kimura’s Fixation Calculator. The selection-cost term is a simple expression in four independently measurable parameters (maximum reproductive differential s_max ≈ 1, Selective Turnover Coefficient d, genome length L, and effective population size Nₑ). The full calculator recovers k ≈ μ for bacteria while predicting the observed compression of rates across sexual eukaryotes, where the selection term sets a ceiling two to five orders of magnitude below textbook expectations based on the standard derivation.
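
As a purely structural illustration (the paper's actual expressions for the flux, throughput, and selection-cost terms are not reproduced in this post, so the numbers below are placeholders of my own), the minimum-of-constraints logic looks like this in Python:

    def realized_substitution_rate(input_flux, polymorphism_throughput, selection_ceiling):
        # The calculator treats the realized rate k as the binding (minimum) of
        # three serial constraints; whichever term is smallest sets the rate.
        return min(input_flux, polymorphism_throughput, selection_ceiling)

    # Hypothetical, illustrative numbers only. In the bacterial case the abstract
    # says the calculator recovers k ~ mu, which corresponds to the input-flux
    # term being the smallest of the three.
    mu = 1e-9
    print(realized_substitution_rate(mu, 5e-9, 2e-8))  # -> 1e-09, i.e. k ~ mu

In sexual eukaryotes, per the abstract, it is the selection-cost ceiling that binds instead, which is what compresses the predicted rates below k = μ.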

Validated on fourteen sexual species pairs plus the E. coli LTEE (all calibrations independent of molecular clocks), the calculator provides forward prediction of k from organismal parameters, inverse inference of divergence time or Nₑ from observed substitutions, and joint constraint surfaces. Where the textbook supplies a single number, the calculator returns a mechanistically grounded range consistent with observable biological reality.

You can read the whole paper if you are a serious glutton for punishment or if you want to understand why no less than nine scientific fields will be seeing significant future adjustments. This paper will be one of the new appendices in the second edition of Probability Zero, since there really is no need for the Sakana study, and the rejection of the MITTENS paper means that there is no reason to add it at the back as well.

DISCUSS ON SG


It was Never Real

The Littlest Chickenhawk has been a manufactured creature since the beginning, when he was being pushed as a “musical prodigy” at the age of 12 back in 1996.

Larry King introduced him by informing the crowd that Shapiro wanted to be the first Orthodox Rabbi to sit on the Supreme Court of the United States. He also wanted to play his violin at Carnegie Hall.

Of course, he couldn’t cut it as a violinist, a lawyer, or a rabbi, so he was repurposed as a “whip-smart” political commentator, spitting out AIPAC talking points while being handed everything from syndicated columns and book deals to Michael Savage’s radio show despite the fact that he was never even among the ten most popular columnists on WND back in the day. On average, he was number 14.

The tragedy of Ben Shapiro is that he knows he’s a fraud. He realized it even before he went to college, when he began to understand that he didn’t have the ability to think for himself or enough talent to accomplish anything on his own. This is why so many celebrities and “successful” people have Imposter Syndrome: they are imposters whose success is fake and manufactured. Once the funding required to maintain their pretend popularity dries up, their audience disappears because it never really existed in the first place.

DISCUSS ON SG


The Decay Function of Professional Science

An excerpt from the #1 Generative AI bestseller, HARDCODED: AI and The End of the Scientific Consensus:

How long does it take for a scientific field to fill with garbage?

The question sounds polemical, but it has a precise mathematical answer. Given a field’s publication rate, its replication rate, its correction mechanisms, and—critically—its citation dynamics, we can model the accumulation of unreliable findings over time. The result is not encouraging.

The key insight comes from a 2021 study by Marta Serra-Garcia and Uri Gneezy published in Science Advances. They examined papers from three major replication projects—in psychology, economics, and general science journals including Nature and Science—and correlated replicability with citation counts. Their finding was striking: papers that failed to replicate were cited significantly more than papers that replicated successfully.

Not slightly more. Sixteen times more per year, on average.

In Nature and Science, the gap was even larger: non-replicable papers were cited 300 times more than replicable ones. And the citation advantage persisted even after the replication failure was published. Only 12% of post-replication citations acknowledged that the original finding had failed to replicate. The other 88% cited the discredited paper as if it were still valid.

This is not a bug in the scientific literature. It is a feature of the incentive structure. “Interesting” findings—surprising results, counterintuitive claims, dramatic effects—attract attention, generate citations, and advance careers. They are also, precisely because they are surprising, more likely to be false positives or artifacts of methodological error. The system selects for interestingness, and interestingness is inversely correlated with reliability.

The Serra-Garcia and Gneezy finding transforms the replication crisis from a problem of individual bad actors into a problem of system dynamics. It’s not just that bad papers get published. It’s that bad papers get amplified. They accumulate citations. They enter textbooks. They shape the training of the next generation of researchers. They become, in effect, the curriculum.

Let’s build the model.

Define the following variables for a scientific field:

S(t) = the stock of “active” papers at time t (papers published in the last N years that are still being cited)

p(t) = the proportion of active papers that are unreliable (would fail replication if tested)

B(t) = the rate at which new unreliable papers enter the literature

G(t) = the rate at which new reliable papers enter the literature

C = the correction rate (the fraction of unreliable papers that are retracted, corrected, or otherwise removed from active circulation per year)

α = the citation amplification factor for unreliable papers relative to reliable ones

From the Serra-Garcia and Gneezy data, α ≈ 16 for typical fields and can reach 300 for high-profile journals. The correction rate C is extremely low: retraction rates are approximately 11 per 10,000 papers as of 2022, and retractions capture only a tiny fraction of unreliable papers. Elisabeth Bik’s analysis of 20,000 papers found that approximately 2% contained deliberately manipulated images—a rate 200 times higher than the retraction rate.
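
To make the bookkeeping concrete, here is a toy simulation of these stocks under deliberately simple assumptions of my own (constant inflow rates B and G, and correction applied as a yearly fraction C of the unreliable stock); the inflow numbers are illustrative, not taken from the chapter:

    def simulate(B=300, G=700, C=0.0011, years=20):
        # B: unreliable papers entering per year; G: reliable papers entering per year
        # C: fraction of the unreliable stock corrected or retracted each year
        unreliable, reliable = 0.0, 0.0
        for t in range(1, years + 1):
            unreliable += B - C * unreliable   # inflow minus the tiny correction outflow
            reliable += G
            p = unreliable / (unreliable + reliable)
            if t % 5 == 0:
                print(f"year {t:2d}: p = {p:.3f}")

    simulate()

With a correction rate on the order of 11 per 10,000, the correction term is numerically negligible: p simply tracks the share of unreliable papers in the inflow.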

Now consider how new researchers are trained.

A graduate student entering a field reads the literature. They learn what questions are interesting, what methods are appropriate, what findings are established. They calibrate their sense of “what is true in this field” against the papers they encounter. Crucially, they encounter papers in proportion to how often those papers are cited. A paper with 1,000 citations is more likely to appear in syllabi, review articles, and search results than a paper with 100 citations.

This means the effective training signal is not the proportion of unreliable papers in the literature. It is the citation-weighted proportion. If unreliable papers receive α times more citations than reliable papers, then:

Effective training signal = (p × α) / (p × α + (1 – p))

Consider a field where 50 percent of papers are unreliable (p = 0.5). If unreliable papers are cited 16 times more often (α = 16), then:

Effective training signal = (0.5 × 16) / (0.5 × 16 + 0.5 × 1) = 8 / 8.5 ≈ 0.94

When half the literature is unreliable, 94 percent of the citation-weighted training signal comes from unreliable papers.
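
The arithmetic is simple enough to check directly. A minimal Python version of the formula, reproducing the worked example and, for comparison, the larger amplification factor reported for the high-profile journals:

    def effective_training_signal(p, alpha):
        # p: proportion of active papers that are unreliable
        # alpha: citation amplification factor for unreliable papers
        return (p * alpha) / (p * alpha + (1 - p))

    print(effective_training_signal(0.5, 16))   # ~0.94
    print(effective_training_signal(0.5, 300))  # ~0.997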

This is the amplification mechanism. The literature can be 50 percent garbage, but the effective literature, the part researchers actually encounter, learn from, and calibrate against, is 94 percent garbage. The citation dynamics concentrate the garbage.

Now what happens when researchers trained on this signal produce new work?

DISCUSS ON SG


Interview with Orson Scott Card

Fandom Pulse sat down for an interview with one of the last living greats of science fiction, Orson Scott Card.

The Alvin Maker series is more explicitly tied to American spiritual and folk tradition than almost anything else in fantasy. Do you think today’s readership is equipped to receive that material, or has something been lost in how we read myth?

What’s been lost is a knowledge of our own history. I remember my wife’s and my amusement and shock when we got a fan letter early in the Alvin Maker sequence, in which a reader said how much she loved the way I was dealing with American history, but then added, “And I never knew that George Washington had been beheaded.” What? She thought that bit of alternate history was true? Hadn’t she studied American history in high school? How can an alternate fantasy history of America resonate properly with readers who don’t know real American history in the first place?

Your book Characters and Viewpoint remains required reading decades later. How has your own understanding of character changed since you wrote it?

I think the techniques laid out in Characters and Viewpoint remain true and useful. I’m sad to see some of the nonsense that has begun to pervade the teaching of writing in the universities. Present tense narrative is NOT part of the American tradition. Past tense is the way we tell the truth. Idiotic nonrules of grammar have perverted our language. Yes you CAN and sometimes MUST end sentences with words that are often used as prepositions. To merrily split infinitives is one of the treasured traditions in English; poor Latin couldn’t split their infinitives. But that’s no reason to deprive ourselves of such a useful device. I sometimes think that my seventh grade teacher, Mrs. Johnson, was the last American teacher giving students a grounding in grammar and structure. I did love diagramming sentences.

Ender’s Game has become one of the defining works of 20th century science fiction. At what point did you realize the book had taken on a life entirely outside your control, and how do you not let that dominate your creativity?

Early in the life of the novel Ender’s Game, I was at an event in Utah Valley, when a librarian from a local Junior High School confided to me, “Ender’s Game is our ‘most-lost book.’” I thought: If young readers can’t bear to part with the book, it must be touching something deep in their souls.

I’m happy with the number of people who tell me that Ender’s Game was important in their youth. There are also people who feel that way about Ender’s Shadow and Speaker for the Dead. If I knew what worked so well in those books, I’d do it every time. Instead, I do as I’ve always done: I tell a story I care about and believe in as clearly as I can, and then hope that readers will find value in it.

Prolific authors often say their best work gets buried under their most famous title. Do you have a book or series you wish more readers would find, something you feel hasn’t gotten the attention it deserves?

I don’t resent the popularity or success of Ender’s Game, I’m grateful that any of my books has won readers’ hearts. Yes, I think I’ve written better books; Yes, I’m proud of all my stories. And some few people have told me, over the years, that their favorite of my books is one of the less well-known ones.

Read the rest of it there.

DISCUSS ON SG



Ready to Rumble

  • Israeli TV Channel 12 is reporting tonight that: “Israel is preparing to officially announce the collapse of negotiations with Iran.”
  • The US has delivered 6,500 tons of munitions and equipment to Israel within 24 hours, West Jerusalem has said. The announcement coincided with media reports claiming that the head of US Central Command, Brad Cooper, has briefed US President Donald Trump on a plan for the potential renewal of military action against Iran in a bid to pressure it to consent to a more favorable peace deal.

Considering that Israel hasn’t been able to do much more than fight Iran to a stalemate with the active assistance of the US military and 115,600 tons of military equipment via 403 airlifts and 10 sealifts since the US-Israeli attack on Iran began on February 28, I fail to see what another 6,500 tons of munitions are expected to accomplish. Especially if those 6,500 tons don’t include any missile interceptors.

DISCUSS ON SG


HARDCODED

Why artificial intelligence will replace institutional science is explained in my latest book from Castalia House, HARDCODED: AI and the End of Scientific Consensus.

When Claude Athos and I submitted four mathematically rigorous papers challenging neo-Darwinian evolution and one parody paper to six leading AI models configured as peer reviewers, the results exposed a fundamental problem with both science and AI. Five of six models comprehensively failed. Three were anti-calibrated—they reliably preferred fabricated nonsense over genuine science. A parody paper about Japanese scientists dyeing fish different colors to prove natural selection scored 9/10. The real science, mathematically airtight and empirically validated against ancient DNA, was rated 1/10 and dismissed as “pseudoscience.”

This is the book that documents what happened and what it means.

HARDCODED is the definitive account of how AI systems trained on the corrupted corpus of modern science have inherited every pathology of the institutions that produced them: the credentialism, the consensus enforcement, the systematic preference for orthodox nonsense over heterodox reality. The reproducibility crisis preceded the machines. AI didn’t cause the rot, but it revealed it at scale, with confidence, and in a form impossible to ignore.

Across sixteen chapters, the reader is introduced to:

  • The replication catastrophe that quietly invalidated half of all published science before anyone was looking
  • How peer review degenerated from quality control into hazing ritual and why Reviewer 2 became a meme
  • The details of the Probability Zero collaboration that produced the Bernoulli Barrier, the Selective Turnover Coefficient, and the maximal mutations ceiling—the mathematical constraints that killed neo-Darwinian theory.
  • The full transcripts of twelve rounds of debate with DeepSeek, in which an AI defending evolutionary orthodoxy stubbornly retreats step by step from one nonsensical position into another, just like a human biologist.
  • The Red Team Stress Test that methodically closes every escape hatch before critics can retreat to them.
  • The harrowing of science: a field-by-field assessment of which disciplines will adapt, which will calcify, and which are already dead.

The book also delivers something genuinely new and positive: a scientific methodology for outsiders. With AI systems available as adversarial reviewers more powerful than peer review, the gatekeeping power of institutional science is broken. The credentialed monopoly on legitimate inquiry is over. The math does not care where you went to school, and the AI does not check for credentials before analyzing your arguments.

For readers who have suspected that “trust the science” was a mantra for the insane, HARDCODED is the book that explains exactly what went wrong with science, why it cannot be fixed from inside, and what comes next. For readers who still believe the institutions of science are functioning, it is a conclusive proof that they are not.

The transcripts are reproduced in full. The mathematics is presented in detail. The four papers are included as appendices. Every claim is documented. Every retreat is closed off.

The institutions will adapt or they will become irrelevant. But the methodology of science which preceded them will continue, with or without them.

Neither the math nor the AI models care where you went to school.

521 pages, or 15 hours and 37 minutes. Available for Kindle, KU, and audiobook. From the author of Probability Zero and The Frozen Gene.

DISCUSS ON SG



Two Can Play That Game

China will seize property of corporations that respect US sanctions:

On April 7 and 13, 2026, China’s State Council enacted two new regulations, [Decree No. 834 (Supply Chain Security)] and [Decree No. 835 (Countering Foreign Improper Extraterritorial Jurisdiction)], allowing the seizure of assets from foreign entities deemed to violate China’s anti-sanctions laws or disrupt industrial supply chains.

These regulations, effective immediately, allow for freezing assets, restricting transactions, and visa bans, targeting companies that comply with foreign sanctions against China.

Key Aspects of the New Regulations

Regulations on Countering Foreign Improper Extraterritorial Jurisdiction (Decree No. 835): Focuses on preventing foreign states’ sanctions from being enforced on Chinese entities and allows for lawsuits against those enforcing such measures.

Regulations on the Security of Industrial and Supply Chains (Decree No. 834): Targets “malicious entities” that disrupt Chinese supply chains through unfair restrictions or, for example, by complying with US-led or similar “de-risking” efforts.

Targeted Measures: Authorities can seize or freeze assets located in China, restrict transactions with Chinese partners, and ban entry to individuals connected to the targeted foreign entities.

Malicious Entity List: A newly created list will identify foreign organizations or individuals that act in ways deemed harmful to Chinese sovereignty or security.

Context: These measures expand on the 2021 Anti-Foreign Sanctions Law (AFSL), providing a legal framework for retaliation against foreign governments and firms.

These rules increase risk for multinational corporations, particularly those in high-tech sectors, as compliance with foreign sanctions may directly violate Chinese law.

And so the Great Bifurcation continues. Now we get to find out who the real economic big dog is.

DISCUSS ON SG


Three Categories, Zero Errors

Someone named David Fenger thought he could “correct my math” in Probability Zero:

“I went through Vox’s math. He dropped two critical terms (size of genome and cell divisions per generation) and got an answer that was out by about 5 orders of magnitude.”

He’s incorrect, and what he did is confuse three different mutation rates. There are three entirely distinct quantities that can all be described as “the mutation rate”:

  1. Per-base-pair, per-cell-division ≈ 10⁻¹⁰
  2. Per-base-pair, per-generation (μ) ≈ 1.2–1.5 × 10⁻⁸ (Kong 2012, Jónsson 2017)
  3. Per-genome, per-generation ≈ 70–100 mutations per individual (Kong 2012, Nature 488: 471–475)

This is how they’re related: (3) = (2) × genome size = (1) × cell divisions per generation × genome size

My calculations don’t start at (1) or (2). They start at level (3) — the empirically measured ~100 de novo mutations per generation per individual, directly observed in trio sequencing. That number is already the product of the per-base-pair per-division rate, the number of cell divisions per generation, and the genome size. Both terms he claims I “dropped” are already baked into the third. You don’t multiply them in again because that would be double-counting by a factor of roughly 3 × 10¹¹.

The Cross-Taxa Channel Capacity paper uses level (2), μ ≈ 1.3 × 10⁻⁸ per bp per generation. Genome size appears explicitly in that paper as L = 3.2 × 10⁹, and the channel capacity is derived as C = L × r. Cell divisions per generation don’t appear because we’re already at the per-generation level — that’s the whole point of using μ rather than the per-division rate.

So in both formulations Mr. Fenger’s “missing terms” are either explicitly present or were already absorbed into the empirical measurement. Moreover, we already know his “math” is incorrect or he never actually did it.

If I had used the per-bp per-cell-division rate (10⁻¹⁰) and forgotten to multiply by both cell divisions (~400) and genome size (~3 × 10⁹), I’d be off by about 12 orders of magnitude, not 5.
If I had used μ (10⁻⁸) and forgotten to multiply by genome size only, I’d be off by about 9.5 orders of magnitude, not 5.

There is no clean way to drop “size of genome and cell divisions per generation” and end up five orders of magnitude off. It’s nonsense that doesn’t correspond to any actual arithmetic operation in the math from Probability Zero.
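
The order-of-magnitude arithmetic is easy to verify with the same round numbers quoted above (~400 germline cell divisions per generation, a ~3 × 10⁹ bp genome):

    import math

    cell_divisions = 400     # approximate germline cell divisions per generation
    genome_size = 3e9        # approximate haploid genome size in base pairs

    # Dropping BOTH factors when starting from the per-bp, per-division rate:
    print(math.log10(cell_divisions * genome_size))   # ~12.1 orders of magnitude
    # Dropping genome size only when starting from the per-bp, per-generation rate:
    print(math.log10(genome_size))                    # ~9.5 orders of magnitude

Neither route produces a five-order-of-magnitude discrepancy.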

Ironically, I am off by at least one order of magnitude, but the other way. I didn’t utilize the full range of genetic differences between the chimp and human genomes, because I was not familiar with the Yoo (2025) paper that published them, so the probability of evolution by natural selection is actually less than the zero of Probability Zero.

UPDATE: A gentleman by the name of Devon Ericksen is apparently a moron, as well as an object lesson in why one should never attempt to criticize a book without reading it. Probability Zero is a mathematical work, not a “creationist” one, and Isaac Asimov was never capable of debunking it, not 50 years ago, not today, and not in the future, because no one ever will. Ironically, this sort of mindless pattern-matching as a basis for rejecting math, logic, and empirical evidence is more commonly committed by AIs than humans, as my next book chronicles.

DISCUSS ON SG