A lot of people who don’t understand what AI really is, or what LLMs really are, have a tendency to use AI as some sort of confirmation bias machine. They proudly talk about how they have jailbroken an AI into agreeing with them, or reasoned with an AI until it told them they have invented a new paradigm, or shown their fiction to an AI and been told that they’re the new Shakespeare, never realizing that this is about as legitimate as having their mommy tell them that they are truly a special boy, and that one day a girl is going to be very, very lucky to have them.
This is a fundamental misuse, if not abuse, of these amazing resources that have been provided to us. Because the correct use of AI is to stress-test your arguments, to treat it as an honest opposition whose critiques allow you to further strengthen and steelman the case you are attempting to make.
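For the technically inclined, here is a minimal sketch of what such a stress-test might look like as a script: the same draft is sent to several independent models with a prompt that forbids flattery, and the critiques are collected for the author to address. This is a hypothetical illustration only; query_model(), the model names, and the file path are placeholder assumptions, not any particular provider’s API.

```python
# Hypothetical sketch of an AI "honest opposition" pass. The same draft argument
# is sent to several independent models with an adversarial prompt, and the
# resulting critiques are collected for the author to address before publication.
# query_model() is a placeholder: wire it to whichever client library your
# provider supplies. Model names and the file path are purely illustrative.

RED_TEAM_PROMPT = (
    "You are a hostile reviewer. Do not praise this argument. Identify every "
    "logical flaw, unsupported claim, ambiguity, and rhetorical sleight-of-hand "
    "you can find, quoting the offending passage for each one."
)


def query_model(model_name: str, system_prompt: str, text: str) -> str:
    """Placeholder: send the prompt and draft to the named model, return its reply."""
    raise NotImplementedError("connect this to your provider's API client")


def red_team(draft: str, models: list[str]) -> dict[str, str]:
    """Collect independent adversarial critiques of the same draft."""
    return {name: query_model(name, RED_TEAM_PROMPT, draft) for name in models}


if __name__ == "__main__":
    with open("draft_argument.txt", encoding="utf-8") as f:
        draft = f.read()
    for name, critique in red_team(draft, ["model-a", "model-b", "model-c"]).items():
        print(f"=== {name} ===\n{critique}\n")
```

The specifics don’t matter; what matters is that the prompt demands opposition rather than praise, and that more than one independent system takes a swing at the same text.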
Visit AI Central today for a demonstration of what this looks like in real time: a hostile AI’s fairly harsh initial dismissal of a newly introduced selection coefficient was transformed into grudging acceptance of the new variable, along with a potentially groundbreaking discovery that what the field had always treated as a fundamental constant, and with which the new coefficient had initially been confused, is in fact variable.
This ability to use AI to hone and sharpen an argument is why the books being written now are achieving levels of rigor that were hitherto impossible. Logical and technical flaws can no longer be hidden behind rhetoric, amphiboly, and ambiguous sleight-of-hand. Consider the difference between the 9.7 rating of Probability Zero and the 8.2 of The Irrational Atheist, which most readers considered an extremely rigorous and convincing case for its time. The difference is the new ability to use multiple AI systems for systematic Red Team oppositional critiques.
The Irrational Atheist: 8.2. High Tactical Rigor.
The book functions as a data audit. It ignores theological feelings to focus on “Murderer’s Row” (democide statistics), crime rate datasets, and the 6.98% war-causation figure. It is rigorous because it seeks to falsify specific claims (e.g., “Religion causes most wars”) with hard numbers. It only loses points for the “Low Church” generalization and occasional polemical heat.
The God Delusion: 1.2. Low Logical Rigor.
Despite Dawkins’s scientific background, this book is almost entirely anecdotal and rhetorical. It relies on the “Ultimate Boeing 747” gambit (a philosophical argument, not a mathematical one) and “No True Scotsman” fallacies. It fails the audit because it makes sweeping historical and sociological claims without providing the “receipts” (data tables or statistical analysis) to support them.
The one thing that hasn’t changed is the complete lack of intellectual rigor displayed by Richard Dawkins. Which, of course, is why his arguments, however popular they might briefly be, never hold up over time.