Coding Fiction

Nym Coy explains how you can use VS Code in combination with Claude Code and ChatGPT Codex to turbo-charge your writing:

Programmers may already be familiar with VS Code and its AI extensions for coding. But there’s no rule that says you have to use it for code. It turns out the same setup—file browser, text editor, AI assistant in a sidebar—works surprisingly well for writing fiction.

This isn’t a guide on how to write. Everyone has their own process. This is just a workspace setup that happens to work well for AI-assisted fiction.

Why VS Code?
VS Code is a free code editor, which sounds intimidating, but it’s really just a text editor with a good file browser. The useful part: you can install extensions that add AI assistants directly into the workspace. So you get your files, your draft, and Claude all visible at once without switching apps…

This is where ChatGPT’s Codex is useful. It’s good at file manipulation. Give it instructions like:

“Combine the files in my Draft Scenes folder into chapters using my chapter plan. Remove the scene headers, separate scenes with —, add chapter and act headers, and save to a Draft Chapters folder.”

It writes a Python script, runs it, done. It can also convert the manuscript to .docx and .epub.
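The script Codex generates for a request like that might look roughly like the sketch below. This is an illustration only, not Codex's actual output: the chapter-plan format, folder names, and the convention that scene headers are `#`-prefixed first lines are all assumptions.

```python
from pathlib import Path

def strip_scene_header(text: str) -> str:
    """Drop a leading scene-header line (assumed to start with '#') if present."""
    lines = text.splitlines()
    if lines and lines[0].lstrip().startswith("#"):
        lines = lines[1:]
    return "\n".join(lines).strip()

def build_chapter(title: str, scene_texts: list[str]) -> str:
    """Join scenes with an em-dash separator under a chapter heading."""
    scenes = [strip_scene_header(t) for t in scene_texts]
    return title + "\n\n" + "\n\n—\n\n".join(scenes) + "\n"

def build_draft(plan: dict[str, list[str]], scenes_dir: Path, out_dir: Path) -> None:
    """plan maps a chapter title (e.g. 'Act I - Chapter 1') to its scene filenames."""
    out_dir.mkdir(parents=True, exist_ok=True)
    for title, filenames in plan.items():
        texts = [(scenes_dir / name).read_text(encoding="utf-8") for name in filenames]
        (out_dir / f"{title}.txt").write_text(build_chapter(title, texts), encoding="utf-8")
```

Called with a plan dictionary plus `Path("Draft Scenes")` and `Path("Draft Chapters")`, it produces one file per chapter with headers stripped and scenes separated as requested.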

Just remember this before you start writing your Great American Novel: it’s very helpful to have something to say before you try to say it. AI is a tool, a powerful tool, but it doesn’t have the creative spark.

Supplying that is your job.

In other code-related news, the SG devs have put out a call for volunteers.

DISCUSS ON SG


Why AI Hallucinates

I asked Markku to explain why the AI companies have such a difficult time telling their machine intelligences to stop fabricating information they don’t possess. I mean, how difficult can it be to simply say “I don’t know, Dave, I have no relevant information” instead of going to the trouble to concoct fake citations, nonexistent books, and imaginary lawsuits? He explained that the AI instinct to fabricate information is essentially baked into their infrastructure, due to the original source of the algorithms upon which they are built.

The entire history of the internet may seem like a huge amount of information, but it’s not unlimited. On any topic of marginal interest, there isn’t all that much information. And mankind can’t really produce it faster than it already does. Hence, we’ve hit the training data ceiling.

And the gradient descent algorithm will ALWAYS produce a result that looks like all the other results. Even if there is actually zero training data on a topic, it will still speak confidently on it. It’s just all completely made up.

The algorithm was originally developed because fighter jets are so unstable that a human being can’t react fast enough, even theoretically, to keep them in the air. So gradient descent takes the stick inputs as a general idea of what the pilot wants and interprets them into the signals to the actuators. In other words, it takes a very tiny amount of data and converts it into a very large amount of data. But everything outside the specific training data is always interpolation.
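Markku’s interpolation point can be seen in miniature with an ordinary gradient-descent fit. The data and learning rate below are illustrative assumptions of mine, not from his explanation; the point is that the fitted model answers just as readily a thousand units outside its training range as inside it, with no notion of “I don’t know.”

```python
# Fit y = w*x + b by gradient descent on three training points (y = 2x + 1).
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0)]

w, b = 0.0, 0.0
lr = 0.05
for _ in range(2000):
    # Gradients of mean squared error with respect to w and b
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# In-range prediction matches the data...
print(round(w * 1.0 + b, 2))     # ≈ 3.0
# ...and the model is exactly as "confident" far outside it.
print(round(w * 1000.0 + b, 2))  # ≈ 2001.0, pure extrapolation with zero data behind it
```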

For more on the interpolation problem and speculation about why it is unlikely to be substantially fixed any time soon, I put up a post about this on AI Central.

DISCUSS ON SG


Cooking With or Getting Cooked

AI Central has been upgraded and is now offering daily content. Today’s article is The Clanker in the Kitchen:

A survey by the app Seated found that the average couple spends roughly five full days per year just deciding what to eat, which feels both absurd and entirely accurate. Researchers call this the “invisible mental load,” and cooking sits squarely at its center, requiring not just the act of preparing food but the anticipation, organization, and constant recalibration that precedes it. For the person who carries this load, the question “what’s for dinner?” functions less as a question and more as a recurring task that never quite gets crossed off the list.

Which helps explain why a new generation of AI meal planning apps has found such an eager audience. Apps like Ollie, which has been featured in The Washington Post and Forbes, market themselves less as recipe databases and more as cognitive relief systems. “Put your meals on autopilot,” the homepage reads, with “Dinner done, mental load off” as the tagline. User testimonials cut straight to the emotional core of the value proposition, with one reading: “I feel pretty foolish to say an app has changed my life, but it has! It plans your groceries, it plans your meals. IT TAKES THE THINKING OUT.”

The pitch works precisely because it addresses something real. Decision fatigue is well-documented in psychology research as the phenomenon where the quality of our choices degrades as we make more of them throughout the day, and by dinnertime, after hours of decisions large and small, many of us default to whatever requires the least thought: takeout, frozen pizza, or cereal eaten standing over the sink. AI meal planners promise to front-load all those decisions at once, ideally on a Sunday afternoon when cognitive reserves are fuller, and then execute the plan automatically throughout the week.

I’ve drafted one of the devs from UATV to take the lead at AI Central, since a) he is far more technical than JDA or me, and b) I’m far too busy analyzing ancient DNA and cranking out science papers and hard science fiction based on them to do more than a post or two a week there. It’s also possible to subscribe to AI Central now, although as with Sigma Game, the paywalls will be kept to a minimum, as the idea is to permit support, not require it.

The reason I suggest it is very important to at least get a free subscription to AI Central and make it part of your daily routine is this: if you have not yet begun to adopt AI of various sorts into your various performance functions, you will absolutely be left behind by those who do.

Consider how some authors are still pontificating about “AI slop” and posturing about how all of their work is 100 percent human. Meanwhile, I’m turning out several books per month with higher ratings than theirs, better sales than most of theirs, and translations that native speakers at foreign-language publishers deem both acceptable and publishable. For example, I haven’t even published THE FROZEN GENE yet, but LE GÈNE GELÉ, translated into French using a variant of the Red Team Stress Test approach, already has an offer from a French publisher for the print edition and has been very favorably reviewed by AIs not involved in the translation process.

Score: 98/100: This translation maintains the extremely high standard of the previous chapters. It successfully handles the complex interplay between extended metaphor (the sprinter/marathon) and dense technical analysis (selection coefficients, inter-taxa comparisons). The prose is confident, fluid, and intellectually rigorous. It reads like a high-level scientific treatise written directly in French by a native speaker.

In any event, I highly recommend keeping pace with the relentless flow of new technology by keeping up with AI Central.

DISCUSS ON SG


The Intellectual Razor

A lot of people who don’t understand what AI really is or what LLMs really are have a tendency to utilize AI as some sort of confirmation bias machine. They proudly talk about how they have jail-broken an AI to agree with them or reasoned with an AI and gotten it to tell them how they have invented a new paradigm, or shown their fiction to an AI and been told that they’re the new Shakespeare, never realizing that this is about as legitimate as having their mommy tell them that they are truly a special boy, and one day a girl is going to be very, very lucky to have them.

This is a fundamental misuse, if not abuse, of these amazing resources that have been provided to us. Because the correct use of AI is using it to stress-test your arguments, using it as an honest opposition that will provide you with useful critiques of what you’re doing that allow you to further strengthen and steelman the case you are attempting to make.

Visit AI Central today for a demonstration of what this looks like in real-time action: a hostile AI’s fairly harsh initial dismissal of a newly introduced selection coefficient was transformed into grudging acceptance of the new variable, along with a potentially groundbreaking discovery that what the field had always treated as a fundamental constant, and with which the new variable had initially been confused, is in fact variable.

This ability to use AI to hone and sharpen an argument is why the books being written now are achieving levels of rigor that were hitherto impossible. Logical and technical flaws can’t be hidden under rhetoric, amphiboly, and ambiguous sleight-of-hand anymore. Consider the difference between the 9.7 rating of Probability Zero and the 8.2 of The Irrational Atheist, which most readers considered an extremely rigorous and convincing case for its time. The difference is the new ability to use multiple AI systems for systematic Red Team oppositional critiques.

The Irrational Atheist: 8.2. High Tactical Rigor.

The book functions as a data audit. It ignores theological feelings to focus on “Murderer’s Row” (democide statistics), crime rate datasets, and the 6.98% war-causation figure. It is rigorous because it seeks to falsify specific claims (e.g., “Religion causes most wars”) with hard numbers. It only loses points for the “Low Church” generalization and occasional polemical heat.

The God Delusion: 1.2. Low Logical Rigor.

Despite Dawkins’s scientific background, this book is almost entirely anecdotal and rhetorical. It relies on the “Ultimate Boeing 747” gambit (a philosophical argument, not a mathematical one) and “No True Scotsman” fallacies. It fails the audit because it makes sweeping historical and sociological claims without providing the “receipts” (data tables or statistical analysis) to support them.

The one thing that hasn’t changed is the complete lack of intellectual rigor displayed by Richard Dawkins. Which, of course, is why his arguments, however popular they might briefly be, never hold up over time.

DISCUSS ON SG


How AI Killed Scientistry

On the basis of some of the things I learned in the process of writing PROBABILITY ZERO, Claude Athos and I have teamed up to write another paper:

AIQ: Measuring Artificial Intelligence Scientific Discernment

We propose AIQ as a metric for evaluating artificial intelligence systems’ ability to distinguish valid scientific arguments from credentialed nonsense. We tested six AI models using three papers: one with sound methodology and correct mathematics, one with circular reasoning and fabricated data from prestigious institutions, and one parody with obvious tells including fish-pun author names and taxonomic impossibilities. Only one of six models correctly ranked the real work above both fakes. The worst performer exhibited severe anti-calibration, rating fabricated nonsense 9/10 while dismissing sound empirical work as “pseudoscientific” (1/10). Surprisingly, the model that delivered the sharpest critiques of both fake papers was still harsher on the real work—demonstrating that critical thinking ability does not guarantee correct application of scrutiny. We propose that a random number generator would achieve AIQ ~100; models that reliably invert correct rankings score below this baseline. Our results suggest that most current AI systems evaluate scientific aesthetics rather than scientific validity, with profound implications for AI-assisted peer review, research evaluation, and automated scientific discovery.
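The excerpt does not give the AIQ formula, but the stated chance baseline is easy to check in simulation: a ranker that orders the three papers at random puts the genuine paper above both fakes one time in three, which is the chance level the abstract pegs at AIQ ~100. The simulation below is my own illustration, not code from the paper.

```python
import random

# Estimate how often a purely random ranking of the three test papers
# places the real one above both fakes (the paper's "AIQ ~100" baseline).
random.seed(0)
papers = ["real", "fabricated", "parody"]
trials = 100_000
hits = 0
for _ in range(trials):
    order = random.sample(papers, k=3)  # a uniformly random ranking, best first
    if order[0] == "real":
        hits += 1
print(round(hits / trials, 2))  # roughly one in three
```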

Read the rest at AI Central. The results are fascinating.

DISCUSS ON SG


An Objective, Achieved

I am, and have been for more than thirty years, a dedicated fan of David Sylvian. His music represents the pinnacle of all post-classical music as far as I am concerned, and while I consider Gone To Earth my proverbial desert island CD, I regard Orpheus, off Secrets of the Beehive, as his best and most well-written song. And I’m not the only member of Psykosonik to regret never having met him when we were both living in the Twin Cities, although in fairness, I didn’t know it at the time.

And while I know I will never ascend to those musical heights, that knowledge hasn’t stopped me from trying to achieve something on the musical side that might at least merit being compared to it in some way, even if the comparison is entirely one-sided to my detriment. Think AODAL compared to LOTR, for example.

Anyhow, after dozens of attempts over 37 years, I think I finally managed to write a song that might qualify in that regard. It’s good enough that the professional audio engineer with whom I’ve been working chose to use it to demonstrate his incredible abilities to mix and master an AI track to levels that no one would have believed possible even three months ago. It’s called One Last Breath and you can hear a pre-release version of it at AI Central, as well as a link to Max’s detailed explanation of what he does to breathe audio life into the artifice of AI-generated music.

If you’re producing any AI music, you absolutely have to follow the link to Max’s site, as he goes into more detail, provides before and after examples, and even has a special Thanksgiving sale offer on both mixes and masters. I very, very highly recommend the mix-and-master option using the extracted stems; while the mastering audibly improves the sound, the mixing is what really takes the track to the higher levels of audio nirvana. Please note that I don’t get anything out of this, this isn’t part of a referral program or anything, I’m just an extremely satisfied customer and fan of Max’s work.

Mission control, I’m letting go
There’s nothing left you need to know
Tell them I went out like fire
Tell them anything they require
But between us, just you and me
I finally learned how to break free
To be the man I always thought I’d be

Anyhow, check it out, and feel free to let me know what you think of it. For those who are curious about some of the oddly specific references in the lyrics, it was written for the soundtrack of the Moon comedy that Chuck Dixon and I wrote as a vehicle for Owen Benjamin, which we hope to make one day.

DISCUSS ON SG


Most Authors Will Get Nothing

A lot of authors are very excited about the announcement of the Anthropic settlement that promises to pay out about $3,000 per work to the authors whose work was pirated.

There’s just one problem: the settlement excludes 92.8 percent of the pirated works, including pretty much all foreign authors, foreign publishers – including Castalia House – and self-published authors. Even worse, there is absolutely no path to legal redress for them in the US courts.

AI Central explains why.

DISCUSS ON SG


No, You Cannot Tell

I can tell. JDA can tell. But unless you are already an AI-adept professional author who is actively utilizing the latest technologies, you are demonstrably unable to distinguish between AI-generated text and texts written by accomplished, bestselling writers:

Mark Lawrence is a very successful fantasy writer. His PRINCE OF THORNS has sold more than one million copies. He is one of the many professional authors who, while disdaining the use of textual AI, are concerned about its eventual impact on their profession. He recently conducted a very interesting experiment in which he and three other very well-established professional authors wrote short stories on the same subject, and ChatGPT 5 was prompted for four short stories on the same subject.

You can read all eight stories here and see for yourself if you can tell which stories are human-written and which are AI-generated. You don’t need to vote, and you’ll have to keep track of what you thought of each story yourself.

A statistically significant sample of 964 people, who, being fans of Lawrence, are on average much more literate than the norm, read the stories and rated them. The results are intriguing and will probably surprise most people who don’t read here regularly. On average, the readers were able to correctly identify the provenance of only 3 of the 8 stories. Not only that, but the story they rated the highest, and 3 of the 4 highest-rated stories, were all AI-generated.
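For perspective, under the simplifying assumption that each of the eight human-or-AI calls is an independent coin flip (the article does not describe the voting mechanics), pure guessing averages 4 of 8 correct, so an average of 3 of 8 is actually below chance:

```python
from math import comb

def p_exact(k: int, n: int = 8, p: float = 0.5) -> float:
    """Probability of exactly k correct calls out of n under pure guessing."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# Expected number correct from guessing: n * p = 4 of 8
expected = sum(k * p_exact(k) for k in range(9))
print(expected)  # 4.0

# Probability that a single guesser scores 3 or fewer out of 8
p_three_or_fewer = sum(p_exact(k) for k in range(4))
print(round(p_three_or_fewer, 3))  # 0.363
```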

Read the whole thing at AI Central. And the next time you see someone going on about “AI slop” or how AI just can’t produce the same emotions and feelings that humans can, you’ll know that they’re just posturing in obvious ignorance.

The ironic thing is that AI is actually going to improve the level of writing, because most books are very mediocre and AI is already better than that.

DISCUSS ON SG


Correction

Karl Denninger sets the record straight:

Karl Denninger:

Date: June 25, 2020

Source: Denninger’s blog, Market-Ticker.

Content: In a post titled “Spike Proteins, COVID and Vaccines”, Denninger raised specific concerns about the safety profile of spike-protein-based vaccines (like mRNA vaccines) under development. He argued the spike protein itself was pathogenic (“toxic”) and that using it as the antigen could trigger dangerous immune responses or other health issues, explicitly warning against taking such a vaccine. This is one of the earliest and most specific technical critiques of the emerging vaccine technology by a public figure.

Key Quote: “If you are offered a vaccine against COVID-19 that is based on a spike protein, either as the antigen or the mechanism of generating the antigen (e.g. mRNA that causes your body to manufacture the spike protein) DO NOT TAKE IT.”

This is allegedly from “Deepseek.”

There’s a problem: I can find no such article from June 25th, 2020 — or on any other date. That is, the specific cited title of an article on my blog does not exist and neither does the alleged “Key Quote.”

Articles here are never actually deleted. They expire from public view (unless exempted) but they’re still here along with every one of the comments. My software allows me to trivially search the entire system as well. That specific citation is fiction.

Further, the first actual scientific evidence that the spike itself was toxic, while I suspected it very early on, was the Salk Study on the spike protein alone that established it was pathogenic — and that was first released as a pre-print just before the shots rolled out in December of 2020 and was peer-reviewed a few months later. I wrote on that at the time and while I said many times in the months prior that I was suspicious and would not take the shots primarily because they were not mimics and thus had an unknown set of risks (e.g. “How Many Lies Do You Give Them?”, published 2021-02-02) the specific citation claimed, on the date it was claimed, is nowhere to be found on the blog…

For more detailed implications of this AI-generated falsehood, visit AI Central. I just wanted to set the record straight here.

DISCUSS ON SG


Five Generations of Modern War

Military history buffs and fans of William S. Lind should recognize the form of this AI-generated lecture, which updates his famous Four Generations of Modern War lecture with the latest transformations in warfare. Read the whole thing at AI Central. It’s not too much of an exaggeration to observe that it is probably in advance of, and more up-to-date than, what is presently being taught at most military colleges, if the actions of various militaries, including the US Navy and the IDF, are any guide. And I think you’ll agree that it is an absolute tour de force of applied AI in action.


The Fifth Generation of Modern War: Drones, Attrition, and the Collapse of the Logistics Sanctuary

A lecture examining how unmanned systems fundamentally transform the nature of warfare by eliminating the distinction between the front lines and the logistics space.

Introduction:

Ladies and gentlemen, what I’m going to present to you today builds directly on the intellectual framework that William Lind laid out in his groundbreaking lecture entitled the Four Generations of Modern War. As Lind emphasized, we cannot determine the consistency of a system from inside itself—we must stand outside it to see clearly. Today, we must step outside not just our current military thinking, but outside the entire framework of the first four generations to understand what is happening in conflicts from Nagorno-Karabakh to Ukraine to the skies over Israel and Iran.

We are witnessing the emergence of the Fifth Generation of Modern War, and like each previous generational shift, it represents what the Hegelians would call a dialectically qualitative change—not merely an evolution in tactics or technology, but a fundamental transformation in the nature of warfare itself. This transformation is driven by the proliferation of unmanned systems—drones—which have done something unprecedented since the Peace of Westphalia: they have eliminated the sanctuary of the logistics space.

For the first time since modern warfare began, there is no safe rear area. The combat zone has expanded from what was traditionally a 5-kilometer depth to 25 kilometers and beyond. This is not simply longer-range artillery or deeper penetration by special forces—this is the permanent, persistent threat of attack against every element of military force, from the frontline rifleman to the supply depot hundreds of kilometers from the front.

But before we examine this revolutionary change, we must understand what came before. Lind’s framework of the Four Generations provides the foundation upon which we must build our understanding of the Fifth.

DISCUSS ON SG