Five Generations of Modern War

Military history buffs and fans of William S. Lind should recognize the form of this AI-generated lecture, which updates his famous Four Generations of Modern War lecture with the latest transformations in warfare. Read the whole thing at AI Central. It’s not too much of an exaggeration to observe that this is probably more advanced and up-to-date than what is presently being taught at most military colleges, if the actions of various militaries, including the US Navy and the IDF, are any guide. And I think you’ll agree that it is an absolute tour de force of applied AI in action.


The Fifth Generation of Modern War: Drones, Attrition, and the Collapse of the Logistics Sanctuary

A lecture examining how unmanned systems fundamentally transform the nature of warfare by eliminating the distinction between the front lines and the logistics space.

Introduction:

Ladies and gentlemen, what I’m going to present to you today builds directly on the intellectual framework that William Lind laid out in his groundbreaking lecture entitled the Four Generations of Modern War. As Lind emphasized, we cannot determine the consistency of a system from inside itself—we must stand outside it to see clearly. Today, we must step outside not just our current military thinking, but outside the entire framework of the first four generations to understand what is happening in conflicts from Nagorno-Karabakh to Ukraine to the skies over Israel and Iran.

We are witnessing the emergence of the Fifth Generation of Modern War, and like each previous generational shift, it represents what the Hegelians would call a dialectically qualitative change—not merely an evolution in tactics or technology, but a fundamental transformation in the nature of warfare itself. This transformation is driven by the proliferation of unmanned systems—drones—which have done something unprecedented since the Peace of Westphalia: they have eliminated the sanctuary of the logistics space.

For the first time since modern warfare began, there is no safe rear area. The combat zone has expanded from what was traditionally a 5-kilometer depth to 25 kilometers and beyond. This is not simply longer-range artillery or deeper penetration by special forces—this is the permanent, persistent threat of attack against every element of military force, from the frontline rifleman to the supply depot hundreds of kilometers from the front.

But before we examine this revolutionary change, we must understand what came before. Lind’s framework of the Four Generations provides the foundation upon which we must build our understanding of the Fifth.

DISCUSS ON SG


AI Text is Fair Use

It’s not at all surprising that a Federal judge – a particularly good one who has tried to rein in various corporate abuses of the arbitration system – has recognized, in a landmark pre-trial decision, that AI training and AI text generation are protected under the fair use doctrine.

A federal judge in California issued a landmark ruling that protects the development of artificial intelligence and creative freedom by determining that training AI systems on copyrighted books constitutes fair use under copyright law. U.S. District Judge William Alsup’s decision in Bartz v. Anthropic represents a crucial victory against attempts to stifle technological innovation through overly broad copyright claims.

Judge Alsup ruled that Anthropic’s training of its Claude AI on authors’ works was “exceedingly transformative,” and therefore protected under the fair use doctrine as specified in Section 107 of the Copyright Act. This decision correctly recognizes that AI training represents a fundamentally different use of copyrighted material than simple reproduction or distribution.

The judge’s reasoning demonstrates a sophisticated understanding of how AI works, comparing the training process to human learning rather than mechanical copying. “Everyone reads texts, too, then writes new texts,” Alsup wrote. “To make anyone pay specifically for the use of a book each time they read it, each time they recall it from memory, each time they later draw upon it when writing new things in new ways would be unthinkable.”

This analogy captures why attempts to restrict AI training are flawed. Human authors read thousands of books, absorb their techniques and ideas, and incorporate that knowledge into their own writing without paying licensing fees for each influence. AI systems operate similarly, learning patterns and techniques rather than copying specific content.

It absolutely makes sense. How can copyright protect something that isn’t copied? How can the use of a copyrighted text as nothing more than a reference and a style guide be illegal in any way? And as I have pointed out repeatedly, an author’s literary style cannot be protected, given the “look-and-feel” decision in favor of Microsoft when Apple tried to protect its graphical user interface.

“Legal experts expect the decision to be appealed.”

That’s not going to happen. They simply don’t have a case, and Alsup is a smart, thoughtful judge who knows what he’s doing when he writes his decisions. He’s the judge whose decision prevented corporations from indefinitely delaying their responses to the arbitrations their terms of use required.

DISCUSS ON SG


Abandon the Platforms

YouTube has adopted a strategy of slow strangulation of popular channels of which it does not approve.

YouTuber Jeremy Hambly, known as The Quartering, has revealed the devastating impact YouTube’s algorithm manipulation is having on his channel, exposing a coordinated soft censorship campaign targeting creators who challenge progressive narratives. His breakdown of the platform’s systematic suppression demonstrates how Big Tech companies are weaponizing their algorithms to silence dissenting voices without the transparency of outright bans.

“I don’t know if many of you have noticed, but I’ve been a lot more shifty and weird breathing lately,” Hambly admitted in his recent video. “And it’s because I’m having near-daily anxiety. There’s been something going on with my YouTube channel over the past couple of weeks that I have been spending almost every single night trying to figure out.”

The numbers tell a stark story of algorithmic manipulation. Hambly explained how “over the past several weeks I have noticed videos that used to get 50, 60, 70, 80,000 views were starting to get 10,000 views, 15,000 views.” This dramatic drop isn’t due to content quality or audience interest – it’s the result of YouTube’s deliberate throttling of channels that don’t align with the platform’s ideological preferences.

Even more concerning is the subscriber hemorrhaging Hambly has experienced. “For the first time in three or four years, I have started losing a lot of subscribers. Hundreds, thousands a day,” he revealed. This pattern suggests YouTube isn’t just limiting video reach but actively manipulating subscription feeds and notifications to starve channels of their established audiences.

The financial impact has been severe enough to threaten Hambly’s entire operation. “These numbers are if it continued like this past these two weeks, which it has been, I would have to fire everybody that works for me. Everybody,” he explained. This reveals YouTube’s soft censorship strategy – rather than creating martyrs through outright bans, they slowly strangle channels economically until creators are forced to change their content or abandon the platform entirely.

We’re now laying the foundation for the 4th Stage of UATV, which will be the best, strongest, and most stable yet. I alluded to a few of our plans for the next stage in last night’s Darkstream, but you’ll see the new features and content being added gradually over the summer.

And in the meantime, be sure to tune into the Big Bear and Dark Lord show tonight at 7 PM Eastern.

DISCUSS ON SG


Iran Has Nukes

Let me get this straight. Atomic/Nuclear weaponry has supposedly been around since 1945. It’s very old, very basic technology that has been acquired by countries as underdeveloped as India, Pakistan, South Africa, and Israel.

Hypersonic missiles are so technologically advanced and difficult to manufacture that only four countries in the world have deployed them: China, Russia, North Korea, and Iran. The USA hasn’t successfully developed one yet; neither have Japan, France, or the UK.

And yet, we’re supposed to believe that Iran doesn’t already possess nuclear warheads to install on those hypersonic delivery systems?

I don’t buy it. If nuclear weaponry actually exists – and there is very good reason to doubt that it does – then Iran has it. If Iran doesn’t have it, then no one does because it doesn’t actually exist. The fact that nuclear weapons have been held over humanity’s collective heads for generations and used to justify globalist organizations for decades is sufficient reason for them to have been among the foremost myths perpetrated by Clown World.

DISCUSS ON SG


Your Own Private AI

Markku has helpfully instructed the good people of AI Central on how to set up your own local version of Deepseek, thereby permitting you to control the data being utilized by it.

Until recently, training an AI on a set of research data has been so resource-intensive that it has been entirely out of reach for home users, even for the smaller (“distilled”) models intended for ordinary gaming computers. However, there is now a methodology called Retrieval-Augmented Generation, RAG for short, that can achieve something very close to the effect of training in a tiny fraction of the time. The trade-off is that the understanding of the data is not as deep, and the data has to be processed every time the AI is launched.

With an average gaming PC equipped with an NVIDIA GPU from the RTX 3000 series or newer, you can expect that processing to take about 10 minutes, assuming you use a modest 7-billion-parameter model. Parameters can be thought of as virtual brain cells. With a better computer, 14 billion is also realistic, especially if you are asking just a few important questions. Since you are having the AI focus on a set of data that is extremely limited compared to cloud-based AIs, the half-trillion parameter counts normally expected aren’t important; you need them only when the AI has to know essentially the entire contents of the internet. For one set of books, 7 to 14 billion is sufficient. If you choose to get serious about using a locally installed AI, you would install it on a dedicated Linux server and keep it constantly running, which reduces the reprocessing problem to essentially zero.
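For those curious what the retrieval half of RAG actually does, here is a deliberately minimal, stdlib-only Python sketch. It is a toy illustration under stated assumptions: a real setup of the kind Markku describes would use a local model server and a neural embedding model rather than bag-of-words cosine similarity, and every name and document here is illustrative, not part of any Deepseek tooling.

```python
import math
import re
from collections import Counter

def chunk(text, size=40):
    """Split a document into overlapping word chunks (half-chunk overlap)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size // 2)]

def vectorize(text):
    """Crude bag-of-words vector; real RAG uses a neural embedding model here."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine(a, b):
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b[w] for w in a if w in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, chunks, k=2):
    """Return the k chunks most similar to the question."""
    qv = vectorize(question)
    return sorted(chunks, key=lambda c: cosine(qv, vectorize(c)), reverse=True)[:k]

def build_prompt(question, chunks):
    """Prepend the retrieved passages to the question before sending to the model."""
    context = "\n---\n".join(retrieve(question, chunks))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

# Hypothetical corpus: two repeated "documents" standing in for a set of books.
docs = chunk("The dockmaster Orlan watched the nameless ship arrive. " * 10 +
             "Madame Vex kept the Inn of the Twisted Serpent. " * 10)
prompt = build_prompt("Who ran the inn?", docs)
```

The key point is that the model never needs to have been trained on your documents: the relevant passages are retrieved and stuffed into the prompt at question time, which is why a small 7-to-14-billion-parameter local model is sufficient.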

It’s not for everyone, to be sure. Not yet, anyhow. But I’m sure there are a few hardcore programmers here who are more than up to the challenge, if interested.

DISCUSS ON SG


A Test of Three Tales

Just to give people here an idea of what sort of thing they’ll be able to find at AI CENTRAL, I posed three different AI text engines the same very specific challenge: a 2,500-word story in the style of a brilliant author. Bonus points if you can guess which author it was from this excerpt from one of the three entrants:

The Scarlet Visitor

The city of Paradyse rose from the sea like a dream of gold and shadow, its towers clawing at the sky, its streets winding in serpentine coils. Ships from distant lands brought spices, silks, and secrets to its harbors, but none so strange as the vessel that came on the eve of the Festival of Masks.

It was a ship without a name, its sails the color of dried blood, its hull black as a starless night. The dockmaster, a wizened man named Orlan, squinted at it through the salt-haze and crossed himself—though he could not say why. When the gangplank descended, only one figure emerged: a woman wrapped in a scarlet cloak, her face hidden behind a veil of silver lace.

She moved through the city like a whisper, her steps soundless, her presence drawing eyes yet leaving no impression. The people of Paradyse were accustomed to strangeness—this was a city where nobles wore living jewels that whispered secrets, where the dead sometimes walked the markets at dusk—but there was something about her that unsettled even the most jaded souls.

The woman took lodging at the Inn of the Twisted Serpent, a place frequented by those who did not wish to be found. The innkeeper, Madame Vex, was a creature of sharp angles and sharper wits, her fingers adorned with rings that could sting like scorpions. She offered the stranger a room without asking for coin, sensing that some debts were best left unspoken.

“How long will you stay in our fair city?” Madame Vex inquired, her voice like honeyed poison.

The woman lifted her veil just enough to sip her wine. Her lips were the same crimson as her cloak. “Until my business is concluded.”

“And what business is that?”

A smile, fleeting as a knife’s gleam. “The oldest kind.”

And there is a very clear and obvious winner, which may be of interest to some of the writers here. Visit AI CENTRAL to read all three entrants and see the verdict.

DISCUSS ON SG


Introducing AI Central

As I did some years ago with Alpha Game, and as I have done with Sigma Game and Castalia Library, I have created a new site to host posts and discussions related to artificial intelligence and its use in producing music, illustrations, text, and video. This avoids monopolizing the discourse here, where most of the readership really isn’t that interested in it. In looking around, I saw that most of the sites that discuss AI regularly do so from a skeptic’s perspective or a programming perspective, and I wanted a site that could focus instead on AI as it is actually used by creative people today and going forward into the future.

So, if it’s a subject that is of interest to you, or if you are an AI creator yourself, please consider subscribing to AI CENTRAL and perhaps even contemplate providing an occasional guest post showing off your work and explaining what you used to produce it. My plan is to post once daily, which given my current music backlog will not be a problem for at least the next four months even if I don’t log into Suno at all.

DISCUSS ON SG


AI is Killing College

While I have long known that the current higher education system is both unsustainable and unnecessary, I never suspected that it would be AI, and not debt or the absence of men, that would put the final nail in the university coffin:

While professors may think they are good at detecting AI-generated writing, studies have found they’re actually not. One, published in June 2024, used fake student profiles to slip 100 percent AI-generated work into professors’ grading piles at a U.K. university. The professors failed to flag 97 percent. It doesn’t help that since ChatGPT’s launch, AI’s capacity to write human-sounding essays has only gotten better…

There are, of course, plenty of simple ways to fool both professors and detectors. After using AI to produce an essay, students can always rewrite it in their own voice or add typos. Or they can ask AI to do that for them: One student on TikTok said her preferred prompt is “Write it as a college freshman who is a li’l dumb.” Students can also launder AI-generated paragraphs through other AIs, some of which advertise the “authenticity” of their outputs or allow students to upload their past essays to train the AI in their voice. “They’re really good at manipulating the systems. You put a prompt in ChatGPT, then put the output into another AI system, then put it into another AI system. At that point, if you put it into an AI-detection system, it decreases the percentage of AI used every time,” said Eric, a sophomore at Stanford.

Most professors have come to the conclusion that stopping rampant AI abuse would require more than simply policing individual cases and would likely mean overhauling the education system to consider students more holistically. “Cheating correlates with mental health, well-being, sleep exhaustion, anxiety, depression, belonging,” said Denise Pope, a senior lecturer at Stanford and one of the world’s leading student-engagement researchers.

Many teachers now seem to be in a state of despair. In the fall, Sam Williams was a teaching assistant for a writing-intensive class on music and social change at the University of Iowa that, officially, didn’t allow students to use AI at all. Williams enjoyed reading and grading the class’s first assignment: a personal essay that asked the students to write about their own music tastes. Then, on the second assignment, an essay on the New Orleans jazz era (1890 to 1920), many of his students’ writing styles changed drastically. Worse were the ridiculous factual errors. Multiple essays contained entire paragraphs on Elvis Presley (born in 1935). “I literally told my class, ‘Hey, don’t use AI. But if you’re going to cheat, you have to cheat in a way that’s intelligent. You can’t just copy exactly what it spits out,’” Williams said.

Williams knew most of the students in this general-education class were not destined to be writers, but he thought the work of getting from a blank page to a few semi-coherent pages was, above all else, a lesson in effort. In that sense, most of his students utterly failed. “They’re using AI because it’s a simple solution and it’s an easy way for them not to put in time writing essays. And I get it, because I hated writing essays when I was in school,” Williams said. “But now, whenever they encounter a little bit of difficulty, instead of fighting their way through that and growing from it, they retreat to something that makes it a lot easier for them.”

By November, Williams estimated that at least half of his students were using AI to write their papers. Attempts at accountability were pointless. Williams had no faith in AI detectors, and the professor teaching the class instructed him not to fail individual papers, even the clearly AI-smoothed ones. “Every time I brought it up with the professor, I got the sense he was underestimating the power of ChatGPT, and the departmental stance was, ‘Well, it’s a slippery slope, and we can’t really prove they’re using AI,’” Williams said. “I was told to grade based on what the essay would’ve gotten if it were a ‘true attempt at a paper.’ So I was grading people on their ability to use ChatGPT.”

The “true attempt at a paper” policy ruined Williams’s grading scale. If he gave a solid paper that was obviously written with AI a B, what should he give a paper written by someone who actually wrote their own paper but submitted, in his words, “a barely literate essay”? The confusion was enough to sour Williams on education as a whole. By the end of the semester, he was so disillusioned that he decided to drop out of graduate school altogether. “We’re in a new generation, a new time, and I just don’t think that’s what I want to do,” he said.

“The students kind of recognize that the system is broken and that there’s not really a point in doing this.”

The students are right. There is no point in doing this, because the only reason they’re doing it is to acquire a golden ticket to higher income and higher social status that increasingly no longer exists.

DISCUSS ON SG


The Vessel of Dreams

Since Neil Gaiman presumably won’t be publishing anything ever again, it falls to AI to imagine what might have been.

The bathtub was a vessel of dreams, a porcelain ship adrift in a sea of steam and possibility. Neil sank into the warm embrace of the water, the world outside dissolving into a haze of muted sounds and soft light. A book rested on the edge of the tub, its pages slightly curled from the humidity, waiting to transport him to realms where gods walked among mortals and shadows held secrets too vast for the daylight. In one hand, he held a cup of tea, its aroma mingling with the scent of lavender bubbles, while the other hand trailed lazily in the water, stirring ripples that danced like fleeting thoughts. This was his sanctuary, a place where stories whispered to him from the edges of his mind, where the mundane melted away, and the extraordinary took root. Sometimes, he would close his eyes and let the water hold him, imagining he was floating through the stars or sinking into the depths of an ancient, forgotten ocean. The bathtub was not just a place to wash away the day; it was a portal, a liminal space where the boundaries between reality and imagination blurred, and where the next story always began.

UPDATE: The musical rendition of this has been released on UATV. It is beautifully… disturbing.

Oh, this bath is a vessel of dreams
Sailing through stars on a whisper of steam
Gods in the water with demons and sin
Let the world wait
Let the stories begin

DISCUSS ON SG


AI Estimate for US Breakup

I asked Deepseek to estimate when the political entity known as the USA would fragment, and a probability estimate for the collapse to begin in 2033 specifically.

Predicting the exact year of a potential political breakup of the United States is highly speculative, but we can approach the question by analyzing historical precedents, current trends, and structural vulnerabilities. Here’s a breakdown:

1. Historical Lifespan of Political Entities

  • The average lifespan of a historical empire or large nation-state is roughly 200-250 years (e.g., Roman Republic ~500 years, Soviet Union ~70 years, British Empire peak dominance ~200 years).
  • The U.S. is currently 248 years old (1776–2024), placing it in the upper range of historical longevity for republics.

2. Key Stressors Increasing Fragmentation Risk

  • Political Polarization: The U.S. is experiencing unprecedented ideological division, with states increasingly governed by diametrically opposed policies (e.g., abortion, gun laws, climate policy).
  • Cultural & Regional Divides: Distinct political identities have solidified (e.g., “Greater Appalachia” vs. “Left Coast” in American Nations by Colin Woodard).
  • Economic Disparities: GDP disparities between states (e.g., California vs. Mississippi) create tensions over federal resource allocation.
  • Loss of Institutional Trust: Declining faith in federal government (Congress approval ~20%, Supreme Court polarization).
  • External Shocks: A major crisis (economic collapse, climate disaster, or AI-driven labor disruption) could accelerate disintegration.

3. Most Vulnerable Breakup Scenarios

  • Soft Secession: States or regions (e.g., Texas, California) increasingly nullify federal laws, leading to de facto autonomy.
  • Constitutional Crisis: A contested election or refusal to accept election results could trigger state-level rebellions.
  • Decentralization via Technology: Blockchain, AI governance, or parallel economies could enable regions to bypass federal systems.

4. Predicted Timeline

Based on historical patterns and current trends, the most plausible range for initial breakup movements is 2030–2070, with key inflection points:

  • 2030s: If polarization worsens and a crisis (e.g., debt default, climate migration) overwhelms federal cohesion, states may test autonomy.
  • 2040–2060: Demographic shifts (e.g., non-white majority, aging population) could fuel regional tensions over representation.
  • Baseline 2033 Probability: 8% (most likely outcome: no breakup).
  • With Crisis: 15–20%.
  • With Multiple Crises + Secessionist Surge: 25–35%.

Conclusion: While 2033 is too early for a spontaneous breakup, an external shock or constitutional crisis could push risk into the 15–25% range. The most likely window remains 2040s–2060s.

I see no reason to alter my original estimate. But it is interesting to see that what was perceived as an impossibility back in 2004 has now risen to an 8 percent probability sans any crises. Either way, most of you reading this will live to see the end of the USA as it is presently constituted.

DISCUSS ON SG