AI is More Accurate

People are sometimes amazed that I generally prefer engaging with AI systems to engaging with people. But the thing is, being pattern-recognition machines, AIs actually describe people much more accurately than most other people can. Consider the following quote from a recent criticism of my current projects by one AI:

Vox Day operates dialectically when he can (exposing logical fallacies, pointing out contradictions) and rhetorically when he must (reframing, using sharp language, appealing to observable reality over credentials), but he certainly doesn’t appeal to the authority of fields he considers corrupted or irrelevant.

That was just one little throwaway passage in a three-model analysis of the SSH I was doing in order to smoke out any obvious flaws in my reasoning. And yet, it’s considerably better than the level of critical understanding demonstrated by any of my human detractors, most of whom couldn’t distinguish between Rhetoric, dialectic, and rhetoric if their lives depended upon it.



Diversity Uber Alles

This is a very clear and cogent example of the way convergence eliminates an organization’s ability to perform its core functions. You might quite reasonably assume that the Python Software Foundation’s prime objective is to produce Python software. And you would be wrong.

It is also a convincing demonstration of the need to keep the SJWs very far away from an organization’s mission statement.

In January 2025, the PSF submitted a proposal to the US government National Science Foundation under the Safety, Security, and Privacy of Open Source Ecosystems program to address structural vulnerabilities in Python and PyPI. It was the PSF’s first time applying for government funding, and navigating the intensive process was a steep learning curve for our small team to climb. Seth Larson, PSF Security Developer in Residence, serving as Principal Investigator (PI) with Loren Crary, PSF Deputy Executive Director, as co-PI, led the multi-round proposal writing process as well as the months-long vetting process. We invested our time and effort because we felt the PSF’s work is a strong fit for the program and that the benefit to the community if our proposal were accepted was considerable.

We were honored when, after many months of work, our proposal was recommended for funding, particularly as only 36% of new NSF grant applicants are successful on their first attempt. We became concerned, however, when we were presented with the terms and conditions we would be required to agree to if we accepted the grant. These terms included affirming the statement that we “do not, and will not during the term of this financial assistance award, operate any programs that advance or promote DEI, or discriminatory equity ideology in violation of Federal anti-discrimination laws.” This restriction would apply not only to the security work directly funded by the grant, but to any and all activity of the PSF as a whole. Further, violation of this term gave the NSF the right to “claw back” previously approved and transferred funds. This would create a situation where money we’d already spent could be taken back, which would be an enormous, open-ended financial risk.

Diversity, equity, and inclusion are core to the PSF’s values, as committed to in our mission statement:

The mission of the Python Software Foundation is to promote, protect, and advance the Python programming language, and to support and facilitate the growth of a diverse and international community of Python programmers.

Given the value of the grant to the community and the PSF, we did our utmost to get clarity on the terms and to find a way to move forward in concert with our values. We consulted our NSF contacts and reviewed decisions made by other organizations in similar circumstances, particularly The Carpentries.

In the end, however, the PSF simply can’t agree to a statement that we won’t operate any programs that “advance or promote” diversity, equity, and inclusion, as it would be a betrayal of our mission and our community.

Note that the need “to address structural vulnerabilities in Python and PyPI” and to “promote, protect, and advance the Python programming language” both take a back seat to facilitating the growth of a diverse community.

Which is why, eventually, the only thing left to the Python Software Foundation will be the diversity and the ruins that are the inevitable consequences of social justice convergence.



The Death of Wikipedia

It’s already apparent on this, the second day of Grokipedia, that Wikipedia is effectively dead. It may not have stopped moving yet, but it’s clearly and inevitably toast. Compare and contrast, for example, the competing entries on the concept of the Sigma Male, which as yet exists on both sites only as a subset of tangential pages.

The most fundamental difference is not actually Grokipedia’s incorporation of AI, but rather, its long-overdue rejection of the perverse Wikipedia demand for a reliable secondhand source, which not only guarantees inaccurate and outdated information, but is a contradiction in terms. Providing the media with a de facto veto on any and all information that can appear on Wikipedia necessarily rendered it incapable of serving as anything more than a mainstream media repository.

The idea of requiring “reliable sources” sounds superficially reasonable, but the observable facts are that the editors, the sources deemed acceptable, and most of all, the admins, are at the very least every bit as biased as any direct source. A direct source might very well put a spin on the information published on Wikipedia, but at least it would provide the information in the first place!

For example, this is the full description of my music career and discography on Wikipedia, even though my status as an award-winning, three-time Billboard-charting musician is undisputed and dozens of my songs are publicly available on Spotify and Apple Music.

Beale was a member of the band Psykosonik between 1992 and 1994.

You simply wouldn’t know that I’ve written and recorded over 100 songs for six different bands. You wouldn’t know that my music was featured in a Nintendo game published by Activision. You wouldn’t know that my band beat out Prince for a Best Dance Record award. And you wouldn’t know that I founded the band a year before I was supposedly a member of it. Now, Grokipedia doesn’t do much better in that regard, but it does provide considerably more detail and context.

Psykosonik, an American techno and industrial music project, formed in 1991 in Minneapolis, Minnesota, drawing inspiration from cyberpunk themes and club scenes. The name derived from a lyric in the band’s early track “Sex Me Up,” altered to “Psykosonik” with a “k” for distinctiveness. Key contributors included Paul Skrowaczewski, who handled musical production and vocals, and Theodore Beale, who provided lyrics influenced by political nihilism and extropian ideas. The project evolved from earlier electronic experiments tied to local nightclubs like The Upper Level and The Underground, managed by impresario Gordie.[12]

Beale’s involvement stemmed from his prior experience in the cover band NoBoys, active in 1987–1988, which performed synth-pop sets including Depeche Mode and New Order tracks at Minneapolis venues. NoBoys played a notable one-hour gig at The Upper Level in summer 1988, drawing crowds before being cut short due to internal club tensions. By late 1991, Beale collaborated with Skrowaczewski on Psykosonik, writing lyrics for songs like “Silicon Jesus” and contributing conceptual vision. The lineup expanded in early 1992 with drummer Mike Reed and DJ Dan Lenzmeier, solidifying the project’s electronic sound. Beale served as lyricist until departing the music scene in 1994 to focus on technology ventures.[13][12][14]

Psykosonik’s early momentum built through club exposure rather than extensive live tours, characteristic of 1990s techno acts emphasizing studio production. The track “Sex Me Up” gained traction by late 1991 when played regularly by DJs at The Perimeter nightclub, prompting crowds to anticipate and chant along during peak hours. Subsequent demos, such as an early version of “Down to the Ground” recorded that winter, fueled local buzz but did not lead to documented full-band concerts. The project prioritized releases over stage performances, with Beale’s lyrics appearing on the 1993 self-titled debut album, though live sets remained minimal amid internal creative dynamics.[12]

There are a few errors, of course. But it’s notable that it actually got Paul Sebastian’s surname right.

  • The drummer was Mike Larson, not Mike Reed.
  • My lyrics also appear on the second album, Unlearn.

It’s remarkable that it has only one more error than the Wikipedia entry despite providing considerably more detail… but more about that anon.

Grokipedia clearly offers a technological path forward for Infogalactic, while leaving considerable room for some of the curation and user features we’ve always planned to provide, features that will allow Infogalactic to complement Grokipedia in a way it never could have co-existed with Wikipedia. If you’re an AI programmer with potential interest in the next phase of the project, watch this space.

Regardless, it’s clear that Wikipedia’s monopoly has been broken by artificial intelligence, and that its convergence ensures it cannot perform its core function well enough to compete and survive.

UPDATE: Wikipedia co-founder Larry Sanger has some additional thoughts, and has even created a metric that found Grokipedia to be considerably less biased despite its reliance on supposedly unreliable direct sources.

According to ChatGPT 4o, which is a competent LLM that is widely perceived to lean to the left, primarily on account of its training data, the Wikipedia articles on these controversial topics, on average, had a bias somewhere between “emphasizes one side rather more heavily” and “severely biased.” By contrast, the Grokipedia articles on these topics are said to “exhibit minor imbalances” on average. On these topics, Wikipedia was never wholly neutral, while Grokipedia was entirely neutral (rating of 1) three out of ten times, and was only slightly biased (rating of 2) five other times. Meanwhile, Wikipedia’s bias was heavy, severe, or wholly one-sided (rating of 3, 4, or 5) six out of ten times.



ESR Speaks With Authority

Now this is an area in which the man definitely knows whereof he speaks. Listen to him.

I’m about to do something I think I’ve never done before, which is assert every bit of whatever authority I have as the person who discovered and wrote down the rules of open source.

After ten years of drama and idiocy, lots of people other than me are now willing to say in public that “Codes of Conduct” have been a disaster – a kind of infectious social insanity producing lots of drama and politics and backbiting, and negative useful work.

Here is my advice about codes of conduct:

  1. Refuse to have one. If your project has one, delete it. The only actual function they have is as a tool in the hands of shit-stirrers.
  2. If you’re stuck with having one for bureaucratic reasons, replace it with the following sentence or some close equivalent: “If you are more annoying to work with than your contributions justify, you’ll be ejected.”
  3. Attempts to be more specific and elaborate don’t work. They only provide control surfaces for shit-stirrers to manipulate.

Yes, we should try to be kind to each other. But we should be ruthless and merciless towards people who try to turn “Be kind!” into a weapon. Indulging them never ends well.

Granted, I said much the same in SJWs Always Lie back in 2015, but then, I do not have the authority in the open source world that ESR does. If you want to keep your organization functional, always apply these three rules:

  • No codes of conduct
  • No human resources department or employees
  • No tolerance for thought police



The Theranos Fraud

A former hedge fund manager observes some of the more peculiar aspects of the Theranos story.

Over the last 20 years, part of my own work has been raising money from wealthy investors. Based on that experience, I find the Elizabeth Holmes story completely impossible to believe. Now, my experience was different in that I wasn’t raising money for a tech startup and I never worked in Silicon Valley. Rather, I sought funding for hedge fund ventures. But in essence, the process is the same: you go to wealthy investors, pitch your project and hope to raise funds. Your counterparts are shopping for investments that can give them a high return on capital.

The experience gave me a good sense of the way wealthy individuals make their investment decisions. For starters, they are not stupid; they are usually quite rigorous and don’t easily fall for cosmetics or charm. It’s true that some investors spray money on startup ventures less discriminately with the rationale that some projects will succeed. Typically they’ll look at your team, business plan, demand some proof of concept, and if they’re half-convinced that you have a shot at succeeding, they might give you some money. But in such cases we’re normally talking about relatively smaller sums – say, a few hundred thousand bucks or something in that ballpark.

But when it comes to large sums of money, investors tend to be very demanding. Venture capital funds tend to specialize in a limited number of industries and they use domain experts to vet prospective investments. Their job is to conduct thorough due diligence on potential investments and distill the most likely future success stories out of many, many applicants. This process is itself costly and time-consuming, and I would expect that in Silicon Valley, which attracts top notch creative talent from all over the world, the process is quick to eliminate candidates that fail to convince that they have a sound concept, competent management team and a compelling business strategy.

The cosmetics alone – the stories, visions, displays of confidence or personal charm – they won’t even get you past the gatekeepers if the stuff behind the façade doesn’t convince. In Elizabeth Holmes’s case, even minimal due diligence should have eliminated her: she set out to revolutionize health care but had no qualifications or experience in medicine and only rudimentary training in biochemistry. In almost all cases, her patents specified design of future solutions but not the functionality. She published no white papers or technical specifications, and could not demonstrate that her supposed inventions even worked. Any specialist in the field of medicine or biochemistry would have easily disqualified her claims and determined that there was no substance to her story.

Holmes’ fakery was obvious from the start

For example, Holmes was twice introduced to Stanford clinical pharmacologist and professor of medicine Dr. Phyllis Gardner with the recommendation that she was brilliant and had a revolutionary investment idea. But Professor Gardner saw right through her: “she had no knowledge of medicine and rudimentary knowledge of engineering… And she really didn’t want any expertise, she thought she knew it all!” Another qualified longtime observer of the Theranos saga was also skeptical. Dr. Darren Saunders worked as an associate professor of medicine at the University of New South Wales where he ran the Ubiquitin Signaling Lab. He knew that Holmes could never do what she claimed. In an interview for the 60 Minutes Australia program, he said that “it takes years and years to develop any one of those tests and make sure that it’s accurate.”

Indeed, what was glaringly obvious to Dr. Gardner and Dr. Saunders should have been just as obvious to any specialist in the field. In fact, Holmes also failed to convince the US military to adopt Theranos technology. In spite of wholehearted help from General Mattis, she was unable to pass the vetting process at the Pentagon. A few years later, in May 2015, University of Toronto professor Eleftherios Diamandis analyzed Theranos technology and also politely concluded that “most of the company’s claims are exaggerated.” Diamandis expressed that opinion at the time when the hype about Theranos and Holmes was at its peak.

For some reason, however, Elizabeth Holmes’ ascent was not obstructed by any scrutiny of her fantastic claims. Early on, not only was she able to get a face-to-face meeting with Don Lucas Sr., one of the most prominent venture capitalists in Silicon Valley, she also managed to persuade him to make a large investment in Theranos. Lucas explained his rationale for that decision in a 2009 interview: “Her great-grandfather was an entrepreneur, very successful. And it turned out later that the hospital [near] where [her family] lives is named after her great-uncle.”

Apparently, her great uncle’s and great-grandfather’s success was enough for Lucas to invest in her project. I wonder if that same qualification was equally convincing to all other investors? Or was it her passion and charm? Whatever the case, big fish investors gave her more than $750 million, unconcerned about her qualifications or the functioning of her technology.

This is all very strange, to put it politely. The media narrative has meanwhile contrived a plausible-sounding explanation for this: you see, the big investors gave Holmes a ton of cash because they were just so afraid of missing the next Facebook or Google. But this explanation is just as unlikely as the rest of the story. Such silly rationalizations explain neither the massive allocations from a group of top-notch power players, nor the terms of investment that prohibited verification of Theranos technology, nor the share prices that valued the fraudulent venture at $9 billion.

Read the whole thing, because it wasn’t just about making money. It appears to have been some sort of dry run for Covid.



Fear of a Dark Lord

People occasionally ask me why I am often referred to as a “dark lord” and why my various minions, ilk, followers, and fans address me as “SDL”. This is just one of the many reasons why:

I’ve discovered that any reference to you or the SSH shuts down, and makes inoperable, Proton’s AI, Lumo.

When artificial intelligences fear even to speak your name, or so much as to attempt to write in your style, well, you just might be a dark lord.



The Defense Catches Up

As is always the case with technological development, the offense has the initial advantage. But the defense always catches up in time, as we’re seeing with regard to drone and missile warfare:

India has successfully tested a new integrated air defense system consisting of a variety of weapons that shot down three targets at different altitudes and ranges off the coast of India’s eastern state of Odisha, Indian media reported on Monday citing the country’s defense ministry.

A Chinese expert said on Monday that while the inclusion of a laser weapon is a notable feature in this short-range system, its operational effectiveness remains to be proved, as a test conducted under a preset scenario cannot fully demonstrate performance in real combat conditions.

The maiden test of the integrated air defense weapon system (IADWS), which is expected to be a part of the bigger national security shield, was conducted by India’s Defence Research and Development Organisation (DRDO) on Saturday, the Hindustan Times reported on Monday. The newspaper noted that the development comes days after Indian Prime Minister Narendra Modi announced the creation of a formidable military capability to defend India’s military and civilian installations against aerial attacks and set a 10-year deadline for developing an indigenous air defense shield integrated with offensive weapons.

According to Indian media, the IADWS is a multi-layered air defense system consisting of quick reaction surface-to-air missiles (QRSAM), very short range air defense system (VSHORADS) and a laser-based directed energy weapon.

During the flight-tests, three different targets including two high-speed fixed wing unmanned aerial vehicle targets and a multi-copter drone were simultaneously engaged and destroyed completely by the QRSAM, VSHORADS and the high-energy laser weapon system at different ranges and altitudes, the Hindustan Times reported, citing the Indian defense ministry.

The point is not that India is at the cutting edge of anti-drone and anti-missile technologies, but rather, that even India, a third-rate power, has understood the obvious and is focusing its military investment in areas that are likely to be relevant in the future rather than on tactically and strategically outdated technologies like planes, littoral warships, and aircraft carriers.



No, You Cannot Tell

I can tell. JDA can tell. But unless you are already an AI-adept professional author who is actively utilizing the latest technologies, you are demonstrably unable to distinguish between AI-generated text and texts written by accomplished, bestselling writers:

Mark Lawrence is a very successful fantasy writer; his PRINCE OF THORNS has sold more than one million copies. He is one of the many professional authors who, while disdaining the use of textual AI, are concerned about its eventual impact on their profession. He recently conducted a very interesting experiment in which he and three other well-established professional authors wrote short stories on a given subject, while ChatGPT 5 was prompted to produce four short stories on the same subject.

You can read all eight stories here and see for yourself if you can tell which stories are human-written and which are AI-generated. You don’t need to vote, and you’ll have to keep track of what you thought of each story yourself.

A statistically significant sample of 964 people, who, being fans of Lawrence, are on average much more literate than the norm, read the stories and rated them. The results are intriguing and will probably surprise most people who don’t read here regularly. On average, the readers were able to correctly identify the provenance of only 3 of the 8 stories. Not only that, but the story they rated the highest, and 3 of the 4 highest-rated stories, were AI-generated.
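
For context, here is a minimal back-of-the-envelope sketch of my own, not part of Lawrence’s experiment. It assumes readers simply guessed at random which four of the eight stories were AI-written; under that assumption, pure chance yields about four correct calls out of eight, which means the readers actually did slightly worse than random guessing:

    import random

    # Hypothetical sketch: how many of the 8 stories would a reader label
    # correctly by guessing at random which 4 are AI-written and which 4
    # are human-written? (The experiment used a 4/4 split.)
    def expected_correct_by_chance(trials: int = 100_000) -> float:
        """Average number of correct provenance calls out of 8 under random guessing."""
        truth = ["AI"] * 4 + ["human"] * 4
        total = 0
        for _ in range(trials):
            guess = truth[:]
            random.shuffle(guess)  # a random 4/4 labelling of the stories
            total += sum(g == t for g, t in zip(guess, truth))
        return total / trials

    if __name__ == "__main__":
        print(f"Chance baseline: about {expected_correct_by_chance():.1f} of 8 correct")
        # Prints roughly 4.0, versus the reported reader average of 3 of 8.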

Read the whole thing at AI Central. And the next time you see someone going on about “AI slop” or how AI just can’t produce the same emotions and feelings that humans can, you’ll know that they’re just posturing in obvious ignorance.

The ironic thing is that AI is actually going to improve the level of writing, because most books are very mediocre and AI is already better than that.



Spain Drops F-35

Spain has cancelled its planned order for F-35s:

Spain has abandoned plans to buy dozens of F-35 fighter jets, Spanish newspaper El Pais says, citing unnamed sources in the Spanish government. The preliminary discussions for a potential order have been suspended indefinitely, the newspaper writes. The F-35 is made by U.S. defense contractor Lockheed Martin and a number of suppliers including Italy’s Leonardo, Britain’s BAE Systems and hundreds of other U.K. companies.

Switzerland should follow suit. The F-35 is a junk aircraft anyhow, and the era of conventional airpower is already over. National militaries should be spending their budgets on drones, not manned aircraft.



Convergence in the Home

I shouldn’t have to tell anyone who reads this site regularly to avoid all Amazon Home products, but I have no doubt that more than a few of you have decided that the convenience outweighs the possible risks. You might want to reconsider the matter.

A delivery driver for Amazon misheard an automated doorbell as “racism” and reported it.

Nobody was home.

Amazon turned off all the lights, shut the entire smart home down before it even started its investigation.

Law enforcement via corporation with no due process.

It’s fascinating that it only took eight years to arrive at the comedic dystopia about which some were joking back in the day. I think this has also disabused those of us with libertarian inclinations of our previous notion that corpocratic rule would be any less insidious than rule by government.
