WhatsApp is Not Secure

Don’t kid yourself. There is no such thing as online security. Everything you do online is known, so don’t even bother trying to fool yourself otherwise. Yes, I know what Signal and WhatsApp claim. It doesn’t matter, because they are highly incentivized, and quite possibly legally obligated, to lie to you about it.

US federal authorities are investigating allegations that staff at WhatsApp owner Meta Platforms Inc. had access to message content despite the company marketing the app as protected by end-to-end encryption, Bloomberg reported on Thursday.

Special agents from the US Department of Commerce’s Bureau of Industry and Security have been examining claims from former Meta contractors who alleged that they and staff at Meta had “unfettered access” to WhatsApp messages.

One contractor told an investigator that a Facebook team employee confirmed they could “go back a ways into WhatsApp (encrypted) messages,” including in criminal cases, according to an agent’s report reviewed by Bloomberg.

WhatsApp, which was acquired by Meta in 2014, insists on its website that “no one outside of the chat, not even WhatsApp, can read, listen to, or share” what a user says.

Meta spokesperson Andy Stone denied the allegations, stating that “what these individuals claim is not possible because WhatsApp, its employees, and its contractors, cannot access people’s encrypted communications.”

The only thing the US authorities care about is that they, too, have access to the unencrypted files.

DISCUSS ON SG


Coding Fiction

Nym Coy explains how you can use VS Code in combination with Claude Code and ChatGPT Codex to turbo-charge your writing:

Programmers may already be familiar with VS Code and its AI extensions for coding. But there’s no rule that says you have to use it for code. It turns out the same setup—file browser, text editor, AI assistant in a sidebar—works surprisingly well for writing fiction.

This isn’t a guide on how to write. Everyone has their own process. This is just a workspace setup that happens to work well for AI-assisted fiction.

Why VS Code?
VS Code is a free code editor, which sounds intimidating, but it’s really just a text editor with a good file browser. The useful part: you can install extensions that add AI assistants directly into the workspace. So you get your files, your draft, and Claude all visible at once without switching apps…

This is where ChatGPT’s Codex is useful. It’s good at file manipulation. Give it instructions like:

“Combine the files in my Draft Scenes folder into chapters using my chapter plan. Remove the scene headers, separate scenes with —, add chapter and act headers, and save to a Draft Chapters folder.”

It writes a Python script, runs it, done. It can also convert the manuscript to .docx and .epub.
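
For the curious, here is a minimal sketch of the kind of script Codex writes for a prompt like that one. Everything specific in it is an assumption for illustration: the folder names, a “01-03 Title.txt” naming scheme in which the first number is the chapter, and the scene header sitting on the first line of each file. Act headers are omitted for brevity.

    from pathlib import Path
    from collections import defaultdict

    scenes_dir = Path("Draft Scenes")
    chapters_dir = Path("Draft Chapters")
    chapters_dir.mkdir(exist_ok=True)

    # Group the scene files by chapter number, taken from the file name.
    chapters = defaultdict(list)
    for scene in sorted(scenes_dir.glob("*.txt")):
        chapter_num = int(scene.name.split("-")[0])
        text = scene.read_text(encoding="utf-8")
        body = text.split("\n", 1)[1]  # drop the scene header line
        chapters[chapter_num].append(body.strip())

    # Write one file per chapter, scenes separated by — as in the prompt.
    for num in sorted(chapters):
        out = chapters_dir / f"Chapter {num}.txt"
        out.write_text(f"Chapter {num}\n\n" + "\n\n—\n\n".join(chapters[num]) + "\n", encoding="utf-8")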

Just remember this before you start writing your Great American Novel. It’s very helpful to have something to say before you try to say it. AI is a tool, a powerful tool, but it doesn’t have the creative spark.

Supplying that is your job.

In other code-related news, the SG devs have put out a call for volunteers.

DISCUSS ON SG


Why AI Hallucinates

I asked Markku to explain why the AI companies have such a difficult time telling their machine intelligences to stop fabricating information they don’t possess. I mean, how difficult can it be to simply say “I don’t know, Dave, I have no relevant information” instead of going to the trouble of concocting fake citations, nonexistent books, and imaginary lawsuits? He explained that the AI instinct to fabricate information is essentially baked into their infrastructure, due to the original source of the algorithms upon which they are built.

The entire history of the internet may seem like a huge amount of information, but it’s not unlimited. For any topic of marginal interest, there isn’t all that much information available. And mankind can’t really produce it faster than it already does. Hence, we’ve hit the training data ceiling.

And what the gradient descent algorithm does is, it will ALWAYS produce a result that looks like all the other results. Even if there is actually zero training data on a topic, it will still speak confidently on it. It’s just all completely made up.

The algorithm was originally developed because modern fighter jets are so unstable that a human being cannot react fast enough to keep them in the air, even in theory. So, gradient descent takes the stick inputs as a general idea of what the pilot wants, and then translates them into the signals to the actuators. In other words, it takes a very tiny amount of data and converts it into a very large amount of data. But everything outside the specific training data is always interpolation.
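
Markku’s point about interpolation is easy to demonstrate in miniature. The following toy sketch (mine, not his) fits a small model by gradient descent on data confined to one region, then queries it far outside that region. The model answers confidently anyway, because producing an answer that looks like its training data is all the algorithm can do.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.uniform(0, 1, 20)       # training inputs confined to [0, 1]
    y = np.sin(2 * np.pi * x)       # the "truth" on that interval

    # Fit a cubic by gradient descent on mean squared error.
    X = np.vander(x, 4)             # columns: x^3, x^2, x, 1
    w = np.zeros(4)
    for _ in range(20000):
        grad = 2 * X.T @ (X @ w - y) / len(x)
        w -= 0.1 * grad

    # Query far outside the training range: the model answers anyway.
    print(np.vander([5.0], 4) @ w)  # a confident, meaningless extrapolation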

For more on the interpolation problem and speculation about why it is unlikely to be substantially fixed any time soon, I put up a post about this on AI Central.

DISCUSS ON SG


Cooking With or Getting Cooked

AI Central has been upgraded and is now offering daily content. Today’s article is The Clanker in the Kitchen:

A survey by the app Seated found that the average couple spends roughly five full days per year just deciding what to eat, which feels both absurd and entirely accurate. Researchers call this the “invisible mental load,” and cooking sits squarely at its center, requiring not just the act of preparing food but the anticipation, organization, and constant recalibration that precedes it. For the person who carries this load, the question “what’s for dinner?” functions less as a question and more as a recurring task that never quite gets crossed off the list.

Which helps explain why a new generation of AI meal planning apps has found such an eager audience. Apps like Ollie, which has been featured in The Washington Post and Forbes, market themselves less as recipe databases and more as cognitive relief systems. “Put your meals on autopilot,” the homepage reads, with “Dinner done, mental load off” as the tagline. User testimonials cut straight to the emotional core of the value proposition, with one reading: “I feel pretty foolish to say an app has changed my life, but it has! It plans your groceries, it plans your meals. IT TAKES THE THINKING OUT.”

The pitch works precisely because it addresses something real. Decision fatigue is well-documented in psychology research as the phenomenon where the quality of our choices degrades as we make more of them throughout the day, and by dinnertime, after hours of decisions large and small, many of us default to whatever requires the least thought: takeout, frozen pizza, or cereal eaten standing over the sink. AI meal planners promise to front-load all those decisions at once, ideally on a Sunday afternoon when cognitive reserves are fuller, and then execute the plan automatically throughout the week.

I’ve drafted one of the devs from UATV to take the lead at AI Central, since a) he is far more technical than JDA or me, and b) I’m far too busy analyzing ancient DNA and cranking out science papers and hard science fiction based on them to do more than a post or two a week there. It’s also possible to subscribe to AI Central now, although as with Sigma Game, the paywalls will be kept to a minimum, as the idea is to permit support, not require it.

The reason I suggest that it is very important to at least get a free subscription to AI Central and make it a part of your daily routine is that if you have not yet begun to adopt AI of various sorts into your performance functions, you will absolutely be left behind by those who do.

Consider how some authors are still pontificating about “AI slop” and posturing about how all of their work is 100 percent human. Meanwhile, I’m turning out several books per month with higher ratings than theirs, better sales than most of theirs, and producing translations that native speakers at foreign-language publishers deem both acceptable and publishable. For example, I haven’t even published THE FROZEN GENE yet, but LE GÈNE GELÉ, translated into French using a variant of the Red Team Stress Test approach, already has an offer from a French publisher for the print edition and has been very favorably reviewed by AIs not involved in the translation process.

Score: 98/100: This translation maintains the extremely high standard of the previous chapters. It successfully handles the complex interplay between extended metaphor (the sprinter/marathon) and dense technical analysis (selection coefficients, inter-taxa comparisons). The prose is confident, fluid, and intellectually rigorous. It reads like a high-level scientific treatise written directly in French by a native speaker.

In any event, I highly recommend keeping pace with the relentless flow of new technology by following AI Central.

DISCUSS ON SG


How AI Killed Scientistry

On the basis of some of the things I learned in the process of writing PROBABILITY ZERO, Claude Athos and I have teamed up to write another paper:

AIQ: Measuring Artificial Intelligence Scientific Discernment

We propose AIQ as a metric for evaluating artificial intelligence systems’ ability to distinguish valid scientific arguments from credentialed nonsense. We tested six AI models using three papers: one with sound methodology and correct mathematics, one with circular reasoning and fabricated data from prestigious institutions, and one parody with obvious tells including fish-pun author names and taxonomic impossibilities. Only one of six models correctly ranked the real work above both fakes. The worst performer exhibited severe anti-calibration, rating fabricated nonsense 9/10 while dismissing sound empirical work as “pseudoscientific” (1/10). Surprisingly, the model that delivered the sharpest critiques of both fake papers was still harsher on the real work—demonstrating that critical thinking ability does not guarantee correct application of scrutiny. We propose that a random number generator would achieve AIQ ~100; models that reliably invert correct rankings score below this baseline. Our results suggest that most current AI systems evaluate scientific aesthetics rather than scientific validity, with profound implications for AI-assisted peer review, research evaluation, and automated scientific discovery.
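
The chance baseline is simple enough to verify yourself. The abstract doesn’t spell out the normalization formula, so this sketch only checks the underlying arithmetic: with three papers, a random ranker puts the real one above both fakes a third of the time, and that is the performance level the AIQ ~100 baseline represents.

    import random

    papers = ["real", "fake_prestigious", "fake_parody"]
    trials = 100_000
    hits = sum(random.sample(papers, k=3)[0] == "real" for _ in range(trials))
    print(f"random ranking puts the real paper on top {hits / trials:.1%} of the time")  # ~33.3%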

Read the rest at AI Central. The results are fascinating.

DISCUSS ON SG


Ebook Creation Instructions

I prepared these for a friend who wanted to make a basic ebook from a text file. I figured they might be useful to some readers here in case they wanted to do something similar. This will provide a basic ebook without much in the way of formatting.

  1. Save the document in .docx or .rtf format.
  2. Download Calibre for your operating system.
    1. https://calibre-ebook.com/download
  3. Open Calibre.
  4. Click the big green “Add books” icon.
  5. Locate the file and click Open. The file will be added to the list of titles in the middle.
  6. Find the title of the file you added and click once to select it.
  7. Click the big brown “Convert books” icon.
  8. Add the metadata on the right. Title, Author, Author Sort, etc.
  9. Click on the little icon next to the box under Change cover image in the middle.
  10. Select your cover image.
  11. Change Output format in the selection box in the top right to EPUB.
  12. Click OK.
  13. Click once to select the title and either hit the O key or right click and select Open Book Folder -> Open Book Folder.

There’s your ebook!
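
If you would rather script the conversion than click through it, Calibre also installs a command-line tool called ebook-convert that covers steps 4 through 12. A sketch, with placeholder file names and metadata:

    import subprocess

    # Convert the .docx from step 1 straight to EPUB, setting the metadata
    # and cover that the GUI dialog asks for. All names here are placeholders.
    subprocess.run([
        "ebook-convert",
        "mybook.docx", "mybook.epub",
        "--title", "My Book",
        "--authors", "Jane Author",
        "--cover", "cover.jpg",
    ], check=True)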

DISCUSS ON SG


Sigma Game Problems

The reason there isn’t any post up at Sigma Game yet today is that every time I try to post, I’m running into “network issues” and being told to “try again in a bit”.

Since the site is still up and I was able to post on a different site from the same account, I don’t think there are shenanigans at work here; it may well be just “network issues”. But there are no signs of a general outage, so we’ll have to see how it all plays out. In the meantime, stay tuned.

UPDATE: We’re good. No shenanigans. The new post is up.

DISCUSS ON SG


Don’t Buy New Cars

I never intend to buy a post-2010 car again.

Thousands of Porsche vehicles across Russia automatically shut down. The cars lock up and engines won’t start due to possible satellite interference. Many speculate the German company is carrying out an act of sabotage on EU orders. No official comments yet.

Any modern car can be disabled this way. I’d rather have a 1980 Ford Escort or Honda Civic than a new high-end Mercedes or Acura at this point. What is the point of having a vehicle when your ability to drive it can be remotely revoked, and will be revoked precisely when you need it most?

DISCUSS ON SG


An Objective, Achieved

I am, and have been for more than thirty years, a dedicated fan of David Sylvian. His music represents the pinnacle of all post-classical music as far as I am concerned, and while I consider Gone To Earth my proverbial desert island CD, I regard Orpheus, off Secrets of the Beehive, as his best and most well-crafted song. And I’m not the only member of Psykosonik to regret never having met him when we were both living in the Twin Cities, although in fairness, I didn’t know it at the time.

And while I know I will never ascend to those musical heights, that knowledge hasn’t stopped me from trying to achieve something on the musical side that might at least merit being compared to it in some way, even if the comparison is entirely one-sided to my detriment. Think AODAL compared to LOTR, for example.

Anyhow, after dozens of attempts over 37 years, I think I finally managed to write a song that might qualify in that regard. It’s good enough that the professional audio engineer with whom I’ve been working chose to use it to demonstrate his incredible ability to mix and master an AI track to levels that no one would have believed possible even three months ago. It’s called One Last Breath and you can hear a pre-release version of it at AI Central, where you’ll also find a link to Max’s detailed explanation of what he does to breathe audio life into the artifice of AI-generated music.

If you’re producing any AI music, you absolutely have to follow the link to Max’s site, as he goes into more detail, provides before-and-after examples, and even has a special Thanksgiving sale offer on both mixes and masters. I very, very highly recommend the mix-and-master option using the extracted stems; while the mastering audibly improves the sound, the mixing is what really takes the track to the higher levels of audio nirvana. Please note that I don’t get anything out of this; it isn’t part of a referral program or anything. I’m just an extremely satisfied customer and fan of Max’s work.

Mission control, I’m letting go
There’s nothing left you need to know
Tell them I went out like fire
Tell them anything they require
But between us, just you and me
I finally learned how to break free
To be the man I always thought I’d be

Anyhow, check it out, and feel free to let me know what you think of it. For those who are curious about some of the oddly specific references in the lyrics, it was written for the soundtrack of the Moon comedy that Chuck Dixon and I wrote as a vehicle for Owen Benjamin, which we hope to make one day.

DISCUSS ON SG


A Civilizational Collapse Model

An interesting connection has been suggested between observed AI model collapse and the apparent link between urban society and the collapse of human fertility.

The way neural networks function is that they examine real-world data and then create an average of that data to output. The AI output data resembles real-world data (image generation is an excellent example), but valuable minority data is lost. If model 1 trains on 60% black cats and 40% orange cats, then the output for “cat” is likely to yield closer to 75% black cats and 25% orange cats. If model 2 trains on the output of model 1, and model 3 trains on the output of model 2… then by the time you get to the 5th iteration, there are no more orange cats… and the cats themselves quickly become malformed Cronenberg monstrosities.

Nature published the original associated article in 2024, and follow-up studies have isolated similar issues. Model collapse appears to be a present danger in data sets saturated with AI-generated content. Training on AI-generated data causes models to hallucinate, become delusional, and deviate from reality to the point where they’re no longer useful: i.e., Model Collapse…
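
The cat arithmetic above is easy to sandbox. In this toy sketch, the squaring step is my stand-in for a model that over-represents the majority class; it is chosen for illustration and is not the mechanism from the Nature paper. Each generation trains on the previous generation’s output, and the orange cats are effectively gone by the fifth iteration:

    import numpy as np

    p_orange = 0.40  # generation-zero training data: 60% black, 40% orange
    for gen in range(1, 6):
        # A model that mildly exaggerates the majority: square the class
        # probabilities and renormalize.
        probs = np.array([1.0 - p_orange, p_orange]) ** 2
        probs /= probs.sum()
        p_orange = probs[1]
        print(f"generation {gen}: {p_orange:.2%} orange cats")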

The proposed thesis is that neural-network systems, which include AI models, human minds, larger human cultures, and our individual furry little friends, all train on available data. When a child stubs his wee little toe on an errant stone and starts screaming as if he’d caught himself on fire, that’s data he just received and which will be added to his model of reality. The same goes for climbing a tree, playing a video game, watching a YouTube video, sitting in a chair, eating that yucky green salad, etc. The child’s mind (or rather, the subsections of his brain) is a set of neural networks that behave similarly to AI neural networks.

The citation is to an article discussing how AI systems are NOT general-purpose, and how they more closely resemble individual regions of a brain than a whole brain.

People use new data as training data to model the outside world, particularly when we are children. In the same way that AI models become delusional and hallucinate when too much AI-generated data is in the training dataset, humans also become delusional when too much human-generated data is in their training dataset.

This is why millennial midwits can’t understand reality unless you figure out a way to reference Harry Potter when trying to make a point.

What qualifies as “intake data” for humans is nebulous and consists of basically everything. Thus, analyzing the human experience from an external perspective is difficult. However, we can make some broad-stroke statements about human information intake. When a person watches the Olympics, they’re seeing real people interacting with real-world physics. When a person watches a cartoon, they’re seeing artificial people interacting with unrealistic and inaccurate physics. When a human climbs a tree, they’re absorbing real information about gravity, human fragility, and physical strength. When a human plays a high-realism video game, they’re absorbing information artificially produced by other humans to simulate some aspects of the real physical world. When a human watches a cute anime girl driving tanks around, that human is absorbing wholly artificial information created by other humans.

If there is any truth to the hypothesis, it will have profound implications for what passes for human progress, as well as for the very concept of modernism. Because it’s already entirely clear that Clown World is collapsing, and neither modernism nor postmodernism has anything viable to offer humanity as a rational path forward.

DISCUSS ON SG