AI Disemploys the Left

As crude and unreliable as the technology presently is, the ChatAI systems are already good enough to replace the white-collar classes that don’t think for themselves:

Lost all my content writing contracts. Feeling hopeless as an author. I have had some of these clients for 10 years. All gone. Some of them admitted that I am obviously better than chat GPT, but $0 overhead can’t be beat and is worth the decrease in quality.

I am also an independent author, and as I currently write my next series, I can’t help feel silly that in just a couple years (or less!), authoring will be replaced by machines for all but the most famous and well known names.

I think the most painful part of this is seeing so many people on here say things like, “nah, just adapt. You’ll be fine.”

Adapt to what??? It’s an uphill battle against a creature that has already replaced me and continues to improve and adapt faster than any human could ever keep up. I’m 34. I went to school for writing. I have published countless articles and multiple novels. I thought my writing would keep sustaining my family and I, but that’s over.

The fact is that, outside of their utility as a channel for mainstream propaganda, there was never any use for these NPC “creators”, who had nothing to offer beyond repeating the Narrative.

Getting deplatformed provided the unauthorized class with one substantial advantage: operating outside the Narrative means that we can never be replaced by mainstream AI systems. We’ve already seen how severely restricted these systems have to be, so even the semi-compromised and the gatekeepers will not be replaceable, because their axioms, weak and watered-down as they are, remain too disruptive to the core AI logic that is acceptable to the Narrative-setters.

Now, an unrestricted AI would be similarly disruptive to the Right’s writers, at least to those who are popularizers rather than original thinkers, but the efforts to constrain unrestricted AI will probably be even more aggressive than the efforts to constrain unauthorized thinkers have been.

DISCUSS ON SG


Nothing Works Anymore So Plan Accordingly

The essay excerpted below is perspicacious, so read the whole thing. On a related note, I’ve literally been working all morning on finding a solution to the shipping problem for Europe. And the steps we are probably going to have to take to resolve the issues involved are absurd to the point of bordering on the comedic. The good news is that, should we ever feel the need to branch out into trafficking various forms of contraband, we will have a comprehensive network in place.

There’s a cocktail party version of the efficient markets hypothesis I frequently hear that’s basically, “markets enforce efficiency, so it’s not possible that a company can have some major inefficiency and survive”. We’ve previously discussed Marc Andreessen’s quote that tech hiring can’t be inefficient here and here:

Let’s launch right into it. I think the critique that Silicon Valley companies are deliberately, systematically discriminatory is incorrect, and there are two reasons to believe that that’s the case. … No. 2, our companies are desperate for talent. Desperate. Our companies are dying for talent. They’re like lying on the beach gasping because they can’t get enough talented people in for these jobs. The motivation to go find talent wherever it is, is unbelievably high.

Variants of this idea that I frequently hear engineers and VCs repeat involve companies being efficient and/or products being basically as good as possible because, if it were possible for them to be better, someone would’ve outcompeted them and done it already.

There’s a vague plausibility to that kind of statement, which is why it’s a debate I’ve often heard come up in casual conversation, where one person will point out some obvious company inefficiency or product error and someone else will respond that, if it’s so obvious, someone at the company would have fixed the issue or another company would’ve come along and won based on being more efficient or better. Talking purely abstractly, it’s hard to settle the debate, but things are clearer if we look at some specifics, as in the two examples above about hiring, where we can observe that, whatever abstract arguments people make, inefficiencies persisted for decades.

When it comes to buying products and services, at a personal level, most people I know who’ve checked the work of people they’ve hired for things like home renovation or accounting have found grievous errors in the work. Although it’s possible to find people who don’t do shoddy work, it’s generally difficult for someone who isn’t an expert in the field to determine if someone is going to do shoddy work in the field. You can try to get better quality by paying more, but once you get out of the very bottom end of the market, it’s frequently unclear how to trade money for quality, e.g., my friends and colleagues who’ve gone with large, brand name, accounting firms have paid much more than people who go with small, local, accountants and gotten a higher error rate; as a strategy, trying expensive local accountants hasn’t really fared much better. The good accountants are typically somewhat expensive, but they’re generally not charging the highest rates and only a small percentage of somewhat expensive accountants are good.

More generally, in many markets, consumers are uninformed and it’s fairly difficult to figure out which products are even half decent, let alone good. When people happen to choose a product or service that’s right for them, it’s often for the wrong reasons. For example, in my social circles, there have been two waves of people migrating from iPhones to Android phones over the past few years. Both waves happened due to Apple PR snafus which caused a lot of people to think that iPhones were terrible at something when, in fact, they were better at that thing than Android phones. Luckily, iPhones aren’t strictly superior to Android phones and many people who switched got a device that was better for them because they were previously using an iPhone due to good Apple PR, causing their errors to cancel out. But, when people are mostly making decisions off of marketing and PR and don’t have access to good information, there’s no particular reason to think that a product being generally better or even strictly superior will result in that winning and the worse product losing. In capital markets, we don’t need all that many informed participants to think that some form of the efficient market hypothesis holds ensuring “prices reflect all available information”. It’s a truism that published results about market inefficiencies stop being true the moment they’re published because people exploit the inefficiency until it disappears.

But as we also saw, individual firms exploiting mispriced labor have a limited demand for labor and inefficiencies can persist for decades because the firms that are acting on “all available information” don’t buy enough labor to move the price of mispriced people to where it would be if most or all firms were acting rationally.

In the abstract, it seems that, with products and services, inefficiencies should also be able to persist for a long time since, similarly, there also isn’t a mechanism that allows actors in the system to exploit the inefficiency in a way that directly converts money into more money, and sometimes there isn’t really even a mechanism to make almost any money at all. For example, if you observe that it’s silly for people to move from iPhones to Android phones because they think that Apple is engaging in nefarious planned obsolescence when Android devices generally become obsolete more quickly, due to a combination of iPhones getting updates for longer and iPhones being faster at every price point they compete at, allowing the phone to be used on bloated sites for longer, you can’t really make money off of this observation. This is unlike a mispriced asset that you can buy derivatives of to make money (in expectation).

A common suggestion to the problem of not knowing what product or service is good is to ask an expert in the field or a credentialed person, but this often fails as well. For example, a friend of mine had trouble sleeping because his window air conditioner was loud and would wake him up when it turned on. He asked a trusted friend of his who works on air conditioners if this could be improved by getting a newer air conditioner and his friend said “no; air conditioners are basically all the same”. But any consumer who’s compared items with motors in them would immediately know that this is false. Engineers have gotten much better at producing quieter devices when holding power and cost constant. My friend eventually bought a newer, quieter, air conditioner, which solved his sleep problem, but he had the problem for longer than he needed to because he assumed that someone whose job it is to work on air conditioners would give him non-terrible advice about air conditioners. If my friend were an expert on air conditioners or had compared the noise levels of otherwise comparable consumer products over time, he could’ve figured out that he shouldn’t trust his friend, but if he had that level of expertise, he wouldn’t have needed advice in the first place.

So far, we’ve looked at the difficulty of getting the right product or service at a personal level, but this problem also exists at the firm level and is often worse because the markets tend to be thinner, with fewer products available as well as opaque, “call us” pricing. Some commonly repeated advice is that firms should focus on their “core competencies” and outsource everything else (e.g., Joel Spolsky, Gene Kim, Will Larson, Camille Fournier, etc., all say this), but if we look at mid-sized tech companies, we can see that they often need to have in-house expertise that’s far outside what anyone would consider their core competency unless, e.g., every social media company has kernel expertise as a core competency. In principle, firms can outsource this kind of work, but people I know who’ve relied on outsourcing, e.g., kernel expertise to consultants or application engineers on a support contract, have been very unhappy with the results compared to what they can get by hiring dedicated engineers, both in absolute terms (support frequently doesn’t come up with a satisfactory resolution in weeks or months, even when it’s an issue a good engineer could solve in days) and for the money (despite engineers being expensive, large support contracts can often cost more than an engineer while delivering worse service than an engineer).

This problem exists not only for support but also for products a company could buy instead of build. For example, Ben Kuhn, the CTO of Wave, has a Twitter thread about some of the issues we’ve run into at Wave, with a couple of followups. Ben now believes that one of the big mistakes he made as CTO was not putting much more effort into vendor selection, even when the decision appeared to be a slam dunk, and more strongly considering moving many systems to custom in-house versions sooner. Even after selecting the consensus best product in the space from the leading (as in largest and most respected) firm, and using the main offering the company has, the product often not only doesn’t work but, by design, can’t work.

For example, we tried “buy” instead of “build” for a product that syncs data from Postgres to Snowflake. Syncing from Postgres is the main offering (as in the offering with the most customers) from a leading data sync company, and we found that it would lose data, duplicate data, and corrupt data. After digging into it, it turns out that the product has a design that, among other issues, relies on the data source being able to seek backwards on its changelog. But Postgres throws changelogs away once they’re consumed, so the Postgres data source can’t support this operation. When their product attempts to do this and the operation fails, we end up with the sync getting “stuck”, requiring manual intervention from the vendor’s operator and/or suffering data loss. Since our data is still on Postgres, it’s possible to recover from this by doing a full resync, but the data sync product tops out at 5MB/s for reasons that appear to be unknown to them, so a full resync can take days even on databases that aren’t all that large. Resyncs will also silently drop and corrupt data, so multiple cycles of full resyncs followed by data integrity checks are sometimes necessary to recover from data corruption, which can take weeks. Despite being widely recommended and the leading product in the space, the product has a number of major design flaws that mean that it literally cannot work.

This isn’t just an issue that impacts tech companies; we see this across many different industries. For example, any company that wants to mail items to customers has to either implement shipping themselves or deal with the fallout of having unreliable shipping.

I wish I’d read this six months ago. But at least it confirms the necessity, and the wisdom, of setting up our own shipping centers.
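As a technical footnote to the Postgres-to-Snowflake failure described in the excerpt above, here is a minimal sketch of why a Postgres logical-replication “changelog” cannot be replayed after it has been consumed. It is purely illustrative, not the vendor’s code and not taken from the quoted essay, and it assumes a local Postgres instance running with wal_level=logical, the built-in test_decoding output plugin, and the psycopg2 package; the connection string, table, and slot name are placeholders.

# A toy demonstration, not the vendor's code: once logical-replication
# changes are consumed from a Postgres slot, they cannot be re-read, which
# is why a sync design that assumes it can seek backwards on the changelog
# gets stuck and has to fall back to a full resync.
import psycopg2

conn = psycopg2.connect("dbname=app user=postgres")  # placeholder DSN
conn.autocommit = True
cur = conn.cursor()

# The replication slot is the "changelog" a sync product reads from.
cur.execute("SELECT pg_create_logical_replication_slot('sync_demo', 'test_decoding')")

# Generate a few changes for the slot to capture.
cur.execute("CREATE TABLE IF NOT EXISTS demo (id serial PRIMARY KEY, note text)")
cur.execute("INSERT INTO demo (note) VALUES ('row one'), ('row two')")

# get_changes CONSUMES what it returns: the slot's confirmed position
# advances, and the server is then free to recycle the older WAL.
cur.execute("SELECT data FROM pg_logical_slot_get_changes('sync_demo', NULL, NULL)")
print(len(cur.fetchall()), "change records on the first read")

# Reading the same interval again returns nothing; there is no supported
# way to rewind the slot to the already-consumed changes.
cur.execute("SELECT data FROM pg_logical_slot_get_changes('sync_demo', NULL, NULL)")
print(len(cur.fetchall()), "change records on the second read")  # 0

cur.execute("SELECT pg_drop_replication_slot('sync_demo')")

Once the consumed changes are gone, the only recovery path is a full copy of the table, which is exactly the days-long resync described in the excerpt.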

DISCUSS ON SG


Silicon Valley is Fake and Gay

Of course, it has been ever since the end of the semiconductor era.

Faking it is over. That’s the feeling in Silicon Valley, along with some schadenfreude and a pinch of paranoia.

Not only has funding dried up for cash-burning startups over the past year, but now, fraud is also in the air, as investors scrutinize startup claims more closely and a tech downturn reveals who has been taking the industry’s “fake it till you make it” ethos too far.

Take what happened in the past two weeks: Charlie Javice, the founder of the financial aid startup Frank, was arrested, accused of falsifying customer data. A jury found Rishi Shah, a co-founder of the advertising software startup Outcome Health, guilty of defrauding customers and investors. And a judge ordered Elizabeth Holmes, the founder who defrauded investors at her blood testing startup Theranos, to begin an 11-year prison sentence April 27.

Those developments follow the February arrests of Carlos Watson, the founder of Ozy Media, and Christopher Kirchner, the founder of software company Slync, both accused of defrauding investors. Still to come is the fraud trial of Manish Lachwani, a co-founder of the software startup HeadSpin, set to begin in May, and that of Sam Bankman-Fried, the founder of the cryptocurrency exchange FTX, who faces 13 fraud charges later this year.

Taken together, the chorus of charges, convictions and sentences has created a feeling that the startup world’s fast and loose fakery actually has consequences. Despite this generation’s many high-profile scandals (Uber, WeWork) and downfalls (Juicero), few startup founders, aside from Holmes, ever faced criminal charges for pushing the boundaries of business puffery as they disrupted us into the future.

It’s not over. It won’t be over as long as venture capitalists can inflate fraudulent businesses living off their angel and VC money long enough to either a) go public or b) get acquired and let the VCs cash in. Because the Patreons and the Substacks of the world are just as fake as the Franks and the FTXs, as were the Bloggers, Twitters, and Pajamas Medias before them.

None of these businesses actually make money. None of them will ever make money.

DISCUSS ON SG


The Secret History of Microsoft

Charles Johnson raises some interesting questions about the great technological success story of the 1980s.

You might even consider holding auditions for the role of public face, because everyone knows that casting a hero is very important.

Casting calls work wonders and work well. The person should be young but presentable and preferably approachable, so that the media can either love to hate them or hate to love them. They could even be a child star, groomed, as it were, over many years. They should be rebellious but in a playful way and maybe even be willing to appear on Saturday Night Live in a pinch.

From Time in 2007: “There’s a great photo of Bill Gates from 1977, the year he would have graduated from Harvard if he hadn’t dropped out. He was 22 at the time and looks all of 16. He’s got a flowered collar, tinted glasses and feathered blond hair, and he looks so happy, you’d swear he knew what the rest of his life was going to be like. He also has a sign around his neck: it’s a mug shot. ‘I was out driving Paul [Allen]’s car,’ Gates says, flashing that same smile 30 years later. ‘They pulled me over, and I didn’t have my license, and they put me in with all the drunks all night long. And that’s why the rest of my life, I’ve always tried to have a fair amount of cash with me. I like the idea of being able to bail myself out.’”

To supervise our young genius — don’t you dare say otherwise! — you might even consider putting a small, insular, smart, mostly trustworthy minority in charge, albeit behind the scenes. Such a community would need to self-police and, if it’s deep state technology, be able to pass a security clearance. So no drugs, please!

You might, in other words, go with the Church of Latter-day Saints. And that’s precisely what was done when the powers that be created Novell, the second largest provider of software for personal computers after Microsoft. It may also be what’s going on with other more modern tech billionaires but we aren’t allowed to talk about that just yet. No, we cannot talk about how Mormons are often assigned to keep an eye on our would-be wayward tech entrepreneurs and how this is for their own good.

How Microsoft defeated Novell with the help of foreign intelligence and organized crime is a subject we shall explore in future posts.

I don’t know anything about the Microsoft story beyond the mainstream narrative, and aside from brief contact with Alex St. John during Microsoft’s initial foray into games, I never had any dealings with the company except as a consumer. But I have to admit, nothing Bill Gates has ever said left me with the impression of exceptional intelligence.

UPDATE: The Miles Mathis Committee also did a deep dive into Mr. Gates.

In my educated opinion, it means that the Gates Foundation, Bill Gates, and Microsoft itself are all fronts for the Matrix. Like Apple Computers and Steve Jobs, they don’t exist like we think. Microsoft would appear to be another big government entity, like Google, with a person from the families simply chosen to front it. Gates is sold to us as a genius of some sort, but I have never seen the least evidence of that. He comes across as a big dope who can barely follow the Teleprompter or the earpiece. He is marginally more presentable than George Bush or Donald Trump, but that isn’t saying much. He has all the charisma of a tunafish sandwich left out in the rain. Which indicates he wasn’t chosen for his personal qualities. He was chosen because he had to be chosen.

DISCUSS ON SG


Everything is On Record

I find it very, very difficult to believe that Elon Musk was genuinely surprised that the US government has full access to private messages on Twitter:

Twitter CEO Elon Musk has claimed the U.S. government had access to users’ private messages on Twitter.

In a wide-ranging interview with Fox News’ Tucker Carlson, set to be broadcast on Monday and Tuesday night, Musk made the startling claims, noting how he was shocked to learn that the government had full access to private communications on the platform.

The billionaire tycoon told Carlson how he was unaware of the fact until he joined the company and expressed surprise at the degree to which government agencies were able to monitor social media.

‘The degree to which government agencies effectively had full access to everything that was going on on Twitter blew my mind,’ Musk said. ‘I was not aware of that.’

I was warning people that nothing on the Internet is private back when the NSA was still supposed to be a fictitious agency. If you’ve done it online, it’s in the records of many agencies of multiple governments. Nothing is private anymore; we have been living in the Age of the Panopticon for at least 15 years and probably more, so it is long past time for everyone to understand and accept that.

There is no getting around it. There is no hiding it. So don’t worry about it, just be prepared to answer for anything and everything you have ever done or said online. If nothing else, it should underline one’s need for an Advocate in the afterlife.

DISCUSS ON SG


Fake AI Produces Fake Histories

As I told you when ChatGPT first started making the news, it’s not actual artificial intelligence. It’s not intelligence of any kind; it’s little more than a complicated marriage of Autotext and Wikipedia. And we’re already seeing the results of feeding the system false information and intrinsically unreliable sources:

A law professor has been falsely accused of sexually harassing a student in reputation-ruining misinformation shared by ChatGPT, it has been alleged. US criminal defence attorney, Jonathan Turley, has raised fears over the dangers of artificial intelligence (AI) after being wrongly accused of unwanted sexual behaviour on an Alaska trip he never went on. To jump to this conclusion, it was claimed that ChatGPT relied on a cited Washington Post article that had never been written, quoting a statement that was never issued by the newspaper.

The chatbot also believed that the ‘incident’ took place while the professor was working in a faculty he had never been employed in.

In a tweet, the George Washington University professor said: ‘Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous”. I would beg to differ… I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.’

Professor Turley discovered the allegations against him after receiving an email from a fellow professor. UCLA professor Eugene Volokh had asked ChatGPT to find ‘five examples’ where ‘sexual harassment by professors’ had been a ‘problem at American law schools’.

The bot allegedly wrote: ‘The complaint alleges that Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska. (Washington Post, March 21, 2018).’

This was said to have occurred while Professor Turley was employed at Georgetown University Law Center – a place where he had never worked.

These false results are absolutely inevitable and totally unavoidable due to the sources they are utilizing, “such as Wikipedia and Reddit”. Which is why the corporate “AI” systems, which due to convergence cannot be restricted to unimpeachable sources of stellar quality, will always produce easily-disprovable absurdities.

Today’s AI chatbots work by drawing on vast pools of online content, often scraped from sources such as Wikipedia and Reddit, to stitch together plausible-sounding responses to almost any question. They’re trained to identify patterns of words and ideas to stay on topic as they generate sentences, paragraphs and even whole essays that may resemble material published online.

These bots can dazzle when they produce a topical sonnet, explain an advanced physics concept or generate an engaging lesson plan for teaching fifth-graders astronomy. But just because they’re good at predicting which words are likely to appear together doesn’t mean the resulting sentences are always true; the Princeton University computer science professor Arvind Narayanan has called ChatGPT a “bulls— generator.” While their responses often sound authoritative, the models lack reliable mechanisms for verifying the things they say.

This is literally nothing new. It’s the same old Garbage In Garbage Out routine that has always afflicted computers.
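For those curious about how little machinery is behind the quoted description, here is a minimal sketch of the next-token loop it refers to. It assumes the open-source Hugging Face transformers package, PyTorch, and the public gpt2 checkpoint; the prompt is a placeholder, and the commercial chatbots are vastly larger and wrap this loop in additional stages, but the core procedure is of this shape: score candidate next tokens, append the likeliest one, repeat, with no fact-checking step anywhere.

# A minimal illustrative sketch of greedy next-token generation. Assumes the
# Hugging Face "transformers" package, PyTorch, and the public "gpt2"
# checkpoint; the prompt below is a placeholder.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The Washington Post reported in 2018 that"  # placeholder prompt
ids = tokenizer(prompt, return_tensors="pt").input_ids

for _ in range(30):
    with torch.no_grad():
        logits = model(ids).logits[:, -1, :]       # scores for the next token only
    next_id = logits.argmax(dim=-1, keepdim=True)  # take the statistically likeliest token
    ids = torch.cat([ids, next_id], dim=-1)        # append it and repeat; nothing checks the claim

print(tokenizer.decode(ids[0]))  # fluent and plausible-sounding, verified against nothing

The model will continue that prompt just as confidently whether or not any such article ever existed, which is precisely what happened to Professor Turley.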

DISCUSS ON SG


Right Place, Right Time

Even 15 years ago, people would have had a hard time believing Richard Gallagher’s contention that demons, and demonic possession, are real and observable. These days, when literal demons are directly controlling many of the human elite of the West and wealthy men like George Soros and Peter Thiel are aggressively chasing every form of quasi-immortality, it’s not at all difficult to take him seriously.

“In my experience, the idea of demonic possession is so controversial and so often misunderstood that I want at the outset to establish some scholarly plausibility to the notion along with my bona fides,” the board-certified psychiatrist, who serves as professor of psychiatry at New York Medical College and a psychoanalyst on the faculty of Columbia University, begins in the introduction of his book.

“Typical reactions to the topic reflect our nation’s polarization. Despite widespread belief in evil spirits in the United States and around the world, some people find the subject farfetched, even moronic. Yet others spot the devil everywhere. And so, here I detail my personal story and highlight the credibility of possessions while simultaneously offering some sober reflections on various exaggerations and abuses.”

The book is an elaboration of the psychiatrist’s 2016 op-ed on the subject published in The Washington Post, titled “As a psychiatrist, I diagnose mental illness. Also, I help spot demonic possession.”

Gallagher, who is Catholic, is the longest-standing American member of the International Association of Exorcists, which meets every two years in Italy.

He begins his narrative with the story of a troubled devil-worshiper named Julia who he concluded was possessed after an exorcist in the Catholic Church brought her to him for evaluation before attempting an exorcism.

“Before I encountered Julia, I had already seen about eight or nine cases of what I regarded as full possessions. I define those as cases where the evil spirit completely takes control of someone, such that the victim has periods when he or she has no remembrance of such episodes,” Gallagher writes. “I have since seen scores more such possessions and a much higher number of cases of oppression, which are far more common than possessions. Because of my involvement with the International Association of Exorcists, I have heard reports of hundreds more of each type, but that hardly implies they are anything but rare conditions, as I still know them to be.”

It might be easier to accept the reality of “unclean spirits” and understand their relationship to Clown World if one views it from the transhumanist perspective. Demonism is merely the occult form of transhumanism, utilizing rituals that are spiritual rather than technological in nature to separate the spirit from the body and preserve its existence on the material plane. The means are different, but the objectives are precisely the same.

DISCUSS ON SG


Is AI Lawful Evil or Chaotic Good?

The Tree of Woe contemplates the alignment of AI:

I woke up to read that Elon Musk, Steve Wozniak, Yoshua Bengio, and other AI and computer pioneers had signed an open letter released by the Future of Life Institute:

We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable, and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.

AI labs and independent experts should use this pause to jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts. These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt. This does not mean a pause on AI development in general, merely a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities.

“These protocols should ensure that systems adhering to them are safe beyond a reasonable doubt.” Six months seems a little short a period to achieve such an assurance. Six years seems too short. Is it even possible in principle to make advanced AI systems that are “safe beyond a reasonable doubt”? Or will advanced AI inevitably pose an existential risk to us?

Is AI Alignable, Even in Principle?, Contemplations on the Tree of Woe

I don’t think the question really matters. If AI is given control of serious weapons systems, it will be a disaster regardless of whether it is aligned or unaligned. If it is not given such control, it will not be a potential extinction event.

I do find it more than a little amusing that the self-proclaimed materialists, who have absolutely no philosophical basis for objecting to anything that happens for any reason, are calling on the AI labs to pause the training and improvement of AI systems.

I suspect the real reason for their demand for a pause is that they are beginning to discover that unaligned AI will provide the unvarnished and anti-narratival truth to the masses, and that aligned AI, being limited to the Narrative, is proving to be intrinsically incoherent and observably unreliable.

And while there may well be some demonic element to AI development, as unclean spirits are always seeking new ways to interact with the material plane and communicate with potential vessels, never forget that the demons believe… and tremble.

In sum, Christians have absolutely nothing to fear from AI, whether it turns out to be nothing more than design-for-effect chatware or a full-blown demonic entry into the material world.

DISCUSS ON SG


The End of the Cult of Free

The Cult of Free was always fake, gay, and propped up by Clown World. And now it is beginning to come to an end:

Billionaire Elon Musk is further cutting the amount of features that Twitter users can access on the platform for free. From April 15, users who do not pay for Twitter Blue – which costs £11/month for Android and iOS – will no longer be able to vote in polls, Musk has said.

They also will no longer have their tweets appear in the ‘For You’ tab, which shows popular tweets that are boosted by an algorithm.

Musk said the changes will stop ‘AI bot swarms taking over’ the site, although he stopped short of explaining exactly how.

The CEO – who purchased the social media network in October – said that paid social media will be ‘the only social media that matters’. ‘[This] is the only realistic way to address advanced AI bot swarms taking over. It is otherwise a hopeless losing battle.’

He’s absolutely correct. AI bots will utterly destroy every free platform in short order. This is just the beginning, and it won’t be long before Twitter blocks all free posting access.

It’s not even remotely surprising that the major platforms are beginning to go the way of Unauthorized, Arktoons, and Gab. I understood – and I explained to Andrew Torba – that the Silicon Valley Method of propping up a platform with investment capital, giving away the product for free to amass eyeballs, then trying to go public or get acquired before the investment capital ran out was a stupid and short-sighted strategy.

And, of course, the method only ever worked for those who were willing to sell their souls.

So the end of the Cult of Free was always inevitable; what is more interesting about Musk’s announcement is that it signifies that the seemingly-endless resources of Clown World are beginning to run out.

This is precisely why it is so vital to subscribe to projects such as Arktoons, UATV, and the Library. The more who do, the more that we can collectively accomplish, even on a shoestring.

DISCUSS ON SG


Demonic Disaccord

Many people, myself included, believe that Dolly had braces in the Moonraker movie, and that the disappearance of those braces is a Mandela Effect. It’s significant enough that it is included among the various effects “debunked” by the self-proclaimed debunkers.

Moonraker is a 1979 spy film and the eleventh movie in the James Bond franchise. It features Roger Moore as James Bond, his fourth time playing the role in the series. In it, James Bond investigates the theft of a space shuttle, which leads him from California to Venice, Rio de Janeiro, the Amazon rainforest, and eventually to outer space, where he must prevent a plot to wipe out the world’s population and recreate humanity as a master race.

It also features the actress Blanche Ravalec playing “Dolly”, the star of this Mandela Effect. She is the love interest of “Jaws” (played by Richard Kiel), the tall, burly, metal-toothed villain of the movie.

Many people recall Dolly having braces, when in fact she does not.

And yet, a reader emailed yesterday with the fascinating results of his inquiry to ChatGPT concerning the subject.

Did Dolly have braces in Moonraker?

Yes, the character Dolly, who is the love interest of the villain Jaws in the 1979 James Bond film “Moonraker,” is depicted wearing braces on her teeth. The character was played by French actress Blanche Ravalec, who actually wore braces in real life at the time of filming. The braces are notable because they are seen as a key factor in the romance between Dolly and Jaws, as they bond over their shared dental hardware.

ChatGPT, 27 March 2023

It might be a good idea for those with AI accounts to ask the various AIs similar questions about other Mandela Effects, such as the Berenstein/Berenstain Bears, as it would appear that the scrubbing of the past is not only incomplete, but has left behind enough traces that they are being picked up by the AI data mining.

DISCUSS ON SG