Jordan Peterson is a sanctimonious crybaby

And you can absolutely quote me on that, in whatever voice you like. He’s such a ridiculous whiner as well as being a profoundly non-philosophical fraud.

This week, however, a company called notjordanpeterson.com put an AI engine online that allows anyone to type anything and have it reproduced in my voice. The site is hard to access or use at the moment, presumably because it is attracting more traffic than its servers can handle. [NOTE: As of August 23, this website posted the following announcement: In light of Dr. Peterson’s response to the technology demonstrated by this site, which you can read here, and out of respect for Dr. Peterson, the functionality of the site will be disabled for the time being.]

A variety of sites that pass themselves off as news portals—and sometimes are—have either reported this story straight (Sputnik News) or had a field day (Gizmodo) having me read, for example, the SCUM Manifesto (hypothetically an acronym for Society for Cutting Up Men), a radical feminist rant by Valerie Solanas published in 1967. Solanas, by the way, later shot the artist Andy Warhol, an act driven by her developing paranoia. He was seriously wounded, requiring a surgical corset to hold his organs in place for the rest of his life. TNW takes a middle path, reporting the facts of the situation with little bias but using the system to have me voice very vulgar phrases.

Some of you might know—and those of you who don’t should—that similar technology has also been developed for video. This was reported, for example, by the BBC as far back as July of 2017, which broadcast a speech delivered by an AI Obama that was essentially indistinguishable from the real thing. Similar technology has been used, equally notoriously, to superimpose the faces of famous actresses on porn stars while they perform their various sexual exploits (you can find this story covered, for example, on The Verge, Jan 24, 2018). Movies have also been reshot so that the main actor is transformed from someone unknown to someone with real box office draw. This has happened, for example, to Nicolas Cage, primarily on a YouTube channel known as Derpfakes, a play on the phrase “Deep Fakes,” the name by which video recordings fraudulently created by AI have come to be known. More recently Ctrl Shift Face, a YouTube channel, posted a video showing Bill Hader transforming very subtly into Tom Cruise as he performs an impression of the latter on David Letterman’s show. It’s picked up four million views in a week. It’s important to note, by the way, that this ability is available to amateurs. I don’t mean people with no tech knowledge whatsoever, obviously—more that the electronic machinery that makes such things possible will soon be within the reach of everyone.

It’s hard to imagine a technology with more power to disrupt. I’m already in the position (as many of you soon will be as well) where anyone can produce a believable audio and perhaps video of me saying absolutely anything they want me to say. How can that possibly be fought? More to the point: how are we going to trust anything electronically mediated in the very near future (say, during the next Presidential election)? We’re already concerned, rightly or wrongly, with “fake news”—and that’s only news that has been slanted, arguably, by the bias of the reporter or editor or news organization. What do we do when “fake news” is just as real as “real news”? What do we do when anyone can imitate anyone else, for any reason that suits them?

And what of the legality of this process? It seems to me that active and aware lawmakers would take immediate steps to make the unauthorized production of AI Deep Fakes a felony offense, at least in the case where the fake is being used to defame, damage or deceive. And it seems to me that we should perhaps throw caution to the wind, and make this an exceptionally wide-ranging law. We need to seriously consider the idea that someone’s voice is an integral part of their identity, of their reality, of their person—and that stealing that voice is a genuinely criminal act, regardless (perhaps) of intent. What’s the alternative? Are we entering a future where the only credible source of information will be direct personal contact? What’s that going to do to mass media, of all types? Why should we not assume that the noise-to-signal ratio will creep so high that all political and economic information disseminated broadly will be rendered completely untrustworthy?

I can tell you from personal experience, for what that’s worth, that it is far from comforting to discover an entire website devoted to allowing whoever is inspired to do so to produce audio clips imitating my voice delivering whatever content the user chooses—for serious, comic or malevolent purposes. I can’t imagine what the world will be like when we will truly be unable to distinguish the real from the unreal, or exercise any control whatsoever over what videos reveal about behaviors we never engaged in, or audio avatars broadcasting any opinion at all about anything at all. I see no defense, and a tremendously expanded opportunity for unscrupulous troublemakers to warp our personal and collective reality in any manner they see fit.

Wake up. The sanctity of your voice, and your image, is at serious risk. It’s hard to imagine a more serious challenge to the sense of shared, reliable reality that keeps us linked together in relative peace. The Deep Fake artists need to be stopped, using whatever legal means are necessary, as soon as possible.

This guy doesn’t even believe in the Divine, so to what “sanctity of your voice and your image” is he referring? He doesn’t even believe in group identity or in taking pride in one’s direct ancestors; he’s the most famous advocate of the individual über alles since Ayn Rand, so what is this “sense of shared, reliable reality that keeps us linked together” to which he’s suddenly appealing?

If you didn’t grasp that Jordan Peterson is an intellectual fraud before, his call to outlaw synthetic speech and make it a felony offense should more than suffice.

Personally, I love synthetic speech. I’ve been wanting to design games around it since 1996.