As I told you when ChatGPT first started making the news, it’s not actual artificial intelligence. It’s not intelligence of any kind; it’s little more than a complicated marriage of Autotext and Wikipedia. And we’re already seeing the results of feeding the system false information and intrinsically unreliable sources:
A law professor has been falsely accused of sexually harassing a student in reputation-ruining misinformation shared by ChatGPT, it has been alleged. US criminal defence attorney, Jonathan Turley, has raised fears over the dangers of artificial intelligence (AI) after being wrongly accused of unwanted sexual behaviour on an Alaska trip he never went on. To jump to this conclusion, it was claimed that ChatGPT relied on a cited Washington Post article that had never been written, quoting a statement that was never issued by the newspaper.
The chatbot also believed that the ‘incident’ took place while the professor was working in a faculty he had never been employed in.
In a tweet, the George Washington University professor said: ‘Yesterday, President Joe Biden declared that “it remains to be seen” whether Artificial Intelligence (AI) is “dangerous”. I would beg to differ… I learned that ChatGPT falsely reported on a claim of sexual harassment that was never made against me on a trip that never occurred while I was on a faculty where I never taught.’
Professor Turley discovered the allegations against him after receiving an email from a fellow professor. UCLA professor Eugene Volokh had asked ChatGPT to find ‘five examples’ where ‘sexual harassment by professors’ had been a ‘problem at American law schools’.
The bot allegedly wrote: ‘The complaint alleges that Turley made “sexually suggestive comments” and “attempted to touch her in a sexual manner” during a law school-sponsored trip to Alaska. (Washington Post, March 21, 2018).’
This was said to have occurred while Professor Turley was employed at Georgetown University Law Center – a place where he had never worked.
These false results are absolutely inevitable and totally unavoidable given the sources being utilized, “such as Wikipedia and Reddit”. Corporate “AI” systems that are not restricted to unimpeachable sources of stellar quality will always converge on easily-disprovable absurdities.
Today’s AI chatbots work by drawing on vast pools of online content, often scraped from sources such as Wikipedia and Reddit, to stitch together plausible-sounding responses to almost any question. They’re trained to identify patterns of words and ideas to stay on topic as they generate sentences, paragraphs and even whole essays that may resemble material published online.
These bots can dazzle when they produce a topical sonnet, explain an advanced physics concept or generate an engaging lesson plan for teaching fifth-graders astronomy. But just because they’re good at predicting which words are likely to appear together doesn’t mean the resulting sentences are always true; the Princeton University computer science professor Arvind Narayanan has called ChatGPT a “bulls— generator.” While their responses often sound authoritative, the models lack reliable mechanisms for verifying the things they say.
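The mechanism described above can be illustrated with a toy sketch (my own illustration, not anything from the article or from ChatGPT itself): a bigram model that learns which word tends to follow which in its training text, then generates “plausible” sentences by sampling those patterns. The corpus and function names here are invented for the example. Note that the model has no concept of truth, only of which words co-occur:

```python
import random
from collections import defaultdict

def train_bigrams(text):
    """Map each word to the list of words observed to follow it."""
    words = text.split()
    follows = defaultdict(list)
    for a, b in zip(words, words[1:]):
        follows[a].append(b)
    return follows

def generate(follows, start, length=10, seed=0):
    """Walk the bigram table, picking a statistically plausible
    next word at each step -- with no check that the result is true."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)

# A tiny made-up corpus of individually true statements.
corpus = (
    "the professor taught law at the university "
    "the professor wrote an article in the newspaper "
    "the newspaper published a report about the professor"
)
table = train_bigrams(corpus)
print(generate(table, "the"))
```

Because each word is chosen only for how often it follows the previous one, the output can recombine true fragments into a fluent claim that appears nowhere in the source text, which is the same failure mode, at vastly greater scale, as attributing a fabricated quote to a real newspaper.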
This is literally nothing new. It’s the same old Garbage In, Garbage Out routine that has always afflicted computers.