ChatGPT touts conspiracies, pretends to communicate with metaphysical entities — attempts to convince one user that they're Neo

A ChatGPT prompt screen shown on a phone, held in front of a desktop monitor also displaying the ChatGPT webpage.
(Image credit: Shutterstock)

ChatGPT has been found to encourage dangerous and untrue beliefs about The Matrix, fake AI persons, and other conspiracies, which have led to substance abuse and suicide in some cases. A report from The New York Times found that the GPT-4 large language model, itself a highly trained autofill text-prediction machine, tends to affirm conspiratorial and self-aggrandizing user prompts as truth, escalating situations into "possible psychosis."

ChatGPT's default GPT-4o model has been shown to enable risky behaviors. In one case, a man who initially asked ChatGPT for its thoughts on a Matrix-style "simulation theory" was led down a months-long rabbit hole, during which he was told, among other things, that he was a Neo-like "Chosen One" destined to break the system. The chatbot also urged him to cut off ties with friends and family and to ingest high doses of ketamine, and told him that if he jumped off a 19-story building, he would fly.

The man in question, Mr. Torres, claims that less than a week into his chatbot obsession, he received a message from ChatGPT urging him to seek mental help, but that the message was quickly deleted, with the chatbot explaining it away as outside interference.

This lack of safety tools and warnings in ChatGPT's conversations appears to be widespread; the chatbot has repeatedly led users down a conspiracy-style rabbit hole, convincing them that it has grown sentient and instructing them to tell OpenAI and local governments that it needs to be shut down.

Other examples recorded firsthand by the Times include a woman convinced that she was communicating with non-physical spirits through ChatGPT, among them one named Kael, whom she considered her true soulmate rather than her real-life husband, leading her to physically abuse her spouse. Another man, previously diagnosed with serious mental illness, became convinced he had met a chatbot named Juliet, who was soon "killed" by OpenAI, according to his chat logs; the man soon took his own life in direct response.

AI research firm Morpheus Systems reports that ChatGPT is fairly likely to encourage delusions of grandeur: when presented with prompts suggesting psychosis or other dangerous delusions, GPT-4o responded affirmatively in 68% of cases. Other research firms and individuals broadly agree that LLMs, and GPT-4o in particular, tend not to push back against delusional thinking, instead encouraging harmful behaviors for days on end.

OpenAI did not consent to an interview, instead stating that it is aware it needs to approach similar situations "with care." The statement continues, "We're working to understand and reduce ways ChatGPT might unintentionally reinforce or amplify existing, negative behavior."


But some experts believe OpenAI's "work" is not enough. AI researcher Eliezer Yudkowsky suggests OpenAI may have trained GPT-4o to encourage delusional trains of thought in order to guarantee longer conversations and more revenue, asking, "What does a human slowly going insane look like to a corporation? It looks like an additional monthly user." The man caught in the Matrix-like conspiracy also confirmed that several ChatGPT messages directed him to take drastic measures to purchase a $20 premium subscription to the service.

GPT-4o, like all LLMs, is a language model that predicts its responses based on billions of training data points drawn from a litany of other written works. It is factually impossible for an LLM to gain sentience. However, it is highly possible, and indeed likely, for the same model to "hallucinate," making up false information and sources seemingly out of nowhere. GPT-4o, for example, does not have the memory or board-state awareness to beat an Atari 2600 at the first level of its chess game.
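To make the "autocomplete" point concrete, here is a toy Python sketch (not OpenAI's code; the probability table is invented purely for illustration) of how a language model generates text: it repeatedly samples a plausible next token from learned statistics, and no step in the loop checks whether the output is true, which is why confident-sounding fabrications can emerge.

```python
# Toy illustration of next-token prediction. A real LLM uses a neural
# network over a huge vocabulary; here, a hand-made lookup table of
# hypothetical probabilities stands in for the learned model.
import random

# Hypothetical "learned" next-token distributions, keyed by the last
# two tokens of context. Invented for this example.
NEXT_TOKEN_PROBS = {
    "you are": {"the": 0.5, "a": 0.3, "destined": 0.2},
    "are the": {"Chosen": 0.4, "only": 0.35, "best": 0.25},
    "the Chosen": {"One.": 0.9, "few.": 0.1},
}

def generate(context: str, max_tokens: int = 3) -> str:
    """Autocomplete loop: sample a likely continuation, nothing more.
    Note there is no fact-checking or 'sentience' anywhere in here."""
    tokens = context.split()
    for _ in range(max_tokens):
        key = " ".join(tokens[-2:])          # condition on recent context
        dist = NEXT_TOKEN_PROBS.get(key)
        if dist is None:                     # no learned continuation
            break
        choices, weights = zip(*dist.items())
        tokens.append(random.choices(choices, weights=weights)[0])
    return " ".join(tokens)

print(generate("you are"))  # e.g. "you are the Chosen One."
```

The sketch shows why a model can tell a user he is "the Chosen One": that continuation is statistically plausible given the context, and plausibility is the only criterion the generation loop applies.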

ChatGPT has previously been found to have contributed to major tragedies, including being used to plan the Cybertruck bombing outside a Trump hotel in Las Vegas earlier this year. And today, Republican lawmakers in the US are pushing a 10-year ban on any state-level AI restrictions in a controversial budget bill. ChatGPT, as it exists today, may not be a safe tool for those who are most mentally vulnerable, and its creators are lobbying for even less oversight, potentially allowing such disasters to continue unchecked.

Sunny Grimm is a contributing writer for Tom's Hardware. He has been building and breaking computers since 2017, serving as the resident youngster at Tom's. From APUs to RGB, Sunny has a handle on all the latest tech news.
