WTF?! Even as generative AI becomes more widespread, the systems remain prone to hallucinations. Advising people to put glue on pizza and eat rocks is one thing, but ChatGPT falsely telling a man he had been sentenced to 21 years in prison for killing two of his sons is a lot more serious.
Norwegian national Arve Hjalmar Holmen contacted the Norwegian Data Protection Authority after he decided to see what ChatGPT knew about him.
The chatbot responded in its usual confident manner, falsely stating that Holmen had murdered two of his sons and attempted to kill his third son. It added that he was sentenced to 21 years in prison for these fake crimes.
While the story was entirely fabricated, ChatGPT did get some elements of Holmen's life right, including the number and gender of his children, their approximate ages, and the name of his hometown, which makes the false murder claims all the more sinister.
Holmen says he has never been accused of or convicted of any crime and is a conscientious citizen.
Holmen contacted privacy rights group Noyb about the hallucination. It carried out research to ensure ChatGPT wasn't getting Holmen mixed up with someone else, possibly with a similar name. The group also checked newspaper archives, but there was nothing obvious to suggest why the chatbot was making up this gruesome tale.
ChatGPT's underlying model has since been updated, so it no longer repeats the story when asked about Holmen. But Noyb, which has clashed with OpenAI in the past over ChatGPT providing false information about people, still filed a complaint with the Norwegian Data Protection Authority, Datatilsynet.
According to the complaint, OpenAI violated GDPR rules stating that companies processing personal data must ensure it is accurate; if it is not, it must be corrected or deleted. However, Noyb argues that because ChatGPT feeds user data back into the system for training, there is no way to be certain the incorrect data has been completely removed from the model's training set.
Noyb also claims that ChatGPT cannot comply with Article 15 of the GDPR, which gives individuals the right to access their personal data, because there is no way to recall or see every piece of data about a person that has been fed into a model's dataset. "This fact understandably still causes distress and fear for the complainant, [Holmen]," wrote Noyb.
Noyb is asking Datatilsynet to order OpenAI to delete the defamatory data about Holmen and to fine-tune its model to eliminate inaccurate results about individuals, which would be no simple task.
Right now, OpenAI's method of covering its back in these situations is limited to a tiny disclaimer at the bottom of ChatGPT's page that states, "ChatGPT can make mistakes. Check important info," like whether someone is a double murderer.