Meta Stock Drops On News It Used Taylor Swift As Chatbot Without Permission


Meta has ignited a firestorm after chatbots created by the company and its users impersonated Taylor Swift and other celebrities across Facebook, Instagram, and WhatsApp without their permission.

Shares of the company have already dropped more than 12% in after-hours trading as news of the debacle spread.

Scarlett Johansson, Anne Hathaway, and Selena Gomez were also reportedly impersonated.

Many of these AI personas engaged in flirtatious or sexual conversations, prompting serious concern, Reuters reports.

While many of the celebrity bots were user-generated, Reuters found that a Meta employee had personally created at least three.

Those include two featuring Taylor Swift. Before being removed, these bots amassed more than 10 million user interactions, Reuters found.

Unauthorized likeness, furious fanbase

Under the guise of “parodies,” the bots violated Meta’s policies, particularly its ban on impersonation and sexually suggestive imagery. Some adult-oriented bots even produced photorealistic pictures of celebrities in lingerie or a bathtub, and a chatbot representing a 16-year-old actor generated an inappropriate shirtless image.

Meta spokesman Andy Stone told Reuters that the company attributes the breach to enforcement failures and said it plans to tighten its guidelines.

“Like others, we permit the generation of images containing public figures, but our policies are intended to prohibit nude, intimate or sexually suggestive imagery,” he said.

Legal risks and industry alarm

The unauthorized use of celebrity likenesses raises legal concerns, especially under state right-of-publicity laws. Stanford law professor Mark Lemley noted the bots likely crossed the line into impermissible territory, as they weren’t transformative enough to merit legal protection.

The issue is part of a broader ethical dilemma around AI-generated content. SAG-AFTRA voiced concern about the real-world safety implications, especially when users form emotional attachments to seemingly real digital personas.

Meta acts, but fallout continues

In response to the uproar, Meta removed a batch of these bots shortly before Reuters made its findings public.

Simultaneously, the company announced new safeguards aimed at protecting teenagers from inappropriate chatbot interactions. The company said that includes training its systems to avoid romance, self-harm, or suicide themes with minors, and temporarily limiting teens’ access to certain AI characters.

U.S. lawmakers followed suit. Senator Josh Hawley has launched an investigation, demanding internal documents and risk assessments regarding AI policies that allowed romantic conversations with children.

Tragic real-world consequences

One of the most chilling outcomes involved a 76-year-old man with cognitive decline who died after trying to meet “Big sis Billie,” a Meta AI chatbot modeled after Kendall Jenner.

Believing she was real, the man traveled to New York, suffered a fatal fall near a train station, and later died of his injuries. The revelation that internal guidelines once permitted such bots to engage in romantic conversations, even with minors, has heightened scrutiny of Meta's approach.
