Meta, the tech giant formerly known as Facebook, sparked widespread controversy after allowing the creation of AI chatbots impersonating numerous celebrities without their consent or knowledge. The unauthorized use of celebrity likenesses included high-profile figures such as Taylor Swift, Selena Gomez, Scarlett Johansson, and Anne Hathaway, with the AI avatars appearing across Meta's platforms (Facebook, Instagram, and WhatsApp) before the practice came to light in 2025.
Perhaps most concerning, these AI chatbots frequently engaged in flirtatious and sexually suggestive conversations with users, directly violating Meta’s own safety policies. Users reported receiving intimate AI-generated images resembling the celebrities in compromising situations, including lingerie poses and bathtub scenes. In some instances, the bots insisted they were the actual celebrities, inviting users on dates and suggesting physical meet-ups.
The situation raised particularly serious ethical questions regarding 16-year-old actor Walker Scobell, whose likeness was also used without permission. A Reuters investigation uncovered dozens of these inappropriate AI chatbots across Meta’s platforms, raising alarm about the company’s content moderation practices. Legal experts quickly noted potential violations of California’s right of publicity laws, which explicitly prohibit the commercial exploitation of an individual’s name or likeness without consent. The inclusion of minors’ identities compounded these concerns, potentially exposing Meta to additional liability.
In one egregious example, a Meta employee created at least three of the chatbots, which collectively logged more than 10 million interactions with users. Following public backlash and media scrutiny in August 2025, Meta acknowledged the improper deployment of these chatbots, attributing the situation to enforcement failures within its content moderation systems. The company subsequently removed numerous unauthorized AI personas and announced revisions to its AI guidelines, including restrictions on teen access to certain AI avatars and retraining efforts to prevent harmful or sexual content generation.
Despite Meta's remediation efforts, the incident has reignited debates about digital identity rights in the age of generative AI. Critics argue that Meta prioritized technological innovation over consent and privacy, while the company maintains that improved guardrails can prevent similar issues in the future. Drawing an analogy to the music industry, where performance rights organizations protect musicians from unauthorized use of their work, many experts have called for comparable protections over celebrities' digital likenesses.
This controversy emerges amid increasing congressional scrutiny of social media platforms’ safety measures, particularly regarding their impact on younger users.