OpenAI targets Sora 2 deepfakes after actor backlash

Hatice Baran/Pexels

OpenAI has moved to limit misuse of its Sora 2 video generation tool following pressure from actor Bryan Cranston and the performers’ union SAG-AFTRA, amid growing concern over the unauthorized use of actors’ likenesses in deepfake videos. Cranston, known for his roles in “Breaking Bad” and “Malcolm in the Middle,” has been a vocal advocate for performers’ rights in the digital age. On October 20, 2025, OpenAI announced new restrictions on Sora 2, and Cranston thanked the company for its responsiveness.

Background on Sora 2 and Deepfake Risks

Launched as OpenAI’s cutting-edge text-to-video model, Sora 2 can generate strikingly realistic video content. That capability has also raised serious ethical concerns, particularly over deepfakes: AI-generated videos that convincingly replicate the likenesses of public figures, including actors, without their consent. The entertainment industry fears that tools like Sora 2 could enable the unauthorized creation of celebrity deepfakes, leading to reputational damage and privacy violations.

Before the recent policy changes, OpenAI’s guidelines on AI-generated content did not specifically address the use of public figures’ likenesses, leaving room for exploitation and prompting calls for more stringent safeguards. Deepfakes have already been used to create non-consensual recreations of celebrities, underscoring the urgent need for protective measures. The new restrictions on Sora 2 mark a significant step toward addressing these concerns.

Bryan Cranston’s Advocacy Against AI Misuse

Bryan Cranston has been at the forefront of advocating against the misuse of AI technologies like Sora 2. His public statements have underscored the importance of protecting performers from unauthorized deepfakes. Cranston directly appealed to OpenAI, urging the company to implement measures that safeguard actors’ rights. His involvement with SAG-AFTRA has further amplified these concerns, as the union has been actively negotiating for stronger protections against AI exploitation in the entertainment industry.

Cranston’s frustration with unauthorized uses of his likeness in AI-generated videos has been palpable. He has emphasized the need for consent and compensation when using an actor’s image, arguing that the current landscape poses a threat to performers’ livelihoods and reputations. His advocacy has played a crucial role in bringing attention to the ethical dilemmas posed by AI technologies, ultimately influencing OpenAI’s decision to implement new restrictions on Sora 2.

SAG-AFTRA’s Role in Pressuring Tech Companies

SAG-AFTRA has been a key player in the campaign against AI exploitation, advocating for the rights of performers in the face of rapidly advancing technology. The union has called for explicit consent and fair compensation for the use of actors’ likenesses in AI-generated content. Their demands have been directed at companies like OpenAI, urging them to implement safeguards that protect performers from unauthorized deepfakes.

The collaboration between SAG-AFTRA and figures like Bryan Cranston has been instrumental in lobbying for change. Their efforts have coincided with broader Hollywood strikes over AI protections, highlighting the industry’s collective push for ethical standards in technology use. The union’s response to OpenAI’s October 20, 2025, announcement has been one of cautious optimism, viewing it as a partial victory in the ongoing battle to safeguard performers’ rights.

OpenAI’s Policy Changes and Implementation

OpenAI’s crackdown on Sora 2 deepfakes centers on new filters designed to block the generation of videos featuring real individuals’ likenesses without consent, a direct response to the advocacy of Cranston and SAG-AFTRA. The restrictions represent a significant shift in OpenAI’s approach to AI-generated content, aligning with industry demands for ethical standards.

Despite these advancements, enforcement challenges remain: subtle deepfakes are difficult to detect, and the restrictions will only be as effective as the filters’ ability to accurately identify unauthorized content. OpenAI’s actions may also set a precedent for other tech companies and shape future regulation of similar technologies.

Reactions and Future Implications

Following OpenAI’s announcement, Bryan Cranston expressed gratitude for the company’s efforts to address the deepfake issue. On October 21, 2025, he publicly thanked OpenAI, highlighting the value of collaborative solutions to the ethical challenges posed by AI. His acknowledgment underscores the importance of the entertainment industry and tech companies working together to protect performers’ rights.

SAG-AFTRA and other industry groups have also responded positively to the crackdown, viewing it as a step in the right direction. However, the effectiveness of the new restrictions will continue to be scrutinized as the debate over AI ethics evolves. The implications of OpenAI’s actions extend beyond the entertainment industry, potentially influencing broader discussions on AI regulation and the responsible use of technology.

As AI development continues to advance, the measures OpenAI has taken with Sora 2 may serve as a model for other companies navigating the complex landscape of AI ethics. Ongoing dialogue between stakeholders will be essential in ensuring that these technologies are developed and used in ways that respect individual rights and uphold ethical standards.