OpenAI's Sora 2 Launch Sparks Controversy Over Violent and Racist AI Videos

by: @dminMM

Researchers warn that hyper-realistic scenes could blur truth and accelerate misinformation

OpenAI’s newest AI video app, Sora 2, was meant to showcase the future of creative storytelling. Instead, its first week online turned into a case study of AI’s darker side.

Launched with much fanfare and a built-in social feed for sharing user-generated clips, Sora 2 quickly topped Apple’s App Store charts—despite being invite-only.

But within hours, the feed was flooded with disturbing and misleading content: lifelike depictions of bomb scares, mass shootings, war zones, and even racist or extremist imagery.

Videos of fabricated reporters, political figures, and copyrighted characters such as SpongeBob and Pikachu circulated widely—some shown promoting cryptocurrency scams or appearing in violent and offensive scenarios.

While OpenAI’s terms of service prohibit any content that “promotes violence or causes harm,” researchers say the flood of problematic videos proves those guardrails are largely ineffective.

‘ChatGPT for creativity’—but at what cost?

OpenAI CEO Sam Altman described Sora 2 as the company's "ChatGPT for creativity moment," saying the team had taken "great care" to avoid addictive or harmful use cases.

Safeguards were supposedly in place to block illegal, explicit, or misleading videos, and even to prevent likeness misuse—though many users say they're not holding up.

Reporters demonstrated how easily Sora could generate scenes of violence, fake news footage, and deepfake-style portrayals of real individuals.

Others found copyrighted characters used without permission in political or controversial contexts.


Copyright, chaos, and blurred lines

According to the Wall Street Journal, OpenAI recently informed talent agencies and studios they must opt out if they don’t want their IP replicated by the AI.

Critics argue this puts the burden on creators rather than the company generating the content.

OpenAI maintains that rights holders can file complaints via a “copyright disputes form.”

But the company admits there is no blanket opt-out—a move that has raised red flags across Hollywood and the creative industries.

Computational linguist Emily Bender warned that tools like Sora are "weakening and breaking relationships of trust" in how audiences perceive information.

The bigger picture

The controversy surrounding Sora 2 underscores the growing tension between AI-driven creativity and ethical responsibility.

While OpenAI positions the tool as an engine for human imagination, critics see it as yet another accelerant in the misinformation crisis.

For marketers, the implications are profound: as AI-generated visuals become harder to distinguish from reality, brand safety, authenticity, and trust will be increasingly at risk.

Sora’s early chaos is a warning shot—not just for OpenAI, but for the entire creative and media ecosystem rushing toward the next frontier of generative AI.
