Sora’s new video generator wants to let you deep-fake your friends—after you pinky-swear you have permission.
OpenAI’s latest policy requires every marketer who uploads a real employee’s photo into Sora to first check a box confirming they have secured consent from the person pictured. The platform then embeds C2PA provenance metadata and stamps the resulting promo clip with a moving watermark that includes the creator’s name, making the clip’s AI origin hard to conceal.
Image-to-video clips that feature real people run through stricter guardrails than the optional Sora Characters feature, and anyone who appears to be a minor gets extra moderation. If the employee later changes their mind, they can revoke permission through the Characters feature, instantly blocking future use of their likeness.
Teen accounts receive filtered feeds, no adult-initiated direct messages, and default scroll limits, while automated scanners backed by human reviewers pull down sexual content, terrorist propaganda, and self-harm material. Audio tracks are scanned both for policy breaches and for music that imitates living artists, and takedown requests are honored.
Users control publication timing, and every video, profile, DM, comment, or character can be reported or blocked. Content can be unpublished at any moment, giving small-business owners a reversible way to test personalized marketing without risking permanent embarrassment.
Source: OpenAI