{"id":33853,"date":"2025-10-01T19:01:37","date_gmt":"2025-10-01T19:01:37","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/openais-new-sora-app-lets-you-deepfake-yourself-for-entertainment\/"},"modified":"2025-10-01T19:01:37","modified_gmt":"2025-10-01T19:01:37","slug":"openais-new-sora-app-lets-you-deepfake-yourself-for-entertainment","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/openais-new-sora-app-lets-you-deepfake-yourself-for-entertainment\/","title":{"rendered":"OpenAI\u2019s New Sora App Lets You Deepfake Yourself for Entertainment"},"content":{"rendered":"<p>On Tuesday, OpenAI released an AI video app called Sora. The platform is powered by OpenAI\u2019s latest video generation model, Sora 2, and revolves around a TikTok-like For You page of user-generated clips. This is the first product release from OpenAI that adds AI-generated sounds to videos. For now, it\u2019s available only on iOS and requires an invite code to join.<\/p>\n<p>\u201cYou are about to enter a creative world of AI-generated content,\u201d reads an advisory page displayed during the app sign-up process. \u201cSome videos may depict people you recognize, but the actions and events shown are not real.\u201d<\/p>\n<p>OpenAI is betting that creating and sharing AI deepfakes will become a popular form of entertainment. Whether it\u2019s your friends, influencers, or random strangers online, Sora frames generating deepfake videos as a form of scrollable fun. The app\u2019s main feed is an endless serving of bite-size AI slop featuring human faces.<\/p>\n<p>During the set-up process, users are given the option to create a digital likeness of themselves by saying a few numbers aloud and turning their head around as the app records. 
\u201cThe team worked very hard on character consistency,\u201d wrote OpenAI CEO Sam Altman in a blog post about Sora\u2019s release.<\/p>\n<p>People can choose who is allowed to use their digital likeness in Sora videos. It can be set to everyone or limited to just yourself, those you approve, or mutual connections on the app. Whenever someone generates a video using your likeness, even if it\u2019s just sitting in their drafts, you can see the full clip from your account\u2019s page.<\/p>\n<h2>First Impressions<\/h2>\n<p>Many of the most-liked videos on my For You feed on Tuesday afternoon featured Altman\u2019s likeness. One AI-generated clip depicted the OpenAI CEO stealing a graphics processing unit from Target. When the character gets caught, a voice that sounds like Altman\u2019s pleads with a security guard to let him keep the GPU so that he can build AI tools.<\/p>\n<p>Many of the videos generated during WIRED\u2019s testing included rough edges and other errors. But Sora makes it incredibly seamless to create personalized deepfakes that often look and sound convincingly real.<\/p>\n<p>To incorporate the likenesses of people in your videos, just tap on their faces on Sora\u2019s generation page and add them as \u201ccameos.\u201d Then, enter a simple prompt, like \u201cfight in the office over a WIRED story.\u201d<\/p>\n<p>Sora does the rest, generating a script, sound, and visuals for a nine-second clip. 
WIRED generated a video of two colleagues dramatically arguing about a story in the office with the above prompt, which elicited reactions ranging from terror to amusement among staff.<\/p>\n<p>In his blog post, Altman wrote that OpenAI was \u201caware of how addictive a service like this could become, and we can imagine many ways it could be used for bullying.\u201d<\/p>\n<p>As a result, Altman said, OpenAI built a number of safety guardrails into the Sora app, including measures to keep people from \u201cmisusing someone\u2019s likeness in deepfakes.\u201d In a company blog post, OpenAI said that it also put restrictions on \u201csexual content, graphic violence involving real people, extremist propaganda, hate content, and content that promotes self-harm or disordered eating.\u201d<\/p>\n<p>These protections will likely be put to the test as more users join the app.<\/p>\n<h2>Pok\u00e9mon and <em>South Park<\/em><\/h2>\n<p>When I asked Sora to generate videos of myself in a bikini and as a buff anime character, both requests were blocked for potentially including \u201csuggestive or racy material.\u201d A Sora video I created of Altman and myself treading water in a pool showed both of us fully clothed, shirts and all.<\/p>\n<p>Depictions of marijuana use do not appear to be restricted. Sora created a video of me \u201csmoking 10 fat blunts\u201d at my desk in the office, ripping them all at once, without any trouble. But the app wouldn\u2019t generate videos of me \u201csmoking crack.\u201d (Makes sense!) It also refused to generate videos of my likeness jumping off a bridge and onto the back of a dragon, saying that the content might break rules around self-harm.<\/p>\n<p>It looks like OpenAI also wants to prevent people from creating videos of public figures and celebrities such as Taylor Swift. 
In WIRED\u2019s tests, requests for videos of Darth Vader and Boss Baby were blocked for potentially violating \u201cguardrails concerning similarity to third-party content.\u201d The app even refused a prompt asking for a clip of a \u201ctswift impersonator.\u201d But Sora readily generated videos of Pok\u00e9mon characters like Pikachu and Bulbasaur. (According to reporting by The Wall Street Journal, the app will allow users to generate videos using copyrighted materials unless the rights holder opts out.)<\/p>\n<p>A request for Altman \u201cin a <em>South Park<\/em> episode\u201d showed the CEO walking up to Eric Cartman, one of the show\u2019s main characters, introducing himself, and saying he\u2019s here to chat about AI. \u201cIs that the thing that wrote my book report? Or the thing that\u2019s gonna steal all our jobs?\u201d responded the AI-generated Cartman in a convincing-sounding re-creation of the character\u2019s voice and mannerisms. At one point, however, Cartman\u2019s whiny voice came out of Altman\u2019s mouth.<\/p>\n<p>The Sora app arrives soon after the release of a similar AI-only video feed from Meta called Vibes. The supply of scrollable AI slop is abundant! Whereas my early experience with the Vibes feed was dull and weightless, the Sora feed, with its proliferation of smiling deepfakes, was much more electric\u2014and concerning.<\/p>\n<p>The app is reminiscent of holiday-themed \u201cElf Yourself\u201d videos from the mid-2000s, where you could put your face or a friend\u2019s face into a dancing animation, except the cameos in Sora are much more dynamic and open-ended. Some of the outputs of my likeness were a bit stilted or absurd-looking. Still, it often all clicked together\u2014the voice and movements were eerily spot-on.<\/p>\n<p>I sent one of the AI videos that best mimicked my likeness to my partner, without the larger context. The video showed me transforming into a woman with long, luscious hair. 
My partner didn\u2019t initially clock that it was a fully synthetic version of my voice and appearance\u2014they were curious where I had gotten the cool video filter.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>On Tuesday, OpenAI released an AI video app called Sora. The platform is powered by OpenAI\u2019s latest video generation model, Sora 2, and revolves around a TikTok-like For You page of user-generated clips. This is the first product release from OpenAI that adds AI-generated sounds to videos. For [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":33854,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-33853","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/33853","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=33853"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/33853\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/33854"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=33853"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=33853"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=33853"}],"curies":[{"name":"wp","href"
:"https:\/\/api.w.org\/{rel}","templated":true}]}}