{"id":36382,"date":"2025-10-25T00:11:28","date_gmt":"2025-10-25T00:11:28","guid":{"rendered":"https:\/\/agooka.com\/news\/technologies\/sam-altman-ai-will-cause-strange-or-scary-moments\/"},"modified":"2025-10-25T00:11:28","modified_gmt":"2025-10-25T00:11:28","slug":"sam-altman-ai-will-cause-strange-or-scary-moments","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/technologies\/sam-altman-ai-will-cause-strange-or-scary-moments\/","title":{"rendered":"Sam Altman: AI will cause \u201cstrange or scary moments\u201d"},"content":{"rendered":"<p><img decoding=\"async\" src=\"https:\/\/dataconomy.com\/wp-content\/uploads\/2025\/10\/1150938.jpg\" alt=\"Sam Altman: AI will cause \u201cstrange or scary moments\u201d\" title=\"Sam Altman: AI will cause \u201cstrange or scary moments\u201d\"\/><\/p>\n<p>Sam Altman, CEO of OpenAI, stated on a recent podcast that he expects negative outcomes from artificial intelligence, including deepfakes, as his company\u2019s new video application, Sora 2, gains widespread use following its recent invitation-only launch.<\/p>\n<p>In an interview on the a16z podcast, produced by venture capital firm Andreessen Horowitz, Altman articulated his expectations for the technology his company develops. \u201cI expect some really bad stuff to happen because of the technology,\u201d he said, specifically highlighting the potential for \u201creally strange or scary moments.\u201d This warning from the head of the company responsible for ChatGPT comes as AI-powered generative tools become increasingly accessible and sophisticated. His comments provide context for the rapid deployment of powerful AI models into the public sphere and the accompanying societal risks he anticipates.<\/p>\n<p>The release of OpenAI\u2019s new video application, Sora 2, late last month demonstrated the speed at which such technology can achieve mainstream penetration. 
Although its initial launch was invitation-only, the application quickly ascended to the number one position on Apple\u2019s U.S. App Store. This rapid adoption illustrates both the strong public interest in advanced video-generation technology and how accessible it has become; such tools can create realistic-looking but entirely fabricated video content. The app\u2019s popularity underscores the immediate relevance of discussions surrounding the potential misuse of such tools.<\/p>\n<p>Shortly after the app\u2019s release, instances of its use to create deepfake videos began appearing on social media platforms. These videos featured public figures, including civil rights leader Martin Luther King Jr. and Altman himself. The fabricated content depicted these individuals engaged in various forms of criminal activity. In response to the circulation of these deepfakes, OpenAI blocked Sora users from generating videos featuring Martin Luther King Jr. This incident served as a direct and immediate example of the type of misuse that AI video generation tools can enable.<\/p>\n<p>Concerns about misuse extended beyond the creation of defamatory deepfakes of public figures. According to the Global Project Against Hate and Extremism, videos promoting Holocaust denial created with Sora 2 accumulated hundreds of thousands of likes on Instagram within days of the application\u2019s launch. The organization has pointed to OpenAI\u2019s usage policies as a contributing factor. It argues that the policies lack specific prohibitions against hate speech, a gap that has, in the organization\u2019s view, helped enable extremist content to proliferate on online platforms using the new tool.<\/p>\n<p>Altman provided a rationale for releasing powerful AI models to the public despite the evident risks. He argued that society needs a form of test drive to prepare for what is to come. 
\u201cVery soon the world is going to have to contend with incredible video models that can deepfake anyone or kind of show anything you want,\u201d he stated during the podcast interview. His approach is rooted in the belief that society and artificial intelligence must \u201cco-evolve.\u201d Instead of developing technology in isolation and then releasing a perfected version, he advocates for early and incremental exposure. Altman\u2019s theory is that this process allows communities and institutions to develop necessary social norms and technological guardrails before the tools become even more powerful and potentially more disruptive. He acknowledged the high stakes, including the potential erosion of trust in video evidence, which has historically served as a powerful record of truth.<\/p>\n<p>The OpenAI CEO\u2019s warnings extended beyond the immediate threat of fake videos to broader, systemic risks. He cautioned against a future where a significant portion of the population outsources decision-making to opaque algorithms that few people understand. \u201cI do still think there are going to be some really strange or scary moments,\u201d he said, emphasizing that the absence of a catastrophic AI-related event to date \u201cdoesn\u2019t mean it never will.\u201d Altman described a scenario where \u201cbillions of people talking to the same brain\u201d could lead to \u201cweird, societal-scale things.\u201d This could manifest as unexpected and rapid chain reactions, producing substantial shifts in public information, political landscapes, and the foundations of communal trust, all moving at a pace that outstrips any ability to control or mitigate them.<\/p>\n<p>Despite these acknowledgments of broad and consequential risks, Altman expressed opposition to widespread government regulation of the technology. \u201cMost regulation probably has a lot of downside,\u201d he commented. He did, however, voice support for a more targeted approach to safety. 
Altman specified that he is in favor of implementing \u201cvery careful safety testing\u201d for what he termed \u201cextremely superhuman\u201d AI models, suggesting a distinction between current AI and more advanced future systems. He concluded with a belief in a societal adaptation process, stating, \u201cI think we\u2019ll develop some guardrails around it as a society.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Sam Altman, CEO of OpenAI, stated on a recent podcast that he expects negative outcomes from artificial intelligence, including deepfakes, as his company\u2019s new video application, Sora 2, gains widespread use following its recent invitation-only launch. In an interview on the a16z podcast, produced by venture capital firm Andreessen Horowitz, Altman articulated his expectations for [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":36383,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[37],"tags":[],"class_list":{"0":"post-36382","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-technologies"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36382","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=36382"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/36382\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/36383"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp
-json\/wp\/v2\/media?parent=36382"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=36382"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=36382"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}