{"id":34082,"date":"2025-10-03T19:21:16","date_gmt":"2025-10-03T19:21:16","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/sam-altman-says-the-gpt-5-haters-got-it-all-wrong\/"},"modified":"2025-10-03T19:21:16","modified_gmt":"2025-10-03T19:21:16","slug":"sam-altman-says-the-gpt-5-haters-got-it-all-wrong","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/sam-altman-says-the-gpt-5-haters-got-it-all-wrong\/","title":{"rendered":"Sam Altman Says the GPT-5 Haters Got It All Wrong"},"content":{"rendered":"<p>OpenAI\u2019s August launch of its GPT-5 large language model was something of a disaster. There were glitches during the livestream, with the model generating charts with obviously inaccurate numbers. In a Reddit AMA with OpenAI employees, users complained that the new model wasn\u2019t friendly, and called for the company to restore the previous version. Most of all, critics griped that GPT-5 fell short of the stratospheric expectations that OpenAI has been juicing for years. Promised as a game changer, GPT-5 might have indeed played the game better. But it was still the same game.<\/p>\n<p>Skeptics seized on the moment to proclaim the end of the AI boom. Some even predicted the beginning of another AI Winter. \u201cGPT-5 was the most hyped AI system of all time,\u201d full-time bubble-popper Gary Marcus told me during his packed schedule of victory laps. \u201cIt was supposed to deliver two things, AGI and PhD-level cognition, and it didn&#039;t deliver either of those.\u201d What\u2019s more, he says, the seemingly lackluster new model is proof that OpenAI\u2019s ticket to AGI\u2014massively scaling up data and chips to make its systems exponentially smarter\u2014can no longer be punched. For once, Marcus\u2019 views were echoed by a sizable portion of the AI community. 
In the days following launch, GPT-5 was looking like AI\u2019s version of New Coke.<\/p>\n<p>Sam Altman isn\u2019t having it. A month after the launch he strolls into a conference room at the company\u2019s newish headquarters in San Francisco\u2019s Mission Bay neighborhood, eager to explain to me and my colleague Kylie Robison that GPT-5 is everything that he\u2019d been touting, and that all is well in his epic quest for AGI. \u201cThe vibes were kind of bad at launch,\u201d he admits. \u201cBut now they\u2019re great.\u201d Yes, <em>great<\/em>. It\u2019s true the criticism has died down. Indeed, the company\u2019s recent release of a mind-bending tool to generate impressive AI video slop has diverted the narrative from the disappointing GPT-5 debut. The message from Altman, though, is that naysayers are on the wrong side of history. The journey to AGI, he insists, is still on track.<\/p>\n<h2>Numbers Game<\/h2>\n<p>Critics might see GPT-5 as the waning end of an AI summer, but Altman and team argue that it cements AI technology as an indispensable tutor, a search-engine-killing information source, and, especially, a sophisticated collaborator for scientists and coders. Altman claims that users are beginning to see it his way. \u201cGPT-5 is the first time where people are, \u2018Holy fuck. It\u2019s doing this important piece of physics.\u2019 Or a biologist is saying, \u2018Wow, it just really helped me figure this thing out,\u2019\u201d he says. \u201cThere&#039;s something important happening that did not happen with any pre-GPT-5 model, which is the beginning of AI helping accelerate the rate of discovering new science.\u201d (OpenAI hasn\u2019t cited who those physicists or biologists are.)<\/p>\n<p>So why the tepid initial reception? Altman and his team have sussed out several reasons. 
One, they say, is that since GPT-4 hit the streets, the company delivered versions that were themselves transformational, particularly the sophisticated reasoning modes they added. \u201cThe jump from 4 to 5 was <em>bigger<\/em> than the jump from 3 to 4,\u201d Altman says. \u201cWe just had a lot of stuff along the way.\u201d OpenAI president Greg Brockman agrees: \u201cI&#039;m not shocked that many people had that [underwhelmed] reaction, because we&#039;ve been showing our hand.\u201d<\/p>\n<p>OpenAI also says that since GPT-5 is optimized for specialized uses like doing science or coding, everyday users are taking a while to appreciate its virtues. \u201cMost people are not physics researchers,\u201d Altman observes. As Mark Chen, OpenAI\u2019s head of research, explains it, unless you\u2019re a math whiz yourself, you won\u2019t care much that GPT-5 ranks in the top five of Math Olympians, whereas last year the system ranked in the top 200.<\/p>\n<p>As for the charge about how GPT-5 shows that scaling doesn\u2019t work, OpenAI says that comes from a misunderstanding. Unlike previous models, GPT-5 didn\u2019t get its major advances from a massively bigger dataset and tons more computation. The new model got its gains from reinforcement learning, a technique that relies on expert humans giving it feedback. Brockman says that OpenAI had developed its models to the point where they could produce their own data to power the reinforcement learning cycle. \u201cWhen the model is dumb, all you want to do is train a bigger version of it,\u201d he says. \u201cWhen the model is smart, you want to sample from it. You want to train on its own data.\u201d<\/p>\n<p>Altman and company seem stung by the criticism of GPT-5 and respond with disbelief that people think that the scaling hypothesis is moot. OpenAI has obviously not given up on massive scale. 
That\u2019s the reason it\u2019s spending hundreds of billions of dollars that it doesn\u2019t have to build giant datacenters in Abilene, Texas, and other locations. Brockman indicates that until those giant computation factories come online, there simply isn\u2019t enough firepower to make the next big leaps. \u201cThe scaling challenge is hard,\u201d he says. \u201cIt\u2019s like, really, really hard to execute, like building a bigger rocket\u2014building a 2x bigger rocket is probably 10x harder.\u201d<\/p>\n<p>When I mention Marcus to Altman, he\u2019s visibly offended. \u201cIs that a real question?\u201d he asks. Well, it\u2019s not only Gary, I say. Altman straightens up in his chair. \u201cWhat I can tell you with confidence is GPT-6 will be significantly better than GPT-5, and GPT-7 will be significantly better than GPT-6. And we have a pretty good track record on these.\u201d<\/p>\n<h2>AGI or Bust<\/h2>\n<p>Altman spent many months this year talking about how AGI was imminent. Lately, however, OpenAI has been steering people away from the idea that AGI is a destination. Now it\u2019s a process. Sounds logical, but the rhetorical tweak liberates the company from a deadline. \u201cWe had almost a category error of thinking of OpenAI as a project with a defined end date,\u201d says Brockman. \u201cWe were thinking, \u2018OK, if we just build AGI, and make it good for humanity, that&#039;s what we&#039;re here to do.\u2019 That is not how we think about it anymore.\u201d Now, he says it\u2019s more of a never-ending rollout. \u201cThe mission is really about this continuous impact, and transforming the economy into this AI-powered world. And even if AGI is a mile marker\u2014reasonably well defined, maybe a little bit of fuzziness\u2014there&#039;s this continuous exponential.\u201d<\/p>\n<p>When I bring up AGI with Altman, he says that the discussion might be less than useful, since people have such different ideas of what it means. 
Employing one of his favorite bits of interview jiu-jitsu, he asks us to share our definitions of AGI, as if they would make any difference to OpenAI. In its charter, OpenAI defines AGI as \u201chighly autonomous systems that outperform humans at most economically valuable work.\u201d Altman now says his view has evolved beyond the charter. His thinking on AGI seems to center on scientific acumen. \u201cWe can wrap our heads around what it means for most economic work to happen,\u201d he says. \u201cBut the scientific progress definition is really a big deal for the world. It\u2019s hard to wrap our heads around that, so we talk about it less.\u201d<\/p>\n<p>GPT-5, then, is a modest step toward that milestone. \u201cI would not claim that GPT-5 is like doing meaningful science, obviously not,\u201d Altman says. \u201cBut there is a glimmer, and I think by 6 or 7, we&#039;ll see more of it.\u201d<\/p>\n<p>Despite AGI\u2019s \u201cfuzziness,\u201d as Brockman puts it, OpenAI is having a branding moment with the acronym. Literally in the room with Altman as he mused over definitions was his PR minder, whose laptop had a sticker that read, \u201cFEEL THE AGI.\u201d A merch kiosk on the first floor of the company&#039;s headquarters sells T-shirts with the same legend. Hanging in the hallways are marketing posters highlighting issues on the road to AGI and explaining phenomena such as superintelligence and \u201cAI puberty,\u201d defined as \u201cthe transitional stage where a system shifts from narrow AI toward more general human-like intelligence, with unpredictable or awkward behavior.\u201d Whether AGI is a process or a destination, OpenAI is linked with it forever. 
And it\u2019s willing to spend hundreds of billions of dollars to scale its way there.<\/p>\n<p><em>Additional reporting by Kylie Robison.<\/em><\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/steven-levy\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Steven Levy\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>Backchannel newsletter<\/strong><\/a>. Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/backchannel-nl\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI\u2019s August launch of its GPT-5 large language model was something of a disaster. There were glitches during the livestream, with the model generating charts with obviously inaccurate numbers. In a Reddit AMA with OpenAI employees, users complained that the new model wasn\u2019t friendly, and called for the 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":34083,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-34082","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/34082","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=34082"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/34082\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/34083"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=34082"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=34082"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=34082"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}