{"id":37217,"date":"2025-10-31T22:31:09","date_gmt":"2025-10-31T22:31:09","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/the-man-who-invented-agi\/"},"modified":"2025-10-31T22:31:09","modified_gmt":"2025-10-31T22:31:09","slug":"the-man-who-invented-agi","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/the-man-who-invented-agi\/","title":{"rendered":"The Man Who Invented AGI"},"content":{"rendered":"<p>In the summer of 1956, a group of academics\u2014now we\u2019d call them computer scientists, but there was no such thing then\u2014met on the Dartmouth College campus in New Hampshire to discuss how to make machines think like humans. One of them, John McCarthy, coined the term \u201cartificial intelligence.\u201d This legendary meeting, and the naming of a new field, are well known.<\/p>\n<p>In this century, a variation of the term has stepped to the forefront: artificial general intelligence, or AGI\u2014the stage at which computers can match or surpass human intelligence. AGI was the driver of this week\u2019s headlines: a deal between OpenAI and Microsoft that hinged on what happens if OpenAI achieves it; massive capital expenditures from Meta, Google, and Microsoft to pursue it; the thirst to achieve it helping Nvidia become a $5 trillion company. US politicians have said that if we don\u2019t get it before China does, we\u2019re cooked. Prognosticators say we might get it before the decade is out, and that it will change everything. The origin of that term, however, and how it was originally defined, are not so well known. But there is a clear answer. The person who first came up with the most important acronym of the 21st century so far\u2014as well as a definition that is still pretty much the way we think of it today\u2014is unfamiliar to just about everybody. 
This is his story.<\/p>\n<h2>Nano Nerd<\/h2>\n<p>In 1997, Mark Gubrud was obsessed with nanotechnology and its perils. He was a fanboy of Eric Drexler, who popularized the science of the very, very small. Gubrud began attending nanotech conferences. His particular concern was how that technology, and other cutting-edge science, could be developed into dangerous weapons of war. \u201cI was a grad student sitting in the sub-sub basement at the University of Maryland, listening to a huge sump pump come on and off very loudly, right behind my desk, and reading everything that I could,\u201d he tells me on a Zoom call from the porch of a cabin in Colorado.<\/p>\n<p>That same year, Gubrud submitted and presented a paper called \u201cNanotechnology and International Security\u201d at the Fifth Foresight Conference on Molecular Nanotechnology. He argued that breakthrough technologies would redefine international conflicts, making them potentially more catastrophic than nuclear war. He urged nations to \u201cgive up the warrior tradition.\u201d The new sciences he discussed included nanotechnology, of course, but also advanced AI\u2014which he referred to as, yep, \u201cartificial general intelligence.\u201d It seems that no one had previously employed that phrase. 
Later in the paper he defined it:<\/p>\n<p><em>\u201cBy advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed.\u201d<\/em><\/p>\n<p>Drop the last clause and you have the definition of AGI that most people use today.<\/p>\n<p>\u201cI needed a word to distinguish the AI that I was talking about from the AI that people knew at the time, which was expert systems, and it was pretty clear that was not going to be the kind of general intelligence they were,\u201d he explains. The paper wasn\u2019t circulated widely, and its impact was minimal.<\/p>\n<h2>Real AI<\/h2>\n<p>Fast forward to the early 2000s, a time when AI Winter still chilled the field. Some perceptive researchers sensed a thaw. In 1999, Ray Kurzweil predicted in his book <a href=\"https:\/\/www.amazon.com\/Age-Spiritual-Machines-Computers-Intelligence\/dp\/0140282025\" rel=\"noreferrer\" target=\"_blank\"><em>The Age of Spiritual Machines<\/em><\/a> that AI would be able to match human cognition by around 2030. This struck a chord with computer scientist Ben Goertzel, who began working with like-minded collaborator Cassio Pennachin to edit a book on approaches to AI that could be deployed for wide use, as opposed to using machine learning to address specific and bounded domains, like playing chess or coming up with medical diagnoses.<\/p>\n<p>Kurzweil had referred to this more sweeping technology as \u201cstrong AI,\u201d but that seemed fuzzy. Goertzel toyed with calling it \u201creal AI,\u201d or maybe \u201csynthetic intelligence.\u201d Neither alternative enchanted the book\u2019s contributors, so he invited them to bat around other ideas. 
The thread included future AI influencers like Shane Legg, Pei Wang, and Eliezer Yudkowsky (yep, the guy who would become the doomer-in-chief).<\/p>\n<p>Legg, who then had a master\u2019s degree and had worked with Goertzel, came up with the idea of adding the word \u201cgeneral\u201d to AI. As he puts it now, \u201cI said in an email, \u2018Ben, don&#039;t call it real AI\u2014that&#039;s a big screw you to the whole field. If you want to write about machines that have general intelligence, rather than specific things, maybe we should call it artificial general intelligence or AGI. It kind of rolls off the tongue.\u2019\u201d Goertzel recalls that Wang suggested a different word order, arguing that the pursuit should be called general artificial intelligence. Goertzel noted that when pronounced out loud the acronym GAI might introduce an unintended connotation. \u201cNot that there\u2019s anything wrong with that,\u201d he quickly adds. They stuck with Legg\u2019s AGI.<\/p>\n<p>Wang, who now teaches at Temple University, says he only vaguely remembers the discussion but allows that he might have suggested some alternatives. More importantly, he tells me that what those contributors dubbed AGI circa 2002 is \u201cbasically the original AI.\u201d The Dartmouth founders envisioned machines that would express intelligence with the same breadth as humans did. \u201cWe needed a new label because the only one had changed its common usage,\u201d he says.<\/p>\n<p>The die was cast. \u201cWe all started using it in some online forums, this phrase AGI,\u201d says Legg. (He didn\u2019t always use it: \u201cI never actually mentioned AGI in my PhD thesis, because I thought it would be too controversial,\u201d he says.) Goertzel\u2019s book, <em>Artificial General Intelligence,<\/em> didn\u2019t come out until mid-decade, but by then the term was taking off, with a journal and conference by that name.<\/p>\n<p>Gubrud did manage to claim credit for naming AGI. 
In the mid-2000s, Gubrud himself brought his earlier coinage to the attention of those popularizing the term. As Legg puts it, \u201cSomebody pops up out of the woodwork and says, \u2018Oh, I came up with the term in \u201897,&#039; and we&#039;re like, &#039;Who the hell are you?&#039; And then sure enough, we looked it up, and he had a paper that had it. So [instead of inventing it] I kind of reinvented the term.&quot; (Legg, of course, is the cofounder and chief AGI scientist at Google\u2019s DeepMind.)<\/p>\n<p>Gubrud attended the second AGI conference in 2006 and met Goertzel briefly. He never met Legg, though over the years he occasionally interacted with him online, always in a friendly manner. Gubrud understands that his own lack of follow-up edged him out of the picture.<\/p>\n<p>\u201cI will accept the credit for the first citation and give them credit for a lot of other work that I didn&#039;t do, and maybe should have\u2014but that wasn\u2019t my focus,\u201d he says. \u201cMy concern was the arms race. The whole point of writing that paper was to warn about that.\u201d Gubrud hasn\u2019t been prolific in producing work since then\u2014his career has been peripatetic, and he now spends a lot of time caring for his mother\u2014but he has authored a number of papers arguing for a ban on autonomous killer robots and the like.<\/p>\n<p>Gubrud can\u2019t ignore the dissonance between his status and that of the lords of AGI. \u201cIt\u2019s taking over the world, worth literally trillions of dollars,\u201d he says. \u201cAnd I am a 66-year-old with a worthless PhD and no name and no money and no job.\u201d But Gubrud does have a legacy. He gave a name to AGI. His definition still stands. 
And his warnings about its dangers are still worth listening to.<\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/steven-levy\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Steven Levy\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>Backchannel newsletter<\/strong><\/a>. Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/backchannel-nl\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>In the summer of 1956, a group of academics\u2014now we\u2019d call them computer scientists but there was no such thing then\u2014met on Dartmouth College campus in New Hampshire to discuss how to make machines think like humans. One of them, John McCarthy, coined the term \u201cartificial intelligence.\u201d This [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":37218,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-37217","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/37217","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=37217"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/37217\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"h
ref":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/37218"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=37217"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=37217"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=37217"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}