{"id":35555,"date":"2025-10-17T16:31:15","date_gmt":"2025-10-17T16:31:15","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/can-ai-avoid-the-enshittification-trap\/"},"modified":"2025-10-17T16:31:15","modified_gmt":"2025-10-17T16:31:15","slug":"can-ai-avoid-the-enshittification-trap","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/can-ai-avoid-the-enshittification-trap\/","title":{"rendered":"Can AI Avoid the Enshittification Trap?"},"content":{"rendered":"<p>Save StorySave this storySave StorySave this story<\/p>\n<p>I recently vacationed in Italy. As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the top choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be one of the best meals I can remember. When I got home, I asked the model how it chose that restaurant, which I hesitate to reveal here in case I want a table sometime in the future (Hell, who knows if I\u2019ll even return: It is called Babette. Call ahead for reservations.) The answer was complex and impressive. Among the factors were rave reviews from locals, notices in food blogs and the Italian press, and the restaurant\u2019s celebrated combination of Roman and contemporary cooking. Oh, and the short walk.<\/p>\n<p>Something was required from my end as well: trust. I had to buy into the idea that GPT-5 was an honest broker, picking my restaurant without bias; that the restaurant wasn\u2019t shown to me as sponsored content and wasn\u2019t getting a cut of my check. 
I could have done deep research on my own to double-check the recommendation (I did look up the website), but the point of using AI is to bypass that friction.<\/p>\n<p>The experience bolstered my confidence in AI results but also made me wonder: As companies like OpenAI get more powerful, and as they try to pay back their investors, will AI be prone to the erosion of value that seems endemic to the tech apps we use today?<\/p>\n<h2>Word Play<\/h2>\n<p>Writer and tech critic Cory Doctorow calls that erosion \u201censhittification.\u201d His premise is that platforms like Google, Amazon, Facebook, and TikTok start out aiming to please users, but once the companies vanquish competitors, they intentionally become less useful to reap bigger profits. After WIRED republished Doctorow\u2019s pioneering 2022 essay about the phenomenon, the term entered the vernacular, mainly because people recognized that it was totally on the mark. Enshittification was chosen as the American Dialect Society\u2019s 2023 Word of the Year. The concept has been cited so often that it transcends its profanity, appearing in venues that normally would hold their noses at such a word. Doctorow just published an eponymous book on the subject; the cover image is the emoji for \u2026 guess what.<\/p>\n<p>If chatbots and AI agents become enshittified, it could be worse than Google Search becoming less useful, Amazon results getting plagued with ads, and even Facebook showing less social content in favor of anger-generating clickbait.<\/p>\n<p>AI is on a trajectory to be a constant companion, giving one-shot answers to many of our requests. People already rely on it to help interpret current events and get advice on all sorts of buying choices\u2014and even life choices. Because of the massive costs of creating a full-blown AI model, it\u2019s fair to assume that only a few companies will dominate the field. 
All of them plan to spend hundreds of billions of dollars over the next few years to improve their models and get them into the hands of as many people as possible. Right now, I\u2019d say AI is in what Doctorow calls the \u201cgood to the users\u201d stage. But the pressure to make back the massive capital investments will be tremendous\u2014especially for companies whose user base is locked in. Those conditions, as Doctorow writes, allow companies to abuse their users and business customers \u201cto claw back all the value for themselves.\u201d<\/p>\n<p>When one imagines the enshittification of AI, the first thing that comes to mind is advertising. The nightmare is that AI models will make recommendations based on which companies have paid for placement. That\u2019s not happening now, but AI firms are actively exploring the ad space. In a recent interview, OpenAI CEO Sam Altman said, \u201cI believe there probably is some cool ad product we can do that is a net win to the user and a sort of positive to our relationship with the user.\u201d Meanwhile, OpenAI just announced a deal with Walmart so the retailer\u2019s customers can shop inside the ChatGPT app. Can\u2019t imagine a conflict there! The AI search platform Perplexity has a program where sponsored results appear in clearly labeled follow-ups. But, it promises, \u201cthese ads will not change our commitment to maintaining a trusted service that provides you with direct, unbiased answers to your questions.\u201d<\/p>\n<p>Will those boundaries hold? Perplexity spokesperson Jesse Dwyer tells me, \u201cFor us, the number one guarantee is that we won\u2019t let it.\u201d And at OpenAI\u2019s recent developer day, Altman said that the company is \u201chyper aware of the need to be very careful\u201d about serving its users rather than serving itself. 
The Doctorow doctrine doesn\u2019t put much credence in statements like that: \u201cOnce a company <em>can<\/em> enshittify its products, it will face the perennial temptation <em>to<\/em> enshittify its products,\u201d he writes in his book.<\/p>\n<p>Putting ads in chatbot conversations or in search results is not the only way that AI can become enshittified. Doctorow cites examples where companies, once they dominate a market, change their business model and fees. For instance, in 2023, Unity, the most popular provider of videogame development tools, decided to charge a new \u201cruntime fee.\u201d That misbehavior was so egregious that users revolted and got the fee walked back. But look at what has happened to streaming services like Amazon Prime Video: It used to be an ad-free service. Now it makes you watch commercials before and during the movie. You have to pay to turn them off. Oh, and the price of Amazon Prime keeps rising. So it might be standard big-tech practice to lock users into a service and then charge ever higher fees. It could even be that in order to maintain the same level of intelligence in a chatbot\u2019s results, users one day might have to upgrade to a higher, even more expensive tier\u2014another enshittification trick. Maybe companies that once promised that your chatbot activities would not be used to train future models will change their minds about that\u2014simply because they can get away with it.<\/p>\n<h2>Cory Speaks<\/h2>\n<p>Doctorow didn\u2019t address AI in his book, so I gave him a call to see whether he thinks the category is destined to travel down defecation row. I expected that he might outline the various ways that AI companies will fall prey to his smelly syndrome. To my surprise, he had a different take. He is not a fan of AI, and he claims the field has not even reached the \u201cgood to users\u201d stage I outlined earlier. Nonetheless, he says, it could be that the enshittification process happens anyway. 
Because it\u2019s so hard to see what goes on inside the \u201cblack boxes\u201d of LLMs, he says, \u201cthey have an ability to disguise their enshittifying in a way that would allow them to get away with an awful lot.\u201d Most of all, he says, the \u201cterrible economics\u201d of the field mean that the companies can\u2019t afford to wait and will enshittify even before they deliver value. \u201cI think they\u2019ll try every sweaty gambit you can imagine as the economics circle the drain,\u201d he said.<\/p>\n<p>I disagree with Doctorow about the value of AI. Hey, it found Babette for me! But I do fear that the technology might be prone to the enshittification process that he unerringly identified in the current tech giants. And guess what\u2014GPT-5 agrees with me. When I posed the question to the chatbot, it replied, \u201cDoctorow\u2019s \u2018enshittification\u2019 framework (platforms start good for users, then shift value to business customers, then extract it for themselves) maps disturbingly well onto AI systems if incentives go unchecked.\u201d GPT-5 then proceeded to lay out a number of methods by which AI companies could degrade their products for profit and power. AI companies might assure us they won\u2019t enshittify. But their own products have already written the blueprint.<\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/steven-levy\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Steven Levy\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>Backchannel newsletter<\/strong><\/a>. Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/backchannel-nl\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>I recently vacationed in Italy. 
As one does these days, I ran my itinerary past GPT-5 for sightseeing suggestions and restaurant recommendations. The bot reported that the top choice for dinner near our hotel in Rome was a short walk down Via Margutta. It turned out to be [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":35556,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-35555","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/35555","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=35555"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/35555\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/35556"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=35555"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=35555"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=35555"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}