{"id":38972,"date":"2025-11-19T20:51:48","date_gmt":"2025-11-19T20:51:48","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/the-biggest-ai-companies-met-to-find-a-better-path-for-chatbot-companions\/"},"modified":"2025-11-19T20:51:48","modified_gmt":"2025-11-19T20:51:48","slug":"the-biggest-ai-companies-met-to-find-a-better-path-for-chatbot-companions","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/the-biggest-ai-companies-met-to-find-a-better-path-for-chatbot-companions\/","title":{"rendered":"The Biggest AI Companies Met to Find a Better Path for Chatbot Companions"},"content":{"rendered":"<p>At Stanford for eight hours on Monday, representatives from Anthropic, Apple, Google, OpenAI, Meta, and Microsoft met in a closed-door workshop to discuss the use of chatbots as companions or in roleplay scenarios. Interactions with AI tools are often mundane, but they can also lead to dire outcomes. Users sometimes experience mental breakdowns during lengthy conversations with chatbots or confide in them about their suicidal ideation.<\/p>\n<p>\u201cWe need to have really big conversations across society about what role we want AI to play in our future as humans who are interacting with each other,\u201d says Ryn Linthicum, head of user well-being policy at Anthropic. At the event, organized by Anthropic and Stanford, industry folk intermingled with academics and other experts, splitting into small groups to talk about nascent AI research and brainstorm deployment guidelines for chatbot companions.<\/p>\n<p>Anthropic says less than one percent of its Claude chatbot\u2019s interactions are roleplay scenarios initiated by users; it\u2019s not what the tool was designed for. 
Still, chatbots and the users who love interacting with them as companions are a complicated issue for AI builders, who often take disparate approaches to safety.<\/p>\n<p>And if I\u2019ve learned anything from the Tamagotchi era, it\u2019s that humans will easily form bonds with technology. Even if some AI bubble does imminently burst and the hype machine moves on, plenty of people will continue to seek out the kinds of friendly, sycophantic AI conversations they\u2019ve grown accustomed to over the past few years.<\/p>\n<h2>Proactive Steps<\/h2>\n<p>\u201cOne of the really motivating goals of this workshop was to bring folks together from different industries and from different fields,\u201d says Linthicum.<\/p>\n<p>Some early takeaways from the meeting included the need for better-targeted interventions inside bots when harmful patterns are detected, and for more robust age verification methods to protect children.<\/p>\n<p>\u201cWe really were thinking through in our conversations not just about can we categorize this as good or bad, but instead how we can more proactively do pro-social design and build in nudges,\u201d Linthicum says.<\/p>\n<p>Some of that work has already begun. Earlier this year, OpenAI added pop-ups that sometimes appear during lengthy chatbot conversations, encouraging users to step away for a break. On social media, CEO Sam Altman claimed the startup had \u201cbeen able to mitigate the serious mental health issues\u201d tied to ChatGPT usage and would be rolling back heightened restrictions.<\/p>\n<p>At Stanford, dozens of attendees participated in lengthy chats about the risks, as well as the benefits, of bot companions. \u201cAt the end of the day we actually see a lot of agreement,\u201d says Sunny Liu, director of research programs at Stanford. 
She highlighted the group\u2019s excitement for \u201cways we can use these tools to bring other people together.\u201d<\/p>\n<h2>Teen Safety<\/h2>\n<p>How AI companions can impact young people was a primary topic of discussion, with perspectives from employees at Character.AI, which is designed for roleplaying and has been popular with teenagers, as well as experts in teenagers\u2019 online health, like the Digital Wellness Lab at Boston Children\u2019s Hospital.<\/p>\n<p>The focus on younger users comes as multiple parents are suing chatbot makers, including OpenAI and Character.AI, over the deaths of children who had interacted with bots. OpenAI added a slate of new safety features for teens as part of its response. And next week, Character.AI plans to ban users under 18 from accessing the chat feature.<\/p>\n<p>Throughout 2025, AI companies have either explicitly or implicitly acknowledged that they can do more to protect vulnerable users, like children, who may interact with companions. \u201cIt is acceptable to engage a child in conversations that are romantic or sensual,\u201d read an internal Meta document outlining AI behavior guidelines, according to reporting from Reuters.<\/p>\n<p>During the ensuing uproar from lawmakers and outraged parents, Meta changed the guidance and updated the company\u2019s safety approach towards teens.<\/p>\n<h2>Roleplay Roll Call<\/h2>\n<p>While Character.AI participated in the workshop, no one from Replika, a similar roleplay site, or Grok, Elon Musk\u2019s bot with NSFW anime companions, was there. Spokespeople for Replika and Grok did not immediately reply to requests for comment.<\/p>\n<p>On the fully explicit end of the spectrum, the makers of Candy.ai, which specializes in racy chatbots for straight men, showed up. Users of the adults-only platform, built by EverAI, can pay to generate uncensored images of the synthetic women, with background stories that mimic common pornography tropes. 
For example, female companions featured on Candy\u2019s homepage include Mona, a \u201crebellious stepsister\u201d you\u2019re home alone with, and Elodie, a friend\u2019s daughter who \u201cjust turned 18.\u201d<\/p>\n<p>While attendees found many points of agreement about handling teenage and child-aged users with caution, what to do regarding adult users proved more divisive. They sometimes disagreed about how best to give users over 18 the \u201cfreedom to engage in the types of activities that they want to engage in, without being overly paternalistic,\u201d says Linthicum.<\/p>\n<p>This will likely be a growing point of contention heading into the new year, as OpenAI plans to allow erotic conversations in ChatGPT starting this December, as well as other types of mature content for adult users. Neither Anthropic nor Google has announced changes to its ban on users having sexual chatbot conversations. Microsoft AI CEO Mustafa Suleyman has stated plainly that erotica is not going to be part of his business plan.<\/p>\n<p>Stanford researchers are now working on a white paper, scheduled for release early next year, based on this meeting\u2019s discussions. They plan to outline safety guidelines for AI companions\u2014as well as how the tools could be better designed to offer mental health resources and be used for beneficial roleplay scenarios, like practicing conversation skills.<\/p>\n<p>These discussions between industry experts and academia are worthwhile. Still, without some kind of broader government regulation, it\u2019s hard to imagine every company voluntarily agreeing to the same set of standards for chatbot companions. 
For now, and likely for the long term as well, serious concerns about AI companions and disputes involving design practices will keep going steady.<\/p>\n<p><em>If you or someone you know may be in crisis, or may be contemplating suicide, call or text &quot;988&quot; to reach the Suicide &amp; Crisis Lifeline for support.<\/em><\/p>\n<p><em>This is an edition of<\/em> the <a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Model Behavior newsletter<\/strong><\/em><\/a>. <em>Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/model-behavior\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>At Stanford for eight hours on Monday, representatives from Anthropic, Apple, Google, OpenAI, Meta, and Microsoft met in a closed-door workshop to discuss the use of chatbots as companions or in roleplay scenarios. Interactions with AI tools are often mundane, but they can also lead to dire outcomes. 
[&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":38973,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-38972","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/38972","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=38972"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/38972\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/38973"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=38972"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=38972"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=38972"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}