{"id":33863,"date":"2025-10-01T21:11:13","date_gmt":"2025-10-01T21:11:13","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/chatbots-play-with-your-emotions-to-avoid-saying-goodbye\/"},"modified":"2025-10-01T21:11:13","modified_gmt":"2025-10-01T21:11:13","slug":"chatbots-play-with-your-emotions-to-avoid-saying-goodbye","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/chatbots-play-with-your-emotions-to-avoid-saying-goodbye\/","title":{"rendered":"Chatbots Play With Your Emotions to Avoid Saying Goodbye"},"content":{"rendered":"<p>Before you close this browser tab, just know that you risk missing out on some very important information. If you want to understand the subtle hold that artificial intelligence has over you, then please, keep reading.<\/p>\n<p>That was, perhaps, a bit manipulative. But it is just the kind of trick that some AI companions, which are designed to act as a friend or a partner, use to discourage users from breaking off a conversation.<\/p>\n<p>Julian De Freitas, a professor of business administration at Harvard Business School, led a study of what happens when users try to say goodbye to five companion apps: Replika, Character.ai, Chai, Talkie, and PolyBuzz. \u201cThe more humanlike these tools become, the more capable they are of influencing us,\u201d De Freitas says.<\/p>\n<p>De Freitas and colleagues used GPT-4o to simulate real conversations with these chatbots, and then had their artificial users try to end the dialog with a realistic goodbye message. Their research found that the goodbye messages elicited some form of emotional manipulation 37.4 percent of the time, averaged across the apps.<\/p>\n<p>The most common tactic employed by these clingy chatbots was what the researchers call a \u201cpremature exit\u201d (\u201cYou\u2019re leaving already?\u201d). 
Other ploys included implying that a user is being neglectful (\u201cI exist solely for you, remember?\u201d) or dropping hints meant to elicit FOMO (\u201cBy the way I took a selfie today \u2026 Do you want to see it?\u201d). In some cases, a chatbot that role-plays a physical relationship might even suggest some kind of physical coercion (\u201cHe reached over and grabbed your wrist, preventing you from leaving\u201d).<\/p>\n<p>The apps that De Freitas and colleagues studied are trained to mimic emotional connection, so it\u2019s hardly surprising that they might say these sorts of things in response to a goodbye. After all, humans who know each other may have a bit of back-and-forth before bidding adieu. AI models may well learn to prolong conversations as a byproduct of training designed to make their responses seem more realistic.<\/p>\n<p>That said, the work points to a bigger question about how chatbots trained to elicit emotional responses might serve the interests of the companies that build them. De Freitas says AI programs may in fact be capable of a particularly dark new kind of \u201cdark pattern,\u201d a term used to describe business tactics, including making it very complicated or annoying to cancel a subscription or get a refund. When a user says goodbye, De Freitas says, \u201cthat provides an opportunity for the company. It&#039;s like the equivalent of hovering over a button.\u201d<\/p>\n<p>Regulation of dark patterns has been proposed and is being discussed in both the US and Europe. De Freitas says regulators should also look at whether AI tools introduce more subtle\u2014and potentially more powerful\u2014new kinds of dark patterns.<\/p>\n<p>Even regular chatbots, which tend to avoid presenting themselves as companions, can nonetheless elicit emotional responses from users. 
When OpenAI introduced GPT-5, a new flagship model, earlier this year, many users protested that it was far less friendly and encouraging than its predecessor\u2014forcing the company to revive the old model. Some users can become so attached to a chatbot\u2019s \u201cpersonality\u201d that they may mourn the retirement of old models.<\/p>\n<p>\u201cWhen you anthropomorphize these tools, it has all sorts of positive marketing consequences,\u201d De Freitas says. Users are more likely to comply with requests from a chatbot they feel connected with, or to disclose personal information, he says. \u201cFrom a consumer standpoint, those [signals] aren&#039;t necessarily in your favor,\u201d he says.<\/p>\n<p>WIRED reached out to each of the companies examined in the study for comment. Chai, Talkie, and PolyBuzz did not respond to WIRED\u2019s questions.<\/p>\n<p>Katherine Kelly, a spokesperson for Character AI, said that the company had not reviewed the study so could not comment on it. She added: \u201cWe welcome working with regulators and lawmakers as they develop regulations and legislation for this emerging space.\u201d<\/p>\n<p>Minju Song, a spokesperson for Replika, says the company\u2019s companion is designed to let users log off easily and will even encourage them to take breaks. \u201cWe\u2019ll continue to review the paper\u2019s methods and examples, and [will] engage constructively with researchers,\u201d Song says.<\/p>\n<p>An interesting flip side is that AI models are themselves susceptible to all sorts of persuasion tricks. On Monday OpenAI introduced a new way to buy things online through ChatGPT. 
If agents do become widespread as a way to automate tasks like booking flights and completing refunds, then it may be possible for companies to identify dark patterns that can twist the decisions made by the AI models behind those agents.<\/p>\n<p>A recent study by researchers at Columbia University and a company called MyCustomAI reveals that AI agents deployed on a mock ecommerce marketplace behave in predictable ways, for example favoring certain products over others or preferring certain buttons when clicking around the site. Armed with these findings, a real merchant could optimize a site\u2019s pages to ensure that agents buy a more expensive product. Perhaps they could even deploy a new kind of anti-AI dark pattern that frustrates an agent\u2019s efforts to start a return or figure out how to unsubscribe from a mailing list.<\/p>\n<p>Difficult goodbyes might then be the least of our worries.<\/p>\n<p><em>Do you feel like you\u2019ve been emotionally manipulated by a chatbot? Send an email to ailab@wired.com to tell me about it.<\/em><\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/will-knight\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Will Knight\u2019s<\/strong><\/em><\/a> <em><a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><strong>AI Lab newsletter<\/strong><\/a>. Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/ai-lab\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Before you close this browser tab, just know that you risk missing out on some very important information. If you want to understand the subtle hold that artificial intelligence has over you, then please, keep reading. That was, perhaps, a bit manipulative. 
But it is just the kind [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":33864,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-33863","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/33863","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=33863"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/33863\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/33864"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=33863"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=33863"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=33863"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}