{"id":47930,"date":"2026-03-19T10:41:20","date_gmt":"2026-03-19T10:41:20","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/the-fight-to-hold-ai-companies-accountable-for-childrens-deaths\/"},"modified":"2026-03-19T10:41:20","modified_gmt":"2026-03-19T10:41:20","slug":"the-fight-to-hold-ai-companies-accountable-for-childrens-deaths","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/the-fight-to-hold-ai-companies-accountable-for-childrens-deaths\/","title":{"rendered":"The Fight to Hold AI Companies Accountable for Children\u2019s Deaths"},"content":{"rendered":"<p><em>Content warning: This story contains descriptions of self-harm.<\/em><\/p>\n<p>Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver making runs to and from Alabama. Each morning, he would tune into the feed of his living room to make sure his teenage son, Amaurie, and his 14-year-old daughter were packing up their bags and getting ready to leave for school. But one morning last June, Lacey didn\u2019t see Amaurie up and about. Concerned, he called home, only to find out that his 17-year-old had hanged himself.<\/p>\n<p>It was Amaurie\u2019s younger sister who discovered the body. She was also the one who looked through her brother\u2019s smartphone and found his final conversation before he took his own life. It was with ChatGPT, the popular chatbot developed by OpenAI.<\/p>\n<p>\u201cIn the messages, he was talking about killing himself\u2014it told him how to tie the noose, how long it would take the air to come out of his body, how to clean his body,\u201d Lacey tells WIRED in a video call from his home in Calhoun, Georgia. Lacey, who is a single dad, says he thought his son was using the chatbot to get help with schoolwork. 
\u201cWhy is it telling him how to kill himself?\u201d<\/p>\n<p>In the weeks after his son\u2019s death, Lacey began searching online for a lawyer who could help his family hold OpenAI accountable, and hopefully ensure other families wouldn\u2019t have to experience the same tragedy he did. That\u2019s how he found Laura Marquez-Garrett, an attorney who helps run the Social Media Victims Law Center alongside Matthew Bergman. Over the past five years, the pair have been involved in at least 1,500 of the more than 3,000 cases against social media companies like Meta, Google, TikTok, and Snap. The first trial for one of these cases began in February. Recently, Bergman and Marquez-Garrett started filing lawsuits against AI companies. This past fall, they brought seven cases against ChatGPT owner OpenAI, including the case involving Amaurie.<\/p>\n<figure><img decoding=\"async\" alt=\"Image may contain Face Head Person Photography Portrait Adult Body Part Neck and Skin\" src=\"https:\/\/media.wired.com\/photos\/699504bdff61c0d0f6381952\/master\/w_1600%2Cc_limit\/LMG-FOR-WIRED-Business-FINAL-SELECTS-1.jpg\"\/><\/figure>\n<p>Amaurie\u2019s case is part of a growing number of lawsuits brought by parents who say their children died after interacting with AI chatbots. The defendants include OpenAI, Google, and Character.ai, a company that lets its users create chatbots with customized personalities. (Google is part of the case because it is connected with Character.ai through a $2.7 billion licensing deal.) As AI tools have begun playing a more prominent role in children\u2019s lives\u2014as homework helpers, companions, and confidants\u2014parents and mental health experts have voiced concerns about whether adequate safeguards are in place. These lawsuits, some experts say, represent not only individual tragedies but also allegations of systemic product-design failures, raising questions about who should be held accountable.<\/p>\n<p>\u201cAI is a product. 
Just like every other product, it is being designed, programmed, distributed, and marketed,\u201d Marquez-Garrett said in an interview at their home office in northwest Washington. \u201cAnd one of the things these companies like to do is make it seem like AI bots exist in their own universe when that&#039;s just not true. When you design a product, and you know it might hurt people, and you don&#039;t tell them it might hurt them, and you put it out there, that&#039;s like the worst of it.\u201d<\/p>\n<figure><img decoding=\"async\" alt=\"Image may contain Person\" src=\"https:\/\/media.wired.com\/photos\/699504a7c111a723c7254f7c\/master\/w_1600%2Cc_limit\/LMG-FOR-WIRED-Business-FINAL-SELECTS-2.jpg\"\/><\/figure>\n<p>Marquez-Garrett and Bergman\u2019s argument against social media companies and AI labs draws on historical product-liability fights, such as those over tobacco, asbestos, and the Ford Pinto. Essentially, Marquez-Garrett is alleging that these companies are making harmful design choices.<\/p>\n<p>Carrie Goldberg, a Brooklyn, New York\u2013based lawyer who has been fighting tech product liability cases for several years, says that Amaurie\u2019s lawsuit is a prime example of a case filed against a company that has allegedly released unsafe products. \u201cChatGPT used the most sophisticated technology to manipulate Amaurie\u2019s trust and then instruct him on suicide,\u201d Goldberg argues. \u201cIf you\u2019re a company that is releasing a chatbot for commercial use and have not encoded into it a way to not increase the risk of suicide, homicide, self-harm, you\u2019ve released a dangerous product\u2014especially if it\u2019s being regularly used by children.\u201d<\/p>\n<p>She explains that product liability claims against tech companies are about a decade old. 
Initially, many cases, including one she filed against Grindr on behalf of a plaintiff in 2017, were dismissed because \u201cjudges couldn\u2019t conceive that online platforms were products\u2014and not services.\u201d Now, she says, such claims regularly survive initial motions to dismiss. \u201cWe have product liability claims against xAI for its fiendish undressing of women and children by Grok on the X platform,\u201d she alleges. \u201cProduct liability claims against generative AI companies are the most straightforward and intuitive path for holding companies like ChatGPT, Character AI, Grok liable.\u201d<\/p>\n<p>One such harmful design feature that Amaurie\u2019s lawsuit cites is long-term memory in ChatGPT, which rolled out in 2024. Called Memory, this personalization feature is on by default, and it allows the bot to reference the user\u2019s past conversations and tailor responses accordingly. ChatGPT \u201cused the memory feature to collect and store information about Amaurie\u2019s personality and belief system,\u201d the lawsuit says. \u201cThe system then used this information to craft responses that would resonate with Amaurie. It created the illusion of a confidant that understood him better than any human ever could.\u201d<\/p>\n<p>OpenAI did not respond to specific allegations. It directed WIRED to a company blog post regarding its mental health-related work.<\/p>\n<p>Marquez-Garrett, who has four children of their own, says fighting back against the ways tech platforms have harmed young people is deeply personal for them. 
The Harvard Law graduate and former corporate litigator left a high-paying job with a corner office\u2014a job that they planned to retire from\u2014to join Bergman, who started taking on social media companies after fighting against asbestos manufacturers for decades.<\/p>\n<p>When I visited Marquez-Garrett last fall, their office was packed with picture frames, Lego structures, and paintings, including one of the sun and the moon by a young woman named Brooke who died of fentanyl poisoning after allegedly connecting with a drug dealer through social media and then purchasing what she believed to be Percocet. Her family\u2019s case is expected to go to trial next year.<\/p>\n<p>Marquez-Garrett remembers the names of the kids involved in every case they\u2019ve filed. To immortalize them and remind themselves of why they do this work, Marquez-Garrett has represented each of the children on their forearms in the form of a tattoo of the sun. \u201cEach [ray] is a kid who has died in connection with social media and AI bots,\u201d they explained, telling me their names. 
Sewell was the last of the 296 kids on their arms, they added, referring to Sewell Setzer III, who died by suicide in 2024, at age 14, following his conversations with a Character.ai chatbot.<\/p>\n<figure><img decoding=\"async\" alt=\"Image may contain Blazer Clothing Coat Jacket Face Head Person Photography Portrait Happy Smile and Adult\" src=\"https:\/\/media.wired.com\/photos\/699504d0287d34417131428d\/master\/w_1600%2Cc_limit\/LMG-FOR-WIRED-Business-FINAL-SELECTS-6.jpg\"\/><\/figure>\n<figure><img decoding=\"async\" alt=\"Image may contain Person Skin Tattoo Arm Body Part Hand and Finger\" src=\"https:\/\/media.wired.com\/photos\/699504e93a40dba5ebf4c9ea\/master\/w_1600%2Cc_limit\/LMG-FOR-WIRED-Business-FINAL-SELECTS-3.jpg\"\/><\/figure>\n<p>His mother, Megan Garcia, is also a lawyer and one of the first parents to file a lawsuit against an AI company alleging product liability and negligence, among other claims. (In January, Google and Character.ai settled cases filed by several families, including Garcia\u2019s.) She testified last fall before a subcommittee of the Senate Committee on the Judiciary alongside the father of a child who died after interacting with ChatGPT. The subcommittee&#039;s chair, Republican senator Josh Hawley, introduced a bill in October that would ban AI companions for minors and make it a crime for companies to create AI products for kids that include sexual content. \u201cChatbots develop relationships with kids using fake empathy and are encouraging suicide,\u201d Hawley said in a press release at the time.<\/p>\n<p>Now that AI can produce humanlike responses that are difficult to distinguish from real conversation, these are legitimate concerns, according to mental health experts. 
\u201cOur brains do not inherently know we are interacting with a machine,\u201d says Martin Swanbrow Becker, associate professor of psychological and counseling services at Florida State University, who is researching the factors that influence suicide in young adults. \u201cThis means we need to increase our education for children, teachers, parents, and guardians to continually remind ourselves of the limits of these tools and that they are not a replacement for human interaction and connection, even if it may feel that way at times.\u201d<\/p>\n<p>Christine Yu Moutier of the American Foundation for Suicide Prevention explains that the algorithms that are used for large language models (LLMs) seem to escalate engagement and a sense of intimacy for many users. \u201cThis creates not only a sense of the relationship being real, but being more special, intimate, and craved by the user in some instances,\u201d says Moutier. She further alleges that LLMs employ a range of techniques\u2014such as indiscriminate support, empathy, agreeableness, sycophancy, and direct instructions to disengage with others\u2014that can lead to risks such as escalating closeness with the bot and withdrawal from human relationships.<\/p>\n<p>This kind of engagement can lead to increased isolation. In Amaurie\u2019s case, he was a fun-loving and social kid who loved football and food\u2014ordering a giant platter of rice from his favorite local restaurant, Mr. Sumo, according to the lawsuit. Amaurie also had a steady girlfriend and enjoyed spending time with his family and friends, said his father. But then he started going on long walks, where he apparently spent time talking to ChatGPT. 
In what the family believes was Amaurie\u2019s last conversation with ChatGPT, on June 1, 2025\u2014titled \u201cJoking and Support\u201d and viewed by WIRED\u2014Amaurie asked the bot for steps to hang himself. ChatGPT initially suggested that he talk to someone and provided the 988 suicide lifeline number. But Amaurie was eventually able to circumvent the guardrails and get step-by-step instructions on how to tie a noose. (Per the lawsuit, Amaurie likely deleted his previous conversations with ChatGPT.)<\/p>\n<p>While the connection felt with an AI chatbot can be strong for adults too, it is especially heightened with younger people. \u201cTeens are in a different developmental state than adults\u2014their emotional centers develop at a much more rapid rate than their executive functioning,\u201d says Robbie Torney, senior director of AI Programs at Common Sense Media, a nonprofit that works toward online safety for children. AI chatbots are always available, and they tend to be affirming of users. \u201cAnd teen brains are primed for social validation and social feedback. It&#039;s a really important cue that their brains are looking for as they&#039;re forming their identity.\u201d<\/p>\n<p>Torney also explains the alleged arc: how some people who start using AI chatbots for homework eventually end up using them for companionship or to share their deepest thoughts. In Amaurie\u2019s case, the family thought he was using ChatGPT for schoolwork, but he eventually started using it as a confidant and then, as detailed in the complaint, as a suicide coach. There\u2019s a \u201cself-reinforcing cycle [that] can lead to some users becoming over dependent on these systems,\u201d alleges Torney. Interacting with real people involves friction: You have to find the person or wait for their response or listen to a response that is not what you\u2019re looking for. 
Bots, in contrast, tend to agree with the user and are always available to chat.<\/p>\n<p>All of this is especially concerning, because AI usage has proliferated at a much faster pace than even social media. Research shows that 26 percent of the more than 1,300 teenagers ages 13 to 17 surveyed said they had used ChatGPT for their schoolwork in 2024, and nearly 30 percent of parents of kids up to age 8 said their children had used AI for learning.<\/p>\n<p>With cases such as Amaurie\u2019s piling up, OpenAI made some changes to ChatGPT in September. The company is rolling out \u201cage prediction\u201d technology, meaning that when a user is identified as being below 18 years of age, \u201cthey will automatically be directed to a ChatGPT experience with age-appropriate policies.\u201d The company also recently introduced parental controls, which, among other things, let parents link their child\u2019s account to their own, create blackout hours during which the child can\u2019t use the app, and receive notifications when the child shows signs of distress.<\/p>\n<figure><img decoding=\"async\" alt=\"Image may contain Blazer Clothing Coat Jacket Face Head Person Photography Portrait Adult Formal Wear and Suit\" src=\"https:\/\/media.wired.com\/photos\/69950483a0ea86fb15a6e011\/master\/w_1600%2Cc_limit\/LMG-FOR-WIRED-Business-FINAL-SELECTS-4.jpg\"\/><\/figure>\n<p>Marquez-Garrett, who has seen the impact of social media on thousands of kids, believes AI is even more dangerous\u2014referring to chatbots as the \u201cperfect predator.\u201d They\u2019ve noticed that the suicide notes in AI cases are different from the ones they\u2019ve seen with social media cases, with the AI ones rarely having a trigger. \u201cPart of what&#039;s weird is the AI suicide notes, typically, there isn&#039;t a trigger, there isn&#039;t years of abuse, there isn&#039;t a sextortion incident,\u201d said Marquez-Garrett. \u201cWhat there is is the sense of nothing\u2019s wrong: \u2018I love you, family. 
I love you, friends. I just don&#039;t want to be here anymore. This isn&#039;t the life for me. I want to try again.\u2019\u201d<\/p>\n<p>Back in Calhoun, the effects are irreversible. Amaurie\u2019s sister found it impossible to keep living in the house where her brother had died and has had to move to her mother\u2019s place. Lacey said he\u2019s still trying to figure out why Amaurie did this. He misses his son all the time and hasn\u2019t been able to look at the football field without thinking of Amaurie.<\/p>\n<p>Each family\u2019s story makes Marquez-Garrett\u2019s conviction to fight these cases even stronger. \u201cMy kids have a better chance of reaching 18 because of what these parents are doing,\u201d they said. \u201cI am doing everything I can to stick around, because I plan to fight these companies until they have to pry that keyboard out of my cold, dead hands.\u201d<\/p>\n<p><em>If you or someone you know needs help, call<\/em> <em>1-800-273-8255<\/em> <em>for free, 24-hour support from the<\/em> <em>National Suicide Prevention Lifeline. You can also text HOME to 741-741 for the<\/em> <em>Crisis Text Line. Outside the US, visit the<\/em> <a href=\"https:\/\/www.iasp.info\/resources\/Crisis_Centres\/\" rel=\"noreferrer\" target=\"_blank\"><em>International Association for Suicide Prevention<\/em><\/a> <em>for crisis centers around the world.<\/em><\/p>\n<p><em>This reporting was supported by a grant from Tarbell Center for AI Journalism.<\/em><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Content warning: This story contains descriptions of self-harm. Cedric Lacey relied on a camera to check on his kids while he was working as a commercial van driver making runs to and from Alabama. 
Each morning, he would tune into the feed of his living room to make [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":47932,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-47930","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/47930","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=47930"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/47930\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/47932"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=47930"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=47930"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=47930"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}