{"id":32794,"date":"2025-09-24T09:28:12","date_gmt":"2025-09-24T09:28:12","guid":{"rendered":"https:\/\/agooka.com\/news\/business\/openais-teen-safety-features-will-walk-a-thin-line\/"},"modified":"2025-09-24T09:28:12","modified_gmt":"2025-09-24T09:28:12","slug":"openais-teen-safety-features-will-walk-a-thin-line","status":"publish","type":"post","link":"https:\/\/agooka.com\/news\/business\/openais-teen-safety-features-will-walk-a-thin-line\/","title":{"rendered":"OpenAI&#8217;s Teen Safety Features Will Walk a Thin Line"},"content":{"rendered":"<p>OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies if a user is under 18 years old and routes them to an \u201cage-appropriate\u201d system that blocks graphic sexual content. If the system detects that the user is considering suicide or self-harm, it will contact the user\u2019s parents. In cases of imminent danger, if a user&#039;s parents are unreachable, the system may contact the authorities.<\/p>\n<p>In a blog post about the announcement, CEO Sam Altman wrote that the company is attempting to balance freedom, privacy, and teen safety.<\/p>\n<p>\u201cWe realize that these principles are in conflict, and not everyone will agree with how we are resolving that conflict,\u201d Altman wrote. \u201cThese are difficult decisions, but after talking with experts, this is what we think is best and want to be transparent in our intentions.\u201d<\/p>\n<p>While OpenAI tends to prioritize privacy and freedom for adult users, for teens the company says it puts safety first. By the end of September, the company will roll out parental controls so that parents can link their child\u2019s account to their own, allowing them to manage the conversations and disable features. 
Parents can also receive notifications when \u201cthe system detects their teen is in a moment of acute distress,\u201d according to the company\u2019s blog post, and set limits on the times of day their children can use ChatGPT.<\/p>\n<p>The moves come as deeply troubling headlines continue to surface about people dying by suicide or committing violence against family members after engaging in lengthy conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI firms to hand over information about how their technologies impact kids, according to Bloomberg.<\/p>\n<p>At the same time, OpenAI is still under a court order mandating that it preserve consumer chats indefinitely\u2014a fact that the company is extremely unhappy about, according to sources I\u2019ve spoken to. Today\u2019s news is both an important step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that consumer privacy should only be breached in the most extreme circumstances.<\/p>\n<h2>\u201cA Sexbot Avatar in ChatGPT\u201d<\/h2>\n<p>From the sources I\u2019ve spoken to at OpenAI, the burden of protecting users weighs heavily on many researchers. They want to create a user experience that is fun and engaging, but it can quickly veer into becoming disastrously sycophantic. It&#039;s positive that companies like OpenAI are taking steps to protect minors. At the same time, in the absence of federal regulation, there&#039;s still nothing forcing these firms to do the right thing.<\/p>\n<p>In a recent interview, Tucker Carlson pushed Altman to answer exactly <em>who<\/em> is making these decisions that impact the rest of us. The OpenAI chief pointed to the model behavior team, which is responsible for tuning the model for certain attributes. 
\u201cThe person I think you should hold accountable for those calls is me,\u201d Altman added. \u201cLike, I\u2019m a public face. Eventually, like, I\u2019m the one that can overrule one of those decisions or our board.\u201d<\/p>\n<p>He\u2019s right, yet some of the imminent harms seem to escape him. In another podcast interview with YouTuber Cleo Abrams, Altman said that \u201csometimes we do get tempted\u201d to launch products \u201cthat would really juice growth.\u201d He added: \u201cWe haven\u2019t put a sexbot avatar in ChatGPT yet.\u201d <em>Yet<\/em>! How strange.<\/p>\n<p>OpenAI recently released research on who uses ChatGPT, and how they use it. That research excluded users who were under the age of 18. We don\u2019t yet have a full understanding of how teens are using AI, and it\u2019s an important question to answer before the situation grows more dire.<\/p>\n<h2>Sources Say<\/h2>\n<p>Elon Musk\u2019s xAI is suing a former staffer who left the company to join OpenAI, alleging in a complaint that he misappropriated trade secrets and confidential information. In the current era of AI companies swapping staffers for multimillion-dollar compensation packages, I\u2019m sure we\u2019ll see more of these types of lawsuits pop up.<\/p>\n<p>The staffer in question, Xuechen Li, never made it to OpenAI\u2019s internal Slack, according to two sources at the company. It\u2019s unclear whether his offer was rescinded, or if he was onboarded only to be let go. OpenAI and Li did not respond to WIRED\u2019s request for comment.<\/p>\n<p><em>This is an edition of<\/em> <a href=\"https:\/\/www.wired.com\/author\/kylie-robison\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Kylie Robison\u2019s<\/strong><\/em><\/a> <a href=\"https:\/\/www.wired.com\/newsletter?sourceCode=editarticle\" rel=\"noreferrer\" target=\"_blank\"><em><strong>Model Behavior newsletter<\/strong><\/em><\/a>. 
<em>Read previous newsletters<\/em> <a href=\"https:\/\/www.wired.com\/tag\/model-behavior\/\" rel=\"noreferrer\" target=\"_blank\"><em><strong>here.<\/strong><\/em><\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>OpenAI announced new teen safety features for ChatGPT on Tuesday as part of an ongoing effort to respond to concerns about how minors engage with chatbots. The company is building an age-prediction system that identifies if a user is under 18 years old and routes them to an [&hellip;]<\/p>\n","protected":false},"author":1,"featured_media":32795,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[36],"tags":[],"class_list":{"0":"post-32794","1":"post","2":"type-post","3":"status-publish","4":"format-standard","5":"has-post-thumbnail","7":"category-business"},"_links":{"self":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/32794","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/comments?post=32794"}],"version-history":[{"count":0,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/posts\/32794\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media\/32795"}],"wp:attachment":[{"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/media?parent=32794"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/categories?post=32794"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/agooka.com\/news\/wp-json\/wp\/v2\/tags?post=32794"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}