Instagram to alert parents if teens search for self-harm terms

Instagram announced a new parental alert system that will notify guardians if a teenager repeatedly searches for suicide or self-harm terms within a short timeframe. The feature is scheduled to launch in the coming weeks for parents already enrolled in the platform’s parental supervision tools. Instagram currently blocks searches for such content; the new update adds a notification layer to inform parents of repeated search attempts. The alerts aim to facilitate parental support for teens exhibiting concerning search behavior.

Searches that may trigger an alert include phrases encouraging suicide or self-harm, phrases indicating a teen might be at risk of harming themselves, and specific terms such as “suicide” or “self-harm.” Instagram stated that parents will receive these alerts via email, text, or WhatsApp, depending on the contact information provided during account setup. In addition to external communications, parents will receive an in-app notification. These notifications will include resources designed to help parents approach conversations with their teen regarding mental health and safety.

The rollout occurs as Meta and other major technology companies face multiple lawsuits regarding the impact of social media on teenage users. During testimony in the U.S. District Court for the Northern District of California this week, Instagram head Adam Mosseri faced questioning regarding the company’s safety protocols. Plaintiffs’ attorneys in an ongoing social media addiction case specifically asked about the delayed rollout of basic safety features, including a nudity filter for private messages sent to teens. This legal scrutiny provides context for the timing of the new alert system.

In a separate legal proceeding in the Los Angeles County Superior Court, testimony revealed findings from internal Meta research. The study indicated that parental supervision and control tools had little measurable impact on children’s compulsive use of social media. The research also found that children facing stressful life events struggled more to regulate their social media use. These findings highlight the limits of monitoring tools in managing teen social media consumption.

Instagram addressed the potential for over-notification, stating the company aims to avoid sending alerts unnecessarily to prevent diminishing their effectiveness. To determine the alert threshold, the company analyzed Instagram search behavior and consulted experts from its Suicide and Self-Harm Advisory Group. Instagram explained the rationale in a blog post, stating, “We chose a threshold that requires a few searches within a short period of time, while still erring on the side of caution.” The company acknowledged this approach might result in notifications when no real cause for concern exists but considers it the appropriate starting point based on expert consensus.

The alerts are rolling out in the U.S., U.K., Australia, and Canada next week. Instagram plans to expand availability to other regions later this year. Future updates will extend these notifications to instances where a teen engages the app’s artificial intelligence in conversations about suicide or self-harm. The company continues to adjust safety features amid ongoing legal and regulatory pressure regarding teen user protection.