Instagram said it would notify parents if their teenager repeatedly searches for terms related to suicide or self-harm within a short period, as pressure mounts on governments to protect children online following Australia’s recent social media restrictions for those under 16.
The move comes amid growing global concern over online safety for minors. In December, Australia restricted access to social media for children under 16, and several countries have been weighing similar measures: Britain said in January that it is evaluating rules to safeguard children, and in recent weeks Spain, Greece, and Slovenia have signaled plans to limit access for younger users.
Owned by Meta Platforms Inc., Instagram said on Thursday that parents who use its optional supervision setting will receive alerts if their teenagers attempt to access content related to suicide or self-harm. “These alerts build on our existing work to help protect teens from potentially harmful content on Instagram,” the company said. “We have strict policies against content that promotes or glorifies suicide or self-harm.”
Instagram already blocks searches for such material and redirects users to support resources. The new alerts will begin rolling out next week for families in the United States, Britain, Australia, and Canada. Parents using the supervision feature can monitor their teenager’s activity and receive notifications, while teenagers must consent to any additional layers of monitoring.
The platform’s teen accounts, designed for users under 16, require parental permission to adjust settings. Parents who activate the supervision tools gain extra oversight, but the system is designed to balance protection with teen consent.
Governments have grown increasingly focused on protecting children from online harm, particularly amid concerns about AI-generated content. Recent debate was sparked by the chatbot Grok, which produced non-consensual sexualized images, raising alarms about the safety of minors in digital spaces.
Instagram’s announcement reflects a broader trend of social media companies facing pressure to strengthen safeguards for young users. By pairing content restrictions with parental notifications, the platform aims to offer both preventive and responsive measures to protect vulnerable teens.
The initiative underscores the growing responsibility of online platforms to address mental health risks and limit exposure to harmful content. With social media use among teenagers continuing to rise, regulators and technology firms are working out how to create safer digital environments while respecting user autonomy.
Instagram said the alerts will roll out gradually, allowing parents in the four countries to monitor repeated searches for sensitive topics and respond quickly, directing teenagers toward professional support when needed.
