Father of Molly Russell says ‘we shouldn’t bury our heads’ about harmful content

Charities have branded the move “flimsy” as they called on Meta to do more to protect children.


Parents will be sent alerts if their children search for content related to self-harm or suicide on Instagram, Meta has said.

The alerts will begin within the next two weeks and will only be sent to parents who have signed up for the social media platform’s supervision tool.


Announcing the policy change, Meta said: “We understand how sensitive these issues are, and how distressing it could be for a parent to receive an alert like this.

“The vast majority of teens do not try to search for suicide and self-harm content on Instagram, and when they do, our policy is to block these searches, instead directing them to resources and helplines that can offer support.

“These alerts are designed to make sure parents are aware if their teen is repeatedly trying to search for this content, and to give them the resources they need to support their teen.”

The app will track search terms used by children and flag any repeated mentions of suicide or self-harm, the Facebook and WhatsApp owner claims.

The alerts will be introduced on Instagram first before being implemented on Meta’s AI platforms, the tech giant added.

“We’re launching these alerts on Instagram search first, but we know teens are increasingly turning to AI for support,” it said.

“While our AI is already trained to respond safely to teens and provide resources on these topics as appropriate, we’re now building similar parental alerts for certain AI experiences.

“These will notify parents if a teen attempts to engage in certain types of conversations related to suicide or self-harm with our AI. This is important work and we’ll have more to share in the coming months.”

The Molly Rose Foundation said the policy is “fraught with risk”.

Andy Burrows, chief executive of the charity, said: “This clumsy announcement is fraught with risk and we are concerned that forced disclosures could do more harm than good.

“Every parent would want to know if their child is struggling, but these flimsy notifications will leave parents panicked and ill-prepared to have the sensitive and difficult conversations that will follow.

“Our research shows Instagram’s algorithm still actively recommends harmful depression, suicide and self-harm material to vulnerable young people and the onus should be on addressing these risks rather than making yet another cynically timed announcement that passes the buck to parents.”