Meta further tightens restrictions on DMs from strangers to underage users

Meta on Thursday further tightened restrictions to prevent strangers from directly contacting underage users — a move critics said is long overdue as concerns mount about child safety on Facebook and Instagram.

The new rules come as Meta faces a series of explosive lawsuits — including a sweeping salvo from 33 states accusing the company of fueling a youth mental health crisis and an alarming complaint by New Mexico alleging Meta has exposed underage users to sex predators.

A new default setting will block younger Instagram users from receiving direct messages from — or being added to group chats by — accounts they aren’t already connected to or don’t follow, regardless of the listed age of those account users, Meta said in a blog post.

In March 2021, Meta blocked Facebook and Instagram users listed as over age 19 from contacting minors on the platform who didn’t already follow them.

The new change applies to all US users under age 16 — and under age 18 in the UK and Europe “as we work to take into account varying regional expectations,” a Meta spokesman said.

Meta is enacting a similar feature for teens on its Messenger app, which blocks them from receiving messages unless the sender is already connected to them as a Facebook friend or through phone contacts.

“We want teens to have safe, age-appropriate experiences on our apps,” Meta said in the blog post.

Meta is adding more direct message restrictions for teens. Getty Images

Additionally, parents will now have the ability to approve or deny attempts to alter account safety settings for children under the age of 16. Under the previous version of Instagram’s parental supervision feature, parents were only notified if a teen changed the safety settings.

Meta said it is also preparing another feature that would “help protect teens from seeing unwanted and potentially inappropriate images in their messages from people they’re already connected to, and to discourage them from sending these types of images themselves.” Meta said it would have “more to share” on that feature later this year.

The renewed safety push follows reports and lawsuits accusing top Meta executives — including CEO Mark Zuckerberg and Instagram chief Adam Mosseri — of vetoing or watering down proposed features aimed at protecting teens.

In one case, Zuckerberg reportedly rejected an effort to ban filters that simulated the effects of plastic surgery — despite concerns that they were fueling teen body dysmorphia.

The safety update is the latest of several changes by Meta. Meta

Josh Golin, the executive director of the child safety advocacy group Fairplay, said Meta’s announcement was long overdue.

“Today’s announcement demonstrates that Meta is, in fact, capable of making changes to the design of Instagram to make it safer by default,” Golin said. “But it shouldn’t have taken over a decade of predation on Instagram, whistleblower revelations, lawsuits, furious parents, and Mark Zuckerberg getting hauled before Congress for Meta to make this change.”

New Mexico Attorney General Raúl Torrez said “parents have every reason to remain skeptical that Meta’s latest policy changes are meaningful or that Meta is likely to implement those changes faithfully.”

“With respect to child safety on Facebook and Instagram, the evidence shows Meta has consistently done the minimum or worse,” Torrez said in a statement. “While every step forward is welcome, it should not have required litigation to finally prompt action to protect children on Meta’s platform.”

Meta said it wants its apps to provide “age appropriate” experiences. Getty Images

Critics of Zuckerberg’s social media sites allege they use addictive features, such as rampant notifications and the “like” button, to keep young users hooked — even as disturbing content on the platforms fuels harmful outcomes like anxiety, depression, body image issues and even self-harm.

New Mexico’s lawsuit revealed that an unnamed Apple executive once complained to Meta that his 12-year-old had been “solicited” on Instagram.

The complaint, which cited various company documents and communications, also detailed an internal 2021 presentation that showed “100,000 children per day received online sexual harassment, such as pictures of adult genitalia.”

Elsewhere, the lawsuit filed by the 33 state attorneys general included a sweeping look at Meta’s response to its mounting child safety crisis.

Meta faces multiple lawsuits over the safety of its platforms. REUTERS

The states alleged that Meta drastically downplayed the prevalence of content depicting self-harm on Instagram that was shown to teens — in contradiction of its own internal research.

Earlier this month, Meta revealed that it would place more restrictions on content settings for young users.

That included controls on search terms designed to prevent teens from being exposed to sensitive subjects such as eating disorders or suicide.

Zuckerberg, X CEO Linda Yaccarino, TikTok CEO Shou Chew and other Big Tech leaders are set to testify before a Senate panel next Wednesday as part of a hearing on the “online child sexual exploitation crisis.”
