Instagram Suggested ‘Groomers’ Connect With Minors, FTC Says
Instagram’s automated software systems recommended that child “groomers” connect with minors on the app, making it easier for them to find victims, according to a 2019 internal document presented in court by the Federal Trade Commission.
The Meta Platforms Inc. report noted that minors made up 27% of the follow recommendations that the social media app surfaced to “groomers,” a term the company used for accounts it had identified as exhibiting predatory behavior toward children. More broadly, the company found that 7% of Instagram follow recommendations to all adults were minors. The data, which was shared among company executives in June 2019, was presented in federal court on Tuesday as part of the FTC’s antitrust lawsuit against Meta.
The document also included an analysis of 3.7 million user reports flagging inappropriate comments to the company. Meta, which was called Facebook at the time, found that about one-third of those reports came from minors. Of the minors who reported an inappropriate comment, 54% were reporting an adult.
Earlier in the trial, the FTC offered evidence that Meta Chief Executive Officer Mark Zuckerberg, when presented with safety issues on Instagram, chose not to give the app enough resources to address the risks to users. After seeing the 2019 data, US District Judge James Boasberg asked the FTC’s lawyers to speed things along.
“Out-of-context and years-old documents about acquisitions that were reviewed by the FTC more than a decade ago will not obscure the realities of the competition we face or overcome the FTC’s weak case,” Meta said in a statement. The company added that it has “long invested in child safety efforts,” and that in 2018 it began work to restrict recommendations involving potentially suspicious adults and encouraged the National Center for Missing and Exploited Children to expand its reporting process to include additional grooming situations the company had observed.
Guy Rosen, Meta’s chief information security officer, argued on the stand Tuesday that difficulties protecting young people online are not unique to Meta. “These challenges are present everywhere in the industry,” Rosen said when questioned by Meta’s lawyers. Rosen took over Meta’s integrity efforts in 2017, and leads the team responsible for fighting content that violates the company’s policies.
Lawyers for the FTC surfaced the internal data as part of an argument that Meta’s acquisition of Instagram ultimately harmed consumers. Government lawyers have used emails and other internal documents, as well as testimony from Instagram co-founder Kevin Systrom, to argue that Meta under-invested in the app’s safety and security efforts. The FTC first sued Meta in 2020, alleging that the company’s acquisitions of WhatsApp and Instagram were illegal and that it should be broken up.
Earlier in the trial, Systrom testified that Zuckerberg starved Instagram of resources in part because he felt threatened by the app’s success and worried it would cannibalize Facebook, the social network he founded.
The FTC surfaced more emails Tuesday that supported that theory. In May 2018, Adam Mosseri, a senior Meta product executive who would take over as head of Instagram later that year, asked Rosen for an honest assessment of Instagram’s integrity work. Rosen warned at the time that Instagram was “behind” in fighting harmful content, including child exploitation and terrorism-related content. Rosen suggested that this posed a risk, particularly to the platform’s younger audiences, and said he was seeking to “expand aggressively” to address these issues.
The FTC painted a portrait of a company reluctant to do so. In another exchange, from February 2019, Rosen wrote in an email that he had relayed to Zuckerberg his concerns that Instagram was being underfunded during a planning meeting about increasing company headcount. The resource allocation “was deliberate,” Rosen concluded after speaking with Zuckerberg: Zuckerberg thought Instagram had another year or two to catch up to Facebook and didn’t feel the app needed as many resources. “I think we are not sure that’s the case anymore,” Rosen said.
An internal presentation titled “Instagram Well-being H1 2019 – planning,” a planning document for the first half of 2019, acknowledged that Instagram’s integrity team was thin relative to the scope and importance of the work. Given resource limitations, “we will not be doing major proactive work” in areas like harassment, financial scams, credible threats of violence, impersonation, prostitution and sexual solicitation, and forms of child exploitation, the presentation said.
Under cross-examination by Meta’s lawyers, Rosen said it wouldn’t be fair to say that Meta starved the Instagram integrity team, and that Zuckerberg was aligned with him on the need to support Instagram. He said he believes nobody in the industry has invested in or prioritized these challenges as much as Meta.
“We’ve grown substantially,” Rosen said.
The company in September launched Instagram Teen Accounts, which are private by default and include protections limiting who can contact teens. “They’re in the strictest messaging settings, meaning they can’t be messaged by anyone they’re not already connected to,” Meta said in a statement. “Teens under 16 need a parent’s permission to change these settings. In 2021, we launched technology to identify adult accounts that had shown potentially suspicious activity, such as being blocked by a teen, and prevent them from finding, following and interacting with teens’ accounts. We don’t recommend these accounts to teens, or vice versa.”
Top photo: An employee walks past branded posters displayed at the Instagram Inc. office in New York, on Monday, June 4, 2018. Photographer: Jeenah Moon/Bloomberg.