
European Commission Says Meta Breaches Digital Services Act Over Instagram, Facebook Under-13 Access
Key Takeaways
- Meta expands teen safeguards to 27 EU countries and the United States.
- AI visual analysis scans photos for height and bone structure to flag under-13 accounts.
- Meta says the system is not facial recognition and relies on general visual cues rather than identifying individuals.
EU presses Meta on age
The European Commission says Meta has not done enough to prevent minors under 13 from using Instagram and Facebook, announcing preliminary findings that the platforms are in breach of the Digital Services Act (DSA).
In the EU’s account of the issue, the Commission said Instagram and Facebook are failing to “diligently identify, assess and mitigate the risks of minors under 13 years old accessing their services.”

The Commission’s preliminary findings also focus on how Meta’s own safeguards can be bypassed, even though Meta says children under 13 cannot set up an Instagram or Facebook account.
Mashable reports that Meta requires a birth date when creating an account, “even to create a Teen Account (for under 16s),” but that minors can enter a false birth date “with no additional checks in place.”
The EU also criticized Meta’s reporting tool for minors under 13, saying it is “difficult to use and not effective, requiring up to seven clicks just to access the reporting form.”
Mashable adds that the Commission said there "often is no proper follow-up," allowing a reported minor to "continue to use the service without any type of check."
If the preliminary findings are confirmed, Mashable reports, the EU could issue a fine "proportionate to the infringement," capped at six percent of Meta's total worldwide annual turnover.
Meta’s AI bone analysis
In response to regulatory pressure and enforcement challenges, Meta is rolling out an AI system that it says can detect underage users by scanning photos and videos for visual clues tied to age.
Multiple outlets describe the same core mechanism: Meta’s AI looks at “general themes and visual cues,” including “height or bone structure,” to estimate someone’s general age.

The Tech Portal quotes Meta’s clarification that “We want to be clear: this is not facial recognition,” and says the system “does not attempt to identify individuals and is not based on facial recognition.”
Android Headlines similarly says Meta is adding a “visual analysis feature that allows AI to scan photos and videos for visual clues about a person’s age,” estimating age from “general themes, like height or bone structure.”
Help Net Security provides Meta’s own language, stating, “We want to be clear: this is not facial recognition. Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person in the image.”
Under Meta's approach, an account determined to be underage can be deactivated, and the account holder must complete an age verification process to prevent deletion.
The Tech Portal describes the consequence as removal or stricter safeguards, while Android Headlines says “If it determines an account may be underage, it will be deactivated, and the account holder will need to prove their age to prevent it from deletion.”
How the system decides
Meta’s AI approach is described as multi-signal, combining visual analysis with contextual clues from a user’s profile and activity.
Android Headlines says the system “will analyze entire profiles for contextual clues, like birthday celebrations or mentions of school grades,” and that it will “go through posts, comments, bios, and captions to see if the user is underage.”
The Tech Portal similarly says the technology “analyzes general visual cues like height and bone structure” and “combines them with signals from captions, comments, and user interactions to estimate an age range.”
The Verge adds that Meta’s AI-powered system will “also analyze posts, comments, bios, and captions to search for ‘contextual clues’ that someone might be underage.”
Mezha.net quotes Meta’s blog about analyzing “evidence in various formats, such as posts, comments, and bios,” and says Meta is applying AI to remove accounts of people “aged 13 or younger.”
Once flagged, the system’s described enforcement includes deactivation and an age verification process, with Android Headlines saying deactivated accounts require proof of age “to prevent it from deletion.”
The Verge adds that Facebook and Instagram will “deactivate accounts identified as underage, and the owner will need to verify their age to prevent it from deletion.”
Rollout and teen safeguards
Alongside the under-13 detection effort, Meta is expanding Teen Accounts protections across multiple regions and platforms, with several outlets tying the rollout to regulatory pressure.
Reuters reports that Meta will expand safeguards for teen accounts to “27 European Union countries” and to “Facebook in the United States,” as the company faces pressure “to better protect young people online.”

Reuters also says Meta’s technology will be expanded to “27 countries in the European Union,” and that “Meta is also expanding this technology to Facebook in the United States for the first time, with the UK and EU to follow in June,” quoting the company’s blogpost.
Digital Trends describes the Teen Accounts expansion as covering Instagram in “Brazil and 27 EU countries,” and says Facebook in the US is getting it “for the first time too, with the UK and EU following in June.”
Android Headlines adds that Meta has enrolled "hundreds of millions of teens" in protected Teen Accounts across Instagram, Facebook, and Messenger since 2024, and says the technology is expanding to the EU and Brazil, with Facebook in the US first and the UK and EU to follow in June.
The Verge describes the Teen Accounts changes as stricter content controls, saying the accounts “block messages from strangers” and “prevent users under 16 from livestreaming,” and it reports that Instagram rolled out the tech first and Facebook will do the same in the US.
Taken together, the sources depict a two-track strategy: deactivating accounts likely under 13 and shifting suspected teens into more restricted experiences.
Criticism, privacy, and legal backdrop
The rollout of AI age detection has been accompanied by privacy concerns and by references to legal actions involving child safety and platform enforcement.
Gadget Review frames Meta’s bone-structure scanning as “proactive age detection” and says it “isn’t your typical content moderation update—it’s proactive age detection that could reshape how platforms verify users.”
It also raises the possibility of false positives, warning that “False positives could lock out legitimate users who look younger than their age—think baby-faced college students or shorter adults.”
The same article quotes Meta’s statement that “Our AI looks at general themes and visual cues, for example height or bone structure, to estimate someone’s general age; it does not identify the specific person,” emphasizing the company’s insistence on avoiding facial recognition.
The Tech Portal connects the new AI effort to legal and regulatory pressure, including a New Mexico jury’s $375 million penalty and a European Commission investigation under the DSA.
TechSpot similarly ties the announcement to a New Mexico jury finding and says Meta was “ordered to pay $375 million in damages” after a verdict tied to the company’s handling of child safety.
Across the sources, the stakes are presented as both enforcement against underage access and the risk of misclassification, with Meta’s tools designed to “deactivate” accounts and require age verification to avoid deletion.