Pressure is increasing on social media platforms to strengthen child protection measures, particularly as governments move toward stricter regulation. Following Australia’s decision to ban children under 16 from social media, and similar efforts by gaming platforms to limit adult–child interactions, TikTok is now preparing to introduce a new age verification system across Europe.
According to a Reuters report, TikTok will begin rolling out an age-detection system to identify users who may be under the platform’s minimum age of 13. The system uses a combination of artificial intelligence and behavioural analysis, examining profile details, posted videos, and usage patterns to estimate a user’s age.
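Reuters does not describe the model in detail, but the general shape of such a system is a score built from account signals plus a threshold for escalation. The sketch below is purely illustrative, with made-up feature names and weights, and is not a description of TikTok's actual implementation.

```python
from dataclasses import dataclass


@dataclass
class AccountSignals:
    """Hypothetical behavioural signals an age-estimation model might consume."""
    stated_age: int               # age entered at sign-up
    school_terms_in_bio: bool     # e.g. grade/class references in the profile text
    avg_session_hour: float       # typical local hour of activity (0-23)
    follows_teen_creators: float  # share of followed accounts aimed at younger audiences


def under_13_score(s: AccountSignals) -> float:
    """Combine signals into a rough 0-1 likelihood that the user is under 13.

    The weights here are invented for illustration; a production system would
    use a trained classifier over far richer features (videos, comments, etc.).
    """
    score = 0.0
    if s.stated_age < 16:
        score += 0.2                    # a young stated age is weak evidence on its own
    if s.school_terms_in_bio:
        score += 0.3
    if 15 <= s.avg_session_hour <= 20:  # after-school activity pattern
        score += 0.2
    score += 0.3 * s.follows_teen_creators
    return min(score, 1.0)


FLAG_THRESHOLD = 0.7  # accounts above this go to human review, not automatic removal

account = AccountSignals(stated_age=14, school_terms_in_bio=True,
                         avg_session_hour=17.5, follows_teen_creators=0.8)
if under_13_score(account) >= FLAG_THRESHOLD:
    print("flag account for moderator review")
```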
TikTok says the system does not automatically ban accounts. Instead, when an account is flagged as potentially belonging to a child under 13, it is sent for review by trained moderators. This additional human review step is intended to reduce errors and ensure that enforcement decisions are not made solely by automated tools.
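In pipeline terms, the important detail is that the automated score only places an account in a review queue; any enforcement happens after a moderator decision. A minimal sketch of that flow, using hypothetical names rather than TikTok's internal tooling, might look like this:

```python
from collections import deque
from enum import Enum, auto


class Decision(Enum):
    UNDERAGE = auto()        # moderator confirms the account appears to be under 13
    NOT_UNDERAGE = auto()    # the automated flag was a false positive
    NEEDS_MORE_INFO = auto() # moderator asks for additional evidence


review_queue: deque[str] = deque()


def flag_account(account_id: str) -> None:
    """Automated step: add a suspected under-13 account to the human review queue."""
    review_queue.append(account_id)  # no ban or restriction is applied at this stage


def apply_decision(account_id: str, decision: Decision) -> str:
    """Enforcement step: only a moderator decision changes the account's state."""
    if decision is Decision.UNDERAGE:
        return f"{account_id}: restricted pending appeal"
    if decision is Decision.NEEDS_MORE_INFO:
        return f"{account_id}: re-queued with a request for proof of age"
    return f"{account_id}: no action, flag cleared"


flag_account("user_123")
print(apply_decision(review_queue.popleft(), Decision.NOT_UNDERAGE))
```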
The company has been testing the age-detection technology in the United Kingdom for roughly a year. Based on those trials, TikTok plans to expand the system to other European countries in the coming weeks. The company stated that the tool was developed specifically to align with European regulatory standards, which place strong emphasis on child safety and data protection.
Regulators across the European Union have been pushing platforms to take greater responsibility for verifying user ages and enforcing minimum-age rules. The European Parliament has already begun discussions on whether the EU should consider restrictions similar to Australia’s, potentially barring children under 15 from social media altogether.
However, age verification remains a complex challenge. Other platforms have struggled to build systems that are both effective and hard to circumvent. Recent examples include Roblox, where users reportedly bypassed facial verification by uploading images of animated characters or altered photos. These cases highlight the difficulty of balancing accessibility, privacy, and accuracy.
TikTok’s approach, which relies on behavioural and content analysis rather than mandatory ID uploads, may reduce friction for users but also raises the risk of false positives. Older users could be mistakenly flagged as underage, potentially leading to account restrictions or temporary suspensions.
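The scale of that risk is easy to underestimate: because the overwhelming majority of accounts belong to users aged 13 and over, even a fairly accurate classifier will flag large absolute numbers of them. A quick back-of-the-envelope calculation, using assumed figures that are illustrative rather than anything TikTok has reported, makes the point:

```python
# Illustrative base-rate arithmetic; every figure below is an assumption.
eu_accounts = 150_000_000      # assumed European user base
underage_share = 0.02          # assumed fraction of accounts actually under 13
false_positive_rate = 0.01     # older users wrongly flagged
true_positive_rate = 0.90      # under-13 accounts correctly flagged

adults = eu_accounts * (1 - underage_share)
underage = eu_accounts * underage_share

wrongly_flagged = adults * false_positive_rate
correctly_flagged = underage * true_positive_rate

print(f"accounts wrongly sent to review: {wrongly_flagged:,.0f}")
print(f"under-13 accounts caught:        {correctly_flagged:,.0f}")
print(f"share of flags that are errors:  "
      f"{wrongly_flagged / (wrongly_flagged + correctly_flagged):.0%}")
```

Under these assumptions, roughly a third of all flags would concern users who are not underage at all, which is exactly why the human review layer matters.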
As the system expands across Europe, its effectiveness will be closely watched by regulators, parents, and users alike. The rollout is likely to reignite broader debates about online privacy, algorithmic decision-making, and whether social media platforms can reliably police age limits at scale.
