Why Social Media Age Checks for Under-13s Are Finally Getting Serious

The internet was never built for children. For years, the "under-13" rule on platforms like Instagram, TikTok, and Snapchat has been little more than a digital pinky swear. You click a box, lie about your birth year, and suddenly you’re in. Regulators are tired of the charade. Ofcom, the UK’s communications watchdog, is now putting the squeeze on tech giants to prove they actually know who is using their apps. This isn't just another slap on the wrist. It’s a fundamental shift in how the "open" web operates for the next generation.

If you’ve looked at a middle school classroom lately, you know the current system is broken. Estimates suggest millions of underage children are active on platforms that technically forbid them. The problem isn't just that they’re seeing "adult" content. It's that the algorithms driving these feeds are designed for adult brains with adult impulse control. When a 10-year-old gets sucked into a rabbit hole of disordered eating or extreme stunts, the "oops, they lied about their age" excuse doesn't cut it anymore.

The end of the honor system

Ofcom’s latest guidance under the Online Safety Act marks a turning point. They’re telling social media firms to toughen up or face massive fines. We’re moving away from the era of "self-declaration"—where a kid just types in 1995—and toward mandatory age estimation technology.

What does that actually look like? It’s not just one thing. It’s a layered set of checks. Some companies are testing facial age estimation, where an AI analyzes a selfie to guess if you’re a child or an adult. Others are looking at banking data or even "technical signals" like how a person types or who they interact with. If a profile claims to be 25 but only follows Minecraft influencers and types like a fifth-grader, the system should flag it. Wired has reported on these approaches in more detail.
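To make the "technical signals" idea concrete, here is a minimal sketch of how several weak signals might be combined into a single mismatch score that queues an account for a proper age check. Every signal name, weight, and threshold here is hypothetical; no platform has published its real scoring logic.

```python
# Hypothetical sketch: combining behavioral "technical signals" into an
# age-mismatch score. Signal names, weights, and thresholds are
# illustrative only, not any platform's real system.

def age_mismatch_score(claimed_age: int, signals: dict) -> float:
    """Return a 0-1 score; higher means the claimed age looks less plausible."""
    score = 0.0
    # A self-declared adult whose feed is almost entirely children's content.
    if claimed_age >= 18 and signals.get("child_content_ratio", 0.0) > 0.8:
        score += 0.4
    # Writing-style models can estimate a rough age from typed text.
    estimated_age = signals.get("estimated_writing_age", claimed_age)
    if claimed_age - estimated_age > 8:
        score += 0.4
    # A social graph made up almost entirely of minors is another signal.
    if signals.get("minor_contact_ratio", 0.0) > 0.9:
        score += 0.2
    return min(score, 1.0)

# The "claims to be 25, follows Minecraft influencers, types like a
# fifth-grader" profile from the article:
score = age_mismatch_score(
    25, {"child_content_ratio": 0.95, "estimated_writing_age": 11}
)
print(score)  # 0.8 — high enough to trigger a real age check
```

The point of a layered design is that no single signal is decisive; a flagged account would be routed to a stronger check (such as facial estimation) rather than banned outright.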

Tech companies hate this. It’s expensive. It adds "friction" to the signup process. Friction is the enemy of growth, and these firms live and die by user growth. But the safety of 4.2 million UK children online isn't a growth metric. It's a legal requirement.

Why facial scanning is the new normal

You’re probably thinking about privacy. I am too. The idea of handing over a face scan to a massive social media corporation feels invasive. However, the technology being pushed—like Yoti’s age estimation—doesn't actually "identify" you. It doesn't link your face to a passport or a name. It just looks at pixels to determine biological age and then deletes the data.

Is it perfect? No. Bias remains a massive hurdle. Early versions of these tools were significantly less accurate for people with darker skin tones. If a system keeps rejecting a 14-year-old because of their ethnicity while letting an 11-year-old through, the "safety" measure becomes a tool for exclusion. Ofcom is demanding that companies prove their tech works across all demographics before they roll it out.

There’s also the "shrouding" technique. This involves hiding content from users who haven't verified their age. If you won't prove you're an adult, the app treats you like a child by default. No targeted ads, no algorithmic "For You" pages filled with junk, and no direct messages from strangers. It’s a "safety by design" approach that flips the script. Instead of the child having to hide, the platform has to protect.
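The "treated as a child by default" logic described above can be sketched as a settings model where the restrictive configuration is the fallback and verification is what unlocks the adult defaults. The field names are hypothetical, not any platform's real settings schema.

```python
# Illustrative sketch of "safety by design" defaults: until age is
# verified, an account gets the child-safe configuration. Field names
# are hypothetical.

from dataclasses import dataclass

@dataclass(frozen=True)
class AccountSettings:
    targeted_ads: bool
    algorithmic_feed: bool
    dms_from_strangers: bool

CHILD_SAFE = AccountSettings(
    targeted_ads=False, algorithmic_feed=False, dms_from_strangers=False
)
ADULT_DEFAULT = AccountSettings(
    targeted_ads=True, algorithmic_feed=True, dms_from_strangers=True
)

def settings_for(age_verified_adult: bool) -> AccountSettings:
    # "Shrouding": anyone who hasn't proven adulthood is shrouded
    # behind the child-safe defaults.
    return ADULT_DEFAULT if age_verified_adult else CHILD_SAFE

print(settings_for(False).dms_from_strangers)  # False until verified
```

The design choice that matters is the direction of the default: the platform opts everyone *into* protection and lets verification opt them out, rather than the reverse.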

The parents are not alright

Let's be real about the parental side of this. Many parents are the ones helping their kids bypass these rules. They want their kid to have the same apps as their friends. They don't want their child to be the "weird" one who isn't on the group chat.

But there’s a massive gap between what parents think is happening and what the data shows. The Harvard T.H. Chan School of Public Health released a study showing social media platforms generated over $11 billion in ad revenue from US minors in a single year. These companies have a financial incentive to keep kids scrolling. They aren't your co-parent. They’re a business.

What actually happens next

Ofcom isn't just asking nicely. Under the Online Safety Act, they have the power to fine companies up to 10% of their global turnover. For a company like Meta, that’s billions.

We should expect to see three major changes over the next 12 months. First, "hard" age gates. The simple "enter your birthday" screen is going to die. Second, a surge in third-party age verification services. You’ll verify your age once with a trusted provider, and they’ll give a "green light" to the apps without sharing your ID. Third, more aggressive account purges.
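The "verify once, green light everywhere" model in the second point can be sketched as a signed minimal claim: the trusted provider attests only "over 18: yes/no," and the app checks the signature without ever seeing a name or document. This toy version uses a shared-key HMAC for brevity; real attestation schemes use public-key signatures so the app never holds the signing key.

```python
# Sketch of a third-party age attestation. A provider signs a minimal
# claim; the app verifies it without seeing any identity data. The
# shared HMAC key is a simplification of real public-key schemes.

import hashlib
import hmac
import json

PROVIDER_KEY = b"demo-secret"  # hypothetical provider signing key

def issue_token(over_18: bool) -> str:
    """Provider side: sign a claim containing only the over-18 flag."""
    claim = json.dumps({"over_18": over_18})
    sig = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def app_accepts(token: str) -> bool:
    """App side: accept only a validly signed over-18 claim."""
    claim, _, sig = token.rpartition(".")
    expected = hmac.new(PROVIDER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected) and json.loads(claim)["over_18"]

print(app_accepts(issue_token(True)))   # True: green light, no ID shared
print(app_accepts(issue_token(False)))  # False: treated as a minor
```

The privacy win is in what the token omits: the app learns a single yes/no answer, and the provider never learns which apps the token was shown to.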

If you’re a parent or an educator, don't wait for the tech to catch up. Check the settings on your home router. Look into "hard" blocks at the network level. Talk to your kids about why these rules exist. It isn't about being mean; it's about the fact that 11-year-olds shouldn't be subjected to the same psychological triggers as 30-year-olds.

The era of the digital Wild West for kids is closing. It’s about time. If a bar has to check ID at the door, a platform with billions of users and an algorithm that knows your deepest insecurities should have to do the same.

Start by auditing the apps on your family's devices today. Check if "Family Pairing" is enabled on TikTok or "Parental Supervision" on Instagram. Don't assume the app is doing the work for you. It isn't. Not yet.

Kenji Flores

Kenji Flores has built a reputation for clear, engaging writing that transforms complex subjects into stories readers can connect with and understand.