
Two weeks ago, I wrote about my issues with social media age verification laws, and suggested that device-based enforcement might work better. I got some thoughtful responses worth sharing. Plus, more news broke about the difficult rollout of age verification efforts in the EU and Australia. So here’s a follow-up.
Age verification laws are off to a rocky start
Actual rollouts of age verification are not going well. The EU’s new age-checking app is being promoted heavily by European Commission President Ursula von der Leyen as a privacy-preserving approach that handles everything on-device. Security professionals ripped it to shreds within days for being easy to compromise, which opens the door to fraud and further privacy breaches. Australia banned social media use by children under 16 in December 2025, but reports in Fortune and Crikey (a great independent Australian news site) indicate that at least 60-70% of under-16s are still using social media. Crikey also found that the Australian government didn’t even validate its approach against its own privacy law before rollout.
It is also clear that governments will respond to the circumvention problem with tighter controls rather than by rethinking their approach. Companies doing age verification in Australia are pushing for faster adoption of more invasive technology. The UK House of Lords has already voted to ban children from using VPNs (though that is unlikely to become law).
Trouble with the device-based verification model
On Threads, former Google Senior Staff Software Engineer Jeremy Manson (one of the smartest people I know) pointed out two definitional problems with my device-based counter-proposal.
First, I was not clear about which technology layer my alternative would apply to. The way I wrote it up, it could be read as a hardware-level requirement applying to all network traffic. That would be technically challenging, and it would make shared devices difficult or impossible to support. I should have said “operating system level” rather than “device level”: an operating system setting can hold different configurations for different user accounts on a shared device.
Second, I glossed over how to handle kids of different ages. An appropriate experience for a twelve-year-old isn’t necessarily acceptable for a four-year-old. I dodged this question because I don’t feel qualified to decide where the boundaries between age groups should fall. I also remembered, from last July’s Why is every parent I know frustrated with parental controls?, how hard it is for parents to navigate current age-group models. But Jeremy is right to point out that both issues would need to be addressed to make my proposal viable.
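To make the operating-system idea concrete, here is a minimal sketch of what such a setting might look like: a coarse age bracket attached to each user account, which apps could query without ever seeing a birth date or an ID. Everything here is hypothetical and purely illustrative; the bracket names and boundaries are placeholders for exactly the decisions I said I don’t feel qualified to make.

```python
# Hypothetical sketch of an OS-level, per-account age bracket.
# Bracket names and boundaries are placeholders, not a proposal.
from dataclasses import dataclass
from enum import Enum

class AgeBracket(Enum):
    UNDER_13 = "under_13"
    TEEN_13_15 = "13_15"
    TEEN_16_17 = "16_17"
    ADULT = "adult"

@dataclass
class AccountProfile:
    account_id: str
    age_bracket: AgeBracket  # set once, by a parent or guardian, in OS settings

def current_age_bracket(profiles: dict[str, AccountProfile], active_account: str) -> AgeBracket:
    """Return the bracket for whoever is signed in on a shared device."""
    profile = profiles.get(active_account)
    # Default to the most restrictive bracket if no profile is configured.
    return profile.age_bracket if profile else AgeBracket.UNDER_13

# A shared family laptop: two accounts, two different experiences.
profiles = {
    "parent": AccountProfile("parent", AgeBracket.ADULT),
    "kid": AccountProfile("kid", AgeBracket.TEEN_13_15),
}
if current_age_bracket(profiles, "kid") is not AgeBracket.ADULT:
    print("Serve the age-appropriate experience; no ID or birth date ever leaves the device.")
```

The sketch only shows that a per-account setting sidesteps the shared-device problem; it says nothing about how the bracket gets set accurately in the first place, which is its own hard problem.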
And speaking of the restrictive devil: a new bill requiring OS-level verification, the Parents Decide Act, was introduced in the US House of Representatives on April 16. From what I can tell so far, this bill gets almost everything wrong. It requires us to enter our birth dates into every machine we own, regardless of what we intend to use it for (not just for using social media or accessing adult content). And it strongly implies that operating systems will need to actively verify ages by scanning IDs or facial features. The one slight advantage over the current model is that you would reveal your identity only once, to the OS, instead of repeating age verification for every app or platform you want to use. But as written, it reads less like a privacy improvement and more like an invasive checkpoint at a bottleneck.
An example of the problem with category definitions
Mariana Olaizola Rosenblat of the NYU Stern Center for Business and Human Rights sent me a recent piece she co-wrote for Tech Policy Press, emphasizing how private messaging platforms can be vectors for disinformation yet are overlooked by many regulatory frameworks. Her piece is an elegant expression of my concern about basing age verification on a category as fuzzy as “social media”:
The central argument is to reframe how regulation should approach these hybrid platforms. Rather than classifying entire services as public or private, we propose attaching regulatory obligations to specific features based on their reach, discoverability, access controls and capacity for amplification. A one-to-one encrypted chat does not pose the same systemic risks as a searchable broadcast channel or a mass-forwarding feature. Treating them differently is not an erosion of privacy; it is a prerequisite for proportionate governance.
[See also my related piece from last October: Private, community, public: online identity should not be one size fits all.]
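To show what that feature-level framing could look like in practice, here is a small, purely hypothetical sketch of my own (not anything from their piece): each feature gets a risk profile based on its reach, discoverability, access controls, and capacity for amplification, and obligations attach to the profile rather than to the service as a whole.

```python
# Hypothetical illustration of feature-level (rather than service-level) classification.
# Attribute names and the threshold are placeholders, not a proposed standard.
from dataclasses import dataclass

@dataclass
class FeatureProfile:
    name: str
    reach: int             # how many accounts a single message can plausibly hit
    discoverable: bool     # can strangers find the content through search?
    open_access: bool      # can anyone view or join without an invitation?
    amplification: bool    # does the feature support mass forwarding or resharing?

def carries_public_obligations(feature: FeatureProfile, reach_threshold: int = 1000) -> bool:
    """Treat a feature as 'public-like' if it can broadcast widely, be searched, or amplify."""
    return feature.reach >= reach_threshold or feature.discoverable or feature.amplification

one_to_one_chat = FeatureProfile("encrypted 1:1 chat", reach=1,
                                 discoverable=False, open_access=False, amplification=False)
broadcast_channel = FeatureProfile("searchable broadcast channel", reach=100_000,
                                   discoverable=True, open_access=True, amplification=True)

print(carries_public_obligations(one_to_one_chat))    # False: private, low systemic risk
print(carries_public_obligations(broadcast_channel))  # True: public-like, higher obligations
```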
Are contracts and courts a better path than regulation?
Over email, Mariana also connected the risks to children with the way platforms force us to accept their terms of service (emphasis mine):
Because their cognitive and emotional capacities are still developing, [children and teenagers] are less able to recognize manipulative design strategies or give meaningful consent to the collection and monetization of their personal data… The goal is not to prevent children from accessing information or communicating online. Rather, it is to prevent them from entering into contractual relationships with platforms that subject them to pervasive data collection and engagement-maximizing design systems.
I recently wrote about how platforms use terms of service to subvert the rule of law. If adults can’t understand the pages of legalese that platforms flash in our faces before we click to accept, how could we expect a child of any age to give meaningful consent? I would love to hear from some lawyers about whether that exposes the platforms to extra liability by invalidating their entire contractual framework. We have already seen a New Mexico jury find that Meta’s platforms are harmful to children.
Liability findings might also be possible under existing regulations. For example, the EU found Meta in breach of the Digital Services Act for failing to keep children under 13 off its platforms, despite claiming its products are only for users 13 or older.
We might not need new, invasive age verification laws if we can convince companies that under-investing in child safety is bad for business.
Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].

