This essay originated in an email debate I had with an old colleague who now teaches at a business school. It was hard to write and I am sure that I have missed important concerns on every side of this debate. Please let me know what you think by commenting on my social media posts or emailing me at [email protected]. I hope I will learn from you, and I will write a follow-up piece if I get enough feedback.

An older pundit told me to do it

Age verification requirements for social media are the new hotness. Alarmists like Jonathan Haidt and his Anxious Generation movement have declared that digital devices and online platforms are inherently harmful and must be kept away from our vulnerable children. Their solution is to force everyone to prove they are an adult before they can use social media, much like showing ID to buy alcohol.

Doing this online isn’t as simple as glancing at your driver’s license before ringing up a six-pack. You need to upload your ID to an online service, and/or let them take a video of your face to estimate your age. This is empowering a whole industry of vendors and big tech platforms to create databases full of identifying information about adults.

The EU is trying to do this in a somewhat better way by readying its own privacy-friendly age verification app using zero-knowledge proofs (ZKP). I talked about this technology back in “The First Person Project could be the future of online identity.” ZKP lets you share a fact like “I’m an adult” without revealing your underlying ID.

Even with better privacy protections, moving to mandatory age verification worries me a lot. I am a parent myself, so I understand where the concerns are coming from, and I definitely do not think laissez-faire is the right answer. My boys have very limited access to specific types of social media, and my wife and I monitor their overall device usage closely.

I just wish we were trying harder to find a better way. I don’t think the authors of these laws have thought clearly about what it is they are trying to achieve. They’ve also neglected the risk that they will end up entrenching the power of big platforms.

What are we protecting kids from?

For me, these laws stumble out of the gate by failing to define their terms clearly. There are absolutely tech platform practices that are not good for children, and there may be some activities we decide kids shouldn’t do at all. But “social media” is too broad and vague to work as a label for the dangerous space we need to keep kids away from.

Is “social media” about communications? If so, what is the boundary between safe and dangerous communications? Is the goal to stop kids from posting to the world, like on Instagram or YouTube? How about video game chats like on Minecraft or Roblox, or voice chat on PlayStation or Xbox? What about direct messaging inside another app? Is a WhatsApp group or Apple’s Messages app social media?

Is the concern about harmful interactions? Are we trying to stop adults from targeting children, or to prevent bad interactions between children? A kids-only space that adults can’t enter won’t necessarily stop cyber-bullying or the risks of mental harm that come from children comparing themselves with others online. What about the benefits to marginalized or at-risk children who have found help and community through online forums?

Or, is the risk from addictive features like algorithmic feeds, infinite scroll, and auto-play? What’s the line between an addictive algorithm and a useful one? When YouTube search results filter out bad and irrelevant content, is that useful, or just a cynical attempt to get you hooked? Are these features even specific to social media? My twelve-year-old can get so engrossed in some video games that he has trouble stopping. Should we put age restrictions on Clash Royale and Bloons TD 6 for being too much fun? [See the 2023 controversy over random-reward loot boxes.]

Will definition problems make big platforms even stronger?

How will this work as the Internet continues to change? Whatever platforms are considered “social media” today will be joined (or supplanted) by new apps in the future, or by new uses of existing products that evade control. The classic example is kids using Google Docs’ real-time collaboration as a quasi social media platform, which has been going on at least since my high schooler was in elementary school. Will this lead to requiring age verification to use Google Docs or Microsoft Office?

Will there be some kind of App Store Monitoring Agency that has to review every new app for “regulated social media features” before allowing it out into the wild? Or, perhaps worse, will app stores and browser makers like Apple and Google be forced to start classifying apps and sites according to the laws of each government?

Putting the burden on tech companies could just help the biggest companies. If every small, independent ActivityPub server has to implement age verification just to exist, it could kill the open social media movement dead. Even mid-sized platforms will struggle with the expense and risk of handling identity data at scale. Last October, Discord had to disclose that 70,000 government IDs were exposed by a third-party vendor.

The EU’s privacy-safe app is, theoretically, a way to alleviate the cost by giving everyone a common, simple way to make the check. But what about people living in the EU who aren’t EU citizens? Wouldn’t platforms still have to build the full ID-check system for them? Even trying to sort this out imposes a legal cost that big platforms can afford but that burdens smaller players.

Would device-based enforcement work better?

There is another approach. Instead of every platform doing its own verification, the devices that children use could send an age signal to every app and site they visit. Groups like the International Centre for Missing & Exploited Children (ICMEC) have called for this approach in some detail.

I am not enough of an expert in this space to say that this is definitively better, but it seems more promising to me. The “social media is a dangerous space” model puts the burden on every platform and adult who wants to use it. Device-based enforcement would, instead, define the “safer space” we want to protect, and put the extra requirements only on people and businesses who want to work with kids.

As I understand it, device-based enforcement would work something like this:

  1. Require device makers to have an easy, clear option during setup to mark each device as a CHILD or ADULT device. If you want more nuance, it could let you specify an age range.

  2. Require device makers to transmit that status in all circumstances, as part of their on-device APIs and attached to all network communications. That way, any platform or app would always have a signal about the age group of the user.

  3. Require app developers and platforms to check that status and, if they want to serve children, to deliver a safe, age-appropriate experience. (A rough sketch of what this check could look like in code follows this list.)
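
To make this concrete, here is a minimal sketch of steps 2 and 3, assuming a hypothetical HTTP header I’ve called Device-Age-Class. No such header exists today, and a real standard would need a signal that is much harder to strip or spoof; this is only meant to show the shape of the idea, with the device stamping every outgoing request and the platform reading the stamp and routing accordingly.

```python
from enum import Enum

class AgeClass(Enum):
    CHILD = "child"
    ADULT = "adult"
    UNKNOWN = "unknown"  # older devices, proxies, or anything that strips the header

# Step 2 (device side): the operating system stamps every outgoing request.
def attach_age_signal(headers: dict[str, str], device_is_child: bool) -> dict[str, str]:
    return {**headers, "Device-Age-Class": "child" if device_is_child else "adult"}

# Step 3 (platform side): read the stamp and decide what to serve.
def age_class_from_headers(headers: dict[str, str]) -> AgeClass:
    value = headers.get("Device-Age-Class", "").strip().lower()
    try:
        return AgeClass(value)
    except ValueError:
        return AgeClass.UNKNOWN

def handle_request(headers: dict[str, str]) -> str:
    age_class = age_class_from_headers(headers)
    if age_class is AgeClass.CHILD:
        # A platform that chooses to serve kids returns its age-appropriate
        # experience here; one that doesn't simply blocks and moves on.
        return "kid-safe experience (or a polite block page)"
    if age_class is AgeClass.UNKNOWN:
        # Policy question: treat a missing signal as adult, as child, or challenge it.
        return "fallback policy"
    return "normal experience"

# Example: a request coming from a device marked as a child's during setup.
request_headers = attach_age_signal({"User-Agent": "ExampleBrowser/1.0"}, device_is_child=True)
print(handle_request(request_headers))  # -> kid-safe experience (or a polite block page)
```

The UNKNOWN case is doing a lot of work in this sketch: older devices, desktop browsers, and VPNs would not carry the signal, and deciding how to treat them connects directly to the open questions below.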

This seems a lot better for open social media to me. You don’t have to verify people’s age, and only have to incur the extra expense of building a kid-safe experience if you want to deliberately target that market. Otherwise, just block child devices and go about your business.

This could also make it easier to hold big platforms accountable for harms to children, because they will need to make an affirmative decision to serve kids, knowing the bar is going to be higher and that if they don’t get it right, they will face liability and legal risk.

There are open questions in this model as well, to be clear. What about kids on shared devices? Would a service aimed at kids still need age verification to stop adults from targeting children? Would this limit kids’ access to information if news and information sites start blocking child devices due to fears of litigation? What about new devices and operating systems?

I will caveat myself again that I’m sure I have missed something here. But my gut feeling is that we will do better by trying to define and protect a safer space for children than by adding a ubiquitous age-verification layer to the already complex world we are grappling with.
