
So far in Platformocracy, I’ve been focusing on the case for democracy in online platforms. I still have a lot to say about that, but I am broadening out to talk about other issues related to platforms’ autocratic mindset, such as their approach to child safety.
Don’t panic
I have two school-age children, but I don’t believe in the moral panic over kids online. There’s always been a scapegoat for the challenges of modern parenting. I lived through the television V-Chip in 1996, Tipper Gore’s Parents Music Resource Center in 1985, and the satanic panic over Dungeons and Dragons in 1981. Before that, television was called a vast wasteland in 1961, and comic books were falsely blamed for the Seduction of the Innocent in 1954. Heck, Samuel Johnson thought novels were corrupting the youth of England in the mid-18th century.
The feature launches will continue until parenting improves
New technology doesn’t corrupt children, but it creates new situations that parents need to deal with but may not even understand. How many parents have even logged in to popular kids’ platforms like Roblox, Minecraft, or Fortnite?
Platforms are responding with an avalanche of new child safety features. Just since April, there have been releases from Roblox (for younger children), Nintendo, TikTok, Apple, Roblox again (for teens this time), Instagram, and Android (in Canary).
Despite all this activity, I don’t know a parent who feels good about their children’s Internet/device usage. Many aren’t even sure what their kids are interacting with day to day. There’s always a new app, a new trend, a new parental account that you can’t remember the password for. Often, the best-case scenario is catching your kids watching or doing something inappropriate and stopping it quickly. And the worst-case scenarios can be pretty awful.
No wonder some parents throw up their hands and want to just turn it all off.
Safety works differently (and better) in the real world
Say your child asks if they can ride their bike to a friend’s house. As a parent, you don’t have to just say yes or no. You can ask questions and consider the context:
How often do you let your child go out unsupervised? Have they acted responsibly in the past?
What’s your child’s level of experience riding a bike?
Is your child planning to stay at the friend’s house, or to go together to a third destination?
Is the friend trustworthy? Do you know the parents?
Is the bike route to the friend’s house safe?
Is it threatening to rain? Is it close to mealtime or bedtime?
Etc.
Based on all of these factors, your decision is probably a set of conditions specific to your child and the context at this moment in time. Be home by five. Don’t buy any candy. Call me if it rains.
Parents do this real-world context math all the time, and it doesn’t depend on what safety features the bike manufacturer has launched with this year’s model.
Supervising online behavior is harder because parental controls fall short in three areas: consent, context, and customization.
Consent
Behavioral economics (and antitrust judges) have had a lot to say about the power of defaults. The biking example requires your child to ask for permission first. Most parental controls default to wide open. Consider Roblox and YouTube.
Roblox recently added an option to block experiences, but only if you know what they are (presumably because your kid was using them and you didn’t like what you saw). YouTube Kids gets partway there with an Approved Content Only option, but you have to pick everything out in advance, and your child can’t search for or request anything new.
By contrast, and to their credit, both the Apple App Store and Android Play Store have options to require parental approval before downloads or purchases.
Context
Are you really making a choice if you don’t have enough information for a meaningful decision? In the biking example, you have tons of context, and can ask questions to learn more. Most parental control options give you only the most basic description of how they work, and no way to learn more. Back to our examples:
YouTube has seven tiers of age-appropriate content filtering. I applaud the effort, but if I have a ten-year-old, should I limit them to the Older (9-12) content setting on YouTube Kids, which “allows kids to search and explore more music videos, gaming, science, and more,” or are they ready to move up to the Explore setting on the main YouTube app, which features “a broad range of videos generally suitable for viewers ages 9+, including vlogs, tutorials, gaming videos, music clips, news, educational content and more”? I couldn’t find any way to observe the differences in practice, or even specific examples of videos that show up in one versus the other.
Roblox has four content maturity levels, but how should a busy parent interpret the difference between a Mild experience with “heavy unrealistic blood” and a Moderate experience with “light realistic blood”? I couldn’t find any images of the difference in Roblox documentation for parents. Plus, unlike YouTube ratings, which are set by YouTube, Roblox maturity levels are self-reported by the experience creator. How can a parent know whether a given creator is accurate or trustworthy?
Customization
Platforms fall very, very short of enabling the range of parenting choices you can make in real-world decisions like the biking example. As I discussed in my Harvard talk, the Platformocracy is built on scale, which requires presenting a hard bundle of rules and features that you cannot modify or opt out of.
YouTube’s seven tiers of content filtering sounds like a lot, but real-world children are a continuum, not seven buckets. A given child may not even exhibit the same maturity level from topic to topic and day to day. [And this is to say nothing of children who are neurodivergent.]
[This is another reason why I love Bluesky’s pluggable moderation model. I would love to see someone build labellers for a kid-safe version of Bluesky. That wouldn’t even be possible on almost any other platform.]
AI could help (no, seriously, please don’t laugh)
Natural language queries connected to your child’s account would make this a lot easier. Parents should be able to ask platforms questions about how their controls work for their kids:
“If I let my preteen onto YouTube’s main app, based on their watch history, what are some videos they would probably see that are not on YouTube Kids?”
“Can you show me some video clips of ‘heavy unrealistic blood’ in Roblox experiences that my child has played?”
Even better, platforms should use AI to give parents the power to set the same kind of nuanced, context-sensitive rules that they do in the real world. Just like you can tell your kid “it’s OK to bike to your friend’s house today, but I need you home by 5pm and you aren’t allowed to buy any candy,” imagine being able to manage your kid’s online usage in natural language:
“Don’t let my child see violent content, but they are allowed to watch professional sports coverage even if there are brawls or injuries.”
“My child can play any Roblox experiences with the lowest maturity rating, but send me a message to pre-approve anything above that.”
“I caught my kid watching gameplay videos after bedtime on a school night, so lock him down to educational content only until next weekend.”
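To make this concrete, here is a minimal sketch of what rules like those above might compile into behind the scenes. Everything here is hypothetical: the `Rule` fields, the maturity levels, and the content tags are illustrative names I made up, not any real platform’s API. The point is just that each natural-language instruction maps to a small, structured, context-sensitive policy rather than a single on/off toggle.

```python
from dataclasses import dataclass
from datetime import datetime
from typing import FrozenSet, Optional

# Illustrative maturity ladder, ordered mildest to most mature.
MATURITY = ["minimal", "mild", "moderate", "restricted"]

@dataclass
class Rule:
    """A structured policy one natural-language parental rule might compile to."""
    block_categories: FrozenSet[str] = frozenset()   # "no violent content"
    allow_exceptions: FrozenSet[str] = frozenset()   # "...but pro sports is fine"
    approve_above: Optional[str] = None              # "pre-approve anything above mild"
    educational_only_until: Optional[datetime] = None  # temporary lockdown

def evaluate(rule: Rule, tags: FrozenSet[str], maturity: str, now: datetime) -> str:
    """Return 'allow', 'ask_parent', or 'block' for one piece of content."""
    # Temporary lockdown: educational content only until the deadline passes.
    if rule.educational_only_until and now < rule.educational_only_until:
        if "educational" not in tags:
            return "block"
    # Category blocks, with explicit carve-outs
    # ("no violence, but sports coverage is allowed even with brawls").
    if tags & rule.block_categories and not tags & rule.allow_exceptions:
        return "block"
    # Maturity threshold: anything above it is routed to the parent.
    if rule.approve_above is not None:
        if MATURITY.index(maturity) > MATURITY.index(rule.approve_above):
            return "ask_parent"
    return "allow"
```

The interesting design choice is the third return value: instead of forcing every case into allow/block up front, borderline content generates an approval request, which is exactly the “ask first” default that the bike-ride example relies on.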
Maybe this would catch on if we gave it a trendy name. Let’s call it Vibe Parenting.
Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].