Last week, I covered a lot of ground in my proposal to bring transparency to online due process. In my haste to get through everything, I used a rhetorical trick to dodge the critical question of whether exposing the workings of platform safety would be so useful to bad actors that we’d actually end up less safe. My good friend and former colleague Ari Paparo from Marketecture noticed this and called it out on Threads. If there’s one thing I know about Ari, it’s that if he thinks I’m wrong, I probably am. So I’m going to spend more time with the issue today.
Can transparency do more harm than good?
I started my due process proposal by trying to keep it manageable. I granted that platforms need to automatically terminate the huge volume of fraudulent accounts they confront every day, and only give due process to valid users. I assumed platforms could build some sort of trustworthiness assessment, using criteria like the age of the account and patterns of usage (logins, likes, posts, etc.). I proposed to expose this valid user status to the account holder, and potentially to the community as a trust signal.
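To make that concrete, here’s a rough Python sketch of the kind of scoring such an assessment might involve. Everything in it is hypothetical – the field names, weights, and thresholds are mine for illustration, not anything a real platform has disclosed – but it shows how cheap signals like account age and usage patterns could roll up into a valid-user decision.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Account:
    created_at: datetime      # when the account was registered
    logins_last_30d: int      # recent login count
    posts_last_30d: int       # recent posts
    likes_last_30d: int       # recent likes
    reports_against: int      # abuse reports filed against the account

def trust_score(acct: Account, now: datetime | None = None) -> float:
    """Toy trustworthiness score in [0, 1]. Weights and caps are made up."""
    now = now or datetime.now(timezone.utc)
    age_days = (now - acct.created_at).days

    score = 0.0
    score += min(age_days / 365, 1.0) * 0.4               # account age: up to 0.4
    score += min(acct.logins_last_30d / 20, 1.0) * 0.2    # steady logins: up to 0.2
    activity = acct.posts_last_30d + acct.likes_last_30d
    score += min(activity / 50, 1.0) * 0.2                # organic-looking activity: up to 0.2
    score += 0.2 if acct.reports_against == 0 else 0.0    # clean abuse record: up to 0.2
    return score

def is_valid_user(acct: Account, threshold: float = 0.7) -> bool:
    """Accounts above the (hypothetical) threshold would earn due process before termination."""
    return trust_score(acct) >= threshold
```

The particular numbers don’t matter. The point is that the inputs are data the platform already collects, and the output is a status that could gate due process protections – and, in my original proposal, be shown to the account holder.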
A public trust signal could be incredibly valuable for bad actors. Today, they have to infer the state of platform defenses by seeing which of their fake accounts get terminated and which survive. If the product’s UI helpfully tells you that you’re good, it saves a lot of guesswork. Sure, it’s nice to reassure honest people, but if the result is fraudulent accounts skyrocketing, the whole community will end up worse off.
I’m going to admit defeat here and concede the point, because I don’t have any data to refute it. I am not inside a platform company right now, so I can’t analyze historical data or run a new experiment to measure the potential harm. So I will acknowledge that, as platforms currently operate, publicly disclosing valid status would probably cost more in new fraudulent accounts than the transparency would be worth.
That isn’t the end of the line, however. I’ve got a compromise proposal that would still help to expand due process and transparency, with less risk. I am also going to take a closer look at that “as platforms currently operate” clause.
A compromise: opt-in validation
Valid user status doesn’t have to be automatic and universal. Platforms could make it an opt-in program that offers the security of due process only to people who go through extra vetting to ensure they are real humans acting in good faith. Think of it as something akin to TSA Pre-Check in the US or EasyPASS in Germany.
Platforms are already doing enhanced vetting for other purposes, and could add due process protections as an added incentive. Blue check programs are a public trust signal of authenticity, and verified account holders are also the people most likely to kick up a public stink if you don’t give them clear, public due process. Other enhanced identity-verification options fit this pattern too, like YouTube’s creator requirements for access to advanced features, which I discussed in Online identity should not be one size fits all back in October.
These programs are small today – a 2020 estimate suggested that only around 3% of Instagram users have a blue check. By comparison, 20 million people are enrolled in TSA Pre-Check, making up 34% of people screened at airport checkpoints (because so many of them are frequent fliers). A platform could set a positive goal of having most of its content produced by users who have gone through extra verification steps.
Scaling up a process of sharing identifiable information to prove you are valid is a fraught topic in its own right due to the privacy risks. So it would be good to work on anonymous options in parallel. For example, the YouTube program I mentioned above will also accept your channel’s history of good behavior instead of an ID. And new technology like zero-knowledge proofs would let you prove something about yourself (like your age or that you possess a valid government ID) without revealing who you are. [See The First Person Project could be the future of online identity for more about ZKPs.]
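To give a flavor of how “prove it without revealing it” works, here is a toy Schnorr-style zero-knowledge proof in Python. It only proves knowledge of a secret number, the parameters are demo-sized and insecure, and a real age-verification proof would use a much richer circuit-based system – but the shape is the same: a commitment, a challenge, and a response that only checks out if the prover really holds the secret.

```python
# Toy Schnorr-style zero-knowledge proof: convince a verifier that you know the
# secret x behind a public value y = g^x mod p, without ever revealing x.
# The constants below are tiny demo numbers, NOT secure.
import hashlib
import secrets

P = 2039   # small safe prime (p = 2q + 1), demo only
Q = 1019   # prime order of the subgroup we work in
G = 4      # generator of the order-Q subgroup

def challenge(*values: int) -> int:
    """Derive the verifier's challenge from a hash (non-interactive Fiat-Shamir variant)."""
    data = ",".join(str(v) for v in values).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % Q

def prove(secret_x: int) -> tuple[int, int, int]:
    """Return (public y, commitment t, response s) demonstrating knowledge of x."""
    y = pow(G, secret_x, P)
    r = secrets.randbelow(Q)       # one-time random nonce
    t = pow(G, r, P)               # commitment
    c = challenge(G, y, t)
    s = (r + c * secret_x) % Q     # response blends the nonce and the secret
    return y, t, s

def verify(y: int, t: int, s: int) -> bool:
    """Check g^s == t * y^c (mod p); only someone who knew x can make this hold."""
    c = challenge(G, y, t)
    return pow(G, s, P) == (t * pow(y, c, P)) % P

if __name__ == "__main__":
    x = secrets.randbelow(Q)       # the secret (think: a credential you never show)
    y, t, s = prove(x)
    print("proof verifies:", verify(y, t, s))   # True, and x was never transmitted
```

The verifier learns that the prover holds the secret and nothing else – and that “nothing else” is exactly the property that makes ZKPs attractive for proving age or credential possession without handing over your identity.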
Why are bad actors such a big problem in the first place?
My entire due process proposal starts by accepting current platform business practices as a given. That’s the only way to make a practical proposal that someone might actually build. In that context, with armies of bad actors constantly probing for weaknesses, secrecy seems pretty essential.
But does it need to be this way? Why are online platforms so vulnerable to bad actors? Please allow myself to quote myself:
Platforms don’t verify the real-world identity of every new account holder because their business model is built around growth. Many platform policies explicitly allow multiple accounts from the same device, because it encourages more usage.
Bad actors abuse these open front doors. They make themselves look like different people to bulk-create hundreds of thousands of accounts automatically. They operate data centers stacked with remote-controlled physical phones that simulate huge populations of real people with believable usage patterns.
Platform safety teams work hard and catch the majority of bad accounts quickly, but because there’s no way to uniquely identify repeat offenders, bad guys can just tweak their evasion settings and try again, so eventually sheer volume means a lot of bad accounts get through undetected.
In other words, secrecy isn’t inevitable; it’s a band-aid on a gaping self-inflicted wound.
Open signups are a business decision. There are many ways that a platform could make it harder to get an account. Lots of real-world privileges have more steps – government IDs like passports or driver’s licenses, obviously, but also financial transactions like bank accounts, car loans, or mortgages; and joining high-status organizations like social clubs or private schools. Heck, you need two references to join our local swimming pool in New Jersey.
There are obviously huge privacy concerns around the big tech platforms requesting access to even more information about billions of people around the world. But for better or worse, we are already marching in that direction, without gaining any due process protections in return. That’s the effect of the rapid spread of age verification requirements.
The UK and 25 US states require age verification for adult content (which is defined very broadly, not just explicit material). The Australian social media ban requires platforms to take “reasonable steps” to validate user ages. Facebook sometimes uses facial recognition for account recovery and other suspicious situations. Just last week, Discord announced teen-by-default settings that require adults to prove their age to access anything more.
I have serious concerns about all of these programs, but if they’re happening anyway, the least platforms could do is reciprocate by giving age-verified accounts the dignity of due process before termination. Otherwise we’re handing over our identity and getting nothing in return.
Of course, sadly, the communities on these platforms are not part of this conversation. Platforms are making decisions about age verification like they always do – behind closed doors with a focus on power and profits, not people. I don’t want a Godfather deciding what’s right and what I deserve to know about their business. I want knowledge and consent.
Next week — I read Magna Carta, so you don’t have to.
Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].


