
I’ve written about how the Platformocracy undermines community trust and degrades real-world democracy. Even scarier, tech companies’ pervasive content scanning systems, while designed to protect users, could easily be converted into tools of mass repression. Because of platform secrecy, we might not even know if it happens.
White Blood Cells Of The Internet
Bad things like malware, phishing, violence, and child endangerment are hard to keep out of online communities entirely. Trust and safety teams are designed to work like the human immune system, quickly detecting and destroying threats. Tech companies automatically scan virtually everything you create and share for illegal material. Human contractors review borderline cases and feed their corrections back in to improve the software.
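In outline, the loop looks something like the sketch below. Everything here is illustrative: the function names, thresholds, and review queue are hypothetical stand-ins, not any real company's pipeline.

```python
# Illustrative sketch of the scan-and-review loop described above.
# All names and thresholds are hypothetical, not any real platform's system.

REMOVE_THRESHOLD = 0.95   # near-certain violation: act automatically
REVIEW_THRESHOLD = 0.60   # borderline: route to a human reviewer

review_queue = []         # items awaiting human judgment
training_labels = []      # (item, verdict) pairs fed back into model training


def score_content(item: str) -> float:
    """Placeholder for a machine-learned classifier returning P(violation).
    A real system would call a trained model; this toy heuristic just flags
    a couple of obvious phishing-style phrases."""
    suspicious = ("verify your password", "wire transfer immediately")
    return 0.9 if any(phrase in item.lower() for phrase in suspicious) else 0.1


def triage(item: str) -> str:
    """Automated enforcement for clear cases, human review for borderline ones."""
    score = score_content(item)
    if score >= REMOVE_THRESHOLD:
        return "removed"
    if score >= REVIEW_THRESHOLD:
        review_queue.append(item)
        return "pending human review"
    return "published"


def record_human_verdict(item: str, is_violation: bool) -> None:
    """Human decisions become labels for the next model retraining run."""
    training_labels.append((item, is_violation))
```

The important part is the feedback loop: human verdicts on the hard cases become the training data that makes the automated layer stricter, or looser, over time.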
This approach is also used for so-called “legal but awful” content that isn’t criminal, but violates company policies and is just generally unpleasant. For example, during my time at Jigsaw, I played a small role in the development of Perspective API, a pioneering machine learning tool that scores online comments for toxicity, such as harassment or violent threats.
Used properly, tools like Perspective API can do a lot of good. By helping human community managers operate more efficiently, they let online communities grow to reach more people. For example, Perspective API made moderators at the New York Times so much more efficient that they were able to triple the number of articles that allowed comments.
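For readers who want to see what this looks like in practice, here is a minimal sketch of scoring a comment with Perspective API and flagging high-scoring comments for a human moderator. The endpoint and request shape follow Perspective’s public documentation; the API key placeholder, the 0.8 threshold, and the review routing are illustrative assumptions, not how the Times actually wired it up.

```python
# Hedged sketch: score a comment with Perspective API, then route
# high-scoring comments to a human moderation queue.
import requests

API_KEY = "YOUR_API_KEY"  # issued via the Perspective API developer signup
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")


def toxicity_score(comment: str) -> float:
    """Return Perspective's estimated probability that the comment is toxic."""
    body = {
        "comment": {"text": comment},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]


def needs_human_review(comment: str, threshold: float = 0.8) -> bool:
    """Flag comments above the (illustrative) threshold for a human moderator."""
    return toxicity_score(comment) >= threshold
```

The value of a tool like this isn’t that it makes the final call; it’s that it lets a small team of humans spend their attention on the comments most likely to need it.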
Pay No Attention To The Panopticon Behind The Curtain
The New York Times subscriber base is about the size of the population of the state of North Carolina (~11 million), which sounds huge but is tiny by Internet standards. Giants like TikTok, YouTube, and Facebook are 150-250 times bigger, and their digital immune systems are geometrically more complicated. Decades of attacks and controversies have led to novel-length policy guidelines, mammoth machine learning systems, and small armies of human reviewers.
In theory, all this power to police the behavior of billions of humans is only there to keep us safe, but we have no way to know if platforms are telling us the truth. Everything is secret – the software code, the internal guidelines given to human reviewers, and the millions of verdicts issued every day.
Schrödinger’s App: TikTok Angst
This explains the fear and uncertainty in the United States around TikTok. There’s no clear evidence that the Chinese government is secretly suppressing dissent on TikTok today, but there’s also no way to prove that they aren’t, or that they couldn’t do so in the future.
This uncertainty cuts both ways. The US government’s effort to ban TikTok has little credibility with actual TikTok users, since not only is there no evidence that TikTok is uniquely dangerous, there’s also no way to prove that US-based companies are any more trustworthy. It’s all vibes.
At Least We Can Trust Mark Zuckerberg To Do The Right Thing
This secrecy is a huge threat, because even if the platforms are honest today, we wouldn’t know if they started enforcing repressive policies tomorrow. Most responsible governments try to restrain surveillance by requiring warrants or other judicial review. There are no such guardrails online.
This is not a theoretical risk. We are dependent on company employees acting honorably, and resisting pressure from powerful interests who want to control online discourse. Sometimes this means governments, but company leadership can be just as dangerous.
Consider what Meta offered to get Facebook into China, from Sarah Wynn-Williams’ excellent memoir of her time as a senior leader on its policy team. [To be clear, she was opposed to all of this.]
Facebook would build facial recognition, photo tagging, and other moderation tools to facilitate Chinese censorship. The tools would enable Hony and the Chinese government to review all the public posts and private messages of Chinese users, including messages they get from users outside China…. There would be an emergency switch to block any specific region in China (like Xinjiang, where the Uighurs are) from interacting with Chinese and non-Chinese users. Also an “Extreme Emergency Content Switch” to remove viral content originating inside or outside China “during times of potential unrest, including significant anniversaries” (like the June 4 anniversary of the Tiananmen Square pro-democracy protests and subsequent repression).
Transparency And Accountability (Again, Because Always)
Secret surveillance is dangerous, but so is an Internet free-for-all. The invasion of trolls as Twitter turned into X has taught us that degrading your online immune system is a bad, bad idea. [Note that it’s also a bad idea to degrade your human immune system, as the worst measles outbreak in 30 years is sadly proving.]
Government oversight of trust and safety isn’t a great solution here, either. If we want to reduce the risk of covert corruption of these systems, inviting governments to get more involved in how platforms operate is going in the wrong direction.
Instead, we should demand that platforms open their safety processes and systems to their users. We, the community, make these platforms possible, and we should have an authoritative voice in how they operate.
We should have at least three fundamental rights:
1. To understand, in detail, how platform safety systems (both human and automated) work.
2. To consent, in advance, to changes, especially those that increase surveillance.
3. To inspect these systems regularly, to be sure they are doing what they are supposed to do and nothing more.
It would be a lot easier to trust a platform that operates its safety systems in a provably transparent and accountable way. Ownership and geographical location should matter less than the will of the community. That would be good for the Internet today, and a safer path for the future.
Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].