In September 2024, when I was still Director of the new Applied Social Media Lab at Harvard’s Berkman Klein Center for Internet & Society, I gave the keynote at an event on improving online discourse; that talk laid the foundation for what would become this newsletter. I’m especially happy with the term “promote and police,” the notion that social media platforms get it wrong when they try to solve complex human problems with algorithms that sort the good from the bad. This is also the first of many times you’ll hear me rant about the importance of being able to exit one platform for another without having to start over from scratch. I explain all of this better in the original text, so here it is. Or if you aren’t in a reading mood, you can watch my original talk on YouTube.
Social media at its best can be life-changing. It can foster human connection, give marginalized people access to supportive communities, and make us all more aware of what is happening in the wider world. I have experienced the best of social media in my own life. My wife and I met through an online dating service, and during the pandemic, social media saved my sons’ friendships through in-game chat and the Roll20 platform for online Dungeons & Dragons games.
Unfortunately, too often social media feels life-changing in the worst way. Social media encourages arguments, provides a home for organized hate and harassment, and makes us all more entrenched in unexamined beliefs. Online strife is even spilling out into real-world conflict. In other words, a discourse dumpster fire.
The tech companies hosting these dumpster fires aren’t particularly happy with this, of course. They have made large investments to fight abuse and reduce harms. From my years at Google, I can personally attest that these are some of the most dedicated and mission-driven teams you will ever meet, full of people who work tirelessly to protect the vulnerable.
Despite all of this effort, problems persist, and may even be getting worse. I do not believe this is a problem of willingness or competence. I believe this is a problem of approach. Too many social media platforms are trying to solve problems with the same narrow approach, which I call “promote and police.”
Promote and police starts when, in search of growth and revenue, platforms standardize what you say, think, and create online into small, separate, measurable units called “content” that can be processed as efficiently and automatically as possible. This leaves very little room for context or nuance. Long streams of thought, novel vernacular unique to one sub-community, and complex interactions between groups of people are jettisoned if they get in the way of financial performance.
Next, platform teams write algorithms – and increasingly turn to artificial intelligence – to identify and promote what they consider to be “good” content. Because these systems are operated by businesses, “good” is usually defined as engagement – whatever will get you to use the platform more, and advertisers to pay more money to reach you. Algorithmic promotion has an unavoidable bias toward the sensational and inflammatory, because that’s what drives more usage.
Platforms know this, and they try to limit their systems’ tendency to drive engagement at all costs, which brings us to the policing half. Platforms write a competing set of algorithms and AI systems that identify “bad” content and try to keep it away from you, either by not promoting it or by taking it off the platform entirely.
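To make the pattern concrete, here is a deliberately toy sketch of the promote-and-police loop. Every name, score, and threshold in it is invented for illustration; no real platform’s ranking is anywhere near this simple.

```python
# Toy illustration of the promote-and-police pattern.
# All field names, scores, and thresholds are hypothetical.

POLICING_THRESHOLD = 0.8  # made-up "badness" cutoff

def predicted_engagement(post: dict) -> float:
    """Promote: score each post by how likely it is to be clicked,
    shared, or argued about. Sensational content tends to score
    high on exactly these signals."""
    return post["clicks"] + 2 * post["shares"] + 3 * post["heated_replies"]

def predicted_violation(post: dict) -> float:
    """Police: a competing model estimates the probability that the
    post breaks the rules, with little room for context or nuance."""
    return post["toxicity_score"]  # assumes some upstream classifier

def build_feed(posts: list[dict]) -> list[dict]:
    # Remove whatever the policing model flags...
    allowed = [p for p in posts if predicted_violation(p) < POLICING_THRESHOLD]
    # ...then rank everything that remains by raw engagement, highest first.
    return sorted(allowed, key=predicted_engagement, reverse=True)
```

Even in this caricature the tension is visible: one objective function rewards much of the very content the other tries to suppress, and everything in between is decided by a couple of numbers with no human in the loop.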
Trying to police human behavior at scale, mostly with computers, is not a good way to encourage human thriving. When an AI tells you that your last comment broke the rules, or worse, that the harassment you reported was actually A-OK, the real message conveyed is that the platform doesn’t trust you. This can even lead to injustice, when innocent people are cut off from their accounts because the algorithm said so, with only a limited right of appeal. Platforms do employ tens of thousands of humans to review edge cases, but too often those reviewers are instructed to simply double-check the machines for defects, like parts on an assembly line, not to make a humane judgment or seek more context.
This battle is never won, because humans in aggregate are almost infinitely resourceful, and will always find a novel way to evade the rules. The default platform response is to close the vulnerability with even more automated oversight, leading to a surveillance nightmare where everything you say or do online is run through multiple systems constantly looking for violations. Artificial intelligence promises to supercharge this arms race, with hostile and policing AIs one-upping each other, swamping the Internet with junk and rules.
A social media platform that removes context, rewards sensationalism, and replaces trust with surveillance is a recipe for bringing out the worst of human nature. As Professor Jonathan Zittrain memorably put it, if you fill the dumpster with mattresses soaked in kerosene, you’re going to get a dumpster fire.
I want to pause to be clear here that I am not blaming companies or their employees, and I do not mean to imply they’ve created this problem from whole cloth. Going back to the question of human nature, these platforms are simply competing to give people what they indicate they want through their choices. As Eric Schlosser wrote in Fast Food Nation in 2001, “The executives who run the fast food industry are not bad men. They are businessmen… They will sell whatever sells at a profit.”
This brings me to my proposal to move beyond discourse dumpster fires. Social media in the public interest must become more open to variety. People need more options about how to communicate, and more freedom to choose where to communicate, so they can at least have the possibility of breaking out of the promote-and-police cycle in search of healthier experiences.
And healthy experiences are out there, in their thousands. Real-world communities give birth to associations such as book clubs, knitting circles, support groups, and Dungeons & Dragons parties. Academia is a rich vein of communication modes – lectures, seminars, symposia, faculty office hours, student clubs, et cetera. Government gives us even more choices, such as town halls, school board meetings, caucuses, debates, legislatures, courts, and juries. The options go on and on. You can’t reduce all of this rich human dialogue to variations of a Facebook group.
Today’s online world makes it too hard to experiment with all of these different modes of discourse. The large platforms present us with a hard bundle of rules and features which we cannot modify or opt out of in any meaningful way. If we don’t like the bundle, our only option is to try another platform, but transferring our friends and history is difficult or impossible. This lock-in makes it too hard for newer or healthier modes of human interaction to survive and grow, which creates the false impression that most people prefer the status quo.
The availability of more options, and an easier time changing between them, won’t revolutionize the world in itself. But it can make a difference. In the 20 years since the publication of Fast Food Nation, American eating habits have improved – modestly, but measurably. As another example, mobile phone number portability, also only about 20 years old, has led to more competition and lower prices for consumers in many countries around the world. When I try to explain to my 15-year-old son that once upon a time, you couldn’t take your phone number with you from Verizon to AT&T, he doesn’t believe me. I hope that 20 years from now, having your own friends and interests permanently locked into one company’s servers will seem just as much of a bad fairy tale.
Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].