Starting today, I am proposing a plan for expanding due process that an online platform could realistically implement. My goal is to convince you that this package of software and procedural changes 1) is effective and 2) wouldn’t come at an extortionate cost. This week’s post defines the goal and scope, and tries to address one big risk up front.

The selfish case for due process: increased trust

To convince you that this is a doable plan, I am setting aside the question of whether we have the right to due process online. Due process in policy enforcement would be good for platforms on purely business terms, because it’s a highly effective way to increase user trust.

When the Internet was in unconstrained growth mode, policy enforcement mattered in aggregate, but delivering good individual user experiences wasn’t that important. As long as a platform didn’t alienate too many users, things would work out because there would be ten new people tomorrow for every loss today.

As the Internet approaches saturation, especially in developed markets, loyalty and retention are growing more and more important. You could argue that we’ve seen the first stirrings of this shift as X usage has flatlined under Musk’s misrule, and in Google Search having to scramble to win back users from ChatGPT.

In this world, policy enforcement interactions become a meaningful moment for building user trust. A famous study identified the service recovery paradox – customers can actually end up liking a company better after it does a good job fixing a problem than if the problem had never happened in the first place!

Importantly, due process is a cost-effective way to build trust through procedures, instead of relying on the (much harder) problem of winning trust through correctness. Obviously a platform should want to get all of its enforcement decisions right, but that’s a complex product design, policing, and safety problem. It’s unrealistic to expect 100% accuracy, so due process provides a safety valve for the inevitable errors.

Due process is doubly effective for trust because it lets people determine for themselves that decisions are being made in good faith. Without this visibility, even accurate decisions expose the platform to reputational risk from accusations of unfairness. In particular, bad actors have a strong incentive to claim they were falsely accused if the platform won’t provide anything to contradict them.

Scope: account terminations

A realistic improvement project can’t fix everything. Account terminations are a good place to start, because they hit a sweet spot of frequency and severity for both the individual user and the platform as a whole. Improvements to this process would therefore do the most good relative to effort and cost.

The frequency benefit of focusing on terminations is hopefully self-evident. Every active account on a social network posts hundreds or thousands of times, so there are vastly more policy decisions to be made on individual pieces of content than on whether or not to terminate an account.

The severity of account termination versus other enforcement actions is more debatable. A single piece of content can do a lot of harm. Consider doxxing someone, or posting a despicable deepfake a la Grok. There is also continuing controversy about whether social media platforms censor political views, and how much that influences real-world election outcomes.

I fully grant all of these concerns, but a realistic proposal can’t address everything at once. I am sticking with terminations as a starting point because being removed from a community against your will is the most severe possible consequence from a software perspective. Once your account data is purged, that’s the end. [See Account terminations are easy, but unjust.]

If this proposal were adopted by a platform and proved successful, there could always be a discussion about expanding it to other harms in the future.

Out of scope: policies, warnings, and suspensions

For this proposal, I am taking platform policies as a given, and not questioning how they came to be or whether they are justified. Regular readers will know that I would like to see this change, but participatory policy-setting would be a much more fundamental (even radical) change. It’s also somewhat orthogonal: due process is analogous to the justice system, while rule-setting is more of a legislative process. So, as a first step, I am deferring to platforms to set the termination rules.

I am also not looking at any pre-termination steps. This proposal only kicks in once a platform has decided that it wants to terminate an account.

Many platforms issue warnings/strikes, or impose short-term suspensions for policy violations. We could debate whether these rules are helpful, or even whether they are fair. (Just ask my sons, who have each lived through temporary suspensions from video-game platforms. They have A LOT OF THOUGHTS.)

Regardless, these pre-termination steps are not the same thing as due process. Each event that triggers a warning is a small judgement in its own right. Making three unilateral judgements instead of one doesn’t give the user any more recourse – it’s just a more elaborate enforcement standard than one strike and you’re out.

Risk: abusive account creation

The first major drawback to my proposal is that not every account termination targets an account belonging to a real person acting in good faith. Bad actors evade new-account protections to create millions of fraudulent accounts every day, which counter-abuse teams try to detect and terminate as quickly as possible. [See Can we protect the Internet without revealing our real-world identities?]

Giving every brand-new account due process would tax the resources of even the richest platform. Even worse, it would open up a new denial-of-service attack on small platforms — creating tons of garbage accounts that clog the termination process.

Thus, for this proposal to be practical, there has to be some kind of exception that allows for rapid bulk termination of fraudulent accounts. This means granting platforms authority to decide which accounts are valid, and thus entitled to the protection of due process, and which are not.

A naive approach would be to just focus on the age of the account. Presumably it’s not such a big deal to accidentally terminate someone’s brand-new account, versus one that they’ve been relying on for months or years. This doesn’t work in practice, however, because bad actors are smart enough to figure that out: they will simply have their fraudulent accounts behave like real, honest people until the deadline has passed, and only then go bad.
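To make that failure mode concrete, here is a minimal sketch of what an age-only eligibility rule might look like. It is purely illustrative; the 30-day cutoff, the function name, and the structure are my own assumptions, not anything a real platform has published.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical illustration of the naive age-only check described above; the
# 30-day threshold and function name are invented for this example.
DUE_PROCESS_AGE_THRESHOLD = timedelta(days=30)

def is_due_process_eligible(account_created_at: datetime) -> bool:
    """An account 'earns' due process simply by surviving past the threshold."""
    age = datetime.now(timezone.utc) - account_created_at
    return age >= DUE_PROCESS_AGE_THRESHOLD

# The flaw: a bad actor can register accounts in bulk, keep them dormant or
# superficially well-behaved for 30 days, and only then deploy them. Every one
# of those accounts now passes this check and is owed the full (expensive)
# due-process treatment.
```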

This becomes a cat-and-mouse game for every platform. Anything they add to the validity criteria becomes fodder for bad actors to try to discover and game. The best-case scenario is finding criteria that are sufficiently expensive or time-consuming to throttle the pace of bad account creation. However, this creates a privacy risk, since platforms need to basically put accounts under 24-hour surveillance to watch them for signs of invalid behavior. [See Unchecked Surveillance In The Name Of Safety Is Dangerous.]

There is no way for me to solve the validity-criteria problem generally in this proposal, so I am going to punt. I will trust that a platform has good reasons for deciding which accounts deserve due process. I don’t love this compromise, so I will come back to it in a future Platformocracy.

Next up: Process Diagrams!

So, we’ve established that this proposal seeks to increase platform trust by focusing on valid accounts that the platform believes have violated a stated policy, where the consequence is account termination. Next week, I will define the typical process platforms follow today in this situation, and the practical due-process changes I am proposing.

Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].
