Welcome back to my continuing series on improving Due Process (1, 2, 3). Last week, I narrowed my focus to finding practical ways to improve user trust during account terminations. I left policy-setting and pre-termination warnings out of scope, and granted that platforms will need a way to filter out obviously fraudulent accounts so the process can serve valid users without being swamped.

Today, I propose three changes to the typical account termination process, and address two likely objections.

Three practical changes would improve the termination process

While the details vary by platform, the general pattern of account terminations looks like this:

  1. Decide that an account should be terminated (perhaps called “suspended,” “disabled,” “restricted,” or “blocked”). 

  2. Lock down the account so the user can’t use platform services (and often can’t log in at all), and hide their profile and content from other users.

  3. Notify the user of the termination decision, either in the user interface or with an email.

  4. Give the user a period of time to appeal the decision (perhaps called “request a review” or “disagree with decision”) by filling out a form with additional information for the platform to consider.

  5. Review the decision using the additional information, if provided. 

  6. If the appeal is denied, or there is no appeal within the time limit, terminate the account permanently. Either immediately or at a future time, delete the user’s data.

Introducing practical due process involves one terminology change, one software change, and one procedural change.

  1. Same as above.

  2. Same as above.

  3. Terminology change: Notify the user that the platform intends to terminate their account based on a policy violation, but that they have the right to request judgement.

  4. Software change: Give the user a logged-in due process workflow (sketched in code after this list) that allows them to:

    1. See the case file specifying the policy in question, and which user content or actions the platform believes were violations.

    2. Request judgement, with the option to provide evidence as written text and uploaded files.

    3. Get updates on the status and results of the judgement process.

  5. Procedural change: The platform evidence and the user’s additional information are judged by someone from another team, who doesn’t have a direct stake in the outcome.

  6. Same as above.
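
To make the software change concrete, here is a rough sketch of the data such a workflow might expose and accept. None of this is any platform’s real schema; the type names and every field in them are my own, for illustration only.

```typescript
// Hypothetical shapes for the due process workflow in step 4.
// All names and fields are illustrative, not any platform's actual schema.

interface CaseFile {
  caseId: string;
  accountId: string;
  policy: { id: string; title: string; url: string };   // the policy the platform believes was violated
  evidence: Array<{
    kind: "content" | "action";
    description: string;        // what the platform says happened
    occurredAt: string;         // ISO 8601 timestamp
    contentRef?: string;        // link back to the user's own content, where it is safe to show
  }>;
  judgementDeadline: string;    // last day to request judgement
}

interface JudgementRequest {
  caseId: string;
  statement: string;                                      // the user's written evidence
  attachments: Array<{ fileName: string; uploadId: string }>;
  submittedAt: string;
}

// Status the user can check from the same logged-in workflow.
type JudgementStatus =
  | { state: "awaiting_request" }
  | { state: "under_review"; assignedTeam: string }       // an independent team, per the procedural change
  | { state: "decided"; outcome: "reinstated" | "terminated"; decidedAt: string };
```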

Terminology change: communicate intent, instead of decision

“Innocent until proven guilty” is a foundational principle of due process in many legal traditions, including Byzantine law, the Talmud, and Islamic law. Platforms make a mockery of this ideal. The word “appeal” gives the game away: all of the burden is on you to prove your innocence, after the authorities have already decided you are guilty.

This might be defensible for content takedowns, which have higher volume and lower stakes, more like traffic tickets. But even a traffic ticket doesn’t declare you guilty and put points on your license automatically. Until you pay the fine, you can still plead not guilty and contest it with a judge. The severity and finality of ejecting you from an online community deserve language that’s at least as humane as what’s printed on the back of a speeding ticket.

This is hopefully a no-brainer for platform teams, because telling the user that you intend to terminate their account, and that they have the right to request judgement, wouldn’t actually cost the platform anything beyond the effort of changing the language. Without the other two changes below, the process is exactly the same: suspended, notified, and given a chance to disagree.

Software change: due process workflow

Most platforms are built so that locking down someone’s account means they can’t even log in*, which leads to a Kafkaesque and ineffective termination process that incinerates user trust.

  • You can’t be notified unless you’ve provided an alternate contact method. [This is usually Gmail, so if Google suspends your account, you are especially screwed.]

  • You can’t review your history to see any context around the actions or content the platform claims violated their policies.

  • To appeal, you need to fill out a separate form and prove to the platform that it’s really your account.

  • You can’t log in to track the status of your appeal, so you can’t tell if the process is under way, if you made a mistake on the form, or if the platform dropped the ball.

Besides torturing users, all of this confusion also costs the platform money from redundant appeals, since the only path users have to try to shake an answer out of the system is to submit the form over and over again.

A logged-in due process workflow would replace chaos with clarity. The software development required would mostly consist of wiring together pieces that already exist.

  • The account still exists. The platform needs to create a restricted state that lets the user log in only to access the due process workflow (see the sketch after this list).

  • A case file with the violative content and the relevant policy is recorded in the enforcement system. It needs to be exposed to the user. 

  • Appeals forms already exist. That data just needs to be associated with the account. This could actually make the software simpler, since logging in will make the linkage to the account automatic, instead of needing to correlate other user-supplied signals.

  • Platforms already track the status of appeals internally. This needs to be exposed in the UI.
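
The restricted state in the first bullet is, at heart, a small gate layered onto the session handling a platform already has. Here is a minimal sketch; the state names and routes are my invention, not any platform’s real implementation.

```typescript
// Sketch of a "restricted" account state: the user can still authenticate,
// but only the due process routes are reachable. All names are illustrative.

type AccountState = "active" | "restricted_pending_judgement" | "terminated";

// The only routes a restricted account may reach after logging in.
const DUE_PROCESS_ROUTES = ["/due-process/case", "/due-process/request", "/due-process/status"];

function isRouteAllowed(state: AccountState, path: string): boolean {
  if (state === "active") return true;        // normal accounts see everything
  if (state === "terminated") return false;   // after final termination, no logged-in access
  // restricted_pending_judgement: only the due process workflow remains reachable
  return DUE_PROCESS_ROUTES.some(
    (route) => path === route || path.startsWith(route + "/")
  );
}

// A restricted user can open their case file, but cannot post new content.
isRouteAllowed("restricted_pending_judgement", "/due-process/case");  // true
isRouteAllowed("restricted_pending_judgement", "/posts/new");         // false
```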

*Note: I give Meta a lot of flak in this newsletter (1, 2), but to their credit, Instagram appears to offer a logged-in workflow along these lines, as does Facebook in at least some cases. Nice work, Meta.

Procedural change: independent judges

In The Federalist Papers #10, James Madison wrote that “no man is allowed to be a judge in his own cause, because his interest would certainly bias his judgement, and, not improbably, corrupt his integrity.” This is an ancient, established standard of English common law, dating back to at least 1481. [See Due Process of Law: A Brief History by John V. Orth.]

Platform appeals flagrantly violate this principle. They are generally reviewed by the same organization that made the initial decision, or by their supervisors. These are people whose work is measured by how effectively they catch and stop policy violations, not by how often they admit they got it wrong. They are all honorable and well-intentioned, but it’s still asking them to grade their own homework.

So: we need independent judges, but it’s unlikely that even the richest companies would fund a full platform judiciary department. A practical solution for large safety teams would be to swap review duty with other abuse sub-teams. For the largest companies, reviews could be swapped between the abuse teams of different products, so the judges wouldn’t even have a direct commercial stake in the outcome.

While cross-training teams on each other’s policy standards will take time, the swap will also have operational benefits for the company. Each team will have a stronger incentive to improve their processes, since they know their work will be reviewed by peers.

At a smaller company, this could be accomplished by asking other functions to review safety judgements. This would have the added fringe benefit of helping more people appreciate safety work, and could even provide some redundancy in case there’s a staffing gap.

Objection 1: egregious and illegal content

What about accounts terminated for egregious or illegal content like child safety or extreme violence? Should a platform be forced to re-display that material to the user in the due process workflow? (Can they even do that legally?) And wouldn’t it be traumatizing to force someone on another team to see this material in order to judge these cases?

First, the existence of egregious harms doesn’t invalidate the entire plan. A platform could launch these due process changes for every other policy.

Second, there are ways to shield content without abandoning due process. For example, in real-world trials, disturbing evidence may be reviewed by a defense attorney, who then discusses it with the accused. Perhaps platforms could provide a similar intermediary.

Third, while nobody should be forced to see this material against their will, there are great, dedicated professionals who do confront it, whom platforms could hire for a judicial role separate from the core enforcement team. [Besides, platforms are happy enough to pay contractors on outsourced moderation teams to look at this material on their behalf. By that (low) standard, objective judgement is just a new use case.]

Objection 2: increased volume

If you tell people they are innocent until proven guilty, have the right to an independent judge, and can track the process in a handy workflow tool, won’t that mean everyone will request judgement for everything, and swamp the system?

There are at least three levers a platform could use to control the volume.

First, if there are a lot of acquittals, that’s a useful signal that the platform is generating too many false positives. A platform should want to know this, so it can retune its systems. Reducing the number of bad accusations will reduce the number of requests for judgement.

Second, if users are initiating a lot of cases but losing most of them, platforms could raise the bar for starting the process. For example, they could require enhanced proof of identity, something that platforms like YouTube are already using to gate advanced features. This would have a beneficial side effect of discouraging repeat offenders, since they could be checked for uniqueness against other terminated accounts.
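
As a back-of-the-envelope illustration of that last point, the uniqueness check could be as simple as comparing a salted hash of the verified identifier against identifiers attached to previously terminated accounts. Everything in this sketch (the identifier, the salt, the storage) is an assumption of mine, not any platform’s actual mechanism.

```typescript
// Illustrative only: checking a newly verified identifier (e.g. a phone number)
// against identifiers tied to previously terminated accounts, without storing it in plain text.
import { createHash } from "node:crypto";

const PLATFORM_SALT = "example-salt";  // in practice, a secret managed by the platform

function hashIdentifier(verifiedIdentifier: string): string {
  return createHash("sha256").update(PLATFORM_SALT + verifiedIdentifier).digest("hex");
}

// Salted hashes of identifiers from accounts that were already terminated.
const terminatedIdentifierHashes = new Set<string>();

function looksLikeRepeatOffender(verifiedIdentifier: string): boolean {
  return terminatedIdentifierHashes.has(hashIdentifier(verifiedIdentifier));
}
```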

Third, if volume really does become a problem, platforms could put stricter limits on which users are entitled to the full process. Two-tier justice systems aren’t great, but platforms already do this in much less objective ways. When I was at Google, a special escalation process let Directors and VPs get attention on behalf of friends and family. Meta runs a (previously secret) program called cross-check to provide high-profile accounts with extra human review before removing content. A due process workflow would at least make this consistent, instead of depending on knowing a guy who knows a guy.

What do you think of this proposal? Please tell me!

A proposal like this is only useful to the degree it sparks real consideration. I hope that many of you will have a reaction to it, good or bad. Please share your thoughts! Would you build this at your company? Has anyone tried something like this before? What is wrong with this proposal, and what would make it better? I’ll be posting about this on Bluesky, Threads, and LinkedIn, or you can email me at [email protected]. I will share the dialogue in a future issue.

Next week: transparency

To keep today’s proposal practical, I omitted any changes to make the account termination process more transparent for users and the public. I do believe transparency would do even more to improve trust, so next week I will suggest a few optional transparency changes that a platform could consider on top of this core due process workflow.
