TrustCon was last week, bringing together over 1,000 trust & safety professionals. The legendary journalist Casey Newton of Platformer was there, and wrote a critical piece about industry leaders remaining silent in the face of retrenchment and political attacks. He got a ton of feedback and wrote a thoughtful follow-up. I want to touch on two points that I haven’t seen mentioned yet.

Companies were reactive, not idealistic.

Newton hails a lost era when companies made idealistic commitments like Google’s responsible AI principles and the Meta Oversight Board. This gives the big tech companies more credit than they deserve. These initiatives were well-intentioned, but they were reactions to bad press, not deeply held beliefs.

I’ve been in the room when senior Google executives have debated how to respond to bad PR cycles. Details of how Meta handled similar situations can be found in tell-alls like Broken Code by Jeff Horwitz and Careless People by Sarah Wynn-Williams. Soul-searching is rarely on the menu. The default focus is deciding on a plan to mollify critics with the least possible disruption to the company’s core strategy.

Mark Zuckerberg announced the Oversight Board in November 2018, only after the Cambridge Analytica scandal broke that March. Google announced its responsible AI principles in June 2018, only after widespread employee protests that April over the company’s involvement in Project Maven, a Pentagon initiative to use AI to analyze military drone imagery.

In Google’s case, this was a delay more than a change of course. Google kept finding ways to work with the Pentagon, and this February, Google finally removed its reference to avoiding applications “likely to cause harm” in favor of “support[ing] national security.”

Why hasn’t there been a new walkout in 2025? Because the loudest people are all gone. Two leaders of the internal protests against Maven, Liz Fong-Jones and Meredith Whittaker, left Google in 2019. Timnit Gebru and Margaret Mitchell, co-leads of the Ethical Artificial Intelligence Team, left in late 2020 and early 2021. Mitchell was the only one of the four explicitly fired, but all reported internal pressure to abandon their advocacy.

“Trust and Safety” is actually three loosely related problems in a PR-designed trenchcoat.

Newton writes about “trust and safety” as one thing. It’s actually a bunch of separate projects, many of which are essential, and some of which have advanced dramatically recently even as others have been retrenching. The tendency to talk about the whole field as a single unit, either making progress or backsliding together, isn’t Newton’s fault, because tech company PR has been telling journalists and the public an oversimplified story for years.

Trust and safety is a blanket term for (at least) three pretty different problems, driven by different primary stakeholders, which present different sorts of difficulties in enforcement that call for different potential solutions.

| Problem | Examples | Primary stakeholder |
| --- | --- | --- |
| Illegal behavior | scams, phishing, data theft, selling drugs, sharing CSAM (child sexual abuse material) | Governments |
| Undesirable but not illegal behavior | harassment, hate speech, violent content, trolling, medical misinformation | Users |
| Abuse of platform resources | fake accounts, spam, bulk content scraping | The company |

| Problem | Difficulties | Possible solution |
| --- | --- | --- |
| Illegal behavior | Ambiguous laws, government intrusion | Change the laws, ensure due process |
| Undesirable but not illegal behavior | No established definitions of terms, users disagree on issues and solutions | Community deliberation |
| Abuse of platform resources | Does this protect users, or just corporate interests? | Transparency, regulation |

If you’re responsible for PR or lobbying, complexity like this is not fun to explain to the outside world. You want a simple, heart-warming story. So you give it one overarching name and draw attention to the most egregious and unambiguous cases, the ones that cast a flattering halo on the whole enterprise. Look at how many kids we’ve saved! Here’s our anti-ad-spam team, aren’t they smart? That kind of thing.

The reality is that there are tons of edge cases where it’s hard to decide what’s illegal and what’s just bad, whether a given case actually violates a policy or not, and whether the current enforcement mechanisms are catching enough of the actual badness.

Consider misinformation/disinformation. Evaluating statements for truth or falsehood is wickedly difficult, and maybe impossible in our current fractured age. When you tell people it’s being handled by the same department that deals with stopping scams, you’re accidentally implying your mis/disinfo work is just as morally unambiguous. That’s waving a red flag at anyone who thinks they were censored, and you end up with laws trying to protect “viewpoint diversity.”

This also sweeps self-serving policies under the rug. Meta has taken a hard line against content scraping and middleware that modifies their products, responding with lawsuits and lifetime bans. This is arguably just an excuse for Meta to defend their product moat, but Meta doesn’t want to have that discussion, so they fold it into legitimate safety work around preventing data compromise. This makes even honest T&S work look suspect, because there’s no way to tell which policies are based on a real threat and which are corporate doublespeak.

Unsafe At Any Bandwidth?

More transparency about what’s really going on would expose all of the difficulties, gaps in protection, and systemic flaws that T&S teams grapple with (in good faith) every day. This would raise the question of why companies are empowered to do this all on their own, in secret, and without recourse.

Newton sums this up perfectly in his latest post:

When it comes to content moderation, on Substack as on any other social network, your rights come down to the whims of the co-founders and a handful of their employees.

Substack promotes a Nazi, Casey Newton, July 31, 2025

My core thesis, and the reason I write this newsletter, is that a reckoning with this power imbalance is inevitable. The American car industry eventually had to face its Unsafe At Any Speed moment, when Ralph Nader exposed company executives making budget-driven safety decisions behind closed doors, despite the efforts of dedicated professionals to drive change from within.

The more knowledgeable we all become about how platform trust and safety operates, warts and all, the better we can advocate for more authority over our own safety. This will ultimately be good for the trust & safety community, letting them get recognized for their good work without being criticized for executive decisions that weren’t theirs to make, or being asked to keep quiet in the face of the indefensible.

Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].
