
Welcome to part two of the Platformocracy Holiday Party! Last week, I played the Grinch and complained about my battles with Google Trust & Safety. This week, I revisit the post that started it all and say more about why platform safety requires democracy.
Safety in the business: launch now, fix later
Safety is an uneasy fit inside a business team. A business owner has one clear success metric: growth. Google and Square veteran Gokul Rajaram talks about the need for a check metric to “constrain the NSM [north star metric] and ensure that the NSM grows in a way that is sustainable and creates long-term value.” Safety is not a great check metric for an online platform, because lock-in (due to the lack of good Exit options) means that safety problems almost never destroy long-term value fast enough to prevent growth and profit in the short term.
This means the logical decision for a business is to kick the safety can down the road. Delays cost money, so it’s better to forge ahead and fix it later. We laughed at Google’s Gemini-powered AI Overviews in 2024 for telling people to eat rocks, but eighteen months later all is forgiven, and the Search team is being recognized for quick decisions and “accepting the risk of AI’s unpredictable responses.”
[To be fair, Gemini has gotten a lot safer and more reliable. My question is why we all had to bear the cost of using it when it wasn’t, in order to help Google meet its business goal of catching up to ChatGPT.]
Safety in legal/policy: Is This Good for the COMPANY?
If business owners can’t be trusted to manage safety themselves, turning it over to the legal and public policy team sounds like a good idea. They are responsible for protecting the company from legal and government threats, which includes making sure the company complies with existing laws. And many safety issues, like fraud and child exploitation, are clearly illegal.
The problem here is that protecting the company is a completely different goal from protecting (and policing) the people who use the company’s products. At Google, this meant that critical decisions around policies and enforcement ended up filtered through the lens of risk management. Would a lenient policy expose the company to legal liability? Would a strict approach to sensitive issues trigger criticism from governments or influential advocacy organizations? Next to those concrete risks, the impact, good or bad, on actual people often felt abstract.
Safety should report directly to the CEO
New regulations mean that the compliance function in finance and health care has been emerging from inside the legal department to become a top-level concern. Could this work for safety? An excellent article on the transition by Joseph Burke, an adjunct professor at Fordham Law School, explains the trend in detail and notes that 31% of public company CCOs (chief compliance officers) already report to the CEO or directly to the board.
This sounds like the future of safety to me. Done right, safety goes far beyond setting policies and detecting violations. Safety needs to be designed and built into products from the beginning, supported with empathetic customer service that helps people who report abuse or appeal decisions, and defended by leaders able to stand up for what is right. This is a new, cross-functional discipline that affects the lives of billions of people. Giving safety an equal voice in leadership debates would make this context clear.
The Zuck/Altman problem
Reporting to the CEO sounds great in theory, but the recent leaks about Meta’s revenue from fraud and scams undermine it completely. Meta has long had a separate Integrity organization under Guy Rosen, but that didn’t help, as reported just this week:
“As a result of Integrity Strategy pivot and follow-up from Zuck,” a late 2024 document notes, the China ads-enforcement team was “asked to pause” its work… After Zuckerberg’s input, the documents show, Meta disbanded its China-focused anti-scam team. It also lifted a freeze it had introduced on granting new Chinese ad agencies access to its platforms.
A seat at the table is also moot if you’re inside a private company whose leadership only cares about growth. The safety-focused internal revolt against OpenAI CEO Sam Altman in 2023 hits differently today, now that we know the company relaxed its protections, allowing ChatGPT to lead people into psychosis and suicide.
If your leadership is amoral, greedy, or corrupt, there is no internal structure that will lead to good behavior.
Communities are not (just) businesses
The health of a community cannot be reduced to a purely economic formula. I do not accept that we can only make our online communities as safe as they can be without compromising the underlying business. Nobody would accept a restaurant declaring that it aims to cause as little food poisoning as possible, but that it has to keep serving regardless.
This is why we have food hygiene inspectors, and why compliance is such a big deal in finance and health care. In these cases, the goal is rightly inverted: to support as much business as possible, without compromising safety. We have decided that sometimes, it’s better for a business to fail than to put people at an unacceptable level of risk.
Full circle: back to democracy
And this brings me back to giving platform safety teams meaningful independence from their parent organizations. Safety professionals should feel free to work in the public interest of the communities they serve, even if that means tech execs can’t achieve their revenue targets.
The only way we can be sure that companies are allowing safety professionals to work in our interest, not the company’s, is with transparency and accountability. This could involve regulation and inspections, or major structural changes that separate safety entirely. Either way achieves the same goal: we need representation in platform safety decisions that affect us and our communities.
I’m dreaming of a healthy Internet (just like the one I used to know)
I acknowledge that this is a staggeringly idealistic vision. I don’t expect Mark Zuckerberg’s heart to grow three sizes before he dumps our social graphs off the side of Mount Crumpit, or Sam Altman to wake up a changed man after being visited overnight by the ghosts of AI Past, Present, and Future.
I do believe history shows that when enough people band together to demand a voice in their own fate, they tend to succeed. It may take decades or even longer, but if we work together to take back power over our online lives, I believe we will be able to rebuild an Internet that works for us again.
So, that’s my hopeful note as we head for the New Year. I’m taking the next two weeks off, and will be back on Friday, January 9. I hope you and yours have a happy and restful holiday season.
Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].

