Welcome to the first Platformocracy reader mailbag. I was lucky to get a lot of comments and feedback, and am sharing some key points here. Please keep your thoughts coming as we explore these ideas together.

Also note that I will be taking the U.S. July 4 holiday off, so my next post will be Friday, July 11.

Juliet Shen, co-founder and Head of Product at ROOST, a new non-profit that promotes open-source safety infrastructure, thinks my thesis applies mostly to big tech:

Small to medium-sized tech companies are working hard just to have basic trust and safety and make sure abuse isn't on the platform. Many don't have sophisticated tools to cover the different harm types, and there's a difference in reality when it comes to resourcing, team sizes, and ability.

I am sympathetic to how hard it is to keep a platform safe with a small team. However, I believe community participation is a way to make their struggle easier, not harder. Consider open source software, where having many eyes on the code makes it less buggy and more secure. A community of thousands working together to protect itself could be much more effective than company employees taking the whole burden on themselves.

Juliet also points out that the term Platformocracy echoes Harvard Professor Latanya Sweeney’s related concept of technocracy.

“Technology design is the new policymaker,” Sweeney explains. “Because if the technology doesn’t respect it, or doesn’t allow for it, then having a law” — regarding employment discrimination, for instance, or privacy, or voting rights — “doesn’t matter anymore.”

When Technology and Society Clash, Harvard Magazine, November-December 2024

Randall Rothenberg, CEO of the Interactive Advertising Bureau trade group from 2007 to 2020, adds his memories to the story via Facebook:

I recall that this opacity is what sent Tim Carter, the founder of AsktheBuilder.com, one of the original AdSense network sites, and an early, enormous supporter of Google, over the edge, turning him into a vocal enemy of the company. When we were creating the Long Tail Alliance at IAB [in 2009], we were shocked to discover how little interface Google had with these creators, so little that it couldn't even guide us to great creators who could help us build this coalition!

Breaking: just this morning, Michael McNally, a deeply respected anti-fraud engineering veteran of both Google and Facebook, writes on LinkedIn (not in response to me) about his AdSense work:

In those roles, we built machine-learned classifiers for detecting bad behavior. When used for automated termination—meaning no human involvement—we might typically tune the thresholds carefully: 99% or 99.5% precision. That means out of 1,000 banned accounts, 990 to 995 were genuinely harmful. The other 5 to 10 were false positives—collateral damage, or what we called “insults.” As a human, I regretted those. But as a business, it was a cost-benefit tradeoff. More mercy toward edge cases means more fraud from abusers. Sometimes you throw people under the bus.
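McNally's thresholds translate into a simple back-of-the-envelope calculation. This sketch (my illustration, not his actual tooling) shows how a precision target maps directly to the number of innocent accounts caught in an automated ban wave:

```python
def expected_false_positives(banned: int, precision: float) -> int:
    """Of `banned` automated terminations, roughly how many are
    expected to be innocent (the "insults"), given the classifier's
    precision at the chosen threshold."""
    return round(banned * (1 - precision))

# At the thresholds McNally cites, out of 1,000 banned accounts:
print(expected_false_positives(1000, 0.99))   # 10 innocent accounts
print(expected_false_positives(1000, 0.995))  # 5 innocent accounts
```

Raising precision from 99% to 99.5% halves the collateral damage, but only by letting more real fraud through; that is the cost-benefit tradeoff he describes.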

After losing his own PayPal account, McNally now advocates for strikes, warnings, appeals, and the like. And yet, in private, old Google fraud-fighting colleagues of Michael’s tell me that my praise for YouTube’s more accommodating partner policies is misplaced. They think YouTube is too lenient. Even within the fraud-fighting world, it’s hard to find a path forward.

First off, thanks to Professor Nathan Schneider for his kind words on Bluesky. His book Governable Spaces was a huge influence on my decision to start this newsletter.

Second, Christine Moellenberndt, legendary online community manager (including at Wikimedia and as a moderator of moderators at Reddit), notes on LinkedIn that elections could worsen, not resolve, the tensions between founder and community:

Those early folks who built and founded a community deserve recognition and credit for that, and what does that look like? Whose vision is more important, the founder's or the community's?

Third, both Ari Paparo, founder of Marketecture (home of the best & funniest coverage of marketing & advertising online), and Alberto Leon of Harvard’s Applied Social Media Lab, ask if Decentralized Autonomous Organizations (DAOs) could be a technology model for change-of-control voting.

I know very little about DAOs given my inherent skepticism of cryptocurrency, but they may be on to something. The DAO approach uses blockchain technology to create “smart contracts” that enshrine voting rights and ensure the results are enforced. This could be the technical foundation for an irrevocable change-of-control process in online communities. I want to learn more, so if any readers are familiar with DAO voting, I’d love to hear from you and explore these ideas together.
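To make the idea concrete, here is a toy sketch of the enforcement logic a DAO-style contract provides. This is a hypothetical Python model, not a real smart contract; actual DAOs encode rules like these on-chain (typically in Solidity) precisely so that no single party, founder included, can rewrite them after the fact:

```python
class ChangeOfControlVote:
    """Toy model of an irrevocable change-of-control vote:
    once a supermajority of members votes for the change, control
    transfers automatically and cannot be reversed."""

    def __init__(self, founder: str, members: set[str], threshold: float = 2 / 3):
        self.controller = founder
        self.members = members
        self.threshold = threshold      # supermajority required to pass
        self.votes: set[str] = set()    # members who voted for the change
        self.finalized = False

    def vote(self, member: str) -> None:
        if self.finalized:
            raise RuntimeError("vote finalized; the result is irrevocable")
        if member not in self.members:
            raise ValueError("only members may vote")
        self.votes.add(member)

    def execute(self, new_controller: str) -> bool:
        """Transfer control if the supermajority is met; else no-op."""
        if not self.finalized and len(self.votes) / len(self.members) >= self.threshold:
            self.controller = new_controller
            self.finalized = True       # enshrined: no takebacks
        return self.finalized
```

For example, in a three-member community, two votes clear the two-thirds threshold, `execute()` hands control to the community, and any later attempt to re-open the vote raises an error. The point of putting such logic on a blockchain is that the `finalized` flag is enforced by the network rather than by the founder's goodwill.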

Postscript [Content Warning]

A close friend of mine from business school died unexpectedly last week. She was smart, funny, irrepressible, and a fellow early adopter of the Internet. Social media helped us keep up with each other’s busy lives and families. She would have had a lot to say about this newsletter, and I still can’t believe we’ll never get to have that conversation. This one’s for you, Eugenia.

Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].
