IMPORTANT CAVEAT SO I DON’T GET FLAMED BY OLD COLLEAGUES: I had a lot of great individual relationships with safety folks at Google. I also acknowledge that my stubborn and direct personality didn’t help. My complaints here are about times when the wrong structures and incentives made it too hard for the company to do the right thing (from my perspective, at least).

Mobile App Ads: slow and steady loses the race

If you’re a regular reader of Platformocracy, you may recall that I wrote a piece more or less blaming the Google ad safety team for alienating a generation of small Web publishers. I didn’t mention then that we also came close to blowing it with mobile app developers.

Google got into mobile app advertising in a big way when they bought AdMob for $750 million in November 2009. The iPhone App Store and the first version of Android had launched a year prior, and app usage was exploding. Google’s leadership expected mobile ads to be just as huge, but we struggled to get the business off the ground for years.

One big issue was that we kept hitting roadblocks with the ad safety team. For a long time, they wouldn’t even allow the popular rewarded-video format, because incentivizing people to interact with an ad was considered serious fraud back in the pay-per-click Web days.

Even getting new mobile partners up and running was painful. Developers would have to wait weeks while ad safety inspected each of their apps in detail for policy compliance and possible exploits. Meanwhile, competitors would literally fly teams of engineers around the world to get popular apps up and running in a few days, no questions asked.

Why couldn’t the ad safety team understand the urgency? Didn’t they appreciate how much pressure we were under to grow the business? Looking back, I was frustrated because I had failed to understand the problem from their point of view.

Mobile wasn’t just a new business. Android and the iPhone were entirely new technology platforms, which meant that existing protections wouldn’t work and that there would be entirely new opportunities for abuse. And since the ad safety team was already swamped just trying to keep up with Google’s enormous Web ads business, an aggressive move into mobile would have required them to practically double in size.

They had explained this to leadership, but got only a fraction of what they asked for. This put them in an impossible position — either lower their standards (and be blamed for any abuse scandals) or slow down mobile growth (and be blamed for underperformance).

The most maddening part of this? Both mobile ads and ad safety rolled up to the same SVP (senior vice-president), Sridhar Ramaswamy. How could he not see the risks of pushing for growth in mobile ads without a corresponding investment in safety?

In retrospect, I don’t think it was about him at all. There is an inherent structural problem with burying safety inside a business. I explained this in my guest post on Rob’s Notes from a few weeks ago:

Safety teams tend to fall down and lose the race for funding because they can’t express their work in OKRs that matter to executives. There are no standards for what ‘safe enough’ looks like. Warnings about high levels of fraud, harassment, and scams end up just sounding like the same squawking as all the other hungry baby birds. Might as well give the worm to that new-business bird in the corner who has potential to grow up big and fat.

Trust & Safety: when I want your opinion, I’ll give it to you

I joined Google’s Counter-Abuse Technology team in late 2020 because the group’s leader, Rahul Roy-Chowdhury, had an inspiring vision of reinventing the whole company’s approach to safety. Unfortunately, he left a few months after I arrived, and the new reality turned out to be a huge disappointment.

While my team had responsibility for the technology of fighting bad guys and protecting Google users, the key policy and process decisions were controlled by Trust and Safety (T&S), a huge organization reporting to Kent Walker, Google’s President of Global Affairs. Kent is a lawyer by training, which had a strong effect on how T&S looked at the world. In my case, it meant that I couldn’t lead the kind of product management organization I had envisioned.

In successful software businesses, product management and engineering build the company’s core asset, so they have to be deeply involved in coming up with new ideas and setting long-term strategy. This is why most of Google’s divisions are led by engineers or highly technical product managers.

Big companies also need a lot of internal software to support business functions like legal or HR. The teams who build these tools don’t get an equal say in the work of the function. A software engineer isn’t qualified to advise lawyers on whether to settle a lawsuit, and nobody would give a product manager for the performance management system the authority to decide who gets fired.

Coming out of fifteen years in Ads, I approached counter-abuse as a product owner. Based on how Rahul had described the job, I assumed I would have a seat at the table for key decisions. Unfortunately, T&S wanted the internal-software model, and did not welcome my efforts to get involved in what they saw as their exclusive domain. No matter how carefully I tried to build bridges and share my ideas, the attitude of T&S senior leadership was coldly polite at best and dismissive at worst.

This came to a head in late 2022, not long before I left Google. For most of the previous year, I had been working with like-minded partners within T&S to try to get the company ready for a wave of content safety regulations, starting with the EU’s Digital Services Act. I was convinced that this marked a long-term shift in how Google would need to work with governments and researchers, and I wanted to get product teams involved early in planning shared technology that would make compliance easier, and perhaps even turn it into an operational advantage.

The senior leaders in T&S were unmoved. After months of deferrals and delays, Kent parachuted in a legal crisis team to take over. I had to sit quietly in a group meeting while the Vice-President in charge told me, and I quote:

Jonathan, I know I’m not a product manager, but when it comes to this project, I am effectively the product manager.

I was already on my way out the door, but being handed a battlefield demotion by the legal team removed any lingering doubts I might have had. That would never be me.

This probably all sounds like sour grapes. Sissie Hsiao and her team eventually made mobile ads a success, and she was rewarded with a big role in AI. Google has been relatively unscathed by the DSA, even as X was fined $140 million by the EU for multiple violations. And Google’s stock price has almost tripled since I left. All good, right?

You won’t be remotely surprised to hear that I don’t agree. I believe things only look OK from the inside because big platforms don’t have to bear the cost of their externalities: all the people who are still being harmed by safety issues. From the platforms’ perspective, it’s only logical. GE and GM weren’t breaking any laws when they polluted the Hudson River decades before the advent of the EPA.

Meta is the most egregious example, with recent leaks about huge revenues from scams and about buried evidence of harms to children, but they aren’t alone. Google recently banned someone by mistake just for reporting child sexual abuse material in an AI training dataset he had uploaded, and wouldn’t reinstate him until 404 Media reported on it. There have been repeated, ugly scandals in the mobile app advertising space, such as bad ad formats at Cheetah Mobile and the SEC investigation of AppLovin. Researchers continue to find major ad fraud networks in the Google Play Store. And so on.

Platform PR teams will tell you they take all of this seriously and are doing the best they can, but when so many real-world indicators are flashing red, we can’t afford to take their word for it. Despite the ongoing efforts of dedicated individuals and safety teams, the way these companies operate at the top means their Internet Hudson Rivers will never be clean enough.

Six months ago, I launched Platformocracy by calling for a new structural model for Bluesky trust and safety. So it seems fitting to close out the year by returning to the topic, writ large. Next week, I’ll sketch out some generalized structural principles that would be better for both the people who use these platforms and the people who are working from the inside to keep them safe. See you then.

Ideas? Feedback? Criticism? I want to hear it, because I am sure that I am going to get a lot of things wrong along the way. I will share what I learn with the community as we go. Reach out any time at [email protected].
